content
stringlengths
86
994k
meta
stringlengths
288
619
search results Expand all Collapse all Results 1 - 25 of 30 1. CJM 2013 (vol 65 pp. 1287) $K$-theory of Furstenberg Transformation Group $C^*$-algebras The paper studies the $K$-theoretic invariants of the crossed product $C^{*}$-algebras associated with an important family of homeomorphisms of the tori $\mathbb{T}^{n}$ called Furstenberg transformations. Using the Pimsner-Voiculescu theorem, we prove that given $n$, the $K$-groups of those crossed products, whose corresponding $n\times n$ integer matrices are unipotent of maximal degree, always have the same rank $a_{n}$. We show using the theory developed here that a claim made in the literature about the torsion subgroups of these $K$-groups is false. Using the representation theory of the simple Lie algebra $\frak{sl}(2,\mathbb{C})$, we show that, remarkably, $a_{n}$ has a combinatorial significance. For example, every $a_{2n+1}$ is just the number of ways that $0$ can be represented as a sum of integers between $-n$ and $n$ (with no repetitions). By adapting an argument of van Lint (in which he answered a question of ErdŠs), a simple, explicit formula for the asymptotic behavior of the sequence $\{a_{n}\}$ is given. Finally, we describe the order structure of the $K_{0}$-groups of an important class of Furstenberg crossed products, obtaining their complete Elliott invariant using classification results of H. Lin and N. C. Phillips. Keywords:$K$-theory, transformation group $C^*$-algebra, Furstenberg transformation, Anzai transformation, minimal homeomorphism, positive cone, minimal homeomorphism Categories:19K14, 19K99, 46L35, 46L80, , 05A15, 05A16, 05A17, 15A36, 17B10, 17B20, 37B05, 54H20 2. CJM 2012 (vol 65 pp. 1020) Monotone Hurwitz Numbers in Genus Zero Hurwitz numbers count branched covers of the Riemann sphere with specified ramification data, or equivalently, transitive permutation factorizations in the symmetric group with specified cycle types. Monotone Hurwitz numbers count a restricted subset of these branched covers related to the expansion of complete symmetric functions in the Jucys-Murphy elements, and have arisen in recent work on the the asymptotic expansion of the Harish-Chandra-Itzykson-Zuber integral. In this paper we begin a detailed study of monotone Hurwitz numbers. We prove two results that are reminiscent of those for classical Hurwitz numbers. The first is the monotone join-cut equation, a partial differential equation with initial conditions that characterizes the generating function for monotone Hurwitz numbers in arbitrary genus. The second is our main result, in which we give an explicit formula for monotone Hurwitz numbers in genus zero. Keywords:Hurwitz numbers, matrix models, enumerative geometry Categories:05A15, 14E20, 15B52 3. CJM 2011 (vol 63 pp. 1364) The Cubic Dirac Operator for Infinite-Dimensonal Lie Algebras Let $\mathfrak{g}=\bigoplus_{i\in\mathbb{Z}} \mathfrak{g}_i$ be an infinite-dimensional graded Lie algebra, with $\dim\mathfrak{g}_i<\infty$, equipped with a non-degenerate symmetric bilinear form $B$ of degree $0$. The quantum Weil algebra $\widehat{\mathcal{W}}\mathfrak{g}$ is a completion of the tensor product of the enveloping and Clifford algebras of $\mathfrak{g}$. Provided that the Kac-Peterson class of $\mathfrak{g}$ vanishes, one can construct a cubic Dirac operator $\mathcal{D}\in\widehat{\mathcal{W}}(\mathfrak{g})$, whose square is a quadratic Casimir element. We show that this condition holds for symmetrizable Kac-Moody algebras. Extending Kostant's arguments, one obtains generalized Weyl-Kac character formulas for suitable ``equal rank'' Lie subalgebras of Kac-Moody algebras. These extend the formulas of G. Landweber for affine Lie algebras. Categories:22E65, 15A66 4. CJM 2010 (vol 63 pp. 413) Generating Functions for Hecke Algebra Characters Certain polynomials in $n^2$ variables that serve as generating functions for symmetric group characters are sometimes called ($S_n$) character immanants. We point out a close connection between the identities of Littlewood--Merris--Watkins and Goulden--Jackson, which relate $S_n$ character immanants to the determinant, the permanent and MacMahon's Master Theorem. From these results we obtain a generalization of Muir's identity. Working with the quantum polynomial ring and the Hecke algebra $H_n(q)$, we define quantum immanants that are generating functions for Hecke algebra characters. We then prove quantum analogs of the Littlewood--Merris--Watkins identities and selected Goulden--Jackson identities that relate $H_n(q)$ character immanants to the quantum determinant, quantum permanent, and quantum Master Theorem of Garoufalidis--L\^e--Zeilberger. We also obtain a generalization of Zhang's quantization of Muir's identity. Keywords:determinant, permanent, immanant, Hecke algebra character, quantum polynomial ring Categories:15A15, 20C08, 81R50 5. CJM 2010 (vol 63 pp. 3) Free Bessel Laws We introduce and study a remarkable family of real probability measures $\pi_{st}$ that we call free Bessel laws. These are related to the free Poisson law $\pi$ via the formulae $\pi_{s1}=\pi^{\ boxtimes s}$ and ${\pi_{1t}=\pi^{\boxplus t}}$. Our study includes definition and basic properties, analytic aspects (supports, atoms, densities), combinatorial aspects (functional transforms, moments, partitions), and a discussion of the relation with random matrices and quantum groups. Keywords:Poisson law, Bessel function, Wishart matrix, quantum group Categories:46L54, 15A52, 16W30 6. CJM 2010 (vol 62 pp. 758) General Preservers of Quasi-Commutativity Let ${ M}_n$ be the algebra of all $n \times n$ matrices over $\mathbb{C}$. We say that $A, B \in { M}_n$ quasi-commute if there exists a nonzero $\xi \in \mathbb{C}$ such that $AB = \xi BA$. In the paper we classify bijective not necessarily linear maps $\Phi \colon M_n \to M_n$ which preserve quasi-commutativity in both directions. Keywords:general preservers, matrix algebra, quasi-commutativity Categories:15A04, 15A27, 06A99 7. CJM 2009 (vol 62 pp. 109) Sum of Hermitian Matrices with Given Eigenvalues: Inertia, Rank, and Multiple Eigenvalues Let $A$ and $B$ be $n\times n$ complex Hermitian (or real symmetric) matrices with eigenvalues $a_1 \ge \dots \ge a_n$ and $b_1 \ge \dots \ge b_n$. All possible inertia values, ranks, and multiple eigenvalues of $A + B$ are determined. Extension of the results to the sum of $k$ matrices with $k > 2$ and connections of the results to other subjects such as algebraic combinatorics are also Keywords:complex Hermitian matrices, real symmetric matrices, inertia, rank, multiple eigenvalues Categories:15A42, 15A57 8. CJM 2008 (vol 60 pp. 1050) Adjacency Preserving Maps on Hermitian Matrices Hua's fundamental theorem of the geometry of hermitian matrices characterizes bijective maps on the space of all $n\times n$ hermitian matrices preserving adjacency in both directions. The problem of possible improvements has been open for a while. There are three natural problems here. Do we need the bijectivity assumption? Can we replace the assumption of preserving adjacency in both directions by the weaker assumption of preserving adjacency in one direction only? Can we obtain such a characterization for maps acting between the spaces of hermitian matrices of different sizes? We answer all three questions for the complex hermitian matrices, thus obtaining the optimal structural result for adjacency preserving maps on hermitian matrices over the complex field. Keywords:rank, adjacency preserving map, hermitian matrix, geometry of matrices Categories:15A03, 15A04, 15A57, 15A99 9. CJM 2008 (vol 60 pp. 1149) Conjugate Reciprocal Polynomials with All Roots on the Unit Circle We study the geometry, topology and Lebesgue measure of the set of monic conjugate reciprocal polynomials of fixed degree with all roots on the unit circle. The set of such polynomials of degree $N$ is naturally associated to a subset of $\R^{N-1}$. We calculate the volume of this set, prove the set is homeomorphic to the $N-1$ ball and that its isometry group is isomorphic to the dihedral group of order $2N$. Categories:11C08, 28A75, 15A52, 54H10, 58D19 10. CJM 2008 (vol 60 pp. 923) Endomorphisms of Kronecker Modules Regulated by Quadratic Algebra Extensions of a Function Field The Kronecker modules $\mathbb{V}(m,h,\alpha)$, where $m$ is a positive integer, $h$ is a height function, and $\alpha$ is a $K$-linear functional on the space $K(X)$ of rational functions in one variable $X$ over an algebraically closed field $K$, are models for the family of all torsion-free rank-2 modules that are extensions of finite-dimensional rank-1 modules. Every such module comes with a regulating polynomial $f$ in $K(X)[Y]$. When the endomorphism algebra of $\mathbb{V}(m,h,\alpha)$ is commutative and non-trivial, the regulator $f$ must be quadratic in $Y$. If $f$ has one repeated root in $K(X)$, the endomorphism algebra is the trivial extension $K\ltimes S$ for some vector space $S$. If $f$ has distinct roots in $K(X)$, then the endomorphisms form a structure that we call a bridge. These include the coordinate rings of some curves. Regardless of the number of roots in the regulator, those $\End\mathbb{V}(m,h,\alpha)$ that are domains have zero radical. In addition, each semi-local $\End\mathbb{V}(m,h,\alpha)$ must be either a trivial extension $K\ltimes S$ or the product $K\times K$. Categories:16S50, 15A27 11. CJM 2008 (vol 60 pp. 520) Matrices Whose Norms Are Determined by Their Actions on Decreasing Sequences Let $A=(a_{j,k})_{j,k \ge 1}$ be a non-negative matrix. In this paper, we characterize those $A$ for which $\|A\|_{E, F}$ are determined by their actions on decreasing sequences, where $E$ and $F$ are suitable normed Riesz spaces of sequences. In particular, our results can apply to the following spaces: $\ell_p$, $d(w,p)$, and $\ell_p(w)$. The results established here generalize ones given by Bennett; Chen, Luor, and Ou; Jameson; and Jameson and Lashkaripour. Keywords:norms of matrices, normed Riesz spaces, weighted mean matrices, Nörlund mean matrices, summability matrices, matrices with row decreasing Categories:15A60, 40G05, 47A30, 47B37, 46B42 12. CJM 2007 (vol 59 pp. 1284) On Effective Witt Decomposition and the Cartan--Dieudonn{é Theorem Let $K$ be a number field, and let $F$ be a symmetric bilinear form in $2N$ variables over $K$. Let $Z$ be a subspace of $K^N$. A classical theorem of Witt states that the bilinear space $(Z,F)$ can be decomposed into an orthogonal sum of hyperbolic planes and singular and anisotropic components. We prove the existence of such a decomposition of small height, where all bounds on height are explicit in terms of heights of $F$ and $Z$. We also prove a special version of Siegel's lemma for a bilinear space, which provides a small-height orthogonal decomposition into one-dimensional subspaces. Finally, we prove an effective version of the Cartan--Dieudonn{\'e} theorem. Namely, we show that every isometry $\sigma$ of a regular bilinear space $(Z,F)$ can be represented as a product of reflections of bounded heights with an explicit bound on heights in terms of heights of $F$, $Z$, and $\sigma$. Keywords:quadratic form, heights Categories:11E12, 15A63, 11G50 13. CJM 2007 (vol 59 pp. 638) Distance from Idempotents to Nilpotents We give bounds on the distance from a non-zero idempotent to the set of nilpotents in the set of $n\times n$ matrices in terms of the norm of the idempotent. We construct explicit idempotents and nilpotents which achieve these distances, and determine exact distances in some special cases. Keywords:operator, matrix, nilpotent, idempotent, projection Categories:47A15, 47D03, 15A30 14. CJM 2007 (vol 59 pp. 488) Osculating Varieties of Veronese Varieties and Their Higher Secant Varieties We consider the $k$-osculating varieties $O_{k,n.d}$ to the (Veronese) $d$-uple embeddings of $\PP^n$. We study the dimension of their higher secant varieties via inverse systems (apolarity). By associating certain 0-dimensional schemes $Y\subset \PP^n$ to $O^s_{k,n,d}$ and by studying their Hilbert functions, we are able, in several cases, to determine whether those secant varieties are defective or not. Categories:14N15, 15A69 15. CJM 2007 (vol 59 pp. 186) Endomorphism Algebras of Kronecker Modules Regulated by Quadratic Function Fields Purely simple Kronecker modules ${\mathcal M}$, built from an algebraically closed field $K$, arise from a triplet $(m,h,\alpha)$ where $m$ is a positive integer, $h\colon\ktil\ar \{\infty,0,1,2,3, \dots\}$ is a height function, and $\alpha$ is a $K$-linear functional on the space $\krx$ of rational functions in one variable $X$. Every pair $(h,\alpha)$ comes with a polynomial $f$ in $K(X)[Y] $ called the regulator. When the module ${\mathcal M}$ admits non-trivial endomorphisms, $f$ must be linear or quadratic in $Y$. In that case ${\mathcal M}$ is purely simple if and only if $f$ is an irreducible quadratic. Then the $K$-algebra $\edm\cm$ embeds in the quadratic function field $\krx[Y]/(f)$. For some height functions $h$ of infinite support $I$, the search for a functional $\ alpha$ for which $(h,\alpha)$ has regulator $0$ comes down to having functions $\eta\colon I\ar K$ such that no planar curve intersects the graph of $\eta$ on a cofinite subset. If $K$ has characterictic not $2$, and the triplet $(m,h,\alpha)$ gives a purely-simple Kronecker module ${\mathcal M}$ having non-trivial endomorphisms, then $h$ attains the value $\infty$ at least once on $ \ktil$ and $h$ is finite-valued at least twice on $\ktil$. Conversely all these $h$ form part of such triplets. The proof of this result hinges on the fact that a rational function $r$ is a perfect square in $\krx$ if and only if $r$ is a perfect square in the completions of $\krx$ with respect to all of its valuations. Keywords:Purely simple Kronecker module, regulating polynomial, Laurent expansions, endomorphism algebra Categories:16S50, 15A27 16. CJM 2005 (vol 57 pp. 82) Jordan Structures of Totally Nonnegative Matrices An $n \times n$ matrix is said to be totally nonnegative if every minor of $A$ is nonnegative. In this paper we completely characterize all possible Jordan canonical forms of irreducible totally nonnegative matrices. Our approach is mostly combinatorial and is based on the study of weighted planar diagrams associated with totally nonnegative matrices. Keywords:totally nonnegative matrices, planar diagrams,, principal rank, Jordan canonical form Categories:15A21, 15A48, 05C38 17. CJM 2004 (vol 56 pp. 776) Best Approximation in Riemannian Geodesic Submanifolds of Positive Definite Matrices We explicitly describe the best approximation in geodesic submanifolds of positive definite matrices obtained from involutive congruence transformations on the Cartan-Hadamard manifold ${\mathrm {Sym}}(n,{\Bbb R})^{++}$ of positive definite matrices. An explicit calculation for the minimal distance function from the geodesic submanifold ${\mathrm{Sym}}(p,{\mathbb R})^{++}\times {\mathrm {Sym}}(q,{\mathbb R})^{++}$ block diagonally embedded in ${\mathrm{Sym}}(n,{\mathbb R})^{++}$ is given in terms of metric and spectral geometric means, Cayley transform, and Schur complements of positive definite matrices when $p\leq 2$ or $q\leq 2.$ Keywords:Matrix approximation, positive, definite matrix, geodesic submanifold, Cartan-Hadamard manifold,, best approximation, minimal distance function, global tubular, neighborhood theorem, Schur complement, metric and spectral, geometric mean, Cayley transform Categories:15A48, 49R50, 15A18, 53C3 18. CJM 2004 (vol 56 pp. 134) Linear Operators on Matrix Algebras that Preserve the Numerical Range, Numerical Radius or the States Every norm $\nu$ on $\mathbf{C}^n$ induces two norm numerical ranges on the algebra $M_n$ of all $n\times n$ complex matrices, the spatial numerical range $$ W(A)= \{x^*Ay : x, y \in \mathbf{C}^n,\ nu^D(x) = \nu(y) = x^*y = 1\}, $$ where $\nu^D$ is the norm dual to $\nu$, and the algebra numerical range $$ V(A) = \{ f(A) : f \in \mathcal{S} \}, $$ where $\mathcal{S}$ is the set of states on the normed algebra $M_n$ under the operator norm induced by $\nu$. For a symmetric norm $\nu$, we identify all linear maps on $M_n$ that preserve either one of the two norm numerical ranges or the set of states or vector states. We also identify the numerical radius isometries, {\it i.e.}, linear maps that preserve the (one) numerical radius induced by either numerical range. In particular, it is shown that if $\nu$ is not the $\ell_1$, $\ell_2$, or $\ell_\infty$ norms, then the linear maps that preserve either numerical range or either set of states are ``inner'', {\it i.e.}, of the form $A\mapsto Q^*AQ$, where $Q$ is a product of a diagonal unitary matrix and a permutation matrix and the numerical radius isometries are unimodular scalar multiples of such inner maps. For the $ \ell_1$ and the $\ell_\infty$ norms, the results are quite different. Keywords:Numerical range, numerical radius, state, isometry Categories:15A60, 15A04, 47A12, 47A30 19. CJM 2003 (vol 55 pp. 1000) Some Convexity Results for the Cartan Decomposition In this paper, we consider the set $\mathcal{S} = a(e^X K e^Y)$ where $a(g)$ is the abelian part in the Cartan decomposition of $g$. This is exactly the support of the measure intervening in the product formula for the spherical functions on symmetric spaces of noncompact type. We give a simple description of that support in the case of $\SL(3,\mathbf{F})$ where $\mathbf{F} = \mathbf{R}$, $\mathbf{C}$ or $\mathbf{H}$. In particular, we show that $\mathcal{S}$ is convex. We also give an application of our result to the description of singular values of a product of two arbitrary matrices with prescribed singular values. Keywords:convexity theorems, Cartan decomposition, spherical functions, product formula, semisimple Lie groups, singular values Categories:43A90, 53C35, 15A18 20. CJM 2003 (vol 55 pp. 91) Some Convexity Features Associated with Unitary Orbits Let $\mathcal{H}_n$ be the real linear space of $n\times n$ complex Hermitian matrices. The unitary (similarity) orbit $\mathcal{U} (C)$ of $C \in \mathcal{H}_n$ is the collection of all matrices unitarily similar to $C$. We characterize those $C \in \mathcal{H}_n$ such that every matrix in the convex hull of $\mathcal{U}(C)$ can be written as the average of two matrices in $\mathcal{U}(C) $. The result is used to study spectral properties of submatrices of matrices in $\mathcal{U}(C)$, the convexity of images of $\mathcal{U} (C)$ under linear transformations, and some related questions concerning the joint $C$-numerical range of Hermitian matrices. Analogous results on real symmetric matrices are also discussed. Keywords:Hermitian matrix, unitary orbit, eigenvalue, joint numerical range Categories:15A60, 15A42 21. CJM 2002 (vol 54 pp. 571) Diagonals and Partial Diagonals of Sum of Matrices Given a matrix $A$, let $\mathcal{O}(A)$ denote the orbit of $A$ under a certain group action such as \begin{enumerate}[(4)] \item[(1)] $U(m) \otimes U(n)$ acting on $m \times n$ complex matrices $A$ by $(U,V)*A = UAV^t$, \item[(2)] $O(m) \otimes O(n)$ or $\SO(m) \otimes \SO(n)$ acting on $m \times n$ real matrices $A$ by $(U,V)*A = UAV^t$, \item[(3)] $U(n)$ acting on $n \times n$ complex symmetric or skew-symmetric matrices $A$ by $U*A = UAU^t$, \item[(4)] $O(n)$ or $\SO(n)$ acting on $n \times n$ real symmetric or skew-symmetric matrices $A$ by $U*A = UAU^t$. \end{enumerate} Denote by $$ \mathcal{O}(A_1,\dots,A_k) = \{X_1 + \cdots + X_k : X_i \in \mathcal{O}(A_i), i = 1,\dots,k\} $$ the joint orbit of the matrices $A_1,\dots,A_k$. We study the set of diagonals or partial diagonals of matrices in $\mathcal{O}(A_1,\dots,A_k)$, {\it i.e.}, the set of vectors $(d_1,\dots,d_r)$ whose entries lie in the $(1,j_1),\dots,(r,j_r)$ positions of a matrix in $\mathcal {O}(A_1, \dots,A_k)$ for some distinct column indices $j_1,\dots,j_r$. In many cases, complete description of these sets is given in terms of the inequalities involving the singular values of $A_1, \dots,A_k$. We also characterize those extreme matrices for which the equality cases hold. Furthermore, some convexity properties of the joint orbits are considered. These extend many classical results on matrix inequalities, and answer some questions by Miranda. Related results on the joint orbit $\mathcal{O}(A_1,\dots,A_k)$ of complex Hermitian matrices under the action of unitary similarities are also discussed. Keywords:orbit, group actions, unitary, orthogonal, Hermitian, (skew-)symmetric matrices, diagonal, singular values Categories:15A42, 15A18 22. CJM 2001 (vol 53 pp. 758) Inequivalent Transitive Factorizations into Transpositions The question of counting minimal factorizations of permutations into transpositions that act transitively on a set has been studied extensively in the geometrical setting of ramified coverings of the sphere and in the algebraic setting of symmetric functions. It is natural, however, from a combinatorial point of view to ask how such results are affected by counting up to equivalence of factorizations, where two factorizations are equivalent if they differ only by the interchange of adjacent factors that commute. We obtain an explicit and elegant result for the number of such factorizations of permutations with precisely two factors. The approach used is a combinatorial one that rests on two constructions. We believe that this approach, and the combinatorial primitives that have been developed for the ``cut and join'' analysis, will also assist with the general case. Keywords:transitive, transposition, factorization, commutation, cut-and-join Categories:05C38, 15A15, 05A15, 15A18 23. CJM 2001 (vol 53 pp. 470) Hyperbolic Polynomials and Convex Analysis A homogeneous real polynomial $p$ is {\em hyperbolic} with respect to a given vector $d$ if the univariate polynomial $t \mapsto p(x-td)$ has all real roots for all vectors $x$. Motivated by partial differential equations, G{\aa}rding proved in 1951 that the largest such root is a convex function of $x$, and showed various ways of constructing new hyperbolic polynomials. We present a powerful new such construction, and use it to generalize G{\aa}rding's result to arbitrary symmetric functions of the roots. Many classical and recent inequalities follow easily. We develop various convex-analytic tools for such symmetric functions, of interest in interior-point methods for optimization problems over related cones. Keywords:convex analysis, eigenvalue, G{\aa}rding's inequality, hyperbolic barrier function, hyperbolic polynomial, hyperbolicity cone, interior-point method, semidefinite program, singular value, symmetric function Categories:90C25, 15A45, 52A41 24. CJM 2000 (vol 52 pp. 141) Numerical Ranges Arising from Simple Lie Algebras A unified formulation is given to various generalizations of the classical numerical range including the $c$-numerical range, congruence numerical range, $q$-numerical range and von Neumann range. Attention is given to those cases having connections with classical simple real Lie algebras. Convexity and inclusion relation involving those generalized numerical ranges are investigated. The underlying geometry is emphasized. Keywords:numerical range, convexity, inclusion relation Categories:15A60, 17B20 25. CJM 2000 (vol 52 pp. 197) Sublinearity and Other Spectral Conditions on a Semigroup Subadditivity, sublinearity, submultiplicativity, and other conditions are considered for spectra of pairs of operators on a Hilbert space. Sublinearity, for example, is a weakening of the well-known property~$L$ and means $\sigma(A+\lambda B) \subseteq \sigma(A) + \lambda \sigma(B)$ for all scalars $\lambda$. The effect of these conditions is examined on commutativity, reducibility, and triangularizability of multiplicative semigroups of operators. A sample result is that sublinearity of spectra implies simultaneous triangularizability for a semigroup of compact operators. Categories:47A15, 47D03, 15A30, 20A20, 47A10, 47B10
{"url":"http://cms.math.ca/cjm/msc/15?page=1","timestamp":"2014-04-18T13:22:16Z","content_type":null,"content_length":"73405","record_id":"<urn:uuid:56007df2-db55-4776-941e-1a12110fb02c>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00362-ip-10-147-4-33.ec2.internal.warc.gz"}
Bananas, Lenses, Envelopes and Barbed Wire Bananas, Lenses, Envelopes and Barbed Wire A Translation Guide by Edward Z. Yang One of the papers I've been slowly rereading since summer began is "Functional Programming with Bananas, Lenses, Envelopes and Barbed Wire", by Erik Meijer, Maarten Fokkinga and Ross Paterson. If you want to know what {cata,ana,hylo,para}morphisms are, this is the paper to read: section 2 gives a highly readable formulation of these morphisms for the beloved linked list. Last time, however, my eyes got a little bit glassy when they started discussing algebraic data types, despite having used and defined them in Haskell; part of me felt inundated in a sea of triangles, circles and squiggles, and by the time they reached the laws for the basic combinators, I might as well have said, "It's all math to me!" A closer reading revealed that, actually, all of these algebraic operators can be written out in plain Haskell, and for someone who has been working with Haskell for a little bit of time, this can provide a smoother (albeit more verbose) reading. Thus, I present this translation guide. Type operators. By convention, types are $A, B, C\ldots$ on the left and a, b, c... on the right. We distinguish these from function operators, though the paper does not and relies on convention to distinguish between the two. $A \dagger B \Leftrightarrow$ Bifunctor t => t a b $A_F \Leftrightarrow$ Functor f => f a $A* \Leftrightarrow$ [a] $D \parallel D' \Leftrightarrow$ (d, d') $D\ |\ D' \Leftrightarrow$ Either d d' $_I \Leftrightarrow$ Identity $\underline{D} \Leftrightarrow$ Const d $A_{(FG)} \Leftrightarrow$ (Functor f, Functor g) => g (f a) $A_{(F\dagger G)} \Leftrightarrow$ (Bifunctor t, Functor f, Functor g) => Lift t f g a $\boldsymbol{1} \Leftrightarrow$ () (For the pedantic, you need to add Hask Hask Hask to the end of all the Bifunctors.) Function operators. By convention, functions are $f, g, h\ldots$ on the left and f :: a -> b, g :: a' -> b', h... on the right (with types unified as appropriate). $f \dagger g \Leftrightarrow$ bimap f g :: Bifunctor t => t a a' -> t b b' $f_F \Leftrightarrow$ fmap f :: Functor f => f a -> f b $f \parallel g \Leftrightarrow$ f *** g :: (a, a') -> (b, b') where f *** g = \(x, x') -> (f x, g x') $\grave{\pi} \Leftrightarrow$ fst :: (a, b) -> a $\acute{\pi} \Leftrightarrow$ snd :: (a, b) -> b $f \vartriangle g \Leftrightarrow$ f &&& g :: a -> (b, b') -- a = a' where f &&& g = \x -> (f x, g x) $\Delta x \Leftrightarrow$ double :: a -> (a, a) where double x = (x, x) $f\ |\ g \Leftrightarrow$ asum f g :: Either a a' -> Either b b' where asum f g (Left x) = Left (f x) asum f g (Right y) = Right (g y) $\grave{\i} \Leftrightarrow$ Left :: a -> Either a b $\acute{\i} \Leftrightarrow$ Right :: b -> Either a b $f\ \triangledown\ g \Leftrightarrow$ either f g :: Either a a' -> b -- b = b' $abla x \Leftrightarrow$ extract x :: a where extract (Left x) = x extract (Right x) = x $f \rightarrow g \Leftrightarrow$ (f --> g) h = g . h . f (-->) :: (a' -> a) -> (b -> b') -> (a -> b) -> a' -> b' $g \leftarrow f \Leftrightarrow$ (g <-- f) h = g . h . f (<--) :: (b -> b') -> (a' -> a) -> (a -> b) -> a' -> b' $(f \overset{F}{\leftarrow} g) \Leftrightarrow$ (g <-*- f) h = g . fmap h . f (<-*-) :: Functor f => (f b -> b') -> (a' -> f a) -> (a -> b) -> a' -> b' $f_I \Leftrightarrow$ id f :: a -> b $f\underline{D} \Leftrightarrow$ const id f :: a -> a $x_{(FG)} \Leftrightarrow$ (fmap . fmap) x $VOID \Leftrightarrow$ const () $\mu f \Leftrightarrow$ fix f Now, let's look at the abides law: $(f \vartriangle g)\ \triangledown\ (h \vartriangle j) = (f\ \triangledown\ h) \vartriangle (g\ \triangledown\ j)$ Translated into Haskell, this states: either (f &&& g) (h &&& j) = (either f h) &&& (either g j) Which (to me at least) makes more sense: if I want to extract a value from Either, and then run two functions on it and return the tuple of results, I can also split the value into a tuple immediately, and extract from the either "twice" with different functions. (Try running the function manually on a Left x and Right y.) 7 Responses to “Bananas, Lenses, Envelopes and Barbed Wire A Translation Guide” 1. I agree, they went a bit overboard with the symbols. I think the type of (<-*-) should be Functor f => (f b -> b’) -> (a’ -> f a) -> (a -> b) -> a’ -> b’ 2. Also the type of `double` should be: double :: a -> (a, a) Hoping this renders correctly… 3. See also related work on translating accumulations: * http://splonderzoek.blogspot.com/2009/09/upwards-and-downwards-accumulations-on.html * http://github.com/spl/splonderzoek/blob/master/Accumulations.hs 4. Awesome, thanks for writing this up! This would have been extremely helpful for me when I read that paper for the first time a few years back… I should probably give it another read this summer. (LaTeX pro tip: \i and \j produce variants without the dots, which should be used when putting accents over an i or a j.) 5. Sjoerd, Douglas and Brent, thanks for the corrections, I’ve updated the post accordingly! (I also took the liberty of editing your comments slightly). 6. A good one, I went through pretty much the same process when reading that paper. Crazy notation. I believe A_(FG) should translate to “(Functor f, Functor g) => g (f a)” 7. Thanks Niklas, it’s been fixed. The new translation is a little disingenuous, unfortunately, because the notation in the paper permits A_FGH, whereas in Haskell we have to explicitly parenthesize
{"url":"http://blog.ezyang.com/2010/05/bananas-lenses-envelopes-and-barbed-wire-a-translation-guide/comment-page-1/","timestamp":"2014-04-20T05:43:09Z","content_type":null,"content_length":"27250","record_id":"<urn:uuid:6efc5431-d76a-4b0e-b0fa-56e9c733595e>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00459-ip-10-147-4-33.ec2.internal.warc.gz"}
12 projects tagged "hash" FEHASHMAC is a collection of publicly known hash algorithms integrated into a command-line utility. Currently 42 hash algorithms belonging to 12 algorithm families are supported, including the five SHA-3 finalist contributions, plus HMAC for each algorithm. FEHASHMAC contains a set of over 540 known test vectors and results for each algorithm such that the correct implementation for each hardware platform and compiler version can be directly verified. FEHASHMAC supports bitwise hash calculation for algorithms with available bitwise test vectors. Currently this applies to the SHA algorithms: sha1, sha224, sha256, sha384, sha512, and to the five SHA-3 finalists. The so-called Gillogly bitwise input has only been tested for sha1, but is also implemented in the SHA-2 hashes. Bitwise hash calculation is also supported in sha512-224, sha512-256, and whirl, but there are no bitwise test vectors available. FEHASHMAC can also calculate hashed message authentication codes
{"url":"http://freecode.com/tags/hash?page=1&with=&without=18","timestamp":"2014-04-17T23:27:04Z","content_type":null,"content_length":"71801","record_id":"<urn:uuid:38b50e45-7124-4c17-8f42-293474e458ae>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00081-ip-10-147-4-33.ec2.internal.warc.gz"}
Stat 421-502 computer code Stat 421-502 Design and analysis of experiments 12/08/10 Homework 9 assigned, due 12/15/10. Quiz 2 will be on Mon, Dec 6. 11/29/10 Homework 8 assigned, due 12/06/10. 11/18/10 Homework 7 assigned, due 11/29/10. 11/10/10 Homework 6 assigned, due 11/17/10. 11/03/10 Homework 5 assigned, due 11/10/10. 10/25/10 Homework 4 assigned, due 11/01/10. Quiz 1 will be on Wed, Nov 3. 10/18/10 Homework 3 assigned, due 10/25/10. 10/11/10 Homework 2 assigned, due 10/18/10. 10/04/10 Homework 1 assigned, due 10/11/10. Please follow the homework guidelines. Meeting times Instructors • Peter Hoff • Office: C-319 Padelford • Office Hours: T Th 10:00-11:00 or by appointment. • hoff at stat dot washington dot edu • Yuwen Dai • Office: Padelford B-302 • Office Hours: Th 1:30-3:30 or by appointment. • daiyuwen at u dot washington dot edu • Hoff, P., "Lecture notes" • R Development Core Team, "An Introduction to R" (html, pdf ). • Montgomery, D., "Design and Analysis of experiments" (any edition, recommended but not required). The supported software for 502 is the R statistical environment. R code will be provided for most of the in-class data analysis examples. Approximate schedule: Week 1 : Experiments, test statistics, completely randomized designs, significance testing Week 2 : Review of normal theory tests and confidence intervals, basic decision theory, power and sample size Week 3 : Treatment effects model, ANOVA Week 4 : SS decomposition, geometric interpretation Week 5 : Treatment comparisons, model diagnostics Week 6 : Factorial treatment designs Week 7 : ANOVA decomposition for the additive model Week 8 : ANOVA for the interaction model, model comparison, and normal-theory testing Week 9 : Complete and incomplete block designs, Latin square designs Week 10 : Fractional Factorial designs, aliasing, confounding, resolution Week 11 : Split plot designs, different size experimental units, repeated measures My current plan is to have • about 7 or 8 regular homeworks, • three in-class quizzes. • a take-home midterm and a take-home final. Homeworks and quizzes will be worth 40 points, and take-homes worth 60 points. Late policy: Each turned in item receives an initial grade of x, then the actual grade is y=x exp(-d/8), where d is the number of working days after the due date I receive the work. Everyone receives one grace day to be applied to one homework for the entire quarter.
{"url":"http://www.stat.washington.edu/hoff/courses/stat421-502/index.html","timestamp":"2014-04-21T04:32:56Z","content_type":null,"content_length":"5659","record_id":"<urn:uuid:95505179-9683-4ea7-847c-dd4674d92286>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00367-ip-10-147-4-33.ec2.internal.warc.gz"}
Algorithm of the Week: Spatial Indexing with Quadtrees and Hilbert Curves Some time ago at Oredev, after the sessions, there was "Birds of a Feather" - a sort of mini-unconference. Anyone could write up a topic on the whiteboard; interested individuals added their names, and each group got allocated a room to chat about the topic. I joined the "Spatial Indexing" group, and we spent a fascinating hour and a half talking about spatial indexing methods, reminding me of several interesting algorithms and techniques. Spatial indexing is increasingly important as more and more data and applications are geospatially-enabled. Efficiently querying geospatial data, however, is a considerable challenge: because the data is two-dimensional (or sometimes, more), you can't use standard indexing techniques to query on position. Spatial indexes solve this through a variety of techniques. In this post, we'll cover several - quadtrees, geohashes (not to be confused withgeohashing), and space-filling curves - and reveal how they're all interrelated. Quadtrees are a very straightforward spatial indexing technique. In a Quadtree, each node represents a bounding box covering some part of the space being indexed, with the root node covering the entire area. Each node is either a leaf node - in which case it contains one or more indexed points, and no children, or it is an internal node, in which case it has exactly four children, one for each quadrant obtained by dividing the area covered in half along both axes - hence the name. A representation of how a quadtree divides an indexed area. Source: Wikipedia Inserting data into a quadtree is simple: Starting at the root, determine which quadrant your point occupies. Recurse to that node and repeat, until you find a leaf node. Then, add your point to that node's list of points. If the list exceeds some pre-determined maximum number of elements, split the node, and move the points into the correct subnodes. A representation of how a quadtree is structured internally. To query a quadtree, starting at the root, examine each child node, and check if it intersects the area being queried for. If it does, recurse into that child node. Whenever you encounter a leaf node, examine each entry to see if it intersects with the query area, and return it if it does. Note that a quadtree is very regular - it is, in fact, a trie, since the values of the tree nodes do not depend on the data being inserted. A consequence of this is that we can uniquely number our nodes in a straightforward manner: Simply number each quadrant in binary (00 for the top left, 10 for the top right, and so forth), and the number for a node is the concatenation of the quadrant numbers for each of its ancestors, starting at the root. Using this system, the bottom right node in the sample image would be numbered 11 01. If we define a maximum depth for our tree, then, we can calculate a point's node number without reference to the tree - simply normalize the node's coordinates to an appropriate integer range (for example, 32 bits each), and then interleave the bits from the x and y coordinates -each pair of bits specifies a quadrant in the hypothetical quadtree. This system might seem familiar: it's a geohash! At this point, you can actually throw out the quadtree itself - the node number, or geohash, contains all the information we need about its location in the tree. Each leaf node in a full-height tree is a complete geohash, and each internal node is represented by the range from its smallest leaf node to its largest one. Thus, you can efficiently locate all the points under any internal node by indexing on the geohash by performing a query for everything within the numeric range covered by the desired node. Querying once we've thrown away the tree itself becomes a little more complex. Instead of refining our search set recursively, we need to construct a search set ahead of time. First, find the smallest prefix (or quadtree node) that completely covers the query area. In the worst case, this may be substantially larger than the actual query area - for example, a small shape in the center of the indexed area that intersects all four quadrants would require selecting the root node for this step. The aim, now, is to construct a set of prefixes that completely covers the query region, while including as little area outside the region as possible. If we had no other constraints, we could simply select the set of leaf nodes that intersect the query area - but that would result in a lot of queries. Another constraint, then, is that we want to minimise the number of distinct ranges we have to query for. One approach to doing this is to start by setting a maximum number of ranges we're willing to have. Construct a set of ranges, initially populated with the prefix we identified earlier. Pick the node in the set that can be subdivided without exceeding the maximum range count and will remove the most unwanted area from the query region. Repeat this until there are no ranges in the set that can be further subdivided. Finally, examine the resulting set, and join any adjacent ranges, if possible. The diagram below demonstrates how this works for a query on a circular area with a limit of 5 query How a query for a region is broken into a series of geohash prefixes/ranges. This approach works well, and it allows us to avoid the need to do recursive lookups - the set of range lookups we do execute can all be done in parallel. Since each lookup can be expected to require a disk seek, parallelizing our queries allows us to substantially cut down the time required to return the results. Still, we can do better. You may notice that all the areas we need to query in the above diagram are adjacent, yet we can only merge two of them (the two in the bottom right of the selected area) into a single range query, requiring us to do 4 separate queries. This is due in part to the order that our geohashing approach 'visits' subregions, working left to right, then top to bottom in each quad. The discontinuity as we go from top right to bottom left quad results in us having to split up some ranges that we could otherwise make contiguous. If we were to visit regions in a different order, perhaps we could minimise or eliminate these discontinuities, resulting in more areas that can be treated as adjacent and fetched with a single query. With an improvement in efficiency like that, we could do fewer queries for the same area covered, or conversely, the same number of queries, but including less extraneous area. Illustrates the order in which the geohashing approach 'visits' each quad. Hilbert Curves Suppose instead, we visit regions in a 'U' shape. Within each quad, of course, we also visit subquads in the same 'U' shape, but aligned so as to match up with neighbouring quads. If we organise the orientation of these 'U's correctly, we can completely eliminate any discontinuities, and visit the entire area at whatever resolution we choose continuously, fully exploring each region before moving on to the next. Not only does this eliminate discontinuities, but it also improves the overall locality. The pattern we get if we do this may look familiar - it's a Hilbert Curve. Hilbert Curves are part of a class of one-dimensional fractals known as space-filling curves, so named because they are one dimensional lines that nevertheless fill all available space in a fixed area. They're fairly well known, in part thanks to XKCD's use of them for a map of the internet. As you can see, they're also of use for spatial indexing, since they exhibit exactly the locality and continuity required. For example, if we take another look at the example we used for finding the set of queries required to encompass a circle above, we find that we can reduce the number of queries by one: the small region in the lower left is now contiguous with the region to its right, and whilst the two regions at the bottom are no longer contiguous with each other, the rightmost one is now contiguous with the large area in the upper right. Illustrates the order in which a hilbert curve 'visits' each quad. One thing that our elegant new system is lacking, so far, is a way of converting between a pair of (x,y) coordinates and the corresponding position in the hilbert curve. With geohashing it was easy and obvious - just interleave the x and y coordinates - but there's no obvious way to modify that for a hilbert curve. Searching the internet, you're likely to come across many descriptions of how hilbert curves are drawn, but few if any descriptions of how to find the position of an arbitrary point. To figure this out, we need to take a closer look at how the hilbert cure can be recursively The first thing to observe is that although most references to hilbert curves focus on how to draw the curve, this is a distraction from the essential property of the curve, and its importance to us: It's an ordering for points on a plane. If we express a hilbert curve in terms of this ordering, drawing the curve itself becomes trivial - simply a matter of connecting the dots. Forget about how to connect adjacent sub-curves, and instead focus on how we can recursively enumerate the points. Hilbert curves are all about ordering a set of points on a 2d plane At the root level, enumerating the points is simple: Pick a direction and a start point, and proceed around the four quadrants, numbering them 0 to 3. The difficulty is introduced when we want to determine the order we visit the sub-quadrants in while maintaining the overall adjacency property. Examination reveals that each of the sub-quadrants' curves is a simple transformation of the original curve: there are only four possible transformations. Naturally, this applies recursively to sub-sub quadrants, and so forth. The curve we use for a given quadrant is determined by the curve we used for the square it's in, and the quadrant's position. With a little work, we can construct a table that encapsulates this: Suppose we want to use this table to determine the position of a point on a third-level hilbert curve. For the sake of this example, assume our point has coordinates (5,2) Starting with the first square on the diagram, find the quadrant your point is in - in this case, it's the upper right quadrant. The first part of our hilbert curve position, then, is 3 (11 in binary). Next, we consult the square shown in the inset of square 3 - in this case, it's the second square. Repeat the process: which sub-quadrant does our point fall into? Here, it's the lower left one, meaning the next part of our position is 1, and the square we should consult next is the second one again. Repeating the process one final time, we find our point falls in the upper right sub-sub-quadrant, our final coordinate is 3 (11 in binary). Stringing them together, we now know the position of our point on the curve is 110111 binary, or 55. Let's be a little more methodical, and write methods to convert between x,y coordinates and hilbert curve positions. First, we need to express our diagram above in terms a computer can understand: hilbert_map = { 'a': {(0, 0): (0, 'd'), (0, 1): (1, 'a'), (1, 0): (3, 'b'), (1, 1): (2, 'a')}, 'b': {(0, 0): (2, 'b'), (0, 1): (1, 'b'), (1, 0): (3, 'a'), (1, 1): (0, 'c')}, 'c': {(0, 0): (2, 'c'), (0, 1): (3, 'd'), (1, 0): (1, 'c'), (1, 1): (0, 'b')}, 'd': {(0, 0): (0, 'a'), (0, 1): (3, 'c'), (1, 0): (1, 'd'), (1, 1): (2, 'd')}, } In the snippet above, each element of 'hilbert_map' corresponds to one of the four squares in the diagram above. To make things easier to follow, I've identified each one with a letter - 'a' is the first square, 'b' the second, and so forth. The value for each square is a dict, mapping x and y coordinates for the (sub-)quadrant to the position along the line (the first part of the value tuple) and the square to use next (the second part of the value tuple). Here's how we can use this to translate x and y coordinates into a hilbert curve position: def point_to_hilbert(x, y, order=16): current_square = 'a' position = 0 for i in range(order - 1, -1, -1): position <<= 2 quad_x = 1 if x & (1 << i) else 0 quad_y = 1 if y & (1 << i) else 0 quad_position, current_square = hilbert_map[current_square][(quad_x, quad_y)] position |= quad_position return position The input to this function is the integer x and y coordinates, and the order of the curve. An order 1 curve fills a 2x2 grid, an order 2 curve fills a 4x4 grid, and so forth. Our x and y coordinates, then, should be normalized to a range of 0 to 2order-1. The function works by stepping over each bit of the x and y coordinates, starting with the most significant. For each, it determines which (sub-)quadrant the coordinate lies in, by testing the corresponding bit, then fetches the position along the line and the next square to use from the table we defined earlier. The curve position is set as the least significant 2 bits on the position variable, and at the beginning of the next loop, it's left-shifted to make room for the next set of coordinates. Let's check that we've written the function correctly by running our example from above through it: >>> point_to_hilbert(5,2,3)55 Presto! For a further test, we can use the function to generate a complete list of ordered points for a hilbert curve, then use a spreadsheet to graph them and see if we get a hilbert curve. Enter the following expression into an interactive Python interpreter: >>> points =[(x, y)for x in range(8)for y in range(8)]>>> sorted_points = sorted(points, key=lambda k: point_to_hilbert(k[0], k[1],3))>>>print'\n'.join('%s,%s'% x for x in sorted_points) Take the resulting text, paste it into a file called 'hilbert.csv', open it in your favorite spreadsheet, and instruct it to generate a scatter plot. The result is, of course, a nicely plotted hilbert curve! The inverse of point_to_hilbert is a straightforward reversal of the hilbert_map; implementing it is left as an exercise for the reader. There you have it - spatial indexing, from quadtrees to geohashes to hilbert curves. One final observation: If you express the ordered sequence of x,y coordinates required to draw a hilbert curve in binary, do you notice anything interesting about the ordering? Does it remind you of anything? Just to wrap up, a caveat: All of the indexing methods I've described today are only well-suited to indexing points. If you want to index lines, polylines, or polygons, you're probably out of luck with these methods - and so far as I'm aware, the only known algorithm for effectively indexing shapes is the R-tree, an entirely different and more complex beast. Published at DZone with permission of its author, Nick Johnson. (source)
{"url":"http://java.dzone.com/articles/algorithm-week-spatial","timestamp":"2014-04-20T21:06:26Z","content_type":null,"content_length":"76949","record_id":"<urn:uuid:f66bec6b-1f5c-4d72-80e8-3dbc65f3fc85>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00436-ip-10-147-4-33.ec2.internal.warc.gz"}
About "The Map is Not the Territory" Description: Blog by a traditional percussive dancer (Appalachian style flatfooting and clogging and Canadian step dancing) who "teaches the math that directly relates to the process of making rhythm and patterns with the feet." Posts about the mathematics of percussive dance, which date back to October, 2010, have included "What You Can Learn from A Square," "The Power of Limits 4: A Jazz Metaphor in Mathematics Education," "Math & Movement Lesson: Basketball Court Pathways," "Starting Work with the Common Core State Standards," "Ode to A Dance Board," "Reflection is Good for Everyone (Even the Teacher!)," "Origami Twirling Bird: Points, Edges, Turns, Poetry and Poses," "Hundreds, Tens and Ones Revisited: Money Edition," "Supporting 'Math Values in a Rich Context' / Origami as Math," "Marx Brothers Math: Transformation & Reflection," "Marshmallow Math: Solids & Sculpture," "Knowing Numbers & UNO Update," "A Work in Progress: Paper Quilts & Hexagonal Rotation Designs," "Hundreds, Tens and Ones Revisited: Money Edition," "Totally Territorial: Cats, Maps, Area & Multiplication," "Hexagon Poetry," "Big Math: Kid Sized Geometric Structures," "If You Give a Girl a Grid...," "Prelude: Spiders Who Spin Fancy Webs," "A Flood of Self-Initiated Math," and "Sidewalk Math: Functions!" and "Is it Cheating to Use the Multiplication Chart?" and "White...Red...Light Green...Purple...Cuisenaire Rods!" Rosenfeld has developed the arts education programs Rhythm & Dance (now called Drum With Your Feet!) and Math In Your Feet.
{"url":"http://mathforum.org/library/view/76561.html","timestamp":"2014-04-18T21:43:05Z","content_type":null,"content_length":"6569","record_id":"<urn:uuid:5b728e81-0426-4243-9c0d-26a8263053ea>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00413-ip-10-147-4-33.ec2.internal.warc.gz"}
South Pasadena Precalculus Tutor Find a South Pasadena Precalculus Tutor ...Have taken many math classes since then. Have tutored students in Algebra 2. Received 5 on Calc BC exam. 15 Subjects: including precalculus, reading, algebra 1, geometry ...For the past three summers, I have worked as the lead teaching assistant for a math course for incoming freshmen at Caltech. I'm also a tutor there, and I've been working with students from elementary school to undergrad and community college (including PCC, ELAC, Mt. SAC, and others) for six years now. 51 Subjects: including precalculus, reading, chemistry, physics ...In addition to helping students prepare for standardized tests (SAT, ACT, ISEE, SSAT), I provide tutoring in history, mathematics, English, and other subjects. I also specialize in helping students write essays for college applications, teasing out what specific aspects of their lives will make them interesting to admissions officers. Aside from tutoring, I'm a professional 42 Subjects: including precalculus, English, SAT math, German ...I also provided tutoring in those classes. I was a teaching assistant in an introductory engineering class. I have also done a course in cadet teaching in which I taught math to 5th graders. 8 Subjects: including precalculus, chemistry, calculus, geometry ...For the past two years I worked individually with ten students from 6th grade to 12th grade on a weekly basis. The majority of my tutoring experience has been in mathematics (elementary school math through college calculus) and test preparation (ACT, SAT, etc.); however, I have also tutored stud... 28 Subjects: including precalculus, reading, calculus, English
{"url":"http://www.purplemath.com/south_pasadena_precalculus_tutors.php","timestamp":"2014-04-17T15:37:42Z","content_type":null,"content_length":"24159","record_id":"<urn:uuid:74e4fafd-892f-40a3-b502-c8716be26020>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00555-ip-10-147-4-33.ec2.internal.warc.gz"}
A Swimmer Heads Directly Across A River, Swimming ... | Chegg.com A swimmer heads directly across a river, swimming at 1.3 m/s relative to the water. She arrives at a point 30 m downstream from the point directly across the river, which is 87 m wide. a. What is the speed of the river current? b. What is the swimmer's speed relative to the shore? c. In what direction should the swimmer head so as to arrive at the point directly opposite her starting point? Choose a coordinate system in which across the river is the +x direction and downriver is the +y direction. Draw a sketch showing the velocity of the swimmer across the stream, the velocity of the water downstream, and the resultant velocity vector both down and across the stream. Use trigonometry and the information given to find the desired quantities
{"url":"http://www.chegg.com/homework-help/questions-and-answers/swimmer-heads-directly-across-river-swimming-13-m-s-relative-water-arrives-point-30-m-down-q934999","timestamp":"2014-04-20T18:41:04Z","content_type":null,"content_length":"19383","record_id":"<urn:uuid:67afe607-aead-4bbf-b8f4-6e507742318a>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00407-ip-10-147-4-33.ec2.internal.warc.gz"}
Is this cool with you? Re: Is this cool with you? We can still use it though. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Is this cool with you? Of course! I plan doing it. Let me try the zeta(3) series. The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: Is this cool with you? Okay, post what you get. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Is this cool with you? Hi bobbym The first hundred digits: The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: Is this cool with you? That is very good, excellent work! In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Is this cool with you? Thanks! And thanks for the converger. I'll try and figure it out. The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: Is this cool with you? I got it from some site but do not remember where. Thought it was an oddity at first but found that it really can work. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Is this cool with you? The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: Is this cool with you? That is it, I remember it now. Thanks for finding it. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Is this cool with you? No problem. I will try understanding it, but it seems complicated... The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: Is this cool with you? They claim it is based on Pade Approximants using Chebychev polynomials but I do not see it. Their method 2 is also interesting. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Is this cool with you? New Problem We have the numbers S={1,1,1,1,2,2,2,2,3,3,3,3,4,4,4,4,5,5,5,5,6,6,6,6,7,7,7,7,8,8,8,8,9,9,9,9,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10} Eight numbers are drawn randomly from S without replacement. What is the probability that the 3 highest numbers sum to 30? A says) It is quite rare, less than .05. B says) Not really, it is quite common. C says) I am getting .05 too. D says) According to my calculations it is 2.25, but I think that is a little low. E says) It is almost 1 / 2. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Is this cool with you? Hi bobbym The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: Is this cool with you? Hi bobbym, "Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha? "Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." Re: Is this cool with you? Hi gAr; In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Is this cool with you? "Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha? "Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." Re: Is this cool with you? Hi anonimnystefy; Hi gAr; In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Is this cool with you? "Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha? "Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." Re: Is this cool with you? What is the probability if the highest 3 numbers are (9,8,3)? A says) 352/17917575 B says) Very good! C says) Why, that is wrong. D says) Hah! For the first time A and B agree and they are both wrong! E says) I failed to get a solution that agreed with my simulations. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Is this cool with you? Does it ask the probability of three highest numbers being (9,8,3)? When sorted, it must be * * * * * 3 8 9 ? "Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha? "Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." Re: Is this cool with you? In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Is this cool with you? "Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha? "Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." Re: Is this cool with you? Hi gAr; In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Is this cool with you? "Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha? "Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." Re: Is this cool with you? Hi gAr; In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=286503","timestamp":"2014-04-18T06:01:47Z","content_type":null,"content_length":"42574","record_id":"<urn:uuid:d607c3b6-3e8a-4dd6-b1c4-6986c1bcb603>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00357-ip-10-147-4-33.ec2.internal.warc.gz"}
Possible Implications of String Theory String theory leads to some amazing (and controversial) implications. Although string theory is fascinating in its own right, what may prove to be even more intriguing are the possibilities that result from it. • Parallel universes: Some interpretations of string theory predict that our universe is not the only one. In fact, in the most extreme versions of the theory, an infinite number of other universes exist, some of which contain exact duplicates of our own universe. As wild as this theory is, it’s predicted by current research studying the very nature of the cosmos itself. In fact, parallel universes aren’t just predicted by string theory — one view of quantum physics has suggested the theoretical existence of a certain type of parallel universe for more than half a century. • Wormholes: Einstein’s theory of relativity predicts warped space called a wormhole (also called an Einstein-Rosen bridge). In this case, two distant regions of space are connected by a shorter wormhole, which gives a shortcut between those two distant regions, as shown in this figure. String theory allows for the possibility that wormholes extend not only between distant regions of our own universe, but also between distant regions of parallel universes. Perhaps universes that have different physical laws could even be connected by wormholes. In fact, it’s not clear whether wormholes will exist within string theory at all. As a quantum gravity theory, it’s possible that the general relativity solutions that give rise to potential wormholes might go away. • The universe as a hologram: In the mid-1990s, two physicists came up with an idea called the holographic principle. In this theory, if you have a volume of space, you can take all the information contained in that space and show that it corresponds to information “written” on the surface of the space. As odd as it seems, this holographic principle may be key in resolving a major mystery of black holes that has existed for more than 20 years! Many physicists believe that the holographic principle will be one of the fundamental physical principles that will allow insights into a greater understanding of string theory. • String theory and time travel: Some physicists believe that string theory may allow for multiple dimensions of time (by no means the dominant view). As our understanding of time grows with string theory, it’s possible that scientists may discover new means of traveling through the time dimension or show that such theoretical possibilities are, in fact, impossible, as most physicists • String theory and the big bang: String theory is being applied to cosmology, which means that it may give us insights into the formation of the universe. The exact implications are still being explored, but some believe that string theory supports the current cosmological model of inflation, while others believe it allows for entirely universal creation scenarios. Inflation theory predicts that, very shortly after the original big bang, the universe began to undergo a period of rapid, exponential inflation. This theory, which applies principles of particle physics to the early universe as a whole, is seen by many as the only way to explain some properties of the early universe. In string theory, there also exists a possible alternate model to our current big bang model in which two branes collided together and our universe is the result. In this model, called the ekpyrotic universe, the universe goes through cycles of creation and destruction, over and over. • The end of the universe: The ultimate fate of the universe is a question that physics has long explored, and a final version of string theory may help us ultimately determine the matter density and cosmological constant of the universe. By determining these values, cosmologists will be able to determine whether our universe will ultimately contract in upon itself, ending in a big crunch — and perhaps start all over again.
{"url":"http://www.dummies.com/how-to/content/possible-implications-of-string-theory.html","timestamp":"2014-04-21T06:35:03Z","content_type":null,"content_length":"56855","record_id":"<urn:uuid:524abbe4-af74-402b-bbfb-fcba60c5f719>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00438-ip-10-147-4-33.ec2.internal.warc.gz"}
188 helpers are online right now 75% of questions are answered within 5 minutes. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/users/erosem/answered","timestamp":"2014-04-20T11:05:58Z","content_type":null,"content_length":"104761","record_id":"<urn:uuid:9dbcc03b-0714-40c0-8142-06fddb10676e>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00510-ip-10-147-4-33.ec2.internal.warc.gz"}
Apparatus and method for converting a number in binary format to a decimal format - Honeywell Information Systems Inc. The following patent applications, which are assigned to the same assignee as the instant application, have related subject matter and are incorporated herein by reference. Certain portions of the system and processes herein disclosed are not our invention, but are the invention of the below-named inventors as defined by the claims in the following patent application. Ser. No.: 537,910 Filing Date: Sept. 30, 1983 Title: Apparatus For Performing Simplified Decimal Multiplication By Stripping Leading Zeroes Inventors: John J. Bradley, Brian L. Stoffers, Theodore R. Staplin J., Melinda A. Widen Ser. No.: 537,928 Filing Date: Sept. 30, 1983 Title: A Nibble and Word Addressable Memory Arrangement Inventors: Melinda A. Widen, John J. Bradley, George M. O'Har Ser. No.: 537,992 Filing Date: Sept. 30, 1983 Title: An Illegal Decimal Digit Detection Apparatus For Supporting Decimal Arithmetic Operations Inventors: Thomas C. O'Brien, Melinda A. widen, Theodore R. Staplin, Jr., Ming T. Miu Ser. No.: 537,899 Filing Date: Sept. 30, 1983 Title: Decimal Arithmetic Logic Unit For Doubling Or Complementing Decimal Operands Inventors: Theodore R. Staplin, Jr., John J. Bradley, Brian L. Stoffers Ser. No.: 537,751 Filing Date: Sept. 30, 1983 Title: An Equal Nine Apparatus For Supporting Absolute Value Subtracts On Decimal Operands of Unequal Length Inventor: Brian L. Stoffers Ser. No.: 537,991 Filing Date: Sept. 30, 1983 Title: An Arithmetic Logic Unit With Outputs Indicating Invalid Computation Results Caused By Invalid Operands Inventors: John J. Bradley, Thomas C. O'Brien, George M. O'Har, Ming T. Miu, Theodore R. Staplin, Jr., Brian L. Stoffers, Melinda A. Widen 1. Field of the Invention This invention relates to data processing systems and more specifically to a data processing system which provides for execution of decimal numeric software instructions. 2. Description of the Prior Art There are primarily two different methods employed within modern data processing systems for representing numeric data. It can be represented in a binary format in which each bit within a word is given a weight of 2 raised to a power such that the least significant bit, when a binary ONE, represents 2 to the zeroth power, the next more significant bit, when a binary ONE, represents 2 to the first power and so on. Negative numbers in a binary format may be indicated by a sign bit at either the beginning or end of the number or by performing a two's complement on the number. The other method of representing numbers is to represent them in some type of decimal format. The decimal format commonly used is to have each decimal digit represented by four or more bits with the bits being binary encoded to represent the values of 0 to 9 decimal. Representing numbers in the decimal format has two disadvantages. First, as the number gets larger, more bits are required to represent a number in a decimal format than in a binary format. Second, performing decimal arithmetic operations is more complex and generally slower than binary operations because there are discontinuities at the boundaries between decimal digits which are not present between binary digits. Numerous techniques for performing decimal arithmetic operations in data processing systems are known in the prior art. Some techniques are described in the book entitled, Digital Computer Design Fundamentals, by Y. Chu, published by McGraw-Hill Book Company Inc., 1962 which is incorporated herein by reference. These techniques generally require that individual decimal digits be manipulated one digit at a time at some point in performing a decimal arithmetic operation. Therefore, in order to speed up decimal operations, what is needed are methods to efficiently manipulate individual decimal digits within a data processing system and methods which reduce the number of digits which must be manipulated during any arithmetic operation. Accordingly, it is the primary object of this invention to reduce the amount of circuitry and processing time required to convert a number in binary format to decimal format. This invention is pointed out with particularity in the appended claims. An understanding of the above and further objects and advantages of this invention can be obtained by referring to the following description taken in conjunction with the drawings. In accordance with the teaching of the present invention a method and apparatus are taught to speed conversion of a number in binary format to decimal format. First, to decrease the number of steps required in the conversion process all leading zeroes before the highest order non-zero bit of the binary number are removed. To speed conversion processing time, multiple steps in the process to add a partial sum to itself in an adder are minimized by using a multiplexer to apply the partial sum stored in a second memory to first and second inputs of an adder, rather than writing the same partial sum number to a first memory to be read out and applied to the first adder input while the partial sum is read out of the second memory and applied to the second adder input. In addition, the amount of circuitry and processing time required for conversion are further minimized by allocating only sufficient memory space to store the resultant decimal number from the conversion. The manner in which the method of the present invention is performed and the manner in which the apparatus of the present invention is constructed and its mode of operation can best be understood in light of the following detailed description together with the accompanying drawings in which like reference numbers identify like elements in the several figures and in which: FIG. 1 is a general block diagram of a data processing system utilizing the present invention; FIG. 2 is a block diagram of the data manipulation area of the microprocessor of FIG. 1. FIG. 3 is a block diagram of a portion of the control area of the microprocessor of FIG. 1; FIG. 4 is a block diagram of the commercial instruction logic of FIG. 1 which carries out the principles of the present invention; FIG. 5 illustrates the firmware microinstruction word fields of the microprocessor of FIG. 1 and the commercial instruction logic of FIG. 4; FIG. 6 illustrates the clock phase relationship of one microinstruction cycle of the microprocessor of FIG. 1; FIGS. 7A and 7B illustrate the position of 8-bit bytes and 4-bit nibbles in a 16-bit word of a data processing system utilizing the invention; FIG. 8A illustrates the format of a single or double operand basic software instruction processed by the central processing unit of FIG. 1; FIG. 8B illustrates the format of a commercial software branch instruction processed by the central processing unit of FIG. 1; FIGS. 8C-1 through 8C-3 illustrate the format of commercial software numeric, alphanumeric and edit instructions with in-line data descriptors, remote data descriptors and a combination of in-line and remote data descriptors; FIG. 9 illustrates the format of the data descriptors used by the commercial software instructions of the central processing unit/of FIG. 1; FIGS. 10A through 10D are logic block diagrams of circuitry utilized in the commercial instruction logic of FIG. 4 in accordance with the present invention; FIG. 11 is a flow chart of the method used by the central processing unit of FIG. 1 to perform decimal addition and subtraction commercial software instructions; FIG. 12A is a flow chart of a prior art method of performing a decimal multiply; FIG. 12B is a flow chart of the method used by the central processing unit of FIG. 1 to perform a decimal multiply; FIG. 13 is a more detailed flow chart of the method shown in FIG. 12B which is used by the central processing unit of FIG. 1 to perform a decimal multiply commercial software instruction; FIG. 14A is a flow chart of a prior art method of performing a decimal divide; FIG. 14B is a flow chart of the method used by the central processing unit of FIG. 1 to perform a decimal divide; FIG. 15 is a more detailed flow chart of the method shown in FIG. 14B, which is used by the central processing unit of FIG. 1 to perform a decimal divide commercial software instruction; FIG. 16 is a flow chart of the method used by the central processing unit of FIG. 1 to perform a commercial software instruction which converts a number in a binary format to a decimal format; and FIG. 17 is a flow chart of the method used by the central processing unit of FIG. 1 to perform commercial software instructions which converts a number in a decimal format to a binary format. The implementation of the embodiment illustrated in the drawings is effected with a given arrangement of circuitry. However, it is understood that other logic arrangements may be employed in carrying out the invention to adapt the invention to various types of data processors. Accordingly, it is not intended to be limited to the specific schemes shown in the drawings. Referring now to the drawings, FIG. 1 illustrates the overall data processing system in which the present invention may be used. More particularly, FIG. 1 shows a main memory subsystem 10, a central processing unit (CPU) 20 and an input/output (I/O) unit 40. The main memory subsystem 10 consists of three metal oxide semiconductor modules 12, 14 and 16. The three modules are interfaced to the central processor unit 20 and the input/output unit 40 via main bus 26. The main bus 26 gives the capability of providing access to and control of all memory modules and input/output units. The central processing unit 20 executes word oriented software instructions that operate on fixed and variable length fields. The basic unit of information in the central processor is a 16-bit word consisting of two 8-bit bytes (see FIG. 7A). A 16-bit word can also be broken into four 4-bit nibbles (see FIG. 7B). These words of information are used in groups of one, two or more for basic instructions (see FIG. 8A for an example of one of the instruction formats) or fixed or floating point binary operands (data). These words are also used in groups of one or more words for commercial instructions (see FIGS. 8C-1 to 8C-3). Bytes are also used in variable length fields as decimal or alphanumeric operands (data). CPU 20 is comprised of microprocessor 30, monitor logic 22, Read Only Storage (ROS) 24 and commercial instruction logic 28. Microprocessor 30 is an NMOS, 16-bit chip capable of arithmetic, logic, and control operations, driven by a 48-bit external firmware microinstruction words which in the preferred embodiment are contained in ROS 24. The microprocessor 30 design permits the execution of the CPU 20 basic software instruction repertoire which operates on fixed and floating point binary data. Commercial instruction logic (CIL) 28 is used in conjunction with microprocessor 30 to permit the execution of the CPU 20 commercial software instruction repertoire which operates on decimal and alphanumeric data with the microprocessor 30 under the control of bits 0-47 of a 56-bit external firmware microinstruction word contained in ROS 24. As will be seen below, ROS 24 contains 2K (where 1K=1024) of 48-bit, microinstruction words which are used to execute the asic software instructions of CPU 20 and ROS 24 also contains 2K of 56-bit microinstruction words which are used to execute the commercial software instructions of CPU 20 with bits 0-47 controlling microprocessor 30 and bits 48-55 controlling CIL 28. Microprocessor 30 is designed to directly control input/output (I/O) and memory operation for ease in integrated system designs. The microprocessor 30 design permits greater control and integration by use of a 48-bit external firmware microinstruction word that provides true horizontal microprogramming allowing up to 12 simultaneous micro-operations per 48-bit microinstruction word. The microprocessor 30 design also permits 8 external hardware interrupts which generate vectors to firmware microprogram routines as well as allowing 5 external software interrupts that are handled under firmware control. In addition microprocessor 30 provides for 10 external monitor bits originated in monitor logic 22 that are sensed and controlled by sophisticated test branch and major branch operations by logic within microprocessor control area 36 which allows for sophisticated branching operations to be performed within the firmware. Four of these ten external monitor bits are set by commercial instruction logic 28 to control test branch and major branch operations of microprocessor 30 when it and commercial instruction logic 28 are used together to execute a commercial software instruction of CPU 20. Microprocessor 30 is comprised of 5 major internal hardware logic areas as shown in FIG. 1. The five major logic areas are the data manipulation area 32 which includes the arithmetic logic unit (ALU), the memory management unit (MMU 34), the control area 36, the processor bus 37, and internal bus 38. The processor bus 37 consists of 20 address/data lines, one memory address violation line and three general purpose control lines. Processor bus 37 is connected to main bus 26 and is used to provide addresses to the main memory 10 and input/output unit 40 and to receive and send data to main memory 10 and input/output unit 40. Internal bus 38 is the major path for communications of information between the other four areas of the microprocessor chip. Internal bus 38 is 20-bit wide. There are 12 sources of information to internal bus 38 under of the 11 micro-ops within the 48-bit microinstruction word. The ALU is the default so to internal bus 38 if none of the eleven defined micro-ops are used. The data manipulation area 32 performs arithmetic and logic operations on data and does memory address generation. Data manipulation area 32 is described in greater detail in reference to FIG. 2 The control area 36 of microprocessor 30 is logically divided into 3 areas: input latches for control, testable registers, and the next address generation. Control area 36 is described in greater detail in reference to FIG. 3 below. The MMU 34 section of microprocessor 30 is comprised primarily of: a register file, a 12-bit address for base relocation, a 9-bit comparator for checking the size of a memory segment, several 2-bit ring comparators for evaluating access rights to a given segment, and storage flip-flops for indicating potential memory violations. During any CPU generated memory address cycle, the MMU 34 translates the software logical address containing a segment number, a block number and an offset value presented by internal bus 38 into a physical address which is placed on processor bus 37 which in turn is transmitted to main memory 10 via main bus 26. As can be appreciated as the description so far, CPU 20 executes software programs, the instructions of which are fetched from main memory 10, and performs arithmetic and logical operations on data also contained in main memory 10. The software program executed by CPU 20 has the ability to manipulate general and base address registers that are software visible and the current software instruction is pointed to by a program counter. These general registers, base address registers and program counter, which are visible to the software being executed by CPU 20, are physically contained within the data manipulation area 32 of microprocessor 30. Detailed operation of CPU 20 of FIG. 1 is controlled by microprocessor 30 under the control of firmware microinstructions stored in ROS 24. Each location in ROS 24 can be interpreted as controlling one microprocessor machine cycle. As each location of ROS 24 is read, the contents are decoded by control area 36 resulting in a specific operation within microprocessor 30. By grouping ROS locations, firmware microinstruction sequences are obtained that can perform a specific operation or software instruction associated with CPU 20. As each software instruction is initiated, certain bits within the operation code field of the software instruction are used to determine the starting address of the firmware microinstruction routine contained within ROS 24. The testing of certain flip-flops which are set or reset by software instruction decoding done by microprocessor 30 allow the microprocessor to branch to a more specific firmware microinstruction sequence within ROS 24 when necessary. When a commercial software instruction is encountered, microprocessor 30 branches to that portion of ROS 24 which contains 56-bit microinstruction words so that bits 0-47 are used to control the operation of microprocessor 30 and bits 48-55 are used to control the operation of commercial instruction logic 28. CIL 28 is described in greater detail with reference to FIG. 4 below. Connected to main bus 26 is a input/output unit 40. The input/output controller 42 is that portion of the input/output unit 40 which completes a data path from a peripheral device 44 to main memory 10 via main bus 26. I/O controller 42 provides a path through which the peripheral commands are initiated, in addition to controlling resulting data transfers. The microprocessor 30 interfaces with other CPU 20 logic by means of interface signals. In the preferred embodiment, the interface signals are divided into four groups according to pin assignment on microprocessor 30 and the phase relationship of the clock. The microprocessor clock is the primary element of the interface and it produces a Phase A and a Phase B signal as shown in FIG. 6. The phase relationship between Phase A and Phase B determines the functions of the interface lines because there are 129 signals (excluding power, ground, and clock timing) shared among 57 input/ output pins of microprocessor 30 of the preferred embodiment. Phase A signal is used by the microprocessor 30 and the system to reverse the direction of the processor bus 37 drivers. When Phase A is high (a binary ONE), the 48 ROS data lines from read only storage 24 and the five option lines are inputs to the microprocessor 30. When Phase A is low (a binary ZERO), all other shared signals are either inputs or outputs to microprocessor 30. Phase B signal is used to latch the signals that were gated with Phase A. When Phase B is going low, the ROS data and options are latched internal to the microprocessor 30. When Phase B is going high, all other shared signals are latched internal to the microprocessor 30. There are four signal groups defined by the microprocessor 30 interface. Each group, except group 1, consists of signals that have common pin assignments and clock phase relationship shown in FIG. 6. Group 1 consists of nine interface lines that have unshared signals. They are: three voltage, two ground, and four timing signals. Group 2 consists of 23 interface lines which control 69 signals, five of which are unused. During Phase A high, the 23 lines represent 21 ROS input data bits and two input option bits. During Phase A low, the 23 lines are bidirectional representing inputs of 16 data bus bits, 3 control signals, and 4 unused signals; or outputs of 20 address/data bus bits and 3 control signals. Group 3 consists of 19 interface lines which control 38 input signals. During Phase A low, the 19 lines represent 4 software interrupt signals, 8 hardware interrupt signals, and 7 monitor bits. During Phase A high, the 19 lines represent 14 ROS input data bits, 3 input option bits, and 2 unused signals. Group 4 consists of 13 interface lines which control 26 signals. During Phase A low, the 13 lines represent 12 ROS output address bits and an error signal. During Phase A high, the 13 lines represent 13 ROS input data bits. Referring now to FIG. 2 which illustrates the data manipulation area 32 of FIG. 1 in greater detail. In FIG. 2, the number next to the upper right hand corner of the blocks represents the number of bits of information contained in the register represented by the block. The data manipulation area, as shown in FIG. 2, performs arithmetic and logical operations on data and memory address generation. It is composed of the following eight elements: P-register 80, arithmetic logic unit (ALU) 83, G-register 84, a shifting mechanism (not shown in FIG. 2), Q-register 85, indicator register (I-register) 86, register file 82, and M-register file 81. The P-register 80 is a 20-bit memory address register that contains either the program counter or a logical memory address. The P-register can be loaded from the internal bus 38, incremented or decremented, and the output transferred to the internal bus under firmware control by various microinstruction commands. The G-register 84 is 20 bits wide and is used to hold addresses for ALU operations. It can be loaded from the internal bus 38 under firmware control by a microinstruction command and the output is sent to the B-port of ALU 83. The Q-register 85 is a 16-bit register that provides operand shifts, and holds secondary operands for ALU 83. It can be loaded from the internal bus under firmware control by a microinstruction command and the output is sent to the B-port of the ALU 83. Q-register output can be sign extended to 20 bits for ALU operations. The load Q-register microinstruction command is used for loading from internal bus 38 when shifting microinstruction commands are not used. The register 82 can be loaded from the ALU 83 output or the internal bus 38. When loaded from internal bus 38, data comes direct, or with byte 0 (the left most 8 bits) swapped with byte 1 (the right most 8 bits) of a 16 bit word, or with the right most 4 bits from internal bus 83 being loaded into the left most 4 bits of a 20 bit register in register file 82. Two write lines control the loading of registers in register file 82: one loading the right most bits 0 through 15 in 16-bit or 20-bit registers, the other loading the left bits 0D through 0A in 20-bit registers. The register file 82 output feeds the A-port of the ALU 83. The register address is taken from either the ROS data register 65 (RDDT) or the information in the F-register 51 (see FIG. 3). During Phase A high time, the address for the selected register file is generated under control of the register file address field of the microinstruction (see FIG. 5). The selected register file is read out during Phase A high and its contents are latched in an output register (not shown in FIG. 2) during Phase A low. This output register is then available to two possible destinations, the A-port of ALU 83 (under control of the ALU control field of the microinstruction, RDDT bits 18 through 22) and the internal bus 38 (under control of special control field of the microinstruction, RDDT bits 35 through 41) (see FIG. 5). The selected register file is written near the end of Phase A low time. The general command for writing the register file is supplied by RDDT bit 26 (register file load). This general command can be modified by certain special control field subcommands. The register file description that follows is in terms of software visible registers of CPU 20. There are seven 16-bit software addressable data registers, R1 through R7. They can be loaded from or stored into memory on either a word or byte basis. Each register can be used as operands in arithmetic, logical, and compare operations. R0 is a 16-bit register that is used to hold a copy of the executing software instruction. There are seven software-addressable base registers, B1 through B7. They are 20 bits wide and can be used to hold main memory addresses. B0 is a 20-bit working register used when CPU 20 is in a maintenance panel mode. Six 16-bit (DW1 through DW5) and five 20-bit (AW1 through AW5) work registers are available for temporary storage of information during firmware operations. The 16-bit registers are normally used to hold data and the 20-bit registers are normally used to hold memory addresses. AW1 is normally used as the "effective address" storage element for the currently executing software instruction. The T-register is the stack address register and is 20 bits wide. System keys and processor security keys are contained in the 16-bit S-register. Field defines the CPU identification number to allow for multiprocessor systems. A 6-bit level field defines the interrupt priority on which the CPU is currently operating. Zero is the highest and 63 the lowest priority. The history register (H-register) is 20 bits wide. It contains the history of the program counter under firmware control. The remote descriptor base register (RDBR) is used by the commercial instruction logic 28. It is 20 bits wide. The I/O data register is a 16-bit data working register that can be used for temporary storage during I/O data transfer operations. ALU 83 has full 16- or 20-bit capabilities. Overflow and carry functions are generated out of both the 16th and 20th bit positions. The 16-bit capability of the ALU is normally used when handling data, while the 20-bit capability is normally reserved for address modifications and transfers. The ALU 83 has two ports for operand inputs, the A-port and B-port. The A-port can accept either a 16-bit or 20-bit register file input. In the case where the register file selected is a 16-bit wide data register, the most significant bit is sign extended to 20 bits. The A-port can also select a value of zero as input as specified by the ALU control field of the microinstruction (RDDT bits 18 through 22) (see FIG. 5). The B-port can select as its inputs: 20 bits from the G-register, or 16, 8 or 7 bits right justified from the Q-register. The most significant bit of the field selected from the Q-register port can also select a value of zero as its input. The output of the ALU can be directed to either internal bus 38 or register file 82. When directed to the register file, its path can be direct (by special control field subcommands) or shifted left or right one bit position (also under control of special control field--all register file shift subcommands). The ALU output is directed to the internal bus 38 whenever there is no microcommand called to source the internal bus 38 (it is the default source for the internal bus 38). The carry and overflow conditions for both 16- and 20-bit operations are stored in temporary flip-flops for testing using test branch microcommands during the following cycle. The microprocessor 30 has the ability to perform various shift operations (i.e., open/closed, arithmetic/logical, left/right) on either 16-bit or 32-bit operands. Sixteen-bit data shifting can -be accomplished in one of two ways. The first takes place from output of the ALU 83 into the register file 82 and the second takes place in the Q-register 85. These two operations can be concatenated to perform the 32-bit data shift operations. Three shift microcommands are used to implement the software shift instructions of CPU 20. The shift microcommands are combined with the F-register 51 (see FIG. 3) decode (the F-register contains the software instruction word for this operation) to determine the shift type, direction, and necessary filler bits. The following Table 1 shows the decoded F-register bits and the corresponding shift type. In Table 1 the abbreviation "LE." means "less than or equal to" followed by the maximum number of bit positions that can be specified to be shifted. and the abbreviation "GE." means "greater than or equal to" followed by the maximum number of bit positions that can be specified to be shifted. TABLE 1 F-Register Shift-Type Decoding F-Register Bits 8 Through 11 Shift Type 0 Single open left shift 1 Single closed left shift 2 Single arithmetic left shift 3 Double closed left shift 4 Single open right shift 5 Single closed right shift 6 Single arithmetic right shift 7 Double closed right shift 8 Double open left shift (LE. 15) 9 Double open left shift (GE. 16) A Double arithmetic left shift (LE. 15) B Double arithmetic left shift (GE. 16) C Double open right shift (LE. 15) D Double open right shift (GE. 16) E Double arithmetic right shift (LE. 15) F Double arithmetic right shift (GE. 16) F-register bits 1 through 3 contain the R-register to be shifted, and F-register bits 12 through 15 or 11 through 15 contain the number of positions to be shifted. When F-register bit 8 equals a binary ZERO, the number of positions to be shifted is determined by F-register bits 12 through 15; and when F-register bit 8 equals a binary ONE, the number of positions to be shifted is contained in F-register bits 11 through 15. A special case exists when the count field contains a value of zero. In this case, the number of positions to be shifted is contained in register file location R1 (general purpose register). When a double word is selected (i.e., F-register bits 8 through 11 equals a hexadecimal 3, 7, 8, 9, A, B, C, D, E, or F), then F-register bits 1 through 3 must equal a 3, 5, or 7. This is necessary because it requires a combination of an implied even-numbered register and an explicitly addressed odd-numbered register to perform a double-word shift operation. When register R3 is explicitly addressed, register R2 is the implied addressed register. The even-numbered register contains the most significant bits of the double word. The I-register is eight bits wide, containing various single bit indicators in the following format: Bit 0--Overflow indicator (OV): It is set when any of the data registers R1 through R7 overflow; e.g., when a 16-bit arithmetic result produced is larger than the capacity of the register (under op-code control, out of ALU). Bit 1--Always a binary ZERO. Bit 2--Carry indicator (C): This is set when the logical capacity of a register is exceeded. The carry indicator is generated from the ALU. Bit 3--Bit test indicator (B): This bit gives the state of the last bit tested (primarily for bit test operations). Bit 4--Input/output indicator (I): It indicates whether the last I/O command was accepted by the I/O controller. Bit 5--Greater than (G) indicator. Bit 6--Less than (L) indicator. Bit 7--Unlike sign (U) indicator. The G, L and U indicators are controlled by microcommands during the compare instructions and contain the result of the last compare operation executed. Typically, the comparison involves a register and a word from memory. The indicators show whether the register contents are greater than or less than the memory word. The I-register can also be loaded from the right byte of internal bus 38, and its output goes to the right byte of internal bus with the left byte of the internal bus being set to binary ZERO bits. The microprocessor 30 has a mode register (M-register) file 81 that has eight registers, each eight bits wide (8 by 8). This file 81 can be loaded from the internal bus 38 using a microcommand or sourced to the internal bus 38 using a second microcommand. When using either of these microcommands, the M-register address is supplied by the least significant three bits of the register file address register. When neither of these microcommands are called, the M-register address defaults to location 1 (M1 register). M-register bits 1 through 7 of any M-register are testable, with the register file address field of the microinstruction (see FIG. 5) selecting the desired M-register, and the desired bit within the register being selected by a code of 1 through 7 in F-register bits (0 through 3). This causes the setting of a temporary flip-flop when a binary ONE is detected. Bit 0 of the selected M-register is sampled in each cycle by another temporary flip-flop. Testing of these temporary flip-flops is accomplished on the following cycle by using two subcommands. The various mode registers (M0-M7) are settable by software instructions so that a software program can specify how certain conditions are to be handled when they arise during the program's execution. For example, setting bit 0 in M1 to a binary ONE indicates that all software branches and jumps are to cause a trap to an entry of a software routine that will trace the program's execution. Bits 1 to 7 in M1 are used to control the trapping if an overflow occurs in the corresponding data register R1 to R7 during arithmetic operations. Bit 0 of M3 controls the trapping of overflow when a commercial software instruction is executed and for bit 1 of M3 control trapping if truncation occurs during the execution of a commercial software instruction. Referring now to FIG. 3 which illustrates the control area 36 of FIG. 1 in greater detail. Control area 36 contains additional logic and circuitry, but for the purposes of the invention, the logic has been limited to that shown in FIG. 3 which is primarily concerned with addressing read only storage 24. In particular, control are 36 also includes the microinstruction decode logic (not shown in FIG. 3) which controls the enabling and gating of the logic elements and registers of microprocessor 30. FIG. 3 also illustrates internal bus 38, monitor logic 22 and read only storage (ROS) 24. In FIG. 3, the number next to the upper right hand corner of the blocks represents the number of bits of information contained in the register represented by the block. ROS 24 may be a read only memory (ROM) or a random access memory (RAM) or any other form of memory device capable of holding firmware microinstructions. The ROS 24 contains the firmware microinstructions (or control words) which are used by microprocessor 30 and commercial instruction logic 28 to control the operation of central processing unit and more particularly to execute the software instructions of CPU 20. For each microprocessor machine cycle, a control word is fetched out of ROS 24. ROS 24 is coupled to ROS data register 65 which receives bits 0-47 of the microinstruction word fetched from read only storage 24. Each microinstruction contains an address portion and a command portion. The address portion in the microinstruction word identifies the address of the next location to be read from read only storage 24 which will be the next microinstruction to be executed by microprocessor 30. The command portion of the microinstruction identifies the operations to be performed by the microprocessor 30 and commercial instruction logic 28 during the execution of the current microinstruction. The address portion of the microinstruction word may be contained in a predetermined number of bits, for example, in the preferred embodiment it is contained in bits 0 through 12 of the microinstruction word (see FIG. 5). The command portion of the microinstruction may also be contained in a predetermined number of bits, for example, in the preferred embodiment it is contained in bits 13 through 47 which control the operation of microprocessor 30 and in bits 48 through 55 which, when present, along with bits 35 through 47 control the operation of commercial instruction logic 28 (see FIG. 5). The command portion may be further broken down into a number of fields which comprise subcommands of the microinstruction. Before describing the microinstruction word in greater detail with respect to FIG. 5, the other elements of FIG. 3 will be described. Monitor logic 22 provides status information with respect to CPU 20 and is loaded into test flip-flops 50 such that the status may be tested by the firmware. In addition to holding ten bits of dynamic status information from monitor logic 22, test flip-flop 50 hold five bits which sample the status of various CPU options. The CPU option bits should be thought of as static in nature and indicate whether or not a specific hardware option is present or not within the data processing system. For example, one CPU option bit indicates whether or not the commercial instruction logic 28 is present in the system. In addition, test flip-flop 50 contains four control flip-flops (CF1-CF4) which are available to be set or reset or to have a bit transferred under control of the firmware. These four control flip-flops are testable by the firmware. There are also ten temporary flip-flops in flip-flops 50 which are loaded during each firmware cycle with dynamic information such as whether there has been a carry or overflow from bit 16 of the ALU or a carry or overflow from bit 20 of the ALU or whether certain bits on the internal bus 38 are equal to 0, etc. These ten temporary flip-flops are also testable by firmware. The F-register 51 is a 16-bit instruction register that is loaded form internal bus 38. All bits of the F-register are testable by firmware. The low-order four bits of F-register 51 also constitute the low-order four bits of the five-bit counter F-counter 52. F-counter 52 is a five-bit counter that can be loaded form internal bus 38. F-counter 52 can be incremented or decremented. The four low-order bits of F-counter 52 are also decoded such that a 16-bit mask can be placed on internal bus 38 under firmware control. There are five possible conditions that can cause a software interrupt. These conditions are latched in software interrupt register 53. Software interrupt prinet 54 prioritizes these conditions and generates a vectored address for input into major branch logic 57. The next address generation section 55 of control area 36 contains the logic ecessary for sequencing the read only storage (ROS) 24. Test branch logic 56 is used to test 64 test conditions which can result in a 2-way branch address for ROS address register 63. These 64 test conditions are testable under firmware control and with the output of the test branch logic 56 being one bit of information into address multiplexer 1 60. Inputs to test branch logic 56 are provided by test flip-flops 50, F-register 51 and F-counter 52. Major branch logic 57 provides 15 major test branch matrixes. The majority of the inputs to these matrixes are from F-register 51 (in various combinations). Other inputs are from the monitor and option bits of test flip-flops 50. The output of major branch logic 57 is four bits of address information which is provided to address multiplexer 1 60. Register 58 provides the bits of information that correspond to the ten possible conditions that can cause a hardware interrupt. Hardware interrupt prinet 59 prioritizes these ten possible conditions and produces a four-bit output that is used by address multiplexer 2 62 to produce the 12-bit vectored hardware interrupt address when one of these ten possible conditions occur. The output of address multiplexer 1 60 provides the 12-bit nominal next address which will be loaded into ROS address register 63 and used to fetch the next microinstruction from ROS 24. This 12-bit address is nominal in the sense that this nominal next address will be used as the next address only if a hardware interrupt does not occur. A hardware interrupt will not occur if no hardware interrupts are pending or if pending hardware interrupts are inhibited by the setting of the interrupt inhibit bit within the microinstruction word (see bit 34 in FIG. 5). Address multiplexer 2 62 is used to select between the 12-bit nominal next address generated by multiplexer 1 60 and the vectored hardware interrupt address that is produced by combining the 4-bits from hardware interrupt prinet 59 with 8 leading 0 bits. The output of address multiplexer 2 62 is the 12-bit next address which is loaded into ROS address register (RAR) 63. The output of RAR 63 is used to provide the address of the next microinstruction to be fetched from ROS 24. The output of RAR 63 is also input to ROS address history register 66. ROS address history register 66 is provided so that early in the execution of the current microinstruction contained in ROS data register 65, while the next microinstruction address is being developed and transferred to ROS address register 63, ROS address history register holds the address of the current microinstruction. This current microinstruction address is used in developing the next microinstruction address if the current microinstruction calls for its use. The current address from ROS address history register 66 is also used after being incremented by incrementer 64 as the return address from microsubroutines and hardware interrupt service routines. Incrementer 64 increments by a predetermined number (e.g., by 1 in the preferred embodiment) the address contained in RAR address history register 66. Incrementer 64 is a 12-bit incrementer which will be used to source the return address stack 70 via return multiplexer 61 during a PUSH microcommand. The output of incrementar 64 is also used to provide the next ROS address value to RAR 63 for INC and INCK microcommands via address multiplexer 1 60 and address multiplexer 2 62. The INC microcommand specifies that the next ROS address is to be the current ROS address incremented by one and the INCK microcommand specifies that the next ROS address value is to be the current address value plus 1 and in addition a constant, as specified in other unused address field bits within the microinstruction, is to be placed on internal bus 38. Return address stack 70 is a 4 by 12-bit last in first out (LIFO) array used for storing the return addresses of subroutines and hardware interrupts. Return address stack 70 is initialized to hexadecimal value of 001 during clear time and its bottom location is set to 001 (hexadecimal) during each POP (return) microcommand. A PUSH microcommand causes the top of return address stack 70 to be sourced by the output of incrementer 64. A hardware interrupt causes the top of stack 70 to be sourced by the output of address multiplexer 1 60, which is the nominal next address. Incrementer 64 transfers to return address stack 70 the incremented address history from ROS address history register 66 when one subfield of the next command portion of the ROS data register 65 specifies a PUSH microcommand. This PUSH microcommand enables the storing of the return address of the microprogram microinstruction that is being executed while branching to a microprogram subroutine. In response to the PUSH microcommand, incrementer 64 provides that the incremented current ROS address from ROS address history register 66 to return address stack 70, which comprises a plurality of registers 71 through 74. Functionally, return address stack 70 is a push down storage device which comprises a plurality of work registers arrayed in a column. The only output from the stack is from top register 71 which is connected to address multiplexer 1 60. The only inputs return address stack 70 are from the top and bottom. When an address is pushed onto stack 70, it goes into register 71 after the other addresses already in the stack are pushed down the column one register. As an address is removed from the column (popped up), it is provided by top register 71 to address multiplexer 1 60 and each address stored in return address stack 70 moves up one hardware register in the column. During this pop operation, the bottom register 74, which is vacated, is loaded with the address 001 (hexadecimal). The stack can be visualized as a deck of cards, wherein access to the cards of the deck is only possible by adding or removing cards one at a time to or from the top of the deck and wherein a predetermined card (hexadecimal value 001) is added to the bottom of the deck as each card is removed from the top of the deck. Return address stack 70 thus stores the incremented current address as is provided from ROS address history register 66 when the executing microprogram branches to a subroutine. In addition, return address stack 70 stores the nominal next address output by address multiplexer 1 60 whenever a hardware interrupt occurs which vectors the execution of the firmware to a predetermined location within ROS 24 as determined by the particular hardware interrupt that has occurred. These addresses stored in return address stack 70 point to the next step of the microprogram which would have been executed except for the occurrence of a branch to a microprogram subroutine or a branch to a microprogram interrupt handling routine. Since these addresses will be stored when a branch to a microprogram subroutine occurs, or when a hardware interrupt occurs, the addresses in return address stack 70 will, upon the execution of the last microinstruction in a subroutine or hardware interrupt handling routine return the microprogram to the proper sequence. FIG. 5 illustrates the firmware microinstruction word fields of microprocessor 30 of the preferred embodiment. This microinstruction word is comprised of 56 bits (bits 0-55). Bits 0-47 which control microprocessor 30 will be discussed now with reference to FIG. 3 and bits 48-55 which control commercial instruction 28 will be discussed later with reference to FIG. 4. Bits 0 through 12 are used as the ROS address field, bits 13 through 17 are used to select registers in the register file, bits 18 through 22 are used to control the arithmetic and logic functions of the ALU and the inputs to its ports, bits 23 through 25 are used as bus control, bits 26 through 30 are used as a register modification field, bits 31 through 33 are used as memory management unit control, bit 34 is used to inhibit the occurrence of a hardwara interrupt and bits 35 through 47 are used as a special control field. The special control field (RDDT bits 35 through 47) is used to modify as well as supplement certain of the other fields in the microinstruction firmware word. The special control field provides up to three simultaneous microcommands during a given microcycle. The special control field is divided into 4 subfields (A through D) as illustrated in FIG. 5. With the interpretation of some of the subfields dependent upon the contents of other subfields. The 48 bits of the microinstructions are loaded into the ROS data register 65 at the beginning of the execution of the microinstruction. These 48 bits are referred to as signals RDDT00 through The ROS address field contains 13 bits (RDDT00 through RDDT12) and is used to generate the address of the next firmware step in a given microprogram sequence. The method for generating this next address is defined by the first five bits of the ROS address field as shown below in Table 2. TABLE 2 ROS Address Field Microoperations RDDT Bits 0 1 2 3 4 Operation 1 X X X X Jump 0 1 X X X Test Branch 0 0 1 X X Major Branch 0 0 0 1 X Increment With Constant 0 0 0 0 1 Increment Without Constant 0 0 0 0 0 Return (POP microcommand) A PUSH microcommand can be used in conjunction with any of the first five operations listed in Table 2. The PUSH microcommand, when used in combination with a jump or branch microcommand, allows the microprogrammer to store away into return address stack 70 a return address to which the microprogrammer will wish to return upon completion of the subroutine which was branched to. To facilitate the storing away of this return address by the person writing a microprogram, the PUSH microcommand pushes the contents of the ROS history address register 66 incremented by 1 by incrementer 64 onto the top of return address stack 70. The return (POP) microcommand is then used by the microprogrammer as the last firmware step of the called subroutine to return to the first location after the microinstruction which called the microprogram subroutine. One exception to the next address generation being defined by the six operations described in Table 2 is that of a hardware interrupt. When a hardware interrupt is initiated, the next ROS address will be provided as a hardware vector and the ROS address generated by the ROS address field of the present firmware word will be placed on the top of the return address stack 70 by the output of address multiplexer 1 60 being selected as the output of return multiplexer 61 and pushed onto return address stack 70. If a PUSH microcommand (as specified by special control field C in bits RDDT42 through RDDT44 in conjunction with a special coding of subfields A and B) is used in conjunction with one of the first five operations shown in Table 2, then the microprogrammer must also inhibit hardware interrupts by specifying that RDDT bit 34 is a binary ONE in order to prevent a conflicting push operation being performed as the result of the occurrence of a hardware interrupt. Since a hardware interrupt can occur (normally by the occurrence of an external asynchronous event) at any time prior to the completion of a given firmware microprogram sequence, special consideration must be given to allow for the occurrence of a hardware interrupt. The branching capabilities of the six operations defined in Table 2 are referred to as page branching and bank branching. A page is defined as 64 consecutive memory locations within ROS 24 and a bank is defined as 1024 memory locations within ROS 24 (16 Pages). The branch boundaries for the test branch operation is restricted to any location within a page. The branch boundaries for the major branch are restricted to any location within a bank. The remaining four operations of Table 2 are capable of branching or incrementing from one bank to another. The jump operation is the only next address generation method of Table 2 that allows a branch to any of the possible 4096 locations of ROS 24. Ihis is accomplished by providing, within the ROS address field, a 12-bit direct address of the next firmware microinstruction to be executed. When RDDT bit 0 equals a binary ONE, RDDT bits 1 through 12 of the present ROS data word (firmware microinstruction) as contained in ROS data register 65 are delivered directly to ROS address register 63 via address multiplexer 1 60 and address multiplexer 2 62 as the next address in the firmware microprogram, assuming of course, no intervening hardware interrupt occurs. Should a hardware interrupt occur, this nominal next address would be pushed onto the top of return address stack 70 and the generated hardware interrupt vector address, as generated by concatenating eight high order binary ZERO bits with the four bit output of hardware interrupt prinet 59, as output by address multiplexer 2 62 will be loaded into ROS address register 63 as the next ROS address. A PUSH microcommand (as defined by special control field C) can be used along with the jump operation. Hardware interrupt inhibit bit RDDT34 must be set to a binary ONE to inhibit hardware interrupts when the PUSH microcommand is used in conjunction with a jump operation. During a PUSH microcommand, the current ROS address held in ROS address history register 66 is incremented by 1 and pushed onto the top of return address stack 70 by return multiplexer 61 before the next address as specified in RDDT bits 1 through 12 is loaded into ROS address register 63. The test branch operation of Table 2 is a two-way branch using the result of one of 64 test conditions specified as part of the ROS address field in bits 3 through 8. All test branches are restricted to branching within the current page, that is, the next ROS address generated as a result of the test will always be one of two locations (depending upon the results of the test: i.e., true or false) eight locations apart but within (64 locations) currently being address by ROS address register 63. That is, depending upon the results of the test branch, the next microinstruction will be fetched from the location determined by taking the six high-order bits from the current ROS address (from ROS address history register 66) concatenated with the six low-order bits which come from bits 2, 9, 10, 11 and 12 of the ROS address field which are used for bits positions 7, 8, 10, 11 and 12 respectively within the nominal next ROS address and with the result of the test (1 or 0 corresponding to true or false respectively) being used as bit position 9 within the nominal next ROS address. This nominal next ROS address composed of the bits as described above, generated as the result of the test branch as specified in bits 0 through 13 in the current firmware microinstruction word, is the next ROS address assuming no intervening hardware interrupt occurs. Should a hardware interrupt occur, this address is placed on the top of return address stack 70 and the generated hardware interrupt vector address replaces it as the contents of the next ROS address in ROS address register 63. The PUSH microcommand can also be used along with test branch operations. As indicated above, hardware interrupts must be inhibited (by setting RDDT bit 34 equal to a binary ONE) if the PUSH microcommand is used. If the PUSH microcommand is used in conjunction with a test branch operation, the current ROS address, which is the address of the test branch microinstruction (from ROS address history register 66) incremented by 1, will be placed on the top of return address stack 70 and the microprogram will branch to the nominal next address as determined by the output of test branch logic 56. The major branch operation is a 16-way branch using the results of 15 test groups specified as part of the ROS address field in bits 5 through 8. All major branches are restricted to branching within the current bank (1024 locations). That is, the nominal next ROS address generated as a result of the major branch test will always be in one of 16 locations (depending on the output of the major branch matrix) 16 locations apart but within the bank (1024 locations) currently being addressed by ROS address history register 66. The nominal next ROS address is generated by taking bit 0 and 1 from the current ROS address and using them as bit 0 and 1 in the nominal next ROS address and taking bits 3, 4, 9, 10, 11 and 12 from the ROS address field and using them as bits 2, 3, 8, 9, 10 and 11 in the nominal next ROS address respectively. In addition, bits 4 through 7 in the nominal next ROS address are determined by the four-bit output by major branch logic 57. The nominal next ROS address as described above is generated as the result of the major branch operation specified in bit 0 through 12 of the current firmware microinstruction word contained in ROS data register 65 is the next ROS address assuming that no intervening hardware interrupt occurs. Should a hardware interrupt occur, this newly generated nominal next ROS address will be placed on top of the return address stack 70 and the generated hardware interrupt vector address replaces it as the next ROS address in RAR 63. As in the case of test branch operation, the PUSH microcommand can be used along with major branch operations. Again, as indicated above, hardware interrupts must be inhibited by setting bit RDDT34 to a binary ONE. If a PUSH microcommand is used in conjunction with a major branch operation, the current ROS address which is the address of the major branch microinstruction itself (from ROS address history register 66) plus 1 will be placed on top of the return address stack 70 and the microprogram will branch to the nominal next address as determined by the output of major branch logic The incrementing with a constant operation (INCK microcommand) as specified in ROS address field (bit 0 through 3 of the 13-bit field) of the current microinstruction causes the current value of the ROS address history register 66 incremented by 1 to be placed in the ROS address register for the next microcycle. In addition to this next address generation, the remaining 9 bits (RDOT bits 4 through 12) are used to generate an 8-bit constant plus a filler to the 20-bit wide internal bus 38 during the current microcycle. The current ROS address contained in ROS address history register 66 is incremented by incrementer 64 and the result is returned to the ROS address register 63 via address multiplexer 1 60 and address multiplexer 2 62 when an increment with constant microcommand is specified in the ROS address field of a microinstruction. Should a hardware interrupt occur, this newly generated next address will be placed on the top of return address stack 70 via return multiplexer 61 and the hardware generated interrupt vector address will be placed in ROS address register 63 to be used as the next ROS address. The PUSH subcommand can be used along with the increment operations. As with all PUSH subcommands, the hardware interrupts must be inhibited by setting RDDT bit 34 to a binary ONE when using a PUSH subcommand in conjunction with an increment operation. If a PUSH microcommand is used in conjunction with an increment operation, the current ROS address incremented by one will be placed on the top of return address stack 70, in addition to becoming the next ROS address. The increment operation (INC) subcommand as specified in the ROS address field of the microinstruction initiates the same operation as described above for the increment with constant (INCK microcommand) operation except that no constant is generated onto internal bus 38. The return operation (POP microcommand) causes the contents of the top of return address stack 70 to be loaded into ROS address register 63 via address multiplexer 1 60 and address multiplexer 2 62 to be used as the ROS address for the next microcycle. In addition, a ROS address of 001 (hexadecimal) is loaded into the bottom of the stack into register 74 each time return address stack 70 is popped one location. This loading of the bottom of return address stack 70 with the ROS address of 1 is used to detect the case of overpopping of the stack. The overpopping of the stack will result in the microprocessor being vectored to microprogram error sequence which begins at ROS location 1. Because a return operation (POP microcommand) is fullY specified by bit 0 through 4 of the ROS address field, bits 5 through 12 of the ROS address field are unused as part of the return operation. When a return operation is specified in bit 0 through 4 of the ROS address field, ROS address register 63 receives the contents of the top of return address stack assuming no intervening hardware interrupt occurs. Should a hardware interrupt occur, the return (or pop) operation will effectively be bypassed or cancelled. This cancelling of the pop stack operation when a hardware interrupt occurs during a return operation is the logical equivalent of the popping the return address from the top of return address stack 70 and immediately, within the same microcycle, pushing it back onto the top of return address stack 70. It is this cancelling or bypassing of popping return address stack 70 when a hardware interrupt occurs during a return operation that allows the return address stack to be used to contain the return addresses for both microprogram subroutine calls and for hardware interrupts. By having the push operation onto the stack associated with storing the return address for the hardware interrupt routine cancel the pop operation performed on the stack when returning from a microprogram subroutine or upon completion of a hardware interrupt service routine, the return address stack 70 does not have to be able to simultaneously move in opposite directions or to first pop up and then push down during one microcycle. This cancelling of the stack pop operation associated with a return operation by the occurrence of the push operation associated with the occurrence of a hardware interrupt does not adversely affect the flow of control within the microprogram because the hardware interrupt routine will perform a return operation as the last step in its microprogrammed interrupt service routine. The PUSH microcommand which is coded in the special control field (RDDT bits 35 through 47) must not be used in the same microinstruction with a return operation which is coded within the ROS address field (RDDT bit 0 through 12) because the results within microprocessor 30 in the preferred embodiment are unspecified. As described above, a hardware interrupt forces a branch to a fixed ROS address. This ROS address is determined by a priority network (prinet 59) which has various error signals and interrupt requests as inputs from register 58. Hardware interrupts cause the next firmware generated ROS address to be pushed onto the top of return address stack 70. If the next ROS address was to have been generated from the return address stack via a return microcommand, popping of the return address stack 70 is inhibited. Hardware interrupts must be inhibited whenever a PUSH micro-operation is performed in order to prevent the requirement to doubly push the stack, the first push being associated with the PUSH microcommand itself and the second push being associated with the saving of the return address for the hardware interrupt. When the hardware interrupt inhibit field (RDDT bit 34) is a binary ONE, hardware interrupts are inhibited, non-error condition hardware interrupts (such as those associated with memory refresh and data request) are inhibited (prevented from intervening between the execution of the current microinstruction and the execution of the next microinstruction). Hardware error condition inputs to prinet 59 (such as system clear, an attempt to access a nonexistent resource, an access violation, or a memory parity error) are not under the control of RDDT bit 34 and can intervene between any two given microcycles. The fact that hardware interrupts associated with these error conditions can not be inhibited and therefore could occur during a microinstruction which contains a PUSH microcommand does not cause a problem, because the hardware interrupt service routines associated with these noninhibitable error conditions does not do a return operation at the end and therefore do not depend upon the contents of return address stack 70 to be valid. When RDDT bit 34 is a binary ZERO, all hardware interrupts are allowed. Referring now to FIG. 4 which illustrates the instruction logic 28, of FIG. 1 in greater detail. FIG. 4 also illustrates microprocessor 30, read-only storage 24 and monitor logic 22. In FIG. 4, the number next to the upper right-hand corner of the blocks represents the number of bits of information contained in the register represented by the block and the numbers within parentheses next to signal lines represents the number of parallel signals transmitted over the signal path. As indicated above, commercial instruction logic 28 is used in conjunction with microprocessor 30 to perform the commercial software instructions of CPU 20 which do decimal arithmetic operations on decimal data and editing operations on alphanumeric data strings. Commercial instruction logic 28 consists primarily of random access memory (RAM) 1 81, RAM 2 96 and decimal adder/subtractor PROM 84, all of which operate under the control of CIL control area 100. As will be described in greater detail below, CIL control area 100 is used to decode the bits within the firmware microinstruction word which control commercial instruction logic 28. In particular, CIL control area receives bits 35 through 47 of the microinstruction firmware word shown as the special control field in FIG. 5, which are also used to control the operation of microprocessor 30 and in addition it receives bits 48 through 55 which are dedicated to the control of commercial instruction logic 28. Decoding of these microinstruction bits associated with the commercial instruction logic is performed by CIL control area 100 which produces control signals which are distributed throughout commercial instruction logic 28 to control the enabling and disabling and the selection of the various registers, gates and multiplexers. Data is transmitted between microprocessor 30 and commercial instruction logic 28 over a 16-bit wide data path which connects processor bus 37 of microprocessor 30 to transceivers 97. The output of transceivers 97 can be latched into data-in register 98 which is also 16 bits wide. As can be seen in FIG. 4, transceivers 97 can not only load data-in register 98 from the output of processor bus 37, but can also be used to transfer the output of RAM 2 data register 88 from transceivers 97 to processor bus 37. This data path from the output of RAM 2 data register 88 which is 16 bits wide, besides being connected into transceivers 97 can also be used to load data-in register 98 and thereby provide the means for loading the output of RAM 2 96 into RAM 1 81 or back into RAM 2 96. When commercial instruction logic 28 is used to perform a commercial software instruction which requires two operands, operand 1 is usually stored in RAM 1 81 and operand 2 is stored in RAM 2 96 and the output of the operation is stored back into RAM 2 96. As indicated, the arithmetic and logic unit functions of commercial instruction logic 28 are performed by decimal adder/subtractor PROM 84. PROM 84 generatas the result by using its four inputs (2 4-bit operands, 1 bit of carry-in and a 1-bit indicator indicating whether this is an add or subtract operation) to form a 10-bit address which is used to fetch an 8-bit data word which contains a 4-bit arithmetic result of the addition or subtraction and four indicators (one bit of carry-out, one bit to indicate whether one of the operands is an illegal value, a bit indicating whether the result is equal to zero, and a bit indicating whether the result is equal to nine). The coding of decimal adder/subtractor PROM 84 is shown in Table 3. Table 3 shows the encoding of the 10-bit address. The most significant bit in the 10-bit address, which has a value of 512, is used as an indicator as to whether the operation being performed is an addition or subtraction. Thus, when address bit 512 is equal to a binary ZERO, a subtraction is to be performed, and when equal to a binary ONE, an addition is to be performed. The next address bit, having a value of 256, is used to indicate whether a carry in from the previous decimal digit is to be used in calculating the result and when a binary ZERO indicates that the previous digit did not generate a carry-out and when a binary ONE indicates that the previous digit did generate a carry-out. The next four address bits, bits having a value of 128, 64, 32 and 16, are used to represent the four bits of operand 2 at the B port of decimal ALU 84 and the last four bits having values of 8, 4, 2 and 1 are used to indicate the values of operand 1 at the A port of decimal ALU 84. The 8-bit data word retrieved form PROM 1 as addressed by the 10-bit address is coded with the result as indicated in Table 3. The four lower bits of the data word (bits 3 through 0) contain the 4-bit decimal result of the addition or subtraction. The other four bits of the data word contain the four indicators which are output by decimal adder/subtractor PROM 84. The 4-bit indicators are encoded such that bit 7 of the data word (labeled "CRO" in Table 3), when a binary ZERO, indicates that there is no carry-out and, when a binary ONE, indicates that there was a carry-out. Bit 6 (labeled "ILL" in Table 3), when a binary ZERO, indicates that both operand 1 and operand 2 were legal decimal values and when a binary ONE, indicates that one or the other of operand 1 or operand 2 contained an illegal decimal value (i.e., a hexadecimal value of from A through F), bit 5 (labeled "E0" in Table 3) indicates when a binary ONE that the arithmetic result is equal to zero and when a binary ZERO that the arithmetic result is not equal to zero, and bit 4 (labeled "E9" in Table 3) when a binary ONE indicates that the arithmetic result equal nine and when a binary ZERO indicates that the arithmetic result does not equal nine. As can be seen in FIG. 4, the four indicator bits are held by decimal indicators 85 and are also an input into monitor multiplexer 80 and the carry-out bit is input into decimal adder/subtractor 84 as the carry-in bit. The 4-bit decimal arithmetic result is one input into result/zone multiplexer 91. TABLE 3 Decimal Add/Subtract PROM Encoding 8 DATA BITS 10 ADDRESS BITS 3 2 1 0 7 6 5 4 512 256 128 64 32 16 8 4 2 1 DECIMAL INDICATORS ADD CRI OP2 OP1 RESULT CRO ILL E0 E9 0 0 0 2-9 8-1 1 0 0 0 0 0 0 A-F A-F 0 1 0 0 0 0 1 3-9 8-2 1 0 0 0 0 0 1 A-F A-F 0 1 0 0 0 0 2 0-1 2-1 0 0 0 0 0 0 2 4-9 8-3 1 0 0 0 0 0 2 A-F A-F 0 1 0 0 0 0 3 0-2 3-1 0 0 0 0 0 0 3 5-9 8-4 1 0 0 0 0 0 3 A-F A-F 0 1 0 0 0 0 4 0-3 4-1 0 0 0 0 0 0 4 6-9 8-5 1 0 0 0 0 0 4 A-F A-F 0 1 0 0 0 0 5 0-4 5-1 0 0 0 0 0 0 5 7-9 8-6 1 0 0 0 0 0 5 A-F A-F 0 1 0 0 0 0 6 0-5 6-1 0 0 0 0 0 0 6 8-9 8-7 1 0 0 0 0 0 6 A-F A-F 0 1 0 0 0 0 7 0-6 7-1 0 0 0 0 0 0 7 A-F A-F 0 1 0 0 0 0 8 0-7 8-1 0 0 0 0 0 0 8 A-F A-F 0 1 0 0 0 0 9 1-8 8-1 0 0 0 0 0 0 9 A-F A-F 0 1 0 0 0 0 A 0-9 A 0 1 0 0 0 0 A A A 0 1 1 0 0 0 A B-F A 0 1 0 0 0 0 B 0-A B 0 1 0 0 0 0 B B B 0 1 1 0 0 0 B C-F B 0 1 0 0 0 0 C 0-B C 0 1 0 0 0 0 C C C 0 1 1 0 0 0 C D-F C 0 1 0 0 0 0 D 0-C D 0 1 0 0 0 0 D D D 0 1 1 0 0 0 D E-F D 0 1 0 0 0 0 E 0-D E 0 1 0 0 0 0 E E E 0 1 1 0 0 0 E F E 0 1 0 0 0 0 F 0-E F 0 1 0 0 0 0 F F F 0 1 1 0 0 1 0 1-8 8-1 1 0 0 0 0 1 0 A-F A-F 0 1 0 0 0 1 1 2-9 8-1 1 0 0 0 0 1 1 A-F A-F 0 1 0 0 0 1 2 3-9 8-2 1 0 0 0 0 1 2 A-F A-F 0 1 0 0 0 1 3 0-1 2-1 0 0 0 0 0 1 3 4-9 8-3 1 0 0 0 0 1 3 A-F A-F 0 1 0 0 0 1 4 0-2 3-1 0 0 0 0 0 1 4 5-9 8-4 1 0 0 0 0 1 4 A-F A-F 0 1 0 0 0 1 5 0-3 4-1 0 0 0 0 0 1 5 6-9 8-5 1 0 0 0 0 1 5 A-F A-F 0 1 0 0 0 1 6 0-4 5-1 0 0 0 0 0 1 6 7-9 8-6 1 0 0 0 0 1 6 A-F A-F 0 1 0 0 0 1 7 0-5 6-1 0 0 0 0 0 1 7 8-9 8-7 1 0 0 0 0 1 7 A-F A-F 0 1 0 0 0 1 8 0-6 7-1 0 0 0 0 0 1 8 A-F A-F 0 1 0 0 0 1 9 0-7 8-1 0 0 0 0 0 1 9 A-F A-F 0 1 0 0 0 1 A 0-9 A 0 1 0 0 0 1 A A A 0 1 1 0 0 1 A B-F A 0 1 0 0 0 1 B 0-A B 0 1 0 0 0 1 B B B 0 1 1 0 0 1 B C-F B 0 1 0 0 0 1 C 0-B C 0 1 0 0 0 1 C C C 0 1 1 0 0 1 C D-F C 0 1 0 0 0 1 D 0-C D 0 1 0 0 0 1 D D D 0 1 1 0 0 1 D E-F D 0 1 0 0 0 1 E 0-D E 0 1 0 0 0 1 E E E 0 1 1 0 0 1 E F E 0 1 0 0 0 1 F 0-E F 0 1 0 0 0 1 F F F 0 1 1 0 1 0 0 1-9 1-9 0 0 0 0 1 0 0 A-F A-F 0 1 0 0 1 0 1 0-8 1-9 0 0 0 0 1 0 1 A-F A-F 0 1 0 0 1 0 2 0-7 2-9 0 0 0 0 1 0 2 A-F A-F 0 1 0 0 1 0 3 0-6 3-9 0 0 0 0 1 0 3 8-9 1-2 1 0 0 0 1 0 3 A-F A-F 0 1 0 0 1 0 4 0-5 4-9 0 0 0 0 1 0 4 7-9 1-3 1 0 0 0 1 0 4 A-F A-F 0 1 0 0 1 0 5 0-4 5-9 0 0 0 0 1 0 5 6-9 1-4 1 0 0 0 1 0 5 A-F A-F 0 1 0 0 1 0 6 0-3 6-9 0 0 0 0 1 0 6 5-9 1-5 1 0 0 0 1 0 6 A-F A-F 0 1 0 0 1 0 7 0-2 7-9 0 0 0 0 1 0 7 4-9 1-6 1 0 0 0 1 0 7 A-F A-F 0 1 0 0 1 0 8 0-1 8-9 0 0 0 0 1 0 8 3-9 1-7 1 0 0 0 1 0 8 A-F A-F 0 1 0 0 1 0 9 2-9 1-8 1 0 0 0 1 0 9 A-F A-F 0 1 0 0 1 0 A 0-9 A 0 1 0 0 1 0 A A A 0 1 1 0 1 0 A B-F A 0 1 0 0 1 0 B 0-A B 0 1 0 0 1 0 B B B 0 1 1 0 1 0 B C-F B 0 1 0 0 1 0 C 0-B C 0 1 0 0 1 0 C C C 0 1 1 0 1 0 C D-F C 0 1 0 0 1 0 D 0-C D 0 1 0 0 1 0 D D D 0 1 1 0 1 0 D E-F D 0 1 0 0 1 0 E 0-D E 0 1 0 0 1 0 E E E 0 1 1 0 1 0 E F E 0 1 0 0 1 0 F 0-E F 0 1 0 0 1 0 F F F 0 1 1 0 1 1 0 0-8 1-9 0 0 0 0 1 1 0 A-F A-F 0 1 0 0 1 1 1 0-7 2-9 0 0 0 0 1 1 1 A-F A-F 0 1 0 0 1 1 2 0-6 3-9 0 0 0 0 1 1 2 8-9 1 1 0 0 0 1 1 2 A-F A-F 0 1 0 0 1 1 3 0-5 4-9 0 0 0 0 1 1 3 7-9 1-3 1 0 0 0 1 1 3 A-F A-F 0 1 0 0 1 1 4 0-4 5-9 0 0 0 0 1 1 4 6-9 1-4 1 0 0 0 1 1 4 A-F A-F 0 1 0 0 1 1 5 0-3 6-9 0 0 0 0 1 1 5 5-9 1-5 1 0 0 0 1 1 5 A-F A-F 0 1 0 0 1 1 6 0-2 7-9 0 0 0 0 1 1 6 4-9 1-6 1 0 0 0 1 1 6 A-F A-F 0 1 0 0 1 1 7 0-1 8-9 0 0 0 0 1 1 7 3-9 1-7 1 0 0 0 1 1 7 A-F A-F 0 1 0 0 1 1 8 2-9 1-8 1 0 0 0 1 1 8 A-F A-F 0 1 0 0 1 1 9 1-9 1-9 1 0 0 0 1 1 9 A-F A-F 0 1 0 0 1 1 A 0-9 A 0 1 0 0 1 1 A A A 0 1 1 0 1 1 A B-F A 0 1 0 0 1 1 B 0-A B 0 1 0 0 1 1 B B B 0 1 1 0 1 1 B C-F 8 0 1 0 0 1 1 C 0-B C 0 1 0 0 1 1 C C C 0 1 1 0 1 1 C D-F C 0 1 0 0 1 1 D 0-C D 0 1 0 0 1 1 D D D 0 1 1 0 1 1 D E-F D 0 1 0 0 1 1 E 0-D E 0 1 0 0 1 1 E E E 0 1 1 0 1 1 E F E 0 1 0 0 1 1 F 0-E F 0 1 0 0 1 1 F F F 0 1 1 0 The ability to detect whether either of the two operands contains a value greater than 9 is very useful in that it allows the decimal adder/subtractor PROM 84 to detect the case of an illegal operand which has a decimal digit which is represented by four bits and therefore can have values of from A through F hexadecimal which are illegal. The ability for the decimal adder/subtractor PROM 84 to detect illegal decimal operands as part of the addition/subtraction process eliminates the need for a separate precheck of the operands by a prescan of the operands prior to introducing them into the Sign detector PROM 78 is similar in operation to decimal adder/subtractor PROM 84. Sign detector PROM 78 uses the nine input bits to address a 4-bit data word which indicates the sign of the operands used in an arithmetic operation. Of the nine bits used to address the 4-bit data words of the sign detector PROM 78, three bits come from data-in register 98, 4 bits from sign multipliexer 77, and two bits (PACKD) and overpunch (OVPUN) from CIL control area 100. The output of sign detector PROM 78 can be gated to monitor logic 22 for inputing into microprocessor 30 via monitor multiplexer 80. The four bits output by sign detector PROM 78 indicate whether the sign is positive or negative, whether it is an illegal sign atom, whether the sign is an overpunch sign, and whether it is an overpunched zero. The resultant sign can be tested by the microprocessor 30 firmware examining the four monitor bits of test flip-flops 40 (see FIG. 3). In the preferred embodiment, the generation of the sign result is complex in that the CPU 20 allows decimal numbers to be represented in either a packed or unpacked format with trailing or leading signs and overpunch signs. RAM 1 zero multiplexer 82 at the output of RAM 1 81 and RAM 2 zero multiplexer 90 at the output of RAM 2 96 are used to allow the commercial instruction logic firmware to effectively zero out the output of RAM 1 and RAM 2 respectively so that the operand in the other RAM can effectively be added or subtracted from zero. RAM 2 data register 88 holds the 16-bit output of RAM 2 96. RAM 2 nibble multiplexer 89 is used to select the one of the four 4-bit nibbles held in RAM 2 data register 88 so that the appropriate nibble can be gated into RAM 2 zero multiplexer 90 or into double multiplexer 83 in preparation of adding or subtracting a nibble from RAM 2 96 with a nibble from RAM 1 81 in decimal adder/subtractor PROM 84. Nibble 0 multiplexer 92, nibble 1 multiplexer 93, nibble 2 multiplexer 94, and nibble 3 multiplexer 95 are used to either allow a 16-bit quantity to be loaded in from data-in register 98 RAM 2 96 or to allow a 4-bit nibble from result/zone multiplexer 91 into the appropriate 4-bit nibble of the 16-bit word stored in RAM 2 96 under firmware control. Result/zone multiplexer 91 is used to determine whether the 4-bit result from decimal adder/subtractor PROM 84 or the four zone bits are to be written into a nibble within RAM 2 96. In the CPU of the preferred embodiment, when a decimal number is stored in a packed format, each four bits of data in the decimal value represent a decimal digit having the value from zero through nine. When decimal data is stored in an unpacked (string) format, each decimal digit is represented by an eight bits in which the most significant (left) four bits of the 8-bit field represent a zone field having a value of 3 hexadecimal and the least significant (right) four bits represent the decimal values 0 through 9. Thus, in the unpacked format, each decimal digit is represented by an 8-bit field which contains the ASCII code for the decimal digit. Therefore, result/zone multiplexer 91 can select between the 4-bit result from decimal adder/subtractor PROM 84 (which can have a value from 0 through 9) or the four zone bits which are preset to 0011 binary (3 hexadecimal). For example, the decimal value 76 when stored in a packed field is stored in eight bits with the most significant 4-bit nibble containing the value 7 and the least significant 4-bit nibble containing the value 6. When the same value of 76 decimal is stored in an unpacked field, it is stored in two 8-bit bytes with the left most 4-bit nibble of each byte being a 4-bit zone field which contains the hexadecimal value of three and the right 4-bit nibble in each byte containing the decimal value of 7 in the left byte and 6 in the right byte. Therefore, the value of 76 decimal in a packed decimal field is represented by the 8-bit field of 76 hexadecimal and in an unpacked field it is represented by a 16-bit field containing 3736 hexadecimal. Double multiplexer 83 allows one input of decimal adder/subtractor PROM 84 to be selected between four bits from RAM 1 81 or four bits from RAM 2 96. When double multiplexer 83 selects one ihput of decimal adder/subtractor PROM 84 to be four bits from RAM 2 96, the effect is that the output of decimal adder/subtractor PROM 84 is double the value of the four bits from RAM 2 because in this case both inputs to the decimal adder/subtractor PROM 84 will be from RAM 2. This provides a very convenient method of multiplying by two the operand stored in RAM 2 96. Although both RAM 1 81 and RAM 2 96 contain 16-bit data words, with RAM 1 containing 16 such 16-bit words and RAM 2 containing 256 16-bit words, there are other distinctions between these two RAMs. RAM 1, as indicated above, normally holds operand 1 and may be written into only on a 16-bit word basis. That is, when information is written into RAM 1 from data-in register 98, all four nibbles of the 16-bit word are written to. The output of RAM 1 is always a single 4-bit nibble which is presented at one of two inputs to RAM 1 zero multiplexer 82. The word which is read from RAM 1 81 is controlled by RAM 1 address counter 75 and the nibble which is enabled into one input of RAM 1 zero multiplexer 82 is controlled by nibble out control 76. RAM 1 address counter 75 also receives inputs from nibble out control 76 such that consecutive decimal digits may be accessed from RAM 1 by either incrementing or decrementing a nibble counter in nibble out control 76 which in turn either increments or decrements the word counter in RAM 1 address counter 75 each time that four nibbles have been accessed. RAM 2, as indicated above, normally holds operand 2. RAM 2 96 has the ability to be written into as one 16-bit word or the ability to write any one of the four individual nibbles. The 16-bit word to be written into RAM 2 96 comes from data-in register 98 which can be loaded from processor bus 37 or from the output of RAM 2 data register 88. Individual nibbles are written into RAM 2 96 from the output of result/zone multiplexer 91 with the value of the nibble being either the result of the decimal adder/subtractor PROM 84 or a 4-bit zone field containing the hexadecimal value of 3. Nibble write control 86 determines whether a 16 bit word is written into RAM 2 96 or whether one of four individual nibbles is written into RAM 2 96. RAM 2 address counter 87 determines the 8-bit word address that is used to address RAM 2 96 for either a read or write operation. RAM 2 address counter 87 receives an input from nibble write control 86 so that consecutive nibbles may be accessed from RAM 2 96. Each time the four nibbles have been accessed, RAM 2 address counter 87 is either incremented or decremented to get to the next word which contains the next consecutive nibble. A read from RAM 2 96 results in a 16-bit data word being read out and latched into RAM 2 data register 88. The particular nibble to be accessed in the 16-bit data word is controlled by RAM 2 nibble multiplexer 89 which selects one of the four nibbles to be output into RAM 2 zero multiplexer 90 and double multiplexer 83. By contrast, a read in RAM 1 results in only four bits of one nibble being output with the other twelve bits of the three nibbles not being enabled. The enabling of the nibble which is read from RAM 1 81 is controlled by nibble out control 76. The 16-bit word read from RAM 2 96 into RAM 2 data register 88 can be written back into RAM 2 96 via data-in register 98 or it can be written into RAM 1 81 via data-in register 98 under firmware control. The output of nibble write control 86, besides controlling which one of four nibbles is write enabled into RAM 2 when a single nibble is being written into, also controls the selection of which one of four nibbles will be output by RAM 2 nibble multiplexer 89. When a 16-bit word is written into RAM 2 96, all four nibbles are write enabled by nibble write control 86. The sizes of RAM 1 81 and RAM 2 96 are determined by their use by the firmware. As indicated above, RAM 1 81 is used primarily to hold operand 1 which, in the CPU of the preferred embodiment, may be a decimal number of from 1 through 31 decimal digits including the sign. Therefore, to accommodate a 31 digit decimal number, sixteen words of 16 bits each are required in order to be able to hold the 31 bytes of the maximum decimal number if the number is an unpacked decimal number having zone bits associated with each decimal digit. RAM 2 96, on the other hand, besides normally holding operand 2 is divided into eight segments with each segment being used to either hold an operand or as a working register. For example, in the case of a decimal divide, the segments of RAM 2 96 are used to hold an original copy of operand 1, an original copy of operand 2, a packed copy of operand 2, the quotient and partial products. The microoperations of the microinstruction firmware word which control the operation of commercial instruction logic 28 will now be described with reference to FIG. 4 and FIG. 5 and Table 4 through Table 8. As discussed earlier, the addressing of read-only storage 24 is controlled by microprocessor 30 and more particularly the control area 36 of microprocessor 30. As shown in FIG. 3, the firmware microinstruction accessed in ROS 24 is controlled by the contents of ROS address register 63. Each time a firmware word is read from ROS 24 either 48 bits or 56 bits are retrieved. If the address is within the first 2K of ROS (i.e., addressed 0 through 2047) a 48-bit microinstruction word is retrieved and if the address is from 2048 through 4095 a 56-bit microinstruction word is retrieved. As described earlier, bits 0 through 47 of microinstruction word are always latched into ROS data register 65 for decoding and controlling the operations of microprocessor 30 (see FIG. 3). In addition, the special control field of the microinstruction word, bits 35 through 47, are also latched into ROS SCF register 101 of CIL control area 100 (see FIG. 4). This allows the special control field, bits RDDT35 through RDDT47, to control either operations within microprocesor 30 or within commercial instruction logic 28. As will be seen below in describing the microoperations in Table 4 and Table 5, subfield A of the special control field is used to enable some of the microoperations controlled by subfields B and C. That is, as is shown in Table 4, three of the four microoperations defined by subfield the special control field are enabled only when subfield A contains a binary 110 in RDDT35 through RDDT37. Similarly, the eight microoperations defined in Table 5 the subfield C are enabled only when subfield A contains a binary 110 in RDDT35 through RDDT37. This is shown in FIG. 4 which shows that three of the bits from sutfield B are enabled by feeding the three bits from subfield A into AND gate 107 and that the decoding of subfield C is enabled by feeding the three bits from subfield A into the enable (EN) input of decoder 106. As is shown in Table 6, and in FIG. 4, the three bits from subfield D do not require that subfield A be equal to a binary 110 in bits RDDT35 through RDDT37. The requirement certain of the microoperations specified in subfield B, C and D of the special control field are enabled only if subfield A contains a binary 110 is necessary in order to inhibit commercial instruction logic 28 from performing certain microoperations which would otherwise interfere with the microoperations being performed by microprocessor 30. Zero multiplexer 102 is used to force binary ZEROs into ROS CIL register 103 when the microprocessor 30 addresses a firmware location in the lower 2K of ROS 24. This forcing of binary ZEROs into ROS CIL register 103 is done by enabling the tri-state outputs of zero register 102 and disabling the tri-state outputs of ROS 24 bits RDDT48 through RDDT55. By forcing ZEROs into ROS CIL register 103 in this manner, decode PROM 104 and decode PROM 105 decode no operation microoperations (see Table 7 and Table 8) thereby inhibiting commercial instruction logic 28 from performing any operation when a firmware word has been retireved from ROS 24 which does not contain bits RDDT48 through RDDT55 (see FIG. 5). Therefore, zero multiplexer 102 is used to force zeros into subfields E and F which are dedicated to commercial instruction logic 28 and which are absent in the first 2K words in ROS 24. The sixteen microoperations encoded within subfield E are shown in Table 7 and the sixteen microoperations specifiable in subfield F are shown in Table 8. As can be appreciated by examining FIG. 4 and Table 4 through Table 8, many parallel microoperations for control of commercial instruction logic can be programmed into a single microinstruction word. Because subfield B which contains four bits is not encoded, four parallel microoperations can be programmed into subfield B. The subfield B operations can be performed in parallel with any of the seven microoperations that can be programmed into the three bits which are encoded in subfield C (see Table 5) Table 6 shows that three parallel microoperations can programmed into subfield D and these in turn can be performed in parallel with any of the operations of subfields B and C. Table 7 shows that this 4-bit field is encoded to provide one of fifteen microoperations which can be performed in parallel with any of the operations of subfields B, C, D and F. Table 8 illustrates that the four bits encoded to provide one of sixteen microoperations which can be performed in parallel with any of the microoperations specified in subfield B, C, D and E. It will be understood that the various control signals from CIL control area 100 are applied to the various logic elements of commercial instruction logic 28 during each microinstruction execution cycle. It will also be understood that the clock signals from microprocessor 30 provide appropriate timing signals in a conventional manner to the commercial instruction logic 28 to provide appropriate timing therefor. In order not to confuse the drawings, the particular timing and control signals fed to various elements are not shown in FIGS. 1-4, but are assumed to be provided where The microoperations of Table 4 will now be described with reference to FIG. 4. TABLE 4 Subfield B Microoperations RDDT Bits 35 36 37 38 39 40 41 Operation X X X 1 X X X Double RAM 2 thru Add/Sub PROM (CIPDUB) 1 1 0 X 1 0 X Write RAM 2 Nibble 1 1 0 X X 1 X Write RAM 2 All 1 1 0 X X X 1 Write RAM 1 All In Table 4, it can be seen that when RDDT38 is a binary ONE, that both inputs to decimal adder/subtractor PROM 84 originate from RAM 2 96 which effectively allows, when an addition is being performed, the doubling of the 4-bit nibble output from RAM 2. Thus, when RDDT38 is a binary ONE, the double multiplexer 83 selects as its output the input it received from RAM 2 nibble multiplexer 89. This also allows, by performing a subtract operation in decimal adder/subtractor PROM 84 and by selecting the zero inputs as the output of RAM 2 zero multiplexer 90, the ability to subtract the nibble output from RAM 2 from zero and thus do a complementing of the nibble from RAM 2. When RDDT39 is a binary ONE, one nibble is written into RAM 2 96. This is accomplished by enabling one of the four 4-bit segments of RAM 2 96 under the control of nibble write control 86 so that only one nibble is written into the word specified by RAM 2 address counter 87. RDDT40, when a binary ONE, controls multiplexers 92 through 95 such that the output of data-in register 98 is input into RAM 2 96 and all four nibbles in RAM 2 96 are write enabled such that full word addressed by RAM 2 address counter 87 is written into RAM 2. When RDDT41 is a binary ONE, a full 16-bit word is written into RAM 1 81 from data-in register 98 in the word specified by RAM 1 address counter 75. As can be seen in Table 4, in order for the microoperations controlled by RDDT39, RDDT40 and RDDT41, subfield A of the special control field must have the binary value 110 in RDDT35 through RDDT37. This can also be appreciated by examining Table 4, the four microoperations controlled by RDDT38 through RDDT41 can each be selected within a single microinstructicn with the exception that in order to write a single nibble into RAM 2, RDDT40 must be a binary ZERO therefore preventing both the writing of a single nibble and the writing of a full word into RAM 2 in a single microoperation. The microoperations controlled by subfield C of the special control field will now be discussed with reference to Table 5 and FIG. 4. TABLE 5 Subfield C Microoperations RDDT Bits When RDDT35-37 Equals 110 Binary RDDT Bits 42 43 44 Operation 0 0 0 No Operation 0 0 1 Packed Sign 0 1 0 Overpunched Sign 0 1 1 Subtract 1 0 0 Reset =0 And =9 Indicators 1 0 1 Increment Address RAM 1 1 1 0 Decrement Address RAM 1 1 1 1 Reset All Indicators The three bits of subfield C, that is RDDT42 through RDDT44, are decoded by decoder 106 to perform one of eight microoperations. When RDDT42 through RDDT44 are a binary 000, no operation is performed by commercial instruction logic 28. When RDDT42 through RDDT44 are a binary 001, the PACKD signal input into sign detection PROM 78 is made a binary ZERO and is used to address the portion of the sign detector PROM 78 which contains the coding for packed decimal sign types. When RDDT42 through RDDT44 are a binary 010, signal OVPUN becomes a binary ZERO and is used to address the portion of the sign detector PROM 78 which contains the encoding for overpunched signs. When bits RDDT42 through RDDT44 are a binary 011, a subtract operation is performed by decimal adder/subtractor PROM 84 by forcing the signal ADD/SUB which is one of its inputs to be a binary ZERO. When RDDT42 through RDDT44 are a binary 100, the equal zero and equal nine indicators in decimal indicators 85 are reset to zero. When RDDT42 through RDDT44 are a binary 101, the address counter in RAM 1 address counter 75 is incremented by one such that the next word in RAM 1 81 is addressed thereby allowing one of four new nibbles to be input into RAM 1 zero multiplexer 82. The exact nibble which is enabled into RAM 1 zero multiplexer 82 is determined by nibble out control 76. When RDOT42 through RDDT44 are a binary 110, RAM 1 address counter 75 is decremented by one, thereby addressing the next lower word in RAM 1 81 and making available at its output one of four new nibbles with the enabled nibble again being under the control of nibble output control 76. when RDDT42 through RDDT44 are a binary 111 all four indicators in decimal indicators 85 are reset to binary ZERO thereby indicating that there is no carry, that the result is not equal to zero, that the result is not equal to nine, and that the result is not an illegal digit. The microoperations associated with subfield D of special control field will now be described with reference to Table 6. TABLE 6 Subfield D Microoperations RDDT Bits 45 46 47 Operation 1 X X Sign to Monitor Logic X 1 X Inhibit RAM 2 to Add/Sub PROM X X 1 Inhibit RAM 1 to Add/Sub PROM As is shown in Table 6, each of these three bits is independent of the other three bits in subfield D and therefore all three microoperations may be performed in parallel. It should also be noted that the microoperations controlled by subfield D do no require that the subfield A be any particular value whereas when coding the microoperations described above in Table 5, with respect to subfield C, require that subfield A contain a binary value of 110. When RDDT45 is a binary ONE, the output of sign detector PROM 78 is enabled through monitor multiplexer 80 to monitor logic 22. When RDDT45 is a binary ZERO, the output of monitor multiplexer 80 is the decimal indicators 85 which are then transferred to monitor logic 22. When RDDT46 is a binary ONE, the zero input of RAM 2 zero multiplexer 90 is enabled to its output and thus zeros are entered as one of the operands into the decimal adder/subtractor PROM 84. When RDDT47 is a binary ONE, the zero input to RAM 1 zero multiplexer 82 is enabled into double multiplexer 83 thereby inhibiting the output of RAM 1 from entering decimal adder/subtractor PROM 84. The microoperations controlled by subfield E are illustrated in Table 7. As discussed above, these microoperations, as do those controlled by subfield F shown in Table 8, are encoded in microinstruction bits 48 through 55 which are only present in the upper 2K words of read only storage 24. Therefore, as indicated earlier, if the address used to address ROS 24 is less than 2K, the output of zero register 102 is enabled into ROS CIL register 103 thereby forcing bits RDDT48 through RDDT55 to a binary ZERO which will cause two parallel no operations to be performed by commercial instruction logic 28. When the ROS 24 address is greater than 2K, the output of bits 48 through 55 of ROS 24 is enabled into ROS CIL register 103 and one of the microoperations specified in Table 7 and one of the microoperations specified in Table 8 will be performed. TABLE 7 Subfield E Microoperations RDDT Bits When ROS Address Greater Than 2K RDDT Bits 48 49 50 51 Operation 0 0 0 0 No Operation 0 0 0 1 Load Address RAM 1 0 0 1 0 Load Count RAM 1 0 0 1 1 Load Address and Count RAM 1 (CLDAC1) = (CLDAD1 & CLDCT1)* 0 1 0 0 Load Address RAM 2 0 1 0 1 Load Count RAM 2 0 1 1 0 Load Address and Count RAM 2 (CLDAC2) = (CLDAD2 & CLDCT2)* 0 1 1 1 Load Address RAM 1 and RAM 2 (CLDADB) = (CLDAD1 & CLDAD2)* 1 0 0 0 Load Count RAM 1 and RAM 2 (CLDCTB) = (CLDCT1 & CLDCT2)* 1 0 0 1 Load Address and Count RAM 1 and RAM 2 (CLDACB) = (CLDAD1 & CLDCT1 & CLDAD2 & CLDCT2)* 1 0 1 0 Count Up RAM 1 1 0 1 1 Count Down RAM 1 1 1 0 0 Count Up RAM 2 1 1 0 1 Count Down RAM 2 1 1 1 0 Count Up RAM 1 and RAM 2 (CTUALL) = (CTUCT1 & CTUCT2)* 1 1 1 1 Count Down RAM 1 and RAM 2 (CTDALL) = (CTDCT1 & CTDCT2)* *Parallel microoperations created by decode PROM coding. Now referencing Table 7, when RDDT48 through RDDT51 are a binary 0000 a no operation is performed. When RDDT48 through RDDT51 are a binary 0001, an address from data-in register 98 is loaded into RAM 1 address countar 75 thereby permitting control of which word is addressed in RAM 1 81. When RDDT48 through RDDT51 is a binary 0010, a nibble count from data-in register 98 is loaded into nibble output control 76 thereby controlling which one of four nibbles contained in one word of RAM 1 is enabled into one input of RAM 1 zero multiplexer 82. When RDDT48 through RDDT51 is a binary 0011, a word address is loaded into RAM 1 address counter 75 and a nibble count is loaded into nibble output control 76 from data-in register 98 thereby allowing the specifying as to the word and the nibble which will be read from RAM 1 81. When RDDT48 through RDDT51 is a binary 0100 an initial address is loaded into RAM 2 address counter 87 from data-in register 98 thereby controlling which word will be written into or read from RAM 2 96. When RDDT48 through RDDT51 is a binary 0101, an initial nibble count is loaded into nibble write control 86 thereby controlling which nibble will be write enabled into RAM 2 96. The nibble counter within nibble write control 86 also controls which of the four nibbles from RAM 2 data register 88 is enabled onto the outputs of RAM 2 nibble multiplexer 89. Therefore, loading of the RAM 2 nibble count by this microoperation controls both the write enabling into RAM 2 96 and the output enabling of RAM 2 nibble multiplexer 89. When RDDT48 through RDDT51 is a binary 0110, the address counter in RAM 2 address counter 87 and the nibble counter in nibble write control 86 are loaded from data in register 98 thereby controlling the word that is addressed within RAM 2 and the nibble which is write enabled into RAM 2 and the nibble which is selected at the output of RAM 2 nibble multiplexer 89. The loading of the nibble counter by this microoperation only controls which nibble will be write enabled when a write is done and does not actually do a write into RAM 2. When RDDT48 through RDDT51 are a binary 0111, the word address contained in data-in register 98 is loaded into RAM 1 address counter 75 and RAM 2 address counter 87. When RDDT48 through RDDT51 is a binary 1000, the nibble count in data-in register 98 is loaded into the nibble out control 76 and into nibtle write control 86 thereby allowing the selection of one of four nibbles in RAM 1 and RAM 2. When RDDT48 through RDDT51 is a binary 1001, the word address and the nibble count from data-in register 98 are loaded into RAM 1 address counter 75 and RAM 2 address counter 87 and nibble out control 76 and nibble write control 86 respectively. When RDDT48 through RDDT51 is a binary 1010, the nibble counter in nibble out control 76 is incremented by one and if it changes from a nibble count of three to a nibble count of zero, the word counter in RAM 1 address counter 75 is also incremented by one. This allows nibbles to be consecutively addressed and after the four nibbles have been addressed from one word, the first nibble is addressed in the next word. When RDDT48 through RDDT51 is a binary 1011, the nibble counter in nibble out control 76 is decremented by one and if the count changes from zero to three, the word counter in RAM 1 address counter 75 is decremented by one thereby allowing consecutive nibbles to be addressed from right to left. When RDDT48 through RDDT51 is a binary 1100, the nibble counter in write control 86 is incremented by one and if the nibble counter goes from three to zero, the word counter in RAM 2 address counter 87 is also incremented by one thereby allowing ccnsecutive nibbles to be addressed from left to right. When RDDT48 through RDDT51 is a binary 1101, the nibble counter in nibble write control 86 is decremented by one and if the count goes from zero to three, the word counter in RAM 2 address counter 87 is also decremented by one to point to the next word in RAM 2 96. This decrementing of the nibble counter associated with RAM 2 by one allows consecutive nibbles in RAM 2 to be addressed from right to left. When RDDT48 through RDDT51 is a binary 1110, the nibble counter in nibble out control 76 is incremented by one as is the nibble counter in nibble write control 86 and when a counter go from three to zero, the corresponding word counter in RAM 1 address counter 75 and RAM 2 address counter 87 is also incremented by one thereby allowing consecutive nibbles to be addressed from left to right in RAM 1 and RAM 2. When RDDT48 through RDDT51 is a binary 1111, the nibble counter in nibble out control 76 is decremented by one, and the nibble counter in nibble write control 86 is decremented by one, and if either counter goes from zero to three, the associated word counter in RAM 1 address counter 75 or RAM 2 address counter 87 is decremented by one thereby allowing consecutive nibbles in RAM 1 and RAM 2 to be addressed from right to left. The microoperations controlled by subfield F will now be discussed with reference to Table 8 and FIG. 4. TABLE 8 Subfield F Microoperations RDDT Bits When ROS Address Greater Than 2K RDDT Bits 52 53 54 55 Operation 0 0 0 0 No Operation 0 0 0 1 Transceivers In to CIL 0 0 1 0 Transceivers Out to Microprocessor 0 0 1 1 Write Zone Bits to RAM 2 0 1 0 0 Decrement Address RAM 2 0 1 0 1 Load Indicators 0 1 1 0 Increment Address RAM 2 (CIAD02) = (CIPINN & CIAD02)* 0 1 1 1 Set Carry Indicator 1 0 0 0 Set Test Mode Flop 1 0 0 1 Transceivers In to CIL and Increment Address RAM 2 1 0 1 0 Transceivers In to CIL and Decrement Address RAM 2 (CINDA2) = (CIPINN & CDAD02)* 1 0 1 1 Transceivers Out to Microprocessor and Increment Address RAM 2 (COAIA2) = (CIPOUT & CIAD02)* 1 1 0 0 Transceivers Out to Microprocessor and Decrement Address RAM 2 (COTDA2) = (CIPOUT & CDAD02)* 1 1 0 1 Not Used 1 1 1 0 Not Used 1 1 1 1 Not Used *Parallel microoperations created by decode PROM coding. When RDDT52 through RDDT55 is a binary 0000, no operation is performed by commercial instruction logic 28. Again, as discussed above, this no operation is performed whenever the ROS 24 address is less than 2K because zero register 102 is enabled into ROS CIL register 103 as discussed above. When RDDT52 through RDDT55 is a binary 0001, transceivers 97 are enabled such that the data on processor bus 37 is available to data-in register 98 and data-in register 98 is clocked such that the information becomes available to commercial instruction logic 28 at the output of data-in register 98. When RDDT52 through RDDT55 is a binary 0010, the transceivers 97 are enabled to transmit data from commercial instruction logic 28 to microprocessor 30 such that the information in RAM 2 data register 98 is passed to processor thus 37. In addition, this microoperation clocks data-in register 98 such that the data from RAM 2 data register 88 is loaded into data-in register 98. By loading data-in register 98 with the contents of RAM 2 data register 88, information can effectively be transferred from RAM 2 through data-in register 98 and into RAM 1 81 by a parallel microoperation or by a subsequent microoperation. When RDDT52 through RDDT55 is a binary 0011, the zone bits at the input of result/zone multiplexer 91 are enabled onto its outputs thereby allowing the zone nibble of a binary 0011 to be loaded into one of the nibbles of a word within RAM 2 96. This is used when processing string decimal data. When RDDT52 through RDDT55 is a binary 0100, the word address counter in RAM 2 address counter 87 is decremented by one. When RDDT52 through RDDT55 is a binary 0101, the decimal indicator register 85 is loaded with the four indicator bits from decimal adder/subtractor PROM 84. This loading of indicators is normally specified whenever an add or subtract operation is performed so that the status of the indicator bits is latched into decimal indicators 85. When RDDT52 through RDDT55 is a binary 0110, the word address counter in RAM 2 address counter 87 is incremented by one thereby pointing to the next word within RAM 2 96. It should be noted that the incrementing and decrementing of the address counter for RAM 2 is controlled by subfield F and the incrementing and decrementing of the address counter for RAM 1 is controlled by subfield C so that the address counters of RAM 2 and RAM 1 can be incremented and decremented in parallel. When RDDT52 through RDDT55 is a binary 0111, the carry indicator in decimal indicators 85 is set to a binary ONE. This function is useful to allow a carry-in to be forced into decimal adder/ subtractor PROM 84. When RDDT52 through RDDT55 is a binary 1000, a test mode flop is set to indicate that a fault condition has occurred within commercial instruction logic 28. When RDDT52 through RDDT55 is a binary 1001, transceivers 97 are enabled to receive data from processor bus 37 and data-in register 98 is clocked such that the data is latched in it and at the same time the word address counter in RAM 2 address counter 87 is incremented by one. By using a series of these microoperations, consecutive locations in RAM 2 96 can be loaded from the contents of processor bus 37. When RDDT52 through RDDT55 is a binary 1010, transceivers 97 are enabled to receive the data from processor bus 37 and it is latched into data-in register 98. In parallel, the address counter in RAM 2 address counter 87 is decremented by one such that the next lower word in RAM 2 96 is addressed. As in the previous microoperation, it is useful for allowing consecutive words in RAM 2 to be loaded from processor bus 37. When RDDT52 through RDDT55 is a binary 1011, transceivers 97 are enabled to transmit the data from the output of RAM 2 data register 88 to processor bus 37 and at the same time this data is loaded into data-in register 98. In addition, the address counter in RAM 2 address counter 87 is incremented by one. This microoperation is useful to allow consecutive words in RAM 2 to be transmitted to processor bus 37 and also to allow consecutive words from RAM 2 to be loaded into data-in register 98 from which they can be loaded by a parallel microoperation into RAM 1 81. When RDDT52 through RDDT55 are a binary 1100, transceivers 97 are enabled to processor bus 37 to allow the outputting of data in RAM 2 data register 88 and at the same time the data is loaded into data-in register 98. In addition, the address counter in RAM 2 address counter 87 is decremented by one such that the next lower word in RAM 2 96 is pointed to. This microoperation is also useful to allow consecutive words in RAM 2 96 to be transmitted to processor bus 37 and loaded into data-in register 98. As in the previous microoperation, this microoperation is also useful to transfer consecutive words from RAM 2 into RAM 1. The microoperations specified by binary 1101, 1110 and 1111 are not used. The three different data types handled by the commercial instruction logic 28 are: decimal data, alphanumeric data, and binary data. The decimal and binary data types are used to represent fixed point integer numeric values. The alphanumeric data type is used to represent alphabetic, numeric and punctuation of text information. The commercial software instructions of the CPU permit arithmetic operations on the decimal data and editing operations on alphanumeric data. These commercial software instructions are performed by commercial instruction logic 28 in conjunction with microprocessor 30. The single unit of information of each data type will be referred to as an "atom". Table 9 gives the size of an atom in bits as a function of the data type. TABLE 9 Size of Data Atoms Atom Size Data Type In Number of Bits String (Unpacked) Decimal Packed Decimal 4 Alphanumeric 8 Binary 8* *This means that: single precision binary numbers consist of two atoms, o 16 bits, while double precision binary numbers consist of four atoms or 3 bits. Eight-bit atoms are also referred to as "bytes" and 4-bit atoms are also referred to as "nibbles". FIG. 7A illustrates the positiion of byte 0 and byte 1 within the 16-bit words of the preferred embodiment. FIG. 7B illustrates the positions of nibble 0 through nibble 3 in a 16-bit word. The bits are numbered 0 through 15 with bit 0 being the most significant bit (MSB) and bit 15 being the least significant bit (LSB) as illustrated in FIGS. 7A and 7B. Decimal data can be represented in either string (also referred to as unpacked) or packed form. The maximum length of a decimal operand is 31 atoms (i.e., if separate sign, then 30 digits plus sign). Decimal numbers are treated as integers with the decimal point assumed to be to the right of the least significant digit. A decimal operand can be signed or unsigned and when unsigned it is assumed to be positive. String (unpacked) decimal digits (atoms) occupy one byte position in memory. They can start and/or end on any byte boundaries. The four most significant bits of a decimal string digit are called the zone bits. The four least significant bits of decimal string digit define the value of the digit. Only the codes 0-9 are valid for the low order four bits otherwise an illegal character (IC) trap will result. Zone bits are not checked by the hardware on input but will be set to 3 hexadecimal (0011 binary) on output. String decimal data can be signed or unsigned. When unsigned the operand is considered to be positive and its length refers only to digits. When signed, the operand can have either: leading separate sign, trailing separate sign, or trailing overpunched sign. The length of the operand also includes the sign character. Table 10 gives the sign convention for string decimal operands having leading or trailing signs and Table 11 gives the sign convention for string decimal operands having trailing overpunch signs. Packed decimal digits (atoms) occupy four bits or one-half a byte position in memory (also referred to as nibbles). These digits can start and/or end on any half byte boundaries. The only valid codes for packed decimal digits are 0-9 otherwise an illegal character trap will result. Packed decimal data can be signed or unsigned. When unsigned, the operand is considered to be positive and its length refers only to digits. When signed, the sign will be the least significant atom of the operand. The length of the operand will include the sign atom. Table 12 gives the sign conventions for packed decimal operands. Sign, when specified, can only be separate trailing; i.e., the rightmost atom of the operand field. TABLE 10 Sign Convertions for String Decimal Operands Having Leading And Drawing Sig1 ASCII SIGN VALUE CHARACTER HEXADECIMAL CODE + + 2B - - 2D Note: The number of digits equals L1 and the sign occupies one atom position. TABLE 11 Sign Convention For String Decimal Operands Having Trailing Overpunch Sign HEXADECIMAL CODE RECOGNIZED HEXADECIMAL CODE SIGN DIGIT ASCII AND RECOGNIZED VALUE VALUE CHARACTER GENERATED ONLY + 0 (left brace) 7B 30 + 1 A 41 31 + 2 B 42 32 + 3 C 43 33 + 4 D 44 34 + 5 E 45 35 + 6 F 46 36 + 7 G 47 37 + 8 H 48 38 + 9 I 49 39 - 0 (right brace) - 1 J 4A - 2 K 4B - 3 L 4C - 4 M 4D None - 5 N 4E - 6 0 4F - 7 P 50 - 8 Q 51 - 9 R 52 Notes: 1. For length of operand equal to L, the number of digits equals L and sign is overpunched on rightmost digit. 2. The hardware uses only the low order 7bits of the byte to determine it sign TABLE 12 Packed Decimal Sign Conventions PACKED DECIMAL ASCII SIGN SIGN DIGIT RECOGNIZED RECOGNIZED IN HEXADECIMAL AND GENERATED ONLY BY CODE BY HARDWARE HARDWARE A + B + C + D - E + F + Alphanumeric operands consist of ASCII 7-bit characters. Their maximum size is 255 characters except as specified otherwise. Each alphanumeric character atom occupies one 8-bit byte. Alphanumeric strings in memory can start and/or end or both on either odd or even byte boundaries. Binary operands can be either 16 bits long or 32 bits long (i.e., one or two words). They are 2s complement numbers and thus the most significant bit is the sign bit and the binary point is assumed to be to the right of the least significant bit. The range (RG) of the value of a binary operand is: for a 16-bit long operand: -2 to the 15th less than or equal to RG less than or equal to 2 to the 15th-1 and for a 32-bit long operand: -2 to the 31st less than or equal to RG less than or equal to 2 to the 31st-1. Note that the binary atom is eight bits and thus the length of the operand should be either two or four atoms, otherwise unspecified results will occur. Binary operands in memory can start and/or end on either odd or even byte boundaries. There are seven types of basic software instructions: generic, branch on indicators, branch on registers, shift short and shift long, short value immediate, input/output, single operand, and double operand. The format for single and double operand basic software instructions is shown in FIG. 8A. The significance of the bits in FIG. 8A is as follows: bit 0 is always a binary ONE; bits 1, 2 and 3 are binary ZERO for single operand instructions and define a register number (1-7) in double operand instructions (the op code defines whether this is one of the 7 general (R) registers or one of the 7 address(B) registers; bits 4 to 8 define the operation code; bits 9-15 are the address syllable (AS) and are used to define either: (1) a location in memory that contains an operand, (2) a register that contains an operand, or (3) an immediate operand, where the operand is contained in the second word of the instruction. Single and double operand instructions can be either one, two or three words in length depending on the addressing mode utilized. The addressing mode is defined by the address syllable. Instructions that address a register are one word in length. Instructions that utilize an immediate operand are considered to be two words in length, including the operand, since the program counter is incremented by two in order to access the next instruction. And finally, instructions that address operands in memory can be either one, two or three (two, three or four for those requiring a mask) words in length depending on the addressing mode used. The three address modes are: absolute, base and relative addressing. Absolute addressing--(also called immediate address mode)--two-word instructions where the second word contains a 16-bit absolute address (short address format), or three word instructions where the second and third words contain a 20-bit absolute address (long address format). Base addressing--one-word instructions that define one of the seven address registers (B1-B7) as containing the address of the operand. Relative addressing--two-word instructions where the second word contains an algebraic displacement (±32K) relative to either the program counter (P relative), an address register (base relative), or the interrupt vector for the current central processor level. The commercial software instructions of CPU 20 ara classified as follows: numeric, alphanumeric, edit and branch. The format of commercial branch instructions is identical to that of the CPU branch instructions as shown in FIG. 8B where: OP CODE=specifies one of the commercial software branch instructions. Bits 0, 4 and 5 are binary ZEROs, bits 6 and 7 are binary ONEs and bits 1 through 3 specify which commercial indicator is to be tested and bit 8 specifies if a branch is to occur when the indicator is true (i.e., a binary ONE) or false (i.e., a binary ZERO). DISPLACEMENT=specifies the software instruction by its displacement in number of words forward or backward from the branch instruction to which the branch is to transfer control if the condition is The format for commercial numeric, alphanumeric and edit software instructions is given in FIGS. 8C-1 through 8C-3 where: OP CODE=specifies one of the commercial software instructions. Bits 0 through 9 of the op code word are binary ZEROs, bit 10 is a binary ONE and bits 11 through 15 specify the particular commercial software operation to be performed. DDn=Data descriptor specifies the type, size and location of the operand. DD1 refers to the first data descriptor; DD2 refers to the second and DD3 refers to the third. LABEL n=12 bit displacement capable of addressing any of up to 4K remote data descriptors. Label 1 refers to the first data descriptor; Label 2 refers to the second and Label 3 refers to the third. FIG. 8C-1 illustrates the format of a commercial software instruction using in-line data descriptors which describe the 1 to 3 operands used by the instruction with the number of operands being a function of the software instruction type. FIG. 8C-2 illustrates the format of a commercial software instruction using remote data descriptors to describe the 1 to 3 operands used by the software instruction and FIG. 8C-3 shows the format of a commercial software instruction using a combination of in-line and remote data descriptors to describe the 1 to 3 operands. The CPU distinguishes between in-line and remote data descriptors by examining bits 12 through 15 of the first word of a data descriptor. In remote data descriptors, bits 12 through 15 are binary ZEROs. A commercial software instruction can have from 1 to 3 operands which are defined by data descriptors. Data descriptors (DDs) are used to define the operand type, size and location in memory. Data descriptors can be in-line DDs (IDs) or remote DDs (RDs) but regardless of their location in memory, they have the same format. IDs are part of a commercial software instruction (see FIG. 8C-1). RDs are defined by a label within the commercial software instruction (see FIG. 8C-2). The label is a 12-bit positive integer which is used as an offset from the remote descriptor base address which is contained in the CPU 20-bit remote descriptor base register (RDBR). As a function of the instruction op code, a data descriptor can be either a: decimal DD, alphanumeric DD, or binary DD. Decimal DDs are implied by all numeric software instructions and the numeric edit instruction. Decimal DDs can refer to either string decimal or packed decimal data. The format of the DD is as shown in FIG. 9 where: WORD 1: C1, C2, C3=control bits and specify atom offset and sign information. L=specifies the length of the operand in number of atoms. T=specifies the data type: If T=binary ZERO, data is string (unpacked) decimal and IF T=binary ONE, data is packed decimal. CAS=specifies the commercial address syllable (see CAS description below). WORD 2: The contents of word 2 is either a displacement or an immediate operand (IMO) (i.e., the operand itself instead of a pointer to the operand) as defined by the CAS. In a string decimal DD the fields, shown in FIG. 9, have the following meaning: 1. C1=byte (atom) offset. a. When no indexing is specified, C1 specifies the offset within the addressed word: C1=binary ZERO, operand starts in the left most byte of the addressed word. C1=binary ONE, operand starts in the right most byte of the addressed word. b. When indexing is specified, C1 contains an atom offset value that is added to the index value and the resulting sum is used to compute the effective address of the operand. C2, C3=sign control as shown in the following tabulation: C2 C3 SIGN CONVENTION 0 0 Unsigned (assumed to be positive) 0 1 Trailing Overpunch 1 0 Leading Separate Sign 1 1 Trailing Separate Sign 3. L=Specifies the length of the operand either directly or indirectly: for L not equal to 0, then 1 less than or equal to L less than or equal to 31 for L=0, the escape to CPU registers 11-15, where: Register 4 for DD1 Register 5 for DD2 Register 6 for DD3. Register 8-10 should be zero else unspecified results will occur. Note that for unsigned or sign overpunched operands, the length refers to number of digits. For a leading or trailing separate signed operand, its length refers to L-1 digits and a sign. Note that an illegal specification trap will result if an operand has either: a length of 0, or a length of 1 and specifies separate sign (i.e., the operand consists only of a sign). 4. T=binary ZERO. 5. CAS=specifies the commercial address syllable. In a Packed Decimal DD the fields shown in FIG. 9 have the following meaning: 1. C1, C2=Nibble (atom) offset. a. When no indexing is specified then C1 and C2 specify the offset within the addressed word: 0 0 Nibble 0 (bits 0-3) 0 1 Nibble 1 (bits 4-7) 1 0 Nibble 2 (bits 8-11) 1 1 Nibble 3 (bits 12-15) b. When indexing is specified C1 and C2 contain an atom offset value which is added to the index value and the resulting sum is used to compute the effective address of the operand. 2. C3=Sign control: If C3=binary ZERO, the operand is unsigned and If C3=binary ONE, the operand is trailing sign. When unsigned, the operand is considered to be positive. When signed, only trailing sign is allowed. 3. L=specifies the length of the operand directly or indirectly. Those rules under string decimal DD for L also apply here. 4. T=binary ONE. Alphanumeric DDs are implied by all alphanumeric software instructions and the alphanumeric edit software instruction. The format of the alphanumeric DD is somewhat similar to that shown in FIG. 9 but need not be further described. Commercial software instructions generate address references through a filed called the commercial address syllable (CAS). The resolution of the CAS field for non branch instructions usually results in the formation of an effective address (EA) which points to an operand but can also directly describe an operand (i.e., an immediate operand). The CAS is a seven bit field of a data descriptor in which bits 9 through 11 are used as an address modifier, bit 12 is generally used to indicate whether direct or indirect addressing is to be used, and bits 13 through 15 are used to indicate a base register number. The CAS of FIG. 9 should not be confused with the AS used by the basic software CPU instructions of FIG. 8A although they are The general rules describing the commercial address syllable entries are as follows: 1. An in-line descriptor (ID) is a DD that is part of the instruction. 2. An ID can either point directly to the operand or to a pointer to the operand (i.e., whenever indirection is specified). 3. A remote descriptor (RD) is a DD defined by a label (which is part of the instruction) and the remote descriptor base register. 4. An RD can either point directly to the operand or to a pointer to the operand (i.e., when indirection is specified). 5. During extraction of commercial software instructions, the CPU must determine whether IDs or RDs are used. 6. The contents of the program counter and base registers are considered word addressed. 7. An index register content is considered an atom displacement and has a range of minus 2 to the 15th less than or equal to the index register content which is less than or equal to 2 to the 15th minus 1. 8. The displacement is a word displacement and has a range of minus 2 to the 15th less than or equal to the displacement which is less than or equal to 2 to the 15th minus 1. 9. When program counter plus displacement addressing is specified then the program counter contents means the address of the word containing the displacement. 10. An immediate operand (IMO) always occupies the second word of a DD. Its offset and length are specified in the DD. For offset plus length specifying more atoms than can be contained in a word, then unspecified results will occur. Note that a decimal IMO can specify any sign convention supported for normal operands. The definition of the commercial software instructions supports post indexing of data at the atom level. The index value is in atoms and it is automatically adjusted to correspond to the data type specified in the DD. During effective operand address generation, the atom index defined in one of seven index registers (R1, R2, . . . R7), may be added to the atom offset in the DD before this is algebraically added as the last step (after any indirection) to the base address. The following general rules will apply to all numeric commercial software instructions: 1. Effective address (EA) always points to the leftmost atom of the operand. 2. Greater than (G) or less than (L) indicators indicate value of result relative to zero except for convert decimal to binary (CDB), and convert binary to decimal (CBD) instructions. 3. If result is shorter than receiving field, the receiving field will be zero filled to the left. 4. Plus and minus zero are allowed on input but will be assumed to be plus zero during instruction execution. All zeros operands generated by the hardware will be plus zeros. 5. A string decimal zero=30 (hexadecimal); a packed decimal zero=0 (hexadecimal). 6. All signed results will have the hardware generated signs, regardless of the sign convention used by the operands at execution time. 7. Zone bits (leftmost four bits of a string decimal number) are always ignored on input. Nevertheless, all results will have the zone bits set to 3 hexadecimal. 8. Operands must not overlap each other, otherwise unspecified results will occur. 9. Identical overlapping operands are allowed. 10. If a trap condition is disabled the trap will not occur and the receiving field will be altered. 11. Mixing of operands (i.e., string or packed decimal) is allowed. 12. Operands having different signs conventions are allowed. 13. Any operand having a zero length or a length equal to just the sign character, and specifying separate sign, will result in an illegal specification (IS) trap condition. 14. Overflow (OV) indicator is set if the receiving field cannot accept all significant digits of the result. 15. If either operand has an illegal digit or an illegal sign, then an illegal character (IC) trap will result. 16. If a negative operand or a negative result is to be stored in an unsigned receiving field, then the sign fault indicator is set and no trap occurs. 17. If DD2 specifies IMO in any instruction except a decimal compare (DCM) instruction, an IS trap will occur. 18. If DD3 specifies IMO, an IS trap will occur. 19. Whenever a trap occurs, the state of the indicators used by the trapped instruction will be unspecified. 20. The original operands preservation cannot be guaranteed whenever either an unavailable resource trap or a bus or memory error trap occurs. If any other trap occurs, then the operands will remain 21. If the receiving field cannot accept all significant digits of the result and commercial instruction mode register (CM) specifies that no trap be generated then the receiving field will contain only low order digits (i.e., the high order digits will be lost). The numeric commercial software instructions of interest are described below. Decimal add (DAD) adds the contents of DD1 (i.e., the operand pointed to by data descriptor (1) to the contents of DD2 and the result replaces the contents of DD2 (i.e., [DD2]+[DD1] stored in [DD2], augend+addend=sum). The overflow (OV), sign fault (SF), greater than (G) and less than (L) commercial indicators will be set as a function of the result. Decimal subtract (DSB) subtracts the contents of DD1 from contents of DD2 and the result replaces the contents of DD2 (i.e., [DD2]-[DD1] stored in [DD2], minuend-subtrahend=difference). The OV, SF, G and L commercial indicators are set as a function of the result. Decimal multiply (DML) multiplies the contents of DD2 by the contents of DD1 and the result replaces the contents of DD2 (i.e., [DD2]*[DD1] stored in [DD2], multiplicand×multiplier=product). Decimal divide (DDV) divides the contents of DD2 by the contents of DD1 and the resulting quotient is stored in the contents of DD3 and the remainder replaces the contents of DD2 (i.e., [DD2]/[DD1] stored in [DD3] and remainder stored in [DD2], dividend/divisor=quotient and remainder). The OV, SF, G, and L commercial indicators are set as a function of the result. Overflow occurs if quotient is larger than DD3 receiving field length or if a divide by zero is attempted. The sign of the quotient is determined by rules of algebra and sign of remainder is equal to sign of the dividend unless the remainder is zero. Decimal compare (DCM) makes an algebraic comparison of the contents of DD1 to contents of DD2 and sets commercial indicators to indicate if contents of DD1 are equal, greater, or less than contents of DD2 (i.e., [DD1]::[DD2] results to commercial indicators). The G and L commercial indicators are set as function of comparison. Convert binary to decimal (CBD) moves and converts the contents of binary DD1 and places the decimal string or packed result in the contents of DD2 (i.e., [DD1] converted and stored in [DD2]). The OV and SF commercial indicators are set as a function of the result. Convert decimal to binary (CBD) moves and converts the contents of a decimal (string or packed) DD1 and places the binary result in the contents of DD2 (i.e., [DD1] converted and stored in [DD2]). The OV commercial indicator is set as a function of the result. Before describing the manner in which the commercial instruction logic 28 is used to perform the various decimal operations, the structure of commercial instruction logic 28 will be described in greater detail with respect to FIGS. 10A through 10D. FIG. 10A primarily shows the structure of RAM 1 81 and its associated addressing logic, FIG. 10B primarily shows the structure of RAM 2 and its associated addressing logic, FIG. 10C primarily shows the structure of decimal adder/subtractor PROM 84 along with its inputs and outputs and decimal indicators 85, and FIG. 10D primarily shows the structure of CIL control area 100. In FIGS. 10A through 10D, the same reference numerals used previously are shown in addition to the detailed circuitry required for the operation of the hardware mechanism. In FIGS. 10A through 10D the little circles at some of the inputs and outputs of the various elements are used to represent inverting inputs or outputs respectively. RAM 1 LOGIC DETAILS The operation of RAM 1 81 will now be described with reference to FIG. 10A. In the preferred embodiment, transceivers 97 is comprised of two Texas Instruments (TI) type SN74LS245 octal bus transceivers with three-state outputs as described in their publication, The TTL Data Book for Design Engineers, Second Edition, copyright 1976 by Texas Instruments, Incorporated of Dallas, Texas, which reference is incorporated herein by reference. Transceivers 97 operates such that when the direction input signal CIPINN-- at its DIR input is a binary ZERO, the data at the D inputs is transmitted to the Q outputs (i.e., the signals on line CPBX00+ through CPBX15+ is transmitted to the lines CIPB00+ through CIPB15+) and when the DIR signal is in the binary ONE state, the data at the Q inputs is transmitted to the D outputs such that the signals on line CIPB00+ through CIPB15+ is transmitted to line CPBX00+ through CPBX15+. Signals CPBX00+ through CPBX15+ come from processor bus 37 in microprocessor 30. The direction signal CIPINN- comes from a decode of microinstruction bits RDDT52 through RDDT55 as performed by decoder 105 (see FIG. 10D). The D and Q inputs and outputs of transceivers 97 are isolated whenever the function (F) input is a binary ONE as determined by signal CINOUT- at the output of NAND gate 110. Signal CINOUT- is produced by NANDing together signal PHASEA- and signal CINOUT+. Signal PHASEA- is produced by inverting clocking signal PHASEA+ from microprocessor 30 via inverter 112. Signal CINOUT+ is produced by combining signal CIPOUT- and CIPINN- from decoder 105 via NAND gate 111. Signals CIPB00+ through CIPB15+ are wire-ored together at point 113 with signals CD2L00+ through CD2L15+ to produce signals CIPI00+ through CIPI15+. Thus, transceivers 97 can input to data-in register 98 the data from processor bus 37 or can transmit to processor bus 37 the data from RAM 2 data register 88. When data-in register 98 is clocked by clocking signals CINNLD+ at its clock (C) input transitioning from the binary ZERO to the binary ONE state, the data at its D0 through D15 inputs is latched into the register and will appear at the Q0 through Q15 outputs as signals CBUS00+ through CBUS15+. Thus, data-in register 98 can store either the output of transceivers 97 when it is enabled or the output of RAM 2 data register 89 when it is enabled (see FIG. 10B). Load control signal CINNLD+ is produced by AND gate 108 combining signal CINOUT+ from NAND gate 111 and clocking signal PHASEB+ from microprocessor 30. In the preferred embodiment, RAM 1 81 is comprised of four type AM74S189 manufactured by Advanced Micro Devices, Inc. of Sunnyvale, California, and described in their publication entitled, Advanced Micro Devices Bipolar/MOS Memories Data Book, copyrighted 1982, which is incorporated herein by reference, random access memories having 16 words of 4 bits each with three-state outputs. The four address (ADR) inputs are connected to RAM 1 address counter 75 to receive signals CAD100+ through CAD103+ such that the same four-bit word will be accessed in each of the RAMS. The memory data inputs D0 through D3 are connected to receive four bits from data-in register 98 such that RAM 1-0 81-0 receives bits CBUS00+ through CBUSO3+, RAM 1-1 81-1 receives bits CBUS04+ through CBUS07+, RAM 1-2 81-2 receives bits CBUS08+ through CBUS11+ and RAM 1-3 81-3 receives bits CBUS12+ through CBUS15+. Each RAM chip is write enabled by signal CWROP1- at its write enable (WE) input. The Q outputs of each RAM 1 chip are individually enabled under the control of the F input such that at any given time only one of the four RAMS will have its four bits output enabled. The output of RAM 1-0 81-0 is controlled by signal CSEL10- from AND gate 76-3, RAM 1-1 81-1 has its outputs controlled by signal CSEL11- from AND gate 76-4, RAM 1-2 81-2 has its outputs controlled by signal CSEL12- from AND gate 76-5 and RAM 1-3 81-3 has its outputs controlled by signal CSEL13- from AND gate 76-6. AND gates 76-3 through 76-6 all have signal CWROP1- at one input. The second input of each of these AND gates comes from decoder 76-2 as signal CRD100- through signal CRD103-. Of these four signals, only one will be in the binary ONE state at any given time such that only one of signals CSEL10- through CSEL13- will be in the binary ZERO state at any given instance thereby allowing the Q outputs of RAM 1-0 through RAM 1-3 to be wire-ored together at point 114. The four bit word which is enabled at the output of one of RAM 1-0 through RAM 1-3 is determined by nibble out control 76 logic which is comprised of RAM 1 nibble counter 76-1, decoder 76-2 and AND gate 76-3 through 76-6. The 16-bit word which is written in four bits into each of RAM 1-0 through RAM 1-3 is controlled by RAM 1 address counter 75. By controlling RAM 1 in this manner, a 16-bit word can be written into RAM 1 in the word addressed by RAM 1 address counter 75 and a 4-bit nibble can be output as selected by RAM 1 address counter 75 and nibble-out control 76. RAM 1 81 is write enabled by signal CWROPl- from AND gate 115. AND gate 115 combines the clocking signals PHASEA+ and PHASEB+ from microprocessor 30 along with signal CWRT01+ which is output by AND gate 116. AND gate 116 combines signal CIPCOD+ and signal CROS41+. Signal CROS41+ comes from microinstruction bit RDDT41 and signal CIPCOD+ is derived by combining microinstruction bits CROS35+ through CROS37+ in NAND gate 117 the output of which is signal CIPCOD- which is inverted by inverter 118 to produce signal CIPCOD+. In the preferred embodiment, RAM 1 address counter 75 and RAM 1 nibble counter 86-1 are TI type SN74LS169 synchronous 4-bit up/down counters as described in The TTL Data Book for Design Engineers. Both counters are clocked each microinstruction when signal PHASEB- transitions from the binary ZERO to the binary ONE state. Signal PHASEB+ is derived by inverting clocking signal PHASEB+ from microprocessor 30 by use of inverter 119. When both count enable inputs P and T are in the binary ZERO state, the clocking of the counter will result in the counter being incremented by one or decremented by one depending upon the state of the up/down (U/D) input. RAM 1 address counter 75 is count enabled by signal CNTOPl- being in the binary ZERO state and RAM 1 nibble counter 76-1 is count enabled by signal CNTCT1- being in the binary ZERO state. RAM 1 address counter 75 counts up when signal CDWN01- is a binary ONE and counts down when that signal is a binary ZERO. Similarly, this RAM 1 nibble counter 76-1 counts up when signal CTDCT1- is a binary ONE and counts down when that signal is a binary ZERO. Signal CDWN01- is output by AND gate 120 which combines signals CDAD01- from decoder 106 and signal CTDCT1- from decode PROM 104 both of which are derived from decoding various bits of the microinstruction word (see FIG. 10D). RAM 1 address counter 75 is loaded with bits CBUS13+ through CBUS10+ when signal CLDAD1- at the load (L) input is in the binary ZERO state. Signal CLDAD1- is derived by decoding microinstruction bits via decoder PROM 104 (See FIG. 10D). Similarly, RAM 1 nibble counter 76-1 is loaded with bits CBUS15+ and CBUS14+ when under the control of signal CLDCT1- at its load (L) input. Signal CLDCT1- also comes from decode PROM 104 (see FIG. 10D). The enabling of RAM 1 nibble counter 76-1 is controlled by signal CNTCT1- which is the output of AND gate 121. AND gate 121 combines signal CTUCT1- and signal CTDCT1- both of which are derived from the microinstruction bits being decoded by decode PROM 104. The 2-bit output of RAM 1 nibble counter 76-1 which allows it to count from 0 through 3 are signal CNT101+ and CNT100+. These two signals are decoded by decoder 76-2 such that the one chip of RAM 1-0 through RAM 1-3 will have its outputs enabled which are ORed together by wire-or 114 to produce signal CRAM10- through CRAM13-. These signals are input to RAM 1 zero multiplexer 82 to provide one of two inputs to be selected between. The other inputs to RAM 1 zero multiplexer 82 is a binary ONE which when inverted produces a binary ZERO at its output if that input is selected. RAM 1 zero multiplexer 82 is always enabled by the binary ZERO appearing at its enable (F) input. The selection between the two inputs is done under the control of select (SEL) input CROS47- which is derived by inverting microinstruction bit CROS47+ by inverter 122. The output of RAM 1 zero multiplexer 82, signals COP100+ through COP103+, is input into double multiplexer 83 (see FIG. 10C). Gates 123 through 128 control the enabling of RAM 1 address counter 75 which can be enabled to either count up or count down each time a microinstruction is executed. Gate 120 controls the direction in which the address counter 75 will be counted, whether it is up or down. The inputs to AND gate 120 are signals CTDCT1- and CDAD01-. Signal CTDCT1- will be a binary ONE if a CTDCT1 microoperation is included within the microinstruction thereby indicating that RAM 1 nibble counter 76-1 is to be decremented (i.e., counted down) and signal CDAD01- will be a binary ZERO if a CDAD01 microoperation is included in a microinstruction thereby indicating that RAM 1 address counter 75 is to be decremented. Thus, the output of AND gate 120, signal CDWN01-, will be a binary ZERO if either RAM 1 nibble counter 76-1 or RAM 1 address counter 75 is to be counted down, thus setting RAM 1 address counter 75 into the decrement mode by providing a binary ZERO signal at its U/D input. The enabling of RAM 1 address counter 75 is done by control signal CNTOPl- being in the binary ZERO state at the output of AND gate 128. There are three conditions which will cause the output of AND gate 128 to be a binary ZERO and thus enable RAM 1 address counter 75 to be either decremented or incremented depending upon the state of the U/D input. If either a increment address 1 microoperation (CIAD01) or a decrement address 1 microoperation (CDAD01) is encoded in the microinstruction then signal CIAD1- will be a binary ZERO or signal CDAD01- will be a binary ZERO and the output of AND gate 127, signal CNTWD1-, will be a binary ZERO and thereby the cause the output of AND gate 128 to be a binary ZERO and enable the incrementing or decrementing of RAM 1 address counter 75. Thus, if the microinstruction contains either an increment or decrement address 1 microoperation, RAM 1 address counter 75 will be enabled. The other two conditions which enable RAM 1 address counter 75 are if RAM 1 nibble counter 76-1 has been decremented to a count of 0, or has been incremented to a count of 3 thus requiring that the word address be decremented or incremented, respectively, by 1. The output of OR gate 123, signal CNT1E0- , will be a binary ZERO if the output of RAM 1 nibble counter 76-1 is a binary ZERO. The binary ZERO of signal CNT1E0- at one input of OR gate 124 will cause signal CNTDN1- to be a binary ZERO if its other input, signal CTDCT1-, is a biinary ZERO. Signal CTDCT1- from decode PROM 104 (see FIG. 10D) will be a binary ZERO if a count down counter 1 microoperation (CTDCT1) is present in the microinstruction, or if a count down both counter 1 and counter 2 microoperation (CTDALL) is present in the microinstruction. Thus, if when counting down the output of RAM 1 nibble counter 76-1 is a binary 00, signal CNTDN1- will be a binary ZERO and cause the output of AND gate 128 to become a binary ZERO and enable the counting of RAM 1 address counter 75. Similarly, if counting up and the nibble counter reaches a count of 3, the output of NAND gate 125, signal CNT1E3-, will become a binary ZERO and if counting up, as indicated by signal CTUCT1- being a binary ZERO, the output of OR gate 126, signal CNTUPl-, will become a binary ZERO and cause the output of AND gate 128 to become a binary ZERO and enable RAM 1 address counter 75. Signal CTUCT1-, which is input to OR gate 126, comes from decode PROM 104 (see FIG. 100) and will be a binary ZERO if either a count up counter 1 microoperation (CTUCT1) or a count up both counter 1 and counter 2 microoperation (CTUALL) is present in the microinstruction. From this discussion it can be appreciated that RAM 1 address counter 75 is enabled if the address counter 75 is to be incremented or decremented by 1 as determined by a microinstruction and thus incremented to address the next word in RAM 1 81 or if counter 1 or counters 1 and 2 are being incremented or decremented and the output of the nibble counter 76-1 is equal to 3 when counting up or equal to 0 when counting down so that the next nibble can be addressed from the next word. RAM 1 nibble counter 76-1 is enabled to count by signal CNTCT1- at the output of AND gate 121 which will be a binary ZERO if either of its inputs, signal CTUCT1- or CTDCT1-, are a binary ZERO and the state of these inputs is controlled by the count up and count down microoperations as mentioned before. The counting direction of RAM 1 nibble counter 76-1 is controlled by signal CTDCT1- and will be a binary ZERO indicating that the counter is to be decremented if a count down counter 1 microoperation (CTDCT1) or a count down counter 1 and counter 2 operation (CTDALL) is present in the From the above discussion it can be appreciated that the logic in FIG. 10A permits commercial instruction logic 28 to either receive data from or send data to processor bus 37 via transceivers 97, to latch data into into data-in register 98 from either processor bus 37 or RAM 2 data register 88 and that RAM 1 81 can be addressed under the control of RAM 1 address counter 75 such that a 16-bit word can be written into RAM 1 81 or a 4-bit nibble can be read from RAM 1 81 under the control of RAM 1 address counter 75 and RAM 1 nibble counter 76-1. Further, RAM 1 address counter 75 can be loaded from data-in register 98 as can RAM 1 nibble counter 76-1 and that RAM 1 address counter 75 can be incremented or decremented and that RAM 1 nibble counter 76-1 can be incremented or decremented and that incrementing through a count of 3 will also result in the incrementing of RAM 1 address counter 75 and a decrementing through a count of 0 will result in the decrementing of RAM 1 address counter 75. Further, RAM 1 zero multiplexer 82 allows either the 4-bit nibble from RAM 1 81 to be passed on to an input of double multiplexer 83 or a zero nibble to be passed on to double multiplexer 83. RAM 2 LOGIC DETAILS FIG. 10B illustrates the logic associated with RAM 2 96. RAM 2 96 differs from RAM 1 81 in that in addition to being able to write a whole word of 16 bits into RAM 2 with the address specified by RAM 2 address counter 87, a single 4-bit nibble can be written into RAM 2 96 with the address of the word being specified by RAM 2 address counter 87 and the one of four nibbles being selected by nibble write control 86. In addition, unlike RAM 1 81 which outputs only a single nibble at a time, RAM 2 96 always outputs a full 16-bit word into RAM 2 data register 88 and one of four from that word is selected by RAM 2 nibble multiplexer 89. In the preferred embodiment, RAM 2 96 is comprised of four random access memory chips of the type 93422, manufactured by Fairchild Camera and Instruments Corporation of Mountain View, Calif., and described in their publication, Bipolar Memory Data Book, copyrighted 1979, which is incorporated herein by reference. The word within RAM 2 96 which is to be either written or read is controlled by connecting the output of RAM 2 address counter 87, signal CAD200+ through CAD207+, to the address inputs of RAM 2-0 chip 96-0, RAM 2-1 chip 96-1, RAM 2-2 chip 96-2 and RAM 2-3 chip 96-3. The outputs of RAM 2 chips 96-0 through 96-3 are enabled by connecting the function (F) input of each chip to a binary ZERO. The write enable (WE) input of RAM 2 chips 96-0 through 96-3 are controlled by signals CWR200- through CWR203- thus allowing each chip to be individually write enabled or all chips to be write enabled as will be described below. The chip select (CS) inputs of RAM 2 chips 96-0 through 96-3 are enabled by connecting the inverted input to a binary ZERO and the noninverted input to a binary ONE thus enabling the chips at all times. The four data input bits of the RAM 2 chips are connected to the output of their corresponding nibble multiplexer such that, for example, the four data inputs to RAM 2-0 chip 96-0 are connected to receive signal C2IN00+ through C2IN03+, which are the outputs of nibble 0 multiplexer 92. The outputs of nibble multiplexer 92 through 95 are always enabled by connecting the function (F) input of each multiplexer to a binary ZERO. The zeroth inputs of nibble multiplexers 92 through 95 are connected to receive a different four bits from data-in register 98 such that nibble 0 multiplexer 92 is connected to receive bits CBUS00+ through CBUS03+, nibble 1 multiplexer 93 is connected to receive signals CBUS04+ through CBUS07+ and similarly nibble 2 multiplexer 94 receives signals CBUS08+ through CBUS11+ and nibble 3 multiplexer 95 receives signals CBUS12+ through CBUS15+. The selection between the zeroth input and first input of nibble multiplexers 92 through 95 is controlled by signal CWROP2- at their select (SEL) input. Signal CWROP2- at the output of NAND gate 130 will be a binary ZERO if a write all of RAM 2 microoperation (CWROP2) is present in the microinstruction thereby causing signal CROS40+ to be a binary ONE and signal CIPCOD+ to also be a binary ONE. Thus, when a write all of RAM 2 microoperation is present, nibble multiplexers 92 through 95 will select as input to RAM 2 chips 96-0 through 96-3 the output of data-in register 98. When signal CWROP2- is a binary ZERO, the output of NAND gates 131 through 134, signals CWR200+ through CXWR203+, will all be binary ONEs and partially enable NAND gates 135 through 138, respectively. The output of NAND gates 135 through 138, signals CWR200- through CWR203-, will become binary ZEROs and write enable RAM 2 chips 96-0 through 96-3 when clocking signals PHASEA+ and PHASEB+ from microprocessor 30 become binary ONEs. Thus, it can be appreciated when a write all of RAM 2 microoperation (CWROP2) is executed, RAM 2 chips 96-0 through 96-3 will all be enabled and a different 4-bit nibble from data-in register 98 will be written into each chip via nibble multiplexers 92 through 95. Input 1 of nibble multiplexers 92 through 95 are each connected to receive the four signals CRES00+ through CRES03+ which is output from result/zone multiplexer 91 (see FIG. 10C). The first input of nibble multiplexers 92 through 95 is selected if signal CWROP2- is a binary ONE indicating that a write all of RAM 2 microoperation is not present in the microinstruction. If a write nibble to RAM 2 microoperation (CWRES2) is present in a microinstruction, signal CROS39+ at one input of NAND gate 139 will be a binary ONE as will signal CIPCOD+ at the other input of NAND gate 139 thus causing its output, signal CWRES2-, to become a binary ZERO and enable decoder 86-2. Decoder 86-2 decodes the nibble count from RAM 2 nibble counter 86-1 thus enabling one of its four outputs, signal CWR200- through CWR203- to become a binary ZERO with the other three outputs remaining in the binary ONE state. Whichever signal of CWR200- through CWR203- becomes a binary ZERO, it will disable one of NAND gates 131 through 134 and thereby cause the output of one of the NAND gates to become a binary ONE which will enable one of NAND gate 135 through 138 and cause one of their outputs, signal CWR200- through CWR203-, to become a binary ZERO when clocking signals PHASEA+ and PHASEB+ from microprocessor 30 become binary ONEs. With one of NAND gates 135 through 138 fully enabled and the other three NAND gates disabled. One of signals CWR200- through CWR203- will become a binary ZERO and write enable its corresponding RAM 2 chip 96-0 through 96-3 and thereby allow one nibble to be written into the word addressed by RAM 2 address counter 87 and the nibble addressed by RAM 2 nibble counter 86-1. As indicated before, the source of the single nibble to be written into RAM 2 96 comes from the output of result/zone multiplexer 91 as signals CRES00+ through CRES03+. The word to be written into or read from RAM 2 96 is determined by the output of RAM 2 address counter 87 bit CDA200+ through CDA207+. RAM 2 address counter 87 and RAM 2 nibble counter 86-1 are TI type SN74 LS169 4-bit up/down synchronous counters. RAM 2 nibble counter 86-1 is connected to RAM 2 address counter 87 such that when the counter is counting up a count of 3 in the RAM 2 nibble counter 86-1 will result in an incrementing of RAM 2 address counter 87 and when counting down a count of 0 in RAM 2 nibble counter 86-1 will result in the decrementing of RAM 2 address counter 87 in a manner similar to that described for RAM 1 nibble counter 76-1 and RAM 1 address counter 75. The clocking of RAM 2 address counter 87 and RAM 2 nibble counter 86-1 is in response to signal PHASEB- at their clock (C) input transitioning from the binary ZERO to the binary ONE state. RAM 2 address counter 87 can be loaded with signals CBUS13+ through CBUS05+ from data-in register 98 when signal CLDAD2- at the load (L) input is in a binary ZERO state. Signal CLDAD2- which comes from decode PROM 104 will be in the binary zero state if the microinstruction contains a load address 2 microoperation (CLDAD2). Similarly, RAM 2 nibble counter 86-1 will be loaded with the signals CBS15+ and CBUS14+ if signal CLDCT2- is a binary ZERO at its load (L) input. Signal CLDCT2- from decode PROM 104 will be in the binary ZERO state if a load counter 2 microoperation (CLDCT2) is specified in the microinstruction. The enabling of the counting of RAM 2 address counter is controlled by signal CNTOP2- at the P and T count enable inputs. Signal CNTOP2- will be a binary ZERO state if any one of the three inputs to AND gate 148 is a binary ZERO. Signal CNTWD2- which is one input to AND gate 148 will be in the binary ZERO state if either input to AND gate 147 is in the binary ZERO state. One input to AND gate 147 is signal CDAD02- which will be a binary ZERO if a decrement address 2 microoperation (CDAD02) is specified in the microinstruction. Similarly, the output of AND gate 147 will be in the binary ZERO state if an increment address to microoperation (CIAD02) is specified in the microinstruction which will cause signal CIAD02- to be in the binary ZERO state at the output of decode PROM 104 (see FIG. 10D) which is input to AND gate 147. The other two conditions that can cause the enabling of RAM 2 address counter to count up or down is if the output of OR gate 146, signal CNTUP2-, or the output of OR gate 144, signal CNTDN2-, is a binary ZERO. The output of OR gate 146 will be a binary ZERO if the output of RAM 2 nibble counter 86-1 is a 3 and a count up counter 2 microoperation (CTUCT2) is specified which will cause signal CTUCT2- to be in the binary ZERO state. Similarly, the output of OR gate 144 will be in the binary ZERO state if the output of RAM 2 nibble counter 86-1 is a 0 which will cause the output of OR gate 143, signal CNT2E0-, to be a binary ZERO and if signal CTDCT2- is a binary ZERO indicating that a count down counter 2 microoperation (CTDCT2) has been specified in the microinstruction. RAM 2 nibble counter 86-1 is enabled to count if either a count up counter 2 microoperation (CTUCT2) or a count down counter 2 microoperation (CTDCT2) has been specified such that signal CTUCT2- will be a binary ZERO or signal CTDCT2- at the input of AND gate 142 will be a ZERO which in turn will cause the output thereof, signal CNTCT2-, to be a binary ZERO. The direction of counting of RAM 2 address counter 87 is controlled by the output of AND gate 140, signal CDWN02-, which will be a binary ZERO either a count down address 2 or a count down counter 2 microoperation has been specified (CDAD02 or CTDCT2) such that either signal CDAD02- or signal CTDCT2- is a binary ZERO. The up/down direction of counting of RAM 2 nibble counter 86-1 is controlled by signal CTDCT2- which will be in the binary ZERO state and cause the counter 2 count down if a count down counter 2 microoperation (CTDCT2) has been specified in the microinstruction. If a write of a single nibble is microoperation (CWRES2) is specified in the microinstruction, signal CROS39+ will be a binary ONE and signal CIPCOD+ will be a binary ONE at the inputs of NAND gate 139 causing the output thereof, signal CWRES2-, to be a binary ZERO and enable decoder 86-2 such that one output will be in the binary ZERO state and the other three outputs will be in the binary ONE state and thereby enable three of NAND gate 131 through 134 and disable the remaining NAND gate such that only one of the output signals of NAND gates 135 through 138 will be a binary ZERO to write enable one of RAM 2 chips 96-0 through 96-3 as described above. The data read from RAM 2 96 will be enabled on to the Q outputs of RAM 2 data register 88 whenever clocking signal PHASEB- is a binary ONE at the C input and the output control signal CIPINN+ is a binary ZERO at the F input. RAM 2 data register 88 is a transparent latch which means that the outputs will follow the inputs as long as the C input remains in the binary ONE state and the outputs will be latched at the level of the inputs whenever the C input becomes a binary ZERO. When the F input signal becomes a binary ONE, the state of the C input is ignored and the Q outputs assume a high impedance state. This permits the F input to control the latching of the data into RAM 2 data register 88 such that whenever a microoperation is specified which indicates that the data is to be taken from processor bus 37 into transceivers 97, for example, when a CIPINN or a CINDA2 microoperation is specified, signal CIPINN- from decode PROM 105 will be in the binary ZERO state and cause the output of inverter 149, signal CIPINN+, to be in the binary ONE state at the F input of RAM 2 data register 88. Each 4-bit nibble grouping of output signals from RAM 2 data register 88 is one of the four data inputs to RAM 2 nibble multiplexer 89 such that the four bits output from RAM 2-0 chip 96-0 are input into the zero input and the four bits output by RAM 2-1 chip 96-1 are input into the first input, etc. RAM 2 multiplexer 89 receives selection signals CNT200+ and CNT201+ at the select (SEL) inputs from the output of RAM 2 nibble counter 86-1 such that one of four 4-bit input groups will be enabled on to the Q output to produce signal COP200+ through COP203+. The nibble output by RAM 2 nibble multiplexer 89 goes to RAM 2 zero multiplexer 90 and double multiplexer 83 (see FIG. 10C). The output of RAM 2 nibble multiplexer 89 is always enabled because of the binary ZERO at the function (F) input. The 16 bits output by RAM 2 data register 88, signals CD2L00+ through CD2L15+, besides going to RAM 2 nibble multiplexer 89 also go to transceivers 97 and data-in register 98 as described FIG. 10C illustrates the decimal add/subtract PROM 84, decimal indicators 85, sign generator PROM 78 and associated multiplexers. In the preferred embodiment, decimal add/subtract PROM 84 is a type 82S191 PROM manufactured by Signetics Corporation of Sunnyvale, Calif. PROM containing 2048 8-bit words and described in their publication, Signetics Bipolar Memory Data Manual 1982, copyright 1982, whlch is incorporated herein by reference. Also in the preferred embodiment, sign detector PROM 78 is a type 82S137 PROM, also manufactured by Signetics Corporation, having 1024 4-bit words and is described in their above named publication. Decimal add/subtract PROM 84 (also referred to as the decimal ALU) is encoded as shown in Table 3 such that the first four address bits (bits having a binary weight of 1, 2, 4 and 8) normally receive operand 1 which comes from the output of double multiplexer 83 as signals COP123+ through COP120+. These four address bits are also referred to as the A port of decimal ALU 84 because they receive the first of two decimal numeric inputs upon which the decimal arithmetic operation is performed. The next four address bits, bits having a binary weight of 16 through 128, come from the output of RAM 2 zero multiplexer 90 as signals COP223+ through COP220+ which normally will be operand 2 from RAM 2. These four bits are also referred to as the B port of decimal ALU 84 because they receive the second of two decimal numeric inputs upon which the decimal arithmetic operation is performed. The bit having a binary wait of 256 receives the carry-in signal KCARRY+ and the address bit having a binary weight of 512 recei;es signal CIPSUB- from decoder 106 which when a binary ZERO indicates that a subtract operation is to be performed and when a binary ONE indicates that an addition operation is to be performed. The eleventh address bit of PROM 84 having a binary weight of 1024 is connected to a binary ZERO and therefore results in only the first 1024 words of PROM 84 being addressed. The inverted function (F) input receives a binary ZERO and the two noninverted inputs receive a binary ONE thereby providing that the 8-bit word outputs Q0 through Q7 are continually enabled. The 8-bit word read from decimal add/subtract PROM 84 as shown in Table 3 contains the 4-bit result of adding (or subtracting) the 4-bit operand 1 to the 4-bit operand 2 in bits 0 through 3 as signal CANS03+ through CANS00+ and contain the four indicator bits as signal COPEQ9+ for controlling the equal nine indicator, signal COPEQ0+ for controlling the equal zero indicator, signal CILLEG+ for controlling the illegal nibble indicator and signal CCARRY+ for controlling the carry out indicator. As discussed above, double multiplexer 83 is used to select the operand 1 input to PROM 84 from either RAM 1 in which case signals COP103+ through COP100+ are selected, or from the output of RAM 2 multiplexer 89 which receives a nibble from RAM 2. If the nibble from RAM 2 multiplexer 89 is to be selected as operand 1, then signals COP203+ through COP200+ are selected to be gated on to the Q outputs as signals COP123+ through COP120+. Double multiplexer 83 is continually enabled by the binary ZERO at its function (F) input and the selection between the zeroth input and the first input is done under the control of signal CROS38+ at the select (SEL) input. Signal CROS38+ comes from bit RDDT38 and is a binary ONE if a double output microoperation (CIPDUB) is encoded in the microinstruction word. The operand 2 input of decimal add/subtract PROM 84 comes from the output of RAM 2 zero multiplexer 90. RAM 2 zero multiplexer selects between the binary ZEROs at the zeroth input or the output of RAM 2 nibble multiplexer 89 which is signal COP203+ through COP200+. The outputs of RAM 2 zero multiplexer 90 are always enabled by the binary ZERO at the function (F) input. The selection between the binary ZERO and the output of RAM 2 nibble multiplexer to be enabled on to the Q outputs is done by signal CROS46+ which is inverted by inverter 160 to produce signal CROS46- at the select (SEL) input of RAM 2 zero multiplexer 90. Signal CROS46+ is derived from bit RDDT46 of the microinstruction word which is a binary ONE if the inhibit RAM 2 microoperation (CINOP2) is encoded in the microinstruction word. Result/zone multiplexer 91 is used to select between a zone nibble containing a hexadecimal 3 as the zeroth input or the 4-bit decimal result coming from decimal add/subtract PROM 84 as signals CANS03+ through CANS00+ at the first input. Result/zone multiplexer 91 is always enabled by the binary ZERO at its function (F) input. Selection between the zone nibble and the decimal result nibble is done under the control of signal CWZONE- which at the select (SEL) input which will be a binary ZERO is output by decode PROM 105 when a write zone to RAM 2 microoperation (CWZONE) is encoded in the microinstruction word. The output of result/zone multiplexer 91, signal CRES03+ through CRES00+, are fed back as input to nibble multiplexers 92 through 95 thus allowing either a zone nibble or the decimal result nibble to be written into RAM 2 96. Indicator flip-flops 85-1 through 85-4 are D-type flip-flops and are all clocked by clocking signal KLDFLP- which is connected to their clock (C) input transitioning from the binary ZERO to the binary ONE state. This transition of signals KLDFLP- will occur at the end of each microinstruction to load in the status of the indicators as output by decimal add/subtract PROM 84 when signal PHASEB- at one input of OR gate 168 becomes a binary ONE if signal CLDFLP- is a binary ZERO at the other input of OR gate 168. Signal CLDFLP- comes from decode PROM 105 and will be a binary ZERO enabling the clocking of indicator flip-flops 85-1 through 85-4 if a load indicator's microoperation (CLDFLP) is present in the microinstruction. If the load indicator's microoperation is not encoded within the microinstruction, signal CLDFLP- will be a binary ONE and maintain signal KLDFLP- in the binary ONE state thus inhioiting its transition from the binary ZERO to the binary 3NE state which is necessary in order to clock the indicator flip-flops 85-1 through 85-4. Both the equal zero indicator flip-flop 85-1 and the equal nine indicator flip-flop 85-2 are arranged in a similar manner such that during certain arithmetic operations both flip-flops can be preset so that their Q output signal will be in the binary ONE state and will be reset if their corresponding output signal from decimal add/subtract PROM 84 becomes a binary ZERO thereafter. For example, equal zero indicator flip-flop 85-1 is initially set by signal CRESTC- at its set (S) input becoming a binary ZERO whizh will result in its Q output signal LOPEQ0+ becoming a binary ONE at one input to AND gate 165. Thereafter, if the loading of flip-flops is permitted by a load indicator's microoperation (CLDFLP), equal zero indicator flip-flop 85-1 will be clocked at the end of the microinstruction and the output of AND gate 165 will be determined by te COPEQ0+ signal from decimal add/subtract PROM 84 which is the other input to AND gate 165. If the 8-bit word retrieved from decimal add/subtract PROM 84 contains a binary ONE in the equal zero bit position, when equal zero indicator flip-flop 85-1 is clocked the binary ONE at the data (D) input will be clocked into the flip-ilop and the flip-flop will not change state. However, if the signal COPEQ0+ is a binary ZERO, the output of ANO gate 165, signal KOPEQ0+, will be a binary ZERO and when clocked into flip-flop 85-1 will result in its output beirg reset and its output signal LOPEQ0+ becoming a binary ZERO. Thereafter, because one input of AND gate 165 is now a binary ZERO, the flip-flop will remain in the reset state independent of the binary state of the equal zero bit from decimal add/subtract PROM 84. In this manner, the equal zero indicator flip-flop 85-1 and the equal nine flip-flop 85-2 are known as integrating indicator flip-flops in that the indicator is initially placed in a first state and once it is changed to a second state will remain in the second state independent of the fact that the data input may return to the first state. Equal nine indicator flip-flop 85-2 is similarly constructed in that it outputs signal LOPEQ9+ is fed into one input of AND gate 166 which receives its other input from decimal add/subtract PROM 84 as signal COPEQ9+. Both indicator flip-flops 85-1 and 85-2 are initially set by signal CRESTC- becoming a binary ZERO at their set inputs. both flip-flop 85-1 and 85-2 have their reset (R) inputs fixed to receive a binary CNE signal. The illegal digit (nibble) indicator flip-flop 85-3 is constructed in a manner similar to equal zero and equal nine indicator flip-flops 85-1 and 85-2 except that illegal digit indicator flip-flop 85-3 is initially preset to the binary ZERO state causing its Q output, signal KILLEG+, to become binary ZERO which is fed back into OR gate 167. Thereafter, if the illegal digit indicator bit from decimal add/subtract PROM 84, signal CILLEG+, becomes a binary ONE indicating that the result of the addition or subtraction has produced an illegal digit because one of the two operands contains an illegal digit (see Table 3), then illegal digit indicator flip-flop 85-3 will become set causing its Q output, signal KILLEG+, to assume the binary ONE state and thereafter insure that the output of OR gate 167, signal CILDIG+, will remain in the binary ONE state until such time as the indicator flip-flop is reset. Illegal digit indicator flip-flop 85-3 has its set (S) input fixed to receive a binary ONE and its reset (R) input receives signal CRLSTB- from OR gate 162 which allows the illegal indicator flip-flop 85-3 to be initially preset to the binary ZERO state. The carry output indi:ator flip-flop 85-4 is not an integrating type flip-flop such that its state will change each time it is clocked independent of its previous state. Therefore, the data input of carry-out indicator flip-flop 85-4 receives its input directly from the carry bit as output by decimal add/subtract PROM 84 as signal CARRY+. As will be seen below, it is sometimes desirable to be able to preset the carry-out indicator flip-flop 85-4 to the set state thereby causi:g its Q output, signal KCARRY+ to be a binary ONE or to be able to preset it to the binary ZERO state and thus presetting its cutput to the signal KCARRY+ to the binary ZERO state. Carry out indicator flip-flop 85-4 is preset to the binary ONE state if signal KSETCA- becomes a binary ZERO and is preset to the binary ZERO state if signal CRESTB- at its reset (R) becomes a binary ZERO. Gates 161 through 164 are used to control the presetting of indicator flip-flops 85-1 through 85-4. OR gates 161 through 163 each receive as one input a signal PHASEB- which is a clocking signal from microprocessor 30. This timing signal when it transitions from the binary ONE to the binary ZERO state is used to set or reset the indicator flip-flops 85-1 through 85-4 depending upon whether the other input of the OR gate is a binary ZERO which is controlled by a microoperation. For example, OR gate 161 receives signal CRESTX- which will be in a binary ZERO state if a reset equal zero and equal nine indicators microoperation (CRESTX) is present in a microinstruction. If this microoperation is present, the output of OR gate 161, signal CRESTA-, will become a binary ZERO when the clocking signal PHASEB- becomes a binary ZERO and thereby force the output of AND gate 164, signal CRESTC-, to the binary ZERO state and result in the presetting of the equal zero indicator flip-flop 85-1 and the equal nine indicator flip-flop 85-2 to the binary ONE state. Signal CRESTX- is produced by decoder 106 as is signal CRESET- at one input of OR gate 163. Signal CRESET- will be in the binary ZERO state if there is a reset all indicators microoperation (CRESET) in the microinstruction. If signal CRESET- is a binary ZERO at OR gate 163, its output, signal CRESTB-, will become a binary ZERO and result in the presetting to the ZERO state of the illegal digit indicator flip-flop 85-3 and the carry out indicator flip-flop 85-4. Because this signal is also the other input to AND gate 164, it will result in signal CRESTC- becoming a binary ZERO and result in the presetting to the binary ONE state of the equal zero indicator flip-flop 85-1 and the equal nine indicator flip-flop 85-2. Thus, it can be seen that a resetting of all indicators results in the setting of flip-flop 85-1 and 85-2 and the resetting of flip-flops 85-3 and 85-4. OR gate 162 produces signal KSETCA- which is connected to the set (S) input of carry out indicator flip-flop 85-4. When this signal becomes a binary ZERO it results in the presetting to the binary ONE state of the carry out flip-flop thus allowing under microinstruction control the setting of the carry out indicator to indicate that a carry out has in fact occurred. This presetting of the carry out indicator is done by use of the set carry indicator microoperation (CSETCA) which results in signal CSETCA- from decode PROM 105 becoming a binary ZERO. Sign multiplexer 77 allows the 4 least significant bits of the address into sign generator PROM 78 to be selected from either the output of double multiplexer 83 which in turn can have its output selected from either a nibble from RAM 1 or a nibble from RAM 2, or alternatively, the 4 bits out of sign multiplexer 78 can come from 4 bits within data-in register 88. Thus, under the selection of the signal CAPACKD- at the select (SEL) input of sign multiplexer 78, either signals COP120+ through COP123+ or signals CBUS04+ through CBUS07+ will be gated on to the Q output as signals CSIN04+ through CSIN07+. Signal CAPCKD- will be a binary ZERO at the select (SEL) input of sign multiplexer 77 and at the address bit having a binary weight of 128 of sign generator PROM 78 if a packed microoperation (CPACKD) is in the microinstruction as determined by decoder 106. In addition to receiving 4 address bits from sign multiplexer 77, sign generator 78 receives 3 address bits as signals CPBUS03+ through CBUS01+ from the output of data-in register 98. The last address bit of sign generator PROM 78 is signal COVPUN- which will be a binary ZERO if there is an overpunch microoperation (COVPUN) in the microinstruction as detected by decoder 106. The output of sign generator PROM 78 is enabled by the output enable (CE) inputs being fixed to a binary ZERO signal. The 4-bit word read from sign generator PROM 78 results in signals COVPUN+, COVPEO+, CSIGNN+ and CSIILL+ which are used to indicate whether the sign is an overpunch sign, is an overpunch sign equal to 0, whether it is a negative sign and whether it is an illegal sign. These four bits are input into the first input of monitor multiplexer 80. The other input of monitor multiplexer 80 receives the output of the decimal indicators 85 such that the output of monitor multiplexer 80 signals MIBGP0- through MIBGP3+ will be enabled when the clocking signal PHASEA+ at the function (F) input becomes a binary ZERO. The selection between the output of sign generator PROM 78 and the output of decimal indicators 85 is done by signal CROS45+ at the select (SEL) input. The first input from sign generator PROM 78 will be selected if there is a sign to microprocessor microoperation (CIPSGN) in the microinstruction which will cause the signal CROS45+ to be a binary ONE. The output of monitor multiplexer 80 is fed to monitor logic 32 which in turn can be used to control branching within the microprocessor 30 thus allowing the branching between microinstructions by microprocessor 30 depending upon the status of decimal indicators 85 or the output of sign generator PROM 78. Sign generator PROM 78 is encoded with the necessary data words such that its 4-bit output can be used to control branching in the microinstruction routine which executes the decimal arithmetic operations. This branching is done early in the execution of a decimal operation to test the signs of operand 1 and operand 2 before a decimal operation is performed. FIG. 10D illustrates CIL control area 100 in detail. ROS special control field register 101 latches in the special control field bits RDDT35 through RDDT47 from the microinstruction word when clocking signal PHASEBtransitions from the binary ZERO to the binary ONE state. The outputs of ROS special control field register 101 are enabled by the binary ZERO at the function (F) input thereof. Signals CROS35+, CROS36+ and CROS37- are input to NAND gate 117 to produce signal CIPCOD- at the output thereof as an indicator as to whether or not subfield A is a binary 110. Signal CROS37- is produced by inverting signal CROS37+ by inverter 109. If subfield A is a binary 110 indicating that special control field subfields B, C and D are to be interpreted as commercial instruction microoperations, signal CIPCOD- will be a binary ZERO and signal CIPCOD+ at the output of inverter 118 will be a binary one. Special control field subfield B bits which correspond to signal CROS38+ through CROS41+ are used as described above to enable various gates and to control the selection of various multiplexers. The special control field subfield C bits which correspond to signal CROS42+ through CROS44+ are input to decoder 106 to produce signal CPACKD- through CRESET- which are used as described above. The output of decoder 106 is controlled by signal CIPCOD- which will cause all the output signals to assume the binary ONE state if special control field subfield A is not a binary 110. Zero register 102 is connected to receive 8 binary ZEROs as input which are enabled onto its outputs as signals CNOP48+ through CNOP55+ if signal CNOPEN- is a binary ZERO at the function (F) input. The clock input of zero register 102 is fixed to a binary ONE. Signal CNOPEN- will enable the output if the address presented to ROS 24 is less than 2K. The output of zero register 102, signals CNOP48+ through DNOP55+, are wire-ored together with the output of ROS 24, signals RDDT48+ through RDDT55+ at wire-or 129 to produce signals KROS48+ through KROS55+. Signals RDDT48+ through RDDT55+ are enabled into wire-or 129 only if the address into ROS 24 is greater than 2K. The data at the input of ROS CIL register 103 is clocked by clocking signal PHASEB- from microprocessor 30. Signals CROS48+ through CROS51+ which correspond to subfield E of the microinstruction word are decoded by use of decode PROM 104 by using the four signals as addressing bits. Signals CROS52+ through CROS55+, which correspond to subfield F of the microinstruction word, are decoded by use of decode PROM 105 by using the four signals as addressing bits. The fifth address bit of decode PROM 104 and decode PROM 105 is tied to a binary ZERO so that the upper 16 words in each decode PROM is not used. The output of both decode PROM 104 and decode PROM 105 are enabled by the binary ZEROs at their function (F) input. Decode PROM 104 produces signal CLDAD1- through CTDCT2- and decode PROM 105 produces signals CIPINN- through QLTCTL-, the use of which has been discussed above with reference to FIGS. 10A, 10B and 10C. For the most part, the coding of decode PROM 104 and decode PROM 105 is such that only a single output signal will be in the binary ZERO state and all other outputs will be in the binary ONE state except for the few cases in which parallelism is provided by coding the PROM such that more than one bit within the 8-bit word is a binary ZERO. These cases are indicated in Tables 7 and 8 above. In the preferred embodiment, the decimal arithmetic operations that are performed by CPU 20 are microprogrammed to take advantage of commercial instruction logic 28 to reduce the execution time of the decimal commercial software instructions. For example, the decimal multiply and divide commercial software instructions are sped up by detecting leading (non-significant) zeros in the dividend and divisor and the multiplicand and multiplier, thereby reducing the field lengths which must be used when performing the operations. This leading zero detection is done by using the ability of RAM 1 81 and RAM 2 96 to be addressed from left to right (i.e. most significant to least significant digit) and by use of the decimal equal zero indicator. The decimal add and subtract commercial software instructions are sped up by use of the decimal equal nine indicator to detect cases of oversubtract (i.e., when the sign of the difference changes) and the ability to feed the decimal adder/ subtractor inputs (decimal arithmetic logic unit (ALU) 84 ports A and B) from either RAM 1 81 or RAM 2 96 or zeros are used in performing a ten's complement. The convert decimal to binary commercial software instruction is sped up by using the ability to address RAM 1 81 for left to right to strip leading zeros. The convert binary to decimal commercial software instruction is sped up by using the ability of the decimal adder/subtractor (decimal ALU 84) to have the same data fed to both inputs and by presetting the converted value receiving field to a zero digit and increasing the receiving field length as necessary (when a carry occurs) as the conversion progresses. When CPU 20 executes a commercial software instruction, microprocessor 30 and commercial instruction logic 28 operate in parallel to perform the necessary microoperations under the control of microinstructions stored in ROS 24. Microinstruction bits 0 through 47 are used to control the operation of microprocessor 30 and microinstruction bits 35 through 55 be used to control the operation of commercial instruction logic 28. The use of microinstruction bits 35 through 47 to control microprocessor 30 or commercial instruction logic 29 depend on the value of special control field subfield A. In performing commercial software instructions, microprocessor 30 is used to: read the software instructions from main memory 10, to decode the software instructions, to read the operands from main memory 10, to perform arithmetic, shifting and logical operations on binary data, and write the results of the operation back into main memory 10. During execution of commercial software instructions, commercial instruction logic 28 is used to perform logic and shifting operations on decimal and alphanumeric data. The use of commercial instruction logic 28 will now be discussed with reference to the performance of the decimal addition commercial software instruction (DAD) and the decimal subtraction software instruction (DSB). Both the DAD and DSB software instructions are advantageously performed in the preferred embodiment by use of the decimal equal nine indicator and the ability to feed either port of the decimal ALU 84 from either RAM 1 81, RAM 2 96 or zeros. During a DAD or DSB instruction which results in operand 1 (OP 1) being added or subtracted from operand 2 (OP 2) with results being stored in the operand 2. The DAD software instruction uses the absolute value subtract routine if the signs of the number to be added are different and the DSB software instruction uses the absolute value subtract microroutine if the signs of the numbers to be subtracted are the same. The equal nine indicator is needed in the absolute subtract routine to handle the case where the OP 2 field length is less than the OP 1 field length and an oversubtract occurs (i.e., the sign of the difference changes). This is because the result is calculated in the OP 2 field in RAM 2 and OP 2 is moved from main memory 10 to RAM 2 96 on a word basis and the result which is stored in RAM 2 96 is moved back to main memory on a word basis. Because OP 2 can start and end on any 4-bit nibble boundary within the 16-bit words of main memory, it is important that neighboring nibbles (known as "neighbors") that are not part of OP 2 but which are present in the word containing the most significant digit or leading sign of OP 2 and the word containing the least significant digit or trailing sign of OP 2 be preserved during the addition or subtraction so that when the result is written from RAM 2 96 back to main memory 10, the neighbors have not been changed. By preserving the neighbors, the result can be written back on a word basis without having to read the words containing the first and last digits, maske in the result and then write back the words to main memory. In the following discussion, the term "written field" will be used to indicate the field of the result which is as long as the field length of OP 2 which is used to store the result. The term "unwritten field" will be used to indicate the left most significant digits in the result having a length equal to the number of digits by which the field length of OP 1 exceeds the field length of OP 2. For example, if a DAD is performed with OP 1=-00142 and OP 2=+68 the result will be -74. In this example the written field is 2 digits long (the length of OP 2 without the sign) and the unwritten field is 3 digits long (the length of OP 1 minus the length of OP 2, both without signs). This DAD software instruction is performed in CPU 20 by placing OP 1 in RAM 1 81 and OP 2 in RAM 2 96 and developing the result in RAM 2 96 from which it is written back into the memory locations previously occupied by OP 2. Example: (+68) + (-00142) = (-74) + 68 OP 2 in RAM 2 -00142 OP 1 in RAM 1 UUUWW result where: WW = written field developed in RAM 2 UUU = unwritten field in order to preserve neighbors in RAM 2 In addition to using the equal nine indicator to determine the contents of the unwritten field, a DAD or DSB software instruction is microprogrammed to use the ability to feed the A port of the decimal adder/subtractor 84 from either RAM 1 or RAM 2 and the ability to feed zero's digits into the B port. This is used to perform a ten's complement on the written field in those cases where there is an oversubtract. The following example will describe a DAD software instruction using the example of OP 1=-00142 and OP 2=+68 which results in a sum of -74. The same example would result from a DSB with OP 1=+00142 and OP 2=+68 which results in a difference of -74. For this example, assume the DAD software instruction is at main memory location 1000 as follows: Example DAD Software Instruction Memory Memory Location Contents (Hexadecimal) (Hexadecimal) Meaning 1000 002C DAD OP code 1001 E687 data descriptor 1 (DD1) word 1 1002 1102 data descriptor 1 (DD1) word 2 1003 E207 data descriptor 2 (DD2) word 1 1004 1204 data descriptor 2 (DD2) word 2 Data descriptors DD1 and DD2 are decoded as follows (see FIG. 9): DD1: T=1: Packed decimal. C1,C2=11: OP 1 starts in nibble 3 position. C3=1: Trailing sign L=6: 5 digits and sign. CAS: OP 1 starts in word addressed by contents of base register 7 plus displacement of 1102. If B7 contains the value 1000 hexadecimal, OP 1 is located at address 2102 DD2: T=0: String (unpacked) decimal. C1=1: OP 2 starts in right byte. CR,C3=11: Trailing sign. L=3: 2 digits and sign. CAS: OP 2 starts in word addressed by contents of base register 7 plus the displacement of 1204. Since B7 contains 1000 hexadecimal, OP 2 is located at address 2204 hexadecimal. OP 1, which is a -00142 with a trailing minus sign, appears in main memory as follows: Memory Location Memory Address Contents (Hexadecimal) (Hexadecimal) 2102 NNN0 2104 DNNN where: N are neighbor nibbles. 00142 are packed decimal D is a trailing minus sign. OP 2, which is +68 with a trailing plus sign, appears in main memory as follows: Memory Location Memory Address Contents (Hexadecimal) (Hexadecimal) 2204 NN36 2205 382B where: N are neighbor nibbles. 36 is an unpacked decimal 6 with zone nibble of 3. 38 is an unpacked decimal 8 with zone nibble of 3. 2B is an unpacked trailing plus sign. The execution of the above example DAD commercial software instruction will now be described with reference to FIG. 11. FIG. 11 is a flow chart of the firmware microroutines used by CPU 20 to execute the DAD and DSB software instruction. The blocks in FIG. 11 which are referred to by the names next to them, such as DAD-001, show at a gross level the functions performed by microprocessor 30 and commercial instruction logic 38 to perform the software instruction. Some of these blocks may represent the execution of more than one 48 or 56-bit microinstruction, the form of which is shown in FIG. 5. Before entering the microroutines shown in FIG. 11, which are more or less peculiar to the DAD and DSB commercial software instructions, the CPU 20 examines the first word of the software instruction which is being executed to determine the type of operation to be performed. Once it is determined that it is a decimal arithmetic operation as determined by looking at the operation code in the first word of the instruction, the CPU 20 then prcceeds to decode the address syllable associated with data descriptor 1 to determine the main memory word address and the position within the word in which operand 1 begins. This front end processing of the software instruction then continues with the microprocessor branching to the DAD routine to block DAD-000. When the decimal add routine is entered at block DAD-000, it tests whether this is the first pass in which operand 1 is to be brought into the CPU or the second pass in which operand 2 is to be brought into the CPU. If it is the first pass, the firmware then branches to instruction DAD-001 which fetches operand 1 into RAM 1 81 one word at a time by bringing it from main memory 10 into the microprocessor 30 and then from the microprocessor processor bus 37 into transceivers 97, data-in register 98 and then into RAM 1 81. This process is performed by first loading RAM 1 address counter 78 with the address of the first word which is to be used in RAM 1 and the loading of the nibble counter in nibble-out control 76. It should be noted that the words of 0P 1 are loaded into RAM 1 by using the low four order bits of the main memory address as the four-bit address which is loaded into RAM 1 address counter 75 such that after block DAD-001 is executed, OP 1 is in RAM 1 in the following locations with the address counter pointing to word 3 in RAM 1 which contains the unit's digit and the nibble counter pointing to nibble 3 which is the position of the unit's digit. At the end of block DAD-001, the contents of RAM 1 are as follows: RAM 1 RAM 1 Contents Location (Hexadecimal) 2 NNNO 4 DNNN RAM 1 Address Counter (WP1) = 3 RAM 1 Nibble Counter (NP1) = 3 Block DAD-001, after setting an indicator to indicate the sign of operand 1, then exits to the instruction front end processing routine which proceeds to crack data descriptor 2 to determine the address of where OP 2 begins in main memory. After cracking DD2, the front end routine then branches on the software instruction operating code to an instruction in block DAD-000 which in turn determines whether this is the pass 1 of pass 2. In this case it is pass 2 so that block DAD-002 is executed. In block DAD-002 operand 2 is fetched into RAM 2 a word at a time from main memory, the sign of OP 2 is determined, and the address counter of RAM 2 address counter 87 and the nibble counter of nibble write control 86 are left to point to the unit's position of OP 2 such that at the end of block DAD-002, the contents of RAM 2 are as follows: RAM 2 RAM 2 Contents Location (Hexadecimal) 0 NN36 1 382B RAM 2 address oounter (WP2) = 1 RAM 2 nibble counter (NP2) = 1 It should be noted that in contrast to RAM 1 where the operand is loaded into the location which corresponds to the low four order bits of the memory address, the first word of OP 2 is loaded into word 0 of RAM 2 and consecutive words are loaded into locations with increasing addresses. Block DAD-002 exits to block DAD-003 which determines which operand has the shorter field length. This is done by comparing the length of OP 1 with OP 2 and determining which is shorter. This determination is necessary so that the length of the written field can be determined as well as the length of the unwritten field so that the neighbors within RAM 2 will not be destroyed if OP 2 is shorter than OP 1. In the present example, the length of OP 1 is 5 and the length of OP 2 is 2 (excluding the signs) such that the shorter field has a length of 2 digits and this length is the length of Operand 2. Therefore, the written field length will have a length of 2 digits and the unwritten field will have a length of 3 digits. Block DAD-003 then exits to block DAD-004 which compares the sign of OP 1 with the sign of OP 2. Block DAD-004 determines that OP 1 sign is a minus and that OP 2 is a plus and therefore exits to block DAD-006 because the signs are different and does not exit to block DAD-005 is taken if the signs were the same. Because the signs are different, an absolute value subtract must be performed and so a call is made to the absolute value subtract routine which is entered at block AVS-000. The absolute value subtract routine performs a subtraction by taking the contents of RAM 1 from the contents of RAM 2 and placing the result into RAM 2 a digit at a time starting with the units (the least significant) digit and working from right to left to the most significant digit. This subtraction is done in two steps in that first the written field is done in which the result is actually written back into RAM 2 a digit at a time. Then the unwritten field is done and the result of each digit position is not written into RAM 2 in order to preserve the neighboring nibbles within RAM 2. The absolute value subtract routine begins in block AVS-000 by presetting the decimal indicators 85 to an initial value such that the carry-out (CRO) indicator is set to a binary ZERO, the illegal digit (ILL) indicator is set to a binary ZERO, the equal zero (E0) is set to a binary ONE and the equal nine (E9) is set to a binary ONE. In block AVS-001 a one digit from RAM 1 In block AVS-001 one decimal digit from RAM 1 is subtracted from one decimal digit from RAM 2 by decimal ALU 84 with the carry-in coming from the carry-out indicator and the resulting digit is stored back into RAM 2 and the indicators are stored into decimal indicators 85. Block AVS-001 is repeated until the length of the shorter operand is exhausted. In this case, because in the example, operand 2 is in string (unpacked) decimal data the zone nibbles within RAM 2 are written back into RAM 2 by use of the result/zone multiplexer 91 as every other nibble is written into RAM 2 as each decimal digit is processed. In the instant example, the shorter operand is operand 2, which contains only two unpacked digits. Block AVS-001 is executed two times with the contents of RAM 2 96 and decimal indicators 85 at the end of the spin through the shorter field length (the length of the written field) being as follows: RAM 2 RAM 2 Location Contents Indicators (Hexadecimal) (Hexadecimal) CRO ILL E0 E9 0 NN32 0 0 0 0 1 362B This spinning through the shorter field length of two times by subtracting first the unit's digit position of RAM 1 from the unit's digit position of RAM 2 and storing the result in the unit's digit position of RAM 2 and then subtracting the 10's position of RAM 1 from the 10's position of RAM 2 and storing the result in RAM 2 and the writing of the zone nibbles into RAM 2 is shown in Table 13 TABLE 13 Spin Through Shorter Field Length Performing Subtract DEC ALU DEC RAM 1 RAM 2 IN OUT IND STP LC CONT W1 N1 LC CONT W2 N2 A B R CIZN CIZN 1B 2 NNN0 3 3 0 NN36 1 1 2 8 X XXXX 3 014(2) 1 3(8)2B 4 DNNN 1A 2 NNN0 3 2 0 NN36 1 0 X X 6 0000 3 01(4)2 1 (3)62B 4 DNNN 2B 2 NNN0 3 2 0 NN36 1 0 X X X XXXX 3 01(4)2 1 (3)62B 4 DNNN 2A 2 NNN0 3 2 0 NN3(6) 0 3 X X X XXXX 3 01(4)2 1 362B 4 DNNN 3B 2 NNN0 3 2 0 NN3(6) 0 3 4 6 X XXXX 3 01(4)2 1 362B 4 DNNN 3A 2 NNN0 3 1 0 NN(3)2 0 2 X X 2 0000 3 0(1)42 1 362B 4 DNNN 4B 2 NNN0 3 1 0 NN(3)2 0 2 X X X XXXX 3 0(1)42 1 362B 4 DNNN 4A 2 NNN0 3 1 0 N(N)32 0 1 X X X XXXX 3 0(1)42 1 362B 4 DNNN Table 13 illustrates the before and after states of RAM 1 81, RAM 2 96, the output of decimal adder{/fsuracOtor PROM 84 and the status of the decimal indicators 85 before and after each microinstruction is executed. The column labeled "STP" contains the step number with the rows labeled with a step number with a "B" suffix containing the status of the system before the microinstruction is executed and the rows with a step number with an "A" suffix containing the status after the microinstruction is executed. The columns under the "RAM 1" label contain the location, contents, word pointer and nibble pointer associated with RAM 1 and the column under the "RAM 2" label containing the location, contents, word pointer and nibble pointer associated with RAM 2. For example, the "LC" column under the "RAM 1" label contains the numbers 2, 3 and 4 indicating that the corresponding "CONT" column contains the contents of RAM 1 words addressed by addresses 2, 3 and 4 respectively. The "W1" column under the "RAM 1" label contains the contents of "RAM 1" address counter 75 and the column labeled "N1" contains the contents of the nibble counter in nibble out control 76. The corresponding "LC", "CONT", "W2" and "N2" columns under the "RAM 2" label contain the corresponding data for RAM 2 with the "W2" word pointer being the address stored in RAM 2 address counter 87 and the "N2" nibble pointer being nibble counter contained in nibble write control 86. The columns labeled under the "DEC ALU" column, contain the inputs and outputs of decimal ALU 84. The "A" and "B" columns under the "IN" label contain the decimal digits input into the A and B ports of decimal ALU 84 respectively. Under the "OUT" label, the "R" column contains the resultant digit output from decimal adder/subtractor PROM 84 which is one input to result/zone multiplexer 91 and the columns labeled "CIZN" contains the four indicator bits output by decimal adder/subtractor PROM 84 which are input into decimal indicators 85. The "C" column corresponding to the carry-out, elsewhere referred to as a "CRO", the "I" column corresponding to the illegal digit signal which is else sometimes labeled as "ILL", the "Z" column corresponding to the equal zero signal which is elsewhere referred to as "E0" and the "N" column containing the signal indicating the status of the equal nine which is elsewhere referred to as "EN". The column labeled "DEC IND" contains the contents of decimal indicators 85 with each individual indicator under the corresponding "CIZN" column. Step 1B shows the contents of RAM 1 before the microinstruction are executed contain the operand 1 with the word and nibble counters pointing to the unit's digit of the number which in the example is the digit decimal 2. This is indicated in Table 13 by parentheses around the nibble which is point to by the word and nibble counters. Row 1B also shows the contents of RAM 2 which indicates that the unit's digit is pointed to by the word and nibble counters such that the decimal 8 is within the parentheses. The status of the resultant digit and the indicator bits output by decimal adder/ subtractor PROM 84 before the microoperation is executed are don't care conditions and indicated by X's in Table 13. The decimal indicators have been preset prior to the adding the unit's position to the binary value 0011 as shown. In step 1, when the unit's digit of operand 1 is added to the unit's digit of operand 2, the result is shown in the row labeled step 1a. Step 1A shows, in the RAM 1 column, that after the addition the address counter and nibble counter have been incremented by one such that the nibble counter now points to the nibble 2 position and the word counter has remained at word 3 such that the 10's position of the operand 2 is now pointed to with the parentheses now being around the digit 4. Similarly, after executing the first microinstruction, the word address and nibble counter of RAM 2 have been incremented to point to the next nibble, which in this case is field of the unit's position, such that nibble zero of location 1 is now pointed to which contains tbe zone nibble of a binary 0011 (hexadecimal 3), as shown in parentheses. The output of the decimal subtract (an absolute value subtract is done because the sign of the two numbers to be added together were different) is a decimal 6 as indicated in the "R" column which is the result of subtracting 2 from 8. The output of the decimal adder/subtractor PROM 84 indicators is a binary 0000 which indicates that there has been no carry, the digit is a legal digit, the digit is not equal to zero and it is not equal to nine. This results in the updating of the decimal indicators as shown which results in the decimal indicators 85 now being equal to the binary value 0000. Returning now to the RAM 2 content column as step 1A, it is seen that the resulting digit output from decimal adder/subtractor PROM 84 has been stored back into the nibble position which was pointed to at the beginning of the microinstruction such that the decimal 6 has been stored in the unit's position of operand 2. This storing of the decimal result occurs prior to the updating of the address and nibble counters such that although at the end of the microinstruction, after its execution, the RAM 2 counter point to word 1 nibble 0 the word 1, nibble 1 counter values were used to store the result before the counters were updated. The operation shown as step 1 is performed by microinstruction which contains a CIPSUB, a CWRES2, a CLDFLP and a CTDALL microoperation within the microinstruction. These microinstructions are found in Tables 4 through 8 as described above and the operation of which was described with respect to the logic shown in FIGS. 10A through 10D. As a brief review here, the CIPSUB instruction causes the adder/subtractor PROM 84 to perform a subtract operation on the two inputs presented to it from RAM 2 zero multiplexer 90 and double multiplexer 83. In this case, RAM 2 zero multiplexer 90 is feeding the output of RAM 2 96 and double multiplexer 83 is feeding the output of RAM 1 81. The CWRES2 microoperation causes the decimal digit output from the decimal adder/subtractor PROM 84 to be written into the nibble pointed to by nibble write control 86 and RAM 2 address counter 87. The CLDFLP microoperation causes the indicators output by decimal adder/subtractor PROM 85 to be clocked into decimal indicators 85. The CTDALL microoperation causes the RAM 1 and RAM 2 nibble and word counters to be decremented after the decimal arithmetic operation has been performed. The microinstruction associated with step 1 also performs other parallel microoperations which have not been discussed but, for example, include decrementing a counter contained within microprocessor 30 to decrement the length count of the shorter operand field such that the number of times through the shorter operand field, where the arithmetic operation is performed on each digit position so that the spinning through the shorter (written) field can be terminated when it is finished. In step 2 of Table 13, which performs the writing in of the zone nibble into the unit's position of operand 2. It being remembered that operand 2 in the example case is string (unpacked) data which always has a binary THREE in the zone nibble position of each byte associated with each decimal digit. The row associated with step 2B shows the contents of RAM 1 and its associated pointers and the contents of RAM 2 and its associated pointers. In the writing of the zone nibble of the unit's position, the contents of RAM 1 are ignored and only the contents of RAM 2 are used. Before the second microoperation is executed, the word address pointer and nibble pointer of RAM 1 under the W2 and N2 columns point to location 1 and nibble 0 respectively such that the zone field containing a hexadecimal 3 is pointed to as indicated by the 3 being within parentheses in Table 13. The second microinstruction which forces the zone field into operand 2 contains a CWZONE and a CTDCT2 microoperation as describe in Tables 4 through 8 and the operation of which when the hardware is described in reference to FIGS. 10A through 10D above perform the following. The CWZONE microoperation causes result/zone multiplexer 91 to select the zone bits of binary 0011 as the output and result in that zone being written into RAM 2 96 in the nibble position pointed to by RAM 2 address counter 87 and nibble write control 86. The CTDCT2 microoperation causes the RAM 2 nibble and word counters to be decremented by one after the zone nibble is written into RAM 2 96 such that in step 2A it can be seen that the address counter now points to word 0 and the nibble counter now points to nibble 3 which contains the 10's position of operand 2 which is a decimal 6 as indicated by the digit 6 in the contents column of RAM 2 being within the parentheses. In this second microoperation as indicated by rows 2B and 2A, the output of the decimal adder/subtractor PROM 84 is a don't care condition and both the resultant digit and the indicators columns of before and after and the decimal indicators 85 are not changed as a result of the operation. Steps 3B and 3A show the before and after condition of the execution of the third microinstruction which subtracts the 10's position of operand 1 in RAM 1 from the 10th position of operand 2 in RAM 2 which results in subtracting 4 decimal from 6 decimal producing a decimal result of 2 which is stored in word 0 nibble 3 of RAM 2. The indicators as output from decimal adder/subtractor PROM 84 are again used to update decimal indicators 85. Thus, step 3 is a repeat of step 1 except that the 10's position is being manipulated instead of the unit's position. In step 4, which is a repeat of step 2, the zone nibble within the 10's digit position of operand 2 is written into RAM 2. Thus, at the end of the fourth microinstruction, the pointers and indicators are as shown in row 4A. Returning now to the description of the absolute value add and subtract routine illustrated in FIG. 11. In block AVS-001 which spins through the shorter length operand field performing the subtraction when it has been determined that all digits contained in the shorter field have been subtracted. Block AVS-001 exits to either block AVS-002 or AVS-004 depending upon whether or not the operand 2 field length is shorter than operand 1 or if operand 2 is longer or the same length as operand 1. In the example case being discussed, operand 2 field length is shorter than operand 1 field length so that block AVS-001 exits to block AVS-002. This branching to block AVS-002 is controlled by a flag which was previously set in data manipulation area 32 when in block DAD-001 which determines the shorter operand field length. In block AVS-002, the microinstruction routine restores to indicator to indicate whether or not the written field contains all zeros. This is done by setting a flag depending upon the status of the equal zero indicator of decimal indicators 85. This is done by entering decimal indicators 85 via monitor mutiplexer 80 into monitor logic 22 which is then input into microprocessor 30 such that the status of the equal zero indicator can be examined and a flag is set within data manipulation area 32 within microprocessor 30 to remember to status of the zero indicator at the end of the operation on the written field. Block AVS-002 then exits to block AVS-003 which presets the equal nine and equal zero indicators in decimal indicators 85 in preparation for spinning through the remainder of the longer operand to produce the unwritten field. This presetting of the equal nine and equal zero indicators is done by a microinstruction containing a CRESTX microoperation. Block AVS-004 is then entered to spin through the remaining (unwritten) field of operand 1 one digit at a time. Since no more digits exist within operand 2 to correspond to the hundreds, thousands, etc. positions of operand 1, zero digits are provided by RAM 2 zero multiplexer 90. The results of the subtraction cannot be written into RAM 2 because this would result in the destruction of the neighboring nibbles to the left of the most significant digit of operand 2 which are contained in RAM 2. Therefore, the arithmetic result is not written but the status of the decimal indicators is continued to be accumulated to be determined whether the unwritten field is equal to all zeros or all nines, contains an illegal digit or results in a carry-out of the most significant digit. This spinning through of the unwritten field of operand 1 is shown in Table 14 which has columns which correspond to those of Table 13 as described above. TABLE 14 Spin Through Unwritten Field Length Performing Subtract DEC ALU DEC RAM 1 RAM 2 IN OUT IND STP LC CONT W1 N1 LC CONT W2 N2 A B R CIZN CIZN 1B 2 NNN0 3 1 0 N(N)32 0 1 1 0 X XXXX 3 0(1)42 1 362B 4 DNNN 1A 2 NNN0 3 0 0 N(N)32 0 1 X X 9 1001 3 (0)142 1 362B 4 DNNN 2B 2 NNN0 3 0 0 N(N)32 0 1 0 0 X XXXX 3 (0)142 1 362B 4 DNNN 2A 2 NNN(0) 2 3 0 N(N)32 0 1 X X 9 1001 3 0142 1 362B 4 DNNN 3B 2 NNN(0) 2 3 0 N(N)32 0 1 0 0 X XXXX 3 0142 1 362B 4 DNNN 3A 2 NN(N)0 2 2 0 N(N)32 0 1 X X 9 1001 3 0142 1 362B 4 DNNN In Table 14, step 1B shows the decimal indicators 85 have been initialized such that the equal zero and equal nine indicators have been preset to the binary ONE state and the carry-out and illegal digit indicator have been left in the state they were in at the end of the written field subtraction. The microinstruction which corresponds to step 1 contains a CINOP2, a CIPSUB, a CTDCT1, and a CLDFLP microoperation. This microinstruction is repeated for steps 2 and 3 also. This microinstruction results in the B port of the decimal ALU 84 being fed a decimal zero by selecting the zero input to RAM 2 zero multiplexer 90. This is done because there is no corresponding digit within operand 2 which is stored in RAM 2. The A port of decimal ALU 84 is fed the digit pointed to by the word and nibble counters of RAM 1 which in step 1 is the hundredths position which contains a decimal 1. As in the previous steps, the carry input into decimal ALU 84 is fed from the carry-out indicator of decimal indicators 85. The result of subtracting a 1 from 0 is a decimal 9 as shown in step 1A with a resultant carry-out as indicated in the C column and a resultant equal nine indicator as indicated in the N column. This carry-out and nine result in the decimal indicator carry bit being set to a binary ONE and the equal nine bit being set to a binary ONE and the equal zero bit being set to the binary ZERO state. This microinstruction also provides that a subtract operation is done (as opposed to an addition) by the decimal ALU 84 as specified by the CIPSUB microoperation. The address and nibble counters associated with RAM 1 81 are decremented after the operation so that after step 1 the word address counter points to word 3 and the nibble counter points to nibble 0. The indicators output by the decimal ALU 84 are stored in decimal indicators 85 as directly by the CLDFLP Steps 2 and 3 are then repeated for the thousandths and ten-thousandths positions of operand 1 with corresponding zeros for operand 2 being supplied by RAM 2 zero multipliexer 90. Upon the completion of the step 3, it can be seen that the word and nibble counters of RAM 1 point to the first neighboring nibble beyond the most significant digit of operand 1 and the decimal indicators contain a binary 1001 which indicate that there was a carry-out from the last arithmetic operation, that there is no illegal digit within the unwritten field, that the unwritten field is not equal to all zeros and that the unwritten field is equal to all nines as indicated by the 3 nines which appear in the result ("R") column of the rows associated with steps 1A, 2A and 3A. Block AVS-004 spins through the unwritten operand 1 field until the counter within microprocessor 30 reaches 0 indicating that the unwritten field has been completely processed. Block AVS-004 then exits to the major branch subtract microoperation which performs a major branch within the subtract routine which results in the next microinstruction being fetched from ROS 24 as a function of which of 16 conditions exist in the four decimal indicators 85 as entered into microprocessor 30 via monitor multiplexer 80 and monitor logic 22. Before leaving the absolute value subtract routine, a brief description of block AVS-005 is in order. Block AVS-005 would be entered if the field length of operand 2 is longer than or the same length as operand 1. If the operand 1 and operand 2 field lengths are the same length, nothing remains to be done and block AVS-005 exits to the major branch subtract microinstruction which performs the 16-way branch depending upon the results of the subtraction as indicated by the decimal indicators 85. If, however, the field length of operand 2 is greater than the field length of operand 1, the subtraotion must be continued and the results written intc RAM 2. In this case, however, because operand 1 is shorter than operand 2, zeros are supplied for the non-existent leading digits of operand 1 by RAM 1 zero multiplexer 82 selecting the zero input such that the A port of decimal ALU 84 would be a decimal 0 and the B port is continued to be fed from RAM 2. It is necessary to continue to process the operand 2 digits beyond the length of the operand because there could be a carry-out from the last digit position of the subtraction performed by using the most significant digit of operand 1. Both blocks AVS-004 and AVS-005 exit by going to the major branch subtract routine which does a 16-way branch depending upon the 4 decimal indicators from decimal indicators 65. At this point, these decimal indicators contain four binary bits of information which correspond to the carry-out, illegal digit, the equal zero, and the equal nine indicators all which reflect the result of the last field processed which will, in the case of exiting from block AVS-004, be the result of the unwritten field, and in the case of exiting from block AVS-005, will be the result of the written field because there is no unwritten field in the case of where the operand 2 field length is greater than the operand 1 field length. In any case, the major branch subtract routine does a 16-way branch depending upon the indicators after the most significant digit has been processed. The major branch subtract routine is entered at block MBS-000 which performs the 16-way major branch after entering decimal indicators 85 via monitor multiplexer 80 and monitor logic 22 into microprocessor 30. Block MBS-000 performs a 16-way branch as shown in FIG. 11 which can exit to one of 16 places. The conditions required to enter any one of the 16 branch routines are shown by the 4-bit binary number shown at the various possible exits. This 4-bit binary number corresponds to the carry-out, illegal digit, equal zero and equal nine indicators of decimal indicators 85. For example, the binary 1000 branch is taken, if the carry indicator is a binary ONE indicating there was a carry-out of the most significant digit, the illegal indicator is a binary ZERO indicating that there was no illegal digit encountered, the zero indicator equals a binary ZERO indicating that the field is not all zeros and the equal nine indicator is a binary ZERO indicating that the field is not all nines. If the binary 1000 branch is taken, block MBS-001 is entered and jump is performed to a 10's complement routine. This is one of two cases of interest to us to show the use of the equal nine and equal zero indicators. When the binary 1000 branch is taken, block MBS-001 jumps to the 10's complement subroutine to perform a 10's complement on the written field because a carry-out from the most significant digit during a subtract indicates that an oversubtract has been performed and the result must be 10's complemented. Upon exiting, block MBS-001 performs a 2-way branch depending upon whether or not the operand 2 field length was greater than or the same as the operand 1 field length or if the operand 2 field length was less than the operand 1 field length. If operand 2 was longer or the same length as operand 1, the algebraic result is not equal to zero and block MBS-002 is entered to set the greater (G) and the less (L) than commercial instruction indicators which are visible to software instructions. If operand 2 is shorter than operand 1, there is an unwritten field which is not equal to all nines because the all nines indicator is not a binary ONE, and therefore, an overflow (OV) commercial instruction condition has occurred. Therefore, when block MBS-003 is entered, the overflow (OV) commercial instruction indicator is set and upon exiting that block a check is made to see whether the computer is to trap on overflow and, if so, the trap branch exit is taken to routine OV and, if not, the non-trap branch is taken to routine SGL. Block MBS-003 checks whether a trap is to occur if overflow occurs and, if so, traps to the OV routine which initiates a software trap routine which will handle the overflow condition. Exiting to the overflow trap software routine results in the result of the decimal arithmetic operation not being stored in memory. If the trap on overflow has been masked out such that trapping does not occur on overflow, SGL block MBS-003 takes the no-trap exit and goes to the routine which sets the greater (G) than and less (L) than software visible commercial instruction indicators. When this routine is processed as will be seen below, it results in truncation that saves the least significant digits of the result by storing the written field in RAM 2 into the memory location designated by data descriptor 2. Before describing the other case of oversubtract in which the indicators are equal to a binary 1001 and which begins with the execution of block MBS-004, the other possible exits from the major branch subtract routine will be briefly described. If the decimal indicators are equal to a binary 0000, it indicates that no oversubtract has occurred and therefore the result in RAM 2 is the final result and the routine exits to the SGL routine which sets the greater than and less than software indicators, fixes the sign of the result and writes the result from RAM 2 into main memory as will be seen below. If the indicators contain the binary value X1XX, it indicates that the illegal character indicator has been set indicating that at some point during the arithmetic operation, the decimal ALU 84 encountered an illegal digit on input such that the 4-bit nibble contained a value that was greater than decimal 9 (i.e., a value of from A through F hexadecimal). In the case where an illegal digit was encountered, a trap occurs to a trap handling routine which allows the software programmer to write a software routine to handle the illegal operand. In this case, the subtract routine does no clean up and stores nothing into main memory before exiting to the software routine which is programmed to handle the case of an illegal digit within one of the operands. If the decimal indicators 85 are a binary 0010, the major branch subroutine exits to block MBS-008 which does a branch depending upon the relative length of operand 1 and operand 2. If the length of operand 2 was greater than or equal to the length of operand 1, the result is equal to all zeros and therefore is not necessary to set either the greater than or less than indicator so block MBS-008 exits to the FIN routine which completes the processing of the arithmetic operation by setting the sign in RAM 2 and then writing the result, including the sign, from RAM 2 into the main memory location specified by data descriptor 2. FIN is a second entry into the SGL routine. If block MBS-008 takes the branch which indicates that the length of operand 2 is less than the length of operand 1, this indicates that the unwritten field is all zeros and a test must be made to see whether the written field is all zeros, therefore block MBS-009 is entered which examines the status of the equal zero indicator which was saved at the end of the operation on the written field. If the equal zero indicator was a binary ONE at the end of the operation on the written field, block MBS-009 exits to the FIN routine which means that the result is all zeros and therefore neither the greater or less than commercial software instruction indicators must be set. If the written field equal zero indicator was not set at completion of the operation on the written field, block MBS-009 exits to the SGL routine because the result is non-zero and either the G or L commercial software instruction indicators must be set prior to completing the execution of the commercial software instruction. The major branch subract routine can never take the branch exist which corresponds to decimal indicators of a binary 1010 because this is an arithmetically impossible case. This case would indicate that there was a carry-out of the last (most) significant digit meaning that the absolute value of operand 2 is less than the absolute value of operand 1 but yet the result has the equal zero indicator set to a binary ONE indicating that the result is 0 and this is a mathematical impossibility. If the major branch subtract routine takes the branch corresponding to indicators of a binary 0001, block MBS-010 is entered. This block is entered if the equal nine decimal indicator that is set to the binary ONE state indicating that the result of the last field processed is equal to all nines. Block MBS-010 branches on the relative length of operand 1 and operand 2 based on the previous compare of operand lengths which was made in block DAD-003. If the length of operard 2 is greater than or equal to the length of operand 1 then block MBS-010 branches to the SGL routine. In this case, the result happens to be equal to all nines. The SGL routine sets the greater and less than indicators before setting the sign and then storing the result in main memory. If the length of operand 2 is less than the length of operand 1, block MBS-010 branches to the OV routine because an overflow has occurred. The OV routine enters block MBS-003 as discussed above which sets the overflow indicator and possibly traps on the overflow condition. This overflow has occurred because there has been no carry-out of the most significant digit indicating that there has been no oversubtract so that a 10's complement will not be performed. Because the unwritten field is not all zeros the result will not fit into the field length of operand 2 and therefore an overflow has The major branch subtract routine cannot branch to either branch associated with binary indicators equals to X011 which would mean that the all zeros and all nines indicator are both simultaneously set. This is an impossible condition to have a field containing both all zeros and all nines and therefore these branch paths are never taken. Returning now to the other case of interest in which an oversubtract has occurred as indicated by the carry-out of the most significant digit meaning that the sign of the result must be changed and that a 10's complement must be performed on the written field. The case where the indicators are equal to a binary 1001 will be described. In this case, the major branch subtract routine block MBS-000 exits to block MBS-004. This block performs a jump to a 10's complement subroutine which performs a 10's complement operation on the written field contained in RAM 2. The 10's complement routine TSC is entered and block TSC-000 is executed. Block TSC-000 resets the RAM 2 address counter and the nibble write control nibble counter to point to the unit's position of the result in RAM 2. A counter in microprocessor 30 is also initialized to the length of operand 2 so that it will contain the number of decimal digits (excluding the sign) that are in operand 2 as determined by data descriptor 2. Block TSC-001 is then entered which spins through the written field contained in RAM 2 one digit at a time from the unit's position to the most significant digit position until all digits have been complemented as determined by the length of operand 2 counter being decremented until it reaches 0. This spinning through of the written field to perform the 10's complement is shown in Table 15. Block TSC-000 before exiting resets the decimal indicators such that the carry-out and illegal digit indicators are set to binary ZERO and the equal zero and equal nine indicators are set to the binary ONE state. Block TSC-001 spins through the written field contained in RAM 2 beginning in the unit's position and working to the most significant digit performing a 10's complement on each digit. This is done by subtracting each digit of the result in RAM 2 from zero and storing the result back into RAM 2. This is done by a microinstruction in block TSC-001 which contains a CIPSUB, a CIPDUB, a CINOP2, a CWRES2, a CLDFLP, and a CTDCT2 microoperation. These microoperations tell the decimal ALU 84 to perform a subtract operation, that the A input of the ALU should come from RAM 2 instead of RAM 1 which is done by selecting the output of RAM 2 nibble multiplexer 89 to be output by double multiplexer 83, and to provide that the B port of the ALU 84 receives a zero digit from RAM 2 zero multiplexer 90. In addition, these microoperations provide that the resultant digit out of decimal ALU 84 should be written back into RAM 2 in the nibble pointed to by RAM 2 address counter 87 and the nibble counter in nibble write control 86. In addition, the decimal indicators 85 are to be updated with the results of the indicators from decimal ALU 84 and after the operation is performed, the address and nibble counters in RAM 2 are to be decremented by one to point to the next more significant digit. TABLE 15 Spin Through Written Field Performing Ten's Complement DEC ALU DEC RAM 1 RAM 2 IN OUT IND STP LC CONT W1 N1 LC CONT W2 N2 A B R CIZN CIZN 1B 2 NN(N)0 2 2 0 NN32 1 1 6 0 X XXXX 3 0142 1 3(6)2B 4 DNNN 1A 2 NN(N)0 2 2 0 NN32 1 0 X X 4 1000 3 0142 1 (3)42B 4 DNNN 2B 2 NN(N)0 2 2 0 NN32 1 0 X X X XXXX 3 0142 1 (3)42B 4 DNNN 2A 2 NN(N)0 2 2 0 NN3(2) 0 3 X X X XXXX 3 0142 1 342B 4 DNNN 3B 2 NN(N)0 2 2 0 NN3(2) 0 3 2 0 X XXXX 3 0142 1 342B 4 DNNN 3A 2 NN(N)0 2 2 0 NN(3)7 0 2 X X 7 1000 3 0142 1 342B 4 DNNN 4B 2 NN(N)0 2 2 0 NN(3)7 0 2 X X X XXXX 3 0142 1 342B 4 DNNN 4A 2 NN(N)0 2 2 0 N(N)37 0 1 X X X XXXX 3 0142 1 342B 4 DNNN Table 15 illustrates the processing of each of the nibbles in the written field contained in RAM 2. Step B again shows the contents of RAM 1 and RAM 2 prior to the first operation which performs a subtracting of the specified digit in RAM 2 from 0 as indicated by the A port being provided the decimal 6 from RAM 2 and the B port being provided a 0 from RAM 2 zero multiplexer 90. After the first step is completed as shown in step 1A, the result of subtracting 6 from 0 is a decimal 4 as shown in the "R" column which is written into the unit's position in RAM 2 word 1 nibble 1 and the indicators have been set to indicate that there has been a carry-out which is really a borrow in this case. The indicators from decimal ALU 84 are used to update decimal indicators 85 such that the carry indicator is set to a binary ONE and equal zero and equal nine indicators have been reset to the binary ZERO state. Step 2 provides for the writing of the zone field into the string decimal result by use of a microinstruction containing a CWZONE and a CTDCT2 microoperation. This microoperation results in result/zone multiplexer 91 selecting the zone bits of a binary 0011 to be written into the nibble pointed to by the RAM 2 address counter and the nibble write control nibble counter as shown in step 2A which indicates that the 3 has been written into RAM 2 word 1 nibble 0 (no change in this case because the zone nibble was already present) and the counters have been updated to point to the 10's position in RAM 2. Step 3 is then performed which performs the 10's complement operation on the 10's position of the result field by subtracting decimal 2 from a decimal 0 which results in a result of decimal 7 being written into the 10's position in RAM 2 as shown in step 3A. The output of the decimal ALU indicators is also used to update the decimal indicators 85 which in this case results in no change. Step 4 is then performed in which the zone nibble is written such that it will contain a hexadecimal 3 and no updating is done of the decimal indicators 85. After spinning through the written field in RAM 2, block TSC-001 exits to block TSC-002. Block TSC-002 inverts a flag contained in microprocessor 30 which is used to indicate whether operand 2 had a plus or minus sign initially. This flag is toggled so that the sign of the result will be inverted by the SGL routine before the result is written back into main memory. Block TSC-002 then returns to the microinstruction following the microinstruction from which it was called. In this case, the return is to block MBS-004 from which the 10's complement subroutine was called. Block MBS-004 then exits by branching upon whether operand 2 was longer or the safe length as operand 1 in which case the result is not equal to zero and it branches to the set greater than or less than indicators routine SGL. If, however, the operand 2 was shorter than operand 1, block MBS-004 exits to block MBS-006. Block MBS-006 then does a branch depending upon whether or not the written field after complementing contains all zeros as indicated by the current status of the equal zero indicator of decimal indicators 85. Because the unwritten field contained all nines, the overflow condition will only occur if the written field is equal to all zeros. Therefore, if the written field contains all zeros, block MBS-006 branches to the OV routine which, as discussed earlier, results in the setting of the overflow commercial software instruction indicator and then by performing block MBS-003 and then testing for if a trap is to be performed on overflow or not. In the particular case of the example being discussed, the written field is not the equal to all zeros and block MBS-006 takes the non-zero branch and exits to the SGL routine which sets the greater than or less than commercial software instruction indicators and writes the result into main memory. When the SGL routine is entered, block SGL-000 sets either the G or L commercial software instruction indicators depending upon the result of the arithmetic operation. If the result is greater than zero, the G indicator is set to a binary ONE and the L indicator remains reset to a binary ZERO. If the result is less than zero, the L indicator is set to a binary ONE and the G indicator remains in the binary ZERO state. If the result is zero, neither the G nor L indicators are set to a binary ONE (i.e., the L indicator is set to a binary ONE if the sign of the result is negative and the G indicator is set to a binary ONE if the sign of the result is positive and the equal zero indicator of decimal indicators 85 is not a binary ONE). Block SGL-000 then exits to block SGL-001 which jumps to a subroutine to set the sign within the result. In the case of the example being described, the sign of the result should be a minus sign because the answer is a -74 decimal and therefore the sign in RAM 2 must be changed from a plus sign, which is a hexadecimal 2B, to a minus sign, which is a hexadecimal 2D in string decimal format. Therefore the trailing sign within the resultant field in RAM 1 is changed frcm a hexadecimal 2B to a hexadecimal 2D such that the contents of RAM 2 after the sign position has been set in block SGL-001 are as follows. RAM 2 RAM 2 Contents Location (Hexadecimal) 0 NN37 1 342D Block SGL-002 then writes the final result contained in RAM 2 back into main memory by transferring one word at a time under the control of RAM address counter 87 via RAM 2 data register 88 and transceivers 97 a word of the result back to microprocessor 30 which then writes the word into the main memory locations pointed to by data descriptor 2. The contents of memory after block SGL-002 are as follows: Memory Memory Location Contents (Hexadecimal) (Hexadecimal) 2204 NN37 2205 342D Thus, at the completion of the decimal addition the contents of memory in the locations previously occupied by operand 2 will contain a string decimal 74 with a trailing minus sign which is the result of adding an operand 1 of -00142 to an operand 2 of +68. After the complete result is written into main memory, block SGL-002 exits to the FETCH routine which FETCHes the next software instruction from main memory, decodes the operation codes and begins the processing thereof. The exit to the FETCH routine completes the processing of the decimal add commercial software instruction. Before leaving the decimal add commercial software instruction discussion, it should be noted that FIG. 11 also contains a flow chart for the decimal subtract commercial software instruction DSB beginning at block DSB-000. An examination of this flow chart shows that it is the same as the decimal add instruction except that blocks DSB-005 and DSB-006 are the reverse of blocks DAD-005 and DAD-006 because when a subtract is being performed, if the signs of the operands are different an absolute value addition must be performed, and if the operands are the same, an absolute value subtract must be performed. In order to complete the discussion of the decimal add and subtract instructions, it should be noted that the absolute value add routine is not shown in FIG. 11 but is similar to the absolute value subtract routine SUB except that the blocks corresponding to blocks AVS-001, AVS-004 and AVS-005 perform additions instead of subtractions in the decimal ALU 84. In addition, the blocks corresponding to blocks AVS-004 and AVS-005 in the absolute value add routine exit to a major branch addition routine instead of the major branch on subtraction routine. The major branch addition routine, which is not shown in FIG. 11, is similar to the major branch subtraction routine shown in FIG. 11 with the exception that in the major branch addition routine, it is not necessary to do any 10's complementing because there is no case in which an oversubtract can occur since the addition of two numbers having the same sign never results in a change of sign. From the above discussion it can be appreciated that the equal nine indicator and equal zero indicator, which are integrating indicators, are of use in discerning whether the result of an oversubtract in an absolute value subtract routine is within range when the receiving field length of the result is less than the source operand's field length. In the preferred embodiment, where the result is stored into the field previously occupied by operand 2, this means that if the field length of operand 2 is less than the field length of operand 1, the equal nine and equal zero flip-flops can be advantageously employed to determine whether the result will fit within the field previously occupied by operand 2. The four cases below will summarize how the equal nine indicator is used. These cases illustrate than when an absolute subtract is performed, the indicators are initially preset such that the equal nine and equal zero indicators are binary ONEs and the subtraction is performed starting with the unit's digits and working towards the most significant digit. The subtraction continues until the written field having a length equal to the length of the shorter operand has been processed. After processing the written field, the status of the equal zero indicator is remembered for later use. The processing of the rest of the operand then continues with the processing of the unwritten field after resetting the equal nine and equal zero indicators before starting processing at the least significant digit of the unwritten field The unwritten field is processed by zero digits being supplied for the missing digits within the shorter operand and the result of the subtracting each digit is not written into a field. Instead, the equal zero and equal nine indicators are integrated over the unwritten field and together with the carry-out from the most significant digit of the longest operand are used to indicate whether the unwritten field is equal to all zeros after any necessary ten's complementing is performed. In performing the ten's complement of the result when an oversubtract occurs (i.e., when the sign of the result must be changed) the ability to to feed either port of the decimal ALU 84 from RAM 2 is used along with the ability to feed a zero into the other port. The case of oversubtract is indicated by a carry out of the most significant digit position. If to carry out occurs, then the unwritten field must be equal to zero, otherwise the result will not fit within the receiving field. If a carry out occurs, meaning that an oversubtract has resulted, it means that the written field must be ten's complemented. In this case, when an oversubtract has occurred and the written field must be ten's complemented, an unwritten field of all nines and a written field ofd all zeros means that the result will not fit within the receiving field because performing a ten's complement will result in a 1 in the first digit of the unwritten field and all zeros in the written field (i.e., this is an overflow condition). If an oversubtract has occurred as indicated by a carry out of the most significant digit, and a ten's complement must be performed on the written field, if the unwritten field is equal to all nines and the written field is not equal to all zeros, the result will fit with no overflow. If an oversubtract has occurred, as indicated by a carry out of the most significant digit, and the unwritten field is not equal to all nines, it means that there is an overflow because performing a ten's complement on the result would not yield an answer which would fit in the written field. These conditions are illustrated in the below cases. Cases illustrating use of equal nine Case 1: DAD (+68) + (-00142) = (-74) 1 1 1 0 0 Carries out. 0 0 0 Leading zeros from RAM 2 zero multiplexer 90. 6 8 OP 2 in RAM 2. -0 0 1 4 2 OP 1 in RAM 1. 2 6 Written field in RAM 2. 9 9 9 Unwritten field. Carry out of 1 from most significant digit means ten's complement necessary on result. Unwritten field of all nines turned into innocuous leading zeros in ten's complemented Case 2: DSB (+68) - (+01142) = (-74) with overflow after truncation 1 1 1 0 0 Carries out. 0 0 0 Leading zeros from RAM 2 zero multiplexer 90. 6 8 OP 2 in RAM 2. -0 1 1 4 2 OP 1 in RAM 1. 2 6 Written field in RAM 2. 9 8 9 Unwritten field. Carry out of 1 from most significant digit means ten's complement necessary on result. Result of -1074 is longer than the 2 digit receiving field. An unwritten field that is not all nines will not turn into all zeros when ten's complemented and therefore results in an overflow Case 3: DSB (+68) - (+00042) = (+26) 0 0 0 0 0 Carries out 0 0 0 Leading zeros from RAM 2 zero multiplexer 90 6 8 OP 2 in RAM 2 -0 0 0 4 2 OP 1 in RAM 1 2 6 Written field in RAM 2 0 0 0 Unwritten field Carry out from most significant digit equal to 0 means no need to ten's complement the result. Equal zero indicator is used to indicate that unwritten field is all zeros and therefore result fits (no overflow) in receiving field. Case 4: DAD (+68) + (-00168) = (-00) with overflow 1 1 1 0 0 Carries out. 0 0 0 Leading zeros from RAM 2 zero multiplexer 90. 6 8 OP 2 in RAM 2. -0 0 1 6 8 OP 1 in RAM 1. 0 0 Written field in RAM 2. 9 9 9 Unwritten field. Carry out of 1 from most significant digit means ten's complement necessary on result. Result of -1OO is longer than the 2 digit receiving field. The unwritten field of all nines did not turn into innocuous leading zeros. Although the equal nine indicator was set to 1 at the end of unwritten field, the written field equal zero indicator of binary ONE was saved at the end of written field processing and this condition can be detected as an overflow condition. The use of commercial instruction logic 28 will now be discussed with reference to the performance of the decimal multiplication software instruction (DML). The DML software instruction is advantageously performed in the preferred embodiment by use of the equal zero indicator of decimal indicators 85 and the ability to access digits from left to right in RAMs 1 and 2 and also from right to left thus providing the decimal multiply routine the ability to access the operands from the most significant to the least significant digit, and from the least significant to the most significant digit. Before examining a specific example of the use of the decimal multiply (DML) software instruction, the overall method used in performing a decimal multiply will be examined with reference to FIG. 12A which shows a prior art method and FIG. 12B which shows the improved method employed in the preferred embodiment. In FIG. 12A, the flow chart shows that a multiplication is basically done by setting a partial product to zero initially and then working through the multiplier one digit at a time starting with the least significant digit (i.e., the unit's position and then working toward the most significant digits) adding the multiplicand to the partial product the number of times that correspond to the value of an isolated multiplier digit. Once the addition of the multiplicand to the partial product has been completed for a given digit, the multiplicand is shifted left one position, which amounts to multiplying it by a decimal 10, and the shifted multiplicand is then added to the partial product the number of times corresponding to the value of the next most significant in the multiplier. This process is continued until all of the digits in the multiplier have been used to add the multiplicand to the partial product. In the flow chart in FIG. 12A which shows the prior art, the multiplication is begun ln the block entitled MUL which is labeled PM00. Block PM00 exits to block PM10 which zeros out the partial product. Block PM10 exits to block PM20 which tests to see whether all digits within the multiplier have been processed and if so exit to block PM80 which indicates that the multiplication is done. If all digits within the multiplier have not been processed, block PM30 is entered and the least significant non-processed digit within the multiplier is isolated. Block PM30 exits to block PM40 which tests whether the isolated digit of the multiplier has been fully processed and if not exits to block PM60 which adds the multiplicand to the partial product. As indicated in the flow chart, this addition of the multiplicand to the partial product, is accomplished by adding each digit of the multiplicand to the partial product one digit at a time by performing decimal additions, this requires the processing that a loop to be performed a number of times there are digits in the multiplicand plus one. After the multiplicand is added to the partial product, block PM70 is entered and the isolated multiplier digit is decremented. Block PM70 then returns to block PM40 which determines whether the decremented value of the isolated multiplier digit is now equal to zero. If the decremented value of the isolated digit is not equal to zero, block PM40 exits to block PM60 which does another addition of the multiplicand to the partial product which in turn enters block PM70 to decrement the isolated multiplier digit again and this loop from PM70 back to PM40 has indicated the performed number of times that the equal to the value of the multiplier digit. When block PM40 finally determines that the isolated digit has been decremented to zero, block PM50 is entered and the multiplicand is shifted one digit position to the left relative to the partial product which amounts to multiplying the multiplicand by 10 such that when the next more significant digit of the multiplier is processed, the multiplicand when it is added to the partial product, will now have a value of ten times what it had when the previous multiplier digit was being processed. Block PM20 is then entered and a test is made to determine whether all digits within the multiplier have been processed. If not, the next more significant digit within the multiplier is isolated in block PM30 and PM30 then exits to block PM40. Block PM40 determines whether the multiplicand has been added to the partial product the number of times corresponding to the value of the isolated multiplier digit and if not exits to block PM60 to add the multiplicand to the partial product and the isolated multiplier digit is decremented in block PM70. When block PM40 determines that the isolated digit has been processed a sufficient number of times such that the multiplicand has been added to the partial product the required number of times, block PM40 exits to block PM50 which again shifts a multiplicand one digit position to the left relative to the partial product which results in again multiplying the multiplicand by 10. This process of isolating digits within the multiplier and adding the multlplicand to the partial product the number of times equal to value of the isolated digit continues such that the loop between block PM50 back to block PM20 is performed the number times there are digits within the multiplier. This process is completed when by block PM20 determines that all multiplier digits have been processed so that the multiplication is done and the process is completed in block PM80. The improved multiplication method employed in the preferred embodiment is illustrated in FIG. 12B. In FIG. 12B, the use of the prior art partial product method described above with reference to FIG. 12A is basically employed with time improving enhancements performed prior to the beginning of the process and the clean up enhancements done at the end of the process which results in significant time savings for multiplications performed on operands having leading zeros in the multiplier or the multiplicand. In FIG. 12B, the multiplication begins in the block entitled MUL and labeled IM00. Block IM00 exits to block IM01 which strips the leading zeros from the multiplier thereby reducing the effective length of the multiplier. Block IM01 then exits to block IM02 which strips the leading zeros from the multiplicand and packs the multiplicand such that if it was unpacked or string data in which each digit occupies a full byte, it is packed such that each digit only occupies a single nibble and the zone nibbles are removed from the multiplicand. In block IM01, the stripping of the leading zeros from the multiplier requires that the loop be performed the number of times that there are leading zeros in the multiplier. Block IM02 is performed the number of times there are digits within the This stripping of the leading zeros from the multiplier and the multiplicand has the effect of reducing the effective length of both the mutliplier and the multiplicand. This is done in order to reduce the number of times that loops within the multiply routine are performed. This stripping saves time because when the operands are presented to the multiply routine, their length is specified as the length of the entire field in whioh the operand resides suoh that a number having a value of 100 occupying a field which is 7 digits in length will have 4 leading zeros followed by 100. The prior art method will process this number by processing each digit within the operand including the four leading zeros. After block IM02 has stripped the leading zeros and packed the multiplicand, it exits to block IM10 which begins the multiplication itself. Blocks IM10, 20, 30, 40, 50, 60, 70 and 80 correspond to blocks PM10, 20, 30, 40, 50, 60, 70 and 80 of the prior art method and work in a similar manner. There is a difference, though, in that by stripping the leading zeros from the multiplier and multiplicand the number of times that loops are performed in the improved method can be significantly reduced depending upon the number of leading zeros. For example, the number of times that IM50 branches back to IM20 is equal to the number of significant digits in the multiplier as opposed to the number of digits in the multiplier which included leading zeros in the prior art method. Similarly, the number of times that the loop within block IM60 is performed is equal to the number of significant multiplicand digits plus 1 as opposed to the prior art method in which the loop was performed the number of multiplicand digits plus 1. Because block IM60 contains a loop which is within the bigger loop of blocks IM20 through 50, this reduction in the number of digits which have to be processed within the multiplicand can be quite significant. The stripping of the leading zeros from the multiplier and the multiplicand, thereby effectively reducing the length of the fields which must be processed, is done prior to the beginning of the multiplication itself because if one attempts to determine whether a digit is equal to zero within a multiplier or multiplicand when working from the least significant digit to the most significant digit, it is impossible to determine whether there is still a non-zero digit to the left (in a more significant position) of an embedded zero. For example, if the number 1001 is being processed, when the ten's position is processed, and the zero is discovered, one cannot simply stop processing at that point because there is a non-zero digit to the left in the one-thousandths position. This non-zero more significant digit is not easily detected in an efficient manner when working from left to right. The improved method in FIG. 12B has one other block which is different from that in the method of FIG. 12A in that block IM20 exits to block IM71 which must then provide leading zeros within the final partial product such that the field which receives the results will have the sufficient number of digits. This is done by block IM71 looping from the most significant digit within the partial product until the end of the resultant field length is reached in the most significant digit position. This loop is performed the number of times there are leading zeros. Block IM71 then exits to block IM80 which is taken when the multiplication is complete. From this discussion of the prior art and the improved method, it should be appreciated that the improved method has a little overhead at the beginning of the multiplication to strip of the leading zeros and a little overhead at the end of the multiplication operation to provide leading zeros in the final product but that this additional overhead is more than compensated for by the great reduction in the number of times nested loops within the multiplier routine must be performed. The stripping of leading zeros in the multiplier multiplicand can become significant when the number of significant digits within the operand field is relatively small as can be the case quite often in software programs. This is the case because when a software program is written, the operand field lengths are specified such that it is sufficiently long to accommodate the maximum length operand such that a field may be required to hold. For example, an operand field may be specified as containing 16 digits when, in fact, for a large portion of the time the number may never exceed 1000 such that the operands will usually have 13 or more leading zeros within them. This happens because it is easier for the software to use fixed length fields for the operands rather than to continually adjust the operand field lengths as a function of the value of the variable stored in that field at any given moment. For example, if an operand value may go to 99,999,999, rather than to attempt to continually adjust the field length of the operand which may range from a value of 0 up to the maximum number and usually has a value of less than 999, the operand is specified to have 8 decimal The implementation of the DML software instruction will now be discussed with reference to the detailed flow charts of FIG. 13 which show its implementation in a preferred embodiment. In the DML software instruction, operand 1 is the multiplier which is used to multiply operand 2 which is the multiplicand and the product is stored in the field previously occupied by operand 1. The DML software instruction will now be explored taking as a specific example as follows: In this example, a multiplicand of -2403 which is operand 2 or OP 2 is multiplied by a multiplier of which is operand 1 or OP 1 to yield a product of -732915 which is stored in the field previously occupied by OP 2. In this example, the multiplicand has 8 leading zeros and the multiplier has 6 leading zeros which will be stripped away before the actual multiplication is performed. Once the final product is calculated, leading zeros will be added to the product before it is stored into main memory. This stripping of the leading zeros from the multiplicand and the multiplier before calculating the product and the addition of leading zeros to the product at the end greatly reduces the number of steps required as was discussed above. In addition, the preferred embodiment also packs unpacked decimal data such that if string data is presented, in which case each digit of an operand requires a single byte in which the left nibble contains a zone field of binary 0011 and the right nibble contains a decimal value, it is packed to eliminate zone nibbles so that the decimal digits may be consecutively addressed in the working area of the commercial instruction logic. This packing of the operand saves having to skip the zone nibbles and results in some operations being performed in a single microinstruction as opposed to requiring two microinstructions which would be necessary in order to skip the zone nibbles. The above example values will now be discussed in more detail with reference to the detail flow charts of the decimal multiply method. For this example, assume the DML softward instruction is at main memory location 1000 as follows: Example DML Software Instruction Memory Location Memory (Hexa- Contents decimal) (Hexadecimal) Meaning 1000 0029 DML op code 1001 E707 data descriptor 1 (DD1) word 1 1002 1102 data descriptor 1 (DD1) word 2 1003 6B07 data descriptor 2 (DD2) word 1 1004 1204 data descriptor 2 (DD2) word 2 Data descriptors DD1 and DD2 are decoded as follows (see FIG. 9): DD1: T = 0: String (unpacked) decimal. C1 = 1: OP 1 starts in right byte. C2,C3 = 11: Trailing sign L = 8: 7 digits and sign. CAS: OP 1 starts in word addressed by contents of base register 7 plus displacement of 1102. If B7 contains the value 1000 hexadecimal, OP 1 is located at address 2102 hexadecimal. DD2: T = 0: String (unpacked) decimal. C1 = O: OP 2 starts in left byte. C2,C3 = 11: Trailing sign. L = 3: 2 digits and sign. CAS: OP 2 starts in word addressed by contents of base register 7 plus the displacement of 1204. Since B7 contains 1000 hexadecimal, OP 2 is located at address 2204 hexadecimal. OP 1, which is a +000305 with a trailing plus sign, appears in main memory as follows: Memory Memory Location Contents (Hexadecimal) (Hexadecimal) 2102 NN30 2105 352B where: N are neighbor nibbles. 30, 33 and with 35 are unpacked decimal digits and zone fields and corresponding to 0, 3 and 5 decimal. 2B is an unpacked trailing plus sign. OP 2, which is -2403 with a trailing minus sign, appears in main memory as follows: Memory Location Memory Address Contents (Hexadecimal) (Hexadecimal) 220A 2DNN where: N are neighbor nibbles which must be preserved when product is stored. 30, 32, 34 and 33 are unpacked decimal digits with zone fields corresponding to 0, 2, 4 and 3 decimal. 2D is an unpacked trailing minus sign. The execution of the above example DML commercial software instruction will now be described with reference to FIG. 13. FIG. 13 is a flow chart of the firmware microroutines used by CPU 20 to execute a DML software instruction. The blocks in FIG. 13 which are referred to by the names next to them, such as DML-001, show at a gross level the functions performed by microprocessor 30 and commercial instruction logic 28 to perform the software instruction. Some of these blocks may represent the execution of more than one 48 or 56-bit microinstruction, the form of which is shown in FIG. 5. Before entering the microroutines shown in FIG. 13, which are peculiar to the DML commercial software instructions, the CPU 20 examines the first word of the software instruction which is being executed to determine the type of operation to be performed. Once it is determined that it is a decimal arithmetic operation as determined by looking at the operation code in the first word of the instruction, the CPU 20 then proceeds to decode the address syllable associated with data descriptor 1 to determine the main memory word address and the position within the word in which operand 1 begins. This front end processing of the software instruction then continues with the microprocessor branching to the DML routine at block DML-000. When the decimal multiply routine is entered at DML-000, it determines whether this is a first pass in which operand 1 is to be brought into the CPU or the second pass in which operand 2 is to be brought into the CPU. If it is the first pass, block DML-000 branches to block DML-001 which fetches operand 1 into RAM 1 one word at a time by bringing it from main memory into the microprocessor, from the microprocessor processor bus 37 into transceivers 97, data-in register 98 and then into RAM 1 81. This process is performed by first loading RAM 1 address counter 75 with the address of the first word which is to be used in RAM 1 and then the loading of the nibble counter in nibble-out control 76. It should be noted that the words of OP 1 are loaded into RAM 1 by using the low four-bit order bits of the main memory address as the four-bit address which is loaded into RAM 1 address counter 75 such that at the end of block DML-001, the contents of RAM 1 are as follows: RAM 1 RAM 1 Contents Location (Hexadecimal) 2 NN30 5 352B Block DML-001 then exits to block DML-002 which then strips off leading zeros of OP 1 as it moves a copy of OP 1 into segment 0 of RAM 2. During a multiply operation, RAM 2 is broken in eight 32-word segments such that segment 0 occupies address 0-1F (hexadecimal), segment 1 occupies addressed 20-3F (hexadecimal) and segment 2 occupies address 40-5F (hexadecimal), etc. This move and strip of leading zero operation is performed by a subroutine that takes advantage of the ability of the RAM 1 and RAM 2 to be addressed from left to right (i.e., from the most significant digit to the least significant digit). The move and strip subroutine is entered with the address and nibble counter of RAM 1 pointing to the most significant digit (nibble) of OP 1 in RAM 1 and with the word counter of RAM 2 pointing to word 0 in segment 0 (i.e., location 0) and the nibble counter pointing to nibble 3. The routine is also entered with a a word of all binary zeros loaded into data-in register 98 and the decimal indicators 85 preset such that the equal zero indicator is a binary ONE. The routine then writes one word of all zeros from data in register 98 into the word pointed to by the address counter of RAM 2 by a CWROP2 microoperation. The routine takes the nibble pointed to in RAM 1 and runs it through decimal ALU 84 and writes it into the right (3rd) nibble of the word pointed to in RAM 2 by use of a microinstruction containing a CWRES2, a CLDFLP, a CRUCT1 and a CINOP2 microoperation. This microinstruction also loads the decimal indicators to indicate if the digit just moved to RAM 2 was a nibble containing a decimal 0 digit and it also increments the nibble center of RAM 1 to point to the next nibble which results in the word counter of RAM 1 being incremented if the nibble counter increments through 3. The routine then checks if OP1 is string data and if so it increments the nibble counter of RAM 1 one more time to skip over the zone nibble of the next least significant digit in RAM 1. The routine then tests to see if the equal zero decimal indicator is still a binary ZERO which means that the digit moved from RAM 1 to RAM 2 was a leading zero digit. If the moved digit was a non-zero digit, the routine increments the address counter of RAM 2 so that it will point to the next word by use of a CIAD01 microoperation and then checks to see if all of OP 1 has been moved to RAM 2. If some of OP 1 remains to be moved, the routine returns to the beginning and writes a word of zeros into RAM 2 and then moves the next digit of OP 1 from RAM 1 into RAM 2. If the moved digit of OP 1 was a zero digit such that the equal zero indicator is still a binary ONE, the routine test if all of OP 1 has been moved and if not it goes back and moves the next digit of OP 1 from RAM 1 to RAM 2 without having to zero out the word pointed to by the RAM 2 address counter because the counter is still pointing to the word which has been previously zeroed out since a non-zero digit has not yet been encountered. The strip and move routine makes use of the integrating nature of the equal zero decimal indicator in that once a non-zero digit is encountered, the address counter is incremented by one as each digit is moved even if it happens to be an embedded zero as is found in the number 1203. When the move and strip routine finishes the move, it returns to block DML-002 which tests the status of the illegal digit indicator to determine if an illegal digit is present in OP 1. The illegal digit indicator was preset to a binary ZERO prior to calling the move routine and since it is also an integrating irdicator, it states at the end of the move will indicate if any illegal (non 0-9 digit) was encountered during the move. If an illegal digit was encountered block DML-002 exits to routine IC otherwise it exits to block DML-003. At the completion of block DML-002 the contents of segment 0 of RAM 2 will have the significant digit of OP 1 in it as follows: RAM 2 RAM 2 Contents Location (Hexadecimal) This format, in which RAM 2 contains only one digit of OP 1 (the multiplier) per 16-bit word right justified, makes the digits easily accessible for entry and use in microprocessor 30 as counters in the multiplier loops later on. Block DML-003 then computes the number of significant digits in the OP 1 which is equal to the number of words occupied in RAM 2 segment 0 by the multiplier after the leading zeros have been stripped. This number of significant digits in the multiplier is used as a counter in the multiplication loop. Block DMA-003 then exits to the instruction front end processing routine which proceeds to crack data descriptor 2 to determine the main memory address of where OP 2 begins in main memory. After cracking DD2, the front end routine then branches on the software instruction operation code in block DML-000 which in turn determines whether this is the pass 1 of pass 2. In this case it is pass 2 so that the microinstructions associated with block DML-004 are executed. In block DML-004 operand 2 is brought into both RAM 1 and segment 1 cf RAM 2 a word at a time from main memory and the sign of OP 2 is determined. The copy of OP 2 in segment 1 of RAM 2 is used primarily to preserve the neighbor nibbles so that when tbe product is written back into main memory in the field previously occupied by OP 2 the neighbors are not destroyed. At the end of block DML-004, the contents of RAM 1 and segment 1 of RAM 2 are as follows: RAM 1 RAM 1 Location Contents (Hexadecimal) (Hexadecimal) A 2DNN RAM 2 RAM 2 Location Contents 10 3030 Segment 1 16 2CNN It should be noted that in contrast to RAM 1 where the operand is loaded into the location which corresponds to the low four order bits of the memory address, the first word of OP 2 is loaded into word 0 of segment 1 of RAM 2 and consecutive words are loaded into locations with increasing addresses. In block DML-005 the sign of the result is calculated by comparing the sign of operand 1 with the sign of operand 2. The resultant sign is positive if the signs of the two operands are the same, negative if the signs of the operands are not the same. In block DML-006, operand 2 is packed into segment 2 of RAM 2. This packing is done by calling in a subroutine which takes the copy of operand 2, which is in RAM 1, and packs it into RAM 2 segment 2. This packing operation also strips off leading zeros. Before performing the packed routine, a test is made in block DML-006 to determine whether operand 1 is equal to 0 and, if so, a branch is taken to the result routine (RES). The strip and pack routine used in block DML-006 is initially called with the address counter and nibble counter of RAM 1 set to point to the most significant digit of the operand 2 which is stored in RAM 1 and with the address counter and nibble counter of RAM 2 to point to word 0 of segment 2 and nibble zero within word 0. The routine then performs a microinstruction which contains a CWRES2, a CLDFLP, a CTUCT1, and a CINOP2 microoperation which results in the nibble within RAM 1 going through decimal ALU 84 through result/zone multiplexer 91 and into the nibble pointed to by the RAM 2 address and nibble counters. The RAM 1 pointers are then updated to point to the next nibble within RAM 2 to the right of the nibble which was just moved and the RAM 2 counters are left unchanged. A test is then made to determine whether or not operand 2 is in the string decimal format (i.e., if it contains zone nibbles) and if so the RAM 1 nibble counter is incremented by one to skip over the zone field and the RAM 1 address counter is incremented by one if the RAM 1 nibble counter increments through 3. A test is then made on the equal zero indicator to see whether all of the digits that have been moved through decimal ALU 84 have been equal to zero digits. If a non-zero digit has been moved through the decimal ALU 84, the nibble pointer in RAM 1 is incremented by one to point to the next nibble to the right. Again, if the nibble counter increments through 3, the corresponding address counter is incremented by one. Here again, use is being made of the integrating nature of the equal zero indicator such that leading zeros will be stripped from the operand when the operand is being packed but, once a non-zero digit is encountered, from then on all digits are moved such that even embedded zeros will be moved. This incrementing of the RAM 2 pointers when a non-zero digit is moved is done by a microinstruction containing a CTUCT2 microoperation. This process is then repeated until the complete field length of operand 2 has been processed. Upon completion of the move of operand 2 from RAM 1 and the stripping of the leading zeros and packing it in RAM 2 a test is then made to determine whether operand 2 is equal to zero by testing the equal zero indicator. If the equal zero indicator is still set indicating that all digits were zero, a branch is taken to the result routine (RES) because the answer is now known because one of the operands is equal to zero. A further test is made to determine whether operand 2 contained an illegal character for a digit and, if so, a branch is taken to illegal character (Iroutine IC). Upon the completion of block DML-006, the contents of segment 2 of RAM 0 are as follows: RAM 2 RAM 2 Location Contents (Hexadecimal) (Hexadecimal) 20 2403 Segment 2 OP 2 stripped and packed. Block DML-006 also calculates the number of significant digits in operand 2 which is equal to the number of digits of operand 2 stored in RAM 2 segment 2. Block DML-007 is then entered to move the packed copy of operand 2 from RAM 2 to RAM 1. This moved is performed a word at a time by reading a word from RAM 2 into RAM 2 data-in register 88 and back into data-in register 98 and from there into RAM 1. This move is performed by a microinstruction containing a CWROP1, a CIAD01, and a CIAD02 microoperation which results in the writing of RAM 1 and the incrementing of the address counters of RAM 1 and RAM 2 after the read and write have been performed. Before the move loop is initiated, both the RAM 1 address counter and the RAM 2 address counter were loaded with addresses to point to the first word of RAM 1 (word 0) and RAM 2 address counter was loaded to point to the first word of segment 2 (word 20). After the packed version of operand 2 in RAM 2 segment 2 has been moved to RAM 1, the contents of RAM 1 are as follows: RAM 1 RAM 1 Location Contents (Hexadecimal) (Hexadecimal) 0 2403 OP 2 stripped and packed. Block DML-008 is then entered which zeroes out the partial product field prior to initializing the multiplication loops. The partial product is stored in RAM 2 segment 2. This zeroing out of segment 2 of RAM 2 is performed by initializing the RAM 2 address counter to point to the first word of segment 2 (i.e., word 20) and then loading a word of all zeros into data-in register 98 and performing a microinstruction containing a CWROP2 and a CIAD02 microoperation which results in the writing of a word into RAM 2 after which the RAM 2 address counter is incremented by one. This loop is continued until 8 words of segment 2 in RAM 2 have been written into. At the end of block DML-008 the contents of RAM 1 and RAM 2 are as follows: RAM 1 RAM 1 Location Contents (Hexadecimal) (Hexadecimal) 0 2403 Packed version of 1 XXXX OP 2 multiplicand RAM 2 RAM 2 Location Contents 0 0003 Segment 0 1 0000 Significant digits 2 0005 OP 1 multiplier 3 XXXX 1 digit/word. 10 3030 Segment 1 11 3030 Main memory image 12 3030 of OP 2 multiplicand. 13 3030 Preserved or neighbor 14 3234 nibbles. Product will 15 3033 be unpacked into this 16 2CNN area and filled with 17 XXXX leading zeros. 20 0000 Segment 2 21 0000 Partial product . . . . . . of 32 packed zeros 28 0000 ready for multiplica- 29 XXXX tion. As can be seen above, the partial product area of segment 2 has been packed with 32 zeros which only requires the use of 8 words. The 32 zeros are used because this is the maximum length of any decimal number in the preferred embodiment. The multiply loop itself begins at the entry point labeled MULT at block DML-009. Block DML-009 gets the current multiplier digit from RAM 2 segment 0 and stores it into a register in microprocessor 30 where it can be decremented down each time the multiplicand is added to the partial product. Initially, because the multiplication is performed by using the multiplier digits from right to left, that is, the multiplication begins using the least significant (the unit's position) digit, the first time through block DML-009 loads word 2 from word 2 RAM 2 segment 2 into microprocessor 30. Block DML-009 then exits to block DML-010 which branches depending upon the value of the current multiplier digit which is being worked on. This current multiplier digit value is decremented down by one each time the multiplicand is added to the partial product. As long as the decremented value of the multiplier digit is not equal to zero, block DML-010 exits to block DML-011 and begins the addition of the multiplicand to the partial product. When the decremented value of the current multiplier digit is equal to zero, block DML-010 exits to block DML-017. Initially, because the unit's position of the multiplier in the example contains a multiplier digit of 5, it is not equal to zero and block DML-010 branches to block DML-011. In block DML-011 the multiplicand is added to the partial product. To do this, the multiplicand which is stored as the packed version of operand 2 in RAM 1 is added to the partial product which is stored in segment 2 of RAM 2. This addition is done by adding each digit of the multiplicand in RAM 1 to its corresponding digit within the partial product in RAM 2 with the addition being performed by adding the unit's position of the multiplicand to the adjusted unit's position in the partial product and then adding the ten's digit in the multiplicand to the adjusted ten's position in the partial product. This addition loop adding digit by digit of multiplicand to partial product is done the number of times that there are significant digits in the multiplicand and therefore the stripping of leading zeros done initially from the multiplicand reduces the number of times that this loop must be done. The initial unit's position within the partial product of RAM 2 segment 2 which is initialized to all zeros is initially set at the nibble which corresponds to the Lth nibble in segment 2 of RAM 2 where L is equal to the number of digits not including the sign in operand 2 because the operand 2 field will be used to hold the final product. In the example, there are 12 digits in operand 2 and therefore the 12th nibble in RAM 2 segment 2 is the unit's position in the partial product such that the unit's position is contained in word 22 nibble 3. Thus, before the first addition is done within block DML-011, the pointers in RAM 1 and RAM 2 are as shown below in which the address counter of RAM 1 points to word 0 and the nibble counter of RAM 1 points to nibble 3 such that the unit's digit of 3 will be processed and the pointers to the partial duct in RAM 2 are set to point to the unit's position in RAM 2 segment 2 such that the word counter 2 points to word 22 and the nibble counter points to nibble 3. RAM 1 RAM 1 Location Contents (Hexadecimal) (Hexadecimal) WP1 = 0 NP1 = 3 RAM 2 RAM 2 Location Contents WP2 = 22 NP2 = 3 This addition is accomplished by block DML-011 executing a microinstruction containing a CWRES2, a CTDCT1, a CTDCT2, and a CDLFLP microoperation such that one nibble from RAM 1 is added to one nibble from RAM 2 and written back into RAM 2 with the decimal indicators set to indicate the result of the addition. At the end of the addition microinstruction, the counters in RAM 1 are decremented as are the counters in RAM 2 to point to the next most significant digit within the multiplicand and partial product respectively. This loop is then done the number of times corresponding to the number of significant digits in the multiplicand so that each digit within the multiplicand is added to the partial product. In the example, this results in the four digits of the multiplicand being added to the partial product such that at the end of the loop, the partial product will contain the following: RAM 2 RAM 2 Location Contents (Hexadecimal) (Hexadecimal) 20 0000 Segment 2 WP2 = 2 NP2 = 3 Block DML-011 then exits to block DML-012 which determines whether there is sufficient room to expand the partial product value. This test determines whether the adjusted unit's position in the partial product is equal to the adjusted number of significant digits being used in the multiplicand. If the adjusted unit's position is equal to the adjusted number of significant digits in the multiplicand, then there is no room to expand the partial product to accommodate any possible carry-out of the addition of the most significant in the multiplicand to the most significant digit in the partial product. If there is no room to expand the partial product, then DML-012 exits to block DML-013 which determines whether there is a need to expand the partial product. This is done by determining whether there is a carry-out of the addition of the last digit of the multiplicand with the last digit of the partial product (i.e., the most significant digit of each). If there is no carry-out (the carry-out indicator of decimal indicators 85 is equal to 0) there is no overflow and block DML-016 is entered. If there was a carry-out of the most significant digit, then an overflow condition exists and block DML-014 sets an overflow flag before exiting to block DML-016. If block DML-012 determines that there is room to expand the partial product, then the one more digit addition is performed by using a microinstruction containing a CINOP1 and a CWRES2 microoperation which results in the next digit within the partial product being added to a zero from RAM 1 zero multiplexer 82 with the carry-out from the previous digit being added as the carry-in in this addition. Block DML-015 then exits to block DML-016 which decrements the current multiplier digit before returning to block DML-010 for the next multiplier digit. Block DML-010 then tests the decremented value of the current multiplier and depending upon whether it is equal to zero or non-zero exits to block DML-011 or DML-017. Because the unit's position of the multiplier contains a decimal 5, the branch to block DML-011 would be taken 5 times adding the multiplicand into the partial product 5 times at the end of which the partial product in RAM 2 segment 2 will be as follows: RAM 2 RAM 2 Location Contents (Hexadecimal) (Hexadecimal) 20 0000 Segment 2 This is the equivalent of 5 times the multiplicand. When the current multiplier digit is exhausted in block DML-010, it exits to block DML-017. The pointer to the current multiplier digit is decremented to point to the next more significant digit in the multiplier which, the first time this block is entered, results in the pointer being changed from the unit's position to the ten's position. Block DML-017 then exits by testing to see whether there are more multiplier digits to be processed and, if not, exits to block DML-022. If there are more digits to be processed in the multiplier, block DML-017 exits to block DML-018 which shifts the multiplicand left one digit (i.e., equivalent to multiplying it by 10) relative to the partial product. This shifting of the multiplicand relative to the partial product is actually done by shifting the pointer to the unit's position in the partial product to point to the next digit to the left such that the unit's position in the multiplicand will now be added to the adjusted unit's position in the partial product. The first time block DML-018 is executed, this amounts to changing the pointers to the partial product in RAM 2 segment 2 such that the unit's position is no longer considered to be in word 22 nibble 3, but is now considered to be in word 22 nibble 2, such that the unit's position of the multiplicand will now be added to the ten's position and the partial product. Block DML-018 then exits to block DML-020 if this shifting of the starting position in the partial product does not result in the starting position being shifted out of the most significant digit of the partial product (i.e., out of word 20 nibble 0). If the shift results in the starting position in the partial product being shifted out of the most significant digit of the partial product, block DML-019 is entered and then overflow condition is flagged. However, if the starting position in the partial product has not been shifted out of the most significant digit of the partial product, a test is then made in block DML-020 to see whether there are sufficient number of remaining digits between the most significart digit and the starting position in the partial product to accommodate the adjusted length of the multiplicand. If, for example, three are only three digits between the most significant digit and the starting digit in the partial product and the multiplicand is 4 digits long, then there are not sufficient digits in the partial product to accommodate the addition of the full multiplicand and block DML-020 then adjusts the length of the multiplicand so that only the number of digits of the multiplicand will be added to the partial product. By doing this, truncation takes place as the multiply is done. This shortening of the multiplicand which amounts to adjusting its length is done in block DML-021 if necessary. After the multiplicand is shifted relative to the partial product, the exit is taken to point MULT in which the multiplicand is then added to the partial product the number of times corresponding to the value of the current multiplier digit beginning with block DML-009. In the current example in which the ten's position of the multiplier is a zero, the branch performed in block DML-010 will immediatelY result in the exit being taken to block DML-017 and the hundredths position of the multiplier will then be processed. This will result in block DML-018 being entered which will again shift the multiplicand to the left relative to the partial product and the hundredths digit of the multiplier will be processed by eventually returning via point MULT to block DML-009. At this point, when the hundredths position of the multiplier is processed, the pointers in the partial product starting position will point to the hundredths position in the partial product such that the unit's position in the multiplicand will be added to the hundredths position in the partial product. The pointers to the partial product in RAM 2 segment 2 will be as follows: RAM 2 RAM 2 Location Contents (Hexadecimal) (Hexadecimal) WP2= 22 NP2= 1 After the hundredth's digit of the multiplier has been processed by performing the loop of block DML-010 through block DML-016 three times, the current multiplier digit will have been decremented to zero and block DML-017 will be entered with the partial product then being as follows: RAM 2 RAM 2 Location Contents (Hexadecimal) (Hexadecimal) 20 0000 Segment 2 Block DML-017 then tests to see whether all multiplier digits have been processed. In this case, because there is only three multiplier digits to be processed, it exits to block DML-022. It should be noted at this point that by only processing the significant digits in the multiplier, only three digits must be processed and the leading zeros need not be processed. In this case, the three leading zeros in the multiplier need not be processed and this saves considerable time by not having to perform blocks DML-009 through DML-021 for these leading zeros. Block DML-022 is entered after the multiplication process has been completed and the partial product fully developed in segment 2 of RAM 2. Block DML-022 then gets the decimal indicators 85 into microprocessor 30 via monitor multiplexer 80 and monitor logic 22. It also clears the commercial software instruction overflow (OV), sign fault (SF), greater than (G), and less than (L) indicators. Block DML-022 then branches to block DML-023 if an overflow was detected during the multiplication process or branches to block DML-025 if no overflow occurred during the multiplicatlon. If an overflow was detected, block DML-023 is entered and a branch is taken depending upon whether the overflow trapping is enabled or not. If overflow trapping is enabled, block DML-024 is entered and the commercial software indicators are stored with overflow indicator set. Block DML-024 then branches to the overflow trap routine which processes the overflow trap. If the trapping is not enabled on overflow block DML-023 branches to block DML-025. Block DML-025 sets the G and L indicators depending upon the sign of the product. Block DML-026 is then entered and the partial product in RAM 2 segment 2 is transferred to RAM 1 one word at a time. Block DML-027 is then entered and the product in RAM 1 is then unpacked into the original operand 2 field adding the necessary zone field nibbles if the operand 2 is string decimal type. This unpacking of the product from RAM 1 into segment 1 in RAM 2 is done to both add the necessary zone nibbles, if it is string decimal data, and to keep intact the original neighboring nibbles that may occupy words with the most significant and least significant digits of operand 2. At the end of block DML-027, RAM 1 which contains the packed product and RAM 2 segment 1 which contains the final product in the decimal format of the original operand 2 are as follows for the example: RAM 1 RAM 1 Location Contents (Hexadecimal) (Hexadecimal) 0 0000 Product in packed 1 0073 format RAM 2 RAM 2 Location Contents 10 3030 Segment 1 12 3030 Unpacked product 13 3030 in string decimal 14 3733 format (i.e., with 15 3239 zone nibbles). 17 2DNN After moving the product to segment 1, block DML-027 exits to block DML-028 which fixes the sign of the product in RAM 2 segment 1 depending upon whether the sign is negative or positive, which is a function of whether the signs of operand 1 and operand 2 are like or unlike. Block DML-029 is entered and the commercial software instruction indicators are then set. Block DML-030 is then entered which moves the result from RAM 2 segment 1 back via transceivers 97 79 into microprocessor 30 and from there into main memory as specified by data descriptor 2. Block DML-030 then exits to the FETCH routine which fetches the next software instruction and begins its processing. From the above discussion of the multiplication routine, it can be appreciated that the stripping of the leading zeros from the multiplier and the multiplicand saves significant time within the multiplication routine and that by packing the multiplicand, time is also saved in that zone field nibbles do not have to be skipped during the computation of the partial product. The decimal divide operation performed by the CPU of the preferred embodiment will now be described. One method of performing a decimal divide which is well known in the art is to initialize a quotient to zero and then continually subtract the denominator from the numerator using decimal subtract operations until the difference goes negative and to increment the quotient by one each time the denominator is successfully subtracted from the numerator before the difference goes negative. When this method is used, the unit's position of the denominator is lined up with the numerator and the subtractions are performed with as many digits being processed in each subtraction as there are digits in the longer of the operands, be it the numerator or the denominator. This process, although it works, can be very slow if the denominator goes into the numerator many times. A better method of performing a decimal divide is shown in the flow chart of FIG. 14A which is a diagram of an improved prior art method. In this method, the quotient is developed by first developing the most significant digit of the quotient and then developing lesser significant digits. This is done by lining up the most significant digit in the denominator with the most significant digit in the numerator and doing a decimal subtract. If the result of the subtract is not negative, the present quotient digit is incremented by one and the subtract is performed until such time as the difference between the aligned denominator and the (convert partial numerator) numerator becomes negative at which time the denominator is added back into the difference (partial numerator) and the quotient digit is stored away. Then, the denominator is shifted one decimal digit position to the right and the shifted denominator is then subtracted from the partial numerator until such time as the difference (new current partial numerator) becomes negative at which time the new quotient digit is stored away and the partial numerator is again made positive by adding back in the denominator before the denominator is again shifted one position to the right and a new quotient digit is developed by subtracting the shifted denominator from the partial numerator until such time as the partial numerator becomes negative. This process is continued until the unit's position of the partial numerator is aligned with the unit's position of the denominator at which time all quotient digits have been developed and the remainder is the final difference. This process greatly speeds up the division over the non-aligned method previously discussed in which the denominator is simply subtracted from the numerator until the difference becomes negative. An example of this prior art method is shown in Table 16 in which a denominator of 0043 is divided into a numerator of 00006402. As can be seen, the quotient digits are developed from the most significant digit (Q1) to the least significant digit (Q5). This division results in a quotient of 000148 and a remainder of 38. TABLE 16 Prior Art Divide of 00006402 by 0043 00006402 Q1=0 -9997 Q1=1 00006 Q1=0, Q2=0 -99963 Q2=1 000064 Q2=0, Q3=0 000021 Q3=1 -999978 Q3=2 0000210 Q3=1, Q4=0 0000167 Q4=1 0000124 Q4=2 00000382 Q4=4, Q5=0 00000339 Q5=1 38 Q5=8 Quotient = 00148 Remainder = 38 An improvement on this method is shown in the flow chart of FIG. 14B which is very similar to the flow chart of 14A except that the leading zeros of the numerator and the denominator are stripped away so that the alignment of the numerator and the denominator is done on the most significant non-zero digit as opposed to the most significant digit in the numerator's and denominator's fields as was done in the method of FIG. 14A. The results of stripping away the leading zeros in the numerator and denominator is to further increase the speed by which the division can be done because the leading zero digits need not be processed when performing the subtraction operations. An example of this method is shown in Table 17 using the same denominators and numerators as used above in Table TABLE 17 Improved Divide 6402 Q1=0 21 Q1=1 -78 Q1=2 210 Q1=1, Q2=0 167 Q2=1 124 Q2=2 382 Q2=4, Q3=0 339 Q3=1 -995 Q3=9 038 Q3=8 Quotient = 148 Remainder = 38 The improved method of FIG. 14B also improves the speed by packing the numerator and denominator so that zone field nibbles are eliminated so that at the completion of developing the quotient and remainder the quotient and remainder must have leading zeros supplied as necessary as well as necessary zone field nibbles if the quotient or remainder are in the string decimal format. A specific example will now be discussed with the detailed flow charts of the method in FIG. 15. In the preferred embodiment, the decimal divide commercial software instruction takes operand 1 as the denominator and divides it into operand 2, the numerator, and stores the quotient in a field pointed to by data descriptor 3 and stores and remainder back into the field previously occupied by operand 2 which is pointed to by data descriptor 2. In the following example, a numerator of -0006402 will be divided by a denominator of -0043 to produce a quotient in a field designated as containing 6 digits plus a sign such that it wlll produce a quotient of +000148 and the remainder will go back into the field previously occupied by operand 2 such that the remainder will equal For this example, assume the DDV software instruction is at main memory location 1000 as follows: Example DDV Software Instruction: Memory Memory Location Contents Addressing (Hexa- (Hexadecimal) decimal) Meaning 1000 002B DDV op code 1001 E507 data descriptor 1 (DD1) word 1 1002 1102 data descriptor 1 (DD1) word 2 1003 6907 data descriptor 2 (DD2) word 1 1004 1204 data descriptor 2 (DD2) word 2 1005 6707 data descriptor 3 (DD3) word 1 1006 1306 data descriptor 3 (DD3) word 2 Data descriptors DD1, DD2, DD2 are decoded as follows (see FIG. 9): DD1: T=0: String (unpacked) decimal. C1=1: OP 1 starts in right byte. C2,C3=11: Trailing sign L=5: 4 digits and sign. CAS: OP 1 starts in word addressed by contents of base register 7 plus displacement of 1102. If B7 contains the value 1000 hexadecimal, OP 1 is located at address 2102 hexadecimal. DD2: T=0: String (unpacked) decimal. C1=0: OP 2 starts in left byte. C2,C3=11: Trailing sign. L=9: 8 digits and a sign. CAS: OP 2 starts in word addressed by contents of base register 7 plus the displacement of 1204. Since B7 contains 1000 hexadecimal, OP 2 is located at address 2204 hexadecimal. DD3: T=0: String (unpacked) decimal. C1=0: OP 3 starts in left byte. C2,C3=11: Trailing sign. L=7: 6 digits and a sign. CAS: OP 3 starts in word addressed by contents of base register 7 plus the displacement of 1306. Since B7 contains 1000 hexadecimal, OP 3 is located at address 2306 hexadecimal. OP 1, which is a -0043 with a trailing minus sign, appears in main memory as follows: Memory Memory Location Contents (Hexadecimal) (Hexadecimal) 2102 NN30 2104 332D where: N are neighbor nibbles. 30, 30, 34 and 33 are unpacked decimal digits of 0, 0, 4 and 3 respectively. 2D is a trailing minus sign. OP 2, which is -00006402 with a trailing minus sign, appears in main memory as follows: Memory Memory Location Contents (Hexadecimal) (Hexadecimal) 2204 NN30 2208 322D where: N are neighbor nibbles which must be preserved when the remainder is stored. 30, 30, 30, 30, 36, 34, 30 and 32 are unpacked decimal digit of 0, 0, 0, 0, 6, 4, 0 and 2 respectively. 2D is an unpacked trailing minus sign. OP 3, which is to receive the quotient, appears in main memory as follows: Memory Location Memory Address Contents (Hexadecimal) (Hexadecimal) 2306 XXXX 2307 XXXX 2308 XXXX 2309 SSNN where: X are don't care nibbles to be overlayed by quotient digits. S are don't care nibbles to be overlayed by quotient sign. N are neighbor nibbles which must be preserved. The execution of the above example, DDV commercial software instruction, will now be described with reference to FIG. 15. FIG. 15 is a flow chart of the firmware microroutines used by CPU 20 to execute a DDV software instruction. The blocks in FIG. 15 which are referred to by the names next to them, such as DDV-001, show at a gross level the functions performed by microprocessor 30 and commercial instruction logic 28 to perform the software instruction. Some of these blocks may represent the execution of more than one 48 or 56-bit microinstruction, the form of which is shown in FIG. 5. Before entering the microroutines shown in FIG. 15, which are peculiar to the DDV commercial software instructions, the CPU 20 examines the first word of the software instruction which is being executed to determine the type of operation to be performed. Once it is determined that it is a decimal arithmetic operation as determined by looking at the operation code in the first word of the instruction, the CPU 20 then proceeds to decode the address syllable associated with data descriptor 1 to determine the main memory word address and the position within the word in which operand 1 begins. This front end processing of the software instruction then continues with the microprocessor branching to the DDV routine at block DDV-000. When the decimal divide routine is entered at DDV-000, it determines whether this is a first pass in which operand 1 is to be brought into the CPU or the second pass in which operand 2 is to be brought into the CPU. If it is the first pass, the firmware then branches to block DDV-001 which fetches operand 1 (the denominator) into RAM 1 as previously descrlbed for the decimal add instruction. At the end of block DDV-001, the contents of RAM 1 are as follows: RAM 1 RAM 1 Contents Location (Hexadecimal) 2 NN30 4 332D Block DDV-001 then exits tc block DDV-002 which then strips off leading zeros of OP 1 as it moves a copy of OP 1 into segment 2 of RAM 2. This move and strip of leading zero operation is performed by the subroutine described earlier in the DML instruction. At the completion of the DDV-002 block, the contents of RAM 2 segment 2 will have OP 1 packed with leading zero stripped and zone field nibble eliminated as follows: RAM 2 RAM 2 Location Contents (Hexadecimal) (Hexadecimal) 40 43XX OP 1 packed After packing the denominator into RAM 2 segment 2 from the copy in RAM 1, block DDV-002 then exits to routine IC if an illegal character was discovered during the move operation or exits to the instruction front end if operand 1 is not equal to zero to prepare to bring in operand 2 or it exits to block DDV-003 if operand 1 is equal to zero indicating that a divide by zero would be attempted. Block DDV-003 sets the commercial instruction overflow indicator and exits to the divide by zero trap routine which causes a trap. If an illegal character was not discovered in operand 1 and if the operand is not equal to zero, the instruction front end routine eventually returns to block DDV-000 to perform the second pass in which case blcck DDV-004 is entered. Block DDV-004 tests whether an immediate operand has been specified within the divided software instruction for the operand which is to be processed next. If an immediate operand has been specified for either operand 2 or operand 3, a branch is taken to a routine IS which is an illegal specification routine which will cause a software trap because it is illegal to specify an immediate operand for an operand in which data is to be stored. Operand 2 will have the remainder stored in it and operand 3 will have the quotient stored in it. Therefore, neither operand 2 nor operand 3 may be immediate operations. If the next operand has not been specified to be an immediate operand, block DDV-004 branches to block DDV-006 to get the numerator if it is the second pass or it branches to block DDV-005 to bring in operand 3 if it is the third pass so that the neighbors may be saved when the quotient is stored into that field. During the second pass, block DDV-006 is entered and operand 2 which is the numerator is brought into RAM 1 and also into segment 1 of RAM 2 such that at the end of block DDV-006, RAM 1 and RAM 2 segment 1 contain the numerator as follows: RAM 1 RAM 1 Location Contents (Hexadecimal) (Hexadecimal) 4 NN30 OP 2 - numerator 8 322D RAM 2 RAM 2 Location Contents 20 NN30 Segment 1 21 3030 OP 2 - numerator 24 322D Block DDV-007 is then entered and a test of the sign of operand 1 and operand 2 is made to calculate the resultant quotient sign which will be positive if the signs are equal, and will be negative if the signs are not equal, and the resultant quotient sign flag is put into segment 0 word 0 of RAM 2 as follows: RAM 2 RAM 2 Location Contents (Hexadecimal) (Hexadecimal) 0 0000 Segment 0 word 0 Sign flag of quotient: Block DDV-008 is then entered and the operand 2 is then moved to RAM 2 segment 3 stripping the leading zeros and removing the zone field nibbles if it is a string operand by copying the copy of operand 2 in RAM 1 into RAM 2 such that at the end of block DDV-008 the contents of RAM 2 segment 3 are as follows: RAM 2 RAM 2 Location Contents (Hexadecimal) (Hexadecimal) 60 6402 Segment 3 OP 2 numerator packed with leading zeros stripped. Block DDV-008 exits to routine IC if an illegal character was discovered while moving operand 2 and exits to a ZERO PREP routine if operand 2 is equal to zero. A zero numerator yields a zero quotient and a zero remainder such that the length of the quotient that will be stored in the field specified by data descriptor 3 is set equal to zero and the length of the remainder which will be stored in the field specified by data descriptor 2 is set equal to zero. If operand 2 is not equal to zero as is determined by the zero indicator from decimal indicators 85 not being in the equal zero state, then a test is made to compare the length of the significant digits in operand 2 which is the numerator to the number of significant digits in operand 1 which is the denominator to determine whether just on the basis of the number of non-zero digits in the operands whether the quotient will be greater than zero. If the number of non-zero digits in operand 2 is greater than the number of non-zero digits in operand 1, then an improper fraction will result such that there will be a non-zero quotient and block DDV-009 is entered. If the length of the non-zero field of the operand 2 numerator, is less than the length of the non-zero field in the operand 1 denominator, then a proper fraction will result and it is known that the quotient will be a zero such that the length of the quotient is set to zero and the length of the remainder is set to the length of the data descriptor 2 because operand 2 will be the remainder. In this case of a proper fraction, the routine then branches to the ZERO PREP routine. If an improper fraction will result and block DDV-009 is entered, it then transfers operand 1 from RAM 2 segment to RAM 1 such that RAM 1 now contains the stripped and packed version of the denominator as follows: RAM 1 RAM 1 Location Contents (Hexadecimal) (Hexadecimal) 0 43XX OP 1 denominator packed and stripped. Block DDV-009 then transfers to block DDV-010. Block DDV-010 and block DDV-011 free up some of the registers within microprocessor 30 by storing information that is needed later in the divide operation into unused areas of RAM 2. Block DDV-012 then determines the number of expected digits in the quotient. This is done by taking the number of significant digits of the operand 2 which is the numerator and subtracting from it the number of significant digits of operand 1 which is the denominator and then adding one to the difference. In this case, the expected number of quotient digits is equal to four (which is the length of the non-zero field of operand 2) minus 2 (which is the length of the non-zero field of operand 1) plus 1 which is equal to 3. Block DDV-013 then sets up the pointers to the version of operand 2 which is to be used during the divide. If operand 2 is string data, the packed version found in RAM 2 segment 3 is used and if operand 2 was packed data then the original version found in RAM 2 segment 1 is used. After setting up the pointers to operand 2, block DDV-014 is entered which sets the current quotient digit equal to 0. In this case, the first time this block is entered, the most significant digit of the quotient is being set equal to zero. This quotient digit is kept in a register within microprocessor 30 which is incremented each time the subtraction loop that follows is done. Block DDV-015 is then entered which initializes the address counter and nibble counter of RAM 1 to point to the unit's position of the operand 1 denominator which is stored in RAM 1 such that the address counter will point to word 0 and the nibble pointer will point to nibble 1 which contains the decimal 3 digit which is the unit's position of the denominator. Block DDV-016 is then entered and the address counter and nibble counter of RAM 2 are set up to point to the current subtract starting position of operand 2 in RAM 2 which is used as the current partial numerator. In the example case this amounts to pointing to word 0 of segment 3 which is word 60 in RAM 2 and to pointing to the digit 4 which is in nibble 1 of word 60. Block DDV-017 is then entered and subtracts the denominator in RAM 1 from the current partial numerator in RAM 2 one digit at a time with the number of digits being subtracted equal to the non-zero field length of the denominator. The first time through block DDV-017, this amounts to subtracting the unit's digit of the denominator which contains a decimal 3 from the hundredths position in the numerator which contains a decimal 4 to produce a decimal 1 which replaces the hundredths digit in the partial numerator in RAM 2. The second time through the loop, the ten's position of the denominator which contains a decimal 4 is subtracted from the thousandths position in the numerator which contains a decimal 6 to produce a decimal 2 and no carry-out. Decimal 2 is stored into the one-thousandth's position in the partial numerator replacing the decimal 6. This subtraction of two digits then completes the subtraction of the denominator on a digit by digit basis from the partial numerator and block DDV-017 then exits to block DDV-019 because the first quotient digit is being worked on (i.e., the most significant digit of the quotient). For the second and subsequent quotient digits, block DDV-017 exits to block DDV-018 which subtracts a zero from the next digit to the left in the numerator using the carry-out from the previous subtraction to accommodate a possible carry-out from subtracting the most significant digit of the denominator. This subtraction of individual denominator digits from numerator digits is performed by a microinstruction containing a CIPSUB, a CWRES2, a CLDFLP, a CTDCT1 and a CTDCT2 microoperation such that a digit from RAM 1 is subtracted from a digit from RAM 2 and the result in written back into RAM 2 and the counters to the denominator in RAM 1 are decremented to point to the next more significant digit in the denominator and the counters in RAM 2 are decremented to point to the next more significant digit in the partial numerator. This subtraction of a zero from the next more significant digit within the partial numerator is accomplished by a microinstruction having a CIPSUB, a CWRES2, a CINOP1 and a CLDFLP microoperaiton which results in inhibiting the output of RAM 1 and selecting a zero from RAM 1 zero multiplexer 82 with this zero being subtracted from the digit from RAM 2 using the carry from the previous digit and writing the result back into RAM 2 while updating the decimal indicators 85. Block DDV-019 is then entered and the current quotient digit is incremented by one. The current quotient digit is kept in a a register in microprocessor 30 and incremented by one and is only transferred back to the commercial instruction logic after the value of the current quotient digit has been finally determined. Block DDV-019 then exits to block DDV-029 if the result of the subtracting the denominator from the partial numerator is equal to zero as determined by the equal zero indicator of decimal indicators 85. Block DDV-019 exits to block DDV-020 if the equal zero indicator does not indicate that the result is equal to zero and there was a carry-out of the most significant digit as indicated by the carry-out decimal indicator 85. If the result is not equal to zero and there was no carry-out then block DDV-019 goes to block DDV-015. Block DDV-015 is then entered to set the address counter and nibble counter of RAM 1 to point back to the unit's position of the denominator which is stored in RAM 1. Block DDV-016 is then entered and the address counter and nibble counter of RAM 2 are then adjusted back to the current subtract starting position so that they point to the least significant digit in the field within the partial numerator which is currently being worked on. Thus, at the end of block DDV-016 the pointers in the numerator and the denominator have been reset so that another subtract loop can be performed because the previous subtract loop did not yield a zero result and the result did not go negative. Block DDV-016 then returns to block DDV-017 which will subtract the denominator from the current subtract field within the current partial numerator. At the beginning of the second time through this subtract loop, the contents of RAM 1 and RAM 2 are as follows: RAM 1 RAM 1 Location Contents (Hexadecimal) (Hexadecimal) 0 43XX OP 2 - Denominator WP1 = 0 NP1 = 1 RAM 2 RAM 2 Location Contents 60 2102 Segment 3 OP 1=Partial numerator WP2 = 60 NP2 = 1 Before the subtract begins, the current subtract field being worked on within the partial numerator contains the result of the previous subtraction of the denominator from the partial numerator. After performing the subtract on the entire length of the denominator, block DDV-019 is entered and the current quotient digit of 1 is incremented 2 which is equal to the number of times that the denominator has been subtracted. Block DDV-019 then exits to block DDV-020 because the result is not equal to zero and there was a carry-out indicating that the denominator has been subtracted from the partial numerator one more time than the denominator will go into the partial numerator. Block DDV-020 then decrements the current quotient digit which makes it go from the value of 2 to the value of 1 to adjust it for this oversubtraction. Block DDV-021 is then entered and the RAM 1 pointers are adjusted to point to the unit's position of the denominator and the RAM 2 pointers are adjusted to point to the current subtract starting position of the partial numerator. Block DDV-022 is then entered to add back the denominator to the partial numerator to compensate for the oversubtraction. Block DDV-022 accomplishes this add by performing a microinstruction containing a CWRES2, a CLDFLD, a CDCT1 and a CDCT2 microoperation in a loop which results in adding each digit of the denominator to its corresponding digit in the partial numerator and placing the resultant digit back into the partial numerator stored in RAM 2. After adding the entire length of the denominator to the partial numerator, block DDV-022 then exits to block DDV-023 if it is not the first quotient digit which is being worked. Block DDV-023 adds one more digit by adding a zero with carry from the previous digit to the next more significant digit in the partial numerator field to handle the case of a carry-out from the previous digit. If it is not the first quotient digit, block DDV-022 exits to block DDV-024 which decrements the current quotient digit by one to adjust for the oversubtraction. Thus, the current quotient digit is adjusted from 2 to 1 the first time through this block. After adjusting the current quotient digit, block DDV-024 exits to block DDV-025 if there are more quotient digits to be determined or block DDV-026 if this was the last quotient digit to be determined (i.e., the least significant digit of the quotient). In the example case, at this point block DDV-024 would exit to block DDV-025 which sets up point of the address counter of RAM 2 to point to the position in segment 0 where the quotient digits are to be stored. Block DDV-025 then exits to block DDV-033 which store the current quotient digit into segment 0. In the example, this results in storing the first quotient digit in word 1 of segment 0 of RAM 2. This quotient digit has a value of decimal 1 such that at this point segment 0 word 0 contains the sign of the result indicator as stored earlier and the first quotient digit which is equal to a decimal 1 as follows: RAM 2 RAM 2 Location Contents (Hexadecimal) (Hexadecimal) Segment 0 0 0000 OP 3-quotient sign flag 1 0001 1st quotient digit Block DDV-034 then increments the quotient pointer to point to the next word in segment 0 for storing the next quotient digit when it is determined. Block DDV-035 then adjusts the subtract starting position within the partial numerator to the right by one decimal digit. Block DDV-035 then exits to block DDV-014 which sets the current quotient digit, which is a counter within microprocessor 30, equal to 0. Block DDV-015 and DDV-016 then set up their pointers to the unit's position in the denominator and the current subtract starting position in the partial numerator in preparation for starting a new loop which subtracts the denominator from the partial numerator as many times as it can before the result goes negative. When block DDV-016 is exited, the contents of RAM 1 and RAM 2 in their pointers are as follows: RAM 1 RAM 1 Location Contents (Hexadecimal) (Hexadecimal) 0 43XX OP 1=Denominator NP1= 1 RAM 2 RAM 2 Location Contents 60 2102 OP 1=Partial numerator The subtract loop is then performed starting a block DDV-017 and the quotient digit is determined by subtracting decimal 43 from decimal 210 as many times as it will go such that eventually block DDV-033 is entered to store the second quotient digit into the second word of segment 0 RAM 2 so that the contents of segment 0 would then be as follows: RAM 2 RAM 2 Location Contents (Hexadecimal) (Hexadecimal) Segment 2 0 0000 OP3-quotient sign flag 1 0001 1st quotient digit 2 0004 2nd quotient digit After the pointers are adjusted to the next quotient digit and to the starting position for the next subtract, the subtract routine is again entered at block DDV-017 and the denominator is again subtracted from the partial numerator. Eventually this will result in block DDV-024 being entered to decrement the current quotient counter and taking the branch that goes to block DDV-026 when the last quotient digit is being processed. Block DDV-026 then loads the address counter of RAM 2 with the address of the word in segment 0 where the last quotient digit is to be stored. This last quotient digit is the least significant digit of the quotient. Block DDV-026 then tests to determine whether the sign of operand 2 is a plus sign or a minus sign, and if it is a plus sign, exits to block DDV-027 to set a flag indicating that the remainder which will be stored in field previously occupied by operand 2 should have a plus sign and if operand 2 was negative, then block DDV-026 exits to block DDV-028 to store a flag indicating that the remainder is to have a negative sign. Block DDV-027 and block DDV-028 after setting the remainder sign flag to the proper state then exit to block DDV-032 which stores the last quotient digit. At this point, the contents of RAM 2 segment 0 which contains the quotient digits and the quotient sign flag and segment 3, which contains the remainder of the operand 2 numerator are as follows: RAM 2 RAM 2 Location Contents (Hexadecimal) (Hexadecimal) Segment 0 0 0000 OP3=quotient sign flag 1 0001 1st quotient digit 2 0004 2nd quotient digit 3 0008 3rd quotient digit Segment 3 60 0038 OP2=numerator/ Before leaving the discussion of the method by which the quotient digits are computed, the case in which block DDV-019 takes the branch that indicates the result of subtracting the denominator from the partial numerator is zero and the branch is taken to block DDV-029 will be discussed. In block DDV-029, the number of remaining quotient digits to be calculated is decremented by one and block DDV-030 is entered which sets up the address counter of RAM 2 to point to the word in segment 0 where the current quotient digit is to be stored. When this path is taken from block DDV-019, it is not necessary to decrement the current quotient digit because there has been no oversubtraction and it is further not necessary to add back the denominator into the numerator to compensate for the oversubtraction. Block DDV-030 then exits and tests whether the divide is finished or not depending upon the number of remaining quotient digits to be calculated as indicated by the quotient digit counter. If there are more quotient digits to be calculated, block DDV-030 exits to block DDV-033 and proceeds as described above. If the divide is finished as indicated by the number of remaining quotient digits being equal to zero, block DDV-030 exits to block DDV-031. Block DDV-031 is taken when the remainder is equal to zero because the result of the subtraction above was zero and the last quotient digit has been processed. Therefore, block DDV-031 sets the sign flag of the remainder to indicate that the remainder has a positive sign. Block DDV-032 is then entered and the last quotient digit is stored in a word in segment 0 of RAM 2. Block DDV-032 then exits to block DDV-036 which tests whether operand 2 is packed decimal data or string decimal data and if it is packed decimal data goes to block DDV-037 and if it is string decimal data goes to block DDV-041. In the example division, operand 2 is string decimal data so block DDV-036 exits to block DDV-041 which copies the remainder which is what is left of the partial numerator in RAM 2 segment 3 to RAM 1 such that RAM 1 at the end of block DDV-041 is as follows: RAM 1 RAM 1 Location Contents (Hexadecimal) (Hexadecimal) 0 0038 Packed remainder After copying the remainder into RAM 1, block DDV-042 then copies and unpacks the remainder from RAM 1 into RAM 2 segment 1 which contains the main memory image of operand 2 which was preserved in order to save the neighboring nibbles. After block DDV-042 unpacks the remainder, RAM 2 segment 1 is as follows: RAM 2 RAM 2 Location Contents (Hexadecimal) (Hexadecimal) Segment 1 20 NN30 Unpacked remainder 24 382D In this example, since the packed remainder only contained 4 digits, only the first four least significant digits of the remainder are unpacked into segment 1 of RAM 2 by block DDV-042 and the leading four digits within segment 1 of RAM 2 are those which are left over from the main memory image of the original operand 2. Block DDV-043 is then entered to add the zone field nibbles with leading zeros as is necessary to completely fill out the remainder in RAM 2 segment 1. In this case the execution of block DDV-043 does not result in any change to the contents of segment 1 because the 4 leading digit positions within the operand 2 were initially zeros. These were written over with leading zeros by block DDV-043 also such that at the end of block DDV-043 the contents of segment 1 of RAM 2, which is the main memory image of the remainder, are as above. Block DDV-043 then exits to block DDV-038. If the operand 2 was not a string decimal data type field, block DDV-036 would have exited to block DDV-037 to adjust the pointers to point to the leading sign position of operand 2 as stored in segment 1 of RAM 2 such that when block DDV-038 is entered from either block DDV-037 or block DDV-043 the address counter and nibble pointers of RAM 2 will point to the first nibble within the operand 2 field. Block DDV-038 is then executed to fix the sign of the remainder in the OP 2 field. Block DDV-038 also gets the commercial software instruction indicators and clears the overflow (OV), specification fault (SF), greater than (G) and less than (L) indicators. Block DDV-039 then stores the commercial software indicators. Block DDV-040 is then entered to set up for pass 3 of the instruction which brings in operand 3 and sets it up in preparation for storing the quotient into the operand 3 field. Block DDV-040 then exits to the instruction front end processing routine which does some preliminary processing upon data descriptor 3 and eventually returns to block DDV-003 which will, on the third pass, result in block DDV-005 being entered. In block DDV-005, operand 3 field is brought into RAM 2 segment 4 to save the neighboring nibbles before the quotient is written into the field and the field written back into main memory. Block DDV-004 gets the quotient sign flag from RAM 2 segment 0 word 0. Block DDV-045 sets the RAM 1 address counter to point to word 0 and the nibble counter to point to nibble 3. Block DDV-046 is then entered to take the quotient stored in RAM 2 segment 0 one digit at a time starting with the least significant digit and transferring it into RAM 2 segment 4 which contains the operand 3 field, and to supply the necessary zone bits and leading zeros as is necessary. If the operand 3 field length is greater than the number of quotient digits stored in RAM 2 segment 0. This copying of the quotient from segment 0 of RAM 1 to segment 4 of RAM 2 is done in blocks DDV-046 through DDV-059 and at the end of which RAM 2 segment 4, which contains the quotient with the proper sign, appears as RAM 2 RAM 2 Location Contents (Hexadecimal) (Hexadecimal) Segment 4 80 3030 Unpacked quotient of 81 3031 +000148 where "SS" 82 3438 will be occupied later 83 SSNN by plus sign. After DDV-059 completes the supplying of leading zeros and zone field nibbles as is necessary if there are leading zeros or if the operand 3 field is an unpacked decimal field, block DDV-060 is entered to get commercial software instruction indicators. Block DDV-061 then clears all indicators but the truncation (TR) indicator bit. Block DDV-061 then exits to block DDV-062 if the quotient is not equal to zero or to block DDV-069 if the quotient is equal to zero. If block DDV-062 is entered, a test is made to determine whether an overflow occurred during the divide operation. If an overflow occurred, block DDV-062 exits to block DDV-064 which tests whether traps are enabled and if traps are enabled, block DDV-066 is entered which sets the commercial instruction overflow indicator bit and then exits to the routine to handle the overflow trap. If traps are not enabled, block DDV-064 exits to block DDV-065. Block DDV-065 then tests the sign flag of the quotient which was previously stored. If the quotient is positive, block DDV-067 is entered and the greater than (G) indicator bit is set. If the of the quotient is negative, block DDV-068 is entered and the less than (L) indicator bit is set. Block DDV-069 is the entered to fix the sign of the quotient in RAM 2 segment 4. Block DDV-070 is then entered which stores the commercial instruction indicators. Block DDV-071 then proceeds to transfer the quotient from RAM 2 segment 4 into main memory by reading a word at a time from RAM 2 into microprocessor 30 and from there having it stored in a word of main memory. In block DDV-072 the starting address and the length of the remainder are restored to pointers and counters from where they were saved in RAM 2 segment 7 words 0-2. In block DDV-073 the remainder is transferred from RAM 2 segment 1 into main memory by taking a word at a time from RAM 2 into microprocessor 30 and writing it into main memory. Block DDV-073 completes the processing of the decimal divide instruction and exits to the routine which fetches the next software instruction from main memory in preparation for its execution. As can be appreciated from the above discussion of the decimal divide instruction, the ability to address the numerator and the denominator from the most significant digit to the least significant digit is made use of to strip loading zeros from both which results in greatly reducing the number of cycles which must be executed when performing a divide operation. The binary to decimal conversion operation performed by the CPU of the preferred embodiment will now be described. One method of converting a number in a binary format to a number in a decimal format which is well known in the art is to set an initial decimal partial sum to zero and to place the binary number in a register which can be shifted such that each bit within the binary number can be examined. The binary number is then examined beginning at the most significant bit position and the partial sum is then doubled by performing a decimal add of the partial sum to itself and adding in the bit being examined as a carry input into the least significant decimal digit of the partial sum. This process is then repeated scanning from the more significant to the least significant bits within the binary number until each bit has been examined. The decimal partial sum is added to itself each time a bit within the binary number is examined with the carry into the least significant digit of the decimal partial sum being set equal to the bit of the binary number that is being examined. Although this process works, it can be time consuming because the decimal add of the partial sum must be performed for each digit within the binary number being converted such that if a binary number is a 32-bit binary number, 32 decimal adds of the partial sum must be performed. These additions can be quite long if the addition is performed on a partial sum that has as many decimal digits in it as required to hold the largest possible number represented by the length of the binary number (i.e., 2 to the 31 power plus a sign bit). In the CPU of the preferred embodiment, an improved method is used which first strips off the leading zeros within the binary number such that if a binary number is represented in 16 bits and the most significant 8 bits are all zeros, the 8 first bits are stripped off thereby reducing the total number of decimal additions which need to take place on the partial sum. Secondly, the length of the partial sum is initially set equal to one decimal digit and the length is adjusted only as the number of digits within the partial sum increases so that each addition of the partial sum only is required to do a decimal add on as many digits as required to hold the partial sum at any given instance. A specific example of a binary to decimal conversion operation will now be discussed with respect to the detailed flow charts of the method as shown in FIG. 16. In the preferred embodiment, the binary to decimal conversion commercial software instruction takes the binary number given to it as operand 1 and it converts it to a decimal number and stores the result in the field specified for operand 2. In the following example, a binary number or -37 will be converted to a decimal number which is to be placed in a packed decimal field with a trailing sign. In the preferred embodiment, negative binary numbers are represented in a two's complement form. For the example binary to decimal conversion shown below, assume the CBD software instruction is at main memory location 1000 as follows: Example CBD Software Instruction: Memory Location Memory Addressing Contents (Hexadecimal) (Hexadecimal) Meaning 1000 0027 CBD op code 1001 0207 data descriptor 1 (DD1) word 1 1002 1102 data descriptor 1 (DD1) word 2 1003 6487 data descriptor 2 (DD2) word 1 1004 1204 data descriptor 2 (DD2) word 2 Data descriptors DD1 and DD2 are decoded as follows (see FIG. 9): DD1: T=0: String (binary DD is a string DD). C1=0: OP 1 starts in left byte. L=2: 16-bit binary operand. CAS: OP 1 starts in word addressed by contents of base register 7 plus displacement of 1102. If B7 contains the value 1000 hexadecimal, OP 1 is located at address 2102 hexadecimal. DD2: T=1: Packed decimal. C1,C2=01: OP 2 starts in nibble 1. C3=1: Trailing sign. L=4: 3 digits and a sign. CAS: OP 2 starts in word addressed by contents of base register 7 plus the displacement of 1204. Since B7 contains 1000 hexadecimal, OP 2 is located at address 2204 hexadecimal. OP 1, which is a -37 in two's complement form, appears in main memory as follows: Memory Location Memory Address Contents (Hexadecimal) (Hexadecimal) 2102 FFDB OP 2, which is where the converted number is to be stored, appears in main memory as follows: Memory Location Memory Address Contents (Hexadecimal) (Hexadecimal) 2204 NXXX 2205 SNNN where: N are neighbor nibbles which must be preserved when the converted number is stored. X are where the decimal digits of the converted number are to be stored. S is where the sign of the converted number is to be stored. The execution of the above example CBD commercial software instruction will now be described with reference to FIG. 16. FIG. 16 is a flow chart of the firmware microroutines used by CPU 20 to execute a CBD software instruction. The blocks in FIG. 16 which are referred to by the names next to them, such as CBD-001, show at a gross level the functions performed by microprocessor 30 and commercial instruction logic 28 to perform the software instruction. Some of these blocks may represent the execution of more than one 48 or 56-bit microinstruction, the form of which is shown in FIG. 5. Before entering the microroutines shown in FIG. 16, which are peculiar to the CBD commercial software instructions, the CPU 20 examines the first word of the software instruction which is being executed to determine the type of operation to be performed. Once it is determined that it is a decimal arithmetic operation as determined by looking at the operation code in the first word of the instruction, the CPU 20 then proceeds to decode the address syllable associated with data descriptor 1 to determine the main memory word address and the position within the word in which operand 1 begins. This front end processing of the software instruction then continues with the microprocessor branching to the CBD routine at block CBD-000. When the binary to decimal conversion routine is entered at CBD-000, it determines whether this is a first pass in which operand 1 is to be brought into the CPU or the second pass in which operand 2 is to be brought into the CPU. If it is the first pass, the firmware then branches to block CBD-001 which fetches operand 1 (the binary number to be converted into microprocessor 30 During the second pass, block CBD-003 is entered and the operand 2 is brought into the CPU so that it can be shifted and examined one bit at a time starting with the most significant bit. Block CBD-001 then exists to the instruction front end routine which eventually returns to block CBD-000 to perform the second pass in which case block CBD-002 is entered. Block CBD-002 analyzes the data descriptor 2 to determine the starting and end positions of the operand 2 field which is to hold the converted number and the length of the field. 81ock CBD-003 then brings in operand 2 so that the neighbors may be saved when the converted number is stored into that field in main memory. Operand 2, which is the field in which the converted number is to be stored, is brought into segment 1 of RAM 2 such that at the end of block CBD-003, RAM 2 segment 1 contains the converted number field as follows: RAM 2 RAM 2 Location Contents (Hexadecimal) (Hexadecimal) 20 NXXX Segment 1 21 SNNN OP 2 - field to receive converted Block CBD-004 is then entered and a test of the sign of operand 1 binary is made. If the sign of the binary number to be converted is negative, block CBD-004 branches to block CBD-005 to perform a two's complement on the binary operand and to note that the sign of the result should be negative. Block CBD-005 then exits to block CBD-006 which sets up the appropriate bit counter to equal either 16-bits, if a single precision binary number containing 16 bits is to be converted, or equal to 32, if a double precision number containing 32 bits is to be converted. Block CBD-006 then exits to block CBD-009. If block CBD-004 determines that the binary number to be converted is a positive number, block CBD-007 is entered and a flag is set to indicate that the result is positive. Block CBD-007 then exits to block CBD-008 which sets up a bit counter equal to 16, if it is a single precision binary number to be converted, to 32, if a double position binary number is to be converted. Block CBD-009 is then entered and the binary number within microprocessor 30 is then shifted to the left until the first non-zero binary bit is encountered and the length of the binary field which has to be converted, either 16 or 32, is adjusted to reflect the number of non-zero remaining in the binary number. If block CBD-009 determines that all bits within the binary number to be converted are zeros, block CBD-009 exits to the zero result routine, which is routine CBD-020. If the binary number to be converted is not zero, block CBD-009 after stripping off all leading zero bits then exits to block CBD-010 which puts a binary ZERO into the unit's position of the operand 2 field which will contain the converted binary number in a decimal format. At the end of block CBD-010, the operand 2 field in RAM 2 segment 1 is as follows: RAM 2 RAM 2 Location Contents (Hexadecimal) (Hexadecimal) 20 NXX0 Segment 1 21 SNNN OP 2, 1 digit partial sum of 0. Block CBD-011 then exits to ADD BIT, which is block CBD-019. Block CBD-019 then sets the carry-out indicator within decimal indicators 85 such that when the addition of the partial sum to itself is performed, the carry-out indicator will have a binary ONE in it which will be used as the carry-in when the unit's digit of the partial sum is added to itself during the decimal partial sum doubling operation. During the first time through, the carry-out indicator can be unconditionally set to indicate a carry-out because the binary number to be converted in block CBD-009 such that it is known that the bit currently being worked on at the left end of the binary number is a binary ONE. After setting the carry-out indicator within decimal indicators 85, block CBD-019 then exits to the DOUBLE routine at block CBD-012. In block CBD-012 the current decimal partial sum is doubled by adding it to itself. The carry-in bit, of the unit's position of the partial sum is set equal to the bit within the binary number which is currently being converted. In block CBD-012 this doubling of the decimal partial sum is done by initializing the address counter and nibble counter to point to the word and nibble containing the unit's digit within the partial sum. In the example case, the word address counter is set equal to 20 which is word 0 of segment 1 and the nibble counter is set equal to 3 so that it points to the third nibble which is the unit's position in the partial sum. Also, the decimal indicators 85 are initialized such that the illegal indicator is set equal to zero and the equal nine and equal zero indicators are set equal to binary ONEs. This is done by performing a CRESET microoperation. The carry-out indicator of decimal indicators 85 is set equal to the binary state of the bit which is being examined in the binary number and will be set equal to a binary ONE if the bit is a binary ONE and will be set equal to a binary ZERO if the bit is a binary ZERO. A microinstruction is then executed which contains a CIPDUB, a CWRES2, a CLDFLD, and a CTDCT2 microoperation. These microoperations have the effect of bringing the nibble pointed to by the address and nibble counters of RAM 2 out of RAM 2 nibble multiplexer 89 and through double multiplexer 83 into the A port of decimal ALU 84 and the same nibble from RAM 2 out of RAM 2 zero multiplexer 90 and into the B port of decimal ALU 84 and from there the decimal digit 4-bit result is written back into the same nibble within RAM 2 and the indicator bits from decimal ALU 84 are stored in decimal indicators 85. Then the nibble counter within nibble write control 86 is then decremented by one to point to the next more significant digit within RAM 2 which contains the decimal partial sum and the address counter is decremented by one if the nibble counter decrements through 0. This microinstruction is repeated depending upon the number of active digits there are in the partial sum. The first time through block CBD-012 there will only be one digit active within the partial sum so that this microinstruction is only executed one time. The ability of the preferred embodiment to feed both the A and the B ports of decimal ALU 84 from the outtput of RAM 2 provides a very efficient method of doubling the partial sum contained in RAM 2. At the end of the first time that block CBD-012 is executed, the partial sum stored in RAM 2 segment 1 is as follows: RAM 2 RAM 2 Location Contents (Hexadecimal) (Hexadecimal) 20 NXX1 Segment 1 21 SNNN OP 2, 1 digit partial sum of 1. It should be noted that the doubling of the partial sum done in block CBD-012 also includes a microinstruction to write and skip the zone nibbles if the operand 2 is of the string decimal type such that they have zone nibbles equal to a binary 0011. After adding all digits in the decimal partial sum, block CBD-012 exits to block CBD-013 which branches depending upon whether there is any digits left within operand 2 field length which are not currently being used in the partial sum. This branch is done by examining a counter which contains a value equal to the length of operand 2 minus the current length of the partial sum. If the length of the partial sum is less than the length of the operand 2 block CBD-013 branches to block CBD-027 which tests whether there was a carry-out of the most significant digit when the decimal partial sum was added to itself. If there was no carry-out, then block CBD-027 branches to block CBD-028. If there was a carry-out of the most significant decimal digit of the partial sum, block CBD-027 branches to block CBD-029. In block CBD-029 a decimal one is written into the next more significant digit in the decimal partial sum by feeding decimal ALU 84 with a decimal zero from RAM 1 zero multiplexer 82 into the A port and by feeding the B port with a decimal zero from RAM 2 zero multiplexer 90 while adding in the binary ONE as a carry-in. This writing of a decimal 1 in the next more significant digit of the decimal partial sum is performed by a microinstruction containing CINOP1, CINOP2, CWRES2, CLDLFP and CTDCT2 microoperations which write the output of decimal ALU 84 into RAM 2. Block CBD-029 then increments the length of the decimal partial sum so that the increased length will be used the next time the decimal partial sum is added to itself. Block CBD-030 then decrements the count of available digits by 1 so that a comparison can be made in block CBD-013 to determine if there is any room left in operand 2 to expand the decimal partial sum by 1 digit. Block CBD-030 then exits to block CBD-028 which then decrements the count of bits in the binary number that remains to be converted. Block CBD-016 is then entered and shifts the binary number one position to the left to move the next lesser significant bit into position to be converted. Upon exiting, block CBD-016 branches to block CBD-017, if the bit counter indicates that there are more bits to convert, or to block CBD-020, if all bits have been converted. If there are more bits to convert, block CBD-017 sets up the counter that indicates the number of decimal digits in the partial sum so that counter can be used when doubling the decimal partial sum. Block CBD-017 then branches to block CBD-018, if the current bit to be converted in the binary number is a binary ZERO, or to block CBD-019 if the current bit to be converted is a binary ONE. Block CBD-018 resets the carry-out indicator of decimal ALU 84 to a binary ZERO because the current bit to be converted is a binary ZERO. Block CBD-019 sets the carry-out indicator of decimal ALU 84 to a binary ONE because the current bit to be converted is a binary ONE. Blocks CBD-018 and CBD-019 both exit to block CBD-012 which doubles the decimal partial sum as discussed above and adds in the carry-out indicator into the unit's position. After doubling the partial sum, block CBD-012 then exits to block CBD-013 to again test if there are any unused digits left within the operand 2 field length. This test branches as indicated before the process is continued until all bits within the binary number have been converted. If, during this conversion process, the length of the partial sum reaches the length of the operand 2 as indicated by the available digit counter being ZERO, block CBD-013 will branch to block CBD-014. In block CBD-014 the bit counter which indicates the counter of binary bits yet to be converted is decremented by one and then a test is made to see whether there was a carry-out of the most significant bit of the partial sum. If there was no carry-out, then block CBD-014 exits to block CBD-016. If, however, there was a carry-out of the most significant digit of the decimal partial sum, an overflow condition exists because one or more digits is needed in the decimal partial sum to hold the carry-out of the most significant digit in the partial sum and block CBD-015 sets an overflow indicator which is checked later. Block CBD-016 is then entered and the next bit within the binary number being converted is shifted into position and the process continued. When block CBD-016 determines that all bits have been converted, it branches to block CBD-020. Block CBD-020 then fills the rest of the operand 2 field from the most significant digit in the partial sum through the most significant digit of operand 2 with leading zeros (and zone nibbles as necessary if the operand 2 is the string decimal type). Block CBD-020 is also entered if it was earlier determined that the binary number being converted contained all binary zeros such that the converted number will contain all decimal zeros. Upon leaving block CBD-020, the converted number as stored in RAM 2 segment 1 is as follows: RAM 2 RAM 2 Location Contents (Hexadecimal) (Hexadecimal) 20 N037 Segment 1 21 SNNN OP 2 - final partial sum with sign yet to be put in place. Block CBD-021 is then entered and the commercial instruction indicators are gotten and the overflow (OV) and sign fault (SF) indicator bits are cleared. A branch is then made to determine whether an overflow condition was encountered during the binary conversion process such that the result in RAM 2 only contains the truncated value with the most significant digits being truncated. If the overflow indicator is set, block CBD-021 branches to block CBD-022 which sets the overflow indicator in the commercial instruction indicators and does a branch depending upon whether traps are enabled or not enabled, block CBD-023 is entered and the commercial instruction indicators are stored with the overflow indicators set. If traps are not enabled or if there was no overflow during the binary conversion process, block CBD-024 is entered and the sign of the converted decimal number is set into the sign nibble within the operand 2 field in segment 1 of RAM 2. At the end of CBD-024, the contents of RAM 2 segment 1, which is the final converted decimal number are as follows: RAM 2 RAM 2 Location Contents (Hexadecimal) (Hexadecimal) 20 N037 Segment 1 21 DNNN OP 2-final converted number with trailing minus sign (-037). Block CBD-025 is then entered and the commercial instruction indicators are stored in their hardware register. Block CBD-026 is then entered and the converted binary number that has been converted to the decimal format and stored in segment 1 of RAM 2 with its neighbor nibbles is now transferred into main memory into the operand 2 field as described by data descriptor 2. This transfer is done one word at a time by moving a word from RAM 2 into main memory with the main memory address being supplied by microprocessor 30. From the above description of the convert binary to decimal software instruction, it can be appreciated that the stripping of leading zero bits in the binary number before beginning the conversion process can greatly reduce the number of times in which the decimal partial sum must be added to itself. Further, by using a decimal partial sum which has a length only sufficiently long to hold the current decimal partial sum, the number of decimal digits which must be added when doubling the partial sum is greatly reduced. The length of the partial sum is increased by one digit each time there is a carry out of the most significant digit of the decimal partial sum. The preferred embodiment takes further advantage of the commercial instruction logic 28 by using the ability to feed both the A and B ports of the decimal ALU 84 with the same operand thereby eliminated the necessity of copying the partial sum from the place where the sum is stored to the other memory so that both the memory holding the sum and the other memory can be fed into the B and A ports respectively. The decimal to binary conversion operation performed by the CPU of the preferred embodiment will now be described. The method used in the CPU of the preferred embodiment is basically to bring in the number to be converted which is in a decimal format into the commercial instruction logic 28 and to examine each decimal digit one digit at a time starting with the most significant digit. Before starting to examine the most significant decimal digit, a binary partial sum is set equal to zero. This binary partial sum is kept in microprocessor 30 which has an arithmetic logic unit which performs binary arithmetic. Then as each decimal digit in the number to be converted is examined, beginning with the most significant digit, the digit is added into the current binary partial sum after the binary partial sum is multiplied by ten. This process continues until all decimal digits have been examined and the complete binary partial sum has been calculated. A specific example of a decimal to binary conversion operation will now be discussed with respect to the detailed flow charts of the method as shown in FIG. 17. In the preferred embodiment, the decimal to binary conversion commercial software instruction takes the decimal number given to it as operand 1 and it converts it to a binary number and stores the result in the field specified for operand 2. In the following example, a decimal number of -000123 with a trailing sign will be converted to a binary number which is to be placed in a double precision (32 bits) binary field. In the preferred embodiment, negative decimal numbers are represented in a two's complement form. For the example decimal to binary conversion shown below, assume the CDB software instruction is at main memory location 1000 as follows: Example CDB Software Instruction: Memory Location Memory Addressing Contents (Hexadecimal) (Hexadecimal) Meaning 1000 002A CDB op code 1001 E707 data descriptor 1 (DD1) word 1 1002 1102 data descriptor 1 (DD1) word 2 1003 8407 data descriptor 2 (DD2) word 1 1004 1204 data descriptor 2 (DD2) word 2 Data descriptors DD1 and DD2 are decoded as follows (see FIG. 9): DD1: T=0: String (unpacked) decimal. C1=1: OP 1 starts in right byte. C2,C3=11: Trailing sign. L=7: 6 digits and a sign. CAS: OP 1 starts in word addressed by contents of base register 7 plus displacement of 1102. If B7 contains the value 1000 hexadecimal, OP 1 is located at address 2102 hexadecimal. DD2: T=0: String (binary DD is a string DD). C1=1: OP 2 starts in left byte. L=4: 32-bit binary operand. CAS: OP 2 starts in word addressed by contents of base register 7 plus the displacement of 1204. Since B7 contains 1000 hexadecimal, OP 2 is located at address 2204 hexadecimal. OP 1, which is a -000123 in string decimal format, appears in main memory as follows: Memory Location Memory Address Contents (Hexadecimal) (Hexadecimal) 2102 NN30 2105 332D where: N are neighbor nibbles which must be preserved when the converted number is stored. 30, 31, 32, 33 are string decimal digits 0, 1, 2, 3 respectively. 2D is a trailing minus sign. OP 2, which is where the converted number is to be stored, in a binary format appears in main memory as follows: Memory Location Memory Address Contents (Hexadecimal) (Hexadecimal) 2204 MSWD 2205 LSWD where: MSWD is 16bit word which will contain the 16 most significant bits of the converted 32bit binary number. LSWD is 16bit word will contain the 16 least significant bits of the converted 32bit binary number. The execution of the above example CDB commercial software instruction will now be described with reference to FIG. 17. FIG. 17 is a flow chart of the firmware microroutines used by CPU 20 to execute a CDB software instruction. The blocks in FIG. 17 which are referred to by the names next to them, such as CDB-001, show at a gross level the functions performed by microprocessor 30 and commercial instruction logic 28 to perform the software instruction. Some of these blocks may represent the execution of more than one 48 or 56-bit microinstruction, the form of which is shown in FIG. 5. Before entering the microroutines shown in FIG. 17, which are peculiar to the CDB commercial software instructions, the CPU 20 examines the first word of the software instruction which is being executed to determine the type of operation to be performed. Once it is determined that it is a convert decimal to binary operation as determined by looking at the operation code in the first word of the instruction, the CPU 20 then proceeds to decode the address syllable associated with data descriptor 1 to determine the main memory word address and the position within the word in which operand 1 The front end processing of the software instruction then continues with the microprocessor branching to the CDB routine at block CDB-000. When the convert decimal to binary routine is entered at CDB, it is determined whether this is a first pass in which operand 1 is to be brought into the CPU or the second pass in which the converted number in binary format is to be stored into the operand 2 field. If it is the first pass, the firmware then branches to block CDB-001 which fetches operand 1 so that it can be examined one decimal digit at a time starting with the most significant decimal After block CDB-001 has brought the decimal number to be converted into RAM 1, the contents of RAM 1 are as follows: RAM 1 RAM 1 Location Contents (Hexadecimal) (Hexadecimal) 2 NN30 OP 1, the string 3 3030 decimal number to 4 3132 be converted 5 332D Block CDB-002 is then entered which sets up the RAM 1 address counter and nibble counter to the most significant digit of the decimal number to be converted in RAM 1. The pointers are set to point to the nibble containing the decimal value and not to the nibble containing the zone bits of binary 0011. In the example, the address counter is set up to point to word 2 and the nibble counter is set to point to nibble 3 which contains the most significant digit which is a decimal 0. Block CDB-002 then exits to block CDB-003 where a counter is set equal to the length of the number of digits in the decimal number to be converted. Block CDB-003 also zeros out the word zero within RAM 2 so that when the decimal digit to be converted is written into nibble 3 of RAM 2 from RAM 1 the 12 leading bits will be binary ZEROs. Block CD8-004 is then entered to bring the current digit pointed to by the address and nibble counters of RAM 1 into microprocessor 30 through RAM 2. This is done by reading the current nibble out of RAM 1 through RAM 0 multiplexer 82 and double multiplexer 83 into the A port of decimal ALU 84 while feeding the B port of the decimal ALU 84 with a zero from RAM 2 zero multiplexer 90. The output of decimal ALU 84 is taken from the result/zone multiplexer 91 and into nibble 3 multiplexer 95 where it is written into word 0 nibble 3 in RAM 2. The word 0 of RAM 2 is then read into RAM 2 data register 88 and from there it is taken by transceivers 97 into microprocessor 30. The decimal digit transferred form RAM 1 into microprocessor 30 is then stored in the least significant word of a 32-bit binary partial sum with the most significant word of the 32 bit binary partial sum set equal to zero. After reading the current decimal digit from RAM 1 into RAM 2, a test is made within block CDB-004 to determine if a string decimal number is being converted and, if so, the nibble counter for RAM 1 is incremented by one by use of a microinstruction containing a CTUCT1 microoperation which increments the nibble counter and, if the nibble counter increments through 3, also increments the address counter of RAM 1. After the decimal digit has been transferred to microprocessor 30, block CDB-005 is entered and a test is made on the status of the equal zero indicator of decimal indicators 85. If the equal zero indicator indicates that a decimal zero digit was read from RAM 1, block CDB-005 transfers back to block CDB-004 to get the next digit, if all digits have not already been transferred. If block CDB-005 determines that all the digits are zero and that no more digits are left, because a decimal operand length counter has been decremented to zero, block CDB-005 exits to block CDB-006 which sets the converted binary result equal to zero because all of the digits within RAM 1 have been scanned and they were all zeros. When block CDB-005 determines that a non-zero decimal digit has been read from RAM 1 into microprocessor 30, block CDB-007 is entered. Block CDB-007 tests whether there are any digits left within RAM 1 which have not yet been converted by examining the status of the decimal operand length counter to determined whether it has been decremented to zero. If there are still digits left to be converted, the binary partial sum within microprocessor 30 is multiplied by 10 with care being taken to note if there is any overflows out of the most significant bit in the 32-bit binary number. This multiplication by ten is done by shifting the binary partial sum one bit position to the left to multiply the binary partial sum by two. The by-two result is saved and then shifted two more places to the left to effectively multiply by the binary partial sum by eight and then add the by-two partial sum is added to it to produce a binary partial sum which has been multiplied by ten. Block CDB-010 is then entered to get the next digit from RAM 1 into microprocessor 30 via RAM 2 as done before and the values in the nibble and address counter of RAM 1 are incremented to point to the next less significant digit in the decimal number being converted. Block CDB-011 is then entered and the current decimal digit entered into microprocessor 30 is then added to the binary partial sum with care again being taken to note any overflow out of the most significant bit of the 32-bit binary partial sum which is being accumulated. Block CDB-011 then returns to block BDC-007 which tests the counter indicating the number of decimal digits remaining to be converted. If the count is not zero, block CDB-011 exits to block CDB-009 which multiplies the binary partial sum by 10 as before. This process is continued from block CDB-007 through CDB-011 until all digits within the decimal number to be converted have been processed. When the counter remaining digits is equal to zero, block CDB-007 exits to block CDB-008. In block CDB-008 a test is made on decimal illegal decimal indicator of decimal indicators 85 (which is an integrating indicator as mentioned previously) to determine whether any of the digits read from RAM 1 into the microprocessor via RAM 2 contained a non-decimal digit. If so, block CDB-008 exits to the illegal character (IC) routine which handles the case of an illegal digit. If no illegal digits were found during the conversion process, block CDB-008 exits to the software instruction front end routine which does the preprocessing on data descriptor 2 and returns to the convert decimal to binary routine at block CDB-000 which tests whether this is pass 1 or pass 2, if pass 2 goes to block CDB-012. Block CDB-012 examines the data descriptor 2 and determines the length of operand 2. If data descriptor 2 is an immediate operand type, instead of a pointer to operand 2, a block CDB-012 branches to routine IS which handles illegal software instructions because immediate operands are not allowed to be used for operand which have results stored in them. If it is not an immediate operand, block CDB-012 exits to block CBD-013-CDB-013 which checks to see if the results can be stored within the one or two words specified for operand 2 taking into account that the binary result may have to have a two's complement performed on it if the result is negative. If the result will not fit within the one or two words specified for operand 2, an overflow indicator is set. In block CDB-014, the commercial instruction indicators are read and updated to set overflow if necessary. If overflow has occurred, and traps are enabled, block CDB-014 branches to the trap overflow routine (OV). If there is not an overflow or if trapping on overflow has not been enabled, block CDB-014 exits to block CDB-015 which performs a two's complement on the binary result if the result is negative as determined by the sign on the original decimal number which was converted. Block CDB-016 is then entered and the binary result consisting of either two bytes, if it is a 16-bit binary number, or 4 bytes, if it is a 32-bit binary number, as specified by data descriptor 2 are then stored from the registers in microprocessor 30 one byte at a time into main memory in order that the neighboring bytes in main memory will be preserved if the binary operand does not begin in the right byte of a word. After CDB-016 stores the results into main memory, the main memory area occupied by the converted binary number for the example contains the binary result of -123 in two's complement form as follows: Memory Memory Location Contents (Hexadecimal) (Hexadecimal) 2204 FFFF -123 in binary 2205 FF85 two's complement format. Block CBD-016 then goes to the FETCH routine which brings in the next software instruction for processing. This conversion process of converting a decimal number to a binary number can be appreciated by looking at Table 18 which shows selected steps in the decimal to binary conversion process. In Table 18, the column labeled "step", contains the last three digits of the flow chart block numbers of FIG. 17. Under the columns labeled "RAM 1" counters, the "WP1" column indicates where address counter of RAM 1 is pointing at the beginning of the step, and the column labeled "NP1" indicates where the nibble counter of RAM 1 is pointing to at the beginning of the step. The columns labeled "Decimal Digit from RAM 1", contain the decimal digit which is being examined in RAM 1 in both a hexadecimal and binary format. The columns labeled "Partial Sum" indicate the least significant portion of the binary partial sum which is being accumulated in microprocessor 30. The partial sum column in the binary format contains the 8 low order bits of the 32-bit partial sum and the column labeled "Decimal" contains the decimal equivalent. TABLE 18 Example Decimal to Binary Conversion Decimal Digit RAM 1 from RAM 1 Partial Sum Step Counters Hexa- Bi- (Bi- (Deci- (CDB-) WP1 NP1 decimal) nary) nary) mal) 004 2 3 0 0000 XXXXXXXX X 004 3 1 0 0000 XXXXXXXX X 004 3 3 0 0000 XXXXXXXX X From the above discussion of the decimal to binary conversion software instruction, it can be appreciated that in a preferred embodiment use is made of the ability of RAM 1 to address from left to right such that the decimal number being converted can be examined starting at the most significant digit and the work to the least significant digit. This ability is used to first strip all leading zeros from the decimal number and then to convert from the first most significant non-zero digit to the least significant digit within the decimal number. It can also be appreciated that the integrating nature of the equal zero and illegal character indicators within decimal indicators 85 is advantageously employed. While a preferred embodiment has been described, other modifications will be readily suggested by those of ordinary skill in the art. For example, the commercial instruction logic can be adapted to work on words that have fewer or more than 16 bits and decimal data formats using different representations with different nibbles and atom sizes. Also, although the preferred embodiment has been described in terms of a particular microprocessor, the commercial instruction logic can be used with the CPU having different microprocessors or combinatorial logic. In addition, the control of the commercial instruction logic can be done using different microoperations or combinatorial logic. Similarly, the methods used perform the various arithmetic operations can be adapted to use different While the invention has been shown and described with reference to the preferred embodiment thereof, it will be understood by those skilled in the art that the above and other changes in form and detail may be made therein without departing from the spirit and scope of the invention.
{"url":"http://www.freepatentsonline.com/4672360.html","timestamp":"2014-04-19T15:31:06Z","content_type":null,"content_length":"433661","record_id":"<urn:uuid:40694b4e-a2f5-4fa9-b7cb-7a3124480858>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00507-ip-10-147-4-33.ec2.internal.warc.gz"}
Gas-Cooling of Multiple Short Inline Disks in Flow along Their Axis STP1523: Gas-Cooling of Multiple Short Inline Disks in Flow along Their Axis Lior, Noam University of Pennsylvania, Philadelphia, PA Papadopoulos, Dimitrios KTH, Stockholm, Pages: 25 Published: Jan 2010 To learn about cooling of gas quenched batches, this paper reports on numerical predictions of local and average convective heat transfer coefficients, and overall pressure drops, on batches of five axially aligned, constant temperature, short cylindrical disks (25 cm diameter, 5 cm thickness) with and without a concentric hole, with interdisk spacings of 5–20 cm, in axial turbulent flows of 20 bar nitrogen gas at inlet speeds from 10 m/s to 100 m/s, corresponding to Reynolds numbers (Re) between 3.27×10^6 and 32.7×10^6. The heat transfer coefficients along the disk surfaces vary strongly up to a worse case of two orders of magnitude for the upstream disk. This nonuniformity is much lower for the disks downstream, especially after spacing is increased beyond 0.1 m. As expected, the upstream disk exhibited rather different heat transfer coefficients than the ones downstream, the magnitude of the heat transfer coefficient and its uniformity increased with the interdisk spacing, and varied by a factor of about 4–5 along the surfaces. The average heat transfer coefficient (Nusselt number, Nu) on the disks increased approximately with Reynolds number as Re^0.85. Re did not have much influence on the nonuniformity of Nu on the disk surfaces. The overall pressure drop along the flow increases with the interdisk spacing, rising by about 60 % as the spacing is increased from 0.05 m to 0.20 m. The presence of a hole increases the heat transfer coefficient in all cases. Some suggestions for reducing the heat transfer coefficient nonuniformity are made. quenching, gas quenching, quenching simulation, quenching uniformity, convective heat transfer Paper ID: STP49184S Committee/Subcommittee: D02.11 DOI: 10.1520/STP49184S
{"url":"http://www.astm.org/DIGITAL_LIBRARY/STP/PAGES/STP49184S.htm","timestamp":"2014-04-19T05:05:06Z","content_type":null,"content_length":"13405","record_id":"<urn:uuid:9733a1c2-e463-40c3-a0cc-e717a543fe32>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00297-ip-10-147-4-33.ec2.internal.warc.gz"}
My area of research is Dynamical Systems, a branch of modern mathematics concerned with time evolutions of natural and iterative processes. I have worked mostly with chaotic systems. Among my favorite topics are Lyapunov exponents, entropy, fractal dimension, strange attractors, random perturbations, and rates of correlation decay. I have also worked with concrete models including particle systems (billiards) and kicked oscillators. In the last 10+ years, I have become more interested in applications, and my focus has shifted toward high dimensional systems, to systems with both deterministic and stochastic components, and systems that are out of equilibrium. More recently I have expanded my research to include Theoretical Neuroscience. Below are some highlights of my work, in roughly chronological order. References are to the "Selected Publications" that follow. Entropy, Lyapunov exponents, and fractal dimension Entropy and Lyapunov exponents are two different ways to capture dynamical complexity: entropy measures randomness in the sense of information theory, while positive Lyapunov exponents measure the rates at which nearby orbits diverge. It was known (shortly before I entered the field) that entropy is bounded above by the sum of positive Lyapunov exponents, and that these two quantities are equal for conservative systems. Ledrappier and I completed and clarified the picture: (A) We gave a necessary and sufficient condition for these two quantities to be equal, namely when the measure is SRB (see below); and (B) we proved that in general, the gap between them can be expressed in terms of the dimensions of the invariant measure. These relations are very general; they hold for all diffeomorphisms and flows on finite dimensional manifolds. The results above are part of a subject called nonuniform hyperbolic theory. I would like to extend this theory to stochastic and infinite dimensional systems. Averaging effects of random noise should simplify the dynamical picture [2]. Extension to infinite dimensions will expand the scope of this theory to include semiflows defined by, e.g., dissipative parabolic PDEs; see [10] for a first step. Natural invariant measures For Hamiltonian systems, Liouville measure is clearly the natural invariant measure. What plays the role of Liouville measure for dissipative systems, such as those with attractors? The short answer is: SRB measures -- except that things are a bit more complicated. In the 1970s Sinai, Ruelle and Bowen discovered these measures for uniformly hyperbolic attractors, a class of chaotic attractors satisfying strong geometric conditions. This body of ideas, minus the assertion of existence, was extended to general attractors by Ledrappier and myself (see [1]). Not every chaotic attractor admits an SRB measure, however, and it is very hard to determine if a given attractor does or not. These questions have remained unsettled; [3],[4] and [7] are among the few results known, and [3] was the first time SRB measures were constructed for genuinely nonuniformly hyperbolic attractors. Decay of time correlations By definition, deterministic dynamical systems have memory. The more chaotic a system is, the more rapidly it mixes up its phase space geometry, equivalently, the faster the decay of its time correlations (with respect to smooth test functions). The following two sets of results are my main contributions in this topic [4],[5]: (A) Via a so-called "tower" construction, I connected sufficiently hyperbolic (or chaotic) systems to countable state Markov chains. Leveraging these Markov-like structures, I showed that many statistical properties of the system are determined by tail properties of return times to certain reference sets. For example, exponentially decaying tails lead to exponential correlation decay, central limit theorems, large deviation principles, etc. (B) I proposed to connect the tail properties above directly to the geometry of the map, promoting the idea that to gain insight into the mode of correlation decay for a system that is predominantly hyperbolic, one should focus on its most nonhyperbolic parts. (A) and (B) together offered a unified way to get a handle on statistical properties of large classes of dynamical systems. I demonstrated that on a few examples, including the 2D periodic Lorentz gas [4]; others have used this method many more times. Strange attractors Even though there is no formal definition of a "strange attractor", everyone agrees that its dynamics cannot be simple. Wang and I undertook a systematic study of what can be thought of as "strange attractors of the simplest kind". We called them rank-one attractors, referring to the fact that while these attractors can live in phase spaces of any dimension, they have only one direction of instability, with strong contraction in all other directions. One might expect to see such attractors following a regime's loss of stability, and that is what my co-authors and I (and others) have shown: rank-one attractors occur naturally in periodically kicked oscillators, in periodically forced systems undergoing Hopf bifurcations, with homoclinic loops, and in certain slow-fast systems. They can occur in systems defined by ODEs as well as PDEs; see e.g. [9]. This project consisted, in fact, of two separate parts, the second of which is reported above. The first part was a 130 page paper in which Wang and I identified a set of geometric conditions and proved that they imply the existence of rank-one attractors [7]. This part of our work benefited from the techniques of Benedicks and Carleson, in their analysis of the Hénon maps. We extracted certain ideas from this one example, and developed them into a general class of dynamical systems that includes all of the examples above. We also gave a full description of the geometric and statistical properties of the attractors in this class. Nonequilibrium dynamics Systems in the real world do not operate in isolation; they are driven by external forces, and interact with the outside world. Much of current dynamical systems theory ignores such interactions, understandably so for reasons of simplicity. Below are two of my attempts to "push the envelope": (A) An important source of inspiration is nonequilirbium statistical mechanics. In [6], Eckmann and I investigated the steady states of a class of mechanical chains connected to two unequal heat baths. Our aim was to elucidate how (a) dynamical properties and (b) local thermal equilibria factor in the determination of macroscopic observations such as mean energy and particle density (B) Demers and I studied systems with holes: once the orbit of a point enters a "hole", it is lost forever. Starting from an initial distribution, relevant questions include the rate at which mass escapes, surviving distributions, etc. For a prototypical result, see [8], which treats a billiard table with holes. Modeling the visual cortex (Theoretical Neuroscience) Stripping away countless layers of complexity, one can model certain parts of the brain as a complicated network of spiking neurons with many unknown parameters. It is a dynamical system, but instead of being handed a known map or equation and asked to deduce its properties, here one has at one's disposal bits of biological facts and experimental data, i.e. outputs of the system, from which to back out the rules of the dynamics and parameters. I am involved in a computational modeling project with Rangan the goal of which is to study the primary visual cortex (V1) of higher mammals (primates). We have successfully constrained a parsimonious network with ~ 10 free parameters to match a comparable number of experimental results, giving some confidence to the possibility that our dynamical regime may be indicative of the operating point of real cortex. An emergent phenomenon is that when there is strong competition between excitation and inhibition, as is believed to be the case in V1, spiking patterns tend to be irregular yet highly structured, fueled by positive feedback of recurrent excitation. Some results and model predictions are reported in [11]. 1. (with F. Ledrappier) The metric entropy of diffeomorphisms, Part I: Characterization of measures satisfying Pesin's entropy formula, Part II: Relations between entropy, exponents and dimension, Annals of Math., 122, (1985), 509-574. 2. (with F. Ledrappier) Entropy formula for random transformations, Prob. Th. Rel. Fields, 80, (1988), 217-240. 3. (with M. Benedicks) Sinai-Bowen-Ruelle measures for certain Hénon maps, Invent. Math, 112, (1993), 541-576. 4. Statistical properties of dynamical systems with some hyperbolicity, Annals of Math., (1998), 585-650 5. Recurrence times and rates of mixing, Israel J Math. 110, (1999), 153-188 6. (with J.-P. Eckmann) Nonequilibrium Energy Profiles for a Class of 1-D Models, Commun. Math. Phys., 262, (2006), 237-267. 7. (with Q.D. Wang) Toward a theory of rank one attractors, Annals of Math., 167, No.2, (2008), 349-480. 8. (with M. Demers and P. Wright) Escape rates and physically relevant measures for billiards with small holes, Commun. Math. Phys., Vol 294, 2, (2010), 353-338. 9. (with K. Lu and Q. D. Wang) Strange attractors for periodically forced parabolic equations, AMS Memoirs, published online (2012). 10. (with Z. Lian) Lyapunov exponents, periodic orbits and horseshoes for semiflows on Hilbert spaces, J. Amer. Math. Soc., 25 (2012), 637-665. 11. (with A. Rangan) Emergent dynamics in a model of visual cortex, J Comp. Neurosci., published online (2013) • Ergodic theory of differentiable dynamical systems, "Real and Complex Dynamics", Ed. Branner and Hjorth, NATO ASI series, Kluwer Academic Publishers (1995), 293-336 • Developments in chaotic dynamics, AMS Notices, Nov. 1998 • (with N. Chernov) Decay of correlations for Lorentz gases and hard balls, Encycl. of Math. Sc., Math. Phys. II, Vol. 101, Ed. Szasz, (2001), 89-120. • What are SRB measures, and which dynamical systems have them?, J. Stat. Phys., 108 Issue 5, (2002) 733-754. • (with K. Lin) Dynamics of periodically-kicked oscillators, J. Fixed Point Theory Appl., 7 no. 2 (2010), 291-312. • Mathematical theory of Lyapunov exponents, to appear in J Phys. A (2013).
{"url":"http://cims.nyu.edu/~lsy/research-highlights.html","timestamp":"2014-04-17T03:51:30Z","content_type":null,"content_length":"12928","record_id":"<urn:uuid:cbf84661-d63d-4854-ade3-8e71de75ecda>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00005-ip-10-147-4-33.ec2.internal.warc.gz"}
Noob question about templates & inheritance 10-24-2008 #1 Registered User Join Date May 2008 Hi all, I've got a noob question to post. I've got this class named "Vector": using namespace std; template<class T> class Vector; template<class T> ostream& operator<<(ostream&, const Vector<T>&); template <class T> class Vector friend ostream& operator<< <>(ostream&, const Vector<T>&); T* data; unsigned len; Vector(unsigned = 10); Vector(const Vector<T>&); virtual ~Vector(void); Vector<T>& operator =(const Vector<T>&); bool operator==(const Vector<T>&); T& operator [](unsigned); unsigned getLength(void) {return len;} I've also got a class named "AssociativeArrayInheritance" that descends form Vector, as you can see: template<class KeyType, class ValueType> class AssociativeArrayInheritance : public Vector<Pair<KeyType, ValueType> > AssociativeArrayInheritance(unsigned size=0):Vector<Pair<KeyType, ValueType> >(size){} ValueType& operator [](const KeyType); template<class KeyType, class ValueType> ValueType& AssociativeArrayInheritance<KeyType, ValueType>::operator [](const KeyType key) unsigned i = Vector::getLength (); // HOW CAN I CALL THIS METHOD BY INHERITANCE? :| iCAN'T SEEM TO FIGURE OUT... When compiling this file, the last line (line163) ("unsigned i = Vector::getLength ();") is getting me the following error: (all code is in file "vector2.h") vector2.h: In member function ‘ValueType& AssociativeArrayInheritance<KeyType, ValueType>: vector2.h:163: error: ‘template<class T> class Vector’ used without template parameters Can anybody help me? I know it must be something stupid & basic but I've searched the web, talked with colleagues, and by now I should be delivering this to S teacher... Last edited by blacknail; 10-24-2008 at 05:51 PM. Well, the error seems pretty straightforward -- you have a Vector, but it's a Vector of ... what? KeyTypes? ValueTypes? Something Elses? How do I know? And how do you expect to call getLength without an object? You have to have a specific Vector in mind to call a member function on it. You have to have a specific Vector in mind to call a member function on it. With inheritance do I have to create an instance of the class to call a method in the super class? If it's a non-static method, yes. All methods need an object to operate on. But do you mean, "do I need a separate instance of the base class to call base class methods from within the derived class?" The answer to that question would be no. If you have permission to execute the method in the base class, you can call it from the derived class just by itself. class base { void base_function() {} class derived : public base { void derived_function() { [edit] Can you tell I didn't read the rest of the thread? Sorry for this rather useless post. [/edit] Last edited by dwks; 10-24-2008 at 06:03 PM. Seek and ye shall find. quaere et invenies. "Simplicity does not precede complexity, but follows it." -- Alan Perlis "Testing can only prove the presence of bugs, not their absence." -- Edsger Dijkstra "The only real mistake is the one from which we learn nothing." -- John Powell Other boards: DaniWeb, TPS Unofficial Wiki FAQ: cpwiki.sf.net My website: http://dwks.theprogrammingsite.com/ Projects: codeform, xuni, atlantis, nort, etc. Okay, I missed that you wanted the inherited version. So then it's Vector<Pair<KeyType,ValueType> >::getLength. Is there a reason you need the base version and not the inherited version? (Did you override it, and now need the original?) In first place, thanks to all. I was in the verge of despair and now I'm getting better. Second, tabstop, I only have getLength() defined in the superclass (Vector) and I want to call it in sub class (AssociativeArrayInheritance). As simple as that! Then do so. Everything in Vector is in AssociativeArrayInheritance. That was just what I was thinking. I just thought that I could call the method by just typing getLength() in the subclass :S Anyway, thanks for all! You must type "this->getLength()". This is a rather complicated aspect of the name lookup rules in the face of templates. The C++ FAQ Lite describes why. All the buzzt! "There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code." - Flon's Law Don't forget to make getLength a const method as well. My homepage Advice: Take only as directed - If symptoms persist, please see your debugger Linus Torvalds: "But it clearly is the only right way. The fact that everybody else does it some other way only means that they are wrong" 10-24-2008 #2 10-24-2008 #3 Registered User Join Date May 2008 10-24-2008 #4 10-24-2008 #5 10-24-2008 #6 Registered User Join Date May 2008 10-24-2008 #7 10-24-2008 #8 Registered User Join Date May 2008 10-25-2008 #9 10-25-2008 #10
{"url":"http://cboard.cprogramming.com/cplusplus-programming/108495-noob-question-about-templates-inheritance.html","timestamp":"2014-04-17T22:35:25Z","content_type":null,"content_length":"80539","record_id":"<urn:uuid:35229f95-c12b-47b6-8199-5a8ed2ee1d0a>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00319-ip-10-147-4-33.ec2.internal.warc.gz"}
Applied Nonlinear Dynamics and Chaos UBC MATH 345, Section 201, Jan-April 2013 Calendar description: Phase plane methods, bifurcation and stability theory, limit-cycle behavior and chaos for nonlinear differential equations with applications to the sciences. Assignments involve the use of computers. [3-1-0] Prerequisite: A score of 68% or higher in one of MATH 215, MATH 255, MATH 256, MATH 265. • Most recent announcements: ○ We have posted past exams in the Problem set and Exams page. The final exam will cover those sections from chapters 2-3 and 5-9 that we lectured, see the Lecture summary page. It will have 7-8 problems, similar to the 2012 exam. ○ Old homework and lab assignments are placed in folders outside of my office. ○ Office hours in the exam period: 2pm, Tue and Thu, April 9, 11, 16 and 18. ○ For the final exam, one page of 2-sided cheat sheet and a non-graphic non-programable calculator are allowed. ○ Final Exam: Friday April 19, 3:30pm, MATH 104. Textbook: Nonlinear Dynamics And Chaos: With Applications To Physics, Biology, Chemistry, And Engineering, by Steven H. Strogatz, Westview Press, 2001. Lecture Time & Location: Mon Wed Fri 10:00am, MATH 203. Instructor: Dr. Tai-Peng Tsai, Math building room 109, phone 604-822-2591, ttsai at math.ubc.ca. Office hours: Mon 4pm-5:15pm, Tue 11am-12:15pm, and by appointment (Tsai's schedule). • The lab assignments use the computer software XPPAUT, which can be downloaded at XPPAUT Homepage. See the same page for documentation and tutorial, and install.pdf for installation instruction. Back to Tsai's homepage or UBC Math.
{"url":"http://www.math.ubc.ca/~ttsai/courses/345-13S/","timestamp":"2014-04-16T10:32:56Z","content_type":null,"content_length":"4445","record_id":"<urn:uuid:7b93132a-c438-4b93-b890-812944d8c580>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00525-ip-10-147-4-33.ec2.internal.warc.gz"}
EnVision Math enVision Math Grades K-3: Booklet A-E (Math Diagnosis And Intervention System. Teacher's Guide Part 1) Program Name: EnVision Math By Sara Dion, Kelly Hogan, and Kathryn DeBettencourt Publisher: Scott Foresman-Addison Wesley "Envision a math program where pictures do the talking. enVisionMATH is the first math program that develops math concepts through interactive and visual learning." EnVision Math is a concept-based interactive math program designed for grades K-6. It comes in English and in Spanish. The program contains accommodations and modifications for English language learners and students with disabilities. The program provides teachers with ideas for differentiated instruction for students performing below grade level as well as gifted students. The program emphasizes the use of assessment as an ongoing tool for helping teachers gauge what levels their students are currently performing at. At the end of each lesson is a "Writing to Explain" portion in which the student is asked to explain his or her thought process. EnVision also bases itself around the research-based idea that it is best for teachers to teach the same content to all students while varying the amount of support depending on the students' developmental level. Envision Philosophy: Topics organized to help teachers teach what they want, when they want . . . EnVision Math in Schools EnVisionMath is a research based program that centers around the specific curriculum within any given classroom. Over several years of research, the developers of this program have found that students benefit best from the incorporation of pictures and diagrams, assistive technology and hands on materials. In doing this, students are able to develop a conceptual understanding of mathematics in a way that is meaningful and enriching. In addition, the authors of this particular math program have found that skills are learned best when they build upon previously learned skills, in a developmentally progressive manner. This program breaks skills up into different parts and then each part builds on the previous one until students have fulfilled each of the required parts of the program. EnVision Math also caters to the specific needs of the teachers in the classroom who are responsible for teaching with this program. Each school that submits to this program has planned curriculum planning periods to discuss how to teach the hands on materials that are incorporated into this program. In addition, there are specific books that teachers are able to use as guidance for this program. Teachers get all the materials that they need and they are directed in how to relay information to students in ways that are innovative and necessary. In addition, teachers receive help in multi-sensory instruction. This programs involved hands on maniplatives, pictures and diagrams as well as assistive computer technology programs for students to use as they develop their mathematics Each student receives a workbook and a problem-solving handbook. The workbook contains exercises on the topic, while the problem solving handbook teaches problem solving strategies. These strategies include looking for a pattern, drawing a picture, and making a table. It describes each strategy and says when it is best to use each one. This is very helpful because it supplements the content knowledge by actually teaching strategies for struggling students. This program is used with both general education students and special education students, and is based around state standards that the students will be tested on during state testing. The program provides several opportunities for practice: it comes with an array of overhead manipulatives and manipulatives to use in centers. This is beneficial for students who learn best by seeing and doing. These manipulatives include everything that you would expect in a math curriculum: shape block, base-10 rods, counters, play money, and much more! The program also comes with computer software to compliment textbook instruction. This software includes stories, games, and songs to reinforce the math concepts being taught. This allows students to practice their newly acquired skills in a fun way. The program really ensures that students have a firm understanding on what they are learning. The worksheets have sections with headings such as “Do you know HOW?” and “Do you UNDERSTAND?” to help develop metacognitive skills. After a while, students will ask themselves if they understand the material without even thinking! This is also useful for enforcing concept-based knowledge; students are able to understand that math needs to make sense, that there needs to be a how and a why. Progress monitoring and interventions are built right into the program. Each lesson has a “review What You Know” section at the beginning, so as to level the playing field between all students. At the end of each lesson is a “Quick Quiz” to informally and effectively assess student learning. Each lesson comes with ways to differentiate for below-level, on-level, and above-level students. This allows for specialized instruction with the opportunity for flexibility in grouping, since students are assessed after every lesson. The students are given more formal benchmark tests every four chapters. Built-in intervention lessons can be used at any point of the instructional process. This process is laid out in a convenient flow chart that covers the course of the entire school year. EnVision Math Training EnVision Math provides multiple training opportunities in order to accommodate all teachers. This program has face-to-face program training. It works with teachers to teach them how to implement EnVision math into their classroom. These training consultants also work to provide teachers with classroom management techniques and strategies on how to analize student work. In addition to face-to-face training sessions, Pearson also provides online services. Teachers can gain training in EnVision Math through the web with “self-paced modules and instructor-led webinars”. This online training is available at anytime, 24/7. Teacher's can also contact Pearson at anytime through phone or on their website for additional training information. Pearson's goal is to accommodate all teachers and to make sure that all teachers gain the proper training for EnVision Math at their convenience. There is also an online teacher resources page on the EnVision website that teachers can refer to. Teacher Resources Strengths The strengths of Envision Math are that it is adapted to multiple grade levels. There is a different EnVision Math program of each grade kindergarten through sixth grade. Another great aspect of EnVision Math is that it is provided in both English and Spanish. Through the country there are many school with Spanish speaking students, EnVision makes it possible for these students to learn with their English speaking classmates. Weaknesses The one weakness about EnVision Math is that it is not translated into languages other than Spanish. It would be a great improvement if this program could be written in other languages as well so that it can be taught to all students no matter what language they speak. Inclusion The EnVision Math program can be used in an inclusive classroom. EnVision math is “a math program where every child counts. Data-driven differentiated instruction takes the guess work out of helping students achieve.” Throughout the EnVision Math website it is clearly stated that “EnVision MATH gives every student the opportunity to succeed.” Final Reflection: The EnVision Math program is a great math program for a teacher at any point in his or her career. However, in reference to adapting to this program into the first year of teaching, the EnVision Math program provides a vast network of resources and opportunities to encourage a successful year. First year teachers are able to access online training videos and materials as well as on-site training sessions with professionals. As a new teacher, it is often hard to plan enriching lessons with limited materials but Envision math provides lesson ideas and resources to help a teacher allow students to succeed within the classroom. This program is interactive and involves communication at all levels between teachers and students. In addition, EnVision is often adopted by all schools within a district, allowing first year teachers to reach out to their colleagues to develop better lessons with outside support. "EnVision Math is a program that allows me to focus on the specific needs of the students in my classroom. I have such an array of learners and in order to meet each of my students' needs I need a program like EnVision to outline materials and keep me on track. Overall, this program has been successful in my classroom and I generally only find trouble with a few of the time periods provided for certain concepts." -- Mrs. S* (special education teacher) Program Authors: Dr. Randall I. Charles, Dr. Randall "Skip" Fennell, Dr. Janet H. Caldwell., Ms. Alma B. Ramirez, Dr. Mary Cavanagh, Ms. Kay B. Simmons, Ms. Dinah Chancellor, Dr. Jane F. Schielack, Dr. Juanita "Nita" Copley, Dr. William Tate, Dr. Warren D. Crown, John A. Van de Walle Consulting Authors: Dr. Veronica Galvan Carlan, Stuart J. Murphy, Ms. Jeanne F. Ramos ELL Consultants/Reviewers: Jim Cummins, Ms. Alma B. Ramirez Pearson Educational Inc. (2009). Scott Foresman and Addison Wesley EnVision Math. Retrieved on November 14, 2009 at: Foresman, Scott and Addison Wesley. (2009). EnVision Math Texas. Retrieved on November 14, 2009 at:
{"url":"http://specializedreadingmathprograms.providence.wikispaces.net/EnVision+Math?responseToken=275540971718683fd5119b258b3f4b4b","timestamp":"2014-04-23T11:46:58Z","content_type":null,"content_length":"59094","record_id":"<urn:uuid:56d45c5a-2d52-4550-8a3d-cd038ea76641>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00048-ip-10-147-4-33.ec2.internal.warc.gz"}
problem about coalition formation Our local networks went down this morning and I started thinking about . Didn't get very far, but on the way, came up with the following question. players, and a binary game tree, each of whose leaves is labelled with one of the players, who is deemed to be the winner if that leaf is the outcome of the game. Each node of the tree is also labelled with a player, with the understanding that if that node is reached during the game, the that player gets to choose one of the two subtrees. Question: how hard is it to find the smallest coalition that has the property that they can ensure that one of their members ends up winning? (There are two versions of this question: you can require them to nominate a winner in advance, or you could just require that the winner is one of them, but precisely who, depends on the behaviour of the non-members of the coalition.) Comment: Let's focus on the second of the above versions. In that case, there are many winning coalitions of size /2. Why? You can partition the players into 2 subsets (opposing teams that play against each other), then treat each subset as a single player, resulting in a 2-player version. You can find out which of those two "players" can win, in a bottom-up fashion. So, if the subsets are the same size, you end up finding a subset consisting of half the players, that can win. Another question: leaving aside computational issues, should it always be possible to find winning coalitions of size /2? Probably not. Disclaimer: I don't know if this is a new problem, it could may have been worked on already. 4 comments: Don't the nodes on the shortest path from the root to a leaf form a good enough coalition? That would give a log n bound (for any tree since non-branching nodes can be ignored) or am I missing something obvious. The problem with your suggestion is that the label for that leaf (the winner) may be different from the labels of the nodes on the path that reach it. So, log n of the players can indeed choose an outcome, but not necessarily one in which one of those players is the winner. The winner is always welcome. He doesn't have to do anything but win (and I assume share the winnings at the end). Uh, that looks suspiciously like a valid observation! So, now the problem is looking easier than I thought it was (although we don't quite have a polynomial-time algorithm.) (BTW, I originally had in mind a related version in which each leaf of the tree identifies a particular loser, and a successful coalition should consist of a subset of agents that can avoid any of them being the loser. But it's now looking to me as if, in this version also, there should be successful coalitions of size proportional to log n.)
{"url":"http://paulwgoldberg.blogspot.com/2008/12/problem-about-coalition-formation.html","timestamp":"2014-04-20T03:10:20Z","content_type":null,"content_length":"88834","record_id":"<urn:uuid:5746c82c-02a6-428f-8fb7-6f02f938f340>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00570-ip-10-147-4-33.ec2.internal.warc.gz"}
Sample Size Selection Using a Margin of Error Approach Posted by mddiadmin on October 1, 2006 Manufacturers must use a sensible approach when choosing sample sizes and testing protocols to validate packaging processes. Manufacturers must employ sample sizes that ensure that packaging systems are safe, meet regulatory requirements, and maintain sterility. Photo courtesy of Beacon Convertors (Saddle Brook, NJ). The rise in nosocomial (hospital-acquired) infections in recent years has brought attention from epidemiologists, microbiologists, healthcare professionals, government officials, and the media. Handling procedures, hospital hygiene practices, and even the genetics of the microbes have all been scrutinized. The Centers for Disease Control and Prevention (CDC) indicate that a number of microbes have been implicated as causing nosocomial infections through the years.^1 The microbes responsible for these infections include the following: • Staphylococcus aureus. • Enterobacteriaceae. • Pseudomonas aeruginosa. • Klebsiella pneumoniae. Figure 1. (click to enlarge) Resistance of vancomycin-resistant among ICU patients, 1995–2004. Source: NNIS. CDC also suggests that increases in nosocomial infections can be linked with the organisms becoming more resistant to both processes and drugs that are intended to kill them. Figures 1–4, which consist of data collected by the National Nosocomial Infections Surveillance System (NNIS) and published by CDC, depict the increasing antimicrobial resistance of specific strains of various microbes over time. Figure 2. (click to enlarge) Resistance of fluoroquinolone-resistant Pseudomonas aeruginosa among ICU patients, 1995–2004. Source: NNIS. The most likely source of nosocomial infections relates to handling and hygiene procedures of the healthcare providers and hospitals. Nevertheless, judicious device manufacturers are doing their part to ensure that device packages and their contents do not contribute to increasing infection rates. As a result, the integrity testing of medical device packaging is of escalating importance. Medical device manufacturers are required to package devices so that they withstand “the rigors of shipping and arrive at the point of use in a safe and functional condition.”^2 Additionally, they must employ testing protocols that provide objective evidence that the process will produce an expected outcome. These protocols give manufacturers a “high degree of assurance” that their packages will maintain integrity from the sterilization process until the product is delivered to the patient. Figure 3. (click to enlarge) Resistance of third-generation cephalosporin-resistant Kliebsiella pneumoniae among ICU patients, 1995–2004. Source: NNIS. At the same time, the industry faces immense pressures regarding the cost of healthcare. Providers, patients, insurance companies, and government officials all indicate that reducing the cost of healthcare is a major priority. In sum, the device industry faces an increasing level of expectation that must be delivered at a reduced cost. Figure 4. (click to enlarge) Resistance of Methicillin (oxacillin)-resistant Staphylococcus aureus (MRSA) among ICU patients, 1995–2004. Source: NNIS. Consequently, choosing the appropriate sample size and testing protocols to validate the processes can be a daunting task. A sensible approach includes considering risks and understanding the ramifications associated with choosing too many or too few samples. Risk Considerations in Qualifying Sterile Medical Packaging The Risk of Being Too Risk Averse. It is important to bring package-product systems to market that ensure patient safety, meet all legal and regulatory requirements, and maintain product sterility. However, the costs associated with being too conservative are often overlooked. When a manufacturer employs an excessive level of test severity or conservatism, the consequences can delay the introduction of an otherwise beneficial device. Such a delay means that superior treatments are not available as quickly as possible. An overly cautious approach can also yield excessive waste and increased disposal costs, both of which are associated with overpackaging. Additionally, although it is seldom considered, unwarranted and excessively high standards result in loss of opportunities to reduce the cost of healthcare. The Risk of Being Too Risky. The costs associated with being too risky are, perhaps, more obvious than those associated with a more conservative approach. Bringing a product to market that has not received thorough package testing is asking for problems. The biggest concern is patient harm. Many patients who need medical devices and hospitalization are already compromised in terms of overall heath. For some patients, such as those on immunosuppressive therapy, the contamination of a device with even one colony-forming unit (CFU) may eventually be life threatening. Apart from the obvious concerns regarding patient risk, inadequate package testing employing too few samples can lead to product recalls, damage to company reputation, lawsuits, increased FDA scrutiny, and decreased stock prices. Within the hospital, questionable products can lead to delays in the operating room and a lack of confidence in the product. All of these outcomes are highly The Benefit of Appropriate Risk Taking. Although the quality system regulation (QSR), 21 CFR 820, does not require a specific risk management system, it does require risk analysis “where appropriate” in design validation, which is typically where package integrity testing would occur. ISO 14971 is recognized by FDA as a system that employs risk-based decision making and an analysis of potential risks associated with an unexpected outcome.^3 This standard provides a framework for establishing that a device will be safe and effective as well as meet end-user requirements. It provides an excellent structure for analyzing risk. Through careful consideration of risk, including cost and benefit, positive outcomes can be maximized, and undesirable consequences minimized. Balancing these issues comes into focus in choosing an appropriate sample size when validating your processes. Important factors to be considered when determining an appropriate sample size include the possible risks associated with a given failure, the likelihood of the failure, relevant consequences associated with the failure, and any relevant sterile barrier system history. Current Approaches to the Selection of Sample Size When choosing sample sizes, it is important to consider the type of packaging being tested, as well as issues such as sterile barrier system history. Photo courtesy of Oliver Medical (Grand Rapids, The QSR dictates that manufacturers establish methods and controls (i.e., a quality system) to ensure that processes are validated “with a high degree of assurance.” This regulation applies to packaging processes and, consequently, to the integrity of packages. However, FDA does not dictate what type of program needs to be implemented. There is no mention of an acceptable quality level (AQL) in current government documents. The agency does not dictate how many packages should be subjected to simulated distribution testing to ensure that packages meet all functional requirements. Instead, FDA broadly defines what must be done, leaving details (such as what constitutes a high degree of assurance) up to device manufacturers. Therefore, manufacturers of medical devices use a variety of techniques to determine suitable sample sizes. Some techniques are appropriate, while others are not. Inappropriate Techniques. Anyone who has ever worked in a production facility, medical device R&D group, or other corporate function has heard the phrase, “because that is how we have always done it.” Although this may work for some decisions, like where to host a company picnic, it is not a sound defense of a sample size. Sample-size justifications that have been inherited in this way may reflect historical judgments made years before the broad application of statistical methods and statistical process control. An equally inappropriate technique for determining appropriate sample sizes is to arbitrarily pick a number. Arbitrarily picked sizes tend, for some reason, to be round numbers, such as n = 10 or n = 30. Thirty is a particularly interesting case, as many people believe it to be the magical, statistically valid sample size. However, the magic of this myth is easily dispelled. Consider a lot of 50 units; by sampling n = 30, 60% of the total population produced has been sampled. If the sample were pulled appropriately (i.e., throughout the run), and the process were in control, some reasonable assumptions can be made about the portion of the population that was not sampled. But now consider sampling n = 30 in a lot of 2 million units. Using n = 30 would represent 0.0015% of the total population produced. Arbitrarily choosing 30, without context, such as considering lot size or risk factors associated with a failure, is not a sound approach. Figure 5. (click to enlarge) A simplified guide to calculating a sample size for variable data. Source: NIST. Another way of dispelling the n = 30 approach is to determine what confidence is obtained if a sample size of 30 is used for pass-fail data. By quick calculation, if a target defect rate of no more than 1% is needed, but a sample size of 30 is used, the detectable difference will be 3.5% (see Figures 5 and 6). That would mean that in any typical production batch, 3 out of 100 might be Figure 6. (click to enlarge) A simplified guide to calculating a sample size for attribute data. Source: NIST. A different approach is to examine resulting confidence intervals. The exact one-sided upper 90% confidence limit for a population rate based on observing no failures in a random sample of 30 is 0.0738. Therefore, seeing no failures in 30 does not rule out a population or production percentage of up to 7.38% at 90% confidence. Clearly, one out of every 14 products having a defect would be outrageous. Nevertheless, if inadequate sample sizes are selected for pass-fail data, huge risks may be undertaken by unknowing companies. Finally, in some circumstances, uninformed companies may limit sample sizes based on the assumption that producing and testing units would be too expensive. Although the economics of any product development initiative must be thoroughly understood, the cost of testing a robust sample size is often not properly weighed against the cost of not testing. The effect to the company of just one inaccurate conclusion that compromises the integrity of a sterile barrier system and results in a recall or harms a patient far surpasses the incremental costs of testing additional units. Appropriate Techniques. Most appropriate techniques for choosing sample sizes are based on statistical approaches, some of which are outlined below. Previously Published Sampling Plans. Perhaps the most common way companies approach the development of a sampling plan is by relying on plans that have been created by others. One of the most ubiquitous sampling plans is MIL-STD 105E, which is a sampling plan for attribute data. It later became ANSI/ASQC Z1.4–2003, “Sampling Procedures and Tables for Inspection by Attributes—E-Standard.” Sampling plans like these are appropriate for processes that have been shown to be stable and capable. However, a note of caution is in order. The proper sampling plan must be selected based on whether the data are attribute or variable. Attribute data are also called binary data, meaning that they provide pass-fail information and nothing more. They are converted to discrete data by counting the number of passes or fails. Variable data, on the other hand, are measured on a continuous scale. The weight of a product, for instance, is an example of variable data. Regardless of the type of data, using plans like the MIL-STD generally requires an AQL. An AQL is user defined. In other words, customers determine the acceptable level of product quality, or the numbers of failures they are willing to accept and still deem the incoming product as acceptable. Although consumers may be willing to accept a specified number of failing widgets, it is unlikely that healthcare providers or immunocompromised patients would provide manufacturers with any level of allowable defects. Statistically Determining Your Own Sample Size. Another sound statistical approach is for companies to determine their own sample size, based on confidence intervals. This is not as difficult as it sounds. The confidence level required will be chosen based on an understanding of the risks associated with a failure, the history of the product, the likelihood of a failure, etc. The greater the level of confidence needed, the larger the number of samples required, and the more certainty exists regarding the population being produced. Sample size can be calculated for both attribute and variable data (see Figures 5 and 6). These calculations require manufacturers to have examined, and challenged, their processes to understand the inherent variability. This variability is reflected in the standard deviation, which is one of the numbers required for the calculation. Obtaining standard deviation from multiple runs across several lots of material is strongly recommended as this is a more realistic setup for your process. The larger the standard deviation of a process, the larger the sample required to have a smaller margin of Two separate groups of statistics are available to help determine an appropriate sample size: statistics that assist with estimation, and statistics that are used in hypothesis testing. Estimation is concerned with a margin of error. In hypothesis testing, more-advanced statistics are used to prove whether there is a difference between two treatments. This article utilizes the estimate approach and is concerned only with margin of error. A future article will deal with exact methods and hypothesis testing, including an understanding of the power associated with a given sample size. Most people are familiar with the term margin of error, as it is often used during political campaign seasons when polls are taken regarding candidates' likelihood of receiving votes. The more formal, statistical definition of margin of error is one-half of the width of a confidence interval. An example would be a preelection poll that questioned people regarding their voting preference. If pollsters wanted to create a 95% confidence interval that was 4 percentage points wide from the results, they would be need a sample size associated with a margin of error of 2%. Calculating a Sample Size The National Institute of Standards and Technology (NIST) provides an online statistics reference that covers this simple approach to calculating sample sizes (see Figures 5 and 6).^4 There are different approaches for calculating a sample size for variable data and for attribute data. Variable Data. Calculating a sample size for variable data follows four steps. Step 1. Determine the margin of error in sample populations that needs to be seen. Being able to detect small margins of error will require larger samples. As a result, an informed decision needs to be made regarding what the target value is and what the acceptable tolerance is on either side of that target. For example, could you live with a seal strength result that is ±1 lb of force? Probably not, if the target is 1.5 lb and the acceptable range is 0.5 for keeping the product contained within the package and 4 lb for being able to pull the package open at the hospital. A more-realistic margin of error might be ±0.2 lb. Step 2. Determine the standard deviation. The easiest way of finding the standard deviation is by looking at some historical data. Prudent manufacturers will consider that real life presents variations in the materials, shifts, operators, plant temperatures, etc. The standard deviation should reflect the changes that will occur during the course of a normal run. This type of analysis of the performance of a system yields valuable information regarding how tightly processes are controlled. Step 3. Crunch the numbers using the NIST equation where stdev is the standard deviation. The degree of confidence should reflect the judgment of the manufacturer. Device manufacturers will consider the risk associated with product failure, the history of the device, the characteristics of the device and package, the likelihood of a failure, etc., when determining the appropriate confidence bound. It is important to note that the 1.96 in the equation provides for a 95% confidence bound. If a 90% confidence bound is preferred, use 1.645 instead. In addition, there is an assumption that the sample mean is normally distributed. The sample mean should be tested using any one of a number of tests to ensure that the normal probability assumption has been met before proceeding with this approach. So, using the earlier example, the equation would be which would yield n = 8.64. Step 4. Do a sanity check. Does this sample size, intuitively, seem right? Is it likely to produce the required outcomes? Attribute Data. Calculating a sample size for attribute data follows five steps. Step 1. Determine the margin of error that needs to be detected, as a percentage. The margin of error represents the sensitivity of the statistical technique's ability to estimate differences in samples' values. To estimate small differences in a population, a larger sample is needed. Extremely variable processes will require even larger samples. For example, would device manufacturers be satisfied knowing that they were within ±2% of a target percent defective? Usually not. A more likely d would be 0.5%, or 0.005. Step 2. Determine the likely percent defective, q. For example, to know that there are less than 1% defective would require that q = 0.01. Step 3. Determine the likely percentage of conforming product, p. This is simple, because p = 1.0 – q. If q = 0.01, then p = 0.99. Step 4. Crunch the numbers using the NIST equation, where d is the margin of error, in percent; q is the percent defective; and p is the percent conforming. Therefore, which makes n = 1521.27. Always round the result up; in this case, the sample size would be 1522. Step 5. Do a sanity check. Does this sample size, intuitively, seem right? Is it likely to produce the required outcomes? The high sample size of 1522 might be surprise some people. However, consider the amount of information that is provided in a pass-fail test. Each sample unit provides only one small piece of information. This is in stark contrast with the amount of information provided by each trial of an experiment that provides variable data. Each data point not only provides information regarding its own value, but it also provides information on how far away it is from the average value, whether it is away from the mean in a positive or negative direction, and what the expected variation may be for other samples. The calculations for attribute sample sizes require product lots to be somewhat large. The statistical requirement is that n × p > 5 and also that n(1 – p) > 5. Entering different values for p shows that the smaller the percent defective, the larger the sample size must be for these statistical approaches to remain valid. Companies can either use a previously published plan, such as ANSI/ASQC Z1.4, or create their own based on a fundamental understanding of statistical principles to choose sample sizes. But regardless of the method chosen, a company's processes must be under control for a plan to work properly. Judicious manufacturers recognize that they need to make informed, defensible decisions regarding their sample sizes. However, informed decisions are impossible in an information vacuum. Techniques presented in this article can not only provide insight into calculating appropriate sample sizes for validation; they also show that manufacturers must begin to develop an understanding of their process variability and consider myriad other factors when choosing an appropriate sample size. The substance, recommendations, and views set forth in this article are not intended as specific advice or direction to medical device manufacturers and packagers, but rather are for discussion purposes only. Medical device manufacturers and packagers should address any questions to their own packaging experts and have an independent obligation to ascertain and ensure their own compliance with all applicable laws, regulations, industry standards, and requirements, as well as their own internal requirements. Nick Fotis is director of packaging for Cardinal Health (McGaw Park, IL) and can be contacted at nick.g.fotis@cardinal.com . Laura Bix is as assistant professor at the Michigan State University School of Packaging. E-mail her at bixlaura@msu.edu. 1. Robert A Weinstein, Centers for Disease Control and Prevention, “Nosocomial Infection Update,” Emerging Infectious Diseases 4, no. 3 [online] July–September 1998 [cited 29 June 2006]; available from Internet: www.cdc.gov/ncidod/eid/vol4no3/weinstein.htm. 2. John Spitzley, “The State of Sterile Package Integrity Testing in the Medical Device Industry,” in WorldPak Proceedings, (East Lansing, MI: Michigan State University, 2002). 3. ISO 14971:2000, “Medical Devices—Application of Risk Management to Medical Devices” (Geneva: International Organization for Standardization, 2000). 4. National Institute of Standards and Technology, “Selecting Sample Sizes,” Engineering Statistics Handbook [online] (Gaithersburg, MD: National Institute of Standards and Technology) [cited 29 June 2006]; available from Internet: www.itl.nist.gov/div898/handbook/ppc/section3/ppc333.htm. Copyright ©2006 Medical Device & Diagnostic Industry Tags: Printer-friendly version
{"url":"http://www.mddionline.com/article/sample-size-selection-using-margin-error-approach","timestamp":"2014-04-21T05:40:51Z","content_type":null,"content_length":"67557","record_id":"<urn:uuid:3759c828-5a60-4ce0-9126-64012b67d8e7>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00403-ip-10-147-4-33.ec2.internal.warc.gz"}
GMAT Tip of the Week Data Sufficiency – Where “No” Means “Affirmative” (This is one of a series of GMAT tips that we offer on our blog.) My hat is off to whomever created the Data Sufficiency question type, which holds within its format several delightfully crafty ways to elicit an incorrect answer. Perhaps none is more tricky and understated, however, than the method by which the test preys on our innate connection between the word “no” and its connotation of “negative”. (Author’s Note: This same connection was exploited recently-and-brilliantly on an episode of 30 Rock, in which Tracy Jordan exclaims that Jack’s medical test results were “positive” — meaning “good news” — because the actual results came back “negative”. But I digress…) To illustrate, a Data Sufficiency question might ask: Is x > 0? A simple enough question, it would seem – is x positive – but then consider a potential first statement: 1) IxI = -x This statement tells us that the absolute value of the number is equal to itself multiplied by negative one. Because of this, the number cannot be positive – any positive number multiplied by negative one becomes negative, but all absolute values are either positive or zero. A positive number simply cannot satisfy statement 1, and the answer to the overall question — Is x > 0? — is “no”. Herein lies the rub — the answer to the question is “no,” which may lead you to believe that statement 1 is “negative” or “undesirable,” because of that connotation of the word “no” at which we just arrived. However, “no” is a definitive answer to the overall question. Given the information in statement one, we can prove one answer to that question, which means that “Statement (1) ALONE is If you see, it would be easy, and somewhat intuitive, to “eliminate” statement 1 because it provided the answer “no”. But that’s not what Data Sufficiency questions ask — instead, they ask “do you have enough information to answer the question?”. Because of that, a definitive answer of “no” is, in fact, enough to answer the question, and so you must remind yourself that “no” means “sufficient” . To combat this common pitfall, I suggest writing the word “sufficient” at the top of your noteboard, and glancing at it each time you answer a Data Sufficiency question to remind yourself what the question is specifically asking. Veritas Prep offers a full lesson on the Data Sufficiency question format, as well as hundreds of Data Sufficiency practice problems in its quantitative curriculum. For more information, please take a look at all of Veritas Prep’s GMAT preparation options.
{"url":"http://www.veritasprep.com/blog/2009/02/gmat-tip-of-the-week-19/","timestamp":"2014-04-19T22:06:14Z","content_type":null,"content_length":"46243","record_id":"<urn:uuid:1abef7c0-2f40-4cf0-9dac-0881f7395d05>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00557-ip-10-147-4-33.ec2.internal.warc.gz"}
André-Quillen cohomology and rational homotopy of function spaces , 903 "... Abstract. Let Aut(p) denote the space of all self-fibre-homotopy equivalences of a fibration p: E → B. When E and B are simply connected CW complexes with E finite, we identify the rational Samelson Lie algebra of this monoid by means of an isomorphism: π∗(Aut(p)) ⊗ Q ∼ = H∗(Der∧V (∧V ⊗ ∧W)). Here ..." Add to MetaCart Abstract. Let Aut(p) denote the space of all self-fibre-homotopy equivalences of a fibration p: E → B. When E and B are simply connected CW complexes with E finite, we identify the rational Samelson Lie algebra of this monoid by means of an isomorphism: π∗(Aut(p)) ⊗ Q ∼ = H∗(Der∧V (∧V ⊗ ∧W)). Here ∧V → ∧V ⊗ ∧W is the Koszul-Sullivan model of the fibration and Der∧V (∧V ⊗ ∧W) is the DG Lie algebra of derivations vanishing on ∧V. We obtain related identifications of the rationalized homotopy groups of fibrewise mapping spaces and of the rationalization of the nilpotent group π0(Aut♯(p)) where Aut♯(p) is a fibrewise adaptation of the submonoid of maps inducing the identity on homotopy groups. 1.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=14536920","timestamp":"2014-04-21T03:48:44Z","content_type":null,"content_length":"12330","record_id":"<urn:uuid:17fc7582-c88e-4a26-89fd-f93163b742af>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00585-ip-10-147-4-33.ec2.internal.warc.gz"}
Waves on a String Hitting a key on a piano causes a hammer to come up from underneath and hit a string (actually a set of three). The result is a pair of pulses moving away from the point of impact. So far you have learned some counterintuitive things about the behavior of waves, but intuition can be trained. The first half of this section aims to build your intuition by investigating a simple, one-dimensional type of wave: a wave on a string. If you have ever stretched a string between the bottoms of two open-mouthed cans to talk to a friend, you were putting this type of wave to work. Stringed instruments are another good example. Although we usually think of a piano wire simply as vibrating, the hammer actually strikes it quickly and makes a dent in it, which then ripples out in both directions. Since this chapter is about free waves, not bounded ones, we pretend that our string is infinitely long. After the qualitative discussion, we will use simple approximations to investigate the speed of a wave pulse on a string. This quick and dirty treatment is then followed by a rigorous attack using the methods of calculus, which may be skipped by the student who has not studied calculus. How far you penetrate in this section is up to you, and depends on your mathematical self-confidence. If you skip the later parts and proceed to the next section, you should nevertheless be aware of the important result that the speed at which a pulse moves does not depend on the size or shape of the pulse. This is a fact that is true for many other types of waves. Intuitive ideas Consider a string that has been struck, (a), resulting in the creation of two wave pulses, (b), one traveling to the left and one to the right. This is analogous to the way ripples spread out in all directions from a splash in water, but on a one-dimensional string, "all directions" becomes "both directions." We can gain insight by modeling the string as a series of masses connected by springs. (In the actual string the mass and the springiness are both contributed by the molecules themselves.) If we look at various microscopic portions of the string, there will be some areas that are flat, (c), some that are sloping but not curved, (d), and some that are curved, (e) and (f). In example (c) it is clear that both the forces on the central mass cancel out, so it will not accelerate. The same is true of (d), however. Only in curved regions such as (e) and (f ) is an acceleration produced. In these examples, the vector sum of the two forces acting on the central mass is not zero. The important concept is that curvature makes force: the curved areas of a wave tend to experience forces resulting in an acceleration toward the mouth of the curve. Note, however, that an uncurved portion of the string need not remain motionless. It may move at constant velocity to either side. Approximate treatment We now carry out an approximate treatment of the speed at which two pulses will spread out from an initial indentation on a string. For simplicity, we imagine a hammer blow that creates a triangular dent, (g). We will estimate the amount of time, t, required until each of the pulses has traveled a distance equal to the width of the pulse itself. The velocity of the pulses is then ± w/t. As always, the velocity of a wave depends on the properties of the medium, in this case the string. The properties of the string can be summarized by two variables: the tension, T, and the mass per unit length, μ (Greek letter mu). If we consider the part of the string encompassed by the initial dent as a single object, then this object has a mass of approximately μw (mass/length x length=mass). (Here, and throughout the derivation, we assume that h is much less than w, so that we can ignore the fact that this segment of the string has a length slightly greater than w.) Although the downward acceleration of this segment of the string will be neither constant over time nor uniform across the string, we will pretend that it is constant for the sake of our simple estimate. Roughly speaking, the time interval between (g) and (h) is the amount of time required for the initial dent to accelerate from rest and reach its normal, flattened position. Of course the tip of the triangle has a longer distance to travel than the edges, but again we ignore the complications and simply assume that the segment as a whole must travel a distance h. Indeed, it might seem surprising that the triangle would so neatly spring back to a perfectly flat shape. It is an experimental fact that it does, but our analysis is too crude to address such details. The string is kinked, i.e. tightly curved, at the edges of the triangle, so it is here that there will be large forces that do not cancel out to zero. There are two forces acting on the triangular hump, one of magnitude T acting down and to the right, and one of the same magnitude acting down and to the left. If the angle of the sloping sides is θ, then the total force on the segment equals 2T sin θ. Dividing the triangle into two right triangles, we see that sin θ equals h divided by the length of one of the sloping sides. Since h is much less than w, the length of the sloping side is essentially the same as w/2, so we have sin θ = 2h/w, and F=4Th/w. The acceleration of the segment (actually the acceleration of its center of mass) is a = F/m = 4Th/μw^2 . The time required to move a distance h under constant acceleration a is found by solving h= 1/2 at ^2 to yield Our final result for the velocity of the pulses is The remarkable feature of this result is that the velocity of the pulses does not depend at all on w or h, i.e. any triangular pulse has the same speed. It is an experimental fact (and we will also prove rigorously in the following subsection) that any pulse of any kind, triangular or otherwise, travels along the string at the same speed. Of course, after so many approximations we cannot expect to have gotten all the numerical factors right. The correct result for the velocity of the pulses is The importance of the above derivation lies in the insight it brings -that all pulses move with the same speed - rather than in the details of the numerical result. The reason for our too-high value for the velocity is not hard to guess. It comes from the assumption that the acceleration was constant, when actually the total force on the segment would diminish as it flattened out. Rigorous derivation using calculus (optional) After expending considerable effort for an approximate solution, we now display the power of calculus with a rigorous and completely general treatment that is nevertheless much shorter and easier. Let the flat position of the string define the x axis, so that y measures how far a point on the string is from equilibrium. The motion of the string is characterized by y(x,t), a function of two variables. Knowing that the force on any small segment of string depends on the curvature of the string in that area, and that the second derivative is a measure of curvature, it is not surprising to find that the infinitesimal force dF acting on an infinitesimal segment dx is given by (This can be proven by vector addition of the two infinitesimal forces acting on either side.) The acceleration is then a =dF/dm, or, substituting dm=μdx, The second derivative with respect to time is related to the second derivative with respect to position. This is no more than a fancy mathematical statement of the intuitive fact developed above, that the string accelerates so as to flatten out its curves. Before even bothering to look for solutions to this equation, we note that it already proves the principle of superposition, because the derivative of a sum is the sum of the derivatives. Therefore the sum of any two solutions will also be a solution. Based on experiment, we expect that this equation will be satisfied by any function y(x,t) that describes a pulse or wave pattern moving to the left or right at the correct speed v. In general, such a function will be of the form y=f(x-vt) or y=f(x+vt), where f is any function of one variable. Because of the chain rule, each derivative with respect to time brings out a factor of ± v . Evaluating the second derivatives on both sides of the equation gives Squaring gets rid of the sign, and we find that we have a valid solution for any function f, provided that v is given by
{"url":"http://www.vias.org/physics/bk3_03_03.html","timestamp":"2014-04-17T18:46:06Z","content_type":null,"content_length":"16137","record_id":"<urn:uuid:2415e589-b2fb-4632-ba0c-6bb0675e06b4>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00563-ip-10-147-4-33.ec2.internal.warc.gz"}
Know benchmarks for converting between common fractions, decimals and percentages. Level Four > Number and Algebra Know benchmarks for converting between common fractions, decimals and percentages. Teaching resources for this Achievement Objective: • Numeracy Activities • Number and Algebra • Level One Order numbers in the range 0–100. Order the numbers in the range 0–1000. Order whole numbers in the range 0–1 000 000. Identify symbols for any fraction, including tenths, hundredths, thousandths, and those greater than 1. Find equivalent fractions and order fractions. Know benchmarks for converting between common fractions, decimals and percentages. Identify and order decimals to three places. Order fractions, decimals and percentages. • Numeracy Activities • Number and Algebra • Level Four Find equivalent fractions and order fractions. Know benchmarks for converting between common fractions, decimals and percentages. Know benchmarks for converting between fractions, decimals and percentages. • Figure It Out Activities • Number and Algebra • Level Four This is a level 4 number activity from the Figure It Out series. It relates to Stage 7 of the Number Framework. • Figure It Out Activities • Number and Algebra • Level Four This is a level 4 number activity from the Figure It Out series. It relates to Stage 7 of the Number Framework. • Figure It Out Activities • Number and Algebra • Level Four This is a level 4 number link activity from the Figure It Out series. It relates to Stage 7 of the Number Framework. • Figure It Out Activities • Number and Algebra • Level Four This is a level 4 number activity from the Figure It Out series. It relates to Stage 7 of the Number Framework. • Figure It Out Activities • Number and Algebra • Level Four This is a level 4 number activity from the Figure It Out series. It relates to Stage 7 of the Number Framework. • Figure It Out Activities • Number and Algebra • Level Four This is a level 4 number activity from the Figure It Out series. It relates to Stage 7 of the Number Framework. • Figure It Out Activities • Number and Algebra • Level Four This is a level 4 number activity from the Figure It Out series. It relates to Stage 7 of the Number Framework. • Figure It Out Activities • Number and Algebra • Level Four This is a level 4 number activity from the Figure It Out series. It relates to Stage 7 of the Number Framework. • Figure It Out Activities • Number and Algebra • Level Four This is a level 4 number activity from the Figure It Out series. It relates to Stage 7 of the Number Framework. • Figure It Out Activities • Number and Algebra • Level Four This is a level 4 number activity from the Figure It Out series. It relates to Stage 7 of the Number Framework. • Figure It Out Activities • Number and Algebra • Level Four This is a level 4 number activity from the Figure It Out series. It relates to Stage 7 of the Number Framework. • Figure It Out Activities • Number and Algebra • Level Four This is a level 4 number link activity from the Figure It Out series. It relates to Stage 7 of the Number Framework. • Figure It Out Activities • Number and Algebra • Level Four This is a level 4 number link activity from the Figure It Out series. It relates to Stage 7 of the Number Framework. • Figure It Out Activities • Number and Algebra • Level Four This is a level 4 number link activity from the Figure It Out series. It relates to Stage 7 of the Number Framework. • Figure It Out Activities • Number and Algebra • Level Four This is a level 4 number link activity from the Figure It Out series. It relates to Stage 7 of the Number Framework. • Figure It Out Activities • Number and Algebra • Level Four This is a level 4 number activity from the Figure It Out series. It relates to Stage 7 of the Number Framework. • Numeracy Activities • Number and Algebra • Level Four Know benchmarks for converting between common fractions, decimals and percentages. • Numeracy Activities • Number and Algebra • Level Four Know benchmarks for converting between common fractions, decimals and percentages
{"url":"http://www.nzmaths.co.nz/ao/know-benchmarks-converting-between-common-fractions-decimals-and-percentages","timestamp":"2014-04-17T17:15:02Z","content_type":null,"content_length":"33826","record_id":"<urn:uuid:703dcf29-b32b-40b0-ab40-b951bb5027a4>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00089-ip-10-147-4-33.ec2.internal.warc.gz"}
Logical relation and the typed lambda calculus Results 11 - 20 of 32 - ANNALS PURE APPL. LOGIC , 1996 "... ..." "... Traditional static type systems are effective for verifying basic interface specifications. Dynamicallychecked contracts support more precise specifications, but these are not checked until run time, resulting in incomplete detection of defects. Hybrid type checking is a synthesis of these two appro ..." Cited by 17 (0 self) Add to MetaCart Traditional static type systems are effective for verifying basic interface specifications. Dynamicallychecked contracts support more precise specifications, but these are not checked until run time, resulting in incomplete detection of defects. Hybrid type checking is a synthesis of these two approaches that enforces precise interface specifications, via static analysis where possible, but also via dynamic checks where necessary. This paper explores the key ideas and implications of hybrid type checking, in the context of the λ-calculus extended with contract types, i.e., with dependent function types and with arbitrary refinements of base types. - Fundamenta Informaticae , 2006 "... Parametric polymorphism constrains the behavior of pure functional programs in a way that allows the derivation of interesting theorems about them solely from their types, i.e., virtually for free. Unfortunately, standard parametricity results — including so-called free theorems — fail for nonstrict ..." Cited by 13 (5 self) Add to MetaCart Parametric polymorphism constrains the behavior of pure functional programs in a way that allows the derivation of interesting theorems about them solely from their types, i.e., virtually for free. Unfortunately, standard parametricity results — including so-called free theorems — fail for nonstrict languages supporting a polymorphic strict evaluation primitive such as Haskell’s seq. A folk theorem maintains that such results hold for a subset of Haskell corresponding to a Girard-Reynolds calculus with fixpoints and algebraic datatypes even when seq is present provided the relations which appear in their derivations are required to be bottom-reflecting and admissible. In this paper we show that this folklore is incorrect, but that parametricity results can be recovered in the presence of seq by restricting attention to left-closed, total, and admissible relations instead. The key novelty of our approach is the asymmetry introduced by left-closedness, which leads to “inequational” versions of standard parametricity results together with preconditions guaranteeing their validity even when seq is present. We use these results to derive criteria ensuring that both equational and inequational versions of short cut fusion and related program transformations based on free theorems hold in the presence of seq. - in : Logic From Computer Science, Mathematical Science Research Institute Publications 21 , 1992 "... What equations can we guarantee that simple functional programs must satisfy, irrespective of their obvious defining equations? Equivalently, what non-trivial identifications must hold between lambda terms, thought-of as encoding appropriate natural deduction proofs ? We show that the usual syntax g ..." Cited by 12 (4 self) Add to MetaCart What equations can we guarantee that simple functional programs must satisfy, irrespective of their obvious defining equations? Equivalently, what non-trivial identifications must hold between lambda terms, thought-of as encoding appropriate natural deduction proofs ? We show that the usual syntax guarantees that certain naturality equations from category theory are necessarily provable. At the same time, our categorical approach addresses an equational meaning of cut-elimination and asymmetrical interpretations of cut-free proofs. This viewpoint is connected to Reynolds' relational interpretation of parametricity ([27], [2]), and to the Kelly-Lambek-Mac LaneMints approach to coherence problems in category theory. 1 Introduction In the past several years, there has been renewed interest and research into the interconnections of proof theory, typed lambda calculus (as a functional programming paradigm) and category theory. Some of these connections can be surprisingly subtle. Here we a... - Theoretical Computer Science , 1995 "... . Lambda definability is characterized in categorical models of simply typed lambda calculus with type variables. A category-theoretic framework known as glueing or sconing is used to extend the Jung-Tiuryn characterization of lambda definability [JuT93], first to ccc models, and then to categor ..." Cited by 11 (0 self) Add to MetaCart . Lambda definability is characterized in categorical models of simply typed lambda calculus with type variables. A category-theoretic framework known as glueing or sconing is used to extend the Jung-Tiuryn characterization of lambda definability [JuT93], first to ccc models, and then to categorical models of the calculus with type variables. Logical relations are now a well-established tool for studying the semantics of various typed lambda calculi. The main lines of research are focused in two areas, the first of which strives for an understanding of Strachey's notion of parametric polymorphism. The main idea is that a parametricly polymorphic function acts independently from the types to which its type variables are instantiated, and that this uniformity may be captured by imposing a relational structure on the types [OHT93, MSd93, MaR91, Wad89, Rey83, Str67]. The other line of research concerns lambda definability and the full abstraction problem for various models of - Presented at Workshop on Issues in the Theory of Security (WITS’04 , 2004 "... Abstract. The paper investigates which of Shannon’s measures (entropy, conditional entropy, mutual information) is the right one for the task of quantifying information flow in a programming language. We examine earlier relevant contributions from Denning, McLean and Gray and we propose and motivate ..." Cited by 11 (0 self) Add to MetaCart Abstract. The paper investigates which of Shannon’s measures (entropy, conditional entropy, mutual information) is the right one for the task of quantifying information flow in a programming language. We examine earlier relevant contributions from Denning, McLean and Gray and we propose and motivate a specific quantitative definition of information flow. We prove results relating equivalence relations, interference of program variables, independence of random variables and the flow of confidential information. Finally, we show how, in our setting, Shannon’s Perfect Secrecy theorem provides a sufficient condition to determine whether a program leaks confidential information. 1 - In Dynamic Languages Symposium (DLS , 2007 "... Types are the central organizing principle of the theory of programming languages. Language features are manifestations of type structure. The syntax of a language is governed by the constructs that define its types, and its semantics is determined by the interactions among those constructs. The sou ..." Cited by 11 (4 self) Add to MetaCart Types are the central organizing principle of the theory of programming languages. Language features are manifestations of type structure. The syntax of a language is governed by the constructs that define its types, and its semantics is determined by the interactions among those constructs. The soundness of a language design—the absence of ill-defined programs— follows naturally. The purpose of this book is to explain this remark. A variety of programming language features are analyzed in the unifying framework of type theory. A language feature is defined by its statics, the rules governing the use of the feature in a program, and its dynamics, the rules defining how programs using this feature are to be executed. The concept of safety emerges as the coherence of the statics and the dynamics of a language. In this way we establish a foundation for the study of programming languages. But why these particular methods? Though it would require a book in itself to substantiate this assertion, the type-theoretic approach - JOURNAL OF AUTOMATED REASONING , 1999 "... Coloring terms (rippling) is a technique developed for inductive theorem proving which uses syntactic dierences of terms to guide the proof search. Annotations (colors) to symbol occurrences in terms are used to maintain this information. This technique has several advantages, e.g. it is highly go ..." Cited by 7 (5 self) Add to MetaCart Coloring terms (rippling) is a technique developed for inductive theorem proving which uses syntactic dierences of terms to guide the proof search. Annotations (colors) to symbol occurrences in terms are used to maintain this information. This technique has several advantages, e.g. it is highly goal oriented and involves little search. In this paper we give a general formalization of coloring terms in a higher-order setting. We introduce a simply-typed calculus with color annotations and present appropriate algorithms for the general, pre- and pattern unification problems. Our work is a formal basis to the implementation of rippling in a higher-order setting which is required e.g. in case of middle-out reasoning. Another application is in the construction of natural language semantics, where the color annotations rule out linguistically invalid readings that are possible using standard higher-order unification. , 1991 "... We present a category-theoretic framework for providing intensional semantics of programming languages and establishing connections between semantics given at different levels of intensional detail. We use a comonad to model an abstract notion of computation, and we obtain an intensional category fr ..." Cited by 6 (2 self) Add to MetaCart We present a category-theoretic framework for providing intensional semantics of programming languages and establishing connections between semantics given at different levels of intensional detail. We use a comonad to model an abstract notion of computation, and we obtain an intensional category from an extensional category by the co-Kleisli construction; thus, while an extensional morphism can be viewed as a function from values to values, an intensional morphism is akin to a function from computations to values. We state a simple category-theoretic result about cartesian closure. We then explore the particular example obtained by taking the extensional category to be Cont, the category of Scott domains with continuous functions as morphisms, with a computation represented as a non-decreasing sequence of values. We refer to morphisms in the resulting intensional category as algorithms. We show that the category Alg of Scott domains with algorithms as morphisms is cartesian closed. We... - Computer Languages , 1992 "... Abstract Increasingly sophisticated applications of static analysis make it important to precisely characterize the power of static analysis techniques. Sekar et al. recently studied the power of strictness analysis techniques and showed that strictness analysis is perfect up to variations in consta ..." Cited by 5 (0 self) Add to MetaCart Abstract Increasingly sophisticated applications of static analysis make it important to precisely characterize the power of static analysis techniques. Sekar et al. recently studied the power of strictness analysis techniques and showed that strictness analysis is perfect up to variations in constants. We generalize this approach to abstract interpretation in general by defining a notion of similarity semantics. This semantics associates to a program a collection of interpretations all of which are obtained by blurring the distinctions that a particular static analysis ignores. We define completeness with respect to similarity semantics and obtain two completeness results. For first-order languages, abstract interpretation is complete with respect to a standard similarity semantics provided the base abstract domain is linearly ordered. For typed higher-order languages, it is complete with respect a logical similarity semantics again under the condition of linearly ordered base abstract domain. 1
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=140102&sort=cite&start=10","timestamp":"2014-04-18T20:55:49Z","content_type":null,"content_length":"38797","record_id":"<urn:uuid:d1eea096-3bdc-4509-b5fb-1e3d5df8ff0e>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00307-ip-10-147-4-33.ec2.internal.warc.gz"}
Minimum spanning tree of a random graph up vote 9 down vote favorite Consider $n$ points arbitrarily located on the plane. Consider a random graph $G$ drawn from $G(n, \frac12)$ on these points (i.e. the Erdos-Renyi random graph where every edge is selected with probability $\frac12$). What is known about the geometry of the minimum spanning tree of such a graph? I am interested in pointers to any literature on this, but something like the following might be a concrete example: Thm. With high probability, the Minimum Spanning Tree has weight within a factor of $\alpha$ of the MST on the complete graph on the same points. random-graphs co.combinatorics 1 Related : mathoverflow.net/questions/38824 – David Speyer Sep 14 '12 at 18:13 My guess is that the size of the minimal spanning tree in your random graph ought to be fairly close (within a multiplicative constant) of the size of the euclidean minimal spanning tree, since only half the edges are missing. So you might start by looking at J. Michael Steele, "Growth Rates of Euclidean Minimal Spanning Trees with Power Weighted Edges". – Robert Young Sep 14 '12 at Thanks for the pointers! They seem to be only slightly different models (randomly placed points etc), but I will definitely take a look. – Pradipta Sep 16 '12 at 3:29 add comment Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook. Browse other questions tagged random-graphs co.combinatorics or ask your own question.
{"url":"http://mathoverflow.net/questions/107202/minimum-spanning-tree-of-a-random-graph","timestamp":"2014-04-20T11:32:45Z","content_type":null,"content_length":"49401","record_id":"<urn:uuid:799ce9f6-b0d0-451c-ab09-89c77ab93ff1>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00291-ip-10-147-4-33.ec2.internal.warc.gz"}
Root asymptotics of spectral polynomials for the Lamé operator "... The classical Heun equation has the form ..." - J. Math. Anal. Appl "... Dedicated to Professor Masaki Kashiwara on his sixtieth birthday Abstract. We obtain integral representations of solutions to special cases of the Fuchsian system of differential equations and Heun’s differential equation. In particular, we calculate the monodromy of solutions to the Fuchsian equati ..." Cited by 4 (2 self) Add to MetaCart Dedicated to Professor Masaki Kashiwara on his sixtieth birthday Abstract. We obtain integral representations of solutions to special cases of the Fuchsian system of differential equations and Heun’s differential equation. In particular, we calculate the monodromy of solutions to the Fuchsian equation that corresponds to Picard’s solution of the sixth Painlevé equation, and to Heun’s equation. 1. - J. Approx. Theory "... Abstract. We establish a hierarchy of weighted majorization relations for the singularities of generalized Lamé equations and the zeros of their Van Vleck and Heine-Stieltjes polynomials as well as for multiparameter spectral polynomials of higher Lamé operators. These relations translate into natur ..." Cited by 2 (1 self) Add to MetaCart Abstract. We establish a hierarchy of weighted majorization relations for the singularities of generalized Lamé equations and the zeros of their Van Vleck and Heine-Stieltjes polynomials as well as for multiparameter spectral polynomials of higher Lamé operators. These relations translate into natural dilation and subordination properties in the Choquet order for certain probability measures associated with the aforementioned polynomials. As a consequence we obtain new inequalities for the moments and logarithmic potentials of the corresponding root-counting measures and their weak- ∗ limits in the semi-classical and various thermodynamic asymptotic regimes. We also prove analogous results for systems of orthogonal polynomials such as Jacobi polynomials. 1. , 2007 "... Abstract. We review several results on the finite-gap potential and Heun’s differential equation, and we discuss relationships among the finite-gap potential, the WKB analysis and Heun’s differential equation. 1. ..." Cited by 1 (1 self) Add to MetaCart Abstract. We review several results on the finite-gap potential and Heun’s differential equation, and we discuss relationships among the finite-gap potential, the WKB analysis and Heun’s differential equation. 1. , 2009 "... Polynomial solutions to the generalized Lamé equation, the Stieltjes polynomials, and the associated Van Vleck polynomials have been studied since the 1830’s, beginning with Lamé in his studies of the Laplace equation on an ellipsoid, and in an ever widening variety of applications since. In this pa ..." Cited by 1 (0 self) Add to MetaCart Polynomial solutions to the generalized Lamé equation, the Stieltjes polynomials, and the associated Van Vleck polynomials have been studied since the 1830’s, beginning with Lamé in his studies of the Laplace equation on an ellipsoid, and in an ever widening variety of applications since. In this paper we show how the zeros of Stieltjes polynomials are distributed and present two new interlacing theorems. We arrange the Stieltjes polynomials according to their Van Vleck zeros and show, firstly, that the zeros of successive Stieltjes polynomials of the same degree interlace, and secondly, that the zeros of Stieltjes polynomials of successive degrees interlace. We use these results to deduce new asymptotic properties of Stieltjes and Van Vleck polynomials. We also show that no sequence of Stieltjes polynomials is orthogonal. "... Abstract. The well-known Heun equation has the form j Q(z) d2 ff d + P(z) + V (z) S(z) = 0, dz2 dz where Q(z) is a cubic complex polynomial, P(z) and V (z) are polynomials of degree at most 2 and 1 respectively. One of the classical problems about the Heun equation suggested by E. Heine and T. Stie ..." Add to MetaCart Abstract. The well-known Heun equation has the form j Q(z) d2 ff d + P(z) + V (z) S(z) = 0, dz2 dz where Q(z) is a cubic complex polynomial, P(z) and V (z) are polynomials of degree at most 2 and 1 respectively. One of the classical problems about the Heun equation suggested by E. Heine and T. Stieltjes in the late 19-th century is for a given positive integer n to find all possible polynomials V (z) such that the above equation has a polynomial solution S(z) of degree n. Below we prove a conjecture of the second author, see [17] claiming that the union of the roots of such V (z)’s for a given n tends when n → ∞ to a certain compact connecting the three roots of Q(z) which is given by a condition that a certain , 2011 "... We review properties of certain types of polynomial solutions of the Heun equation. Two aspects are particularly concerned, the interlacing property of spectral and Stieltjes polynomials in the case of real roots of these polynomials and asymptotic root distribution when complex roots are present. ..." Add to MetaCart We review properties of certain types of polynomial solutions of the Heun equation. Two aspects are particularly concerned, the interlacing property of spectral and Stieltjes polynomials in the case of real roots of these polynomials and asymptotic root distribution when complex roots are present. "... Aspects of the asymptotic theory of linear ordinary differential equations ..."
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=3795088","timestamp":"2014-04-17T07:27:59Z","content_type":null,"content_length":"27388","record_id":"<urn:uuid:69471f7a-f820-411e-b1fe-ef18128ac707>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00025-ip-10-147-4-33.ec2.internal.warc.gz"}
Another Calculus Problem. e ∫ (x^2-1)/(x) dx= 1 A) e- (1/e) B) e^2 - e C) (e^2/2)-e+ (1/2) D) e^2-2 E) (e^2/2)-(3/2) frozenflames First find the antiderivative of $\int\frac{x^2-1}{x}dx$ Which is, $\int x-\frac{1}{x}dx$ Which is, $\frac{x^2}{2}-\ln |x|$, But it is taken from $[1,e]$ Which by the fundamental theorem is, $\frac{e^2}{2}-1-\frac{1}{2}=\frac{e^2}{2}-3/2$
{"url":"http://mathhelpforum.com/calculus/1970-another-calculus-problem.html","timestamp":"2014-04-17T12:44:12Z","content_type":null,"content_length":"32934","record_id":"<urn:uuid:2c36f975-0ca5-4c57-b475-88b3d00d2b56>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00591-ip-10-147-4-33.ec2.internal.warc.gz"}
Homework Help Post a New Question | Current Questions 11th grade chemistry needs help Indicate your subject in the "School Subject" box, so those with expertise in the area will respond to the question. Thursday, December 2, 2010 at 5:47pm 11th grade physics needs help Indicate your subject in the "School Subject" box, so those with expertise in the area will respond to the question. Thursday, December 2, 2010 at 5:29pm 11th grade Centripetal Acceleration = v^2/r = (circumference/time)^2/r = (2*pi*r/t)^2)/r = ((2*3.14*50/14.3)^2)/50 = 9.64 m/s^2 Tuesday, November 30, 2010 at 12:58am 11th grade The ratio of H to O (2 to 1) must stay the same if the only product is H2O, regardless of the amount of H2O produced. Tuesday, November 30, 2010 at 12:32am 11th grade Newton's Second Law: F(or weight) = mass x acceleration. so, a = F/mass = 3.45/1.5 You do the math and don't forget the units! Tuesday, November 30, 2010 at 12:18am 11th grade physics a = change in velocity / change in time so.. a = 8 - 32 / 4 a = -6m/s^2 Monday, November 29, 2010 at 9:32pm 11th grade A 625-kg racing car completes one lap in 14.3 s around a circular track with a radius of 50.0 m. The car moves at constant speed. (a) What is the acceleration of the car? Monday, November 29, 2010 at 9:01pm 11th grade What did the volume of water double FROM? Is the water gaseous? Is it at the same T and P as the reactants? There must be more information that goes with this question. Monday, November 29, 2010 at 5:54pm 11th grade If the volume of water produced during the reaction doubled, what would happen to the ratio of hydrogen to oxygen in the following equation: 2 H2+ O2 --> 2 H2O Monday, November 29, 2010 at 4:27pm 11th grade which ons is stronger 1)A solid cylinder of 2)A hollow cylinser of iron 3) A cylinder made up of slits of iron Sunday, November 28, 2010 at 11:19am 11th grade A 16.0 kg box is released on a 39.0° incline and accelerates down the incline at 0.267 m/s2. Find the friction force impeding its motion Saturday, November 27, 2010 at 9:21pm 11th grade accounts cash book find journal entries for. (1) discounted bill of rs .1000 at1% by bank (2) interest allowed by bank rs.5000 Monday, November 22, 2010 at 11:18pm College Algebra this is 11th grade highschool math. if you dont know this drop out now before you waste aany more of your parents' money Wednesday, November 17, 2010 at 4:44pm 11th grade The following data are weights of hummingbirds (in grams) randomly selected from a population. Which histogram best portrays this data set? (You may want to use your TI-83 to help you.) Wednesday, November 17, 2010 at 1:16pm 11th grade avgforce*time= mass*changevelocity This is the force the wall exerts on her, reverse the direction for her force. Tuesday, November 16, 2010 at 4:09pm 11th grade An 88g arrow shot with Force=110N. Distance traveled is 78cm. What is speed of arrow? Monday, November 8, 2010 at 5:29pm 11th grade physics how high above the surface of the Earth would an object need to be for its weight to be half its weight on the surface? Monday, November 8, 2010 at 1:09pm 11th grade physics if action and reaction are always equal and opposite, why dont they always cancel each other and leave no force for acceleration of the body? Monday, November 8, 2010 at 1:20am 11th grade physics find the value of 60 joule per minute on a system which has 10 cm,100gm, and 1 minute as fundamental units. Monday, November 8, 2010 at 1:13am 11th grade Math 37t = 54t-162 -17t = -162 t = 9.52941165 Thursday, November 4, 2010 at 11:38pm 11th Grade AP English i have to wright an essay tomorro comparing The Great Gatsby and The Color of Water. Can anyone help me get some ideas/topics on what i should wright about? Thursday, November 4, 2010 at 1:40pm 11th grade Math Distance = rate * time Let t = time for first car 37t = 54(t-3) Solve for t. Thursday, November 4, 2010 at 1:34pm 11th grade physics You place a box with a mass of 20kg on an inclined plane that makes a 35.0° angle with the horizontal. What is the component of the gravitational force acting down the inclined plane? Tuesday, November 2, 2010 at 3:36pm 11th grade Sodium Vapor lamps emit a characteristic yellow light . What can we deduce about sodium atoms, based on this observation? Monday, November 1, 2010 at 6:59pm 11th grade Please post the correct School Subject so a teacher in that field will read your post. Sra Sunday, October 31, 2010 at 7:18pm 11th grade volatile liquids with lower boiling points give better results than those with higher boiling points. suggest a reason for this. Sunday, October 31, 2010 at 4:30pm 11th grade Chemistry the compound adrenaline contains 56.79% c,6.56% h,28.27% and 8.28% n by the mass what is the empiricol formula Thursday, October 28, 2010 at 12:35pm 11th grade Algebra The answer to this question is -3. The equations work out to be -3n-7=-2n-4, and then you can just work it out for yourself. Thursday, October 21, 2010 at 12:03pm 11th grade Use the following functions to solve. f(x) = x2 + x + 1; g(x) = 5 - 2x; h(x) = -x2 Solve f(g(-1)) Wednesday, October 20, 2010 at 12:08pm Math, not "11th grade" Please type your subject in the School Subject box. Any other words, including obscure abbreviations, are likely to delay responses from a teacher who knows that subject well. Tuesday, October 19, 2010 at 8:41am 11th grade a meter stick of wight 0.8 n is pivoted at 40 cm mark at which mark 1n load should be located to balence the sticke Tuesday, October 19, 2010 at 8:14am 11th grade science needs help Indicate your subject in the "School Subject" box, so those with expertise in the area will respond to the question. Monday, October 18, 2010 at 3:25pm 11th grade a 65kg diver jumps off of a 10m tower. find the diver´s velocity when he hits the water? Sunday, October 17, 2010 at 11:51pm 11th grade how do bacterial plasmids relate to resistance and how do they relate to genetic engineering. i am very syuck and i need help please Sunday, October 17, 2010 at 10:45pm 11th grade When the shuttle bus comes to a sudden stop to avoid hitting a dog, it decelerates uniformly at 5.3 m/s2 as it slows from 9.5 m/s to 0 m/s. Find the time interval of acceleration for the bus. Answer in units of s Monday, October 11, 2010 at 11:33am 11th grade math secondary largest possible answre is Answer: x⋅(38-2⋅x)2 Answer: 8.06181572077 Friday, October 8, 2010 at 6:38pm 11th grade math had this same probem for college pre-calculue couldnt solve B any other way but guess and check/logic x=4.5 Friday, October 8, 2010 at 6:27pm 11th grade physics? Indicate your subject in the "School Subject" box, so those with expertise in the area will respond to the question. Monday, October 4, 2010 at 4:28pm 11th grade Algebra A number is multiplied by -3 and then this product is decreased by 7. The result is 4 less than twice the opposite of the number. what is the number? Sunday, October 3, 2010 at 8:42pm 11th grade How would I solve the following equation for x? 2 sin^2(x) + 3 tanx secx = 2 I've tried the problems from different approaches, but couldn't come up with a solution. Could you please provide your thought process. It would be greatly appreciated. Thanks! Friday, October 1, 2010 at 4:45pm 11th grade How many uranium atoms are there in 9.5 g of pure uranium? The mass of one uranium atom is 4 × 10−26 kg Thursday, September 30, 2010 at 9:13pm 11th grade A roller coaster moves horizontally, then travels 45 m at an angle of 30 degrees above the horizontal. What is its displacement from its starting point? Wednesday, September 29, 2010 at 8:12pm 11th grade sound travels at 350 m/s in warm air. "If you hear the echo in 10 seconds, how far away is the reflecting surface? Tuesday, September 28, 2010 at 5:31pm 11th grade VRMS =sqrt[(3RT)/M] and The most probable speed is 81.6% of the rms speed, and the average speed 92.1% (distribution of speeds). Take care with units of M and R Friday, September 24, 2010 at 11:25am Math, not "11th grade" Please type your subject in the School Subject box. Any other words, including obscure abbreviations, are likely to delay responses from a teacher who knows that subject well. Friday, September 24, 2010 at 8:39am 11th grade An automobile with an initial speed of 4.56m/s accelerates uniformly at the rate of 2.4m/s2. Find the final speed of the car after 4.4 s. Answer in units of m/s. Tuesday, September 21, 2010 at 5:41pm 11th grade When the shuttle bus comes to a sudden stop to avoid hitting a dog, it decelerates uniformly at 3.8 m/s2 as it slows from 8.8 m/s to 0 m/s. Find the time interval of acceleration for the bus. Answer in units of s. Tuesday, September 21, 2010 at 5:36pm Math, not "11th grade" Assistance needed. Please type your subject in the School Subject box. Any other words, including obscure abbreviations, are likely to delay responses from a teacher who knows that subject well. Monday, September 20, 2010 at 7:06pm 11th grade When the shuttle bus comes to a sudden stop to avoid hitting a dog, it decelerates uniformly at 3.8 m/s2 as it slows from 8.8 m/s to 0 m/s. Find the time interval of acceleration for the bus. Answer in units of s. Monday, September 20, 2010 at 6:32pm Math needs h elp The School Subject is not 11th grade. If it is Math, please state that. Sra Sunday, September 19, 2010 at 6:03pm 11th grade the line with the equation x+y=3 is graphed on the same xy-plane as the parabola with vertex (0,0) and focus (0,3). What is the point of intersection of the two graphs? Sunday, September 19, 2010 at 5:54pm 11th grade the line with the equation x+y=3 is graphed on the same xy-plane as the parabola with vertex (0,0) and focus (0,3). What is the point of intersection of the two graphs? Sunday, September 19, 2010 at 5:53pm 11th grade for example in #1. 2x-1=y if y=0; 2x-1=0 then x=1/2 if x=0; 2(0)-1=y then y=-1 on the graph plot the points (1/2,0) and (0,-1) draw a line containing those points.. Thursday, September 16, 2010 at 9:08pm 11th grade need help with these problems can anyone help me- 2x-1=y 2x=y-6 3y=-6x x+4y=12 Thank you Thursday, September 16, 2010 at 8:47pm 11th grade science (help needed) Indicate your subject in the "School Subject" box, so those with expertise in the area will respond to the question. Thursday, September 16, 2010 at 1:21pm 11th grade math 4(x^2-4x+3) You can factor this further by seeing which 2 numbers subtract to 4 (your middle term) and add to 3 (your last term). Post if you want it checked/need help. Tuesday, September 14, 2010 at 10:49pm 11th grade Algebra II 2x-5y+z=23 4x+3y-5z=1 3x-3y-4z=9 Tuesday, September 14, 2010 at 6:56pm 11th grade Angle QRS is a straight angle. If a=2c, c=2b, and b=d/3, what are the values of a,b,c,and d? a+b+c+d = 180 Tuesday, September 14, 2010 at 6:35pm 11th grade I think it says what do you do after gym class (without the question mark)...Entonces, voy a la mi clase de matematicas. (Then, I go to math class.) Thursday, September 9, 2010 at 10:41pm 11th grade Lo que se hace despues de la clase de educacion fisica. Thursday, September 9, 2010 at 10:31pm 11th grade V = Ab*h/3. Ab = area of base. Ab = 10cm * 6cm = 60cm^2, V = 60cm^2 * 4cm /3 = 80cm^3. Sunday, September 5, 2010 at 9:12pm 11th grade math Let x = one angle, then 3x - 8 = other angle and they add to 180º. Solve for x. Tuesday, August 31, 2010 at 12:08pm 11th grade You are traveling at a rate of 50 m/s. If you accelerate at a rate of -15 m/s2, what will be your velocity after 3 seconds? Wednesday, August 25, 2010 at 7:46pm 11th grade look up its density: volume= mass/density Monday, August 23, 2010 at 5:10pm 11th grade 192 m = 0.192 km tan 8.4 = 0.192/d Sunday, August 22, 2010 at 7:01pm 11th grade in the distance you see an arch 192m. you estimate your line of sight with the top of the arch to be 8.4 degrees above the horizontal. appox how far in kilometers are you from the base of the arch? Sunday, August 22, 2010 at 6:57pm 11th grade Maths Can someone help me understand what that even function & odd function is, in trignometry Sunday, August 8, 2010 at 8:49am 11th Grade Physics A stone falls from a tower and travels 100m in the last second before it reaches the ground. Find the height of the tower Thursday, August 5, 2010 at 12:37pm 11th Grade Physics If a particle's motion is described as x = ut+1/2at^2, where x is position, t is time and u and a, are constants. Show that the acceleration of the particle is constant Thursday, August 5, 2010 at 12:35pm 11th grade Calculate the molecular mass of the nonionic solutes: 64.3 g of solute in 390 g of water raises the boiling point to 100.680 degrees C. Wednesday, August 4, 2010 at 3:03am 11th Grade Physics The maximum speed of a bus is 72km/h, it accelerates uniformly at the rate of 1m/s^2 and retards uniformly at the rate of 4m/s^2. Find the least time in which it can do a journey of 1km ? Tuesday, July 27, 2010 at 9:02am 11th grade Physics @ Sonia i'm not being rude it's ur thinking and this forum is to ask questions n not to judge someone's behaviour or emotions Sunday, July 25, 2010 at 2:32am 11th grade Physics Difference between Atomic spectrum and emission spectrum. First give me the clear cut definition of both and then the difference!!! Saturday, July 24, 2010 at 2:28pm Maths 11th Grade 2π/5 radians = 72° so let one angle be x then the other is x+72 we know x + x+72 = 90 2x = 18 x = 9 So the two angles are 9° and 81° Wednesday, July 21, 2010 at 1:59pm Maths 11th Grade The difference between the two acute angles of a right-angled triangle is 2(pie)/5 radians. Express the angles in degrees?? How to Solve it Wednesday, July 21, 2010 at 1:25pm 11th grade I wonder if it dissolved? Stirring increases the rate of dissolving, pretty critical if you are measuring temp. Heat is lost rapidly, so if it is dissolving slowly, you won't see the max temp peak. Sunday, July 18, 2010 at 1:47pm 11th grade in the heat of soultion lab what would happen to the delta T if the soultion CaCl2 was not stireed with the water? Sunday, July 18, 2010 at 1:20pm 11th grade,physical science what are the advantages and disadvantages of chlorine? what is chlorine? Thursday, July 15, 2010 at 3:54am Grade 11th Physics I have already shown you how to do the three problems. Each requires one step. I will be happy to verify that you have followed directions Thursday, July 8, 2010 at 6:59pm Grade 11th Physics Sir I want the answers so i could match my answers with yours Thursday, July 8, 2010 at 8:36am Grade 11th Maths This problem is related to Chapter-Sets. Please solve the question using x method. x method means x ∈ A ∩ B Q. If AUB=Ø, then prove that A=Ø,B=Ø Tuesday, June 29, 2010 at 7:12am Grade 11th Maths This problem is related to Chapter-Sets. Please solve the question using x method. x method means x ∈ A ∩ B Q. If A-B=A then show that A∩B=Ø Tuesday, June 29, 2010 at 7:11am Grade 11th Maths This problem is related to Chapter-Sets. Please solve the question using x method. x method means x ∈ A ∩ B Q. If A⊂B, then prove that B'⊂A' Tuesday, June 29, 2010 at 7:09am Grade 11th Chemistry How many molecules of water and oxygen atoms are present in 0.9g of Water. this numerical is based on Mole Concept. Kindly solve it!!! Monday, June 28, 2010 at 1:55pm 11th Grade Physics I want two examples based upon law of conservation of charge i know the definition just want two examples!!! Friday, June 25, 2010 at 7:39am 11th grade i realy dont understand how they must of felt when the pirates arrived and how they felt towards their diffrence in manners. Sunday, June 20, 2010 at 3:01pm science? needs h elp The School Subject is NOT 11th grade. Please list carefully: science, chemistry, etc. whatever it may be to get the proper volunteer teacher. Sra Wednesday, June 16, 2010 at 1:06pm 11th grade Calculated how many grams of methane (CH4) are in a sealed 80 mL flask at room temperature (22ºC) and 780 mm Hg of pressure. Wednesday, June 16, 2010 at 11:09am 11th grade chemistry How many grams are in 2.7 mol of table salt, NaCl (molar mass of NaCl=58.44 g/mol)? Wednesday, June 2, 2010 at 11:03am 11th grade math Divide the area by the length. width = (x^3 + 12x^2 +47x +60)/(x + 5) That ratio can be simplified by factoring x+5 out of the numerator, and cancelling it with the deominator. You will be left with x^2 + 7x + 12 for the length. Tuesday, May 25, 2010 at 2:16am 11th grade write an expression that represents the width of a rectangle with length x+5 anmd area x to the 3 power + 12x to the 2nd power+ 47x+60 Monday, May 24, 2010 at 11:38pm Physics 11th grade The Io i used was (1*10^-12) for A and (1*10^-10) for B. That was according to my physics book. I got A correct with 630.9573445 W/m^2. Friday, May 21, 2010 at 8:53pm Physics 11th grade I get 6309 W/m^2 for A and 10^-10 W/m^2 for B. Do you have the correct Io ? Friday, May 21, 2010 at 8:47pm 11th grade English Write your own journal entry from the perspective of a member of Bradford's colony. What do you think of these unruly pirates? What shocks you the most about these surly characters? How do the manners of the pirates differ from yours? Sunday, May 16, 2010 at 3:32am 11th grade Physics A student makes a magnet by winding wire around a nail and connecting the wire to a battery. Which end of the nail will be the north pole? Wednesday, May 5, 2010 at 8:38am 11th grade If it was 1 gram it would be 540 calories, but you have 500 g so how many calories? Tuesday, May 4, 2010 at 11:17am 11th grade The latent heat of vaporization of water is 540 calories/gram. How many calories are required to completely vaporize 500 grams of water? Tuesday, May 4, 2010 at 11:08am 11th grade Algebra 2 Still the same guy, just explaining why: 9 is the radius of the pizza (diameter/2). Area of a circle is pii*r^2. Pizza was divided into 10 slices so 1 slice is 1/10 of the total area, thereby 3/10 is the area of 3 slices from the whole pizza. This is like 7th grade stuff tho tbh. Monday, April 26, 2010 at 12:31pm 11th grade Friday, April 23, 2010 at 12:21am 11th grade chemistry If 4.0g of H2 are made to react with excess CO, how many grams of CH3OH can theoretically be produced according to CO+2H2 yields CH3OH? Tuesday, April 20, 2010 at 9:16am Pages: <<Prev | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | Next>>
{"url":"http://www.jiskha.com/11th_grade/?page=7","timestamp":"2014-04-19T12:48:49Z","content_type":null,"content_length":"32009","record_id":"<urn:uuid:5147501b-4e3b-46ad-ae50-4a07d917bc2b>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00577-ip-10-147-4-33.ec2.internal.warc.gz"}
Common Core Standards : CCSS.Math.Content.HSA-APR.D.6 Common Core Standards: Math 6. Rewrite simple rational expressions in different forms; write ^a(x)⁄[b(x)] in the form q(x) + ^r(x)⁄[b(x)], where a(x), b(x), q(x), and r(x) are polynomials with the degree of r(x) less than the degree of b(x), using inspection, long division, or, for the more complicated examples, a computer algebra system. This standard is like a Rube Goldberg machine. It looks far more complicated than it really is. It all boils down to dividing polynomials and expressing the answer properly. Students should have already touched division and fractions at least a little, having been through the third grade and all. In a way, it's just a continuation of the Remainder Theorem, so we recommend covering that first. Students should already know how to divide polynomials by factoring or long division. As with many divisions, they won't all be perfect and a remainder will be left over. Instead of just writing what the remainder is, we now expect students to actually do something with it. Let's say we're dividing a(x) by b(x), and our answer is q(x) with remainder r(x). Just like the Remainder Theorem, if r(x) = 0, then b(x) is a factor of a(x). We know that already. But what if b(x) doesn't divide a(x) with remainder 0? Well, just like simplifying ^13⁄[4] to 3 with a remainder of 1, or 3¼, we can write ^a(x)⁄[b(x)] as q(x) with remainder r(x), or ^r(x)⁄[b(x)]. Just like a remainder of 1 divided by 4 means ¼, a remainder of r(x) divided by b(x) will give us ^r(x)⁄[b(x)]. All the talk about the degree of r(x) being less than the degree of b(x) just means that r(x) should be "smaller" than b(x). It wouldn't make sense to split ^13⁄[4] into 2 ^5⁄[4] because we can still divide 5 by 4. It's the same idea, only polynomial-style. We suggest relating these polynomial quotients to fractions of integers so that students don't feel overwhelmed. It's understandable for them to be confused when we throw seven different functions at them, but they'll be a lot more receptive when they're working with concepts they already know. Students should also not be afraid of the big bad long division monster. Often, factoring is near impossible to figure out when remainders are involved and there are times when synthetic division just won't cut it. Students should give in and embrace long division and their lives will be better for it.
{"url":"http://www.shmoop.com/common-core-standards/ccss-hs-a-apr-6.html","timestamp":"2014-04-16T04:13:51Z","content_type":null,"content_length":"47101","record_id":"<urn:uuid:795fc79e-103d-44f4-b7b3-bd57c8806c37>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00663-ip-10-147-4-33.ec2.internal.warc.gz"}
has a book on the topic of robotic manipulator on the International Space Station is operated by controlling the angles of its joints. Calculating the final position of the astronaut at the end of the arm requires repeated use of the trigonometric functions of those angles. All of the trigonometric functions of an angle θ can be constructed geometrically in terms of a unit circle centered at O. Trigonometry (from Greek trigōnon "triangle" + metron "measure")[1] is a branch of mathematics that deals with triangles, particularly those plane triangles in which one angle has 90 degrees (right triangles). Trigonometry deals with relationships between the sides and the angles of triangles and with the trigonometric functions, which describe those relationships. Trigonometry has applications in both pure mathematics and in applied mathematics, where it is essential in many branches of science and technology. It is usually taught in secondary schools either as a separate course or as part of a precalculus course. Trigonometry is informally called “trig.” A branch of trigonometry, called spherical trigonometry, studies triangles on spheres, and is important in astronomy and navigation. Table of Trigonometry, 1728 Main article: History of trigonometry Trigonometry was probably developed for use in sailing as a navigation method used with astronomy.[2] The origins of trigonometry can be traced to the civilizations of ancient Egypt, Mesopotamia and the Indus Valley, more than 4000 years ago.[citation needed] The common practice of measuring angles in degrees, minutes and seconds comes from the Babylonian's base sixty system of numeration. The Sulba Sutras written in India, between 800 BC and 500 BC, correctly computes the sine of (=45°) as in a procedure for "circling the square" (i.e., constructing the inscribed circle).[citation needed The first recorded use of trigonometry came from the Hellenistic mathematician Hipparchus[1] circa 150 BC, who compiled a trigonometric table using the sine for solving triangles. Ptolemy further developed trigonometric calculations circa 100 AD. The ancient Sinhalese in Sri Lanka, when constructing reservoirs in the Anuradhapura kingdom, used trigonometry to calculate the gradient of the water flow. Archeological research also provides evidence of trigonometry used in other unique hydrological structures dating back to 4 BC.[3] The Indian mathematician Aryabhata in 499, gave tables of half chords which are now known as sine tables, along with cosine tables. He used zya for sine, kotizya for cosine, and otkram zya for inverse sine, and also introduced the versine. Another Indian mathematician, Brahmagupta in 628, used an interpolation formula to compute values of sines, up to the second order of the Newton- Stirling interpolation formula. In the 10th century, the Persian mathematician and astronomer Abul Wáfa introduced the tangent function and improved methods of calculating trigonometry tables. He established the angle addition identities, e.g. sin (a + b), and discovered the sine formula for spherical geometry: Also in the late 10th and early 11th centuries, the Egyptian astronomer Ibn Yunus performed many careful trigonometric calculations and demonstrated the formula Indian mathematicians were the pioneers of variable computations algebra for use in astronomical calculations along with trigonometry. Lagadha (circa 1350-1200 BC) is the first person thought to have used geometry and trigonometry for astronomy, in his Vedanga Jyotisha. Persian mathematician Omar Khayyám (1048-1131) combined trigonometry and approximation theory to provide methods of solving algebraic equations by geometrical means. Khayyam solved the cubic equation x3 + 200x = 20x2 + 2000 and found a positive root of this cubic by considering the intersection of a rectangular hyperbola and a circle. An approximate numerical solution was then found by interpolation in trigonometric tables. Detailed methods for constructing a table of sines for any angle were given by the Indian mathematician Bhaskara in 1150, along with some sine and cosine formulae. Bhaskara also developed spherical The 13th century Persian mathematician Nasir al-Din Tusi, along with Bhaskara, was probably the first to treat trigonometry as a distinct mathematical discipline. Nasir al-Din Tusi in his Treatise on the Quadrilateral was the first to list the six distinct cases of a right angled triangle in spherical trigonometry. In the 14th century, Persian mathematician al-Kashi and Timurid mathematician Ulugh Beg (grandson of Timur) produced tables of trigonometric functions as part of their studies of astronomy. The mathematician Bartholemaeus Pitiscus published an influential work on trigonometry in 1595 which may have coined the word "trigonometry". In this right triangle: sin A = a/c; cos A = b/c; tan A = a/b. If one angle of a right triangle is 90 degrees and one of the other angles is known, the third is thereby fixed, because the three angles of any triangle add up to 180 degrees. The two acute angles therefore add up to 90 degrees: they are complementary angles. The shape of a right triangle is completely determined, up to similarity, by the angles. This means that once one of the other angles is known, the ratios of the various sides are always the same regardless of the overall size of the triangle. These ratios are given by the following trigonometric functions of the known angle A, where a, b and c refer to the lengths of the sides in the accompanying figure: • The sine function (sin), defined as the ratio of the side opposite the angle to the hypotenuse. • The cosine function (cos), defined as the ratio of the adjacent leg to the hypotenuse. • The tangent function (tan), defined as the ratio of the opposite leg to the adjacent leg. The hypotenuse is the side opposite to the 90 degree angle in a right triangle; it is the longest side of the triangle, and one of the two sides adjacent to angle A. The adjacent leg is the other side that is adjacent to angle A. The opposite side is the side that is opposite to angle A. The terms perpendicular and base are sometimes used for the opposite and adjacent sides respectively. Many people find it easy to remember what sides of the right triangle are equal to sine, cosine, or tangent, by memorizing the word SOH-CAH-TOA (see below under Mnemonics). The reciprocals of these functions are named the cosecant (csc or cosec), secant (sec) and cotangent (cot), respectively. The inverse functions are called the arcsine, arccosine, and arctangent, respectively. There are arithmetic relations between these functions, which are known as trigonometric identities. With these functions one can answer virtually all questions about arbitrary triangles by using the law of sines and the law of cosines. These laws can be used to compute the remaining angles and sides of any triangle as soon as two sides and an angle or two angles and a side or three sides are known. These laws are useful in all branches of geometry, since every polygon may be described as a finite combination of triangles. Extending the definitions Graphs of the functions sin(x) and cos(x), where the angle x is measured in radians. Graphing process of y = sin(x) using a unit circle. Graphing process of y = tan(x) using a unit circle. Graphing process of y = csc(x) using a unit circle. The above definitions apply to angles between 0 and 90 degrees (0 and π/2 radians) only. Using the unit circle, one can extend them to all positive and negative arguments (see trigonometric function ). The trigonometric functions are periodic, with a period of 360 degrees or 2π radians. That means their values repeat at those intervals. The trigonometric functions can be defined in other ways besides the geometrical definitions above, using tools from calculus and infinite series. With these definitions the trigonometric functions can be defined for complex numbers. The complex function cis is particularly useful See Euler's and De Moivre's formulas. Students often use mnemonics to remember facts and relationships in trigonometry. For example, the sine, cosine, and tangent ratios in a right triangle can be remembered by representing them as strings of letters, as in SOH-CAH-TOA. Sine = Opposite ÷ Hypotenuse Cosine = Adjacent ÷ Hypotenuse Tangent = Opposite ÷ Adjacent Alternatively, one can devise sentences which consist of words beginning with the letters to be remembered. For example, to recall that Tan = Opposite/Adjacent, the letters T-O-A must be remembered. Any memorable phrase constructed of words beginning with the letters T-O-A will serve. Another type of mnemonic describes facts in a simple, memorable way, such as "Plus to the right, minus to the left; positive height, negative depth," which refers to trigonometric functions generated by a revolving line. Calculating trigonometric functions Main article: Generating trigonometric tables Trigonometric functions were among the earliest uses for mathematical tables. Such tables were incorporated into mathematics textbooks and students were taught to look up values and how to interpolate between the values listed to get higher accuracy. Slide rules had special scales for trigonometric functions. Today scientific calculators have buttons for calculating the main trigonometric functions (sin, cos, tan and sometimes cis) and their inverses. Most allow a choice of angle measurement methods, degrees, radians and, sometimes, Grad. Most computer programming languages provide function libraries that include the trigonometric functions. The floating point unit hardware incorporated into the microprocessor chips used in most personal computers have built in instructions for calculating trigonometric functions. Applications of trigonometry Main article: Uses of trigonometry There are an enormous number of applications of trigonometry and trigonometric functions. For instance, the technique of triangulation is used in astronomy to measure the distance to nearby stars, in geography to measure distances between landmarks, and in satellite navigation systems. The sine and cosine functions are fundamental to the theory of periodic functions such as those that describe sound and light waves. Fields which make use of trigonometry or trigonometric functions include astronomy (especially, for locating the apparent positions of celestial objects, in which spherical trigonometry is essential) and hence navigation (on the oceans, in aircraft, and in space), music theory, acoustics, optics, analysis of financial markets, electronics, probability theory, statistics, biology, medical imaging (CAT scans and ultrasound), pharmacy, chemistry, number theory (and hence cryptology), seismology, meteorology, oceanography, many physical sciences, land surveying and geodesy, architecture, phonetics, economics, electrical engineering, mechanical engineering, civil engineering, computer graphics, cartography, crystallography and game development. like this are used to measure the angle of the sun or stars with respect to the horizon. Using trigonometry and a marine chronometer , the position of the ship can then be determined from several such measurements. Common formulae Main article: Trigonometric identity Main article: Trigonometric function Certain equations involving trigonometric functions are true for all angles and are known as trigonometric identities. Many express important geometric relationships. For example, the Pythagorean identities are an expression of the Pythagorean Theorem. Here are some of the more commonly used identities, as well as the most important formulae connecting angles and sides of an arbitrary triangle. For more identities see trigonometric identity. Trigonometric identities Inverse functions Further reading List of identities Exact constants Generating trigonometric tables Law of sines Law of cosines Law of tangents Pythagorean theorem The Trigonometric integral Trigonometric substitution Integrals of functions Integrals of inverses Pythagorean identities Sum and product identities Detailed, diagramed proofs of the first two of these formulas are available for download as a four-page PDF document at Image:Sine Cos Proofs.pdf. Half-angle identities Note that is correct, it means it may be either one, depending on the value of A/2. Stereographic ( or parametric ) identities where . Triangle identities Laws of Sines and Cosines In the following identities, A, B and C are the angles of a triangle and a, b and c are the lengths of sides of the triangle opposite the respective angles. Law of sines The law of sines (also know as the "sine rule") for an arbitrary triangle states: where R is the radius of the circumcircle of the triangle. Law of cosines The law of cosines (also known as the cosine formula, or the "cos rule") is an extension of the Pythagorean theorem to arbitrary triangles: or equivalently: Law of tangents The law of tangents: See also External links Find more about Trigonometry on Wikipedia's sister projects: Dictionary definitions Textbooks Quotations Source texts Images and media News stories Learning resources v Major fields of Logic · Set theory · Category theory · Algebra (elementary – linear – abstract) · Discrete mathematics · Number theory · Analysis · Geometry · Trigonometry · Topology · Applied mathematics · Probability · Statistics · Mathematical physics Hidden categories: All articles with unsourced statements Articles with unsourced statements since February 2007 Articles with unsourced statements since March 2008 Related word on this page Related Shopping on this page
{"url":"http://wikipedia.atpedia.com/en/articles/t/r/i/Trigonometry.html","timestamp":"2014-04-16T04:12:55Z","content_type":null,"content_length":"67454","record_id":"<urn:uuid:2758d97b-ab63-4e92-af11-69addbba9cbb>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00464-ip-10-147-4-33.ec2.internal.warc.gz"}
Definability of the direct limit up vote 7 down vote favorite Suppose $\kappa$ is a measurable cardinal and $U$ is a normal measure on $\kappa$. $M_1$ is the ultrapower of the universe $V$ constructed form $U$,$j_{01}$ is the induced elementary embedding, $M_2$ is the ultrapower of $M_1$ constructed from $j_{01}(U)$...We can iterate this process. We know $M_0,M_1,M_2,...$ and $j_{01},j_{02},j_{12},..$ are all classes of the universe, i.e. they are all definable. Also, they form a direct system, so we can define the direct limit $(M_{\omega},E_{\omega})$. $M_{\omega}$ and $E_{\omega}$ are both subset of the universe $V$, but my question is why they are definable subset of the universe? add comment 1 Answer active oldest votes There are in fact many different definable ways to represent the direct limit. One particularly concrete representation of the direct limit is to use maximal threads, that is, maximal sequences $\langle x_i \mid j\leq i\lt\omega\rangle$, where $x_{i+1}=j_{i,i+1}(x_i)$, where $j_{i,i+1}:M_i\to M_{i+1}$ is the ultrapower of $M_i$ by $U_i=j_{0,i}(U)$, a normal measure on $ \kappa_i$, where $j_{i,j}:M_i\to M_j$ is the corresponding composition of these embeddings. One then defines the $\in$ relation in $M_\omega$ by consulting the common coordinates, and this is well-defined. Alternatively, one may take the disjoint union of the $M_i$, and declare that any two such points are equivalent when they map to the same point in some later $M_i$. And there are many other representations of the direct limit, and these may all be defined by defining the constituent elements of the system used to define the direct limit, often taking the quotient by an equivalence relation, which is definable from the system and the measure. up vote 5 The point is that any of these concrete representations of the direct limit can be defined from the measure $U$. With $U$ as a parameter, one may define any of the classes $M_i$ and also down vote the embeddings $j_{i,j}:M_i\to M_j$, and furthermore, we may do so uniformly in $i$ and $j$. Thus, the point is that not only are the various $M_i$ and embeddings $j_{i,j}$ definable accepted individually, but uniformly in the parameters $i$, $j$, and this is what it takes to define the direct limit using any of the usual representations of the direct limit. In short, we use the fact that we can define the entire system of embeddings $j_{i,j}:M_i\to M_j$ uniformly in order to know that we can build a representation of the direct limit of that system. But what is more, since the direct limit is well-founded, there is in fact a canonical way to represent the direct limit: the Mostowski collapse of your favorite representation. This provides $\langle M_\omega,{\in}\rangle$ as a transitive model of set theory, using the actual $\in$ relation. And the result will be a proper transitive class $M_\omega\subset V$, definable from $U$ as a parameter, with definable class embeddings $j_{i,\omega}:M_i\to M_\omega$. Thanks for your kind answer. If the entire system $<M_i|i<\omega>$ and $<j_{i,j}|i<j<\omega>$ are definable, then pick any representation of the direct limit, it is definable. But I still not well understand why the entire system is definable? $<M_i|i<\omega>$ is a sequence of classes, in general, an infinite sequence of classes need not be definable. But I have a feeling this sequence $<M_i|i<\omega>$ is similar to a sequence defined by induction, so whether "a sequence of classes defined by induction" is indeed definable? – Song Li Aug 3 '12 at Yes, the classes and the system are definable by induction. One defines the relations $x\in M_i$ and $j_{i,k}(x)=y$, meaning that one defines the classes $\\{(x,i)\mid x\in M_i\\}$ and 1 $\\{(i,k,x,y)\mid j_{i,k}(x)=y\\}$, so one needn't encounter problems from forming "sequences of classes". The point is that the definition of whether $x\in M_{i+1}$ depends only very locally on $M_i$; in general, one cannot legitimately define classes by induction: for example, there are serious isses with defining the classes $\text{HOD}^n$. See jdh.hamkins.org/ generalizationsofkuneninconsistency – Joel David Hamkins Aug 3 '12 at 11:30 My previous comment seems to be missing the set braces { } around those two classes. – Joel David Hamkins Aug 3 '12 at 12:57 I just read the remark on iterated HOD classes in your paper. For the iterated ultrapower case, the graph is definable, but this does not happen in iterated HOD case. Thanks for your helpful answer again. – Song Li Aug 3 '12 at 13:05 1 Hi Joel. I didn't know of the Zadrozny paper, only of an earlier paper by Jech, "Forcing with trees and ordinal definability", Ann. Math. Logic 7, (1974/75), 387–409. Nice reference. – Andres Caicedo Aug 3 '12 at 16:38 show 1 more comment Not the answer you're looking for? Browse other questions tagged set-theory or ask your own question.
{"url":"http://mathoverflow.net/questions/103832/definability-of-the-direct-limit","timestamp":"2014-04-20T18:50:23Z","content_type":null,"content_length":"57892","record_id":"<urn:uuid:665d3ccc-c4bd-4b64-9df7-4060b013448b>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00041-ip-10-147-4-33.ec2.internal.warc.gz"}
John Armstrong Well it’s been quite a while, but I think I can carve out the time to move forwards again. I was all set to start with Lie algebras today, only to find that I’ve already defined them over a year ago. So let’s pick up with a recap: a Lie algebra is a module — usually a vector space over a field $\mathbb{F}$ — called $L$ and give it a bilinear operation which we write as $[x,y]$. We often require such operations to be associative, but this time we impose the following two conditions: \displaystyle\begin{aligned}{}[x,x]&=0\\ [x,[y,z]]+[y,[z,x]]+[z,[x,y]]&=0\end{aligned} Now, as long as we’re not working in a field where $1+1=0$ — and usually we’re not — we can use bilinearity to rewrite the first condition: so $[y,x]=-[x,y]$. This antisymmetry always holds, but we can only go the other way if the character of $\mathbb{F}$ is not $2$, as stated above. The second condition is called the “Jacobi identity”, and antisymmetry allows us to rewrite it as: That is, bilinearity says that we have a linear mapping $x\mapsto[x,\underline{\hphantom{X}}]$ that sends an element $x\in L$ to a linear endomorphism in $\mathrm{End}(L)$. And the Jacobi identity says that this actually lands in the subspace $\mathrm{Der}(L)$ of “derivations” — those which satisfy something like the Leibniz rule for derivatives. To see what I mean, compare to the product where $f$ takes the place of $y$, $g$ takes the place of $z$, and $\frac{d}{dt}$ takes the place of $x$. And the operations are changed around. But you should see the similarity. Lie algebras obviously form a category whose morphisms are called Lie algebra homomorphisms. Just as we might expect, such a homomorphism is a linear map $\phi:L\to L'$ that preserves the bracket: We can obviously define subalgebras and quotient algebras. Subalgebras are a bit more obvious than quotient algebras, though, being just subspaces that are closed under the bracket. Quotient algebras are more commonly called “homomorphic images” in the literature, and we’ll talk more about them later. We will take as a general assumption that our Lie algebras are finite-dimensional, though infinite-dimensional ones absolutely exist and are very interesting. And I’ll finish the recap by reminding you that we can get Lie algebras from associative algebras; any associative algebra $(A,\cdot)$ can be given a bracket defined by $\displaystyle [x,y]=x\cdot y-y\cdot x$ The above link shows that this satisfies the Jacobi identity, or you can take it as an exercise. This is part four of a four-part discussion of the idea behind how the Higgs field does its thing. Read Part 1, Part 2, and Part 3 first. At last we’re ready to explain the Higgs mechanism. We start where we left off last time: a complex scalar field $\phi$ with a gauged phase symmetry that brings in a (massless) gauge field $A_\mu$. The difference is that now we add a new self-interaction term to the Lagrangian: $\displaystyle L=-\frac{1}{4}F_{\muu}F_{\muu}+(D_\mu\phi)^*D_\mu\phi-\left[-m^2\phi^*\phi+\lambda(\phi^*\phi)^2\right]$ where $\lambda$ is a constant that determines the strength of the self-interaction. We recall the gauged symmetry transformations: If we write down an expression for the energy of a field configuration we get a bunch of derivative terms — basically like kinetic energy — that all occur with positive signs and then the potential energy term that comes in the brackets above: $\displaystyle V(\phi^*\phi)=-m^2\phi^*\phi+\lambda(\phi^*\phi)^2$ Now, the “ground state” of the system should be one that minimizes the total energy, but the usual choice of setting all the fields equal to zero doesn’t do that here. The potential has a “bump” in the center, like the punt in the bottom of a wine bottle, or like a sombrero. So instead of using that as our ground state, we’ll choose one. It doesn’t matter which, but it will be convenient to pick: where $\phi_0=\frac{m}{\sqrt{\lambda}}$ is chosen to minimize the potential. We can still use the same field $A_\mu$ as before, but now we will write Since the ground state $\phi_0$ is a point along the real axis in the complex plane, vibrations in the field $\chi$ measure movement that changes the length of $\phi$, while vibrations in $\theta$ measure movement that changes the phase. We want to consider the case where these vibrations are small — the field $\phi$ basically sticks near its ground state — because when they get big enough we have enough energy flying around in the system that we may as well just work in the more symmetric case anyway. So we are justified in only working out our new Lagrangian in terms up to quadratic order in the fields. This will also make our calculations a lot simpler. Indeed, to quadratic order (and ignoring an irrelevant additive constant) we have $\displaystyle V(\phi^*\phi)=m^2\chi^2$ so vibrations of the $\theta$ field don’t show up at all in quadratic interactions. We should also write out our covariant derivative up to linear terms: $\displaystyle D_\mu\phi=\frac{1}{\sqrt{2}}\left(\partial_\mu\chi+i\partial_\mu\theta-ie\phi_uA_\mu\right)$ so that the quadratic Lagrangian is Now, the term in parentheses on the right looks like the mass term of a vector field $B_\mu$ with mass $e\phi_0$. But what is the kinetic term of this field? \displaystyle\begin{aligned}B_{\muu}&=\partial_\mu B_u-\partial_u B_\mu\\&=\partial_\mu\left(A_u-\frac{1}{e\phi_0}\partial_u\theta\right)-\partial_u\left(A_\mu-\frac{1}{e\phi_0}\partial_\mu\theta\ right)\\&=\partial_\mu A_u-\partial_u A_\mu-\frac{1}{e\phi_0}\left(\partial_\mu\partial_u\theta-\partial_u\partial_\mu\theta\right)\\&=F_{\muu}-0=F_{\muu}\end{aligned} And so we can write down the final form of our quadratic Lagrangian: $\displaystyle L^{(2)}=\left[-\frac{1}{4}B_{\muu}B_{\muu}+\frac{e^2\phi_0^2}{2}B_\mu B_\mu\right]+\left[\frac{1}{2}\partial_\mu\chi\partial_\mu\chi-m^2\chi^2\right]$ In order to deal with the fact that our normal vacuum was not a minimum for the energy, we picked a new ground state that did minimize energy. But the new ground state doesn’t have the same symmetry the old one did — we have broken the symmetry — and when we write down the Lagrangian in terms of excitations around the new ground state, we find it convenient to change variables. The previously massless gauge field “eats” part of the scalar field and gains a mass, leaving behind the Higgs field. This is essentially what’s going on in the Standard Model. The biggest difference is that instead of the initial symmetry being a simple phase, which just amounts to rotations around a circle, we have a (slightly) more complicated symmetry to deal with. For those that are familiar with some classical groups, we start with an action of $SU(2)\times U(1)$ on a column vector $\phi$ made of two complex scalar fields with a potential of the form: $\displaystyle V(\phi)=\lambda\left(\phi^\dagger\phi-\frac{v^2}{2}\right)^2$ which is invariant under the obvious action of $SU(2)$ and a phase action of $U$. Since the group $SU(2)$ is three-dimensional there are three gauge fields to introduce for its symmetry and one more for the $U(1)$ symmetry. When we pick a ground state that breaks the symmetry it doesn’t completely break; a one-dimensional subgroup $U(1)\subseteq SU(2)\times U(1)$ still leaves the new ground state invariant — though it’s important to notice that this is not just the $U(1)$ factor, but rather a mixture of this factor and a $U(1)$ subgroup of $SU(2)$. Thus only three of these gauge fields gain mass; they become the $W^ \pm$ and $Z^0$ bosons that carry the weak force. The other gauge field remains massless, and becomes $\gamma$ — the photon. At high enough energies — when the fields bounce around enough that the bump doesn’t really affect them — then the symmetry comes back and we see that the electromagnetic and weak interactions are really two different aspects of the same, unified phenomenon, just like electricity and magnetism are really two different aspects of electromagnetism. This is part three of a four-part discussion of the idea behind how the Higgs field does its thing. Read Part 1 and Part 2 first. Now we’re starting to get to the really meaty stuff. We talked about the phase symmetry of the complex scalar field: which basically wants to express the idea that the physics of this field only really depends on the length of the complex field values $\phi(x)$ and not on their phases. But another big principle of physics is locality — what happens here doesn’t instantly affect what happens elsewhere — so why should the phase change be global? To answer this, we “gauge” the symmetry and make it local. The origin of the term is fascinating, but takes us too far afield. The upshot is that we now have the symmetry transformation: where $\alpha$ is no longer a constant, but a function of the spacetime point $x$. And here’s the big problem: since $\alpha$ varies from point to point, it now affects our derivative terms! Before we had and similarly for $\phi^*$. We say that the derivatives are “covariant” under the transformation; they transform in the same way as the underlying fields. And this is what lets us say that and makes the whole Lagrangian symmetric. On the other hand, what do we see now? We pick up this extra term when we differentiate, and it ruins the symmetry. The way out is to add another field that can “soak up” this extra term. Since the derivative is a vector, we introduce a vector field $A_\mu$ and say that it transforms as $\displaystyle A_\mu'(x)=A_\mu(x)+\frac{1}{e}\partial_\mu\alpha(x)$ Next, we introduce a new derivative operator: $D_\mu=\partial_\mu-ieA_\mu$. That is: $\displaystyle D_\mu\phi(x)=\partial_\mu\phi(x)-ieA_\mu(x)\phi(x)$ And we calculate So the derivative $D_\mu\phi(x)$does vary the same way as the underlying field $\phi(x)$ does! We call $D_\mu$ the “covariant derivative”. If we use it in our Lagrangian, we do recover our symmetry, though now we’ve got a new field $A_\mu$ to contend with. Just like the electromagnetic potential we use the derivative $F_{\muu}=\partial_\mu A_u-\partial_u A_\mu$ to write $\displaystyle L=-\frac{1}{4}F_{\muu}F_{\muu}+(D_\mu\phi)^*D_\mu\phi-m^2\phi^*\phi$ which is now symmetric under the gauged symmetry transformations. It may not be apparent, but this Lagrangian does contain interaction terms. We can expand out the second term to find: phi^*\phi-ieA_\mu\phi^*\partial_\mu\phi-e^2A_\mu A_\mu\phi^*\phi\end{aligned} Our rules of thumb tell us that if we vary the Lagrangian with respect to $A_\mu$ we get the field equation $\displaystyle\partial_\mu F_{\muu}=ej_\mu$ which — if we expand out $F_{\muu}$ as if it’s the Faraday field into “electric” and “magnetic” fields — give us Gauss’ and Ampère’s law in the presence of a charge-current density $j_\mu$. The charge-current, in particular, we can write as $\displaystyle j_\mu=-i\left(\phi^*\partial_\mu\phi-\partial_\mu\phi^*\phi\right)-2eA_\mu\phi^*\phi$ or, in a gauge-invariant manner, as $\displaystyle j_\mu=-i\left[\phi^*D_\mu\phi-(D_\mu\phi)^*\phi\right]$ which is just the conserved current from last time with the regular derivatives replaced by covariant ones. Similarly, varying with respect to the field $\phi$ we find the “covariant” Klein-Gordon $\displaystyle D_\mu D_\mu\phi+m^2\phi=0$ and, when this holds, we can show that $\partial_\mu j_\mu=0$. So we’ve found that if we take the global symmetry of the complex scalar field and “gauge” it, something like electromagnetism naturally pops out, and the particle of the complex scalar field interacts with it like charged particles interact with the real electromagnetic field. This is part two of a four-part discussion of the idea behind how the Higgs field does its thing. Read Part 1 first. Okay, now that we’re sold on the Lagrangian formalism you can rest easy: I’m not going to go through the gory details of any more variational calculus. I do want to clear a couple notational things out of the way, though. They might not all matter for the purposes of our discussion, but better safe than sorry. First off, I’m going to use a coordinate system where the speed of light is 1. That is, if my unit of time is seconds, my unit of distance is light-seconds. Mostly this helps keep annoying constants out of the way of the equations; physicists do this basically all the time. The other thing is that I’m going to work in four-dimensional spacetime, meaning we’ve got four coordinates: $x_0$, $x_1$, $x_2$, and $x_3$. We calculate dot products by writing $v\cdot w=v_1w_1+v_2w_2+v_3w_3-v_0w_0$. Yes, that minus sign is weird, but that’s just how spacetime works. Also instead of writing spacetime vectors, I’m going to write down their components, indexed by a subscript that’s meant to run from 0 to 3. Usually this will be a Greek letter from the middle of the alphabet like $\mu$ or $u$. Similarly, instead of writing $abla$ for the vector composed of the four spacetime derivatives of a field I’ll just write down the derivatives, and I’ll write $\partial_\ mu f$ instead of $\frac{\partial f}{\partial x_\mu}$. Along with writing down components instead of vectors I won’t be writing dot products explicitly. Instead I’ll use the common convention that when the same index appears twice we’re supposed to sum over it, remembering that the zero component gets a minus sign. That is, $v_\mu w_\mu$ is the dot product from above. Similarly, we can multiply a matrix with entries $A_{\muu}$ by a vector $v_u$ to get $w_\mu=A_{\muu}v_u$; notice how the summed index $u$ gets “eaten up” in the process. Okay, now even without going through the details there’s a fair bit we can infer from general rules of thumb. Any term in the Lagrangian that contains a derivative of the field we’re varying is almost always going to be the squared-length of that derivative, and the resulting term in the variational equations will be the negative of a second derivative of the field. For any term that involves the plain field we basically take its derivative as if the field were a variable. Any term that doesn’t involve the field at all just goes away. And since we prefer positive second-derivative terms to negative ones, we usually flip the sign of the resulting equation; since the other side is zero this doesn’t matter. So if, for instance, we have the following Lagrangian of a complex scalar field $\phi$: $\displaystyle L=\partial_\mu\phi^*\partial_\mu\phi-m^2\phi^*\phi$ we get two equations by varying the field $\phi$ and its complex conjugate $\phi^*$ separately: It may not seem to make sense to vary the field and its complex conjugate separately, but the two equations we get at the end are basically the same anyway, so we’ll let this slide for now. Anyway, what we get is a second derivative of $\phi$ set equal to $m^2$ times $\phi$ itself, which we call the “Klein-Gordon wave equation” for $\phi$. Since the term $m^2\phi^*\phi$ gives rise to the term $m^2\phi$ in the field equations, we call this the “mass term”. In the case of electromagnetism in a vacuum we just have the electromagnetic fields and no charge or current distribution. We use the Faraday field $F_{\muu}=\partial_\mu A_u-\partial_u A_\mu$ to write down the Lagrangian $\displaystyle L=-\frac{1}{4}F_{\muu}F_{\muu}$ which gives rise to the field equations $\displaystyle\partial_\mu F_{\muu}=0$ or, equivalently in terms of the potential field $A$: \displaystyle\begin{aligned}\partial_\mu\partial_\mu A_u&=0\\\partial_u A_u&=0\end{aligned} The second equation just expresses a choice we can make to always consider divergence-free potentials without affecting the predictions of electromagnetism; the first equation looks like the Klein-Gordon equation again, except there’s no mass term. Indeed, we know that photons — the particles associated to the electromagnetic field — have no rest mass! Turning back to the complex scalar field, we notice that there’s a certain symmetry to this Lagrangian. Specifically, if we replace $\phi(x)$ and $\phi^*$ by for any constant $\alpha$, we get the same result. This is important, and it turns out to be a clue that leads us — I won’t go into the details — to consider the quantity $\displaystyle j_\mu=-i(\phi^*\partial_\mu\phi-\phi\partial_\mu\phi^*)$ This is interesting because we can calculate \displaystyle\begin{aligned}\partial_\mu j_\mu&=-i\partial_\mu(\phi^*\partial_\mu\phi-\phi\partial_\mu\phi^*)\\&=-i(\partial_\mu\phi^*\partial_\mu\phi+\phi^*\partial_\mu\partial_\mu\phi-\partial_\mu\ where we’ve used the results of the Klein-Gordon equations. Since $\partial_\mu j_\mu=0$, this is a suitable vector field to use as a charge-current distribution; the equation just says that charge is conserved! That is, we can write down a Lagrangian involving both electromagnetism — that is, our “massless vector field” $A_\mu$ and our scalar field: $\displaystyle L=-\frac{1}{4}F_{\muu}F_{\muu}-ej_\mu A_\mu$ where $e$ is a “coupling constant” that tells us how important the “interaction term” involving both $j_\mu$ and $A_\mu$ is. If it’s zero, then the fields don’t actually interact at all, but if it’s large then they affect each other very strongly. This is part one of a four-part discussion of the idea behind how the Higgs field does its thing. Wow, about six months’ hiatus as other parts of my life have taken precedence. But I drag myself slightly out of retirement to try to fill a big gap in the physics blogosphere: how the Higgs mechanism works. There’s a lot of news about this nowadays, since the Large Hadron Collider has announced evidence of a “Higgs-like” particle. As a quick explanation of that, I use an analogy I made up on Twitter: “If Mirror-Spock exists, he has a goatee. We have found a man with a goatee. We do not yet know if he is Mirror-Spock.” So, what is the Higgs boson? Well, it’s the particle expression of the Higgs field. That doesn’t explain anything, so we go one step further. What is the Higgs field? It’s the (conjectured) thing that gives some other particles (some of their) mass, in certain situations where normally we wouldn’t expect there to be any mass. And then there’s hand-waving about something like the ether that particles have to push through or shag carpet that they have to rub against that slows them down and hey, mass. Which doesn’t really explain anything, but sort of sounds like it might and so people nod sagely and then either forget about it all or spin their misconceptions into a new wave of Dancing Wu-Li Masters. I think we can do better, at least for the science geeks out there who are actually interested and not allergic to a little math. A couple warnings and comments before we begin. First off: I’m not going to go through this in my usual depth because I want to cram it into just three posts, albeit longer ones than usual, even though what I will say touches on all sorts of insanely cool mathematics that disappointingly few people see put together like this. Second: Ironically, that seems to include a lot of the physicists, who are generally more concerned with making predictions than with understanding how the underlying theory connects to everything else and it’s totally fine, honestly, that they’re interested in different aspects than I am. But I’m going to make a relatively superficial pass over describing the theory as physicists talk about it rather than go into those underlying structures. Lastly: I’m not going to describe the actual Higgs particle or field as they exist in the Standard Model; that would require quantum field theory and all sorts of messy stuff like that, when it turns out that the basic idea already shows up in classical field theory, which is a lot easier to explain. Even within classical field theory I’m going to restrict myself to a simpler example of the sort of thing that happens. Because reasons. That all said, let’s dive in with Lagrangian mechanics. This is a subject that you probably never heard about unless you were a physics major or maybe a math major. Basically, Newtonian mechanics works off of the three laws that were probably drilled into your head by the end of high school science classes: Newton’s Laws of Motion 1. An object at rest tends to stay at rest; an object in motion tends to stay in that motion. 2. Force applied to an object is proportional to the acceleration that object experiences. The constant of proportionality is the object’s mass. 3. Every action comes paired with an equal and opposite reaction. It’s the second one that gets the most use since we can write it down in a formula: $F=ma$. And for most forces we’re interested in the force is a conservative vector field, meaning that it’s the (negative) gradient (fancy word for “derivative” that comes up in more than one dimension) of a potential energy function: $F=-abla U$. What this means is that things like to move in the direction that potential energy decreases, and they “feel a force” pushing them in that direction. Upshot for Newton: $ma=-abla U$. Lagrangian mechanics comes at this same formula with a different explanation: objects like to move along paths that (locally) minimize some quantity called “action”. This principle unifies the usual topics of high school Newtonian physics with things like optics where we say that light likes to move along the shortest path between two points. Indeed, the “action” for light rays is just the distance they travel! This also explains things like “the angle of incidence equals the angle of reflection”; if you look at all paths between two points that bounce off of a mirror, the one that satisfies this property has the shortest length, making it a local minimum for the action. Let’s set this up for a body moving around in some potential field to show you how it works. The action of a suggested path $q(t)$ — the body is at the point $q(t)$ at time $t$ over a time interval $t_1\leq t\leq t_2$ is: $\displaystyle S[q]=\int\limits_{t_1}^{t_2}\frac{1}{2}mv(t)^2-U(q(t))\,dt$ where $v(t)=\dot{q}(t)$ is the velocity vector of the particle, $v(t)^2$ is the square of its length, and $U(x)$ is a potential function depending only on the position of the particle. Don’t worry: there’s a big scary integral here, but we aren’t going to actually do any integration. The function on the inside of the integral is called the Lagrangian function, and we calculate the action $S$ of the path $q$ by integrating the Langrangian over the time interval we’re concerned with. We write this as $S[q]$ with square brackets to emphasize that this is a “functional” that takes a function $q$ and gives a number back. Of course, as mathematicians there’s really nothing inherently special about functions taking functions as arguments, but for beginners it helps keep things straight. Now, what happens if we “wiggle” the path a bit? What if we calculate the action of $q'=q+\delta q$, where $\delta q$ is some “small” function called the “variation” of $q$? We calculate: $\displaystyle S[q']=\int\limits_{t_1}^{t_2}\frac{1}{2}m(\dot{q}'(t))^2-U(q'(t))\,dt$ Taking the derivative $\dot{q}'$ is linear, so we see that $\dot{q}'=\dot{q}+\delta\dot{q}$; “the variation of the derivative is the derivative of the variation”. Plugging this in: \displaystyle\begin{aligned}S[q']&=\int\limits_{t_1}^{t_2}\frac{1}{2}m(\dot{q}(t)+\delta\dot{q}(t))^2-U(q(t)+\delta q(t))\,dt\\&=\int\limits_{t_1}^{t_2}\frac{1}{2}m(\dot{q}(t)^2+2\dot{q}(t)\cdot\ delta\dot{q}(t)+\delta\dot{q}(t)^2)-U(q(t)+\delta q(t))\,dt\\&\approx\int\limits_{t_1}^{t_2}\frac{1}{2}m(\dot{q}(t)^2+2\dot{q}(t)\cdot\delta\dot{q}(t))-\left[U(q(t))+abla U(q(t))\cdot\delta q(t)\ where we’ve thrown away terms involving second and higher powers of $\delta q$; the variation is small, so the square (and cube, and …) is negligible. So what’s the difference between this and $S[q]$ ? What’s the variation of the action? $\displaystyle\delta S=S[q']-S[q]=\int\limits_{t_1}^{t_2}m\dot{q}(t)\cdot\delta\dot{q}(t)-abla U(q(t))\cdot\delta q(t)\,dt$ where again we throw away negligible terms. Now we can handle the first term here using integration by parts: \displaystyle\begin{aligned}\delta S=S[q']-S[q]&=\int\limits_{t_1}^{t_2}-m\ddot{q}(t)\cdot\delta q(t)-abla U(q(t))\cdot\delta q(t)\,dt\\&=\int\limits_{t_1}^{t_2}-\left[m\ddot{q}(t)+abla U(q(t))\ right]\cdot\delta q(t)\,dt\end{aligned} “Wait a minute!” those of you paying attention will cry out, “what about the boundary terms!?” Indeed, when we use integration by parts we should pick up $\ddot{q}(t_2)\cdot\delta q(t_2)-\ddot{q} (t_1)\cdot\delta q(t_1)$, but we will assume that we know where the body is at the beginning and the end of our time interval, and we’re just trying to figure out how it gets from one point to the other. That is, $\delta q$ is zero at both endpoints. So, now we apply our Lagrangian principle: bodies like to move along action-minimizing paths. We know how action changes if we “wiggle” the path by a little variation $\delta q$, and this should remind us about how to find local minima: they happen when no matter how we change the input, the “first derivative” of the output is zero. Here the first derivative is the variation in the action, throwing away the negligible terms. So, what condition will make $\delta S$ zero no matter what function we put in for $\delta q$? Well, the other term in the integrand will have to vanish: $\displaystyle m\ddot{q}(t)+abla U(q(t))=0$ But this is just Newton’s second law from above, coming back again! Everything we know from Newtonian mechanics can be written down in Lagrangian mechanics by coming up with a suitable action functional, which usually takes the form of an integral of an appropriate Lagrangian function. But lots more things can be described using the Lagrangian formalism, including field theories like electromagnetism. In the presence of a charge distribution $\rho$ and a current distribution $j$, we take the potentials $\phi$ and $A$ as fundamental and start with the action (suppressing the space and time arguments so we can write $\rho$ instead of $\rho(x,t)$: $\displaystyle S[\phi,A]=\int_{t_1}^{t_2}\int_{\mathbb{R}^3}-\rho\phi+j\cdot A+\frac{\epsilon_0}{2}E^2-\frac{1}{2\mu_0}B^2\,dV\,dt$ When we vary with respect to $\phi$ and insist that the variance of $S$ be zero we get Gauss’ law: $\displaystyleabla\cdot E=\frac{\rho}{\epsilon_0}$ Varying the components of $A$ we get Ampère’s law with Maxwell’s correction: $\displaystyleabla\times B=\mu_0j+\epsilon_0\mu_0\frac{\partial E}{\partial t}$ The other two of Maxwell’s equations come automatically from taking the potentials as fundamental and coming up with the electric and magnetic fields from them. A comment just came in on my short rant about electromagnetism texts. Dripping with condescension, it states: Here’s the fundamental reason for your discomfort: as a mathematician, you don’t realize that scalar and vector potentials have *no physical significance* (or for that matter, do you understand the distinction between objects of physical significance and things that are merely convenient mathematical devices?). It really doesn’t matter how scalar and vector potentials are defined, found, or justified, so long as they make it convenient for you to work with electric and magnetic fields, which *are* physical (after all, if potentials were physical, gauge freedom would make no sense). On rare occasions (e.g. Aharonov-Bohm effect), there’s the illusion that (vector) potential has actual physical significance, but when you realize it’s only the *differences* in the potential, it ought to become obvious that, once again, potentials are just mathematically convenient devices to do what you can do with fields alone. P.S. We physicists are very happy with merely achieving self-consistency, thankyouverymuch. Experiments will provide the remaining justification. The thing is, none of that changes the fact that you’re flat-out lying to students when you say that the vanishing divergence of the magnetic field, on its own, implies the existence of a vector I think the commenter is confusing my complaint with a different, more common one: the fact that potentials are not uniquely defined as functions. But I actually don’t have a problem with that, since the same is true of any antiderivative. After all, what is an antiderivative but a potential function in a one-dimensional space? In fact, the concepts of torsors and gauge symmetries are intimately connected with this indefiniteness. No, my complaint is that physicists are sloppy in their teaching, which they sweep under the carpet of agreement with certain experiments. It’s trivial to cook up magnetic fields in non-simply-connected spaces which satisfy Maxwell’s equations and yet have no globally-defined potential at all. It’s not just that a potential is only defined up to an additive constant; it’s that when you go around certain loops the value of the potential must have changed, and so at no point can the function take any “self-consistent” value. In being so sloppy, physicists commit the sin of making unstated assumptions, and in doing so in front of kids who are too naïve to know better. A professor may know that this is only true in spaces without holes, but his students probably don’t, and they won’t until they rely on the assumption in a case where it doesn’t hold. That’s really all I’m saying: state your assumptions; unstated assumptions are anathema to science. As for the physical significance of potentials, I won’t even bother delving into the fact that explaining Aharonov-Bohm with fields alone entails chucking locality right out the window. Rest assured that once you move on from classical electromagnetism to quantum electrodynamics and other quantum field theories, the potential is clearly physically significant. Before we push ahead with the Faraday field in hand, we need to properly define the Hodge star in our four-dimensional space, and we need a pseudo-Riemannian metric to do this. Before we were just using the standard $\mathbb{R}^3$, but now that we’re lumping in time we need to choose a four-dimensional metric. And just to screw with you, it will have a different signature. If we have vectors $v_1=(x_1,y_1,z_1,t_1)$ and $v_2=(x_2,y_2,z_2,t_2)$ — with time here measured in the same units as space by using the speed of light as a conversion factor — then we calculate the metric as: $\displaystyle g(v_1,v_2)=x_1x_2+y_1y_2+z_1z_2-t_1t_2$ In particular, if we stick the vector $v=(x,y,z,t)$ into the metric twice, like we do to calculate a squared-length when working with an inner product, we find: $\displaystyle g(v,v)=x^2+y^2+z^2-t^2$ This looks like the Pythagorean theorem in two or three dimensions, but when we get to the time dimension we subtract $t^2$ instead of adding them! Four-dimensional real space equipped with a metric of this form is called “Minkowski space”. More specifically, it’s called 4-dimensional Minkowski space, or “(3+1)-dimensional” Minkowski space — three spatial dimensions and one temporal dimension. Higher-dimensional versions with $n-1$ “spatial” dimensions (with plusses in the metric) and one “temporal” dimension (with minuses) are also called Minkowski space. And, perversely enough, some physicists write it all backwards with one plus and $n-1$ minuses; this version is useful if you think of displacements in time as more fundamental — and thus more useful to call “positive” — than displacements in space. What implications does this have on the coordinate expression of the Hodge star? It’s pretty much the same, except for the determinant part. You can think about it yourself, but the upshot is that we pick up an extra factor of $-1$ when the basic form going into the star involves $dt$. So the rule is that for a basic form $\alpha$, the dual form $*\alpha$ consists of those component $1$-forms not involved in $\alpha$, ordered such that $\alpha\wedge(*\alpha)=\pm dx\wedge dy\wedge dz\wedge dt$, with a negative sign if and only if $dt$ is involved in $\alpha$. Let’s write it all out for easy reference: \displaystyle\begin{aligned}*1&=dx\wedge dy\wedge dz\wedge dt\\ *dx&=dy\wedge dz\wedge dt\\ *dy&=dz\wedge dx\wedge dt\\ *dz&=dx\wedge dy\wedge dt\\ *dt&=dx\wedge dy\wedge dz\\ *(dx\wedge dy)&=dz\ wedge dt\\ *(dz\wedge dx)&=dy\wedge dt\\ *(dy\wedge dz)&=dx\wedge dt\\ *(dx\wedge dt)&=-dy\wedge dz\\ *(dy\wedge dt)&=-dz\wedge dx\\ *(dz\wedge dt)&=-dx\wedge dy\\ *(dx\wedge dy\wedge dz)&=dt\\ *(dx\ wedge dy\wedge dt)&=dz\\ *(dz\wedge dx\wedge dt)&=dy\\ *(dy\wedge dz\wedge dt)&=dx\\ *(dx\wedge dy\wedge dz\wedge dt)&=-1\end{aligned} Note that the square of the Hodge star has the opposite sign from the Riemannian case; when $k$ is odd the double Hodge dual of a $k$-form is the original form back again, but when $k$ is even the double dual is the negative of the original form. Now that we’ve seen that we can use the speed of light as a conversion factor to put time and space measurements on an equal footing, let’s actually do it to Maxwell’s equations. We start by moving the time derivatives over on the same side as all the space derivatives: \displaystyle\begin{aligned}*d*\epsilon&=\mu_0c\rho\\d\beta&=0\\d\epsilon+\frac{\partial\beta}{\partial t}&=0\\{}*d*\beta-\frac{\partial\epsilon}{\partial t}&=\mu_0c\iota\end{aligned} The exterior derivatives here written as $d$ comprise the derivatives in all the spatial directions. If we pick coordinates $x$, $y$, and $z$, then we can write the third equation as three component equations that each look something like $\displaystyle\frac{\partial\epsilon_x}{\partial y}dy\wedge dx+\frac{\partial\epsilon_y}{\partial x}dx\wedge dy+\frac{\partial\beta_x}{\partial t}dx\wedge dy=\left(\frac{\partial\epsilon_y}{\partial x}-\frac{\partial\epsilon_x}{\partial y}+\frac{\partial\beta_z}{\partial t}\right)dx\wedge dy=0$ This doesn’t look right at all! We’ve got a partial derivative with respect to $t$ floating around, but I see no corresponding $dt$. So if we’re going to move to a four-dimensional spacetime and still use exterior derivatives, we can pick up $dt$ terms from the time derivative of $\beta$. But for the others to cancel off, they already need to have a $dt$ around in the first place. That is, we don’t actually have an electric $1$-form: In truth we have an electric $2$-form: $\displaystyle\epsilon=\epsilon_xdx\wedge dt+\epsilon_ydy\wedge dt+\epsilon_zdz\wedge dt$ Now, what does this mean for the exterior derivative $d\epsilon$? \displaystyle\begin{aligned}d\epsilon=&\frac{\partial\epsilon_x}{\partial y}dy\wedge dx\wedge dt+\frac{\partial\epsilon_x}{\partial z}dz\wedge dx\wedge dt\\&+\frac{\partial\epsilon_y}{\partial x}dx\ wedge dy\wedge dt+\frac{\partial\epsilon_y}{\partial z}dz\wedge dy\wedge dt\\&+\frac{\partial\epsilon_z}{\partial x}dx\wedge dz\wedge dt+\frac{\partial\epsilon_z}{\partial y}dy\wedge dz\wedge dt\\=&\ left(\frac{\partial\epsilon_y}{\partial x}-\frac{\partial\epsilon_x}{\partial y}\right)dx\wedge dy\wedge dt\\&+\left(\frac{\partial\epsilon_x}{\partial z}-\frac{\partial\epsilon_z}{\partial x}\right) dz\wedge dx\wedge dt\\&+\left(\frac{\partial\epsilon_z}{\partial y}-\frac{\partial\epsilon_y}{\partial z}\right)dy\wedge dz\wedge dt\end{aligned} Nothing has really changed, except now there’s an extra factor of $dt$ at the end of everything. What happens to the exterior derivative of $\beta$ now that we’re using $t$ as another coordinate? Well, in components we write: $\displaystyle\beta=\beta_xdy\wedge dz+\beta_ydz\wedge dx+\beta_zdx\wedge dy$ and thus we calculate: \displaystyle\begin{aligned}d\beta=&\frac{\partial\beta_x}{\partial x}dx\wedge dy\wedge dz+\frac{\partial\beta_x}{\partial t}dt\wedge dy\wedge dz\\&+\frac{\partial\beta_y}{\partial y}dy\wedge dz\ wedge dx+\frac{\partial\beta_y}{\partial t}dt\wedge dz\wedge dx\\&+\frac{\partial\beta_z}{\partial z}dz\wedge dx\wedge dy+\frac{\partial\beta_z}{\partial t}dt\wedge dx\wedge dy\\=&\left(\frac{\ partial\beta_x}{\partial x}+\frac{\partial\beta_y}{\partial y}+\frac{\partial\beta_z}{\partial z}\right)dx\wedge dy\wedge dz\\&+\frac{\partial\beta_z}{\partial t}dx\wedge dy\wedge dt+\frac{\partial\ beta_y}{\partial t}dz\wedge dx\wedge dt+\frac{\partial\beta_x}{\partial t}dy\wedge dz\wedge dt\end{aligned} Now the first part of this is just the old, three-dimensional exterior derivative of $\beta$, corresponding to the divergence. The second of Maxwell’s equations says that it’s zero. And the other part of this is the time derivative of $\beta$, but with an extra factor of $dt$. So let’s take the $2$-form $\epsilon$ and the $2$-form $\beta$ and put them together: \displaystyle\begin{aligned}d(\epsilon+\beta)=&d\epsilon+d\beta\\=&\left(\frac{\partial\beta_x}{\partial x}+\frac{\partial\beta_y}{\partial y}+\frac{\partial\beta_z}{\partial z}\right)dx\wedge dy\ wedge dz\\&+\left(\frac{\partial\epsilon_y}{\partial x}-\frac{\partial\epsilon_x}{\partial y}+\frac{\partial\beta_z}{\partial t}\right)dx\wedge dy\wedge dt\\&+\left(\frac{\partial\epsilon_x}{\partial z}-\frac{\partial\epsilon_z}{\partial x}+\frac{\partial\beta_y}{\partial t}\right)dz\wedge dx\wedge dt\\&+\left(\frac{\partial\epsilon_z}{\partial y}-\frac{\partial\epsilon_y}{\partial z}+\frac{\ partial\beta_x}{\partial t}\right)dy\wedge dz\wedge dt\end{aligned} The first term vanishes because of the second of Maxwell’s equations, and the rest all vanish because they’re the components of the third of Maxwell’s equations. That is, the second and third of Maxwell’s equations are both subsumed in this one four-dimensional equation. When we rewrite the electric and magnetic fields as $2$-forms like this, their sum is called the “Faraday field” $F$. The second and third of Maxwell’s equations are equivalent to the single assertion that $dF=0$. Let’s pick up where we left off last time converting Maxwell’s equations into differential forms: \displaystyle\begin{aligned}*d*\epsilon&=\mu_0c^2\rho\\d\beta&=0\\d\epsilon&=-\frac{\partial\beta}{\partial t}\\{}*d*\beta&=\mu_0\iota+\frac{1}{c^2}\frac{\partial\epsilon}{\partial t}\end{aligned} Now let’s notice that while the electric field has units of force per unit charge, the magnetic field has units of force per unit charge per unit velocity. Further, from our polarized plane-wave solutions to Maxwell’s equations, we see that for these waves the magnitude of the electric field is $c$ — a velocity — times the magnitude of the magnetic field. So let’s try collecting together factors of $c\beta$: \displaystyle\begin{aligned}*d*\epsilon&=\mu_0c^2\rho\\d(c\beta)&=0\\d\epsilon&=-\frac{1}{c}\frac{\partial(c\beta)}{\partial t}\\{}*d*(c\beta)&=\mu_0c\iota+\frac{1}{c}\frac{\partial\epsilon}{\partial Now each of the time derivatives comes along with a factor of $\frac{1}{c}$. We can absorb this by introducing a new variable $\tau=ct$, which is measured in units of distance rather than time. Then we can write: The easy thing here is to just write $t$ instead of $\tau$, but this hides a deep insight: the speed of light $c$ is acting like a conversion factor from units of time to units of distance. That is, we don’t just say that light moves at a speed of $c=299\,792\,457\frac{\mathrm{m}}{\mathrm{s}}$, we say that one second of time is 299,792,457 meters of distance. This is an incredibly identity that allows us to treat time and space on an equal footing, and it is borne out in many more or less direct experiments. I don’t want to get into all the consequences of this fact — the name for them as a collection is “special relativity” — but I do want to use it. This lets us go back and write $\beta$ instead of $c\beta$, since the factor of $c$ here is just an artifact of using some coordinate system that treats time and distance separately; we see that the electric and magnetic fields in a propagating electromagnetic plane-wave are “really” the same size, and the factor of $c$ is just an artifact of our coordinate system. We can also just write $t$ instead of $c t$ for the same reason. Finally, we can collect $c\rho$ together to put it on the exact same footing as $\iota$. \displaystyle\begin{aligned}*d*\epsilon&=\mu_0c\rho\\d\beta&=0\\d\epsilon&=-\frac{\partial\beta}{\partial t}\\{}*d*\beta&=\mu_0c\iota+\frac{\partial\epsilon}{\partial t}\end{aligned} The meanings of these terms are getting further and further from familiarity. The $1$-form $\epsilon$ is still made of the same components as the electric field; the $2$-form $\beta$ is $c$ times the Hodge star of the $1$-form whose components are those of the magnetic field; the function $\rho$ is $c$ times the charge density; and the vector field $\iota$ is the current density. To this point, we’ve mostly followed a standard approach to classical electromagnetism, and nothing I’ve said should be all that new to a former physics major, although at some points we’ve infused more mathematical rigor than is typical. But now I want to go in a different direction. Starting again with Maxwell’s equations, we see all these divergences and curls which, though familiar to many, are really heavy-duty equipment. In particular, they rely on the Riemannian structure on $\mathbb{R}^3$. We want to strip this away to find something that works without this assumption, and as a first step we’ll flip things over into differential forms. So let’s say that the magnetic field $B$ corresponds to a $1$-form $\beta$, while the electric field $E$ corresponds to a $1$-form $\epsilon$. To avoid confusion between $\epsilon$ and the electric constant $\epsilon_0$, let’s also replace some of our constants with the speed of light — $\epsilon_0\mu_0=\frac{1}{c^2}$. At the same time, we’ll replace $J$ with a $1$-form $\iota$. Now Maxwell’s equations look like: \displaystyle\begin{aligned}*d*\epsilon&=\mu_0c^2\rho\\{}*d*\beta&=0\\{}*d\epsilon&=-\frac{\partial\beta}{\partial t}\\{}*d\beta&=\mu_0\iota+\frac{1}{c^2}\frac{\partial\epsilon}{\partial t}\end Now I want to juggle around some of these Hodge stars: \displaystyle\begin{aligned}*d*\epsilon&=\mu_0c^2\rho\\d(*\beta)&=0\\d\epsilon&=-\frac{\partial(*\beta)}{\partial t}\\{}*d*(*\beta)&=\mu_0\iota+\frac{1}{c^2}\frac{\partial\epsilon}{\partial t}\end Notice that we’re never just using the $1$-form $\beta$, but rather the $2$-form $*\beta$. Let’s actually go back and use $\beta$ to represent a $2$-form, so that $B$ corresponds to the $1$-form $*\ \displaystyle\begin{aligned}*d*\epsilon&=\mu_0c^2\rho\\d\beta&=0\\d\epsilon&=-\frac{\partial\beta}{\partial t}\\{}*d*\beta&=\mu_0\iota+\frac{1}{c^2}\frac{\partial\epsilon}{\partial t}\end{aligned} In the static case — where time derivatives are zero — we see how symmetric this new formulation is: For both the $1$-form $\epsilon$ and the $2$-form $\beta$, the exterior derivative vanishes, and the operator $*d*$ connects the fields to sources of physical charge and current. • Recent Posts • Blogroll • Art • Astronomy • Computer Science • Education • Mathematics • Me • Philosophy • Physics • Politics • Science • RSS Feeds • Feedback Got something to say? Anonymous questions, comments, and suggestions at • Subjects • Archives April 2014 M T W T F S S « Sep
{"url":"https://unapologetic.wordpress.com/author/drmathochist/page/4/","timestamp":"2014-04-20T05:43:36Z","content_type":null,"content_length":"183342","record_id":"<urn:uuid:cbabd793-e317-4a0e-ab0e-9adfc26650e1>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00129-ip-10-147-4-33.ec2.internal.warc.gz"}
CCSS.Math.Content.HSS-CP.A.2 - Wolfram Demonstrations Project US Common Core State Standard Math HSS-CP.A.2 Demonstrations 1 - 7 of 7 Description of Standard: Understand that two events A and B are independent if the probability of A and B occurring together is the product of their probabilities, and use this characterization to determine if they are independent.
{"url":"http://www.demonstrations.wolfram.com/education.html?edutag=CCSS.Math.Content.HSS-CP.A.2","timestamp":"2014-04-20T03:13:39Z","content_type":null,"content_length":"26072","record_id":"<urn:uuid:d6277096-6e85-4b38-812f-39e54a65dc2d>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00203-ip-10-147-4-33.ec2.internal.warc.gz"}
Distributed beam forming with phase-only control for green cognitive radio networks Cognitive radio (CR) is an intelligent radio system and is able to share the spectrum with licensed users (LU). By adopting adaptive beam forming techniques, CR can reuse the spectrum with LU via directing main beams towards CR users while displaying nulls towards LU. In this article, we present a new distributed beam forming (DB) technique, and study the performance of its application to decentralized CR networks. The presented DB method only controls the phase of the transmitted signal of each CR node in the CR network, and therefore, it is called phase-only DB (PODB) method. It can be implemented at each node independently with only prior knowledge of its own location, and directions of distant CR (DCR) users and LU. The average beam pattern and the average gains for an arbitrary number of CR nodes of PODB method are discussed, showing that CR nodes can constructively transmit signals to DCR users with less interferences to LU via employing the PODB method to form and direct main beams towards directions of DCR users while nulls towards directions of LU. PODB proves to be a green DB method by prolonging the lifetime of the CR network due to effective battery power consumptions at CR nodes. The cumulative distributed function of the beam pattern for a large number of CR nodes is derived and analysed to demonstrate that the PODB method increases the possibility that the transmitted power of the whole CR network at the directions of LU is lower than a certain threshold, which guarantees that CR network causes less disturbing effect to LU. distributed beam forming; collaborative beam forming; cognitive radio networks; phase-only distributed beam forming; null-steering distributed beam forming; green cognitive radio networks 1. Introduction Cognitive radio (CR) is a new concept radio presented in [1,2], which is a promising approach to solve the intensive usage of the precious natural source-spectrum [3]. As described in [3], CR is capable of sensing the communication environment and adapting to it by adjusting its parameters. By applying a beam forming technique, CR can direct its main beams towards CR users while putting nulls to licensed users (LU) in uplink in order to share the same spectrum with LU without disturbing them [4,5]. Distributed beam forming (DB), which is also referred to as collaborative beam forming, is originally employed as an energy-efficient scheme to solve the long distance transmission by wireless sensors networks (WSN), in order to reduce the amount of required energy and consequently to extend the utilization time of the sensors. The basic idea of DB is that a set of nodes in wireless networks act as a virtual antenna array and then form a beam towards a certain direction to collaboratively transmit a signal. DB is proposed in [6] and it shows that by employing K collaborative nodes, the collaborative beam forming can result in up to K-fold gain in the received power at a distance access point. A cross-layer approach to DB for wireless adhoc networks is discussed in [7] with more complicated models and two time phases of communication steps. The improved beam pattern and connectivity properties are shown in [8], and a reasonable beam forming performance affected by a node synchronization error is discussed in [9]. DB has also been introduced in relay communication systems [10-14]. In [10-13], several DB techniques for relay networks with flat fading channels have been developed, while in [14] frequency selective fading has been considered. DB requires accurate synchronization; in other words, the nodes must start transmitting at the same time, synchronize their carrier frequencies, and control their carrier phases so that their signals can be combined constructively at the destination. Several synchronization techniques for CB can be found in [15,16], and a review is given in [17]. As discussed in [4,5], the CR base station (BS) needs to be equipped with array antenna (AA) so that it can form and direct main beams to its own users while steering nulls towards the directions of LU via adaptive beam forming techniques. In this study, instead of requesting CR BS to be deployed together with antenna arrays, we consider decentralized CR networks, which is formed by distributed CR users in a certain area. Those CR users are therefore regarded as CR nodes. Meanwhile DB is suggested to be employed by the CR network to form beams towards distant CR (DCR) users so that the CR network is able to forward the signals to DCR users cooperatively. Thus, the whole network decreases the energy consumption effectively, and also increases the communication range. However, the CR network distinguishes itself from WSN in that it is designed to achieve two goals under one restricted condition, i.e., the received power at DCR users should be maximized, while that at LU be minimized. The restriction limits that CR nodes are only aware of its own location, as well as the directions of DCR users and LU. The constraint supports DB for a CR network functional even without cluster nodes. If cluster nodes are selected, each CR node is able to have the information of locations of all CR nodes during the process of information exchanging and sharing. Then, the DB problem can be described as designing adaptive beam formers with an irregular AA structure. In this article, we consider no CR cluster nodes, which is the basic and simplest network application. For DB, the aim of a CR network can directly be translated into generating main beams towards DCR users while displaying nulls towards LU. In addition, the weights calculation can be performed by each node, and for each node it should be independent from the prior knowledge of other nodes due to the restriction introduced above. This article also suggests generating a certain angular range of nulls instead of point nulls towards LU. The reason is due to the study of the probability density function (pdf) of the angle of arrival (AOA) of the scattered wave at the mobile station in wireless communications [18-20], which will be explained later in this article. Recently, DB for a CR network has also been discussed in [21-23]. In [21], the effect of noise in phase synchronization on the resultant beam pattern has been analysed. A DB scheme based on zero-forcing beam forming has been presented in [22], where the idea that a CR network can be regarded as a relay network to forward the signal to the destination CR node is adopted. It is worth noticing that in [23], the authors have also addressed the above problem of directing main beam to a required direction while putting nulls to unwanted directions. This fits well the idea of introducing DB to CR network. A novel null-steering DB method has been presented in [23] in a way that each node can calculate its own weight by itself without prior knowledge of others. Despite that the DB weights derived in [23] achieve our goal of CR network successfully, further improvements of DB in application to green CR networks are appreciated. In this article, we present a phase-only DB (PODB) method by only controlling the phase of the transmitted signal of each node. With the proposed PODB method, the lifetime of the whole CR network is prolonged, which will be proved in this article. The PODB method is able to direct main beams towards DCR users while nulls are formed towards directions of LU. It can be implemented independently by each node based on very limited knowledge, including directions of DCR users and LU, and its own location. The weight of the PODB method can be regarded as nothing more than a phase perturbation of each node, which has the same amplitude as that of others. By adding certain virtual directions, the proposed PODB method can also generate broadened nulls in a defined range around directions of LU. The average beam pattern is also derived to show what the beam pattern with an arbitrary number of collaborative nodes will converge to. The average gains at both directions of LU and DCR users are calculated approximately. What is more, the cumulative distributed function (CDF) of the beam pattern is derived; it demonstrates the possibility that the transmitted power of the whole CR network at every direction can be lower than a certain threshold. The lifetime of a wireless network with both null-steering DB which has been presented in [23] and the PODB method is discussed and compared, in order to show the "green" aspect of the latter method. The rest of the article is organized as follows. First in Section 2, the model of the CR network will be presented, as well as several necessary assumptions. Then the PODB method is proposed with and without broadened nulls. The average beam pattern of the PODB method, and its gains at certain directions are discussed in Section 3, as well as the derived CDF of the beam pattern generated by the PODB method. Several simulation results are shown in Section 4. The greenness of the PODB method is illustrated in Section 5 by comparing the lifetime of the wireless networks adopting null-steering DB and the PODB method. Finally, Section 6 concludes the article. Notations: ||a||[∞], ||a||[2 ]denote the infinitive norm and Euclidean norm of vector a. (·)^T, (·)* and (·)^H denote the transpose, the conjugate, and the Hermitian transpose, respectively. (A)[m, n ]represents the element of the mth row and nth column of matrix A, and (a)[k ]represents the kth element of vector a. E[·] stands for the statistical expectation, and denotes converge with probability one. J[n](·) is the nth-order Bessel function of the first kind, and I[0](·) is the zeroth-order-modified Bessel function. Re[·] and Im[·] represent the real and imaginary parts of a variable, respectively. 2. Problem formulations and the proposed PODB method 2.1. Geometrical structure of CR networks The geometrical structure of the CR network and also distant receiver terminals including LU and DCR users are illustrated in Figure 1. As shown in Figure 1, K CR nodes are uniformly distributed on a disc centred at O with radius R. Let us denote their polar coordinates of the kth CR node as (r[k], Ψ[k]). L DCR users are considered as access points, which locate in the same plane at . Meanwhile, M LU are also coexisting with DCR users. Their locations are (A[i], ϕ[i]), i = 1,2,...,M. The CR nodes in the CR network are requested to form a virtual antenna array and collaboratively transmit a common message S(t). 2.2. Necessary assumptions Without generality, we also adopt the following assumptions: (1) The number of CR nodes are larger than that of DCR users together with LU, i.e. K > L+M. This is basically required to solve the later matrix equations. In addition, many adaptive beam forming techniques also request that the number of AAs is larger than that of constrained directions where main beams and nulls are dedicatedly directed towards. (2) All DCR users and LU are located in the far-field of CR network, such that (3) The bandwidth of S(t) is narrow, so that it guarantees that S(t) is almost constant during 2R/c second where c is the speed of light. It has been discussed in [24] that OFDM scheme is a proper and recommended candidate for CR due to its flexible adaptation of the spectrum. Since OFDM signal can be regarded as a combination of narrow-band modulated signals, the proposed method in this article can also be implemented in a wide band CR OFDM system. The spatial effect of signal scattering and reflection at LU are taken into account. As a result, the angular spread at LU is suggested to be eliminated by displaying spread nulls around each direction of LU rather than point nulls. However, other channel effects, such as multipath fading and shadowing, are ignored in this article. For simplicity, the proposed PODB is explained and presented when L = 1. In case L > 1, we show in [25] how to generate more than one main beam towards DCR users. We simplify by (A[0], ϕ[0]) when there is only one DCR user. 2.3. System model of CR network Let x[k](t) denote the transmitted signal from kth node, where w[k ]is the weight adopted by the kth node. The received signal at an arbitrary point (A, ϕ) in the far-field due to the kth node transmission is [23], where d[k ]is the distance between the kth node and the access point (A, ϕ), and is the signal path loss with γ donating the path loss exponent. In [23], β[k ]and d[k ]are approximated by where . The two approximations above are due to assumption 2. It also ensures , and then β[k ]≈ β. Substituting Equations (3) and (4) into Equation (2) it follows that [23], If we adopt the initial phase of each node as [6], the received signal at (A, ϕ) is We define The power received at (A, ϕ) is denoted by p(ϕ). If S(t) has normalized power, p(ϕ) is shown as The goal of our proposed PODB method is to find w, where w = (w[1], w[2], ..., w[K])^T, which satisfies Based on Equation (7), it is easy to verify that . Therefore, is always satisfying. The constraint in (9) shows that each w[k ]is no more than a phase adjustment. In other words, we only control the phase modification of each node. A different optimization problem is discussed in [23], which has been proposed based on null-steering beamformers. It has the constraint of the total power of w, i.e. w^Hw ≤ P[T], instead of |w[i]| = 1, i = 1,2,...,K. 2.4. The proposed PODB method We propose the solution of Equation (9) in this section. First we assume W = 0, where W is the width of the spread nulls towards each LU, which requires only a point null at each LU direction, and then we give further results of how to generate spread nulls by modifying the angles. Equation (9) also appeared in the application of adaptive array techniques to solve the problems of array pattern nulling, which was studied in [26]. The proposed solution in [26] linearized the nonlinear problem by considering small position perturbation of each antenna element, and then derived analytic results. It was also proved in [26] that good nulling performance could be achieved, even though the solutions were based on approximations. Unfortunately, the results shown in [26] could be only adopted by uniformly linear arrays (ULA). In the case of CR network, the structure and the motivation of the beam forming problem are very different from that in [26]. As mentioned before, our goal is to find a set of weights (w[1], w[2], ..., w[K]), which can be implemented by each node under the constraint that only the information of (r[k], Ψ[k]), the directions of DCR (ϕ[0]) and LU (ϕ[i], i = 1,2,..., M) are available. We assume that w[k ]has the form of Then, we perform a Taylor expansion in Equation (7) and retain the first two terms. We obtain F[C](ϕ) represents the second item of (11) and is a cancellation pattern which can be used to achieve M nulls. Let we define the following variables where μ ∈ ℜ^K × 1, x[• ]= Re[u[•]], and y[• ]= Im[u[•]]. where e is a K × 1 vector with one as its entities. Thus, Equation (11) can be simplified as According to Equation (9), when W = 0, in order to steer nulls at direction of ϕ[m], = 1,2,..., M, we require Due to assumption 1, Equation (13) has a solution and can be solved as the following. Using the results in [6], Equation (13) can be further written as If we define The solution of Equation (15) is where (0) is a vector with length M and 0 as its entries. To make the calculation of μ be able to be implemented by each node in a distribution fashion, we need to calculate Γ differently, which has a good approximation of original counterparts, and the entries of it depends only on the information that each node is entitled to have (r[k],Ψ[k]), ϕ[0], ϕ[m], m = 1,2,...,M. Theorem 1: Give a matrix U, which is defined as U ≜ (u[1], u[2], ..., u[M]) = X+jY, we have the following results: Proof: See Appendix 1 Theorem 2: The entries of Γ have the following results: Proof: See Appendix 2 Based on Theorem 2, we can replace Γ by E[Γ], and then calculate (E[Γ])^-1 by If we define Then by combining (16) and (22), we have The weight of the kth CR node is represented by So far we have derived the weight of the PODB method, which shows that the computation of w[k ]for each node only depends on c, Γ[2], and (y[m])[k]. The first two can be regarded as constant vectors and a constant matrix, as shown in their definitions and Theorem 2. While in order to calculate (y[m])[k], each node only requires the prior knowledge of its own location (r[k], Ψ[k]), and directions of all LU ϕ[m], m = 1,2,...,M. It is worth noticing that the information of the direction of DCR user ϕ[0 ]is also needed for deciding the initial phase of each node φ[k ]in Equation (6). 2.5. PODB method with spread nulls The principle of the necessity of generating spread nulls has been explained in [25]. In wireless communication systems, the received signal and power spectra at the mobile station at wireless communication depend on the pdf of the AOA of the scattered wave. Clarke considered a uniform AOA pdf over [-π, π) [18]. However, it has been argued and experimentally demonstrated that the scattering encountered in many environments results into a non-uniform pdf of AOA [19]. Reference [20] suggests the two-parameter Von Mises pdf as a flexible and generalized model for the pdf of AOA, which includes non-isotropic scattering cases, and also the isotropic one as a special case. The pdf is given as [20] where θ[p ]accounts for the main direction of the AOA scatter components. Parameter l ≥ 0 controls the width of the AOA of scatter components. Figure 2 shows the p[Θ](θ) with different l for θ[p ]= 0. Based on Von Mises model, point nulls which can be generated by CR networks are not sufficient enough for energy depression at directions of LU, because the power around the nulling direction may also leak into LU due to spatial scattering. As a result, instead of generating point nulls, CR network is required to display spread nulls around each direction of the LU. Figure 2. Von Mises pdf for the AOA of scatter components at mobile station. Null broadening (NB) technique was originally developed as a robust array beam forming technique. It was also regarded as a beam pattern synthesis method. Mailloux [27] presented a NB method by simple modification of the covariance matrix of the received signal. While in [28], a similar NB algorithm was also presented by applying a transform to the same covariance matrix. Both methods were capable of providing notch at locations of interferences signals. In [29], two NB algorithms as covariance matrix taper methods are defined, and considered as effective robust adaptive beam forming techniques, imparting robustness into adaptive pattern by judicious choice of null placement and width. As for wireless communication, NB was employed in a cellular communication system, particularly an example was given in space division multiple access system for downlink beam forming [30]. The above proposed NB methods are all based on ULA. They all require the prior knowledge of the distance between every two AA elements. In our case, this requested information can be translated into the distance between every two CR nodes in CR networks. Consequently, this does not align with our constraint that each node is only aware of its own location. However, one simple way of generating spread nulls is to add N virtual LU sources ϕ[i], i = 1,2,..., N around each direction of existing LU with the angle range of W/2, e.g. |ϕ[i]-ϕ[m]| ≤ W/2, m = 1,2,...,M. The number of those virtual directions N can be chosen with a compromising between computation load and the depth of spread nulls. If N is large enough, all the angles around directions of the LU are continuously all regarded as nulling points, thus the spread nulls in the beam pattern are maintaining the same depth with that of the nulling point ϕ[m]. However, with large N, the size of matrix Γ[2 ]in Theorem 2 increases to (MN) × (MN), and consequently each node must have larger computation burden. 3. Properties of the beam pattern generated by the PODB method In this section, we discuss the properties of the beam pattern that generated by the presented PODB method. The analysed properties include the average beam pattern for an arbitrary number of CR nodes K, the average gains at the directions of DCR user ϕ[0], and at those of all the LU ϕ[m], m = 1,2,...,M. Meanwhile the CDF of the beam pattern with larger K is also derived. 3.1. Average beam pattern The result of average beam pattern is given by Theorem 3. Theorem 3: Using the presented PODB method, the average beam pattern, which is defined as when ϕ[• ]≠ ϕ[0], can be approximated by and the approximation error is bounded by . Proof: See Appendix 3 3.2. Average gain at direction of DCR The average gain at ϕ[0 ]is approximated by 1, i.e. Proof: See Appendix 4 3.3. Average gains at directions of LU The average gain at ϕ[m], m = 1,2,...,M is Proof: See Appendix 5 For CR network, it is very important to limit the transmission power at directions of LU. From Equation (27) and (28), we can see that towards LU directions, the average transmission power are K - 1 times lower than that towards the direction of DCR user. When there is no LU available, i.e. M = 0. Equation (23) gives the result of μ[k ]= 0. Consequently, This is consistent with the average beam pattern of the DB method presented in [6], which has no specific nulling points. 3.4. CDF of the beam pattern generated by PODB method The CDF of the beam pattern generated by PODB method with large number of K is given by Theorem 4. Theorem 4: When K is large enough, the CDF of the beam pattern at different ϕ[• ]can be approximated by where for γ ≥ 0 is the CDF of the Rayleigh distribution, sgn(x) = |x|/x and . Proof: See Appendix 6. We can write when ϕ[• ]= ϕ[m], m = 1,2,..., M Proof: See Appendix 7. From the above equation, we can conclude that when ϕ[• ]= ϕ[m], m = 1,2,..., M, the probability of the total transmitted power of CR network lower than P[0 ]can be expressed by a constant, which is decided by P[0], K and . Based on Equation (21) and the definition of c, can be only determined by ϕ[0], ϕ[m], m = 1,2,...,M and R/λ. Usually P[0 ]is defined by LU due to its spectrum interference tolerance threshold. Consequently, if we want to increase the probability of the case that CR network transmitting less power than P[0 ]towards directions of the LU, we need to increase K. In other words, more CR nodes need to be added to the same CR network (maintaining the same network structure R/λ) to collaboratively perform PODB. 4. Simulations and results The average beam pattern of DB methods are shown in Figure 3. They are DB method without considering nulling directions, which has been presented in [6], the proposed PODB method, and null-steering DB method in [23]. In the simulation, we choose R/λ = 2, K = 100 and ϕ[0 ]= 0°. Meanwhile, two LU are considered, which locate at the directions of ϕ[1 ]= 30°, ϕ[2 ]= -15°. Figure 3 shows that PODB and null-steering DB methods successfully depress transmission power towards the directions of LU. Their performances are similar to each other. The DB method without nulling has a symmetrical average beam pattern without particular nulling points. As mentioned in Equations (27) and (28), the average gain at ϕ[0 ]= 0° approximates to 1, while those at ϕ[1 ]= 30°, ϕ[2 ]= -15° are close to 10 log(1/K) = -20 dB. Figure 3. Average Beampatterns of three DB methods. Figure 4 shows the average beam pattern with only one LU, which has the direction of ϕ = 30°. The other simulation conditions remain the same with Figure 3. In Figure 4, we also consider the average beam pattern of PODB method with broadened nulls, which assumes two virtual LU directions around ϕ = 30°, which are located at ϕ[1 ]= 20°, ϕ[2 ]= 40°. It can be seen from Figure 4 that PODB method with NB generates spread nulls around ϕ = 30° with the width of W = 20° and depth about -20 dB. Figure 4. Average Beampatterns of PODB methods. Figure 5 shows the CDF of the beam pattern at different angle ϕ, which has been discussed in Theorem 4. As required in Theorem 4, the number of nodes in CR network needs to be large, so that the sum of random variables can be regarded as Gaussian distribution random variables. Therefore, we consider two cases with K = 100, and K = 500. Meanwhile, only one LU is considered, i.e. ϕ = 5°. The DCR user is at the direction of ϕ[0 ]= 0°. P[0 ]in Theorem 4 is chosen as P[0 ]= 0.01. It can be seen from Figure 5 that, in the angle range of 5° ≤ ϕ ≤ 7°, considering the probability that the transmitted power of CR network lower than P[0], PODB method has much higher chance than the DB method without nulling. Meanwhile, as discussed in Section 3.4, Prob{P(ϕ[m])|P(ϕ[m]) ≤ P[0]} is a constant which depends on ϕ[1], ϕ[0 ]and K. When more nodes participate in CR network to perform DB (K is increasing), Prob{P(ϕ[m])|P(ϕ[m]) ≤ P[0]} is also increasing, which is also demonstrated in Figure 5. 5. Greenness of the PODB method The battery consumptions of the network adopting the proposed PODB method and null-steering DB method presented in [23] are illustrated in Figures 6 and 7. Figure 6 shows that to perform PODB, each node of CR network consumes the same amount of battery power at the same time. The power consumption of the whole network is "balanced", because of the constraint in Equation (9), i.e. |w[k]| = 1. While null-steering DB method utilizes the battery power of each node "unevenly", as shown in Figure 7. When the battery of one node is dying, the others nodes are still power supplied. Figure 6. Power consumption of CR network with PODB method. Figure 7. Power consumption of CR network with null-steering DB method. For null-steering method, we introduce a new concept of "DB cycle". It describes the process from the moment that all nodes in the network start participating DB till the moment that one node in the network first runs out of its battery. Then, the rest of the nodes of the network will start computing the new weights of DB method with one less number of the nodes, and thus another round of DB cycle begins. The number of CR nodes K in a DB cycle is requested to satisfy k ≥ K[0], 1 ≤ K[0 ]≤ K, where K[0 ]is the least number of CR nodes required by CR network to perform DB, due to the restriction of the transmission power towards directions of LU We denote the normalized the weights of the ith DB cycle of null-steering DB method as and it satisfies that . Assume is the length of the time period of the ith DB cycle of the null-steering DB where p^(i) is the battery power consumption of the CR node, which finishes its battery first in the ith DB cycle. Then, the total lifetime of CR network by adopting null-steering method can be written as The total energy consumption is The total amount of energy of the whole network is fixed, due to the fact that all nodes will consume all of their batteries power in the end. Let us denote the total energy of the CR network as E. It is easy to verify that Only when K[0 ]= 1, However, the concept of DB cycle is not necessary for PODB, because all the nodes will be finishing their batteries at the same time. We define the normalized weights of the PODB method as w[po], and it satisfies . Then, the lasting time of the network with the PODB method using the same amount of energy E is Using the results of Equations (34), (35), (37) and (39), we can write that Thus, it has been proved that the life time of the network adopting PODB method is longer than that adopting null-steering method. Only when K[0 ]= 1, the network has the same lifetime by employing both two DB methods. 6. Conclusions A new DB method, which is called PODB method, has been proposed in this article in the application of a green CR network. It is implemented in a way that CR nodes transmit signals constructively towards directions of DCR users and destructively to the LU. Considering the average beam pattern of the PODB method, it achieves the highest gain at the direction of DCR users and the lowest to the LU. The PODB method can be computed at each node in a so called distributed way, i.e., each node is only aware of its own location, and the directions of LU and DCR users. The PODB is different from the null-steering-based DB method meaning that the weights of the former can be regarded as a phase perturbation of each node. Consequently, it utilizes energy of the whole network evenly and effectively, a characteristic which renders itself to green concept. In a CR network, it is extremely important to calculate the probability of the case that the power transmitted by the CR network towards directions of LU is lower than a certain threshold. Therefore, the CDF of the beam pattern of the PODB method has been derived in this article assuming a large number of CR nodes. Comparing with the DB method without particular nulling directions, the PODB method achieves much higher chances of transmitting less power towards directions of LU. To increase the chances that the disturbances to LU, which are caused by the CR network are less than the interference power threshold that the LU can tolerate, the presented CDF suggests that more CR nodes should participate in the CR network to perform DB. Appendix 1 Proof of Theorem 1 It has been discussed in [23] that where , and the pdf of z[k ]has been presented in [6], The above exponential item can be further simplified by the following We define Equation (43) can be written as has the same distribution with z[k], because the pdf of z[k ]in (42) has nothing to do with ϕ[m], ϕ[n]. Therefore, using the same method, (45) can be further written as Especially, when m = n, Equations (41), (47) and (49) show the results of Theorem 1. Appendix 3 Proof of Theorem 3 Based on Equations (7) and (10) As mentioned before, μ[k ]is a small perturbation of the phase of each node. We here adopt the approximation of e^x by e^x ≈ 1+x , and it holds the following result [31], Then we have Equation (57) can be approximated by We define a new random variable Based on the pdf of z[k], we have the following results [6] Using the result in Theorem 1, it is easy to calculate the expectations of the following variables, Consequently from Equations (63), (64) and (23), the expectation of ξ[k ]can be calculated by From Equation (65), it is easy to verify that Meanwhile, and are identical independently distributed variables. ξ[k ]and are independently distributed random variables. Thus, with the result in Equations (62) and (65), considering the general case that ϕ[• ]≠ ϕ[0], we can deduce the second item in Equation (60) further by Substituting Equation (68) into Equation (60), we have final results The error of the approximation in (61) can be bounded using the result of (58) by Based on Equation (23) and the results of Equation (62), we can deduce that In Theorem 2, it holds that By substituting Equations (75), (72) into (70), we can write The approximation error in the second item of Equation (60) can be written Then the estimation error ε between and can be written based on the result of Equation (77) Appendix 5 Proof of Equation (28) Theorem 3 and the definition of Γ[2 ]in Theorem 2 illustrate that, when ϕ[• ]= ϕ[m], m = 1,2,...,M, where v[m ]denotes a M × 1 vector with ones as the entities of the mth row. Meanwhile based on the definition of (q[1], q[2], ..., q[M]) we can write when ϕ[• ]= ϕ[m], m = 1,2,...,M, By substituting (85) into the average beam pattern in Theorem 3, Which shows the result of Equation (28). Appendix 6 Proof of Theorem 4 We define a new random variable Definition of ξ[k ]is shown in Appendix 3 Equation (61). and are identical independently distributed variables. Thus, if K is large enough, according to the Central Limit Theorem, χ has a Gaussian distribution, i.e. χ ~ N(ρ, σ^2), where Using the result discussed in Equations (62) and (65) in Appendix 3 From Equation (75) in Appendix 3, Thus, by adopting the results shown in (91) and (92), From Equation (93), if we define a new random variable γ by γ ≜ |χ-ρ|, the distribution of γ is Rayleigh distribution, i.e. , where . We then are able to calculate the probability Prob{P(ϕ[•])|P(ϕ [•]) ≤ P[0]}. We next discuss the outcome for the different cases. By combining (94) and (95), it is easy to verify Theorem 4. Sign up to receive new article alerts from EURASIP Journal on Wireless Communications and Networking
{"url":"http://jwcn.eurasipjournals.com/content/2012/1/65","timestamp":"2014-04-19T22:06:31Z","content_type":null,"content_length":"234926","record_id":"<urn:uuid:aac68c6c-841c-4e35-801b-826c06df3e4c>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00141-ip-10-147-4-33.ec2.internal.warc.gz"}
Three Ways to Reduce Errors in Your Excel SUM Formulas Work In Progress...After nearly ten years, I'm redesigning ExcelUser.com. This is the new design. You can learn more here. Also, if you find something wrong with the site, please tell me about the problems. And thanks for your patience.--Charley Kyd Excel Formulas Three Ways to Reduce Errors in Your Excel SUM Formulas Simple spreadsheet edits can introduce costly errors. Here are three techniques that can help you to reduce such problems. by Charley Kyd, MBA Microsoft Excel MVP If you aren't careful, simple edits to your reports and analyses can cause significant errors. A contractor was preparing a bid using Lotus 1-2-3. At the last minute, he realized that he had overlooked a $100,000 expense. So he inserted a row in his list of costs, entered the expense, recalculated, and submitted his bid. Unfortunately, he made the mistake shown above. When he inserted the new row he forgot to modify the SUM function at the bottom of the column. Here, the 100 looks like it's included in the total, but it's not. Because the contractor won the bid but lost his shirt, he tried to sue Lotus. But the judge tossed the case out of court. Lotus couldn't be responsible for their users' stupid mistakes, the judge This article describes three ways to protect yourself against errors like these. One uses an automatic setting in Excel; another uses a range-name trick that you've probably never seen before; and one uses very convenient spreadsheet formatting. Using Excel's Automatic Solution You could have arrived at this point by inserting row 7, as shown. Or you could have entered the numbers and formula shown in the range B4:B8. When you enter a value cell B7, Excel can automatically modify the SUM formula to include that new value. To set up Excel to do this, choose Tools, Options. In the Edit tab, make sure there's a check in the checkbox titled "Extend data range formats and formulas." This behavior has many quirks. Here are several: • The original SUM function must include at least three cells in its range. That is, if the formula in cell B8 were =SUM(B5:B6), Excel would not modify the SUM formula when you enter a value in cell B7. • This behavior works if the SUM formula is below a column or to the right of a row of data. It does not work if it's above the column or to the left of a row. • Excel modifies the SUM formula if it is within 20 cells of the new data. In the figure, for example, Excel will update a SUM formula in cell B27, but not in cell B28. These aren't all the quirks, but they're enough to suggest that alternate approaches would be useful. Using the 'NextUp' Relative Reference To see how this works, I need to explain two aspects of Excel range names. First, Excel names can use absolute or relative references. Typically, we use absolute references. That is, we could define MyCell as =Sheet1!$A$1. But we also could define a name using relative references. To illustrate, assume that cell B5 is the active cell. We could define the name NextUp as =Sheet1!B4. That is, cell B4 is the next cell above cell B5, the active cell. Because NextUp uses a relative reference, the name is defined with reference to the active cell, and that relative reference applies to any cell in which the name is used. For example, if NextUp is used in cell M50 the name would refer to cell M49. There's one significant problem with this definition of NextUp, however: It's defined in terms of Sheet1. Therefore if cell D5 of Sheet3 is active, NextUp will reference cell D4 of Sheet 1. That won't do at all. To get around this problem, we must change the way that NextUp is defined. In addition to using a relative cell reference, we also must use a relative sheet reference. Again, assuming that cell B5 is the active cell, we define NextUp as: (Because relative sheet references are uncommon, you should know one unique characteristic of them. A name using a relative sheet reference and an absolute cell reference, like... ...would refer to the specified cell in every active worksheet. Unlike normal cell references, this reference will not change if you insert rows above cell A3 or columns to its left, or if you cut and paste the referenced cell to a new location. That is, a name defined as shown always will reference cell A3 until you manually change its definition.) With NextUp properly defined, we can use the name in our SUM formulas. In the figure, that is, we can change the SUM formula to be: If there's a chance that a date or some other numeric title would appear in cell B3, you could use the formula: Here, the N() function returns the numeric value of cell B3. That is, it returns zero if the cell is a label, otherwise it returns the value. Honestly, I seldom use this second approach, because I seldom include dates without a border row, as shown below. However, it's useful to know that a solution exists if you ever need it. After you enter one of the two formulas above, it's always a good idea to check that the NextUp reference is working correctly. To do so, select the cell that contains the formula; copy the text of the reference (here, "B3:NextUp"); press the F5 function key to launch the Go To dialog box; paste the reference text into the Reference edit box; then press OK. After you do so, Excel should select the expected range. Using Border Rows Here, all summary formulas are anchored in the gray cells. For example, the formula in cell B8 is: One useful feature of this approach is that it leaves no doubt about where to insert new data. You know that any data inserted between the gray borders always will be included in your summary If you prefer to use range names in your formulas, rather than cell addresses, assign the names in row 2 to the areas bounded by the gray borders. That is, select the range B2:C7, choose Insert, Name, Create; ensure that Top Row is checked; then choose OK. To assign the range names to formulas, choose Insert, Name, Apply; make sure that the names you want to apply are selected; then choose OK. This approach would change =SUM(B3:B7) to =SUM(Sales). Wrapping Up If the contractor who got into trouble using a Lotus spreadsheet had used any of these techniques with Excel, he probably would have saved himself some expense and embarrassment. Perhaps they'll save you some problems as well.
{"url":"http://exceluser.com/formulas/errors_sums.htm","timestamp":"2014-04-17T19:17:30Z","content_type":null,"content_length":"19072","record_id":"<urn:uuid:39cd1a76-3f2d-40b5-8841-a5c4ed881a15>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00640-ip-10-147-4-33.ec2.internal.warc.gz"}
The Net Advance of Physics • General: □ Quantum decoherence [Wikipedia] □ Decoherence Website with extensive bibliography [Erich Joos] □ The Role of Decoherence in Quantum Mechanics by Guido Bacciagaluppi [Stanford Encyclopedia of Philosophy] □ On the Interpretation of Measurement in Quantum Theory by H. D. Zeh [Found. Phys. 1, 69 (1970)] □ Consistent Histories and the Interpretation of Quantum Mechanics by R. B. Griffiths [J. Stat. Phys. 36, 219 (1984)] □ The Interpretation of Quantum Mechanics by Roland Omnès [Princeton, 1994] □ Equivalent Sets of Histories and Multiple Quasiclassical Realms by Murray Gell-Mann and James B. Hartle (1994/04) □ A Review of the Decoherent Histories Approach to Quantum Mechanics by J. J. Halliwell [Ann. N. Y. Acad. Sci. 755, 726 (1995)] □ Decoherence: Concepts and Examples by Claus Kiefer and Erich Joos (1998/03) □ Elements of Environmental Decoherence by Erich Joos (1999/08) □ Decoherence and the Appearance of a Classical World in Quantum Theory by Erich Joos et al. [Berlin: Springer, 2003] □ Some Recent Developments in the Decoherent Histories Approach to Quantum Theory by J. J. Halliwell (2003/01) □ Decoherence and the transition from quantum to classical by Wojciech H. Zurek [Revised 2003 version of Physics Today, 44, 36 (1991)] □ Decoherence, einselection, and the quantum origins of the classical by Wojciech H. Zurek [Rev. Mod. Phys. 75, 715 (2003)] □ Between classical and quantum by N. P. Landsman [Handbook of the Philosophy of Physics, Elsevier (2005)] • Re: QUANTUM LOGIC:
{"url":"http://web.mit.edu/redingtn/www/netadv/Xhistories.html","timestamp":"2014-04-21T15:16:25Z","content_type":null,"content_length":"3360","record_id":"<urn:uuid:a8d40452-229a-418f-99c3-38ecf6be323a>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00384-ip-10-147-4-33.ec2.internal.warc.gz"}
188 helpers are online right now 75% of questions are answered within 5 minutes. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/users/coda/medals","timestamp":"2014-04-20T16:17:54Z","content_type":null,"content_length":"58035","record_id":"<urn:uuid:6481c81d-775b-4832-8be0-ef555b0696e8>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00117-ip-10-147-4-33.ec2.internal.warc.gz"}
Fibonacci Search up vote 3 down vote favorite Somebody please explain me the fibonacci search algorithm. I have tried numerous resources around and searched a lot, but the algorithm is still unclear. Most of the resources described it in link with binary search, but I didn't understand 'em. I know fibonacci search algorithm is an extension of binary search, which I know quite well. My books failed to explain as well. I know about fibonacci numbers defined as F(n) = F(n-1) + F(n-2), so no need to explain that. Updating the question by adding what exactly I didn't understand as @AnthonyLabarre said: The book I'm using has strange symbols without any explanations. Posting the algo here, please help. if(key == a[mid]) return mid; // understood this, comes from binary search if(key > a[mid]) { if(p == 1) return -1; // What is p? It comes as a function arg mid = mid + q; //Now what's this q? Again comes a function arg p = p - q; // Commented as p=fib(k-4) q = q-p; // q = fib(k-5) } else // key < a[mid] { if(q == 0) return -1; mid = mid - q; temp = p - q; p = q; // p=fib(k-3) q = temp; // q = fib(k-4) return fibsearch(a, key, mid, p, q); c algorithm search fibonacci Hm, what did you check so far, maybe this? – home Sep 29 '11 at 15:16 The fact that you didn't get it only proves that it's too complicated for you, not that the books failed. Learn to accept responsibility! – Blindy Sep 29 '11 at 15:16 @home I checked that and this as well- ics.forth.gr/~lourakis/fibsrch & a couple of others, including animations. Still unclear. ^_^ – Nilesh Govindrajan Sep 29 '11 at 15:17 @Blindy I'd be more glad to do it, but unfortunately, I've to do it in my practical file. – Nilesh Govindrajan Sep 29 '11 at 15:17 1 Could you point out exactly what you don't understand about it? It's hard for me to guess, especially if you understand binary search. – Anthony Labarre Sep 29 '11 at 15:18 show 4 more comments 2 Answers active oldest votes I'll try to keep things short and clear. Let's say you have a sorted Array A. This array has elements in it, in increasing values. You must find a particular element inside this array. You want to partition this whole Array into sub arrays such that the access time to i th element in the Array is not directly proportional to i. That means a non liner quicker method. Here comes Fibonacci Series in help. One of the most important properties of Fibonacci series is the "golden ratio". You partition the array into sub-arrays at indexes which fall in fibonacci series (0,1,1,2,3,5,8,13,21, 34...). So your array will be partitioned into intervals like A[0]...A[1], A[1]...A[1], A[1]...A[2], A[2]...A[3], A[3]...A[5], A[5]...A[13], A[13]...A[21], A[21]...A[34], and so on. Now since the array is sorted, just by looking at the starting and ending element of any partition will tell you which partition your number lies in. So, you traverse the elements A[0], A[1], A[2 ], A[3], A[5], A[8], A[13], A[21], A[34]... unless the current element is greater than the element you are looking for. Now you are sure that your number lies between this current up vote 7 element and the last element you visited. down vote accepted Next, you keep the elements from A[f(i-1)]..A[f(i)], where i is the index you were currently traversing; f(x) is fibonacci series, and repeat the same procedure unless you find your If you try to calculate the complexity of this approach, this comes to be O(log(x)). This has the advantage of reducing the "average" time required to search. I believe you should be able to write down the code yourself. add comment The references provided in the comments are correct, but I'll try and word it differently for you. It continually divides the list into sublists whose length is a number in the Fibonacci sequence (n = F(m)), then it searches at the next to last index which is also in the Fibonacci sequence (F(m-1)). So if a list or sublist is n items long, where n = F(m), it will search at F(m-1) first, then if the sought value is greater than the found value, it will then work with the sublist from F up vote 2 (m-1)+1 to F(m), or if the sought value is less than the found value, it will work with the sublist from 1 to F(m-1). down vote Due to the nature of Fibonacci numbers, either of these sublists will also have a length that is a Fibonacci number, and the process will repeat. The advantage of the algorithm is that at each step the next address searched in the list will be closer to the current address than at the same step of a normal binary search, which is why this algorithm has an advantage in slow sequential access media such as tape drives. add comment Not the answer you're looking for? Browse other questions tagged c algorithm search fibonacci or ask your own question.
{"url":"http://stackoverflow.com/questions/7599479/fibonacci-search","timestamp":"2014-04-18T06:08:26Z","content_type":null,"content_length":"79327","record_id":"<urn:uuid:129cccdf-29fe-46de-96ee-4c08712b2de5>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00019-ip-10-147-4-33.ec2.internal.warc.gz"}
Relative expression software tool (REST©) for group-wise comparison and statistical analysis of relative expression results in real-time PCR • We are sorry, but NCBI web applications do not support your browser and may not function properly. More information Nucleic Acids Res. May 1, 2002; 30(9): e36. Relative expression software tool (REST©) for group-wise comparison and statistical analysis of relative expression results in real-time PCR Real-time reverse transcription followed by polymerase chain reaction (RT–PCR) is the most suitable method for the detection and quantification of mRNA. It offers high sensitivity, good reproducibility and a wide quantification range. Today, relative expression is increasingly used, where the expression of a target gene is standardised by a non-regulated reference gene. Several mathematical algorithms have been developed to compute an expression ratio, based on real-time PCR efficiency and the crossing point deviation of an unknown sample versus a control. But all published equations and available models for the calculation of relative expression ratio allow only for the determination of a single transcription difference between one control and one sample. Therefore a new software tool was established, named REST© (relative expression software tool), which compares two groups, with up to 16 data points in a sample and 16 in a control group, for reference and up to four target genes. The mathematical model used is based on the PCR efficiencies and the mean crossing point deviation between the sample and control group. Subsequently, the expression ratio results of the four investigated transcripts are tested for significance by a randomisation test. Herein, development and application of REST© is explained and the usefulness of relative expression in real-time PCR using REST© is discussed. The latest software version of REST© and examples for the correct use can be downloaded at http://www.wzw.tum.de/gene-quantification/. Reverse transcription (RT) followed by polymerase chain reaction (PCR) is a powerful tool for the detection and quantification of mRNA. Nowadays real-time RT–PCR is widely and increasingly used, because of its high sensitivity, good reproducibility and wide quantification range (1,2). It is the most sensitive method for the detection and quantification of gene expression levels, in particular for low abundance mRNA (1,2), in tissues with low concentrations of mRNA (e.g. bone marrow, fatty tissues), from limited tissue samples (e.g. biopsies, single cells) (3,4) and to elucidate small changes in mRNA expression levels (1,2,5). However, it is a very complex technique with various substantial problems associated with its true sensitivity, reproducibility and specificity and, as a fully quantitative methodology, it suffers from the problems inherent in real-time RT–PCR. Generally, two quantification strategies can be performed: an absolute and a relative quantification. In absolute quantification the absolute mRNA copy number per vial or capillary is determined by comparison with appropriate external calibration curves (2). An absolute quantification makes it easier to compare expression data between different days and laboratories, because the calibration curve is a non-changing solid and reliable basis. The relative expression is based on the expression ratio of a target gene versus a reference gene and is adequate for most purposes to investigate physiological changes in gene expression levels. Trends can be better explained by relative quantification, but the results are strongly dependent on the reference gene and the normalisation procedure used. Some mathematical models have already been developed to calculate the relative expression ratios of single samples (6–8; http://docs.appliedbiosystems.com/pebiodocs/04303859.pdf), with or without efficiency correction. Equation 1 shows the most convenient mathematical model, which includes an efficiency correction for real-time PCR efficiency of the individual transcripts (6). Ratio = (E[target])^Δ^CP[target(control – sample)]/(E[ref])^Δ^CP[ref(control – sample)] 1 The relative expression ratio of a target gene is computed, based on its real-time PCR efficiencies (E) and the crossing point (CP) difference (Δ) of an unknown sample versus a control (ΔCP[control – sample]). In mathematical models the target gene expression is normalised by a non-regulated reference gene expression, e.g. derived from housekeeping genes, glyceraldehyde-3-phosphate dehydrogenase (GAPDH), albumin, actins, tubulins, cyclophilin, 18S ribosomal RNA (rRNA) or 28S rRNA (9–11). But all published equations and available models for the calculation of relative expression ratios allow for the determination of only a single transcription difference between one control and one sample (n = 1), e.g. given in an DNA array experiment, and not for a group-wise comparison for more samples (n > 2), given in an experimental trial. Therefore, a new software tool was established, named REST© (relative expression software tool), which compares two groups, with up to 16 data points in the sample group versus 16 data points in the control group, and tests the group differences for significance with a newly developed randomisation test. Nevertheless, the successful application of real-time RT–PCR and REST© depends on a clear understanding of the practical problems. Therefore, a clear experimental design, application and validation of the applied real-time RT–PCR remains essential for accurate and fully quantitative measurement of mRNA transcripts. This paper explains the development of REST© application, discusses the technical aspects involved in an experimental trial and illustrates the usefulness of relative expression in real-time RT–PCR using REST©. Animal experiment, total RNA extraction and reverse transcription Total RNA extraction was performed from rat liver as described previously (12). Adult rats were either fed with physiological zinc concentrations (control group, 58 p.p.m. Zn, n = 7) or suffered 22–29 days under zinc depletion (sample group, 2 p.p.m. Zn, n = 6) (W.Windisch, manuscript submitted). Isolated total RNA integrity was electrophoretically verified by ethidium bromide staining and by an average optical density (OD) OD[260]/OD[280] nm absorption ratio of 1.97 (range 1.78–2.09). Either 330, 1000 or 3000 ng total RNA was reverse transcribed with 100 U Superscript II Plus RNase H^ – reverse trancriptase (Gibco Life Technologies, Gaithersburg, MD) in a volume of 40 µl, using 100 µM random hexamer primers (Pharmacia Biotech, Uppsala, Sweden) according to the manufacturers’ instructions. Therefore, concentrations of 8.25, 25 or 75 ng cDNA (= reverse transcribed total RNA) per µl were achieved. Optimisation of RT–PCR Highly purified salt-free primer for target gene metallothionein (MT) (forward primer, CTC CTG CAA GAA GAG CTG CT; reverse primer, TCA GGC GCA GCA GCT GCA CTT) and for reference gene GAPDH (forward primer, GTC TTC ACT ACC ATG GAG AAG G; reverse primer, TCA TGG ATG ACC TTG GCC AG) were generated commercially (MWG Biotech, Ebersberg, Germany). The MT primer set is able to amplify the transcripts of MT isoform 1 and MT isoform 2 mRNA. Conditions for real-time PCRs were optimised in a gradient cycler (Mastercycler Gradient, Eppendorf, Germany) with regard to Taq DNA polymerase (Roche Molecular Biochemicals, Basel, Switzerland), forward and reverse primers, MgCl[2] concentrations (Roche Molecular Biochemicals) and various annealing temperatures (54–66°C). RT–PCR amplification products were separated on a 4% high resolution NuSieve agarose (FMC Bio Products, Rockland, ME) gel electrophoresis and analysed with the Image Master system (Pharmacia Biotech). Optimised conditions were transferred to the following LightCycler real-time PCR protocol. LightCycler real-time PCR For determination of test and software variations all applications of different total cDNA input were performed in triplets (MT 1–3 and GAPDH 1–3). Real-time PCR mastermix was prepared as follows (to the indicated end-concentration): 6.4 µl water, 1.2 µl MgCl[2] (4 mM), 0.2 µl forward primer (0.4 µM), 0.2 µl reverse primer (0.4 µM) and 1 µl LightCycler–Fast Start DNA Master SYBR Green I (Roche Molecular Biochemicals). Nine microlitres of master-mix was filled in the glass capillaries and a 1 µl volume of cDNA (either 8.25, 25 or 75 ng) was added as PCR template. Capillaries were closed, centrifuged and placed into a cycling rotor. A four-step experimental run protocol was used: (i) denaturation program (10 min at 95°C); (ii) amplification and quantification program repeated 40 times (15 s at 95°C; 10 s at 60°C for MT or 10 s at 58°C for GAPDH; 20 s at 72°C; 5 s at 86°C for MT or 5 s at 84°C for GAPDH with a single fluorescence measurement); (iii) melting curve program (60–99°C with a heating rate of 0.1°C per s and a continuous fluorescence measurement); (iv) cooling program down to 40°C. To improve SYBR Green I quantification a high temperature fluorescence measurement point at the end of the fourth segment was performed (13). It melts the unspecific PCR products below the chosen temperature, e.g. primer dimers, eliminates the non-specific fluorescence signal and ensures accurate quantification of the desired GAPDH and MT real-time RT–PCR product, respectively. For the described mathematical model it is necessary to determine the CPs for each transcript. The CP is defined as the point at which the fluorescence rises appreciably above the background fluorescence. In this study the Second Derivate Maximum Method was performed for CP determination, using LightCycler Software 3.5 (Roche Molecular Biochemicals). For statistical evaluations of the determined CP variations and calculated relative expression variations (Tables (Tables11–3), data were analysed for significant differences by ANOVA using approximate tests (Sigma Stat for Windows Software®, Version 2.0; Jandel Corporation). ANOVA of CV of inter-assay variation of MT and GAPDH CPs determined in rat liver by real-time RT–PCR started either with 8.25 (1–3), 25 (4–6) or 75 ng (7–9) cDNA per capillary Factor of down-regulation of MT and GAPDH expression levels in rat liver under zinc depletion (= NOT normalised by the GAPDH expression) Our goal was the development of a software tool that allows for a relative quantification between groups, and a subsequent test for significance of the derived results with a suitable statistical model. Further, the software must be able to run on a widely available platform, which can be used worldwide on different computer systems. For that reason it was programmed to run in Microsoft Excel® (Microsoft Corporation). In what follows, the four pages of REST© and the statistical model, a Pair Wise Fixed Reallocation Randomisation Test© are described in detail. Page 1—Introduction On the introduction page the basic settings are made for the REST© application (Fig. (Fig.1).1). Up to four genes and one reference gene can be labelled. Different background colours in the spreadsheets and the print command are shown and described. Pink cells indicate cells for data input, blue cells indicate data output, grey cells are used for calculation purposes and output of the CP variation, the red box will start the Randomisation Test itself and the printer icon indicates ‘print this page’. Further, the relative expression equation is given with direct links to the data input section on page CP input + randomisation test. Page 2—PCR efficiency The PCR efficiency calculation is facultative and not obligatory for the user (Fig. (Fig.2).2). To generate the data basis for the determination of PCR efficiency of each transcript, it is recommended to use various dilutions in triplets of a pool of all available cDNAs. This ensures the best estimation of the PCR efficiency. If the user wants to determine the real-time PCR efficiencies, an import via copy and paste of cDNA starting concentrations in dilution row and the corresponding CP values measured by the real-time PCR machine is possible. Depending on the real-time PCR platform used, CP values can be determined either by the Threshold Cycles = Fit Point Method (all platforms) or Second Derivate Maximum Method (only LightCycler). Up to three CPs can be inserted in the table (run 1–3) per cDNA starting concentration and REST© determines the slope with a logarithmic algorithm, as published earlier (1,6,14), as well as an indication of the linearity of this logarithmic alignment using Pearson’s correlation coefficient. The real-time PCR efficiencies were calculated from the slope, according to the established equation E = 10^[–1/slope] (1,14). E is in the range from 1 (minimum value) to 2 (theoretical maximum and optimum). If no real-time PCR efficiencies are calculated here, REST© assumes an optimal efficiency of E = 2.0 on the following pages and further procedures. Page 3—CP input + randomisation test On the top the calculated PCR efficiencies or alternatively E = 2.0 are shown and will be the basis for the calculation and randomisation test (Fig. (Fig.3).3). Up to 16 CP data per group (control or sample group) can be inserted for the reference gene and up to four target genes (input section of page 3 is not shown). On clicking the red box, the Randomisation Test application window will appear. Here the range of the data set must be defined, for the control group and sample group, by touching the last cell containing the last CP data point (on the bottom right of the pink input window). Further, the number of randomisations can be chosen and the randomisation test will be started on clicking OK. It is recommended that at least 2000 randomisations be performed (see next section statistical model). Page 3—CP input + randomisation test. The numeric results of the randomisation test are given in the Randomisation Data Output box: the concerned Genes, the CP mean of control group (Control Means), the CP mean of sample group (Sample Means), the Expression Ratios normalised by the reference gene, the corresponding p-Values, the Expression Ratios-nn not normalised by the reference gene, the corresponding p-Values-nn and the number of Randomisations performed. To simplify matters for the user, additional answer sentences were created according to the calculated results. They are divided into the Randomisation Test Results (normalised by reference gene expression) and Randomisation Test Results (not normalised by reference gene expression). The sentences tell the user if the sample group in comparison with the control group is up- or down-regulated and illustrates the factor of regulation and if this up- or down-regulation is significantly different or not. For up-regulation, the factor of regulation is equal to the given value in the Randomisation Data Output box. In the case of down-regulation, the regulation factor is illustrated as a reciprocal value (1/expression ratio or 1/expression ratio-nn, Page 4—Ratio + variation output The mean CP of the genes, the CP variations and the coefficient of variation (CV) are calculated and shown to illustrate the reproducibility and variation of the investigated group data subsets (Fig. Page 4—Ratio + variation output. Statistical model: Pair Wise Fixed Reallocation Randomisation Test© Differences in expression between control and treated samples were assessed in group means (Fig. (Fig.1)1) for statistical significance by randomisation tests (15,16; http://www.bioss.ac.uk/smart/ unix/mrandt/slides/frames.htm). Permutation or randomisation tests are a useful alternative to more standard parametric tests for analysing experimental data. They have the advantage of making no distributional assumptions about the data, while remaining as powerful as more standard tests (16). The rationale for the randomisation test is that standard parametric tests (such as analysis of variance or t-tests) depend on assumptions, such as normality of distributions, whose validity is doubtful. In our case, where the quantities of interest are derived from ratios and variances can be high, normal distributions would not be expected, and it is unclear how a parametric test could best be constructed. A randomisation test avoids making any assumptions about distributions, and is instead based on one we know to be true: that treatments were randomly allocated. The test is conducted as follows. A statistical test is based on the probability of an effect as large as that observed occurring under the null hypothesis of no treatment effect. If this hypothesis is true, the values in one treatment group were just as likely to have occurred in the other group. The randomisation test repeatedly and randomly reallocates the observed values to the two groups, and notes the apparent effect (expression ratio in our case) each time. The proportion of these effects which are as great as that actually observed in the experiment gives us the P-value of the test. They calculate P-values by obtaining the proportion of random allocations of the mean observed data to the control and treated sample groups that would give greater indications of a treatment effect than that observed. If this is small, then there is evidence that the observed treatment effect is not simply the result of random allocation. Thus, the test makes no assumptions concerning the distribution of measured gene expression in any hypothesised population—it assumes only the random allocation of treatment. In practice, it is impractical to examine all possible allocations of data to treatment groups, and a random sample is drawn. If 2000 or more samples are taken, a good estimate of the P-value (SE < 0.005 at P = 0.05) is obtained. In the applied Pair Wise Fixed Reallocation Randomisation Test© for each sample, the CP values for reference and target genes are jointly reallocated to control and sample groups (= pair wise fixed reallocation), and the expression ratios are calculated on the basis of the mean values as described above. They are deemed to give greater indications of a treatment effect than that actually observed if │log R│ > │log R[0]│ where R[0] is the true expression ratio and R the result of reallocation. In the Pair Wise Fixed Reallocation Randomisation Test© a two-sided test was performed. The randomisation tests were carried out using a Microsoft Excel® macro (Microsoft Corporation) attached to a purpose-built spreadsheet and running in the background of REST©. Confirmation of primer specificity Specificity of RT–PCR products was documented with high resolution gel electrophoresis and resulted in a single product with the desired length (MT, 106 bp; GAPDH, 197 bp). In addition, a LightCycler melting curve analysis was performed which resulted in single product-specific melting temperatures: 87.4°C (GAPDH) and 89.7°C (MT). No primer primer–dimer formations were generated during the applied 40 real-time PCR amplification cycles. Real-time PCR amplification efficiencies and variation Real-time PCR efficiencies were calculated from the slopes given in LightCycler software (Roche Molecular Biochemicals LightCycler Software®, Version 3.5). The corresponding real-time PCR efficiency (E) of one cycle in the exponential phase was calculated according to the equation: E = 10^[–1/slope], as described earlier (1,6,14). Investigated transcripts showed real-time PCR efficiency rates for MT (E[MT] = 1.67) and GAPDH (E[GAPDH] = 1.88) in the investigated range from 120 pg to 75 ng cDNA input, repeated six times, with high linearity [Pearson correlation coefficient (r) > 0.989]. To mimic different reverse transcription efficiencies and to confirm precision and reproducibility of real-time PCR, as well as for REST©, three replicates of real-time RT–PCR at each of various cDNA input concentrations (three times more and three times less concentrated) were performed and real-time RT–PCR and REST© variations (CV) were determined. As shown in Table Table1,1, variations of investigated transcripts are based on the CP variation and remained stable between 2.43 and 10.03% for MT and 1.59 and 12.89% for GAPDH; the latter showing a dependence on the cDNA input in real-time PCR. CP itself decreased with increasing cDNA input in both factors and groups. Variation and reproducibility of REST© On the basis of the previously published mathematical model (6), REST© calculates the relative expression ratios on the basis of group means for target gene MT versus reference gene GAPDH and tests the group ratio results for significance. Normalised and not-normalised expression results were compared. Normalised by GAPDH expression. As presented in Table Table2,2, the down-regulation factor (reciprocal value of ratio) of MT mRNA in the case of zinc deficiency was calculated by REST© starting from different cDNA concentrations. Further different runs (MT 1–3 and GAPDH 1–3, n = 3 × 9) were compared to calculate all possible combinations between individual real-time runs. Derived variations and the influence of deviating cDNA starting amounts on the REST© calculated relative expression ratio, and the significance of the performed randomisation test are presented in Table Table2.2. Over all investigated combinations (n = 27) a mean factor of down-regulation of 44.505 (CV = 26.83%) was observed. No significant differences between cDNA starting concentration on expression ratio could be Factor of down-regulation of MT versus GAPDH expression levels in rat liver under zinc depletion (= normalised by the GAPDH expression) No normalisation by GAPDH. In Table Table33 the factor of down-regulation of MT mRNA in the case of zinc deficiency was calculated by REST© without normalisation by the reference gene. For MT a mean down-regulation factor of 28.081 (CV = 10.22%) and for GAPDH of 0.677 (CV = 29.79%) were observed. No significant differences between cDNA starting concentration on expression ratio could be found either for MT or GAPDH. Today, real-time RT–PCR using fluorescence dyes significantly simplifies and accelerates the process of producing reproducible and reliable quantification of mRNA (1). This has led to the development of new kinetic RT–PCR methodologies that are revolutionising the possibilities of mRNA quantification (17). Absolute quantification is very common, where an appropriate external calibration curve is used to determine the absolute mRNA copy number (2). On the other hand, relative expression will be increasingly performed by researchers according to several established mathematical models (6–8). But until now no reliable application was available for a group-wise calculation of the relative expression ratio and a subsequent statistical comparison of the results by a statistical test. Herein, a new software tool is presented and described, which allows for such a group comparison and statistical analysis. REST© is based on an efficiency corrected mathematical model for data analysis. It calculates the relative expression ratio on the basis of the PCR efficiency (E) and crossing point deviation (ΔCP) of the investigated transcripts (6) and on a newly developed randomisation test Crossing-point determination For the determination of CP in general two methods can be chosen: the Fit Point Method or adequate methodologies like Threshold Cycle (18,19) where CP will be measured at constant fluorescence level and the Second Derivative Maximum Method where CP will be measured at the maximum increase or acceleration of fluorescence, even if the fluorescence levels between curves are different (14). Besides the LightCycler, the Fit Point Method or Threshold Cycle are used in TaqMan® (PE Applied Biosystems, Foster City, CA), RotoGene® (Corbett Research, Sydney, Australia), iCycler® Thermal Cycler (Bio-Rad, Hercules, CA) and Multiplex Quantitative PCR System® (Stratagene, La Jolla, CA). The Second Derivate Maximum Method is an algorithm exclusively used in LightCycler software (Roche Molecular Biochemicals LightCycler Software®, Version 3.5). The normalisation of the target gene with an endogenous standard is recommended. REST© allows for a normalisation of the target genes with a reference gene. On both mathematical models the Pair Wise Fixed Reallocation Randomisation Test© is performed and the results are presented in the appropriate output windows. Researchers can decide if they want to correct the data or not. The basis of data normalisation is the expression result of an endogenous desirable unregulated reference gene transcript to compensate inter-PCR variations (sample to sample variations) between the runs. If the CP deviation of the chosen reference gene has the same mean in the control as in sample group mean [ΔCPref [(mean control – mean sample)] = 0] then a stable and constant reference gene mRNA level is given. Real-time RT–PCR-specific errors in the quantification of mRNA transcripts are easily compounded with any variation in the amount of starting material between the samples. This is especially relevant when the samples have been obtained from different individuals, and will result in the misinterpretation of the derived expression profile of the target genes (1). Here some questions arise: what is the appropriate reference gene for an experimental treatment and investigated tissue (11,20)? Commonly used housekeeping genes (9) are suitable for reference genes, since they are present in all nucleated cell types and necessary for basic cell survival. The mRNA synthesis of housekeeping genes is considered to be stable in various tissues, even under experimental treatments (9–11). But numerous treatments and studies have already shown that housekeeping genes are regulated and vary under specific experimental conditions (21–24). This is a fundamental problem for each relative quantification and correction or normalisation in nucleic acid based models, given in array experiments as well as in mRNA expression analysis. If one desired reference is regulated in a specific experimental trial it remains to the investigator to decide which gene can fit the hypothesis of a non-regulated reference for a reliable normalisation. Therefore, he has to test for more housekeeping genes, and calculate a Housekeeping Gene Index© (publication in preparation). According to this Housekeeping Gene Index©, which is based on the expression of at least three housekeeping genes, a more reliable basis of normalisation in relative quantification using REST© can be postulated. The endogenous control or the calculated Housekeeping Gene Index© should be expressed at roughly the same CP range as the target gene (1). In same CP range, reference and target underwent already the same cycle condition, real-time RT–PCR kinetics, with respect to polymerase activation (heat activation of polymerase) or inactivation and reaction end product inhibition by the generated RT–PCR product (25). REST© can give you the first essential hints if a normalisation via the chosen reference gene is useful (by the factor of regulation and p-value-nn of the randomisation test of the reference), or if the reference is not suitable, because it is significantly regulated. Efficiency correction Beside the normalisation by a reference, the PCR efficiency in real-time PCR has a major impact on the accuracy of the calculated expression result (Roche Molecular Biochemicals LightCycler Relative Quantification Software, Version 1.0). A correction for efficiency, as performed in equations 1 and 2, is recommended and results in a more reliable estimation of the ‘real’ expression ratio compared with the no efficiency correction. Small efficiency differences between target and reference genes generate false expression ratio, and the researcher over or underestimates the ‘real’ initial RNA amount. When the difference (Δ) in PCR efficiency (E) is ΔE = 0.03 between target and reference gene, the falsely calculated difference in expression ratio is 46% in the case of E[target] < E[ref] and 209% in the case of E[target] > E[ref] after 25 performed cycles. This difference will increase dramatically by higher efficiency differences ΔE = 0.05 (27 and 338%) and ΔE = 0.10 (7.2 and 1083%) and higher cycles performed. Therefore, efficiency corrected quantification is calculated automatically by REST©, based on the method described on page 2 (Fig. (Fig.2).2). It is recommended to perform the determination of real-time PCR efficiency in triplets for every tissue separately in a pool of all starting RNAs to accumulate all possible impacts on PCR efficiency. As is known, each tissue exhibits an individual PCR efficiency, caused by RT and PCR inhibitors (purified in RNA extraction) and by variations in the total RNA pattern extracted. Relative quantification software Up to now only one relative quantification software program for real-time PCR has been available and is distributed by Roche Molecular Biochemicals: the LightCycler Relative Quantification software (Version 1.0; Roche Molecular Biochemicals). The mathematical algorithm on which the Roche Molecular Biochemicals software is based is unpublished, and might be the one discussed earlier (6,8). Ratio = [(E[ref])^CP[sample]/(E[target])^CPsample]/[(E[ref])^CP[calibrator]/ (E[target])^CP[calibrator]] 2 The LightCycler Relative Quantification software allows only for a comparison of maximal triplets (n = 3), of a target versus a calibrator (cal) gene (which is identical to the control), both corrected via a reference (ref). The relative and normalised expression ratio is calculated on the basis of the median of the performed triplets and computed according to the given equation 3 (Roche Molecular Biochemicals LightCycle Relative Quantifiation Software, Version 1.0). This equation contains a correction factor (CF) as well as a multiplication factor (MF) which are provided in the product-specific applications by Roche Molecular Biochemicals. Ratio concentration (conc) are derived from relative standard curves using the CP median values. Target to reference ratios of all samples are referenced to the target to reference ratio of the calibrator. Thus, it is important to correct for lot-to-lot differences of the calibrator for comparability of data (Roche Molecular Biochemicals LightCycle Relative Quantifiation Software, Version 1.0). Ratio = [conc[(target sample)]/conc[(reference sample)] * MF]/ [conc[(target calibrator]/conc[(reference calibrator)] * CF] 3 Advantages of REST© REST© allows a comparison of four target genes with a reference gene in two experimental groups with up to 16 data points per group. Relative quantification of a target transcript is based on the mean CP deviation of control and sample group, normalised by a reference transcript. Real-time PCR efficiency correction can be performed and is highly recommended. Normalisation via an endogenous standard can be performed according to the users demand, but it is recommended to compensate inter-RT–PCR (or sample to sample) variations (Roche Molecular Biochemicals LightCycle Relative Quantifiation Software, Version 1.0), variations in RNA integrity, RT efficiency differences and cDNA sample loading variations (26). Therefore, a high reproducibility of RT and RT efficiency which greatly varies between tissues, the applied RNA isolation methodology and the RT enzymes used (27,28) are not important any more. Herein, different cDNA input concentrations were tested (±300%) to mimic these huge RT variations and resulted in no significant changes of relative expression ratio evaluated by REST©. Also, the reproducibility of the developed mathematical model used in REST© was given, based on the exact determination of real-time amplification efficiencies and low LightCycler CP variability documented in REST©. Pair Wise Fixed Reallocation Randomisation Test© Randomisation tests with a pair-wise reallocation were seen as the most appropriate approach for this application. They make no assumptions about the distribution of observations in populations, which would always be questionable for gene expression measurements. Instead, they assume that animals were randomly allocated to control and treatment groups, which is known to be true if the experimental protocol was adhered. They are more flexible than non-parametric tests based on ranks (Mann–Whitney, Kruskal–Wallis, etc.) and do not suffer a reduction in power relative to parametric tests (t-tests, ANOVA, etc.) They can be slightly conservative (i.e. type I error rates lower than the stated significance level) due to acceptance of randomisations with group differences identical to that observed, but this mainly occurs when used with discrete data (which gene expression data are not) and small sample sizes. REST© using the Pair Wise Fixed Reallocation Randomisation Test© is presented for a better understanding of relative quantification analysis in real-time RT–PCR. In rat liver the MT down-regulation in the zinc deficiency group versus the control group lead to similar results using either a normalisation or no normalisation via GAPDH. Real-time RT–PCR in combination with REST© is the method of choice for any experiments requiring sensitive, specific and reproducible quantification of mRNA. The software developed, based on the described mathematical model, exhibits suitable reliability as well as reproducibility in individual runs, confirmed by high accuracy and low variation independent of huge template concentration variations. The latest version of REST© and examples for the correct use can be downloaded at http://www.wzw.tum.de/gene-quantification/. The author thanks D. Schmidt for technical assistance. The experimental trial was performed in collaboration with the Animal Nutrition and Production Physiology, Center of Life and Food Sciences, Technical University of Munich, under the supervision of Dr W. Windisch. Bustin S.A. (2000) Absolute quantification of mRNA using real-time reverse transcription polymerase chain reaction assays. J. Mol. Endocrinol., 25, 169–193. [PubMed] 2. Pfaffl M.W. and Hageleit,M. (2001) Validities of mRNA quantification using recombinant RNA and recombinant DNA external calibration curves in real-time RT–PCR. Biotechnol. Lett., 23, 275–282. Lockey C., Otto,E. and Long,Z. (1998) Real-time fluorescence detection of a single DNA molecule. Biotechniques, 24, 744–746. [PubMed] Steuerwald N., Cohen,J., Herrera,R.J. and Brenner,C.A. (1999) Analysis of gene expression in single oocytes and embryos by real-time rapid cycle fluorescence monitored RT–PCR. Mol. Hum. Reprod., 5, 1034–1039. [PubMed] Wittwer C.T., Ririe,K.M., Andrew,R.V., David,D.A., Gundry,R.A. and Balis,U.J. (1997) The LightCycler: a microvolume multisample fluorimeter with rapid temperature control. Biotechniques, 22, 176–181. Pfaffl M.W. (2001) A new mathematical model for relative quantification in real-time RT–PCR. Nucleic Acids Res., 29, 2002–2007. [PMC free article] [PubMed] 7. ABI Prism (2001) Relative quantification of gene expression. 7700 Sequence Detection System User Bulletin 2. 8. Soong R., Ruschoff,J. and Tabiti,K. (2000) Detection of colorectal micrometastasis by quantitative RT–PCR of cytokeratin 20 mRNA. Roche Molecular Biochemicals Internal Publication. Marten N.W., Burke,E.J., Hayden,J.M. and Straus,D.S. (1994) Effect of amino acid limitation on the expression of 19 genes in rat hepatoma cells. FASEB J., 8, 538–544. [PubMed] Foss D.L., Baarsch,M.J. and Murtaugh,M.P. (1998) Regulation of hypoxanthine phosphoribosyltransferase, glyceraldehyde-3-phosphate dehydrogenase and beta-actin mRNA expression in porcine immune cells and tissues. Anim. Biotechnol., 9, 67–78. [PubMed] Thellin O., Zorzi,W., Lakaye,B., De Borman,B., Coumans,B., Hennen,G., Grisar,T., Igout,A. and Heinen,E. (1999) Housekeeping genes as internal standards: use and limits. J. Biotechnol., 75, 291–295. [ Pfaffl M.W., Meyer,H.H.D. and Sauerwein,H. (1998) Quantification of the insulin like growth factor-1 mRNA: development and validation of an internally standardised competitive reverse transcription-polymerase chain reaction. Exp. Clin. Endocrinol. Diabetes, 106, 502–512. [PubMed] 13. Pfaffl M.W. (2001) Development and validation of an externally standardised quantitative Insulin like growth factor-1 (IGF-1) RT–PCR using LightCycler SYBR^® Green I technology. In Meuer,S., Wittwer,C. and Nakagawara,K. (eds), Rapid Cycle Real-time PCR, Methods and Applications. Springer Press, Heidelberg, pp. 281–191. 14. Rasmussen R. (2001) Quantification on the LightCycler. In Meuer,S., Wittwer,C. and Nakagawara,K. (eds), Rapid Cycle Real-time PCR, Methods and Applications. Springer Press, Heidelberg, pp. 21–34. 15. Manly B. (1997) Randomization, Bootstrap and Monte Carlo Methods in Biology. Chapman & Hall. 16. Horgan G.W. and Rouault,J. (2000) Introduction to Randomisation Tests.Biomathematics and Statistics Scotland. Orlando C., Pinzani,P. and Pazzagli,M. (1998) Developments in quantitative PCR. Clin. Chem. Lab. Med., 36, 255–269. [PubMed] Higuchi R., Fockler,C., Dollinger,G. and Watson,R. (1993) Kinetic PCR analysis: real-time monitoring of DNA amplification reactions. Biotechnology, 11, 1026–1030. [PubMed] Gibson U.E., Heid,C.A. and Williams,P.M. (1996) A novel method for real time quantitative RT–PCR. Genome Res., 6, 1095–1001. [PubMed] Haberhausen G., Pinsl,J., Kuhn,C.C. and Markert-Hahn,C. (1998) Comparative study of different standardization concepts in quantitative competitive reverse transcription–PCR assays. J. Clin. Microbiol., 36, 628–633. [PMC free article] [PubMed] Bhatia P., Taylor,W.R., Greenberg,A.H. and Wright,J.A. (1994) Comparison of glyceraldehyde-3-phosphate dehydrogenase and 28S-ribosomal RNA gene expression as RNA loading controls for northern blot analysis of cell lines of varying malignant potential. Anal. Biochem., 216, 223–226. [PubMed] Bereta J. and Bereta,M. (1995) Stimulation of glyceraldehyde-3-phosphate dehydrogenase mRNA levels by endogenous nitric oxide in cytokine-activated endothelium. Biochem. Biophys. Res. Commun., 217, 363–369. [PubMed] Chang T.J., Juan,C.C., Yin,P.H., Chi,C.W. and Tsay,H.J. (1998) Up-regulation of beta-actin, cyclophilin and GAPDH in N1S1 rat hepatoma. Oncol. Rep., 5, 469–471. [PubMed] Zhang J. and Snyder,S.H. (1992) Nitric oxide stimulates auto-ADP-ribosylation of glyceraldehydes 3 phosphate dehydrogenase. Proc. Natl Acad. Sci. USA, 89, 9382–9385. [PMC free article] [PubMed] Kainz P. (2000) The PCR plateau phase—towards an understanding of its limitations. Biochim. Biophys. Acta, 1494, 23–27. [PubMed] Karge W.H., Schaefer,E.J. and Ordovas,J.M. (1998) Quantification of mRNA by polymerase chain reaction (PCR) using an internal standard and a nonradioactive detection method. Methods Mol. Biol., 110, 43–61. [PubMed] Mannhalter C., Koizar,D. and Mitterbauer,G. (2000) Evaluation of RNA isolation methods and reference genes for RT–PCR analyses of rare target RNA. Clin. Chem. Lab. Med., 38, 171–177. [PubMed] 28. Wong L., Pearson,H., Fletcher,A., Marquis,C.P. and Mahler,S. (1998) Comparison of the efficiency of M-MuLV reverse transcriptase, Rnase H-M-MuLV reverse transcriptase and AMV reverse transcriptase for the amplification of human immunglobulin genes. Biotechnol. Tech., 12, 485–489. Articles from Nucleic Acids Research are provided here courtesy of Oxford University Press • PubMed PubMed citations for these articles • Substance PubChem Substance links Your browsing activity is empty. Activity recording is turned off. See more...
{"url":"http://www.ncbi.nlm.nih.gov/pmc/articles/PMC113859/?tool=pubmed","timestamp":"2014-04-19T06:59:25Z","content_type":null,"content_length":"104874","record_id":"<urn:uuid:3ddedd03-c2c4-444b-b9c9-50d959fd1b48>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00483-ip-10-147-4-33.ec2.internal.warc.gz"}
IB Physics/Fields and Forces Topic 6 Fields and ForcesEdit ^[1] ^[2] 6.1 Gravitational Force and FieldEdit 6.1.1 State Newton's universal law of gravitationEdit • Every single point mass attracts every other point mass with a force that is proportional to the product of their masses and is inversely proportional to the square of their separation* $F = G \frac {Mm}{r^2}$ ☆ G = universal gravitation constant (6.67 x 10^-11)Nm^2kg^-2 determined by Henry Cavendish ☆ M = source mass (where the field is coming from) ☆ m = test mass (mass being affected by force, though it has a gravitational force on its own) ☆ r = the distance between the centres of each mass 6.1.2 Define gravitational field strengthEdit A space where a small test mass feels a force due to its mass. 6.1.3 Determine the gravitational field due to one or more point massesEdit • gravitational field can be shown using gravitational field lines • gravitational field lines must be evenly dispersed around the point mass • more lines indicates a greater field magnitude 6.1.4 Derive an expression for gravitational field strength at the surface of a planet, assuming all its mass is concentrated at the centreEdit In other words, an equation that expresses gravitational field strength in terms of the distance away from the source must be found; a function g(r) that calculates the gravitational field strength when the distance away is known. This is because in the situation, we want to find the gravitational field strength when we know how far the surface is from the source. $g = \frac {F}{m}$ $F = G \frac {Mm}{r^2}$ $g = \frac {G \frac {Mm}{r^2}}{m}$ • The m's cancel so: $g = G\frac {M}{r^2}$ 6.2 Electric Force and FieldEdit 6.2.1 State two types of chargeEdit • positive (+) = a deficiency of electrons • negative (-) = an excess of electrons 6.2.2 State and apply The Law of Conservation of ChargeEdit • The net charge of an isolated system is conserved. The charge stays constant. • Charge can neither be created nor destroyed. • Using this law, we can always predict how many positive and negative charges exist 6.2.3 Describe and Explain the difference in electrical properties of conductors and insulatorsEdit • Conductors □ substances that allow electron to easily flow through them □ Examples: metal graphite □ Superconductor: a perfect conductor, all substances become superconductors at 0 Kelvin • Insulator □ Substances that do not allow electrons to flow easily through them □ Examples: plastics, rubber □ There are no perfect insulators 6.2.4 State Coulomb's LawEdit • The electric force between two charges are proportional to the product of their charges and inversely proportional to the square of the distance between them • It acts along the line joining the two charges $F = k \frac {q_1 q_2}{r^2}$ □ F = Force of electric charge attraction/repulsion □ q[1] = source charge □ q[2] = test charge □ r = distance between centre of charges □ k = coulomb constant (8.99 x 10^9)Nm^2C^-2 in a vacuum ☆ In other media use $\frac {1}{4\pi E_0}$ where E[0] is the permitivity constant of free space 6.2.5 Define electric field strengthEdit • The force felt per unit charge by a small positive test charge at that point in the electronic field. $E= \frac {F}{q}$ • Electric fields exist around charges and combinations of charges 6.2.6 Determine the electric field strength due to one or more point chargesEdit 6.2.7 Draw the electric field patterns for different charge configurationsEdit • Need pictures • Electric field always flows from positive to negative 6.3 Magnetic Force and FieldsEdit 6.3.1 State the moving charges give rise to magnetic fieldsEdit • Oersted's basic principle of electromagnetism: moving charges produce a magnetic field 6.3.2 Draw magnetic field patterns due to currentsEdit • charge moving through wire • solenoid • freely moving charge in a magnetic field 6.3.3 Determine the direction of the force on a charge moving in a magnetic fieldEdit • Right Hand Rule #3 □ fingers are magnetic field □ thumb is direction of velocity □ palm is force • or Left Hand Rule (FBI rule) □ name fingers of your left hand with letters FBI, starting from thumb, and hold the three fingers so that there are right angles between each two. □ F (thumb) stands for force □ B (index finger) is the magnetic field direction □ I (middle finger) is the direction of current (same as that of velocity of positive charge and opposite to velocity of negative charge) 6.3.4. Define the magnitude and direction of a magnetic fieldEdit • Magnetic field moves from north to south and is flipped within a magnet Last modified on 3 April 2014, at 04:54
{"url":"http://en.m.wikibooks.org/wiki/IB_Physics/Fields_and_Forces","timestamp":"2014-04-18T15:41:19Z","content_type":null,"content_length":"24382","record_id":"<urn:uuid:c625a1bf-219d-4bed-856a-193f467d768d>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00259-ip-10-147-4-33.ec2.internal.warc.gz"}
The Effect of Bilayer Graphene Nanoribbon Geometry on Schottky-Barrier Diode Performance Journal of Nanomaterials Volume 2013 (2013), Article ID 636239, 8 pages Research Article The Effect of Bilayer Graphene Nanoribbon Geometry on Schottky-Barrier Diode Performance ^1Faculty of Electrical Engineering, Universiti Teknologi Malaysia, 81310 Skudai, Johor Bahru, Malaysia ^2Nanotechnology Research Center Nanoelectronic Group, Physics Department, Urmia University, Urmia 57147, Iran ^3Department of Electrical Engineering, Islamic Azad University, Yasooj Branch, Yasooj 63614, Iran ^4Centre for Artificial Intelligence and Robotics (CAIRO), UTM, 81310 Skudai, Johor Bahru, Malaysia ^5Department of Electrical, Computer and Biomedical Engineering, Islamic Azad University, Qazvin Branch, Qazvin 34185-1416, Iran Received 21 August 2013; Revised 11 October 2013; Accepted 17 October 2013 Academic Editor: Munawar A. Riyadi Copyright © 2013 Meisam Rahmani et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Bilayer graphene nanoribbon is a promising material with outstanding physical and electrical properties that offers a wide range of opportunities for advanced applications in future nanoelectronics. In this study, the application of bilayer graphene nanoribbon in schottky-barrier diode is explored due to its different stacking arrangements. In other words, bilayer graphene nanoribbon schottky-barrier diode is proposed as a result of contact between a semiconductor (AB stacking) and metal (AA stacking) layers. To this end, an analytical model joint with numerical solution of carrier concentration for bilayer graphene nanoribbon in the degenerate and nondegenerate regimes is presented. Moreover, to determine the proposed diode performance, the carrier concentration model is adopted to derive the current-voltage characteristic of the device. The simulated results indicate a strong bilayer graphene nanoribbon geometry and temperature dependence of current-voltage characteristic showing that the forward current of the diode rises by increasing of width. In addition, the lower value of turn-on voltage appears as the more temperature increases. Finally, comparative study indicates that the proposed diode has a better performance compared to the silicon schottky diode, graphene nanoribbon homo-junction contact, and graphene-silicon schottky diode in terms of electrical parameters such as turn-on voltage and forward current. 1. Introduction Graphene nanoribbon (GNR) has attracted much attention of researchers for nanoelectronic applications because of its uniqueness electronic characteristics such as linear energy dispersion and width-tunable energy band gap [1–5]. Due to quantum confinement effect, GNR with width and thickness less than De-Broglie wave length can be assumed as a one-dimensional (1D) material [6]. GNR as an unwrapped carbon nanotube (CNT) is illustrated in Figure 1. It is notable that the properties of GNR are similar to CNT, but the planar structure of GNR guarantees better rectification current-voltage characteristic due to more accessible contact compared to other carbon-based materials [6]. Bilayer graphene nanoribbon (BGN) with unique physical and electrical properties has been incorporated in different nanoscale devices such as field effect transistors (FETs), tunnel transistors, and schottky diodes [7–11]. The BGN schottky-barrier diode has turned out to be of great interest holding the fact that it presents better performance compared to conventional semiconductor junction contacts [12]. In this paper, analytical modeling of BGN schottky-barrier diode is presented in which the effect of BGN geometry and temperature on the proposed diode performance is investigated. 2. Schottky-Barrier Contact Schottky-barrier diode is a nonideal contact between a semiconductor and a metal. Rectification of alternating current is a main relevant characteristic of the schottky-barrier diode [2]. The thermionic emission theory is known as the most important step in rectifying the contact with an n-type semiconductor throughout transport of electrons on top of the anticipated barrier [13]. Schottky-barrier diode based on GNR as a powerful device can be applied in a wide range of nanoelectronic applications because of its special advantages such as low injunction capacitance, low turn-on voltage, fast recovery time, and high operating frequencies [13, 14, 16–18]. Also, RF signal rectification/detection, mixing, and imaging have been found to be some of its wide applications [ 13, 14, 16–18]. The different stacking arrangements of BGN (AA and AB) in interlayer coupling between the top and bottom layers are weaker than those of interlayer coupling within layers leading to an electrical conduction [14, 19]. The electrical conduction through the layers is exposed by the carrier hopping between the π orbits, with the van der Waals forces contributing to the coupling interaction among the layers’ structure [19]. The transport conduction channels are essentially generated by passing through the layers of BGN rather than between those layers [19]. As shown in Figure 2, the different configuration of AB stacking compared to AA is because of the shift in the distance of lattice in the armchair edges. Apparently, BGN with AB stacking is modeled in form of two honeycomb lattices with pairs of in-equivalent sites as (A[1], B[1]) and (B[2], A[2]) which are located in the top and bottom layers, respectively. However, in AA-stacked BGN, the pairs of in-equivalent sites (A[1], B[1]) and (A[2], B[2]) are located in the top and bottom layers, respectively. It is noteworthy that in BGN the carriers can move only in the -direction. However, the carriers are confined to the - and -Cartesian directions being less than the De-Broglie wave length [14, 19]. Different stacking sequences of BGN show metallic and semiconducting (with a band gap of 0.02eV) properties, respectively [20–22]. This means that, by engineering of BGN different arrangements, the schottky-barrier contact can be designed as depicted in Figure 3. 3. Proposed Model The energy dispersion relation of BGN in the attendance of a perpendicular electric field is obtained by the tight-binding technique. In this method, a self-consistent Hartree approximation can be utilized to estimate the induced charges on the different layers of BGN. This is because, as the Fermi surface of the intrinsic BGN communicates to the K-points, the tight-binding method is a suitable technique for the low energy excitations [23]. In addition, the tight-binding method with a single orbital per atom includes coupling parameters between the neighbor sites within BGN layers. In fact, the orbitals are considered to study the behavior of electrons around the Fermi level. The different sp^2 orbitals that are lower than the orbital in energy will have much more overlap with the orbitals of the same symmetry on adjacent atoms [23]. Furthermore, the interaction between the orbitals of adjacent atoms is small, which results in bonding and antibonding orbitals close to the Fermi level. Therefore, the resulting bonding and antibonding molecular orbitals will recline below and above the Fermi level, respectively [23]. It is notable that the bonding and antibonding combinations are equal and communicate to orbitals which are limited to one of the two sublattices. This means that these orbitals are non-bonding in the nearest-neighbor approximation, so their relative energy is zero [23]. For the proposed 1D BGN schottky-barrier diode, the tight-binding technique is adopted in order to calculate the energy band structure of BGN with AB stacking [24]: where is the interlayer hopping energy, is the applied voltage, and . Equation (1) can be exposed as where is the bias voltage, is the wave vector, , , is the Fermi velocity, and is the lattice-spacing [25–28]. Figure 4 illustrates the energy band structure of BGN near the Fermi level which is plotted based on (2). Unbiased BGN () indicates zero band gap; however, energy dispersion of biased BGN by nonzero value of the applied voltage () makes a gap between the conduction and valance bands. It is notable that the size of gap depends on the applied voltage and can be externally controlled by perpendicular electric field. The density of states (DOS) as a fundamental factor shows the number of available states at each energy level that is occupied [29]. Hence DOS for BGN can be modeled as Over energy band structure, carrier concentration can be calculated by integrating the distribution function as where , which is achieved from the band energy equation (2). Considering the importance of carrier concentration, the proposed model of carrier concentration for BGN schottky-barrier diode in the degenerate and nondegenerate regimes is analytically studied. The band gap exhibits a nondegenerate approximation as the distance of Fermi level is more than from either the conduction or valance bands [30]. In this region, because of high difference between and , one (1) can be neglected in comparison with the exponential function as specified in (4). Consequently, the model in the nondegenerate regime is carried out by the following equation as Fermi level in the degenerate approximation is located less than away from the conduction and valence bands or situated within a band [30]. In this case, we can neglect the in comparison with (1) because the value of is very small. Therefore, the model of carrier concentration in the degenerate regime is proposed as As shown in Figure 5, the proposed model of carrier concentration for BGN schottky-barrier diode is approximated by the nondegenerate limit, particularly in low value of the normalized Fermi energy , where (). On the other hand, the model can be approximated by the degenerate limit, specifically in high value of . In the structure of schottky-barrier contact, carriers are injected directly from the metal into the empty space of semiconductor. So, current density is defined [13] as where is the magnitude of the electronic charge and is the carrier velocity in the direction of transport. Kinetic energy is utilized as a main parameter over the Fermi level to calculate the current density [14]. By substitution of the carrier concentration model in (7), the current of BGN schottky-barrier diode is analytically derived as (8). As specified in (8), the current of the proposed diode is a function of various physical and electrical characteristics including the carrier effective mass (), channel area (), temperature (), applied bias voltage (), and thermal voltage (): where is the area of the channel which is proportional with channel width, , and [31]. 4. Results and Discussion The purpose of this study is to highlight the influence of BGN geometry and temperature characteristics on the performance of the schottky-barrier diode. As can be seen in Figure 6, the rectification current-voltage characteristic of the proposed diode at different values of the width is illustrated. Apparently, there is a significant rise in the forward current of the diode as the BGN width increases. Strong dependence of I-V characteristic to the mentioned geometry parameter demonstrates that the increment of BGN width plays an important role in the forward current of the device. In other words, the diode performance will be enhanced by the left-shifted turn-on voltage. To get a better insight into the effect of BGN geometry on the increment of the diode current, two significant factors play an important role, which are the transparency of schottky-barrier contact and the extension of the energy for carrier concentration [32]. For the first parameter, as the diode current and schottky-barrier height are affected significantly by the charges, the channel width effect on the current through the schottky-barrier contact is taken into account in the proposed model. Furthermore, when the center of the channel is unoccupied with the charge impurities, the current increases due to the fact that free electrons are not affected by the positive charges [32]. The effect of second parameter emerges at the beginning of the channel where the barrier potential reduces as a result of low charge density. This phenomenon leads to widening the energy casement and relieve of electron flow in the channel [32]. Moreover, due to the long mean free path (MFP) of GNR, scattering effect is not dominant [32]; therefore increasing the BGN geometry will result in a larger forward current. Figure 7 indicates the turn-on voltage versus width characteristic of BGN schottky-barrier diode in which the turn-on voltage of diode decreases by increasing the width of BGN. As depicted in Figure 7, (100, 0.88) as an inflection point can be considered a turning point after which a remarkable change is expected to result. According to Figures 6 and 7, the turn-on voltage of the diode is obtained for different values of BGN width as shown in Table 1. Thermodynamic stability is one of the most important properties of BGN [30]. In fact, the carrier transport in BGN is a vital phenomenon that determines the I-V characteristic of the device. The effect of temperature on current-voltage characteristic of BGN schottky-barrier diode is investigated in Figure 8. According to the relationship between the current and conductance [31], the presented model indicates a strong temperature dependence of current-voltage characteristic showing that the lower value of turn-on voltage appears as the more temperature increases. In other words, the turn-on voltage will be shifted leftwards and the proposed diode performance will be enhanced. In fact, the conductance of BGN is expected to be affected by the temperature. It has been demonstrated that the minimum conductance of BGN depends on the bias voltage [30]. In BGN, the temperature dependence of conductance is strongly affected by lower applied perpendicular electric field, and weakly temperature dependence of conductance has been reported in higher electric field [30]. According to the zero-gap semiconductor characteristic, the conductance of GNR near the charge-neutrality peak is basically temperature independent for small bias voltages. In contrary, the temperature dependence of conductance in BGN is markedly different from that measured in the GNR. On the other hand, no sign of increase in BNG conductivity with increasing temperature away from the neutrality point has not been observed [30]. It is notable that the temperature dependence of current-voltage characteristic in BGN revealed a new effect of a memory step close to the charge neutrality voltage. The effect is related to the slow relaxation processes in BGN. This characteristic of electron transport in BGN can be adopted in high-temperature applications. It is concluded that temperature is an effective factor in the forward current and turn-on voltage of BGN schottky-barrier contact, leading to movement of free carriers in diode. According to the simulated result in Figure 7, the point of (100, 0.88) is an inflection point, and hence the width of 100nm is considered as an inflection width in Figure 8. Figure 9 indicates a comparative study of the proposed device and the typical I-V characteristic of a silicon schottky diode [13, 14]. Apparently, the effective turn-on voltage of silicon schottky diode is about 0.3V. The proposed BGN schottky-barrier diode also possesses a turn-on voltage of 0.1V that is smaller than the voltage of silicon schottky diode. Accordingly, the proposed diode shows a better performance compared to the conventional silicon schottky diode in terms of the turn-on voltage. As shown in Figure 10, the proposed BGN schottky-barrier diode has a better performance compared to the graphene-silicon schottky diode and GNR homojunction diode in terms of forward current. Moreover, smaller turn-on voltage of the proposed diode (0.1V) in comparison with GNR homojunction diode is indicated in Figure 10. Accordingly, an acceptable rectifying performance is seen comparable with conventional rectification behavior of the schottky diodes. Difference between the BGN schottky-barrier diode and the conventional diodes in terms of mentioned electrical parameters can be associated with diode switching characteristics. It clearly gives an illustration of the fact that GNR-based device characterized by steep subthreshold slope displays a faster transient between on-off states [32]. In fact, a small value of subthreshold slope denotes a small change in the input bias which can modulate the output current and thus leads to less power consumption [32]. It is concluded that due to some excellent properties of GNR such as quantum transport, long spin-diffusion length, and extremely high carrier mobility [32], the proposed schottky-barrier diode can be used as a high speed device in future nanoelectronics. 5. Defects Effect on the Device Performance There are various controllable defects in GNR, including the adatoms, vacancies, substitution, disorder, and stone wales (SW) defects [33]. It is noteworthy that the electronic properties of GNR can be functionalized via inhomogeneities, vacancies, topological defects, doping, adsorption, chemical functionalization, and molecular junctions [34]. It has been investigated that the existence of the defects in GNR is energetically more favorable than in fullerene or CNT [33]. The inhomogeneities, vacancies, and defects can lead to scattering in GNR. These defects persuade long-range deformations, which alter the electron routes [35]. The bond angle of the edge close to the SW defects is reduced from 120° to 116°. In contrast to the shrinking through the width axis, the SW defects stretch from 4.88°A to 5.38°A through the length axis direction, and the transformation energy for the symmetric SW defects is 5.95eV [33]. SW defects have been predicted to modify the band structure and DOS of GNR and hence to impact upon its transport properties [36]. Recent results demonstrate that the defects alter GNR’s chemical reactivity on the chemisorption processes [36]. SW defects can enhance the tendency of graphitic layers to transform into the nonplanar nanostructures, and the defects can play an important role in the intrinsic rumpling of GNR [37]. Additionally, the symmetry effects yield a remarkable conductance decrease in the SW defects configuration [33]. Moreover, the maximum value of Fermi velocity considering the SW defects is 5.25 × 10^5m/s which is about 50% less than the perfect GNR. This is because the defects modify the electron trajectories [38]. According to the ballistic conductance and carrier velocity dependence of current-voltage characteristic [39, 40], it is concluded that the performance of the schottky-barrier diode can be infused by inhomogeneities, vacancies, and defects. As a future work, the effects of SW defects on the BGN schottky-barrier diode performance will be analytically studied. 6. Conclusion In this study, BGN with various stacking arrangements (AA and AB) is applied as a metal and semiconductor contact in a junction schottky-barrier device. According to this assumption, an analytical model for BGN schottky-barrier diode is presented, and the effect of BGN geometry and temperature on I-V characteristic of the device is studied. Based on what has been discussed, by increasing the BGN width, the forward current of diode increases. Moreover, the simulated result indicates that the lower value of turn-on voltage appears as the more temperature increases, which guarantees a better performance of the proposed diode. Finally, a comparative study of the proposed model with silicon schottky diode, GNR homojunction contact, and graphene-silicon schottky diode is presented. Accordingly, an acceptable rectifying performance is seen comparable with conventional rectification behavior of the schottky diodes. To optimize the performance of BGN schottky-barrier diode, the presented model can be applied as a useful tool. It is concluded that the model can assist in comprehending experiments involving GNR schottky-barrier based devices. The authors would like to acknowledge the financial support by Research University grant of the Ministtry of Higher Education (MOHE), Malaysia, under Projects Q.J130000.7123.02H24 and Q.J130000.7123.02H04. Also thanks are due to the Research Management Center (RMC) of Universiti Teknologi Malaysia (UTM) for providing excellent research environment in which they completed this 1. K. S. Novoselov, A. K. Geim, S. V. Morozov et al., “Electric field in atomically thin carbon films,” Science, vol. 306, no. 5696, pp. 666–669, 2004. View at Publisher · View at Google Scholar · View at Scopus 2. A. Kargar and D. L. Wang, “Analytical modeling of graphene nanoribbon Schottky diodes,” in Carbon Nanotubes, Graphene and Associated Devices III, vol. 7761 of Proceedings of the SPIE, San Diego, Calif, USA, August 2010. View at Publisher · View at Google Scholar 3. K. Alam, “Transport and performance of a zero-Schottky barrier and doped contacts graphene nanoribbon transistors,” Semiconductor Science and Technology, vol. 24, no. 1, Article ID 015007, 2009. View at Publisher · View at Google Scholar 4. M. H. Ghadiry, M. Nadi, M. Rahmani, M. T. Ahmadi, and A. B. A. Manaf, “Modelling and simulation of saturation region in double gate graphene nanoribbon transistors,” Semiconductors, vol. 46, no. 1, pp. 126–129, 2012. View at Publisher · View at Google Scholar · View at Scopus 5. M. T. Ahmadi, Z. Johari, N. A. Amin, A. H. Fallahpour, and R. Ismail, “Graphene nanoribbon conductance model in parabolic band structure,” Journal of Nanomaterials, vol. 2010, Article ID 753738, 4 pages, 2010. View at Publisher · View at Google Scholar · View at Scopus 6. M. T. Ahmadi, M. Rahmani, M. H. Ghadiry, and R. Ismail, “Monolayer graphene nanoribbon homojunction characteristics,” Science of Advanced Materials, vol. 4, no. 7, pp. 753–756, 2012. 7. Y. Ouyang, Y. Yoon, and J. Guo, “Scaling behaviors of graphene nanoribbon FETs: a three-dimensional quantum simulation study,” IEEE Transactions on Electron Devices, vol. 54, no. 9, pp. 2223–2231, 2007. View at Publisher · View at Google Scholar · View at Scopus 8. Y. Yoon, G. Fiori, S. Hong, G. Iannaccone, and J. Guo, “Performance comparison of graphene nanoribbon FETs with Schottky contacts and doped reservoirs,” IEEE Transactions on Electron Devices, vol. 55, no. 9, pp. 2314–2323, 2008. View at Publisher · View at Google Scholar · View at Scopus 9. Q. Zhang, T. Fang, H. Xing, A. Seabaugh, and D. Jena, “Graphene nanoribbon tunnel transistors,” IEEE Electron Device Letters, vol. 29, no. 12, pp. 1344–1346, 2008. View at Publisher · View at Google Scholar · View at Scopus 10. A. Naeemi and J. D. Meindl, “Conductance modeling for graphene nanoribbon (GNR) interconnects,” IEEE Electron Device Letters, vol. 28, no. 5, pp. 428–431, 2007. View at Publisher · View at Google Scholar · View at Scopus 11. Q. Liang and J. Dong, “Superconducting switch made of graphene-nanoribbon junctions,” Nanotechnology, vol. 19, no. 35, Article ID 355706, 2008. View at Publisher · View at Google Scholar 12. D. Jena, T. Fang, Q. Zhang, and H. Xing, “Zener tunneling in semiconducting nanotube and graphene nanoribbon p-n junctions,” Applied Physics Letters, vol. 93, no. 11, Article ID 112106, 3 pages, 2008. View at Publisher · View at Google Scholar 13. D. A. Neamen, Semiconductor Physics and Devices, University of New Mexico, Albuquerque, NM, USA, 3rd edition, 2003. 14. M. Rahmani, M. T. Ahmadi, R. Ismail, and M. H. Ghadiry, “Performance of bilayer graphene nanoribbon Schottky diode in comparison with conventional diodes,” Journal of Computational and Theoretical Nanoscience, vol. 10, no. 2, pp. 323–327, 2013. 15. C.-C. Chen, M. Aykol, C.-C. Chang, A. F. J. Levi, and S. B. Cronin, “Graphene-silicon Schottky diodes,” Nano Letters, vol. 11, no. 5, pp. 1863–1867, 2011. View at Publisher · View at Google Scholar · View at Scopus 16. S. Sankaran and K. O. Kenneth, “Schottky barrier diodes for millimeter wave detection in a foundry CMOS process,” IEEE Electron Device Letters, vol. 26, no. 7, pp. 492–494, 2005. View at Publisher · View at Google Scholar · View at Scopus 17. A. Kargar and C. Lee, “Graphene nanoribbon schottky diodes using asymmetric contacts,” in Proceedings of the 9th IEEE Conference on Nanotechnology, pp. 243–245, Genoa, Italy, July 2009. 18. D. Jimenez, “A current-voltage model for Schottky-barrier graphene-based transistors,” Nanotechnology, vol. 19, no. 34, Article ID 345204, 2008. View at Publisher · View at Google Scholar 19. S. M. Mousavi, M. T. Ahmadi, H. Sadeghi et al., “Bilayer graphene nanoribbon carrier statistic in degenerate and non degenerate limit,” Journal of Computational and Theoretical Nanoscience, vol. 8, no. 10, pp. 2029–2032, 2011. View at Publisher · View at Google Scholar · View at Scopus 20. S. Latil and L. Henrard, “Charge carriers in few-layer graphene films,” Physical Review Letters, vol. 97, no. 3, Article ID 036803, 4 pages, 2006. View at Publisher · View at Google Scholar · View at Scopus 21. M. Koshino, “Electron delocalization in bilayer graphene induced by an electric field,” Physical Review B, vol. 78, no. 15, Article ID 155411, 5 pages, 2008. View at Publisher · View at Google 22. M. Koshino, “Electronic transport in bilayer graphene,” New Journal of Physics, vol. 11, no. 9, Article ID 095010, 2009. View at Publisher · View at Google Scholar 23. M. Rahmani, R. Ismail, M. T. Ahmadi, and M. H. Ghadiry, “Quantum confinement effect on trilayer graphene nanoribbon carrier concentration,” Journal of Experimental Nanoscience, 2013. View at Publisher · View at Google Scholar 24. E. V. Castro, K. S. Novoselov, S. V. Morozov et al., “Electronic properties of a biased graphene bilayer,” Journal of Physics Condensed Matter, vol. 22, no. 17, Article ID 175503, 2010. View at Publisher · View at Google Scholar · View at Scopus 25. T. Stauber, N. M. R. Peres, F. Guinea, and A. H. C. Neto, “Fermi liquid theory of a Fermi ring,” Physical Review B, vol. 75, no. 11, Article ID 115425, 10 pages, 2007. View at Publisher · View at Google Scholar · View at Scopus 26. D. S. Novikov, “Numbers of donors and acceptors from transport measurements in graphene,” Applied Physics Letters, vol. 91, no. 10, Article ID 102102, 2007. View at Publisher · View at Google 27. Z. Q. Li, E. A. Henriksen, Z. Jiang et al., “Band structure asymmetry of bilayer graphene revealed by infrared spectroscopy,” Physical Review Letters, vol. 102, no. 3, Article ID 037403, 4 pages, 2009. View at Publisher · View at Google Scholar · View at Scopus 28. E. V. Castro, N. M. R. Peres, J. M. B. L. Dos Santos, F. Guinea, and A. H. C. Neto, “Bilayer graphene: gap tunability and edge properties,” Journal of Physics: Conference Series, vol. 129, no. 1, Article ID 012002, 2008. View at Publisher · View at Google Scholar · View at Scopus 29. F. Guinea, A. H. C. Neto, and N. M. R. Peres, “Interaction effects in single layer and multi-layer graphene,” European Physical Journal, vol. 148, no. 1, pp. 117–125, 2007. View at Publisher · View at Google Scholar · View at Scopus 30. H. Sadeghi, S. M. Mousavi, M. Rahmani, M. T. Ahmadi, and R. Ismail, “Bilayer graphene nanoribbon transport model,” in Advanced Nanoelectronics, chapter 7, Taylor & Francis, New York, NY, USA, 31. S. Datta, Quantum Transport: Atom to Transistor, Cambridge University Press, New York, NY, USA, 2005. 32. M. Rahmani, M. T. Ahmadi, H. Karimi, M. Saeidmanesh, E. Akbari, and R. Ismail, “Analytical modeling of trilayer graphene nanoribbon Schottky-barrier FET for high-speed switching applications,” Nanoscale Research Letters, vol. 8, no. 1, article 55, 2013. View at Publisher · View at Google Scholar 33. H. Zeng, J. Zhao, J. W. Wei, and H. F. Hu, “Effect of N doping and Stone-Wales defects on the electronic properties of graphene nanoribbons,” The European Physical Journal B, vol. 79, no. 3, pp. 335–340, 2011. View at Publisher · View at Google Scholar 34. J. Kotakoski, A. V. Krasheninnikov, U. Kaiser, and J. C. Meyer, “From point defects in graphene to two-dimensional amorphous carbon,” Physical Review Letters, vol. 106, no. 10, Article ID 105505, 4 pages, 2011. View at Publisher · View at Google Scholar · View at Scopus 35. A. H. C. Neto, F. Guinea, N. M. R. Peres, K. S. Novoselov, and A. K. Geim, “The electronic properties of graphene,” Reviews of Modern Physics, vol. 81, no. 1, pp. 109–162, 2009. View at Publisher · View at Google Scholar · View at Scopus 36. L. Chen, H. Hu, Y. Ouyang, H. Z. Pan, Y. Y. Sun, and F. Liu, “Atomic chemisorption on graphene with Stone-Thrower-Wales defects,” Carbon, vol. 49, no. 10, pp. 3356–3361, 2011. View at Publisher · View at Google Scholar · View at Scopus 37. J. Ma, D. Alfè, A. Michaelides, and E. Wang, “Stone-Wales defects in graphene and other planar sp^2-bonded materials,” Physical Review B, vol. 80, no. 3, Article ID 033407, 4 pages, 2009. View at Publisher · View at Google Scholar 38. Y. J. Sun, F. Ma, D. Y. Ma, K. W. Xu, and P. K. Chu, “Stress-induced annihilation of Stone-Wales defects in graphene nanoribbons,” Journal of Physics D, vol. 45, no. 30, Article ID 305308, 2012. View at Publisher · View at Google Scholar 39. M. Rahmani, R. Ismail, M. T. Ahmadi, M. J. Kiani, and K. Rahmani, “Carrier velocity in high-field transport of trilayer graphene nanoribbon field effect transistor,” Science of Advanced Materials . In press. 40. M. Rahmani, M. T. Ahmadi, H. F. A. Karimi, M. J. kiani, E. Akbari, and R. Ismail, “Analytical modeling of monolayer graphene-based NO[2] sensor,” Sensor Letters, vol. 11, no. 2, pp. 270–275, 2013. View at Publisher · View at Google Scholar
{"url":"http://www.hindawi.com/journals/jnm/2013/636239/","timestamp":"2014-04-18T07:28:35Z","content_type":null,"content_length":"171478","record_id":"<urn:uuid:698e78b1-7dd3-4628-9469-1cfe2522cae4>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00290-ip-10-147-4-33.ec2.internal.warc.gz"}
Missing At Random (MAR) After considering MCAR, a second question naturally arises. That is, what are the most general conditions under which a valid analysis can be done using only the observed data, and no information about the missing value mechanism, Pr(r | y[o], y[m])? The answer to this is when, given the observed data, the missingness mechanism does not depend on the unobserved data. Mathematically, Pr(r | y[o], y[m]) = Pr(r | y[o]). This is termed Missing At Random, abbreviated MAR. This is equivalent to saying that the behaviour of two units who share observed values have the same statistical behaviour on the other observations, whether observed or not. For example: As units 1 and 2 have the same values where both are observed, given these observed values, under MAR, variables 3, 5 and 6 from unit 2 have the same distribution (NB not the same value!) as variables 3, 5 and 6 from unit 1. Note that under MAR the probability of a value being missing will generally depend on observed values, so it does not correspond to the intuitive notion of 'random'. The important idea is that the missing value mechanism can expressed solely in terms of observations that are observed. Unfortunately, this can rarely be definitively determined from the data at hand! Examples of MAR mechanisms • A subject may be removed from a trial if his/her condition is not controlled sufficiently well (according to pre-defined criteria on the response). • Two measurements of the same variable are made at the same time. If they differ by more than a given amount a third is taken. This third measurement is missing for those that do not differ by the given amount. A special case of MAR is uniform non-response within classes. For example, suppose we seek to collect data on income and property tax band. Typically, those with higher incomes may be less willing to reveal them. Thus, a simple average of incomes from respondents will be downwardly biased. However, now suppose we have everyone's property tax band, and given property tax band non-response to the income question is random. Then, the income data is missing at random; the reason, or mechanism, for it being missing depends on property band. Given property band, missingness does not depend on income itself. Therefore, to get an unbiased estimate of income, we first average the observed income within each property band. As data are missing at random given property band, these estimates will be valid. To get an estimate of the overall income, we simply combine these estimates, weighting by the proportion in each property band. In this example, a simple summary statistic (average of observed incomes) was biased. Conversely, a simple model (estimate of income conditional on property band), where we condition on the variable that makes the data MAR, led to a valid result. This is an example of a more general result. Methods based on the likelihood are valid under MAR. However, in general non-likelihood methods (e.g. based on completers, moments, estimating equations & including generalised estimating equations) are not valid under MAR, although some can be 'fixed up'. In particular, ordinary means, and other simple summary statistics from observed data, will be Finally, note that in a likelihood setting the term ignorable is often used to refer to and MAR mechanism. It is the mechanism (i.e. the model for Pr(R | y[o])) which is ignorable - not the missing
{"url":"http://missingdata.lshtm.ac.uk/index.php?option=com_content&view=article&id=76%3Amissing-at-random-mar&catid=40%3Amissingness-mechanisms&Itemid=96","timestamp":"2014-04-16T17:10:38Z","content_type":null,"content_length":"17699","record_id":"<urn:uuid:adebcaa4-c58b-4b09-9368-2373672fda9d>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00074-ip-10-147-4-33.ec2.internal.warc.gz"}
Arthur Cayley Cayley, Arthur (kāˈlē) [key], 1821–95, English mathematician. He was admitted to the bar in 1849. In 1863 he was appointed first Sadlerian professor of mathematics at Cambridge. His researches, which covered the field of pure mathematics, included especially the theory of matrices and the theory of invariants. The algebra of matrices was the tool Heisenberg used in 1925 for his revolutionary work in quantum mechanics. The concept of invariance is important in modern physics, particularly in the theory of relativity. Cayley's collected papers were published in 13 volumes (1889–98). The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved. More on Arthur Cayley from Fact Monster: • James Joseph Sylvester - Sylvester, James Joseph Sylvester, James Joseph, 1814–97, English mathematician. He studied ...
{"url":"http://www.factmonster.com/encyclopedia/people/cayley-arthur.html","timestamp":"2014-04-19T00:32:24Z","content_type":null,"content_length":"20153","record_id":"<urn:uuid:e55aa1d3-5db7-4223-96c7-637a08ee979d>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00214-ip-10-147-4-33.ec2.internal.warc.gz"}
Homework Help Posted by Julia on Wednesday, April 27, 2011 at 12:19pm. A trap door, of length and width 1.65 m, is held open at an angle of 65.0 degrees with respect to the floor. A rope is attached to the raised edge of the door and fastened to the wall behind the door in such a position that the rope pulls perpendicularly to the trap door. If the mass of the trap door is 16.8 kg, what is the torque exerted on the trap door by the rope? I've seen several incorrect answers to this exact question when I searched online. I thought I would provide the correct general method on how to do this problem and give you the answer you should get if it is done correctly. (see below for method) • Physics - Julia, Wednesday, April 27, 2011 at 12:33pm All right. Think of the trap door as a bar and draw a free-body diagram (FBD) with it as such. Assuming this is a uniform system, the center of mass will also be at the center of the door and that is also where gravity will be pulling on the door. Since the door is!@#$%^&ed at an angle to the floor, the pull of gravity will also be at an angle. Calculate this and make sure you draw the gravity in the correct orientation in your FBD. The force of the rope is obviously perpendicular to the door, so keep that in mind. Now, the system is in equilibrium, therefore, the sum of the torques due to gravity (Tg) and the torque due to the rope (Tc) must be equal to zero. Now, you have to find the force of the rope pulling on the door from this equation, using the fact that you know that T=rFsin(x). where r is radius from hinge to rope, F is force of rope pulling on door, and 90 is the angle of force to door where r is radius from hinge to center mass, F is force due to gravity, and x is angle at which you calculated gravity was pulling on the door. Rearrange to get force due to rope This gives you the force with which the rope pulls on the door. Now, you have a force you can plug into your torque equation T=rFsin(x). Plug it in and solve for torque. For this particular problem using these values, you should get that the torque exerted on the trap door is equal to 57.4 N*m. I hope this helps any of you who might be struggling with this Related Questions Physics - trap door of length and width 1.65 m is held open at an angle of 65.0 ... Physics - trap door of length and width 1.65 m is held open at an angle of 65.0 ... Physics - Assume cathy has to drag her suitcase along the floor of the airport ... Physics - Assume cathy has to drag her suitcase along the floor of the airport ... Physics - A rope is used o pull a metal box 15.0m across the floor. The rope is ... Physics - A 46.4-N force is applied to the outer edge of a door of width 1.26 m ... physics - A rope of negligible mass passes over a puley of a negligible mass ... physics - A trapdoor on a stage has a mass of 20.8 kg and a width of 1.51 m (... physics - A trapdoor on a stage has a mass of 20.8 kg and a width of 1.51 m (... Physics - Beth, a construction worker, attempts to pull a stake out of the ...
{"url":"http://www.jiskha.com/display.cgi?id=1303921184","timestamp":"2014-04-16T04:27:12Z","content_type":null,"content_length":"10305","record_id":"<urn:uuid:de867c16-260a-4545-a4ec-054bdd842679>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00242-ip-10-147-4-33.ec2.internal.warc.gz"}
About Time Trial Triumphs Author: Steve Risberg, Terry Trotter, and Annie Fetter Description: Algebra, difficulty level 2. At 9:05 a.m. Ernesto starts riding at a speed of 440 yards per minute. At 9:20 a.m. Johnny starts riding, but he's going 20% faster than his little brother. At what time, to the nearest minute, will Johnny catch Ernesto? Please Note: Use of the following materials requires membership. Please see the Problem of the Week membership page for more information. Problem page: /library/go.html?destination=3004 Support page: Online Resource Page #3004 Solution page: Problem #3004
{"url":"http://mathforum.org/library/problems/more_info/65333","timestamp":"2014-04-16T05:00:16Z","content_type":null,"content_length":"5897","record_id":"<urn:uuid:9ef95253-2673-4bf7-872f-7aa4d624bb78>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00663-ip-10-147-4-33.ec2.internal.warc.gz"}
4th order differential equation?? April 21st 2010, 03:50 PM #1 Junior Member Jan 2010 d4y/dx4 - 8( d2y/dx2) + 16y = 0 This is on one of my '2nd order differential equations' problem sheets and its a of fourth order leaving me a bit confused. Ive tried reducing the order but I cant get it. Is there a rule/theorem Ive missed?? A helping hand would be greatly appreciated Why do you need to reduce it? 1) Create a system z = d2y/dx2. What does that do after some substitution? 2) Factor the characteristic polynomial in ignorance ==> Yea I tried that method and it was fine for the first 2 terms but I didnt know what to do with the 16y term, integrating twice seems a bit messy?? so i get z'' -8z + 16y= 0 How to convert the y value?? Did you factor the Characteristic Polynomial? That's a great place to start. April 21st 2010, 05:24 PM #2 MHF Contributor Aug 2007 April 22nd 2010, 03:20 AM #3 Junior Member Jan 2010 April 22nd 2010, 09:08 PM #4 MHF Contributor Aug 2007
{"url":"http://mathhelpforum.com/differential-equations/140582-4th-order-differential-equation.html","timestamp":"2014-04-18T15:17:08Z","content_type":null,"content_length":"38358","record_id":"<urn:uuid:655c2690-4e55-4288-a5f3-9148a34251aa>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00117-ip-10-147-4-33.ec2.internal.warc.gz"}
Theory of Computation Theory of Computation, V22.0453.01 Final exam date. Thursday December 21, 10:00-11:50am, room 513 WWH Instructor. Richard Cole, 430WWH, tel: 998-3119, cole@cs.nyu.edu. Class time. 9:30-10:45pm, Tuesday/Thursday, room 513, Warren Weaver Hall. First meeting: Tuesday, September 5. Midterm Date: Tuesday, October 24, in class. Office hours. Tuesday/Thursday 11-12, and by appointment. Mailing list, home page. There is a class mailing list; please join it (see http://www.cs.nyu.edu/mailman/listinfo/v22_0453_001_fa06); it is intended for discussion of course related materials and announcements if there are any (to subscribe, follow the instructions on the mailing list web page). The course home page can be accessed from the department home page (http://www.cs.nyu.edu/) by following the links to course home pages and then to this course, or directly at http://www.cs.nyu.edu/courses/fall06/V22.0453-001/index.html Course Goal and Syllabus. The goal of this class is to develop the ability to evaluate and write mathematical claims in computer science, so as to be able to: Judge when a problem is solved (and equally important, when it is not yet solved). Explain clearly and precisely why an algorithm is correct and what it computes. The specific topics covered will include proofs techniques, finite automata and regular languages, pushdown automata and context free languages, Turing Machines and decidable and undecidable problems, and NP-completeness. Assignments. There will be more or less weekly homeworks comprising problems drawn from the textbook and elsewhere. Late homeworks will not be accepted (except in the event of illness or other unavoidable circumstances). If for some reason you will be unable to hand in a homework on time, please discuss it with me beforehand. While you may discuss homework problems with your fellow students, you must write up your solutions in your own words. Be aware that you are unlikely to perform well on exams unless you gain practice at problem solving on the homeworks. Academic Integrity. Please take note of the course and departmental policy on this matter: http://www.cs.nyu.edu/web/Academic/Undergrad/academic_integrity.html Assessment. The homeworks will comprise 40% of the overall grade, the midterm 20% and the final 40%. However, if the grade on the final is better than the midterm grade it will replace the midterm grade. Exams will be closed book. Required text. Michael Sipser, Introduction to the Theory of Computation, Thomson. The second edition has some modest advantages in that in includes solutions to a selection of problems; however, it is OK to use the first edition. Another text. Daniel I.A. Cohen, Introduction to Computer Thoery. This text provides a lot of examples and can be quite helpful. However, it does not cover material for the whole course, and the approach it takes differs in places from the one being used in this course. Homework Details. I encourage you to handwrite your homework, legibly of course, rather than typeset it. In my experience, when typesetting, often too much effort is spent on the appearance of the homework and minor yet significant errors are overlooked. Also, if your homework solution has multiple pages, please staple them; please don't fold down the corners or use paperclips, for the pages are much more likely to come apart. Finally, if handwriting, please use an easy to read ink color (blue or black, not red or green). Homeworks and handouts. Homework 1 Homework 2 Homework 3 Homework 4 Homework 5 Homework 6 Homework 7 Homework 8 Homework 9 Homework 10 Homework 11 Homework 12 Homework 13 Homework 14 Sample Midterm Sample Final Turing Machines handout Finite Automata handout Finite Automata, Part 2 handout Finite Automata, Part3 handout Last modified: December 13, 2006
{"url":"http://www.cs.nyu.edu/courses/fall06/V22.0453-001/index.html","timestamp":"2014-04-16T19:28:43Z","content_type":null,"content_length":"6840","record_id":"<urn:uuid:7985a592-2050-4244-ada4-3af1d3a62352>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00428-ip-10-147-4-33.ec2.internal.warc.gz"}
A class of symmetric polynomials with a parameter Results 1 - 10 of 19 - TRANS. AMER. MATH. SOC , 1996 "... A power series is introduced that is an extension to three sets of variables of the Cauchy sum for Jack symmetric functions in the Jack parameter α. We conjecture that the coefficients of this series with respect to the power sum basis are nonnegative integer polynomials in b, the Jack parameter sh ..." Cited by 11 (6 self) Add to MetaCart A power series is introduced that is an extension to three sets of variables of the Cauchy sum for Jack symmetric functions in the Jack parameter α. We conjecture that the coefficients of this series with respect to the power sum basis are nonnegative integer polynomials in b, the Jack parameter shifted by 1. More strongly, we make the Matchings-Jack Conjecture, that the coefficients are counting series in b for matchings with respect to a parameter of nonbipartiteness. Evidence is presented for these conjectures and they are proved for two infinite families. The coefficients of a second series, essentially the logarithm of the first, specialize at values 1 and 2 of the Jack parameter to the numbers of hypermaps in orientable and locally orientable surfaces, respectively. We conjecture that these coefficients are also nonnegative integer polynomials in b, andwemake the Hypermap-Jack Conjecture, that the coefficients are counting series in b for hypermaps in locally orientable surfaces with respect to a parameter of nonorientability. "... In this paper we present a Maple library (MOPs) for computing Jack, Hermite, Laguerre, and Jacobi multivariate polynomials, as well as eigenvalue statistics for the Hermite, Laguerre, and Jacobi ensembles of Random Matrix theory. We also compute multivariate hypergeometric functions, and offer both ..." Cited by 10 (6 self) Add to MetaCart In this paper we present a Maple library (MOPs) for computing Jack, Hermite, Laguerre, and Jacobi multivariate polynomials, as well as eigenvalue statistics for the Hermite, Laguerre, and Jacobi ensembles of Random Matrix theory. We also compute multivariate hypergeometric functions, and offer both symbolic and numerical evaluations for all these quantities. We prove that all algorithms are well-defined, analyze their complexity, and illustrate their performance in practice. Finally, we also present a few of the possible applications of this library. , 2003 "... The relationship (resemblance and/or contrast) between quantum and classical integrability in Ruijsenaars-Schneider systems, which are one parameter deformation of Calogero-Moser systems, is addressed. Many remarkable properties of classical Calogero and Sutherland systems (based on any root system) ..." Cited by 8 (5 self) Add to MetaCart The relationship (resemblance and/or contrast) between quantum and classical integrability in Ruijsenaars-Schneider systems, which are one parameter deformation of Calogero-Moser systems, is addressed. Many remarkable properties of classical Calogero and Sutherland systems (based on any root system) at equilibrium are reported in a previous paper (Corrigan-Sasaki). For example, the minimum energies, frequencies of small oscillations and the eigenvalues of Lax pair matrices at equilibrium are all “integer valued”. In this paper we report that similar features and results hold for the Ruijsenaars-Schneider type of integrable systems based on the classical root systems. - in the proceedings of the Workshop on superintegrability in classical and quantum systems, ed. P Winternitz, CRM series "... A new generalization of the Jack polynomials that incorporates fermionic variables is presented. These Jack superpolynomials are constructed as those eigenfunctions of the supersymmetric extension of the trigonometric Calogero-Moser-Sutherland (CMS) model that decomposes triangularly in terms of the ..." Cited by 7 (6 self) Add to MetaCart A new generalization of the Jack polynomials that incorporates fermionic variables is presented. These Jack superpolynomials are constructed as those eigenfunctions of the supersymmetric extension of the trigonometric Calogero-Moser-Sutherland (CMS) model that decomposes triangularly in terms of the symmetric monomial superfunctions. Many explicit examples are displayed. Furthermore, various new results have been obtained for the supersymmetric version of the CMS models: the Lax formulation, the construction of the Dunkl operators and the explicit expressions for the conserved charges. The reformulation of the models in terms of the exchange-operator formalism is a crucial aspect of our analysis. - Markov Processes and Related Fields "... We propose to abandon the notion that a random matrix has to be sampled for it to exist. Much of today's applied nite random matrix theory concerns real or complex random matrices (β = 1, 2). The threefold way so named by Dyson in 1962 [2] adds quaternions (β = 4). While it is true there are only th ..." Cited by 3 (3 self) Add to MetaCart We propose to abandon the notion that a random matrix has to be sampled for it to exist. Much of today's applied nite random matrix theory concerns real or complex random matrices (β = 1, 2). The threefold way so named by Dyson in 1962 [2] adds quaternions (β = 4). While it is true there are only three real division algebras (β = dimension over the reals), this mathematical fact while critical in some ways, in other ways is irrelevant and perhaps has been over interpreted over the decades. We introduce the notion of a ghost random matrix quantity that exists for every beta, and a shadow quantity which may be real or complex which allows for computation. Any number of computations have successfully given reasonable answers to date though di culties remain in some cases. Though it may seem absurd to have a three and a quarter dimensional or pi dimensional algebra, that is exactly what we propose and what we compute with. In the end β becomes a noisiness parameter rather than a dimension. 1 "... In this paper we construct a discrete linear operator K which transforms A2 Macdonald polynomials into the product of two basic 3φ2 hypergeometric series with known arguments. The action of the operator K on power sums in two variables can be reduced to a generalization of one particular case of the ..." Cited by 1 (0 self) Add to MetaCart In this paper we construct a discrete linear operator K which transforms A2 Macdonald polynomials into the product of two basic 3φ2 hypergeometric series with known arguments. The action of the operator K on power sums in two variables can be reduced to a generalization of one particular case of the Bailey’s summation formula for a very-well-poised 6ψ6 series. We also propose the conjecture for a transformation of 6ψ6 series with different arguments. 1 "... Abstract. We consider a deformation of Kerov character polynomials, linked to Jack symmetric functions. It has been introduced recently by M. Lassalle, who formulated several conjectures on these objects, suggesting some underlying combinatorics. We give a partial result in this direction, showing t ..." Cited by 1 (0 self) Add to MetaCart Abstract. We consider a deformation of Kerov character polynomials, linked to Jack symmetric functions. It has been introduced recently by M. Lassalle, who formulated several conjectures on these objects, suggesting some underlying combinatorics. We give a partial result in this direction, showing that some quantities are polynomials in the Jack parameter α with prescribed degree. Our result has several interesting consequences in various directions. Firstly, we give a new proof of the fact that the coefficients of Jack polynomials expanded in the monomial or power-sum basis depend polynomially in α. Secondly, we describe asymptotically the shape of random Young diagrams under some deformation of Plancherel measure. Résumé. On considère une déformation des polynômes de Kerov pour les caractères du groupe symétrique. Cette déformation est liée aux polynômes de Jack. Elle a été récemment définie par M. Lassalle, qui a proposé plusieurs conjectures sur ces objets, suggérant ainsi l’existence d’une combinatoire sous-jacente. Nous donnons un résultat partiel dans cette direction, en montrant que certaines quantités sont des polynômes (dont on contrôle les degrés) en fonction du paramètre de Jack α. Notre résultat a des conséquences intéressantes dans des directions diverses. Premièrement, nous donnons une nouvelle preuve de la polynomialité (toujours en fonction de α) des coefficients du développement des polynômes de Jack dans la base monomiale. Deuxièmement, nous décrivons asymptotiquement la forme de grands diagrammes de Young distribués selon une déformation de la mesure de Plancherel. , 2003 "... By solving the two variable differential equations which arise from finding the eigenfunctions for the Casimir operator for O(d, 2) succinct expressions are found for the functions, conformal partial waves, representing the contribution of an operator of arbitrary scale dimension ∆ and spin ℓ togeth ..." Add to MetaCart By solving the two variable differential equations which arise from finding the eigenfunctions for the Casimir operator for O(d, 2) succinct expressions are found for the functions, conformal partial waves, representing the contribution of an operator of arbitrary scale dimension ∆ and spin ℓ together with its descendants to conformal four point functions for d = 4, recovering old results, and also for d = 6. The results are expressed in terms of ordinary hypergeometric functions of variables x, z which are simply related to the usual conformal invariants. An expression for the conformal partial wave amplitude valid for any dimension is also found in terms of a sum over two variable symmetric Jack polynomials which is used to derive relations for the conformal partial waves. PACS no:
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=321135","timestamp":"2014-04-16T22:45:34Z","content_type":null,"content_length":"35711","record_id":"<urn:uuid:ed027d55-c1b9-4610-9004-379320bb914f>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00184-ip-10-147-4-33.ec2.internal.warc.gz"}
Help on this inequation July 17th 2010, 03:20 AM #1 Junior Member Jun 2010 Help on this inequation $\sqrt{-x-2}-\sqrt[3]{x+5}<3$ In this inequation what i'd usually do is bring -x-2 and x+5 to the "same root" that's 6. But i don't think i'm going anywhere because of that 3 on the right. What would you suggest me to do here? the condition is $x\in(-\infty,-2]$ Have you tried graphing the functions $y = \sqrt{-x-2} - \sqrt[3]{x + 5}$ and $y = 3$ and determining the $x$ values for which this inequality holds true? i need to prepare for an exam, i don't think doing graphs will help me if i have to solve a similar inequation Solve the equation $\sqrt{-2-x}+ \sqrt[3]{x+ 5}= 3$. The points where those are equal or where the roots are not defined (x must be less than -2, of course) separate ">" from "<". On the contrary, you should NEVER try to solve any equations or inequations without picturing what you are actually dealing with (i.e. graphing). To gain a deep understanding of function analysis, you need to understand and be able to connect the numerical, graphical and algebraic representations of your function. If you want to use the 6th roots $x=-2\ \Rightarrow\ \sqrt{0}-\sqrt[3]{3}<3$ $x=-3\ \Rightarrow\ \sqrt{1}-\sqrt[3]{2}<3$ $x=-4\ \Rightarrow\ \sqrt{2}-\sqrt[3]{1}<3$ $x=-5\ \Rightarrow\ \sqrt{3}-\sqrt[3]{0}<3$ $x=-6\ \Rightarrow\ \sqrt{4}-\sqrt[3]{-1}=3$ July 17th 2010, 03:27 AM #2 July 17th 2010, 03:45 AM #3 Junior Member Jun 2010 July 17th 2010, 05:07 AM #4 MHF Contributor Apr 2005 July 17th 2010, 07:07 AM #5 July 17th 2010, 11:22 AM #6 MHF Contributor Dec 2009
{"url":"http://mathhelpforum.com/pre-calculus/151173-help-inequation.html","timestamp":"2014-04-17T19:11:40Z","content_type":null,"content_length":"49362","record_id":"<urn:uuid:453aba62-fb1a-404f-b6ac-2b5e4a2b2573>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00251-ip-10-147-4-33.ec2.internal.warc.gz"}
Portability portable Stability experimental Maintainer m.p.donadio@ieee.org Safe Haskell Safe-Inferred The module contains a function for performing the bilinear transform. The input is a rational polynomial representation of the s-domain function to be transformed. In the bilinear transform, we substitute 2 1 - z^-1 s <-- -- * -------- ts 1 + z^-1 into the rational polynomial, where ts is the sampling period. To get a rational polynomial back, we use the following method: 1. Substitute s^n with (2/ts * (1-z^-1))^n == [ -2/ts, 2/ts ]^n 2. Multiply the results by (1+z^-1)^n == [ 1, 1 ]^n 3. Add up all of the common terms 4. Normalize all of the coeficients by a0 where n is the maximum order of the numerator and denominator :: Double T_s -> ([Double], [Double]) (b,a) -> ([Double], [Double]) (b',a') Performs the bilinear transform :: Double w_c -> Double T_s -> Double W_c Function for frequency prewarping
{"url":"http://hackage.haskell.org/package/dsp-0.2.2/docs/DSP-Filter-IIR-Bilinear.html","timestamp":"2014-04-16T11:16:54Z","content_type":null,"content_length":"8335","record_id":"<urn:uuid:42f56b5d-1e62-4ed7-8a9b-48cea9ec331f>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00611-ip-10-147-4-33.ec2.internal.warc.gz"}
Factors and Multiples The students need factoring practice. This section will help the children get practice finding factors, least common multiple, and greatest common factor. Factor puzzle and game We are going to begin decimals. This is a section that will help the student become more comfortable with decimals. http://www.harcourtschool.com/glossary/math_advantage/glossary6.html Math terms and animated instructions http://www.quia.com/mathjourney.html Answer math problems right to travel around the world This section will help the students get better aqainted with spatial shapes. They will deal with shapes and everything that goes with shapes. They will get practice finding area. http://www.harcourtschool.com/activity/mmath/mmath_dr_gee.html Nets http://www.mathrealm.com/CD_ROMS/GeometryWorld.php#Try all kinds of geometry games http://www.mathcats.com/explore/sculptor.html sculpting with geometry Fractions are the most confusing concept for children. This is just some practicing games to help the students get the concept better. http://www.harcourtschool.com/activity/mmath/mmath_frac.html Fraction man http://www.funbrain.com/cgi-bin/fob.cgi?A1=a&A2=1&amp;amp;amp;amp;amp;A11=3&A12=5 Fresh Baked Fractions http://teacher.scholastic.com/maven/chili/index.htm Fraction story Cajun Mystery http://teacher.scholastic.com/maven/loudlou/index.htm Fraction Mystery http://www.learnalberta.ca/Launch.aspx?content=%2fcontent%2fmesg%2fhtml%2fmath6web%2fmath6shell.html Fractional Probabiltiy lesson http://www.iknowthat.com/com/L3?Area=FractionGame&COOK= Fraction games This is section about Tesselations and Tangrams. These are just for fun to get the students thinking about geometry. http://www.apples4theteacher.com/square.html This game helps with spatial reasoning. http://www.apples4theteacher.com/chinese-tangrams.html Tangram game http://www.funbrain.com/cgi-bin/poly.cgi figuring area http://www18.big.or.jp/~mnaka/home.index.html Tesselations http://www.madras.fife.sch.uk/maths/homelearning/Ambleside%20Flash/protractor.swf visual angle measurements This is a spy site for remembering the tricks for math facts. Here are math games for whole number operation practice. These are some fun ways to get better at this concept. I would like to welcome you to my math blog. I hope that your family enjoys and finds this blog helpful. Mrs. Sanderson
{"url":"http://www.mad4math.blogspot.com/","timestamp":"2014-04-18T20:43:17Z","content_type":null,"content_length":"34306","record_id":"<urn:uuid:71a00486-41f5-4879-8369-f85333c4c31c>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00190-ip-10-147-4-33.ec2.internal.warc.gz"}
Spreadsheet Tutorials Spreadsheets are useful for: • organizing and presenting data • for visualizing data using charts • for data analysis of all kinds. In this series of tutorials we demonstrate spreadsheet skills and techniques that you can use for data management and analysis using Microsoft Excel 2000. Much of the material covered here is conceptual and will apply in any version of Excel, or some other spreadsheet. In a few cases what you’ll see in your spreadsheet may be different and you’ll need to find the equivalent command or feature in your spreadsheet. The introduction describes the terms used throughout the series. Terms and Techniques The Introduction Basic Skills: Organizing, Formatting, Importing Data Part One: Moving a table Part Two: Formatting the table Part Three: Organizing and Sorting Making use of data copied from a web page in Excel Intermediate Skills: Working with Functions Basic Techniques Cell Referencing Intermediate Skills: Charting Create A Basic Chart Understanding the Chart Wizard Line Charts: the basics Line Charts: Advanced techniques and comparison with XY charts XY (Scatter) charts: the basics XY (Scatter) charts: plotting a function Pie Charts Histograms Part One: Creating a histogram manually Histograms Part Two: Using the Data Analysis Toolpak Installing the Data Analysis Toolpak Pareto Charts: Create a Pareto chart Advanced Skills: Statistics and Data Analysis Summary Statistics Inference From Charts: Trend Lines and correlation Regression Analysis The Least Squares Method
{"url":"http://commons.esc.edu/smatresources/category/technology/spreadsheet-tutorials/","timestamp":"2014-04-16T18:56:24Z","content_type":null,"content_length":"55243","record_id":"<urn:uuid:b32ec948-448d-4962-b53c-585cdf958993>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00181-ip-10-147-4-33.ec2.internal.warc.gz"}
Zentralblatt MATH Publications of (and about) Paul Erdös Zbl.No: 622.10041 Autor: Erdös, Paul; Nathanson, Melvyn B. Title: Problems and results on minimal bases in additive number theory. (In English) Source: Number theory, Semin. New York 1984/85, Lect. Notes Math. 1240, 87-96 (1987). Review: [For the entire collection see Zbl 605.00005.] Let h in N. A set A of nonnegative integers such that every sufficiently large integer is representable as the sum of h elements of A is called an asymptotic basis of order h. In this paper a number of old and new problems concerning the classical asymptotic bases (squares, higher powers, primes) as well as the general properties of asymptotic bases are discussed. Of special interest are questions about the existence of minimal bases A, i.e. A is an asymptotic basis of order h but no proper subset of A has this property. Reviewer: J.Zöllner Classif.: * 11B13 Additive bases 11B83 Special sequences of integers and polynomials 11P05 Waring's problem and variants 11P05 Waring's problem and variants 11P32 Additive questions involving primes 00A07 Problem books Keywords: sums of squares; sums of higher powers; sums of primes; asymptotic basis; problems; minimal bases Citations: Zbl 605.00005 © European Mathematical Society & FIZ Karlsruhe & Springer-Verlag │Books │Problems │Set Theory │Combinatorics │Extremal Probl/Ramsey Th. │ │Graph Theory │Add.Number Theory│Mult.Number Theory│Analysis │Geometry │ │Probabability│Personalia │About Paul Erdös │Publication Year│Home Page │
{"url":"http://www.emis.de/classics/Erdos/cit/62210041.htm","timestamp":"2014-04-20T18:51:29Z","content_type":null,"content_length":"4457","record_id":"<urn:uuid:87240267-14dc-4d96-935d-2c4aae63147f>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00564-ip-10-147-4-33.ec2.internal.warc.gz"}
Frame works help April 21st 2011, 06:49 AM #1 Frame works help Basic maths frameworks How to find the reaction force on B in below frameworks? step by step explanation would be appreciated because I can't understand my freak tutorial ! Basic maths frameworks How to find the reaction force on B in below frameworks? step by step explanation would be appreciated because I can't understand my freak tutorial ! hello silvercats! it'd be better if you post the exact problem. is there an equilateral triangle hanging freely? have you shown the free body diagram there? this problem cannot be solved until you specify this. Also at B you have listed two different coordinate systems. You have to choose one of them and then stick with it, else you will get confusing answers. AB and BC bars are connected to B smoothly(no friction).A and C corners are fixed to 2a long two pints which is horizontal.is this ok? To be clear,here is the answer they have given but i don't understand AB ~ Y(2a cos 60)-x(2a Sin 60)-w(a cos 60) = 0 ~ Y-(√3)x = w/2 ------- (1) CD ~ y(2a cos 60)+x(2a Sin 60) + w(a cos 60)=0 ~ Y+(√3)x=-w/x -----------(2) and they solve it and I am like :O Hmmmm....I had taken the x and y to be proposed coordinates. Are the x and y labeling reaction forces at B? the forces applied to AB at the point A by the rod AC has not been considered. why? consider this force too in the FBD. It will have two components. same story will be there with point C on rod BC. again two more unknown forces are introduced. we have a total of 6 unknowns now. x,y, two components of force on point A, and two components of force on point B. you must be aware of the fact that we can have three equations of statics for a rigid body in two dimensions.Consider rod AB, these three equations can be: 1) summation of torques about point A is zero. 2)summation of torques about point B is zero. 3)vector sum of the forces acting on AB is zero. similarly we will have 3 more equations when we consider rod BC's statics. hence 6 equations and six unknowns. Did this help? To be clear,here is the answer they have given but i don't understand AB ~ Y(2a cos 60)-x(2a Sin 60)-w(a cos 60) = 0 LHS is summation of torques about A and this is set equal to zero since AB is in equilibrium ~ Y-(√3)x = w/2 ------- (1) CD ~ y(2a cos 60)+x(2a Sin 60) + w(a cos 60)=0 LHS is summation of torques about C and this is set equal to zero since BC is in equilibrium. ~ Y+(√3)x=-w/x -----------(2) and they solve it and I am like :O note that even though the FBD is not complete( see my previous post) yet these equations give right answer because the torques of the forces acting at point A( and C) are zero for rod AB(and BC). did this help? April 21st 2011, 07:04 AM #2 April 21st 2011, 07:33 AM #3 April 21st 2011, 07:52 AM #4 April 21st 2011, 07:53 AM #5 April 21st 2011, 08:00 AM #6 April 21st 2011, 11:30 AM #7 April 21st 2011, 05:03 PM #8 April 21st 2011, 07:23 PM #9 April 21st 2011, 07:37 PM #10
{"url":"http://mathhelpforum.com/math-topics/178249-frame-works-help.html","timestamp":"2014-04-23T23:21:13Z","content_type":null,"content_length":"60695","record_id":"<urn:uuid:43a63e55-c0e3-4727-863d-0b517210129c>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00048-ip-10-147-4-33.ec2.internal.warc.gz"}
Which of the following is the binary coded decimal equivalent of 13? a. D... - Homework Help - eNotes.com Which of the following is the binary coded decimal equivalent of 13? a. D b. 1101 c. 00010010 d. 00010011 The binary coded decimal format is widely used in computing and in electronics. It is a method of encoding where each digit of the decimal number is represented by its binary sequence. The digits from 0 to 9 can be represented as sequences of 4 bits. 0 is 0000 1 is 0001 9 is 0111 This encoding occupies more space for the same number but is easier to use in decimal operations and when encoding has been done to print characters or display them. For 13 the the binary encoding uses the sequence for 1 which is 0001 followed by that for 3 which is 0011. The resultant BCD code is 00010011. The correct option is d. Join to answer this question Join a community of thousands of dedicated teachers and students. Join eNotes
{"url":"http://www.enotes.com/homework-help/bcd-binary-code-division-equivalent-13-10-255720","timestamp":"2014-04-21T00:25:38Z","content_type":null,"content_length":"27941","record_id":"<urn:uuid:abe83227-cc2a-4587-94d0-3d9d755cf3e6>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00560-ip-10-147-4-33.ec2.internal.warc.gz"}
An Algebraic Approach to Internet Routing Principal lecturer: Dr Timothy Griffin Taken by: MPhil ACS • Approach A great deal of of interesting work was done in the 1970s in generalizing shortest path algorithms to a wide class of semirings — called ”path algebras” or ”dioids”. Although the evolution of Internet Routing protocols does not seem to have taken much inspiration from this work, recent reverse engineering efforts have demonstrated that an algebraic approach is very useful for both understanding existing protocols and for exploring the design space of future Internet routing protocols. This course is intended present the basic mathematics needed to understand this approach. No previous background will be assumed. The course will start from scratch and end with open research problems. Many examples inspired by Internet Routing will be presented along the The course takes a high-level, top-down approach to the analysis and design of Internet routing protocols. We ask "WHAT problem is being solved?" before asking "HOW do we solve it?". • Goals On completion of this module students should: □ understand the basics of semigroups and semirings, □ be able to reason about and prove various properties of such algebraic structures, □ understand applications of semirings and related structures in diverse fields of computer science and operations research, □ have a deeper understanding of the applications of semirings and related structures to Internet routing. • (Evolving) Outline □ pre-term recommended reading : [BC1975] □ Week 1: Basics General introduction. Brief account of Internet routing. Introduction to semigroups, associated order relations, and semirings. [BC1975,GM2008,BT2010] □ Week 2: More on semirings Solving path problems in graphs using matrix methods. [BC1975,GM2008,BT2010] □ Week 3: constructions Semiring Constructions. [JS2002,GM2008,GG2007,RM2006]. Focus on lexicographic product [GG2007] and why "bisemigroup" framework [GG2008] is easier to work with for building semirings than directly implementing Manger's approach [RM2006]. □ Week 4: Review and Semimodules Modeling route redistribution with semimodules [BG2009]. □ Week 5: A mini-metalanguage for bisemigroups Applying some of the ideas of [GG2007] to semirings/bisemigroups, but with necessary AND sufficient conditions for each property of interest. □ Week 6: Beyond Semirings Algebras of Monoid Endomprhisms (AMEs) [GM2008]. Using AMEs to define scoped product (a metric-dependent partition). Then we consider metric-neutral partitions. □ Week 7: Living without distributivity Global vs. local optimality. Using Dijkstra's algorithm for computing local optima [SG2010]. Matrix methods, algorithms in the Bellman-Ford family, distributed versions [JS2003,JS2005,GG2008,SG2010]. □ Week 8: Return to mini-metalanguage • Lecture Slides □ Slides from last year : http://www.cl.cam.ac.uk/teaching/0910/L11/. □ Slides rewritten for this year will be produced and posted here during the term. ☆ Lectures 1-4: (Updated 18 Oct.) One slide per page : L11_2010_01_04.pdf. Two slides per page : L11_2010_01_04_2up.pdf. ☆ Lectures 5 and 6: One slide per page : L11_2010_05_06.pdf. Two slides per page : L11_2010_05_06_2up.pdf. ☆ Lecture 7 : Review, no slides. ☆ Lecture 8: Semimodules and route redistribution. One slide per page : L11_2010_lecture_08.pdf. Two slides per page : L11_2010_lecture_08_2up.pdf. ☆ Lecture 9: A mini-metalanguage. One slide per page : L11_2010_lec09.pdf. ☆ Lectures 10 and 11: Algebras of Monoid Endomprhisms (AMEs). One slide per page : L11_2010_lec10.pdf. Two slides per page : L11_2010_lec10_2up.pdf. ☆ Lectures 12: Clean (metric-neutral) partitions. One slide per page : L11_2010_lec12.pdf. Two slides per page : L11_2010_lec12_2up.pdf. ☆ Lectures 13 and 14: Routing in Equilibrium : L11_2010_lec13_14.pdf. (Corrections made on Friday, 26 November). • Assessment Problem sets are here L11_problem_sets_2010.pdf (currently contains only problem set 1). □ set 1 due 22 October, 2010. □ set 2 due 12 November, 2010. □ set 3 due 1 December, 2010. □ set 4 due 15 December, 2010. • Core Bibliography Clearly we will not have time to cover all of these papers in depth! □ [BC1975] Regular Algebra Applied to Path-Finding Problems. R.C. Backhouse and B.A.Carr J.Inst.Maths.Applics (1975) 15, 161–186. □ [GM2008] Graphs, Dioids and Semirings : New Models and Algorithms, by Michel Gondran , Michel Minoux. (On reserve in Lab Library) Chapter 8 is on-line : Collected Examples of Monoids, (Pre) -Semirings and Dioids. □ [BT2010] Path problems in networks. John S. Baras and George Theodorakopoulos. Morgan and Claypool, 2010. (On reserve in Lab Library) □ [GS2005] Metarouting. Timothy G. Griffin and João Luís Sobrinho. SIGCOMM 2005. □ [RM2006] R.Manger, A Catalogue of Useful Composite Semirings for Solving Path Problems in Graphs. Proceedings of the 11th International Conference on Operational Research, KOI 2006, Pula, Croatia, September 27 – 29, 2006. Page 13 of http://www.oliver.efpu.hr/koi06/koi06_proceedings.pdf □ [GG2007] Lexicographic Products in Metarouting. Alexander Gurney, Timothy G. Griffin. ICNP, October 2007, Beijing. □ [BG2009] A model of Internet routing using semi-modules. John N. Billings and Timothy G. Griffin. RelMiCS11/AKA6, November 2009. □ [LXZ2010] Theory and New Primitives for Safely Connecting Routing Protocol Instances. Franck Le, Geoffrey Xie, Hui Zhang. SIGCOMM 2010. □ [SG2010] Routing in Equilibrium. Joao Luis Sobrinho and Timothy G. Griffin. The 19th International Symposium on Mathematical Theory of Networks and Systems (MTNS 2010). • Additional (routing related) reading □ [JS2002] J. L. Sobrinho, ”Algebra and Algorithms for QoS Path Computation and Hop- by-Hop Routing in the Internet,” IEEE/ACM Transactions on Networking , pp. 541-550, August 2002. □ [JS2003] J. L. Sobrinho, ”Network Routing With Path Vector Protocols: Theory and Applications” in Proc. ACM SIGCOMM 2003, pp. 49-60, Karlsruhe, Germany, August 2003. □ [JS2005] J. L. Sobrinho, ”An Algebraic Theory of Dynamic Network Routing,” IEEE/ACM Transactions on Networking, pp. 1160-1173, October 2005. □ [RFC4264] RFC 4264: BGP Wedgies. Timothy G. Griffin and Geoff Huston. □ [TG2009] A model of configuration languages for routing protocols. Philip J. Taylor and Timothy G. Griffin. PRESTO workshop (at SIGCOMM 2009) □ [GG2008] Increasing Bisemigroups and Algebraic Routing. Timothy G. Griffin and Alexander Gurney, RelMiCS10, April 2008. • Additional reading (path algorithms, applications of semirings in Computer Science) □ [LeBT2004] Network Calculus Jean-Yves Le Boudec and Patrick Thiran □ Transitive closure and related semiring properties via eliminants. S. Kamal Abdali and B. David Saunders. Theoretical Computer Science Volume 40, 1985, Pages 257-274 Eleventh International Colloquium on Automata, Languages and Programming. From ScienceDirect □ Interprocedural Dataflow Analysis over Weight Domains with Infinite Descending Chains. Morten Kühnrich, Stefan Schwoon, Jiří Srba, Stefan Kiefer. http://arxiv.org/abs/0901.0501 □ An Incremental Algorithm for a Generalization of the Shortest-Path Problem. G. Ramalingam, Thomas Reps. (1992) http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.10.6796 Last update : Fri Nov 26 15:24:47 GMT 2010
{"url":"http://www.cl.cam.ac.uk/teaching/1011/L11/","timestamp":"2014-04-18T08:03:35Z","content_type":null,"content_length":"22812","record_id":"<urn:uuid:0a4bc4e6-edc1-46e5-8a62-67f790d21744>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00562-ip-10-147-4-33.ec2.internal.warc.gz"}
The Mandelbrot Set and Julia Sets Below is a table of contents for pages written here about the Mandelbrot set and its associated Julia sets. There are many other pages on the web on these topics. It is easy to see why. The subject, while reasonably advanced, is easy to get into, and the pictures of the various objects are among the most flamboyantly beautiful to come from mathematics. We are adding our two cents partly as practice with putting things on the web, and partly to fulfill our function as a mathematics department and get some mathematical information out there. The pages in the first part below contain many embedded images. This renders the pages almost useless to text only browsers. We do not feel to sorry about this since these pages were motivated to a large extent by the images. The embedded images also cause the transmission of the pages to be somewhat slow, although this is mitigated by the fact that the embedded images are small. Most are under 2000 bytes. The embedded images are the informative ones and are in black and white, so a black and white screen is sufficient to follow the narrative. The pages also contain links to many non-embedded images. Some of these are enlargements or details of the black and white embedded images. These non-embedded images are in color and are larger. A color screen is necessary to appreciate them. They range from several thousand bytes, to several hundred thousand bytes and will take significantly longer to download. They are there for information and also for enjoyment. 1. What is the Mandelbrot set? 2. A catalog of images of the Mandelbrot set (23 embedded color images). This file last modified: Sep 10 2000 URL: http://www.math.binghamton.edu/MATH/topics/mandel/index.html
{"url":"http://www.math.binghamton.edu/MATH/topics/mandel/index.html","timestamp":"2014-04-20T18:29:03Z","content_type":null,"content_length":"3162","record_id":"<urn:uuid:97050549-a4b7-4797-a183-bb10f233b12e>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00015-ip-10-147-4-33.ec2.internal.warc.gz"}
Kernel Adaptive Filtering Toolbox Kernel Adaptive Filtering Toolbox A Matlab benchmarking toolbox for kernel adaptive filtering. This toolbox focuses on online and adaptive algorithms that use kernel methods to perform nonlinear regression. It includes algorithms, demos and tools to compare their performance. Maintainer: Steven Van Vaerenbergh (steven at gtas dot dicom dot unican dot es) - Miguel Lazaro-Gredilla - Sohan Seth - Masahiro Yukawa Official web: https://sourceforge.net/projects/kafbox This toolbox is a collaborative effort: every developer wishing to contribute code or suggestions can do so. More info below. Directories included in the toolbox data/ - data sets demo/ - demos and test files lib/ - algorithm libraries and utilities Run install.m Octave / Matlab pre-2008a This toolbox uses the classdef command which is not supported in Matlab pre-2008a and not yet in Octave. The older 0.x versions of this toolbox do not use classdef and can therefore be used with all versions of Matlab and Octave. http://sourceforge.net/projects/kafbox/files/ Each kernel adaptive filtering algorithm is implemented as a Matlab class. To use one, first define its options: options = struct('nu',1E-4,'kerneltype','gauss','kernelpar',32); Next, create an instance of the filter: kaf = aldkrls(options); One iteration of training is performed by feeding one input-output data pair to the filter: kaf = kaf.train(x,y); The outputs for one or more test inputs are evaluated as follows: Y_test = kaf.evaluate(X_test); Example: time-series prediction Code from demo/demo_prediction.m % Demo: 1-step ahead prediction on Lorenz attractor time-series data [X,Y] = kafbox_data(struct('file','lorenz.dat','embedding',6)); % make a kernel adaptive filter object of class aldkrls with options: % ALD threshold 1E-4, Gaussian kernel, and kernel width 32 kaf = aldkrls(struct('nu',1E-4,'kerneltype','gauss','kernelpar',32)); %% RUN ALGORITHM N = size(X,1); Y_est = zeros(N,1); for i=1:N, if ~mod(i,floor(N/10)), fprintf('.'); end % progress indicator, 10 dots Y_est(i) = kaf.evaluate(X(i,:)); % predict the next output kaf = kaf.train(X(i,:),Y(i)); % train with one input-output pair SE = (Y-Y_est).^2; % test error %% OUTPUT fprintf('MSE after first 1000 samples: %.2fdB\n\n',10*log10(mean(SE(1001:end)))); MSE after first 1000 samples: -40.17dB Included algorithms □ Approximate Linear Dependency Kernel Recursive Least-Squares (ALD-KRLS), as proposed in Y. Engel, S. Mannor, and R. Meir. "The kernel recursive least-squares algorithm", IEEE Transactions on Signal Processing, volume 52, no. 8, pages 2275-2285, 2004. □ Sliding-Window Kernel Recursive Least-Squares (SW-KRLS), as proposed in S. Van Vaerenbergh, J. Via, and I. Santamaria. "A sliding-window kernel RLS algorithm and its application to nonlinear channel identification", 2006 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Toulouse, France, 2006. □ Naive Online Regularized Risk Minimization Algorithm (NORMA), as proposed in J. Kivinen, A. Smola and C. Williamson. "Online Learning with Kernels", IEEE Transactions on Signal Processing, volume 52, no. 8, pages 2165-2176, 2004. □ Kernel Least-Mean-Square (KLMS), as proposed in W. Liu, P.P. Pokharel, and J.C. Principe, "The Kernel Least-Mean-Square Algorithm," IEEE Transactions on Signal Processing, vol.56, no.2, pp.543-554, Feb. 2008. □ Fixed-Budget Kernel Recursive Least-Squares (FB-KRLS), as proposed in S. Van Vaerenbergh, I. Santamaria, W. Liu and J. C. Principe, "Fixed-Budget Kernel Recursive Least-Squares", 2010 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2010), Dallas, Texas, U.S.A., March 2010. □ Kernel Recursive Least-Squares Tracker (KRLS-T), as proposed in S. Van Vaerenbergh, M. Lazaro-Gredilla, and I. Santamaria, "Kernel Recursive Least-Squares Tracker for Time-Varying Regression," Neural Networks and Learning Systems, IEEE Transactions on , vol.23, no.8, pp.1313-1326, Aug. 2012. □ Quantized Kernel Least Mean Squares (QKLMS), as proposed in Chen B., Zhao S., Zhu P., Principe J.C. "Quantized Kernel Least Mean Square Algorithm," IEEE Transactions on Neural Networks and Learning Systems, vol.23, no.1, Jan. 2012, pages 22-32. □ Random Fourier Fourier Feature Kernel Least Mean Squares (RFF-KLMS), as proposed in Abhishek Singh, Narendra Ahuja and Pierre Moulin, "Online Learning With Kernels: Overcoming The Growing Sum Problem", 2012 IEEE International Workshop on Machine Learning For Signal Processing. □ Extended Kernel Recursive Least Squares (EX-KRLS), as proposed in W. Liu and I. Park and Y. Wang and J.C. Principe, "Extended kernel recursive least squares algorithm", IEEE Transactions on Signal Processing, volume 57, number 10, pp. 3801-3814, oct. 2009. □ Gaussian-Process based estimation of the parameters of KRLS-T, as proposed in Steven Van Vaerenbergh, Ignacio Santamaria, and Miguel Lazaro-Gredilla, "Estimation of the forgetting factor in kernel recursive least squares," 2012 IEEE International Workshop on Machine Learning for Signal Processing (MLSP), 2012. □ Kernel Affine Projection algorithm with Coherence Criterion, as proposed in C. Richard, J.C.M. Bermudez, P. Honeine, "Online Prediction of Time Series Data With Kernels," IEEE Transactions on Signal Processing, vol.57, no.3, pp.1058,1067, March 2009. □ Kernel Normalized Least-Mean-Square algorithm with Coherence Criterion, as proposed in C. Richard, J.C.M. Bermudez, P. Honeine, "Online Prediction of Time Series Data With Kernels," IEEE Transactions on Signal Processing, vol.57, no.3, pp.1058,1067, March 2009. □ Recursive Least-Squares algorithm with exponential weighting (RLS), as described in S. Haykin, "Adaptive Filtering Theory (3rd Ed.)", Prentice Hall, Chapter 13. □ Multikernel Normalized Least Mean Square algorithm with Coherence-based Sparsification (MKNLMS-CS), as proposed in M. Yukawa, "Multikernel Adaptive Filtering", IEEE Transactions on Signal Processing, vol.60, no.9, pp.4672-4682, Sept. 2012. □ Parallel HYperslab Projection along Affine SubSpace (PHYPASS) algorithm, as described in M. Takizawa and M. Yukawa, "An Efficient Data-Reusing Kernel Adaptive Filtering Algorithm Based on Parallel Hyperslab Projection Along Affine Subspace," 2013 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp.3557-3561, May 2013. How to contribute code to the toolbox Option 1: email it to me (steven@gtas.dicom.unican.es) Option 2: Fork the toolbox on GitHub, push your change to a named branch, then send me a pull request. This source code is released under the FreeBSD License. Changes to previous version: Initial Announcement on mloss.org. Other available revisons Version Changelog Date 1.3 Inclusion of Gaussian process based parameter estimation, and several new regression algorithms. October 21, 2013, 18:15:23 1.2 Initial Announcement on mloss.org. September 2, 2013, 20:22:31 No one has posted any comments yet. Perhaps you'd like to be the first? Leave a comment You must be logged in to post comments.
{"url":"http://www.mloss.org/revision/view/1387/","timestamp":"2014-04-20T20:59:21Z","content_type":null,"content_length":"16806","record_id":"<urn:uuid:b9f90449-556c-43b0-8461-7b6dff4650e4>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00159-ip-10-147-4-33.ec2.internal.warc.gz"}
Singly-linked list. Traversal. Assume, that we have a list with some nodes. Traversal is the very basic operation, which presents as a part in almost every operation on a singly-linked list. For instance, algorithm may traverse a singly-linked list to find a value, find a position for insertion, etc. For a singly-linked list, only forward direction traversal is possible. Traversal algorithm Beginning from the head, 1. check, if the end of a list hasn't been reached yet; 2. do some actions with the current node, which is specific for particular algorithm; 3. current node becomes previous and next node becomes current. Go to the step 1. As for example, let us see an example of summing up values in a singly-linked list. For some algorithms tracking the previous node is essential, but for some, like an example, it's unnecessary. We show a common case here and concrete algorithm can be adjusted to meet it's individual Code snippets Although we have two classes for singly-linked list, SinglyLinkedListNode class is used as storage only. Whole algorithm is implemented in the SinglyLinkedList class. Java implementation public class SinglyLinkedList { public int traverse() { int sum = 0; SinglyLinkedListNode current = head; SinglyLinkedListNode previous = null; while (current != null) { sum += current.value; previous = current; current = current.next; return sum; C++ implementation int SinglyLinkedList::traverse() { int sum = 0; SinglyLinkedListNode *current = head; SinglyLinkedListNode *previous = NULL; while (current != NULL) { sum += current->value; previous = current; current = current->next; return sum; Recommended books One response to "Singly-linked list traversal tutorial" 1. on Feb 11, 2009 said: Previous: Internal representation Next: Adding a node to a singly-linked list Contribute to AlgoList Liked this tutorial? Please, consider making a donation. Contribute to help us keep sharing free knowledge and write new tutorials. Every dollar helps!
{"url":"http://www.algolist.net/Data_structures/Singly-linked_list/Traversal","timestamp":"2014-04-19T08:04:25Z","content_type":null,"content_length":"24283","record_id":"<urn:uuid:8d46707c-dd0a-4c84-ad9f-f80d145e2d92>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00419-ip-10-147-4-33.ec2.internal.warc.gz"}
188 helpers are online right now 75% of questions are answered within 5 minutes. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/users/wendyswonders/asked","timestamp":"2014-04-19T17:12:17Z","content_type":null,"content_length":"108910","record_id":"<urn:uuid:faea76e2-5894-4d61-8573-2dfb3c3d0729>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00166-ip-10-147-4-33.ec2.internal.warc.gz"}
Analyze Phase Noise In A Sampled PLL (Part 3) Some have suggested that PLL noise degrades with frequency because the charge pumps are powered on for longer periods compared to the sampling period and therefore allow more noise to be transferred to the loop filter. In fact, the noise follows exactly what would be expected from FM theory and sampling theory. From FM theory, the noise is expected to increase by 6 dB for every doubling of frequency (20logf[s]). However, from sampling theory, the noise power per Hz would be expected to decrease by 3 dB for every doubling of the sampling frequency (-10logf[s]) since the noise power is now spread over twice the bandwidth. The net result is an increase in the phase noise by 3 dB for every doubling of the sampling frequency (10 logf[s]). With this in mind, take the phase noise at the phase detector due to the charge pumps, dividers, and other component, referenced to a 1-Hz bandwidth, to be equal to L[pd_1Hz] = -207 dBc/Hz, with L [pd] = 10log(fs) + L[pd_1Hz] and, thus, L[pd] = -163.021 dBc. The loop acts on this noise to make a contribution to the overall sampled PLL noise as described by Eq. 58: The plot in Fig. 18 below shows the difference between the output noise due to the phase detector for a continuous system and for a sampled system. There is a significant difference between the two. The apparent bandwidth for the sampled system is higher, and the noise at higher frequency offsets is 3 to 4 dB higher except at multiples of the sampling frequency where the contribution falls to In any PLL frequency synthesizer, the reference oscillator may also contribute to the overall phase noise close to the carrier. The SSB phase noise of the reference oscillator appears at the output of the synthesizer multiplied by the overall division ratio between the reference frequency and the output frequency. Typically, for the type of VCTCXOs used as a reference source in a PLL synthesizer, the phase noise is flat from large frequency offsets to about 100 kHz from the carrier, then rises at a rate of about 10 dB/decade to about 400 Hz from the carrier, then rises at a rate of about 30 dB/decade. The reference oscillator phase noise may be modeled by defining the noise at three spot points and interpolating between those points (Fig. 19). For this analysis, points are chosen to be 1 MHz, 10 kHz, and 10 Hz, with (units in dBc and Hz, respectively) L[x0] = -155; f[x0] = 1 x 10^6; L[x1] = -148; f[x1] = 10 x 10^3; L[x3] = -90; and f[x3] = 10. Then, the reference oscillator phase noise can be modeled by using Eq. 59. Figure 20 shows the output phase noise due to the VCTCXO reference oscillator. Once the reference oscillator's phase noise has been modeled, it is possible to plot the effect the loop has on that phase-noise contribution to the overall PLL synthesizer. Usually, the reference noise is well below the contribution due to the phase detector or dividers except at low offset frequencies (typically below a few hundred Hz), as shown in Eqs. 60 and 61: With models developed for the various noise sources in a sampled PLL, the multiple sources can be combined to create an overall noise model for the synthesizer. Each of these sources is modified by the PLL according to the equations below. For example, the noise generated by modulation of the VCO by thermal noise in the loop filter is shown in Eq. 62: The noise due to the phase detector modified by the loop response is given in Eq. 63: The noise of the free-running VCO as modified by the loop is represented by Eq. 64: The noise from the reference oscillator as modified by the loop is represented by Eq. 65: The combined output phase noise due to all the sources modified by the loop is shown in Eq. 66: Figure 21 shows the overall predicted noise for the PLL and the contributions from some of the relevant phasenoise sources. Figure 22 shows the measured phase noise of the test PLL. Note that the close-in phase noise is higher than predicted by the simulation above. This is because the noise floor of the NTS1000A test system has been reached. The instrument's specification sheet lists its noise floor as -40 dBc/Hz offset 10 Hz from the carrier and -74 dBc/Hz offset 100 Hz from the carrier. Figure 23 shows the same information plotted on a linear scale similar to what it might look like on a spectrum analyzer. This also allows closer inspection of the noise in the nulls that occur at the sampling frequency and its harmonics. In this analysis, it is also possible to make the oscillator noise higher than the reference/phasedetector/ divider noise to see the effects. In this new example, the far-out phase noise and close-in phase noise will be left unchanged but raise the level of the 20 dB/decade region will be raised by 20 dB, with the resulting parameters (in units in dBc and Hz, respectively) L[0] = -155; f[0] = 3 x 10^6; L[2] = -108; f[x1] = 100 x 103; L3 = -70; and f[3] = 1x 10^3. Continue to page 2 Page Title The plot in Fig. 24 shows the effect of the increase in oscillator noise. Even though the oscillator noise is above the level of the other noise sources, the effects of sampling in the loop are evident at the harmonics of the sampling frequency. In the plot of Fig. 25, the upper trace shows the measured effect of a relatively noisy oscillator. This is achieved by modulating the VCO with broadband noise to produce the effect of a noisy oscillator. The lower trace shows the normal performance of the PLL. To continue the analysis, the phasedetector/ divider noise can be made significantly higher than the oscillator noise, at -153 dBc/Hz, to study the effect of the change. Figure 26 shows the effect of increasing the phase detector noise floor by 10 dB, in a plot of phase noise due to all sources. The same effect would be produced by a noisy reference divider or a noisy feedback divider. (In each case, the noise spectrum is assumed to be flat.) The upper trace in Fig. 27 shows the measured phase noise when reference phase noise is raised by 10 dB by modulating the reference VCTCXO with pre-emphasized broadband noise (to produce flat phase noise sidebands on the reference signal). The lower trace shows normal PLL performance. There are a number of simulation packages provided by PLL chip manufacturers such as ADIsimPLL from Analog Devices and EasyPLL from National Semiconductor. Although they may take discrete time effects into account for the prediction of transient responses, they only use a linear approximation for the loop response and phase noise predictions. For many applications, this simplification may be sufficient for a given design. But with the trend toward faster-settling-time PLLs where the loop bandwidth must be a significant fraction of the sampling frequency, something closer to the true response is required and the analysis presented in this multi-part article can provide a very close estimation of expected performance in sampled PLLs.
{"url":"http://mwrf.com/print/components/analyze-phase-noise-sampled-pll-part-3","timestamp":"2014-04-20T19:10:54Z","content_type":null,"content_length":"22509","record_id":"<urn:uuid:7e6959e4-3684-4f0d-8926-cb57920472f4>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00249-ip-10-147-4-33.ec2.internal.warc.gz"}
Divine Patterns? Ramanujan's Magical Mind Gets A Math Formula Srinivasa Ramanujan was a self-taught Indian mathematician known for intuiting extraordinary numerical patterns and connections without using proofs or modern mathematical tools. Instead, the devout Hindu genius said that his findings were divine and were revealed to him in dreams by the goddess Namagiri. A group of number theorists decided to mark the 125th anniversary of his birth by exploring his writings and imagine how his brain may have worked. Like being a mathematical anthropologist, they say, and they came up with a formula for mock modular forms that solves one of the greatest puzzles left behind by the enigmatic Indian genius. While on his death-bed in 1920, Ramanujan wrote a letter to his mentor, English mathematician G. H. Hardy. The letter described several new functions that behaved differently from known theta functions, or modular forms, and yet closely mimicked them. Ramanujan conjectured that his mock modular forms corresponded to the ordinary modular forms earlier identified by Carl Jacobi, and that both would wind up with similar outputs for roots of 1. No one at the time understood what Ramanujan was talking about. "It wasn't until 2002, through the work of Sander Zwegers, that we had a description of the functions that Ramanujan was writing about in 1920," says Emory mathematician Ken Ono. One and two colleagues - Amanda Folsom, from Yale, and Rob Rhoades, from Stanford - drew on modern mathematical tools that had not been developed before Ramanujan's death to prove that a mock modular form could be computed just as Ramanujan predicted. They found that while the outputs of a mock modular form shoot off into enormous numbers, the corresponding ordinary modular form expands at close to the same rate. So when you add up the two outputs or, in some cases, subtract them from one another, the result is a relatively small number, such as four, in the simplest case. "We proved that Ramanujan was right," Ono says. "We found the formula explaining one of the visions that he believed came from his goddess." Ono uses a "magic coin" analogy to illustrate the complexity of Ramanujan's vision. Imagine that Jacobi, who discovered the original modular forms, and Ramanujan are contemporaries and go shopping together. They each spend a coin in the same shop. Each of their coins goes on a different journey, traveling through different hands, shops and cities. "For months, the paths of the two coins look chaotic, like they aren't doing anything in unison," Ono says. "But eventually Ramanujan's coin starts mocking, or trailing, Jacobi's coin. After a year, the two coins end up very near one another: In the same town, in the same shop, in the same cash register, about four inches apart." Ramanujan experienced such extraordinary insights in an innocent way, simply appreciating the beauty of the math, without seeking practical applications for them. Hardy-Ramanujan "taxicab numbers". Link: The Story of Mathematics "No one was talking about black holes back in the 1920s when Ramanujan first came up with mock modular forms, and yet, his work may unlock secrets about them," Ono says. Expansion of modular forms is one of the fundamental tools for computing the entropy of a modular black hole. Some black holes, however, are not modular, but the new formula based on Ramanujan's vision may allow physicists to compute their entropy as though they were. After coming up with the formula for computing a mock modular form, Ono wanted to put some icing on the cake for the 125th-anniversary celebration. He and Emory graduate students Michael Griffin and Larry Rolen revisited the paragraph in Ramanujan's last letter that gave a vague description for how he arrived at the functions. That one paragraph has inspired hundreds of papers by mathematicians, who have pondered its hidden meaning for eight decades. "So much of what Ramanujan offers comes from mysterious words and strange formulas that seem to defy mathematical sense," Ono says. "Although we had a definition from 2002 for Ramanujan's functions, it was still unclear how it related to Ramanujan's awkward and imprecise definition." Ono and his students finally saw the meaning behind the puzzling paragraph, and a way to link it to the modern definition. "We developed a theorem that shows that the bizarre methodology he used to construct his examples is correct," Ono says. "For the first time, we can prove that the exotic functions that Ramanujan conjured in his death-bed letter behave exactly as he said they would, in every case." Although Ramanujan received little formal training in math, and died at the age of 32, he made major contributions to number theory and many other areas of math. In the fall, Ono traveled to Ramanujan's birth home in Madras, and to other significant sites in the Indian mathematician's life, to participate in a docu-drama. Ono acted as a math consultant, and also has a speaking part in the film about Ramanujan, directed by Nandan Kudhyadi and set to premiere next year. "I got to hold some of Ramanujan's original notebooks, and it felt like I was talking to him," Ono says. "The pages were yellow and falling apart, but they are filled with formulas and class invariants, amazing visions that are hard to describe, and no indication of how he came up with them." Ono will spend much of December in India, taking overnight trains to Mysore, Bangalore, Chennai and New Dehli, as part of a group of distinguished mathematicians giving talks about Ramanujan in the lead-up to the anniversary date. "Ramanujan is a hero in India so it's kind of like a math rock tour," Ono says, adding, "I'm his biggest fan. My professional life is inescapably intertwined with Ramanujan. Many of the mathematical objects that I think about so profoundly were anticipated by him. I'm so glad that he existed." Presented at the Ramanujan 125 conference at the University of Florida. 1729 is called the taxicab' s number because of a story told by Hardy: “I remember once going to see him when he was lying ill at Putney. I had ridden in taxi cab number 1729 and remarked that the number seemed to me rather a dull one, and that I hoped it was not an unfavorable omen. “No,” he replied, “it is a very interesting number; it is the smallest number expressible as the sum of two cubes in two different ways.” Enrico Uva | 12/17/12 | 11:16 AM Hank Campbell | 12/17/12 | 12:00 PM I profoundly accept your comment, as many of our Indian's exposure are made by westerners only Sankaranarayanan (not verified) | 12/22/12 | 12:16 PM number of math, science graduates each year: India: 2.3 million , China: 530,000; 350,000 in Japan; 420,000 in the US and 470,000 in the EU. Enrico Uva | 12/22/12 | 14:15 PM Fred Phillips | 12/31/12 | 18:58 PM
{"url":"http://www.science20.com/news_articles/divine_patterns_ramanujans_magical_mind_gets_math_formula-99186","timestamp":"2014-04-20T01:34:23Z","content_type":null,"content_length":"47784","record_id":"<urn:uuid:e14133fa-5b6d-4857-a853-83c072d3a7cf>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00194-ip-10-147-4-33.ec2.internal.warc.gz"}
Need electronics example problems Yeah I guess that is vague. But the book we have is "Basic Electronics: An introduction to electronics for science students". The class is all about circuits. So it extends from the very basic stuff we learned in Physics 2 about circuits, such as Kirchoff's laws and stuff. That's because those forums are not for requesting learning materials. They're for links to such materials, or the actual materials themselves. Oh ok, I was just about to post the thread and found that forum and was happy I was going to post a thread in the right forum for once, and then I wasn't able to.
{"url":"http://www.physicsforums.com/showthread.php?p=4278720","timestamp":"2014-04-19T15:18:33Z","content_type":null,"content_length":"37893","record_id":"<urn:uuid:be91e030-4246-4682-9317-11a9569e1def>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00270-ip-10-147-4-33.ec2.internal.warc.gz"}
Weehawken Precalculus Tutors ...My actual research experience in astronomy/astrophysics makes me an ideal tutor for this subject. I have an extensive background in physics (bachelors degree) and chemistry (2 year degree + 2 years of undergrad coursework). Keen on building a rock-solid foundation for this topic for my students ... 17 Subjects: including precalculus, chemistry, Spanish, physics I love math/science and love to share my enthusiasm for these subjects with my students. I did my undergraduate in Physics and Astronomy at Vassar, and did an Engineering degree at Dartmouth. I'm now a PhD student at Columbia in Astronomy (have completed two Masters by now) and will be done in a year. 11 Subjects: including precalculus, Spanish, calculus, physics I recently completed a Master's degree in Education at the Concordia University in Curriculum and Instruction. I have over 10 years experience in tutoring, and I am a mentor to a lovely group of youth aged 4 through 19. I have experience in tutoring Mathematics, Science, and physics. 21 Subjects: including precalculus, English, reading, geometry ...I also previously taught all grades from 9th to 12th and am extremely comfortable teaching all types of math to all level learners. I am an motivated teacher who can teach you to your understanding. I am an educator who motivates and educates in a fun, focused atmosphere. 6 Subjects: including precalculus, calculus, geometry, algebra 1 I have taught mathematics at various levels from Algebra 1 through AP Calculus in high school. I have worked as an adjunct mathematics instructor at at five universities. I have excellent experience tutoring mathematics at all levels. 8 Subjects: including precalculus, calculus, algebra 1, SAT math
{"url":"http://www.algebrahelp.com/Weehawken_precalculus_tutors.jsp","timestamp":"2014-04-19T22:07:09Z","content_type":null,"content_length":"25062","record_id":"<urn:uuid:a0ade164-dfcf-4f1b-bc43-c61e917b9e54>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00181-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Wars By Bill Pride Printed in Practical Homeschooling #82, 2008. Would you be right? Not necessarily. For the past several decades, a quiet war has been fought between those who believe in the traditional methods of teaching arithmetic by rote and those who favor teaching kids in preschool to grade 6 how to "think like a mathematician." The fallacy in teaching this way is that all the mathematicians I know learned to add and subtract, without using their fingers or a calculator, before they started "thinking like mathematicians." Math as a Foreign Language The problem with modern elementary math education is that the math programs aren't written by mathematicians. The motto of mathematicians is, "Can you prove that?" From the mathematician's point of view, every new advance in math has to be logically tight and thoroughly justifiable from the math that went before. The mathematician's goal for elementary math is that the student become comfortable with numbers and the operations that are done on them and is prepared to move on to higher math such as Algebra, Geometry, Calculus, etc. In contrast, check out this goals statement from the Connected Mathematics Project out of Michigan State University as reported on their site at connectedmath.msu.edu: The Overarching Goal of the Connected Mathematics Project: All students should be able to reason and communicate proficiently in mathematics. They should have knowledge of and skill in the use of the vocabulary, forms of representation, materials, tools, techniques, and intellectual methods of the discipline of mathematics, including the ability to define and solve problems with reason, insight, inventiveness and proficiency. That's it - mathematics as a foreign language. (Substitute "Russian" in place of "mathematics" in the above definition and it makes more sense.) Objective v. Subjective The Wikipedia article on "Math Wars" defines the traditional method of math instruction as: "skills based on formulas or algorithms (fixed, step-by-step procedures for solving math problems)." You can evaluate the results of this method, because the student works the problem correctly and gets the correct answer or he doesn't. (Partial credit can be given for correctly doing the steps of the problem, but getting a wrong answer due to an arithmetic mistake.) The "reform" method, on the other hand, is subjective in the way it's taught and graded. This makes it easy for its proponents to delude themselves about how their students are doing. As Wikipedia says about the reform method, "In this latter approach, computational skills and correct answers are not the primary goals of instruction." The student gets points by how creative (not necessarily how valid) his approach is to a problem. Grading a math problem becomes as subjective as grading an English essay. By Their Many Names Shall Ye Know Them How can you spot one of these "progressive" math programs? Is it really true that "reform math" = "new math" = "new new math" = "fuzzy math" = "constructivist math" = "PC Math" = "Rainforest Math"? Yes, these types of programs are different in subtle ways, but all have in common that they strive to teach the student to "understand" math, rather than teach him how to do math. "Reform"/"new math" programs can be identified by the unusual topics they introduce into the elementary curriculum. The two most common ones of these are set theory and different bases. The trademark of set theory is the Venn diagram (see below). If you see lots of Venn diagrams in your elementary math textbook, you are holding a "new math" program. Set theory is one of the most abstract branches of mathematics - but elementary school students are concrete thinkers. I wonder what Einstein came up with this idea as a good topic to add to the curriculum. Working in different bases requires an understanding of place value in our own base-ten number system. No one -illustrates the joys of teaching arithmetic, while trying to include an understanding of place value, better than Tom Lehrer in his song "New Math." A lipsynched version of this amusing song can be found at Youtube. The last line says it all: "It's so simple, so very simple, that only a child can do it." "New new math" or "fuzzy" math says the process is more important than the result. The correct answer is the one that demonstrates the most ingenuity, not necessarily the one that results in the correct answer. "PC Math" (politically correct math), also dubbed "Rainforest Math," is the same stuff, with the added wrinkle that now word problems have to teach moral lessons that are socially acceptable to the largely left-leaning educational establishment - and that have practically no actual math in them. Too Young to Derive Which brings us to constructivism. Constructivists want children to learn math through discovery on the theory that if you figure something out for yourself, rather than having it fed to you, it will be yours forever. The problems with this are obvious. To develop math to its present form took thousands of adult mathematicians a millennium and a half. One kid can't be expected to work it out in six years of elementary school. The claim that the child is discovering math for himself is disingenuous anyway. The process of "discovery" is carefully guided so that ideally every student will "discover" what the teacher wants him to discover. Are the Math Wars Over? A Wall Street Journal article of March 5 this year claimed that there was now a "truce" in the math wars, thanks to the newly appointed National Mathematics Advisory Panel. This followed a September 2006 report by the National Council of Teachers of Mathematics, which finally put in a good word for teaching the basics of arithmetic after years of the NCTM exclusively promoting "think like a mathematician" approaches. A special ed teacher made this comment about the WSJ article: "for the past 12 years I have been doing Educational Evaluations on high school students, and a consistent pattern has emerged. While our students may be capable of higher order thinking regarding math concepts such as the application of the Pythagorean Theorem, they cannot do multi-digit subtraction or long division, nor can they manipulate fractions. They never really mastered these concepts in elementary school so they have "forgotten" how to do them as adolescents... but they definitely know how to do them on the Another WSJ reader said: "I judge high and middle school science fairs. I routinely see the following: 1. "No understanding of significant digits: students measure something to one significant digit and report the average results to five decimal places, well beyond the measuring capacity of the equipment they used. 2. "Meaningless graphs: five-color graphs and/or bar charts that have nothing to do with the data that were collected, no labels on the axis, and no clue what the data mean. It came from Excel, so it must be right and relevant. 3. "Undetected gross calculation errors: averages that are higher than the highest number in the observed data or lower than the lowest number in the observed data. Because these students cannot estimate the average value of their data in their heads, they do not realize that they keyed the wrong data into the calculator." A retired teacher who is "still active in the school system" added this. "Think about this the next time you hear someone being snobbish about 'drill and kill' software: Skill repetition or reinforcement is considered 'busy work,' therefore, I have seen schools where the majority of students in the 8th grade don't know their times tables!" Develop This! It's not just students in the eighth grade who don't know their times tables, either. When I began looking into math instructor positions in local community colleges, I discovered a course called "Developmental Mathematics" (not to be confused with the homeschool curriculum of the same name). Developmental Math is just remedial grade school math with a fancier name. Students entering college who can't demonstrate proficiency in basic arithmetic, including fractions, are assigned this course before they are allowed to tackle the normal college math sequence (assuming they ever do). We are talking about millions of college students who can't do fractions! One last WSJ reader put it best: "There is a place for 'drill and kill' and for 'exploration and discovery'; one needs to know the basics before venturing into the unknown." Was this article helpful to you? Subscribe to Practical Homeschooling today, and you'll get this quality of information and encouragement five times per year, delivered to your door. To start, click on the link below that describes USA Individual USA Librarian (purchasing for a library) Outside USA Individual Outside USA Library
{"url":"https://www.home-school.com/Articles/math-wars.php","timestamp":"2014-04-18T03:01:34Z","content_type":null,"content_length":"41952","record_id":"<urn:uuid:6caffb82-6b50-419f-b9ba-e6352b1f92a0>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00225-ip-10-147-4-33.ec2.internal.warc.gz"}
HOW TO MAKE THOSE LIMNOLOGICAL GRAPHS ON EXCEL First, create an (x,y) plot in Excel. ● Enter x and y values adjacent columns ● Highlight x and y values (you can have more than one X,Y plot on the same graph) ● Click on Insert, Chart, Standard Types, XY (Scatter) ● Select the chart subtype with points and connecting lines ● Then click on 'Series' ● Choose your depth for the Y variable (you can click in the Y row and then highlight the column with your mouse) ● Choose your variable of interest (e.g., light) for your X variable ● If you have two plots for the same graph, then before leaving this dialog box click on Series 'Add' ● In the new series source data box once again specify the same depth column as your Y variable ● Specify the second variable of interest as the X variable ● When you are done adding plots click on 'next' ● If you want you can add titles in the 'Title' tab or gridlines in the 'Gridline' tab or you can add or remove the legend... ● When you are done with this formatting hit 'finish' ● Then on the finished chart double click on the Y axis. Another dialog box will appear and you should click on the 'Scale' tab. At the bottom of the dialog box click on 'Values in reverse order.' This will reverse both axes and give you the right general format (you also could reverse the scale of each axis separately by reversing the min and max values, but this will take longer). The graph can be formatted more by clicking on axes or choosing chart properties.
{"url":"http://www.esf.edu/efb/schulz/Limnology/ExcelGraphs.html","timestamp":"2014-04-21T09:37:32Z","content_type":null,"content_length":"2759","record_id":"<urn:uuid:e66742f3-8f2b-4e4f-9a59-3c5a45735720>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00141-ip-10-147-4-33.ec2.internal.warc.gz"}
of the Integer number is a mathematical procedure of extracting all of its Prime divisors, also called Factors. This procedure is also known as Factoring or Factorisation. Any Integer number could be presented as a Product of its Factors. Prime Numbers , or simply Primes have only two trivial divisors: 1 and the number itself. Any Integers except for Primes are called Composite Numbers; they contains at least three divisors. Sample Factorization: 12 = 2 * 2 * 3, where 2 and 3 are Primes To Factorize the Integer using online Calculator enter the it into the text box and click on the "=" screen button. You will see the number presented as product of the Primes. Number 1 as a trivial divisor is not included in the list. test (in other words, check if the Integer number is Prime) could be performed in the same way as Factoring: enter the number and calculate Factor(s): if the list contains only one Factor, then the number is Prime and vice versa. For faster computation, Primality check could be performed without Factoring. Enter the Number and click on the button "IsPrime" located at the top to find the answer fast.
{"url":"http://webinfocentral.com/MATH/Factorization.aspx","timestamp":"2014-04-18T10:35:21Z","content_type":null,"content_length":"62324","record_id":"<urn:uuid:3c874823-69bd-4e4e-bdf3-1dca76ccf2ae>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00346-ip-10-147-4-33.ec2.internal.warc.gz"}
Induction for postage stamps May 5th 2010, 07:16 AM #1 Junior Member Sep 2009 Induction for postage stamps I've read a few articles but its not doing the trick. can you show step by step how a postage of 12 cents or more can be formed using 4 cents and 5 cents stamps Ah, you're referring to the fact that the Frobenius number of {4,5} is 4*5-(4+5) = 11. The easiest demonstration I can think of involves congruence with a modulus, but you may not be familiar with the notation. We can immediately obtain all numbers congruent to 0 (mod 4). These are simply the multiples of 4. Starting with 5, we can obtain all numbers congruent to 1 (mod 4). These are numbers of the form 4k+1. Just start with 5 and continue adding 4 to it: 5, 9, 13, 17, ... Starting with 10 we can obtain all numbers congruent to 2 (mod 4). Starting with 15 we can obtain all numbers congruent to 3 (mod 4). So certainly, starting with 15 we can obtain all integers. 14 we got via 5+5+4. 13 we got via 5+4+4. 12 is 4+4+4. So that accounts for all integers greater than or equal to 12. If you are interested in learning math, you need a habit of tinkering by do small experiments such as this: These are: $(4q+r)+(5s+t)$, where $q,r,s,t \in \mathbb{N}$, and $0\leq r < 4$ and $0\leq t < 5$ Now you can prove it by the Strong Principle of Induction. If some value y is a multiple of fours and fives, y+4 is a sum of fours and fives if y is. y+5 is a sum of fours and fives if y is or if y+1 is. y+6 is a sum of fives and fours if y+2 or y+1 is. y+7 is a sum of fours and fives if y+3 or y+2 is. The pattern repeats. Let's have a little fun of writing the proof by the Strong Principle of Mathematical Induction, which says: For each positive integer n, let $P(n)$be a statement If $(1) P(1)$is true and $(2) \forall k \in \mathbb{N}, P(1)\wedge P(2) \wedge P(3) \wedge...\Rightarrow P(k+1)$is true. For the basis step, we want to prove the a postage of 12 cents is made of some numbers of 4 cents and 5 cents stamps. So We say there exist nonnegative integers $a$ and $b$ such that $12 = 4a+5b$. In fact we have integers $a=3$ and $b=0$. So this part of proof is done. We move the next step, the Inductive Step: Here we assume the there exist integers $a$ and $b$ such the $i=4a+5b$, where $0 \leq i <k$, where $k>12$. Now we consider the integer $k+1$. We show that there are nonnegative integers such that $k+1=4a+5b$. Since $k>12$, we we have $13=4\cdot2 + 5 \cdot 1$, then we have $14=4\cdot1 + 5 \cdot 2$, and we have $15=4\cdot0+5\cdot3$. Hence we assume $k+1\geq 16.$ It follows that $12 \leq (k+1) - 4 < k$. By the induction hypothesis, there are nonnegative integers $a$ and $b$ such that $(k+1)-4=4a+5b$, and so $(k+1)=(4a+4)+5b$ or equivalently, $(k+1)=4(a+1)+5b$, where $(a+1)$ is and integer. This concludes the proof. $(n-k)4+(m+k)5+1$ is a combination of integer multiples of 4 and 5 if y+k is This part here is interesting: Why do you have y+k on the left? I think the y+k is a little too muddy. Hi novice, sorry about that, it's a follow on from my earlier post above, i guess they look a bit disjointed, i might write out another post. While you are at it, I have a question about the line I marked in red. It seems that the line in question happened to work out nicely. I am wondering whether that was only an accident. In fact the stamps can also be made of some 3 cents and 5 cents stamps. In this particular case, the line in red might even become false. Since there are so many formal proofs popping up here, I might as well formalise my earlier post. Define "k is attainable" as: there exist nonnegative integers m and n such that k = 4m + 5n. Base cases: 12 is attainable, (m, n) = (3, 0). 13 is attainable, (m, n) = (2, 1). 14 is attainable, (m, n) = (1, 2). 15 is attainable, (m, n) = (0, 3). Induction step: Let k be an integer such that k-4 is attainable. Then k is attainable. There exist nonnegative integers m and n such that k-4 = 4m + 5n. Then k = 4(m+1) + 5n. Therefore, k is attainable. One approach to tie together the induction step with the base cases is to use strong induction, but why not just partition the set of nonnegative integers by congruence (mod 4)? S0 = {0, 4, 8, ...} S1 = {1, 5, 9, ...} S2 = {2, 6, 10, ...} S3 = {3, 7, 11, ...} It can be seen that the intersection of the sets is the empty set, and the union of the sets is the nonnegative integers. So we treat each subset the same way we would treat the nonnegative integers and apply weak induction four times. Therefore, all integers greater than or equal to 12 are attainable. This concludes the proof. Last edited by undefined; May 5th 2010 at 07:13 PM. Reason: typo! Since there are so many formal proofs popping up here, I might as well formalise my earlier post. Define "k is attainable" as: there exist nonnegative integers m and n such that k = 4m + 5n. Base cases: 12 is attainable, (m, n) = (3, 0). 13 is attainable, (m, n) = (2, 1). 14 is attainable, (m, n) = (1, 2). 15 is attainable, (m, n) = (3, 0). Induction step: Let k be an integer such that k-4 is attainable. Then k is attainable. There exist nonnegative integers m and n such that k-4 = 4m + 5n. Then k = 4(m+1) + 5n. Therefore, k is attainable. One approach to tie together the induction step with the base cases is to use strong induction, but why not just partition the set of nonnegative integers by congruence (mod 4)? S0 = {0, 4, 8, ...} S1 = {1, 5, 9, ...} S2 = {2, 6, 10, ...} S3 = {3, 7, 11, ...} It can be seen that the intersection of the sets is the empty set, and the union of the sets is the nonnegative integers. So we treat each subset the same way we would treat the nonnegative integers and apply weak induction four times. Therefore, all integers greater than or equal to 12 are attainable. This concludes the proof. Beautiful proof using equivalence classes. I am quite sure there many more ways with Strong induction. Thanks for sharing. The base case has y=12 and k=0. The general case is P(k) and we use it to establish that it causes P(k+1) to be valid. $y+k=(n-k)4+(m+k)5$ is a combination of multiples of 4 and 5 $y+k+1=\left(n-(k+1)4\right)+\left(m+(k+1)5\right)$ is also a combination of multiples of 4 and 5 which is definately a combination of multiples of 4 and 5 if y+k is. May 5th 2010, 10:14 AM #2 May 5th 2010, 11:40 AM #3 Sep 2009 May 5th 2010, 02:01 PM #4 MHF Contributor Dec 2009 May 5th 2010, 03:48 PM #5 Sep 2009 May 5th 2010, 04:30 PM #6 MHF Contributor Dec 2009 May 5th 2010, 05:11 PM #7 Sep 2009 May 5th 2010, 05:22 PM #8 MHF Contributor Dec 2009 May 5th 2010, 05:44 PM #9 Sep 2009 May 5th 2010, 05:59 PM #10 May 5th 2010, 06:27 PM #11 Sep 2009 May 6th 2010, 02:57 AM #12 MHF Contributor Dec 2009
{"url":"http://mathhelpforum.com/discrete-math/143198-induction-postage-stamps.html","timestamp":"2014-04-19T00:47:19Z","content_type":null,"content_length":"83875","record_id":"<urn:uuid:c2cb34a3-1acc-4758-b12a-2d3417ba9620>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00058-ip-10-147-4-33.ec2.internal.warc.gz"}
Thieves of Hearts Spring 2010 Schedule Survey We are attempting to determine if Tuesday evening is the best day to hold the Thieves of Hearts fighter practice at Glenbrook North High School. This survey will help us figure out the best day of the week for the Spring semester (Jan-May) of 2010.
{"url":"http://www.surveymonkey.com/s/RBHDPBH","timestamp":"2014-04-18T05:57:37Z","content_type":null,"content_length":"27120","record_id":"<urn:uuid:e90904f7-ce15-4f15-8248-b873f4dbc3ec>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00183-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematical Logic Posted in Computability, Universality and Unsolvability Computer Science Foundations of Computation Foundations of Math Mathematical Logic on January 5th, 2012 by Hector Zenil – Be the first to comment I generated this image in the course of an investigation of the distribution of runtimes of programs in relation to the lengths of mathematical proofs, the results of which are being published in my paper bearing the title “Computer Runtimes and the Length of Proofs with an Algorithmic Probabilistic Application to Optimal Waiting Times in Automatic Theorem Proving” Volume 7160 of the Lecture Notes in Computer Science series (LNCS), a festschrift for Cristian Calude. The preprint is now available online in the ArXiv here. The paper shows that as theoretically predicted, most machines either stop quickly or never halt. It also suggests how a theoretical result for halting times may be used to predict the number of random theorems (dis)proven in a random axiom system –see the section I’ve called “Gödel meets Turing in the computational universe”. The plot was the winning image in this year’s Kroto Institute Image Competition, in the Computational Imagery category titled “Runtime Space in a Peano Curve”, it shows the calculation of the distribution of runtimes from simulating (4n+2)^(4n) = 10000 Turing machines with 2 symbols and n=2 states (of a total of more than 10^12 simulated Turing machines with up to n=4 states) following a quasi-lexicographical order in a Peano curve preserving–as far as possible–the distance between 2 machines arranged in a 2-dimensional array from a 1-dimensional enumeration of Turing machines. In the image each point or cluster of points represents a Turing machine or a group of Turing machines, and the color is determined by a spectrum encoding their halting runtimes–the lighter the square the sooner the machine entered the halting state. White cells represent machines that are proven to halt in infinite time. Red cells show the Turing machines that take longer to halt (popularly called Busy Beavers). Knowing the values of the Busy Beaver functions allows us to identify the machines that never halt (depicted in white). The image is noncomputable, meaning that the process cannot be arbitrarily extended because of the undecidability of the halting problem (i.e. there is no procedure for ascertaining the color of the following pixels to zoom out the picture and cover a larger fraction of the computational universe). Put it in the words of crystilogic, What you’re looking at is a subset of descriptions of all possible things, and some impossible things. This is possibility and reality compressed into an image. Turing machines with an arbitrary number of states can encode any possible mathematical problem and are therefore perfect compressors of the known, the yet to be known, and even the unknowable (due to the undecidability of the halting problem). This is how the image looks like as displayed in the stairway of the Kroto Research Institute in the UK: Some postcards with the winning image were printed and they can be sent to scholar or enthusiasts upon request sending an email to hectorz[at] labores.eu. I want to dedicate this prize to my former thesis advisor Jean-Paul Delahaye who suggested me the Peano arrangement as a packing for my visual results of halting times. And also to Cristian Calude who co-advised me all along my PhD thesis and who encouraged me to publish this paper and what better place to do so than for his festschrift. I’m releasing the images under an open licence in honour of the Turing Year, so that it may be used for any artistic illustration of some of Turing’s main results (the halting problem) or for any other purpose in any medium. Postcards of the winning image are also available upon request. Just send an email requesting 1 or more postcards to hectorz [at] labores.eu or to let him know (if you can) that you will be using any of the images (or if you need better resolution versions).
{"url":"http://www.mathrix.org/liquid/category/logic","timestamp":"2014-04-18T10:34:29Z","content_type":null,"content_length":"25908","record_id":"<urn:uuid:b25d5da2-158e-4c8a-85bd-7107f72a4b36>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00168-ip-10-147-4-33.ec2.internal.warc.gz"}
iq test if i have two tetrahedral pyramids I don't think I understand the question correctly. You said if you combine the number of balls from two equally sized tetrahedral pyramids and add 20, then you have the number of balls needed to make a pyramid one size larger. The number of balls needed for a pyramid of size n is: so the equation to solve is: to find the size of the smaller pyramids. But it has no integer solutions... Possibly, my picture of making pyramids with golfballs is wrong. Stacking them like this will not make them tetrahedral, but I don't see any other way to do it. The question is simply, "What 2 dissimilar tetrahdra (built from balls) can be combined to make a new tetrahedron, using exactly the number of balls contained in the smaller ones ?" The case of 20 balls would be a solution were this question lacking the word 'dissimilar'. It has nothing to do with the current question. Next, a tetrahedral number of height n, is the sum of the first n triangular numbers, and so, should be given by the sum [tex]\sum_{i=0}^n{i(i+1)/2}=\frac{1}{12}n(n+1)(2n+1) + \frac {n(n+1)}{4}[/tex] Triangular numbers are : 1,3,6,10,15,21,28,... Tetrahedral numbers are : 1,4,10,20,35,56,84,120,... We are looking for 2 different Tet. Numbers that add to give a third one.
{"url":"http://www.physicsforums.com/showthread.php?t=41107","timestamp":"2014-04-17T12:42:35Z","content_type":null,"content_length":"45646","record_id":"<urn:uuid:5bd90395-2ade-4426-9a6a-b86b2a5b91f5>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00615-ip-10-147-4-33.ec2.internal.warc.gz"}
Bisimulation is not Finitely (First Order) Equationally Axiomatisable This paper considers the existence of finite equational axiomatisations of bisimulation over a calculus of finite state processes. To express even simple properties such as \mu X E=\mu X E[E/X] equationally it is necessary to use some notation for substitutions. Accordingly the calculus is embedded in a simply typed lambda calculus, allowing axioms such as the above to be written as equations of higher type rather than as equation schemes. Notions of higher order transition system and bisimulation are then defined and using them the nonexistence of finite axiomatisations containing at most first order variables is shown. The same technique is then applied to calculi of star expressions containing a zero process --- in contrast to the positive result given in [FZ93] for BPA*, which differs only in that it does not contain a zero. Back to my home page.
{"url":"http://www.cl.cam.ac.uk/~pes20/lics-nonax.abstract.html","timestamp":"2014-04-19T14:31:20Z","content_type":null,"content_length":"1331","record_id":"<urn:uuid:d987f6b8-d05f-470e-b88f-d942b7661a14>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00501-ip-10-147-4-33.ec2.internal.warc.gz"}
pro-homotopy theory pro-homotopy theory Pro-homotopy theory involves the study of model category and other abstract homotopy theoretic structure on pro-categories of spaces or simplicial sets. (The term can also be used for any extension of homotopical structures for a category $C$ to the corresponding $Pro(C)$.) and is closely related to profinite homotopy theory. The homotopy theory of simplicial profinite spaces has been explored by Fabien Morel and Gereon Quick. For Morel’s theory see • F. Morel, Ensembles profinis simpliciaux et interprétation géométrique du foncteur $T$, Bull. Soc. Math. France, 124, (1996), 347–373, A reference to Quick’s work is in • G. Quick, Profinite homotopy theorypdf but a correction to an error in the proof of the main result was included in • G. Quick, Continuous group actions on profinite spaces, J. Pure Appl. Algebra 215 (2011), 1024-1039. For one of the earliest model structures, namely the strict model structure on $Pro(C)$, see • D.A. Edwards and H. M. Hastings?, (1976), Čech and Steenrod homotopy theories with applications to geometric topology, Lecture Notes in Maths. 542, Springer-Verlag, pdf More recent contributions include: Revised on November 26, 2013 07:49:34 by Tim Porter
{"url":"http://ncatlab.org/nlab/show/pro-homotopy+theory","timestamp":"2014-04-16T07:15:31Z","content_type":null,"content_length":"16681","record_id":"<urn:uuid:70173a02-0086-463d-abcf-233a8df63594>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00060-ip-10-147-4-33.ec2.internal.warc.gz"}
Last Class Help!! June 2nd 2008, 09:09 AM #1 Jun 2008 Last Class Help!! I'm taking my last class for my bachelors and of'course it's my worst class, trigonometry. Can anybody help me with this problem? "Show the counter example that the following equation is not an identity?" (sin x+ cos x)2=sin2x+cos2x (The "2"s are squares) Thanks for the help! You just need a counter-example? Many examples work really. Chances are if you pick a random value for x off the top of the head, you would have a counter example. For example, $x = \frac{\pi}{3} $\left[ \sin \left(\frac{\pi}{3}\right) + \cos \left(\frac{\pi}{3}\right) \right]^{2} = \left(\frac{\sqrt{3}}{2} + \frac{1}{2}\right)^{2} = ... eq 1$ (since $\sin^{2} x + \cos^{2} x = 1$) As you may have figured out; math is not my favorite or best subject. Being out of school for 30+ years doesn't help either. I think I'll be back several more times for help. June 2nd 2008, 09:23 AM #2 June 2nd 2008, 11:28 AM #3 Jun 2008
{"url":"http://mathhelpforum.com/trigonometry/40380-last-class-help.html","timestamp":"2014-04-17T14:46:15Z","content_type":null,"content_length":"34996","record_id":"<urn:uuid:4581ee8d-9392-4498-b6dc-fd13664fe672>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00349-ip-10-147-4-33.ec2.internal.warc.gz"}
Radioactive Dating © 1996 Frank Steiger; permission granted for retransmission. Radiometric dating is a means of determining the "age" of a mineral specimen by determining the relative amounts present of certain radioactive elements. By "age" we mean the elapsed time from when the mineral specimen was formed. Radioactive elements "decay" (that is, change into other elements) by "half lives." If a half life is equal to one year, then one half of the radioactive element will have decayed in the first year after the mineral was formed; one half of the remainder will decay in the next year (leaving one-fourth remaining), and so forth. The formula for the fraction remaining is one-half raised to the power given by the number of years divided by the half-life (in other words raised to a power equal to the number of half-lives). If we knew the fraction of a radioactive element still remaining in a mineral, it would be a simple matter to calculate its age by the formula log F = (N/H)log(1/2) where: F = fraction remaining N = number of years and H = half life. To determine the fraction still remaining, we must know both the amount now present and also the amount present when the mineral was formed. Contrary to creationist claims, it is possible to make that determination, as the following will explain: By way of background, all atoms of a given element have the same number of protons in the nucleus; however, the number of neutrons in the nucleus can vary. An atom with the same number of protons in the nucleus but a different number of neutrons is called an isotope. For example, uranium-238 is an isotope of uranium-235, because it has 3 more neutrons in the nucleus. It has the same number of protons, otherwise it wouldn't be uranium. The number of protons in the nucleus of an atom is called its atomic number. The sum of protons plus neutrons is the mass number. We designate a specific group of atoms by using the term "nuclide." A nuclide refers to a group of atoms with specified atomic number and mass number. Potassium-Argon dating: The element potassium (symbol K) has three nuclides, K39, K40, and K41. Only K40 is radioactive; the other two are stable. K40 can decay in two different ways: it can break down into either calcium or argon. The ratio of calcium formed to argon formed is fixed and known. Therefore the amount of argon formed provides a direct measurement of the amount of potassium-40 present in the specimen when it was originally formed. Because argon is an inert gas, it is not possible that it might have been in the mineral when it was first formed from molten magma. Any argon present in a mineral containing potassium-40 must have been formed as the result of radioactive decay. F, the fraction of K40 remaining, is equal to the amount of potassium-40 in the sample, divided by the sum of potassium-40 in the sample plus the calculated amount of potassium required to produce the amount of argon found. The age can then be calculated from equation (1). In spite of the fact that it is a gas, the argon is trapped in the mineral and can't escape. (Creationists claim that argon escape renders age determinations invalid. However, any escaping argon gas would lead to a determined age younger, not older, than actual. The creationist "argon escape" theory does not support their young earth model.) The argon age determination of the mineral can be confirmed by measuring the loss of potassium. In old rocks, there will be less potassium present than was required to form the mineral, because some of it has been transmuted to argon. The decrease in the amount of potassium required to form the original mineral has consistently confirmed the age as determined by the amount of argon formed. Carbon-14 dating: See Carbon 14 Dating in this web site. Rubidium-Strontium dating: The nuclide rubidium-87 decays, with a half life of 48.8 billion years, to strontium-87. Strontium-87 is a stable element; it does not undergo further radioactive decay. (Do not confuse with the highly radioactive isotope, strontium-90.) Strontium occurs naturally as a mixture of several nuclides, including the stable isotope strontium-86. If three different strontium-containing minerals form at the same time in the same magma, each strontium containing mineral will have the same ratios of the different strontium nuclides, since all strontium nuclides behave the same chemically. (Note that this does not mean that the ratios are the same everywhere on earth. It merely means that the ratios are the same in the particular magma from which the test sample was later taken.) As strontium-87 forms, its ratio to strontium-86 will increase. Strontium-86 is a stable element that does not undergo radioactive change. In addition, it is not formed as the result of a radioactive decay process. The amount of strontium-86 in a given mineral sample will not change. Therefore the relative amounts of rubidium-87 and strontium-87 can be determined by expressing their ratios to strontium-86: Rb-87/Sr-86 and Sr87/Sr-86 We measure the amounts of rubidium-87 and strontium-87 as ratios to an unchanging content of strontium-86. Because of radioactivity, the fraction of rubidium-87 decreases from an initial value of 100% at the time of formation of the mineral, and approaches zero with increasing number of half lives. At the same time, the fraction of strontium-87 formed increases from zero and approaches 100% with increasing number of half-lives. The two curves cross each other at half life = 1.00. At this point the fraction of Rb87 = Sr87 = 0.500. At half life = 2.00, Rb87 = 25% and Sr87 = 75%, and so on. These curves are illustrated in Fig 17.2, p. 131, Strahler, Science and Earth History: Points are taken from these curves and a plot of fraction Sr-87/Sr-86 (as ordinate) vs. Rb-87/Sr-86 (as abscissa) is made. It turns out to be a straight line with a slope of -1.00. The corresponding half lives for each plotted point are marked on the line and identified. It can be readily seen from the plots that when this procedure is followed with different amounts of Rb87 in different minerals, if the plotted half life points are connected, a straight line going through the origin is produced. These lines are called "isochrons". The steeper the slope of the isochron, the more half lives it represents. When the fraction of rubidium-87 is plotted against the fraction of strontium-87 for a number of different minerals from the same magma an isochron is obtained. If the points lie on a straight line, this indicates that the data is consistent and probably accurate. An example of this can be found in Strahler, Fig 17.5, page 133: If the strontium-87 isotope was not present in the mineral at the time it was formed from the molten magma, then the geometry of the plotted isochron lines requires that they all intersect the origin, as shown in figure 17.3. However, if strontium 87 was present in the mineral when it was first formed from molten magma, that amount will be shown by an intercept of the isochron lines on the y-axis, as shown in Fig 17.5 above. Thus it is possible to correct for strontium-87 initially present. Comparing figures 17.2 and 17.3, it is obvious that the steeper the slope of the isochron line, the greater the number of half lives, and the older the sample. The age of the sample can be obtained by choosing the origin at the y intercept. In Fig 17.5 the isochron line has a slope of 0.005/0.105 = 0.048 and intersects the Sr87 axis at 0.699 = y intercept. Note that the amounts of rubidium 87 and strontium 87 are given as ratios to an inert isotope, strontium 86. However, in calculating the ratio of Rb87 to Sr87, we can use a simple analytical geometry solution to the plotted data. Again referring to Fig. 17.3, the slope of the strontium-87/rubidium-87 line is -1, and y = 1-x. Therefore, with the origin placed at the y intercept, the intersection of the Rb/Sr line and the isochron line can be obtained by solving the equation 1-x = 0.048x, giving the result x = 0.954, which is the rubidium-87/strontium-86 ratio corresponding to the given isochron line.(The corresponding strontium-87/strontium-86 ratio is 1.000 - 0.954 = 0.046) Thus the fraction of Rb87 decayed is 0.954. Since the half-life of Rb87 is 48.8 billion years, we can substitute in the half-life equation: 0.954 = (1/2) raised to the power (age/48.8), where age = age in billions of years. Therefore: log(.954) = (age/48.8)(log 1/2). This solves to age = 3.3 billion years. When properly carried out, radioactive dating test procedures have shown consistent and close agreement among the various methods. If the same result is obtained sample after sample, using different test procedures based on different decay sequences, and carried out by different laboratories, that is a pretty good indication that the age determinations are accurate. Of course, test procedures, like anything else, can be screwed up. Samples can be contaminated and/or improperly prepared. Mistakes can be made at the time a procedure is first being developed. Creationists seize upon any isolated reports of improperly run tests and try to categorize them as representing general shortcomings of the test procedure. This like saying if my watch isn't running, then all watches are useless for keeping time. Creationists also attack radioactive dating with the argument that half-lives were different in the past than they are at present. There is no more reason to believe that than to believe that at some time in the past iron did not rust and wood did not burn. Furthermore, astronomical data show that radioactive half-lives in elements in stars billions of light years away is the same as presently On pages 358 and 359 of The Genesis Flood, creationist authors Whitcomb and Morris present an argument to try to convince the reader that ages of mineral specimens determined by radioactivity measurements are much greater than the "true" (i.e. Biblical) ages. The mathematical procedures employed are totally inconsistent with reality. Henry Morris has a PhD in Hydraulic Engineering, so it would seem that he would know better than to author such nonsense. Apparently, he did know better, because he qualifies the exposition in a footnote stating: This discussion is not meant to be an exact exposition of radiogenic age computation; the relation is mathematically more complicated than the direct proportion assumed for the illustration. Nevertheless, the principles described are substantially applicable to the actual relationship. Morris states that the production rate of an element formed by radioactive decay is constant with time. This is not true, although for a short period of time (compared to the length of the half life) the change in production rate may be very small. Radioactive elements decay by half-lives. At the end of the first half life, only half of the radioactive element remains, and therefore the production rate of the element formed by radioactive decay will be only half of what it was at the beginning. The authors state on p. 358: If these elements existed also as the result of direct creation, it is reasonable to assume that they existed in these same proportions. Say, then, that their initial amounts are represented by quantities of A and cA respectively. [c being the ratio of the initial amounts of the two elements at the moment of "creation."] Now if at some time the incidence of environmental radiation is increased, both [decay] rates will be increased in roughly these same proportions; assume that both are multiplied by a factor k and that the increased rates persist throughout a length of time T'. Prior to this time the normal rates applied and persisted, say, for a time T^o, and following this period they applied again for a time of T^*. Morris makes a number of unsupported assumptions: (1) His basic equation states that the amount of a daughter element formed = A + R(T), where A = amount existing at the "moment of creation," T = time from creation, and R is an unchanging rate of formation. An unchanging value of R requires that the rate of decay is constant with time, meaning that if, for example, 1% of the element decays in a year's time, at the end of a hundred years it will be all gone. This is not correct; radioactive elements decay by half lives, as explained in the first paragraphs of this post. (2)He stated that "Environmental radiation" can change the rate of decay of radioactive elements. There is absolutely no evidence to support this assumption, and a great deal of evidence that electromagnetic radiation does not affect the rate of decay of terrestrial radioactive elements. (3)He postulates that the environmental radiation would spontaneously manifest itself, and at a later time, spontaneously disappear. (4)He assumes that the environmental radiation would penetrate the earth's crust, with no diminution in intensity, and affect all radioactive elements in the same way and to the same degree, but without affecting any living things that might be present. He sums it up with the equations: A + R(T^o) + k(R)(T') + R(T^*) = Quantity of first element, and cA + cR(T^o) + k(cR)(T') + cR(T^*) = Quantity of second element. He then calculates an "age" for the first element by dividing its quantity by its decay rate, R; and an "age" for the second element by dividing its quantity by its decay rate, cR. It's obvious from the above two equations that the result shows the same age for both elements, which is: A/R + T^o + k(T') + T^*. Since the actual age would necessarily be T^o + T' + T^*, Morris concludes that the age determined by radioactive measurements is necessarily greater than the true age. Of course, the mathematics are completely wrong. The correct relation can obtained by rearranging the equation given at the beginning of this post: the number of years N corresponding to a rate of decay (properly expressed as the half-life = H) is: N = H[log(F)/log(1/2)] Where F = fraction of original element remaining. For a half life of 1000 years, the following table shows the fraction remaining for various time periods: Fraction remaining: 0.9 0.7 0.6 0.5 0.3 0.1 Corresponding number of years: 152 514 737 1000 1737 3322 By way of contrast, the following table displays the incorrect values calculated on the basis of the Morris straight line relationship: amount = A + R(T) Morris identifies the rate of daughter element production as R, with no reference to the effect of the amount of parent element on the rate R. In all his mathematics, R is taken as a constant value. We may therefore set R as equal to the initial rate in the above table: R = 0.1/152 = 0.000658 Calculating, using the Morris equation: amount formed = R(T), the amount of element (expressed as a fraction of the amount of the parent element) formed for the same time periods in the above table Number of years: 152 514 737 1000 1737 3322 Fraction formed: 0.1 0.34 0.48 0.66 1.14 2.18 Fraction remaining: 0.9 0.66 0.52 0.34 (-)0.14 (-)1.18 Morris' equations would indicate that after 1520 years the amount of parent element would be completely gone, but the daughter element would nevertheless continue to be formed! Click on the web site of Dr. Roger Wiens of Cal Tech for a detailed analysis of the accuracy of radioactive dating. Additional information is also available in talk.origins faqs: Isochron Dating and Age of Earth
{"url":"http://www.fsteiger.com/radioact.html","timestamp":"2014-04-18T23:14:39Z","content_type":null,"content_length":"17576","record_id":"<urn:uuid:cdf4f7d2-02a3-4216-9b78-a94f7973a6da>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00623-ip-10-147-4-33.ec2.internal.warc.gz"}
Euless Prealgebra Tutor Find an Euless Prealgebra Tutor ...Since then I have held a number of positions that have required me to be an expert in a subject. Whether you are in elementary, secondary or college, my knowledge and skill set will be a valuable asset in preparing you for a successful and productive academic career! Outside of classes I took, I have a lot of experience with genetics in the practical setting of the laboratory. 30 Subjects: including prealgebra, reading, chemistry, English ...This continued through my years as a graduate student at Duke University where I was part of the team that helped tutor student athletes. I am certified in Secondary Math in the state of Texas and have taught Algebra I, Geometry, Algebra II and Math Models in the Irving ISD. In that capacity I ... 82 Subjects: including prealgebra, English, chemistry, calculus ...While in college, I spent 3 years tutoring high school students in math, from algebra to AP Calculus. I also tutored elementary students in reading and spent 6 months homeschooling first and third grade. When I was in high school, I would help my classmates in every subject from English to government to calculus. 40 Subjects: including prealgebra, chemistry, reading, calculus ...I love working with people of different ages and personalities. Overall, I love being able to help others excel and help them learn in new ways.I passed Algebra 1 with a 97 my freshman year and am very skilled in math as a whole. I passed Pre-AP Algebra 2 with a 96 my sophomore year and am very skilled in math as a whole. 17 Subjects: including prealgebra, physics, French, algebra 2 ...I had to manage the classroom, teach some group and mostly individual lessons, correct student work, motivate students to complete their assignments, and keep track of at least 3 different grades' progress simultaneously. Report cards and parent conferences were 2 or 3 times per year, but extra ... 9 Subjects: including prealgebra, reading, geometry, algebra 1
{"url":"http://www.purplemath.com/euless_prealgebra_tutors.php","timestamp":"2014-04-20T02:26:11Z","content_type":null,"content_length":"24116","record_id":"<urn:uuid:f6e49905-c710-4913-9281-bf8fc6206f75>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00235-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: Unsophisticated Query From: William F. Hammond <hammond@csc.albany.edu> Date: 18 Apr 2002 15:05:06 -0400 To: www-math@w3.org Message-ID: <i7r8lcg74t.fsf@pluto.math.albany.edu> Paul Topping <PaulT@dessci.com> writes: > > And how accessible would they be to people who rely on text readers? > Not very. However, if someone went through the resulting page adding ALT > attributes with some kind of text for the equation (e.g. "x^2 + 3"), that > might help. I've not tried to do this, though. Clearly, if the math is very > complicated it isn't going to work well. I'm not sure what "text readers" means. If it means a web browser such as "lynx" that performs through vt100-level terminal connections, I've not seen much done with MathML in that class of browsers so far. Given the terminal condition, however, it does strike me as sensible that such a browser undertaking to render MathML might reasonably do so with pseudo TeX, using braces to remove ambiguity. Another approach would be "ascii art" of the type used in a terminal interface for a computer algebra program. If such a browser handles CSS, then a user might consider providing an early-in-the-cascade style sheet, and content MathML might be more easily handled that way than presentation MathML since the former markup is closer (both on the author side and on the reader side) to meaningful human mathematical thought. -- Bill Received on Thursday, 18 April 2002 15:05:11 GMT This archive was generated by hypermail 2.2.0+W3C-0.50 : Saturday, 20 February 2010 06:12:51 GMT
{"url":"http://lists.w3.org/Archives/Public/www-math/2002Apr/0085.html","timestamp":"2014-04-19T07:08:09Z","content_type":null,"content_length":"8857","record_id":"<urn:uuid:62b077b5-8b2c-43a0-87ee-bb227facfd03>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00050-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Part a.) Find the slope of the line that contains the points, (1, 2) and (2, 5). (5 points) Part b.) Using the slope that you found in part a and one of the given points, write the equation of the line. Please write your final answer in slope-intercept form. (5 points) • one year ago • one year ago Best Response You've already chosen the best response. K bro, if you plot these on the line, then do rise over run, you will find the slope. Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/5137bb3ae4b01c4790d02c8a","timestamp":"2014-04-19T15:14:42Z","content_type":null,"content_length":"27782","record_id":"<urn:uuid:9d02206d-820f-445a-b245-aea6808e1f54>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00504-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: One ship is sailing south at a rate of 5 knots, and another is sailing east at a rate of 10 knots. At 2 P.M. the second ship was at the place occupied by the first ship one hour before. at what time was the distance between the ships not changing.? • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/5067cefbe4b06e532421a845","timestamp":"2014-04-17T21:38:57Z","content_type":null,"content_length":"136513","record_id":"<urn:uuid:14a6d742-2756-4931-ae7c-47788e8e6a19>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00568-ip-10-147-4-33.ec2.internal.warc.gz"}
Assumption and Hypothesis An assumption is a belief that forms one of the bases for the research. This belief is not to be tested or supported with empirical data. Very often belief is not stated in a research proposal. A hypothesis is a tentative answer to a research question. Where can a hypothesis be derived from? a. from the observation before the research is conducted. This is called inductive hypothesis b. from the theory. This is called deductive hypothesis. It does not matter how you derive it, but it must: (a) state relationship between variables, (b) be testable–remember the operational definition, (c) be consistent with the existing theory/ knowledge, and (d) be simple and concise. There are two types of hypothesis: research hypothesis and null hypothesis. Research hypothesis. A research hypothesis is usually developed from experience, literature or theory, or combination of these. This is the expected relationship between variables. Pak Sigit wants to research the relation between debating activities and argumentative writing skills. First of all, Pak Sigit believes (assumes) that in debating activities, students use their logical capacity to build an argument to support the topic of the debate. Similarly, in writing argumentative writing, students also need logical capacity to build arguments to support the topic of their writings. In short, Pak Sigit’s assumption is the skills obtained in debating capacities can be transfered into writing ability. After reading and observing the phenomenon, he develops a research hypothesis, “there is a positive correlation between students’ debating activities and argumentative writing skills.” This kind of hypothesis is called directional hypothesis. Another type of hypothesis is non-directional hypothesis. Example: there is a correlation between students’ debating activities and writing skills. Null hypothesis. A null hypothesis is the one that states NO relationship between varibales. The function is to let the research test the hypothesis statistically. Please make the null hypothesis of the above research hypothesis. 1. Make the null hypothesis 2. select the suitable research method 3. do research and measure the result 4. analyze the data and test the hypothesis statistically Please write your research question, the research hypothesis and null hypothesis. Write them here as a comment to this post. Write also your class after your name at the end of the comment. Ary, Donald, et al. 2002. Introduction to Research in Education. Belmont: Wadswroth Group 115 Comments 1. Research Problem : What is the effects of using language laboratory to the listening skill of students of SMA Negeri 1 Trawas, Mojokerto. Research Hypothesis: There is a positive correlation of using language laboratory to the listening skill of students of SMA N 1 Trawas, Mojokerto. Null Hypothesis : There is no correlation between the using of language laboratory to the listening skill of students of SMA N 1 Trawas, Mojokerto. Laily Indah Hariyani Post Graduate Program_UNISMA NPM: 2091040075 □ Laily, Ooops, sorry, Laily …. your hypotheses are not in line with your research problem. :-) ☆ ok I will Change the problem what about this sir, Research Problem : Is there an effects of using language laboratory to the listening skill of students of SMA Negeri 1 Trawas, Mojokerto. Research Hypothesis: There is a positive correlation of using language laboratory to the listening skill of students of SMA N 1 Trawas, Mojokerto. Null Hypothesis : There is no correlation between the using of language laboratory to the listening skill of students of SMA N 1 Trawas, Mojokerto. Pls give me a comment Laily Indah Hariyani Post Graduate Program_UNISMA NPM: 2091040075 ☆ Laily, The problem is that in your research question, you talk about EFFECT. In your hypothesis you talk about CORRELATION. They are not the same. Research question and hypothesis MUST talk about the same things as hypothesis is the temporary answer to the question. Pls revise again. 2. Reseach Problem Is there an Effect of Students’ achievement of joining the Intensive Course on _*English Subject*_ of the third students of SMAN 1 Trawas in the 2009/2010 Academic Year Reseach Hypothesis there is an Effect of Students’ achievement of joining the Intensive Course on _*English Subject*_ of the third students of SMAN 1 Trawas in the 2009/2010 Academic Year Null Hypothesis there is not an Effect of Students’ achievement of joining the Intensive Course on _*English Subject*_ of the third students of SMAN 1 Trawas in the 2009/2010 Academic Year Tutik Muniroh Post Graduate Program_UNISMA NPM: 2091040076 Dear Tutik, Probably the problem is better revised into: Is there an Effect of Students’ joining the Intensive Course on the English Subject achievement of the third students of SMAN 1 Trawas in the 2009/2010 Academic Year Please revise the hypothesis accordingly. What do you think? □ Research Problem Is there an Effect of Students’ joining the Intensive Course on the English Subject achievement of the third students of SMAN 1 Trawas in the 2009/2010 Academic Year? Reserch Hypothesis There is an Effect of Students’ joining the Intensive Course on the English Subject achievement of the third students of SMAN 1 Trawas in the 2009/2010 Academic Year Null Hypothesis There is no Effect of Students’ joining the Intensive Course on the English Subject achievement of the third students of SMAN 1 Trawas in the 2009/2010 Academic Year Tutik Muniroh Post Graduate Program_UNISMA NPM: 2091040076 This is my revision. What do you think? Is it correct? please give your comment. Thanks a lot Thanks, Tutik. I think it is better now. 3. Reseach Problem Is there an Influence of homestay’s distance of new students in PPS-UNISMA on Quantitative Reseach Subject achievement in the 2009/2010 Academic Year? Research Hypothesis There is an Influence of homestay’s distance of new students in PPS-UNISMA on Quantitative Reseach Subject achievement in the 2009/2010 Academic Year? Null Hyphothesis There is no Influence of homestay’s distance of new students in PPS-UNISMA on Quantitative Reseach Subject achievement in the 2009/2010 Academic Year? Post Graduate Program_UNISMA NPM: 2091040074 Dear Rudi, This is a logical problem. But, do you think it has great significance? 4. Research Problem: Is there any Effect of hearing English music on the listening skill of the second students of SMA Al Munawwariyyah in the 2009/2010 Academic year? Research Hypothesis: There is an Effect of hearing English music on the listening skill of the second students of SMA Al Munawwariyyah in the 2009/2010 Academic year. Null Hyphothesis: There is no Effect of hearing English music on the listening skill of the second students of SMA Al Munawwariyyah in the 2009/2010 Academic year. Hanifatus Sa’diyah Post Graduate Program_UNISMA NPM: 2091040062 Dear Hanif, This is good. 5. I’m not sure that the problem has great significance. In my mind, I just want to give the newest problem in our class. My be, there is interesting one related of different home stay of the students. because the class has come from multiple regencies and different provinces. What do you think? I’ve to change my research problem? or may be any other solution? give your comment. Thank you for your attention. I’ll always waiting your critic, comment and solution. Dear Rudi, Since our major is English Teaching, I hope you have a research problem related to English teaching or applied linguistics in general. 6. Research Problem : What is the effects of students’ ability to speak English on their confidence of SMA Negeri 1 Gondang, Mojokerto. Research Hypothesis: There is an effects of student’s ability to speak english on their confidence of SMA Negeri 1 Gondang, Mojokerto. Null Hypothesis : There is no effects of student’s ability to speak english on their confidence of SMA Negeri 1 Gondang, Mojokerto. Lilik Indayani Post Graduate Program_UNISMA NPM: 2091040073 Dear Lilik, Yes, this is OK. But what kind of confidence do you mean? Further, what do you mean by “confidence of SMA Negeri 1 …” □ Dear Mr Sugeng….. i mean that “student’s confidence of SMA Negeri 1 Gondang, Mojokerto ” confidence in this case about confidence to do anything, eg: confidence to speak english with their teacher, confidence to speak english with foreigner, confidence to show on their ability in front of the audience,etc. please, give your comment !!! Dear Lilik. Ok, then. The next step is read the related literature to see whether it is logical to relate the variable. After that, you can revise if necessary or go on with the hypothesis + designs. ☆ Dear Mr Sugeng…. I find the difficulties in my research problem, because I can’t do research design well, exactly in procedures or steps how to do this design. my research problem: “What is the effects of students’ ability to speak English on their confidence of SMA Negeri 1 Gondang, Mojokerto”. so, I must give my variable a treatment with give speaking class and what can i do to get their score? I hope you can help me to solve my problem sir…. thank you…. ☆ Dear Lilik, This is better done with “ex-post facto research design”. Please read my posting on the research design. With the design, you can start by classifying students into two groups. The groups are students with high ability to speak English and students with low ability to speak English. You can identy them by looking at the available data or test them. Then, see whether the group have the same confidence or not. To arrive at the confidence scores, you need to find the theory about “confidence”, then give operational definition and make the descriptors to score their confidence. With this design, it seems that it is difficult to claim that the ability affects Ss’ confidence. Only after you find very convincing literature about the possible effect of soeaking ability on confidence you can claim the causal relation in your research. Otherwise, it is a correlational research, though. Now, you can think it over whether you want to go on with this idea or find another research problem. As for me, I think this current idea is not a bad idea. 7. Reserach Problem: What is the effect of teaching to understand the questions on the reading comprehension achievements of the third year students of junior high school? Research Hypothesis: The third year students of junior high school will make higher achievements of the reading comprehension when they are taught to understand the questions. Null Hypothesis: There is no effect of teaching to understand the questions on the reading comprehension achievements of the third year students of junior high school. Post Graduate Program_UNISMA NPM: 2091040002 Dear Amroji, Yes, this is a good research problem. □ I have no comment on my hypothesis. Does it mean that I can proceed to the next assignment? ☆ Yes, you can continue to the design. 8. RESEARCH QUESTION : Is there any relationship between grammar and vocabulary mastery and reading comprehension of the third-year students of SMA Laboratorium UM. RESEARCH HYPOTHESIS: The relationship of grammar and vocabulary mastery with reading comprehension of the third-year students of SMA Laboratorium UM. NULL HYPOTHESIS: There is no relationship between grammar and vocabulary mastery and reading comprehension of the third-year students of SMA Laboratorium UM. ROSDIANA AMINI GRADUATE PROGRAM-UNISMA NPM : 209140006 Dear Rosdiana, This is an interesting research problem. But, please revise your research hypothesis. 9. RESEARCH QUESTION: Do the second-year students of the state of senior high school of Turen I taught by laissez-faire teachers show higher problem solving skills than those taught by authoritarian teachers? RESEARCH HYPOTHESIS: The second-year students of the state of senior high school of Turen I taught by laissez- faire teachers show higher problem solving skills than those taught by authoritarian teachers. NULL HYPOTHESIS: There is no difference between the second-year students of the state of senior high school of Turen I taught by laissez- faire teachers show higher problem solving skills than those taught by authoritarian teachers. GRADUATE PROGRAM-UNISMA NPM: 2091040001 Dear Mahrus, This is a good research problem. However, your null hypothesis needs to be revised. Pls revise it :-) There is a grammatical problem in it. 10. Research Problem : What is the influence of the using of Transition Signals on Procedure text of the first year students of SMA Negeri 1 Trawas, Mojokerto. Research Hypothesis: There is a positive influence of the using of Transition Signals on Procedure text of the first year students of SMA Negeri 1 Trawas, Mojokerto. Null Hypothesis : There is no influence of the using of Transition Signals on Procedure text of the first year students of SMA Negeri 1 Trawas, Mojokerto. I’ve change my Research Prolem and Hypothesis related of English Teaching. Now, please you revise my problem and my hypothesis. I’ll waiting on your comment. And I’ve considered that this problem will be my topic in my next thesis. Thanks a lot. Post Graduate Program_UNISMA NPM: 2091040074 Dear Rudi, “What is the influence of the using of Transition Signals on Procedure text of the first year students of SMA Negeri 1 Trawas, Mojokerto”. Do you mean “procedure text quality”? If yes, it is ok. If no, please revise. 11. How does the student’s ability to comprehend English text in reading text? Dear Arief Wahyudi? This research problem is not very clear. “How dows…” ask about a manner. Howver, it is followed by “students ability to ….” Please revise it. How many variables do you want to study? □ how does the student’s ability to comprehend english text in reading skill at smp 1 bangkalan? Moh. arief wahyudi PPS Unisma ☆ sorry sir, I add my hypothesis because it wasnot complete research hypothesis there is siqnificant correlation to comprehend english text in reading skill for the student’ ability null hypothesis there is no moh arief wahyudi PPs Unisma 2091040034 ☆ Dear Arief, Your research hypothesis is ok. Your null hypothesis is not complete yet :-) ☆ Dear Arief, What is it? :-) Research question? If yes, this is not a good research question. The question is not focused on particular relation between two variables. 12. is the mother tongue influence the writing skill used by post graduate student of UNISMA 2009? Dear Abdul Latief, Your research question is not clear. “Is ….. influence”? Do you mean “Does ….. influence..?” “Mother tongue” is very broad. Which aspect of mother tongue? 13. Dear Sir, This is my revised research problem: Do the students of SMAN 1 Malang who frequently read text written by native speaker enable them to minimize the first- language -influenced errors in writing? Research hypothesis: The students who frequently read text written by native speaker can minimize the first-language-influenced errors in writing. Alternative hypothesis: The students who frequently read text written by native speaker cannot minimize the first-language-influenced errors in writing. PPS UNISMA Dear Suprapto, This is a good research problem and hypotheses. However, your research problem need revision again. What do you think if I revise as follows: “Do SMAN 1 Malang students’ frequent readings of texts written by native speakers minimize the first- language -influenced errors in writing?” 14. I am sorry for not stating the null hypothesis, the following is the null hypothesis: there is no difference between students who frequently read text written by native speaker and students who infrequently read text written by native speaker in minimizing the first-language-influenced errors in writing. OK, good! 15. Dear sir, Here is my research question we already discussed last week in the classroom. You commented that there was no problem about it. Research Question: What is the effect of peer feedback on the writing performance of language class students of Public Senior High School 6 of Malang? Research Hypothesis : There is an effect of peer feedback on the writing performance of language class students of Public Senior High School 6 of Malang. Null Hypothesis: There is NO effect of peer feedback on the writing performance of language class students of Public Senior High School 6 of Malang. I am looking forward for your comment. Thank you. Yoyok Agus Dwi Irawan NPM: 2091040013 Post Graduate Program Islamic University of Malang Master of English Education Yes, this is great, Mas Yoyok. 16. Research Problem: What is the effect of picture media in teaching narrative-writing skill on the English achievement of SMP N 1 Gempol third graders? Research Hypothesis : There is the effect of picture media in teaching narrative-writing skill on the English achievement of SMP N 1 Gempol third graders Null Hypothesis: There is no the effect of picture media in teaching narrative-writing skill on the English achievement of SMP N 1 Gempol third graders OK, good! By the way, is it the general English achievement or writing English achievement? □ sory sir. I meant : English writing achievement ☆ Yes, ok. 17. sorry sir. I didn’t include my identity: Name: Mukhamad Hasadollah/M.Hasadollah Post Graduate Program_Unisma NPM: 2091040030 18. Excuse me, sir. Where is the comment for my problem and my hypothesis? I’ll always waiting your comment Post Graduate Program_UNISMA NPM: 2091040074 □ Please see my comment under your posting PREVIOUSLY. Sorry. I overlooked it. 19. Research Problem: Is there any effect of watching English film on enriches vocabulary of the first semester students IKIP BUdi Utomo Research Hypothesis: There is an effect of watching Egnlish film on enriches vocabulary on the first semester stiudents IKIP Budi Utomo Null Hypothesis: There is no effect of watching English film on enriches vocabulary of the first semester students IKIP Budi Utomo Decky Yohanes Nd. Post Graduate Program_UNISMA NPM: 2091040056 Dear Decky, This is good. However, you need to revise the sentence structure. e.g. “There is an effect of watching English film on the vocabulary enrichmen of the first semester students IKIP Budi Utomo.” Revise also the hypotheses, pls. 20. Dear Sir, Here are my Research Question and the Hypotheses: Research Question: Do the students taught using Full-English Teaching Instruction have better Speaking Achievement than those under Bilingual Teaching Instruction? Research Hypothesis: The students taught using Full-English Teaching Instruction have better Speaking Achievement than those under Bilingual Teaching Instruction. Null Hypothesis: The students taught using Full-English Teaching Instruction DO NOT have better Speaking Achievement than those under Bilingual Teaching Instruction. I do hope there is no problem with it, and I’m looking forward for your comment, sir. Thank you. UUN MUHAJI Post Graduate Program of Islamic University of Malang. Master of English Education. NPM: 2091040021 Dear Uun, Yes, there is no problem with it. You can just go on to the next topic. This is great! 21. Dear sir, Yes sir, I think your suggestion is good , bcs my research problem is to wordy. tks a lot 22. Dear sir, well I think, I resent my nullhypothesis twice, as I am not sure whether or not the first nullhypothesis is delivered. According to my understanding ,null hypothesis is a prediction on two variables have no significant influences or on the other word, when two methods A and B compared on their excelence, it is assumed that they are equally good. Dear Prapto, Yes, that is right. Null hypothesis is a statement that states that there is no difference existing between two things (not necessarily variables). No difference can be equal in number, in amount, in the degree of effectiveness, etc. You sent the null hypothesis as follows: There is no difference in minimizing first-language-influenced errors between students who frequently read texts written by native speaker and students who infrequently read texts written by native speaker. There is a confusion here. Which or who minimize or not minimize? PLEASE REPLY THIS COMMENT IF WE TALK ABOUT THIS TOPIC, DON’T MAKE YOU THREAD. :-) Otherwise, it is difficult for us to trace the topic of discussion among us. □ Allright sir, how abt my first nullhypothesis, is it acceptable or not? well , if it is not acceptable, i will look into it and make revision accordingly. tks a lot sir. Which one do you mean? ☆ Which one do you mean? ☆ I mean,(There is no difference between students who frequently read text written by native speaker and students who infrequently read text written by native speaker in minimizing the first-language-influenced errors in writing.) best rgds ☆ Yes, this is right. 23. RESEARCH QUESTION: What is the effect of learning English in SAC (Self Access Center) on English proficiency of the first semester students of English Department in Airlangga University? Research Hypothesis: There is a positive correlation of learning English in SAC (Self Access Center) and English proficiency of the first semester students of English Department in Airlangga University. Null Hypothesis : There is no correlation between the learning English in SAC (Self Access Center) and English proficiency of the first semester students of English Department in Airlangga University. Laili Hibatin Wafiroh Post Graduate Program_UNISMA NPM: 2091040121 Dear Laili, The research problem and the hypothesis IS NOT in line. The problem talks about “what is the effect”, and both hypotheses talk about correlation. Please revise your research problem OR hypotheses. BTW, what do you mean by “what is …”? Do you want to see positive/negative effects or you still want to see the effects in the field and describe it later?” □ Dear Sir, Thanks for your comment, Sir. I’d like to revise my research problem. Actually, I want to investigate if there is an effect of learning English in SAC (one of media to improve English proficiency which is facilitated by the campus). Here is my revision: RESEARCH QUESTION: Is there an effect of learning English in SAC (Self Access Center) on English proficiency of the first semester students of English Department in Airlangga University? Research Hypothesis: There is an effect of learning English in SAC (Self Access Center) on English proficiency of the first semester students of English Department in Airlangga University. Null Hypothesis : There is no effect of learning English in SAC (Self Access Center) on English proficiency of the first semester students of English Department in Airlangga University. That’s all Sir. Thanks before hand. Laili Hibatin Wafiroh Post Graduate Program_UNISMA NPM: 2091040121 ☆ Dear Laily, Because a quantitative research is to test hypothesis, the problem and hypothesis must be specific. You must decide what kind of effect you will encounter, e.g. the increase of the students scores. Can you revise them again now? Thanks. ☆ Dear Sir, Thanks for the correction. I am really appreciated to receive that. Sir, i would like to change the population and sample because it affects the result of the research. Here is my new revision: RESEARCH QUESTION: Is there an effect of learning English in SAC (Self Access Center) on the increase of English scores of Indonesian Department students in Airlangga University? Research Hypothesis: There is an effect of learning English in SAC (Self Access Center) on the increase of English scores of Indonesian Department students in Airlangga University. Null Hypothesis : There is no effect of learning English in SAC (Self Access Center) on the increase of English scores of Indonesian Department students in Airlangga University. By the way, i have another research question and hypotheses. Could I send them to make comparison? I’d like to make up my mind. Here is my other research problem: RESEARCH QUESTION: Is there an effect of writing diary assignment on writing skill improvement of the first graders of Public Senior High School Sukodadi Lamongan in the 2009/2010 academic year? Research Hypothesis: There is an effect of writing diary assignment on writing skill improvement of the first graders of Public Senior High School Sukodadi Lamongan in the 2009/2010 academic year. Null Hypothesis : There is no effect of writing diary assignment on writing skill improvement of the first graders of Public Senior High School Sukodadi Lamongan in the 2009/2010 academic year. Actually, it is based on my own experience. I ask my students to write their daily activities on diary. Then, they should submit every two weeks. In your opinion, which one is better? I am looking forward to hearing from you. Big Thanks. Laili Hibatin Wafiroh Post Graduate Program_UNISMA NPM: 2091040121 Dear Laili, Your second research problem and hyupotheses are great. I like them more than the first (Airlangga Univ.) because this one is more down to earth. Yes, please go on to the designs. 24. Dear Sir, This is my revised RESEARCH QUESTION: Do the second-year students of the state of senior high school of Turen I taught by laissez-faire teachers show higher problem solving skills than those taught by authoritarian teachers? Dear Sir, RESEARCH HYPOTHESIS: The second-year students of the state of senior high school of Turen I taught by laissez- faire teachers show higher problem solving skills than those taught by authoritarian teachers. NULL HYPOTHESIS: There is no difference between the second-year students of the state of senior high school of Turen I taught by laissez- faire teachers and those taught by authoritarian teachers. GRADUATE PROGRAM-UNISMA NPM: 2091040001 Dear Mahrus, Your research problem and research hypothesis are good. Your null hypothesis is not good. It should be: Both groups of the second-year students of the state of senior high school of Turen I taught by laissez- faire teachers and taught by authoritarian teachers do not show …. 25. Dear Sir, This is my revised RESEARCH QUESTION : Is there any relationship between grammar and vocabulary mastery with reading comprehension of the third-year students of SMA Laboratorium UM. RESEARCH HYPOTHESIS: The grammar and vocabulary mastery of third-year students of SMA Laboratorium UM with their reading comprehension are closely related. NULL HYPOTHESIS: There is no relationship between grammar and vocabulary mastery with reading comprehension of the third-year students of SMA Laboratorium UM. ROSDIANA AMINI GRADUATE PROGRAM-UNISMA NPM : 2091040006 Dear Rosdiana, Your research hypothesis is not in line with your research question. Your null hypothesis is good. 26. I mean “procedure text” is one of the Genre in English. It is one of kinds of text, such as narrative text, descriptive text, explanation text, etc. I do this in order to more specific of the What do you think? Should I revise the problem and hyphothesis? what is the solution? Thanks Post Graduate Program_UNISMA NPM: 2091040074 □ Oh, this one. I think I have commented on it. The main point is “what aspect of procedure text do you mean”? Read again my comment. Below I paste my comment and your research questions and hypotheses: Research Problem : What is the influence of the using of Transition Signals on Procedure text of the first year students of SMA Negeri 1 Trawas, Mojokerto. Research Hypothesis: There is a positive influence of the using of Transition Signals on Procedure text of the first year students of SMA Negeri 1 Trawas, Mojokerto. Null Hypothesis : There is no influence of the using of Transition Signals on Procedure text of the first year students of SMA Negeri 1 Trawas, Mojokerto. I’ve change my Research Prolem and Hypothesis related of English Teaching. Now, please you revise my problem and my hypothesis. I’ll waiting on your comment. And I’ve considered that this problem will be my topic in my next thesis. Thanks a lot. Post Graduate Program_UNISMA NPM: 2091040074 Dear Rudi, “What is the influence of the using of Transition Signals on Procedure text of the first year students of SMA Negeri 1 Trawas, Mojokerto”. Do you mean “procedure text quality“? If yes, it is ok. If no, please revise. □ Rudi, My question is “what aspects of procedure text”? Is it the quality? In your research question you mention “what is the influence of XXXX on procedure text”? “Procedure text” as a name of text genre. Nothing can influence the genre. However, something can influence the quality of the text in that genre. Please let me know if you don’t understand my point. Pls revise your research problem and the hypotheses accordingly. 27. What is the effect of having laboratory class to the listening ability of SMPN 1 Lekok Pasuruan ?? Pak Bandi, do you want to do a qulitative research? If NO, the question “what is” may not be a good choice. 28. Dear Mr. Sugeng. I would like to write a Quantitative Research but I am not sure with my first research problem,so let me change it and please correct. Research Problem What is the influence of learning vocabulary to the reading comprehension achievement of the third grade students of SMP N 1 Lekok Research Hypothesis There is a significant influence of learning vocabulary to the reading comprehension achiement of the third grade students of SMP N 1 Lekok Null Hypothesis There is no significant influence of learning vocabulary to the reading comprehension achievement of the third grade students of SMP N 1 Lekok Post Graduate Program_UNISMA NPM : 2091040029 □ Dear Pak Bandi, This is OK, actually. However, it is better you read the literatue, so you have a prediction on the kind of influence. If you have this kind of research problem, it seems that this can be a qualitative research. Revise it into, for example, “Does learning vocabulary increase ….. ” 29. Dear sir, I would like to ask for my research problems based on the title “The effect of Literal Bilingual Translation Teaching Method on students’ speaking and grammar proficiency taught at public elementary students of Madrasah Ibtidaiyah AL Misbakh” Research Problem Statement 1. What is the effect of Literal Bilingual Translation Teaching Method on the students’ speaking and grammar proficiency taught at Public elementary student of Madrasah Ibtidaiyah Al Misbakh? 2. Does this method make the students progress both in speaking and grammar proficiency? are my research problems accepted to be observed in the term of quantitative research? thank you for your comment MOHAMMAD ULUR ROSYAD POSTGRADUATE PROGRAM OF UNISMA 2009 The question “what is …” is more suitable for qualitative research. Just directly ask like “Does this method improve something?” You may just have one research question. Pls, revise. and reply to this comment. □ Is it appropriate to revise the research question to be “Does this method make the students improve both their speaking and grammar proficiency?” and what about my research hypothesis and null hypothesis below? are both of them accepted or not? Research Hypothesis The students taught by using the literal bilingual teaching method will be good in both speaking and grammar. Null Hypothesis There is no significant effect of teaching students by using the literal bilingual teaching method on the improvement of their speaking and grammar proficiency. Thank you for your comment ☆ Dear Rosyad, You need to explicitly state what “this method” refer to in the research problem. For your research hypothesis, you must be aware that experiment is done to see whether there is a significant improvement or decrease. your word “good” does not imply the comparison. It is better to use words like “better” or “worse” in your formulation. Considering your research problem, the null hypothesis can be made in several versions: 1. The students taught by using the literal bilingual teaching method will not have a better speaking ability than that of the students not taught with literal bilingual teaching method. 1. The students taught by using the literal bilingual teaching method will not have a better grammar ability than that of the students not taught with literal bilingual teaching method. 1. The students taught by using the literal bilingual teaching method will not have a better speaking and grammar ability than that of the students not taught with literal bilingual teaching method. What do you think? ☆ Thank you for your comment. The method I mean is the technique of teaching in which the students are taught by using the literal bilingual teaching method. This has been done since I taught in this school. Thus, my research question will be “Does the literal bilingual teaching method make the students improve both their speaking and grammar proficiency?” in this research, I want to know the improvement of either their speaking or grammar proficiency after they have been taught by using this method. According to my mind, the last version of your suggestion is quite suitable for my null hypothesis “The students taught by using the literal bilingual teaching method will not have a better speaking and grammar proficiency than that of the students not taught with literal bilingual teaching method”. So, what do you think? Can I go further to research design? Thank you MOHAMMAD ULUR ROSYAD POSTGRADUATE PROGRAM UNISMA 2009 Yes, Pak Rosyad. Now it is much better. You can go to the design. 30. Dear sir, I would like to ask for my research problems based on the title “The effect of Literal Bilingual Translation Teaching Method on students’ speaking and grammar proficiency taught at public elementary students of Madrasah Ibtidaiyah AL Misbakh” Research Problem Statement 1. What is the effect of Literal Bilingual Translation Teaching Method on the students’ speaking and grammar proficiency taught at Public elementary student of Madrasah Ibtidaiyah Al Misbakh? 2. Does this method make the students progress both in speaking and grammar proficiency? are my research problems accepted to be observed in the term of quantitative research? thank you for your comment MOHAMMAD ULUR ROSYAD Yes, this is ok. 31. Research Problem: Is there any effect of watching English film on the vocabulary enrichmen of the first semester students IKIP Budi Utomo? Research Hypothesis: There is an effect of watching English film on the vocabulary enrichmen of the first semester students IKIP Budi Utomo Null Hypothesis: There is no effect of watching English film on the vocabulary enrichmen of the first semester students IKIP Budi Utomo Decky Yohanes Nd. Post Graduate Program_UNISMA NPM: 2091040056 I Have change become like this. Whats your opinion sir? Thanks Sir. □ OK, this is great, now. 32. Dear sir, This is my second revised. RESEARCH QUESTION : Is there any relationship between grammar and vocabulary mastery with reading comprehension of the third-year students of SMA Laboratorium UM? RESEARCH HYPOTHESIS: The relationship between grammar and vocabulary mastery with reading comprehension of the third-year students of SMA Laboratorium UM. NULL HYPOTHESIS: There is no relationship between grammar and vocabulary mastery with reading comprehension of the third-year students of SMA Laboratorium UM. Thank you for your comment. ROSDIANA AMINI GRADUATE PROGRAM-UNISMA NPM : 2091040006 □ Dear Rosdiana, The research hypothesis is better directional if you have found theory or previous research findings. However, your research hypothesis now still needs revision. It is not a complete sentence. Your null hypothesis is ok. 33. Dear Sir, This is my revised RESEARCH QUESTION: Do the second-year students of the state of senior high school of Turen I taught by laissez-faire teachers show higher problem solving skills than those taught by authoritarian teachers? RESEARCH HYPOTHESIS: The second-year students of the state of senior high school of Turen I taught by laissez- faire teachers show higher problem solving skills than those taught by authoritarian teachers. NULL HYPOTHESIS: Both groups of the second-year students of the state of senior high school of Turen I taught by laissez- faire teachers and taught by authoritarian teachers do not show higher problem solving Thanks a lot GRADUATE PROGRAM-UNISMA NPM: 2091040001 □ OK. this is great, Mahrus. Go on to the design. 34. Dear sir, this is my assigment Research problem what is the influence of English teacher’s preparation to the implementation of classroom activities on the student’s interest in studying English at junior high school in rural area ? Research hipothesis There is the influence of English teacher’s preparation to the implementation of classroom activities on the student’s interest in studying English at junior high school in rural area ? Null hipothesis There is no influence of the English teacher’s preparation to the implementation of classroom activities on the student interest in studying English at junior high school at rural area ? thank a lot PPS UNISMA Bahasa Inggris 2009 – 2010 □ Dear Samaroh, Yes, this is OK now and you can go to the design. By the way the word “to” whould be “on” if we talk about effect. 35. Dear Sir, I’d like to write my Research Question once again. ” What is the effect of applying mind mapping on fluency in delivering speech of students studying at grade X SMA Negeri 1 Pandaan-Pasuruan” To do the research,I need to have both experimental group and control group. And I have to apply treatment. That is why I design the research as True-Experimental Design with Randomized subjects, pretest-postest control group design. Based on the research question, the research hypothesis will be ; a. Directional hypothesis : There is a positive correlation between applying mind mapping on fluency in delivering speech. b. Non-directional hypothesis : There is a correlation between applying mind mapping on fluency in delivering speech. Well, the null hypothesis will be : There is no correlation between applying mind mapping on fluency in delivering speech. in case of my research question, will research hypothesis with non-directional hypothesis be best applied ? I am looking forward to your comment and advise Thank you. Arlita Dwi Amilawati □ Dear Arlyta, Your hypotheses are research problem are not in line. Your research problem is talking about the effect and your hypothesis is talking about correlation. They should be about the same thing, either effect or correlation. 36. Dear sir, This is my third revised. RESEARCH QUESTION : Is there any relationship between grammar and vocabulary mastery with reading comprehension of the third-year students of SMA Laboratorium UM? RESEARCH HYPOTHESIS: The relationship between grammar and vocabulary mastery with reading comprehension capability of the third-year students of SMA Laboratorium UM. NULL HYPOTHESIS: There is no relationship between grammar and vocabulary mastery with reading comprehension of the third-year students of SMA Laboratorium UM. Thank you for your comment. I hope this is the last revision. ROSDIANA AMINI GRADUATE PROGRAM-UNISMA NPM : 2091040006 □ Dear Rosdiana, I think your research hypothesis miss a word “There is a ….” please revise and go on to design. 37. Assalamu’alaikum sir I want to know whether “concept map” is more efficient than “using dictionary” in reading class for SMA students like the research problem I sent to you. My problem is what is the proper title of this research. thanks □ Your research problem you sent me is: - Is the using of Map Concept more effective than the using of dictionary in Reading for the students of SMA in Malang City? Sorry, this research problem is not correct. My question: “Effective for what?” Or “Effective in doing what?” Pls revise your research problem. ☆ What about this one, sir? Research Problem Is there any significant different result (achievement) of reading comprehension between a group of students of senior high school who have been taught using concept map than those using conventional method (lexical guidance)? RESEARCH HYPOTHESIS: There is a significant different result (achievement) of reading comprehension between a group of students of senior high school who have been taught using concept map than those using conventional method (lexical guidance) NULL HYPOTHESIS: There isn’t a significant different result (achievement) of reading comprehension between a group of students of senior high school who have been taught using concept map than those using conventional method (lexical guidance) Thank you very much for the reply. ☆ Dear Trisno, Yes, that is not wrong. But, the simple formulation for the research question would be: “Is there any effect of teaching using mind map on the students’ reading achievement?” 38. Dear sir, I will revise my first research question be like this: Is there any correlationship between teaching English through picture and students ability in learning vocabulary. Research hypothesis: there is correlation between teaching English through picture and students ability in learning vocabulary. Null hypothesis: There is no correlation between teaching English through picture and students ability in learning vocabulary. Thank for your correction. WIWIK AFIFATUL CHOIROH. POST GRADUATE STUDENTS-PPS UNISMA. NPM 2091040014 □ OK, great! □ based on the data before # research question: is there any correlation between teaching English through picture and students ability in learning vocabulary of the fifth year students at SDN Gempol legundi, Gudo sub Distric Jombang Regency? #research hypothesis: there is correlation between teaching English through picture and students ability in learning vocabulary of the fifth year students at SDN Gempol legundi, Gudo sub Distric Jombang Regency #null hypothesis: there is no correlation between teaching English through picture and students ability in learning vocabulary of the fifth year students at SDN Gempol legundi, Gudo sub Distric Jombang Regency #research approach: quantitative research. I will tell you about my research design 1. My research design is experimental design, because I want to know the correlation between teaching English through picture and students ability in learning vocabulary 2. Here, I will use quasi-experimental design. there is no randomization but there are experimental group and control group 3. There are two classes of the fifth year students at SDN Gempol legundi Jombang. Then, 5A class is as experimental group which get the treatment teach English through picture and 5B class is as control group which do not get the treatment. It means 5B class teach English without through picture 4. I will use pretest- posttest for two group. the illustration below: E Y1 x Y2 C Y1 – Y2 X= treatment E= experimental group C= control group Y1= pretest Y2= posttest That all my explanation about research design. thank for your comment. WIWIK AFIFATUL CHOIROH post graduate program -unisma 39. Research problem : What is the effect of memorizing narrative text to the reading skill of the second-year students of the the state of junior high school of Jombang VI Research Hypothesis : There is positive correlation of memorizing narrative text to the reading skill of the second-year students of the the state of junior high school of Jombang VI Null hypothesis : There is no correlation between memorizing narrative text to the reading skill of the second-year students of the the state of junior high school of Jombang VI Tatik Irawati Post Graduate Program-Unisma □ Dear Tatik, Sorry, your hypotheses are not suitable with your research question. Pls revise them. 40. Research problem: Is there an effect of watching English News ” Metro TV” to the students’ capability of listening of Language class students of MAN 3 Malang? Research Hypothesis: There is positive effect of watching English News ” Metro TV” to the students’ capability of listening of Language class students of MAN 3 Malang Null hypothesis: There is no effect of watching English News ” Metro TV” to the students’ capability of listening of Language class students of MAN 3 Malang Agung Setiawati Pstgraduate Program UNISMA 2009 □ All of these are OK. My question: ho will you control the other variables that may influence the dependence variable? ☆ thank you very much sir for your comment. Let me answer your reply that to control the other variables which influence the dependence variable is by taking notes what factor that may influence them. For example: the students have good listening skill because they used to listen western song music. It could be an additional information for me to recognize another independent variables as supporting data. That’s my answer. I really hope for your next comment and thank you very much Agung Setiawati Postgraduate English Education Program UNISMA 41. Dear sir, Research Problem The effect of using environment technique to the students achievement in learning vocabulary. Research Hypothesis There is positive effect of using environment technique to the students achievement in learning vocabulary. Null hypothesis There is no positive effect of using environment technique to the students achievement in learning vocabulary. Lilis Rahmawati Post Graduate Program_UNISMA NPM: 2091040036 □ OK …. the null hypothesis should be : “there is NO EFFECT … ” The other hypothesis is OK. 42. Dear sir, Thanks for your comment and I will change it based on your advice. RESEARCH QUESTION : Is there any relationship between grammar and vocabulary mastery with reading comprehension of the third-year students of SMA Laboratorium UM? RESEARCH HYPOTHESIS: There is a relationship between grammar and vocabulary mastery with reading comprehension capability of the third-year students of SMA Laboratorium UM. NULL HYPOTHESIS: There is no relationship between grammar and vocabulary mastery with reading comprehension of the third-year students of SMA Laboratorium UM. ROSDIANA AMINI GRADUATE PROGRAM-UNISMA NPM : 2091040006 □ OK, thanks Bu Rosdiana. However, the the word “between” must be accompanied with the word “to”. For example: “the difference between girls and boys are …” Would you please revise your problem and hypothesis formulation? You can go to the desigb, then. 43. Research problem: Is there any effect of domino game on vocabulary mastery of the third graders of SDN 3 Ngantru Trenggalek ? Research Hypothesis: There is an effect of domino game on vocabulary mastery of the third graders of SDN 3 Ngantru Trenggalek Null Hypothesis: There is no effect of domino game on vocabulary mastery of the third graders of SDN 3 Ngantru Trenggalek Astried Damayanti Post Graduate Program_UNISMA □ OK. Great. Go to the design, please. 44. thank you for your comment sir. I think I should learn more to understand the difference between constructivism and positivism 45. Research problem: ” Do children who fond of reading also have good writing ability?” Research hypothesis There is correlation between fond of reading and writing ability/skill on children. Null hypoithesis There is no correlatiopn between fond of reading and writing ability/skill on children. Dina Kartikawati Post Graduate Program UNISMA NPM 2091040047 □ Yes. This is OK. However, it is better to put your research questionin the formulation of “is there any correlation between ….” Your research question formulation now does not use the jargon in research mthodology, but the everyday words. ☆ You mean my research question should: “Is there any correlation between fond of reading and good writing ability in children?” ☆ Yes, Dina. NOT: “Is there any correlation between fond of reading and good writing ability in children?” BUT: “Is there any correlation between fond of reading and writing ability in children?” Don’t use the word “good” in the research question, because it is what you will study. 46. RESEARCH PROBLEM: Is there any effect of using flashcards on the vocabulary mastery of the third year students of SDN Karanganyar 1 in the 2009/ 2010 academic year? RESEARCH HYPOTHESIS: There is an effect of using flashcards on the vocabulary mastery of the third year students of SDN Karanganyar 1 in the 2009/ 2010 academic year. NULL HYPOTHESIS: There is no effect of using flashcards on the vocabulary mastery of the third year students of SDN Karanganyar 1 in the 2009/ 2010 academic year. MARIYA PATMAWATI Post Graduate Program_UNISMA □ OK, this is good. Go to the design. Best regards, 47. RESEARCH PROBLEM: Is there any effect of using flashcards on the vocabulary mastery of the third year students of SDN Karanganyar 1 in the 2009/ 2010 academic year? RESEARCH HYPOTHESIS: There is an effect of using flashcards on the vocabulary mastery of the third year students of SDN Karanganyar 1 in the 2009/ 2010 academic year. NULL HYPOTHESIS: There is no effect of using flashcards on the vocabulary mastery of the third year students of SDN Karanganyar 1 in the 2009/ 2010 academic year. MARIYA PATMAWATI Post Graduate Program_UNISMA □ OK, good. Go on to the next topic. Design. 48. Dear Sir, This is my assignment as I had sent through email Research Problem What is the influence of “picture series”in improving students’writing skill at Junior High School? Research Hypothesis There is a possitive correlation of using “picture series”to the students’writing skill at Junior High School. Null hypothesis There is no correlation between the using of “picture series”and students’writing skill at junior high School. I’m extremely looking foward to your comment and advice tanks alot Wahju Indrawati Post Graduate Student,UNISMA NPM 2091040015 □ Dear Wahju, Your research problem and your research hypothesis do not match. Your research problem is about effect, your research hypotheses are about correlation. They must be in line with the research problem. Please revise. 49. Dear Sir this is my assighment. Research Problem Do the Natural science teachers get easier in writing short task than social science teachers at english course of SMK Daruttaqwa Purwosari? Research Hypothesis The natural science teachers get easier in writing short task than social science teachers at english course of SMK Daruttaqwa Purwosari Null Hypothesis The natural science teachers have difficulty in writing short task than social science teachers al english course of SMK Daruttaqwa Purwosari Post Graduate Program UNISMA □ Syaikhu, Your research question formulation is not very clear. What is “social science teacher at english course”? And your null hypothesis is not correct. That is not a null hypothesis because you state “have difficulty”. 50. Research Question: By playing question’s card strategy, do the seventh year students SMPN 1 Dau Kabupaten Malang motivate to improve of the habit’s response questions? Research Hypothesis: By playing question’s card strategy, the seventh year students SMPN 1 Dau Kabupaten Malang motivate to improve of the habit’s response questions. Null Hypothesis: By playing question’s card strategy, the seventh year students SMPN 1 Dau Kabupaten Malang no motivate to improve of the habit’s response questions. NURUL APRILIYANTI NPM: 2091040010 GRADUATE PROGRAM-UNISMA □ Dear Nurul, I am afraid some of your concept statement is confusing. What is “question’s card strategy”? What is “habit’s response question”? Revise this and the hypotheses accordingly. 51. Dear Sir, Here is my revised Research Problem What is the influence of “picture series”in improving students’writing skill at Junior high School? Research Hypothesis There is an effect of using “picture series”to the students’writing skill at Junior High School. Null Hypothesis There is no effect between The using of “picture series”and the students’writing skill at Junior High School. Thanks alot for your correction,sir I am still looking foward to your advice. □ This is good, Wahju. But your null hypothesis needs revision. Don’t use the word “between”. This is mostly just like the research hypothesis in the formulation. 52. RESEARCH PROBLEM: Is there any effect of different first language on literacy development of the student at SMU X Malang? RESEARCH HYPOTESIS: There is a positive effect of different first language on literacy development of the student at SMU X Malang NULL HYPOTESIS: There is no effect of different first language on literacy development of the student at SMU X Malang. Fitri Anggraini Postgraduate Program of English Education UNISMA □ Dear Fitri, Yes, your research problem is fine. Go on to the design. 53. thank you very much sir for your comment. Let me answer your reply that to control the other variables which influence the dependence variable is by taking notes what factor that may influence them. For example: the students have good listening skill because they used to listen western song music. It could be an additional information for me to recognize another independent variables as supporting data. That’s my answer. I really hope for your next comment and thank you very much Agung Setiawati Postgraduate English Education Program UNISMA □ Dear Agung, Controlling other variables means maintaining the other variable that may influence the dependent variable the same for all subject. The example of the other variable is IQ. What you mentioned is how to identify the independent variable. 54. dear sir, here is my revised Research Problem: What is the influence of “picture series”in improving students’writing skill at junior High school? Research Hypothesis: There is an effect of using”picture series” to the students’writing skill at junior high school. Null hypothesis: There is no effect of “picture series”to the students’writing skill at junior high school. thanks alot for your guidance.Can I go to the Design Sir. WAHJU INDRAWATI 2091040015,PPS UNISMA 55. My topic is ~The analysis of the impact of teaching methods on the performance of pupils at Junior High school Examination in mathematics~ My Hypothesis is, the use of the the appropriate teaching method at JHS will increase pupils performance. Can there be further adjustment on this topic? □ That’s fine, Eric. That’s OK. A little revision can be done later (only in the wording). 56. hi, my study is about ” the pragmatics of religious expressions in Jordanian Arabic” can you provide me with some hypotheses? □ I have answered your question via email.
{"url":"http://quantres.wordpress.com/2009/11/12/hypothesis/","timestamp":"2014-04-17T17:02:39Z","content_type":null,"content_length":"212545","record_id":"<urn:uuid:c6431b88-f886-4aea-893c-bde1f0594548>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00271-ip-10-147-4-33.ec2.internal.warc.gz"}
Weekly Problem 6 - 2013 Copyright © University of Cambridge. All rights reserved. 'Weekly Problem 6 - 2013' printed from http://nrich.maths.org/ A snail is at one corner of the top face of a cube with side length $1$m. The snail can crawl at a speed of $1$m per hour. What proportion of the cube's surface is made up of points which the snail could reach within one hour? This problem is taken from the UKMT Mathematical Challenges. View the previous week's solutionView the current weekly problem
{"url":"http://nrich.maths.org/9765/index?nomenu=1","timestamp":"2014-04-18T03:17:28Z","content_type":null,"content_length":"3201","record_id":"<urn:uuid:d356e4e3-e4fa-4c3e-9aaa-73c32205b8d4>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00184-ip-10-147-4-33.ec2.internal.warc.gz"}
the blog formerly known as The Statistical Mechanic I walk up to a fully automated coffee dispenser and push the button to get an espresso. What I do not know is that the button is connected to a device which consists of a radioactive source and a detector. The button activates the detector for 5 seconds and if it registers a particle from the radioactive material it will make an espresso and otherwise not (*). The probability to get my espresso, according to text book quantum theory, is exactly 50%. But this raises the following question. There are many ways I can get the espresso, but there is only one way that it can fail. See, I push the button at 1pm, which here means I push it exactly at 1:00:00. Now the detector could register a particle at 1:00:01 or at 1:00:02 or ... at any time between 1:00:00 and 1:00:05. There is an infinite number of possibilities when and how the detector could register a particle. But there is only one way it can register a particle, with the detector shutting down at 1:00:05. Since we all believe in the many worlds interpretation , this implies that there are many worlds where an espresso has been prepared and only one world where I fail to get it. So how can the probability be 50%? Unless we assume that somehow the many different worlds are not equally real (x). So perhaps the many worlds are not really real after all ? (*) Actually, many automated dispensers in the real world follow a similar design. (x) added later: There are proposals on how to save the appearances. Let me know if you find them convincing. 31 comments: I guess I am missing your point. Do you mean that if the detector is shut off then the world does not split? And what if you change the detector time until the probability to get an espresso is one billionth? Do you still think that the non espresso world is less "real"? And what about Anna Karenina? Does that mean that there are much less happy families? My point is that in this example (I assume that) the probability to get an espresso is 50% , but there are many more worlds with espresso than without. Therefore counting outcomes does not give the correct probability, therefore the worlds of m.w.i. cannot be (equally) real in the usual sense. By the way, this is of course known already for a long time and several proposals have been made how to rescue the appearances. e.g. So realizations can have different weights? How is this different from what goes on in plain old probability? (Maybe it is in the article, I haven't looked at it yet). The Schroedinger equation is deterministic and so the question would be where the weights (probabilities) come from. The Born rule is an unexplained add-on within the Copenhagen interpretation and m.w.i. proponents are/were hoping it can be derived. There is an obvious solution to this puzzle. The worlds split at any point in time and therefore there are as many worlds with you having an espresso as there are worlds where you don't get one. If you assume that for every t, 1:00:00 < t < 1:00:05, the worlds split in two (one with and one without espresso) then you can perhaps 'save the appearances' for the case that the probability is indeed 50%. But what if the total probability to get an espresso is only 43% ? Also, the radioactive material could consist of many different atoms and so the detector could register a particle in many different ways. But there is only one way it can not register a particle. So there are many different (classical) worlds how a particle was registered but only one where none has been detected (for every t). I think I get your point. Kind of like, if you set the probability of getting a espresso to be 0.999 and not getting it at 0.001, and you carried out this experiment 1000 times you'd still get 999 espressos. But in MWI it would be a 50/50 probability since the universe always splits in half, no matter the probability. Which is MWI's biggest problem You can tweak it both ways. You can have a probability for e of 99% and 1% for n and yet the worlds split only in two. Or you can set it so that the probability is 50:50 and yet there are 999 worlds with e and only one with n. As I wrote above, this is known already for quite a while. I think it suggests that the worlds of mwi cannot be equally 'real' (in the sense of classical probability) no matter what the details are of a particular 'fix'. How would you get 999 worlds with n and 1 with e from 50:50 probability? I thought my blog post makes clear how you can get 999 different ways to get an espresso e but only one way not to get it. The detector is active for 5 sec. The probability that it detects a particle during these 5 sec is 50% (which results in e). But there are 999 (and many more) ways when exactly it detects the particle; so there are 999 different worlds with e. But there is only one way how the detector does not detect a particle within the 5 sec. Therefore 999 worlds with e but only 1 world with n. But the probability for e was 50%. yes in your thought experiment that would be the case, but I was talking about mine with the light bulps. Sorry I see I used your "e" and "n" for "espresso" and "none". think of a detector D which consists of 999 sub-detectors D1, D2, ... D999 (if the Di are pixels in a digital camera we talk about 9Mio sub detectors!) The detector D registers a photon with probability 50% (e.g. in a Mach Zehnder experiment). But there are 999 macroscopically different ways how it can detect something, while there is only one way how it can not register the photon. But (if I understand this correctly) this means that MWI can't make sense of probabilities at all? Why isn't this adressed more thoroughly by more physicists (opponents and proponents of MWI) ? Also which are the other good realist candidates without collapse? deBroglie Bohm? What interpretation do you adhere to? >> this means that MWI can't make sense of probabilities at all? >> Why isn't this adressed more thoroughly by more physicists (opponents and proponents of MWI) ? it is a well known problem of MWI and people try to find ways around it. >> What interpretation do you adhere to? The interpretation problem is an unsolved problem as far as I can see, which really bothers me. Unfortunately, for practical purposes any interpretation will be sufficient to sweep this under the rug. How well does this attempt at getting around the problem work? Also if they can't dervive born rule, can't they just postulate it as a axiom? >> How well does this attempt at getting around the problem work? It makes several non-trivial assumptions ad hoc and I am not convinced. >> Also if they can't derive born rule, can't they just postulate it as a axiom? The Born rule makes sense within the Copenhagen interpretation and in this context we know what it means. I am not sure what 'probability' even means in the context of mwi if the number of worlds does not match the Born probability. But you surely don't believe in the copenhagen interpretation where "collapse" is a reality? Is the probability problem the "only" problem MWI face? >> But you surely don't believe in >> the copenhagen interpretation as I wrote above, I think the interpretation problem is an unsolved problem. >>Is the probability problem the >>"only" problem MWI face? There are other problems, e.g. the issue of the preferred basis. Thanks a lot. I'm not a physicist but I try my best to understand these things (which sometimes takes hours of reading up :P ). How "crucial" is this argument against MWI? I know the probability problem is always brought up, but I rarely see the preferred basis objection. I've heard it a few times, but the MWI proponents usually dismiss it. So I guess I thought it wasn't really a big issue at all. >> How "crucial" is this argument against MWI? I think it is important. But one should not forget that mwi is much more ambitious than other attempts to solve the interpretation problem. The idea that the quantum dynamics of the wave function is all there is (no collapse, no hidden variables etc.) is actually very ambitious. Copenhagen simply assumes the existence of a (quasi)classical domain while mwi needs to derive it (the problem of the preferred basis). And while Copenhagen can simply state the Born rule as an axiom, it somehow has to be derived within mwi (the problem of the probabilities). Therefore, mwi is more interesting than Copenhagen and others, but as I understand it important questions remain unanswered. When you say more ambitious, what do you mean by that? Some would argue that Intelligent Design is very amibtious because it's supposed to explain EVERYTHING, yet it's the most crackpot idea in history... Do you mean MWI may have a chance of being correct? If MWI has atleast these 3 problems: can't account for probability which we observe 24/7, can't account for special relativity and has this preferred basis problem... Doesn't that disprove MWI to the point where it can't be true? No matter how much you twist and bend it? >> When you say more ambitious, what do you mean by that? well, mwi tries to explain the same as other interpretations, but with less assumptions. i think this makes it more ambitious. ps: i am not aware that there is a problem with relativity... sure, but ambitious doesn't equal more likely or even sensible. Do you feel MWI is sensible ? A "world" is defined relative to an instantaneous value of the universal wavefunction. but that wavefunction then becomes a frame-dependent object, depending on a particular time-slicing of This violates the spirit of relativity, according to which all the things that are actually real are frame-independent. Whats your thoughts? >>Do you feel MWI is sensible ? no, not yet. >> depending on a particular time-slicing of spacetime. the concept of space-time assumes implicitly classical objects and mwi leaves open where they come from. (in other words mwi leaves open where classical observers and their reference frames come from). In this sense there is a problem. As H.D. Zeh and others pointed out, mwi requires one to take quantum gravity into consideration and of course this is an unsolved problem ... No not yet? Do that imply that you feel MWI needs modification before it's sensible, or that you have more to learn? I read that earlier and I agree it's a problem, but it doesn't adress MWI's problem with relativity? >> Do that imply that you feel MWI needs modification before it's sensible, or that you have more to learn? probably both So you do not feel that the current problems with MWI is enough to dismiss the version of MWI that people adhere to today (unmodified) ? as i wrote above, i have not seen a plausible, convincing solution to the interpretation problem yet. Which I'm aware of ofcourse ^^ I'm rather asking if you feel that the problems specially against MWI is enough to DISMISS it? I could give you my email instead so we don't have to fill up your whole comment section if you want to discuss it further? >> the problems specially against MWI is enough to DISMISS it? mwi solves the interpretation problem, but introduces 2 or 3 new unsolved problems. People discuss for many years now if/how these problems can be resolved somehow. So let them work on it and see what happens... Personally, I would look somewhere else to think about the interpretation problem. >> so we don't have to fill up your whole comment section dont worry. nobody else is posting comments or even reading ours. blogs are so yesterday (especially this one). For the record, someone else is reading the comments and finding them very interesting -- thanks!
{"url":"http://tsm2.blogspot.com/2010/05/one-espresso-many-worlds.html?showComment=1292439515968","timestamp":"2014-04-20T15:50:24Z","content_type":null,"content_length":"90714","record_id":"<urn:uuid:79c44cef-c10d-44ac-855b-4fc00b4f4c5a>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00623-ip-10-147-4-33.ec2.internal.warc.gz"}
1. No matter where you press a toothpaste tube, paste comes out of the opening. This is an application of 5. Distance and displacement of an object has the same value 6. The period of a simple pendulum follows this formula 7. The vertical velocity of a projectile at the peak of its flight is 8. A rectangular solid with a density of 10 kg per cubic meter has a height of 2000 cm. It has a length of 3 cm but the width is unknown. What is the pressure applied to the ground underneath it? 10. The angle at which an object starts to slide across a surface is not dependent on its weight 11. The force and the acceleration may have the same value and units 12. What is the density (in kg/m^3 of a cube whose side 2 m and whose mass is 8 kg? 16. We can inhale the exhaust fumes of a jeepney even when we are inside the moving vehicle. This is an application of 18. An object which remains at rest means no external force are acting on it 19. What is the density (in kg/m^3 which displaces about 500 m^3 in a container and whose mass is 50000?
{"url":"http://www.proprofs.com/quiz-school/story.php?title=physics-formal-assessment-lab6-under-mr-tan","timestamp":"2014-04-20T14:12:19Z","content_type":null,"content_length":"182748","record_id":"<urn:uuid:394e0719-500d-49fb-9a8e-ef80ac377777>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00080-ip-10-147-4-33.ec2.internal.warc.gz"}
Important message from Doron Zeilberger ANALYSIS of ALGORITHMS, Bulletin Board [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] Important message from Doron Zeilberger • To: AofA@pommard.inria.fr • Subject: Important message from Doron Zeilberger • From: "Helmut Prodinger" <helmut@cam.wits.ac.za> • Date: Wed, 7 Apr 1999 09:48:31 GMT +2 • Organization: MS, University of the Witwatersrand • Priority: normal ------- Forwarded Message Follows ------- Date: Tue, 6 Apr 1999 18:09:06 -0400 (EDT) From: Doron Zeilberger <zeilberg@euclid.math.temple.edu> To: zeilberg@euclid.math.temple.edu Subject: Non-Junky Junk Mail Cc: notices@math.sunysb.edu Hi My Direct-E-Mail (a.k.a. junk mail) recipients: As you alredy know, Knapp&Friedlander turned down my brilliant opinion 36, but I believe that it deserves the largest diffusion. So, I will soon ask you for a favor. But, FIRST: Let me take this opportunity to announce that the classic book A=B (by Petkovsek, W and Z) is now available for FREE download (just go to my homepage, http://www.math.temple.edu/~zeilberg/). It is still very much in print, but unlike the `non-profit' AMS, that charges for viewing the lousy papers of the Transactions, our `commercial' (but far less greedy than the AMS directors) publisher, Klaus Peters, agreed to have it downloadable. But, the main point of this message is to ask you for a favor. The Notices of the American Mathematical Society has just stupidly (or perhaps, cunningly) turned down the opinion below. Of course, it is available on the web for ever after (go to my homepage and click on opinions, then on Opinion 36), but some of my best friends still are non-surfers, and even those who do, may miss it. So, please, even if you don't agree completely with the content of this opinion, or even if you totally disagree, but believe in free speech, PLEASE save it, and then send it as `junk mail' to all your department, (in my system one would do: mail allfaculty<Op36 ) You may also want to post it on the Department's (physical) Bulletin Board, for the benefit of the non-users of E-mail. --Begin Opinion 36 submitted to, and rejected by FORUM editor S. Friedlander Don't Ask: What Can The Computer do for ME?, But Rather: What CAN I do for the COMPUTER? By Doron Zeilberger, Dept. of Mathematics, Temple University, Philadelphia, PA 19122, zeilberg@math.temple.edu, http://www.math.temple.edu/~zeilberg/ Written: March 5, 1999 Revised: March 25, 1999 Rabbi Levi Ben Gerson, in his pre-algebra text (1321), Sefer Ma'asei Khosev, had about fifty theorems, complete with rigorous proofs. Nowadays, we no longer call them theorems, but rather (routine) algebraic identities. For example, proving (a+b)c=ab+bc took him about half a page, while proving (a+b)*(a+b)=a*a+2*a*b+b*b took a page and a The reason that it took him so long is that while he already had the algebraic concepts, he still was too hung-up on words, and while he used symbols, (denoted by dotted Hebrew letters), he did not quite utilize, systematically, the calculus of algebraic identities. The reason was that he was still in a pre-algebra frame of mind, and it was more than three hundred years later (even after Cardano), that probably Vieta started the modern `high-school' algebra. So Levi Ben Gerson had an inkling of the algebraic revolution to come, but still did not go all the way, because we humans are creatures of habit, and he liked proving these deep theorems so much that it did not occur to him to streamline them, and hence kept repeating the same old arguments again and again in long-winded natural language. Believe it not, our current proofs are just as long-winded and repetitive, since we use an informal language, a minor variation on our everyday speech. We are now on the brink of a much more significant revolution in mathematics, not of algebra, but of COMPUTER ALGEBRA. All our current theorems, FLT included, will soon be considered trivial, in the same way that Levi Ben Gerson's theorems and `mophetim' (he used the word MOPHET to designate proof, the literal meaning of mophet is `perfect', `divine lesson', and sometimes even miracle), are considered trivial today. I have a meta-proof that FLT is trivial. After all, a mere human (even though a very talented as far as humans go), with a tiny RAM, disk-space, and very unreliable circuitry, did it). So any theorem that a human can prove is, ipso facto, utterly trivial. (Of course, this was already known to Richard Feynman, who stated the theorem (Surely You're Joking Mr. Feynman, p. 70)- `mathematicians can prove only trivial theorems, because every theorem that is proved is trivial'.) Theorems that only computers can prove, like the Four Color Theorem, Kepler's Conjecture, and Conway's Lost Cosmological Theorem, are also not very deep, but not quite as trivial, since, after all, computers are few order-of-magnitudes better and faster than humans. In fact, if something is provable by computer, it is at best semi-trivial (on complexity-theory grounds). So Erdos's BOOK may exist, but all the proofs there, though elegant, are really trivial (since they are short). So for non-trivial stuff we can only have, at best, semi-rigorous proofs , and sometimes just empirical evidence. Since Everything that we can prove today will soon be provable, faster and better, by computers, it is a waste time to keep proving, in the same old-way, either by only pencil and paper, and even doing `computer-assisted' proofs, regarding the computer, as George Andrews put it, as a `pencil with power-stirring'. Very soon all our awkwardly phrased proofs, in semi-natural language, with their endless redundancy, will seem just as ludicrous as Levi's half-page statement of (a+b)c=ac+bc, and his subsequent half-page proof. We could be much more useful than we are now, if, instead of proving yet another theorem, we would start teaching the computer everything we know, so that it would have a headstart. Of course, eventually computers will be able to prove everything humans did (and much more!) ab initio, but if we want to reap the fruits of the computer revolution as soon as possible, and see the proofs of the Riemann Hypothesis and the Goldbach conjecture in OUR lifetime, we better get to work, and transcribe our human mathematical heritage into Maple, Mathematica, or whatever. Hopefully we will soon have super-symbolic programming languages, of higher and higher levels, continuing the sequence: Machine, Assembly, C, Maple, ... further and further up. This will make the transcription task much easier. So another worthwhile project is to develop these higher and higher math systems. So we can serve our time much better by programming rather than proving. If you still don't know how to program, you better get going! And don't worry. If you were smart enough to earn a Ph.D. in math, you should be able to learn how to program, once you overcome a possible psychological block. More important, let's make sure that our grad students are top-notch programmers, since very soon, being a good programmer will be a prerequisite to being a good mathematician. Once you learned to PROGRAM (rather than just use) Maple (or Mathematica, etc.), you should immediately get to the business of transcribing your math-knowledge into Maple. You can get a (very crude) prototype by looking at my own Maple packages (http://www.math.temple.edu/~zeilberg/programs.html)., in particular RENE (http://www.math.temple.edu/~zeilberg/tokhniot/RENE), my modest effort in stating (and hence proving!) theorems in Plane Geometry. Other noteworthy efforts are by Frederic Chyzak (Mgfun and Holonomic), Christian Krattenthaler (HYP and qHYP), John Stembridge (SF and Coxeter), the INRIA gang (Salvy and Zimmermann's gfun and Automatic Average Case Analysis), Peter Paule and his RISC gang (WZ-stuff and Omega), and many others (but still a tiny fraction of all mathematicians). What, if like me, you are addicted to proving? Don't worry, you can still do it. I go jogging every day for an hour, even though I own a car, since jogging is fun, and it keeps my body in shape. So proving can still be pursued as a very worthy recreation (it beats watching TV!), and as mental calisthenics, BUT, PLEASE, not instead of working! The real work of us mathematicians, from now until, roughly, fifty years from now, when computers won't need us anymore, is to make the transition from homocentric math to machine-centric math as smooth and efficient as possible. If we will dawdle, and keep loafing, pretending that `proving' is real work, we would be doomed to never see non-utterly-trivial results. Our only hope at seeing the proofs of RH, P!=NP, Goldbach etc., is to try to teach our much more reliable, more competent, smarter. and of course faster, but inexperienced, silicon-colleagues, what we know, in a language that they can understand! Once enough edges will be established, we will very soon see a PERCOLATING phase-transition of mathematics from the UTTERLY TRIVIAL state to the SEMI-TRIVIAL state. ------end Opinion 36 of DZ------ Professor Helmut Prodinger Mathematics Department University of the Witwatersrand P.O. Wits 2050 Johannesburg South Africa Tel. +27-11-716 2919 Fax. +27-11-4032017 Email. helmut@gauss.cam.wits.ac.za Homepage. http://www.wits.ac.za/helmut/index.htm Date Prev | Date Next | Date Index | Thread Index
{"url":"http://algo.inria.fr/AofA/mailing_list/msg00113.html","timestamp":"2014-04-16T21:52:29Z","content_type":null,"content_length":"12652","record_id":"<urn:uuid:c9ccacfc-31bf-4633-9f68-73a71cb1897e>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00624-ip-10-147-4-33.ec2.internal.warc.gz"}
ISBN: 9780131913998 | 0131913999 Edition: 7th Format: Hardcover Publisher: PRENTICE HALL SCHOOL GROUP Pub. Date: 3/12/2004 Why Rent from Knetbooks? Because Knetbooks knows college students. Our rental program is designed to save you time and money. Whether you need a textbook for a semester, quarter or even a summer session, we have an option for you. Simply select a rental period, enter your information and your book will be on its way! Top 5 reasons to order all your textbooks from Knetbooks: • We have the lowest prices on thousands of popular textbooks • Free shipping both ways on ALL orders • Most orders ship within 48 hours • Need your book longer than expected? Extending your rental is simple • Our customer support team is always here to help
{"url":"http://www.knetbooks.com/precalculus-7th-sullivan-michael-burns/bk/9780131913998","timestamp":"2014-04-17T04:48:53Z","content_type":null,"content_length":"35308","record_id":"<urn:uuid:4d66e2d0-004d-46a1-bb7d-b5b56b8bddd4>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00636-ip-10-147-4-33.ec2.internal.warc.gz"}
Practice tests Objective tests free download Many Online Tests Exams for India exams.qoney Model Paper Junior Engineers (Electrical) SSC For more please visit parikshaa Model Paper Junior Engineers (Electrical) RRB download SSC Junior Engineers Sample paper Solved question paper for Junior engineer(electrical) 1. The frequency of DC current (a) Equal to voltage magnitude (b) 0 (c) Double of AC frequency (d) 50 HZ Ans : b 2. The current flowing in a purely inductive circuit of 30 mH on application of 230 V, 50 Hz single phase supply is 24.4 A. If the frequency of the applied voltage is increased to 100 Hz the current flowing in the same circuit will be (a) 24.4 A (b) 48.8 A (c) 12.2 A (d) 6.1 A Ans : c 3. Find the total resistance when two 3 Ohm resistances are connected in parallel. (a) 1.11 ohms (b) 1.5 ohms (c) 0.707 ohms (d) 1.23 ohms Ans : b 4. Voltage drop in a resistance given by (a) mmf/reluctance (b) IR (c) I/R (d) VI Ans : b 5. Off-line converter, SMPS has a. AC input and dc output b .DC input and dc output c. AC input and ac output d. None Ans. a 6. Filter circuits are constructed by means of a. Diode b. Resistors c. Transformers d. Capacitor and inductors Ans. d 7. Resistance of the diode is decreased when a. Forward biased b. Reverse biased c. Both forward and reverse biased d. Either a or b Ans. a 8. In earlier time is used for voltage regulation. a. Diode b. Transistors c. Vacuum tubes and glow bulbs d. SMPS Ans. c 9. is the equipment used during power failure. a. Rectifier b. Voltage regulators c. UPS d. SMPS Ans. c 10. Peak factor of the sine wave is equal to (a) 0.901 (b) 1.414 (c) 1.1 (d) 1.11 Ans :b 11. The amplitude of current of full wave rectified sinusoidal wave is 80 A, its average value will be (a) 25.44A (b) 80A (c) 40A (d) 56.56A Ans : a 12.Find the total current supplied to the lamp rated 100w .when supply voltage is 200 v. (a) 1.75A (b) 2A (c) 0.5A (d) 1A (e) Ans : c 13. The power factor of a inductive circuit is (a) Lagging (b) Leading (c) Zero lagging (d) Unity Ans : a 14. The power factor of a purely capacitive circuit is always (a) Lagging (b) Leading (c) Unity (d) Zero lagging Ans : b 15. The overall circuit power factor of a RLC series circuit is found to be 0.898 lagging. The nature of the resultant circuit is (a) Resistive (b) Inductive (c) Capacitive (d) None of these Ans : b 16. The maximum, rms and average value of a periodic current wave form is 100 A, 64.42A and 57.5A, respectively. The peak factor of this wave is (a) 0.644 (b) 1.552 (c) 1.12 (d) None of these Ans : b 17. In a parallel resistance circuit (a) Power is same in all resistance (b) Current is same in all resistance (c) Voltage is same in all resistance (d) Resistances are same Ans : c 18. Find the total resistance when 2 Ohm and 4 Ohm resistances are in parallel. (a) 1.33 Ohms (b) 0.33 Ohms (c) 2.33 Ohms (d) 1 Ohm Ans : a 19. Expression for mmf in terms of field strength is (a) HI (b) H/I (c) HL (d) H/L Ans : c 20. is the property of magnetic which opposes the flow of flux through it. (a) Resistance (b) MMF (c) Reluctance (d) emf Ans : c 21. is the property of electrical conductor which oppose the flow of current through it (a) Reluctance (b) emf (c) mmf (d) Resistance Ans : d 22. Reluctance is expressed in (a) Ampere Weber (b) Ohm (c) Ampere/Weber (d) Volt/Ampere Ans : c 23. Reciprocal of reluctance is termed as (a) Conductance (b) Permenance (c) Permeability (d) None of these Ans : b 24. Ohm’s law for electric circuit will be (a) emf= current/resistance (b) emf= current X resistance (c) emf= resistance/current (d) emf= 1/( resistance X current) Ans : b 25. Ohm’s law for magnetic circuit will be (a) mmf= flux /resistance (b) flux= mmf X resistance (c) reluctance = mmf/ flux (d) Resistance= mmf X flux Ans : c for study material logon to flipkart or snapdeal kly send ssc previous question paper of electrical engineer in JE post
{"url":"http://m4in.com/model-paper-junior-engineers-electrical-ssc/","timestamp":"2014-04-16T16:12:30Z","content_type":null,"content_length":"115609","record_id":"<urn:uuid:259b9273-b314-4a10-9d41-b475a3c27ea4>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00448-ip-10-147-4-33.ec2.internal.warc.gz"}
Stamford, CT Calculus Tutor Find a Stamford, CT Calculus Tutor ...Prior to that I was an Adjunct Professor at a private university in New York. I have over six years of experience teaching and tutoring students from economics to history. My teaching philosophy is that with a positive attitude and willingness to learn as well as the putting-learning-into-practice, economics can be easy and fun. 16 Subjects: including calculus, statistics, accounting, Chinese ...The answers are in the mechanisms... I can show students how to deconstruct reactions to their underlying principles, and apply this fundamental knowledge to understand and predict molecular behavior. I enjoy helping students achieve success in Organic Chemistry. If you or someone you know is i... 10 Subjects: including calculus, chemistry, algebra 1, algebra 2 ...My experience includes being a substitute teacher in Long Island and teaching Math 8 during summer school. I also welcome Freshmen and Sophomore college students as well. I can do private tutoring or small groups. 9 Subjects: including calculus, French, geometry, algebra 1 ...I do have special education knowledge that I like to apply as well. Most importantly, my students and parents always have a realistic view of where they stand. I keep track of the student's progress and at the end of each session, I share this information with the parent. 17 Subjects: including calculus, Spanish, geometry, accounting ...I cannot offer tutoring for any of the other sections. I have taken MIT's 6.00x course in Computer Science and gained certification. I took Linear Algebra and received an A, then took Modern Algebra I and II, receiving a B+ and A respectively. 32 Subjects: including calculus, physics, statistics, geometry Related Stamford, CT Tutors Stamford, CT Accounting Tutors Stamford, CT ACT Tutors Stamford, CT Algebra Tutors Stamford, CT Algebra 2 Tutors Stamford, CT Calculus Tutors Stamford, CT Geometry Tutors Stamford, CT Math Tutors Stamford, CT Prealgebra Tutors Stamford, CT Precalculus Tutors Stamford, CT SAT Tutors Stamford, CT SAT Math Tutors Stamford, CT Science Tutors Stamford, CT Statistics Tutors Stamford, CT Trigonometry Tutors Nearby Cities With calculus Tutor Astoria, NY calculus Tutors Bridgeport, CT calculus Tutors Bronx calculus Tutors Cos Cob calculus Tutors Darien, CT calculus Tutors Flushing, NY calculus Tutors Glenbrook, CT calculus Tutors Greenwich, CT calculus Tutors New Rochelle calculus Tutors Norwalk, CT calculus Tutors Old Greenwich calculus Tutors Ridgeway, CT calculus Tutors Riverside, CT calculus Tutors White Plains, NY calculus Tutors Yonkers calculus Tutors
{"url":"http://www.purplemath.com/stamford_ct_calculus_tutors.php","timestamp":"2014-04-16T04:44:57Z","content_type":null,"content_length":"23932","record_id":"<urn:uuid:f93db7e8-9cd1-4d5c-9bdb-a4f1f170ac66>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00413-ip-10-147-4-33.ec2.internal.warc.gz"}
Welcome to pH Calculation! pH calculation pH and paH for separate or mixtures of many acids and bases together Titration with derivative determination of the equivalence-point. Rreference: look at: www.asdlib.org You can download the programs directly without cost on the Order-page. There are: • Detailed guides with calculated examples for each of them. • Tables of ion diameters. • A detailed mathematical description of how the programs have been constructed. • An article of accuracy. A complete mathematical solution is given in "A General Program for ActpH" on the Order-page. There is good agreement between values got from ActpH and those from NIST (standard buffer). Ex: Phosphate buffer: ActpH gives paH=6.858 and NIST gives paH=6.865. Diff= 0.007. For details go to the Order-page and download "Guide for the ActpH" and Ex 12. If you want a review from an analytical journal you can go to Analytical Sciences Digital Library and read a Description Application/quantitative analysis/pH Calculation. Programs have also been created based on these pH programs to give pH as a function of volume in acid-base titrations. There are four programs, two to calculate paH for titration of acids and titration of bases, and two to calculate pH for the same titrations. These programs and guidelines for their use are given following these pH programs. There are two possibilities to download: 1. The ConcpH-program with its guide and an adjusted form of “A General Program for Calculating ConcpH”. 2. The ActpH-program with its guide, the complete form of “A General Program for Calculating ActpH”, the tables for ion diameters and in addition: the ConcpH-Program. Decimal point is standard. If you have adjustment for decmal comma you can change to decimal point by going to the Control Panel (Language) and choose USA (English).
{"url":"http://www.phcalculation.se/","timestamp":"2014-04-16T16:28:11Z","content_type":null,"content_length":"4494","record_id":"<urn:uuid:1664b555-b2df-4c64-8f8c-de3f61d7524f>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00272-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: I need to check my work. It is from a "fake" AP Calculus AB 2008 exam. The line normal to 3x^2+4y+y^2=3 at x = m is parallel to the y-axis. What is m? the choices for the answers are a. 3 b. -2 c. 0 d. -3 e. 2 I used implicit differentiation to get a derivative of dy/dx = (-6x)/(4+2y) Won't the slope of the normal line be the neg. recip of this derivative function? This means (4+2y)/(6x) To be parallel to the y-axis, the slope is undefined so 6x = 0 and x = 0. This means choice c is correct? • one year ago • one year ago Best Response You've already chosen the best response. Agreed. Looks correct. Best Response You've already chosen the best response. I didn't do any of the math, but I read your method and it sounds just perfect. Derive to get the slope, solve for dy/dx. The line normal to this will be the negative reciprocal. For it to be parallel to the y axis, it would have to be undefined. So yeah. Or you could have just known that the line normal would be parallel when the tangent was 0 and solved it that way without taking the negative reciprocal. Both work. Best Response You've already chosen the best response. Okay, and yeah. I went ahead and checked your implicit differentiation too. That checks out. Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/4fa86212e4b059b524f45a06","timestamp":"2014-04-20T08:31:54Z","content_type":null,"content_length":"33254","record_id":"<urn:uuid:f3b283be-d330-4770-9065-37b7c6a2fe37>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00007-ip-10-147-4-33.ec2.internal.warc.gz"}
"Learning About the Future and Dynamic Efficiency," A. Gershkov & B. Moldovanu (2009) “Learning About the Future and Dynamic Efficiency,” A. Gershkov & B. Moldovanu (2009) How am I to set a price when buyers arrive over time and I have a good that will expire, such as a baseball ticket or an airplane seat? “Yield management” pricing is widespread in industries like these, but the standard methods tend to involve nonstrategic agents. But a lack of myopia can sometimes be very profitable. Consider a home sale. Buyers arrive slowly, and the seller doesn’t know the distribution of potential buyer values. It’s possible that if I report a high value when I arrive first, the seller will Bayesian update about the future and will not sell me the house, since they believe that other buyers also value the house highly. If I report a low value, however, I may get the house. Consider the following numerical example from Gershkov and Moldovanu. There are two agents, one arriving now and one arriving tomorrow. The seller doesn’t know whether the agent values are IID in [0,1] or IID in [1,2], but puts 50 percent weight on each possibility. With complete information, the dynamically efficient thing to do would be to sell to the first agent if she reports a value in [.5,1]U[1.5,2]. With incomplete information, however, there is no transfer than can simultaneously get the first agent to tell the truth when her value is in [.5,1] and tell the truth when her value is in [1,1.5]. By the revelation principle, then, there can be no dynamically efficient pricing mechanism. Consider a more general problem, with N goods with qualities q1,q2..qN, and one buyer arriving each period. The buyer has a value x(i) drawn from a distribution F, and he gets utility x(i)*q(j) if he receives good j. Incomplete information by itself turns out not to be a major problem, as long as the seller knows the distribution: just find the optimal history-dependent cutoffs using a well-known result from Operations Research, then choose VCG style payments to ensure each agent reports truthfully. If the distribution from which buyer values is unknown, as in the example above, then seller’s learn about what the optimal cutoffs should be from the buyer’s reports. Unsurprisingly, we will need something like the following: since cutoffs depend on my report, implementation depends on the maximal amount the cutoff can change having a derivative less than one in my type. If the derivative is less than one, then the multiplicative nature of buyer utilities means that there will be no incentive to lie about your valuation in order to alter the seller’s beliefs about the buyer value distribution. http://www.econ2.uni-bonn.de/moldovanu/pdf/learning-about-the-future-and-dynamic-efficiency.pdf (IDEAS version). Final version published in the September 2009 AER. I previously wrote about a followup by the same authors for the case where the seller does not observe the arrival time of potential buyers, in addition to not knowing the buyer’s values.
{"url":"http://afinetheorem.wordpress.com/2012/08/20/learning-about-the-future-and-dynamic-efficiency-a-gershkov-b-moldovanu-2009/","timestamp":"2014-04-17T05:09:10Z","content_type":null,"content_length":"46393","record_id":"<urn:uuid:132b7a3a-1182-4371-88e6-f49df97bf910>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00365-ip-10-147-4-33.ec2.internal.warc.gz"}
Measurement unit conversion: cc ›› Measurement unit: cc Full name: cubic centimetre Plural form: cubic centimeters Symbol: cm^3 Alternate spelling: cc Category type: volume Scale factor: 1.0E-6 ›› SI unit: cubic meter The SI derived unit for volume is the cubic meter. 1 cubic meter is equal to 1000000 cc. Valid units must be of the volume type. You can use this form to select from known units: I'm feeling lucky, show me some random units ›› Definition: Cubic centimeter A cubic centimetre (cm3) is equal to the volume of a cube with side length of 1 centimetre. It was the base unit of volume of the CGS system of units, and is a legitimate SI unit. It is equal to a millilitre (ml). The colloquial abbreviations cc and ccm are not SI but are common in some contexts. It is a verbal shorthand for "cubic centimetre". For example 'cc' is commonly used for denoting displacement of car and motorbike engines "the Mini Cooper had a 1275 cc engine". In medicine 'cc' is also common, for example "100 cc of blood loss". ›› Sample conversions: cc cc to pony cc to barrel [UK, wine] cc to tablespoon [US] cc to cubic mile cc to cubic millimetre cc to exaliter cc to shot cc to acre inch cc to acre foot cc to pipe [UK]
{"url":"http://www.convertunits.com/info/cc","timestamp":"2014-04-19T19:36:30Z","content_type":null,"content_length":"24059","record_id":"<urn:uuid:4022e97b-6941-4566-a6fd-f17157d2001c>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00273-ip-10-147-4-33.ec2.internal.warc.gz"}
Schwarz Reflection Principle for (Real) Harmonic Functions February 13th 2013, 02:13 PM #1 Sep 2009 Schwarz Reflection Principle for (Real) Harmonic Functions Assume $u$ is harmonic in $U^{+}$ and $u\in\mathscr{C}^{2}(\bar{U}^{+})$, where $U$ is the open ball $B_{1}(0)$ of radius $1$ about the origin in $\mathbb{R}^{n}$, $U^{+}$ being the upper half-ball: $U^{+}:=U\cap\partial\mathbb{R}^{n}_{+}.$ (This is problem #2.5.-something in Evans PDE text). We want to show (under the assumed regularity of $u$) that the odd extension of $u$ into $U^{-}$ provides us a with a harmonic function on all of $U$. That is, if $v=u$ in $U^{+}$, $v=0$ on $\ partial U\cap\mathbb{R}^{n}_{\pm}$, and $v=-u(-x)$ in $U^{-}$, then $v$ is harmonic and $\mathscr{C}^{2}$ in all of $U$. Okay, it is obvious $v$ is $\mathscr{C}^{2}$ in the separated sets $U^{+}$ and $U^{-}$. Since $u$ is $\mathscr{C}^{2}$ upto the boundary of $U^{+}$ (in particular upto $\bar{U}\cap\mathbb{R}^{n}_ {+}$), then it is also clear that $v$ is $\mathscr{C}^{2}$ in all of $\bar{U}$. We also see that $v$ satisfies the mean-value-properties in $U^{+}$ and $U^{-}$, and also on $U\cap\mathbb{R}^{n}_ {\pm}$ because of the odd symmetry. Here's my problem, and of all the proofs I have seen, this is overlooked. The mean-value properties of $u$ are satisfied on $U^{+}$, $U^{-}$ and $\partial U\cap\mathbb{R}^{n}_{+}$, yes. But only when viewed individually. How do you use the fact that $v\in\mathscr{C}^{2}(\bar{U})$ to then show that the mean-value property is satisfied in all of $U$ (not just the three aforementioned sets when the spherical averages are restricted to the individuals sets). In other words, how do you justify the extending of a spherical average across the three sets (say at a point $x\in U^{+}$ with radius sufficiently large to intersect all three sets, but sufficiently small to remain in $U$). I will reiterate this: every proof I have seen does not make explicit reference to the $\mathscr{C}^{2}$ regularity of $v$. If the mean-value property can be demonstrated without $\mathscr{C}^{2} $ regularity, then all one needs is $\mathscr{C}$ regularity (not even differentiability) of $v$ in order to conclude $v$ is harmonic (it is easy to prove that a continuous function which satisfies the mean-value property at every point in an open set is harmonic there). But if this were the case, then why would Evans (and other texts where the problem is posed) be insistent on requiring $u$ being $\mathscr{C}^{2}$ in $\bar{U}^{+}$, and thus $v$$\mathscr{C}^{2}$ in $\bar{U}$? NOTE: In part (b) of this problem, Evans drops the hypothesis that $u$ is $\mathscr{C}^{2}$ upto the boundary, only that $u\in\mathscr{C}^{2}(U^{+})\cap\mathscr{C}(\bar{U})$. But the suggested proof is entirely different: apply the Poisson integral formula for harmonic functions on a disc. Indeed, one solves the problem $\left\{\begin{array}{rl}<br /> \Delta w=0&\text{in}\;U\\<br /> w=g&\text{on}\;\partial U,\end{array}\right.$ where $g(x)=u(x)$ on the upper boundary and $g(x)=-u(-x)$ on the lower boundary. The solution is given by the Poisson integral formula, and computing $w(x^{+})$ where $x^{+}\in\mathbb{R}^{n}_{+}\ cap U$, we find $w(x^{+})=0$. From uniqueness, we conclude that $w(x)=v$ as above (the odd extension of $u$), and the theorem is proved. Anyway, if anyone could help me fill in the details of the mean-value property argument in the first part, I would appreciate it! Last edited by TaylorM0192; February 13th 2013 at 02:40 PM. Follow Math Help Forum on Facebook and Google+
{"url":"http://mathhelpforum.com/differential-equations/213074-schwarz-reflection-principle-real-harmonic-functions.html","timestamp":"2014-04-20T13:33:28Z","content_type":null,"content_length":"44989","record_id":"<urn:uuid:459cae32-0283-4e6b-a0ef-b3e4e09f18ad>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00568-ip-10-147-4-33.ec2.internal.warc.gz"}
More on detection, attribution and estimation 2: Incredible confidence intervals Following on from this post , here are a couple of simple examples where perfectly valid confidence intervals are clearly not credible intervals at the same level of probability. For the first, rather natural example, let's assume we are trying to measure some simple non-negative quantity such as the mass of an apple. We have a set of scales which have a random (but well-characterised) error of +-50g (Gaussian at 1 standard deviation). That is, if we take a calibrated mass of value X, repeatedly use the scales and plot a histogram of the results of each measurements, the outputs will form a nice gaussian shape with mean X and standard deviation 50g. [OK, I know I'm doing this at a very boring pace, but I need to make sure it is all clearly set out.] One obvious and very natural way to create a confidence interval for the apple's mass is to take a single measurement (call the observed mass m) and then write down (m-50,m+50), which is a symmetric 68% confidence interval for M, the true mass. That is to say, if we were to hypothetically repeat this experiment numerous times, and construct the set of confidence intervals (m +50) where i indexes the measurements generated in the experiments, then 68% of these intervals would include M, 16% would be wholly greater than M and 16% wholly smaller (this is guaranteed by the specified observational uncertainty). We don't, of course, actually do this infinite experiment - but this is precisely what is meant by "(m-50,m+50) is a symmetrical 68% confidence interval for M". [Are you asleep yet? The punchline is coming up...] It is incorrect to interpret the specific confidence interval (m-50,m+50) as implying that M lies in that range with probability 68% (and above and below with probability 16%). To see why this is the case, consider the following: what if the reading happens to be m=40g? (Which it might well be, if the true mass is say 80g.) Is the confidence interval (-10,90) really a symmetric credible interval at the 68% level? That is to say, would anyone believe that the apple's mass is <-10g with probability 16% (would they bet on it - if so, please point them this way...)? Of course not. Any symmetric 68% credible interval (ie an interval so that one believes M lies in it with probability 68%, and below and above with probability 16% each) must necessarily be truncated somewhere above zero. Yet it is trivial to show that the confidence interval as constructed above is entirely valid. Under repeated observations, 16% of the measurements will be lower than M-50, 16% greater than M+50, and the remainder in between, so the population of confidence intervals has exactly the statistical properties required of it. One can, perhaps, state that "negative mass is not statistically inconsistent with the measurement" or maybe even say "negative mass cannot be ruled out by the measurement", but these statements cannot be interpreted as implying that anyone thinks the apple actually has negative mass! There are other examples of non-credible confidence intervals that are quite striking. Here's one I found on the web (description lightly modified): Let's say we want to estimate a parameter x. Let's ignore all the available measurements entirely! In their place, start by using a random number generator to generate y uniformly in [0,1]. If y > 0.68, then define the confidence interval to be the empty interval. If y < 0.68, then define the confidence interval to be the whole number line. That's it! Again, this routine trivially generates a 68% CI - that is, exactly 68% of the time, the CI contains x whatever value this takes. But neither of the two possible intervals that the algorithm generates is credible at the 68% level - it should be clear that one of the possible intervals contains x (and the other does not) with certainty, even without knowing what x is. In the next part , I'll try to reconcile these results with the underlying theory. 10 comments: ankh said... I'm trying to follow this, having had only grad-level statistics in the 1970s for bio/psych, and much appreciate your making it available. One probably naive question -- how does your view describe the choice to ignore the "incredible" low values detected by NASA's TOMS early on? I recall the system had discovered the 'ozone hole' but the scientists didn't see the data, because the low numbers had been automatically ignored based on assumptions about what was credible; eventually ground-based teams' reports caused the assumptions to be changed. Here's one cite: "The timing of the Nimbus-7 mission included a period of rapid deepening and discovery of the ozone hole. The significant lowering in total ozone over Antarctica caused a rethinking of the autonomous ground quality-assurance programs that otherwise would reject the “unrealistic” low values." James Annan said... That's an interesting one. I don't know the story in detail, but scientists often use similar simple outlier-rejection techniques. I'd view this as a Bayesian prior belief that the instrument is much more likely to return a bad value, than that a massive rapid change is likely to occur in reality. Perhaps some people might try to waffle about Kuhnian paradigm-shifts at this point, but it seems like an example of rather reasonable and rational behaviour - once the evidence built up, people changed their opinions pretty quickly. Of course, there are denialists over the ozone issue too... ankh said... A couple more sources on that delayed discovery -- cautionary as we now watch far more instruments indirectly via computers. "Murphy never sleeps, but that's no reason to poke him with a sharp stick." -- www.nancybuttons.com The Antarctic ozone hole was first observed by ground-based measurements from Halley Bay on the Antarctic coast, during the years 1980-84. (At about the same time, an ozone decline was seen at the Japanese Antarctic station of Syowa; this was less dramatic than those seen at Halley since Syowa is about 1000 km further north, and did not receive as much attention.) It has since been confirmed by satellite measurements as well as ground-based measurements elsewhere on the continent, on islands in the Antarctic ocean and at Ushaia, at the tip of Patagonia. With hindsight, one can see the hole beginning to appear in the data around 1976, but it grew much more rapidly in the 1980's. Satellite measurements showing massive depletion of ozone around the south pole were becoming available at the same time. However, these were initially rejected as unreasonable by data quality control algorithms (they were filtered out as errors since the values were unexpectedly low); the ozone hole was only detected in satellite data when the raw data was reprocessed following evidence of ozone thinning in in situ observations. (Wikipedia) I just want to point out that in your apple-weighing experiment you've made conflicting assumptions. First you accepted a normal distribution for the measured value, but later stated that the mass couldn't be negative. It seems you've made some prior (oops, there's that word again) assumptions about what value the mass of an apple may have. James Annan said... That's not quite right. I start off with the premise that the measurement error is normal. Then I find that the measurement implies a non-zero likelihood for a negative mass, and also that the natural confidence interval extends to negative values. I have indeed made a prior assumption that the apple's mass cannot be negative. I think that's an entirely reasonable prior assumption! Thanks for responding. I think you've criticized bayesian language for a case which doesn't include a bayesian analysis. An analyst that applied a reasonable prior to the apple's mass (m>0 for instance) would never end up with a confidence interval that included negative values. The absurdity is easy to see. In your analysis, you've said that the confidence interval will always be centered on the measured value. Since the gaussian measurement error results in negative measurents for m>0, unreasonable (incredible) confidence intervals are guaranteed. To say that a non-frequentist interpretation is incorrect is very disingenuous. Only frequentist methods were used. If anything, this is a demonstration of the problems with failing to apply bayesian methods. James Annan said... You describe it is a problem of failing to apply Bayesian methods, but there are legitimate ways of applying both frequentist and Bayesian methods here, so long as one recognises that they are answering different questions. The confidence interval is fine as it stands, so long as one accepts that it is a confidence interval! The real problem IMO arises when the answer to a frequentist analysis is presented in a Bayesian manner (ie, presenting a confidence interval as a credible interval). Unfortunately, this is what some of the climate literature on detection and attribution seems to do. In fact, rumour has it that one of the main figures in the field actually believes it is a valid (or even the correct) If your original post wasn't meant as a critique of bayesian analysis, then I don't have a problem with it. It sounds like you're saying that the term "confidence interval" can only describe an interval that is arrived at by frequentist methods, while "credible interval" should refer to bayes-derived intervals. Is that true? I think that such a suble distinction is bound to cause more confusion than it solves. I also think it's a distinction without a practical difference, as both are trying to answer the same question i.e. what is the value of an unknown population parameter. In a comparison of any two or more methods, the interval performance would be evaluated in exactly the same James Annan said... It sounds like you're saying that the term "confidence interval" can only describe an interval that is arrived at by frequentist methods, while "credible interval" should refer to bayes-derived intervals. Is that true? I don't pretend to speak for them, but I think Bayesians would generally insist on it (eg here), and Frequentists who don't are usually those who are unaware of the distinction :-) I can count myself as a member of the latter group until fairly recently, I might add. I think that such a suble distinction is bound to cause more confusion than it solves. Is it not more confusing to use the same term to describe two different things? It has certainly confused me in the past. As I've shown in these examples, it is easy to create confidence intervals that are not credible, and even when their non-credible nature is not so immediately clear, this does not mean that they actually are valid credible intervals. The problem as I see it is that frequentist methods don't actually attempt to answer the question "what is the value of the parameter" at all. However, people sometimes interpret their results as if they do. The problem as I see it is that frequentist methods don't actually attempt to answer the question "what is the value of the parameter" at all. However, people sometimes interpret their results as if they do. I agree completely, and that's what puts me in the bayesian camp. Maybe the frequent misinterpretation of confidence intervals by frequentists is what initially caused my distrust. Any time a decision needs to be made based on available data, bayesian methods are required. I haven't found a counterexample yet. Thanks for the discussion.
{"url":"http://julesandjames.blogspot.com/2006/07/more-on-detection-attribution-and_15.html","timestamp":"2014-04-16T13:05:10Z","content_type":null,"content_length":"142129","record_id":"<urn:uuid:7539a2bd-ba29-43da-bc59-cf1634960b95>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00247-ip-10-147-4-33.ec2.internal.warc.gz"}
Mplus Discussion >> Moderated Mediation Michael Strambler posted on Tuesday, September 21, 2010 - 11:23 am I am attempting to test a moderated mediation model where X->M->Y (all observed variables) is moderated by w, a binary variable (gender). I know how to examine this with a multiple group approach but am attempting the approach described in Preacher, Rucker & Hayes (2007) in Figure 2 model 5 where W influences paths a and b of the mediation chain. With some help from Preacher & Hayes, I have modeled this as pasted below. However, all of the estimates SEs, ps, CIs are exactly the same for all of the paths with the exception of X->M and M->Y. This seems very odd to me. Can you point me to what I might be doing wrong? Thank you. ANALYSIS: BOOTSTRAP = 1000; M WITH MW; M ON X (a1) W XW (a3); Y ON M (b1) X W XW MW (b2); NEW(eff1 eff2); OUTPUT: CINTERVAL(bcbootstrap); Linda K. Muthen posted on Tuesday, September 21, 2010 - 2:18 pm Can you send your input, data, output, and license number to support@statmodel.com. Linda K. Muthen posted on Tuesday, September 21, 2010 - 5:34 pm I should have seen this earlier. The label b2 after x, w, ex, and mw holds all of those regression coefficients equal. It should be stated as: Y ON M (b1) X W XW MW (b2); Michael Strambler posted on Tuesday, September 21, 2010 - 6:30 pm Thank you. The results make much more sense now. Now it is only the M on W and M on MW results that are identical. Does it make sense that this would be the case? Might this be because the MW cross-product involves a binary variable (coded 1 & 2)? Linda K. Muthen posted on Tuesday, September 21, 2010 - 6:42 pm I think you mean M on w and Xw. It is the same issue. The label needs to be on the next line. M ON X (a1) XW (a3); Michael Strambler posted on Tuesday, September 21, 2010 - 6:47 pm Ah, yes. All looks well now. Thank you again for the help. Robert Wickham posted on Monday, November 01, 2010 - 8:52 am Drs. Muthen, I am attempting to fit a Moderated Mediation model with simple (single level) survey data. Specifically, I wish to examine the moderating influence of a dichotomous variable (Mod) effect coded (-1, +1) on the 'A', 'B', and 'Cprime' paths as seen in Edwards and Lambert (2007; Model H). I realize that the multigroup function of Mplus may be used in this specific instance(e.g. a categorical moderator), however I would like to retain the more traditional 'cross product' approach for two reasons: 1) To replicate OLS estimates for all paths, 2) I am often interested in similar models with continuous moderators where multigroup analysis is not appropriate. IV and Mod were centered prior to importing into Mplus, and product terms were created using a DEFINE statement. I use the following code: Med on Mod IV IVxMod; ! 'A' Path Y on Mod IV IVxMod Med ModxMed; ! 'B' and 'Cprime' Paths Y IND Med IV; The resulting model has df = 1 and a significant ChiSq. Examination of RESIDUAL matrix suggests a notable covariance between Med and ModxMed. Adding a covariance parameter results in a not-positive first-order derivative matrix. I have run into this problem before in the past, and I have a hunch that it stems from the fact that Med is an endog variable. Any advice? Linda K. Muthen posted on Tuesday, November 02, 2010 - 10:21 am If you obtain standard errors, the message most likely comes from the fact that the mean and the variance of your binary variable are not orthogonal. You can ignore the message if this is the case. Hong Deng posted on Thursday, May 10, 2012 - 8:10 pm Dear Drs. Muthen, I’m trying to test a moderated mediation model with nested data. Data for all my variables were collected from individual and the level of interest is individual too. Basically it is a 1-1-1 mediation model (x-m-y)with a level 1 moderator (w). It should be an easy one if my data isn’t nested in several clusters. Is it possible to test such a model in Mplus? How should I write my Syntax. Thanks very much. Linda K. Muthen posted on Friday, May 11, 2012 - 10:20 am Moderation can be handled by a multiple group analysis if the moderator is categorical or by creating an interaction term between the moderator and the variable being moderated. You can use TYPE= COMPLEX to take the non-independence of observations into account. Modify Example 3.11 and add CLUSTER to the VARIABLE command and TYPE=COMPLEX to the ANALYSIS command. Elizabeth Penela posted on Friday, May 25, 2012 - 2:48 pm I am attempting to conduct a moderated mediation analysis where all variables are latent variables formed with continuous indicators. In my model, the independent variable (X) functions as a moderator of the b1 path (the path from the mediating variable to outcome). This model is illustrated in Preacher, Rucker & Hayes (2007) - Figure 2, Model 1. I get an error message that says: MODEL INDIRECT is not available for TYPE=RANDOM. 1. Is it possible to do moderated mediation with a latent interaction variable? 2. Any guidance as to how to generate syntax for this model would be greatly appreciated. Bengt O. Muthen posted on Friday, May 25, 2012 - 3:03 pm This type of model is discussed in Section 3 and Section 5 of Muthén, B. (2011). Applications of causally defined direct and indirect effects in mediation analysis using SEM in Mplus. which is on our web site under Papers, Mediational Modeling. As stated in Section 3, Model Indirect cannot be used for this type of model. Section 5 shows how to define the direct and indirect effects instead using Model Constraint. Paula Ruttle posted on Thursday, August 30, 2012 - 12:17 pm Dear Drs. Muthen, I am currently running a moderated mediation model (X -> A -> Y with the pathway from A to Y being moderated by gender) using the following syntax: Y on X; A on X(Med1); Y on A(Med2); Y on Gender A*Gender; This model revealed significant moderated mediation; however, correlations suggested that moderation could also be expected on the pathway between X and A. When I checked for moderation of this pathway it also revealed significant moderated mediation. I've been told that presenting a model with both moderated pathways is inappropriate and that I need to check to see which model is best; however I am unsure of how to do this. Do you have any advice or syntax for such a problem? Many thanks in advance, Bengt O. Muthen posted on Friday, August 31, 2012 - 2:49 pm I would suggest a 2-group analysis with gender as the grouping variable. You can then easily test if the two paths are the same or different across gender. I can't see why a model with both paths differing across gender would be inappropriate. Luisa Rossi posted on Thursday, October 25, 2012 - 4:32 am Dear Drs Muthen, I have been running some mediation analyses and found that the x-->y relationship was mediated by m. I would now like to see if the a and b paths of the indirect models are different depending on whether respondents score high or low on a number of personality measures. So far, I have used multiple group comparisons with difftest to assess this. I am now thinking there may be a better way to do it but I am not sure. Can you help? Linda K. Muthen posted on Thursday, October 25, 2012 - 9:57 am Why are you dissatisfied with your approach? Is it because your moderator is continuous and you are categorizing it? See Example 3.18 in the Version 7 Mplus User's Guide on the website for another approach to moderated mediation. Luisa Rossi posted on Thursday, October 25, 2012 - 10:08 am Hi Linda, Thanks for the tip! I will look at the example. I am unsure whether multiple group comparison is the most effective way to look at moderated mediation or whether reviewers may criticize it. Thanks again! Linda K. Muthen posted on Thursday, October 25, 2012 - 11:47 am There are two ways to look at moderation. One is multiple group and the other to create an interaction variable. Luisa Rossi posted on Tuesday, October 30, 2012 - 7:03 am Hi Linda, I work with survey data so I had to impute my dataset to account of missing data. I noticed that difftest cannot be used with imputed data so I am thinking about using the interaction variable approach. The example you recommended above seems to be based on path analysis (?) as it is able to use the define option after the usevariable one. I am working with latent variables so I first need to define them - however, mplus is not happy for me to use 'define' within the 'model' option. What can I do? Linda K. Muthen posted on Tuesday, October 30, 2012 - 1:35 pm With latent variables, you should use the XWITH option for interactions. DEFINE is for observed variables. You don't include it in the MODEL command but above or below it. Luisa Rossi posted on Wednesday, October 31, 2012 - 5:15 am Thanks Linda! AS I am interested in the potential moderating role of w on path a and b, I adapted the model 5 example in Preacher, Rucker, and Hayes (2007) but I used the XWITH command to obtain the two interaction terms mw | m XWITH w; xw | x XWITH w; and included these interaction terms in the model as they suggest: y on m (b1) mw (b2) m on x (a1) xw (a3); I find that there is no evidence for a moderating role of either mw or xw (Ps>.05). However, when I had run the multiple group comparisons using w as the grouping variable I had found that it moderated path a (not b)... I am slightly confused on why this might be... can you help? are the 2 approaches not 100% comparable? Linda K. Muthen posted on Wednesday, October 31, 2012 - 1:05 pm These two approaches should yield identical results. Please send the two outputs and your license number to support@statmodel.com. C. Lechner posted on Monday, January 14, 2013 - 7:58 am Dear Drs. Muthén, I am testing a moderated mediation model where the moderator is a latent variable; i.e., there is a latent interaction using the XWITH command involved. Because this requires TYPE=RANDOM, the STANDARDIZED output is not available in these analyses. However, I would like to report R-square values for my outcomes. --> Is there any way to obtain R-square values for these analyses? Many thanks in advance! Linda K. Muthen posted on Monday, January 14, 2013 - 10:35 am See the FAQ Latent Variable Interactions on the website. C. Lechner posted on Thursday, January 17, 2013 - 7:38 am Thanks, Linda. I have a follow-up question: Using the formulas provided in the FAQ, I programmed myself an Excel spreadsheet that calculates R-square, change in R-square, and standardized path coefficients. It reproduces the numbers from your example on p. 6 in the FAQ sheet perfectly well. However, I wonder how one would generalize the equations from the FAQ sheet to include covariates. In my model (otherwise identical to Fig.2 on p.7), all three latent variables are regressed on a set of covariates. -> How do I get the total variances of the latent variables in this case? My approach was to sum all the product terms of the squared regression weights and the variance of the respective predictor, plus the residual variance of the latent variable (which I get in the output). E.g., for a latent variable eta2, regressed on eta1 and covariates x1 and x2, where b1 to b3 denote regression coefficients and zeta2 the residual variance of eta2, the total variance would be obtained by computing: Var(eta2) = b1^2 * Var(x1) + b2^2 * Var(x2)+ b3^2 * Var(eta1) + Var(zeta2). However, this seems to systematically underestimate r-square. -> Have I overlooked anything? -> Is there any way to directly get the total variance of a latent variable that is regressed on a set of covariates in the Mplus output? Linda K. Muthen posted on Thursday, January 17, 2013 - 8:52 am You need the variances of the two factors in the interaction. I think you are using the residual variances. If the variances are not available in TECH4, you will need to compute them. C. Lechner posted on Thursday, January 17, 2013 - 9:42 am TECH4 is unavailable for TYPE = RANDOM. I tried to compute the variances of all latent variables in the way described above. Referring to the notation in Figure 2 in the FAQ sheet, extended by two covariates x1 and x2 for each latent variable, I compute Var(eta1) = b11^2 * Var(x1) + b12^2 * Var(x2)+ ß^2 * Var(eta2) + Var(zeta1) Var(eta2) = b21^2 * Var(x1) + b22^2 * Var(x2)+ Var(zeta2) Var(eta3) = b31^2 * Var(x1) + b32^2 * Var(x2)+ ß1^2 * Var(eta1) + ß2^2 * Var(eta2) + 2*ß1*ß2*Cov(eta1,eta2)+ß3^2*Var(eta1*eta2) + Var(zeta3) where eta1 is regressed on eta2 (coefficient ß), eta3 is regressed on eta1 (coefficient ß1), eta2 (coefficient ß2), and their interaction (coefficient ß3), all three latent are regressed on the covariates x1 and x2 (coefficients bij), zeta_i denote residual variances for the latent variables; and where Var(eta1*eta2) = Var(eta1)*Var(eta2)+[Cov(eta1,eta2)]^2 and Cov(eta1,eta2) = ß*Var(eta2). -> Is this correct? There must be something missing. R-squares are substantially smaller than the ones Mplus computes for a model without interaction. Bengt O. Muthen posted on Thursday, January 17, 2013 - 5:29 pm In your eta1 equation you have to express eta2 in terms of the x's so that eta1 is written as a function of these x's. You also have to take into account that the x's are correlated and add a term like for the et1 equation: 2*Cov(x1, x2)*b11*b12. C. Lechner posted on Friday, January 18, 2013 - 1:04 am Thanks you, Bengt – I think the covariance term is what I had overlooked. I'll add it and see whether the numbers add up to something that makes sense then. As this involves a lot of manual computation when more covariates are involved, I wonder whether there is any easy workaround? E.g., could one simply estimate the factor variances in a measurement-part only model (without the covariates and structural paths) and use those as input for the calculations of standardized parameters and r-square in the final model? I'm afraid that would bias the variance estimates, wouldn't it? Bengt O. Muthen posted on Friday, January 18, 2013 - 2:53 pm It might be too approximate. ywang posted on Friday, March 29, 2013 - 12:49 pm Dear Drs. Muthen: I have a question about how to decide the signficance of the moderated medaition. M is a mediator and W is a moderator (gender, 0 and 1). For the following input: ANALYSIS: BOOTSTRAP = 1000; M ON X (a1) XW (a2); Y ON M (b1) X W XW MW (b2); NEW(eff1 eff2); Should we test the indirect effect of a2*b1 and a1*b2 in order to conclude the signficant moderated mediation? Thank you very much in advance! Bengt O. Muthen posted on Friday, March 29, 2013 - 4:33 pm Seems like you have set it up right. One thing you want to test is the difference between eff1 and eff2. To test significance of the moderation in the mediation, don't you want to test that a2* (b1+b2) is significant? Both the difference and this last term can be given NEW names so you get z-tests. Leslie Roos posted on Friday, May 03, 2013 - 1:54 pm I wanted to ask a question about a post from the first author on this post regarding a model he created from your 2007 Conditional Indirect Effects Paper, working from "Model 5". In line with the model, I am confused about the role of eff1 and eff2. Having performed previous mediations with the eff1 = a*b, this makes sense to me based on the M ON (X & XW) and Y on (M & MW) paths. I am confused about the eff2 role of multiplying each by 2 -- what would this be representing? Thank you! NEW(eff1 eff2); OUTPUT: CINTERVAL(bcbootstrap); Bengt O. Muthen posted on Friday, May 03, 2013 - 5:06 pm One and two standard deviations above the mean of zero for the moderator. Krista Highland posted on Thursday, May 30, 2013 - 10:13 am I am attempting to complete a moderated mediation in which my predictor and mediator variables are latent and I have a categorical and latent outcome. The format follows Preacher’s Model 5 in which the moderator impacts both paths a and b. The moderator variable has 4 categories (Black, White, Hispanic, Other). Would it be best to utilize the multiple group function, comparing all 4 racial groups together? Though, I am unsure of how to complete group comparisons from there. Or would it be better to run models with several dummy codes representing the moderator? Bengt O. Muthen posted on Thursday, May 30, 2013 - 10:21 am I would suggest doing multiple-group analysis, letting the a and b paths vary. Yisheng Peng posted on Wednesday, July 10, 2013 - 9:47 am I found most other researchers would calculate the critical ratios of differences(CRD) by dividing the difference between two estimates by an estimate of the standard error of the difference (Arbuckle, 2003). A CRD greater than 1.96 indicates that there was a significant difference between the two parameter estimates at p < 0.05.However, they usually do this in the Amos software. However, in the Mplus result part, I cannot find the standard error of the difference. I am wondering how can you do this through the results provided by Mplus? Yisheng Peng posted on Wednesday, July 10, 2013 - 9:54 am My question is, I want to do multi-group analysis to identify whether the path coefficients differ significantly between east and west. We compared the first model, which allows the structural paths to vary across cultures, with the second model, which constrains the structural paths across cultures to be equal to examine the cultural differences. All the other paths (i.e., factor loadings, error variances and structure covariances) were constrained to be equal. However, I found the factor loading are still different in the Mplus result part. Are there other things that need to be constrained equally? By the way, do you have any sample code for me to do such a multiple group comparison of the mediation model? Linda K. Muthen posted on Thursday, July 11, 2013 - 2:19 pm You can use MODEL TEST to do the CRD test. See the user's guide. See the Topic 1 course handout on the website under multiple group analysis to see the inputs for testing for measurement invarinace. JOEL WONG posted on Tuesday, October 08, 2013 - 5:28 pm I am attempting to test a first and second stage moderated mediation model. I have one predictor, one outcome, one mediator, and one moderator (W). W is hypothesized to moderate the relationship between the predictor and the mediator and the relationship between the mediator and the outcome. This model is described as model 58 in Hayes' PROCESS manual -- see http://mres.gmu.edu/pmwiki/uploads/Main/process.pdf Does anyone know the Mplus syntax for this model? Preacher et al. (2007) provides the Mplus syntax for several moderated mediation models, but it doesn't include this model. Thank you. Bengt O. Muthen posted on Tuesday, October 08, 2013 - 5:46 pm UG ex 3.18 describes the case of a moderator Z that moderates the influence of the predictor X on M and X on Y. That involves creating X*Z in Define. A plot of the effects and their confidence bands are obtained by LOOP. So that's the first part of your question. The second part is the moderation of the M->Y relationship. This calls for creating M*Z and regressing Y on it. LOOP could be used here as well. Krista Highland posted on Thursday, October 10, 2013 - 7:11 am Thank you for your previous suggestion in running my moderated mediation with 2 latent predictors, continuous mediator, and binary outcome. I have 4 groups, and have tested the model using the indirect command and bootstrapping (with BC confidence intervals). To compare the effect size of the indirect effects across groups. Would it be acceptable to examine whether the confidence intervals between groups overlaps as a means of testing for significant differences? Or is there a better way to test whether the effect sizes are significantly different (e.g. running a difftest where I constrain paths a and b and compare them to a non-constrained model)? Also, when I run the mediation for the whole sample, the mediation is significant. However, when I run the mediation using multigroup, the indirect effect is no longer significant in any of my four groups. Would it be safe to say that this could be due to sample size issues (as, I have already established measurement invariance in my predictors) Linda K. Muthen posted on Friday, October 11, 2013 - 11:04 am You should use DIFFTEST. Yes, lower power could be the reason for this. Laura Baams posted on Tuesday, October 29, 2013 - 9:02 am I am running a bootstrap multigroup mediation model with observed variables. Two predictors (x1, x2), two mediators (m1, m2) and one outcome (y). There are 5 groups for the multigroup part. I have compared model fit of a model in which I constrain all paths to be equal across groups, and a model in which they are not equal, and variations of this. The model with the best fit, is the one where groups 1 and 2 are constrained to be equal, and group 3 and 4 are constrained to be equal. I need to report the standardized estimates, but these are not equal for groups 1 and 2, or 3 and 4, while the unstandardized estimates are. From other posts I understand that Mplus does not constrain standardized estimates, does this mean I cannot report standardized estimates in this case? Is there a way I can still obtain standardized estimates (that are equal across groups 1 and 2; and 3 and 4)? Thanks so much! Linda K. Muthen posted on Tuesday, October 29, 2013 - 10:12 am The standardized coefficients are standardized using different standard deviations for each group. This is why the coefficients are different. It is not because they are not constrained to be equal in the analysis. Patrícia Costa posted on Friday, January 31, 2014 - 2:47 am Dear Drs Muthén, I am running a moderated mediation model based on Preacher, Rucker & Hayes (2007) model 3: the path from mediator m to y is moderated by w. The output shows that all paths are significant at p =.000. However, the fit indexes are as follows: AIC 195.379 BIC 221.996 Sample-Size Adjusted BIC 171.948 Chi-Square Test of Model Fit Value 12.001 Degrees of Freedom 2 P-Value 0.0025 Estimate 0.358 90 Percent C.I. 0.182 0.565 Probability RMSEA <= .05 0.004 CFI 0.958 TLI 0.852 Chi-Square Test of Model Fit for the Baseline Model Value 243.003 Degrees of Freedom 7 P-Value 0.0000 SRMR 0.311 Can I consider that my model is significant, relying on the significance of the paths? Thank you for your input, Linda K. Muthen posted on Friday, January 31, 2014 - 9:35 am Model fit is not assessed using the significance of the model parameters. The fit statistics indicate that leaving out the two paths represented by the two degrees of freedom create a lack of fit. Hannah Lee posted on Sunday, March 30, 2014 - 2:17 pm I am trying to run a moderated mediation model (Similar to Model 2 as Hayes and Preacher 2007 shows). The IV, Moderater are latent constructs and I am unable to use the MODEL INDIRECT with the latent interactions. So I referred to Muthen (2011) sections 3 and 5 using BOOTSTRAP=1000; and OUTPUT: But I keep getting the error message that bootstrap cannot be used with TYPE=RANDOM. How can I get the indirect/total effects? ANALYSIS: TYPE = RANDOM; bootstrap = 1000; MODEL: x1 BY CPI1-CPI3; x2 BY CPC1-CPC3; w BY CCC1-CCC3; m BY NPA1-NPA5; d BY Perf1- Perf6; x1xw | x1 XWITH w; x2xw | x2 XWITH w; d ON m (b1) x1 x2 w x1xw x2xw c1 c2; m ON x1 (a1) x2 (a2) w x1xw (a3) x2xw (a4) c1 c2; new (ind wmodval); wmodval = -1; *** ERROR in ANALYSIS command BOOTSTRAP is not allowed with TYPE=RANDOM. Bengt O. Muthen posted on Sunday, March 30, 2014 - 4:55 pm Drop the bootstrap request. If your sample size is not small it is unlikely that bootstrap SEs would be very different from regular ML SEs. Hannah Lee posted on Sunday, March 30, 2014 - 7:07 pm Thank you, Dr. Muthen. Would N=201 sample size be sufficient? Some of the t-values for the indirect effects come out to be borderline significant. I was wondering if this may be improved if I were able to bootstrap. Linda K. Muthen posted on Monday, March 31, 2014 - 8:05 am I would not worry unless the sample size is less than 100. I would be conservative as far as significance goes given that you are not doing a single test but several tests. Back to top
{"url":"http://www.statmodel.com/discussion/messages/11/5926.html?1391189708","timestamp":"2014-04-18T13:09:19Z","content_type":null,"content_length":"95002","record_id":"<urn:uuid:ab8d5659-be8c-4155-80a0-a0a4545e8030>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00321-ip-10-147-4-33.ec2.internal.warc.gz"}
partial derivative question April 28th 2009, 07:39 PM #1 Feb 2009 partial derivative question Say I'm given a function $u(x/r, 1/r)$ the function u itself is not defined. when i take a partial derivative with respect to x, do i just write it as $u_x(x/r, 1/r)$ or is there a chain rule that i have to follow? indeed there is a chain rule. let $s = \frac xr$ and $t = \frac 1r$, then you want $\frac {\partial}{\partial x} u(s,t)$. by the chain rule, $\frac {\partial u}{\partial x} = \frac {\partial u}{\partial s} \cdot \frac {\partial s}{\partial x} + \frac {\partial u}{\partial t} \cdot \frac {\partial t}{\partial x}$ April 28th 2009, 08:52 PM #2
{"url":"http://mathhelpforum.com/calculus/86342-partial-derivative-question.html","timestamp":"2014-04-16T05:15:03Z","content_type":null,"content_length":"34926","record_id":"<urn:uuid:95e60e84-07ba-46d6-86f2-9198cef9efba>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00235-ip-10-147-4-33.ec2.internal.warc.gz"}
TOP ALBUMS OF 2011 AS VOTED BY REDDIT (voting now!) (self.Music) submitted ago* by nevonanevona sorry, this has been archived and can no longer be voted on This went really well last year, so I was hoping to initiate another voting session as all the magazines/blogs etc. are starting to put out their best-of-the-year lists. I'll start by commenting some albums from the past year (no order or anything). You submit albums that aren't listed, and upvote the ones that you think deserve it. Then we see the results. Commenting Format: Album Title - Artist - Release date + label EDIT: At 24hrs from posting I'm going to post results here and an ordered Spotify playlist. Woo. EDIT 2: OrneryOctopus made a Spotify Playlist! A lot of you are posting albums that are already posted! Expand all comments, before searching with ctrl+f :
{"url":"http://www.reddit.com/r/Music/comments/n0tqc/top_albums_of_2011_as_voted_by_reddit_voting_now/?sort=top","timestamp":"2014-04-21T06:15:10Z","content_type":null,"content_length":"733297","record_id":"<urn:uuid:a5622a3f-f95e-4bb4-9b49-c95d09e35c42>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00230-ip-10-147-4-33.ec2.internal.warc.gz"}
1. No matter where you press a toothpaste tube, paste comes out of the opening. This is an application of 5. Distance and displacement of an object has the same value 6. The period of a simple pendulum follows this formula 7. The vertical velocity of a projectile at the peak of its flight is 8. A rectangular solid with a density of 10 kg per cubic meter has a height of 2000 cm. It has a length of 3 cm but the width is unknown. What is the pressure applied to the ground underneath it? 10. The angle at which an object starts to slide across a surface is not dependent on its weight 11. The force and the acceleration may have the same value and units 12. What is the density (in kg/m^3 of a cube whose side 2 m and whose mass is 8 kg? 16. We can inhale the exhaust fumes of a jeepney even when we are inside the moving vehicle. This is an application of 18. An object which remains at rest means no external force are acting on it 19. What is the density (in kg/m^3 which displaces about 500 m^3 in a container and whose mass is 50000?
{"url":"http://www.proprofs.com/quiz-school/story.php?title=physics-formal-assessment-lab6-under-mr-tan","timestamp":"2014-04-20T14:12:19Z","content_type":null,"content_length":"182748","record_id":"<urn:uuid:394e0719-500d-49fb-9a8e-ef80ac377777>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00080-ip-10-147-4-33.ec2.internal.warc.gz"}
High energy e- e- scattering, ratio inelastic/elastic cross-sections? If we were to scatter high energy (say E>>m_e) electrons off each other is there a hand wavy argument to come up with a rough estimate for the ratio of inelastic to elastic cross-sections? Would the dominate inelastic collision between two high energy electrons result in a single photon emitted? Thanks for any help!
{"url":"http://www.physicsforums.com/showthread.php?t=735106","timestamp":"2014-04-18T03:13:01Z","content_type":null,"content_length":"22553","record_id":"<urn:uuid:b7a13906-5fe3-4c42-9d30-a526ddbb1bd8>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00450-ip-10-147-4-33.ec2.internal.warc.gz"}
If XX+YY=ZYZ, what are the values for X, Y and Z? If XX+YY=ZYZ, what are the values for X, Y and Z? Since the result for X + Y in the ones column is not the same as the X + Y for the tens column, there must be a "carry", so X + Y = 10 or more. But X + Y cannot equal 10 itself, since then Z = 0, and you can't have a leading zero. You know that X + Y must be 17 or less, since X and Y are different numbers, and 8 + 9 = 17. Since you're adding two 2-digit numbers and getting a 3-digit number, there must be an additional "carry" when you add the X + Y + 1 in the tens column. The minimum value of X + Y is 11, and the maximum is 17, so the minimum and maximum for X + Y + 1 are 12 and 18. What is Z, in either case? Then what must be X + Y? ...and so forth. Re: If XX+YY=ZYZ, what are the values for X, Y and Z? is X=5,Y=6 or X=6,Y=5? Um... neither. Check those values in the addition. Will they give you a correct value in the middle of ZYZ? Re: If XX+YY=ZYZ, what are the values for X, Y and Z? 2+9 works is that rite?
{"url":"http://www.purplemath.com/learning/viewtopic.php?f=4&t=550","timestamp":"2014-04-20T09:25:13Z","content_type":null,"content_length":"25441","record_id":"<urn:uuid:13fabdad-0439-44b2-8f4a-4253d37bf04a>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00570-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: A grocer wants to make a 10-pound mixture of peanuts and cashews that he can sell for $4.75 per pound. If peanuts cost $4.00 per pound and cashews cost $6.50 per pound, how many pounds of each should he use? Let p = pounds of peanuts and let c = pounds of cashews. Write a system of equations that could be used to solve the problem. • one year ago • one year ago Best Response You've already chosen the best response. p+c=10 4p+6.5c=4.75(10) Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50fc6fa8e4b0860af51e7bf2","timestamp":"2014-04-18T18:40:00Z","content_type":null,"content_length":"27870","record_id":"<urn:uuid:c89368b0-c1e3-4e71-b3c6-4e8ba5ba3954>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00520-ip-10-147-4-33.ec2.internal.warc.gz"}