content
stringlengths 86
994k
| meta
stringlengths 288
619
|
|---|---|
[pypy-commit] extradoc extradoc: remove sections about numpy and prolog for space reasons
cfbolz noreply at buildbot.pypy.org
Mon Jun 20 10:13:52 CEST 2011
Author: Carl Friedrich Bolz <cfbolz at gmx.de>
Branch: extradoc
Changeset: r3748:cec026d1ed94
Date: 2011-06-20 10:10 +0200
Log: remove sections about numpy and prolog for space reasons
diff --git a/talk/iwtc11/paper.tex b/talk/iwtc11/paper.tex
--- a/talk/iwtc11/paper.tex
+++ b/talk/iwtc11/paper.tex
@@ -913,27 +913,8 @@
the relative immaturity of PyPy's JIT assembler backend as well as missing
optimizations, like instruction scheduling.
-As a part of the PyPy project, we implemented small numerical kernel for
-performing matrix operations. The exact extend of this kernel is besides
-the scope of this paper, however the basic idea is to unroll a series of
-array operations into a loop compiled into assembler. LICM is a very good
-optimization for those kind of operations. The example benchmark performs
-addition of five arrays, compiling it in a way that's equivalent to C's:
-for (int i = 0; i < SIZE; i++) {
- res[i] = a[i] + b[i] + c[i] + d[i] + e[i];
-Where $res$, $a$, $b$, $c$, $d$ and $e$ are $double$ arrays.
-XXX: Carl?
+XXX add a small note somewhere that numpy and prolog are helped by this
In this paper we have studied loop invariant code motion during trace
More information about the pypy-commit mailing list
|
{"url":"https://mail.python.org/pipermail/pypy-commit/2011-June/051374.html","timestamp":"2014-04-16T22:54:08Z","content_type":null,"content_length":"4683","record_id":"<urn:uuid:f447c03c-822f-426b-81d7-a0f7cb218555>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00120-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum - Problems Library - Algebra, Quadratics: Quadratic Formula
This page:
quadratic equations
completing the square
About Levels
of Difficulty
use of variables
one variable equations
functions & relations
linear equations
linear data
linear systems
linear inequalities
exponents & radicals
rational equations
exponential functions
Browse all
About the
PoW Library
The Quadratic Formula
These problems involve quadratic equations which are not factorable and can be solved by use of the quadratic formula, although some students may choose to complete the square.
Related Resources
Interactive resources from our Math Tools project:
Algebra: Quadratic Equations
The closest match in our Ask Dr. Math archives:
Quadratic Equations
NCTM Standards:
Algebra Standard for Grades 9-12
Access to these problems requires a Membership.
|
{"url":"http://mathforum.org/library/problems/sets/alg_quadratics_quadformula.html","timestamp":"2014-04-16T20:00:31Z","content_type":null,"content_length":"12502","record_id":"<urn:uuid:dd62f911-0df5-4ad0-85b1-b5acc04a80dc>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00339-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Biology Project: BioMath
Quantitative Biology Problem Sets
Acid/Base Equilibria
Enzyme Kinetics: Michaelis-Menten Equation
Enzyme Kinetics: Lineweaver-Burke Plot
Hill Equation
DNA Melting Temperature
Bragg's Equation
Fixation of a Mutant Gene
Population Genetics
Frequency-Dependent Selection
Binomial Distribution
Mutation Selection Balance
Human Biology
Drug Concentrations
Heart Activity
Determining Your Target Heart Rate
Tumor Growth
Body Mass Index
Population Biology
Exponential Population Growth
Logistical Population Model I
Logistical Population Model II
Effective Population Size
Temperature Conversion
Newton's Law of Cooling
Carbon Dating
Allometry I
Allometry II
Circadian Rhythms
Environmental Fluctuations
Need Help? Mathematic Tutorials
An Introduction to Scientific Notation A quick review of writing very large and very small numbers using scientific notation.
Mathematical Notation Learn the proper notation for representing numbers, sets, sums, and products.
Introduction to Functions Learn the definition and properties of functions, how to perform mathematical operations on functions, and then practice what you have learned.
Transformations Learn how functions are transformed and how to sketch the graph of a function by inspecting the equation. Then test your knowledge.
Linear Functions Learn the definition of linear function, how to calculate the slope of a line, how to solve a linear equation, and how linear models are used in biology. Then practice what you have
Quadratic Functions Learn the definition of a quadratic function, what the graph of quadratic function looks like,and how to solve quadratic equations. Then test your knowledge with a problem set.
Exponential Functions Learn the definitions of exponential functions, how they are graphically represented, and how to graph basic exponential functions and transformed exponential functions.
Logarithmic Functions Learn the definitions of logarithmic functions and their properties, and how to graph them. Then practice what you have learned with exponential and logarithmic functions.
Polynomials Learn the definition of a polynomial, how to perform polynomial division, and what a graph of a polynomial function looks like. Then review what you have learned with a problem set.
Power Functions Learn the definition of a power function and how to graph one. Then test your knowledge with a problem set.
Rational Functions Learn the definition of a rational function, what the graph of a rational function looks like, and how to find the asymptotes. Then complete the problem set.
Trigonometric Functions Learn the definition of a trigonometric function, review some special angles, learn what the graphs of various trigonometric function look like, and see some trigonometric
identities. Then test your knowledge with a problem set.
|
{"url":"http://www.biology.arizona.edu/biomath/BioMath.html","timestamp":"2014-04-17T21:26:38Z","content_type":null,"content_length":"12338","record_id":"<urn:uuid:b9e3a9de-76c5-4020-97a9-781df6e2b26c>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00280-ip-10-147-4-33.ec2.internal.warc.gz"}
|
CJM: Volume 50 (1998)
3 Subgroups of the adjoint group of a radical ring
Amberg, B.; Dickenschied, O.; Sysak, Ya. P.
It is shown that the adjoint group $R^\circ$ of an arbitrary radical ring $R$ has a series with abelian factors and that its finite subgroups are nilpotent. Moreover, some criteria
for subgroups of $R^\circ$ to be locally nilpotent are given.
Show abstract
It is shown that the adjoint group $R^\circ$ of an arbitrary radical ring $R$ has a series with abelian factors and that its finite subgroups are nilpotent. Moreover, some criteria
for subgroups of $R^\circ$ to be locally nilpotent are given.
Hide abstract
16 Asymptotic shape of finite packings
Böröczky, Károly Jr.; Schnell, Uwe
Let $K$ be a convex body in $\ed$ and denote by $\cn$ the set of centroids of $n$ non-overlapping translates of $K$. For $\varrho>0$, assume that the parallel body $\cocn+\varrho K$
of $\cocn$ has minimal volume. The notion of parametric density (see \cite{Wil93}) provides a bridge between finite and infinite packings (see \cite{BHW94} or \cite{Hen}). It is
known that there exists a maximal $\varrho_s(K)\geq 1/(32d^2)$ such that $\cocn$ is a segment for $\varrho<\varrho_s$ (see \cite{BHW95}). We prove the existence of a minimal $\
varrho_c(K)\leq d+1$ such that if $\varrho>\varrho_c$ and $n$ is large then the shape of $\cocn$ can not be too far from the shape of $K$. For $d=2$, we verify that $\varrho_s=\
varrho_c$. For $d\geq 3$, we present the first example of a convex body with known $\varrho_s$ and $\varrho_c$; namely, we have $\varrho_s=\varrho_c=1$ for the parallelotope.
Show abstract
Let $K$ be a convex body in $\ed$ and denote by $\cn$ the set of centroids of $n$ non-overlapping translates of $K$. For $\varrho>0$, assume that the parallel body $\cocn+\varrho K$
of $\cocn$ has minimal volume. The notion of parametric density (see~\cite{Wil93}) provides a bridge between finite and infinite packings (see~\cite{BHW94} or~\cite{Hen}). It is
known that there exists a maximal $\varrho_s(K)\geq 1/(32d^2)$ such that $\cocn$ is a segment for $\varrho<\varrho_s$ (see~\cite{BHW95}). We prove the existence of a minimal $\
varrho_c(K)\leq d+1$ such that if $\varrho>\varrho_c$ and $n$ is large then the shape of $\cocn$ can not be too far from the shape of $K$. For $d=2$, we verify that $\varrho_s=\
varrho_c$. For $d\geq 3$, we present the first example of a convex body with known $\varrho_s$ and $\varrho_c$; namely, we have $\varrho_s=\varrho_c=1$ for the parallelotope.
Hide abstract
29 Weighted norm inequalities for fractional integral operators with rough kernel
Ding, Yong; Lu, Shanzhen
Given function $\Omega$ on ${\Bbb R^n}$, we define the fractional maximal operator and the fractional integral operator by $$ M_{\Omega,\alpha}\,f(x)=\sup_{r>0}\frac 1{r^{n-\alpha}}
\int_{|\,y|1)$, homogeneous of degree zero.
Show abstract
Given function $\Omega$ on ${\Bbb R^n}$, we define the fractional maximal operator and the fractional integral operator by $$ M_{\Omega,\alpha}\,f(x)=\sup_{r>0}\frac 1{r^{n-\alpha}}
\int_{|\,y|<r}| \Omega(\,y)|\,|\,f(x-y)|\,dy $$ and $$ T_{\Omega,\alpha}\,f(x)=\int_{\Bbb R^n}\frac {\Omega(\,y)}{|y|^{n-\alpha}} \,f(x-y)\,dy $$ respectively, where $0<\alpha<n$. In
this paper we study the weighted norm inequalities of $ M_{\Omega, \alpha}$ and $T_{\Omega,\alpha}$ for appropriate $\alpha,s$ and $A(\,p,q)$ weights in the case that $\Omega\in L^s
(S^{n-1})(s>1)$, homogeneous of degree zero.
Hide abstract
40 Green's functions for powers of the invariant Laplacian
Engliš, Miroslav; Peetre, Jaak
The aim of the present paper is the computation of Green's functions for the powers $\DDelta^m$ of the invariant Laplace operator on rank-one Hermitian symmetric spaces. Starting
with the noncompact case, the unit ball in $\CC^d$, we obtain a complete result for $m=1,2$ in all dimensions. For $m\ge3$ the formulas grow quite complicated so we restrict
ourselves to the case of the unit disc ($d=1$) where we develop a method, possibly applicable also in other situations, for reducing the number of integrations by half, and use it to
give a description of the boundary behaviour of these Green functions and to obtain their (multi-valued) analytic continuation to the entire complex plane. Next we discuss the type
of special functions that turn up (hyperlogarithms of Kummer). Finally we treat also the compact case of the complex projective space $\Bbb P^d$ (for $d=1$, the Riemann sphere) and,
as an application of our results, use eigenfunction expansions to obtain some new identities involving sums of Legendre ($d=1$) or Jacobi ($d>1$) polynomials and the polylogarithm
function. The case of Green's functions of powers of weighted (no longer invariant, but only covariant) Laplacians is also briefly discussed.
Show abstract
The aim of the present paper is the computation of Green's functions for the powers $\DDelta^m$ of the invariant Laplace operator on rank-one Hermitian symmetric spaces. Starting
with the noncompact case, the unit ball in $\CC^d$, we obtain a complete result for $m=1,2$ in all dimensions. For $m\ge3$ the formulas grow quite complicated so we restrict
ourselves to the case of the unit disc ($d=1$) where we develop a method, possibly applicable also in other situations, for reducing the number of integrations by half, and use it to
give a description of the boundary behaviour of these Green functions and to obtain their (multi-valued) analytic continuation to the entire complex plane. Next we discuss the type
of special functions that turn up (hyperlogarithms of Kummer). Finally we treat also the compact case of the complex projective space $\Bbb P^d$ (for $d=1$, the Riemann sphere) and,
as an application of our results, use eigenfunction expansions to obtain some new identities involving sums of Legendre ($d=1$) or Jacobi ($d>1$) polynomials and the polylogarithm
function. The case of Green's functions of powers of weighted (no longer invariant, but only covariant) Laplacians is also briefly discussed.
Hide abstract
74 Elementary proof of the fundamental lemma for a unitary group
Flicker, Yuval Z.
The fundamental lemma in the theory of automorphic forms is proven for the (quasi-split) unitary group $U(3)$ in three variables associated with a quadratic extension of $p$-adic
fields, and its endoscopic group $U(2)$, by means of a new, elementary technique. This lemma is a prerequisite for an application of the trace formula to classify the automorphic and
admissible representations of $U(3)$ in terms of those of $U(2)$ and base change to $\GL(3)$. It compares the (unstable) orbital integral of the characteristic function of the
standard maximal compact subgroup $K$ of $U(3)$ at a regular element (whose centralizer $T$ is a torus), with an analogous (stable) orbital integral on the endoscopic group $U(2)$.
The technique is based on computing the sum over the double coset space $T\bs G/K$ which describes the integral, by means of an intermediate double coset space $H\bs G/K$ for a
subgroup $H$ of $G=U(3)$ containing $T$. Such an argument originates from Weissauer's work on the symplectic group. The lemma is proven for both ramified and unramified regular
elements, for which endoscopy occurs (the stable conjugacy class is not a single orbit).
Show abstract
The fundamental lemma in the theory of automorphic forms is proven for the (quasi-split) unitary group $U(3)$ in three variables associated with a quadratic extension of $p$-adic
fields, and its endoscopic group $U(2)$, by means of a new, elementary technique. This lemma is a prerequisite for an application of the trace formula to classify the automorphic and
admissible representations of $U(3)$ in terms of those of $U(2)$ and base change to $\GL(3)$. It compares the (unstable) orbital integral of the characteristic function of the
standard maximal compact subgroup $K$ of $U(3)$ at a regular element (whose centralizer $T$ is a torus), with an analogous (stable) orbital integral on the endoscopic group $U(2)$.
The technique is based on computing the sum over the double coset space $T\bs G/K$ which describes the integral, by means of an intermediate double coset space $H\bs G/K$ for a
subgroup $H$ of $G=U(3)$ containing $T$. Such an argument originates from Weissauer's work on the symplectic group. The lemma is proven for both ramified and unramified regular
elements, for which endoscopy occurs (the stable conjugacy class is not a single orbit).
Hide abstract
99 $A_\phi$-invariant subspaces on the torus
Izuchi, Keiji; Matsugu, Yasuo
Generalizing the notion of invariant subspaces on the 2-dimensional torus $T^2$, we study the structure of $A_\phi$-invariant subspaces of $L^2(T^2)$. A complete description is given
of $A_\phi$-invariant subspaces that satisfy conditions similar to those studied by Mandrekar, Nakazi, and Takahashi.
Show abstract
Generalizing the notion of invariant subspaces on the 2-dimensional torus $T^2$, we study the structure of $A_\phi$-invariant subspaces of $L^2(T^2)$. A complete description is given
of $A_\phi$-invariant subspaces that satisfy conditions similar to those studied by Mandrekar, Nakazi, and Takahashi.
Hide abstract
134 On critical level sets of some two degrees of freedom integrable Hamiltonian systems
Médan, Christine
We prove that all Liouville's tori generic bifurcations of a large class of two degrees of freedom integrable Hamiltonian systems (the so called Jacobi-Moser-Mumford systems) are
nondegenerate in the sense of Bott. Thus, for such systems, Fomenko's theory \cite{fom} can be applied (we give the example of Gel'fand-Dikii's system). We also check the Bott
property for two interesting systems: the Lagrange top and the geodesic flow on an ellipsoid.
Show abstract
We prove that all Liouville's tori generic bifurcations of a large class of two degrees of freedom integrable Hamiltonian systems (the so called Jacobi-Moser-Mumford systems) are
nondegenerate in the sense of Bott. Thus, for such systems, Fomenko's theory~\cite{fom} can be applied (we give the example of Gel'fand-Dikii's system). We also check the Bott
property for two interesting systems: the Lagrange top and the geodesic flow on an ellipsoid.
Hide abstract
152 Inequalities for rational functions with prescribed poles
Min, G.
This paper considers the rational system ${\cal P}_n (a_1,a_2,\ldots,a_n):= \bigl\{ {P(x) \over \prod_ {k=1}^n (x-a_k)}, P\in {\cal P}_n\bigr\}$ with nonreal elements in $\{a_k\}_ {k
=1}^{n}\subset\Bbb{C}\setminus [-1,1]$ paired by complex conjugation. It gives a sharp (to constant) Markov-type inequality for real rational functions in ${\cal P}_n (a_1,a_2,\
ldots,a_n)$. The corresponding Markov-type inequality for high derivatives is established, as well as Nikolskii-type inequalities. Some sharp Markov- and Bernstein-type inequalities
with curved majorants for rational functions in ${\cal P}_n(a_1,a_2,\ldots,a_n)$ are obtained, which generalize some results for the classical polynomials. A sharp Schur-type
inequality is also proved and plays a key role in the proofs of our main results.
Show abstract
This paper considers the rational system ${\cal P}_n (a_1,a_2,\ldots,a_n):= \bigl\{ {P(x) \over \prod_{k=1}^n (x-a_k)}, P\in {\cal P}_n\bigr\}$ with nonreal elements in $\{a_k\}_{k=
1}^{n}\subset\Bbb{C}\setminus [-1,1]$ paired by complex conjugation. It gives a sharp (to constant) Markov-type inequality for real rational functions in ${\cal P}_n (a_1,a_2,\
ldots,a_n)$. The corresponding Markov-type inequality for high derivatives is established, as well as Nikolskii-type inequalities. Some sharp Markov- and Bernstein-type inequalities
with curved majorants for rational functions in ${\cal P}_n(a_1,a_2,\ldots,a_n)$ are obtained, which generalize some results for the classical polynomials. A sharp Schur-type
inequality is also proved and plays a key role in the proofs of our main results.
Hide abstract
167 Murnaghan-Nakayama rules for characters of Iwahori-Hecke algebras of the complex reflection groups $G(r,p,n)$
Halverson, Tom; Ram, Arun
Iwahori-Hecke algebras for the infinite series of complex reflection groups $G(r,p,n)$ were constructed recently in the work of Ariki and Koike \cite{AK}, Broué and Malle \cite{BM},
and Ariki \cite{Ari}. In this paper we give Murnaghan-Nakayama type formulas for computing the irreducible characters of these algebras. Our method is a generalization of that in our
earlier paper \cite{HR} in which we derived Murnaghan-Nakayama rules for the characters of the Iwahori-Hecke algebras of the classical Weyl groups. In both papers we have been
motivated by C. Greene \cite{Gre}, who gave a new derivation of the Murnaghan-Nakayama formula for irreducible symmetric group characters by summing diagonal matrix entries in
Young's seminormal representations. We use the analogous representations of the Iwahori-Hecke algebra of $G(r,p,n)$ given by Ariki and Koike \cite{AK} and Ariki \cite{Ari}.
Show abstract
Iwahori-Hecke algebras for the infinite series of complex reflection groups $G(r,p,n)$ were constructed recently in the work of Ariki and Koike~\cite{AK}, Brou\'e and Malle \cite
{BM}, and Ariki~\cite{Ari}. In this paper we give Murnaghan-Nakayama type formulas for computing the irreducible characters of these algebras. Our method is a generalization of that
in our earlier paper ~\cite{HR} in which we derived Murnaghan-Nakayama rules for the characters of the Iwahori-Hecke algebras of the classical Weyl groups. In both papers we have
been motivated by C. Greene~\cite{Gre}, who gave a new derivation of the Murnaghan-Nakayama formula for irreducible symmetric group characters by summing diagonal matrix entries in
Young's seminormal representations. We use the analogous representations of the Iwahori-Hecke algebra of $G(r,p,n)$ given by Ariki and Koike~\cite{AK} and Ariki ~\cite{Ari}.
Hide abstract
193 Intertwining operator and $h$-harmonics associated with reflection groups
Xu, Yuan
We study the intertwining operator and $h$-harmonics in Dunkl's theory on $h$-harmonics associated with reflection groups. Based on a biorthogonality between the ordinary harmonics
and the action of the intertwining operator $V$ on the harmonics, the main result provides a method to compute the action of the intertwining operator $V$ on polynomials and to
construct an orthonormal basis for the space of $h$-harmonics.
Show abstract
We study the intertwining operator and $h$-harmonics in Dunkl's theory on $h$-harmonics associated with reflection groups. Based on a biorthogonality between the ordinary harmonics
and the action of the intertwining operator $V$ on the harmonics, the main result provides a method to compute the action of the intertwining operator $V$ on polynomials and to
construct an orthonormal basis for the space of $h$-harmonics.
Hide abstract
210 Isomorphisms between generalized Cartan type $W$ Lie algebras in characteristic $0$
Zhao, Kaiming
In this paper, we determine when two simple generalized Cartan type $W$ Lie algebras $W_d (A, T, \varphi)$ are isomorphic, and discuss the relationship between the Jacobian
conjecture and the generalized Cartan type $W$ Lie algebras.
Show abstract
In this paper, we determine when two simple generalized Cartan type $W$ Lie algebras $W_d (A, T, \varphi)$ are isomorphic, and discuss the relationship between the Jacobian
conjecture and the generalized Cartan type $W$ Lie algebras.
Hide abstract
225 Derivations and invariant forms of Lie algebras graded by finite root systems
Benkart, Georgia
Lie algebras graded by finite reduced root systems have been classified up to isomorphism. In this paper we describe the derivation algebras of these Lie algebras and determine when
they possess invariant bilinear forms. The results which we develop to do this are much more general and apply to Lie algebras that are completely reducible with respect to the
adjoint action of a finite-dimensional subalgebra.
Show abstract
Lie algebras graded by finite reduced root systems have been classified up to isomorphism. In this paper we describe the derivation algebras of these Lie algebras and determine when
they possess invariant bilinear forms. The results which we develop to do this are much more general and apply to Lie algebras that are completely reducible with respect to the
adjoint action of a finite-dimensional subalgebra.
Hide abstract
242 Intégration du sous-différentiel proximal: un contre exemple
Benoist, Joël
Etant donnée une partie $D$ dénombrable et dense de ${\R}$, nous construisons une infinité de fonctions Lipschitziennes définies sur ${\R}$, s'annulant en zéro, dont le
sous-différentiel proximal est égal à $]-1, 1[$ en tout point de $D$ et est vide en tout point du complémentaire de $D$. Nous déduisons que deux fonctions dont la différence n'est
pas constante peuvent avoir les mêmes sous-différentiels.
Show abstract
Etant donn\'ee une partie $D$ d\'enombrable et dense de ${\R}$, nous construisons une infinit\'e de fonctions Lipschitziennes d\'efinies sur ${\R}$, s'annulant en z\'ero, dont le
sous-diff\'erentiel proximal est \'egal \`a $]-1, 1[$ en tout point de $D$ et est vide en tout point du compl\'ementaire de $D$. Nous d\'eduisons que deux fonctions dont la diff\
'erence n'est pas constante peuvent avoir les m\^emes sous-diff\'erentiels.
Hide abstract
266 The torsion free Pieri formula
Britten, D. J.; Lemire, F. W.
Central to the study of simple infinite dimensional $g\ell(n, \Bbb C)$-modules having finite dimensional weight spaces are the torsion free modules. All degree $1$ torsion free
modules are known. Torsion free modules of arbitrary degree can be constructed by tensoring torsion free modules of degree $1$ with finite dimensional simple modules. In this paper,
the central characters of such a tensor product module are shown to be given by a Pieri-like formula, complete reducibility is established when these central characters are distinct
and an example is presented illustrating the existence of a nonsimple indecomposable submodule when these characters are not distinct.
Show abstract
Central to the study of simple infinite dimensional $g\ell(n, \Bbb C)$-modules having finite dimensional weight spaces are the torsion free modules. All degree $1$ torsion free
modules are known. Torsion free modules of arbitrary degree can be constructed by tensoring torsion free modules of degree $1$ with finite dimensional simple modules. In this paper,
the central characters of such a tensor product module are shown to be given by a Pieri-like formula, complete reducibility is established when these central characters are distinct
and an example is presented illustrating the existence of a nonsimple indecomposable submodule when these characters are not distinct.
Hide abstract
290 Noncommutative disc algebras for semigroups
Davidson, Kenneth R.; Popescu, Gelu
We study noncommutative disc algebras associated to the free product of discrete subsemigroups of $\bbR^+$. These algebras are associated to generalized Cuntz algebras, which are
shown to be simple and purely infinite. The nonself-adjoint subalgebras determine the semigroup up to isomorphism. Moreover, we establish a dilation theorem for contractive
representations of these semigroups which yields a variant of the von Neumann inequality. These methods are applied to establish a solution to the truncated moment problem in this
Show abstract
We study noncommutative disc algebras associated to the free product of discrete subsemigroups of $\bbR^+$. These algebras are associated to generalized Cuntz algebras, which are
shown to be simple and purely infinite. The nonself-adjoint subalgebras determine the semigroup up to isomorphism. Moreover, we establish a dilation theorem for contractive
representations of these semigroups which yields a variant of the von Neumann inequality. These methods are applied to establish a solution to the truncated moment problem in this
Hide abstract
312 Units in group rings of free products of prime cyclic groups
Dokuchaev, Michael A.; Singer, Maria Lucia Sobral
Let $G$ be a free product of cyclic groups of prime order. The structure of the unit group ${\cal U}(\Q G)$ of the rational group ring $\Q G$ is given in terms of free products and
amalgamated free products of groups. As an application, all finite subgroups of ${\cal U}(\Q G)$, up to conjugacy, are described and the Zassenhaus Conjecture for finite subgroups in
$\Z G$ is proved. A strong version of the Tits Alternative for ${\cal U}(\Q G)$ is obtained as a corollary of the structural result.
Show abstract
Let $G$ be a free product of cyclic groups of prime order. The structure of the unit group ${\cal U}(\Q G)$ of the rational group ring $\Q G$ is given in terms of free products and
amalgamated free products of groups. As an application, all finite subgroups of ${\cal U}(\Q G)$, up to conjugacy, are described and the Zassenhaus Conjecture for finite subgroups in
$\Z G$ is proved. A strong version of the Tits Alternative for ${\cal U}(\Q G)$ is obtained as a corollary of the structural result.
Hide abstract
323 Purely infinite, simple $C^\ast$-algebras arising from free product constructions
Dykema, Kenneth J.; Rørdam, Mikael
Examples of simple, separable, unital, purely infinite $C^\ast$-algebras are constructed, including: \item{(1)} some that are not approximately divisible; \item{(2)} those that arise
as crossed products of any of a certain class of $C^\ast$-algebras by any of a certain class of non-unital endomorphisms; \item{(3)} those that arise as reduced free products of
pairs of $C^\ast$-algebras with respect to any from a certain class of states.
Show abstract
Examples of simple, separable, unital, purely infinite $C^\ast$-algebras are constructed, including: \item{(1)} some that are not approximately divisible; \item{(2)} those that arise
as crossed products of any of a certain class of $C^\ast$-algebras by any of a certain class of non-unital endomorphisms; \item{(3)} those that arise as reduced free products of
pairs of $C^\ast$-algebras with respect to any from a certain class of states.
Hide abstract
342 Shape fibrations, multivalued maps and shape groups
Giraldo, Antonio
The notion of shape fibration with the near lifting of near multivalued paths property is studied. The relation of these maps---which agree with shape fibrations having totally
disconnected fibers---with Hurewicz fibrations with the unique path lifting property is completely settled. Some results concerning homotopy and shape groups are presented for shape
fibrations with the near lifting of near multivalued paths property. It is shown that for this class of shape fibrations the existence of liftings of a fine multivalued map, is
equivalent to an algebraic problem relative to the homotopy, shape or strong shape groups associated.
Show abstract
The notion of shape fibration with the near lifting of near multivalued paths property is studied. The relation of these maps---which agree with shape fibrations having totally
disconnected fibers---with Hurewicz fibrations with the unique path lifting property is completely settled. Some results concerning homotopy and shape groups are presented for shape
fibrations with the near lifting of near multivalued paths property. It is shown that for this class of shape fibrations the existence of liftings of a fine multivalued map, is
equivalent to an algebraic problem relative to the homotopy, shape or strong shape groups associated.
Hide abstract
356 Some norms on universal enveloping algebras
Gross, Leonard
The universal enveloping algebra, $U(\frak g)$, of a Lie algebra $\frak g$ supports some norms and seminorms that have arisen naturally in the context of heat kernel analysis on Lie
groups. These norms and seminorms are investigated here from an algebraic viewpoint. It is shown that the norms corresponding to heat kernels on the associated Lie groups decompose
as product norms under the natural isomorphism $U(\frak g_1 \oplus \frak g_2) \cong U(\frak g_1) \otimes U(\frak g_2)$. The seminorms corresponding to Green's functions are examined
at a purely Lie algebra level for $\rmsl(2,\Bbb C)$. It is also shown that the algebraic dual space $U'$ is spanned by its finite rank elements if and only if $\frak g$ is nilpotent.
Show abstract
The universal enveloping algebra, $U(\frak g)$, of a Lie algebra $\frak g$ supports some norms and seminorms that have arisen naturally in the context of heat kernel analysis on Lie
groups. These norms and seminorms are investigated here from an algebraic viewpoint. It is shown that the norms corresponding to heat kernels on the associated Lie groups decompose
as product norms under the natural isomorphism $U(\frak g_1 \oplus \frak g_2) \cong U(\frak g_1) \otimes U(\frak g_2)$. The seminorms corresponding to Green's functions are examined
at a purely Lie algebra level for $\rmsl(2,\Bbb C)$. It is also shown that the algebraic dual space $U'$ is spanned by its finite rank elements if and only if $\frak g$ is nilpotent.
Hide abstract
378 Equivariant polynomial automorphism of $\Theta$-representations
Kurth, Alexandre
We show that every equivariant polynomial automorphism of a $\Theta$-repre\-sen\-ta\-tion and of the reduction of an irreducible $\Theta$-representation is a multiple of the
Show abstract
We show that every equivariant polynomial automorphism of a $\Theta$-repre\-sen\-ta\-tion and of the reduction of an irreducible $\Theta$-representation is a multiple of the
Hide abstract
401 The hypercentre and the $n$-centre of the unit group of an integral group ring
Li, Yuanlin
In this paper, we first show that the central height of the unit group of the integral group ring of a periodic group is at most $2$. We then give a complete characterization of the
$n$-centre of that unit group. The $n$-centre of the unit group is either the centre or the second centre (for $n \geq 2$).
Show abstract
In this paper, we first show that the central height of the unit group of the integral group ring of a periodic group is at most $2$. We then give a complete characterization of the
$n$-centre of that unit group. The $n$-centre of the unit group is either the centre or the second centre (for $n \geq 2$).
Hide abstract
412 Asymptotic transformations of $q$-series
McIntosh, Richard J.
For the $q$-series $\sum_{n=0}^\infty a^nq^{bn^2+cn}/(q)_n$ we construct a companion $q$-series such that the asymptotic expansions of their logarithms as $q\to 1^{\scriptscriptstyle
-}$ differ only in the dominant few terms. The asymptotic expansion of their quotient then has a simple closed form; this gives rise to a new $q$-hypergeometric identity. We give an
asymptotic expansion of a general class of $q$-series containing some of Ramanujan's mock theta functions and Selberg's identities.
Show abstract
For the $q$-series $\sum_{n=0}^\infty a^nq^{bn^2+cn}/(q)_n$ we construct a companion $q$-series such that the asymptotic expansions of their logarithms as $q\to 1^{\scriptscriptstyle
-}$ differ only in the dominant few terms. The asymptotic expansion of their quotient then has a simple closed form; this gives rise to a new $q$-hypergeometric identity. We give an
asymptotic expansion of a general class of $q$-series containing some of Ramanujan's mock theta functions and Selberg's identities.
Hide abstract
426 The groups of the regular star-polytopes
McMullen, Peter
The regular star-polyhedron ${5, 5/2}$ is isomorphic to the abstract polyhedron ${5, 5|3}$, where the last entry "3" in its symbol denotes the size of a hole, given by the imposition
of a certain relations on the group of the hyperbolic honeycomb ${5, 5}$. Here, analogous formulations are found for the groups of the regular 4-dimensional star-polytopes, and for
those of the non-discrete regular 4-dimensional honeycombs. In all cases, the extra group relations to be imposed on the corresponding Coxeter groups are those arising from "deep
holes"; thus the abstract description of ${5, 3^k, 5/2}$ is ${5, 3^k, 5|3}$ for $k=1$ or 2. The non-discrete quasi-regular honeycombs in $\cal{E}^3$, on the other hand, are not
determined in an analogous way.
Show abstract
449 $Q_p$ spaces on Riemann surfaces
Aulaskari, Rauno; He, Yuzan; Ristioja, Juha; Zhao, Ruhan
We study the function spaces $Q_p(R)$ defined on a Riemann surface $R$, which were earlier introduced in the unit disk of the complex plane. The nesting property $Q_p(R)\subseteq Q_q
(R)$ for $0
Show abstract
We study the function spaces $Q_p(R)$ defined on a Riemann surface $R$, which were earlier introduced in the unit disk of the complex plane. The nesting property $Q_p(R)\subseteq Q_q
(R)$ for $0<p<q<\infty$ is shown in case of arbitrary hyperbolic Riemann surfaces. Further, it is proved that the classical Dirichlet space $\AD(R)\subseteq Q_p(R)$ for any $p$, $0<p
<\infty$, thus sharpening T.~Metzger's well-known result $\AD(R)\subseteq \BMOA(R)$. Also the first author's result $\AD(R)\subseteq \VMOA(R)$ for a regular Riemann surface $R$ is
sharpened by showing that, in fact, $\AD(R)\subseteq Q_{p,0}(R)$ for all $p$, $0<p<\infty$. The relationships between $Q_p(R)$ and various generalizations of the Bloch space on $R$
are considered. Finally we show that $Q_p(R)$ is a Banach space for $0<p<\infty$.
Hide abstract
465 Six primes and an almost prime in four linear equations
Balog, Antal
There are infinitely many triplets of primes $p,q,r$ such that the arithmetic means of any two of them, ${p+q\over2}$, ${p+r\over2}$, ${q+r\over2}$ are also primes. We give an
asymptotic formula for the number of such triplets up to a limit. The more involved problem of asking that in addition to the above the arithmetic mean of all three of them, ${p+q+r\
over3}$ is also prime seems to be out of reach. We show by combining the Hardy-Littlewood method with the sieve method that there are quite a few triplets for which six of the seven
entries are primes and the last is almost prime.}
Show abstract
There are infinitely many triplets of primes $p,q,r$ such that the arithmetic means of any two of them, ${p+q\over2}$, ${p+r\over2}$, ${q+r\over2}$ are also primes. We give an
asymptotic formula for the number of such triplets up to a limit. The more involved problem of asking that in addition to the above the arithmetic mean of all three of them, ${p+q+r\
over3}$ is also prime seems to be out of reach. We show by combining the Hardy-Littlewood method with the sieve method that there are quite a few triplets for which six of the seven
entries are primes and the last is almost prime.}
Hide abstract
487 On the Liouville property for divergence form operators
Barlow, Martin T.
In this paper we construct a bounded strictly positive function $\sigma$ such that the Liouville property fails for the divergence form operator $L=\nabla (\sigma^2 \nabla)$. Since
in addition $\Delta \sigma/\sigma$ is bounded, this example also gives a negative answer to a problem of Berestycki, Caffarelli and Nirenberg concerning linear Schrödinger operators.
Show abstract
In this paper we construct a bounded strictly positive function $\sigma$ such that the Liouville property fails for the divergence form operator $L=\nabla (\sigma^2 \nabla)$. Since
in addition $\Delta \sigma/\sigma$ is bounded, this example also gives a negative answer to a problem of Berestycki, Caffarelli and Nirenberg concerning linear Schr\"odinger
Hide abstract
497 Morse index of approximating periodic solutions for the billiard problem. Application to existence results
Bolle, Philippe
This paper deals with periodic solutions for the billiard problem in a bounded open set of $\hbox{\Bbbvii R}^N$ which are limits of regular solutions of Lagrangian systems with a
potential well. We give a precise link between the Morse index of approximate solutions (regarded as critical points of Lagrangian functionals) and the properties of the bounce
trajectory to which they converge.
Show abstract
This paper deals with periodic solutions for the billiard problem in a bounded open set of $\hbox{\Bbbvii R}^N$ which are limits of regular solutions of Lagrangian systems with a
potential well. We give a precise link between the Morse index of approximate solutions (regarded as critical points of Lagrangian functionals) and the properties of the bounce
trajectory to which they converge.
Hide abstract
525 Nilpotent orbit varieties and the atomic decomposition of the $q$-Kostka polynomials
Brockman, William; Haiman, Mark
We study the coordinate rings $k[\Cmubar\cap\hbox{\Frakvii t}]$ of scheme-theoretic intersections of nilpotent orbit closures with the diagonal matrices. Here $\mu'$ gives the Jordan
block structure of the nilpotent matrix. de Concini and Procesi \cite{deConcini&Procesi} proved a conjecture of Kraft \cite{Kraft} that these rings are isomorphic to the cohomology
rings of the varieties constructed by Springer \cite{Springer76,Springer78}. The famous $q$-Kostka polynomial $\Klmt(q)$ is the Hilbert series for the multiplicity of the irreducible
symmetric group representation indexed by $\lambda$ in the ring $k[\Cmubar\cap\hbox{\Frakvii t}]$. \LS \cite{L&S:Plaxique,Lascoux} gave combinatorially a decomposition of $\Klmt(q)$
as a sum of ``atomic'' polynomials with non-negative integer coefficients, and Lascoux proposed a corresponding decomposition in the cohomology model. Our work provides a geometric
interpretation of the atomic decomposition. The Frobenius-splitting results of Mehta and van der Kallen \cite{Mehta&vanderKallen} imply a direct-sum decomposition of the ideals of
nilpotent orbit closures, arising from the inclusions of the corresponding sets. We carry out the restriction to the diagonal using a recent theorem of Broer \cite{Broer}. This gives
a direct-sum decomposition of the ideals yielding the $k[\Cmubar\cap \hbox{\Frakvii t}]$, and a new proof of the atomic decomposition of the $q$-Kostka polynomials.
Show abstract
We study the coordinate rings~$k[\Cmubar\cap\hbox{\Frakvii t}]$ of scheme-theoretic intersections of nilpotent orbit closures with the diagonal matrices. Here $\mu'$ gives the Jordan
block structure of the nilpotent matrix. de Concini and Procesi~\cite{deConcini&Procesi} proved a conjecture of Kraft~\cite{Kraft} that these rings are isomorphic to the cohomology
rings of the varieties constructed by Springer~\cite{Springer76,Springer78}. The famous $q$-Kostka polynomial~$\Klmt(q)$ is the Hilbert series for the multiplicity of the irreducible
symmetric group representation indexed by~$\lambda$ in the ring $k[\Cmubar\cap\hbox{\Frakvii t}]$. \LS~\cite{L&S:Plaxique,Lascoux} gave combinatorially a decomposition of~$\Klmt(q)$
as a sum of ``atomic'' polynomials with non-negative integer coefficients, and Lascoux proposed a corresponding decomposition in the cohomology model. Our work provides a geometric
interpretation of the atomic decomposition. The Frobenius-splitting results of Mehta and van der Kallen~\cite{Mehta&vanderKallen} imply a direct-sum decomposition of the ideals of
nilpotent orbit closures, arising from the inclusions of the corresponding sets. We carry out the restriction to the diagonal using a recent theorem of Broer~\cite{Broer}. This gives
a direct-sum decomposition of the ideals yielding the $k[\Cmubar\cap \hbox{\Frakvii t}]$, and a new proof of the atomic decomposition of the $q$-Kostka polynomials.
Hide abstract
538 Upper bounds for the resonance counting function of Schrödinger operators in odd dimensions
Froese, Richard
The purpose of this note is to provide a simple proof of the sharp polynomial upper bound for the resonance counting function of a Schrödinger operator in odd dimensions. At the same
time we generalize the result to the class of super-exponentially decreasing potentials.
Show abstract
The purpose of this note is to provide a simple proof of the sharp polynomial upper bound for the resonance counting function of a Schr\"odinger operator in odd dimensions. At the
same time we generalize the result to the class of super-exponentially decreasing potentials.
Hide abstract
547 Mittag-Leffler theorems on Riemann surfaces and Riemannian manifolds
Gauthier, Paul M.
Cauchy and Poisson integrals over {\it unbounded\/} sets are employed to prove Mittag-Leffler type theorems with massive singularities as well as approximation theorems for
holomorphic and harmonic functions.
Show abstract
Cauchy and Poisson integrals over {\it unbounded\/} sets are employed to prove Mittag-Leffler type theorems with massive singularities as well as approximation theorems for
holomorphic and harmonic functions.
Hide abstract
563 Primes in short segments of arithmetic progressions
Goldston, D. A.; Yildirim, C. Y.
Consider the variance for the number of primes that are both in the interval $[y,y+h]$ for $y \in [x,2x]$ and in an arithmetic progression of modulus $q$. We study the total variance
obtained by adding these variances over all the reduced residue classes modulo $q$. Assuming a strong form of the twin prime conjecture and the Riemann Hypothesis one can obtain an
asymptotic formula for the total variance in the range when $1 \leq h/q \leq x^{1/2-\epsilon}$, for any $\epsilon >0$. We show that one can still obtain some weaker asymptotic
results assuming the Generalized Riemann Hypothesis (GRH) in place of the twin prime conjecture. In their simplest form, our results are that on GRH the same asymptotic formula
obtained with the twin prime conjecture is true for ``almost all'' $q$ in the range $1 \leq h/q \leq h^{1/4-\epsilon}$, that on averaging over $q$ one obtains an asymptotic formula
in the extended range $1 \leq h/q \leq h^{1/2-\epsilon}$, and that there are lower bounds with the correct order of magnitude for all $q$ in the range $1 \leq h/q \leq x^{1/3-\
Show abstract
Consider the variance for the number of primes that are both in the interval $[y,y+h]$ for $y \in [x,2x]$ and in an arithmetic progression of modulus $q$. We study the total variance
obtained by adding these variances over all the reduced residue classes modulo $q$. Assuming a strong form of the twin prime conjecture and the Riemann Hypothesis one can obtain an
asymptotic formula for the total variance in the range when $1 \leq h/q \leq x^{1/2-\epsilon}$, for any $\epsilon >0$. We show that one can still obtain some weaker asymptotic
results assuming the Generalized Riemann Hypothesis (GRH) in place of the twin prime conjecture. In their simplest form, our results are that on GRH the same asymptotic formula
obtained with the twin prime conjecture is true for ``almost all'' $q$ in the range $1 \leq h/q \leq h^{1/4-\epsilon}$, that on averaging over $q$ one obtains an asymptotic formula
in the extended range $1 \leq h/q \leq h^{1/2-\epsilon}$, and that there are lower bounds with the correct order of magnitude for all $q$ in the range $1 \leq h/q \leq x^{1/3-\
Hide abstract
581 The homology of singular polygon spaces
Kamiyama, Yasuhiko
Let $M_n$ be the variety of spatial polygons $P= (a_1, a_2, \dots, a_n)$ whose sides are vectors $a_i \in \text{\bf R}^3$ of length $\vert a_i \vert=1 \; (1 \leq i \leq n),$ up to
motion in $\text{\bf R}^3.$ It is known that for odd $n$, $M_n$ is a smooth manifold, while for even $n$, $M_n$ has cone-like singular points. For odd $n$, the rational homology of
$M_n$ was determined by Kirwan and Klyachko [6], [9]. The purpose of this paper is to determine the rational homology of $M_n$ for even $n$. For even $n$, let ${\tilde M}_n$ be the
manifold obtained from $M_n$ by the resolution of the singularities. Then we also determine the integral homology of ${\tilde M}_n$.
Show abstract
Let $M_n$ be the variety of spatial polygons $P= (a_1, a_2, \dots, a_n)$ whose sides are vectors $a_i \in \text{\bf R}^3$ of length $\vert a_i \vert=1 \; (1 \leq i \leq n),$ up to
motion in $\text{\bf R}^3.$ It is known that for odd $n$, $M_n$ is a smooth manifold, while for even $n$, $M_n$ has cone-like singular points. For odd $n$, the rational homology of
$M_n$ was determined by Kirwan and Klyachko [6], [9]. The purpose of this paper is to determine the rational homology of $M_n$ for even $n$. For even $n$, let ${\tilde M}_n$ be the
manifold obtained from $M_n$ by the resolution of the singularities. Then we also determine the integral homology of ${\tilde M}_n$.
Hide abstract
595 Multipliers of fractional Cauchy transforms and smoothness conditions
Luo, Donghan; MacGregor, Thomas
This paper studies conditions on an analytic function that imply it belongs to ${\cal M}_\alpha$, the set of multipliers of the family of functions given by $f(z) = \int_ {|\zeta|=1}
{1 \over (1-\overline\zeta z)^\alpha} \,d\mu (\zeta)$ $(|z|<1)$ where $\mu$ is a complex Borel measure on the unit circle and $\alpha >0$. There are two main theorems. The first
asserts that if $0<\alpha<1$ and $\sup_ {|\zeta|=1} \int^1_0 |f'(r\zeta)| (1-r)^{\alpha-1} \,dr<\infty$ then $f \in {\cal M}_\alpha$. The second asserts that if $0<\alpha \leq 1$, $f
\in H^\infty$ and $\sup_t \int^\pi_0 {|f(e^{i(t+s)}) - 2f(e^{it}) + f(e^{i(t-s)})| \over s^{2-\alpha}} \, ds < \infty$ then $f \in {\cal M}_\alpha$. The conditions in these theorems
are shown to relate to a number of smoothness conditions on the unit circle for a function analytic in the open unit disk and continuous in its closure.
Show abstract
This paper studies conditions on an analytic function that imply it belongs to ${\cal M}_\alpha$, the set of multipliers of the family of functions given by $f(z) = \int_{|\zeta|=1}
{1 \over (1-\overline\zeta z)^\alpha} \,d\mu (\zeta)$ $(|z|<1)$ where $\mu$ is a complex Borel measure on the unit circle and $\alpha >0$. There are two main theorems. The first
asserts that if $0<\alpha<1$ and $\sup_{|\zeta|=1} \int^1_0 |f'(r\zeta)| (1-r)^{\alpha-1} \,dr<\infty$ then $f \in {\cal M}_\alpha$. The second asserts that if $0<\alpha \leq 1$, $f
\in H^\infty$ and $\sup_t \int^\pi_0 {|f(e^{i(t+s)}) - 2f(e^{it}) + f(e^{i(t-s)})| \over s^{2-\alpha}} \, ds < \infty$ then $f \in {\cal M}_\alpha$. The conditions in these theorems
are shown to relate to a number of smoothness conditions on the unit circle for a function analytic in the open unit disk and continuous in its closure.
Hide abstract
605 Hardy spaces of conjugate systems of temperatures
Guzmán-Partida, Martha; Pérez-Esteva, Salvador
We define Hardy spaces of conjugate systems of temperature functions on ${\bbd R}_{+}^{n+1}$. We show that their boundary distributions are the same as the boundary distributions of
the usual Hardy spaces of conjugate systems of harmonic functions.
Show abstract
We define Hardy spaces of conjugate systems of temperature functions on ${\bbd R}_{+}^{n+1}$. We show that their boundary distributions are the same as the boundary distributions of
the usual Hardy spaces of conjugate systems of harmonic functions.
Hide abstract
620 The Eichler trace of $\bbd Z_p$ actions on Riemann surfaces
Sjerve, Denis; Yang, Qing Jie
We study $\hbox{\Bbbvii Z}_p$ actions on compact connected Riemann surfaces via their associated Eichler traces. We determine the set of possible Eichler traces and determine the
relationship between 2 actions if they have the same trace.
Show abstract
We study $\hbox{\Bbbvii Z}_p$ actions on compact connected Riemann surfaces via their associated Eichler traces. We determine the set of possible Eichler traces and determine the
relationship between 2 actions if they have the same trace.
Hide abstract
638 Fractals in the large
Strichartz, Robert S.
A {\it reverse iterated function system} (r.i.f.s.) is defined to be a set of expansive maps $\{T_1,\ldots,T_m\}$ on a discrete metric space $M$. An invariant set $F$ is defined to
be a set satisfying $F = \bigcup^m_{j=1} T_jF$, and an invariant measure $\mu$ is defined to be a solution of $\mu = \sum^m_{j=1} p_j\mu\circ T_j^{-1}$ for positive weights $p_j$.
The structure and basic properties of such invariant sets and measures is described, and some examples are given. A {\it blowup} $\cal F$ of a self-similar set $F$ in $\Bbb R^n$ is
defined to be the union of an increasing sequence of sets, each similar to $F$. We give a general construction of blowups, and show that under certain hypotheses a blowup is the sum
set of $F$ with an invariant set for a r.i.f.s. Some examples of blowups of familiar fractals are described. If $\mu$ is an invariant measure on $\Bbb Z^+$ for a linear r.i.f.s., we
describe the behavior of its {\it analytic} transform, the power series $\sum^\infty_{n=0} \mu(n)z^n$ on the unit disc.
Show abstract
A {\it reverse iterated function system} (r.i.f.s.) is defined to be a set of expansive maps $\{T_1,\ldots,T_m\}$ on a discrete metric space $M$. An invariant set $F$ is defined to
be a set satisfying $F = \bigcup^m_{j=1} T_jF$, and an invariant measure $\mu$ is defined to be a solution of $\mu = \sum^m_{j=1} p_j\mu\circ T_j^{-1}$ for positive weights $p_j$.
The structure and basic properties of such invariant sets and measures is described, and some examples are given. A {\it blowup} $\cal F$ of a self-similar set $F$ in $\Bbb R^n$ is
defined to be the union of an increasing sequence of sets, each similar to $F$. We give a general construction of blowups, and show that under certain hypotheses a blowup is the sum
set of $F$ with an invariant set for a r.i.f.s. Some examples of blowups of familiar fractals are described. If $\mu$ is an invariant measure on $\Bbb Z^+$ for a linear r.i.f.s., we
describe the behavior of its {\it analytic} transform, the power series $\sum^\infty_{n=0} \mu(n)z^n$ on the unit disc.
Hide abstract
658 Hankel operators on pseudoconvex domains of finite type in ${\Bbb C}^2$
Symesak, Frédéric
The aim of this paper is to study small Hankel operators $h$ on the Hardy space or on weighted Bergman spaces, where $\Omega$ is a finite type domain in ${\Bbbvii C}^2$ or a strictly
pseudoconvex domain in ${\Bbbvii C}^n$. We give a sufficient condition on the symbol $f$ so that $h$ belongs to the Schatten class ${\cal S}_p$, $1\le p<+\infty$.
Show abstract
The aim of this paper is to study small Hankel operators $h$ on the Hardy space or on weighted Bergman spaces, where $\Omega$ is a finite type domain in ${\Bbbvii C}^2$ or a strictly
pseudoconvex domain in ${\Bbbvii C}^n$. We give a sufficient condition on the symbol $f$ so that $h$ belongs to the Schatten class ${\cal S}_p$, $1\le p<+\infty$.
Hide abstract
673 Fredholm modules and spectral flow
Carey, Alan; Phillips, John
An {\it odd unbounded\/} (respectively, $p$-{\it summable}) {\it Fredholm module\/} for a unital Banach $\ast$-algebra, $A$, is a pair $(H,D)$ where $A$ is represented on the Hilbert
space, $H$, and $D$ is an unbounded self-adjoint operator on $H$ satisfying: \item{(1)} $(1+D^2)^{-1}$ is compact (respectively, $\Trace\bigl((1+D^2)^{-(p/2)}\bigr) <\infty$), and \
item{(2)} $\{a\in A\mid [D,a]$ is bounded$\}$ is a dense $\ast-$subalgebra of $A$. If $u$ is a unitary in the dense $\ast-$subalgebra mentioned in (2) then $$ uDu^\ast=D+u[D,u^{\
ast}]=D+B $$ where $B$ is a bounded self-adjoint operator. The path $$ D_t^u:=(1-t) D+tuDu^\ast=D+tB $$ is a ``continuous'' path of unbounded self-adjoint ``Fredholm'' operators.
More precisely, we show that $$ F_t^u:=D_t^u \bigl(1+(D_t^u)^2\bigr)^{-{1\over 2}} $$ is a norm-continuous path of (bounded) self-adjoint Fredholm operators. The {\it spectral flow\
/} of this path $\{F_t^u\}$ (or $\{ D_t^u\}$) is roughly speaking the net number of eigenvalues that pass through $0$ in the positive direction as $t$ runs from $0$ to $1$. This
integer, $$ \sf(\{D_t^u\}):=\sf(\{F_t^u\}), $$ recovers the pairing of the $K$-homology class $[D]$ with the $K$-theory class [$u$]. We use I. M. Singer's idea (as did E. Getzler in
the $\theta$-summable case) to consider the operator $B$ as a parameter in the Banach manifold, $B_ {\sa}(H)$, so that spectral flow can be exhibited as the integral of a closed
$1$-form on this manifold. Now, for $B$ in our manifold, any $X\in T_B(B_ {\sa}(H))$ is given by an $X$ in $B_ {\sa}(H)$ as the derivative at $B$ along the curve $t\mapsto B+tX$ in
the manifold. Then we show that for $m$ a sufficiently large half-integer: $$ \alpha (X)={1\over {\tilde {C}_m}}\Tr \Bigl(X\bigl(1+(D+B)^2\bigr)^{-m}\Bigr) $$ is a closed $1$-form.
For any piecewise smooth path $\{D_t=D+B_t\}$ with $D_0$ and $D_1$ unitarily equivalent we show that $$ \sf(\{D_t\})={1\over {\tilde {C}_m}} \int_0^1\Tr \Bigl({d\over {dt}} (D_t)
(1+D_t^2)^{-m}\Bigr)\,dt $$ the integral of the $1$-form $\alpha$. If $D_0$ and $D_1$ are not unitarily equivalent, we must add a pair of correction terms to the right-hand side. We
also prove a bounded finitely summable version of the form: $$ \sf(\{F_t\})={1\over C_n}\int_0^1\Tr\Bigl({d\over dt}(F_t)(1-F_t^2)^n\Bigr)\,dt $$ for $n\geq{{p-1}\over 2}$ an
integer. The unbounded case is proved by reducing to the bounded case via the map $D\mapsto F=D(1+D^2 )^{-{1\over 2}}$. We prove simultaneously a type II version of our results.
Show abstract
An {\it odd unbounded\/} (respectively, $p$-{\it summable}) {\it Fredholm module\/} for a unital Banach $\ast$-algebra, $A$, is a pair $(H,D)$ where $A$ is represented on the Hilbert
space, $H$, and $D$ is an unbounded self-adjoint operator on $H$ satisfying: \item{(1)} $(1+D^2)^{-1}$ is compact (respectively, $\Trace\bigl((1+D^2)^{-(p/2)}\bigr) <\infty$), and \
item{(2)} $\{a\in A\mid [D,a]$ is bounded$\}$ is a dense $\ast-$subalgebra of $A$.
If $u$ is a unitary in the dense $\ast-$subalgebra mentioned in (2) then $$ uDu^\ast=D+u[D,u^{\ast}]=D+B $$ where $B$ is a bounded self-adjoint operator. The path $$ D_t^u:=(1-t)
D+tuDu^\ast=D+tB $$ is a ``continuous'' path of unbounded self-adjoint ``Fredholm'' operators. More precisely, we show that $$ F_t^u:=D_t^u \bigl(1+(D_t^u)^2\bigr)^{-{1\over 2}} $$
is a norm-continuous path of (bounded) self-adjoint Fredholm operators. The {\it spectral flow\/} of this path $\{F_t^u\}$ (or $\{ D_t^u\}$) is roughly speaking the net number of
eigenvalues that pass through $0$ in the positive direction as $t$ runs from $0$ to $1$. This integer, $$ \sf(\{D_t^u\}):=\sf(\{F_t^u\}), $$ recovers the pairing of the $K$-homology
class $[D]$ with the $K$-theory class [$u$]. We use I.~M.~Singer's idea (as did E.~Getzler in the $\theta$-summable case) to consider the operator $B$ as a parameter in the Banach
manifold, $B_{\sa}(H)$, so that spectral flow can be exhibited as the integral of a closed $1$-form on this manifold. Now, for $B$ in our manifold, any $X\in T_B(B_{\sa}(H))$ is
given by an $X$ in $B_{\sa}(H)$ as the derivative at $B$ along the curve $t\mapsto B+tX$ in the manifold. Then we show that for $m$ a sufficiently large half-integer: $$ \alpha (X)=
{1\over {\tilde {C}_m}}\Tr \Bigl(X\bigl(1+(D+B)^2\bigr)^{-m}\Bigr) $$ is a closed $1$-form. For any piecewise smooth path $\{D_t=D+B_t\}$ with $D_0$ and $D_1$ unitarily equivalent we
show that $$ \sf(\{D_t\})={1\over {\tilde {C}_m}} \int_0^1\Tr \Bigl({d\over {dt}} (D_t)(1+D_t^2)^{-m}\Bigr)\,dt $$ the integral of the $1$-form $\alpha$. If $D_0$ and $D_1$ are not
unitarily equivalent, we must add a pair of correction terms to the right-hand side. We also prove a bounded finitely summable version of the form: $$ \sf(\{F_t\})={1\over C_n}\int_0
^1\Tr\Bigl({d\over dt}(F_t)(1-F_t^2)^n\Bigr)\,dt $$ for $n\geq{{p-1}\over 2}$ an integer. The unbounded case is proved by reducing to the bounded case via the map $D\mapsto F=D(1+D^2
)^{-{1\over 2}}$. We prove simultaneously a type II version of our results.
Hide abstract
719 Indecomposable almost free modules---the local case
Göbel, Rüdiger; Shelah, Saharon
Let $R$ be a countable, principal ideal domain which is not a field and $A$ be a countable $R$-algebra which is free as an $R$-module. Then we will construct an $\aleph_1$-free
$R$-module $G$ of rank $\aleph_1$ with endomorphism algebra End$_RG = A$. Clearly the result does not hold for fields. Recall that an $R$-module is $\aleph_1$-free if all its
countable submodules are free, a condition closely related to Pontryagin's theorem. This result has many consequences, depending on the algebra $A$ in use. For instance, if we choose
$A = R$, then clearly $G$ is an indecomposable `almost free' module. The existence of such modules was unknown for rings with only finitely many primes like $R = \hbox{\Bbbvii Z}_
{(p)}$, the integers localized at some prime $p$. The result complements a classical realization theorem of Corner's showing that any such algebra is an endomorphism algebra of some
torsion-free, reduced $R$-module $G$ of countable rank. Its proof is based on new combinatorial-algebraic techniques related with what we call {\it rigid tree-elements\/} coming from
a module generated over a forest of trees.
Show abstract
Let $R$ be a countable, principal ideal domain which is not a field and $A$ be a countable $R$-algebra which is free as an $R$-module. Then we will construct an $\aleph_1$-free
$R$-module $G$ of rank $\aleph_1$ with endomorphism algebra End$_RG = A$. Clearly the result does not hold for fields. Recall that an $R$-module is $\aleph_1$-free if all its
countable submodules are free, a condition closely related to Pontryagin's theorem. This result has many consequences, depending on the algebra $A$ in use. For instance, if we choose
$A = R$, then clearly $G$ is an indecomposable `almost free' module. The existence of such modules was unknown for rings with only finitely many primes like $R = \hbox{\Bbbvii Z}_
{(p)}$, the integers localized at some prime $p$. The result complements a classical realization theorem of Corner's showing that any such algebra is an endomorphism algebra of some
torsion-free, reduced $R$-module $G$ of countable rank. Its proof is based on new combinatorial-algebraic techniques related with what we call {\it rigid tree-elements\/} coming from
a module generated over a forest of trees.
Hide abstract
739 Eigenpolytopes of distance regular graphs
Godsil, C. D.
Let $X$ be a graph with vertex set $V$ and let $A$ be its adjacency matrix. If $E$ is the matrix representing orthogonal projection onto an eigenspace of $A$ with dimension $m$, then
$E$ is positive semi-definite. Hence it is the Gram matrix of a set of $|V|$ vectors in $\re^m$. We call the convex hull of a such a set of vectors an eigenpolytope of $X$. The
connection between the properties of this polytope and the graph is strongest when $X$ is distance regular and, in this case, it is most natural to consider the eigenpolytope
associated to the second largest eigenvalue of $A$. The main result of this paper is the characterisation of those distance regular graphs $X$ for which the $1$-skeleton of this
eigenpolytope is isomorphic to $X$.
Show abstract
Let $X$ be a graph with vertex set $V$ and let $A$ be its adjacency matrix. If $E$ is the matrix representing orthogonal projection onto an eigenspace of $A$ with dimension $m$, then
$E$ is positive semi-definite. Hence it is the Gram matrix of a set of $|V|$ vectors in $\re^m$. We call the convex hull of a such a set of vectors an eigenpolytope of $X$. The
connection between the properties of this polytope and the graph is strongest when $X$ is distance regular and, in this case, it is most natural to consider the eigenpolytope
associated to the second largest eigenvalue of $A$. The main result of this paper is the characterisation of those distance regular graphs $X$ for which the $1$-skeleton of this
eigenpolytope is isomorphic to $X$.
Hide abstract
756 Estimates on renormalization group transformations
Brydges, D.; Dimock, J.; Hurd, T. R.
We consider a specific realization of the renormalization group (RG) transformation acting on functional measures for scalar quantum fields which are expressible as a polymer
expansion times an ultra-violet cutoff Gaussian measure. The new and improved definitions and estimates we present are sufficiently general and powerful to allow iteration of the
transformation, hence the analysis of complete renormalization group flows, and hence the construction of a variety of scalar quantum field theories.
Show abstract
We consider a specific realization of the renormalization group (RG) transformation acting on functional measures for scalar quantum fields which are expressible as a polymer
expansion times an ultra-violet cutoff Gaussian measure. The new and improved definitions and estimates we present are sufficiently general and powerful to allow iteration of the
transformation, hence the analysis of complete renormalization group flows, and hence the construction of a variety of scalar quantum field theories.
Hide abstract
794 Upper bounds on $|L(1,\chi)|$ and applications
Louboutin, Stéphane
We give upper bounds on the modulus of the values at $s=1$ of Artin $L$-functions of abelian extensions unramified at all the infinite places. We also explain how we can compute
better upper bounds and explain how useful such computed bounds are when dealing with class number problems for $\CM$-fields. For example, we will reduce the determination of all the
non-abelian normal $\CM$-fields of degree $24$ with Galois group $\SL_2(F_3)$ (the special linear group over the finite field with three elements) which have class number one to the
computation of the class numbers of $23$ such $\CM$-fields.
Show abstract
We give upper bounds on the modulus of the values at $s=1$ of Artin $L$-functions of abelian extensions unramified at all the infinite places. We also explain how we can compute
better upper bounds and explain how useful such computed bounds are when dealing with class number problems for $\CM$-fields. For example, we will reduce the determination of all the
non-abelian normal $\CM$-fields of degree $24$ with Galois group $\SL_2(F_3)$ (the special linear group over the finite field with three elements) which have class number one to the
computation of the class numbers of $23$ such $\CM$-fields.
Hide abstract
816 Tableaux realization of generalized Verma modules
Mazorchuk, Volodymyr
We construct the tableaux realization of generalized Verma modules over the Lie algebra $\sl(3,{\bbd C})$. By the same procedure we construct and investigate the structure of a new
family of generalized Verma modules over $\sl(n,{\bbd C})$.
Show abstract
We construct the tableaux realization of generalized Verma modules over the Lie algebra $\sl(3,{\bbd C})$. By the same procedure we construct and investigate the structure of a new
family of generalized Verma modules over $\sl(n,{\bbd C})$.
Hide abstract
829 Conjugacy classes and nilpotent variety of a reductive monoid
Putcha, Mohan S.
We continue in this paper our study of conjugacy classes of a reductive monoid $M$. The main theorems establish a strong connection with the Bruhat-Renner decomposition of $M$. We
use our results to decompose the variety $M_{\nil}$ of nilpotent elements of $M$ into irreducible components. We also identify a class of nilpotent elements that we call standard and
prove that the number of conjugacy classes of standard nilpotent elements is always finite.
Show abstract
We continue in this paper our study of conjugacy classes of a reductive monoid $M$. The main theorems establish a strong connection with the Bruhat-Renner decomposition of $M$. We
use our results to decompose the variety $M_{\nil}$ of nilpotent elements of $M$ into irreducible components. We also identify a class of nilpotent elements that we call standard and
prove that the number of conjugacy classes of standard nilpotent elements is always finite.
Hide abstract
845 Lusternik-Schnirelmann category and algebraic $R$-local homotopy theory
Scheerer, H.; Tanré, D.
In this paper, we define the notion of $R_{\ast}$-$\LS$ category associated to an increasing system of subrings of $\Q$ and we relate it to the usual $\LS$-category. We also relate
it to the invariant introduced by Félix and Lemaire in tame homotopy theory, in which case we give a description in terms of Lie algebras and of cocommutative coalgebras, extending
results of Lemaire-Sigrist and Félix-Halperin.
Show abstract
In this paper, we define the notion of $R_{\ast}$-$\LS$ category associated to an increasing system of subrings of $\Q$ and we relate it to the usual $\LS$-category. We also relate
it to the invariant introduced by F\'elix and Lemaire in tame homotopy theory, in which case we give a description in terms of Lie algebras and of cocommutative coalgebras, extending
results of Lemaire-Sigrist and F\'elix-Halperin.
Hide abstract
863 Smooth formal embeddings and the residue complex
Yekutieli, Amnon
Let $\pi\colon X \ar S$ be a finite type morphism of noetherian schemes. A {\it smooth formal embedding\/} of $X$ (over $S$) is a bijective closed immersion $X \subset \mfrak{X}$,
where $\mfrak{X}$ is a noetherian formal scheme, formally smooth over $S$. An example of such an embedding is the formal completion $\mfrak{X} = Y_ {/ X}$ where $X \subset Y$ is an
algebraic embedding. Smooth formal embeddings can be used to calculate algebraic De Rham (co)homology. Our main application is an explicit construction of the Grothendieck residue
complex when $S$ is a regular scheme. By definition the residue complex is the Cousin complex of $\pi^{!} \mcal{O}_ {S}$, as in \cite{RD}. We start with I-C. Huang's theory of
pseudofunctors on modules with $0$-dimensional support, which provides a graded sheaf $\bigoplus_ {q} \mcal{K}^{q}_ {\,X / S}$. We then use smooth formal embeddings to obtain the
coboundary operator $\delta \colon\mcal{K}^{q}_ {X / S} \ar \mcal{K}^{q + 1}_ {\,X / S}$. We exhibit a canonical isomorphism between the complex $(\mcal{K}^{\bdot}_ {\,X / S}, \
delta)$ and the residue complex of \cite{RD}. When $\pi$ is equidimensional of dimension $n$ and generically smooth we show that $\mrm{H}^{-n} \mcal{K}^{\bdot}_ {\,X/S}$ is
canonically isomorphic to to the sheaf of regular differentials of Kunz-Waldi \cite{KW}. Another issue we discuss is Grothendieck Duality on a noetherian formal scheme $\mfrak{X}$.
Our results on duality are used in the construction of $\mcal{K}^{\bdot}_ {\,X / S}$.
Show abstract
Let $\pi\colon X \ar S$ be a finite type morphism of noetherian schemes. A {\it smooth formal embedding\/} of $X$ (over $S$) is a bijective closed immersion $X \subset \mfrak{X}$,
where $\mfrak{X}$ is a noetherian formal scheme, formally smooth over $S$. An example of such an embedding is the formal completion $\mfrak{X} = Y_{/ X}$ where $X \subset Y$ is an
algebraic embedding. Smooth formal embeddings can be used to calculate algebraic De~Rham (co)homology. Our main application is an explicit construction of the Grothendieck residue
complex when $S$ is a regular scheme. By definition the residue complex is the Cousin complex of $\pi^{!} \mcal{O}_{S}$, as in \cite{RD}. We start with I-C.~Huang's theory of
pseudofunctors on modules with $0$-dimensional support, which provides a graded sheaf $\bigoplus_{q} \mcal{K}^{q}_{\,X / S}$. We then use smooth formal embeddings to obtain the
coboundary operator $\delta \colon\mcal{K}^{q}_{X / S} \ar \mcal{K}^{q + 1}_{\,X / S}$. We exhibit a canonical isomorphism between the complex $(\mcal{K}^{\bdot}_{\,X / S}, \delta)$
and the residue complex of \cite{RD}. When $\pi$ is equidimensional of dimension $n$ and generically smooth we show that $\mrm{H}^{-n} \mcal{K}^{\bdot}_{\,X/S}$ is canonically
isomorphic to to the sheaf of regular differentials of Kunz-Waldi \cite{KW}. Another issue we discuss is Grothendieck Duality on a noetherian formal scheme $\mfrak{X}$. Our results
on duality are used in the construction of $\mcal{K}^{\bdot}_{\,X / S}$.
Hide abstract
897 Fourier multipliers for local hardy spaces on Chébli-Trimèche hypergroups
Bloom, Walter R.; Xu, Zengfu
In this paper we consider Fourier multipliers on local Hardy spaces $\qin$ $(0 < p \leq 1)$ for Chébli-Trimèche hypergroups. The molecular characterization is investigated which
allows us to prove a version of Hörmander's multiplier theorem.
Show abstract
In this paper we consider Fourier multipliers on local Hardy spaces $\qin$ $(0<p \leq 1)$ for Ch\'ebli-Trim\`eche hypergroups. The molecular characterization is investigated which
allows us to prove a version of H\"ormander's multiplier theorem.
Hide abstract
929 Decomposition varieties in semisimple Lie algebras
Broer, Abraham
The notion of decompositon class in a semisimple Lie algebra is a common generalization of nilpotent orbits and the set of regular semisimple elements. We prove that the closure of a
decomposition class has many properties in common with nilpotent varieties, \eg, its normalization has rational singularities. The famous Grothendieck simultaneous resolution is
related to the decomposition class of regular semisimple elements. We study the properties of the analogous commutative diagrams associated to an arbitrary decomposition class.
Show abstract
The notion of decompositon class in a semisimple Lie algebra is a common generalization of nilpotent orbits and the set of regular semisimple elements. We prove that the closure of a
decomposition class has many properties in common with nilpotent varieties, \eg, its normalization has rational singularities.
The famous Grothendieck simultaneous resolution is related to the decomposition class of regular semisimple elements. We study the properties of the analogous commutative diagrams
associated to an arbitrary decomposition class.
Hide abstract
972 Trace class elements and cross-sections in Kac-Moody groups
Brüchert, Gerd
Let $G$ be an affine Kac-Moody group, $\pi_0,\dots,\pi_r,\pi_{\delta}$ its fundamental irreducible representations and $\chi_0, \dots, \chi_r, \chi_{\delta}$ their characters. We
determine the set of all group elements $x$ such that all $\pi_i(x)$ act as trace class operators, \ie, such that $\chi_i(x)$ exists, then prove that the $\chi_i$ are class
functions. Thus, $\chi:=(\chi_0, \dots, \chi_r, \chi_{\delta})$ factors to an adjoint quotient $\bar{\chi}$ for $G$. In a second part, following Steinberg, we define a cross-section
$C$ for the potential regular classes in $G$. We prove that the restriction $\chi|_C$ behaves well algebraically. Moreover, we obtain an action of $\hbox{\Bbbvii C}^{\times}$ on $C$,
which leads to a functional identity for $\chi|_C$ which shows that $\chi|_C$ is quasi-homogeneous.
Show abstract
Let $G$ be an affine Kac-Moody group, $\pi_0,\dots,\pi_r,\pi_{\delta}$ its fundamental irreducible representations and $\chi_0, \dots, \chi_r, \chi_{\delta}$ their characters. We
determine the set of all group elements $x$ such that all $\pi_i(x)$ act as trace class operators, \ie, such that $\chi_i(x)$ exists, then prove that the $\chi_i$ are class
functions. Thus, $\chi:=(\chi_0, \dots, \chi_r, \chi_{\delta})$ factors to an adjoint quotient $\bar{\chi}$ for $G$. In a second part, following Steinberg, we define a cross-section
$C$ for the potential regular classes in $G$. We prove that the restriction $\chi|_C$ behaves well algebraically. Moreover, we obtain an action of $\hbox{\Bbbvii C}^{\times}$ on $C$,
which leads to a functional identity for $\chi|_C$ which shows that $\chi|_C$ is quasi-homogeneous.
Hide abstract
1007 Galois module structure of ambiguous ideals in biquadratic extensions
Elder, G. Griffith
Let $N/K$ be a biquadratic extension of algebraic number fields, and $G=\Gal (N/K)$. Under a weak restriction on the ramification filtration associated with each prime of $K$ above
$2$, we explicitly describe the $\bZ[G]$-module structure of each ambiguous ideal of $N$. We find under this restriction that in the representation of each ambiguous ideal as a $\bZ
[G]$-module, the exponent (or multiplicity) of each indecomposable module is determined by the invariants of ramification, alone. For a given group, $G$, define ${\cal S}_G$ to be
the set of indecomposable $\bZ[G]$-modules, ${\cal M}$, such that there is an extension, $N/K$, for which $G\cong\Gal (N/K)$, and ${\cal M}$ is a $\bZ[G]$-module summand of an
ambiguous ideal of $N$. Can ${\cal S}_G$ ever be infinite? In this paper we answer this question of Chinburg in the affirmative.
Show abstract
Let $N/K$ be a biquadratic extension of algebraic number fields, and $G=\Gal (N/K)$. Under a weak restriction on the ramification filtration associated with each prime of $K$ above
$2$, we explicitly describe the $\bZ[G]$-module structure of each ambiguous ideal of $N$. We find under this restriction that in the representation of each ambiguous ideal as a $\bZ
[G]$-module, the exponent (or multiplicity) of each indecomposable module is determined by the invariants of ramification, alone. For a given group, $G$, define ${\cal S}_G$ to be
the set of indecomposable $\bZ[G]$-modules, ${\cal M}$, such that there is an extension, $N/K$, for which $G\cong\Gal (N/K)$, and ${\cal M}$ is a $\bZ[G]$-module summand of an
ambiguous ideal of $N$. Can ${\cal S}_G$ ever be infinite? In this paper we answer this question of Chinburg in the affirmative.
Hide abstract
1048 Localization theories for simplicial presheaves
Goerss, P. G.; Jardine, J. F.
Most extant localization theories for spaces, spectra and diagrams of such can be derived from a simple list of axioms which are verified in broad generality. Several new theories
are introduced, including localizations for simplicial presheaves and presheaves of spectra at homology theories represented by presheaves of spectra, and a theory of localization
along a geometric topos morphism. The $f$-localization concept has an analog for simplicial presheaves, and specializes to the $\hbox{\Bbbvii A}^1$-local theory of Morel-Voevodsky.
This theory answers a question of Soulé concerning integral homology localizations for diagrams of spaces.
Show abstract
Most extant localization theories for spaces, spectra and diagrams of such can be derived from a simple list of axioms which are verified in broad generality. Several new theories
are introduced, including localizations for simplicial presheaves and presheaves of spectra at homology theories represented by presheaves of spectra, and a theory of localization
along a geometric topos morphism. The $f$-localization concept has an analog for simplicial presheaves, and specializes to the $\hbox{\Bbbvii A}^1$-local theory of Morel-Voevodsky.
This theory answers a question of Soul\'e concerning integral homology localizations for diagrams of spaces.
Hide abstract
1090 Sur les transformées de Riesz sur les groupes de Lie moyennables et sur certains espaces homogènes
Lohoué, Noël; Mustapha, Sami
Let $\Delta$ be a left invariant sub-Laplacian on a Lie group $G$ and let $\nabla$ be the associated gradient. In this paper we investigate the boundness of the Riesz transform $\
nabla\Delta^{-1/2}$ on Lie groups $G$ which are amenable and with exponential volume growth and on certain homogenous spaces.
Show abstract
Let $\Delta$ be a left invariant sub-Laplacian on a Lie group $G$ and let $\nabla$ be the associated gradient. In this paper we investigate the boundness of the Riesz transform $\
nabla\Delta^{-1/2}$ on Lie groups $G$ which are amenable and with exponential volume growth and on certain homogenous spaces.
Hide abstract
1105 Tempered representations and the theta correspondence
Roberts, Brooks
Let $V$ be an even dimensional nondegenerate symmetric bilinear space over a nonarchimedean local field $F$ of characteristic zero, and let $n$ be a nonnegative integer. Suppose that
$\sigma \in \Irr \bigl(\OO (V)\bigr)$ and $\pi \in \Irr \bigl(\Sp (n,F)\bigr)$ correspond under the theta correspondence. Assuming that $\sigma$ is tempered, we investigate the
problem of determining the Langlands quotient data for $\pi$.
Show abstract
Let $V$ be an even dimensional nondegenerate symmetric bilinear space over a nonarchimedean local field $F$ of characteristic zero, and let $n$ be a nonnegative integer. Suppose that
$\sigma \in \Irr \bigl(\OO (V)\bigr)$ and $\pi \in \Irr \bigl(\Sp (n,F)\bigr)$ correspond under the theta correspondence. Assuming that $\sigma$ is tempered, we investigate the
problem of determining the Langlands quotient data for $\pi$.
Hide abstract
1119 Ward's solitons II: exact solutions
Anand, Christopher Kumar
In a previous paper, we gave a correspondence between certain exact solutions to a \((2+1)\)-dimensional integrable Chiral Model and holomorphic bundles on a compact surface. In this
paper, we use algebraic geometry to derive a closed-form expression for those solutions and show by way of examples how the algebraic data which parametrise the solution space
dictates the behaviour of the solutions. Dans un article précédent, nous avons démontré que les solutions d'un modèle chiral intégrable en dimension \( (2+1) \) correspondent aux
fibrés vectoriels holomorphes sur une surface compacte. Ici, nous employons la géométrie algébrique dans une construction explicite des solutions. Nous donnons une formule
matricielle et illustrons avec trois exemples la signification des invariants algébriques pour le comportement physique des solutions.
Show abstract
In a previous paper, we gave a correspondence between certain exact solutions to a \((2+1)\)-dimensional integrable Chiral Model and holomorphic bundles on a compact surface. In this
paper, we use algebraic geometry to derive a closed-form expression for those solutions and show by way of examples how the algebraic data which parametrise the solution space
dictates the behaviour of the solutions.
Dans un article pr\'{e}c\'{e}dent, nous avons d\'{e}montr\'{e} que les solutions d'un mod\`{e}le chiral int\'{e}grable en dimension \( (2+1) \) correspondent aux fibr\'{e}s
vectoriels holomorphes sur une surface compacte. Ici, nous employons la g\'{e}om\'{e}trie alg\'{e}brique dans une construction explicite des solutions. Nous donnons une formule
matricielle et illustrons avec trois exemples la signification des invariants alg\'{e}briques pour le comportement physique des solutions.
Hide abstract
1138 Compound invariants and mixed $F$-, $\DF$-power spaces
Chalov, P. A.; Terzioğlu, T.; Zahariuta, V. P.
The problems on isomorphic classification and quasiequivalence of bases are studied for the class of mixed $F$-, $\DF$-power series spaces, {\it i.e.} the spaces of the following
kind $$ G(\la,a)=\lim_{p \to \infty} \proj \biggl(\lim_{q \to \infty}\ind \Bigl(\ell_1\bigl(a_i (p,q)\bigr)\Bigr)\biggr), $$ where $a_i (p,q)=\exp\bigl((p-\la_i q)a_i\bigr)$, $p,q \
in \N$, and $\la =( \la_i)_{i \in \N}$, $a=(a_i)_{i \in \N}$ are some sequences of positive numbers. These spaces, up to isomorphisms, are basis subspaces of tensor products of power
series spaces of $F$- and $\DF$-types, respectively. The $m$-rectangle characteristic $\mu_m^{\lambda,a}(\delta,\varepsilon; \tau,t)$, $m \in \N$ of the space $G(\la,a)$ is defined
as the number of members of the sequence $(\la_i, a_i)_{i \in \N}$ which are contained in the union of $m$ rectangles $P_k = (\delta_k, \varepsilon_k] \times (\tau_k, t_k]$, $k =
1,2, \ldots, m$. It is shown that each $m$-rectangle characteristic is an invariant on the considered class under some proper definition of an equivalency relation. The main tool are
new compound invariants, which combine some version of the classical approximative dimensions (Kolmogorov, Pe{\l}czynski) with appropriate geometrical and interpolational operations
under neighborhoods of the origin (taken from a given basis).
Show abstract
The problems on isomorphic classification and quasiequivalence of bases are studied for the class of mixed $F$-, $\DF$-power series spaces, {\it i.e.} the spaces of the following
kind $$ G(\la,a)=\lim_{p \to \infty} \proj \biggl(\lim_{q \to \infty}\ind \Bigl(\ell_1\bigl(a_i (p,q)\bigr)\Bigr)\biggr), $$ where $a_i (p,q)=\exp\bigl((p-\la_i q)a_i\bigr)$, $p,q \
in \N$, and $\la =( \la_i)_{i \in \N}$, $a=(a_i)_{i \in \N}$ are some sequences of positive numbers. These spaces, up to isomorphisms, are basis subspaces of tensor products of power
series spaces of $F$- and $\DF$-types, respectively. The $m$-rectangle characteristic $\mu_m^{\lambda,a}(\delta,\varepsilon; \tau,t)$, $m \in \N$ of the space $G(\la,a)$ is defined
as the number of members of the sequence $(\la_i, a_i)_{i \in \N}$ which are contained in the union of $m$ rectangles $P_k = (\delta_k, \varepsilon_k] \times (\tau_k, t_k]$, $k =
1,2, \ldots, m$. It is shown that each $m$-rectangle characteristic is an invariant on the considered class under some proper definition of an equivalency relation. The main tool are
new compound invariants, which combine some version of the classical approximative dimensions (Kolmogorov, Pe{\l}czynski) with appropriate geometrical and interpolational operations
under neighborhoods of the origin (taken from a given basis).
Hide abstract
1163 Gradient estimates for harmonic Functions on manifolds with Lipschitz metrics
Chen, Jingyi; Hsu, Elton P.
We introduce a distributional Ricci curvature on complete smooth manifolds with Lipschitz continuous metrics. Under an assumption on the volume growth of geodesics balls, we obtain a
gradient estimate for weakly harmonic functions if the distributional Ricci curvature is bounded below.
Show abstract
We introduce a distributional Ricci curvature on complete smooth manifolds with Lipschitz continuous metrics. Under an assumption on the volume growth of geodesics balls, we obtain a
gradient estimate for weakly harmonic functions if the distributional Ricci curvature is bounded below.
Hide abstract
1176 Isomorphism problem for metacirculant graphs of order a product of distinct primes
Dobson, Edward
In this paper, we solve the isomorphism problem for metacirculant graphs of order $pq$ that are not circulant. To solve this problem, we first extend Babai's characterization of the
CI-property to non-Cayley vertex-transitive hypergraphs. Additionally, we find a simple characterization of metacirculant Cayley graphs of order $pq$, and exactly determine the full
isomorphism classes of circulant graphs of order $pq$.
Show abstract
In this paper, we solve the isomorphism problem for metacirculant graphs of order $pq$ that are not circulant. To solve this problem, we first extend Babai's characterization of the
CI-property to non-Cayley vertex-transitive hypergraphs. Additionally, we find a simple characterization of metacirculant Cayley graphs of order $pq$, and exactly determine the full
isomorphism classes of circulant graphs of order $pq$.
Hide abstract
1189 Totally real rigid elements and Galois theory
Engler, Antonio José
Abelian closed subgroups of the Galois group of the pythagorean closure of a formally real field are described by means of the inertia group of suitable valuation rings.
Show abstract
Abelian closed subgroups of the Galois group of the pythagorean closure of a formally real field are described by means of the inertia group of suitable valuation rings.
Hide abstract
1209 A lower bound for $K_X L$ of quasi-polarized surfaces $(X,L)$ with non-negative Kodaira dimension
Fukuma, Yoshiaki
Let $X$ be a smooth projective surface over the complex number field and let $L$ be a nef-big divisor on $X$. Here we consider the following conjecture; If the Kodaira dimension $\
kappa(X)\geq 0$, then $K_{X}L\geq 2q(X)-4$, where $q(X)$ is the irregularity of $X$. In this paper, we prove that this conjecture is true if (1) the case in which $\kappa(X)=0$ or
$1$, (2) the case in which $\kappa(X)=2$ and $h^{0}(L)\geq 2$, or (3) the case in which $\kappa(X)=2$, $X$ is minimal, $h^{0}(L)=1$, and $L$ satisfies some conditions.
Show abstract
Let $X$ be a smooth projective surface over the complex number field and let $L$ be a nef-big divisor on $X$. Here we consider the following conjecture; If the Kodaira dimension $\
kappa(X)\geq 0$, then $K_{X}L\geq 2q(X)-4$, where $q(X)$ is the irregularity of $X$. In this paper, we prove that this conjecture is true if (1) the case in which $\kappa(X)=0$ or
$1$, (2) the case in which $\kappa(X)=2$ and $h^{0}(L)\geq 2$, or (3) the case in which $\kappa(X)=2$, $X$ is minimal, $h^{0}(L)=1$, and $L$ satisfies some conditions.
Hide abstract
1236 The behaviour of Legendre and ultraspherical polynomials in $L_p$-spaces
Kalton, N. J.; Tzafriri, L.
We consider the analogue of the $\Lambda(p)-$problem for subsets of the Legendre polynomials or more general ultraspherical polynomials. We obtain the ``best possible'' result that
if $2 < p < 4$ then a random subset of $N$ Legendre polynomials of size $N^{4/p-1}$ spans an Hilbertian subspace. We also answer a question of König concerning the structure of the
space of polynomials of degree $n$ in various weighted $L_p$-spaces.
Show abstract
We consider the analogue of the $\Lambda(p)-$problem for subsets of the Legendre polynomials or more general ultraspherical polynomials. We obtain the ``best possible'' result that
if $2<p<4$ then a random subset of $N$ Legendre polynomials of size $N^{4/p-1}$ spans an Hilbertian subspace. We also answer a question of K\"onig concerning the structure of the
space of polynomials of degree $n$ in various weighted $L_p$-spaces.
Hide abstract
1253 Integral representation of $p$-class groups in ${\Bbb Z}_p$-extensions and the Jacobian variety
López-Bautista, Pedro Ricardo; Villa-Salvador, Gabriel Daniel
For an arbitrary finite Galois $p$-extension $L/K$ of $\zp$-cyclotomic number fields of $\CM$-type with Galois group $G = \Gal(L/K)$ such that the Iwasawa invariants $\mu_K^-$, $ \
mu_L^-$ are zero, we obtain unconditionally and explicitly the Galois module structure of $\clases$, the minus part of the $p$-subgroup of the class group of $L$. For an arbitrary
finite Galois $p$-extension $L/K$ of algebraic function fields of one variable over an algebraically closed field $k$ of characteristic $p$ as its exact field of constants with
Galois group $G = \Gal(L/K)$ we obtain unconditionally and explicitly the Galois module structure of the $p$-torsion part of the Jacobian variety $J_L(p)$ associated to $L/k$.
Show abstract
For an arbitrary finite Galois $p$-extension $L/K$ of $\zp$-cyclotomic number fields of $\CM$-type with Galois group $G = \Gal(L/K)$ such that the Iwasawa invariants $\mu_K^-$, $ \
mu_L^-$ are zero, we obtain unconditionally and explicitly the Galois module structure of $\clases$, the minus part of the $p$-subgroup of the class group of $L$. For an arbitrary
finite Galois $p$-extension $L/K$ of algebraic function fields of one variable over an algebraically closed field $k$ of characteristic $p$ as its exact field of constants with
Galois group $G = \Gal(L/K)$ we obtain unconditionally and explicitly the Galois module structure of the $p$-torsion part of the Jacobian variety $J_L(p)$ associated to $L/k$.
Hide abstract
1273 Mean convergence of Lagrange interpolation for exponential weights on $[-1,1]$
Lubinsky, D. S.
We obtain necessary and sufficient conditions for mean convergence of Lagrange interpolation at zeros of orthogonal polynomials for weights on $[-1,1]$, such as \[ w(x)=\exp \bigl(-
(1-x^{2})^{-\alpha }\bigr),\quad \alpha >0 \] or \[ w(x)=\exp \bigl(-\exp _{k}(1-x^{2})^{-\alpha }\bigr),\quad k\geq 1, \ \alpha >0, \] where $\exp_{k}=\exp \Bigl(\exp \bigl(\cdots\
exp (\ )\cdots\bigr)\Bigr)$ denotes the $k$-th iterated exponential.
Show abstract
We obtain necessary and sufficient conditions for mean convergence of Lagrange interpolation at zeros of orthogonal polynomials for weights on $[-1,1]$, such as \[ w(x)=\exp \bigl(-
(1-x^{2})^{-\alpha }\bigr),\quad \alpha >0 \] or \[ w(x)=\exp \bigl(-\exp _{k}(1-x^{2})^{-\alpha }\bigr),\quad k\geq 1, \ \alpha >0, \] where $\exp_{k}=\exp \Bigl(\exp \bigl(\cdots\
exp (\ )\cdots\bigr)\Bigr)$ denotes the $k$-th iterated exponential.
Hide abstract
1298 Imprimitively generated Lie-algebraic Hamiltonians and separation of variables
Milson, Robert
Turbiner's conjecture posits that a Lie-algebraic Hamiltonian operator whose domain is a subset of the Euclidean plane admits a separation of variables. A proof of this conjecture is
given in those cases where the generating Lie-algebra acts imprimitively. The general form of the conjecture is false. A counter-example is given based on the trigonometric
Olshanetsky-Perelomov potential corresponding to the $A_2$ root system.
Show abstract
Turbiner's conjecture posits that a Lie-algebraic Hamiltonian operator whose domain is a subset of the Euclidean plane admits a separation of variables. A proof of this conjecture is
given in those cases where the generating Lie-algebra acts imprimitively. The general form of the conjecture is false. A counter-example is given based on the trigonometric
Olshanetsky-Perelomov potential corresponding to the $A_2$ root system.
Hide abstract
1323 L'invariant de Hasse-Witt de la forme de Killing
Morales, Jorge
Nous montrons que l'invariant de Hasse-Witt de la forme de Killing d'une alg{èbre de Lie semi-simple $L$ s'exprime {à l'aide de l'invariant de Tits de la repr{ésentation irr
{éductible de $L$ de poids dominant $\rho=\frac{1}{2}$ (somme des racines positives), et des invariants associ{és au groupe des sym{étries du diagramme de Dynkin de $L$.
Show abstract
Nous montrons que l'invariant de Hasse-Witt de la forme de Killing d'une alg{\`e}bre de Lie semi-simple $L$ s'exprime {\`a} l'aide de l'invariant de Tits de la repr{\'e}sentation irr
{\'e}ductible de $L$ de poids dominant $\rho=\frac{1}{2}$ (somme des racines positives), et des invariants associ{\'e}s au groupe des sym{\'e}tries du diagramme de Dynkin de $L$.
Hide abstract
1337 Author Index - Index des auteurs
1998, for 1998 - pour
Show abstract
|
{"url":"http://cms.math.ca/cjm/v50/","timestamp":"2014-04-19T04:35:25Z","content_type":null,"content_length":"199015","record_id":"<urn:uuid:4e2e33a1-7c02-4c64-8d83-c1dffe387753>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00248-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The DO Loop
My previous post described how to use the "missing response trick" to score a regression model. As I said in that article, there are other ways to score a regression model. This article describes
using the SCORE procedure, a SCORE statement, the relatively new PLM procedure, and the CODE statement.
The following DATA step defines a small set of data. The goal of the analysis is to fit various regression models to Y as a function of X, and then evaluate each regression model on a second data
set, which contains 200 evenly spaced X values.
/* the original data; fit model to these values */
data A;
input x y @@;
1 4 2 9 3 20 4 25 5 1 6 5 7 -4 8 12
/* the scoring data; evaluate model on these values */
%let NumPts = 200;
data ScoreX(keep=x);
min=1; max=8;
do i = 0 to &NumPts-1;
x = min + i*(max-min)/(&NumPts-1); /* evenly spaced values */
output; /* no Y variable; only X */
The SCORE procedure
Some SAS/STAT procedures can output parameter estimates for a model to a SAS data set. The SCORE procedure can read those parameter estimates and use them to evaluate the model on new values of the
explanatory variables. (For a regression model, the SCORE procedure performs matrix multiplication: you supply the scoring data X and the parameter estimates b and the procedure computes the
predicted values p = Xb.)
The canonical example is fitting a linear regression by using PROC REG. You can use the OUTEST= option to write the parameter estimates to a data set. That data set, which is named RegOut in this
example, becomes one of the two input data sets for PROC SCORE, as follows:
proc reg data=A outest=RegOut noprint;
YHat: model y = x; /* name of model is used by PROC SCORE */
proc score data=ScoreX score=RegOut type=parms predict out=Pred;
var x;
It is worth noting that the label for the MODEL statement in PROC REG is used by PROC SCORE to name the predicted variable. In this example, the YHat variable in the Pred data set contains the
predicted values. If you do not specify a label on the MODEL statement, then a default name such as MODEL1 is used. For more information, see the documentation for the SCORE procedure.
The SCORE statement
Nonparametric regression procedures cannot output parameter estimates because...um...because they are nonparametric! Nonparametric regression procedures support a SCORE statement, which enables you
to specify the scoring data set. The following example shows the syntax of the SCORE statement for the TPSPLINE procedure, which fits a thin-plate spline to the data:
proc tpspline data=A;
model y = (x);
score data=ScoreX out=Pred;
Other nonparametric procedures that support the SCORE statement include the ADAPTIVEREG procedure (new in SAS/STAT 12.1), the GAM procedure, and the LOESS procedure.
The STORE statement and the PLM procedure
Although the STORE statement and the PLM procedure were introduced in SAS/STAT 9.22 (way back in 2010), some SAS programmers are still not aware of these features. Briefly, the idea is that sometimes
a scoring data set is not available when a model is fit, so the STORE statement saves all of the information needed to recreate and evaluate the model. The saved information can be read by the PLM
procedure, which includes a SCORE statement, as well as many other capabilities. A good introduction to the PLM procedure is Tobias and Cai (2010), "Introducing PROC PLM and Postfitting Analysis for
Very General Linear Models."
For this example, the GLM procedure is used to fit the data. Because of the shape of the previous thin-plate spline curve, a cubic model is fit. The STORE statement is used to save the model
information in an item store named WORK.ScoreExample. (I've used the WORK libref, but use a permanent libref if you want the item store to persist across SAS sessions.) Many hours or days later, you
can use the PLM procedure to evaluate the model on a new set of data, as shown in the following statements:
proc glm data=A;
model y = x | x | x;
store work.ScoreExample; /* store the model */
proc plm restore=work.ScoreExample;
score data=ScoreX out=Pred; /* evaluate the model on new data */
The STORE statement is supported by many SAS/STAT regression procedures, including the GENMOD, GLIMMIX, GLM, GLMSELECT, LIFEREG, LOGISTIC, MIXED, ORTHOREG, PHREG, PROBIT, SURVEYLOGISTIC, SURVEYPHREG,
and SURVEYREG procedures. It also applies to the RELIABILITY procedure in SAS/QC software.
The CODE statement
In SAS/STAT 12.1 the CODE statement was added to several SAS/STAT regression procedures. It is also part of the PLM procedure. The CODE statement offers yet another option for scoring data. The CODE
statement writes DATA step statements into a text file. You can then use the %INCLUDE statement to insert those statements into a DATA step. In the following example, DATA step statements are written
to the file glmScore.sas. You can include that file into a DATA step in order to evaluate the model on the ScoreX data:
proc glm data=A noprint;
model y = x | x | x;
code file='glmScore.sas';
data Pred;
set ScoreX;
%include 'glmScore.sas';
For this example, the predicted values are in a variable called P_y in the Pred data set. The CODE statement is supported by many predictive modeling procedures, such as the GENMOD, GLIMMIX, GLM,
GLMSELECT, LOGISTIC, MIXED, PLM, and REG procedures in SAS/STAT software. In addition, the CODE statement is supported by the HPLOGISTIC and HPREG procedures in SAS High-Performance Analytics
In summary, there are many ways to score SAS regression models. For PROC REG and linear models with an explicit design matrix, use the SCORE procedure. For nonparametric models, use the SCORE
statement. For scoring data sets long after a model is fit, use the STORE statement and the PLM procedure. For scoring inside the DATA step, use the CODE statement. For regression procedures that do
not support these options (such as PROC TRANSREG) use the missing value trick from my last post.
Did I leave anything out? What is your favorite technique to score a regression model? Leave a comment.
The missing value trick for scoring a regression model
A fundamental operation in statistical data analysis is to fit a statistical regression model on one set of data and then evaluate the model on another set of data. The act of evaluating the model on
the second set of data is called scoring.
One of first "tricks" that I learned when I started working at SAS was how to score regression data in procedures that do not support the SCORE statement. I think almost every SAS statistical
programmer learns this trick early in their career, usually from a more experienced SAS programmer. The trick is used in examples in the SAS/STAT User's Guide and on discussion forums, but it is not
often explicitly discussed, probably because it is so well known. However,I had to search the internet for a while before I found a SUGI paper that describes the trick. In an effort to assist new SAS
programmers, here is an explanation of how to score regression models by using the "missing value trick," which is also called the "missing response trick" or the "missing dependent variable trick."
Suppose that you want to fit a regression model to some data. You also have a second set of explanatory values for which you want to score the regression model. For example, the following DATA step
create training data for the regression model. A subsequent DATA step creates evenly spaced values in the X variable. The goal is to evaluate the regression model on the second data set.
data A; /* the original data; fit model to these values */
input x y @@;
1 4 2 9 3 20 4 25 5 1 6 5 7 -4 8 12
%let NumPts = 200;
data ScoreX(keep=x); /* the scoring data; evaluate model on these values */
min=1; max=8;
do i = 0 to &NumPts-1;
x = min + i*(max-min)/(&NumPts-1); /* evenly spaced values */
output; /* no Y variable; only X */
The trick relies on two features of SAS software:
• The first data set contains variables X and Y. The second contains only X. If the two data sets are concatenated, a missing value is assigned to Y for each value of X in the second data set.
• When a regression procedure encounters an observation that has a missing value for the response variable, it does not use that observation to fit the model. However, provided that all of the
explanatory variables are nonmissing, the procedure does compute the predicted value for that observation.
Consequently, the missing value trick is to concatenate the original data with the scoring data. If you call a regression procedure on the concatenated data, the original data are used to fit the
data but predicted values are generated for the scoring data, as follows:
/* The missing response trick.
1. Concatenate the original data with the score data */
data B;
set A ScoreX; /* y=. for all obs in ScoreX */
/* 2. Run a regression. The model is fit to the original data. */
proc reg data=B plots=(NONE);
model y = x;
output out=Pred p=P; /* predicted values for the scoring data */
proc print data=Pred(obs=12);
The table shows that the scoring data, which begins in row 9, contains predicted values as desired.
The advantage of this technique is that it is easy to implement. A disadvantage is this technique makes two extra copies of the scoring data, which might require a lot of disk space if the scoring
data set is huge. A second disadvantage is that the trick increases the number of observations that the regression procedure must read. In the example, there are only eight observations in the
original data, but the REG procedure has to read and write 208 observations.
There are other ways to evaluate a regression model, including using a SCORE statement, the SCORE procedure, and the relatively new PLM procedure. I will discuss these alternative methods in a future
blog post.
I'm interested in hearing about when you first learned the missing response trick? Who did you learn it from? Do you still use it, or do you now use a more modern technique? Leave a comment.
Post a Comment
Fundamental theorems of mathematics and statistics
Although I currently work as a statistician, my original training was in mathematics. In many mathematical fields there is a result that is so profound that it earns the name "The Fundamental Theorem
of [Topic Area]." A fundamental theorem is a deep (often surprising) result that connects two or more seemingly unrelated mathematical ideas.
It is interesting that statistical textbooks do not usually highlight a "fundamental theorem of statistics." In this article I briefly and informally discuss some of my favorite fundamental theorems
in mathematics and cast my vote for the fundamental theorem of statistics.
The fundamental theorem of arithmetic
The fundamental theorem of arithmetic connects the natural numbers with primes. The theorem states that every integer greater than one can be represented uniquely as a product of primes.
This theorem connects something ordinary and common (the natural numbers) with something rare and unusual (primes). It is trivial to enumerate the natural numbers, but each natural number is "built"
from prime numbers, which defy enumeration. The natural numbers are regularly spaced, but the gap between consecutive prime numbers is extremely variable. If p is a prime number, sometimes p+2 is
also prime (the so-called twin primes), but sometimes there is a huge gap before the next prime.
The fundamental theorem of algebra
The fundamental theorem of algebra connects polynomials with their roots (or zeros). Along the way it informs us that the real numbers are not sufficient for solving algebraic equation, a fact known
to every child who has pondered the solution to the equation x^2 = –1. The fundamental theorem of algebra tells us that we need complex numbers to be able to find all roots. The theorem states that
every nonconstant polynomial of degree n has exactly n roots in the complex number system. Like the fundamental theorem of arithmetic, this is an "existence" theorem: it tells you the roots are
there, but doesn't help you to find them.
The fundamental theorem of calculus
The fundamental theorem of calculus (FTC) connects derivatives and integrals. Derivatives tell us about the rate at which something changes; integrals tell us how to accumulate some quantity. That
these should be related is not obvious, but the FTC says that the rate of change for a certain integral is given by the function whose values are being accumulated. Specifically, if f is any
continuous function on the interval [a, b], then for every value of x in [a,b] you can compute the following function:
The FTC states that F'(x) = f(x). That is, derivatives and integrals are inverse operations.
Unlike the previous theorems, the fundamental theorem of calculus provides a computational tool. It shows that you can solve integrals by constructing "antiderivatives."
The fundamental theorem of linear algebra
Not everyone knows about the fundamental theorem of linear algebra, but there is an excellent 1993 article by Gil Strang that describes its importance. For an m x n matrix A, the theorem relates the
dimensions of the row space of A (R(A)) and the nullspace of A (N(A)). The result is that dim(R(A)) + dim(N(A)) = n.
The theorem also describes four important subspaces and describes the geometry of A and A^t when thought of as linear transformations. The theorem shows that some subspaces are orthogonal to others.
(Strang actually combines four theorems into his statement of the Fundamental Theorem, including a theorem that motivates the statistical practice of ordinary least squares.)
The fundamental theorem of statistics
Although most statistical textbooks do not single out a result as THE fundamental theorem of statistics, I can think of two results that could make a claim to the title. These results are based in
probability theory, so perhaps they are more aptly named fundamental theorems of probability.
• The Law of Large Numbers (LLN) provides the mathematical basis for understanding random events. The LLN says that if you repeat a trial many times, then the average of the observed values tend to
be close to the expected value. (In general, the more trials you run, the better the estimates.) For example, you toss a fair die many times and compute the average of the numbers that appear.
The average should converge to 3.5, which is the expected value of the roll because (1+2+3+4+5+6)/6 = 3.5. The same theorem ensures that about one-sixth of the faces are 1s, one-sixth are 2s, and
so forth.
• The Central Limit theorem (CLT) states that the mean of a sample of size n is approximately normally distributed when n is large. Perhaps more importantly, the CLT provides the mean and the
standard deviation of the sampling distribution in terms of the sample size, the population mean μ, and the population variance σ^2. Specifically, the sampling distribution of the mean is
approximately normally distributed with mean μ and standard deviation σ/sqrt(n).
Of these, the Central Limit theorem gets my vote for being the Fundamental Theorem of Statistics. The LLN is important, but hardly surprising. It is the basis for frequentist statistics and assures
us that large random samples tend to reflect the population. In contrast, the CLT is surprising because the sampling distribution of the mean is approximately normal regardless of the distribution of
the original data! As a bonus, the CLT can be used computationally. It forms the basis for many statistical tests by estimating the accuracy of a statistical estimate. Lastly, the CLT connects
important concepts in statistics: means, variances, sample size, and accuracy of point estimates.
Do you have a favorite "Fundamental Theorem"? Do you marvel at an applied theorem such as the fundamental theorem of linear programming or chuckle at a pseudo-theorems such as the fundamental theorem
of software engineering? Share your thoughts in the comments.
Post a Comment
Define functions with default parameter values in SAS/IML
One of my favorite new features of SAS/IML 12.1 enables you to define functions that contain default values for parameters. This is extremely useful when you want to write a function that has
optional arguments.
Example: Centering a data vector
It is simple to specify a SAS/IML module with a default parameter value. Suppose that you want to write a module that centers a vector. By default, the module should center the vector so that new
mean of the data is 0. However, the module should also support an arbitrary value to center the vector. With default arguments in SAS/IML 12.1, you can write the following module:
proc iml;
start Center(x, a=0);
return ( x - mean(x) + a );
The Center module has two arguments, but the equal sign in the second argument indicates that the second argument is optional and that it receives a default value of 0. (SAS programmers who are
familiar with the macro language will find this syntax familiar.) Using default values enables you to call the function with fewer parameters in the usual (default) case, but also enables you to call
the function with more generality.
In this example, if you call the function by using one argument, the local variable a is set to zero. You can also specify a value for a when you call the function, as follows:
x = {-1,0,1,4}; /* data */
c0 = Center(x); /* center x at 0 */
c10 = Center(x, 10); /* center x at 10 */
print c0 c10;
The c0 vector is centered at 0, whereas the c10 vector is centered at 10. The Center module has a simpler calling syntax for the usual case of centering a vector at 0.
Example: Testing whether two vectors are equal to within a certain precision
I have previously blogged about the dangers of testing floating-point values for equality. In a numerical program, you should avoid testing whether a vector x is (exactly) equal to another vector y.
Even though two vectors should be equal (if the computations were performed in exact arithmetic), finite-precision computations can lead to small differences, which some people call rounding errors.
For example, in exact arithmetic the square of the square root of x is exactly equal to x for any nonnegative value. However, the following statements show that this equality might not hold in finite
precision arithmetic:
x = {4 5 2000 12345 654321}; /* some numbers */
y = sqrt(x)#sqrt(x); /* in exact arithmetic, y=x */
diff = x-y; /* the diff vector is not zero */
print diff;
As you can see, some elements of x and y are not equal, which leads to a nonzero difference vector. Consequently, it is a bad idea to use the equal operator (=) to test whether the vectors are equal:
eq = all(x=y); /* Bad idea: Don't test for exact comparison */
print eq;
To understand more about floating-point arithmetic, see the paper "What Every Computer Scientist Should Know About Floating-Point Arithmetic." A better way to compare numerical values in SAS/IML is
to test whether two numbers are within some specified tolerance of each other, like you can do with PROC COMPARE. The following module computes whether the absolute value of the difference between
corresponding elements is less than some criterion. By default, the criterion is set to 10^–6:
/* return 1 if x[i] is within eps of y[i] for all i */
start IsEqual(x, y, eps=1e-6);
return( all( abs(x-y)<=eps) );
eq6 = IsEqual(x, y); /* Default: compare within 1e-6 */
eq12 = IsEqual(x, y, 1e-12); /* compare within 1e-12 */
print eq6 eq12;
The vectors are judged to be equal when the default criterion is used. If you tighten the criterion, the vectors are judged to be unequal.
Being able to specify default parameter values is very useful to a programmer. Do you have an example for which this new feature of SAS/IML 12.1 will be useful? Describe your application in the
Post a Comment
A simple way to find the root of a function of one variable
Finding the root (or zero) of a function is an important computational task because it enables you to solve nonlinear equations. I have previously blogged about using Newton's method to find a root
for a function of several variables. I have also blogged about how to use the bisection method to find the zeros of a univariate function.
As of SAS/IML 12.1, there is an easy way to find the roots of function of one variable. The FROOT function solves for a root on a finite interval by using Brent’s method, which uses a combination of
bisection, linear interpolation, and quadratic interpolation to converge to a root. Unlike Newton's method, Brent's method does not require that you specify a derivative for the function. All you
need to provide is a module that evaluates the function, f, and an interval [a, b] such that f(a) and f(b) have different signs.
If there is a root in the interval, Brent's method is guaranteed to find it.
As an example, the following SAS/IML module defines a function that I investigated in a previous blog post:
proc iml;
start Func(x);
return( exp(-x##2) - x##3 + 5#x +1 );
The image at the beginning of this article shows the graph of the function on the interval [–5, 5]. The function has three roots. You can use the graph to estimate intervals on which the function
contains a root. The FROOT function enables you to find multiple zeros with a single call, as follows:
intervals = {-4 -1.5, /* 1st interval [-4, -1.5] */
-1.5 1 , /* 2nd interval [-1.5 1] */
1 4 }; /* 3rd interval [1, 4] */
z = froot("Func", intervals);
print z;
The vector z contains three elements. The first element is the root in the first specified interval, the second element is the root in the second interval, and so forth. If you specify an interval on
which the function does not have a root, then the FROOT function returns a missing value.
That's all there is to it. So next time you need to solve a nonlinear equation of one variable, remember that the FROOT function in the SAS/IML language makes the task simple.
Post a Comment
Sample without replacement in SAS
Last week I showed three ways to sample with replacement in SAS. You can use the SAMPLE function in SAS/IML 12.1 to sample from a finite set or you can use the DATA step or PROC SURVEYSELECT to
extract a random sample from a SAS data set. Sampling without replacement is similar. This article describes how to use the SAS/IML SAMPLE function or the SURVEYSELECT procedure.
Simulate dealing cards
When I wrote about how to generate permutations in SAS, I used the example of dealing cards from a standard 52-card deck. The following SAS/IML statements create a 52-element vector with values 2H,
3H, ..., KS, AS, where 'H' indicates the heart suit, 'D' indicates diamonds, 'C' indicates clubs, and 'S' indicates spades:
proc iml;
/* create a deck of 52 playing cards */
suits52 = rowvec( repeat({H, D, C, S},1,13) );
vals = char(2:10,2) || {J Q K A};
vals52 = repeat( right(vals), 1, 4 );
Cards = vals52 + suits52;
/* choose 20 cards without replacement from deck */
call randseed(293053001);
deal = sample(Cards, 20, "WOR"); /* sample 20 cards without replacement */
The third argument to the SAMPLE function is the value "WOR", which stands for "without replacement." With this option, the SAMPLE function returns 20 cards from the deck such that no card appears
more than once in the sample. If there are four card players who are playing poker and each gets five cards, you can use the SHAPE function to reshape the 20 cards into a matrix such that each column
indicates a player's poker hand:
PokerHand = shape(deal, 0, 4); /* reshape vector into 4 column matrix */
print PokerHand[c=("Player1":"Player4")];
Let's see what poker hands these players were dealt. The first player has a pair of 2s. The second player has a pair of queens. The third player has three kings, and the fourth player has a flush!
That was a heck of a deal! (I'll leave it to the sharp-eyed reader to figure out how I "cheated" in order to simulate such an improbable "random" sample. Extra credit if you link to a blog post of
mine in which I explain the subterfuge.)
The SAMPLE function provides a second way to sample without replacement. If the third argument is "NoReplace", then a faster algorithm is used to extract a sample. However, the sample is in the same
order as the original elements, which might not be acceptable. For the poker example, the "WOR" option enables you to simulate a deal. If you use the "NoReplace" option, then you should first use the
RANPERM function to shuffle the deck. Of course, if you only care about the sample as a set rather than as a sequence, then using the faster algorithm makes sense.
One more awesome feature of the SAMPLE function: it enables you to sample with unequal probabilities by adding a fourth argument to the function call.
Sampling without replacement by using the SURVEYSELECT procedure
As mentioned above, some algorithms generate a sample whose elements are in the same order as the original data. This is the case with the SURVEYSELECT procedure when you use the METHOD=SRS option.
Suppose that you write the 52 cards to a SAS data set. You can use the SURVEYSELECT procedure to extract 20 cards without replacement, as follows:
create Deck var {"Cards"}; append; close Deck; /* create data set */
proc surveyselect data=Deck out=Poker seed=1
method=srs /* sample w/o replacement */
sampsize=20; /* number of observations in sample */
proc print data=Poker(obs=8); run;
The output shows the first eight observations in the sample. You can see that the hearts appear first, followed by the diamonds, followed by the clubs, and that within each suit the values of the
cards are in their original order. If you want the data in a random order, imitate the DATA step code in the SAS Knowledge Base article "Simple Random Sample without Replacement."
Post a Comment
Sample with replacement in SAS
Randomly choosing a subset of elements is a fundamental operation in statistics and probability. Simple random sampling with replacement is used in bootstrap methods (where the technique is called
resampling), permutation tests and simulation.
Last week I showed how to use the SAMPLE function in SAS/IML software to sample with replacement from a finite set of data. Because not everyone is a SAS/IML programmer, I want to point out two other
ways to sample (with replacement) observations in a SAS data set.
• The DATA Step: You can use the POINT= option in the SET statement to randomly select observations from a SAS data set. For many data sets you can use the SASFILE statement to read the entire
sample data into memory, which improves the performance of random access.
• The SURVEYSELECT Procedure: You can use the SURVEYSELECT procedure to randomly select observations according to several sampling schemes. Again, you can use the SASFILE statement to improve
The material in this article is taken from Chapter 15, "Resampling and Bootstrap Methods," of Simulating Data with SAS.
The DATA step
The following DATA step randomly selects five observations from the Sashelp.Cars data set:
sasfile Sashelp.Cars load; /* 1. Load data set into memory */
data Sample(drop=i);
call streaminit(1);
do i = 1 to 5;
p = ceil(NObs * rand("Uniform")); /* random integer 1-NObs */
set Sashelp.Cars nobs=NObs point=p; /* 2. POINT= option; random access */
STOP; /* 3. Use the STOP stmt */
sasfile Sashelp.Cars close;
A few statements in this DATA step require additional explanation. They are indicated by numbers inside of comments:
1. Provided that the data set is not too large, use the SASFILE statement to load the data into memory, which speeds up random access.
2. The NOBS= option stores the number of observations in the Sashelp.Cars data into the NObs variable before the DATA step runs. Consequently, the value of NObs is available throughout the DATA
step, even on statements that execute prior to the SET statement. The POINT= option is used to read the (randomly chosen) observation.
3. The STOP statement must be used to end the DATA step processing when you use the POINT= option, because SAS never encounteres the end-of-file indicator during random access.
For the example, the selected observation numbers are 379, 417, 218, 380, and 296. You can print the random sample to see which observations were selected:
proc print data=Sample noobs;
var Make Model MPG_City Length Weight;
The SURVEYSELECT procedure
The main SAS procedure for (re)sampling is called the SURVEYSELECT procedure. The name is a bit unfortunate because statisticians and programmers who are new to SAS might browse the documentation and
completely miss the relevance of this procedure. (To me, it is "PROC RESAMPLE.") The SURVEYSELECT procedure has many methods for sampling, but the method for sampling without replacement is known as
unrestricted random sampling (URS). The following call creates an output data set that contains five observations that are sampled (with replacement) from the Sashelp.Cars data:
proc surveyselect data=Sashelp.Cars out=Sample2 NOPRINT
seed=1 /* 1 */
method=urs sampsize=5 /* 2 */
outhits; /* 3 */
The call to PROC SURVEYSELECT has several options, which are indicated by numbers in the comments:
1. The SEED= option specifies the seed value for random number generation. If you specify a zero seed, then omit the NOPRINT option so that the value of the chosen seed appears in the procedure
2. The METHOD=URS option specifies unrestricted random sampling, which means sampling with replacement and with equal probability. The SAMPSIZE= option specifies the number of observations to
select, or you can use the SAMPRATE= option to specify a proportion of observations.
3. The OUTHITS options specifies that the output data set contains five observations, even if a record is selected multiple times. If you omit the OUTHITS option, then the output data set might have
fewer observations, and the NumberHits variable contains the number of times that each record was selected.
If you are using SAS/STAT 12.1 or later, the output data set contains exactly the same observations as for the DATA step example, because PROC SURVEYSELECT uses the same random number generator (RNG)
as the RAND function. Prior to SAS/STAT 12.1, PROC SURVEYSELECT used the older RNG that is used by the RANUNI function.
In summary, if you need to sample observations from a SAS data set, you can implement a simple sampling scheme in the DATA step or you can use PROC SURVEYSELECT. I recommend PROC SURVEYSELECT because
the procedure makes it clearer what sampling method is being used and because the procedure supports other, more complex, sampling schemes that are also useful.
Post a Comment
Ulam spirals: Visualizing properties of prime numbers with SAS
Prime numbers are strange beasts. They exhibit properties of both randomness and regularity. Recently I watched an excellent nine-minute video on the Numberphile video blog that shows that if you
write the natural numbers in a spiral pattern (called the Ulam spiral), then there are certain lines in the pattern that are very likely to contain prime numbers.
The image to the left shows the first 22,500 natural numbers arranged in the Ulam spiral pattern within a 150 x 150 grid. Cells that contain prime numbers are colored black. Cells that contain
composite numbers are colored white. You can see certain diagonal lines with slope ±1 that contain a large number of prime numbers. There are conjectures in number theory that explain why certain
lines along the Ulam spiral have a greater density of prime numbers than other lines. (The diagonal lines in the spiral correspond to quadratic equations.)
I don't know enough about prime numbers to blog about the mathematical properties of the Ulam spiral, but as soon as I saw the Ulam spiral explained, I knew that I wanted to generate it in SAS. The
Ulam spiral packs the natural numbers into a square matrix in a certain order. This post describes how to construct the Ulam spiral in the SAS/IML matrix language.
The Ulam spiral construction
Although you can stop writing numbers at any time, for the purpose of this post let's assume you want to fill an N x N array with numbers. Notice that when N is even, the last number in the array (N^
2=4, 16, 36,...), appears in the upper left corner of the array, whereas for odd N, the last number (9, 25, 49,...) appears in the lower right corner. I found this asymmetry hard to deal with, so I
created an algorithm that fills the N x N matrix so that the N^2 term is always in the upper left. For odd values of N, the algorithm rotates the matrix after inserting the elements. I've previously
shown that it is easy to write SAS/IML statements that rotate elements in a matrix.
My algorithm constructs the spiral iteratively. Notice that if you have constructed the spiral correctly for an (N–2) x (N–2) array, you can construct the N x N array by adding two vertical columns
(first and last) and two horizontal rows (first and last). I call the outer rows and columns a "frame" because they remind me of a picture frame. Given a value for N, you can figure out the starting
and ending values of each row and column of the frame. You can use the SAS/IML index creation operator to create each row and column, as shown in the following program:
proc iml;
start SpiralFrame(n);
if n=1 then return(1);
if n=2 then return({4 3, 1 2});
if n=3 then return({9 8 7, 2 1 6, 3 4 5});
X = j(n,n,.);
/* top of frame. 's' means 'start'. 'e' means 'end' */
s = n##2; e = s - n + 2; X[1,1:n-1] = s:e;
/* right side of frame */
s = e - 1; e = s - n + 2; X[1:n-1,n] = (s:e)`;
/* bottom of frame */
s = e - 1; e = s - n + 2; X[n,n:2] = s:e;
/* left side of frame */
s = e - 1; e = s - n + 2; X[n:2,1] = (s:e)`;
return( X );
/* test the frame construction */
M2 = SpiralFrame(2);
M4 = SpiralFrame(4);
M6 = SpiralFrame(6);
print M2, M4, M6;
You can see that the M4 matrix fits inside the M6 frame. Similarly, the M2 matrix fits inside the M4 frame.
After writing the code that generates the frames, the rest of the construction is easy. Start by creating the frame of size N. Decrease N by 2 and iteratively create a frame of size N, taking care to
insert the (N–2) x (N–2) array into the interior of the existing N x N array. After the array is filled, rotate the array by 180 degrees if N is odd. The following SAS/IML statements implement this
start Rot180(m);
return( m[nrow(m):1,ncol(m):1] );
/* Create N x N integer matrix with elements arranged in an Ulam spiral */
start UlamSpiral(n);
X = SpiralFrame(n); /* create outermost frame */
k = 2;
do i = n-2 to 2 by -2; /* iteratively create smaller frames */
r = k:n-k+1; /* rows (and cols) to insert smaller frame */
X[r, r] = SpiralFrame(i); /* insert smaller frame */
k = k+1;
if mod(n,2)=1 then return(Rot180(X));
U10 = UlamSpiral(10); /* test program by creating 10 x 10 matrix */
print U10[f=3.];
You might not have much need to generate Ulam spirals in your work, but this exercise demonstrates several important principles of SAS/IML programming:
• It is often convenient to calculate the first and last element of an array and to use the color index operator (:) to generate the vector.
• Notice that I generated the largest frame first and inserted the smaller frames inside of it. That is more efficient than generating the smaller frames and then using concatenation to grow the
• In the same way, sometimes an algorithm is simpler or more efficient to implement if your DO loops run in reverse order.
The heat map at the beginning of this article is created by using the new HEATMAPDISC subroutine in SAS/IML 13.1. I will blog about heat maps in a future post. If you have SAS/IML 13.1 and want to
play with Ulam spirals yourself, you can download the SAS code used in this blog post.
Post a Comment
Sampling with replacement: Now easier than ever in the SAS/IML language
With each release of SAS/IML software, the language provides simple ways to carry out tasks that previously required more effort. In 2010 I blogged about a SAS/IML module that appeared in my book
Statistical Programming with SAS/IML Software, which was written by using the SAS/IML 9.2. The blog post showed how to sample with replacement (with equal probability) from a finite set of
As of SAS/IML 12.1, there is a built-in function that returns random samples from a finite set. The SAMPLE function makes it easy to do the following:
• Sample with replacement with equal or varying probabilities
• Sample without replacement
• Generate multiple samples with a single call
Sampling with replacement is a common task for bootstrap (resampling) methods, so let's start by discussing sampling with replacement.
Sample with replacement with equal probability
You can use the SAMPLE function in the SAS/IML language to sample with replacement from a finite set. In my 2010 article, I used the example of choosing five elements at random from the set {1, 2,
..., 8}. The following call shows how to use the built-in SAMPLE function for this task:
proc iml;
call randseed(1234);
s = sample(1:8, 5); /* randomly choose 5 elements from the set 1:8 */
print s;
The default sampling scheme is to sample with replacement, which is why the element 3 appears twice in the random sample. Notice that the random number seed for the SAMPLE function is set by using
the RANDSEED subroutine.
If you want to generate multiple samples, each of size five, you can specify a two-element vector for the second argument. The first element specifies the sample size. The second element specifies
the number of samples, which is the number of rows in the output matrix. For example, the following statement generates six random samples. Each row is one random sample and contains five elements.
s6 = sample(1:8, {5 6}); /* sample size=5; number of samples=6 */
print s6;
Sample with replacement with varying probabilities
The SAMPLE function also supports sampling with unequal probabilities. Since SAS is known for having free M&Ms® in the breakrooms, here's an M&M-inspired example. There is a large jar of plain M&Ms
in my breakroom. The M&Ms are different colors: 30% are brown, 20% are yellow, 20% are red, 10% are green, 10% are orange, and 10% are blue. I'll use the SAMPLE function to simulate drawing 20 M&Ms
from the jar. Although in real life I would never select an M&M and then replace it back into the jar (Yuck! Unsanitary!), the jar is so large that the probabilities are approximately constant during
the sampling, so I can use the sampling with replacement method.
colors = {"Brown" "Yellow" "Red" "Green" "Orange" "Blue"};
prob = {0.3 0.2 0.2 0.1 0.1 0.1};
snack = sample(colors, 20, "Replace", prob); /* a 1x20 vector of colors */
call tabulate(category, freq, snack); /* count how many of each color */
print freq[colname=category];
For this sample of 20 candies, more than half of the sample is brown; I did not draw any greens or oranges. The output of the SAMPLE function is a 1 x 20 vector of colors. If I only want the total
number of each color—and not the sample itself—I could use the RANDMULTINOMIAL function to simulate the counts directly, rather than use the SAMPLE function and the TABULATE subroutine.
Sampling observations in a data matrix
If you have data in a SAS/IML matrix, you can sample the observations by sampling from the integers 1, 2, ..., N, where N is the number of rows of the matrix. For example, the following statements
read in 428 observations from the Sashelp.Cars data set. The SAMPLE function is used to draw a random sample that contains five observations. Each observation contains information about a random
vehicle in the data set.
proc iml;
call randseed(1);
use Sashelp.Cars;
varNames = {"MPG_City" "Length" "Weight"};
read all var varNames into x[rowname=Model];
close Sashelp.Cars;
obsIdx = sample(1:nrow(x), 5); /* sample size=5, rows chosen from 1:NumRows */
s5 = x[obsidx, ]; /* extract subset of rows */
print s5[rowname=(Model[obsIdx]) colname=varNames];
In summary, this article has shown how to use the SAMPLE function in SAS/IML 12.1 to sample with replacement from a finite set. In future posts I will show how to use other SAS tools to resample from
a data set and how to sample without replacement.
Post a Comment
The best articles of 2013: Twelve posts from The DO Loop that merit a second look
I began 2014 by compiling a list of 13 popular articles from my blog in 2013. Although this "People's Choice" list contains many articles that I am proud of, it did not include all of my favorites,
so I decided to compile an "Editor's Choice" list. The blog posts on today's list were not widely popular, but they deserve a second look because they describe SAS computations that are elegant,
surprising, or just plain useful.
I often write about four broad topics: the SAS/IML language, statistical programming, simulating data, and data analysis and visualization. My previous article included five articles on statistical
graphics and data analysis, so here are a few of my favorite articles from the other categories.
The SAS/IML language and matrix programming
Here are a few articles that can help you get more out of the SAS/IML product:
Serious SAS/IML programmers should read my book Statistical Programming with SAS/IML Software.
Simulating data with SAS
Here are a few of my favorite articles from 2013 about efficient simulation of data:
Readers who want to learn more about simulating data might enjoy my book Simulating Data with SAS, which contains hundreds of examples and exercises.
Statistical programming and computing
Much of my formal training is in numerical analysis and matrix computations. Here are a few interesting computational articles that I wrote. Be warned: some of these articles have a high "geek
There you have it, my choice of 12 articles that I think are worth a second look. What is your favorite post from The DO Loop in 2013? Leave a comment.
Post a Comment
|
{"url":"http://blogs.sas.com/content/iml/page/3/","timestamp":"2014-04-19T17:01:45Z","content_type":null,"content_length":"132283","record_id":"<urn:uuid:61dca19e-aa89-4133-926d-46320ef017d6>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00546-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Rotating to position... [Archive] - OpenGL Discussion and Help Forums
02-04-2004, 03:11 AM
I have a triangle with normal vector (nx,ny,nz),
i want to rotate this triangle such that
the normal vector be aligned with (0,1,0).
i.e., after the rotation, all the vertices
of the triangle will have the same
y coordinate.
how can i do this ?
thanks in advance,
|
{"url":"http://www.opengl.org/discussion_boards/archive/index.php/t-159499.html","timestamp":"2014-04-16T13:46:48Z","content_type":null,"content_length":"4580","record_id":"<urn:uuid:6d662a8d-8824-42dd-a91d-d195607fa1c0>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00508-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Vector problem involving orthogonal
March 5th 2011, 05:41 PM
Vector problem involving orthogonal
Hi, I'm struggling with this problem:
For $\vec{u} =$[-4, 1, 10]^T and $\vec{v} =$ [−12, −6, 8]^T find the vectors $\vec{u1}$ and $\vec{u2}$ such that:
(i) $\vec{u1}$ is parallel to $\vec{v}$
(ii) $\vec{u2}$ is orthogonal to $\vec{v}$
(iii) $\vec{u} = \vec{u1} + \vec{u2}$
I figured I should firstly try and find u2 by (ii), and then after I found that I would be able to use (iii) to get u1. This approach didn't work out too well for me, heh. Basically I tried to
set it up with dot product, and solve for u2:
$\vec{v} * \vec{u2} = 0$
Didn't work out, just ended up with something like: -12a - 6b + 8c = 0
Where a, b, c are the numbers in u2.
From there I couldn't see what more I could do to find a, b, c. Basically, I don't really know how to approach this problem :(.
Anyone mind helping out a math newbie? Thanks in advance!
March 5th 2011, 07:13 PM
Hi, I'm struggling with this problem:
For $\vec{u} =$[-4, 1, 10]^T and $\vec{v} =$ [−12, −6, 8]^T find the vectors $\vec{u1}$ and $\vec{u2}$ such that:
(i) $\vec{u1}$ is parallel to $\vec{v}$
(ii) $\vec{u2}$ is orthogonal to $\vec{v}$
(iii) $\vec{u} = \vec{u1} + \vec{u2}$
I figured I should firstly try and find u2 by (ii), and then after I found that I would be able to use (iii) to get u1. This approach didn't work out too well for me, heh. Basically I tried to
set it up with dot product, and solve for u2:
$\vec{v} * \vec{u2} = 0$
Didn't work out, just ended up with something like: -12a - 6b + 8c = 0
Where a, b, c are the numbers in u2.
From there I couldn't see what more I could do to find a, b, c. Basically, I don't really know how to approach this problem :http://mathhelpforum.com/advanced-algebra/
173580-vector-problem-involving-orthogonal.html#post626044\" rel=\"nofollow\">
Hi, I'm struggling with this problem:
For $\vec{u} =$[-4, 1, 10]^T and $\vec{v} =$ [−12, −6, 8]^T find the vectors $\vec{u1}$ and $\vec{u2}$ such that:
(i) $\vec{u1}$ is parallel to $\vec{v}$
(ii) $\vec{u2}$ is orthogonal to $\vec{v}$
(iii) $\vec{u} = \vec{u1} + \vec{u2}$
I figured I should firstly try and find u2 by (ii), and then after I found that I would be able to use (iii) to get u1. This approach didn't work out too well for me, heh. Basically I tried to
set it up with dot product, and solve for u2:
$\vec{v} * \vec{u2} = 0$
Didn't work out, just ended up with something like: -12a - 6b + 8c = 0
Where a, b, c are the numbers in u2.
From there I couldn't see what more I could do to find a, b, c. Basically, I don't really know how to approach this problem :(.
Anyone mind helping out a math newbie? Thanks in advance!
That's not a bad start! Yes, to satisfy (ii) you want -12a- 6b+ 8c= 0.
And to satisfy (iii) you want u1= <-4- a, 1- b, 10- c>.
And, then, to satisfy (i) you want u1 to be a multiple of v: u1= <-4-a, 1- b, 10- c>= d<12, 6, 8>. That is you have four equations, -12a- 6b+ 8c= 0, -4-a= 12d, 1- b= 6d, and 10- c= 8d, for the
four numbers, a, b, c, and d.
|
{"url":"http://mathhelpforum.com/advanced-algebra/173580-vector-problem-involving-orthogonal-print.html","timestamp":"2014-04-21T04:44:28Z","content_type":null,"content_length":"12336","record_id":"<urn:uuid:ab2673ae-8fe9-4fc5-9cda-f14789228939>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00125-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Bohr's radius
yeah...base on my result, I would say that Bohr was right at least at the radius of the probability density in a Hydrogen atom, but for higher state of n, using the same approach, the Bohr radius
failed miserably: adding the repulsion energy of 2 electrons to the Bohr's energy level, I got positive answer (when it supposed to be negative...). So I say electron in higher n does not travel in
circular orbit, which mean the Bohr's assumption was wrong.
Does the Bohr radius have the same value with the maximum of the probability density in an excited atom, at least for the s orbital (the s because the wavefunction look kinda circular)? My guest is
no, but I'm not that sure...
|
{"url":"http://www.physicsforums.com/showthread.php?p=2199389","timestamp":"2014-04-17T04:42:26Z","content_type":null,"content_length":"36477","record_id":"<urn:uuid:1cd11e7a-bb66-467f-9648-adb70ad33a0c>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00357-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Colorado Interstate Gas Company
First Revised Volume No. 1
Contents / Previous / Next / Main Tariff Index
Effective Date: 06/01/2010, Docket: RP10-689-000, Status: Effective
Second Revised Sheet No. 228A.01 Second Revised Sheet No. 228A.01
Superseding: First Revised Sheet No. 228A.01
1.2 Continued
(f) Shippers must have adequate Transportation capacity to
deliver gas to storage for injection using either the
Standard or High ADIQ.
(g) The Standard and High Available Daily Injection Quantity
Curves and Standard and High Available Daily Injection
Quantity Tables in this Section 1.2 are provided for
illustrative purposes only.
%MDIQ = 100 - (0.31 * %MAC)
%MDIQ = 124.8 - (0.36 * %MAC)
NOTE:The STANDARD formula applies only from 100% of a
Shipper's MAC to 0% of a Shipper's MAC.
NOTE:In the context of these formulas, the %MAC and %MDIQ
values are taken as whole numbers, and not as decimal
only numbers (i.e. use 40.0 not .40 or use 36.1234 not
NOTE:To calculate ADIQ from these formulas: ADIQ =
{"%MDIQ from formula"/100)* "The 100% MDIQ Value}
Rounded to the nearest whole Dekatherm (an integer).
|
{"url":"http://www.ferc.gov/industries/gas/gen-info/fastr/htmlall/003979_000100/003979_000100_000417.htm","timestamp":"2014-04-20T11:20:04Z","content_type":null,"content_length":"7106","record_id":"<urn:uuid:46fcb864-28c3-427e-9c96-3e4f21a034e4>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00151-ip-10-147-4-33.ec2.internal.warc.gz"}
|
S.C.: Hysterical B-trees
, 1989
"... This paper is a study of persistence in data structures. Ordinary data structures are ephemeral in the sense that a change to the structure destroys the old version, leaving only the new version
available for use. In contrast, a persistent structure allows access to any version, old or new, at any t ..."
Cited by 250 (6 self)
Add to MetaCart
This paper is a study of persistence in data structures. Ordinary data structures are ephemeral in the sense that a change to the structure destroys the old version, leaving only the new version
available for use. In contrast, a persistent structure allows access to any version, old or new, at any time. We develop simple, systematic, and effiient techniques for making linked data structures
persistent. We use our techniques to devise persistent forms of binary search trees with logarithmic access, insertion, and deletion times and O(1) space bounds for insertion and deletion.
- Journal of the ACM , 1998
"... We introduce a new text-indexing data structure, the String B-Tree, that can be seen as a link between some traditional external-memory and string-matching data structures. In a short phrase, it
is a combination of B-trees and Patricia tries for internal-node indices that is made more effective by a ..."
Cited by 122 (12 self)
Add to MetaCart
We introduce a new text-indexing data structure, the String B-Tree, that can be seen as a link between some traditional external-memory and string-matching data structures. In a short phrase, it is a
combination of B-trees and Patricia tries for internal-node indices that is made more effective by adding extra pointers to speed up search and update operations. Consequently, the String B-Tree
overcomes the theoretical limitations of inverted files, B-trees, prefix B-trees, suffix arrays, compacted tries and suffix trees. String B-trees have the same worst-case performance as B-trees but
they manage unbounded-length strings and perform much more powerful search operations such as the ones supported by suffix trees. String B-trees are also effective in main memory (RAM model) because
they improve the online suffix tree search on a dynamic set of strings. They also can be successfully applied to database indexing and software duplication.
, 1993
"... In this paper, we give new techniques for designing efficient algorithms for computational geometry problems that are too large to be solved in internal memory, and we use these techniques to
develop optimal and practical algorithms for a number of important largescale problems. We discuss our algor ..."
Cited by 121 (20 self)
Add to MetaCart
In this paper, we give new techniques for designing efficient algorithms for computational geometry problems that are too large to be solved in internal memory, and we use these techniques to develop
optimal and practical algorithms for a number of important largescale problems. We discuss our algorithms primarily in the contex't of single processor/single disk machines, a domain in which they
are not only the first known optimal results but also of tremendous practical value. Our methods also produce the first known optimal algorithms for a wide range of two-level and hierarchical muir
{level memory models, including parallel models. The algorithms are optimal both in terms of I/0 cost and internal computation.
, 1992
"... 2 1 1 Figure 1: A red-black tree. The darkened nodes are black nodes. The external nodes are denoted by squares. Shown with each node is its rank. Wyk give another, simpler, implementation of
finger trees. They describe a finger data structure which is a modification of red-black trees, but othe ..."
Add to MetaCart
2 1 1 Figure 1: A red-black tree. The darkened nodes are black nodes. The external nodes are denoted by squares. Shown with each node is its rank. Wyk give another, simpler, implementation of finger
trees. They describe a finger data structure which is a modification of red-black trees, but other forms of balanced trees could be used as a basis for the structure. The two problems presented in
Chapters 3 and 4 rely on the use of redblack and finger trees respectively. In this chapter we give a fairly complete overview of red-black trees, of the finger trees introduced by Tarjan and Van
Wyk, and of a variant of these which we use in Chapter 4. The material here is intended to be comprehensive and useful as an introduction to these two types of data structures. Re - ack rees A
red-black tree is a full binary tree in which each node is assigned a color, either red or black. The leaves are called
"... Summary. In this paper we explore the use of weak B-trees to represent sorted lists. In weak B-trees each node has at least a and at most b sons where 2a
Add to MetaCart
Summary. In this paper we explore the use of weak B-trees to represent sorted lists. In weak B-trees each node has at least a and at most b sons where 2a<b. We analyse the worst case cost of
sequences of insertions and deletions in weak B-trees. This leads to a new data structure (level-linked weak B-trees) for representing sorted lists when the access pattern exhibits a (time-varying)
locality of reference. Our structure is substantially simpler than the one proposed in [7], yet it has many of its properties. Our structure is as simple as the one proposed in [5], but our structure
can treat arbitrary sequences of insertions and deletions whilst theirs can only treat non-interacting insertions and deletions. We also show that weak B-trees support concurrent operations in an
efficient way. 1.
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=957948","timestamp":"2014-04-19T17:55:29Z","content_type":null,"content_length":"22917","record_id":"<urn:uuid:026af63c-cf49-4a29-bbbc-979ccb57164e>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00441-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Integral [Calc 2]
November 19th 2008, 11:43 PM #1
Aug 2008
Integral [Calc 2]
There was a question on a previous exam and I can't seem to get the right answer.
$\int\frac{2dx}{x^3 + x}$
My initial method was to use Integration by Parts, but I can't seem to massage it correctly.
$u = \frac{2}{x}$; $du = 2ln(x)dx$
$dv = (x^2 + 1) dx$; $v = \frac{x^3}{3} + x$
Fairly certain this doesn't work though, although my arithmetic could be off.
Could someone please take a jab at this? I would appreciate it greatly. :]
There was a question on a previous exam and I can't seem to get the right answer.
$\int\frac{2dx}{x^3 + x}$
My initial method was to use Integration by Parts, but I can't seem to massage it correctly.
$u = \frac{2}{x}$; $du = 2ln(x)dx$
$dv = (x^2 + 1) dx$; $v = \frac{x^3}{3} + x$
Fairly certain this doesn't work though, although my arithmetic could be off.
Could someone please take a jab at this? I would appreciate it greatly. :]
Use a partial fraction decomposition.
Put $x=\frac1u$ and you'll get an easy integral.
November 19th 2008, 11:49 PM #2
November 20th 2008, 07:59 AM #3
Aug 2008
November 20th 2008, 08:51 AM #4
|
{"url":"http://mathhelpforum.com/calculus/60620-integral-calc-2-a.html","timestamp":"2014-04-19T01:11:47Z","content_type":null,"content_length":"42290","record_id":"<urn:uuid:2dc9913c-f2a7-45b3-9b97-538cfaba824a>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00397-ip-10-147-4-33.ec2.internal.warc.gz"}
|
An approximate converse of discrete uncertainty principle
up vote 4 down vote favorite
Let $f:\mathbb{Z}_n \rightarrow \{0, 1\}$ and let's normalize the Fourier transform $\hat{f}$ so that $\|\hat{f}\|_2 = \|f\|_2$, i.e. $$\hat{f}(\xi) = \frac{1}{\sqrt{n}}\sum_{x \in \mathbb{Z}_n}{f(x)
e^{-2\pi i x \xi/n}}$$ Also let $\hbox{supp}(f) = \{x \in \mathbb{Z}_n: f(x) \neq 0\}$.
What I am calling the discrete uncertainty principle is the following statement:
If $|\hbox{supp}(f)| > 0$ then $|\hbox{supp}(f)| \cdot |\hbox{supp}(\hat{f})| \geq n$.
This inequality is tight for the Dirac comb. Also, for $n$ a prime number a much stronger inequality is true: $|\hbox{supp}(f)| + |\hbox{supp}(\hat{f})| \geq n + 1$ (again as long as $f$ is not the
constant 0 function).
The uncertainty principle states that if $f$ is is "concentrated" then $\hat{f}$ is "spread-out". I am interested in the existence of a weak converse, i.e. is it true in some approximate sense that
if $f$ is very spread out then $\hat{f}$ is fairly concentrated.
Here is a possible theorem statement that I would like to be true:
Let $f:\mathbb{Z}_n \rightarrow \{0, 1\}$ and let $\hat{f}$ be define as above. Is it true that for any $f$ s.t. $\|f\|_2^2 \geq \sqrt{n}$ there exists a set $S \subseteq \mathbb{Z}_n$ s.t. $|S|
\leq \sqrt{n}$ and $$\sum_{\xi \in S}{|\hat{f}(\xi)|^2 \geq \|\hat{f}\|_2^2 - \sqrt{n}} = \|f\|_2^2 - \sqrt{n}$$
Note that since the range of $f$ is $\{0, 1\}$, $\hbox{supp}(f) = \|f\|_2^2$. Note also that the condition that $\|f\|_2^2 \geq \sqrt{n}$ is redundant given the error factor of $\sqrt{n}$. On the
other hand, some error factor is necessary, given the strong inequality for $n$ a prime number that I mentioned above.
The reasons I have for guessing this statement are that
1. I want it to be true (for my application) :)
2. I have checked it by brute-force enumeration for $n \leq 23$.
Is there any statement of this form known? Or is it obviously false?
fourier-analysis rt.representation-theory
1 If you try the random 0-1-valued function, I think you will find that asymptotically almost surely $\hat f(0) = \|f\|_2^2 \sim n$, but that $|\hat f(\xi)| = O(\log n)$ for all non-zero $\xi$ (by
the Chernoff inequality). So this will provide a counterexample to your statement for n large enough. – Terry Tao Dec 11 '11 at 23:24
... $\hat f(0)$ should be $n^{1/2} \hat f(0)$, with your normalisations. Also, a deterministic counterexample can probably be constructed by setting f to be the indicator function of the quadratic
residues, say in the case when n is prime. – Terry Tao Dec 11 '11 at 23:25
@TerryTao Now I am amazed I didn't try this calculation. Thanks! So, $|\hat{f}(0)| = \sqrt{n}/2 \pm O(\log n)$ and $|\hat{f}(\xi)| = O(\log n)$ for all $\xi \neq 0$ with nonzero probability. So
clearly for error factor $\sqrt{n}$ the set $S$ needs to be of size at least $(n - \sqrt{n})/\log n$. – Sasho Nikolov Dec 12 '11 at 0:17
I can accept this if it's given as an answer. Sorry for asking an uninteresting question. Unless someone sees a way to salvage a statement like this, but right now I don't see a case where the
probabilistic counterexample would fail. – Sasho Nikolov Dec 12 '11 at 0:23
add comment
1 Answer
active oldest votes
(My previous comment, converted to an answer as requested.)
If one sets $f$ to be the random 0-1 valued function, then from the Chernoff inequality one sees that with non-zero probability, one has $\hat f(0) = \sqrt{n}/2 + O(1)$, $\|f\|_2^2 =
n/2 + O(\sqrt{n})$ and $\hat f(\xi) = O(\log n)$ for all $\xi \neq 0$, so the Fourier transform is basically maximally dispersed, so there is no concentration at anywhere near the
scale suggested.
up vote 7 down If $n$ is prime, one can obtain a deterministic version of this example (without the losses of $\log n$) by taking $f$ to be the indicator function of the quadratic residues, and
vote accepted then using Gauss sums.
Informally, "most" functions (drawn from, say, a gaussian measure) will be more or less uniformly spread out in phase space, which implies that the function and its Fourier transform
will both be spread out uniformly as well. Concentration (either in physical space or frequency space) is the exception rather than the rule.
add comment
Not the answer you're looking for? Browse other questions tagged fourier-analysis rt.representation-theory or ask your own question.
|
{"url":"http://mathoverflow.net/questions/83208/an-approximate-converse-of-discrete-uncertainty-principle?answertab=oldest","timestamp":"2014-04-20T06:29:51Z","content_type":null,"content_length":"57598","record_id":"<urn:uuid:99340463-43cc-4ec6-b990-6addfe138e20>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00229-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Relatively Prime Quadratic Integers
December 8th 2010, 10:44 AM #1
Jun 2010
United States
Hello Math Help Forum,
I saw the following problem on another forum but the responses were all over the place and I was hoping if someone could give some clarity. The problem states:
Assume $32 = \alpha\beta$ for $\alpha,\beta$ relatively prime quadratic integers in $Q[i]$. It can be shown that $\alpha = \epsilon \gamma^2$ for some unit $\epsilon$ and some quadratic integer $
\gamma$ in $Q[i]$.
Can someone explain how this is shown?
Well, $32 = 2^5 = (1+i)^5(1-i)^5$. And $1+i,1-i$ are irreducible. But $(1+i)=i(1-i)$. Hence $32 = i(1-i)^{10}$ is the unique expression of $32$ in $\mathbb{Z}[i]$ as a product of irreducibles.
Can you finish from there?
December 29th 2010, 01:04 PM #2
|
{"url":"http://mathhelpforum.com/number-theory/165709-relatively-prime-quadratic-integers.html","timestamp":"2014-04-17T07:22:53Z","content_type":null,"content_length":"34895","record_id":"<urn:uuid:b4d0fec4-25c0-476a-8fa9-62ea38c60298>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00623-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Nonlinear systems
There is a long standing debate if financial systems are truly random or contain some structure. From the study of non-linear dynamical systems and chaos one finds it is possible that even perfectly
deterministic systems can appear to be random. Those systems tend to be predictable on short time scales but neighboring states quickly diverge, leaving the operator incapable of forecasting future
outcomes beyond a characteristic time scale. Still, the study of these systems provides insights and technics for forecasting, state classification and control. For machine learning applications data
from chaotic model systems can be a benchmark for tuning and selection of learning algorithms.
In this post I will briefly explore the ‘R’ package “tseriesChaos” and demonstrate its usage to generate chaotic sample data. In the following example mapping out the time-delay embedded signal
reveals the simple structure that governs this particular system.
#load the library tseriesChaos and scatterplot3d for graphing
#generate and plot times series data for the Rössler system
rossler.ts plot(rossler.ts)
Represent and plot the times series as a 3 dimensional delayed vector [x(t), x(t-d), x(t-2*d)]
xyz scatterplot3d(xyz, type="l")
Taking the Poincaré section, which is to read a slice off the continuous attractor if it crosses a lower dimensional subspace. Here the z dimension is recorded whenever the x dimension crosses the
value 0 from above.
pc<-xyz[1000+which(sapply( 1000:(length(xyz[,1])-1) , function(i) ( (xyz[i+1,1]<xyz[i,1]) * (xyz[i,1]*xyz[i+1,1]) ) )<0),3]
Plotting the return map reveals the law that governs the evolution of the states. In particular one can identify a fixed-point that where the 45 degree line crosses the return map. If the system were
brought to this state it would always remain on it.
plot(pc[1:(length(pc)-1)],pc[2:length(pc)],pch=16,cex=.4,xlab="current state", ylab="next state",main="Return Map")
Similarly plotting states two iterations apart reveals the location of period-2 orbits.
plot(pc[1:(length(pc)-2)],pc[3:length(pc)],pch=16,cex=.4,xlab="previous state", ylab="next state",main="Return Map")
An area of research in this type of systems is to dynamically identify and control such special states. The ability of learning algorithms to recover the hidden model structure can be benchmarked
against clean test data as generated in the example above and also against chaotic data with added noise. In addition, in order to avoid over-fitting (high variance) learning of real model data can
also be compared against surrogate data. Here surrogate data refers to a generated random time series that shares the same frequency power spectrum and probability distribution with the original
signal, but contains no deterministic structure. See also D. Prichard and J. Theiler in “Generating surrogate data for time series with several simultaneously measured variables” and J.D Farmer and
others in “Testing for nonlinearity in time series: the method of surrogate data”.
|
{"url":"http://quantsignals.wordpress.com/2012/06/22/nonlinear-systems/","timestamp":"2014-04-21T09:36:11Z","content_type":null,"content_length":"54995","record_id":"<urn:uuid:878f5e54-d932-4843-afc5-c4c3c341e87e>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00388-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
See attachment
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/5116cc85e4b09e16c5c8816a","timestamp":"2014-04-19T13:00:03Z","content_type":null,"content_length":"59697","record_id":"<urn:uuid:648f7cfe-c484-4caf-81ba-2f29aa70706b>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00066-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Posts by
Total # Posts: 512
At car (1140-kg)traveling at 76.04 m/s and truck (12600-kg)traveling 6.88 m/s have a head-on collision and then stick together. What is their final common velocity (m/s)? (assume the car is going in
the positive direction) What is the cars' change in momentum in the above ...
And yes, that is the right mass.
Ohh sorry, the truck is going (24.8 km/hr) and the car is going 76.042 m/s
At car (1140-kg)traveling at 24.8 m/s and truck (12600-kg) have a head-on collision and then stick together. What is their final common velocity (m/s)? (assume the car is going in the positive
direction) What is the cars' change in momentum in the above question (kg m/s)? ...
In an intense battle, gunfire is so intense that bullets from opposite sides collide in midair. Suppose that one (with mass M= 5.12 g moving to the right at a speed V= 214 m/s directed 21.3 degrees
above the horizontal) collides and fuses with another with mass m = 3.05 g movi...
At what speed would a 1140-kg car have the same momentum as a 12,600-kg truck traveling at 25 km/hr? The hint given to this problem is that I need to find the velocity of the car and truck.
Converting 25 km/hr to m/s I get 6.944 m/s as the initial velocity.. The professor told...
Thank you for such a wonderful explanation
An object slides without friction in a bowl shaped like a hemisphere. It is released from rest at one edge of the bowl, slides up the other side before stopping right on the edge of the bowl. During
which part of this process is the power being delivered by the gravitational f...
A ball is thrown upward and allow to fall under the influence of gravity. We will ignore the effects of air resistance. We consider three times: Time A, when the ball is first thrown upward, Time B
when the ball reaches the peak, and Time C when the ball hits the ground. Rank ...
A parachutist with mass 79.7 kg jumps from an airplane traveling at a speed vi= 112 km/hr at a height H = 2570 m. He lands with a speed of vf = 5.21 m/s. What is the change in mechanical energy (J)
of the earth- parachutist system from just after the jump to just before landin...
Physics Conservation of energy
A box is pushed across the floor against a constant frictional force. The box is pushed across the room to the east in 30 seconds and returned to its starting point (pushed to the west) in 60
seconds. Which of the following is true about the work done and the power expended in...
Physics Conservation of Energy
A car traveling at 20 miles/hour stops in a distance of 15 feet when its brakes are locked. Assuming that the frictional force doesn't depend on speed, that same car traveling at 60 miles/hour will
stop in a distance closest to a. 15 feet b. 45 feet c. 90 feet d. 135 feet ...
An aqueous solution is 22.0% by mass silver nitrate, AgNO3, and has a density of 1.22 g/mL. what is the molality of silver nitrate in the solution?
A 645-kg elevator starts from rest and reaches a cruising speed of 1.47 m/s after 3.13 seconds. It moved 2.75 m during that time. What average power (W) is delivered by the motor during the initial
acceleration of the elevator during the first 3.13 seconds? The correct solutio...
Physics Block and ramp
Thank you so much!!!! I get it.
Physics Block and ramp
A crate of mass 0.812 kg is placed on a rough incline of an angle 35.3 degrees. Near the base of the incline is a spring of spring constant 1140 N/m. The mass is pressed against the spring a distance
x and released. It moves up the slope 0.169 meters from the compressed positi...
Physics- bungee jump
A tall bald student (height 2.1 meters and mass 91.0 kg) decides to bungee jump off a bridge 36.7 meters above the river. The bungee cord is 25.3 meters long as measured from the attachment at the
bridge to the foot of the jumper. Treat the bungee as an ideal spring and the st...
A construction crew pulls up an 87.5 kg load using a rope thrown over a pulley and pulled by an electric motor. They lift the load 15.5 m and it arrives with a speed of 15.6 m.s having started from
rest. Assume that acceleration was not constant. How much work was done by the ...
Physics - pulley
A construction crew pulls up an 87.5-kg load using a rope thrown over a pulley and pulled by an electric motor. They lift the load 15.5 m and it arrives with a speed of 15.6 m/s having started from
rest. Assume that acceleration was not constant. I have done the problem but am...
Physics - Spring gun
A spring gun is made by compressing a spring in a tube and then latching the spring at the compressed position. A 4.87-g pellet is placed against the compressed and latched spring. The spring latches
at a compression of 4.87 cm, and it takes a force of 9.13 N to compress the s...
A child rides on a Ferris Wheel at the State Fair. The seat always faces to the right. There are three forces that act on the child,the child s weight, W, the normal force of the seat,N, and the
force of friction between the child and the seat, f. At which listed location...
Physics - Multiple choice
Correction: I believe it to be (a) not (c)
Physics - Multiple choice
An object is moving in a circle at a constant speed. From this you can be certain that a) there are at least two forces acting on the object, which balance each other. b) there are at least two
forces acting on the object, which do not balance each other c) there must be more ...
A pendulum of length L=26.0 cm and mass m= 168 g is released from rest when the cord makes an angle of 65.2 degrees with the vertical. A) how far (m) does the mass fall before reaching its lowest
point? My work: .26m - .109 = .151 m B) how much work (J) is done by gravity as i...
list all the factors of 84
At approximately what rate would you have to invest a lump-sum amount today if you need the amount to triple in six years, assuming interest is compounded annually?
NVM's preferred stock currently has a market price equal to $80 per share. If the dividend paid on this stock is $6 per share, what is the required rate of return investors would get from NVM
preferred stock?
simplify sin7x-sin3x as a product of trig functions.
convert the point with polar coordinates (2,7pi/6) into rectangular coordinates.
The parametric equations for a curve in the x-y plane are x=2+t^2 and y=4-3t. Determine the points where the curve intersects the x-axis.
simplify the expression: sin(theta+pi/4) + sin(theta-pi/4)
convert the polar equation r^2=2cos^2theta + 3sin^2theta into a rectangular equation
simplify the expression sin(theta+pi/4) + sin(theta-pi/4)
While on the moon, the apollo astronauts enjoyed the effects on gravity so much smaller then that ont he earth. If neil armstrong jumped up on the moon with the initial speed
Suppose the speedometer in your car reads 55.0 mph also written as mile/hour. What is your speed in m/sec? (1 km = 0.621 mi.) unit factor methond soooo lost on this one please help!!!
a piece of ribbon is 4 1/2 feet long. a piece of blue ribbon is 1 yd long. How many feet longer is the piece of red ribbon than the piece of blue ribbon?
find the sum estimate to cheack if the answer is reasonable
x|y 6|2 12|4 24|8 33|11
X|Y 25|17 32|24 46|38 59|51 COULD YOU PLEASE EXPLAIN HOW TO GET THE ANSWER.
find a rule for each table, write an equation for each rule. x|y 0|16 20|36 36|52 42|58
5th grade math
please show how to get the answer... w>2 d+6<15 s-1>4
5th grade math
please show how to get the answer... w>2 d+6<15 s-1>4
5th grade math
please show how to get the answer... w>2 d+6<15 s-1>4
could you show me how to get the answer.. name three soultions to the inequality,
w > 2 d + 6 < 15 s - 1 > 4 please explain
5th grade math
name three solutions of inequality. then graph the inequality on a number line. k<8
Alg 1
Graph -4x+y>-6 5y<=-x-20 Then give 2 ordered pairs that solve it& 2 ordered pairs that dont solve it
A farmer has 165 feet of fencing material in which to enclose a rectangle area. He wants the length x to be greater than 50 feet and width y to be no more than 20 feet. write a system to represent
this situation.
Alg 1
The popultion of Medford High is 800 students & the population of WEstville High is 1240 students. Medfords pop. has been increasing by 30 students per year, while westville has been decreasing by 25
students per year. In how many yearas will the populations be the same. how m...
5th grade math
5th grade math
use inverse operations and a property of equaility to solve the equations. x+13=42 x-12=37
5th grade math
get each variable by itself on one side of the equation. x-45=90 n+23.4=36.9
Yea so it would be K2CrO4
prime and composite arrays are hard .Help please .
why the formula for finding the surface area of a rectangular prism is helpful.
Julia deposits $900 in a savings certificate that pays 6.5 % annually. How much money will Julia have at the end of one year?
a woman is mowing her lawn. she is pushing on the handle with a force of 250 newtons. if the handle is at a 40 degree angle with the horizontal, resolve the vector. ??
a man is pulling a 100kg crate with a force of 250 newtons. the rope makes a downward angle of 30.0 degrees with the horizontal. if the crate is sliding at a constant rate, calulate the coefficient.
?? how do i do this??
a 35.5kg box slides down an incline plane with a 25 degree angle at a constant rate. how do i calculate the coefficient of friction
n(t)=-3t^+23t^2=8t Rewrite the formula by factoring the right-hand side completely. b) Use the factored version of the formula to find N(3) maximum number of components assembled
In a study of worker efficiency at Wong Laboratories it was found that the number of components assembled per hour by the average worker t hours after starting work could be modeled by the formula.
Rewrite the formula by factoring the right-hand side completely. b) Use the fac...
How large a sample from N(80,20) population is needed so that the sampling distribution of the mean has a standard deviation of 1?
An angry construction worker throws his wrench downward from a height of 128 feet with an initial velocity of 32 feet per second. The height of the wrench above the ground after t seconds is given by
S(t)= -16t^2 - 32t + 128. a) What is the height of the wrench after 1 second?...
A hill is 290 m long and makes an angle of 10 degrees with the horizontal. As a 68 kg jogger runs up the hill, how much work does gravity do on the jogger?
organic chemistry
What is the {\rm HCl} concentration if 54.2 mL of 0.100 M {\rm NaOH} is required to titrate a 30.0 mL sample of the acid?
write 1.06X as a sum
The fastest recoded pitch in Major league baseball was thrown by nolan ryan in 1974. If this pitch were thrown horizontally, the ball would fall 0.809m(2.65 ft)by the time it reached home plate,
18.3m (60ft) away. how fast was ryans pitch?
Earth Science
If the date was March 22 and one saw the noon sun at 35 degrees south above the horizon, what latitude is one at?
A package of medical supplies is released from a small airpllane flying a mercy mission over an isolated jungle settlement. The plane flies horizontally with a speed of 20 m/s and an altitude of 20
meters. How far from the release point will the package strike the ground?
Earth Science
Please determine the noon sun angle at the given latitudes on April 11: 40 degrees north 0 degrees
Earth Science
How many hours of daylight occur at the following locations on the specified dates? March 22 ---------------- 40 degrees north 0 degrees 90 degrees south December 22 ---------------- 40 degrees north
0 degrees 90 degrees south
A motor cyclist traveling at 30 m/s accelerates uniformly for 5 seconds to v= 40 m/s. H then maintains a constant velocity for the next 4 minutes. How far did the motor cyclist travel throughout the
entire motion?
joshua had 5/6 of a tank of gas when he started his trip. if he used 1/10 of what was there what part of a tank was left at the end of the trip?
Question part Points Submissions 1 0/1 5/15 Total 0/1 You ride your bike for 1.5 h at an average velocity of 11 km/h, then for 30 min at 15 km/h. What is your average velocity?
Physical Science
A force of 400.0 N is exerted on a 1,250 N car while moving it a distance of 3.0m. How much work was done on the car?
adult education
You and a friend are invited to my garage sale.
An artillery shell is fired with an initial velocity of 300 m/s at 65.0° above the horizontal. To clear an avalanche, it explodes on a mountainside 37.5 s after firing. What are the x and y
components of the shell where it explodes, relative to its firing point? x = _____...
math 7th
64 over 96
Calculate each binomial probability: a. X = 2, n = 8, ð = .10 b. X = 1, n = 10, ð = .40 c. X = 3, n = 12, ð = .70 d. X = 5, n = 9, ð = .90
A lottery ticket has a grand prize of $28 million. The probability of winning the grand prize is .000000023. Based on the expected value of the lottery ticket, would you pay $1 for a ticket? Show
your calculations and reasoning clearly.
i dont know
What volume of 4.50M HCl can be made by mixing 5.65M HCl with 250 mL of 3.55M HCl?
9m+n=24 m-8n=27 n= 24-9m 27= m - 8(24-9m) 27= m - 192 - 72m 219= -71m so m = 3.0845 Double check by entering this into the other equation. 9(-3.0845)+ (24-9(-3.0845) -27.7605 + 24 - -27.7605 = 24. so
m = -3.0845 -3.0845 - 8(n)= 27 -8n= 30.0845 n= - 3.7606 m: -3.0845 n= -3.7606
-10xy - 15y / 5y I believe the expression simplifies to -2x - 3. The Y's cancel out when you divide by 5y.
If 10.0g of CaCl2 and 10.0g of NaCl are dissolved in 100.0mL of solution, what is the concentration of chloride ions? I found the molar mass of CaCl2 is 111.0g/mol; the molar mass of NaCl is 58.44g/
mol. I divided 10g CaCl2/111.0g CaCl2 I divided 10g NaCl/588.44g NaCl and added...
So, 2C4H8O + 11O2 ---> 8CO2 + 8H2O Is that right?
It says to write the balanced equation for the complete combustion of cyclobutanol in Oxygen. I'm not even sure what the product is supposed to look like so how can I balance the equation? Please
help. C4H8O + O2 ---> C4H8O
You have to draw Lewis Dot structures and than use 1 of two approaches. The first is The vector addition approach, which is most helpful for smaller molecules. The 2nd approach is the center of
charge approach, and you look to see where your negatives and positives are on your...
For which of the following reactions is the value of Delta H rxn equal to Delta H of formation? I. 2Ca(s) + O2(g) --> 2CaO(s) II. C2H2(g) + H2(g) --> C2H4(g) III. S(s) + O3(g) --> SO3(g) IV. 3Mg(s) +
N2(g) --> Mg3N2(s). According to my answer key, the only one is I...
What is the de Broglie wavelength, in meters, associated with a proton. (mass = 1.673 x 10^-24g) accelerated to a velocity of 4.5 x 10^7 m/s? I tried dividing the mass by the velocity. I also
multiplied the mass times the velocity, and neither answer came out correctly. The fi...
Thank you very much.
One of the lines in the spectrum of a mercury vapor lamp has a wave length of 254nm. What is the energy, in kj/mol, of this electro magnetic radiation. The work I have done: (6.626 x 10^-34 x 3.0 x
10^8) all divided by (254 x 10^-9.) The answer I calculated is 7.82 x 10^-19. T...
If 10.0g of CaCl2 and 10.0g of NaCl are dissolved in 100.0mL of solution, what is the concentration of chloride ions? (the molar mass of CaCl2 is 111.0g/mol; the molar mass of NaCl is 58.44g/mol. I
found individual molar mass of: Ca-40.08 Cl-35.45 Na-22.99 So I divided 10g CaC...
Thankyou both very much.
Pages: <<Prev | 1 | 2 | 3 | 4 | 5 | 6 | Next>>
|
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=Casey&page=3","timestamp":"2014-04-18T10:44:13Z","content_type":null,"content_length":"28122","record_id":"<urn:uuid:00e83f4c-aa5c-43cc-a271-1cf21c5c5f22>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00663-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Quick Question
10-03-2012 #1
Registered User
Join Date
Oct 2012
Quick Question
This is for a class. I am NOT asking for someone to do this for me. I just can't find out an issue and would appreciate it if anyone could point out its cause.
This is the program.
int main(void)
//Variable declaration
double a, b, c, discriminant, r1, r2;
//Display title prompt
printf("This program finds the roots of the quadratic equation.\n");
//Prompt user for coefficients of the quadratic equation
printf("Input the a, b, and c coefficients of the quadratic equation\n in this form: (a)x^2 + (b)x + (c) \n");
//Read user values and assign to appropriate variables
scanf("%dx^2+%dx+%d", &a, &b, &c);
//Find root type
discriminant = (b*b-4*a*c);
//Find roots using the variables and display
if (discriminant>0)
r1 = -b+sqrt(discriminant)/(2*a);
r2 = +b+sqrt(discriminant)/(2*a);
printf("The two roots are real: ");
printf("%d %d\n");
else if (discriminant==0)
r1 = -b+sqrt(discriminant)/(2*a);
r2 = +b+sqrt(discriminant)/(2*a);
printf("The two roots are equal: ");
printf("%d %d\n");
r1 = -b+sqrt(discriminant)/(2*a);
r2 = +b+sqrt(discriminant)/(2*a);
printf("The two roots are complex and may not be correct: ");
printf("%d %d\n");
//Termination statements
system ("PAUSE");
return 0;
My problem is that, no matter the variables, I always get the last else statement as the answer "The roots are complex...." even when I KNOW they're real or equal. Can't see why. Any help?
Last edited by Salem; 10-04-2012 at 12:08 AM. Reason: old code restored
Any guesses? I've only just started learning, so if it's a simple error, don't hold it against me.
You're using the format for integers in scanf. The format for double is lf (that's a lowercase L).
scanf("%lfx^2+%lfx+%lf", &a, &b, &c);
The cost of software maintenance increases with the square of the programmer's creativity. - Robert D. Bliss
I fixed that. It just made the wrong answer longer. Changed everything to float and put %f everywhere.
Any other ideas?
Post your current code.
You are not only using the wrong format in your printf's, you also don't have any variables listed!
So this
printf("%d %d\n");
should be
printf("%f %f\n", r1, r2);
Last edited by oogabooga; 10-03-2012 at 10:36 PM.
The cost of software maintenance increases with the square of the programmer's creativity. - Robert D. Bliss
Also changed discriminant to d. and some other syntax things that doesn't matter.
Current code:
int main(void)
//Variable declaration
float a, b, c, d, r1, r2;
//Display title prompt
printf("This program finds the roots of the quadratic equation.\n");
//Prompt user for coefficients of the quadratic equation
printf("Input the a, b, and c coefficients of the quadratic equation\n in this form: (a)x^2 + (b)x + (c) \n");
//Read user values and assign to appropriate variables
scanf("%fx^2+%fx+%f", &a, &b, &c);
//Find root type
d = (b*b)-(4*a*c);
//Find roots using the variables and display
if (d>0)
r1 = (-b+(sqrt(d)))/(2*a);
r2 = (b+(sqrt(d)))/(2*a);
printf("The two roots are real: ");
printf("%f %f\n");
else if (d==0)
r1 = (-b+(sqrt(d)))/(2*a);
r2 = (b+(sqrt(d)))/(2*a);
printf("The two roots are equal: ");
printf("%f %f\n");
r1 = (-b+(sqrt(d)))/(2*a);
r2 = (b+(sqrt(d)))/(2*a);
printf("The two roots are complex and may not be correct: ");
printf("%f %f\n");
//Termination statements
system ("PAUSE");
return 0;
Last edited by Salem; 10-04-2012 at 12:09 AM. Reason: restore old code
Still wrong.
//code deleted
Last edited by Lavendershoe; 10-03-2012 at 11:19 PM.
I figured out what I needed.
Thank you, oog. Appreciated.
Please stop deleting your code after you've got your answer.
If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
If at first you don't succeed, try writing your phone number on the exam paper.
I support http://www.ukip.org/ as the first necessary step to a free Europe.
10-03-2012 #2
Registered User
Join Date
Oct 2012
10-03-2012 #3
10-03-2012 #4
Registered User
Join Date
Oct 2012
10-03-2012 #5
10-03-2012 #6
Registered User
Join Date
Oct 2012
10-03-2012 #7
Registered User
Join Date
Oct 2012
10-03-2012 #8
Registered User
Join Date
Oct 2012
10-03-2012 #9
Registered User
Join Date
Oct 2012
10-04-2012 #10
|
{"url":"http://cboard.cprogramming.com/c-programming/151103-quick-question.html","timestamp":"2014-04-17T12:32:54Z","content_type":null,"content_length":"76693","record_id":"<urn:uuid:5eb9f3ec-4fce-4e1c-9243-3a3300f12fe9>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00129-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Problem solving
February 16th 2008, 10:54 PM #1
Junior Member
Feb 2008
Problem solving
Please some one help me with my daughters maths problem.
If a square is divided into 16 squares, how many different ways are there to get from the top left hand corner to the bottom right hand corner if only moving right or down. Can you also please
explain how to get the answer.
There's a reaaaally easy way to do this without using even simpliest combinatorics
This is our path. We're allowed to move only right and bottom.
Now, I put a 1 on that point. What does it mean? It means there's only 1 way to go from that point to the point B.
I put another 1. There's only 1 way to go from that point to B too.
I summed them and wrote 2 on that point. This means there are only 2 ways to go from that point to B.
I placed all 1s..
And summed the other points..
Continue summing...
OK, we completed the diagram
So, there are 70 ways to go from A to B.. Solved!
This diagram with the numbers are called Pascal's Triangle. Numbers on it are also a list of combinations.
Last edited by wingless; February 17th 2008 at 02:24 AM.
February 17th 2008, 02:04 AM #2
February 17th 2008, 07:07 AM #3
Global Moderator
Nov 2005
New York City
|
{"url":"http://mathhelpforum.com/math-topics/28441-problem-solving.html","timestamp":"2014-04-19T22:18:11Z","content_type":null,"content_length":"37243","record_id":"<urn:uuid:c9ba9be7-d572-41fa-8e85-c3f209d6d40a>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00512-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Philosophy of Mathematics
March 30th 2012, 05:14 AM #1
Mar 2012
Well hello everyone! This is a new section devoted to philosophical questions related to mathematics.
The philosophy of mathematics is an important and interesting subject. As history has demonstrated, every great leap in mathematics is a philosophical step forward. A mathematician never needs to
stop and ask 'what is a number' in order to solve an equation. However, in order to enter into new understandings of the world around us, mathematics is utilised as a langue allowing us to 'make
sense' of these 'deeper' elements of reality.
All great mathematical, scientific or technological breakthroughs are, at their core, philosophical steps forward. Let us also remember that many pivotal philosophers were also mathematicians,
including: DesCartes, Leibniz, Frege, Whitehead, etc. This section is aimed at stimulating discussion that can help expand our understanding of mathematics as a subject and provoke new ways at
looking at the world through the use of numbers.
Feel free to post any questions or comments.... the only one who loses a philosophical argument is the person who refuses to speak!
We look forward to hearing from you!
|
{"url":"http://mathhelpforum.com/math-philosophy/196593-philosophy-mathematics.html","timestamp":"2014-04-20T10:10:40Z","content_type":null,"content_length":"30448","record_id":"<urn:uuid:ad67c505-349a-4eff-89b3-c694ce7e5deb>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00105-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Maximum and minimum value
October 4th 2007, 04:27 PM
Maximum and minimum value
Show that the function f(x) = 1/squareroot(x) has neither the maximum value or the minimum value on (1,2).
maximum: lim f(x) as x-> 0 = infinity which is not in (1,2).
minimum: lim f(x) as x-> infinity = 0 which is not in (1,2).
Is this correct?
October 4th 2007, 09:06 PM
i doubt it. i believe they are talking about the local max and min, as in, where the slope is zero, as in, find the derivative and show that it is never zero, particularly not in that interval
|
{"url":"http://mathhelpforum.com/calculus/20001-maximum-minimum-value-print.html","timestamp":"2014-04-17T04:30:01Z","content_type":null,"content_length":"4408","record_id":"<urn:uuid:479999e5-91e8-47d6-ae65-b60e757b7ec2>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00086-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Conway, WA Math Tutor
Find a Conway, WA Math Tutor
...I am also happy to help younger children struggling with science find a love for science! I am able to tutor statistics, algebra, geometry, and basic math for young children through high
school level classes, as well. I received a 4.0 GPA during my undergraduate career in my French minor.
36 Subjects: including geometry, probability, SAT math, statistics
...Trigonometry could be challenging for some people, so I always try to make it easy to understand and fun to learn. I've helped many students improve their grades in trigonometry. If you don't
understand something or can't solve a problem, I can simplify it until you get it and solve it all by yourself.
13 Subjects: including algebra 1, algebra 2, general computer, geometry
I am an experienced test prep tutor who has worked for three major tutoring companies. I continue to work for a premier tutoring company in the area, so I am knowledgeable of changes in the SAT
and ACT and current on the latest techniques for improving scores. I teach techniques that work to enable students to reach their SAT or ACT goal score.
36 Subjects: including SAT math, ACT Math, geometry, prealgebra
...Formally, I have taken college courses on Religion and the Philosophy of Religion. My expertise in Tennis is primarily in the fundamentals of technique and the basics of play. I grew up
learning how to play tennis at the age of 8.
20 Subjects: including algebra 1, algebra 2, statistics, basketball
I am new to the Marysville area and looking to work as a tutor while I finish school and further education courses. I am currently taking classes to get my EMT certification, while awaiting
nursing school. I graduated from University of Washington in 2013 with a BS in Biology.
23 Subjects: including algebra 1, ACT Math, precalculus, GED
Related Conway, WA Tutors
Conway, WA Accounting Tutors
Conway, WA ACT Tutors
Conway, WA Algebra Tutors
Conway, WA Algebra 2 Tutors
Conway, WA Calculus Tutors
Conway, WA Geometry Tutors
Conway, WA Math Tutors
Conway, WA Prealgebra Tutors
Conway, WA Precalculus Tutors
Conway, WA SAT Tutors
Conway, WA SAT Math Tutors
Conway, WA Science Tutors
Conway, WA Statistics Tutors
Conway, WA Trigonometry Tutors
|
{"url":"http://www.purplemath.com/conway_wa_math_tutors.php","timestamp":"2014-04-18T13:38:59Z","content_type":null,"content_length":"23729","record_id":"<urn:uuid:c27edc31-f8a4-41b3-beab-c2536a387617>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00531-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Physics Forums - View Single Post - Electrical potential problems
I am sometimes confused by electric potential sometimes in question. I know its is the work done or potential energy of a particle at specific place,and electric potential is work done per charge.
You've said electric potential twice. So I think you meant to say electric potential energy in the first sentence. In this case, yes, you've understood it.
For problem 2 I just used that U = Ufinal - UInititial; and since UInitial is approaching zero,so I neglected that.
This is the correct way to do the problem. But your method to get Ufinal was not right. (Although you did get the right answer anyway).
...I think I solved it thanks for that hint BruceW I first we know since its conservative force then potential will transform to kinetic...W = 4.48 * 10^-15 * 4 = 1.792 * 10^-14;
You've got the right idea, but the calculation has a mistake. The plates are separated by 4 cm, so you should convert that into SI units before you use it in the equation, or it'll get complicated to
keep track of the units.
For problem 3 we know that V = kQ / d and we derived that from integration I saw lecture of walter lewin on that. We also know that U / q = V,so we can do the following --> V = Vfinal - Vinitial ,
but I am still confused into how to think about this problem though I think I am over complicating it in my head..
You're given the electric field, so you need to relate that to the potential energy. The equation V = kQ / d is true for an electric field created by a point charge, but it is not true in this case,
where they are telling us that the electric field is constant throughout space. In other words, you have a much simpler integration to do!
|
{"url":"http://www.physicsforums.com/showpost.php?p=3789015&postcount=7","timestamp":"2014-04-17T18:25:04Z","content_type":null,"content_length":"10998","record_id":"<urn:uuid:eb139cb8-72e7-4040-a117-4bf13c221d5a>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00584-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
its been so liong i forgot please help real simple
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/5084236ce4b041b051b974dc","timestamp":"2014-04-19T19:57:05Z","content_type":null,"content_length":"53727","record_id":"<urn:uuid:b3826515-a6b7-468e-89e0-36ee6cbd886c>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00415-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Bala Cynwyd Algebra Tutor
Find a Bala Cynwyd Algebra Tutor
...I have high expectations and do expect to see a lot of hard work, productivity, progress and a regular evaluation of student skills in desired areas of focus, helping students be unashamed to
work on weakness in the direction of change and improvement. Cheers to building strong foundations for c...
12 Subjects: including algebra 1, algebra 2, geometry, chemistry
...My goal is to serve you and your learning needs! There is no one-size-fits-all when it comes to education. Let's think outside the box and help you to succeed!
20 Subjects: including algebra 1, algebra 2, reading, statistics
...As a test prep expert I've been able to help students both get into private schools and obtain scholarships. I know how best to ace a standardize test and hope to help your student get the
score he or she needs. I taught Desktop Publishing on Windows PCs at Rhodes High School in Philadelphia to two classes.
37 Subjects: including algebra 1, algebra 2, reading, geometry
...Chemical Equations and Reactions 9. STOICHIOMETRY 10. Representative Gases 11.
8 Subjects: including algebra 1, algebra 2, chemistry, biology
...Best,MattAs a graduate student in Clinical Psychology, I have spent a great deal of time working on academic and social skills with students on the Spectrum. I have worked in the field
professionally and academically and have a great deal of experience. As a graduate student in clinical psychology, I have a great deal of experience working with students on the spectrum.
23 Subjects: including algebra 1, algebra 2, reading, English
Nearby Cities With algebra Tutor
Ardmore, PA algebra Tutors
Bala, PA algebra Tutors
Belmont Hills, PA algebra Tutors
Bryn Mawr, PA algebra Tutors
Cynwyd, PA algebra Tutors
Flourtown algebra Tutors
Gladwyne algebra Tutors
Merion Park, PA algebra Tutors
Merion Station algebra Tutors
Merion, PA algebra Tutors
Narberth algebra Tutors
Overbrook Hills, PA algebra Tutors
Penn Valley, PA algebra Tutors
Philadelphia Ndc, PA algebra Tutors
Wynnewood, PA algebra Tutors
|
{"url":"http://www.purplemath.com/Bala_Cynwyd_Algebra_tutors.php","timestamp":"2014-04-16T19:35:50Z","content_type":null,"content_length":"23828","record_id":"<urn:uuid:7b50a289-62ac-4633-b95d-d4cb1f433d7d>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00616-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Posts by
Posts by Jane
Total # Posts: 1,317
a car travelved a distance of 195 miles in 3 hours 15 minutes what is the unit rate
The acceleration due to gravity on Earth is 9.80 m/s2. If the mass of a dog is 40.0 kg, what is the weight of the dog?
The longer leg of a triangle is one foot less than twice the shorter leg. The hypotenuse is 17 feet long. Find the area of the triangle.
Explain some modifiable risk factors as well as some actions you can take to avoid dying from a lifestyle disease.
How do lifestyle diseases differ from other diseases?
Maria's average speed for a 6 hour trip was 40kph. For the first 4 hours, she drove at an average rate of 50kph. What was her average speed for the last 2 hours?
Algebra 2
How do I find the equation of an ellipse with: Vertices: (-7,10), (-7,-10) Foci: (-7, square root of 19) (-7, - square root of 19) I think the center is (-7,0) and a^2 is 49 I just don't know how to
get c^2 so I cant get b^2
Can someone please tell me if I have the right answers? Pasteur's experiments led to which theory? A. biogenesis theory B. endosymbiont theory C. evolution theory D. spontaneous generation theory I
think its A or D Scientists have fossil evidence for which idea for the ori...
The cost C (in dollars) of manufacturing x wheels at Deon's Bicycle Supply is given by the function C(x)= 0.1x^2-60x+22,744. What is the minimum cost of manufacturing wheels? Do not round your
if I have 1 defect in a batch of 54 widgets, what is my quality index?
Computer Tech
Are they Daydreamin' and Yours Truly?
Computer Tech
I am doing a powerpoint/photoshop project in my computer tech class about a singer that has recently become famous in the world of music. The artist I am researching is Ariana Grande. When I was
looking for her albums I found two: Daydreamin' and Yours Truly. Some websites...
A golfer stands 420ft(140 yd) horizontally from the hole and 50ft above the hole. Assuming the ball is hit with an initial speed of 120ft/s, at what angle should it be hit to land in the hole? Assume
the path of the ball lies in a plane.
Computer Tech
Thank you!
Computer Tech
I have to do a power point on a singer who has recently become famous. I chose Ariana Grande because she only has two albums. I have been trying to find her song and album release dates but cant find
a website. Can someone please see if they could find one? It would be a big h...
Algebra 2
Thank you!
Algebra 2
Okay so I fixed the next two because I added the fractions wrong. 2. y=(1/2)x-(11/4) 3. y=(5/3)x-4
Algebra 2
What am I doing wrong? slope m=(8-6)/(1--5)= m=2/6= m=1/3 m=-3 midpoint (-5+1)/2, 8+6/2)= (-4/2, 14/2)= (-2, 7) y=mx+b= 7=-3(-2)+b = 7=6+b = 1=b equation y=-3x+1
Algebra 2
Can someone please check my answers? The question is: Write an equation for the perpendicular bisector of the line segment joining the two points. (-5,6),(1,8)= y=-3x+1 (1,4),(6,-6)= y=(1/2)x-(3/4)
(5,10),(10,7)= y=(5/3)x-(17/3)
Algebra 2
Can someone please explain how to do this problem? A regional soccer tournament has 64 participating teams. In the first round, the number of games played decreases by a factor of one half. Find a
rule for the number of games played in the regional soccer tournament. I tried u...
Algebra 2
I can't figure out how to find the n on top of the sigma. I have the rest of the equation but I just don't know what to do. I'll use "E" for sigma. nEi=1 (10-3i)=-28 Can someone please show me how to
do this problem?
Algebra 2
I can't figure out how to find the n on top of the sigma. I have the rest of the equation but I just don't know what to do. I'll use "E" for sigma. nEi=1 (-5+7i)=486 Can someone please show me how to
do this problem?
A truck is pulling a car. FT is the magnitude of the force that the truck exerts on the car FC is the magnitude of the force that the car exerts on the truck Consider the following scenarios
independently. 1)The truck is driving with a constant velocity, but as it turns out, t...
Science! just one question! help ASAP plzzz
Photosynthesis is required for this process to occur.
Algebra 2
Can someone please explain how to do this problem? Woodrow wins a tic-tac-toe game 65% of the time when he chooses the first square and 32% of the time when his opponent chooses the first square. The
player who plays first is chosen by a coin toss. What is the probability that...
Algebra 2
Can someone please check my answers? Find the probability of drawing the given cards from a standard dick of 52 cards (a) with replacement and (b) without replacement. 1.A red card, then a black card
a. 1/4 b. 13/51 2.A ten, then the ace of hearts a. 1/676 b. 1/663 3.A face ca...
Algebra 2
Do I add them and that's it or do I do something else?
Algebra 2
Theres 12 face cards (4 queens, 4 kings, and 4 jacks) and 4 fours. What would you do with 12/52 and 4/52?
Algebra 2
How would you find the probability of drawing a face card and a 4 out of a standard deck of 52 cards? I got 4/13 but I don't think its right.
Algebra 2
Can someone please explain how to do this problem? The probability that it will rain today is 65%, and the probability that it will rain tomorrow is 25%. The probability that it will rain both days
is 35%. What is the probability that it will rain today or tomorrow?
Algebra 2
Can someone please check my answers? A card is randomly drawn from a standard deck of 52 cards. Find the probability of drawing the indicated card. 1.Not an ace. =12/13 2.A face card and a four. =4/
13 3.A heart or a diamond. =1/2 4.A six or a seven. = 2/13 5.An ace and a spade...
Algebra 2
Can someone please check my answers? A card is randomly drawn from a standard deck of 52 cards. Find the probability of drawing the indicated card. 1.Not an ace. =12/13 2.A face card and a four. =4/
13 3.A heart or a diamond. =1/2 4.A six or a seven. = 2/13 5.An ace and a spade...
Would this be a good thesis to a paper about someone's life? Even though Karlee has a big interest in farming, she wants to take her life in a slightly different direction.
How do you write a thesis statement that doesn't sound like a roadmap. My teacher wants us to have a hook, which I have, then a thesis, and then the roadmap. I don't know how to write a thesis that
doesn't sound like the roadmap.
am history
During the bitter struggle over reconstruction policy, Congress overrode Johnson's veto of the
am history
Which of the following nineteenth -century inventors is associated with Menlo Park in New Jersey? A. Gustavus F. Swift B. Alexander Graham Bell C. Cyrus W. Field D. Thomas Alva Edison
am history
General Grant's army came closest to defeat in the West as a result of a surprise Confederate attack at the Battle of
Algebra 2
How would I reduce the equation 7n=n-5n(n+4)
Algebra 2
Thank you
Algebra 2
What do I do after I have the equation at 8+64n = 8n
Algebra 2
Thank you
Algebra 2
How do I get 0 and 7 out of 2x^2 = x^2 +7x
What do you expect from an art education? How will it help you reach your life and career goals ? What are three of your strengths ? explain how two of your strengths have contributed to your success
in art school, in art , or in your life.
Math please help quick
Which of the following are identities? Check all that apply. (Points : 2) sin2x = 1 - cos2x sin2x - cos2x = 1 tan2x = 1 + sec2x cot2x = csc2x - 1 Question 4. 4. Which of the following equations are
identities? Check all that apply. (Points : 2) Question 5. 5. The expression si...
3. Which of the following methods of reporting research is not followed by most scientists? (Points : 1) publishing a research paper online publishing a research paper in print holding a press
conference presenting a talk at a meeting Question 4. 4. Who decides whether a paper...
3. Which of the following methods of reporting research is not followed by most scientists? (Points : 1) publishing a research paper online publishing a research paper in print holding a press
conference presenting a talk at a meeting Question 4. 4. Who decides whether a paper...
Algebra 2
Inverse for both I think. Right?
Algebra 2
I am trying to figure out if these problems are Direct Variation, Inverse Variation, or Neither. Could someone please check my answers? m=-5p Direct c=e/-4 Neither c=3v Direct r= 9/t Inverse n=(1/2)f
Direct u=I/18 Neither d=4t Direct z= -0.2/t Inverse
How do you express relative amounts pf each element in a compound?
If 1.50 g of H2C2O4 2H2O were heated to drive off the water of hydration, how much anhydrous H2C2O4 would remain? Thank you! Please explain clearly.
Algebra 2
How do you solve this equation? -3e^2t=-480
Algebra 2
I think I figured it out. 85/2=42.5 log5=log42.5 log42.5/log5 =2.330 to the third decimal
Algebra 2
How do you solve this exponential equation using logarithms? 2*5^x=85 Do you start like this? xlog2*5=log85 I don't know what to do after that.
One light year is about 9 trillion kilometers. Arcturus is a star that is 37 light years from Earth. If you are about 11 years old now, how old will you be when light that is leaving Arcturus today
reached Earth? How far, in kilometers, will it have traveled? please explain ho...
A 50 N girl climbs the ladder with a 60 N painting in 5 seconds. She has done 120 J of work. How much power does she need?
Math- Trigonometry
The flowering of many commercially grown plants in greenhouses depends on the duration of natural darkness and daylight. Short-day plants, such as chrysanthemums, need 12 or more hours of darkness
before they will start to bloom. Long day plants, such as carnations...
5th grade science
Which of the following best describes an electric charge? -Positive or negative terminal of a battery -property of electrons, protons, and neutrons -materials that conducts current -has a north pole
and a south pole
5th grade science
Which of the following is an example of a nonmnetal conductor? copper wire, salt water, plastic covering of a wire, or cotton clothing?
Which of the following best describes an electric charge? -Positive or negative terminal of a battery -property of electrons, protons, and neutrons -materials that conducts current -has a north pole
and a south pole
Which of the following is an example of a nonmnetal conductor? copper wire, salt water, plastic covering of a wire, or cotton clothing?
Biology 1
Does anyone know of a Photosynthesis concept map that includes all of the following: Photons, thylakoids, reactions, sun, NADPH, carbohydrates, xanthophyll, wavelength, pigments, stroma, product,
photosynthesis, NADP+, visible light, glucose, energy, glucose, energy, carotenoi...
Algebra 2
Thank you
Algebra 2
I am trying to find the inverse of these functions. Can someone please check my answers and tell me how to do number 2? 1. y=7^x y=log base 7 x 2. y=log base 1/2 x 3. y=2^(x)-3 y=2^(x+3) 4.y=6+log x
twice a number minus a second number is -1. twice the second number added to three times the first number is 9. find the two numbers.
How do you site a reference book? Also, does anyone know a website that would tell me how to site a primary source, a website, and a magazine?
Neutrons have a rest mass of 1.6749*10-27 kg (equivalent to 939.6 MeV). If a certain neutron has a total energy of 949 MeV, what are its relativistic mass energy in MeV, its speed in m/s, its kinetic
energy in MeV, and its momentum in kg.m.s-1?
Neutrons have a rest mass of 1.6749*10-27 kg (equivalent to 939.6 MeV). If a certain neutron has a total energy of 949 MeV, what are its relativistic mass energy in MeV, its speed in m/s, its kinetic
energy in MeV, and its momentum in kg.m.s-1?
Posting this question again Design an AC circuit with R, L, and C components so that it can achieve output voltage in a given load to be doubled in amplitude (magnitude) while achieving increase in
the phase shift of the output voltage of 45 more than the phase of the input vo...
It doesn't say
Design an AC circuit with R, L, and C components so that it can achieve output voltage in a given load to be doubled in amplitude (magnitude) while achieving increase in the phase shift of the output
voltage of 45 more than the phase of the input voltage. Use 60Hz as the frequ...
A chemist has two solutions of sulfuric acid. The first is one half sulfuric acid and half water. The second is three-fourths sulfuric acid and one fourths water. He wishes to make 10 liters of a
solution which is two-thirds sulfuric acid and one-thirds water. How many liters ...
Can someone please check my answers? *Cross out any prepositional phrase. Underline the subject once and the verb/verb phrase twice. Write Adv. above adverbs. Put infinitives in parenthesis. Not
doesn't need to be listed as an adverb. 1.Without hesitation, the rabbit hoppe...
Yes not isn't a verb yet in my English class because I'm in English 1. Thank you for checking my answers.
Can someone please check my answers? *Cross out any prepositional phrase. Underline the subject once and the verb/verb phrase twice. Write Adv. above adverbs. Put infinitives in parenthesis. Not is
not considered an adverb in my English class. 1.From June until the end of Augu...
Thank you
Can someone please check my answers? *Cross out any prepositional phrase. Underline the subject once and the verb/verb phrase twice. Write Adv. above adverbs. Put infinitives in parenthesis. 1. In
the early morning, the birds chirped in the back of the house near the pond. (su...
2. at noon. Thank you for checking them. :)
Can someone please check my answers? *Cross out any prepositional phrase. Underline the subject once and the verb/verb phrase twice. Write Adv. above adverbs. Put infinitives in parenthesis. 1.In the
bushes along the road stood an elephant with purple spots in front of his eye...
Rearrange equation to isolate for P to use in a calculator please v=e[(1/cos(pi/2sqrt(P/Pcr)))-1] OR e[(sec(pi/2sqrt(P/Pcr)-1]
English Compound Subjects
Can someone please check my answers? *Cross out any prepositional phrases. Underline the subject once and the verb/verb phrase twice. 1. Doug and his new bride vacationed in Florida. (subject- Doug,
bride verb- vacationed prepositional phrase- in Florida) 2. During the fair, a...
English Compound Objects
Thank you! :)
English Compound Objects
6. wo be completed is the verb.
English Compound Objects
Can someone please check my answers? *Cross out any prepositional phrases. Underline the subject once and the verb/verb phrase twice. 1. Mrs. Little stepped into the rain without a hat or an
umbrella. (subject- Mrs. Little verb stepped prepositional phrase- into the rain, with...
English Imperative Sentences
Thank you! :)
English Imperative Sentences
Can someone please check my answers? *Cross out any prepositional phrases. Underline the subject once and the verb/verb phrase twice. 1. Give this tip to the waiter in the checkered shirt. (subject-
you verb- give prepositional phrase- to the waiter, in the checkered shirt) 2....
English Verb Phrases
Thank you! :)
English Verb Phrases
Can someone please check my answers please? *Cross out any prepositional phrases. Underline the subject once and the verb/verb phrase twice. 1. The garbage truck has stopped near the corner of
Washington Street. (subject truck verb- has stopped prepositional phrase- near the c...
English Compound Verbs
Thank you!
English Compound Verbs
Could someone please check my answers? *Cross out any prepositional phrases. Underline the subject once and the verb/verb phrase twice. 1. The instructor stood among the students and chatted with
them. (subject- instructor verb- stood, chatted prepositional phrase- among the s...
English Adverbs
Thank you again! :)
English Adverbs
10. The other verb is waved. 16. The other verb is hurried. 7. The subject is one and of the players is a prepositional phrase. 9. With us is another prepositional phrase.
English Adverbs
I'm not really sure about my adverbs in these sentences. Can someone please check them? *Cross out any prepositional phrases. Underline the subject once and the verb/verb phrase twice. Label any
adverb- Adv. 1. That small child often falls down on his roller skates. (subje...
Thank you for checking them. :)
Can someone please check my answers? I'll type the directions above the questions. *Cross out any prepositional phrases. Underline the subject once and the verb/verb phrase twice. Place any
infinitive in parenthesis. 1. Marnie's mother and father want to go to New York...
what is the terminal velocity of 5 µm particle with a density of 8.90 grams/cm^3
Algebra Help
It's not his problem it's just that I stated the question wrong instead of 24 and 6 minutes, theyre supposed to be 24 and 6 miles
Algebra Help
A motorist was to travel from town A to town B, a distance of 80 miles. He traveled the first 24 miles at a certain rate; traffic then increased and for the next 6 miles he averaged 10 miles per hour
less than his original speed; then traffic eased up and he traveled the remai...
A motorist was to travel from town A to town B, a distance of 80 miles. He traveled the first 24 minutes at a certain rate; traffic then increased and for the next 6 minutes he averaged 10 miles per
hour less than his original speed; then traffic eased up and he traveled the r...
Pages: 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | Next>>
|
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=Jane","timestamp":"2014-04-19T10:38:56Z","content_type":null,"content_length":"30222","record_id":"<urn:uuid:e3809d06-dcb0-497d-a02f-312cdc5d19ec>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00627-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Multiplicity of Zeros and Graphs Polynomials
With this factored form, you can change the values of the leading coefficient a and the 5 zeros z1, z2, z3, z4 and z5. You can explore the local behavior of the graphs of these polynomials near zeros
with multiplicity greater than 1.
Once you finish this interactive tutorial, you may want to consider a self test on graphs of polynomial functions.
How do the leading coefficient a and the zeros z1,z2,z3,z4 and z5 affect the graph of f(x)?
1- Use the scrollbar to set z1,z2,z3,z4 and z5 to zero, then change the value of a. How does a affect the graph of f(x)? Change a from a positive value a1 to a negative value -a1 and note the effects
it has on the graph.
2- Set a to a certain value (not zero) and set z1,z2,z3,z4 and z5 to the same value z. How does this choice affect the graph of f(x)? Write down the equation of f(x).
3- Set z1 and z2 to the same value (say z11) and z3,z4 and z5 to another and same value (say z22). Write down the equation of f(x). What is the shape of the graph locally (around the zeros) at z11
and z22 ?. How does the multiplicity of the zeros affect the graph locally (around the zeros).
More references and links to polynomial functions. Derivatives of Polynomial Functions.
Polynomial Functions
Polynomial Functions, Zeros, Factors and Intercepts
Find Zeros of Polynomial Functions - Problems Graphs of Polynomial Functions - Questions.
Factor Polynomials.
|
{"url":"http://www.analyzemath.com/polynomials/polynomials.htm","timestamp":"2014-04-16T13:15:33Z","content_type":null,"content_length":"9636","record_id":"<urn:uuid:7c90fc2c-8774-43b7-8498-6c8a63321ada>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00378-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Logarithmic functions help.
March 17th 2008, 02:27 PM #1
Logarithmic functions help.
Im taking Business calc so it's not nearly as hard as normal calc, and I have a few questions on a take home quiz I have. If any one could help me it would be greatly appreciated.
Find the derivative, do not need to simplify
and f(x)=(ln(12x^2+5))^25
lastly y=e^x ln(5x)
Thanks a lot, im pretty much done with the quiz im just not sure on these 3 probs
Some of the questions are a bit ambiguous but seems like they involve the application of the chain rule which says:
$\left[f(g(x))\right]' = f'\left(g(x)\right) \cdot g'(x)$
For: $y = ln\left(\frac{2x-3}{x^{2}-4}\right)$
Let: $f(x) = ln(x) \quad \quad g(x) = \frac{2x - 3}{x^{2}-4}$
Applying the formula:
$\left[ ln\left(\frac{2x-3}{x^{2}-4}\right) \right]' = \underbrace{\frac{1}{g(x)}}_{(\ln x)' = \frac{1}{x}} \cdot \: \:g'(x)$
$= \frac{1}{\frac{2x - 3}{x^{2}-4}} \cdot \left(\frac{2x-3}{x^{2}-4}\right)'$
$= \frac{x^{2} - 4}{2x - 3} \cdot \underbrace{\left(\frac{2x-3}{x^{2}-4}\right)'}_{\mbox{Quotient Rule}}$
See if you can apply the chain rule to the other questions.
March 17th 2008, 03:55 PM #2
|
{"url":"http://mathhelpforum.com/calculus/31243-logarithmic-functions-help.html","timestamp":"2014-04-18T03:18:58Z","content_type":null,"content_length":"33447","record_id":"<urn:uuid:d93d7cb0-8169-4f15-ae05-601b47e9bdff>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00526-ip-10-147-4-33.ec2.internal.warc.gz"}
|
University of
5-7 May 2014
Jacob Lurie
Professor Lurie will give three lectures over the course of his week in Eugene (Monday, Tuesday, Wednesday) on the general theme of
Theory of "Spectral" Algebraic Geometry.
• Lecture 1: Cohomology Theories and Commutative Rings
4pm, Monday, 5 May 2014, 100 Willamette
• Lecture 2: Ambidexterity
4pm, Tuesday, 6 May 2014, 100 Willamette
• Lecture 3: Roots of Unity in Stable Homotopy Theory
4pm, Wednesday, 7 May 2014, 100 Willamette
All three lectures will be preceded by Tea in Fenton 319 at 3:15pm. Here is the poster including a detailed abstract of the talks.
21-23 May 2013
Raphael Rouquier
Professor Rouquier will give three lectures over the course of his week in Eugene (Tuesday, Wednesday, Thursday) on the general theme of
Higher Representation Theory.
• Lecture 1: Quiver Hecke algebras
4pm, Tuesday, 21 May 2013, 240C McKenzie
• Lecture 2: Representations and geometry
4pm, Wednesday, 22 May 2013, 240C McKenzie
• Lecture 3: Topology in dimensions 3 and 4
4pm, Thursday, 23 May 2013, 240A McKenzie
All three lectures will be preceded by Tea in Fenton 319 at 3:15pm. Here is the poster including detailed abstracts of each talk.
21-25 May 2012
Andrei Okounkov
Columbia University
Professor Okounkov will give three lectures over the course of his week in Eugene.
Quantum Groups and Quantum Cohomology.
Quantum cohomology is a deformation of the classical cohomology algebra of an algebraic variety X that takes into account enumerative geometry of rational curves in X. A great deal is know about
its structure for special X. For example, Givental and Kim described the quantum cohomology of flag manifolds in terms of certain quantum integrable systems, namely Toda lattices. A general
vision for a connection between quantum cohomology and quantum integrable systems recently emerged in supersymmetric gauge theories, in particular in the work of Nekrasov and Shatashvili.
Mathematically, the relevant class of varieties X to consider appears to be the so-called equivariant symplectic resolutions. These include, for example, cotangent bundles to compact homogeneous
varieties, as well as Hilbert schemes of points and more general instanton moduli spaces. In my lectures, which will be based on joint work with Davesh Maulik, I will construct certain solutions
of the Yang-Baxter equation associated to symplectic resolutions as above. The associated quantum integrable system will be identified with the quantum cohomology of X. If time permits, we will
also explore K-theoretic generalization of this theory.
20-22 April 2010
Anatoly Libgober
University of Illinois at Chicago
Professor Libgober will present three lectures on the following topics:
• Lecture 1: Topology of quasi-projective varieties. Abstract.
4pm, Tuesday, 20 April 2010, 100 Willamette
• Lecture 2: Lefschetz methods in topology of algebraic varieties and theory of Alexander invariants. Abstract.
4pm, Wednesday, 21 April 2010, 125 McKenzie
• Lecture 3: Hodge theoretical methods for the study of Alexander invariants. Abstract.
4pm, Thursday, 22 April 2010, 282 Lillis
10-12 November 2009
Terence Tao
University of California, Los Angeles
Professor Tao will present three lectures on the following topics:
• Lecture 1: Recent Progress in Additive Prime Number Theory. Abstract.
4pm, Tuesday, 10 November 2009, 129 McKenzie
• Lecture 2: Compressed Sensing. Abstract.
4pm, Wednesday, 11 November 2009, 221 McKenzie
• Lecture 3: Discrete Random Matrices. Abstract.
4pm, Thursday, 12 November 2009, 221 McKenzie
Recordings of the lectures (audio and video) are available here. The audio is good (Tao was wearing a microphone). One can't see Tao very well, but one can see the slides for the presentation.
7-9 May 2008
William Fulton
University of Michigan
Professor Fulton will present three lectures on "Equivariant cohomology of homogeneous varieties":
• Lecture 1: 4pm, Wednesday, 7 May 2008, 115 Lawrence.
• Lecture 2: 4pm, Thursday, 8 May 2008, 115 Lawrence.
• Lecture 3: 4pm, Friday, 9 May 2008, 115 Lawrence.
The abstract is on the poster.
23-25 May 2007
Gerhard Huisken
Max Planck Institute of Gravitational Physics
Professor Huisken will present three lectures on the following topics:
• Lecture 1: The heat equation and uniformisation in geometry.
4pm, Wednesday, 23 May 2007, 221 McKenzie
• Lecture 2: Isoperimetric inequalities and the concept of mass in General Relativity.
4pm, Thursday, 24 May 2007, 204 Villard
• Lecture 3: Isoperimetric inequalities via geometric evolution equations.
4pm, Friday, 25 May 2007, 205 Deady
Click here for the abstracts (pdf).
15-17 March 2006
Victor Ginzburg
University of Chicago
Professor Ginzburg will present three lectures on "Noncommutative geometry and quiver algebras":
• Lecture 1: Symplectic resolutions, their deformations and quantizations.
4pm, Wednesday, 15 March 2006, 106 Deady
• Lecture 2: Noncommutative symplectic geometry, quivers, and matrix integrals.
4pm, Thursday, 16 March 2006, 106 Deady
• Lecture 3: Calabi-Yau algebras.
4pm, Friday, 17 March 2006, 110 Willamette
The abstract is on the poster.
25-27 April 2005
Richard Schoen
Stanford University
Professor Schoen will present three lectures:
• Lecture 1: The Yamabe problem revisited
4:00 p.m., Monday, 25 April 2005, 106 Deady Hall
• Lecture 2: Global compactness theorems for constant scalar curvature metrics
4:00 p.m., Tuesday, 26 April 2005, 106 Deady Hall
• Lecture 3: Sharp isoperimetric inequalities for minimal surfaces in Euclidean space
4:00 p.m., Wednesday, 27 April 2005, 110 Willamette Hall
The abstracts are on the poster.
12-16 April 2004
Maxim Kontsevich
IHES, Bures-sur-Yvette, France
Professor Kontsevich will present three lectures on Integral Affine Structures:
• Lecture 1: Definitions and basic examples
4:00 p.m., Monday, 12 April 2004, 100 Willamette Hall
• Lecture 2: Non-Archimedean and tropical viewpoints
4:00 p.m., Wednesday, 14 April 2004, 100 Willamette Hall
• Lecture 3: Collapsing in mirror symmetry
4:00 p.m., Friday, 16 April 2004, 110 Fenton Hall
14-18 January 2002
Victor Guillemin
Massachusetts Institute of Technology
Professor Guillemin will present the following three lectures:
• Lecture 1: Betti numbers of polytopes and graphs
4:00 p.m., Monday, 14 January 2002, 110 Fenton Hall
• Lecture 2: The GKM theorem
4:00 p.m., Wednesday, 16 January 2002, 110 Fenton Hall
• Lecture 3: Multiplicative Morse theory for symplectic G-manifolds
4:00 p.m., Friday, 18 January 2002, 110 Fenton Hall
25-27 October 2000
Dennis Sullivan
CUNY at Stony Brook
Professor Sullivan will present the following three lectures on Fluids, quantum theory and algebraic topology:
• Lecture 1: Discrete modules
4:00 p.m., Wednesday, 25 October 2000, 123 Pacific Hall
• Lecture 2: Algebraic quantization
4:00 p.m., Thursday, 26 October 2000, 110 Fenton Hall
• Lecture 3: String topology
4:00 p.m., Friday, 27 October 2000, 110 Fenton Hall
11-15 October 1999
Alexander Varchenko
University of North Carolina in Chapel Hill
Professor Varchenko will present the following three lectures on multidimensional hypergeometric functions and representation theory:
• Lecture 1: The KZ differential equations and hypergeometric functions
4:00 p.m., Monday, 11 October 1999, 110 Fenton Hall
• Lecture 2: Statistical mechanics, R-matrices and qKZ difference equations
4:00 p.m., Wednesday, 13 October 1999, 110 Fenton Hall
• Lecture 3: The qKZ equations, q-hypergeometric functions, and quantization of geometry
4:00 p.m., Friday, 15 October 1999, 110 Fenton Hall
11-15 October 1998
Jean-Pierre Serre
College de France, Paris
Professor Serre will present two lecture series on the following topics:
• Lecture series 1: Finite subgroups of Lie groups
• Lecture series 2: The notion of complete reducibility in group theory
Lecture notes are available for both series here.
March 1997
Efim Zelmanov
UC San Diego
March 1998
Philip Griffiths
October 1995
Clifford Taubes
Harvard University
January 1993
Yu I Manin
Max Planck Institut für Mathematik
January 1989
Michael Atiyah
Cambridge University, UK
November 1986
Victor Kac
|
{"url":"http://math.uoregon.edu/seminars/moursund/","timestamp":"2014-04-19T19:37:17Z","content_type":null,"content_length":"15966","record_id":"<urn:uuid:2a4c7cc0-6a6a-4447-b893-e2a5bc8d1f4e>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00070-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Posts about Non-linearity on Possible Insight
Archive for the ‘Non-linearity’ Category
Via a post at the always terrific Watts Up with That, a pre publication version of this paper examines the non-linear coupling dynamics of the climate. Its hypothesis is based on the mathematics of
synchronized chaos (sorry, no good introductory link available).
Via Tyler Cowen at Marginal Revolution, an excellent article in Wired about how one formula, embodying one assumption, catalyzed the meltdown. I recommend you read it and ponder it. There are many
useful lessons for modeling complex systems in general.
However, I will summarize for those of you short on time. A fundamental problem in securitization is figuring out how different components of a security are related. Think of it as measuring how
well the components are diversified. The more independent the components, the less risk embodied in the security. Thus AAA rated tranches of mortgage-backed securities are supposed to be very safe
because the components are supposed to be highly independent.
A Chinese mathematician named David X. Li had an insight. You don’t have to analyze the dependencies directly, you just have to observe the correlations in the market prices of the components. Then
you can compute these really tight sounding confidence intervals on the correlations of various components because you have all this market data. Of course, the market can’t take into account what
it doesn’t understand. So you see a bunch of 25-sigma events. At least, your model says they are 25-sigma. Oops!
I attended the Singularity Summit today. Overall, it was worth the time spent. I did not attend the workshop on Friday because it didn’t look substantive when I reviewed the program. Today, I spoke
to several people who were there and they confirmed my prediction. I took 7 pages of notes at the summit and hope to have some insightful synthesis of the material in a few days [Edit: first thought
here, more here]. In the meantime, here is a short review of the talks.
I apologize for the posting lull. I actually spotted an issue than I wanted to address a few weeks ago, but I’ve been pondering how to approach it. It’s pretty complicated and subtle. I even ran a
couple of drafts by Rafe to refine my thinking. So please bear with me.
As I’ve mentioned before, I am a fan of Dave Zetland. When I saw him propagate what I think is a fundamentally false dichotomy in this post, I knew I had to take on the concept of Knightian
uncertainty. It crops up rather often in discussions of forecasting complex systems and I think a lot of people use it as a cop out.
|
{"url":"http://possibleinsight.com/category/non-linearity/","timestamp":"2014-04-17T09:35:19Z","content_type":null,"content_length":"30500","record_id":"<urn:uuid:b8680182-5023-42df-bcc1-4644e221898c>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00369-ip-10-147-4-33.ec2.internal.warc.gz"}
|
ANNOUNCEMENT FOR 1996
Advanced Course on Theory of Elasticity
Date: 29 October-8 November 1996
Location: TICMI (Tbilisi, Telavi)
David Gordeziani (University of Tbilisi, Georgia)
Summary: Reduction of three-dimensional problems of the theory of elasticity to the two-and one-dimensional mathematical models by Vekua's reduction method. Analysis of the models, their accuracy,
and comparison with other models (variational-difference and finite elements methods, method of complex variables, justification of methods) (7 hours).
George C. Hsiao (University of Delaware, USA) Wolfgang L. Wendland (University Stuttgart, Germany)
Summary: The variational formulation of boundary integral equations and its connection to variational solutions of partial differential equations. Radiation conditions for exterior problems and their
incorporation into boundary integral methods. Coerciveness properties, transmission problems, coercive boundary integral equations (4 hours).
Summary: Acoustic scattering, the Stokes expansion for exterior flows, scattering problems with elastic vibrations (4 hours).
Summary: Trefftz elements, hybrid macroelements, mortal elements, relaxed continuity requirements, coerciveness inequalities and error estimates, iterative solution techniques and parallelization (4
David Natroshvili (Georgian Technical university, Tbilisi, Georgia)
Summary: Investigation of steady state oscillation problems for anisotropic media: fundamental matrices and properties of potentials, properties of boundary integral operators; generalised
Sommerfeld-Kupradze type radiation conditions in anisotropic elasticity; uniqueness and existence theorems of solutions to the basic, interface, mixed and cracked pipe boundary value problems (7
Coordinator: George Jaiani
Deadline for registration: September 20, 1996.
Further information:
Tbilisi International Centre of Mathematics and Informatics, I. Vekua Institute of Applied Mathematics of Tbilisi State University, University Str.2, Tbilisi - 43, Republic of Georgia
e-mail: jaiani@viam.sci.tsu.ge (George Jaiani)
Tel.: (+995 32) 305995
This is the first of a series of courses, at a level suitable for advanced graduate students or recent Ph.D.'s, which the TICMI plans to offer every year at fall.
|
{"url":"http://www.emis.de/journals/TICMI/ann96.htm","timestamp":"2014-04-20T18:49:56Z","content_type":null,"content_length":"4300","record_id":"<urn:uuid:fa31540b-5207-420d-9e66-dbc29730b843>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00627-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Journey Through Genius: The Great Theorems of Mathematics Summary
This Study Guide consists of approximately 32 pages of chapter summaries, quotes, character analysis, themes, and more - everything you need to sharpen your knowledge of Journey Through Genius.
This section contains 333 words
(approx. 2 pages at 300 words per page)
In the following century, the center of mathematics shifts from Italy to France and Britain, Dunham explains. Great thinkers such as Francois Viete, Renee Descartes, Blaise Pascal and Pierre de
Fermat make great strides in the advancement of mathematics. In Britain, John Napier and Henry Briggs make important discoveries. The largest figure of the period, however, is easily Sir Isaac
Newton. Dunham chooses a few of Newton's advances as representatives for the great theorem discussed in this chapter, which is Newton's calculation of π.
Dunham presents a brief biography of Newton from his troubled boyhood through his years at Cambridge as a student and later a professor and into his later years as Warden of the Mint. As a young
college student, Newton's genius goes almost unnoticed in an environment that has been overtaken by politics in favor...
(read more from the A Gem from Isaac Newton Summary)
|
{"url":"http://www.bookrags.com/studyguide-journey-through-genius/chapanal007.html","timestamp":"2014-04-23T07:21:19Z","content_type":null,"content_length":"44116","record_id":"<urn:uuid:102ad24c-f553-47ff-9bbe-80df907bc8f2>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00271-ip-10-147-4-33.ec2.internal.warc.gz"}
|
problem involving triangle and profit
July 19th 2011, 11:21 AM #1
Jul 2011
problem involving triangle and profit
The question reads:
Due to previous tunneling experience, the firm estimates a cost of $18,000 per meter for boring through this type of rock and constructing the tunnel according to required specifications. If
management insists on a 30% profit, what will be their minimum bid to the nearest thousand.
The sides are 1200m, 1600m, 330.3m
Attached is the picture of the triangle that I worked out.
If someone can explain how to continue on from here that would be fantastic.
I had to use the cosine law to get the 330.3m, so I thought it would be relevant to post here.
-- Thanks
Re: problem involving triangle and profit
How did you get 330.3m? I used the cosine law to get 1175.2m for the "missing" side.
$c^2 = a^2+b^2 - 2ab\cos(C)$
With your numbers:
$c = \sqrt{1600^2 + 1200^2 - 2 \cdot 1600 \cdot 1200 \cos(47^o)} \approx 1175.2\ m$
Re: problem involving triangle and profit
Good catch, I didn't enter the numbers in proper order and got the wrong answer. When I put them in all at the same time it came out to be 1175.2
So, continuing on with the triangle and the 30% profit, any tips?
Re: problem involving triangle and profit
You have the cost per metre and the amount of metres.
To get a 30% profit on that you multiply the actual cost by 1.3
Re: problem involving triangle and profit
Thanks for your help. Much appreciated.
July 19th 2011, 11:34 AM #2
July 19th 2011, 11:45 AM #3
Jul 2011
July 19th 2011, 11:53 AM #4
July 19th 2011, 11:55 AM #5
Jul 2011
|
{"url":"http://mathhelpforum.com/trigonometry/184831-problem-involving-triangle-profit.html","timestamp":"2014-04-19T01:58:15Z","content_type":null,"content_length":"42388","record_id":"<urn:uuid:8c0dc7c8-2051-463d-9f4b-dc20ebf3d8c9>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00018-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Matching Problem
\(\newcommand{\P}{\mathbb{P}}\) \(\newcommand{\E}{\mathbb{E}}\) \(\newcommand{\R}{\mathbb{R}}\) \(\newcommand{\N}{\mathbb{N}}\) \(\newcommand{\bs}{\boldsymbol}\) \(\newcommand{\var}{\text{var}}\) \(\
newcommand{\cov}{\text{cov}}\) \(\newcommand{\cor}{\text{cor}}\)
1. 5
5. The Matching Problem
Definitions and Notation
The Matching Experiment
The matching experiment is a random experiment that can the formulated in a number of colorful ways:
• Suppose that \(n\) married couples are at a party and that the men and women are randomly paired for a dance. A match occurs if a married couple happens to be paired together.
• An absent-minded secretary prepares \(n\) letters and envelopes to send to \(n\) different people, but then randomly stuffs the letters into the envelopes. A match occurs if a letter is inserted
in the proper envelope.
• \(n\) people with hats have had a bit too much to drink at a party. As they leave the party, each person randomly grabs a hat. A match occurs if a person gets his or her own hat.
These experiments are clearly equivalent from a mathematical point of view, and correspond to selecting a random permutation \(\bs{X} = (X_1, X_2, \ldots, X_n)\) of the population \(D_n = \{1, 2, \
ldots, n\}\). Here are the interpretations for the examples above:
• Number the couples from 1 to \(n\). Then \(X_i\) is the number of the woman paired with the \(i\)th man.
• Number the letters and corresponding envelopes from 1 to \(n\). Then \(X_i\) is the number of the envelope containing the \(i\)th letter.
• Number the people and their corresponding hats from 1 to \(n\). Then \(X_i\) is the number of the hat chosen by the \(i\)th person.
Our modeling assumption, of course, is that \(\bs{X}\) is uniformly distributed on the sample space of permutations of \(D_n\). The number of objects \(n\) is the basic parameter of the experiment.
We will also consider the case of sampling with replacement from the population \(D_n\), because the analysis is much easier but still provides insight. In this case, \(\bs{X}\) is a sequence of
independent random variables, each uniformly distributed over \(D_n\).
We will say that a match occurs at position \(j\) if \(X_j = j\). Thus, number of matches is the random variable \(N\) defined mathematically by
\[ N_n = \sum_{j=1}^n I_j\]
where \(I_j = \bs{1}(X_j = j)\) is the indicator variable for the event of match at position \(j\). Our problem is to compute the probability distribution of the number of matches. This is an old and
famous problem in probability that was first considered by Pierre-Remond Montmort; it sometimes referred to as Montmort's matching problem in his honor.
Sampling With Replacement
First let's solve the matching problem in the easy case, when the sampling is with replacement.
\((I_1, I_2, \ldots, I_n)\) is a sequence of \(n\) Bernoulli trials, with success probability \(\frac{1}{n}\).
The variables are independent since the sampling is with replacement. Since \(X_j\) is uniformly distributed, \(\P(I_j = 1) = \P(X_j = j) = \frac{1}{n}\).
The number of matches \(N_n\) has the binomial distribution with trial parameter \(n\) and success parameter \(\frac{1}{n}\).
\[ \P(N_n = k) = \binom{n}{k} \left(\frac{1}{n}\right)^k \left(1 - \frac{1}{n}\right)^{n-k}, \quad k \in \{0, 1, \ldots, n\} \]
This follows immediately from Exercise 1.
The mean and variance of the number of matches are
1. \(\E(N_n) = 1\)
2. \(\var(N_n) = \frac{n-1}{n}\)
These results follow from Exercise 2. Recall that the binomial distribution with parameters \(n\) and \(p\) has mean \(n \, p\) and variance \(n \, p (1 - p)\).
The distribution of the number of matches converges to the Poisson distribution with parameter 1 as \(n \to \infty\):
\[ \P(N_n = k) \to \frac{e^{-1}}{k!} \text{ as } n \to \infty \text{ for } k \in \N \]
This is a special case of the convergence of the binomial distribution to the Poisson. For a direct proof, note that
\[ \P(N_n = k) = \frac{1}{k!} \frac{n^{(k)}}{n^k} \left(1 - \frac{1}{n}\right)^{n-k} \]
But \(\frac{n^{(k)}}{n^k} \to 1\) as \(n \to \infty\) and \(\left(1 - \frac{1}{n}\right)^{n-k} \to e^{-1}\) as \(n \to \infty\) by a famous limit from calculus.
Sampling Without Replacement
Now let's consider the case of real interest, when the sampling is without replacement, so that \(\bs{X}\) is a random permutation of the elements of \(D_n = \{1, 2, \ldots, n\}\).
Counting Permutations with Matches
To find the probability density function of \(N_n\), we need to count the number of permutations of \(D_n\) with a specified number of matches. This will turn out to be easy once we have counted the
number of permutations with no matches; these are called derangements of \(D_n\). We will denote the number of permutations of \(D_n\) with exactly \(k\) matches by \(b_n(k) = \#\{N_n = k\}\) for \(k
\in \{0, 1, \ldots, n\}\). In particular, \(b_n(0)\) is the number of derrangements of \(D_n\).
The number of derrangements is
\[ b_n(0) = n! \sum_{j=0}^n \frac{(-1)^j}{j!} \]
By the complement rule for counting measure \(b_n(0) = n! - \#(\bigcup_{i=1}^n \{X_i = i\})\). From the inclusion-exclusion formula,
\[ b_n(0) = n! - \sum_{j=1}^n (-1)^{j-1} \sum_{J \subseteq D_n, \; \#(J) = j} \#\{X_i = i \text{ for all } i \in J\} \]
But if \(J \subseteq D_n\) with \(\#(J) = j\) then \(\#\{X_i = i \text{ for all } i \in J\} = (n - j)!\). Finally, the number of subsets \(J\) of \(D_n\) with \(\#(J) = j\) is \(\binom{n}{j}\).
Substituting into the displayed equation and simplifying gives the result.
The number of permutations with exactly \(k\) matches is
\[ b_n(k) = \frac{n!}{k!} \sum_{j=0}^{n-k} \frac{(-1)^j}{j!}, \quad k \in \{0, 1, \ldots, n\} \]
The following is two-step procedure that generates all permutations with exactly \(k\) matches: First select the \(k\) integers that will match. The number of ways of performing this step is \(\binom
{n}{k}\). Second, select a permutation of the remaining \(n - k\) integers with no matches. The number of ways of performing this step is \(b_{n-k}(0)\). By the multiplication principle of
combinatorics it follows that \(b_n(k) = \binom{n}{k} b_{n-k}(0)\). Using the result in Exercise 5 and simplifying gives the results.
The Probability Density Function
The probability density function of the number of matches is
\[ \P(N_n = k) = \frac{1}{k!} \sum_{j=0}^{n-k} \frac{(-1)^j}{j!}, \quad k \in \{0, 1, \ldots, n\} \]
This follows directly from Exercise 6, since \(\P(N_n = k) = \#\{N_n = k\} / n!\).
In the matching experiment, vary the parameter \(n\) and note the shape and location of the probability density function. For selected values of \(n\), run the simulation 1000 times and note the
apparent convergence of empirical density function to the true probability density function.
\(\P(N_n = n - 1) = 0\).
A simple probabilistic proof is to note that the event is impossible--if there are \(n - 1\) matches, then there must be \(n\) matches. An algebraic proof can also be constructed from the probability
density function in Exercise 7.
The distribution of the number of matches converges to the Poisson distribution with parameter 1 as \(n \to \infty\):
\[ \P(N_n = k) \to \frac{e^{-1}}{k!} \text{ as } n \to \infty, \quad k \in \N \]
From the power series for the exponential function,
\[ \sum_{j=0}^{n-k} \frac{(-1)^j}{j!} \to \sum_{j=0}^\infty \frac{(-1)^j}{j!} = e^{-1} \text{ as } n \to \infty \]
So the result follows from the probability density function in Exercise 7.
The convergence is remarkably rapid.
In the matching experiment, increase \(n\) and note how the probability density function stabilizes rapidly. For selected values of \(n\), run the simulation 1000 times and note the apparent
convergence of the relative frequency function to the probability density function.
The mean and variance of the number of matches could be computed directly from the distribution. However, it is much better to use the representation in terms of indicator variables. The exchangeable
property is an important tool in this section.
\(\E(I_j) = \frac{1}{n}\) for \(j \in \{1, 2, \ldots, n\}\).
\(X_j\) is uniformly distributed on \(D_n\) for each \(j\) so \(\P(I_j = 1) = \P(X_j = x) = \frac{1}{n}\).
Thus, the expected number of matches is 1, regardless of \(n\), just as when the sampling is with replacement.
\(\var(I_j) = \frac{n-1}{n^2}\) for \(j \in \{1, 2, \ldots, n\}\).
This follows from \(\P(I_j = 1) = \frac{1}{n}\).
A match in one position would seem to make it more likely that there would be a match in another position. Thus, we might guess that the indicator variables are positively correlated.
For distinct \(j, \; k \in \{1, 2, \ldots, n\}\),
1. \(\cov(I_j, I_k) = \frac{1}{n^2 (n - 1)}\)
2. \(\cor(I_j, I_k) = \frac{1}{(n - 1)^2}\)
Note that \(I_j I_k\) is the indicator variable of the event of a match in position \(j\) and a match in position \(k\). Hence by the exchangeable property \(\P(I_j I_k = 1) = \P(I_j = 1) \P(I_k = 1
\mid I_j = 1) = \frac{1}{n} \frac{1}{n-1}\). As before, \(\P(I_j = 1) = \P(I_k = 1) = \frac{1}{n}\). The results now follow from standard computational formulas for covariance and correlation.
From Exercise 15, when \(n = 2\), the event that there is a match in position 1 is perfectly correlated with the event that there is a match in position 2. This makes sense, since there will either
be 0 matches or 2 matches.
\(\var(N_n) = 1\) for every \(n \in \{2, 3, \ldots\}\).
This follows from the previous two exercises and basic properties of covariance. Recall that \(\var(N_n) = \sum_{j=1}^n \sum_{k=1}^n \cov(I_j, I_k)\).
In the matching experiment, vary the parameter \(n\) and note the shape and location of the mean/standard deviation bar. For selected values of the parameter, run the simulation 1000 times and note
the apparent convergence of sample mean and standard deviation to the distribution mean and standard deviation.
For distinct \(j, \; k \in \{1, 2, \ldots, n\}\), \(\cov(I_j, I_k) \to 0\) as \(n \to \infty\).
Thus, the event that a match occurs in position \(j\) is nearly independent of the event that a match occurs in position \(k\) if \(n\) is large. For large \(n\), the indicator variables behave
nearly like \(n\) Bernoulli trials with success probability \(\frac{1}{n}\), which of course, is what happens when the sampling is with replacement.
A Recursion Relation
In this subsection, we will give an alternate derivation of the distribution of the number of matches, in a sense by embedding the experiment with parameter \(n\) into the experiment with parameter \
(n + 1\).
The probability density function of the number of matches satisfies the following recursion relation and initial condition:
1. \(\P(N_n = k) = (k + 1) \P(N_{n+1} = k + 1), \quad k \in \{0, 1, \ldots, n\}\).
2. \(\P(N_1 = 1) = 1\).
First, consider the random permutation \((X_1, X_2, \ldots, X_n, X_{n+1})\) of \(D_{n+1}\). Note that \((X_1, X_2, \ldots, X_n)\) is a random permutation of \(D_n\) if and only if \(X_{n+1} = n + 1\)
if and only if \(I_{n+1} = 1\). It follows that
\[ \P(N_n = k) = \P(N_{n+1} = k + 1 \mid I_{n+1} = 1), \quad k \in \{0, 1, \ldots, n\} \]
From the defnition of conditional probability argument we have
\[ \P(N_n = k) = \P(N_{n+1} = k + 1) \frac{\P(I_{n+1} = 1 \mid N_{n+1} = k + 1)}{\P(I_{n+1} = 1)}, \quad k \in \{0, 1, \ldots, n\} \]
But \(\P(I_{n+1} = 1) = \frac{1}{n+1}\) and \(\P(I_{n+1} = 1 \mid N_{n+1} = k + 1) = \frac{k+1}{n+1}\). Substituting into the last displayed equation gives the recurrence relation. The initial
condition is obvious, since if \(n = 1\) we must have one match.
The results of the previous two exercises can be used to obtain the probability density function of \(N_n\) recursively for any \(n\).
The Probability Generating Function
Next recall that the probability generating function of \(N_n\) is given by
\[ G_n(t) = \E\left(t^{N_n}\right) = \sum_{j=0}^n \P(N_n = j) t^j, \quad t \in \R \]
The family of probability generating functions satisfies the following differential equations and ancillary conditions:
\[ G_{n+1}^\prime(t) & = G_n(t), \quad t \in \R, \; n \in \N_+ \\ G_n(1) & = 1, \quad n \in \N_+ \]
Note also that \(G_1(t) = t\) for \(t \in \R\). Thus, the system of differential equations can be used to compute \(G_n\) for any \(n \in \N_+\).
In particular, for \(t \in \R\),
1. \(G_2(t) = \frac{1}{2} + \frac{1}{2} t^2\)
2. \(G_3(t) = \frac{1}{3} + \frac{1}{2} t + \frac{1}{6} t^3\)
3. \(G_4(t) = \frac{3}{8} + \frac{1}{3} t + \frac{1}{4} t^2 + \frac{1}{24} t^4\)
For \(k, \; n \in \N_+\) with \(k \lt n\),
\[ G_n^{(k)}(t) = G_{n-k}(t), \quad t \in \R \]
This follows from Exercise 20.
For \(n \in \N_+\),
\[ \P(N_n = k) = \frac{1}{k!} \P(N_{n-k} = 0), \quad k \in \{0, 1, \ldots, n - 1\} \]
This follows from the previous exercise and basic properties of generating functions to conclude that
Examples and Applications
A secretary randomly stuffs 5 letters into 5 envelopes. Find each of the following:
1. The number of outcomes with exactly \(k\) matches, for each \(k \in \{0, 1, 2, 3, 4, 5\}\).
2. The probability density function of the number of matches.
3. The covariance and correlation of a match in one envelope and a match in another envelope.
1. \(k\) 0 1 2 3 4 5
\(b_5(k)\) 44 45 20 10 0 1
2. \(k\) 0 1 2 3 4 5
\(\P(N_5 = k)\) 0.3667 0.3750 0.1667 0.0833 0 0.0083
3. Covariance: \(\frac{1}{100}\), correlation \(\frac{1}{16}\)
Ten married couples are randomly paired for a dance. Find each of the following:
1. The probability density function of the number of matches.
2. The mean and variance of the number of matches.
3. The probability of at least 3 matches.
1. \(k\) \(\P(N_{10} = k)\)
0 0.3678794
1 0.3678791
2 0.1839409
3 0.0613095
4 0.0153356
5 0.0030555
6 0.0005208
7 0.0000661
8 0.0000124
10 0.0000003
2. \(\E(N_{10}) = 1\), \(\var(N_{10}) = 1\)
3. \(\P(N_{10} \ge 3) = 0.0803\)
In the matching experiment, set \(n = 10\). Run the experiment 1000 times and compare the following for the number of matches:
1. The true probabilities
2. The relative frequencies from the simulation
3. The limiting Poisson probabilities
1. See 5.25 (a)
3. \(k\) \(\P(N = k)\)
0 0.3678794
1 0.3678794
2 0.1839397
3 0.0613132
4 0.0153283
5 0.0030657
6 0.0005109
7 0.0000730
8 0.0000091
9 0.0000014
10 0.0000001
|
{"url":"http://www.math.uah.edu/stat/urn/Matching.html","timestamp":"2014-04-20T03:10:23Z","content_type":null,"content_length":"26532","record_id":"<urn:uuid:6b2c6311-8ff4-4889-98de-9beae17837c8>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00468-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Paul Dawkins
Math 2413 – Calculus & Analytic Geometry I
Class Notes
You can access copies of my class notes at http://tutorial.math.lamar.edu. The notes are viewable on the web and can be downloaded. The downloads are broken up into section, chapter and complete set
so you can get as much or as little as you need. I've tried to proof read these notes as much as possible, but there are bound to be typos in them. If you should happen to find a typo please let me
know so I can get if fixed up.
These notes are NOT a substitute for attending class. I've tried to make them as complete as possible, but often a good question will arise in class that I didn't think about while writing the notes.
So, if you skip class you will not get the results that discussion. Also, the examples given in the notes may or may not be the same as the examples given in class. This will depend upon the section
that we're covering on any given day.
Exam Schedule
Here is the tentative schedule for exams in this class. Remember that the exams in this class will cover the material listed and as such the dates are very tentative!! I will always announce the exam
at least one week before it is given. It is your responsibility to get to class to find out when each exam will be given.
When I have finalized an exam date I will change the color of the date to red. At this point you will know that the exam will fall on that date.
Click here to see my makeup policy for this class.
Exam 1
Wednesday September 21, 2011
Review, Limits, Definition of Derivative, Interpretation of Derivative
[Show Exam Results]
Num. Hi Low Avg. A's B's C's D's F's
37 100 (100%) 27 (27%) 74.84 (74.84%) 9 7 10 3 8
This exam was worth 100 points.
Exam 2
Friday October 14, 2011
[Show Exam Results]
Num. Hi Low Avg. A's B's C's D's F's
35 100 (100%) 29 (29%) 75.2 (75.2%) 7 6 11 5 6
This exam was worth 100 points.
Exam 3
Tuesday November 1, 2011
Applications of Derivatives
[Show Exam Results]
Num. Hi Low Avg. A's B's C's D's F's
32 100 (100%) 43 (43%) 76.44 (76.44%) 10 4 10 2 6
This exam was worth 100 points.
Exam 4
Tuesday December 6, 2011
Integrals and Applications of Integrals
[Show Exam Results]
Num. Hi Low Avg. A's B's C's D's F's
30 100 (100%) 45 (45%) 75.37 (75.37%) 6 5 10 5 4
This exam was worth 100 points.
Final Exam
Wednesday December 14, 2011 from 8:00 AM – 10:30 AM
All material covered in class.
[Show Exam Results]
Num. Hi Low Avg. A's B's C's D's F's
31 199 (99.5%) 127 (63.5%) 161.97 (80.98%) 8 7 10 6 0
This exam was worth 200 points.
Grades & Distributions
After each hour exam I compute the grades of all the students based on the standard scale :
A : 100%- 90%, B : 89% - 80%, C : 79% - 70%, D : 69% - 60%, F : 59% - 0%
The "exam grades" are included only for reference purposes so you will know how you are doing in the course to that point and DO NOT in any way influence the final course grades. To make it a little
easier to read the results I've also include the point range/scale needed for each grade based upon the standard percentage scale listed above.
I've got a very simply policy for computing final grades. At the end of the semester I add up your total points and divide that by the total number of possible points and then compare that percentage
to the standard scale given above and assign a grade accordingly. I do not use any kind of fancy weighting system and I rarely scale the grades. On those few occasions that I do scale the final grade
it is never more than a percentage point or two.
I've also complied all of the exam results here so that they are in one place.
[Show Exam Results] [Show Grade Distributions]
Exam Results
Points Num. Hi Low Avg. A's B's C's D's F's
Exam 1 100 37 100 (100%) 27 (27%) 74.84 (74.84%) 9 7 10 3 8
Exam 2 100 35 100 (100%) 29 (29%) 75.2 (75.2%) 7 6 11 5 6
Exam 3 100 32 100 (100%) 43 (43%) 76.44 (76.44%) 10 4 10 2 6
Exam 4 100 30 100 (100%) 45 (45%) 75.37 (75.37%) 6 5 10 5 4
Final Exam 200 31 199 (99.5%) 127 (63.5%) 161.97 (80.98%) 8 7 10 6 0
Note that the results given here only include those who actually took each exam.
Grade Distributions
Exam 1 Exam 2 Exam 3 Exam 4 Course Grade
Grade Distribution Grade Distribution Grade Distribution Grade Distribution Distribution
Scale (38) Students Scale (37) Students Scale (35) Students Scale (35) Students (35) Students
A 120 - 108 10 250 - 225 8 370 - 333 7 500 - 450 5 7
B 107 - 96 5 224 - 200 8 332 - 296 9 449 - 400 9 12
C 95 - 84 12 199 - 175 13 295 - 259 7 399 - 350 8 12
D 83 - 72 3 174 - 150 2 258 - 222 7 349 - 300 8 0
F 71 - 0 8 149 - 0 6 221 - 0 5 299 - 0 5 4
Note that the number of student here after each exam may differ from that listed in the exam results. The totals here include all students still enrolled in the class instead of just those that took
each exam.
Homework Assignments/ Due Dates
Here is a list of the homework assignments for this semester. Each homework set is typically full homework set and will contain all the problems that I want you to turn in on the indicated due date
(provided one has been given for that particular homework set).
Each homwork set generally consits of my own problems. I have, on very rare occasions, used problems out of a text book. When I do use a problem out of a text book I will put the problem in its
entirety in the homework set and the text book will NOT be needed for the problem. Note as well that for good or bad my own problems tend to be harder (in general) than the average text book problem.
In most classes I will also post complete solutions to each homework set. IF solutions have been made available for download for a given set of homework there are a couple of things to note. First,
the solutions may not have all the graphics that were in the solutions handed out in class due to the difficulty of getting some of the graphics into the document. Second, typically those that I
graded are given first, with point values, and those that weren't graded are given last. Last, due to time constraints I don't always include every step in the solutions. I will always put in what I
consider to be major steps and the answer will always be given. It will up to you to fill in any missing steps.
In order to view the homework sets and solutions you will need to have the Adobe Reader installed on your computer. You can download a copy here. Our lab in Lucas 209 already has the Adobe Reader
installed so you can always use one of the computers there to view the homework sets.
Disclaimer : While I make every effort to make sure that the homework sets available here and the due dates are accurate, I will on occasion have a typo in the homework set, change the due date and/
or inadvertently put the wrong due date here. If there is ever a discrepancy between the due date listed here and that given in class the due date given in class is the official due date. In the case
of typos I will always announce the typo the very next class period after finding the typo. I will also make every effort to get the typo corrected here, however it may take me longer to get the typo
corrected here.
It is your responsibility to know the correct due dates and/or be aware of any typo fixes that I may have announced in class.
Section/Problems Due Date Points
Homework Set 1 Friday August 12, 2011 10
Homework Set 1 - Solutions
Homework Set 2 Tuesday September 13, 2011 10
Homework Set 2 - Solutions
Homework Set 3 Monday September 19, 2011 10
Homework Set 3 - Solutions
Homework Set 4 Wednesday September 28, 2011 10
Homework Set 4 - Solutions
Homework Set 5 Tuesday October 4, 2011 10
Homework Set 5 - Solutions
Homework Set 6 Tuesday October 11, 2011 10
Homework Set 6 - Solutions
Homework Set 7 Friday October 21, 2011 10
Homework Set 7 - Solutions
Homework Set 8 Friday October 28, 2011 10
Homework Set 8 - Solutions
Homework Set 9 Wednesday November 9, 2011 10
Homework Set 9 - Solutions
Homework Set 10 Wednesday November 16, 2011 10
Homework Set 10 - Solutions
Homework Set 11 Wednesday November 30, 2011 10
Homework Set 11 - Solutions
Here are a variety of handouts that you may find useful in this class. Some are copies of handouts that I've given in class and others are here because you may find them useful for one reason or
another. They are in a variety of formats.
• html - This is fairly obvious. It will load the information into your browser.
• pdf - This is Adobe's pdf format and requires their Acrobat Reader to view it. Click here to go to Adobe's web site to download Acrobat Reader if you need to. Our lab in Lucas 209 has Acrobat
Reader installed as so you can always view these there.
Here are the handouts for this semester that are available for download.
• Information Sheet [pdf] This is a copy of the information sheet that I handed out one the first day of classes.
• Course Policies [pdf] This is a copy of the course policies that I handed out one the first day of classes.
• My Syllabus [pdf] Here is a copy of the list of topics in my notes that we'll be covering this semester as well as exam dates and material covered.
• Important Information [pdf] This is a copy of some important information that is handed out to all mathematics classes.
• Tips for passing my class. [pdf] This is a list of tips that may help you to pass my class. You may or may not agree with all of them and some may not help you, but I suspect that most of these
will help at least some to pass my class.
Here are some handouts that I generally handout to the class over the course of the semester.
Here are some downloads you may find useful.
• Algebra/Trig Review [html | pdf] - Looking for a refresher in Algebra and Trig? Here is a list of problems to test your knowledge of Algebra and Trig skills.
• Algebra/Trig Review Solutions [html | pdf] - Here are solutions to the problems from the Algebra/Trig Review above.
• Common Math Errors [html | pdf] - Here is a list of common math errors that you may find useful.
• Getting Help/How to Study Math [html | pdf] - Here are some tips for studying mathematics.
• Algebra Cheat Sheet [Full Sized - pdf | Reduced - pdf] - This is as many common algebra facts, properties, formulas, and functions that I could think of. There is also a page of common algebra
errors included. It comes in two version, a full sized version that is currently four pages long and a reduced version that has the same information except it has been reduced down to fit into
two pages.
• Trig Cheat Sheet [Full Sized - pdf | Reduced - pdf] - Here is a set of common trig facts, properties and formulas. A unit circle (completely filled out) is also included. It comes in two version,
a full sized version that is currently four pages long and a reduced version that has the same information except it has been reduced down to fit into two pages.
• Common Derivatives and Integrals [Full Sized - pdf | Reduced - pdf] - Here is a set of common derivatives and integrals that are used somewhat regularly in a Calculus I or Calculus II class. Also
included are reminders on several integration techniques. It comes in two version, a full sized version that is currently four pages long and a reduced version that has the same information
except it has been reduced down to fit into two pages.
|
{"url":"http://www.math.lamar.edu/faculty/dawkins/2413/2413.aspx?ID=74","timestamp":"2014-04-16T07:43:00Z","content_type":null,"content_length":"42594","record_id":"<urn:uuid:2bd3a50d-d606-439e-9d55-8171049af1c5>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00658-ip-10-147-4-33.ec2.internal.warc.gz"}
|
January-March, 2003
Volume 5, Issue 1
Issue is dedicated to 50th anniversary of Professor Anatoly Georgievich Kusraev
Anatoly Georgievich Kusraev is a representative of the scientific school of Academician Leonid Vital'evich Kantorovich, an outstanding Soviet mathematician and a Nobel prize winner in economics. His
main area of scientific research is functional analysis and its applications.
Article (rus.) - [pdf] [zip-pdf]
Article (rus.) - [pdf] [zip-pdf]
Approximation by Solutions of Quasielliptical Equations in Lp
M. S. Alborova (Vladikavkaz)
UDC 517.5
The Lp-approximating problem for quasielliptical operators was studied. Some functional geometrical tests are found for a set K to ensure the density of the space η(K) in η ^p(K) with respect to the
Article (rus.) - [pdf] [zip-pdf]
Geometry of Carnot Caratheodory Spaces, Quasiconformal Analysis, and Geometric Measure Theory
S. K. Vodop'yanov (Novosibirsk)
UDC 517.518.23+517.54+517.813.52+517.954
Some results on geometry of Carnot Caratheodory spaces are set forth. Application of these results is given to the proof of P-differentiability of Lipschitz mappings and weakly compact mappings of
Carnot Caratheodory spaces. Some applications are also revealed of the theory of differentiability to geometric measure theory and the theory of quasiconformal mappings of Carnot Caratheodory
Article (rus.) - [pdf] [zip-pdf]
Let U be a Banach Kantorovich space over a ring of measurable functions. Let U be a *-algebra and the norm of U have the properties like those of a C*-algebra. We give a representation of U as a
mesurable bundle of classical C*-algebras.
Article (rus.) - [pdf] [zip-pdf]
The problem is considered of optimal recovery of derivatives of functions in Sobolev classes on R^d from inaccurate information on their Fourier transforms. We show that there is a domain Ω[0]М R^d
such that information about the Fourier transform on each domain containing Ω[0] leads to no decrease of the optimal recovery error.
Article (rus.) - [pdf] [zip-pdf]
Optimal Recovery of Analytic Functions by Their Values at an Equidistant Grid on a Circle
K. Yu. Osipenko (Moskow)
UDC 517.51
In the paper an optimal method is constructed of recovery of functions analytic in the unit disk and having the first derivative bounded on using information about the values of these functions at an
equidistant grid on the circle |z| = ρ, 0 < ρ < 1.
Article (rus.) - [pdf] [zip-pdf]
We prove a theorem on decomposition of an atomic operator in some special basis of lattice homomrphisms and show that this decomposition is in a sense unique.
Article (rus.) - [pdf] [zip-pdf]
Open Poblems of Nonlinear Dominated Operators in Locally Bounded Spaces of Measurable Functions
V. G. Fetisov (Rostov-na-Dony)
UDC 517.98
We show that to use dominated operators is effective in studying a wide class of operator equations and systems in locally bounded spaces of Lebesque measurable scalar and vector-valued functions. We
also specify some directions of research in which the idea of domination may be rewarding.
Article (rus.) - [pdf] [zip-pdf]
|
{"url":"http://www.maths.soton.ac.uk/EMIS/journals/VMJ/eng/2003_1.html","timestamp":"2014-04-17T15:51:45Z","content_type":null,"content_length":"10527","record_id":"<urn:uuid:d6e57d7e-48fe-42fc-961d-4f5a890b0b93>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00484-ip-10-147-4-33.ec2.internal.warc.gz"}
|
IJ Plugins: k-means Clustering
k-means Clustering plugin performs pixel-based segmentation of multi-band images. Each pixel in the input image is assigned to one of the clusters. Values in the output image produced by the plugin
represent cluster number to which original pixel was assigned.
An input image stack can be interpreted in two ways. Slices can be interpreted as bands in a 2D image or as a 3D image. For instance, an RGB color images has three bands: red, green, and blue. Each
pixels is represented by an n-valued vector, where n is a number of bands, for instance, a 3-value vector [r,g,b] in case of a color image.
Each cluster is defined by its centroid in n-dimensional space. Pixels are grouped by their proximity to cluster's centroids. Cluster centroids are determined using a heuristics: initially centroids
are initialized using the k-means++ algorithm and then their location is interactively optimized. For more information on this and other clustering approaches see:
Anil K. Jain and Richard C. Dubes, Algorithms for Clustering Data, Prentice Hall, 1988. (PDF version available).
The main plugin k-means Clustering takes an input image and segments it based on clusters discovered in that image. Utility plugin k-means Clustering Reapply can use centers cluster computed for one
image and use them to segment another image of the same type (image size can be different).
k-means Clustering Plugin options
• Number of clusters - Number of segments image will be divided into.
• Cluster center tolerance - At each iteration cluster center location are updated. If cluster centers, between interactions, move less than the tolerance value it ts assumed the algorithm
converged to the final solution.
• Interpret stack as 3D - If checked the input stack is interpreted as a 3D image. If not checked, the input image is assumed to be 2D with number of values per pixel equal to the number of slices.
• Enable randomization seed - When randomization seed is used, cluster centers are initialized to the same values every time algorithm starts. When randomization seed is disabled cluster center
will be initialized differently each time. It is possible that different cluster initialization may lead to different final solutions.
• Randomization seed - The seed is the initial value of the internal state of the pseudorandom number generator.
• Show clusters as centroid value - produces additional output image where clusters are represented by its centroid value, see examples below.
• Enable clustering animation - produces additional output showing optimization process.
• Print optimization trace - prints out cluster centroids and change in centroid location at each iteration.
• Send to results table - Computed cluster centers are send to the results table. This can be used in combination with k-means Clustering Reapply plugin to cluster more images in the same way.
k-means Clustering Reapply Plugin options
• Table with clusters - Result table containing cluster centers created by k-means Clustering plugin.
• Image to apply clusters - Image that you want to segment. It should be of the same type as the image used to create clusters. Sizes can be different
• Interpret stack as 3D - In version 1.9 this option is not yet supported.
• Show clusters as centroid value - produces additional output image where clusters are represented by its centroid value, see examples below.
Clustering Segmentation Examples
4-class k-means segmentation of a color image:
Original. Clusters encoded by gray level. Clusters encoded by centroid color.
3-class k-means segmentation of a color image
Original. Clusters encoded by gray level. Clusters encoded by centroid color.
Possible Extensions of the Current Implementation
• Processing only within ROI
• Anisotropic distance measures, for instance, Mahalonobis
• Extension to fuzzy k-means
• Cluster validity indexes
• Automatic selection of the number of clusters
The k-means Clustering plugins installs in ImageJ under: Plugins/Segmentation/k-means Clustering.
Download and Installation
The k-means Clustering plugin is part of ij-Plugins Toolkit. Download and installation are described here
|
{"url":"http://ij-plugins.sourceforge.net/plugins/segmentation/k-means.html","timestamp":"2014-04-20T20:55:01Z","content_type":null,"content_length":"18240","record_id":"<urn:uuid:b3936737-0f63-4950-859d-caac14516740>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00422-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Wolfram Demonstrations Project
Generating 3D Figures with a Given Symmetry Group
A symmetry of a figure is a transformation, such as a rotation, reflection, inversion, etc., that repositions the figure to be indistinguishable from the original. For example, rotating a circle
about its center is a symmetry of the circle.
All the symmetries of a figure form a group called the figure's symmetry group. This Demonstration considers some figures in 3D consisting of finite sets of congruent triangles.
If the figure has only one rotational axis, there are the four possible kinds of symmetries, all cyclic: (there is an axis of rotation and reflection, but there is no mirror plane), (there is a
mirror plane, but it is not perpendicular to the axis), (there is a mirror plane that is perpendicular to the axis), and (there is a glide reflection).
If the figure has more than one rotational axis but no more than one -fold axis with , the possibilities are (dihedral symmetries): (no mirror plane), (the mirror plane is not perpendicular to the
principal axis), (the mirror plane is perpendicular to the principal axis).
The figure may have more than one 5-fold axis (icosahedral symmetry): (rotations only), (there is a mirror plane).
The figure may have more than one 4-fold axis (octahedral symmetry): (rotations only), (there is a mirror plane).
The figure may have more than one principal 3-fold axis (tetrahedral symmetry): (rotations only), (there is a mirror plane, no inversion), (there is a point of inversion).
This Demonstration is a guessing game to learn about the 14 types of symmetry groups of figures that have a rotational axis.
Not demonstrated are the three symmetry groups with no rotational symmetry: (asymmetric ), (only inversion), and (only one mirror plane).
|
{"url":"http://demonstrations.wolfram.com/Generating3DFiguresWithAGivenSymmetryGroup/","timestamp":"2014-04-16T16:15:51Z","content_type":null,"content_length":"48708","record_id":"<urn:uuid:b109f300-0ad8-499f-a5ad-548a877181ae>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00273-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Cfg To Cnf Examples
Cfg To Cnf Examples PDF
Sponsored High Speed Downloads
Converting CFGs to CNF (Chomsky Normal Form) Richard Cole October 17, 2007 A CNF grammar is a CFG with rules restricted as follows. The right hand side of a rule consists of:
required length, so the CFG is in CNF. Quesiton 7 Convert the following CFG to Chomsky Normal Form (CNF): S aX | Y | bab X /\ | Y Y bb | bXb Solution 7 Step 1 - Kill all /\ productions: By
inspection, the nullable nonterminal is X.
1 The Pumping Lemma for Context Free Grammars Chomsky Normal Form • Chomsky Normal Form (CNF) is a simple and useful form of a CFG • Every rule of a CNF grammar is in the form
Chomsky Normal Form (CNF) A CFG is in Chomsky Normal Form if all its productions are of the form: A → BC or A → a where A, B, C ∈ V and a ∈ T. Also, S → ε may be one of the productions. IF-UTAMA 7
... Examples of CNF A b A SA S a
Note The provisioning in these examples is based on Cisco BTS 10200 Release 4.1. China Dial Plan Using a Cisco 7960 SIP Phone ... SIPDefault.cnf file ##### # Image Version image_version:
"P0S3-WF-X-20" # Proxy Server ... tftp_cfg_dir: "./sip_phone/" # Time Server sntp_mode: "directedbroadcast"
Context-free Grammar (Example) A 0A1 A B B # Substitution Rules Variables A, B ... CFG (more examples) •Let G = ( {S}, {a,b}, R, S ), and the set of rules, R, is ... The only reasons for a CFG not in
CNF: 1.
automata by employing this representation method, the problem of context free grammar from examples can be reduced to the problem of partitioning the set of non terminals. We use the ... parsing
algorithm needs CFG to be in CNF, that is, ...
Algorithm 1: The psuedocode for the CYK algorithm Input: A string s = s[0];:::;s[jsj 1] and CFG G in CNF form Output: A Boolean value indicating if s 2L(G)
result on the learnability of lexical semantics given a context-free grammar. Our ultimate goal, ... 2.1 Inferring Grammars from Positive Structural Examples A context-free grammar (CFG) is a
four-tuple {N,Σ,P,S} ... refers to a CFG in CNF. Let L(G) denote the language of grammar G.
Context Free Grammars (CFG) ... (CNF) in which productions must have either exactly 2 non-terminal symbols on the RHS or 1 terminal symbol (lexicon rules). ... Examples taken from Jurafsky and Martin
Ambiguity Shallow Parsing
Those had few visual examples, ... • context-free grammar! , S ;! a , ! ; a b , ! ; b q0 q1!, q Z2; ... • CFG → PDA (LR parser) • CFG → CNF • CFG → LL parse table and parser • CFG → LR parse table
and parser • CFG → Brute force parser Deriv ed !
Examples A context-free grammar(CFG) is a four-tuple{N,Σ,P,S} where N is a finite set of non-terminals, ... a CFG in CNF. Let L(G) denote the language of grammar G. An un-labeled derivation tree (UDT)
for s ∈ L(G) is the deriva-
Examples A context-free grammar(CFG) is a four-tuple{N,Σ,P,S} where N is a finite set of non-terminals, ... a CFG in CNF. Let L(G) denote the language of grammar G. An un-labeled derivation tree (UDT)
for s ∈ L(G) is the deriva-
Definition: A context-free grammar (CFG) is a 4-tuple , ... Examples of boolean expressions: 8. CMPSCI 601: Boolean Logic: ... lent to one in Conjunctive Normal Form (CNF), and to one in Disjunctive
Normal Form (DNF).
ConvertingCFG(withoutSConverting CFG (without S>=>* )intoCNF) into CNF ... Refer to the exercises done in class as examples. Closure Properties for CFL
such as fuzzy context-free grammar, ... accepts a CFL given a nite number of positive and negative examples drawn from that ... A -free (fuzzy) context-free grammar Gis in CNF i P V [0;1] (T[(V V)).
After the initial grammar has been generated, the weights are assigned to the production
form of a context-free grammar rule set in a Chomsky Normal Form. ... examples [8], and that even the ability to ask equivalence ... because every CFG can be transformed into equivalent CNF. Chomsky
Normal Form allows only for
• CFG →CNF • CFG →LL parse table and parser • CFG →LR parse table and parser • CFG →Brute force parser. What is JFLAP? (cont) Recursively Enumerable languages • Turing machine (1-tape) ... A study
aid - create additional examples ...
$ is a CFG $ MemberCFL &$ '$ is a CFG $ Thm: ... Examples of boolean expressions: ! ! 4. CS601/CM730-A: Boolean Logic: Semantics Lecture 8 ... alent to one in Conjunctive Normal Form (CNF), and to
one in Disjunctive Normal Form (DNF).
4. Inductive proof Show that, if G is a CFG in CNF then any leftmost derivation with 2n−1 steps generates a string of length exactly n, for any n ≥ 1.
arbitrary CFG into CNF. ... • Examples of undecidable properties of L(M):
... What are CNF and GNF for context free grammar? Give examples. c) ... Define Context free grammar and write context free grammar for the languages i) L= ... Convert the following CFG to CNF S→ASB|
ε A→aAS|a B→SbS|A|bb.
Examples of strings belonging to L(G2) a boy sees the boy sees a flower ... A context-free grammar is a 4-tuple <V, , R, S∑ > where ... Chomsky Normal Form (CNF)
Context Free Grammar Definition • A CFG G = (V, T, P, S) where V ... Reading Assignment: Converting a CFG to CNF. 45 Exercises • Are the following CFG's in CNF? (i) S →aSb | ... Examples • L. 1 = {a.
n. b. n
Dynamic Programming Algorithm To Determine How Context-Free Grammar Can Generate A String Samuel Simon ... Some examples of CFG: • Example 1 S Æ a S Æ aS ... (CNF). Since any CFG can be converted to
CNF without too
definition of CFG, ambiguous grammar, design of CFG -conversion of CFG to Chomsky normal form (CNF) -definition of pushdown automaton (PDA), design of PDA - ... Language examples in different classes
of languages . Turing-recognizable •
Some examples include: ... The first program converts any CFG into CNF. ... Converting this context-free grammar to Chomsky Normal Form produced 3907 rules. This high number is partly due to a small
bug in the CNF program that occasionally produces
Table 108: EIP_OBJECT_CFG_QOS_CNF – Packet Status/Error.....162 Table 109: Service Codes according to the CIP specification ... Examples of configurable items include the device’s IP Address, Network
Mask, and Gateway Address.
Special case of Context-Free Grammar ... Chomsky normal form CNF A CFG is in CNF if each element of P is in one of the following forms A Æ ... training set of examples ... grammatical inference (GI).
GI is a supervised learning approach.
CFG’s G1 andG2 tells whether L(G1) = L(G2) ? ... arbitrary CFG into CNF. ... • Examples of undecidable properties of L(M):
CFG ) Definition (Terms), Mathematical Representation and Examples, Regular Grammar, erivate TD rees and Examples (Right–Most and Left–Most ... Context-Free Grammars (Algorithm and Examples), Normal
Forms (CNF, GNF and Conversions). Tutorial (T ...
Two examples of simulation results are included in the paper. This software application serves as a handy ... files (.CFG, .DAT). COMTRADE is a common format for data files and exchange medium used
for the interchange of various types of fault, test, or simulation data for electrical power
Some more examples: = f(;)g L= fw2 jwisastringofproperlynestedparenthesesg. Eg. ();()();(()());::: G= (fSg; ;P;S) P: S!(S)jSSj ... Theorem 1.5 Any CFL without can be generated by a grammar in CNF.
Hence every CFG can be con-verted into CNF. 1.4.1 Conversion into CNF
Command Modes LPCOR custom configuration (cfg-lpcor-custom) ... Examples The following example shows a LPCOR configuration with six resource groups: ... cnf-file location Specifies a storage location
for phone configuration files.
Examples: * HMM-based gene-finders assume DNA is regular ... A stochastic context-free grammar (SCFG) is a CFG plus a probability ... Transforming a CFG into CNF can be accomplished by
appropriately-ordered ...
•Any CFG grammar can be represented in CNF ... Slide based on “Learning to Extract Information from Semi‐Structured Text using a Discriminative Context Free Grammar”by Paul Viola and ... •Give only a
few dozen prototypical examples (for NP e.g. determiner ...
... cfg -site:sourceforge.net "powered by ducalendar" -site: ... "/*/_vti_cnf/" filetype:cfg ks intext:rootpw -sample -test -howto filetype:ini Desktop.ini intext:mydocs.dll filetype:torrent torrent
Index of phpMyAdmin ... jee/examples/jsp inurljspdemos private protected secret secure
Given a context free grammar (CFG) we present an algorithm to decide, for ... Interesting examples can be found in the analysis of DNA sequences [2], treating ... Remember that CNF assumes that every
production is either
tutability as seen in the examples above can be captured ... Viterbi-parse of a sentence given a probabilistic context free grammar(PCFG). We will report on research relevant to ... the translation
and transform back from CNF to CFG. 3.2 The CYK Algorithm
- some source code examples added, some are still missing 2 25.10.05 AB - Removing the AREP in all Packets and replace them by the use of ulSrcId of a Packet. Update ... The second packet
PROFIBUS_FSPMS_CMD_SET_CFG_REQ/CNF allows changing the “Is-
category of a context-free grammar. ... Examples of the kind requested can be found in chapter 5 (and many other places in the literature for the course). 2. ... We rst write a (non-CNF) CFG grammar
for the language: S!AC A!abbjaAbb C!cjcC
Pushdown Automata and Context Free Languages, Part 4 Richard Cole October 16, 2008 1 Converting CFGs to Chomsky Normal Form (CNF) A CNF grammar is a CFG with rules restricted as follows.
form of a context-free grammar rule set in a Chomsky Normal Form. ... examples [17], and that even the ability to ask equivalence ... because every CFG can be transformed into equivalent CNF. Chomsky
Normal Form allows only for
Table 1: English to Bengali Transfer Rules But as we said earlier we need a grammar defined in CNF form to have an optimal parsing strategy (CYK
PhD Qualifier Exam Examples Problem (Pipeling Datapath & Control): ... show that the 3-cnf-sat problem is NP-Complete. Please go through all steps one by one. ... Put the CFG in its simplest form
(for human readability) ...
Q1 a) Define the following with examples: [8] Kleen closure An alphabet Regular expression Formal language b) Design a ... Q5 a) Convert the following CFG into CNF(Chomsky Normal Form) [6] S ABA A aA
|∈ B bB ...
Sep 30 Examples of Non-Regular Languages and Use of Pumping Lemma ... Nov 18 Chomsky Normal Form of a Context-Free Grammar; Convert-ing a CFG to CNF; Eliminating Useless Symbols; Eliminating ...
CFG’s (Chapters 12.1-12.6), Complexity (Parts ... Examples: –“she” ... (CNF) A →B C or A →a. 24 CNF Example Convert to Chomsky-Normal Form (CNF): S →a Y X X →a X | b Y →Y a | b Why do care about
grammar? We need grammars for parsing!
real‐world examples: ... •context‐free grammar CFL ‐transform •PDA to CFG ... •CFG to NPDA (SLR parse) •CFG to CNF •CFG to LL Parse table and parser •CFG to SLR Parse table and parser •CFG to brute
force parser. What is ...
More examples of NP problems" •!SATISFIABILITY, a.k.a. SAT:" –!Input: A formula, i.e., the AND of a set of Boolean clauses, e.g." •!x ... [Like converting a CFG into CNF] 3SAT ! IND-SET •!The IND-SET
problem asks whether a given input
|
{"url":"http://ebookily.org/pdf/cfg-to-cnf-examples","timestamp":"2014-04-23T15:13:27Z","content_type":null,"content_length":"41169","record_id":"<urn:uuid:c5507b90-9242-497b-b769-580db6009de2>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00023-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Elliptic curves, primality proving, and some titanic primes, Journées Arithmétiques
- Math. Comp , 1993
"... The aim of this paper is to describe the theory and implementation of the Elliptic Curve Primality Proving algorithm. ..."
, 1991
"... We explain how the Elliptic Curve Primality Proving algorithm can be implemented in a distributed way. Applications are given to the certification of large primes (more than 500 digits). As a
result, we describe the successful attempt at proving the primality of the lO65-digit (2^3539+ l)/3, the fir ..."
Cited by 2 (1 self)
Add to MetaCart
We explain how the Elliptic Curve Primality Proving algorithm can be implemented in a distributed way. Applications are given to the certification of large primes (more than 500 digits). As a result,
we describe the successful attempt at proving the primality of the lO65-digit (2^3539+ l)/3, the first ordinary Titanic prime.
, 1992
"... We present some new classes of numbers that are easier to test for primality with the Elliptic Curve Primality Proving algorithm than average numbers. It is shown that this is the case for about
half the numbers of the Cunningham project. Computational examples are given. ..."
Add to MetaCart
We present some new classes of numbers that are easier to test for primality with the Elliptic Curve Primality Proving algorithm than average numbers. It is shown that this is the case for about half
the numbers of the Cunningham project. Computational examples are given.
"... Abstract. We give a deterministic algorithm that very quickly proves the primality or compositeness of the integers N in a certain sequence, using an elliptic curve E/Q with complex
multiplication by the ring of integers of Q ( √ −7). The algorithm uses O(log N) arithmetic operations in the ring Z/ ..."
Add to MetaCart
Abstract. We give a deterministic algorithm that very quickly proves the primality or compositeness of the integers N in a certain sequence, using an elliptic curve E/Q with complex multiplication by
the ring of integers of Q ( √ −7). The algorithm uses O(log N) arithmetic operations in the ring Z/NZ, implying a bit complexity that is quasi-quadratic in log N. Notably, neither of the classical “N
− 1 ” or “N + 1 ” primality tests apply to the integers in our sequence. We discuss how this algorithm may be applied, in combination with sieving techniques, to efficiently search for very large
primes. This has allowed us to prove the primality of several integers with more than 100,000 decimal digits, the largest of which has more than a million bits in its binary representation. We
believe that this is the largest proven prime N for which no significant partial factorization of N − 1 or N + 1 is known. 1.
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=954553","timestamp":"2014-04-19T01:51:36Z","content_type":null,"content_length":"18609","record_id":"<urn:uuid:529c216d-f9dc-4959-a6c1-45f8d23d4aec>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00210-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Recently Active 'schemes' Questions
The first purpose of schemes theory is the geometrical study of solutions of algebraic systems of equations, not only over the real/complex numbers, but also over integer numbers (and more generally
over any commutative ring with 1).It was finalized by Alexandre GROTHENDIECK, during the 1950's and ...
learn more… | top users | synonyms
|
{"url":"http://mathoverflow.net/questions/tagged/schemes?sort=active&pagesize=15","timestamp":"2014-04-21T09:55:03Z","content_type":null,"content_length":"177150","record_id":"<urn:uuid:a092e738-c44d-4bd5-9b23-461318a20b95>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00187-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Marina Dl Rey, CA Math Tutor
Find a Marina Dl Rey, CA Math Tutor
I have 30 years of classroom experience teaching mathematics to youngsters from ages 11 to 18 years of age. My strong points are patience and building confidence. I can tailor my lessons around
your child's homework or upcoming tests and stay synchronized.
14 Subjects: including algebra 1, algebra 2, grammar, Microsoft Excel
...Often after creating WordPress sites, my clients have asked for tutoring in order to maintain their own sites. With the right learning and skills, this is a great way to manage your own online
content. I have taught both beginner and advanced Adobe Photoshop at the college level.
11 Subjects: including geometry, spelling, ACT Math, general computer
...I look forward to hearing from youHi, I have been teaching all math subjects for many years. I can improve your math skills in just a very short time. I live in Palos Verdes and am available
day, evenings, or weekends.
11 Subjects: including algebra 1, algebra 2, geometry, prealgebra
...However, I'm also an award winning writer, recognized by the Scholastic Arts & Writing Committee and the Yale English Department as an Orlo Nominee. Additionally, I spent two years teaching an
intensive and academically diverse test preparation class to high schoolers, with subjects ranging from...
28 Subjects: including calculus, prealgebra, precalculus, geometry
I have previously tutored in Earth Science, Geometry and Biology at an under-performing inner city school in San Diego (San Diego High School). This was done concurrently for a quarter while
studying at UCSD, where I recently graduated from with my Bachelor of Science in Environmental Systems and a specialization in Earth Science.
16 Subjects: including algebra 2, calculus, chemistry, elementary (k-6th)
Related Marina Dl Rey, CA Tutors
Marina Dl Rey, CA Accounting Tutors
Marina Dl Rey, CA ACT Tutors
Marina Dl Rey, CA Algebra Tutors
Marina Dl Rey, CA Algebra 2 Tutors
Marina Dl Rey, CA Calculus Tutors
Marina Dl Rey, CA Geometry Tutors
Marina Dl Rey, CA Math Tutors
Marina Dl Rey, CA Prealgebra Tutors
Marina Dl Rey, CA Precalculus Tutors
Marina Dl Rey, CA SAT Tutors
Marina Dl Rey, CA SAT Math Tutors
Marina Dl Rey, CA Science Tutors
Marina Dl Rey, CA Statistics Tutors
Marina Dl Rey, CA Trigonometry Tutors
|
{"url":"http://www.purplemath.com/marina_dl_rey_ca_math_tutors.php","timestamp":"2014-04-21T05:05:44Z","content_type":null,"content_length":"24157","record_id":"<urn:uuid:9d27be7b-e902-444d-9e9d-984dd85d8d3a>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00348-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Geometry Help for @danielamarques Please!
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/4ff89179e4b058f8b76313f0","timestamp":"2014-04-19T20:12:10Z","content_type":null,"content_length":"54353","record_id":"<urn:uuid:0e796c41-81c9-467b-abdb-17e6a816f067>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00393-ip-10-147-4-33.ec2.internal.warc.gz"}
|
An efficient algorithm for com
Awi Federgruen
An efficient algorithm for computing optimal (s,S) policies
Coauthor(s): Paul Zipkin.
Adobe Acrobat PDF
This paper presents an algorithm to compute an optimal (s,S) policy under standard assumptions (stationary data, well-behaved one-period costs, discrete demand, full backlogging, and the average-cost
criterion). The method is iterative, starting with an arbitrary, given (s,S) policy and converging to an optimal policy in a finite number of iterations. Any of the available approximations can thus
be used as an initial solution. Each iteration requires only modest computations. Also, a lower bound on the true optimal cost can be computed and used in a termination test. Empirical testing
suggests very fast convergence.
Source: Operations Research
Exact Citation:
Federgruen, Awi, and Paul Zipkin. "An efficient algorithm for computing optimal (s,S) policies." Operations Research 32, no. 6 (1984): 1268-1285.
Volume: 32
Number: 6
Date: 1984
|
{"url":"http://www0.gsb.columbia.edu/whoswho/more.cfm?uni=af7&pub=4023","timestamp":"2014-04-20T16:22:14Z","content_type":null,"content_length":"4325","record_id":"<urn:uuid:e35d58fd-859f-49b4-bd0b-f809bb158755>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00173-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Method of lines
From Scholarpedia
Method of Lines, Part I: Basic Concepts
The method of lines (MOL) is a general procedure for the solution of time dependent partial differential equations (PDEs). First we discuss the basic concepts, then in Part II, we follow on with an
example implementation.
Some PDE Basics
Our physical world is most generally described in scientific and engineering terms with respect to three-dimensional space and time which we abbreviate as spacetime. PDEs provide a mathematical
description of physical spacetime, and they are therefore among the most widely used forms of mathematics. As a consequence, methods for the solution of PDEs, such as the MOL (Schiesser, 1991;
Schiesser and Griffiths, 2009; Griffiths and Schiesser, 2011), are of broad interest in science and engineering.
As a basic illustrative example of a PDE, we consider
\[\tag{1} \frac{\partial u}{\partial t}=D\frac{\partial ^2 u}{\partial x^2} \]
• \(u\) dependent variable (dependent on \(x\) and \(t\))
• \(t\) independent variable representing time
• \(x\) independent variable representing one dimension of three-dimensional space
• \(D\) real positive constant, explained below
Note that eq. (1) has two independent variables, \(x\) and \(t\) which is the reason it is classified as a PDE (any differential equation with more than one independent variable is a PDE). A
differential equation with only one independent variable is generally termed an ordinary differential equation (ODE); we will consider ODEs later as part of the MOL.
Eq. (1) is termed the diffusion equation or heat equation. When applied to heat transfer, it is Fourier's second law; the dependent variable \(u\) is temperature and \(D\) is the thermal diffusivity.
When eq. (1) is applied to mass diffusion, it is Fick's second law; \(u\) is mass concentration and \(D\) is the coefficient of diffusion or the diffusivity.
\( \frac{\partial u}{\partial t}\) is the partial derivative of \(u\) with respect to \(t\) (\(x\) is held constant when taking this partial derivative, which is why partial is used to describe this
derivative). Eq. (1) is first order in \(t\) since the highest order partial derivative in \(t\) is first order; it is second order in \(x\) since the highest order partial derivative in \(x\) is
second order. Eq. (1) is linear or first degree since all of the terms are to the first power (note that order and degree can be easily confused).
Initial and Boundary Conditions
Before we consider a solution to eq. (1), we must specify some auxiliary conditions to complete the statement of the PDE problem. The number of required auxiliary conditions is determined by the
highest order derivative in each independent variable. Since eq. (1) is first order in \(t\) and second order in \(x\ ,\) it requires one auxiliary condition in \(t\) and two auxiliary conditions in
\(x\ .\) To have a complete well-posed problem, some additional conditions may have to be included; for example, that specify valid ranges for coefficients (Kreiss and Lorenz, 2004). However, this is
a more advanced topic and will not be developed further here.
\(t\) is termed an initial value variable and therefore requires one initial condition (IC). It is an initial value variable since it starts at an initial value, \(t_0\ ,\) and moves forward over a
finite interval \( t_0 \leq t \leq t_f \) or a semi-infinite interval \( t_0 \leq t \leq \infty \) without any additional conditions being imposed. Typically in a PDE application, the initial value
variable is time, as in the case of eq. (1).
\(x\) is termed a boundary value variable and therefore requires two boundary conditions (BCs). It is a boundary value variable since it varies over a finite interval \( x_0 \leq x \leq x_f \ ,\) a
semi-infinite interval \( x_0 \leq x \leq \infty \) or a fully infinite interval \( -\infty \leq x \leq \infty \ ,\) and at two different values of \( x \ ,\) conditions are imposed on \( u \) in eq.
(1). Typically, the two values of \( x \) correspond to boundaries of a physical system, and hence the name boundary conditions.
As examples of auxiliary conditions for eq. (1) (there are other possibilities),
\[\tag{2} u(x,t=0) = u_0 \]
where \( u_0\) is a given function of \( x \) .
\[\tag{3} u(x=x_0,t) = u_b \]
\[\tag{4} \frac{\partial u(x=x_f,t)}{\partial x} = 0 \]
where \(u_b\) is a given boundary (constant) value of \(u\) for all \( t \). Another common possibility is where the initial condition is given as above and the boundary conditions are \(u_0\left( x=
x_0,t\right) = f_0 \left(t \right)\) and \(u\left( x=x_f,t \right) = f_b \left( t \right) \). Discontinuities at the boundaries, produced for example, by differences in initial and boundary
conditions at the boundaries, can cause computational difficulties, particularly for hyperbolic problems.
BCs can be of three types:
1. If the dependent variable is specified, as in BC (3), the BC is termed Dirichlet.
2. If the derivative of the dependent variable is specified, as in BC (4), the BC is termed Neumann.
3. If both the dependent variable and its derivative appear in the BC, it is termed a BC of the third type or a Robin BC.
Types of PDE Solutions
Eqs. (1), (2), (3) and (4) constitute a complete PDE problem and we can now consider what we mean by a solution to this problem. Briefly, the solution of a PDE problem is a function that defines the
dependent variable as a function of the independent variables, in this case \(u(x,t)\ .\) In other words, we seek a function that when substituted in the PDE and all of its auxiliary conditions,
satisfies simultaneously all of these equations.
The solution can be of two types:
1. If the solution is an actual mathematical function, it is termed an analytical solution. While analytical solutions are the gold standard for PDE solutions in the sense that they are exact, they
are also generally difficult to derive mathematically for all but the simplest PDE problems (in much the same way that solutions to nonlinear algebraic equations generally cannot be derived
mathematically except for certain classes of nonlinear equations).
2. If the solution is in numerical form, e.g., \(u(x,t)\) tabulated numerically as a function of \(x\) and \(t\ ,\) it is termed a numerical solution. Ideally, the numerical solution is simply a
numerical evaluation of the analytical solution. But since an analytical solution is generally unavailable for realistic PDE problems in science and engineering, the numerical solution is an
approximation to the analytical solution, and our expectation is that it represents the analytical solution with good accuracy. However, numerical solutions can be computed with modern-day
computers for very complex problems, and they will generally have good accuracy (even though this cannot be established directly by comparison with the analytical solution since the latter is
usually unknown).
The focus of the MOL is the calculation of accurate numerical solutions.
PDE Subscript Notation
Before we go on to the general classes of PDEs that the MOL can handle, we briefly discuss an alternative notation for PDEs. Instead of writing the partial derivatives as in eq. (1), we adopt a
subscript notation that is easier to state and bears a closer resemblance to the associated computer coding. For example, we can write eq. (1) as \[\tag{5} u_t=Du_{xx} \]
where, for example, \(u_t \) is subscript notation for \( \frac{\partial u}{\partial t}\ .\) In other words, a partial derivative is represented as the dependent variable with a subscript that
defines the independent variable. For a derivative that is of order \(n\ ,\) the independent variable is repeated \(n\) times, e.g., for eq. (1), \(u_{xx} \) represents \( \frac{\partial ^2 u}{\
partial x^2}\ .\)
A General PDE System
Using the subscript notation, we can now consider some general PDEs. For example, a general PDE first order in \(t\) can be considered \[\tag{6} \overline{u}_t=\overline{f}(\overline{x},t,\overline
{u},\overline{u}_{\overline{x}}, \overline{u}_{\overline{xx}},\ldots) \]
where an overbar (overline) denotes a vector. For example, \(\overline{u}\) denotes a vector of \(n\) dependent variables \[ \overline{u}=(u_1,u_2,\ldots ,u_n)^T \] i.e., a system of \(n\)
simultaneous PDEs. Similarly, \(\overline{f}\) denotes an \(n\) vector of derivative functions \[ \overline{f}=(f_1,f_2,\ldots ,f_n)^T \] where \(T\) denotes a transpose (here a row vector is
transposed to a column vector). Note also that \(\overline{x}\) is a vector of spatial coordinates, so that, for example, in Cartesian coordinates \(\overline{x}=(x,y,z)^T\) while in cylindrical
coordinates \(\overline{x}=(r,\theta,z)^T\ .\) Thus, eq. (6) can represent PDEs in one, two and three spatial dimensions.
Since eq. (6) is first order in \(t\ ,\) it requires one initial condition \[\tag{7} \overline{u}(\overline{x},t=0)=\overline{u}_0(\overline{x},\overline{u},\overline{u}_{\overline{x}}, \overline{u}_
{\overline{xx}},\ldots) \]
where \(\overline{u}_0\) is an \(n\)-vector of initial condition functions \[\tag{8} \overline{u}_0=(u_{10},u_{20},\ldots ,u_{n0})^T \]
The derivative vector \(\overline{f}\) of eq. (6) includes functions of various spatial derivatives, \((\overline{u},\overline{u}_{\overline{x}},\overline{u}_{\overline{xx}},\ldots)\ ,\) and
therefore we cannot state a priori the required number of BCs. For example, if the highest order derivative in \(\overline{x}\) in all of the derivative functions is second order, then we require \(2
\times n \) BCs for each of the spatial independent variables, e.g., \(2\times 2\times n\) for a 2D PDE system, \(2\times 3\times n \) BCs for a 3D PDE system.
We state the general BC requirement of eq. (6) as \[\tag{9} \overline{f}_b(\overline{x}_b,\overline{u}, \overline{u}_{\overline{x}},\overline{u}_{\overline{xx}},\ldots,t)=0 \]
where the subscript \(b\) denotes boundary. The vector of boundary condition functions, \(\overline{f}_b\) has a length (number of functions) determined by the highest order derivative in \(\overline
{x}\) in each PDE (in eq. (6) ) as discussed previously.
PDE Geometric Classification
Eqs. (6), (7) and (9) constitute a general PDE system to which the MOL can be applied. Before proceeding to the details of how this might be done, we need to discuss the three basic forms of the PDEs
as classified geometrically. This geometric classification can be done rigorously if certain mathematical forms of the functions in eqs. (6), (7) and (9) are assumed. However, we will adopt a
somewhat more descriptive (less rigorous but more general) form of these functions for the specification of the three geometric classes.
If the derivative functions in eq. (6) contain only first order derivatives in \(\overline{x}\ ,\) the PDEs are classified as first order hyperbolic. As an example, the equation \[\tag{10} u_t+vu_x=0
is generally called the linear advection equation; in physical applications, \(v\) is a linear or flow velocity. Although eq. (10) is possibly the simplest PDE, this simplicity is deceptive in the
sense that it can be very difficult to integrate numerically since it propagates discontinuities, a distinctive feature of first order hyperbolic PDEs.
Eq. (10) is termed a conservation law since it expresses conservation of mass, energy or momentum under the conditions for which it is derived, i.e., the assumptions on which the equation is based.
Conservation laws are a bedrock of PDE mathematical models in science and engineering, and an extensive literature pertaining to their solution, both analytical and numerical, has evolved over many
An example of a first order hyperbolic system (using the notation \(u_1 \Rightarrow u, u_2 \Rightarrow v\)) is \[\tag{11} u_t=v_x \]
\[\tag{12} v_t=u_x \]
Eqs. (11) and (12) constitute a system of two linear, constant coefficient, first order hyperbolic PDEs.
Differentiation and algebraic substitution can occasionally be used to eliminate some dependent variables in systems of PDEs. For example, if eq. (11) is differentiated with respect to \(t\) and eq.
(12) is differentiated with respect to \(x\) \[ u_{tt}=v_{xt} \] \[ v_{tx}=u_{xx} \] we can then eliminate the mixed partial derivative between these two equations (assuming \(v_{xt}\) in the first
equation equals \(v_{tx}\) in the second equation) to obtain \[\tag{13} u_{tt}=u_{xx} \]
Eq. (13) is the second order hyperbolic wave equation.
If the derivative functions in eq. (6) contain only second order derivatives in \(\overline{x}\ ,\) the PDEs are classified as parabolic. Eq. (1) is an example of a parabolic PDE.
Finally, if a PDE contains no derivatives in \(t\) (e.g., the LHS of eq. (6) is zero) it is classified as elliptic. As an example, \[\tag{14} u_{xx}+u_{yy}=0 \]
is Laplace's equation where \(x\) and \(y\) are spatial independent variables in Cartesian coordinates. Note that with no derivatives in \(t\ ,\) elliptic PDEs require no ICs, i.e., they are entirely
boundary value PDEs.
PDEs with mixed geometric characteristics are possible, and in fact, are quite common in applications. For example, the PDE \[\tag{15} u_t=-u_{x}+u_{xx} \]
is hyperbolic-parabolic. Since it frequently models convection (hyperbolic) through the term \(u_{x}\) and diffusion (parabolic) through the term \(u_{xx}\ ,\) it is generally termed a
convection-diffusion equation. If additionally, it includes a function of the dependent variable \(u\) such as \[\tag{16} u_t=-u_{x}+u_{xx}+f(u) \]
then it might be termed a convection-diffusion-reaction equation since \(f(u)\) typically models the rate of a chemical reaction. If the function depends only the independent variables, i.e., \[\tag
{17} u_t=-u_{x}+u_{xx}+g(x,t) \]
the equation could be labeled an inhomogeneous PDE.
This discussion clearly indicates that PDE problems come in an infinite variety, depending, for example, on linearity, types of coefficients (constant, variable), coordinate system, geometric
classification (hyperbolic, elliptic, parabolic), number of dependent variables (number of simultaneous PDEs), number of independent variables (number of dimensions), type of BCs, smoothness of the
IC, etc., so it might seem impossible to formulate numerical procedures with any generality that can address a relatively broad spectrum of PDEs. But in fact, the MOL provides a surprising degree of
generality, although the success in applying it to a new PDE problem depends to some extent on the experience and inventiveness of the analyst, i.e., MOL is not a single, straightforward, clearly
defined approach to PDE problems, but rather, is a general concept (or philosophy) that requires specification of details for each new PDE problem. We now proceed to illustrate the formulation of a
MOL numerical algorithm with the caveat that this will not be a general discussion of MOL as it might be applied to any conceivable PDE problem.
Elements of the MOL
The basic idea of the MOL is to replace the spatial (boundary value) derivatives in the PDE with algebraic approximations. Once this is done, the spatial derivatives are no longer stated explicitly
in terms of the spatial independent variables. Thus, in effect only the initial value variable, typically time in a physical problem, remains. In other words, with only one remaining independent
variable, we have a system of ODEs that approximate the original PDE. The challenge, then, is to formulate the approximating system of ODEs. Once this is done, we can apply any integration algorithm
for initial value ODEs to compute an approximate numerical solution to the PDE. Thus, one of the salient features of the MOL is the use of existing, and generally well established, numerical methods
for ODEs.
To illustrate this procedure, we consider the MOL solution of eq. (10). First we need to replace the spatial derivative \(u_{x}\) with an algebraic approximation. In this case we will use a finite
difference (FD) such as \[\tag{18} u_x \approx \frac{u_{i}-u_{i-1}}{\Delta x} \]
where \(i\) is an index designating a position along a grid in \(x\) and \(\Delta x\) is the spacing in \(x\) along the grid, assumed constant for the time being. Thus, for the left end value of \(x\
,\) \(i=1\ ,\) and for the right end value of \(x\ ,\) \(i=M\ ,\) i.e., the grid in \(x\) has \(M\) points. Then the MOL approximation of eq. (10) is \[\tag{19} \frac{du_i}{dt} = -v\frac{u_{i}-u_
{i-1}}{\Delta x}, \qquad 1 \leq i \leq M \]
Note that eq. (19) is written as an ODE since there is now only one independent variable, \(t\ .\) Note also that eq. (19) represents a system of \(M\) ODEs.
This transformation of a PDE, eq. (10), to a system of ODEs, eqs. (19), illustrates the essence of the MOL, namely, the replacement of the spatial derivatives, in this case \(u_x\ ,\) so that the
solution of a system of ODEs approximates the solution of the original PDE. Then, to compute the solution of the PDE, we compute a solution to the approximating system of ODEs. But before considering
this integration in \(t\ ,\) we have to complete the specification of the PDE problem. Since eq. (10) is first order in \(t\) and first order in \(x\ ,\) it requires one IC and one BC. These will be
taken as
\[\tag{20} u(x,t=0) =f(x) \]
\[\tag{21} u(x=0,t) = g(t) \]
Since eqs. (19) constitute \(M\) initial value ODEs, \(M\) initial conditions are required and from eq. (20), these are \[\tag{22} u(x_i,t=0) = f(x_i),\quad 1 \leq i \leq M \]
Also, application of BC (21) gives for grid point \( i=1\) \[\tag{23} u(x_1,t) = g(t), \quad t \ge 0 \]
Eqs. (19), (22) and (23) now constitute the complete MOL approximation of eq. (10) subject to eqs. (20) and (21). The solution of this ODE system gives the \(M\) functions \[\tag{24} u_{1}(t),u_{2}
(t), \ldots u_{M-1}(t),u_{M}(t) \]
that is, an approximation to \(u(x,t)\) at the grid points \( i=1,2, \ldots M\ .\)
Before we go on to consider the numerical integration of the approximating ODEs, in this case eqs. (19), we briefly consider further the FD approximation of eq. (18), which can be written as \[\tag
{25} u_x \approx \frac{u_{i}-u_{i-1}}{\Delta x}+O(\Delta x) \]
where \(O(\Delta x)\) denotes of order \(\Delta x\ ,\) that is, the truncation error (from a truncated Taylor series) of the approximation of eq. (19) is proportional to \(\Delta x\) (varies linearly
with \(\Delta x\)); thus eq. (25) is also termed a first order FD (since \(\Delta x\) is to the first power in the order or truncation error term).
Note that the numerator of eq. (18), \(u_{i}-u_{i-1}\ ,\) is a difference in two values of \(u\ .\) Also, the denominator \(\Delta x\) remains finite (nonzero). Hence the name finite difference (and
it is an approximation because of the truncated Taylor series, so a more complete description is first order finite difference approximation). In fact, in the limit \(\Delta x \rightarrow 0\) the
approximation of eq. (18) becomes exactly the derivative. However, in a practical computed-based calculation, \(\Delta x\) remains finite, so eq. (18) remains an approximation.
Also, eq. (10) typically describes the flow of a physical quantity such as concentration of a chemical species or temperature, represented by \(u\ ,\) from left to right with respect to \(x\) with
velocity \(v\ .\) Then, using the FD approximation of eq. (25) at \(i\) involves \(u_i\) and \(u_{i-1}\ .\) In a flowing system, \(u_{i-1}\) is to the left (in \(x\)) of \(u_i\) or is upstream or
upwind of \(u_i\) (to use a nautical analogy). Thus, eq. (25) is termed a first order upwind FD approximation. Generally, for strongly convective systems such as modeled by eq. (10), some form of
upwinding is required in the numerical solution of the descriptive PDEs; we will look at this requirement further in the subsequent discussion.
ODE Integration within the MOL
We now consider briefly the numerical integration of the \(M\) ODEs of eqs. (19). If the derivative \(\frac{du_i}{dt}\) is approximated by a first order FD \[\tag{26} \frac{du_i}{dt} \approx \frac{u_
{i}^{n+1}-u_{i}^{n}}{\Delta t}+O(\Delta t) \]
where \(n\) is an index for the variable \( t \) (\( t \) moves forward in steps denoted or indexed by \(n\)), then a FD remains an approximation to the derivative of eq. (19) is \[ \frac{u_i^{n+1}
-u_i^n}{\Delta t} = -v\frac{u_{i}^n-u_{i-1}^n}{\Delta x} \] or solving for \(u_i^{n+1}\ ,\) \[\tag{27} u_i^{n+1}=u_i^n-(v\Delta t/\Delta x)(u_{i}^n-u_{i-1}^n), i = 1, 2, \ldots M \]
Eq. (27) has the important characteristic that it gives \(u_i^{n+1}\) explicitly, that is, we can solve for the solution at the advanced point in \(t\ ,\) \(n+1\ ,\) from the solution at the base
point \(n\ .\) In other words, explicit numerical integration of eqs. (19) is by the forward FD of eq. (26), and this procedure is generally termed the forward Euler method which is the most basic
form of ODE integration.
While the explicit form of eq. (27) is computationally convenient, it has a possible limitation. If the time step \(\Delta t\) is above a critical value, the calculation becomes unstable, which is
manifest by successive changes in the dependent variable, \(\Delta u = u_i^{n+1}-u_i^{n}\ ,\) becoming larger and eventually unbounded as the calculation moves forward in \(t\) (for increasing \(n
\)). In fact, for the solution of eq. (10) by the method of eq. (27) to remain stable, the dimensionless group \((v\Delta t/\Delta x)\ ,\) which is called the Courant-Friedricks-Lewy or CFL number,
must remain below a critical value, in this case, unity. Note that this stability limit places an upper limit on \(\Delta t\) for a given \(v\) and \(\Delta x\ ;\) if one attempts to increase the
accuracy of eq. (27) by using a smaller \(\Delta x\) (larger number of grid points in \(x\) by increasing \(M\)), a smaller value of \(\Delta t\) is required to keep the CFL number below its critical
value. Thus, there is a conflicting requirement of improving accuracy while maintaining stability.
The way to circumvent the stability limit of the explicit Euler method as implemented via the forward FD of eq. (26) is to use a backward FD for the derivative in \(t\)
\[\tag{28} \frac{du_i}{dt} \approx \frac{u_{i}^{n}-u_{i}^{n-1}}{\Delta t}+O(\Delta t) \]
so that the FD approximation of eqs. (19) becomes \[ \frac{u_i^n-u_i^{n-1}}{\Delta t} = -v\frac{u_{i}^n-u_{i-1}^n}{\Delta x} \] or after rearrangement (with \((v\Delta t/\Delta x)=\alpha\)) \[\tag
{29} (1+\alpha)u_i^n+\alpha u_{i-1}^n = u_i^{n-1}, i = 1, 2, \ldots M \]
Note that we cannot now solve eq. (29) explicitly for the solution at the advanced point, \(u_i^n\ ,\) in terms of the solution at the base point \(u_i^{n-1}\ .\) Rather, eq. (29) is implicit in \
(u_i^n\) because \(u_{i-1}^n\) is also unknown; that is, we must solve eq. (29) written for each grid point \(i = 1, 2, \dots M\) as a simultaneous system of bidiagonal equations (bidiagonal because
each of eqs. (29) has two unknowns so that simultaneous solution of the full set of approximating algebraic equations is required to obtain the complete numerical solution \(u_1^n,u_2^n, \dots u_M^n
\)). Thus, the solution of eqs. (29) is termed the implicit Euler method.
We could then naturally ask why use eqs. (29) when eq. (27) is so much easier to use (explicit calculation of the solution at the next step in \(t\) of eq. (27) vs. the implicit calculation of eqs. (
29)). The answer is that the implicit calculation of eqs. (29) is often worthwhile because the implicit Euler method has no stability limit (is unconditionally stable in comparison with the explicit
method with the stability limit stated in terms of the CFL condition). However, there is a price to pay for the improved stability of the implicit Euler method, that is, we must solve a system of
simultaneous algebraic equations; eqs. (29) is an example. Furthermore, if the original ODE system approximating the PDE is nonlinear, we have to solve a system of nonlinear algebraic equations (eqs.
(29) are linear, so the solution is much easier). The system of nonlinear equations is typically solved by a variant of Newton's method which can become very demanding computationally if the number
of ODEs is large (due to the use of a large number of spatial grid points in the MOL approximation of the PDE, especially when we attempt the solution of two dimensional (2D) and three dimensional
(3D) PDEs). If you have had some experience with Newton's method, you may appreciate that the Jacobian matrix of the nonlinear algebraic system can become very large and sparse as the number of
spatial grid points increases.
Additionally, although there is no limit for \(\Delta t\) with regard to stability for the implicit method, there is a limit with regard to accuracy. In fact, the first order upwind approximation of
\(u_x\) in eq. (10), eq. (25), and the first order approximation of \(u_t\) in eq. (10), eq. (26) or (28), taken together limit the accuracy of the resulting FD approximation of eq (10). One way
around this accuracy limitation is to use higher order FD approximations for the derivatives in eq. (10).
For example, if we consider the second order approximation for \(u_x\) at \(i\) \[\tag{30} u_x \approx \frac{u_{i+1}-u_{i-1}}{2\Delta x}+O(\Delta x^2) \]
substitution in eq. (10) gives the MOL approximation of eq. (10) \[\tag{31} \frac{du_i}{dt} = -v\frac{u_{i+1}-u_{i-1}}{2\Delta x},\; 1 \leq i \leq M \]
We could then reason that if the integration in \(t\) is performed by the explicit Euler method, i.e., we use the approximation of eq. (26) for \(u_t = \frac{du_i}{dt}\ ,\) the resulting numerical
solution should be more accurate than the solution from eq. (27). In fact, the MOL approximation based on this idea \[\tag{32} u_i^{n+1}=u_i^n-\frac{v\Delta t}{2\Delta x}(u_{i+1}^n-u_{i-1}^n), i = 1,
2, \ldots M \]
is unconditionally unstable; this conclusion can be demonstrated by a von Neumann stability analysis that we will not cover here. This surprising result demonstrates that replacing the derivatives in
PDEs with higher order approximations does not necessarily guarantee more accurate solutions, or even stable solutions.
Numerical Diffusion and Oscillation
Even if the implicit Euler method is used for the integration in \(t\) of eqs. (31) to achieve stability (or a more sophisticated explicit integrator in \(t\) is used that automatically adjusts \(\
Delta t\) to achieve a prescribed accuracy), we would find the solution oscillates unrealistically. This numerical distortion is one of two generally observed forms of numerical error. The other
numerical distortion is diffusion which would be manifest in the solution from eq. (27). Briefly, the solution would exhibit excessive smoothing or rounding at points in \(x\) where the solution
changes rapidly.
It should be noted that numerical diffusion produced by a first order approximation (e.g., of \(u_x\) in eq. (9) )is to be expected due to severe truncation of the Taylor series (beyond the first or
linear term), and that the production of numerical oscillations by higher order approximations is predicted by the Godunov order barrier theorem (Wesseling, 2001). To briefly explain, the order
barrier is first order and any linear approximation, including FDs, above first order will not be monotonicity preserving (i.e. will introduce oscillations).
Eq. (10) is an example of a difficult Riemann problem (Wesseling, 2001) if IC eq. (20) is discontinuous; for example, \(u(x,t=0) = h(x)\) where \(h(x)\) is the Heaviside unit step function. The
(exact) analytical solution is the initial condition function \(f(x)\) of eq. (20) moving left to right with velocity \(v\) (from eq. (10) and without distortion, i.e., \(u(x,t) = f(x-vt)\ ;\)
however, the numerical solution will oscillate if \(u_{x}\) in eq. (10) is replaced with a linear approximation of second or higher order.
We should also mention one point of terminology for FD approximations. The RHS of eq. (30) is an example of a centered approximation since the two points at \(i+1\) and \(i-1\) are centered around
the point \(i\ .\) Eq. (25) is an example of a noncentered, one-sided or upwind approximation since the points \(i\) and \(i-1\) are not centered with respect to \(i\ .\) Another possibility would be
to use the points \(i\) and \(i+1\) in which case the approximation of \(u_x\) would be downwind (for \(v > 0 \)). Although this might seem like a logical alternative to eq. (17) for approximating
eq. (9) at point \(i\ ,\) the resulting MOL system of ODEs is actually unstable. Physically, for a convective system modeled by eq. (9), we would be using information that is downwind of the point of
interest (point \(i\)) and thus unavailable in a system flowing with positive velocity \(v\ .\) If \(v\) is negative, then using points \(i\) and \(i+1\) would be upwinding (and thus stable). This
illustrates the concept that the direction of flow can be a key consideration in forming the FD approximation of a (convective or hyperbolic) PDE.
Finally, to conclude the discussion of first order hyperbolic PDEs such as eq. (10), since the Godunov theorem indicates that FD approximations above first order will produce numerical oscillations
in the solution, the question remains if there are approximations above first order that are nonoscillatory. To answer this question we note first that the Godunov theorem applies to linear
approximations; for example, eq. (30) is a linear approximation since \(u\) on the RHS is to the first power. If, however, we consider nonlinear approximations for \(u_x\ ,\) we can in fact develop
approximations that are nonoscillatory. The details of such nonlinear approximations are beyond the scope of this discussion, so we will merely mention that they are termed high resolution methods
which seek a total variation diminishing (TVD) solution. Such methods, which include flux limiter (Wesseling, 2001) and weighted essentially nonoscillatory (WENO) (Shu, 1998) methods, seek to avoid
non-real oscillations when shocks or discontinuities occur in the solution.
So far we have considered only the MOL solution of first order PDEs, e.g., eq. (10). We conclude this discussion of the MOL by considering a second order PDE, the parabolic eq. (1). To begin, we need
an approximation for the second derivative \(u_{xx}\ .\) A commonly used second order, central approximation is (again, derived from the Taylor series, so the term \(O(\Delta x^2)\) represents the
truncation error) \[\tag{33} u_{xx} \approx \frac{u_{i+1}-2u_{i}+u_{i-1}}{\Delta x^2}+O(\Delta x^2) \]
Substituting eq. (33) into eq. (1) gives a system of approximating ODEs \[\tag{34} \frac{du_i}{dt} = D\frac{u_{i+1}-2u_{i}+u_{i-1}}{\Delta x^2}, i=1,2, \ldots M \]
Eqs. (34) are then integrated subject to IC (2) and BCs (3) and (4). This integration in \(t\) can be by the explicit Euler method, the implicit Euler method, or any other higher order integrator for
initial value ODEs. Generally stability is not as much of a concern as with the previous hyperbolic PDEs (a characteristic of parabolic PDEs which tend to smooth solutions rather than hyperbolic PDEs
which tend to propagate nonsmooth conditions). However, stability constraints do exist for explicit methods. For example, for the explicit Euler method with a step \(\Delta t\) in \(t\ ,\) the
stability constraint is \(D \Delta t/\Delta x^2 < constant\) (so that as \(\Delta x\) is reduced to achieve better spatial accuracy in \(x\ ,\) \(\Delta t\) must also be reduced to maintain
Before proceeding with the integration of eqs. (34), we must include BCs (3) and (4). The Dirichlet BC at \(x=x_0\ ,\) eq. (3), is merely \[\tag{35} u_{1}=u_b \]
and therefore the ODE of eqs. (34) for \(i=1\) is not required and the ODE for \(i=2\) becomes \[\tag{36} \frac{du_2}{dt} = D\frac{u_{3}-2u_{2}+u_{b}}{\Delta x^2} \]
Differential Algebraic Equations
Eq. (35) is algebraic, and therefore in combination with the ODEs of eqs. (34), we have a differential algebraic (DAE) system.
At \(i=M\ ,\) we have eqs. (34) \[\tag{37} \frac{du_M}{dt} = D\frac{u_{M+1}-2u_{M}+u_{M-1}}{\Delta x^2} \]
Note that \(u_{M+1}\) is outside the grid in \(x\ ,\) i.e., \(M+1\) is a fictitious point. But we must assign a value to \(u_{M+1}\) if eq. (37) is to be integrated. Since this requirement occurs at
the boundary point \(i=M\ ,\) we obtain this value by approximating BC (4) using the centered FD approximation of eq. (30) \[\tag{:label exists!} u_x \approx \frac{u_{M+1}-u_{M-1}}{2\Delta x}=0 \]
or \[\tag{38} u_{M+1}=u_{M-1} \]
We can add eq. (38) as an algebraic equation to our system of equations, i.e., continue to use the DAE format, or we can substitute \(u_{M+1}\) from eq. (38) into the eq. (37) \[\tag{39} \frac{du_M}
{dt} = D\frac{u_{M-1}-2u_{M}+u_{M-1}}{\Delta x^2} \]
and arrive at an ODE system (eqs. (34) for \(i=3,\ldots M-1\ ,\) eq. (36) for \(i=2\) and eq. (39) for \(i=M\)). Both approaches, either an ODE system or a DAE system, have been used in MOL studies.
Either way, we now have a complete formulation of the MOL ODE or DAE system, including the BCs at \(i=1,M\) in eqs. (34). The integration of these equations then gives the numerical solution \(u_1
(t),u_2(t), \ldots u_M(t)\ .\) The preceding discussion is based on a relatively basic DAE system, but it indicates that integrators designed for DAE systems can play an important role in MOL
If the implicit Euler method is applied to eqs. (34), we have \[ \frac{u_i^{n+1}-u_i^{n}}{\Delta t} = D\frac{u_{i+1}^{n+1}-2u_{i}^{n+1}+u_{i-1}^{n+1}}{\Delta x^2}, \quad i=1,2, \ldots M \] or (with \
( \alpha = D\Delta t/\Delta x^2\)) \[ u_{i+1}^{n+1}-(1/\alpha+2)u_i^{n+1}+u_{i-1}^{n+1} = - (1/\alpha) u_i^{n},\quad i=1,2, \ldots M \] which is a tridiagonal system of algebraic equations (three
unknowns in each equation). Since such banded systems (the nonzero elements are banded around the main diagonal) are common in the numerical solution of PDE systems, special algorithms have been
developed to take advantage of the banded structure, typically by not storing and using the zero elements outside the band. These special algorithms that take advantage of the structure of the
problem equations can result in major savings in computation time. In the case of tridiagonal equations, the special algorithm is generally called Thomas' method. If the coefficient matrix of the
algebraic system does not have a well defined structure, such as bidiagonal or tridiagonal, but consists of mostly zeros with a relatively small number of nonzero elements, which is often the case in
the numerical solution of PDEs, the coefficient matrix is said to be sparse; special algorithms and associated software for sparse systems have been developed that can result in very substantial
savings in the storage and numerical manipulation of sparse matrices.
Generally, when applying the MOL, integration of the approximating ODE/DAEs (e.g., eqs. (26) and (34)) is accomplished by using library routines for initial value ODE/DAEs. In other words, the
explicit programming of the ODE/DAE integration (such as the explicit or implicit Euler method) is avoided; rather, an established integrator is used. This has the advantage that: (1) the detailed
programming of the integration can be avoided, particularly the linear algebra (solution of simultaneous equations) required by an implicit integrator, so that the MOL analysis is substantially
simplified, and (2) library routines (usually written by experts) include features that make these routines especially effective (robust) and efficient such as automatic integration step size
adjustment and the use of higher order integration methods (beyond the first order accuracy of the Euler methods); also, generally they have been thoroughly tested. Thus, the use of quality ODE/DAE
library routines is usually an essential part of MOL analysis. We therefore list at the end of this article some public domain sources of such library routines.
Since the MOL essentially replaces the problem PDEs with systems of approximating ODEs, the addition of other ODEs is easily accomplished; this might occur, for example, if the BCs of a PDE are
stated as ODEs. Thus, a mixed model consisting of systems of PDEs and ODEs is readily accommodated. Further, this idea can easily be extended to systems of ODE/DAE/PDEs. In other words, the MOL is a
very flexible procedure for various combinations of algebraic and differential equations (and this flexibility generally is not available with other library software for PDEs).
Higher Dimensions and Different Coordinate Systems
To conclude this discussion of the MOL solution of PDEs, we cover two additional points. First, we have considered PDEs in only Cartesian coordinates, and in fact, just one Cartesian coordinate, \(x\
.\) But MOL analysis can in principle be carried out in any coordinate system. Thus, eq. (1) can be generalized to \[\tag{40} \frac{\partial u}{\partial t}=D\nabla ^2 u \]
where \( \nabla ^2\) is the coordinate independent Laplacian operator which can then be expressed in terms of a particular coordinate system. For example, in cylindrical coordinates eq. (40) is \[\
tag{41} \frac{\partial u}{\partial t}=D\left( \frac{\partial ^{2}u}{\partial r^{2}} +\frac{1}{r}\frac{\partial u}{\partial r}+\frac{1}{r^{2}}\frac{\partial ^{2}u}{\partial \theta ^{2}}+\frac{\partial
^{2}u}{\partial z^{2}}\right) \]
and in spherical coordinates eq. (40) is \[\tag{42} \frac{\partial u}{\partial t}=D\left[ \frac{\partial ^{2}u}{\partial r^{2}}+\frac{2}{r}\frac{\partial u}{\partial r} +\frac{1}{r^{2}}\left( \frac{\
partial ^{2}u}{\partial \theta ^{2}}+\frac{ \cos \theta }{\sin \theta }\frac{\partial u}{\partial \theta }\right) + \frac{1}{r^{2}\sin ^{2}\theta }\frac{\partial ^{2}u}{\partial\phi ^{2}}\right] \]
The challenge then in applying the MOL to PDEs such as eqs. (41) and (42) is the algebraic approximation of the RHS (\(\nabla ^2 u\)) using for example, FDs, finite elements or finite volumes; all of
these approximations have been used in MOL analysis, as well as Galerkin, least squares, spectral and other methods. A particularly demanding step is regularization of singularities such as at \( r=0
\) (note the number of divisions by \( r\) in the RHS of eqs. (41) and (42) and at \(\theta = 0, \pi/2\) (note the divisions by \(sin(\theta)\) in eq. (42)). Thus the application of the MOL typically
requires analysis based on the experience and creativity of the analyst (i.e., it is generally not a mechanical procedure from beginning to end).
The complexity of the numerical solution of higher dimensional PDEs in various coordinate systems prompts the question of why a particular coordinate system would be selected over others. The
mathematical answer is that the judicious choice of a coordinate system facilitates the implementation of the BCs in the numerical solution.
The answer based on physical considerations is that the coordinate system is selected to reflect the geometry of the problem system. For example, if the physical system has the shape of a cylinder,
cylindrical coordinates would be used. This choice then facilitates the implementation of the BC at the exterior surface of the physical system (exterior surface of the cylinder). However, this can
also lead to complications such as the \(r=0\) singularities in eq. (41) (due to the variable \(1/r\) and \(1/r^2\) coefficients). The resolution of these complications is generally worth the effort
rather than the use of a coordinate system that does not naturally conform to the geometry of the physical system. If the physical system is not shaped in accordance with a particular coordinate
system, i.e., has an irregular geometry, then an approximation to the physical geometry is used, generally termed body fitted coordinates.
As a concluding point, we might consider the origin of the name method of lines. If we consider eqs. (34), integration of this ODE system produces the solution \( u_2(t), u_3(t), \ldots u_M(t)\)
(note \(u_1(t)=u_b\ ,\) a constant, from BC (3)). We could then plot these functions in an \(x-u(x,t)\) plane as a vertical line at each \(x\) (\(i=2,3, \dots M\)) with the height of each line equal
to \(u(x_i,t)\ .\) In other words, the plot of the solution would be a set of vertical parallel lines suggesting the name method of lines (Schiesser, 1991, Schiesser and Griffiths, 2009).
Sources of ODE/DAE Integrators and MOL Software
One of the very useful aspects of MOL is that it enables tried and tested ODE/DAE numerical routines to be used, many of which are in the public domain. The following sources are a good starting
point for these routines. The LSODE and VODE series of ODE/DAE integrators [2s], DASSL [2s] Radau5 and MEBDFDAE [6s] for DAEs and the SUNDIALS library [5s] are widely used in MOL analysis. Some
challenging test problems for ODE/DAE routines as well as some efficient codes and an abundance of illustrative test results can be found on [6s]. Software for MOL analysis is available from [7s] and
[8s]. Additionally, routines that can be called from MOL codes are available to perform a variety of complementary computations (e.g., functional approximation by interpolation, evaluation of
integrals, maximization and minimization in optimization associated with the solution of PDEs) [1s,3s,4s,8s]
Example Implementation
Part II of this article deals with an example implementation.
The authors would like to thank the anonymous reviewers for their positive and constructive comments.
• Kreiss, H-O. and J. Lorenz (2004), Initial-Boundary Value Problems and the Navier-Stokes Equations, SIAM, Philadelphia.
• Schiesser, W. E. and G. W. Griffiths (2009), A Compendium of Partial Differential Equation Models: Method of Lines Analysis with Matlab, Cambridge University Press; see also http://
• Shu, C-W (1998), Essentially Non-oscillatory and Weighted Essential Non-oscillatory Schemes for Hyperbolic Conservation Laws. In: Cockburn, B., C. Johnson, C-W. Shu and E. Tadmor (Eds.), Advanced
Numerical Approximation of Nonlinear Hyperbolic Equations, Lecture Notes in Mathematics, vol 1697. Springer, pp325-432.
• Wesseling, P. (2001), Principles of Computational Fluid Dynamics, Springer, Berlin
Internal references
See also
Differential-Algebraic Equations, Numerical Analysis, Partial Differential Equations, Wave Equation
|
{"url":"http://www.scholarpedia.org/article/Method_of_lines","timestamp":"2014-04-18T02:58:14Z","content_type":null,"content_length":"85435","record_id":"<urn:uuid:9aef1edf-6f39-4dd1-81d3-d4207ff60963>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00044-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Simulated annealing
From Encyclopedia of Mathematics
For Algorithm, computational complexity of an). In practice, exact algorithms are used for solving only moderately sized problem instances. This results in the development of heuristic optimization
techniques which provide good quality solutions in a reasonable amount of computational time. One such popular technique is simulated annealing, which has been widely applied in both discrete and
continuous optimization problems [a1], [a6], [a11]. Simulated annealing is a stochastic search method modeled according to the physical annealing process which is found in the field of
thermodynamics. Annealing refers to the process of a thermal system initially melting at high temperature and then cooling slowly by lowering the temperature until it reaches a stable state (ground
state), in which the system has its lowest energy. S. Kirkpatrick, C.D. Gelatt and M.P. Vecchi [a7] initially proposed an effective connection between simulated annealing and combinatorial
optimization, based on original work by N. Metropolis, A. Rosenbluth, M. Rosenbluth, A. Teller, and E. Teller [a8].
The main advantage of the simulated annealing algorithm is its ability to escape from local optima by using a mechanism which allows deterioration in the objective function value. That is, in the
optimization process, simulated annealing accepts not only better-than-previous solutions, but also worse-quality solutions controlled probabilistically through the temperature parameter
There are many factors that have a strong impact on the performance of the simulated annealing algorithm:
The initial temperature
The thermal equilibrium. This is the condition in which further improvement in the solution cannot be expected with high probability.
The annealing schedule, which determines in what point of the algorithm and by how much the temperature
Now, consider a minimization process. Let Boltzmann equation). Simulated annealing is presented below in pseudo-code:
PROCEDURE simulated annealing()
Generate randomly an initial solution;
DO thermal equilibrium not reached
Generate a neighbour state randomly;
update current state
with probability
RETURN(solution with the lowest energy)
END simulated annealing;A pseudo-code for a simulated annealing procedure
The following example (the quadratic assignment problem) illustrates the basic principles of simulated annealing in combinatorial optimization. The quadratic assignment problem is defined as follows:
Given a set
In the context of location theory one uses the quadratic assignment problem formulation to model the problem of allocating [a10].
Let [a2], [a12] can be described with the following steps:
Start with a feasible solution (permutation). Make an exchange between two randomly selected permutation elements (a
While uniform distribution
The system remains at stage
The procedure stops when all the temperatures in the annealing schedule have been used, i.e. when
Simulated annealing has been used to solve a wide variety of combinatorial optimization problems, such as graph partitioning and graph colouring [a3], [a4], VLSI design [a7], quadratic assignment
problems [a2], image processing [a5] and many others. In addition, implementations of simulated annealing in parallel environments have recently appeared, with applications in cell placement
problems, traveling-salesman problems, quadratic assignment problems, and others [a9]. General references on simulated annealing can be found in [a1] and in [a11].
[a1] E.H.L. Aarts, J.H.M. Korst, "Simulated annealing and Boltzmann machines" , Wiley (1989)
[a2] R.E. Burkard, F. Rendl, "A thermodynamically motivated simulation procedure for combinatorial optimization problems" European J. Operations Research , 17 (1984) pp. 169–174
[a3] D.S. Johnson, C.R. Aragon, L.A. McGeoch, C. Schevon, "Optimization by simulated annealing: an experimental evaluation. Part I: graph partitioning" Operations Research , 37 (1989) pp. 865–892
[a4] D.S. Johnson, C.R. Aragon, L.A. McGeoch, C. Schevon, "Optimization by simulated annealing: an experimental evaluation. Part II: graph coloring and number partitioning" Operations Research , 39
(1991) pp. 378–395
[a5] S. Geman, D. Geman, "Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images" IEEE Trans. Patern Analysis and Machine Intelligence , 6 (1984) pp. 721–741
[a6] R. Horst, P.M. Pardalos, "Handbook of global optimization" , Kluwer Acad. Publ. (1995)
[a7] S. Kirkpatrick, C.D. Gelatt, M.P. Vecchi, "Optimization by simulated annealing" Science , 220 (1989) pp. 671–680
[a8] N. Metropolis, A. Rosenbluth, M. Rosenbluth, A. Teller, E. Teller, "Equation of state calculations by fast computing machines" J. Chem. Phys. , 21 (1953) pp. 1087–1092
[a9] P.M. Pardalos, L.S. Pitsoulis, T.D. Mavridou, M.G.C. Resende, "Parallel search for combinatorial optimization: genetic algorithms, simulated annealing, tabu search and GRASP" , Lecture Notes in
Computer Science , 980 , Springer (1995) pp. 317–331
[a10] "Quadratic assignment and related problems" P.M. Pardalos (ed.) H. Wolkowicz (ed.) , Discrete Math. and Theor. Comput. Sci. , 16 , Amer. Math. Soc. (1994)
[a11] P.J.M. van Laarhoven, E.H.L. Aarts, "Simulated annealing, theory and practice" , Kluwer Acad. Publ. (1987)
[a12] M.R. Wilhelm, T.L. Ward, "Solving quadratic assignment problems by simulated annealing" IEEE Trans. , 19 (1987) pp. 107–119
How to Cite This Entry:
Simulated annealing. P.M. PardalosT.D. Mavridou (originator), Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Simulated_annealing&oldid=17162
This text originally appeared in Encyclopedia of Mathematics - ISBN 1402006098
|
{"url":"http://www.encyclopediaofmath.org/index.php/Simulated_annealing","timestamp":"2014-04-20T18:24:18Z","content_type":null,"content_length":"28651","record_id":"<urn:uuid:70cdb9f1-43b5-449e-b350-b58f01d76f1d>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00439-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Patent application title: METHOD AND SYSTEM FOR FORMING A PANORAMIC IMAGE OF A SCENE HAVING MINIMAL ASPECT DISTORTION
Sign up to receive free email alerts when patent applications with chosen keywords are published SIGN UP
A panoramic image is generated from a sequence of input frames captured by a camera that translates relative to a scene having at least two points at different distances from the camera. A processor
(13) is responsive to optical flow between corresponding points in temporally different input frames for computing flow statistics for at least portions of some of the input frames and for computing
respective stitching costs between some of the portions and respective neighboring portions thereof. A selection unit (18) selects a sequence of portions and respective neighboring portions that
minimizes a cost function that is a function of the flow statistics and stitching costs. A stitching unit (21) stitches the selected portions and respective neighboring portions so as to form a
panoramic image of the scene, which may then be displayed or post-processed.
A computer-implemented method for forming a panoramic image of a scene from a sequence of input frames captured by a camera having an optical center that translates relative to the scene, said scene
having at least two points at different distances from a path of said optical center, said method comprising:obtaining an optical flow between corresponding points in temporally different input
frames;using said optical flow to compute flow statistics for at least portions of some of said input frames;using said optical flow to compute respective stitching costs between some of said
portions and respective neighboring portions thereof;identifying a sequence of selected portions and respective neighboring portions that minimizes a cost function that is a function of the flow
statistics and the stitching costs; andstitching the selected portions and respective neighboring portions so as to form a panoramic image of the scene.
The method according to claim 1, wherein obtaining the optical flow includes computing the optical flow directly from the input frames.
The method according to claim 1, wherein obtaining the optical flow includes computing the optical flow from data indicative of motion of the camera and respective depth data of pixels in the input
The method according to claim 1, including rectifying the optical flow by warping at least some of the input frames prior to computing the flow statistics and the stitching costs.
The method according to claim 1, including scaling at least some of the input frames according to the optical flow or depth data prior to computing the stitching costs.
The method according to claim 1, including normalizing the optical flow so as derive said depth data prior to computing the stitching costs and the stitching costs.
The method according to claim 1, wherein at least one of said portions is defined by different strips of image data in two or more input frames.
The method according to claim 7, wherein the flow statistics include variance of optical flow in one of said strips.
The method according to claim 1, wherein stitching includes warping at least one of the selected portions.
The method according to claim 1, wherein the optical flow is estimated from previous frames or based on an arbitrary estimated optical flow.
The method according to claim 1, wherein the sequence of input frames is captured by more than one camera each having an optical center that translates relative to a different scene or a different
region of a scene.
The method according to claim 11, including:capturing multiple video sequences by respective users at different locations;uploading the video sequences to a central server;using the uploaded video
sequences to generate respective panoramic mosaics at the central server, andstoring the panoramic mosaics for viewing over the Internet.
A computer program comprising computer program code means for performing the method of claim 1 when said program is run on a computer.
A computer program as claimed in claim 14 embodied on a computer readable medium.
A system for forming a panoramic image of a scene from a sequence of input frames captured by a camera having an optical center that translates relative to the scene, said scene having at least two
points at different distances from a path of said optical center, said system comprising:a memory for storing an optical flow between corresponding points in temporally different input frames;a
processor coupled to said memory and responsive to said optical flow for computing flow statistics for at least portions of some of said input frames and for computing respective stitching costs
between some of said portions and respective neighboring portions thereof;a selection unit coupled to the processor for selecting a sequence of portions and respective neighboring portions that
minimizes a cost function that is a function of the flow statistics and the stitching costs; anda stitching unit coupled to the selection unit for stitching the selected portions and respective
neighboring portions so as to form a panoramic image of the scene.
The system according to claim 15, further including a rectification unit coupled to the processor for rectifying the optical flow.
The system according to claim 16, wherein the rectification unit includes a pre-warping unit that warps at least some of the input frames.
The system according to claim 15, further including a scaling unit coupled to the processor for scaling at least some of the input frames according to the optical flow or depth data.
The system according to claim 15, further including a post-warping unit coupled to the selection unit for warping at least one of the selected portions.
The system according to claim 15, further including a display unit coupled to the stitching unit for displaying the panoramic image.
The system according to claim 15, including:a communications network for uploading to a central server respective video sequences captured by multiple cameras disposed at different locations, said
central server being configured to generate from the uploaded video sequences respective panoramic mosaics and to store the panoramic mosaics for viewing over the communications network via one or
more display devices.
RELATED APPLICATIONS [0001]
This application claims benefit of U.S. provisional applications Ser. No. 60/894,946 filed Mar. 15, 2007 and 60/945,338 filed Jun. 20, 2007 whose contents are included herein by reference.
FIELD OF THE INVENTION [0002]
The invention relates generally to the field of image and video mosaicing and in particular to the presentation of mosaic images having perceived depth.
PRIOR ART [0003]
Prior art references considered to be relevant as a background to the invention are listed below and their contents are incorporated herein by reference. Additional references are mentioned in the
above-mentioned U.S. provisional applications Nos. 60/894,946 and 60/945,338 and their contents are incorporated herein by reference. Acknowledgement of the references herein is not to be inferred as
meaning that these are in any way relevant to the patentability of the present invention. Each reference is identified by a number enclosed in square brackets and accordingly the prior art will be
referred to throughout the specification by numbers enclosed in square brackets.
[1] A. Agarwala, M. Agrawala, M. Cohen, D. Salesin, and R. Szeliski. Photographing long scenes with multi-viewpoint panoramas. ACM Trans. Graph., 25(3):853-861, 2006.
[2] J. Bergen, P. Anandan, K. Hanna, and R. Hingorani. Hierarchical model-based motion estimation. In ECCV, pages 237-252, 1992.
[3] S. Birchfield and C. Tomasi. A pixel dissimilarity measure that is insensitive to image sampling. PAMI, 20(4):401-406, 1998.
[4] S. Birchfield and C. Tomasi. Multiway cut for stereo and motion with slanted surfaces. In ICCV, volume 1, pages 489-495, 1999.
[5] Y. Boykov, O. Veksler, and R. Zabih. Fast approximate energy minimization via graph cuts. PAMI, 23(11):1222-1239, 2001.
[6] S. Gortler, R. Grzeszczuk, R. Szeliski, and M. Cohen. The lumigraph. SIGGRAPH, 30:43-54, 1996.
[7] K. Hanna. Direct multi-resolution estimation of ego-motion and structure from motion. In MOTION91, pages 156-162, 1991.
[8] R. Hartley and A. Zisserman. Multiple View Geometry. Cambridge University Press, second edition, 2004.
[9] L. Hong and G. Chen. Segment-based stereo matching using graph cuts. In CVPR, volume 1, pages 74-81, Los Alamitos, Calif., USA, 2004.
[10] M. Irani, P. Anandan, and M. Cohen. Direct recovery of planar-parallax from multiple frames. PAMI, 24(11):1528-1534, November 2002.
[11] M. Irani, B. Rousso, and S. Peleg. Detecting and tracking multiple moving objects using temporal integration. In ECCV'92, pages 282-287, 1992.
[12] V. Kolmogorov and R. Zabih. Computing visual correspondence with occlusions via graph cuts. In ICCV, volume 2, pages 508-515, July 2001.
[13] M. Levoy and P. Hanrahan. Light field rendering. SIGGRAPH, 30:31-42, 1996.
[14] S. Ono, H. Kawasaki, K. Hirahara, M. Kagesawa, and K. Ikeuchi. Ego-motion estimation for efficient city modeling by using epipolar plane range image. In ITSWC2003, November 2003.
[15] A. Rav-Acha and S. Peleg. A unified approach for motion analysis and view synthesis. In Second IEEE International Symposium on 3D Data Processing Visualization, and Transmission (3DPVT),
Thessaloniki, Greece, September 2004.
[16] A. Rav-Acha, Y. Shor, and S. Peleg. Mosaicing with parallax using time warping. In Second IEEE Workshop on Image and Video Registration, Washington, D.C., July 2004.
[17] A. Roman, G. Garg, and M. Levoy. Interactive design of multi-perspective images for visualizing urban landscapes. In IEEE Visualization 2004, pages 537-544, October 2004.
[18] A. Roman and H. P. A. Lensch. Automatic multiperspective images. In Proceedings of Eurographics Symposium on Rendering, pages 161-171, 2006.
[19] M. Shi and J. Y. Zheng. A slit scanning depth of route panorama from stationary blur. In CVPR'05, volume 1, pages 1047-1054, 2005.
[20] Y. Wexler and D. Simakov. Space-time scene manifolds. In ICCV'05, volume 1, pages 858-863, 2005.
[21] J. Y. Zheng. Digital route panorama. IEEE Multimedia, 7(2):7-10, April-June 2000.
[22] Z. Zhu, E. Riseman, and A. Hanson. Generalized parallel-perspective stereo mosaics from airborne videos. PAMI, 26(2):226-237, February 2004.
[23] A. Zomet, D. Feldman, S. Peleg, and D. Weinshall. Mosaicing new views: The crossed-slits projection. PAMI, pages 741-754, June 2003.
[24] U.S. Pat. No. 7,006,124. Generalized panoramic mosaic.
[25] U.S. Pat. No. 6,665,003, System and method for generating and displaying panoramic images and movies.
[26] US2007003034 Apparatus and method for capturing a scene using staggered triggering of dense camera arrays.
[27] Chris Buehler, Michael Bosse, Leonard McMillan, Steven Gortler, Michael Cohen, Unstructured lumigraph rendering, SIGGRAPH 2001.
BACKGROUND OF THE INVENTION [0031]
Many mosaicing applications involve long image sequences taken by translating cameras scanning a long scene. Thus applications are known that include a video camera mounted on a vehicle scanning city
streets [14,1,17,21,19], or a video camera mounted on a low altitude aircraft scanning a terrain [22]. Earlier versions of our work on ego-motion computation for sideways moving cameras were proposed
in [16,15]. They had initialization and robustness problems that are addressed in this paper. In addition, they did not address the computation of dense depth maps and the creation of undistorted
In [1,17] methods are described for creating a multi-perspective panorama. These methods recover camera motion using structure-from-motion [8], matching features between pairs of input images.
Matched points are used to recover the camera parameters as well as a sparse cloud of 3D scene points, recovery that is much easier when fisheye lens are used as in [1]. Feature points as used in the
above-described approaches will be preferred in clean, high contrast, and unambiguous imagery. However, direct methods may be preferred when feature points are rare, ambiguous, or noisy.
Image mosaicing can be regarded as a special case of creating a model of the observed scene. Having multiple images of the scene theoretically enables both the computation of camera parameters and
the geometric and photometric structure of the scene. As the mosaicing process is much simpler than the creation of a scene model, it is likely to work in more cases. Mosaicing works especially well
when long scenes are involved, with camera motion only in one direction. Even when a scene model has successfully been constructed, the generation of a very long panoramic image of the entire scene,
having minimum distortion, is a challenging problem.
Also known in this field is X-Slits mosaicing [23] one of whose declared benefits is its reduced distortion compared to pushbroom projection. But for mosaics that are spatially very long, the X-Slits
images become very close to pushbroom projection with its significant distortions. Attempts to reduce the distortion of the spatially long mosaic were presented in [17,18] using different X-Slits
projections for different scene segments. Also relevant is [20], where a mosaic image is generated by minimizing a stitching cost using dynamic programming. Other papers on mosaicing of long scenes
include [19,21], where long mosaics are generated from a narrow slit scanning a scene. In these papers the camera is assumed to move slowly in a roughly constant velocity, and the scene depth can be
estimated from stationary blur. In [2] a long panorama is stitched from a sparse set of still images, mainly addressing stitching errors.
Panoramic images of long scenes, generated from images taken by a translating camera, are normally distorted compared to perspective images. When large image segments are used for stitching a
panoramic image, each segment is perspective but the seams between images are apparent due to depth parallax. When narrow strips are used the panoramic image is seamless, but its projection is
normally pushbroom, having aspect distortions. The distortions become very significant when the variations in scene depth are large compared to the distance from the camera.
US 2007/003034 (Wilburn et al.) [26] discloses an apparatus and method for video capture of a three-dimensional region of interest in a scene using an array of video cameras positioned for viewing
the three-dimensional region of interest in the scene from their respective viewpoints. A triggering mechanism is provided for staggering the capture of a set of frames by the video cameras of the
array. A processing unit combines and operates on the set of frames captured by the array of cameras to generate a new visual output, such as high-speed video or spatio-temporal structure and motion
models, that has a synthetic viewpoint of the three-dimensional region of interest. The processing involves spatio-temporal interpolation for determining the synthetic viewpoint space-time
trajectory. Wilburn et al. do not generate panoramic images, but only new perspective images. Also, all cameras in the array are synchronized, and combination is done only on a set of frames captured
SUMMARY OF THE INVENTION [0037]
In accordance with one aspect of the invention, there is provided a computer-implemented method for forming a panoramic image of a scene from a sequence of input frames captured by a camera having an
optical center that translates relative to the scene, said scene having at least two points at different distances from a path of said optical center, said method comprising:
obtaining an optical flow between corresponding points in temporally different input frames;
using said optical flow to compute flow statistics for at least portions of some of said input frames;
using said optical flow to compute respective stitching costs between some of said portions and respective neighboring portions thereof;
identifying a sequence of selected portions and respective neighboring portions that minimizes a cost function that is a function of the flow statistics and the stitching costs; and
stitching the selected portions and respective neighboring portions so as to form a panoramic image of the scene.
In accordance with another aspect of the invention, there is provided a system for forming a panoramic image of a scene from a sequence of input frames captured by a camera having an optical center
that translates relative to the scene, said scene having at least two points at different distances from a path of said optical center, said system comprising:
a memory for storing an optical flow between corresponding points in temporally different input frames;
a processing unit coupled to said memory and responsive to said optical flow for computing flow statistics for at least portions of some of said input frames and for computing respective stitching
costs between some of said portions and respective neighboring portions thereof;
a selection unit coupled to the processing unit for selecting a sequence of portions and respective neighboring portions that minimizes a cost function that is a function of the flow statistics and
the stitching costs; and
a stitching unit coupled to the selection unit for stitching the selected portions and respective neighboring portions so as to form a panoramic image of the scene.
The invention thus provides a direct method and system to compute camera motion and dense depth that are needed for the above-mentioned mosaicing applications. The computed motion and depth
information are used for two visualization approaches. (i) a new Minimal Aspect Distortion (MAD) mosaicing of the scene. (ii) an immersive 3D visualization, allowing interactive changes of viewpoint
and viewing direction, using X-Slits [23].
In accordance with another aspect of the invention, there is provided a system allowing panoramic mosaics to be generated from multiple videos uploaded by multiple people. The system allows people to
take videos at different locations using conventional cameras or cellular telephones having video features and to upload their videos over the Internet to an application web server. The web server is
programmed according to the invention to construct a model of the world by creating panoramic mosaics by stitching selected portions of the uploaded videos and optionally to display the generated
world model on a display device.
While the proposed computation of camera motion is accurate, depth may not be computed accurately for many scene points. Occlusions, low contrast, varying illumination, and reflections will always
leave many scene points with no depth values, or even with a wrong depth. While model based rendering may fail when depth is not accurate everywhere, image based rendering such as MAD mosaicing and
X-Slits projection use only statistics of depth, and can overcome these problems.
The proposed approach can use only image data, and does not require external motion information. If motion and/or depth information is available, e.g. GPS or laser scanners, [14], it could be used to
replace or enhance the motion computation part and/or the flow statistics.
For depth computation we use a variation of the iterative graph cuts approach as described in [12], which is based on [5]. Instead of extracting constant disparities, we segment the image into planar
surfaces. Combining the graph cuts approach with planar surfaces was described in [4,9]. The main differences between [4] and the method according to the invention is in the initialization of the
planar surfaces and in the extension of the two-frames algorithm to long un-stabilized sequences. A more detailed discussion can be found in Section 7.
The effectiveness of the proposed approach is demonstrated by processing several videos sequences taken from moving cars and from helicopters. Using X-Slits, different views from different virtual
viewing locations are presented. The new MAD mosaicing is shown to significantly reduce aspect distortions when applied to long scenes having large depth variability.
The motion and depth computation according to the present invention is closely related to many intensity based ("direct") ego motion computations such as [10]. While these methods recover
unrestricted camera motions, they are relevant only for short sequences since they require an overlapping region to be visible in all frames. For mosaicing long sequences, a robust motion computation
method is required, which can also handle degenerate cases where general motion can not be recovered. In particular, the algorithm should give realistic mosaic images even when no 3D information is
available in large portions of the scene.
In Section 4 we present "MAD Mosaicing", which is a long mosaic having minimum aspect distortions. It is based on the observation that only the perspective projection is undistorted in a scene with
large depth variations, while for a scene at a constant depth almost any projection can give an undistorted mosaic. MAD mosaicing changes the panoramic projection depending on scene structure at each
location, minimizing two costs: a distortion cost and a stitching cost. Minimization is done using dynamic programming.
An alternative to MAD mosaicing is the X-Slits projection, as described in Section 5. Even though X-Slits images are more distorted, they enable the viewpoint to be controlled and fly-through
sequences to be created. In this case the depth information is mostly needed when handling large displacements, when the stitching process requires better interpolation.
Knowledge of camera motion and dense depth is needed for mosaicing, and this information can be provided by any source. The invention proposes a robust approach, alternating between direct ego-motion
computation (Section 2) and depth computation (Section 1). This results in a robust method to compute both motion and depth, which can overcome large disparities, moving objects, etc. A general
description of the alternation between motion and depth computation is described in Section 3.
By limiting our analysis to the most common case of a camera moving mostly sideways, motion and depth can be computed robustly for otherwise ambiguous sequences. We can robustly handle cameras
mounted on moving cars and scanning city streets, down-looking cameras scanning the ground from a low altitude aircraft, etc. The camera is allowed to rotate, as rotations are common in such cases
due to the vibrations of the vehicle, and the focal length of the camera lens may also change.
The computation of camera motion uses a simple variation of the Lucas-Kanade method [2] that takes into account the estimated scene depth. Given the estimated motion, depth is computed using a graph
cuts approach to detect planar surfaces in the scene. In long image sequences, planar surfaces that were computed for previous frames are used as priors for the new frames, increasing robustness.
Additional robustness is achieved by incorporating a flexible pixel dissimilarity measure for the graph cuts method. The variations of motion and depth computations that allowed handling inaccurate
inputs are described in Sections 2 and 1.
A note about notation: We use the terms "depth" and "inverse depth" when we actually refer to "normalized disparity" meaning the horizontal disparity due to translation, divided by the horizontal
camera translation. This normalized disparity is proportional to the inverse of the depth. The exact meaning will be clear from the context.
BRIEF DESCRIPTION OF THE DRAWINGS [0061]
In order to understand the invention and to see how it may be carried out in practice, embodiments will now be described, by way of non-limiting example only, with reference to the accompanying
drawings, in which:
FIGS. 1a to 1e are pictorial representations illustrating the advantage of using flexible graph cuts;
FIGS. 2a to 2c are pictorial representations showing use of regular graph cuts approach with constant disparities to obtain an initial over-segmentation of image disparities;
FIGS. 3a and 3b are schematic diagrams showing respectively the motion and depth computations;
FIGS. 4b to 4d are intermediate depth maps computed during the iterative depth and motion computations process from an original frame shown in FIG. 4a;
[0066]FIG. 5
is a schematic diagram showing work-flow of the initialization stage;
[0067]FIG. 6
is a schematic diagram showing work-flow of the interleaving process;
[0068]FIG. 7a
shows pictorially a general cut C(t) through the space-time volume;
FIG. 7b shows pictorially the same cut depicted in
FIG. 7a
in the spatially aligned space-time volume;
[0070]FIG. 8
depicts how the panorama is stitched from a collection of rectangular strips;
FIGS. 9a and 9b are diagrams showing respectively selection of nodes and graph construction using X-Slits projection;
FIG. 10 shows how disparity variance of columns in the non-aligned x-t volume may be used to measure depth;
FIGS. 11a and 11b show respectively a segment from a first long pushbroom mosaic and a corresponding long MAD mosaic generated according to the invention;
FIGS. 11c and 11d show respectively a segment from a second long pushbroom mosaic and a corresponding long MAD mosaic generated according to an embodiment of the invention;
FIGS. 12a, 12b and 12c show, respectively, a MAD mosaic of a street in Jerusalem; a graph showing C(t), the left strip boundary for each frame; and the depth map of the mosaic constructed according
to an embodiment of the invention;
FIGS. 13a and 13b are schematic representations relating to the generation of new views;
FIGS. 14a and 14b are pictorial representations showing a street view (X-slits) obtained by stitching without and with depth scaling, respectively;
FIGS. 15a and 15b are pictorial representations showing respectively a panoramic image of a boat sequence and the corresponding panoramic inverse depth map;
FIG. 16 shows frames of new synthetic views generated from the same boat sequence using X-Slits;
FIG. 17a show a panoramic view of a scene, with its corresponding inverse depth and motion parameters (Tx and accumulated tilt and image rotations);
FIGS. 17b and 17c show respectively an image frame from the same scene and its Inverse depth map;
FIGS. 17d and 17e show respectively another image frame from the same scene and its Inverse depth map;
FIGS. 18a and 18c are simulated views synthesized to appear from a distance similar to the camera location, while FIGS. 18b and 18d are synthesized from closer into the scene;
[0084]FIG. 19
is a flow diagram showing the principal operations carried out by a method according to the invention;
[0085]FIG. 20
is a block diagram showing functionality of a system according to the invention; and
FIG. 21 is a pictorial representation of a system allowing composite panoramic movies to be generated according to the invention from multiple component sources.
1. Graph Cuts for Depth Computation (Assuming Known Motion)
Given a stereo image pair, depth can be computed by finding for each pixel in the left image its corresponding pixel in the right image. Many methods improve stereo computation by incorporating the
depth consistency between neighboring points. A method that gives excellent results uses a graph cuts approach to compute depth from stereo [5]. We start by briefly describing the basic formulation
of the graph cuts approach. In Sections 1.1 and 1.2 we describe our variants on this formulation: defining a flexible data penalty, and using planar surfaces instead of constant disparities.
Let L be the set of pixels in the left image. In the common graph cuts approach to stereo each pixel p in the left image is labeled with its disparity f
. A desirable labeling f usually minimizes the Potts energy [8]:
( f ) = p .di-elect cons. L C p ( f p ) + p , q .di-elect cons. N V p , q δ ( f p ≠ f q ) , ( 1 ) ##EQU00001##
where C[p]
) is a cost for the pixel p to have the disparity f
based on image pair similarity, N denotes the set of pixel pairs in the left image which are in a neighborhood and δ(•) is 1 if its argument is true and 0 otherwise. Each V
represents a penalty for assigning different disparities to neighboring pixels p and q. The value of the penalty V
is smaller for pairs {p,q} with larger intensity differences |I
In stereo computations using the graph cuts approach, it is assumed that disparities can only have a finite set of values [0, . . . , d
]. Minimizing the energy in Eq. (1) is still NP-hard, but in [5] it was shown that using a set of "α-expansion" moves, each finding a minimal cut in a binary graph, can give good results that are
very close to the global optimum.
Following [5] improvements to graph-cuts stereo were introduced (such as [12]). These include better handling of occlusions, and symmetrical formulation of stereo. While we used the basic formulation
of [5], the newer approaches can be incorporated into the proposed method if needed.
It should be noted that since we pass the depth values from frame to frame, we need a consistent description that is independent of the relative motion between frames. Therefore, after computing the
disparities, we normalize them by the camera translation T
between the two frames (In this section, we assume that the camera motion is given).
1.1 Flexible Pixel Dissimilarity Measure for Graph Cuts
Depth computation using iterative the graph cuts approach, as most other stereo computation methods, assumes one-dimensional displacements between input images. This is a valid assumption when
accurate image rectification is possible. Rectification needs accurate camera calibration, where both internal and external camera parameters are known (or can be accurately computed). However, when
the input frames are part of an uncalibrated video sequence, the computed motions usually accumulate small errors in a few frames. In addition, the internal parameters of the camera are not always
known. As a result, methods that assume accurate calibration and rectification fail for such video sequences. Moreover, the presence of small sub-pixel miss-registrations between frames not only
reduces the accuracy of the computed depth, but usually results in a totally erroneous depth computation.
A possible way to overcome this problem is by computing a two-dimensional optical flow rather than a one-dimensional optical flow. This approach increases the size of the graph and the computational
complexity. We therefore keep the original structure of the graph using only horizontal displacements, but change the dissimilarity measure to be more tolerant for small vertical displacements.
1.1.1. Allowing 2D Sub-Pixel Displacements
The first step towards a flexible graph cuts approach is to allow horizontal and vertical sub-pixel displacements. To do so we extend an idea suggested in [3], where pixel dissimilarity takes into
account image sampling. Let (x,y) be a coordinate of a pixel in the image I
and let f
be some candidate disparity. Instead of using the pixel dissimilarity C
)|, [3] suggests the following pixel dissimilarity:
C p
( f p ) = min - 1 2 ≦ s ≦ 1 2 I 1 ( x , y ) - I 2 ( x + f p + s , y ) . ( 2 ) ##EQU00002##
Eq. (2) is more accurate due to the fact that only a discrete set of disparities is possible. When the sampling of the image values at sub-pixel locations is computed using a linear interpolation,
the above dissimilarity measure can be computed efficiently by:
(x,y)}, (3)
where v[max]
and v
are respectively the maximum and minimum of the two pixel values
{ I 2 ( x + f p ± 1 2 , y ) } . ##EQU00003##
To allow sub-pixel vertical displacements, we will further change the range of the target pixel (x+f
) from a 1D interval to a 2D region:
C p
( f p ) = min - 1 2 ≦ s , r ≦ 1 2 I 1 ( x , y ) - I 2 ( x + f p + s , y + r ) , ( 4 ) ##EQU00004##
which can also be efficiently computed
, in a similar way to the one dimensional case (Eq. (3)).
1.1.2. Handling Larger Vertical Displacements
The next step is to handle larger vertical displacements. This is most common when having small rotation about the optical axis, in which case the pixels at the left and right boundaries have large
vertical displacements. Allowing large pixel displacements without any penalty will reduce the accuracy of the 1D search that overcomes the aperture problem. Therefore, unlike the sub-pixel
displacements, larger displacements have some penalty. Considering all possible vertical displacements of up to a single pixel, gives our final pixel dissimilarity measure:
C p
( f p , l ) = min - 1 2 ≦ s , r ≦ 1 2 I 1 ( x , y ) - I 2 ( x + f p + s , y + r + l ) + l K , C p ( f p ) = min l .di-elect cons. { - 1 , 0 , 1 } C p ( f p , l ) , ( 5 ) ##EQU00005##
where l is a vertical displacement and K is a penalty factor
(we used K=5). Note that for a very large K, Eq. (5) reduces to the sub-pixel case in Eq. (4).
FIGS. 1a to 1e demonstrate the advantage of using the flexible graph cuts approach. FIG. 1a shows one of two inputs frames. The disparities computed by the flexible and the regular graph cuts
approaches on the original frames are shown in FIGS. 1b and 1c, respectively. The disparities computed by the flexible and the regular graph cuts approaches after rotating the right frame by one
degree are shown in FIGS. 1d and 1e, respectively, unknown depths being marked in black. It can be seen that the flexible and regular graph cuts approaches have similar results when using two
calibrated frames, but when one of the images is slightly rotated, the flexible graph cuts approach successfully recovers the disparities for most of the pixels, while regular graph cuts approach
1.2 A Planar Representation of the Scene Using Graph Cuts
In the proposed framework depth is computed by the graph cuts approach only for a partial set of frames, and is propagated to the rest of the frames by depth warping. In order to propagate depth, it
should be accurate and piecewise continuous. The widely used graph cuts methods give piecewise constant depth values. As a result, they tend to over-segment the image and do not obtain sub-pixel
Instead, we compute a piecewise planar structure, as also suggested by [9, 4]. The main differences between [4] and our method is in the initialization of the planes and in the extension of the
two-frames algorithm to long un-stabilized sequences. A detailed discussion of important differences can be found in Section 7.
There are several advantages in using a planar representation of the depth rather than discrete disparity values: (1) The piecewise planar model gives a better representation of the scene especially
in urban areas. (2) The planar disparity surfaces can be estimated with sub-pixel accuracy, and therefore can be used to predict the depth even at far away frames without losing its accuracy. (3)
Description of the depth map with planar surfaces requires a smaller number of segments compared to constant depths. Having a smaller number of more accurate segments significantly reduces the number
of pixels marked as occlusions due to quantization errors.
The depth of a planar scene surface can be denoted as a'X+b'Y+c'Z+d'=0 in the coordinate system of frame I
. Assuming a perspective projection (x=fX/Z and y=fY/Z, where f is the focal length) and multiplying the surface equation by
f d
' Z ##EQU00006##
' d ' x 1 + b ' d ' y 1 + fc ' d ' + f Z = 0. ( 6 ) ##EQU00007##
Dividing by d' is valid as d'=0 only for planes that pass through the focal point, and these are planes that the camera does not see. Assuming a horizontal camera translation T
between frames I
and I
, the disparity between the corresponding pixels x
and x
1 - x 2 = f T x Z ##EQU00008##
so the normalized disparity x
1 - x 2 T x ##EQU00009##
equals the inverse depth f
/Z. From Eq. 6 it can be seen that the normalized disparity (or inverse depth) of a planar surface in the scene can be expressed as an affine function in the image coordinates:
1 - x 2 T x = f Z = a ' d ' x 1 - b ' d ' y 1 - fc ' d ' . ( 7 ) ##EQU00010##
This formulation suggests that planar surfaces in the world induce affine disparities between the images (and vice versa). We will refer to planes in the world and "planar" disparities in the image
in the same manner.
The process of computing planar disparities using graph-cuts can be described schematically as follows:
1. Run regular graph-cuts with constant disparities in the range of [0, . . . , d
2. Find the parameters of a new plane and add them to the planes list.
3. Run the graph cuts method with planar labels from the list (described in Section 1.2.2).
4. Optionally, remove unused planes and return to Step 2.
This general scheme will now be described in a more detail.
1.2.1 Finding Candidate Planes (Steps 1-2)
Our purpose is to determine planes that will be used as labels in the planes-based graph-cuts (Step 3 and Section 1.2.2). A naive way to do so would be to select representatives of all planes, as is
done in the case of constant disparities. However, this is not realistic as the space of all planar surfaces is too big, and sub-pixel accuracy is required. Therefore, the list of planes should be
determined in a more efficient way.
Many times, an initial list of planes is already available. This is the case, for example, when the list of planes can be transferred from another image in the sequence where such a list has already
been computed (See Section 1.3). In other cases, where no initial list of planes is available, the following scheme is applied:
Step 1: Run regular graph-cuts with constant disparities. These disparities can be viewed as disparities of planar surfaces:
[max] [0114]
Computing stereo with constant disparities can be done with the regular graph-cuts [5]. Note, however, that we always use a flexible pixel dissimilarity measure as described in Section 1.1.
Step 2: The output of the graphcuts is used to segment the image such as shown in
FIG. 2a
into connected components of equal disparity. Our assumption is that each planar surface is represented by multiple segments with constant disparities as demonstrated in
FIG. 2b
. Based on this assumption, we calculate the affine motion parameters for each large enough segment. Let S be such a segment, then the computed affine motion parameters m
=(a, b, c) are those that minimize the error:
( a , b , c ) = ( x , y ) .di-elect cons. S [ I 1 ( x , y ) - I 2 ( x + ax + by + c , y ) ] 2 ( 8 ) ##EQU00011##
For each segment in 2b a parametric motion model is computed directly from the image intensities (using the Lucas-Kanade method) and all planes where this computation converge are used as planar
labels in the planes-based graph-cuts. The result of the planes-based graph-cuts is shown in
FIG. 2c
. This result is distinguishable from the ground truth in only a few pixels.
The motion parameters are computed directly from the image intensities using a gradient descent algorithm in a multiresolution framework as suggested by [2]. With the method of [2], the affine
parameters can be computed in a sub-pixel accuracy. If a plane consists of several disparity segments, it is sufficient that only one of the corresponding parametric motion computations will
converge, while the computations that do not converge are ignored. Having multiple descriptions for the same plane is allowed. The task of segmenting the image into planar disparities according to
the planes list is left to the plane based graph-cuts described next.
1.2.2 Graph Cuts with Planar Labels (Step 3)
In this section we assume that a list of candidate planes is given, and we would like to represent the disparity map between the two images with these candidate planes. An iterative graph cuts
approach is performed, where each pixel p=(x,y) can be assigned with a single corresponding plane denoted as m
=(a,b,c). Similar to the classical graph cuts approach we use the Potts model as in Eq. (1),
( f ) = p .di-elect cons. L C p ( f p ( m p ) ) + p , q .di-elect cons. N V p , q δ ( m p ≠ m q ) ( 9 ) ##EQU00012##
where f[p]
) is the disparity of the pixel p according to the planar disparity f
)=ax+by+c. As a data penalty function C
we use the flexible pixel dissimilarity measure introduced in Section 1.1 (Eq. (5)). Note that the smoothness term penalizes transitions between different planes, and not transitions between
different disparities. This implies that a single planar label will be the preferred representation of a single planar surface, as using multiple planes having similar parameters will be penalized.
As a result, the planar representation of the scene will tend to be sparse even if the list of planes is redundant.
In addition to the labels representing planar surfaces, a special label, denoted as `unknown` is used to represent pixels with unknown disparities. This label is assigned with a constant penalty.
Although stronger formulations exist for the specific case of occlusions (such as [12]), we use this general purpose label to handle both occlusions, moving objects and deviations from the motion
The energy function defined above is very similar to the constant-disparities case, and can be minimized in the same manner. The result of the energy minimization will be an assignment of a planar
surface to each pixel in the image. As noted before, the process of finding new candidate planes (while removing unused planes) can be repeated, this time with the better segmentation obtained by the
plane-based graphcuts.
1.3 Forward Warping of Planes
A basic building block in the proposed scheme is the mapping of the inverse depths to the next frames. Let the inverse depth map for frame I
be described by planar surfaces, and let each pixel in I
be labeled as belonging to one of these planes.
The inverse depth of I
can be estimated from that of I
and from the horizontal translation T
between the two frames in the following way:
5. The parameters of each planar surface are translated from the coordinate system of the source frame I
to the coordinate system of the target frame I
. This is done as follows:
1 ( x , y ) = f Z = ax + by + c ##EQU00013##
describe a planar
(normalized) disparity in I
. Using Eq. 7, one can go back to the coefficients of the plane in the 3D space (up to a scale factor) giving:
+ bY + c Z f - 1 = 0. ##EQU00014##
Applying a horizontal translation T[x]
to get the representation of the plane in the coordinate system of I
( X - T x ) + bY + c Z f - 1 = 0 , or ##EQU00015## aX + bY + c Z f - ( aT x + 1 ) = 0. ##EQU00015.2##
Using Eq. 7 gives the normalized disparity in frame I
2 ( x , y ) = a a T x + 1 x + b a T x + 1 y + c a T x + 1 . ( 10 ) ##EQU00016##
The parameters of a planar disparity in I
can therefore be computed simply from the corresponding plane parameters in I
and the relative horizontal translation T
6. The pixel labeling of I
can be computed by warping forward the pixel labeling of I
. When two labels are mapped to the same pixel in I
, the label corresponding to a smaller depth is used to account for occlusion. Pixels in I
that were not assigned with a label by the forward warping are marked as "unknowns", and are not used in further computations.
The forward warping of inverse depth may leave some pixels in I
with no assigned label. This is not an immediate problem to motion computation, as the depth of all pixels is not required for motion analysis. At a later stage, the labels are completed from
neighboring frames or interpolated from other pixels.
The stereo computations may be accelerated significantly using a multi-resolution framework. In addition, our formulation of planar and flexible graph-cuts can be incorporated with other methods for
solving Markov Random Fields. For example, a fast multi-resolution implementation of Belief-Propagation (which is an alternative way to solve MRFs) has been shown to produce good results much more
efficiently. Real-time implementation of stereo matching using graphics hardware has also been used.
2. Computing Ego Motion (Assuming Known Depth)
Assume that the image I
has already been aligned and de-rotated according to the motion parameters that were computed in previous steps. Let I
be the new frame to be aligned. We are also given the inverse depth map D
corresponding to (the de-rotated) I
. Our motion model includes a horizontal camera translation T
and camera rotations R about the x and z axis:
P', (11)
where P and P
' are corresponding 3D points in the coordinate systems of I
and I
respectively, and T
denotes the horizontal translation. Note that the rotation matrix R is applied only on frame I
, as we assume that I
has already been aligned and de-rotated. On the other hand, the translation is applied on I
, since the depths are known only for frame I
[0131]FIG. 3a
is a schematic diagram of the ego-motion computation, which is performed as follows:
(1) Initial translation {tilde over (T)} and {tilde over (R)} are estimated (e.g. same as last frame, or zero);
(ii) I
is warped with the estimated disparity, equivalent to {tilde over (T)} multiplied by D
, to give the warped
. I
is rotated by R-1 to give
(iii) New estimations for rotation {tilde over (R)} and translation {tilde over (T)} are computed between the warped images
(iv) The process is repeated from step (ii) until convergence.
[0136]FIG. 3b
shows computation of inverse depth for a new reference frame I
is performed between the previous reference frame I
, and
is the new frame I
after it has been rotated to the coordinate of I
by the rotation estimated in part (a). The inverse depth D
is the disparity computed between I
divided by the translation T previously computed between these two frames.
Assuming small rotations, the image displacements can be modeled as:
'=y+b+sin(α)x'+cos(α)y' (12)
The camera rotation about the z axis is denoted by α, and the tilt is denoted by a uniform vertical translation b. in cases of a larger tilt, or when the focal length is small, the fully accurate
rectification can be used.
To extract the motion parameters, we use a slight modification of the Lucas-Kanade direct 2D alignment [2], iteratively finding motion parameters which minimize the sum of square differences using a
first order Taylor approximation. The approximations cos(α)≈1 and sin(α)≈α are also used, giving the following error function to be minimized:
( T x , b , α ) = x , y { I k - 1 ( x - T x D k - 1 ( x , y ) , y ) - I k ( x ' - α y ' , y ' + b + α x ' ) } 2 . ( 13 ) ##EQU00017##
We use the first order Taylor expansion around I
(x,y) and around I
(x',y') to approximate:
I k
- 1 ( x - T x D k - 1 ( x , y ) , y ) ≈ I k - 1 ( x , y ) - ∂ I k - 1 ∂ x T x D k - 1 ( x , y ) I k ( x ' - α y ' , y ' + b + α x ' ) ≈ I k ( x ' , y ' ) + ∂ I k ∂ x ' ( - α y ' ) + ∂ I k ∂ y ' ( b +
α x ' ) , ( 14 ) ##EQU00018##
which results in the following minimization:
( T x , b , α ) = x , y { I k - 1 ( x , y ) - I k ( x ' , y ' ) - ∂ I k - 1 ∂ x T x D k - 1 ( x , y ) - ∂ I k ∂ x ' ( - α y ' ) - ∂ I k ∂ y ' ( b + α x ' ) } 2 ( 15 ) ##EQU00019##
The minimization can be solved efficiently by taking the derivatives of the error function E with respect to each of the three motion parameters and setting them to zero, giving the following linear
set of equations with only three unknowns:
A T A
[ T x α b ] = cA T , where A = [ x , y ∂ I k - 1 ∂ x D k - 1 ( x , y ) , x , y ( ∂ I k ∂ y ' x ' - ∂ I k ∂ x ' y ' ) , x , y ∂ I k ∂ y ' ] , and c = x , y ( I k - 1 ( x , y ) - I k ( x ' , y ' ) ) .
( 16 ) ##EQU00020##
Similar to [2], we handle large motions by using an iterative process and a multi-resolution framework. In our case, however, we simultaneously warp both images, one towards the other: we warp I
towards I
according to the computed camera translation T
(and the given inverse depth), and we warp I
towards I
according to the computed rotation α and the uniform vertical translation b.
For additional robustness, we added outlier removal and temporal integration:
Pixels where the intensity difference is large are marked as outliers and are omitted from the motion computation. Specifically, we omit pixels with
W ( I k - 1 ( x , y ) - I k ( x ' , y ' ) ) ∂ I k - 1 ∂ x W ∂ I k - 1 2 ∂ x > s , ( 17 ) ##EQU00021##
where W is a 5×5 neighborhood, and s is a threshold (we used s=1). Other schemes such as re-weighted least squares can also be used. Obviously, pixels that were marked by the depth computation as
having an unknown disparity are also not used.
Frames that were already aligned are averaged with earlier frames, also known as "Temporal Integration". Instead of computing motion using a single reference frame, we use the temporal integration
that was shown in [11] to add stability and robustness to outliers in traditional 2D alignment methods.
Since the computed depth is quantized, a consistent depth pattern (e.g. the common case when near objects are on the bottom of the image and far objects are on the top of the image) can cause a small
rotational bias, which accumulates in long sequences. A small modification of the motion computation method described earlier can overcome this problem: Since the depth parallax is horizontal, only
vertical displacements are used to compute image rotation. To do so, we change the error function in Eq. (13) to:
( T x , b , α ) = x , y { I k - 1 ( x - T x D k - 1 ( x , y ) , y ) - I k ( x ' , y ' + b + α x ' ) } 2 . ( 18 ) ##EQU00022##
As in the original method, the iterative image warping is done using the accurate rotation matrix. It should be noted that more general motion models can be computed, however our experience showed
that adding motion parameters that are not independent (e.g. pan with horizontal translation) may reduce the robustness of the motion computation for scenes with small depth variations. Our current
approach can even handle scenes that are entirely flat.
3. Interleaving Computation of Depth and Motion
In the proposed method, the motion of the camera and the depth of the scene are computed for all frames. While the camera motion varies from frame to frame, the depth is consistent across frames. On
the other hand, the camera motion consists of only a few parameters per frame, while the depth should be computed for each pixel. It is well known that the observed image motion can be factorized to
its shape and motion components [8]. However, in long video sequences scene features are visible only in a few frames, and computed structure and motion information should be passed along the
The process is initialized by computing the inverse depth for the first frame. It is continued by interleaving stereo and motion computations, until the motion of the camera and the corresponding
inverse depths are computed for the entire sequence. The initialization is described in Section 3.1 (and
FIG. 5
), and the interleaving process is described in Section 3.2. A schematic work-flow of the interleaving process is shown in
FIG. 6
3.1 Initialization: First Two Frames
[0152]FIG. 5
is a schematic diagram showing work-flow of the initialization stage, which will now be described. A 2D image translation (u,v) is initially computed using the traditional Lucas-Kanade method [5],
between I
(the reference frame) and I
, for k=2,3, . . . . This is performed until a frame I
having sufficient horizontal displacement from I
is reached. Given (u,v), I
is warped vertically towards I
according to v, and it is assumed that T
=u. The vertical motion v is estimated accurately since the parallax is mostly horizontal.
The graph cuts algorithm is applied on I
and the warped I
to estimate the depth map of I
as described in Section 1. Despite the rotational component which has not yet been compensated, a correct depth map for most of the pixels can be estimated by using the "Flexible Dissimilarity
Measure" (Section 1.1). The "unknown" label is automatically assigned to pixels with large vertical displacements induced by the rotation. The pixels that get valid depth values can now be used to
compute a better estimation of the relative camera motion between I
and I
(using the ego motion computation described in Section 2).
After warping I
towards I
according to the estimated rotation matrix, the depth map of I
is re-computed. We continue with this iterative process until convergence.
FIGS. 4b to 4d are intermediate depth maps computed during the iterative depth and motion computations process from an original frame shown in FIG. 4a (unknowns are marked in black). In this example,
two frames from a video sequence were selected, and one of the frames was manually rotated by 2°. The intermediate depth maps that were obtained during the iterative process are shown, demonstrating
the effective convergence of the proposed method. It can be seen that the columns at the left and right margins are marked as unknowns in the first iteration due to their large vertical
displacements. The support of the center pixels was sufficient to correctly extract the relative motion between the two frames.
3.2 Interleaving Computation for the Entire Sequence
[0156]FIG. 6
is a schematic diagram showing work-flow of the interleaving process, which will now be described. During the initialization process the inverse depth map D
was computed, corresponding to frame I
. The reference frame I
is set to be I
, and its inverse depth map D
is set to be D
With the inverse depth D
the relative camera motion between the reference frame I
and its neighboring frame I
can be computed as described in Section 2. Let (T
) be the computed camera pose for frame I
compared to the reference frame I
. Given (T
) and the inverse depth map D
for image I
, the inverse depth values of D
can be mapped to the coordinate system of I
as described in Section 1.3, giving D
. A schematic diagram of the work-flow is shown in FIG. 3.
Camera motion is computed as described above between I
and its neighboring frames I
+2, etc., until the maximal disparity between the reference frame I
and the last frame being processed, I
, reaches a certain threshold. At this point the inverse depth map D
has been computed by forward warping of the inverse depth map D
. D
is updated using the iterative graph cuts approach between I
and I
to get better accuracy. To encourage consistency between frames, small penalties are added to all pixels that are not assigned with labels of their predicted planes. This last step can be done by
slightly changing the data term in Eq. (1).
Recall that for keeping the consistency between the different frames, the disparities computed by the graph cuts method should be normalized by the horizontal translation computed between the frames,
giving absolute depth values (up to a global scale factor).
After D
was updated, the new reference frame I
is set to be I
, and its inverse depth map D
is set to be D
. The relative pose and the inverse depth of the frames following I
are computed in the same manner, replacing the reference frame whenever the maximal disparity exceeds a given threshold.
The process continues until the last frame of the sequence is reached. In a similar manner, the initial frame can be set to be one of the middle frames, in which the interleaving process continues in
the both positive and negative time directions.
This scheme requires a computation of the inverse depths using the graph cuts method only for a subset of the frames in the sequence. Besides the benefit of reducing the processing time, disparities
are more accurate between frames having larger separation. While depth is computed only on a subset of frames, all the original frames are used for stitching seamless panoramic images.
3.3 Panoramic Rectification
In real scenarios, the motion of the camera may not perfectly satisfy our assumptions. Some small calibration problems, such as lens distortion, can be treated as small deviations from the motion
model and can be overcome using the robust tools (such as the "unknown" label or the "flexible" graph cuts approach presented in Section 1.1).
However, a bigger challenge is introduced by the camera's initial orientation: if the camera is not horizontally leveled in the first frame, the motion computations may consist of a false global
camera rotation. The deviation of the computed motion from the actual motion may hardly be noticed for a small number of frames (making traditional calibration much harder). But since the effect of a
global camera rotation is consistent for the entire sequence, the error accumulates, causing visual artifacts in the resulting panoramas.
This problem can be avoided using a better setup or by pre-calibrating the camera. But very accurate calibration is not simple, and in any case our work uses videos taken by uncalibrated cameras, as
shown in all the examples.
A possible rotation of the first image can be addressed based on the analysis of the accumulated motion. A small initial rotation is equivalent to a small vertical translation component. The effect
for a long sequence will be a large vertical displacement. The rectification of the images will be based on this effect: After computing image translations (u
) between all the consecutive frames in the sequence, the camera rotation α of the first frame can be estimated as α=arctan(Σ
). A median can be used instead of summation in the computation of α if a better robustness is needed. All the frames are derotated around the z axis according to α.
4. Minimal Aspect Distortion (MAD) Panorama
The mosaicing process can start once the camera ego-motion and the dense depth of the scene have been computed. Motion and depth computation can be done as proposed in the previous sections, or can
be given by other processes. We propose two approaches for mosaicing. In Section 5 the X-Slits approach is used for rendering selected viewpoints. In this section a new method is presented for
generation of a long minimal aspect distortion (MAD) panorama of the scene. This panorama should satisfy the following properties: (i) The aspect distortions should be minimal (ii) The resulting
mosaic should be seamless.
Since the mosaicing stage comes after motion computation, the images in this section are assumed to be de-rotated and vertically aligned, and therefore the remaining displacement between the images
is purely horizontal. It is also assumed that the depth maps are dense. The depth values for pixels that were labeled as unknowns are interpolated from other frames (see Section 1.3). When pixels are
occluded in all neighboring frames, they are interpolated from neighboring pixels.
4.1 Panorama as a Cut in the Space-Time Volume
[0169]FIG. 7a
shows pictorially a general cut C(t) through the space-time volume, while FIG. 7b shows the same cut in the spatially aligned space-time volume. C(t) designates the leftmost column of the strip S
taken from frame t. A panoramic image is determined by a cut C(t) through the space-time volume, as seen in FIG. 7b. For each image t, C(t) determines the left column of a strip S
in image t to be stitched into the panorama.
As shown in
FIG. 8
to obtain a seamless stitching, the right border of the strip S
is a curved line, corresponding to the left side of the next strip, i.e. C(t+1). This curved line is computed using the camera motion and the known inverse depth. The image strip S
is warped to a rectangular strip S', before being pasted into the panorama. The panorama is stitched from a collection of rectangular strips S'
warped from the input frames S
. The stitching is seamless because each point A' on the right border of the image strip S
corresponds to the point A on the left border S
+1, according to its the computed disparity. A and A' are the same point in the mosaic's coordinates. The warping is done by scaling each row independently, from its width in S
to the width of S'
given by:
+1(C(t+1)), (19)
where alignment[t]
+1(x) is set to be the average disparity of the pixels in column x of image t+1 relative to image t. This average disparity equals to the average inverse depth at this column multiplied by the
relative horizontal translation T
. In the warping from S
to S'
far away objects are widened and closer objects are becoming narrower.
FIGS. 9a and 9b are diagrams showing respectively selection of nodes and graph construction using X-Slits projection.
FIG. 9a
shows a path C in the graph. C(t) indicates the node selected in time t. Locally C(t) is a X-Slits projection as shown by the dotted line, with slope=cot(θ).
FIG. 9b
shows graph construction where the nodes are all the columns in the X-T space. There are edges from each column of image t to all the columns of image t+1.
In pushbroom and X-Slits projections (See FIG. 13), C(t) is a linear function of the camera translation. The local derivative of a general cut C(t) can represent a local X-Slits slice having a slope
(t)=dC(t)/dt, (20)
as demonstrated in FIG
. 9a. The local X-Slits slope can change spatially throughout the panorama. Special slopes are the pushbroom projection (slope(t)=0) and the perspective projection (slope(t)→∞).
In [18] a minimum distortion mosaic is created such that the slope function is piecewise constant, and a non-linear optimization is used to minimize this distortion. In MAD mosaicing the cut C(t) is
allowed to be a general function, and simple dynamic programming is used to find the global optimum.
4.2 Defining the Cost of a Cut
The optimal cut through the space-time volume that creates the MAD mosaic is the cut that minimizes a combination of both a distortion cost and a stitching cost. The cost of a cut C(t) is defined as
( C ) = t distortion t ( C t , C t + 1 ) + α t stitching t ( C t , C t + 1 ) , ( 21 ) ##EQU00023##
where t is a frame number and α is a weight. The distortion term estimates the aspect distortion in each strip, and the stitching term measures the stitching quality at the boundaries. Both are
described next.
Distortion cost: As described before, a cut C determines a set of strips {S
}. We define the distortion cost distortion
(C(t),C(t+1)) to be the variance of disparities of the pixels in strip S
. This is a good measurement for distortion, as strips with high variance of disparities have many dominant depths. Objects in different depths have different motion parallax, resulting in a
distorted mosaic. In this case, we prefer such strips to be wider, giving a projection that is closer to perspective. A single wide strip in a high variance region will be given a lower cost than
multiple narrow strips.
FIG. 10 shows disparity variance at columns in the non-aligned x-t volume corresponding to FIGS. 12a, 12b and 12c showing, respectively, a MAD mosaic of a street in Jerusalem; a graph showing C(t),
the left strip boundary for each frame; and the depth map of the mosaic constructed according to an embodiment of the invention. Each row in FIG. 10 represents an input image, and the value at
location (x, t) represents the variance of normalized disparity at column x of image t. The distortion cost is the variance of each strip, and here we show the variance of each column. It can be
observed that the selected cut has a large slope when it passes through problematic areas that have high variance. A large slope is equivalent to wide strips; giving a result that is close to the
original perspective.
We have also experimented with a few different distortion functions: spatial deviation of pixels relative to the original perspective; aspect-ratio distortions of objects in the strip ([18]), etc.
While the results in all cases were very similar, depth variance in a strip (as used in our examples) was preferred as it consistently gave the best results, and due to its simplicity.
Stitching cost: The stitching cost measures the smoothness of the transition between consecutive strips in the panoramic image, and encourages seamless mosaics. We selected a widely used stitching
cost, but unlike the common case, we also take the scene depth into consideration when computing the stitching cost. This stitching cost is defined as the sum of square differences between the C(t
+1) column of image t+1 and the corresponding column predicted from image t. To compute this predicted column an extra column is added to S'
, by generating a new strip of width width(S'
)+1 using the method above. The right column of the new strip will be the predicted column.
In addition to the distortion and stitching costs, strips having regions that go spatially backwards are prohibited. This is done by assigning an infinite cost to strips for which C(t+1)<C(t)+D
(C(t+1)), where D
(C(t+1)) is the minimal disparity in the column C(t +1) of image t+1. For efficiency reasons, we also limit C(t+1)-C(t) to be smaller than 1/5 of the image's width.
The distortion cost and the minimal and average disparities are computed only from image regions having large gradients. Image regions with small gradients (e.g. a blue sky) should not influence this
cost as distortions are imperceptible at that regions, and their depth values are not reliable.
4.2.1 Graph Construction and Minimal Cut
The graph used for computing the optimal cut is constructed from nodes 5 representing image columns. We set a directed edge from each column x
in frame t to each column x
in frame t+1 having the weight
)=distortion, (x
Each cut C corresponds to a path in the graph, passing through the column C(t) at frame t. The sum of weights along this path is given by Σ
(C(t),C(t+1)), and is equal to the cost defined in Eq. 21.
Finding the optimal cut that minimizes the cost in Eq. 21 is therefore equivalent to finding the shortest-path from the first frame to the last frame in the constructed graph. Any shortest-path
algorithm can be used for that purpose. We implemented the simple Bellman-Ford dynamic programming algorithm with online graph construction.
FIGS. 11a and 11b show respectively a segment from a first long pushbroom mosaic and a corresponding long MAD mosaic generated according to the invention. FIGS. 11c and 11d are similar pictures
relating to different regions of the scene. The visible differences between the Pushbroom mosaics and the MAD mosaics are large because the camera is very close to the scene, and depth differences
inside the scene are large compared to the distance between the camera and the scene.
5. Dynamic Image Based Rendering with X-Slits
MAD mosaicing generates a single panoramic image with minimal distortions. A MAD panorama has multiple viewpoints, and it can not be used to create a set of views having a 3D effect. Such 3D effects
can be generated by X-Slits mosaicing.
New perspective views can be rendered using image based rendering (IBR) methods when the multiple input images are located densely on a plane [13,6]. In our case the camera motion is only 1D and
there is only horizontal parallax, so perspective images cannot be reconstructed by IBR. An alternative representation that can simulate new 3D views in the case of a 1D camera translation is the
X-Slits representation [23]. With this representation, the slicing function C(t) is a linear function of the horizontal translation.
[0188]FIG. 13a
shows that changing the slope of the slice simulates a forward-backward camera motion, while shifting the slice simulates a sideways camera motion. Constant slicing functions (C(t)=const) generating
parallel slices in the space-time volume, as in
FIG. 13b
, correspond to pushbroom views from infinity. More general linear functions,
where U[x]
is the accumulated horizontal camera translation, correspond to finite viewing positions.
As the linear slicing functions of the X-Slits are more constrained than those used in MAD mosaicing, they cannot avoid distortions as in MAD mosaicing. On the other hand, X-Slits projections are
more powerful to create a desired viewpoint. For example, in MAD-mosaicing distortions are reduced by scaling each strip according to the average disparity, while in the X-slits representation image
strips are not scaled according to disparity. This is critical for preserving the geometrical interpretation of X-Slits, and for the consistency between different views, but it comes at the expense
of increasing the aspect distortions.
5.1 Seamless Stitching for X-Slits
Since a video sequence is not dense in time, interpolation should be used to obtain continuous mosaic images. When the displacement of the camera between adjacent frames is small, reasonable results
can be obtained using some blurring of the space-time volume [13,6]. For larger displacements between frames, a depth-dependent interpolation must be used. The effect of using depth information for
stitching is shown in FIG. 14. FIG. 14a shows results without using the dense depth. While the restaurant (whose depth is close to the dominant depth in the scene) is stitched well, closer objects
(rails) are truncated and faraway objects are duplicated. In contrast, stitching according to the dense depth map (FIG. 14b) gives a far better stitching result. Note that very narrow objects are
still a challenge to our method and may not be stitched well (such as the narrow pole on the right side of the restaurant). This problem is usually avoided in the MAD mosaicing, which tends to keep
such narrow objects in a single strip.
We used two different rendering approaches of X-Slit images: an accurate stitching using the dense depth maps, and a faster implementation suitable for interactive viewing. The accurate stitching is
similar to the one described in section 4.1, with the only difference that the image strips are not scaled according to average disparity. To get real-time performance suitable for interactive
viewing we cannot scale independently each row. Instead, the following steps can be used in real-time X-Slits rendering: (i) Use a pre-process to create a denser sequence by interpolating new frames
between the original frames of the input sequence. This can be done using the camera motion between the frames, and their corresponding inverse depth maps. (ii) Given the denser sequence, continuous
views can be obtained by scaling uniformly each vertical strip pasted into the synthesized view, without scaling each row separately. The uniform scaling of each strip is inversely proportional to
the average disparity in the strip.
6. Experimental Results
All experiments discussed in this paper were performed on videos without camera calibration, and without manual involvement (except for the setting of the maximal allowed disparity, which was done
only to speed up performance).
The examples demonstrate the applicability of our method to a variety of scenarios, including a sequence captured from a river boat (FIG. 15-16), a helicopter (FIG. 17) and a driving car in a long
street (FIG. 18-19). We constructed panoramic images using X-Slits (FIG. 15,17) as well as sample images from a virtual walk-through (FIGS. 16 and 18). In FIG. 18 MAD mosaicing was used to reduce
To successfully process all these different scenarios, the method had to handle different types of camera motions (e.g. highly unstable camera in the helicopter sequence), and different kinds of
scenes (such as the street sequence, where the depths varies drastically between the front of the buildings and the gaps between them).
MAD mosaicing was proven to work particularly well on long and sparse sequences, as shown in
FIG. 19
. Some objects have such a large disparity that the stereo computation could not register them well, e.g. the close traffic signs that have a disparity greater than 50 pixels. In such cases the
stitching cost is responsible for rendering these objects using a wide strip from a single frame. As shown in FIG. 12, moving people and some of the very close traffic signs were rendered correctly
in most cases.
More details about each sequence are now described:
Boat: The input sequence used to produce the panoramic boat image in FIG. 15 consists of 450 frames. The camera is relatively far from the scene, and therefore the variance of the depth was small
relative to other sequences (such as the street sequence). This allowed us to limit the maximal disparity to 20 pixels, and reduce the run-time which is approximately linear in the number of labels.
In addition, the iterative graph-cuts was applied on this sequence only once in 5 frames on average (The graph cuts method is performed only when the maximal disparity reaches a certain threshold, as
described in Section 4).
Shinkansen: The input sequence used to produce the panoramic image in FIG. 17 consists of 333 frames. The derailed Shinkansen train is not moving. Some intermediate depth maps and the motion
parameters are also shown. A panoramic image of the same scene consisting of 830 frames appears in the homepage.
[0199]FIG. 19
is a flow diagram showing the principal operations carried out by a method according to the invention for forming a panoramic image of a scene having at least two points at different distances from
the optical center of a camera that translates relative to the scene. According to the method, an optical flow is obtained between corresponding points in temporally different input frames of a
sequence of input frames captured by the camera. The optical flow is used to compute flow statistics for at least portions of some of the input frames as well as to compute respective stitching costs
between some of the portions and respective neighboring portions thereof. A sequence of selected portions and respective neighboring portions is identified that minimizes a cost function that is a
function of the flow statistics and stitching costs, and the selected portions and respective neighboring portions are stitched so as to form a panoramic image of the scene, which may then be
displayed or stored for further processing.
[0200]FIG. 20
is a block diagram showing functionality of a system 10 according to an embodiment the invention, comprising a video camera 11 that captures video sequences whose input frames are stored in a memory
12. A processor 13 is coupled to the memory 12 and is responsive to an optical flow between corresponding points in temporally different input frames for computing flow statistics for at least
portions of some of the input frames and for computing respective stitching costs between some of the portions and respective neighboring portions thereof. Optical flow is computed by an optical flow
computation unit 14 from data indicative of motion of the camera and respective depth data of pixels in the input frames determined by a depth computation unit 15.
Optionally, a rectification unit 16 is coupled to the processor 13 for rectifying the optical flow. This may be done, for example, by means of a pre-warping unit 17 that warps at least some of the
input frames. Likewise, there may optionally be provided a scaling unit 18 coupled to the processor 13 for scaling at least some of the input frames according to the optical flow or depth data.
A selection unit 19 is coupled to the processor 13 for selecting a sequence of portions and respective neighboring portions that minimizes a cost function that is a function of the flow statistics
and stitching costs. A post-warping unit 20 may optionally be coupled to the selection unit 19 for warping at least one of the selected portions. A stitching unit 21 is coupled to the selection unit
19 for stitching the selected portions and respective neighboring portions (after optional warping) so as to form a panoramic image of the scene, which may then be displayed by a display unit 22.
FIG. 21 is a pictorial representation of a system 30 allowing composite panoramic movies to be generated according to the invention from multiple videos captured by multiple people. The system allows
people to take videos at different locations using conventional cameras such as 31 or cellular telephones 32 having video features and to upload their videos over the Internet 34 to an application
web server 35. The web server 35 is programmed according to the invention as described above to construct a panoramic model of the world by creating panoramic mosaics by stitching selected portions
of the component videos and optionally to display the generated world model on a display device 36. Obviously, the captured video may be stored in a database and uploaded by users over the Internet.
In use of such a system, people located in different parts of the world capture component image sequences of the streets in their neighborhoods, their houses, their shops--whatever they wish. Once
they upload to the web server 35 the component image sequences, software in the web server 35 analyzes the uploaded component videos, and computes the camera motion between frames. The image
sequences may then be stored as an "aligned space time volume", with new intermediate frames optionally synthesized to create a denser image sequence.
The user can optionally upload together with the image data additional information of the sequence. This can include location information, e.g. GPS data of some of the frames, location on a map, e.g.
an online map presented on the computer screen, street address, direction of camera (south, north, etc.), and more. Time information of capture may be sent, and also information about the camera.
This information may be used to arrange the uploaded data in relation to some global map, and to other data uploaded by this or other users.
Once the data has been uploaded and processed, a user can access the data for a virtual walkthrough anywhere data is available. Using the appropriate slicing of the space time volume, as described in
the X-Slits paper [23] or the stereo panorama patent [25], the user can transmit to the server his desired view, and the server 35 will compute the actual "slice" and send the generated image to the
user. Alternatively, the user may be able to download from the server some portion of the space-time volume, and the image rendering can be performed locally on his computer.
It will be understood that although the use of X-Slits or stereo panoramic rendering is proposed, any image based rendering method may be employed. Camera motion can be computed either along one
dimension or in two dimensions, and once the relative relation between input images is known, new images can be synthesized using familiar Image based Rendering methods, either on the web server or
on the computers of users after downloading the image data.
It will also be understood that the system according to the invention may be a suitably programmed computer. Likewise, the invention contemplates a computer program being readable by a computer for
executing the method of the invention. The invention further contemplates a machine-readable memory tangibly embodying a program of instructions executable by the machine for executing the method of
the invention.
Patent applications by Giora Engel, Mevasseret Zion IL
Patent applications by Shmuel Peleg, Mevasseret Zion IL
Patent applications by YISSUM RESEARCH DEVELOPMENT COMPANY OF THE HEBREW UNIVERSITY OF JERUSALEM
Patent applications in class PANORAMIC
Patent applications in all subclasses PANORAMIC
User Contributions:
Comment about this patent or add new information about this topic:
|
{"url":"http://www.faqs.org/patents/app/20110043604","timestamp":"2014-04-19T21:34:15Z","content_type":null,"content_length":"125659","record_id":"<urn:uuid:54445228-9c62-43a6-b375-080b1dd03e27>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00343-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Where does matrix multipication come from?
July 4th 2008, 06:16 AM #1
Jul 2008
Where does matrix multipication come from?
I am trying to get my head around the origin or matrix multiplication.
Is a good description just: "Matrix Multiplication is just solving a system of linear equations. For example a nxm matrix 'A' (representing the coficients of n equations with m coefficients each)
'multiplies' by a mx1 vector 'V' (representing the variables), and the multiplication process is just solving for the variable; except in matrix multiplcation the the enries in V are actually
constants and the anser vector holds the 'variables'. In the case when the vector V has more than one column this is just a a range of cases with different values for the enries in V. So, when V
is a matrix, the answer to A x V, is a matrix with the number of columns equal to the number of cases set out as columns in the matrixs V." I just wrote this from the top of my head.
I have been thinking about matrix multiplication like a mxnxk box where matrix A is mxn and matrix be is nxk. The front of the box is matrix A and the top is matrix B. Various points inside the
box coincide with entries in A and B, and the values at these points are the product of the coresponding enrtries in a A and B. 'Surfaces' A and B of the box are joined along the side 'n' long.
Summing along the n direction give values in a mxk matrix which is tyhe side of the box. This side is the answer of matirx multiplication.
What is the role of determonants in matrix multiplication? I read in wikipedia that the determinant of a nxn matrix can be used to represent the volume inside a rhombus in n-space whose sides are
made up of the entries in the matrix. Haven't thought about this enough yet.
My question is can anyone give me some pointers?
We can define matrix multiplication in an "obvious" way. Meaning just multiply the corresponding components. But this type of multiplication is not very interesting. The matrix multiplication
that is would turns out being more useful. The simplest application of matrix multiplication is that we can regard a system of equations as simply $A\bold{x}=\bold{b}$. Another application is
that linear transformations from $\mathbb{R}^m\mapsto \mathbb{R}^n$ can be again regarded as matrix multiplication. In fact, any linear transformation between two vector spaces in general can be
expressed as a matrix. All of these connections follow by the way we define multiplication.
Might be worth pointing out that matrix multiplication was defined the way it is for precisely the reason that it describes algebraically the geometry of linear transformations.
July 4th 2008, 06:20 AM #2
Global Moderator
Nov 2005
New York City
July 4th 2008, 11:47 AM #3
|
{"url":"http://mathhelpforum.com/advanced-algebra/43020-where-does-matrix-multipication-come.html","timestamp":"2014-04-20T05:11:47Z","content_type":null,"content_length":"37972","record_id":"<urn:uuid:8ed9d3ea-93a4-4439-b5ee-8559b16b987d>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00275-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Main OFSolvers
From OpenFOAMWiki
The solvers that are supplied with OpenFOAM. This page is structured according to the directory structure of the solver-sources.
1 Basic CFD codes
2 Combustion
3 Compressible flow
4 Direct numerical simulation (DNS)
5 Electromagnetics
6 Finance
7 Heat transfer
8 Incompressible flow
9 Molecular dynamics
10 Multiphase flow
11 Stress analysis of solids
Note to editors: Use the solver name as the page name, when adding a new solver to this page.
|
{"url":"http://openfoamwiki.net/index.php/Main_OFSolvers","timestamp":"2014-04-21T15:49:04Z","content_type":null,"content_length":"41086","record_id":"<urn:uuid:09113443-5da6-4411-a6c2-e0f9d15a3276>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00438-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Determining the Cut-off Frequency of a MEMS Accelerometer
Well, accelerometer is subject to several sources of high frequency noise, including thermal, electrical and mechanical vibrations. I thought of developing a simple first order, single-pole infinite
impulse response LPF, given by,
y(n) = α.y(n-1) + (1-α).x(n), where,
x(n) = current accelerometer reading,
y(n) = current estimate; y(n-1) = previous estimate.
My issue is with determination of alpha. If sampling frequency is Fs then,
α = [itex]\frac{\tau Fs}{1+\tau Fs}[/itex].
[itex]\tau[/itex] = [itex]\frac{1}{2\pi Fc}[/itex].
Once I determine Fc, I can justify my choice of α. Now, If the device were analog in nature, I could have designed an RC LPF, got the state equations, taken a laplace transform and applied Bilinear Z
Transform to get the digital equivalent. But this device gives acceleration in digital format, a 16 bit number.
I need to be able to somehow use Fourier analysis or some such technique to work on this digital data directly. I would be delighted to get some help on clarifying this conundrum.
|
{"url":"http://www.physicsforums.com/showthread.php?p=4257462","timestamp":"2014-04-20T11:27:13Z","content_type":null,"content_length":"39190","record_id":"<urn:uuid:50211072-444e-4ffc-9940-bc89e034bbd3>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00464-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Combinatorial Potlatch 2011
The 2011 edition of the Combinatorial Potlatch was yesterday at Seattle University, which I organized along with Nancy Neudauer. David Neel handled all of the local arrangements.
William Stein gave an overview of Sage, with some combinatorial tidbits. There were the usual “oohs” and “aahs” when he generated a random matrix and then produced its LaTeX code in one extra step.
Josh Laison gave a very nice talk on obstacle numbers of graphs with great visuals and an on-the-fly clicker change in mid-talk. Peter Winkler wrapped up with more great visuals (cartoons, almost!)
for an entertaining talk about a cop pursuing a drunk on a graph.
It was nice to see some new faces, such as Shannon Overbay over from Spokane, and some regulars from farther afield, such as John Gimbel down from Alaska.
I never have any graduate students to supervise, but many of my students go on to a graduate degree. Sometimes they even get their PhD in combinatorics. And both were present for the Potlatch.
David Neel took the UPS combinatorics course the first time it was offered and studied under Bogart at Dartmouth, while Jane Butterfield did a summer research project with me and is finishing up her
thesis in graph theory at the University of Illinois. Photo below, and more from the conference are here.
|
{"url":"http://www.beezers.org/blog/bb/2011/11/combinatorial-potlatch-2011/","timestamp":"2014-04-16T10:18:23Z","content_type":null,"content_length":"12179","record_id":"<urn:uuid:927cf76b-135f-4393-84a8-20c371b4ae11>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00315-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Quantum Physics
1105 Submissions
[3] viXra:1105.0043 [pdf] submitted on 31 May 2011
Physics and Religion
Authors: Ir J.A.J. van Leunen
Comments: 2 pages
When deliberating about the fundaments of physics often the existence of a creator and the relation to religion comes up. This article gives the personal opinion of the author on this subject.
Category: Quantum Physics
[2] viXra:1105.0024 [pdf] submitted on 16 May 2011
A Note on the Gravity Screening in Quantum Systems
Authors: Andrea Gregori
Comments: 17 pages
We discuss how, in the theoretical scenario presented in [1], the gravity screening and the gravity impulse which seem to be produced under certain conditions by high temperature superconductors are
expected to be an entropic response to the ow of part of the system into a deeper quantum regime.
Category: Quantum Physics
[1] viXra:1105.0006 [pdf] submitted on 4 May 2011
The Local-Nonlocal Dichotomy Is but a Relative and Local View Point
Authors: Elemér E Rosinger
Comments: 20 pages.
As argued earlier elsewhere, what is the Geometric Straight Line, or in short, the GSL, we shall never know, and instead, we can only deal with various mathematical models of it. The so called
standard model, given by the usual linearly ordered field R of real numbers is essentially based on the ancient Egyptian assumption of the Archimedean Axiom which has no known reasons to be assumed
in modern physics. Setting aside this axiom, a variety of linearly ordered fields F[U] becomes available for the mathematical modelling of the GSL. These fields, which are larger than R, have a rich
self-similar structure due to the presence of infinitely small and infinitely large numbers. One of the consequences is the obvious relative and local nature of the long ongoing local versus nonlocal
dichotomy which still keeps having foundational implications in quantum mechanics.
Category: Quantum Physics
|
{"url":"http://vixra.org/quant/1105","timestamp":"2014-04-17T01:20:49Z","content_type":null,"content_length":"5716","record_id":"<urn:uuid:01e2029d-fddb-4f19-877f-e70baf62744b>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00239-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The DATAPAC library was written by James Filliben of the Statistical Engineering Division. After these routines were incorporated into the Dataplot program, development of DATAPAC stopped. However,
there are some subroutines here that may still be of interest. In particular, there are a number of routines for computing various probability functions.
This software is not formally supported and is not being further developed. It is provided on an "as is" basis. There is no formal documentation for the subroutines. However, most of the subroutines
contain usage instructions in the comments in the source code.
These routines are written in Fortran 77 and should be portable to most Fortran 77 compilers.
|
{"url":"http://www.nist.gov/itl/sed/datapac.cfm","timestamp":"2014-04-19T14:52:58Z","content_type":null,"content_length":"43383","record_id":"<urn:uuid:d1e94107-f7f6-441e-b5cf-ab3704f988be>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00118-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
how do i evaluate sec(x)tan^2(x)+sec(x)?
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/5034222ee4b0d8ed9b49e0d7","timestamp":"2014-04-19T19:48:16Z","content_type":null,"content_length":"41740","record_id":"<urn:uuid:a703f65f-1d5b-4188-a314-ddd39223afb6>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00502-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Computing Modular Polynomials
Kristin Lauter and Denis Charles
August 2004
We present a new probabilistic algorithm to compute modular polynomials modulo a prime. Modular polynomials parameterize pairs of isogenous elliptic curves and are useful in many aspects of
computational number theory and cryptography. Our algorithm has the distinguishing feature that it does not involve the computation of Fourier coefficients of modular forms. We avoid computing the
exponentially large integral coefficients by working directly modulo a prime and computing isogenies between elliptic curves via Velu's formulas.
Type TechReport
Number MSR-TR-2004-75
Pages 7
Institution Microsoft Research
Related Projects
Related People
Related Labs
|
{"url":"http://research.microsoft.com/apps/pubs/default.aspx?id=70079","timestamp":"2014-04-17T06:59:28Z","content_type":null,"content_length":"11780","record_id":"<urn:uuid:b843dc6d-76ee-49a7-838f-a48c16f39026>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00507-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Lazy Lake, FL Statistics Tutor
Find a Lazy Lake, FL Statistics Tutor
I have been tutoring for over 20 years at the high school level (mainly private tutoring and 6 years at the University level). I am extremely passionate about my students success and will go the
extra mile to ensure their learning. My greatest reward in teaching is not the salary, but the success....
11 Subjects: including statistics, geometry, algebra 1, SAT math
...I have always excelled in these courses, from elementary to present. Not only have I done well, but I have been at the top or near the top of my class for all courses relating to these fields.
I am confident in teaching any aspect of biology, chemistry and mathematics up to the undergraduate level.
27 Subjects: including statistics, chemistry, geometry, algebra 1
...I have taught students of all ages for 20 years. I was director of a 40 piece orchestra and have written over 200 arrangements with 12 part harmony including choral voices. I am the producer,
writer and instructor of a music instruction video and book.
27 Subjects: including statistics, calculus, geometry, algebra 1
...Discrete math is the study of mathematical structures and functions that are "discrete" instead of continuous. My University B.A degree from the University of California is Computational
Mathematics, and I had to take several courses in differential equations, which are several forms of derivati...
48 Subjects: including statistics, chemistry, reading, calculus
...When I tutor Algebra 1 students, I focus on how to study math and how to prepare for math tests. Algebra 2 is difficult for many students because this is the course that accelerates the pace
in math. Lots of new concepts are thrown at a student and they must all be mastered without the extended time and practice that a student has seen on concepts in lower level courses.
9 Subjects: including statistics, calculus, algebra 2, geometry
Related Lazy Lake, FL Tutors
Lazy Lake, FL Accounting Tutors
Lazy Lake, FL ACT Tutors
Lazy Lake, FL Algebra Tutors
Lazy Lake, FL Algebra 2 Tutors
Lazy Lake, FL Calculus Tutors
Lazy Lake, FL Geometry Tutors
Lazy Lake, FL Math Tutors
Lazy Lake, FL Prealgebra Tutors
Lazy Lake, FL Precalculus Tutors
Lazy Lake, FL SAT Tutors
Lazy Lake, FL SAT Math Tutors
Lazy Lake, FL Science Tutors
Lazy Lake, FL Statistics Tutors
Lazy Lake, FL Trigonometry Tutors
Nearby Cities With statistics Tutor
Bay Harbor Islands, FL statistics Tutors
Fort Lauderdale statistics Tutors
Golden Beach, FL statistics Tutors
Hillsboro Beach, FL statistics Tutors
Lauderdale By The Sea, FL statistics Tutors
Lauderdale Lakes, FL statistics Tutors
Lauderhill, FL statistics Tutors
Lighthouse Point, FL statistics Tutors
North Lauderdale, FL statistics Tutors
Oakland Park, FL statistics Tutors
Plantation, FL statistics Tutors
Pompano Beach statistics Tutors
Sea Ranch Lakes, FL statistics Tutors
Surfside, FL statistics Tutors
Wilton Manors, FL statistics Tutors
|
{"url":"http://www.purplemath.com/Lazy_Lake_FL_Statistics_tutors.php","timestamp":"2014-04-20T14:04:29Z","content_type":null,"content_length":"24583","record_id":"<urn:uuid:836a0d13-8e02-4d91-bf58-b786a8a6c5ae>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00607-ip-10-147-4-33.ec2.internal.warc.gz"}
|
What does an aggregate demand and supply graph look like at equilibrium with both short and long-run aggregate supply? - Homework Help - eNotes.com
What does an aggregate demand and supply graph look like at equilibrium with both short and long-run aggregate supply?
An aggregate demand (AD) and aggregate supply (AS) graph looks very much like any graph of supply and demand for a single product. There are only a few differences.
First, there are differences in the labeling of the axes. On a supply and demand graph, the vertical axis is labeled with “price” and the horizontal axis is labeled with “quantity.” In a graph that
shows AS and AD, the vertical axis is typically labeled with “price level” and the horizontal axis is labeled with “real domestic output” or “Real GDP.”
In a regular supply and demand graph, there is a demand curve that slopes downward from left to right and a supply curve that slopes upward. The same is true in a graph that shows AS and AD. The AD
curve slopes downward and the short-run AS curve slopes upward. In an AS-AD graph, there is another curve. This is the long run aggregate supply curve (LRAS). It is vertical. Equilibrium is at
the point where all three lines intersect.
Please refer to this link for a visual representation. This link shows two AS and two AD curves because it is meant to show the effects of changes in AS and AD. However, you can see the basic setup
of such a graph even so.
Join to answer this question
Join a community of thousands of dedicated teachers and students.
Join eNotes
|
{"url":"http://www.enotes.com/homework-help/what-does-an-aggregate-demand-supply-grpah-look-430153","timestamp":"2014-04-20T03:24:17Z","content_type":null,"content_length":"28063","record_id":"<urn:uuid:9154c26b-5220-419a-9d68-49d54ee1d307>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00605-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Efficient noise-tolerant learning from statistical queries
Results 1 - 10 of 230
, 1998
"... We consider the problem of using a large unlabeled sample to boost performance of a learning algorithm when only a small set of labeled examples is available. In particular, we consider a
setting in which the description of each example can be partitioned into two distinct views, motivated by the ta ..."
Cited by 1244 (28 self)
Add to MetaCart
We consider the problem of using a large unlabeled sample to boost performance of a learning algorithm when only a small set of labeled examples is available. In particular, we consider a setting in
which the description of each example can be partitioned into two distinct views, motivated by the task of learning to classify web pages. For example, the description of a web page can be
partitioned into the words occurring on that page, and the words occurring in hyperlinks that point to that page. We assume that either view of the example would be su cient for learning if we had
enough labeled data, but our goal is to use both views together to allow inexpensive unlabeled data to augment amuch smaller set of labeled examples. Speci cally, the presence of two distinct views
of each example suggests strategies in which two learning algorithms are trained separately on each view, and then each algorithm's predictions on new unlabeled examples are used to enlarge the
training set of the other. Our goal in this paper is to provide a PAC-style analysis for this setting, and, more broadly, a PAC-style framework for the general problem of learning from both labeled
and unlabeled data. We also provide empirical results on real web-page data indicating that this use of unlabeled examples can lead to signi cant improvement of hypotheses in practice. As part of our
analysis, we provide new re-
, 1995
"... We present an algorithm for improving the accuracy of algorithms for learning binary concepts. The improvement is achieved by combining a large number of hypotheses, each of which is generated
by training the given learning algorithm on a different set of examples. Our algorithm is based on ideas pr ..."
Cited by 419 (16 self)
Add to MetaCart
We present an algorithm for improving the accuracy of algorithms for learning binary concepts. The improvement is achieved by combining a large number of hypotheses, each of which is generated by
training the given learning algorithm on a different set of examples. Our algorithm is based on ideas presented by Schapire in his paper "The strength of weak learnability", and represents an
improvement over his results. The analysis of our algorithm provides general upper bounds on the resources required for learning in Valiant's polynomial PAC learning framework, which are the best
general upper bounds known today. We show that the number of hypotheses that are combined by our algorithm is the smallest number possible. Other outcomes of our analysis are results regarding the
representational power of threshold circuits, the relation between learnability and compression, and a method for parallelizing PAC learning algorithms. We provide extensions of our algorithms to
cases in which the conc...
- MACHINE LEARNING , 2002
"... We consider the following clustering problem: we have a complete graph on # vertices (items), where each edge ### ## is labeled either # or depending on whether # and # have been deemed to be
similar or different. The goal is to produce a partition of the vertices (a clustering) that agrees as mu ..."
Cited by 222 (4 self)
Add to MetaCart
We consider the following clustering problem: we have a complete graph on # vertices (items), where each edge ### ## is labeled either # or depending on whether # and # have been deemed to be similar
or different. The goal is to produce a partition of the vertices (a clustering) that agrees as much as possible with the edge labels. That is, we want a clustering that maximizes the number of #
edges within clusters, plus the number of edges between clusters (equivalently, minimizes the number of disagreements: the number of edges inside clusters plus the number of # edges between
clusters). This formulation is motivated from a document clustering problem in which one has a pairwise similarity function # learned from past data, and the goal is to partition the current set of
documents in a way that correlates with # as much as possible; it can also be viewed as a kind of "agnostic learning" problem. An interesting
- In PODS ’05: Proceedings of the twenty-fourth ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems , 2005
"... We consider a statistical database in which a trusted administrator introduces noise to the query responses with the goal of maintaining privacy of individual database entries. In such a
database, a query consists of a pair (S, f) where S is a set of rows in the database and f is a function mapping ..."
Cited by 158 (34 self)
Add to MetaCart
We consider a statistical database in which a trusted administrator introduces noise to the query responses with the goal of maintaining privacy of individual database entries. In such a database, a
query consists of a pair (S, f) where S is a set of rows in the database and f is a function mapping database rows to {0, 1}. The true answer is P i∈S f(di), and a noisy version is released as the
response to the query. Results of Dinur, Dwork, and Nissim show that a strong form of privacy can be maintained using a surprisingly small amount of noise – much less than the sampling error –
provided the total number of queries is sublinear in the number of database rows. We call this query and (slightly) noisy reply the SuLQ (Sub-Linear Queries) primitive. The assumption of sublinearity
becomes reasonable as databases grow increasingly large. We extend this work in two ways. First, we modify the privacy analysis to real-valued functions f and arbitrary row types, as a consequence
greatly improving the bounds on noise required for privacy. Second, we examine the computational power of the SuLQ primitive. We show that it is very powerful indeed, in that slightly noisy versions
of the following computations can be carried out with very few invocations of the primitive: principal component analysis, k means clustering, the Perceptron Algorithm, the ID3 algorithm, and
(apparently!) all algorithms that operate in the in the statistical query learning model [11].
- In Proceedings of NIPS , 2007
"... We are at the beginning of the multicore era. Computers will have increasingly many cores (processors), but there is still no good programming framework for these architectures, and thus no
simple and unified way for machine learning to take advantage of the potential speed up. In this paper, we dev ..."
Cited by 138 (7 self)
Add to MetaCart
We are at the beginning of the multicore era. Computers will have increasingly many cores (processors), but there is still no good programming framework for these architectures, and thus no simple
and unified way for machine learning to take advantage of the potential speed up. In this paper, we develop a broadly applicable parallel programming method, one that is easily applied to many
different learning algorithms. Our work is in distinct contrast to the tradition in machine learning of designing (often ingenious) ways to speed up a single algorithm at a time. Specifically, we
show that algorithms that fit the Statistical Query model [15] can be written in a certain “summation form, ” which allows them to be easily parallelized on multicore computers. We adapt Google’s
map-reduce [7] paradigm to demonstrate this parallel speed up technique on a variety of learning algorithms including locally weighted linear regression (LWLR), k-means, logistic regression
, 2000
"... In many practical learning scenarios, there is a small amount of labeled data along with a large pool of unlabeled data. Many supervised learning algorithms have been developed and extensively
studied. We present a new "co-training" strategy for using unlabeled data to improve the performance ..."
Cited by 120 (0 self)
Add to MetaCart
In many practical learning scenarios, there is a small amount of labeled data along with a large pool of unlabeled data. Many supervised learning algorithms have been developed and extensively
studied. We present a new "co-training" strategy for using unlabeled data to improve the performance of standard supervised learning algorithms. Unlike much of the prior work, such as the co-training
procedure of Blum and Mitchell (1998), we do not assume there are two redundant views both of which are sufficient for perfect classification. The only requirement our co-training strategy places on
each supervised learning algorithm is that its hypothesis partitions the example space into a set of equivalence classes (e.g. for a decision tree each leaf defines an equivalence class). We evaluate
our co-training strategy via experiments using data from the UCI repository. 1. Introduction In many practical learning scenarios, there is a small amount of labeled data along with a lar...
- IN PROCEEDINGS OF THE TWENTY-SIXTH ANNUAL SYMPOSIUM ON THEORY OF COMPUTING , 1994
"... We present new results on the well-studied problem of learning DNF expressions. We prove that an algorithm due to Kushilevitz and Mansour [13] can be used to weakly learn DNF formulas with
membership queries with respect to the uniform distribution. This is the rst positive result known for learn ..."
Cited by 118 (23 self)
Add to MetaCart
We present new results on the well-studied problem of learning DNF expressions. We prove that an algorithm due to Kushilevitz and Mansour [13] can be used to weakly learn DNF formulas with membership
queries with respect to the uniform distribution. This is the rst positive result known for learning general DNF in polynomial time in a nontrivial model. Our results should be contrasted with those
of Kharitonov [12], who proved that AC 0 is not eciently learnable in this model based on cryptographic assumptions. We also present ecient learning algorithms in various models for the read-k and
SAT-k subclasses of DNF. We then turn our attention to the recently introduced statistical query model of learning [9]. This model is a restricted version of the popular Probably Approximately
Correct (PAC) model, and practically every PAC learning algorithm falls into the statistical query model [9]. We prove that DNF and decision trees are not even weakly learnable in polynomial time in
this model. This result is information-theoretic and therefore does not rely on any unproven assumptions, and demonstrates that no straightforward modication of the existing algorithms for learning
various restricted forms of DNF and decision trees will solve the general problem. These lower bounds are a corollary of a more general characterization of the complexity of statistical query
learning in terms of the number of uncorrelated functions in the concept class. The underlying tool for all of our results is the Fourier analysis of the concept class to be learned.
- J. ACM
"... We describe a slightly sub-exponential time algorithm for learning parity functions in the presence of random classification noise. This results in a polynomial-time algorithm for the case of
parity functions that depend on only the first O(log n log log n) bits of input. This is the first known ins ..."
Cited by 116 (2 self)
Add to MetaCart
We describe a slightly sub-exponential time algorithm for learning parity functions in the presence of random classification noise. This results in a polynomial-time algorithm for the case of parity
functions that depend on only the first O(log n log log n) bits of input. This is the first known instance of an efficient noise-tolerant algorithm for a concept class that is provably not learnable
in the Statistical Query model of Kearns [7]. Thus, we demonstrate that the set of problems learnable in the statistical query model is a strict subset of those problems learnable in the presence of
noise in the PAC model. In coding-theory terms, what we give is a poly(n)-time algorithm for decoding linear k × n codes in the presence of random noise for the case of k = clog n log log n for some
c> 0. (The case of k--- O(log n) is trivial since one can just individually check each of the 2 k possible messages and choose the one that yields the closest codeword.) A natural extension of the
statistical query model is to allow queries about statistical properties that involve t-tuples of examples (as opposed to single examples). The second result of this paper is to show that any class
of functions learnable (strongly or weakly) with t-wise queries for t = O(log n) is also weakly learnable with standard unary queries. Hence this natural extension to the statistical query model does
not increase the set of weakly learnable functions. 1.
, 2003
"... We propose and evaluate a family of methods for converting classifier learning algorithms and classification theory into cost-sensitive algorithms and theory. The proposed conversion is based on
cost-proportionate weighting of the training examples, which can be realized either by feeding the weight ..."
Cited by 106 (13 self)
Add to MetaCart
We propose and evaluate a family of methods for converting classifier learning algorithms and classification theory into cost-sensitive algorithms and theory. The proposed conversion is based on
cost-proportionate weighting of the training examples, which can be realized either by feeding the weights to the classification algorithm (as often done in boosting), or by careful subsampling. We
give some theoretical performance guarantees on the proposed methods, as well as empirical evidence that they are practical alternatives to existing approaches. In particular, we propose costing, a
method based on cost-proportionate rejection sampling and ensemble aggregation, which achieves excellent predictive performance on two publicly available datasets, while drastically reducing the
computation required by other methods.
, 2005
"... Abstract. Forgery and counterfeiting are emerging as serious security risks in low-cost pervasive computing devices. These devices lack the computational, storage, power, and communication
resources necessary for most cryptographic authentication schemes. Surprisingly, low-cost pervasive devices lik ..."
Cited by 101 (4 self)
Add to MetaCart
Abstract. Forgery and counterfeiting are emerging as serious security risks in low-cost pervasive computing devices. These devices lack the computational, storage, power, and communication resources
necessary for most cryptographic authentication schemes. Surprisingly, low-cost pervasive devices like Radio Frequency Identification (RFID) tags share similar capabilities with another weak
computing device: people. These similarities motivate the adoption of techniques from humancomputer security to the pervasive computing setting. This paper analyzes a particular human-to-computer
authentication protocol designed by Hopper and Blum (HB), and shows it to be practical for low-cost pervasive devices. We offer an improved, concrete proof of security for the HB protocol against
passive adversaries. This paper also offers a new, augmented version of the HB protocol, named HB +, that is secure against active adversaries. The HB + protocol is a novel, symmetric authentication
protocol with a simple, low-cost implementation. We prove the security of the HB + protocol against active adversaries based on the hardness of the Learning Parity with Noise (LPN) problem.
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=39587","timestamp":"2014-04-18T00:43:46Z","content_type":null,"content_length":"42504","record_id":"<urn:uuid:d0dfe0a5-74b6-4396-8454-e39e0aff9feb>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00039-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Probably the first question we should ask is not “What can we know?” but “How can we go about knowing anything?” Both the philosophy of knowledge, known as epistemology, and the philosophy of
science, are large subjects, so I’ll only touch on a few main points to describe my views. What I’ll argue here, is that we can not know anything with absolute certainty, but we can know many things
to a practical certainty. Then I’ll argue that the key feature that distinguishes science from other forms of careful investigation is the data that it admits. Differences in method in this view,
only serve to distinguish good science from bad science, or good science from better science.
Pure deductive reason
Can we use pure deductive reason to arrive at truths about the world?
The classic standard form of deductive reasoning is known as the Aristotelian syllogism.
It consists of a major premise, a minor premise, and a conclusion that must follow if both premises are true. For example:
All men are mortal.
Socrates is a man.
Therefore Socrates is mortal.
But the form of the argument is still valid, even if the premises are false. For example:
All dogs are cats.
Rex is a dog.
Therefore Rex is a cat.
Deductive logic is truth preserving. If the premises are true then the conclusions will be true. If the premises are not true, the conclusion need not be true. Without some other method to establish
the truth of the premises, we are no closer to establishing the actual truth of the conclusion. Deductive logic does not generate any new information. The conclusion is implicit in the premises. In
fact if you fully define all the terms you are using, deduction is reduced to tautology. For example:
If we say 2 + 3 = 5, and then fully spell out what we mean by “2” “3” “+” and “5”,
we are just saying 5 = 5.
This does not mean deductive logic is useless. Given a set of premises, we can use deductive logic to see what is implicit in those premises, and not just what is explicit.
The quintessential example of this sort of thought is Euclidean geometry. Given 3 basic definitions, and 5 postulates, which are taken to be self-evident, deductive logic is used to prove all of
Euclidean geometry. For example, we can show all triangles have angles that sum to 180 degrees. While the fact that all triangles have 180 degrees is contained in the postulates, it is not at all
obvious, just by inspection.
It is also useful in real life. You may never have run full speed into a tree, but you know it would hurt. The reasoning might be:
High speed contact with hard objects hurts.
Trees are hard objects.
High speed contact with a tree will hurt.
But deductive logic is sterile. Without some sort of real-world information to feed into our syllogisms, we are confined to mathematical and logical truths that reduce to tautology.
Induction, and Bayesian analysis.
Another form of reason is known as induction. A simple inductive argument might be:
We have done “A” a large number of times, under a variety of conditions and the result was always “B”. Therefore A leads to B. Or we have observed many different cats. All of the cats meowed.
Therefore cats meow. This specific form of induction is also called generalization.
The first thing to notice is that we do not have a proof. Induction leads only to probable truths. It should also be clear that if the only tools in our bag are induction from data, and deductive
reason, we can never prove anything absolutely about the real world. The best we can hope for is to show something to be highly probable, or true for practical purposes. It does indeed appear to be
the case that justified absolute certainty is impossible. However, even that statement can not be made with absolute certainty. We could, for example, speculate than someone could be born with
absolute perfect knowledge of something, that the rest of us did not have access to.
It’s more difficult to say what constitutes a good inductive argument, than what constitutes a good deductive argument. One way to get a more quantitative statement of how induction works is to use
Bayes’s theorem.
What Bayes’s theorem does in a nutshell is tell us how to update probabilities as new information comes in. Suppose for example, we thought the probability of hypothesis “H” was 50%, and we get a new
result confirming it. Bayes’s theorem will tell us what our new estimate of the likelihood of the hypothesis should be. The formula is as follows:
Let h = prior estimate of probability of hypothesis.
Let h|e = probability of hypothesis given the event.
Let e|h = probability of the event given the hypothesis.
Let e = probability of the event.
Then h|e = (h * e|h) / e.
More detail on Bayes’s theorem and a Bayesian calculator can be found here:
One positive point about Bayes’s theorem is that we humans seem to be able to do a pretty good job of working it just by intuition. Experiments have been done to show this. The exception is that we
are not good with probabilities very close to 1 or 0. We tend to ignore, or overestimate slight risk.
A lot of important investigation into this phenomenon was done in the field of financial economics by Kahneman and Tversky.
Another positive point for Bayes’s theorem is that proponents of this idea have had a fair amount of success in using it to describe actual historical progress in science.
One objection to Bayes’s theorem is that we start with a subjective estimate of the probability. This bothers some people more than others. I think it is fair to say that when approaching any new
problem, we are never starting in a vacuum, and we will always bring with us certain historical pre-conceptions. But then how did we get those pre-conceptions? Guess work? This seems to be a
difficulty for a theory of knowledge, even though it seems to be a quite accurate description of what goes on.
Is there some “correct” probability to assign to the hypothesis a priori? If we have a finite number of possibilities, say for example if there are 10 horses in a race, we can assign them all a 10%
chance of winning, if we know nothing about them. This method works fine for finite cases. However, if we are talking about hypothesis, this would seem to mean we need to enumerate all possibilities,
which could be infinite. The probability of any specific hypothesis is then 0. This is historically the point that was made by Karl Popper. It led him to claim that we can only falsify theories never
prove them. This claim was later countered by Thomas Kuhn who showed historically in science contradictory data often does not falsify a theory. Rather, it can simply lead to auxiliary hypotheses, in
some cases. We’ll come back to Kuhn’s ideas later.
I claim that there is a way to non-subjectively get the process started. But first we have to ask: “What do we mean when we say a scientific law is true?” I claim that we mean it is a very good
approximation of reality. If we look at one of the first examples of modern science in Isaac Newton and his theory of gravity, we find that Newton applied his laws of gravity to an ideal case of
point masses, with no interaction, in orbit around the sun. He found that they led directly to Kepler’s laws of planetary motion. But Kepler’s laws of motion are not exactly correct for the real
planets, they are only a very good approximation. Newton’s methods of reducing a problem to solvable parts, and idealizing it, would be the guiding light for generations of scientists to follow. But
was Newton’s theory of gravity true? In fact, in spite of giving excellent predictions, we know that it was later replaced by Einstein’s theory of relativity. So, what we have in successful
scientific laws and theories is an excellent approximation of reality, not reality itself.
So, my claim is that what we mean by saying hypothesis “A” is 90% likely to be true, is that it is 90% likely to be a good approximation of the real world. Or, in other words, the next time we use
it, we are 90% likely to get a result it would predict. This gives us a means to get started.
Suppose we are in an empty universe, with no prior knowledge. All that exists is us and a bag with objects in it. What might we pull out? The odds of any specific hypothesis being true are zero. Now
we reach in and pull out a red marble. We can now form the hypothesis “This bag contains all red marbles”. The only alternative is “this bag does not contains all red marbles”. What probability
should we use as our a priori hypothesis in our Bayesian formula? Let’s say the bag contained 100 marbles. We would need to draw out close to half of them before we started feeling comfortable that
every marble was red. And if the bag contained infinite marbles, the probability of the hypothesis “all the marbles are red” being true would be zero, based on any number of marbles sampled.
Still, if we pull out 5 marbles in a row, we intuitively feel there is something good about this hypothesis of marbles being red. What gives us this feeling, is that fact that the next marble is
probably red. Based on our sampling, the bag probably contains at least mostly red marbles, and our next marble will probably be red. So what we really want to start our Bayesian formula is the
probability that our hypothesis will be correct in the next instance.
Based on this we can say that after pulling the first red marble form the bag, we can assign the hypothesis “This bag contains all red marbles.” a 50% probability. This does not mean there is a 50%
chance it is “true”. All it means is there is a 50% chance that it will prove to be a useful approximation of reality in the next instance. Another way to see this is to see that after pulling one
red marble, with no other information, we have only 2 possibilities. The next pull will be red, or it will not be red. Since we have 2 possibilities, and no other information, 50% is the correct
possibility to assign to each possibility.
For more on assigning non-subjective prior probabilities see the Jaynes reference. He makes the case that we are not interested in answering infinite question, but questions with a finite number of
possible answers. He also argues there is a not subjective way to correctly enumerate the possibilities, and thus give us non-subjective staring probabilities.
Now the picture will become much more complex as we introduce other bags, different colors of marbles, etc. Exact formulas for updating probability could be worked out, but fortunately, as was
pointed out, we are fairly good at doing this sort of calculation intuitively. The purpose here is just to show we have a starting point, so we can start working out rules about the world that are
useful approximations.
In the process, we have answered Popper’s claim that all theories have a 0% probability of being true. That may be the case, but we can find theories that have very high probabilities of being
excellent approximations of the world. We have also retained an important part of Popper’s falsification idea. Popper claimed that a good scientific theory should be as falsifiable as possible. The
more the better. So, while we could have claimed, “this bag often produces red marbles”, the claim “this bag produces red marbles”, is a better scientific theory, because it is falsifiable. Now, as
Kuhn would later point out, a blue marble may not immediately lead us to discard the rule. We might search for what was different the time a blue marble appeared. This is reasonable. If the bag
continues to produce almost all red marbles, we still have a good approximation of reality. We can get an even better approximation, if we understand what causes a blue marble from time to time.
Another challenge to this view of science came from Thomas Kuhn. He claimed that there are no pure facts. All facts in this view are theory laden. To look at an example, suppose we see a sequence of
numbers, “2”, “4”. We hypothesize that each number is two more than the one before it. Would we be justified in giving this hypothesis a 50% probability, like we did for the red marble? Is it 50%
likely that the next number is “6”? What if we look at it differently and say that “4” is twice “2”, now should we expect an “8”? There seems to be a problem here. In fact, we could come up with an
infinite number of patterns that contain a "2”, followed by a “4”. The “fact” that the series is increasing by two is not a raw fact, but is theory laden.
Kuhn claimed that normal science took place within a paradigm, and that from time to time paradigm shifts occurred where all the facts were reinterpreted and seen from within a new framework. He
claimed that these frames were incommensurate. Based on Kuhn’s work, there seemed to be no objective way of choosing a paradigm.
The philosophy of science does have an impact on society throughout history. For the ancient Greeks, the new science was geometry, and this profoundly influenced Aristotle, Plato, and western
civilization for centuries. Newton’s physics was a model for social thinkers of the enlightenment. The 20th century opened with Einstein’s relativity, and the uncertainty of quantum mechanics. And
certainly the themes of uncertainty and relativity appear in Kuhn’s work. In my opinion Kuhn’s work, or really the sociologists that have taken his work and run with it, have had a pernicious
influence on society. The claim is that there is no objective path towards truth. On the left, this has led to cultural relativism, and the claim that societies are different, but it is impossible to
judge one better than another. On the right, it sparked American fundamentalism. (see interesting related example here) The claim made by some academics was that since any paradigm is as good as any
other, Christians should simply regard the bible as absolute truth, and that any evidence from the world of man that does not exactly fit, should be ignored, or challenged. This claim of absolute
truth can not be disproved within their paradigm. If we set the probability of any given hypothesis equal to one, then this hypothesis can not be disproved by any amount of evidence.
Fortunately, in my opinion, I think we have moved into a new scientific era. The last half of the 20th century gave us computers, and genetic engineering. And the new emerging paradigm for knowledge
involves things like complex systems, and information theory. While it is not possible to predict exactly what this will bring, I, personally, think we are back on the right track.
As I pointed out earlier, absolute certainty is not possible. But, I don’t believe we want to say that all possibilities are equal, either. Some things are more probable than others, and while we may
not be able to absolutely disprove any paradigm, we can objectively choose between them. Kuhn’s work was very important, and showed us how science worked historically. Paradigms and paradigm shifts
are very real. But in my opinion we take the concept too far if we claim there is no way to objectively choose paradigms. In ethics, we can say that ethical relativism is very good in terms of
describing things as they are and as they have been historically. If we simply survey people, we find that they hold similar believes within a culture, and that believes vary from culture to culture,
and over time. But while this is very good for description, it gives us no way to engage in prescriptive ethics. It does not tell us how we should try to improve cultures.
Recently a group of thinkers that have been dubbed the “new experimentalists”, for lack of a better term, have shown that some facts can be for all practical purposes, theory independent. Suppose for
example we started a sequence a number of times with “2”, and always found that “4”, followed. That is we experimented to find that “2” leads to “4”. The statement that “4” follows “2”, would then be
virtually theory independent. These type of facts, free of any high level theory, can be used to choose between paradigms. Note, however, the qualifier “practical purposes”. It is in principle
possible to question the validity of “facts” ad infinitum. But if we reserve the word “facts” for those observations that are nearly unreducibly basic, and have universal practical acceptance, then
we do indeed have “facts" for all practical purposes.
But we still have an issue to deal with here. If we see “2”, “4”, “6”, we expect to see “8”, but how can we justify this? This is pattern recognition, and/or reasoning by analogy. The problem is that
based on induction alone, we expect another 2, 4, or 6. And if we claim to see a pattern, we are correct, but there are infinite possible patterns that fit. What we want to claim is that based on
seeing a simple pattern, we expect it to continue. I believe we are justified in this, but the reasoning is hard to quantify, and I won’t attempt to quantify it exactly.
One idea is to consider this as an extension of the inductive principle. First we need to recognize that even simple repetition is a pattern. If we see “blue,blue,blue,blue” and expect blue next, we
are identifying the simplest pattern. If we saw them on Tuesday, the real pattern could be blue on Tuesday, red on Wednesday. Both ideas are supported by the data. Why should we favor the “all blue”
hypothesis? My answer is that “blue Tuesday” has more information content than “blue”. “Blue Tuesday” has an additional hypothesis not supported by additional data, so we should eliminate it by the
principle of Ockham’s razor.
Now we can look back at the pattern “2”, “4”, “6”. We could assume that there are 1/3 of each type, or we could say there is a pattern of increase by two. The pattern of increase by two is better
supported because we see that happen twice in the data. The other hypothesis only has one example to support it.
So to summarize, if two hypothesis both explain the data we should support the hypothesis that can be supported by more confirmation in the data. If both are equally confirmed, we should choose the
hypothesis with the least information content, since the extra information content is not supported in the data. Thus pattern recognition can be reduced to the inductive principle.
But now we have to ask in any given case, “Can we identify a pattern?” and “How many patterns?” Differences in our cognitive abilities, patience, and the categories that we pick will lead us to
different starting probabilities. A system of categories and rules that explains the data well may still be replaced by one that is even better confirmed by the data, or a system that is even
simpler, but finding that system may require creative insight.
The end result of all this is that while is might be possible to develop precise statements for a theory of knowledge, that tell us exactly why we would be justified in estimating the probabilities
that we do, for all practical purposes, these probabilities are subjective. We will not bring the same history to a problem as someone else will, nor will we bring the same pattern recognition
ability or persistence.
But while our starting assessments may be for all practical purposes subjective, they need not stay that way. As long as we are open minded and not dogmatic, that is to say as long as we do not fix
our probably estimates at 0 or 1, but instead allow at least a small possibility for error, then we will be able to update our assessment of the probabilities using Bayes’s theorem, and
experimentation that is as theory neutral as possible.
This is where communication is vital. If two scientists disagree, they should be able to trade experimental data and reasoning, and come to an agreement, if the evidence is compelling. This also
tells us that the best way to improve your personal approximation of reality is to seek out points of view that disagree with your view, and understand what leads people to think the way they do. The
results of this scientific consensus will not be perfect either, but it is the best method yet discovered by humans.
We can also bring deduction back into the picture. If our induction says A is 90% likely to be true, and B is 90% likely to be true, and if logical deduction says if A is true and if B is true then C
must be true, then we can now say that C is 90% * 90% = 81% likely to be true, without ever having directly tested C. Of course we might go test C anyway, and if we confirm it, then we have increased
the probability of A, B and C being true. Thus the process of deduction can be used to link together a vast array of different and seemingly unconnected inductions, and increase the probably that all
of them are true. This gives us the great certainty that is possible with the scientific method.
Inductive problem
Before leaving this topic we should discuss a classic challenge to the whole idea of induction. It has been claimed that the only way to show that induction is a valid process is by induction. We
would have to say something like, “Well, we’ve tried induction a great number of times, and it always worked, so it is valid”. The problem with that reasoning is that it is circular. Induction is
used to validate itself.
This may not be quite as invalid as it sounds, since we are not proving that induction works. We are just observing that it generally has worked, historically. Also, we are not using it to claim our
observed rules are "true", only that they are good approximations of reality, as we have observed it so far. Our use of it may just be a tautology. But still, we could ask "Why should it continue to
work?" So this may not be completely satisfactory.
We could ground induction in the laws of probability, but they require some mathematical axioms to be accepted. We could say that because causes follow from effects, today should be relevant to
tomorrow. But again, we know this only by observation and induction. I suppose the practical thing to say here, is that if by chance tomorrow is completely unrelated to any past experience, then we
should not worry about it since nothing in our past experience could help us prepare for it any better.
Or, one could simply argue, that we must start somewhere. Nothing is meaningful by itself in a vacuum. Things are meaningful only by relation to other things.This is the view of Quine's ontological
relativity. We have a vast web of interconnected ideas, but in the end there is nothing to ground the web to, but itself. The best we could hope for is a theory of the universe, that explained all
observations to date, not one that could be proven true.
But what if starting with different reasonable seeming things, gives different answers?
Then we are back to Kuhn's problem of how to choose between paradigms.
I would argue that we have no choice but to start with induction.
Without it, we can take no lessons from experience.
Even our language, which we must use to think about anything is inductive in character. If we use the word “cat”, we are making a generalization. The word describes many similar animals we have
known. It is also a classification. It separates cats from “not-cats”. But generalization, and categorization are processes of induction. Small children often make mistakes of overgeneralization, and
under-generalization, as they learn to use a language. This shows that language itself is learned by induction and experiment. Without language we can not even start on the path of knowledge. There
would seem to be no alternative for us humans than to accept that the process of induction leads to probable truths.
Language provides a good analogy for knowledge in general. Words can be defined in terms of other words, which in turn can be defined in terms of other words, leading to a vast web or interconnected
ideas, that are not grounded to anything. All words are imperfectly defined. But yet we are able to use language. By induction we associate it with things in reality. The word itself has no meaning,
until it is associated with something. We can use simple experimentation to check our understanding of words against other’s understanding. But is it “true” that a “cat” is a furry animal that meows?
Does the word “cat”, carry with it a perfect unchanging form of perfect “catness”, as Platonists contended, particularly in medieval times? I argue no. “Cat” is just a very useful approximation of
reality, not truth itself. The same is true about all our theories.
What makes it science?
O.K. so we have sketched a theory of knowledge, and compared it to language in general. Is this all that science is? A careful way of knowing things? Is there anything about it that separates it from
other careful ways of knowing, that are not science? I would argue there is. Science proceeds with the goal of consensus by logical arguments and careful methods from common shared data. We have
discussed how the arguments and methods work, but what about the data?
If we have shared data, that is either publicly available, or reproducible, or both, then we may be able to reach consensus on facts about that data. However, if we have private data, the possibility
exists that consensus will never be possible between those with access to the data, and those without access. Therefore science, with its goal of consensus about the natural world, deals only in
public and/or reproducible data. Testimony is not an acceptable form of scientific data.
More on why public data is preferred - here.
This puts fields like psychology at the edge of science. It may be possible to study humans scientifically, from a Skinnerian behaviorist perspective. However, it may be more useful to ask people
about themselves, which takes us out of the realm of public data. Even the doctor that asks a patient to describe a feeling, is not really doing science, by this definition. But again, this does not
mean that the doctor is not engaging in anything useful. Courtrooms and religion both rely heavily on testimony and private experience, again making them non-scientific. But, also again, not making
them useless. This is not a universally accepted definition of science, but I think it is a good one. It is also the one proposed by Ian G. Barbour in his book, "Religion and Science: Historical and
Contemporary Issues".
One way, that I would like to stress that we should not define science, is that we should not define it as only experimental. There are also observational sciences. Being able to experiment is ideal.
We can then generate as much data as we want, of exactly the kind we want. In an observational science, we must take the data we can get. For the most part, the field of astronomy, for example, falls
under the heading of observational science. All we can do is build bigger and better instruments to gather the data that is there. We cannot generate new data. While this does not involve
reproducible experiments, it does involve public data, and therefore reproducible observation, and it should certainly be classified as a science.
Are there known limits to knowledge? Yes. Beyond the inability to prove things absolutely, as argued above, we have Godel’s theorem in mathematics. “Godel Escher Bach: An Eternal Golden Braid”, is a
very good text that describes this theorem and it is a classic in artificial intelligence. The basic point of the theorem is that no system of mathematics can ever be all of the below:
1) Non-trivial
2) Non-contradictory
3) Complete
The system can never completely "know" itself, is another way of expressing it. Yet another way of looking at it is to say that if you choose more that one axiom for your system, you can never prove
the axioms do not contradict in some way.
Another fundamental limitation is given by quantum mechanics and chaos theory. I’ll talk about quantum mechanics more in a subsequent essay. But here, we can just say that it makes certain
fundamental events inherently unpredictable. And chaos theory tells us that in complex systems, small changes can eventually lead to vastly different outcomes, so there is an inherent limitation on
our ability to completely predict the future.
In closing, I’d just like to reiterate the most important points.
Absolute knowledge is not possible and consensus about truth will only be possible if:
1) We avoid dogmatically setting our estimates of probability to 0 or 1.
2) We use public and/or reproducible data while trying to reach consensus.
3) We take care to carefully use deductive logic to check facts against other facts.
4) We actively seek discourse with those that have different views.
This, I believe, describes the process of science.
Also see
Induction and the problem of miracles
Bayesian Epistemology
For more information:
What Is This Thing Called Science?
by Alan F. Chalmers
A Companion to the Philosophy of Science (Blackwell Companions to Philosophy)
by W. H. Newton-Smith (Editor)
Great Minds of The Western Intellectual Tradition, 3rd Edition
|
{"url":"http://www.davegentile.com/philosophy/knowledge.html","timestamp":"2014-04-20T00:41:06Z","content_type":null,"content_length":"37156","record_id":"<urn:uuid:3794fac5-80d2-41d1-a08d-78f64a2ecec4>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00191-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Can I use K-means algorithm on a string?
up vote 4 down vote favorite
I am working on a python project where I study RNA structure evolution (represented as a string for example: "(((...)))" where the parenthesis represent basepairs). The point being is that I have an
ideal structure and a population that evolves towards the ideal structure. I have implemented everything however I would like to add a feature where I can get the "number of buckets" ie the k most
representative structures in the population at each generation.
I was thinking of using the k-means algorithm but I am not sure how to use it with strings. I found scipy.cluster.vq but I don't know how to use it in my case.
add comment
3 Answers
active oldest votes
K-means doesn't really care about the type of the data involved. All you need to do a K-means is some way to measure a "distance" from one item to another. It'll do its thing based
on the distances, regardless of how that happens to be computed from the underlying data.
up vote 3 down
vote accepted That said, I haven't used scipy.cluster.vq, so I'm not sure exactly how you tell it the relationship between items, or how to compute a distance from item A to item B.
This answer doesn't make any sense. What is the "distance" between two strings of RNA such that it A) obeys the triangle inequality and B) is euclidean? There are many clustering
algorithms, and it seems beyond me how k-means in particular would be useful in this circumstance. – sclv Jun 9 '11 at 16:51
The distance I am using is the structural distance, for example sequences: (1) "(((....)))" and (2) "((((..))))" Have a distance of 1 since the only difference in an insertion –
Doni Jun 9 '11 at 20:35
add comment
One problem you would face if using scipy.cluster.vq.kmeans is that that function uses Euclidean distance to measure closeness. You'd have to find a way to convert your strings into
numerical vectors and be able to justify using Euclidean distance as a reasonable measure of closeness. What if two strings are identical except that one has an additional basepair
inserted somewhere. You might want to consider them "close", but Euclidean distance might be far apart...
up vote 4
down vote Perhaps you are looking for Levenshtein distance?
add comment
K-means only works with euclidean distance. Edit distances such as Levenshtein don't even obey the triangle inequality. For the sorts of metrics you're interested in, you're better off
using a different sort of algorithm, such as Hierarchical clustering: http://en.wikipedia.org/wiki/Hierarchical_clustering
up vote 4 Alternately, just convert your list of RNA into a weighted graph, with Levenshtein weights at the edges, and then decompose it into a minimum spanning tree. The most connected nodes of
down vote that tree will be, in a sense, the "most representative".
Why the downvote? – sclv Jun 9 '11 at 16:50
add comment
Not the answer you're looking for? Browse other questions tagged python algorithm cluster-analysis bioinformatics k-means or ask your own question.
|
{"url":"http://stackoverflow.com/questions/6293637/can-i-use-k-means-algorithm-on-a-string","timestamp":"2014-04-17T05:15:26Z","content_type":null,"content_length":"75414","record_id":"<urn:uuid:279e2da1-3fa4-4738-8ff2-c3d7620ef678>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00029-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Why doesn't forall (RankNTypes usage) apply by default?
up vote 8 down vote favorite
I am not so familiar with forall, but recently read this question: What does the `forall` keyword in Haskell/GHC do?
In one of the answers is this example:
{-# LANGUAGE RankNTypes #-}
liftTup :: (forall x. x -> f x) -> (a, b) -> (f a, f b)
liftTup liftFunc (t, v) = (liftFunc t, liftFunc v)
The explanation is good and I understand what forall is doing here. But I'm wondering, is there a particular reason why this isn't the default behaviour. Is there ever a time where it would be
Edit: I mean, is there are a reason why the forall's can't be inserted by default?
haskell ghc
3 Are you asking why the extension isn't on by default or why (x -> f x) -> (a,b) -> (f a, f b) isn't treated the same as (forall x. x -> f x) -> (a, b) -> (f a, f b)? If it's the latter, can you
specify the logic by which you propose the compiler should decide where to insert the foralls? – sepp2k Apr 12 '12 at 18:05
The latter, and I wouldn't know where to begin proposing anything at all on this subject! – Peter Hall Apr 12 '12 at 18:23
Note that the type (x -> f x) -> (a, b) -> (f a, f b) is fairly useless. The function can't apply the first argument to either element of the tuple, so its result must be either ⊥ or (⊥, ⊥). –
hammar Apr 13 '12 at 0:42
add comment
2 Answers
active oldest votes
Well, it's not part of the Haskell 2010 standard, so it's not on by default, and is offered as a language extension instead. As for why it's not in the standard, rank-n types are quite a
bit harder to implement than the plain rank-1 types standard Haskell has; they're also not needed all that frequently, so the Committee likely decided not to bother with them for reasons
of language and implementation simplicity.
Of course, that doesn't mean rank-n types aren't useful; they are, very much so, and without them we wouldn't have valuable tools like the ST monad (which offers efficient, local mutable
state — like IO where all you can do is use IORefs). But they do add quite a bit of complexity to the language, and can cause strange behaviour when applying seemingly benign code
transformations. For instance, some rank-n type checkers will allow runST (do { ... }) but reject runST $ do { ... }, even though the two expressions are always equivalent without rank-n
types. See this SO question for an example of the unexpected (and sometimes annoying) behaviour it can cause.
up vote 15 If, like sepp2k asks, you're instead asking why forall has to be explicitly added to type signatures to get the increased generality, the problem is that (forall x. x -> f x) -> (a, b)
down vote -> (f a, f b) is actually a more restrictive type than (x -> f x) -> (a, b) -> (f a, f b). With the latter, you can pass in any function of the form x -> f x (for any f and x), but with
accepted the former, the function you pass in must work for all x. So, for instance, a function of type String -> IO String would be a permissible argument to the second function, but not the
first; it'd have to have the type a -> IO a instead. It would be pretty confusing if the latter was automatically transformed into the former! They're two very different types.
It might make more sense with the implicit foralls made explicit:
forall f x a b. (x -> f x) -> (a, b) -> (f a, f b)
forall f a b. (forall x. x -> f x) -> (a, b) -> (f a, f b)
I think I need to digest this some more :) – Peter Hall Apr 12 '12 at 18:37
Also, for second and higher order polymorphism, there's no such thing as the most general type, ∀ b. (∀ a. a → b) → (b, b) isn't more general than ∀ c d. (∀ a b. a → b) → (c, d) nor
the other way around. And while type inference is decidable for first and second order polymorphism, it is not for third and higher. – Vitus Apr 12 '12 at 21:12
@vitus - can you provide a reference for decidability of second order polymorphism? I'm unfamiliar with it. – John L Apr 12 '12 at 23:31
@JohnL: I've found it in this wikipedia article, which in turn references Pierce's Types and Programming Languages. – Vitus Apr 13 '12 at 6:18
@JohnL: It should be in chapter 23.8. (rank 2 polymorphism restriction of System F): Kfoury and Wells (1999) gave the first correct type reconstruction algorithm for the rank 2 system
and showed that type reconstruction for ranks 3 and higher of System F is undecidable. – Vitus Apr 13 '12 at 11:39
show 1 more comment
I suspect that higher rank types aren't enabled by default because they make type inference undecidable. This is also why, even with the extension enabled, you need to use the forall
keyword to get a higher-rank type - GHC assumes that all types are rank-1 unless explicitly told otherwise in order to infer as much type information as possible.
Put another way, there's no general way to infer a higher-rank type (forall x. x -> f x) -> (a,b) -> (f a, f b), so the only way to get that type is by an explicit type signature.
up vote 7
down vote Edit: per Vitus's comments above, rank-2 type inference is decidable, but higher-rank polymorphism isn't. So this type signature is technically inferrable (although the algorithm is more
complex). Whether the extra complexity of enabling rank-2 polymorphic type inference is worthwhile is debatable...
add comment
Not the answer you're looking for? Browse other questions tagged haskell ghc or ask your own question.
|
{"url":"http://stackoverflow.com/questions/10129101/why-doesnt-forall-rankntypes-usage-apply-by-default","timestamp":"2014-04-20T01:47:05Z","content_type":null,"content_length":"80857","record_id":"<urn:uuid:d72663b4-a7e9-451b-9063-df2941b122dd>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00063-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Solving the Cubic with Cardano - The Problem
The Problem
Before we examine the solution to \(x^3=ax^2+b,\) we consider some background on the cubic.
In 1494, Luca Pacioli published his Somma di aritmetica, geometrica, proporzioni e proporzionalità, which collected much of the mathematical knowledge of the time. In his account of algebra, he
listed the cubic problems, including those unsolved at the time. He also put forth his opinion that the unsolved problems would never be solved. Note that I write “cubic problems” rather than “cubic
equations”: equations, as we know them, were a product of the hundred years after Cardano wrote, culminating with Descartes and analytic geometry. And I use the plural, “cubic problems,” because at
that time, the theory of negative numbers had not been fleshed out (Cardano called them ficta or “fictitious” in places). Thus, the cubic problems were all stated with strictly positive coefficients,
and there was never a question of moving all terms to one side to achieve any kind of standard form. What made these problems “unsolved” was the lack of a formula for the solution in terms of the
numbers given in the problem; geometric solutions for some of the cubic problems had been constructed by Islamic mathematicians, such as Umar al-Khayyami’s solution of 1070 [Berggren, p. 119].
The rather dramatic story of the solution of the cubic has been told, excellently, elsewhere. (See [Dunham, pp. 133-142] or [Katz, pp. 358-361], for example.) In brief, Scipione del Ferro, a
professor at Bologna, found the solution to "cube and thing equal to number" (in our symbolic notation, \(x^3=ax+b\) with \(a\) and \(b\) positive rational numbers) in the early sixteenth century. He
told his student Antonio Maria Fior of the solution, and Fior challenged Niccolo Tartaglia to a mathematical duel. In such a duel, each contestant posed a list of questions to the other, with the
loser paying the winner a prize. Tartaglia had found the solution to "cube and square equal to number," and, just in time, the solution to "cube equal to thing and number." Fior couldn’t answer
Tartaglia's questions, and Tartaglia won the duel. A few years later, Cardano obtained the solution to both cubics from Tartaglia (whether or not such obtaining was underhanded is still a matter of
contention), and published the solutions in his Ars Magna.
The solution I will present is to the problem “cube equal to square and number,” or \(x^3=ax^2+b.\) Cardano’s solution in the Ars Magna proceeds in three steps over three chapters. First, in Chapter
6, Cardano decomposed the cube into eight parts, establishing a geometric demonstration for our binomial formula \[{(x+y)}^3=x^3+3x^2y+3xy^2+y^3.\]He used this decomposition in almost all of his
arguments about cubics. Second, in Chapter 14, Cardano depressed the problem “cube equal to square and number” to “cube equal to thing and number”: symbolically, he depressed \(x^3=ax^2+b\) to \(y^3=
Ay+B.\) His geometric procedure was radically different from our analytic one, and relied on the decomposition of the cube from Chapter 6. This depression allowed him to use his solution to “cube
equal to thing and number” presented in Chapter 12, to derive his rule for the solution to “cube equal to square and number.”
His solution in Ars Magna to "cube equal to thing and number" consists of two parts, a geometric demonstration that a particular number is the solution, and a rule for finding that number from the
coefficients of the problem. The demonstration does not show how to find the number given by the rule; I argue that the number arises naturally from an abbaco "problem of ten," and that Cardano
relied on this abbaco knowledge to connect his demonstration with his rule.
|
{"url":"http://www.maa.org/publications/periodicals/convergence/solving-the-cubic-with-cardano-the-problem","timestamp":"2014-04-17T08:19:24Z","content_type":null,"content_length":"102790","record_id":"<urn:uuid:4f542575-7ef8-4c7b-8a2e-9e78f7f50ece>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00605-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Back to Last Page
> <
Full Glossary
Definition of Cointegration: "An (n x 1) vector time series y[t] is said to be cointegrated if each of the series taken individually is ... nonstationary with a unit root, while some linear
combination of the series a'y is stationary ... for some nonzero (n x 1) vector a."
Hamilton uses the phrasing that y[t] is cointegrated with a', and offers a couple of examples. One was that although consumption and income time series have unit roots, consumption tends to be a
roughly constant proportion of income over the long term, so (ln income) minus (ln consumption) looks stationary. (Econterms)
Terms related to Cointegration:
About.Com Resources on Cointegration:
Writing a Term Paper? Here are a few starting points for research on Cointegration:
Books on Cointegration:
Journal Articles on Cointegration:
|
{"url":"http://economics.about.com/library/glossary/bldef-cointegration.htm","timestamp":"2014-04-19T01:47:54Z","content_type":null,"content_length":"35168","record_id":"<urn:uuid:adc84a19-fcbc-41ba-8b0b-1955cc389909>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00116-ip-10-147-4-33.ec2.internal.warc.gz"}
|
1209 Submissions
[1] viXra:1209.0014 [pdf] submitted on 2012-09-05 00:31:28
Dual Numbers
Authors: W. B. Vasantha Kandasamy, Florentin Smarandache
Comments: 159 Pages.
The concept of dual numbers was introduced in 1873 by W.K.Clifford. In this book the authors build higher dimensional dual numbers, interval dual numbers and impose some algebraic structures on them.
The S-vector space of dual numbers built over a Smarandache dual ring can have eigen values and eigen vectors to be dual numbers. Complex modulo integer dual numbers and neutrosophic dual numbers are
also introduced. The notion of fuzzy dual numbers can play a vital role in fuzzy models. Some research level problems are also proposed here.
Category: Algebra
|
{"url":"http://vixra.org/alg/1209","timestamp":"2014-04-20T18:30:12Z","content_type":null,"content_length":"3923","record_id":"<urn:uuid:cb282b7e-c6e2-4f1c-8bc0-01b7492801cf>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00339-ip-10-147-4-33.ec2.internal.warc.gz"}
|
What is network design I
Steiner Tree
What is network design I
Steiner Tree
What is network design II
(connected facility location)
What is network design II
(connected facility location)
Is there fair & budget balanced cost-sharing?
Dealing with limited individual utility: should all clients get connected?
From cost sharing game to mechanism design
From cost-sharing to mechanism design
Are cost shares population monotone ?
population monotone cost shares?
Monotone cost-shares from any cost-shares?
Primal-Dual algorithm for monotone cost-sharing
Cost-sharing problem for facility location
Idea of Approximate cost-sharing algorithm
Primal-Dual Algorithm: Trouble
How to make monotone cost shares
Approximate budget balance = approximation quality
Approximate budget balance = approximation quality
Rent-or-buy cost-sharing competitive?
|
{"url":"http://www.ima.umn.edu/talks/workshops/4-7-11.2003/tardos/tardos_files/outline_collapsed.htm","timestamp":"2014-04-19T19:35:10Z","content_type":null,"content_length":"9741","record_id":"<urn:uuid:64d19fb6-72f1-478c-8dad-9b83d79a1f7c>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00529-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Homework Help
Posted by tammy on Saturday, January 6, 2007 at 1:56pm.
1.find the area heigh 5ft width 8 ft
A=H x W
A= 5ft x 8Ft
A= 40ft^2
2.The circular surface of a table has a diameter of 4 ft. What it will cost to have the top refinished if the company charges $5.00 per square foot for the refinishing ? Use 3.14 for pi and round
your answer to the nearest cent.
39.4384 x $5.00
3. Divid by moving the decimal point 27.3/10^5=
4. A beef roast weighing 6.6lb costs $2.25 per pound. What is the cost of the roast?
x 6.6
5. Write 0.00548 as a common fraction or mixed number.
548/100,000 = 137/25,000
2.The circular surface of a table has a diameter of 4 ft. What it will cost to have the top refinished if the company charges $5.00 per square foot for the refinishing ? Use 3.14 for pi and round
your answer to the nearest cent.
ok to here. For the next step you squared 3.14 x 2 but you should have squared just the 2.
3.14(2)^2 = 3.14(4)=12.566 ft^2. Then multiply by $5.00.
39.4384 x $5.00
Two notes:
1. Numbers 1, 3, and 4 look ok.
2. The 12.566 I have above should be 12.56 if we are using 3.14 for pi. I used pi from the calculator which uses 3.14159....
Related Questions
Algebra - A circular aboveground pool has a diameter of 8ft and stands 5.5ft off...
geometry - A room is 11ft by 21ft and it needs painted. The ceiling is 8ft above...
algebra word problem - I NEED HELP PLEASE!!!!!!!!! A rectangular table is five ...
Geometry - The figure is formed from tectangles. Find the total area 8Ft 5ft 3ft...
Algebra - The width of a rectangle is 2ft less than the length. The area is 8ft^...
calculus - What is the volume of a spherical dome with radius 5.5ft and height 3...
physics - An aboveground pool is 5ft tall with a diameter of 40 ft is filled ...
physics - An aboveground pool is 5ft tall with a diameter of 40 ft is filled ...
physics - An aboveground pool is 5ft tall with a diameter of 40 ft is filled ...
math check - Would some please check these and see if I am doing them right ...
|
{"url":"http://www.jiskha.com/display.cgi?id=1168109774","timestamp":"2014-04-17T04:38:38Z","content_type":null,"content_length":"9365","record_id":"<urn:uuid:55e9d598-7f5d-45cf-88b2-2d8da5e6f087>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00332-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Enjoy Lighting Review/Giveaway
I am a giant candle lover but with having a child I have am limited. I can only light them when she is in bed or on my kitchen table. I love the look of candles burning in my living room. It is
impossible to have them out with a child running around. Thanks to
Enjoy Lighting
I can have that look I want.
Enjoy products look & feel like real candles without the danger of a flame. They have LED lights inside that flicker like a real flame would. The candles can be set to "burn" for four, six or eight
hour durations.
Enjoy flameless candles even smell like real candles. As soon as I opened the package I smelled the candle and the smell lingers through my room. I love it. They have different fragrances to choose
from & they have unscented if you just want the look of a candle. There are also different styles and colors to choose from.
I sat the candle on my TV stand and it looked so real, even had the "burning" reflection. I will definitely be purchasing a few more Enjoy products. No more worrying if you forget to blow the candle.
I know I have done this many times before.
Enjoy Lighting is offering my readers 2 candles. To enter check out Enjoy Lighting and let me know your favorite scents.
Giveway ends 2/17
After you’ve done the required entry, you may do as many of the extra entries as you want just be sure to leave separate comments for each entry.
• Follow my blog (3 entries)
• Follow Me on Twitter
• Subscribe via Feedburner
• Tweet about this giveaway (twice daily) #Win Enjoy Lighting flameless candles @m0mmy http://tinyurl.com/4keyhfy #giveaway
• Use the like button on this post
• Like Mommy to a lil lady[bug] on Facebook
• Follow me via Networked Blogs
• Add my button to your sidebar
• Blog about this giveaway (2 entries)
• Enter another one of my giveaways (1 entry per giveaway entered)
• Comment on a non-giveaway post (tell me which one)
The items featured in this review were provided free of cost to me by the manufacturer or representing PR agency. Opinions expressed are my own and are NOT influenced by monetary compensation
242 comments:
1. I follow your blog on GFC #1
grandmasguide2life at yahoo dot com
2. I follow your blog on GFC #2
grandmasguide2life at yahoo dot com
3. I follow your blog on GFC #3
grandmasguide2life at yahoo dot com
4. I follow you on Twitter (@CarliAlice)
grandmasguide2life at yahoo dot com
5. I would definitely like the Ivory & Beeswax Smooth Vanilla Scented Wax Votives. What a great giveaway!!
6. I follow your blog
7. I follow your blog
8. I follow your blog
9. Follow you on twitter
10. I am a plain old vanilla girl. Thanks for the giveaway!
11. Subscribed via feedburner
12. I tweeted: http://twitter.com/#!/CarliAlice/status/33330451309395968
grandmasguide2life at yahoo dot com
13. like you on fb
14. Follow you on networked blogs
15. I follow you!
16. entered your eden fantasys giveaway
17. I liked this post
grandmasguide2life at yahoo dot com
18. I follow you on twitter!
19. I subscribe to you via feedburner!
20. I liked you on Facebook (Carla Schmeing Bruns)
grandmaguide2life at yahoo dot com
21. I tweeted about the giveaway!
22. I used the like button on this post!
23. I liked you on fb!
24. I follow you on networked blogs
25. I added your button to my sidebar.
26. I would like to try the vanilla, and ivory
27. I follow your blog. (1)
28. I follow your blog. (2)
29. I follow your blog. (3)
30. I "liked" this post.
31. I like mommy to a lil lady [bug] on facebook.
32. I follow via networked blog's.
33. I also entered the roundabout giveaway.
34. Their vanilla scent sounds lovely!
judywhatilivefor at gmail dot com
35. I follow your blog
judywhatilivefor at gmail dot com
36. I follow your blog
judywhatilivefor at gmail dot com
37. I follow your blog
judywhatilivefor at gmail dot com
38. I follow you on Twitter @judywhatilive4
judywhatilivefor at gmail dot com
39. I entered your Tarpy giveaway
judywhatilivefor at gmail dot com
40. I entered your Novica giveaway
judywhatilivefor at gmail dot com
41. I entered your Gummy Lump giveaway
judywhatilivefor at gmail dot com
42. I entered your Old Time Candy giveaway
judywhatilivefor at gmail dot com
43. I entered bubble and bee giveaway.
44. i entered paintshop photo pro x3 giveaway.
45. i entered gourmet gift basket giveaway.
46. i entered sand gone giveaway.
47. i entered edenfantsays giveaway.
48. i entered monster jam giveaway.
49. i entered novica giveaway.
50. I entered Tarpy giveaway.
51. i entered enjoy lighting giveaway.
52. i entered old time giveaway.
53. Vanilla scented would please me. Thanks for the wonderful giveaway
dianad8008 AT gmail DOT com
54. Email subscriber #1. Thanks
dianad8008 AT gmail DOT com
55. Email subscriber #2. Thanks
dianad8008 AT gmail DOT com
56. Email subscriber #3. Thanks
dianad8008 AT gmail DOT com
57. I am an email subscriber. Thanks for the wonderful giveaway.
dianad8008 AT gmail DOT com
58. I want the manderine scent it smells so good. I follow your blog I twitted contest and I entered the 70s candy contest...
59. I tweeted:
grandmasguide2life at yahoo dot com
60. Vanilla and Bayberry are my favorites!
61. Follow your blog via GFC
62. Follow via GFC
63. Follow via GFC
64. Follow in Google Reader
65. Follow you on FAcebook
66. Entered Old Time Candy giveaway
67. I like the look of the Pomegranate Passion distressed candles. how neat! thanks for the giveaway!
tara.the.equestrian at gmail dot com
68. I follow your blog!
tara.the.equestrian at gmail dot com
69. I follow your blog 2!
tara.the.equestrian at gmail dot com
70. I follow your blog 3!
tara.the.equestrian at gmail dot com
71. I follow you on Twitter! th7272
tara.the.equestrian at gmail dot com
72. I sub via Feedburner.
tara.the.equestrian at gmail dot com
73. I like you on Facebook!
tara.the.equestrian at gmail dot com
74. I entered the Gourmet Gift Basket giveaway as well. :)
tara.the.equestrian at gmail dot com
75. I entered Sand Gone, too!
tara.the.equestrian at gmail dot com
76. and I entered Gummy Lump.
tara.the.equestrian at gmail dot com
77. I also entered Edenfantasys.
tara.the.equestrian at gmail dot com
78. I entered the Roundabouts giveaways! (can you tell I love your giveaways?!)
tara.the.equestrian at gmail dot com
79. I entered the Novica giveaway as well.
tara.the.equestrian at gmail dot com
80. I also entered the Tarpy giveaway. :)
tara.the.equestrian at gmail dot com
81. Oh, vanilla, pumpkin spice and roasted hazelnut all sound just heavenly!
82. I think the Cedar Green Glacier Pillars – Spring Meadow Fragrance sounds wonderful.
hickenfam at hotmail dot com
83. I follow on gfc (psychdog). 1
hickenfam at hotmail dot com
84. I follow on gfc (psychdog). 2
hickenfam at hotmail dot com
85. I follow on gfc (psychdog). 3
hickenfam at hotmail dot com
86. I follow on twitter (hifam).
hickenfam at hotmail dot com
87. I follow with google reader.
hickenfam at hotmail dot com
88. I follow on networked blogs.
hickenfam at hotmail dot com
89. I entered the novica giveaway.
hickenfam at hotmail dot com
90. I entered the Tarpy giveaway.
hickenfam at hotmail dot com
91. I entered the old time candy giveaway.
hickenfam at hotmail dot com
92. Pumpkin Spice smells nice
93. Follow you on Twitter
94. Subscribe via Feedburner
95. Like Mommy to a lil lady[bug] on Facebook
96. Follow me via Networked Blogs
97. Entered •Old Time Candy 2/18
98. Entered PaintShop Photo Pro X3
99. 1st Daily tweet http://twitter.com/#!/memamemepapa/status/33949992624463872
100. Entered Gourmet Gift Basket giveaway
101. I tweeted:
102. Entered Tarpy giveaway
103. I tweeted:
104. Mandarin Spice Fragrance is the one for me. They all look good though.
farmhousekitchen at gmail dot com
105. I follow your blog.
106. I follow you on Twitter
107. I follow Old Time Candy on Twitter
108. Corrected Tweet Posting For This Giveaway.
109. I subscribed via Feedburner
110. I used the like button this post.
111. I like you on Facebook
112. I follow on Networked blogs
113. I tweeted: http://twitter.com/#!/CarliAlice/status/34469182351347712
114. My favorite scents are the Ivory Smooth Vanilla and the Pumpkin Spice!
115. I follow you on GFC #1
116. I follow you on GFC #2
117. I follow you on GFC #3
118. I follow you on twitter @mae_01_sw
119. Tweet! http://twitter.com/#!/mae_01_sw/status/34591615150456832
120. I "like" this post!
121. I like you on facebook!
122. I follow you on Networked Blogs!
123. My favorite scent is definitely pumpkin spice, so warm and inviting
Gchord88 at aol dot com
124. I follow your blog on GFC (G) #1
Gchord88 at aol dot com
125. I follow your blog on GFC (G) #2
Gchord88 at aol dot com
126. I follow your blog on GFC (G) #3
Gchord88 at aol dot com
127. I follow you on twitter (@Gchord88)
Gchord88 at aol dot com
128. I subscribe to your emails
Gchord88 at aol dot com
129. I like you on facebook (Gina R)
Gchord88 at aol dot com
130. I like the Brown Distressed Pillars – Roasted Hazelnut Fragrance
nclaudia25 at yahoo dot com
131. I Follow you on Google Friend Connect
nclaudia25 at yahoo dot com
132. I Follow you on Google Friend Connect
nclaudia25 at yahoo dot com
133. I Follow you on Google Friend Connect
nclaudia25 at yahoo dot com
134. I follow you on twitter
nclaudia 25 at yahoo dot com
135. I Like your blog on Facebook
(claudia n)
nclaudia 25 at yahoo dot com
136. tweeted
nclaudia25 at yahoo dot com
137. Tweeted:
138. i like the ivory smooth vanilla pilars
nannypanpan at sbcglobal.net
139. e-mail subscriber
140. gfc follower
141. gfc 2
142. gfc 3
143. entered novica
144. entered roundabouts
145. entered tarpy
146. entered i like book
147. entered old time candy
148. I twitter http://twitter.com/#!/CarliAlice/status/35149350476849152
149. I like cocoa brown mottled pillars
150. GFC follower
151. I'd like the Ivory Beeswax vanilla pillars! Thanks!
152. I tweeted: http://twitter.com/#!/CarliAlice/status/35321840666673152
153. The Roasted Hazlenut sounds yummy!
kcoud33 at gmail dot com
154. Follow you on GFC (Kimberly) 1
kcoud33 at gmail dot com
155. Follow you on GFC (Kimberly) 2
kcoud33 at gmail dot com
156. Follow you on GFC (Kimberly) 3
kcoud33 at gmail dot com
157. Follow you on twitter (@kcoud33)
kcoud33 at gmail dot com
158. Subscribe to RSS on Google Reader
kcoud33 at gmail dot com
159. Tweeted: http://twitter.com/#!/kcoud33/status/35404159654035458
kcoud33 at gmail dot com
160. Like this post
kcoud33 at gmail dot com
161. Like you on a Facebook
kcoud33 at gmail dot com
162. Follow you on Networked Blogs
kcoud33 at gmail dot com
163. Entered Roundabouts giveaway
kcoud33 at gmail dot com
164. Entered Kidorable giveaway
kcoud33 at gmail dot com
165. Entered Old Time Candy giveaway
kcoud33 at gmail dot com
166. I tweeted:
167. the mountain breeze scent sounds nice. heinzmom at hotmail dot com
168. follow you on gfc 1 heinzmom at hotmail dot com
169. gfc follower 2 heinzmom at hotmail dot com
170. gfc follower 3 heinzmom at hotmail dot com
171. network blogs follower. heinzmom at hotmail dot com
172. I like you on facebook. heinzmom at hotmail dot com
173. I entered your Tarpy giveaway. heinzmom at hotmail dot com
174. entered: Novica. heinzmom at hotmail dot com
175. entered: Roundabouts. heinzmom at hotmail dot com
176. entered: old time candy. heinzmom at hotmail dot com
177. I tweeted: http://twitter.com/#!/CarliAlice/status/35884044914794496
178. I love sound of roasted hazelnut :)
candy at fiber dot net
179. I commented on your lunchbox giveaway :)
candy at fiber dot net
180. I tweeted: http://twitter.com/#!/CarliAlice/status/36278201701376000
181. Tweeted: http://twitter.com/#!/CarliAlice/status/36504491188551680
182. I follow your blog on GFC.
183. I love the pomegranate scent
184. entered plow and hearth
185. entered yubo
186. entered kidorable
187. I Follow your blog #1
188. I Follow you on Twitter
189. I Subscribe via Feedburner
190. I Used the like button on this post
191. I Like Mommy to a lil lady[bug] on Facebook
192. I Follow you via Networked Blogs
193. I Add your button to my sidebar
194. I follow you via GFC #2
195. I follow via GFC #3
196. http://twitter.com/davisesq212/status/36613857665359872
197. https://twitter.com/davisesq212/status/36613857665359872
198. the mandarin spice and the vanilla scents look great!
micaela6955 at msn dot com
199. GFC follower #1
micaela6955 at msn dot com
200. GFC follower #2
micaela6955 at msn dot com
New comments are not allowed.
|
{"url":"http://m0mmy-x3.blogspot.com/2011/02/enjoy-lighting-reviewgiveaway.html?showComment=1297082211577","timestamp":"2014-04-19T22:17:15Z","content_type":null,"content_length":"441324","record_id":"<urn:uuid:f1be7f4d-dc02-43b6-95e2-860614512d4c>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00072-ip-10-147-4-33.ec2.internal.warc.gz"}
|
HAKMEM -- Figure 1(b).
Figure 1(b). [Item 55] As 1(a), but radix i+1. Large circle is origin. Dashes indicate continuity of curve at confusing places. Dotted curve is with an infinity of ones to the left (big dot = ...1111
= i). The solid and dotted curves are symmetrical about the point marked with a small circle.
Figure 1(b). [Item 55] As 1(a), but radix i+1. Large circle is origin. Dashes indicate continuity of curve at confusing places. Dotted curve is with an infinity of ones to the left (big dot = ...1111
= i). The solid and dotted curves are symmetrical about the point marked with a small circle.
[Retyped and formatted in html ('Web browser format) by Henry Baker, April, 1995.]
|
{"url":"http://www.std.com/obi/hakmem/Figure1b.html","timestamp":"2014-04-21T04:32:06Z","content_type":null,"content_length":"1583","record_id":"<urn:uuid:4685a576-946b-402d-91d9-5ef2d1f6b20e>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00090-ip-10-147-4-33.ec2.internal.warc.gz"}
|
SMALL 2013 Projects:
Commutative Algebra:
Advisor: Susan Loepp
Project Description:
Consider the set of polynomials in one variable over the complex numbers. We can define a distance between these polynomials that turns out to be a metric. The Cauchy sequences with respect to this
metric, however, do not all converge. So, we can complete this metric space to get a new metric space in which all cauchy sequences converge. What is this new space algebraically? Surprisingly, it
turns out to be the set of formal power series in one variable over the complex numbers. The idea of completing a set of polynomials generalizes to rings. Given a local ring, one can define a
metric on that ring and form a new ring by completing the metric space. The relationship between a ring and its completion is important and mysterious. Algebraists often gain useful information
about a ring by passing to the completion, which, by Cohen’s Structure Theorem, is easier to understand. Unfortunately, the relationship between a local ring and its completion is not well
understood. It is the goal of the Commutative Algebra groups in SMALL to shed light on this relationship.
Students participating in the Commutative Algebra group will work on problems relating local ring to their completions. For example, they may attempt to characterize which complete local rings are
completions of a local ring satisfying a given “nice” property. Students could also work on a variety of questions on the relationship between a local ring R and the polynomial ring R[X] by looking
at their completions. In addition, there are open questions about formal fibers on which students might work. At least one Abstract Algebra course is required. The following references are results
from previous SMALL Commutative Algebra groups.
1. D. Lee, L. Leer, S. Pilch, and Y. Yasufuku, Characterizations of Completions of Reduced Local Rings, Proc. Amer. Math. Soc.,129 (2001), 3193-3200.
2. M. Florenz, D. Kunvipusilkul, and J. Yang, Constructing Chains of Excellent Rings with Local Generic Formal Fibers, Communications in Algebra, 30 (2002), 3569-3587.
3. J. Bryk, S. Mapes, C. Samuels and G. Wang, Constructing Almost Excellent Unique Factorization Domains, Communications in Algebra, 33 (2005), 1321-1336.
4. A. Dundon, D. Jensen, S. Loepp, J. Provine, and J. Rodu, Controlling Formal Fibers of Principal Prime Ideals, Rocky Mountain Journal of Mathematics, 37 (2007), 1871-1892.
5. A. Boocher, M. Daub, R. Johnson, H. Lindo, S. Loepp, and P. Woodard, Formal Fibers of Unique Factorization Domains, Canadian Journal of Mathematics, 62 (2010), 721-736.
6. A. Boocher, M. Daub, S. Loepp, Dimensions of Formal Fibers of Height one Prime Ideals, Communications in Algebra, (2010), no.1, 233-253.
7. N. Arnosti, R. Karpman, C. Leverson, J. Levinson, and S. Loepp, Semi-Local Formal Fibers of Minimal Prime Ideals of Excellent Reduced Local Rings, Journal of Commutative Algebra, (2012), no.1,
8. J. Chatlos, B. Simanek, N. Watson, and S. Wu, Semilocal Formal Fibers of Principal Prime Ideals, Journal of Commutative Algebra, to appear.
9. J. Ahn, E. Ferme, F. Jiang, S. Loepp, and G. Tran, Completions of Hypersurface Domains, Communications in Algebra, to appear.
Ergodic Theory
Advisor: Cesar Silva
Project Description:
Ergodic theory studies dynamical systems from a probabilistic or measurable point of view. A discrete-time dynamical system is given by a self-map on some measure space. An interesting class of
examples is obtained by continuous maps on Cantor spaces. A particular class of such maps is given by polynomial maps defined on compact open subsets of the p-adic numbers. More generally, there
are interesting classes of measurable maps on the unit interval. A technique that has been very successful for constructing such examples is called cutting and stacking. In addition to maps, which
can be regarded as actions of group of integers, one considers actions of other groups. We will study properties such as ergodicity and mixing for these maps or actions. We have the following
possible projects.
1) Extend results on measurable sensitivity from previous SMALL groups:
James, Jennifer; Koberda, Thomas; Lindsey, Kathryn; Silva, Cesar E.; Speh, Peter Measurable sensitivity. Proc. Amer. Math. Soc. 136 (2008), no. 10, 3549–3559.
See also http://arxiv.org/abs/math.DS/0612480, http://arxiv.org/abs/0910.1958, and http://arxiv.org/abs/1207.3575.
2) Mixing properties for rational functions on the p-adics. See previous SMALL results in
For an introduction to measurable p-adic dynamics seeMeasurable dynamics of simple p-adic polynomials, Amer. Math. Monthly, Vol. 112 (2005), no. 3, 212-232.
See also http://arxiv.org/abs/0909.4130.
3) Other mixing and rigidity properties for certain classes of transformations.
See http://nyjm.albany.edu:8000/j/2009/15_393.html, http://journals.impan.pl/cgi-bin/doi?cm119-1-1, or http://journals.cambridge.org/action/displayAbstract?fromPage=online&aid=208351, http://
In terms of background, a first course in real analysis is expected, and preferably some work in measure theory and sufficient background to cover most of the following book during the first week or
so of the program: http://www.ams.org/bookstore?fn=20&arg1=stmlseries&ikey=STML-42.
Advisor: Frank Morgan
Project Description:
1. Perelman’s stunning proof of the million-dollar Poincaré Conjecture needed to consider not just manifolds but “manifolds with density” (like the density in physics you integrate to compute mass).
Yet much of the basic geometry of such spaces remains unexplored. The Log Convex Density Conjecture states that for a log-convex radial density, balls about the origin are isoperimetric (minimize
weighted perimeter for given weighted area). Despite some progress, the borderline case of the plane with density e^r for example remains open. Other cases such as the rapidly growing exp(e^r)
could be interesting. For a log-concave radial density such as e^-1/r, isoperimetric curves probably pass through the origin, like the isoperimetric circles for density r^p [4]; a numerical study
would be interesting. See references [1-7] below, especially [6].
2. Recent work by the Geometry Group and me has found the least perimeter way to tile the plane with unit-area pentagons, assuming that the pentagons are convex. We’d like to remove the convexity
assumption. Another interesting question asks for the least-perimeter 3D unit-volume n-hedral tile, with good conjectures provided by the 2012 Geometry Group [8]. Even in the lowest n=4 case, the
conjectured least-perimeter unit-volume tetrahedron, a third of a triangular prism, is proved only if orientation-reversing copies are forbidden. See references [8-13] below.
3. The Convex Body Isoperimetric Conjecture [14] says that the least perimeter to enclose given volume inside an open ball in R^n is greater than inside any other convex body of the same volume. The
two-dimensional case has been proved [15] for the case of exactly half the volume and is ripe for further study, starting with the easy case of n-gons for small n.
1. Frank Morgan, Manifolds with density, Notices Amer. Math. Soc. 52 (2005), 853-858, http://www.ams.org/notices/200508/fea-morgan.pdf
2. Ivan Corwin, Neil Hoffman, Stephanie Hurder, Vojislav Sesum, Ya Xu (2004 Geometry Group), Differential geometry of manifolds with density, Rose-Hulman Und. Math. J. 7 (1) (2006), http://
3. Colin Carroll, Adam Jacob, Conor Quinn, Robin Walters (2006 Geometry Group), The isoperimetric problem on planes with density, Bull. Austral. Math. Soc. 78 (2008), 177-197.
4. Jonathan Dahlberg, Alexander Dubbs, Edward Newkirk, Hung Tran (2008 Geometry Group), Isoperimetric regions in the plane with density r^p, NY J. Math. 16 (2010), 31-51, http://nyjm.albany.edu/j/
5. Alexander Díaz, Nate Harman, Sean Howe, David Thompson (2009 Geometry Group), Isoperimetric problems in sectors with density, Advances in Geometry, to appear, http://arxiv.org/abs/1012.0450
6. Ping Ngai Chung, Miguel A. Fernandez, Niralee Shah, Luis Sordo Vieira, Elena Wikner (2011 Geometry Group), Are circles isoperimetric in the plane with density e^r? preprint (2011).
7. Frank Morgan, Geometric Measure Theory, Academic Press, 4th ed., 2009, Chapters 18 and 15.
8. Thomas C. Hales, The honeycomb conjecture, Discr. Comput. Geom. 25 (2001), 1-22, http://front.math.ucdavis.edu/math.MG/9906042
9. Ping Ngai Chung, Miguel A. Fernandez, Yifei Li, Michael Mara, Frank Morgan, Isamar Rosa Plata, Niralee Shah, Luis Sordo Vieira, Elena Wikner, Isoperimetric pentagonal tilings, Notices Amer. Math.
Soc., 59 (May, 2012), 632-640. http://www.ams.org/notices/201205/rtx120500632p.pdf
10. Yifei Li, Michael Mara, Isamar Rosa Plata, and Elena Wikner (2010 Geometry Group), Tiling with penalties and isoperimetry with density, preprint (2011).
11. Ping Ngai Chung, Miguel A. Fernandez, Niralee Shah, Luis Sordo Vieira, Elena Wikner, Perimeter-minimizing pentagonal tilings, Involve, to appear (2012).
12. Whan Ghang, Zane Martin, Steven Waruhui, Planar tilings by convex and nonconvex pentagons, arXiv.org (2013).
13. Whan Ghang, Zane Martin, Steven Waruhui, Surface-area-minimizing n-hedral tiles, arXiv.org (2013).
14. Frank Morgan, Convex body isoperimetric conjecture.
15. L. Esposito, V. Ferone, B. Kawohl, C. Nitsch, and C. Trombetti, The longest shortest fence and sharp Poincaré-Sobolev inequalities, arXiv.org (2010).
Knot Theory
Advisor: Colin Adams
Project Description:
Traditionally, knot theorists have looked at knots via projections with crossings where two strands cross. The least number of crossings in any projection of a knot K is called the crossing number c
In Triple Crossing Number of Knots and Links ,this idea was generalized to allow multi-crossings (also called n-crossings) where n strands cross straight through the crossing. It turns out that
every knot has a projection with just n-crossings, so we can try to find the minimal number of n-crossings for any knot, denoted c[n](K). We will be working on determining c[n](K) for various knots
and various values of n.
Furthermore, last summer the SMALL knot theory group proved that every knot can be drawn with just a single multi-crossing (See Knot Projections with a Single Multi-Crossing). Even more surprising,
every knot can be drawn with a single multi-crossing so that it looks like a daisy (see figure below). Hence every knot has a least n for which it can be drawn in these ways, called the ubercrossing
number u(K) and the petal number p(K). These new ideas lead to lots of exciting questions, which will keep us busy for a long time to come.
Number Theory and Probability
Advisor: Steven J. Miller
Project Description:
We’ll explore many of the interplays between number theory and probability, with projects drawn from L-functions, Random Matrix Theory, Additive Number Theory (such as the 3x+1 Problem and Zeckendorf
decompositions) and Benford’s law. A common theme in many of these systems is either a probabilistic model or heuristic. For example, Random Matrix Theory was developed to study the energy levels of
heavy nuclei. While it is hard to analyze the behavior of a specific configuration, often it is easy to calculate an average over all configurations, and then appeal to a Central Limit Theorem type
result to say that a generic systems behavior is close to this average. These techniques have been applied to many problems, ranging from the behavior of L-functions to the structure of networks to
city transportation. For more on the connection between number theory and random matrix theory, see the survey article by Firk-Miller.
References and more details: go to http://www.williams.edu/Mathematics/sjmiller/public_html/ntprob13
|
{"url":"http://math.williams.edu/small/previous-smalls/small-2013-projects/","timestamp":"2014-04-18T18:10:28Z","content_type":null,"content_length":"38957","record_id":"<urn:uuid:219a9d5d-198c-4b01-ad11-af0212d29a7b>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00253-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Single Structure
Represents a single-precision floating-point number.
Namespace: System Assembly: mscorlib
(in mscorlib.dll)
public struct Single : IComparable, IFormattable,
IConvertible, IComparable<float>, IEquatable<float>
The Single type exposes the following members.
Name Description
CompareTo(Object) Compares this instance to a specified object and returns an integer that indicates whether the value of this instance is less than, equal to, or greater than the
value of the specified object.
CompareTo(Single) Compares this instance to a specified single-precision floating-point number and returns an integer that indicates whether the value of this instance is less
than, equal to, or greater than the value of the specified single-precision floating-point number.
Equals(Object) Returns a value indicating whether this instance is equal to a specified object. (Overrides ValueType.Equals(Object).)
Equals(Single) Returns a value indicating whether this instance and a specified Single object represent the same value.
GetHashCode Returns the hash code for this instance. (Overrides ValueType.GetHashCode().)
GetType Gets the Type of the current instance. (Inherited from Object.)
GetTypeCode Returns the TypeCode for value type Single.
IsInfinity Returns a value indicating whether the specified number evaluates to negative or positive infinity.
IsNaN Returns a value that indicates whether the specified value is not a number (NaN).
IsNegativeInfinity Returns a value indicating whether the specified number evaluates to negative infinity.
IsPositiveInfinity Returns a value indicating whether the specified number evaluates to positive infinity.
Parse(String) Converts the string representation of a number to its single-precision floating-point number equivalent.
Parse(String, NumberStyles) Converts the string representation of a number in a specified style to its single-precision floating-point number equivalent.
Parse(String, IFormatProvider) Converts the string representation of a number in a specified culture-specific format to its single-precision floating-point number equivalent.
Parse(String, NumberStyles, Converts the string representation of a number in a specified style and culture-specific format to its single-precision floating-point number equivalent.
ToString() Converts the numeric value of this instance to its equivalent string representation. (Overrides ValueType.ToString().)
ToString(IFormatProvider) Converts the numeric value of this instance to its equivalent string representation using the specified culture-specific format information.
ToString(String) Converts the numeric value of this instance to its equivalent string representation, using the specified format.
ToString(String, IFormatProvider) Converts the numeric value of this instance to its equivalent string representation using the specified format and culture-specific format information.
TryParse(String, Single) Converts the string representation of a number to its single-precision floating-point number equivalent. A return value indicates whether the conversion
succeeded or failed.
TryParse(String, NumberStyles, Converts the string representation of a number in a specified style and culture-specific format to its single-precision floating-point number equivalent. A
IFormatProvider, Single) return value indicates whether the conversion succeeded or failed.
Name Description
Equality Returns a value that indicates whether two specified Single values are equal.
GreaterThan Returns a value that indicates whether a specified Single value is greater than another specified Single value.
GreaterThanOrEqual Returns a value that indicates whether a specified Single value is greater than or equal to another specified Single value.
Inequality Returns a value that indicates whether two specified Single values are not equal.
LessThan Returns a value that indicates whether a specified Single value is less than another specified Single value.
LessThanOrEqual Returns a value that indicates whether a specified Single value is less than or equal to another specified Single value.
Name Description
Epsilon Represents the smallest positive Single value that is greater than zero. This field is constant.
MaxValue Represents the largest possible value of Single. This field is constant.
MinValue Represents the smallest possible value of Single. This field is constant.
NaN Represents not a number (NaN). This field is constant.
NegativeInfinity Represents negative infinity. This field is constant.
PositiveInfinity Represents positive infinity. This field is constant.
Name Description
IComparable.CompareTo Compares the current instance with another object of the same type and returns an integer that indicates whether the current instance precedes, follows, or occurs in the
same position in the sort order as the other object.
IConvertible.ToBoolean Infrastructure. For a description of this member, see IConvertible.ToBoolean.
IConvertible.ToByte Infrastructure. For a description of this member, see IConvertible.ToByte.
IConvertible.ToChar Infrastructure. This conversion is not supported. Attempting to use this method throws an InvalidCastException.
IConvertible.ToDateTime Infrastructure. This conversion is not supported. Attempting to use this method throws an InvalidCastException.
IConvertible.ToDecimal Infrastructure. For a description of this member, see IConvertible.ToDecimal.
IConvertible.ToDouble Infrastructure. For a description of this member, see IConvertible.ToDouble.
IConvertible.ToInt16 Infrastructure. For a description of this member, see IConvertible.ToInt16.
IConvertible.ToInt32 Infrastructure. For a description of this member, see IConvertible.ToInt32.
IConvertible.ToInt64 Infrastructure. For a description of this member, see IConvertible.ToInt64.
IConvertible.ToSByte Infrastructure. For a description of this member, see IConvertible.ToSByte.
IConvertible.ToSingle Infrastructure. For a description of this member, see IConvertible.ToSingle.
IConvertible.ToType Infrastructure. For a description of this member, see IConvertible.ToType.
IConvertible.ToUInt16 Infrastructure. For a description of this member, see IConvertible.ToUInt16.
IConvertible.ToUInt32 Infrastructure. For a description of this member, see IConvertible.ToUInt32.
IConvertible.ToUInt64 Infrastructure. For a description of this member, see IConvertible.ToUInt64.
The Single value type represents a single-precision 32-bit number with values ranging from negative 3.402823e38 to positive 3.402823e38, as well as positive or negative zero, PositiveInfinity,
NegativeInfinity, and not a number (NaN). It is intended to represent values that are extremely large (such as distances between planets or galaxies) or extremely small (such as the molecular mass of
a substance in kilograms) and that often are imprecise (such as the distance from earth to another solar system). The Single type complies with the IEC 60559:1989 (IEEE 754) standard for binary
floating-point arithmetic.
This topic consists of the following sections:
System.Single provides methods to compare instances of this type, to convert the value of an instance to its string representation, and to convert the string representation of a number to an instance
of this type. For information about how format specification codes control the string representation of value types, see Formatting Types, Standard Numeric Format Strings, and Custom Numeric Format
Floating-point representation and precision
The Single data type stores single-precision floating-point values in a 32-bit binary format, as shown in the following table:
Part Bits
Significand or mantissa 0-22
Exponent 23-30
Sign (0 = positive, 1 = negative) 31
Just as decimal fractions are unable to precisely represent some fractional values (such as 1/3 or Math.PI), binary fractions are unable to represent some fractional values. For example, 2/10, which
is represented precisely by .2 as a decimal fraction, is represented by .0011111001001100 as a binary fraction, with the pattern "1100" repeating to infinity. In this case, the floating-point value
provides an imprecise representation of the number that it represents. Performing additional mathematical operations on the original floating-point value often increases its lack of precision. For
example, if you compare the results of multiplying .3 by 10 and adding .3 to .3 nine times, you will see that addition produces the less precise result, because it involves eight more operations than
multiplication. Note that this disparity is apparent only if you display the two Single values by using the "R" standard numeric format string, which, if necessary, displays all 9 digits of precision
supported by the Single type.
using System;
public class Example
public static void Main()
Single value = .2f;
Single result1 = value * 10f;
Single result2 = 0f;
for (int ctr = 1; ctr <= 10; ctr++)
result2 += value;
Console.WriteLine(".2 * 10: {0:R}", result1);
Console.WriteLine(".2 Added 10 times: {0:R}", result2);
// The example displays the following output:
// .2 * 10: 2
// .2 Added 10 times: 2.00000024
Because some numbers cannot be represented exactly as fractional binary values, floating-point numbers can only approximate real numbers.
All floating-point numbers have a limited number of significant digits, which also determines how accurately a floating-point value approximates a real number. A Single value has up to 7 decimal
digits of precision, although a maximum of 9 digits is maintained internally. This means that some floating-point operations may lack the precision to change a floating-point value. The following
example defines a large single-precision floating-point value, and then adds the product of Single.Epsilon and one quadrillion to it. However, the product is too small to modify the original
floating-point value. Its least significant digit is thousandths, whereas the most significant digit in the product is 1-312.
using System;
public class Example
public static void Main()
Single value = 123456789e4f;
Single additional = Single.Epsilon * 1e12f;
Console.WriteLine("{0} + {1} = {2}", value, additional,
value + additional);
// The example displays the following output:
// 1.234568E+12 + 1.401298E-33 = 1.234568E+12
The limited precision of a floating-point number has several consequences:
• Two floating-point numbers that appear equal for a particular precision might not compare equal because their least significant digits are different. In the following example, a series of numbers
are added together, and their total is compared with their expected total. Although the two values appear to be the same, a call to the Equals method indicates that they are not.
using System;
public class Example
public static void Main()
Single[] values = { 10.01f, 2.88f, 2.88f, 2.88f, 9.0f };
Single result = 27.65f;
Single total = 0f;
foreach (var value in values)
total += value;
if (total.Equals(result))
Console.WriteLine("The sum of the values equals the total.");
Console.WriteLine("The sum of the values ({0:R}) does not equal the total ({1:R}).",
total, result);
// The example displays the following output:
// The sum of the values (27.65) does not equal the total (27.65).
// If the index items in the Console.WriteLine statement are changed to {0:R},
// the example displays the following output:
// The sum of the values (27.6500015) does not equal the total (27.65).
If you change the format items in the Console.WriteLine(String, Object, Object) statement from {0} and {1} to {0:R} and {1:R} to display all significant digits of the two Single values, it is
clear that the two values are unequal because of a loss of precision during the addition operations. In this case, the issue can be resolved by calling the Math.Round(Double, Int32) method to
round the Single values to the desired precision before performing the comparison.
• A mathematical or comparison operation that uses a floating-point number might not yield the same result if a decimal number is used, because the binary floating-point number might not equal the
decimal number. A previous example illustrated this by displaying the result of multiplying .3 by 10 and adding .3 to .3 nine times.
When accuracy in numeric operations with fractional values is important, use the Decimal type instead of the Single type. When accuracy in numeric operations with integral values beyond the range
of the Int64 or UInt64 types is important, use the BigInteger type.
• A value might not round-trip if a floating-point number is involved. A value is said to round-trip if an operation converts an original floating-point number to another form, an inverse operation
transforms the converted form back to a floating-point number, and the final floating-point number is not equal to the original floating-point number. The round trip might fail because one or
more least significant digits are lost or changed in a conversion. In the following example, three Single values are converted to strings and saved in a file. As the output shows, although the
values appear to be identical, the restored values are not equal to the original values.
Imports System.IO
Module Example
Public Sub Main()
Dim sw As New StreamWriter(".\Singles.dat")
Dim values() As Single = { 3.2/1.11, 1.0/3, CSng(Math.PI) }
For ctr As Integer = 0 To values.Length - 1
If ctr <> values.Length - 1 Then sw.Write("|")
Dim restoredValues(values.Length - 1) As Single
Dim sr As New StreamReader(".\Singles.dat")
Dim temp As String = sr.ReadToEnd()
Dim tempStrings() As String = temp.Split("|"c)
For ctr As Integer = 0 To tempStrings.Length - 1
restoredValues(ctr) = Single.Parse(tempStrings(ctr))
For ctr As Integer = 0 To values.Length - 1
Console.WriteLine("{0} {2} {1}", values(ctr),
If(values(ctr).Equals(restoredValues(ctr)), "=", "<>"))
End Sub
End Module
' The example displays the following output:
' 2.882883 <> 2.882883
' 0.3333333 <> 0.3333333
' 3.141593 <> 3.141593
using System;
using System.IO;
public class Example
public static void Main()
StreamWriter sw = new StreamWriter(@".\Singles.dat");
Single[] values = { 3.2f/1.11f, 1.0f/3f, (float) Math.PI };
for (int ctr = 0; ctr < values.Length; ctr++) {
if (ctr != values.Length - 1)
Single[] restoredValues = new Single[values.Length];
StreamReader sr = new StreamReader(@".\Singles.dat");
string temp = sr.ReadToEnd();
string[] tempStrings = temp.Split('|');
for (int ctr = 0; ctr < tempStrings.Length; ctr++)
restoredValues[ctr] = Single.Parse(tempStrings[ctr]);
for (int ctr = 0; ctr < values.Length; ctr++)
Console.WriteLine("{0} {2} {1}", values[ctr],
values[ctr].Equals(restoredValues[ctr]) ? "=" : "<>");
// The example displays the following output:
// 2.882883 <> 2.882883
// 0.3333333 <> 0.3333333
// 3.141593 <> 3.141593
using System;
using System.IO;
public class Example
public static void Main()
StreamWriter sw = new StreamWriter(@".\Singles.dat");
Single[] values = { 3.2f/1.11f, 1.0f/3f, (float) Math.PI };
for (int ctr = 0; ctr < values.Length; ctr++)
sw.Write("{0:R}{1}", values[ctr], ctr < values.Length - 1 ? "|" : "" );
Single[] restoredValues = new Single[values.Length];
StreamReader sr = new StreamReader(@".\Singles.dat");
string temp = sr.ReadToEnd();
string[] tempStrings = temp.Split('|');
for (int ctr = 0; ctr < tempStrings.Length; ctr++)
restoredValues[ctr] = Single.Parse(tempStrings[ctr]);
for (int ctr = 0; ctr < values.Length; ctr++)
Console.WriteLine("{0} {2} {1}", values[ctr],
values[ctr].Equals(restoredValues[ctr]) ? "=" : "<>");
// The example displays the following output:
// 2.882883 = 2.882883
// 0.3333333 = 0.3333333
// 3.141593 = 3.141593
In this case, the values can be successfully round-tripped by using the "R" standard numeric format string to preserve the full precision of Single values, as the following example shows.
• Single values have less precision than Double values. A Single value that is converted to a seemingly equivalent Double often does not equal the Double value because of differences in precision.
In the following example, the result of identical division operations is assigned to a Double value and a Single value. After the Single value is cast to a Double, a comparison of the two values
shows that they are unequal.
using System;
public class Example
public static void Main()
Double value1 = 1/3.0;
Single sValue2 = 1/3.0f;
Double value2 = (Double) sValue2;
Console.WriteLine("{0:R} = {1:R}: {2}", value1, value2,
// The example displays the following output:
// 0.33333333333333331 = 0.3333333432674408: False
To avoid this problem, either use the Double data type in place of the Single data type, or use the Round method so that both values have the same precision.
Testing for equality
To be considered equal, two Single values must represent identical values. However, because of differences in precision between values, or because of a loss of precision by one or both values,
floating-point values that are expected to be identical often turn out to be unequal due to differences in their least significant digits. As a result, calls to the Equals method to determine whether
two values are equal, or calls to the CompareTo method to determine the relationship between two Single values, often yield unexpected results. This is evident in the following example, where two
apparently equal Single values turn out to be unequal, because the first value has 7 digits of precision, whereas the second value has 9.
using System;
public class Example
public static void Main()
float value1 = .3333333f;
float value2 = 1.0f/3;
Console.WriteLine("{0:R} = {1:R}: {2}", value1, value2, value1.Equals(value2));
// The example displays the following output:
// 0.3333333 = 0.333333343: False
Calculated values that follow different code paths and that are manipulated in different ways often prove to be unequal. In the following example, one Single value is squared, and then the square
root is calculated to restore the original value. A second Single is multiplied by 3.51 and squared before the square root of the result is divided by 3.51 to restore the original value. Although the
two values appear to be identical, a call to the Equals(Single) method indicates that they are not equal. Using the "R" standard format string to return a result string that displays all the
significant digits of each Single value shows that the second value is .0000000000001 less than the first.
using System;
public class Example
public static void Main()
float value1 = 10.201438f;
value1 = (float) Math.Sqrt((float) Math.Pow(value1, 2));
float value2 = (float) Math.Pow((float) value1 * 3.51f, 2);
value2 = ((float) Math.Sqrt(value2)) / 3.51f;
Console.WriteLine("{0} = {1}: {2}\n",
value1, value2, value1.Equals(value2));
Console.WriteLine("{0:R} = {1:R}", value1, value2);
// The example displays the following output:
// 10.20144 = 10.20144: False
// 10.201438 = 10.2014389
In cases where a loss of precision is likely to affect the result of a comparison, you can use the following techniques instead of calling the Equals or CompareTo method:
• Call the Math.Round method to ensure that both values have the same precision. The following example modifies a previous example to use this approach so that two fractional values are equivalent.
using System;
public class Example
public static void Main()
float value1 = .3333333f;
float value2 = 1.0f/3;
int precision = 7;
value1 = (float) Math.Round(value1, precision);
value2 = (float) Math.Round(value2, precision);
Console.WriteLine("{0:R} = {1:R}: {2}", value1, value2, value1.Equals(value2));
// The example displays the following output:
// 0.3333333 = 0.3333333: True
Note that the problem of precision still applies to rounding of midpoint values. For more information, see the Math.Round(Double, Int32, MidpointRounding) method.
• Test for approximate equality instead of equality. This technique requires that you define either an absolute amount by which the two values can differ but still be equal, or that you define a
relative amount by which the smaller value can diverge from the larger value.
Single.Epsilon is sometimes used as an absolute measure of the distance between two Single values when testing for equality. However, Single.Epsilon measures the smallest possible value that can
be added to, or subtracted from, a Single whose value is zero. For most positive and negative Single values, the value of Single.Epsilon is too small to be detected. Therefore, except for values
that are zero, we do not recommend its use in tests for equality.
The following example uses the latter approach to define an IsApproximatelyEqual method that tests the relative difference between two values. It also contrasts the result of calls to the
IsApproximatelyEqual method and the Equals(Single) method.
using System;
public class Example
public static void Main()
float one1 = .1f * 10;
float one2 = 0f;
for (int ctr = 1; ctr <= 10; ctr++)
one2 += .1f;
Console.WriteLine("{0:R} = {1:R}: {2}", one1, one2, one1.Equals(one2));
Console.WriteLine("{0:R} is approximately equal to {1:R}: {2}",
one1, one2,
IsApproximatelyEqual(one1, one2, .000001f));
static bool IsApproximatelyEqual(float value1, float value2, float epsilon)
// If they are equal anyway, just return True.
if (value1.Equals(value2))
return true;
// Handle NaN, Infinity.
if (Double.IsInfinity(value1) | Double.IsNaN(value1))
return value1.Equals(value2);
else if (Double.IsInfinity(value2) | Double.IsNaN(value2))
return value1.Equals(value2);
// Handle zero to avoid division by zero
double divisor = Math.Max(value1, value2);
if (divisor.Equals(0))
divisor = Math.Min(value1, value2);
return Math.Abs(value1 - value2)/divisor <= epsilon;
// The example displays the following output:
// 1 = 1.00000012: False
// 1 is approximately equal to 1.00000012: True
Floating-point values and exceptions
Operations with floating-point values do not throw exceptions, unlike operations with integral types, which throw exceptions in cases of illegal operations such as division by zero or overflow.
Instead, in these situations, the result of a floating-point operation is zero, positive infinity, negative infinity, or not a number (NaN):
• If the result of a floating-point operation is too small for the destination format, the result is zero. This can occur when two very small floating-point numbers are multiplied, as the following
example shows.
using System;
public class Example
public static void Main()
float value1 = 1.163287e-36f;
float value2 = 9.164234e-25f;
float result = value1 * value2;
Console.WriteLine("{0} * {1} = {2}", value1, value2, result);
Console.WriteLine("{0} = 0: {1}", result, result.Equals(0.0f));
// The example displays the following output:
// 1.163287E-36 * 9.164234E-25 = 0
// 0 = 0: True
• If the magnitude of the result of a floating-point operation exceeds the range of the destination format, the result of the operation is PositiveInfinity or NegativeInfinity, as appropriate for
the sign of the result. The result of an operation that overflows Single.MaxValue is PositiveInfinity, and the result of an operation that overflows Single.MinValue is NegativeInfinity, as the
following example shows.
using System;
public class Example
public static void Main()
float value1 = 3.065e35f;
float value2 = 6.9375e32f;
float result = value1 * value2;
Console.WriteLine("PositiveInfinity: {0}",
Console.WriteLine("NegativeInfinity: {0}\n",
value1 = -value1;
result = value1 * value2;
Console.WriteLine("PositiveInfinity: {0}",
Console.WriteLine("NegativeInfinity: {0}",
// The example displays the following output:
// PositiveInfinity: True
// NegativeInfinity: False
// PositiveInfinity: False
// NegativeInfinity: True
PositiveInfinity also results from a division by zero with a positive dividend, and NegativeInfinity results from a division by zero with a negative dividend.
• If a floating-point operation is invalid, the result of the operation is NaN. For example, NaN results from the following operations:
□ Division by zero with a dividend of zero. Note that other cases of division by zero result in either PositiveInfinity or NegativeInfinity.
□ Any floating-point operation with invalid input. For example, attempting to find the square root of a negative value returns NaN.
□ Any operation with an argument whose value is Single.NaN.
Floating-point functionality
The Single structure and related types provide methods to perform the following categories of operations:
• Comparison of values. You can call the Equals method to determine whether two Single values are equal, or the CompareTo method to determine the relationship between two values.
The Single structure also supports a complete set of comparison operators. For example, you can test for equality or inequality, or determine whether one value is greater than or equal to another
value. If one of the operands is a Double, the Single value is converted to a Double before performing the comparison. If one of the operands is an integral type, it is converted to a Single
before performing the comparison. Although these are widening conversions, they may involve a loss of precision.
Because of differences in precision, two Single values that you expect to be equal may turn out to be unequal, which affects the result of the comparison. See the Testing for equality section for
more information about comparing two Single values.
You can also call the IsNaN, IsInfinity, IsPositiveInfinity, and IsNegativeInfinity methods to test for these special values.
• Mathematical operations. Common arithmetic operations such as addition, subtraction, multiplication, and division are implemented by language compilers and Common Intermediate Language (CIL)
instructions rather than by Single methods. If the other operand in a mathematical operation is a Double, the Single is converted to a Double before performing the operation, and the result of
the operation is also a Double value. If the other operand is an integral type, it is converted to a Single before performing the operation, and the result of the operation is also a Single
You can perform other mathematical operations by calling static (Shared in Visual Basic) methods in the System.Math class. These include additional methods commonly used for arithmetic (such as
Math.Abs, Math.Sign, and Math.Sqrt), geometry (such as Math.Cos and Math.Sin), and calculus (such as Math.Log). In all cases, the Single value is converted to a Double.
You can also manipulate the individual bits in a Single value. The BitConverter.GetBytes(Single) method returns its bit pattern in a byte array. By passing that byte array to the BitConverter.
ToInt32 method, you can also preserve the Single value's bit pattern in a 32-bit integer.
• Rounding. Rounding is often used as a technique for reducing the impact of differences between values caused by problems of floating-point representation and precision. You can round a Single
value by calling the Math.Round method. However, note that the Single value is converted to a Double before the method is called, and the conversion can involve a loss of precision.
• Formatting. You can convert a Single value to its string representation by calling the ToString method or by using the composite formatting feature. For information about how format strings
control the string representation of floating-point values, see the Standard Numeric Format Strings and Custom Numeric Format Strings topics.
• Parsing strings. You can convert the string representation of a floating-point value to a Single value by calling the Parse or TryParse method. If the parse operation fails, the Parse method
throws an exception, whereas the TryParse method returns false.
• Type conversion. The Single structure provides an explicit interface implementation for the IConvertible interface, which supports conversion between any two standard .NET Framework data types.
Language compilers also support the implicit conversion of values for all other standard numeric types except for the conversion of Double to Single values. Conversion of a value of any standard
numeric type other than a Double to a Single is a widening conversion and does not require the use of a casting operator or conversion method.
However, conversion of 32-bit and 64-bit integer values can involve a loss of precision. The following table lists the differences in precision for 32-bit, 64-bit, and Double types:
Type Maximum precision (in decimal digits) Internal precision (in decimal digits)
Double 15 17
Int32 and UInt32 10 10
Int64 and UInt64 19 19
Single 7 9
The problem of precision most frequently affects Single values that are converted to Double values. In the following example, two values produced by identical division operations are unequal,
because one of the values is a single-precision floating point value that is converted to a Double.
using System;
public class Example
public static void Main()
Double value1 = 1/3.0;
Single sValue2 = 1/3.0f;
Double value2 = (Double) sValue2;
Console.WriteLine("{0:R} = {1:R}: {2}", value1, value2,
// The example displays the following output:
// 0.33333333333333331 = 0.3333333432674408: False
.NET Framework
Supported in: 4.5.1, 4.5, 4, 3.5, 3.0, 2.0, 1.1, 1.0
.NET Framework Client Profile
Supported in: 4, 3.5 SP1
Portable Class Library
Supported in: Portable Class Library
.NET for Windows Store apps
Supported in: Windows 8
.NET for Windows Phone apps
Supported in: Windows Phone 8.1, Windows Phone 8, Silverlight 8.1
Windows Phone 8.1, Windows Phone 8, Windows 8.1, Windows Server 2012 R2, Windows 8, Windows Server 2012, Windows 7, Windows Vista SP2, Windows Server 2008 (Server Core Role not supported), Windows
Server 2008 R2 (Server Core Role supported with SP1 or later; Itanium not supported)
The .NET Framework does not support all versions of every platform. For a list of the supported versions, see .NET Framework System Requirements.
All members of this type are thread safe. Members that appear to modify instance state actually return a new instance initialized with the new value. As with any other type, reading and writing to a
shared variable that contains an instance of this type must be protected by a lock to guarantee thread safety.
|
{"url":"http://msdn.microsoft.com/en-us/library/system.single.aspx","timestamp":"2014-04-16T19:41:09Z","content_type":null,"content_length":"161371","record_id":"<urn:uuid:9b25c710-d670-4ec0-8800-1a41fe2bdea3>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00205-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
If the use of the passive sign convention is specified, find the (a) complex voltage that results when the complex current 4e^j800t A is applied to the series combination of a 1-mF capacitor and a
2-Ω resistor; (b) complex current that results when the complex voltage100e^j2000t V is applied to the parallel combination of a 1-mH inductor and a 50-Ω resistor.
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/50a4f480e4b0f1696c13a9cd","timestamp":"2014-04-20T06:06:54Z","content_type":null,"content_length":"54785","record_id":"<urn:uuid:0f01d17e-13b0-474e-bf26-a9e32b48ebaf>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00018-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: Turbulence models expansion?
Just wondering if the future development would include more turbulence models (like large eddy simulation, detached eddy simulation, etc...) other than the ones we see (k-e, RNG, eddy viscoscity,
mixing length)?
hey max.. we have done a fair amount of R&D on which turb models to offer. What sort of simulations would you use LES/DES models for? As you know, they are fairly sophisticated models that are pretty
computationally expensive..
Derrek, with LES/DES models, we would like to accurately (as much as possbile) model unsteady vortex shedding of vertical axis wind turbine blades in 2D. 3D for horizontal axis wind turbine would be
nice too, if possible.
How good is the eddy viscosity model we have? Comparable to LES/DES models in some ways? I agree with you that these are sophisticated and computationally expensive models.
You could attain vortex shedding with RSM, (not that it's implemented) - this model would presumably be less work for the developers than LES DES / hybrid. I'd also like to see RSM implemented.
You are correct that implementing the RSM would probably be more feasible since it is an RANS based model. I am not a developer, but that is my understanding.
The LES/DES would be a very new direction. These turbulance models also require much more mesh and longer runs times, not something that works well for most designers and our tradiational message of
trying to be upfront.
What would be a good compromise turbulance model that would provide reasonable error and solve time when you are looking for vortex shedding and accurate lift/drag numbers at low and high reynolds
As vortex shedding is a boundary layer / transition phenomena, it'd require turbulence models valid all the way to the wall: SST k-omega, or a low-RE RSM model for example. This'd also require work
on the mesher, with meshes which can resolve the boundary layers (a low y+ setting, instead of the standard wall treatment).
Although I agree that this increase in mesh requirement goes against the upfront philosophy, I feel the whole user experience, CAD integration and general speed of pre/post processing would offset
the extra computational time in terms of whether it is perceived 'up front' enough, as it were. It would expand the software's capabilities massively.
Same goes for LES / DES / Hybrid RANS LES; but I'm not the one who's got to code it!
I m an amateur in the world of CFD. i want to study the flow over cylinder (mainly vorticity nd vortex shedding). I tried its analysis on Autodesk Simulation CFD 2013, the computational results were
right, but vortex were not visiible when i used trace points to visualize the flow. Also what all assumptions should i make so that vortex shedding is visble & also if you can recommend me some text
regarding the topic, it would be great.
I was wondering what proportion of current SimCFD users will really leverage LES/DES on their designs. RSM seems about the right high-end choice.
I was wondering if it would be possible to include the Realizable k-e model. This one has been (traditionally) more accurate than RNG, because of the realizability it induces in k and eps equations.
This would reduce the diffusion in raw k-e model, and at the same time can deal with high mean strain rates as well, where k-e may have problems. Can be a good alternative to RNG for non-swirl cases.
Additionally, it would be nice to have a non-linear k-eps model as well. This would be instrumental in simulating cross-flows by abolising the isotropic turbulence in two equation models, yet being
lean on computational resources as compared to RSM. Though, validation still remains the issue for such models.
|
{"url":"http://forums.autodesk.com/t5/Simulation-CFD/Turbulence-models-expansion/m-p/3220538","timestamp":"2014-04-16T22:41:06Z","content_type":null,"content_length":"209175","record_id":"<urn:uuid:ce355710-0f29-42b5-8e4c-03bc232c518b>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00478-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Wolfram Demonstrations Project
Cauchy-Schwarz Inequality for Integrals
The Cauchy–Schwarz inequality for integrals states that for two real integrable functions in an interval . This is an analog of the vector relationship , which is, in fact, highly suggestive of the
inequality expressed in Hilbert space vector notation: . For complex functions, the Cauchy–Schwarz inequality can be generalized to . The limiting case of equality is obtained when and are linearly
dependent, for instance when ).
This Demonstration shows examples of the Cauchy–Schwarz inequality in the interval , in which and are polynomials of degree four with coefficients in the range .
Snapshot 1: inequality of an order of magnitude
Snapshot 2: limiting case of equality since and are proportional
Snapshot 3: case of two orthogonal functions
|
{"url":"http://demonstrations.wolfram.com/CauchySchwarzInequalityForIntegrals/","timestamp":"2014-04-17T03:54:08Z","content_type":null,"content_length":"44453","record_id":"<urn:uuid:b370a588-671d-42ac-ae74-91d31eb2fc52>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00262-ip-10-147-4-33.ec2.internal.warc.gz"}
|
New View of Statistics: Linear Regression
A New View of Statistics © 2000 Will G Hopkins
Go to: Next · Previous · Contents · Search · Home
Generalizing to a Population:
SIMPLE MODELS AND TESTS continued Linear Regression
Let's use the same example that I used to introduce the concept of statistical models. As you can see, data for two variables like weight and height scream out to have a straight line drawn through
them. The straight line will allow us to predict any person's weight from a knowledge of that person's height. Obviously, the prediction won't be perfect, so we will also be able to say how strong
the linear relationship is between weight and height, or how well the straight line fits the data (the goodness of fit).
Here's how we represent the model:
model: numeric <= numeric
example: weight <= height
You normally think about a straight line as Y = mX + c, where m is the slope and c is the intercept. The way I would write this relationship, using the above notation, is simply Y <= X. We don't have
to worry about showing the constants, but the stats program worries about them. They're the parameters in the model.
The Slope
The most interesting parameter in a linear model is usually the slope. If the slope is zero, the line is flat, so there's no relationship between the variables. In the example, the slope is about
0.75 kg per cm (an increase in weight of 0.75 kg for each cm increase in height). We can also calculate the slope in two ways that don't have those ugly units (kg per cm).
One way is to calculate the percent change in weight per percent change in height. It's unusual, but sometimes it's the best way, especially for variables that need log transformation. The slope
expressed as % per % comes directly out of the analysis of log-transformed variables.
The other way to remove the units is to normalize the two variables by dividing their values by their standard deviations, then fit the straight line. The resulting slope is known as a standardized
regression coefficient. It represents the change in weight, expressed as a fraction of the standard deviation, per standard deviation change in height. You can also generate it by multiplying the
slope (in kg per cm) by the ratio of the standard deviations for height over the standard deviation for weight. In a simple linear regression, the value of the standardized regression coefficient is
exactly the same as the correlation coefficient, and you can interpret its magnitude in the same way. In the example, the value is about 0.9, or a difference of 0.9 standard deviations in weight per
change of one standard deviation in height. That's a really strong relationship!
Goodness of Fit
The stats program works out values for the slope and intercept (the parameters) that give the best fit. I'll explain how after I've dealt with all four simple models. Meanwhile, we want a measure of
how good the fit is. The correlation coefficient is one such measure. Another way to represent the fit is to square the correlation coefficient, multiply it by 100, then call the result the percent
of variance explained, or percent R^2. For example, the R^2 represents the proportion of variation in weight that can be attributed to height, when there is a linear relationship between weight and
height. A correlation of 0.9 is equivalent to an R^2 of 0.81 or 81%. I'll explain more about goodness of fit in a few pages' time.
The p value or the confidence interval for the correlation coefficient tell us how good the fit is likely to be in the population. The program can also give confidence intervals or p values for the
slope and intercept. The correlation coefficient can be considered as a test statistic for whether the line fits the data at all. But stats programs can also produce another statistic for this
purpose, called the F ratio. The values for F are quite different from those for r, but there is a one-to-one relationship between them, and the r and the F have the same p value for a given sample.
Go to: Next · Previous · Contents · Search · Home webmaster=AT=newstats.org
Last updated 10 Dec 00
|
{"url":"http://www.sportsci.org/resource/stats/linreg.html","timestamp":"2014-04-19T23:18:04Z","content_type":null,"content_length":"7833","record_id":"<urn:uuid:71e0fefc-94f9-498e-804d-caafa6af4f7f>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00225-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Summations in $\tan^2$
up vote 3 down vote favorite
Hey all,
I was just wondering if anyone had come across the following identities, valid for $m\in\mathbb{N}$. I've used Abramowitz and Stegun, Maple, Mathematica etc but can't find them anywhere. I can prove
these, though they happen 'accidentally' from a method which I am already looking at. Anyway the identities are
$$\sum_{k=1}^m \left(\tan\left(\frac{\pi(2k-1)}{4m}\right)\right)^2=m(2m-1) \hspace{4mm} \textrm{and} \hspace{4mm} \sum_{k=1}^m \left(\tan\left(\frac{\pi k}{2m+1}\right)\right)^2=m(2m+1)$$
$$\sum_{k=1}^m \left(\tan\left(\frac{\pi(2k-1)}{4m}\right)\right)^4=\frac{1}{3}m(2m-1)(4m^2+2m-3) \hspace{4mm} etc$$
There are other identities for all even powers but I haven't worked them out yet as I thought that there might not be any point if there are known results for these summations. It would be cool if
there were lists of such identities, or even a general formula, as this would provide me with many useful references indeed!
Many thanks on Christmas!
nt.number-theory special-functions trigonometric-sums
How do you derive these identities? – Anixx Dec 25 '10 at 15:19
seems you confused m and n in the last identity – Anixx Dec 25 '10 at 15:20
Not much of a simplification, but your first sum can be "cut in half": for even $m$, your sum is the same as $$\sum_{k=1}^{\lfloor m/2\rfloor}\left(\tan^2\left(\frac{\pi}{4m}(2k-1)\right)+\cot^2\
left(\frac{\pi}{4m}(2k-1)\right)\right)$$ ; for odd $m$, add 1. – J. M. Dec 25 '10 at 15:22
(and something similar can be done for the other sums) – J. M. Dec 25 '10 at 15:24
5 You'll find this and a list of further identities here emis.de/journals/HOA/IJMMS/30/3185.pdf – dke Dec 25 '10 at 15:48
show 9 more comments
1 Answer
active oldest votes
There is one natural way to study such identities based on discrete Fourier series. You can define functions as a Fourier series with coefficients equal to your summands. Then your sums
will be equal to the values of these functions at $0$. For two presented sums you'll get polynomials of 2 and 4 degrees respectively. They are 2 terms of some "good" sequence of polynomials
which can be defined recursively. These polynomials will be closely connected with Bernoulli polynomials. Some similar polynomials (discrete Bernoulli polynomials) are known as Korobov
up vote 2 polynomials.
down vote
Some related examples are here: Lemma 2+ formula (6) and more symmetrical Fourier series for $Q_{2n}(x)$ (page 10).
add comment
Not the answer you're looking for? Browse other questions tagged nt.number-theory special-functions trigonometric-sums or ask your own question.
|
{"url":"http://mathoverflow.net/questions/50339/summations-in-tan2","timestamp":"2014-04-20T06:37:56Z","content_type":null,"content_length":"58208","record_id":"<urn:uuid:2683f696-fd3d-4baa-86ef-0e0e075cc0e3>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00524-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Jacinto City, TX Calculus Tutor
Find a Jacinto City, TX Calculus Tutor
I am currently a CRLA certified level 3. I have been tutoring for close to 5 years now on most math subjects from Pre-Algebra up through Calculus 3. I have done TA jobs where I hold sessions for
groups of students to give them extra practice on their course material and help to answer any question...
7 Subjects: including calculus, statistics, algebra 2, algebra 1
...I specialize in tutoring math (elementary math, geometry, prealgebra, algebra 1 & 2, trigonometry, precalculus, etc.), Microsoft Word, Excel, PowerPoint, and VBA programming. I'd love to talk
more about tutoring for your specific situation and look forward to hearing from you.During my time at T...
17 Subjects: including calculus, reading, geometry, algebra 1
...I will not help a student while that student is taking a quiz or test online, as that would be considered cheating. It is unethical for me, puts the student at risk of failing if discovered,
and does the student no favors in the long run. Success comes to those who are motivated and hard-working.
41 Subjects: including calculus, chemistry, English, reading
I am a Texas-certified teacher who is highly qualified to teach mathematics and computer science. I have been successful with gifted, struggling, and special education students. Whether you are
struggling to pass the STAAR, want to reinforce or supplement your class work, or just want to learn more about a subject you enjoy, Mr.
30 Subjects: including calculus, physics, statistics, geometry
...In addition, I also help the student(s) with their math required for doing the chemical calculations which is absolutely essential for their studies. References are available. I have dynamic
experience with all phases of P_Alg whether trying to bring student's grade up or trying to advance them in honors classes.
36 Subjects: including calculus, English, chemistry, geometry
|
{"url":"http://www.purplemath.com/Jacinto_City_TX_Calculus_tutors.php","timestamp":"2014-04-17T04:14:06Z","content_type":null,"content_length":"24437","record_id":"<urn:uuid:aa4353ff-710f-4f55-97f4-12bbe3654d1f>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00242-ip-10-147-4-33.ec2.internal.warc.gz"}
|
centre of a group Z(G)
February 25th 2008, 11:18 AM #1
Junior Member
Aug 2007
centre of a group Z(G)
Hey there, i have a question on the center of a group.
The centre Z(G) of a group G is defined by $Z(G) = g \epsilon G: \forall x \epsilon G, xg = gx$
(i) Show that Z(G) is normal subgroup of G
(ii) By considering the Class Equation of G acting on itself by conjugation show that if $|G| = p^n$ ( p prime) then $Z(G) eq {1}$
(iii) If G is non abelian show that G/Z(G) is not cyclic.
(iv) Decude that any group of order $p^2$ is abelian.
(V) Deduce that a gorup of oder $p^2$ is isomorhpic either to $C_{p2}$ or to $C_p \times C_p$
any hints would be greatly appreicated and ill atempt the question once i have a clear idea of how to do them. consequently, i will post them online once i have worked them out. cheers guyz
Last edited by joanne_q; February 27th 2008 at 05:23 AM.
Let $x\in \text{Z}(G)$ prove that $gxg^{-1} \in \text{Z}(G)$ for any $g\in G$.
(ii) By considering the Class Equation of G acting on itself by conjugation show that if $|G| = p^n$ ( p prime) then $Z(G) eq {1}$
The conjugacy class equation says,
$|G| = |\text{Z}(G)| + \sum [G:\text{C}(x)]$.
Where the $x$'s are taken from distinct conjugacy classes of more than one element.
We know that the left hand side is divisible by $p$ so the right hand side is divisible by $p$. Now $\text{C}(x)$, the centralizer, is not whole $G$ because we are picking those $x$, this means
$1\leq |\text{C}(x)| \leq p^{n-1}$. This means $p$ divides the index of $\text{C}(x)$ under $G$. Thus, this proves that $\text{Z}(G)$ is divisible by $p$. Thus, the center is non-trivial.
Contrapositive is easier. Let $H=\text{Z}(G)$. If $G/H$ is cyclic then there is $aH$ which generates the group $G/H$. Let $x,y\in G$. Note $xH,yH\in G/H$ thus $xH=a^nH$ and $yH=a^mH$. This means
$x = a^n z_1$ and $y=a^mz_2$ where $z_1,z_2\in H$. But then $xy = a^n z_1 a^mz_2 = a^{n+m}z_1z_2$ and $yx = a^m z_2 a^n z_1 = a^{n+m}z_1z_2$ because $z_1,z_2$ commute with everything. Thus $G$ is
(iv) Decude that any group of order $p^2$ is abelian.
By Burnside's lemma (that is (ii)) we have that the center is non-trivial, forming the factor group we have a cyclic group. Thus the original needs to be abelian.
Last edited by ThePerfectHacker; February 25th 2008 at 08:35 PM.
Let $|G|=p^2$. Pick $aot = 1$. Form the subgroup $H=\left< a \right>$ if $H = G$ then the group is cyclic and proof is complete. Otherwise choose $b\in G\setminus H$ and form $K=\left< b\right>$.
Now $H\cap K = \{ 1\}$ this means $HK = G$*. Also $H,K\triangleleft G$ because the group is abelian. This means $G\simeq H\times K \simeq \mathbb{Z}_p \times \mathbb{Z}_p$.**
*)Because $|HK| = |G|$ by using the fact $|HK||H\cap K| = |H||K|$.
**)Theorem: If $H,K$ are normal subgroups with $H\cap K = \{ 1 \}$ and $HK = G$ then $G\simeq H\times K$.
Thanks a lot for the help
For part (ii), your solution was:
The conjugacy class equation says,
$|G| = |\text{Z}(G)| + \sum [G:\text{C}(x)]$.
Where the $x$'s are taken from distinct conjugacy classes of more than one element.
We know that the left hand side is divisible by $p$ so the right hand side is divisible by $p$. Now $\text{C}(x)$, the centralizer, is not whole $G$ because we are picking those $x$, this means
$1\leq |\text{C}(x)| \leq p^{n-1}$. This means $p$ divides the index of $\text{C}(x)$ under $G$. Thus, this proves that $\text{Z}(G)$ is divisible by $p$. Thus, the center is non-trivial.
here is my solution, is it correct aswell?
$G \equiv |Z(G)| (mod p)$ since Z(G) is a fixed point set.
Now $|Z(G)| \equiv p^n(mod p)$, |Z(G)|=0.
So Z(G) has atleast p elements.
for part (iii) i believe you have proved the opposite of the question? i.e. G/Z(G) is cyclic and thus abelian...
to prove that it is non cyclic, do all the points you stated have to be contradicted..?
Contrapositive is easier. Let $H=\text{Z}(G)$. If $G/H$ is cyclic then there is $aH$ which generates the group $G/H$. Let $x,y\in G$. Note $xH,yH\in G/H$ thus $xH=a^nH$ and $yH=a^mH$. This means
$x = a^n z_1$ and $y=a^mz_2$ where $z_1,z_2\in H$. But then $xy = a^n z_1 a^mz_2 = a^{n+m}z_1z_2$ and $yx = a^m z_2 a^n z_1 = a^{n+m}z_1z_2$ because $z_1,z_2$ commute with everything. Thus $G$ is
Have you ever done the conjugacy class equation? There is a way around it if you never done it that way, I think I have an idea of what you might have done.
Let $G$ be a finite $p$-group. Let $G$ act on a non-empty finite set $X$. Then $|X|\equiv |X^G|(\bmod p)$ where $X^G$ is the invariant subset fixed by $G$.
here is my solution, is it correct aswell?
$G \equiv |Z(G)| (mod p)$ since Z(G) is a fixed point set.
Now $|Z(G)| \equiv p^n(mod p)$, |Z(G)|=0.
So Z(G) has atleast p elements.
Let $G$ act on itself by conjugation (i.e. $X=G$ and $g*x = gxg^{-1}$). Also $G$ is a finite $p$-group which fits the above result thus $|G| \equiv |G^G| (\bmod p)$ but $G^G = \text{Z}(G)$
because that is the subset left fixed under conjugation. Thus, $|G| \equiv |\text{Z}(G)|(\bmod p)$ which means the center needs to be divisible by $p$, i.e. it cannot be trivial.
No, there is no contradiction argument. I proved the contrapositive statement.
February 25th 2008, 05:25 PM #2
Global Moderator
Nov 2005
New York City
February 25th 2008, 05:47 PM #3
Global Moderator
Nov 2005
New York City
February 25th 2008, 08:43 PM #4
Global Moderator
Nov 2005
New York City
February 26th 2008, 05:04 AM #5
Junior Member
Aug 2007
February 26th 2008, 05:08 AM #6
Junior Member
Aug 2007
February 26th 2008, 07:44 AM #7
Global Moderator
Nov 2005
New York City
February 26th 2008, 07:45 AM #8
Global Moderator
Nov 2005
New York City
|
{"url":"http://mathhelpforum.com/advanced-algebra/29103-centre-group-z-g.html","timestamp":"2014-04-17T09:40:29Z","content_type":null,"content_length":"79017","record_id":"<urn:uuid:2a205d0b-ff78-4e0b-a686-569c9d3f58fa>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00048-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: [PATCH] Fix the R6RS exact-integer-sqrt and import into core guile
[Top][All Lists]
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
Re: [PATCH] Fix the R6RS exact-integer-sqrt and import into core guile
From: Mark H Weaver
Subject: Re: [PATCH] Fix the R6RS exact-integer-sqrt and import into core guile
Date: Wed, 13 Apr 2011 11:31:43 -0400
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/23.3 (gnu/linux)
Hi Detlev,
Detlev Zundel <address@hidden> writes:
> Maybe it doesn't make sense to continue this discussion on a theoretical
> basis. Because on a theoretical basis I would probably cite the rule of
> modularity or even Antoine de Saint-Exupery:
> Perfection (in design) is achieved not when there is nothing more to
> add, but rather when there is nothing more to take away.
If we follow that rule, we should limit our available procedures to the
R5RS, and maybe some additional POSIX interfaces, and nothing more.
Since you don't want to continue this discussion on a theoretical basis,
can you please provide a concrete example of how the addition of
scm_exact_integer_sqrt might be a maintenance burden in the future,
given that our public C interface already consists of approximately 2K
functions, including most (maybe all?) of the other numeric operators?
What I'm asking for is a specific example of how some future change in
the internal workings of Guile's implementation might result in us
saying "Damn, if only we hadn't exported those last few numerical
operators, this would've been less of a pain!"
[Prev in Thread] Current Thread [Next in Thread]
• Re: [PATCH] Fix the R6RS exact-integer-sqrt and import into core guile, (continued)
• Re: [PATCH] Fix the R6RS exact-integer-sqrt and import into core guile, Detlev Zundel, 2011/04/11
• Re: [PATCH] Fix the R6RS exact-integer-sqrt and import into core guile, Noah Lavine, 2011/04/11
• Re: [PATCH] Fix the R6RS exact-integer-sqrt and import into core guile, Detlev Zundel, 2011/04/13
• Re: [PATCH] Fix the R6RS exact-integer-sqrt and import into core guile, Andy Wingo, 2011/04/13
• Re: [PATCH] Fix the R6RS exact-integer-sqrt and import into core guile, Mark H Weaver <=
• Re: [PATCH] Fix the R6RS exact-integer-sqrt and import into core guile, Detlev Zundel, 2011/04/13
• Re: [PATCH] Fix the R6RS exact-integer-sqrt and import into core guile, Noah Lavine, 2011/04/08
• Re: [PATCH] Fix the R6RS exact-integer-sqrt and import into core guile, Peter Brett, 2011/04/08
|
{"url":"http://lists.gnu.org/archive/html/guile-devel/2011-04/msg00106.html","timestamp":"2014-04-17T18:56:59Z","content_type":null,"content_length":"8520","record_id":"<urn:uuid:1968d455-4f8a-4a35-a546-705465146f21>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00609-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Eliminating x and y
November 14th 2009, 07:17 PM #1
Junior Member
Oct 2009
Hi there I have a problem that I am trying to figure out. It said that by eliminating $x$ and $y$ from the equations $\frac {1}{y}+\frac {1}{y}=1$, $x +y = a$, $\frac {y}{x}=m$ where $a =/0$.
obtain a relation between $m$adn $a$. Given that a is real, determine the ranges of values of $a$ for which $m$is real. Can some one help me with this problem. Thank you in advance.
I'm guessing the first term should be 1/x rather than another 1/y?
Solve the second for x and substitute this expression into the other two equations. You'll be nearly done.
I tried that and I am just not getting the problem solved at all.for the second I got $x=a-y$ but when I substitute that value for the other two eqaution every thing just seem confusing from
Perhaps this is a bit sloppy (but then I tend to make problems a bit rougher than they need to be.) Start with the two equaitons:
$\frac{1}{x} + \frac{1}{y} = 1$
$x + y = a$
Then we know that
$\frac{x + y}{xy} = 1$
Put the second equation in:
$\frac{a}{x(a - x)} = 1$
$x^2 - ax + a = 0$
Now start from x + y = a and y = mx.
$x + mx = a$
$x = \frac{a}{m + 1}$
Put this into the $x^2 - ax + a = 0$ equation to cancel the x and you are done.
Perhaps this is a bit sloppy (but then I tend to make problems a bit rougher than they need to be.) Start with the two equaitons:
$\frac{1}{x} + \frac{1}{y} = 1$
$x + y = a$
Then we know that
$\frac{x + y}{xy} = 1$
Put the second equation in:
$\frac{a}{x(a - x)} = 1$
$x^2 - ax + a = 0$
Now start from x + y = a and y = mx.
$x + mx = a$
$x = \frac{a}{m + 1}$
Put this into the $x^2 - ax + a = 0$ equation to cancel the x and you are done.
Thank you very much
November 14th 2009, 07:24 PM #2
MHF Contributor
Aug 2007
November 14th 2009, 07:40 PM #3
Junior Member
Oct 2009
November 14th 2009, 08:01 PM #4
November 14th 2009, 09:41 PM #5
Junior Member
Oct 2009
|
{"url":"http://mathhelpforum.com/pre-calculus/114585-eliminating-x-y.html","timestamp":"2014-04-21T17:05:09Z","content_type":null,"content_length":"48185","record_id":"<urn:uuid:0669501e-787c-46e7-b3d3-369f8a430338>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00172-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Why I Got Euler's Identity Tattooed On My Back
The majority of people in my life are uninterested in math, to put it mildly, but that has not prevented them from expressing curiosity about my
Euler's Identity
tattoo. "What's it mean?" they ask. "Why is it so important to you?" These questions are difficult to answer without invoking so much math as to make the questioners' eyes glaze over (at best; at
worst they run screaming from the room). This post attempts to provide a non-mathematical metaphor that, I hope, will convey even to math-phobic readers the same kind of awe and beauty that I
experience when contemplating Euler's Identity. I'm going to start with a totally un-mathy topic:
literary characterization
Readers of
have slightly different ideas about what makes for good characterization than, say,
readers of comic books
. (I'm deliberately defying the growing pressure to call this benighted artform
"graphic novels"
by focusing specifically on comic books about
, which is how it all got started, no matter how much the breathless fanboys may inform me that it's grown up since.) Readers of novels define their characters in terms of interiority: they talk
about childhood memory, emotional nuance, mannerisms, depth. Readers of superhero comic books define their characters in terms of exteriority: they talk about costumes, superpowers, Achilles' heels,
weapons, amulets, etc. It's the difference between viewing a character as a human being versus as a mere bundle of attributes.
Although I have little patience in general with the superhero comic book way of approaching character, it is useful in this context because it provides an interesting approach to numbers. Numbers,
too, can be described merely as bundles of attributes. Of any number, there's a list of well-defined characteristics that it may or may not have:
odd or even
Positive or negative
? As long as we're willing to view the world through these glasses, then it's not such a stretch to reimagine the set of all numbers as literature's only infinite cast of characters.
Granted, some of these "characters" are a lot more interesting, compelling even, than others.
is a good example. We have yet to discover any other number that is as devoted a pacifist and yet simultaneously as rapacious a conqueror as zero. When it comes to addition, zero is a cuddly,
harmless teddy bear:
you can add anything to zero and that number will just emerge as itself, untouched
. But
when it comes to multiplication, zero is Genghis Khan, slaughtering all hapless comers, leaving only itself standing
. (Don't even get zero
In 2005,
Roger Ebert
popularized the term
"hyperlink movie"
(coined by Alissa Quart in that same year) in
his review of Syriana
, where he says it "describes movies in which the characters inhabit separate stories, but we gradually discover how those in one story are connected to those in another." Part of the power of this
narrative structure, he explains, is that "the motives of one character may have to be reinterpreted after we meet another one." The TV show
, for instance, also has a "hyperlink narrative."
Euler's Identity is the best evidence thus far discovered that all of mathematics is one giant hyperlink narrative. Before
Leonhard Euler
discovered it in the sixteenth century, mathematics better resembled, say, the work of
Stephen King
-- a collection of largely unrelated narratives that occasionally overlap in
. There was
the "number theory" narrative
, whose heroes are 0 and
the "calculus" narrative
, whose hero is
the "complex analysis" narrative
, whose hero is
; and
the "trigonometry" narrative
, whose hero is
. Nobody had any idea that the
arcs of these characters
, which had been entirely separate up to that point, would directly intersect each other.
If math is a hyperlink movie with numbers as its heroes, then Euler's Identity can be thought of as
that mindfuck scene at the end that makes the viewer go "No way!"
It is the first, and so far only, scene in
Math: The Movie
in which all five of these compelling number-characters, 0, 1,
, pi, and
, interact directly. (When
came out, much was made of the fact that Pacino and DeNiro, two Italian giants of crime noir, had a single six-minute scene together; imagine how much
of a publicity coup it would have been if they were joined by, say, Humphrey Bogart, Denzel Washington, and Laurence Olivier.)
But what are these mathemacting luminaries
to each other? No one knows. After proving the Identity in a lecture,
Benjamin Peirce
said, "It is absolutely paradoxical; we cannot understand it, and we don't know what it means, but we have proved it, and therefore we know it must be the truth." Euler's Scene is the mathematical
equivalent of something out of a
David Lynch
film -- arresting, bizarre, inscrutable -- only, this David Lynch film is actually a documentary.
4 comments:
Hey there - it's a great explanation of what it is but not, I think, an explanation of why it moves you to the degree that you wanted it engraved into your body. And it seems a pity that it had
to be put on your back rather than, say, the inside of your eyelid - where you could see it!!
Owen - this is good stuff and well spoken as usual. I like Kathryn's comment also.
I would only suggest that you not so quickly discount the super-hero/comic book genre so quickly. It seems to me that the people we meet in real life are also defined by their exteriority - for
the most part.
No matter how much I get to know someone, there are limits to how much I can know their heart or their head. I only know of you that which you want me to know and that which you can't control
yourself well enough to hide.
Cool beans!
Hi -
I was actually considering getting this tattoo myself and I googled it to see if anyone had ever gotten it. Your description here puts into words exactly why the Euler identity is so cool, and so
Thanks for this!
|
{"url":"http://higgsblogon.blogspot.com/2010/03/why-i-got-eulers-identity-tattooed-on.html","timestamp":"2014-04-19T17:01:04Z","content_type":null,"content_length":"46051","record_id":"<urn:uuid:a956f32d-9879-44c4-8d94-83a5aef6729c>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00501-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Can I use K-means algorithm on a string?
up vote 4 down vote favorite
I am working on a python project where I study RNA structure evolution (represented as a string for example: "(((...)))" where the parenthesis represent basepairs). The point being is that I have an
ideal structure and a population that evolves towards the ideal structure. I have implemented everything however I would like to add a feature where I can get the "number of buckets" ie the k most
representative structures in the population at each generation.
I was thinking of using the k-means algorithm but I am not sure how to use it with strings. I found scipy.cluster.vq but I don't know how to use it in my case.
add comment
3 Answers
active oldest votes
K-means doesn't really care about the type of the data involved. All you need to do a K-means is some way to measure a "distance" from one item to another. It'll do its thing based
on the distances, regardless of how that happens to be computed from the underlying data.
up vote 3 down
vote accepted That said, I haven't used scipy.cluster.vq, so I'm not sure exactly how you tell it the relationship between items, or how to compute a distance from item A to item B.
This answer doesn't make any sense. What is the "distance" between two strings of RNA such that it A) obeys the triangle inequality and B) is euclidean? There are many clustering
algorithms, and it seems beyond me how k-means in particular would be useful in this circumstance. – sclv Jun 9 '11 at 16:51
The distance I am using is the structural distance, for example sequences: (1) "(((....)))" and (2) "((((..))))" Have a distance of 1 since the only difference in an insertion –
Doni Jun 9 '11 at 20:35
add comment
One problem you would face if using scipy.cluster.vq.kmeans is that that function uses Euclidean distance to measure closeness. You'd have to find a way to convert your strings into
numerical vectors and be able to justify using Euclidean distance as a reasonable measure of closeness. What if two strings are identical except that one has an additional basepair
inserted somewhere. You might want to consider them "close", but Euclidean distance might be far apart...
up vote 4
down vote Perhaps you are looking for Levenshtein distance?
add comment
K-means only works with euclidean distance. Edit distances such as Levenshtein don't even obey the triangle inequality. For the sorts of metrics you're interested in, you're better off
using a different sort of algorithm, such as Hierarchical clustering: http://en.wikipedia.org/wiki/Hierarchical_clustering
up vote 4 Alternately, just convert your list of RNA into a weighted graph, with Levenshtein weights at the edges, and then decompose it into a minimum spanning tree. The most connected nodes of
down vote that tree will be, in a sense, the "most representative".
Why the downvote? – sclv Jun 9 '11 at 16:50
add comment
Not the answer you're looking for? Browse other questions tagged python algorithm cluster-analysis bioinformatics k-means or ask your own question.
|
{"url":"http://stackoverflow.com/questions/6293637/can-i-use-k-means-algorithm-on-a-string","timestamp":"2014-04-17T05:15:26Z","content_type":null,"content_length":"75414","record_id":"<urn:uuid:279e2da1-3fa4-4738-8ff2-c3d7620ef678>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00029-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Inverse Parametric Analysis of Seismic Permanent Deformation for Earth-Rockfill Dams Using Artificial Neural Networks
Mathematical Problems in Engineering
Volume 2012 (2012), Article ID 383749, 19 pages
Research Article
Inverse Parametric Analysis of Seismic Permanent Deformation for Earth-Rockfill Dams Using Artificial Neural Networks
^1Faculty of Infrastructure Engineering, Dalian University of Technology, Dalian, Liaoning 116024, China
^2College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, Zhejiang 310023, China
Received 27 October 2012; Accepted 3 December 2012
Academic Editor: Sheng-yong Chen
Copyright © 2012 Xu Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
This paper investigates the potential application of artificial neural networks in permanent deformation parameter identification for rockfill dams. Two kinds of neural network models, multilayer
feedforward network (BP) and radial basis function (RBF) networks, are adopted to identify the parameters of seismic permanent deformation for Zipingpu Dam in China. The dynamic analysis is carried
out by three-dimensional finite element method, and earthquake-induced permanent deformation is calculated by an equivalent nodal force method. Based on the sensitivity analysis of permanent
deformation parameters, an objective function for network training is established by considering parameter sensitivity, which can improve the accuracy of parameter identification. By comparison, it
is found that RBF outperforms the BP network in this problem. The proposed inverse analysis model for earth-rockfill dams can identify the seismic deformation parameters with just a small amount of
sample designs, and much calculation time can be saved by this method.
1. Introduction
The dynamic response of rockfill dams under earthquake actions, mainly including absolution acceleration and permanent deformation, attracts more and more attention from engineers. The former is used
to assess the dynamical load and seismic resistance of the dam. The latter is adopted to provide a basis for the dam height reserved during the design phase. So the prediction of permanent
deformation is an essential problem in the seismic design for rockfill dams. As a result, it is important to select model parameters of rockfill dams reasonably, which makes for improving the
accuracy of the numerical calculation.
However, it is not easy to carry out. Because of the difference of construction technology and construction quality and so on, the spatial distribution of material properties is considerably random
in each project. Along with the development of construction technology, the maximum particle size of materials of rockfill dams may be bigger and bigger, but the model parameters are usually measured
in the laboratory by using samples with much smaller size. The prepared experimental samples in the laboratory are different from the construction conditions. Therefore, the mechanical properties of
samples determined in the laboratory may be more or less differing from those in situ. And then the stress and deformation acquired with laboratory parameters deviate far from the actual values
inevitably. As a consequence, it is necessary to take measures to get model parameters in accord with the results of dam observation and make an accurate evaluation of dam safety and stability after
that. Displacement backanalysis is an effective method to check and modify the parameters of soils.
In recent years, reverse analysis is mainly based on two methodologies: optimization algorithms and neural networks [1–5]. There are three types of optimization algorithms that have been frequently
used in inverse analysis. The first type is gradient-based direct search algorithms, such as Levenberg-Marquardt method. The second type is the relatively simple direct search methods making no use
of gradient, such as the simplex search method. The last type is kinds of intelligent global search algorithms, such as genetic algorithms, differential evolution, particle swarm optimization, and
ant colony optimization. The first and the second type algorithms both have an advantage of estimating the solutions in relatively short computational time, but the results are affected by the
initial values, and premature convergence is likely to happen. As an alternative to the direct search algorithms, intelligent global search algorithms are being widely adopted in reverse analysis,
but they have a disadvantage of being time consuming.
In the geotechnical engineering field, intelligent backanalysis methods based on artificial neural networks (ANNs) and genetic algorithms [6, 7] are often adopted. As for generic algorithms, the
range and trial values of the undetermined parameters should be given before the analysis, and then the time-consuming finite element method (FEM) calculation is performed repeatedly, so it is hard
to ideally solve complicated nonlinear problems with a lot of finite elements. That is why it has been primarily used for seeking answers to static problems and two-dimensional problems so far. They
need relatively few iterations and finite elements. Comparatively speaking, the strong nonlinear relationship between the known and unknown quantity in geotechnical engineering can be mapped well by
using ANNs. And neural network approach can obtain inversion parameters quickly with just a small amount of sample designs.
The aim of this paper is to present an inverse analysis model for seismic permanent deformation parameters of earth-rockfill dams based on artificial neural networks. Section 2 presents the theories
of forward computational models for static analysis, dynamic response analysis, permanent deformation analysis, and sensitivity analysis of design parameters. Section 3 introduces backprorogation
neural networks (BPNNs) and radial basis function neural networks (RBFNNs). Section 4 shows the performance of both ANNs. Finally, conclusions are made in Section 5.
2. The Mathematical Model
2.1. Static Analysis
Duncan and Chang’s E-B model [8] is used to simulate the mechanical behavior of the rockfill materials. It is a nonlinear elastic model with wide application and is characterized by seven parameters:
cohesion , friction (or , ), failure ration , modulus number , modulus exponent , bulk modulus number , and bulk modulus exponent . The nonlinear stress-strain relation of rockfill is represented by
a hyperbola, whose instantaneous slope is the tangent modulus . According to the conventional triaxial tests, can be expressed as follows: The E-B model follows the Mohr-Coulomb criterion, and the
wider the range of pressure involved the greater the curvature of the Mohr-Coulomb envelopes, since friction becomes smaller with the increase of minor principal stress . So as to coarse-grained
soil, friction is not constant everywhere in dams. This variation in property may be represented by an equation of the form: where is the value of for and is the reduction of for a 10-fold increase
in . And the bulk modulus can be expressed as
2.2. Dynamic Analysis
Equivalent linear elastic model [9] is used to simulate the dynamic properties of the earth-rock mixtures with two basic characteristics: nonlinearity and hysteresis. Soils have been deemed to be
viscoelasticity in the model, and the dynamic stress-strain relationship is reflected with equivalent shear modulus and equivalent damping ratio . The key of the model is to determine the relation
between maximum dynamic shear modulus and mean effective principal stress , as well as the variation of and along with the amplitude of dynamic shear strain. Based on a large number of experiments
conducted by China Institute of Water Resources and Hydropower Research (IWHR), maximum dynamic shear modulus can be expressed as where are modulus and exponent usually determined by experiments. is
the atmospheric pressure. In the experiments, the three-axis instrument and wave velocity test device were used. In regard to and , a relation curve is achieved firstly through experiments, which
describes the variation of the dynamic shear modulus ratio / and damping ratio with dynamic shear strain . Then is normalized by reference shear strain to make instantaneous and easy to get through
The effects of hydrodynamic pressure have to be considered when analyzing the dynamic interaction between dam and water in reservoir, since reservoirs may not always operate at a low water level
during an earthquake. An ideal way to consider the effects is to divide finite element grids regarding the water and dam body as a whole, and interface element is utilized between contacting
surfaces. However, it requires the computer with sufficient memory for the process and is very time consuming. Besides, stiffness coefficient of the interface element is hard to determine. So the
additional mass method has been widely used so far. The effects of dynamic water pressure on seismic response of the dam are taken into account with the equivalent additional mass, and the dynamic
analysis is done by adding the equivalent mass and the mass of the dam itself.
In this paper, equivalent additional mass is calculated by the lumped-mass method. The equivalent additional mass focusing on node is calculated by simplified Westergaard formula [10]: where is the
angle between the upstream slop and the horizontal plane is depth of water from the surface to the bottom of reservoirs. is depth of water from the surface to node , and is the corresponding control
area of node .
2.3. Permanent Deformation Analysis
2.3.1. The Model of Permanent Deformation
There have been several models for permanent deformation calculation at present, such as IWHR model [11], Debouchure model, Shen Zhujiang model [12], and improved models about them [13, 14], among
which Shen Zhujiang model owns the broadest application so that the model is expressed in an incremental form, and permanent deformations in various conditions including different vibrations, dynamic
shear strain, and stress levels can be calculated with only a set of parameters. Compared with the other models, Shen Zhujiang model not only has a consideration on both shearing deformation and
volume deformation, but also is easier to use. Residual volumetric strain and residual shear strain can be written as where is dynamic shear strain amplitude, is stress level, is the number of
vibrations, and , , , , and are experimental parameters. The parameter is assumed to have nothing to do with , that is, .
However, studies in recent years have suggested that the deformations calculated by Shen Zhujiang model are larger than actual performance, and it is adverse to an accurate evaluation of the seismic
behavior of faced rockfill dams. So it is necessary to appropriately improve the model. Zou Degao focused on the influence of stress level on the residual shear deformation and presented an improved
model based on a large number of cycle triaxial experiments [13]. When the earthquake-induced permanent deformations are calculated with FEM, the improved model can be expressed as an incremental
form as follows: where , are increments of residual volumetric strain and residual shear strain, respectively.
2.3.2. The Method of Permanent Deformation Analysis
The major ways of overall seismic deformation analysis are based on the method of equivalent nodal force and modulus soften model [15] now. The calculation process of modulus soften model is
relatively complicated, so the method of equivalent nodal force is a better choice for the permanent deformation analysis; the ideal about it is that the residual strain during an earthquake is
determined firstly by an empirical formula, then the residual strain is converted to equivalent node force of finite elements, and the contributions of residual strain to the dam are replaced by the
displacement calculated with the equivalent nodal force. The procedure comprises the following three steps.(1) Perform static calculation for the dam with midpoint incremental method, to get the
initial static stress and stress level .(2) Perform dynamical calculation through the approach on the basis of equivalent linear viscoelastic model, to get the dynamic stress of soil, then convert
the dynamic stress to stress state in laboratory, and get residual strain potential , according to (2.7). And strain potential of finite elements is obtained according to the following formula:
where is generalized shear stress, is average principal stress, and is increment of residual strain in Cartesian coordinates.(3) Calculate the equivalent nodal force with the converted strain
potential according to the formula that where is the conversion matrix of strain is the elastic matrix. Then permanent deformations are calculated with the equivalent nodal force applied on finite
3. Permanent Deformation Parameters Inversion Using Artificial Neural Networks
3.1. BP Neural Networks
For a BPNN with a structure , that is, a vector input , hidden units, and an output vector as in Figure 1, the equation that expresses the relationship between the input and output can be written
as where is the number of input units, is the number of neurons in the hidden layer, is the th input unit, is the weight parameter between input and hidden neuron , is the weight parameter between
hidden neuron and output neuron , is the activation function of the hidden layer, and is the transfer function of output layer.
The weights were estimated and adjusted in the learning process with an aim of minimizing an error function as follows: where is the number of input and output examples of the training dataset and is
the target value. The errors were fed backward through the network to adjust the weights until the error was acceptable for the network model. Once the ANN is satisfied in the training process, the
synaptic weights will be saved and then used to predict the outcome for the new data. To minimize , optimal parameters of weights and biases have to be determined. One of the algorithms for solving
this problem is the Levenberg-Marquardt (LM) algorithm. This algorithm is a modification of the Newton algorithm for finding optimal solutions to a minimization problem. The weights of an LMNN are
calculated using the following equation: where is the Jacobian matrix of output errors, is the identity matrix, and is an adaptive parameter. When , it becomes the Gauss-Newton method using the
approximate Hessian matrix. If is large enough, the LM algorithm becomes a gradient descent with a small step size (the same as in the standard backpropagation algorithm).
3.2. RBF Neural Networks
RBFNN is a kind of feedforward neural network and generally consists of three components: input layer, hidden layer, and output layer. Figure 2 displays an RBF network with a structure that there are
inputs, hidden units and outputs. is the input vector. is the weight matrix of inputs and are offset of output units. is the output vector of the network, and is the activation function of the hidden
unit ; one common function is the Gaussian function: where is spread constant, the role of which is to adjust sensitivity of the Gaussian function.
The final output of unit can be expressed as where is the weight of output layer and is spread constant of the base function.
3.3. Sample Set Designs
The prediction accuracy of neural networks is determined by sample quality to some extent, the samples used for inversion have to be accurate and balanced, so as to be representative enough, and it
is better to reflect inner characters of the model system. In this paper, the methods of central composite design [16] and orthogonal design are utilized to design samples. The central composite
design was presented by Box and Wilson. It consists of the following three parts: a full factorial or fractional factorial design, a central point, and an additional design, often a star design in
which experimental points are at a distance from its center. Figure 3 illustrates the full central composite design for optimization of three variables. Full uniformly rotatable central composite
designs present the following characteristics:(1)require an experiment number according to , where is the factor number and is the replicate number of the central point.(2)-values depend on the
number of variables and can be calculated by . For two, three, and four variables, it equals 1.41, 1.68, and 2.00, respectively.(3)All factors are studied in five levels ).
As a result, samples designed by both methods embody not only inner but also outer conditions of the cube within certain realms, and they have good density distribution and representativeness.
3.4. Parameter Sensitivity Analysis
Sensitivity analysis can estimate the influence of parameter variation on outputs and make us have a more intuitive understanding about the parameters to be considered. Morris method [17] was more
applied in sensitivity analysis. It figured out the influence of arguments on the dependent variable through disturbing a parameter and keeping the others the same. The corrected Morris method
changed arguments at a fixed step size, calculated the value of influence in each condition, and then took the average like this: where is the sensitivity factor, is the output of condition , is the
output derived from the initial parameters, is the percentage of parameter variations in condition compared to the initial parameters, and is the number of conditions.
4. Example Analysis
4.1. Brief Introduction to the Project
The Zipingpu dam is located in a valley, 9km away in northwest from the Chengdu City, Sichuan province. It is one of the high CFRDs more than 150m in China, with a maximum height of 156m and the
crest length 663.77m. It encountered the Sichuan 8.0-magnitude earthquake which was higher than its actual design level. The dam body emerged obvious damage, and it provided rich and precious
materials [18, 19] for earthquake engineering research on CFRDs.
4.2. Results of Static Calculation
Static parameters were directly selected from the experimental results coming from the IWHR, considering as well the results from Professor Zhu Cheng who partly backanalyzed the parameters of the
project by immune genetic algorithm. Integrating both results, the parameters of rockfill were determined in Table 1. Besides, the linear elastic model was adopted for the calculation of concrete
panels, concrete strength grade was 25, and the corresponding material parameters were density , MPa, Poisson’s ratio .
To simulate the process of construction and impoundment of the CFRD, midpoint incremental method and multistage loading process were used in the calculations. The dam was meshed into 6772 finite
elements with total 6846 nodes, as shown in Figure 4. And the main results of a typical cross-section in rockfill zone were shown as in Figures 5 and 6, which were at operational water level before
the earthquake.
4.3. Results of Dynamic Calculation
Due to influence of all kinds of factors, Zipingpu dam had no acceleration recordings of the actual principle shock of base rock. According to analysis [20, 21] of many scholars, the actual input
ground motion was likely to be more than 0.5g. Referring to the attenuation relationship [22] given by Yu et al., and considering the hanging wall and footwall effects, Professor Zhu Cheng deduced
that the peak accelerations of dam site in three directions, respectively, were 0.52g in east-west, 0.46g in north-south, and 0.43g in vertical. And based on the materials concerned in [22], the
relative position of seismometer stations and dam site was shown in Figure 7. Finally, the seismograms of Wolong Station (051WCW) closer to the dam site were selected as the ground motion input;
meanwhile the acceleration time histories were scaled in accordance with the peak accelerations as aforementioned, as shown in Figure 8.
According to the calculation results, the basic frequency of dam vibration was about 1.65Hz and the maximum acceleration responses at dam crest were 0.86g along the river, 0.74g in vertical, and
1.36g along the dam axial, which were consistent with the analysis results obtained by Kong et al. [21]. Under the three-dimensional earthquake, the maximum acceleration response lay in the
downstream dam crest, and rockfill slid when its acceleration response exceeded yield acceleration, which qualitatively explained the phenomenon on Zipingpu dam that some grains in the downstream dam
crest loosened and tumbled.
4.4. Backanalysis of the Project
Though the dam has different material zoning, material sources are same the and the construction control indexes are very near; therefore dynamic properties of different zones will not vary much. To
reduce the workload, all rockfills were thought to have the same set of permanent deformation parameters, they were but was out of the inversion range for that it was a constant as mentioned in (2.6
). Reference [23] listed several project cases at home and abroad and gave the test values of corresponding permanent deformation parameters [24–27]. However, parameters of different projects varied
entirely; as a result, a set of initial parameters were determined firstly referring to the similar engineering and the inversion range of parameter was preliminarily determined as Table 2.
4.4.1. Parameter Sensitivity Analysis
Disturb a parameter with a fixed step size 10% and keep the other parameters the same. And then sensitivity analysis of the dam on maximum subsidence was performed, and the result was shown in
Figures 9 and 10.
According to the classification results in [17], parameter sensitivity was graded into four levels: highly sensitive, sensitive, moderately sensitive, and insensitive. Figure 10 shows that is a
moderately sensitive parameter, , are sensitive ones, and is highly sensitive. Besides, Figure 9 shows that the subsidence under earthquake increases with the increase of and decreases with the
increase of .
4.4.2. Comparison of RBF and BP Networks in Inversion
In order to improve the generalization ability of neural network, a training method [28] was taken that samples were partitioned into several subsets, and then the network was trained and appraisal
was done at the same time with the change of spread constant. Samples were partitioned into three subsets here for training, validation, and testing. The training and validation datasets were used to
determine synoptic weights of the network model whereas the testing dataset is used to evaluate the prediction results. If the performance index (generally mean square error, (MSE)) of errors of
training dataset was satisfied, then determine the optimal network according to that of verification dataset.
There were 90 training samples in all generated by the methods of central composite design and orthogonal design, and 9 samples generated at random among which 6 ones were used for verification, and
the others were for testing. With observation displacement as input vector and permanent deformation parameter as an output vector, the neural network could be trained. One of the common ways to
evaluate performance of the network was by error of mean square root (RMSE). However, backanalysis of permanent deformation parameters by neural network was a problem of multi-input-output, if the
network performance was still evaluated like that: Then it could not reflect the error influence of different parameters; that is to say, the results calculated by inversion parameters might deviate
considerably from the actual value since the forecasting error of highly sensitive parameter was relatively big, though the total error was small. So in this paper, an objective function for
evaluating network performance was put forward to improve fitting accuracy of high sensitive parameter, where sensitivity factors were taken as weights of error, as follows: where is the sample
number of validation dataset, is the dimension of the output vector, and and are final parameter output and the actual parameter value, respectively.
The key of RBF network training is to determine the neural number of hidden layer. It has been a common method at present to gradually increase the neural number automatically by checking the output
error, starting from zero. So the process of establishing a network is also the one of training. Spread constant was adjusted until the prediction accuracy of testing samples met the engineering
requirements, and then the trained network can be practically used for parameter inversion. It is beneficial for the generalization of neural network to adjust spread constant, the higher the , the
more smooth the fitting, but not the higher the better, if is too high, all outputs are likely to become common and the base functions tend to overlap completely. Its value generally depends on the
distance between the input vectors. To analyze the effectiveness of the training, the transition of the RMSE was traced with the change of as shown in Figure 11. It can be seen that the RMSE
decreases rapidly in the initial stages and almost remains the same after .
The process of BP network training was similar to the RBF network; after MSE of training dataset was satisfied, then determine the optimal network according to RMSE of verification dataset. The
maximum number of training epochs was set to 3000. The MSE goal was set to 0.001. The initial value of adaptive parameter was set to 0.001. increased by 10 and decreased by 0.1 until performance
value reduced, and its maximum value is set to 1e10. During the training phase, the data were processed several times to see whether any changes occurred due to the assignment of random initial
weights. Figure 12 describes the results that the value of RMSE decreases from the largest value of 0.98 with one hidden neuron to the value of 0.18 with 11 hidden neurons. RMSE was then stable at
this value with an increasing number of hidden neurons.
To test the prediction accuracy of the networks, the outputs of the testing samples were listed in Table 3. For the three samples, the prediction accuracy of all parameters by RBFNN is around 5%
whereas not all the results by BPNN are satisfactory. However, for both networks, the forecasting error of parameter is generally smaller, which indicates that it has a pronounced effect on
improvement of highly sensitive parameters with the function of error evaluation, in the problem of multiple parameter inversion.
To further check the error precision of influence on subsidence, the predictive parameters by RBFNN were used for finite element calculations. Owing to space reasons, for the second sample only, the
results were compared with the actual value as in Figure 13, and the observation points were on central axis of the typical section. It can be seen that the computation values are very close, which
indicates that the prediction effect by the RBF network can basically meet engineering accuracy.
With actual observation displacement of Zipingpu dam as input vector, permanent deformation parameters of rockfill were backanalyzed by the trained RBF network, and the results are shown in Table 4.
The permanent displacement calculated by the inversion parameters is shown in Figure 14, and the marked values are actual displacements of the dam. It can be seen that the vertical displacements
increase with the increment of dam height, which accords with the law of the measured settlement, and both values are also very close. Figure 15 shows the deformed mesh for finite element
calculations, where deformation has already been enlarged to see clearly. Besides, the dam section becomes smaller and the slopes of both upstream and downstream shrink inward, which embodies the
macroscopic character of rockfill under earthquake action. Due to the upstream water load, the maximum of the horizontal permanent deformation points to the downstream of the dam but the
earthquake-induced permanent deformation is predominantly vertical settlement. In summary, the results are qualitatively rational, which indicates that it is feasible to calculate the
earthquake-induced permanent deformation by the method of equivalent nodal force on the basis of improved Shen Zhujiang model.
5. Conclusions
In the process of conventional back/inverse analysis, it is necessary to perform FEM analysis frequently. For a large-scale multiple parameter and nonlinear problem such as backanalysis of
displacement of an earth-rockfill dam, the workload is so daunting that sometimes the backanalysis cannot be carried out. In this paper, ANNs were introduced in the field of dynamic parameter
inversion of earth-rockfill dam, BP and RBF networks were compared, then on the basis of RBFNN, a backanalysis model for earthquake-induced permanent deformation parameters was proposed, and it was
used for backanalysis of parameters for Zipingpu CFRD. The results indicate the following:(1)The RBFNN model appears more robust and efficient than BPNN model for backanalysis of earthquake-induced
permanent deformation parameters of the earth-rockfill dam. Due to the assignment of random initial weights, the structure of BPNN is difficult to determine, network training results are unstable,
and it can easily be trap in a local optimum; all of the problems are still to be resolved, and so RBFNN is a better choice. (2)It is an easy and effective way to backanalysis of earthquake-induced
permanent deformation parameters of the earth-rockfill dam with RBF network model. In this way, the inversion range of parameters is determined according to the similar engineering, and several
samples are generated through the experimental design methods; then after the neural network is trained, the residual deformation parameters can be acquired quickly by putting in actual permanent
deformations of the dam.(3)The existing theory and method of dynamic calculation can basically reflect the earthquake-resistant behavior of earth-rockfill dam, but it is still an extremely complex
subject to research; many factors such as seismic input and calculation model may lead to great gap between calculated value and actual value. Thereby, further study on dynamic analysis method will
make the inversion of permanent deformation parameters more meaningful.
This work was supported by National Natural Science Foundation of China (51109028 and 90815024) and the Fundamental Research Funds for the Central Universities (DUT11RC(3)38).
1. F. Kang, J. Li, and Q. Xu, “Structural inverse analysis by hybrid simplex artificial bee colony algorithms,” Computers and Structures, vol. 87, no. 13-14, pp. 861–870, 2009. View at Publisher ·
View at Google Scholar · View at Scopus
2. P. Z. Lu, S. Y. Chen, and Y. J. Zheng, “Artificial intelligence in civil engineering,” Mathematical Problems in Engineering, vol. 2012, Article ID 145974, 22 pages, 2012. View at Publisher · View
at Google Scholar
3. S. Chen, Y. Zheng, C. Cattani, and W. Wang, “Modeling of biological intelligence for SCM system optimization,” Computational and Mathematical Methods in Medicine, vol. 2012, Article ID 769702, 10
pages, 2012. View at Zentralblatt MATH · View at MathSciNet
4. X.-H. Yang, F.-L. Jiang, S.-Y. Chen, and W.-L. Wang, “Modeling evolution of weighted clique networks,” Communications in Theoretical Physics, vol. 56, no. 5, pp. 952–956, 2011. View at Publisher
· View at Google Scholar
5. S. Y. Chen, C. Y. Yao, G. Xiao, Y. S. Ying, and W. L. Wang, “Fault detection and prediction of clocks and timers based on computer audition and probabilistic neural networks,” Lecture Notes on
Computer Science, vol. 3512, pp. 952–959, 2005.
6. Y. Yu, B. Zhang, and H. Yuan, “An intelligent displacement back-analysis method for earth-rockfill dams,” Computers and Geotechnics, vol. 34, no. 6, pp. 423–434, 2007. View at Publisher · View at
Google Scholar · View at Scopus
7. S. Zhu, G. Yang, J. P. Zhou, and Y. G. Song, “Back analysis on static and dynamic characteristics of Zipingpu CFRD under “5.12” Wenchuan Earthquake,” Journal of Sichuan University, vol. 42, no.
5, pp. 113–119, 2010. View at Scopus
8. J. M. Duncan and C. Y. Chang, “Nonlinear analysis of stress and strain in soils,” Journal of the Soil Mechanics and Foundations Division ASCE, vol. 96, no. SM5, pp. 1629–1653, 1970.
9. X. S. Liu, Z. N. Wang, X. G. Wang, et al., Large-Scale Shaking Table Model Test and Dynamic Analysis for CFRD, China Waterpower Press, Beijing, China, 2010.
10. H. Chen, S. Hou, and D. Yang, “Study on arch dam reservoir interaction under earthquake condition.,” Journal of Hydraulic Engineering, vol. 10, no. 7, pp. 29–39, 1989. View at Scopus
11. K. Y. Wang, Y. P. Chang, and N. Chen, “Residual deformation characteristics of coarse-grained soils under cyclic loading,” China Civil Engineering Journal, vol. 33, no. 3, pp. 48–53, 2000.
12. Z. J. Shen and G. Xu, “Deformation behavior of rock material under cyclic loading,” Hydro-Science and Engineering, vol. 6, no. 2, pp. 143–150, 1996.
13. D. G. Zou, F. W. Meng, X. J. Kong, et al., “Residual deformation behavior of rock-fill materials,” Chinese Journal of Geotechnical Engineering, vol. 30, no. 6, pp. 807–812, 2008.
14. S. Zhu and J. B. Zhou, “Deformation behavior of coarse grained materials under cyclic loading,” Rock and Soil Mechanics, vol. 31, no. 5, pp. 1375–1380, 2010.
15. H. L. Liu, Z. Z. Lu, and J. H. Qian, “Earthquake-induced permanent deformation of earth-rock dams,” Journal of Hohai University, vol. 24, no. 1, pp. 91–96, 1996.
16. M. A. Bezerra, R. E. Santelli, E. P. Oliveira, L. S. Villar, and L. A. Escaleira, “Response surface methodology (RSM) as a tool for optimization in analytical chemistry,” Talanta, vol. 76, no. 5,
pp. 965–977, 2008. View at Publisher · View at Google Scholar · View at Scopus
17. J. L. Huang, P. F. Du, W. Q. He, Z. D. Ou, H. C. Wang, and Z. S. Wang, “Local sensitivity analysis for urban rainfall runoff modelling,” China Environmental Science, vol. 27, no. 4, pp. 549–553,
2007. View at Scopus
18. S. S. Chen, J. P. Huo, and W. M. Zhang, “Analysis of effects of “5.12” Wenchuan Earthquake on Zipingpu Concrete Face Rock-fill Dam,” Chinese Journal of Geotechnical Engineering, vol. 30, no. 6,
pp. 795–801, 2008. View at Scopus
19. Z. Y. Yang, J. M. Zhang, and X. Z. Gao, “A primary analysis of seismic behavior and damage for Zipingpu CFRD during Wenchuan Earthquake,” Water Power, vol. 35, no. 7, pp. 30–33, 2009.
20. H. Q. Chen, Z. P. Xu, and M. Lee, “Wenchuan Earthquake and seismic safety of large dams,” Journal of Hydraulic Engineering, vol. 39, no. 10, pp. 1158–1167, 2008. View at Scopus
21. X. J. Kong, D. G. Zou, Y. Zhou, and B. Xu, “Earthquake damage analysis of Zipingpu concrete face rock-fill dam during Wenchuan earthquake,” Journal of Dalian University of Technology, vol. 49,
no. 5, pp. 667–674, 2009. View at Scopus
22. H. Yu, D. Wang, Y. Yang, Q. Xie, W. Jiang, and B. Zhou, “The preliminary analysis of strong ground motion records from the M s 8.0 Wenchuan Earthquake,” Journal of Earthquake Engineering and
Engineering Vibration, vol. 29, no. 1, pp. 1–13, 2009. View at Scopus
23. G. C. Gu, C. S. Shen, and W. J. Cen, Earthquake Engineering for Earthrock Dams, China Waterpower Press, Beijing, China, 2009.
24. S. Y. Chen and Q. Guan, “Parametric shape representation by a deformable NURBS model for cardiac functional measurements,” IEEE Transactions on Biomedical Engineering, vol. 58, no. 3, pp.
480–487, 2011. View at Publisher · View at Google Scholar · View at Scopus
25. M. Li, S. C. Lim, and S. Chen, “Exact solution of impulse response to a class of fractional oscillators and its stability,” Mathematical Problems in Engineering, vol. 2011, Article ID 657839, 9
pages, 2011. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet
26. S. Chen, Y. Wang, and C. Cattani, “Key issues in modeling of complex 3D structures from video sequences,” Mathematical Problems in Engineering, vol. 2012, Article ID 856523, 17 pages, 2012. View
at Publisher · View at Google Scholar
27. M. Carlini, S. Castellucci, M. Guerrieri, and T. Honorati, “Stability and control for energy production parametric dependence,” Mathematical Problems in Engineering, vol. 2010, Article ID 842380,
21 pages, 2010. View at Publisher · View at Google Scholar · View at Scopus
28. S. Duan, C. He, L. Xu, and D. Ma, “A training method for improving the generalization performance of radial basis function networks,” in Proceedings of the 3th World Congress on Intelligent
Control and Automation, pp. 859–863, July 2000. View at Scopus
|
{"url":"http://www.hindawi.com/journals/mpe/2012/383749/","timestamp":"2014-04-19T05:35:00Z","content_type":null,"content_length":"270814","record_id":"<urn:uuid:2b0fc5b8-a558-44ab-abf0-ab0b974a40a2>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00409-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Good capacitors for opamp power rails? - Page 4 - diyAudio
Originally Posted by
Jerald Graeme does just this in Ch.3 of his book Optimizing Op Amp Performance. It's a high-level book (ie he didn't write it for a knucklehead like me) but I can give the formulae he derived:
For the primary bypass cap Cb = 50/pi*fc
"Here fc represents the unity-gain crossover frequency of the op amp and generally represents the upper limit of the amplifier's useful frequency range."
For the secondary bypass cap Cb2 = Lbp1
"Thus simply making the magnitude of the Cb2 capacitance equal to that of Cb1's parasitic inductance transfers line impedance control from Zcb1 to Zcb2 at the 1-ohm level."
That is interesting, too. It's a different focus than I was taking, since it's about bypassing more than decoupling. But it looks like he might be taking many or most of the same things into account.
I haven't read Graeme's stuff, yet. I had received Henry Ott's EMC book just after I started the decoupling explorations at the link I gave, so I drew heavily on his work and on some others found on
line. They are all mainly worried about high-frequency digital circuits on multi-layer PCBs but I tried to apply their methods (and some very-basic math) to high-current audio amplifier circuits on
one or two sided PCBs (or point-to-point), with mainly through-hole/leaded parts, since that is what most DIYers can implement most easily, and also seems more-likely to be difficult to get right.
Interestingly, the equation from Graeme that you gave, Cb = 50/pi*fc, looks closely-related to an equation that Ott provided for bandwidth versus rise time, i.e. f = 1 / (π trise), which can be
rearranged as trise = 1 / (π f), which is very similar to Graeme's 50 / (π f), implying that he uses Cb = 50 x trise. I'm not sure how the physical "units" work out, there, without seeing how he got
the "50". But it's interesting.
Ott basically started with the worst-case current range that would need to be slewed through (call it dI), along with a chosen maximum rail-voltage disturbance (call it dV) to try to have, and the
worst-case (shortest) rise time (call it dt) based on the max slew rate. With those, you can find a "target impedance" across the decoupling points, Zt = dV / dI, which must not be exceeded up
through at least the frequency implied by f = 1 / (π trise).
That gives enough constraints to solve for C in two different ways and we can just use the larger result:
C ≥ 1 / (2 π f Zt) (from the standard capacitor impedance equation)
C ≥ dI dt / dV (from the standard differential equation for capacitor behavior)
So far, the parasitic inductance problems have been left out. But since V = L dI/dt (the standard inductor equation), I think that we can use our previously-established dV, dI, and dt and find that
L ≤ dV (dt / dI)
is the maximum total inductance that can be tolerated, in the decoupling capacitance plus its connections.
(That's not quite the whole story, so please see the posts indicated, at the link I gave, if interested.)
For an example with 10 Amps in 2 μs, with 0.1 Volts or less rail-disturbance amplitude, that comes out to needing L ≤ 20 nH, which is on the order of one inch of combined trace/wire length plus
capacitor lead spacing, maximum!
That could be very difficult to implement on a one- or two-sided PCB with an LM3886 layout, especially since the capacitance would be at least 200 uF or more, for that example.
So, for practical DIY applications, the main problem to solve seemed to have become finding ways that could be used to "get around" the parasitic inductance problem.
Obviously, a professional PCB designer would immediately decide to use multi-layer boards with separate power and ground planes. And we could DIY several versions of that, by using thin PCB laminates
and gluing them together (which I am going to try, since the ease-of-layout benefits would also be so great). But I also wanted to see how far we could get with the easiest and more-traditional DIY
construction types.
One way to get lower inductance is to use several smaller decoupling caps in parallel. But note that the total inductance won't fully reduce (like total resistance reduces when paralleling resistors)
UNLESS there is no
inductance, which, I think, means that the connections could not share conductors, i.e. the connections would also have to be paralleled, all the way to the decoupling points if possible (which would
be much easier with power/gnd planes, but is also why even with planes the cap placement geometries do matter, as does the distribution geometry for injecting power into the power planes, i.e. the
currents should ideally use separate paths, on the planes, so there is less mutual inductance involved).
An extension of that might be to use multiple parallel power and ground rails, all the way from the PSU to the device, with each set of filter and decoupling capacitances basically having their own
pair of power/ground rails. That seems like a very promising approach and is where I have gotten, so far (having been interrupted by demands from my real job and some other stuff).
That approach should be able to give one of the main benefits of power and ground planes, but when using simple PCBs or point-to-point wiring. Theoretically, we would be able to make the PSU
impedance, as seen by the device pins, as small as desired, by adding more parallel paths from the PSU to the device, since the total parasitic inductance and parasitic resistance would be divided by
the number of conductor pairs that were used, while the total filter and decoupling capacitances would be multiplied by the number of conductor pairs that were used. There will probably be limits
that appear quickly for practical implementations. But at least all of the variables we were worried about move in the right directions.
Sorry to have blathered-on for so long about all of that. I don't want to take this thread too far away from what the OP intended. Maybe we can consider an LM3886 to be an overgrown opamp.
|
{"url":"http://www.diyaudio.com/forums/headphone-systems/204993-good-capacitors-opamp-power-rails-4.html","timestamp":"2014-04-18T00:32:56Z","content_type":null,"content_length":"80424","record_id":"<urn:uuid:1e2bc368-abce-49bd-b140-3d32bd8f38ad>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00242-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Patent application title: METHODS FOR CONFLICT-FREE, COOPERATIVE EXECUTION OF COMPUTATIONAL PRIMITIVES ON MULTIPLE EXECUTION UNITS
Sign up to receive free email alerts when patent applications with chosen keywords are published SIGN UP
A method for executing multiple computational primitives is provided in accordance with exemplary embodiments. A first computational unit and at least a second computational unit cooperate to execute
multiple computational primitives. The first computational unit independently computes other computational primitives. By virtue of arbitration for shared source operand buses or shared result buses,
availability of the first and second computational units needed to execute cooperatively the multiple computational primitives is assured by a process of reservation as used for a computational
primitive executed on a dedicated computational unit.
A method for executing a plurality of computational primitives, comprising:cooperating of a first computational unit and at least a second computational unit to execute a plurality of computational
primitives; andcomputing, independently by the first computational unit, other computational primitives;wherein, by virtue of arbitration for shared source operand buses or shared result buses,
availability of the first and second computational units needed to execute cooperatively the plurality of computational primitives is assured by a process of reservation as used for a computational
primitive executed on a dedicated computational unit.
The method of claim 1, wherein at least one of the first and second computational units which cooperate to execute the plurality of computational primitives does not independently compute a
computational primitive.
The method of claim 2, wherein for computation of additive reduction operations, three or more addends are reduced by carry-save addition to two operands in the second computational unit; andwherein
the two operands are transparently forwarded for completion to the first computational unit being capable of summing the two operands to a final result.
The method of claim 1, wherein at least one of the first and second computational units which cooperate to execute the plurality of computational primitives independently computes a computational
The method of claim 4, wherein for computation of a plurality of integer multiply operations and a plurality of integer multiply-add operations, one or more independent multiplications are performed
in a multiplier portion of a floating-point multiply-add unit;wherein one portion of an intermediate result proceeds through an adder in the floating-point multiply-add unit and a second portion of
the intermediate result is forwarded to a separate integer adder unit; andwherein a final result is composed by selecting an output of the floating-point multiply-add unit, an output of the separate
integer adder unit, or both the output of the floating-point multiply-add unit and the output of the separate integer adder unit.
TRADEMARKS [0001]
IBM® is a registered trademark of International Business Machines Corporation, Armonk, N.Y., U.S.A. Other names used herein may be registered trademarks, trademarks or product names of International
Business Machines Corporation or other companies.
BACKGROUND [0002]
1. Field of the Invention
Exemplary embodiments of the invention relate to the implementation of arithmetic computation units for integrated circuits. Specifically, exemplary embodiments discuss the cooperative use of two
different types of computation units to implement a more complex computation.
2. Description of Background
To improve computation performance in the face of decreasing benefit from generational silicon technology improvements, designs have moved to implement more complex computation primitives. In
general-purpose microprocessors, such computation primitives often take the form of expanded instruction sets implemented on accelerators coupled tightly to a processor core charged with implementing
the standard (legacy) set of instructions. Frequently, to improve computation throughput, such accelerators implement a short-vector SIMD (single-instruction multiple-data) computation model, whereby
each instruction specifies an operation to be performed across a wide data word, which, depending on the particular instruction, is interpreted as a vector of a small number (1-16) of sub-words. A
single instruction may then specify multiple operations on multiple pieces of data.
The disparate types of operations required in computation accelerators (e.g., arithmetic, logical, and data movement operations) and the wide variety of data types involved (e.g., signed and unsigned
integer numbers of different size, floating point numbers of different precisions) have typically required the implementation of many different kinds of functional computation units. For example, one
functional unit may handle simple integer operations (such as addition, subtraction, comparison, Boolean logical operations, etc.), another functional unit might be responsible for complex integer
operations (multiplications, multiply-adds, additive reductions, etc.), a third functional unit may be for floating-point operations, and still another functional unit may be for data formatting and
permutation. This exemplary implementation is very expensive in terms of design effort, circuitry required to implement the different functions separately, and power, especially since external
constraints (such as the number of computations that can begin or end at a given time), typically render most of this circuitry idle.
Another approach that leverages the external constraints to reduce the need for special purpose functional units would be beneficial.
SUMMARY OF EXEMPLARY EMBODIMENTS [0008]
A method for executing a plurality of computational primitives is provided in accordance with exemplary embodiments. A first computational unit and at least a second computational unit cooperate to
execute a plurality of computational primitives. The first computational unit independently computes other computational primitives. By virtue of arbitration for shared source operand buses or shared
result buses, availability of the first and second computational units needed to execute cooperatively the plurality of computational primitives is assured by a process of reservation as used for a
computational primitive executed on a dedicated computational unit.
Additionally, in accordance with exemplary embodiments, for computation of a plurality of integer multiply operations and a plurality of integer multiply-add operations, one or more independent
multiplications can be performed in a multiplier portion of a floating-point multiply-add unit. One portion of an intermediate result may proceed through an adder in the floating-point multiply-add
unit and a second portion of the intermediate result can be forwarded to a separate integer adder unit. A final result can be composed by selecting an output of the floating-point multiply-add unit,
an output of the separate integer adder unit, or both the output of the floating-point multiply-add unit and the output of the separate integer adder unit.
Additional features and advantages are realized through the techniques of the present invention. Exemplary embodiments of the invention are described in detail herein and are considered a part of the
claimed invention. For a better understanding of the invention with advantages and features, refer to the description and to the drawings.
BRIEF DESCRIPTION OF THE DRAWINGS [0011]
The subject matter which is regarded as the invention is particularly pointed out and distinctly claimed in the claims at the conclusion of the specification. The foregoing and other features of
exemplary embodiments are apparent from the following detailed description taken in conjunction with the accompanying drawings in which:
FIG. 1 illustrates a block diagram of a state of the art execution pipeline with separate, pipelined computation units for simple integer, integer sum across, integer multiply-add, and floating-point
FIG. 2 illustrates a block diagram of a modified execution pipeline of FIG. 1 in accordance with exemplary embodiments; and
[0014]FIG. 3
illustrates a block diagram of the pipeline of FIG. 2 further modified in accordance with exemplary embodiments.
The detailed description explains exemplary embodiments, together with advantages and features, by way of example with reference to the drawings.
DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS [0016]
Exemplary embodiments describe a method of leveraging two (or more) computation units to cooperatively perform certain operations that currently require dedicated computation units. In a state of the
art pipeline, a dedicated computation unit must implement all of the circuitry required to perform those operations that are issued to it. In accordance with exemplary embodiments, minor
modifications to the functional units can greatly expand the types of instructions that may be performed in a cooperative manner. External constraints, in the form of shared issue ports, input
operand buses, and/or result writeback buses, are leveraged to ensure that pipelined computation units are not otherwise occupied when the computation units are required for cooperative execution in
exemplary embodiments.
Two exemplary embodiments are described. In the first exemplary embodiment, saturating-integer-sum-across operations are performed by implementing a small, dedicated special-purpose carry-save
reduction, followed by the reuse of the full carry-propagate adder and saturation logic already implemented as part of the simple integer computation unit. In the second exemplary embodiment,
integer-multiply-add instructions are implemented by using a multiplier within a floating-point computation unit followed by using an adder in the simple integer computation unit. In each case (the
first and second exemplary embodiments), the implementation of special-purpose hardware is avoided, and there is no performance penalty from the sharing. Also, only minor modifications to the
existing functional units are required.
FIG. 1 illustrates a block diagram of a state of the art execution pipeline with separate, pipelined computation units for simple integer, integer sum across, integer multiply-add, and floating-point
One exemplary embodiment allows for the transformation of the execution software/hardware represented in FIG. 1 into the simpler software/hardware of FIG. 2. In FIG. 1, a dedicated computation unit
(sum-across unit 102) for performing sum-across operations consists of multiple pipeline stages. The sum-across unit 102 shares input operand bus 110 and a result bus 112 with other computation
units. The other computation units include a simple integer unit 101, an integer multiply-add unit 103, and a floating-point multiply-add unit 104.
The operation of exemplary embodiments in FIGS. 1 through 3, however, is independent of whether the input operand buses 110 come from an operand store, such as memory or a register file, or an
inter-functional-unit result bypass. Arbitration for the shared result bus 112 allows only one functional unit at a time to drive the result bus. In FIGS. 1 through 3, arbitration is performed
logically via a multiplexer 111, although alternative implementations could employ, for example, a tri-state bus to achieve the same function.
Referring to FIG. 1, the sum-across operation (of the sum-across unit 102) adds together multiple sub-words of the input operands and returns the result, optionally saturating results that overflow
the result operand range. A well-known technique for efficiently performing such multiple operand additions involves the reduction of the operations via carry-save adders 120 into two summands, and
the two summands are then added together with some implementation of a carry-propagate adder 121 in order to generate the final sum. Saturation logic 122 causes the final result to be either the
minimum or maximum representable number in the event an overflow is detected and saturating arithmetic is desired. The sum-across unit 102 outputs its result to the multiplexer 111 via a connection
bus 180.
The simple integer unit 101 also contains a carry-propagate adder and saturation logic in order to implement straightforward, two-operand additions and subtractions.
FIG. 2 illustrates an optimized implementation of the sum-across computation unit 102 of FIG. 1 in accordance with exemplary embodiments. Also, FIG. 2 illustrates a modified execution pipeline
wherein the integer sum across unit 102 of FIG. 1 has been replaced by a smaller carry-save reduction unit followed by the use of a simple integer unit to complete the operation in accordance with
exemplary embodiments.
As seen in FIG. 2, the complex integer computation unit (i.e., the sum-across unit 102) of FIG. 1 has been replaced by a partial sum-across unit 130 containing only the carry-save adder 120 reduction
portion of the original computation unit (i.e., the sum-across unit 102). The carry-save adder 120 tree reduces inputs from the input operand bus 110 into partially-summed inputs, and the
partially-summed inputs are forwarded over an inter-functional unit connection bus 131 to the simple integer unit 201. As seen in FIG. 2, an input source selection multiplexer 132 (which is not in
the simple integer unit 101 of FIG. 1) has been included in the simple integer unit 201 to select input data from either the original input operand bus 110 or the inter-functional unit connection bus
131 depending on the type of operation. For operations requiring only the simple integer unit 101, source operands arrive on the operand bus 110 and are selected by the multiplexer 132. During a
complex integer computation, the multiplexer 132 selects the intermediate data on the bus 132 for completion in the carry-save adder 120. The final result of the sum-across instruction is now
generated in the simple integer unit 201, reducing the number of connections to the result selection multiplexer 111. Consequently, the connection bus 180, connecting the sum-across unit 102 to the
multiplexer 111 in FIG. 1, is removed in FIG. 2.
In FIG. 2, a feature of the exemplary embodiment is the observation that the simple integer unit 201 cannot be required for a simple integer computation at the same time that a partially completed
sum-across instruction (from the carry-save adder 120 of the partial sum-across unit 130) arrives at the input source selection multiplexer 132. This conflict is guaranteed never to occur due to the
existing structural hazard represented by the shared result bus 112. If a complex-integer result is expected on the shared result bus 112 in a particular cycle, then no simple-integer operation could
have started in a manner that would generate a result in the same cycle, or the shared result bus 112 would be overcommitted. Thus, complex-integer operations may be completed within the simple
integer unit 201 without conflicting with any simple integer operations using the same unit.
As a non-limiting example, given a latency of S cycles for the simple integer unit 201 and a latency of A (>S) cycles for the original, full sum-across unit 102 (of FIG. 1), the partial sum-across
unit 130 is designed with latency A-S. Therefore, given a sum-across computation requested in some cycle N, the result bus 112 is reserved for the result of the sum-across computation in cycle N+A.
In the original implementation of FIG. 1, no simple computation may begin in cycle N+A-S, as that would imply a resource conflict at the result bus 112. In the improved implementation in FIG. 2
according to exemplary embodiments, this exact same resource interlock ensures that the simple-integer computation unit 201 is able to receive the partially-completed sum-across instruction (from the
partial sum-across, unit 130) at time N+A-S without conflict.
[0027]FIG. 3
illustrates a block diagram of the pipeline of FIG. 2 further modified to eliminate the integer multiply-add by using the multiplication hardware in the early portion of the floating point pipeline
followed by the use, in parallel, of the carry-propagate full-adders present in the floating-point and simple-integer computation units to complete the operation in accordance with exemplary
embodiments. The integer multiply-add unit performs a plurality of operations consisting first of one or more multiplications between different parts (depending on the specific operation) of two
input operands (multiplicands) followed by one or more optional additions of parts of a third operand (the addend) to the result of the multiplication(s). The final result(s) may be saturated,
depending on the operation.
Referring to the exemplary embodiment of
FIG. 3
, the integer multiply-add computation unit 104 of FIG. 1 is completely replaced by the cooperative use of a slightly-modified floating point computation unit 304 and a simple integer computation
unit 301. The modified floating-point computation unit 304 further consists of an input formatter 141, a multi-format multiplier 142, an adder 143, and floating-point specific logic, such as a
normalizer and rounder 144. The normalizer and rounder 144 may be implemented as two separate modules.
The slightly-modified floating point computation unit 304 (similar to a multiply-add unit) performs independent integer multiplications on sub-blocks of the input operands at the multi-format
multiplier 142, and the floating point computation unit 304 can then add a third operand to (parts of) the multiplication result at the adder 143. As in the case of the sum-across operations (in the
sum-across unit 102 in FIG. 1), saturation may be applied to the result by saturation logic 148.
Further, in the improved implementation of
FIG. 3
, the multiplication commences in the floating point multiplier 142. An input formatter 141 is included to perform data alignment appropriate to the desired integer multiplication operation. If,
e.g., the modified floating point computation unit 304 supports multiple floating point precisions, a formatter (such as the formatter 141) might be needed to convert the different floating point
input formats into an internal format. The output of the floating point multiplier 142 is then added to the third operand (addend) to complete the overall multiply-add operation. However, as a
non-limiting example, the adder 143 in the modified floating point computation unit 304 might not be wide enough to independently perform all additions required by the now integrated integer
multiply-add operations. Therefore, a portion of the output of the floating point multiplier 142 may be forwarded to the simple integer unit 301 via an inter-functional unit connection bus 146. The
input source selection multiplexer 132 of the simple integer computation unit 301 is expanded to receive this additional data source from the floating point multiplier 142.
The other half of the floating point multiplier 142 result proceeds within the floating-point pipeline (of the modified floating point computation unit 304) through the (extant) adder 143 in parallel
with the addition performed by the simple integer unit 301. Saturation logic 148 is added after the (floating-point) adder 143 to perform integer result saturation, if required, on the partial result
in parallel with the same operations occurring in the simple-integer unit 301 on the other partial-result. A new partial-result bus 149 takes the completed integer result from the saturation logic
148 to the result multiplexer 111. The (result) multiplexer 111 is modified to select half of the result from the simple integer unit 301 via the normal result bus 151 and to select the other half
from the new partial result bus 149 to form the final result on the shared result bus 112. In this manner, the modified floating-point multiply-add unit 304 and the simple integer unit 301 cooperate,
each performing addition and (potentially) saturation on a portion of the final result. The normalizer and rounder 144, required for floating-point operations, are unused when the modified
floating-point multiply-add unit 304 is used for cooperatively computing complex integer results.
As discussed herein, the relative latencies of the various pieces of the functional units ensure the availability of the appropriate execution resources without conflict. For example, let latency of
the original multiply-add instructions be M, latency of the simple-integer unit 301 be S, latency of the modified floating-point unit 304 be F, latency of the floating-point multiplier 142 be X, and
latency of the (floating-point) adder 143 plus saturation logic 148 be A. In this case, we require that S=A such that both portions of the result are computed at the same time. X+S=X+A <=M ensures
that there is no performance penalty for applying the exemplary embodiment of the invention versus using dedicated hardware. Since all functional units share input operand buses 110, the
floating-point operation is never initiated simultaneously with an integer multiply-add operation. Therefore, the floating-point multiplier 142 and the adder 143 are available for use by such an
operation if it arrives. Because of the shared result bus 112, an integer multiply-add operation beginning in a cycle N is guaranteed the result bus 112 at cycle N+X+S. Therefore, no simple integer
operation may be initiated at cycle N+X, when the simple integer unit 301 is required to take the output of the floating point multiplier 142. The relative priorities of the different types of
operations are not necessarily material to the exemplary embodiments so long as the operand and result buses 110, 112 are managed without collisions.
Currently, modern microprocessors usually have a vector unit like SSE (streaming single instruction, multiple data extensions), SPE (synergistic processor element), or VMX (vector media extensions).
The instructions for these vector units can be divided into five groups: load/stores, permute operations, simple fixed point instructions (e.g., additions, logical operation, compares), complex fixed
point instructions (e.g., multiplications, sum across instructions), and floating point instructions. State-of-the-art implementations of vector units execute every group of instructions in a
separate execution unit.
However, exemplary embodiments execute complex fixed point instructions by a combination of the floating point execution unit and the simple fixed point. The multiplier reduction tree of the floating
point unit is reused for fixed point multiplications and most of the sum-across operations. The carry-save outputs of the multiplier are added up by the adders in the floating point unit and the
fixed point execution unit. For a reduced set of complex fixed point instructions, the needed reduction may be performed on dedicated hardware. The addition needed for this instruction is performed
in the simple fixed point unit. Exemplary embodiments significantly reduce the area and the passive power needed for a vector unit. At the same time, the modification is transparent to the outside,
i.e., the modification does not introduce new dependencies and does not modify the latencies of the units.
The capabilities of the present invention can be implemented in software, firmware, hardware or some combination thereof.
As one example, one or more features of the present invention can be included in an article of manufacture (e.g., one or more computer program products) having, for instance, computer usable media.
The media has embodied therein, for instance, computer readable program code means for providing and facilitating the capabilities of the present invention. The article of manufacture can be included
as a part of a computer system or sold separately.
Additionally, at least one program storage device readable by a machine, tangibly embodying at least one program of instructions executable by the machine to perform the capabilities of the present
invention can be provided.
The flow diagrams depicted herein are just examples. There may be many variations to these diagrams or the steps (or operations) described therein without departing from the spirit of the invention.
For instance, the steps may be performed in a differing order, or steps may be added, deleted or modified. All of these variations are considered a part of the claimed invention.
While exemplary embodiments of the invention have been described, it will be understood that those skilled in the art, both now and in the future, may make various improvements and enhancements which
fall within the scope of the claims which follow. These claims should be construed to maintain the proper protection for the invention first described.
Patent applications by Harry S. Barowski, Boeblingen DE
Patent applications by J. Adam Butts, Hartsdale, NY US
Patent applications by Jochen Preiss, Boeblingen DE
Patent applications by Silvia M. Mueller, Altdorf DE
Patent applications by Stephen V. Kosonocky, Fort Collins, CO US
Patent applications by International Business Machines Corporation
Patent applications in class Floating point or vector
Patent applications in all subclasses Floating point or vector
User Contributions:
Comment about this patent or add new information about this topic:
|
{"url":"http://www.faqs.org/patents/app/20090198974","timestamp":"2014-04-18T22:31:26Z","content_type":null,"content_length":"54693","record_id":"<urn:uuid:b4b43f53-c434-4f80-b0bf-9181a6677d3d>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00625-ip-10-147-4-33.ec2.internal.warc.gz"}
|
New enumeration algorithm for protein structure comparison and classification
• We are sorry, but NCBI web applications do not support your browser and may not function properly.
More information
BMC Genomics. 2013; 14(Suppl 2): S1.
New enumeration algorithm for protein structure comparison and classification
Protein structure comparison and classification is an effective method for exploring protein structure-function relations. This problem is computationally challenging. Many different computational
approaches for protein structure comparison apply the secondary structure elements (SSEs) representation of protein structures.
We study the complexity of the protein structure comparison problem based on a mixed-graph model with respect to different computational frameworks. We develop an effective approach for protein
structure comparison based on a novel independent set enumeration algorithm. Our approach (named: ePC, efficient enumeration-based Protein structure Comparison) is tested for general purpose protein
structure comparison as well as for specific protein examples. Compared with other graph-based approaches for protein structure comparison, the theoretical running-time O(1.47^rnn^2) of our approach
ePC is significantly better, where n is the smaller number of SSEs of the two proteins, r is a parameter of small value.
Through the enumeration algorithm, our approach can identify different substructures from a list of high-scoring solutions of biological interest. Our approach is flexible to conduct protein
structure comparison with the SSEs in sequential and non-sequential order as well. Supplementary data of additional testing and the source of ePC will be available at http://bioinformatics.astate.edu
Protein structure comparison is an effective method for exploring protein structure-function relations and for studying evolutionary relations of different species. It can also be applied to identify
the active sites of carrier proteins, the binding sites of antibodies, the inhibition sites of enzymes, and the common structural motifs of proteins, which has significant applications in biological
and biomedical research.
The computational methods for protein structure comparison usually represent a protein structure by atomic coordinates in the Euclidean space, as a distance matrix [1] whose entries represent the
distances between two residues of the protein, or as a contact map [2], where a binary matrix is used to represent the distances between the residue pairs. A structure graph representation of a
protein tertiary structure was first defined in [3] for protein structure prediction. In this current work, we adopt the structure graph representation in [3]. We develop a very efficient graph-based
approach for protein structure comparison. Our approach transforms the comparison problem to an independent set problem in an auxiliary graph, and then applies a novel enumeration algorithm to
identify the best out of a set of good comparison candidates.
We first show the problem of comparing a query structure to another structure is intractable with respect to several computational frameworks. For example, we show that the problem is $NP-hard$ (even
for very restricted instances), cannot be approximated to a ratio $n12-ε$, for any ε > 0, unless $P=NP$, and is W [1]- complete with respect to the framework of parameterized complexity. We also show
that a useful case of the problem is solvable in polynomial time by reducing it to the 2-CNF-SATISFIABILITY problem.
Whereas the above results are negative hinting at the challenging nature of the problem, the graph-based approach we use allows us to model the problem as a maximum independent set problem, for which
a repertoire of effective exact algorithms exist in the literature. We use an algorithm developed by (some of) the authors [4] to enumerate the top-K maximum independent sets in a graph in time O
(1.47^nn^2), where n is the number of vertices in the graph (Note that the algorithm in [4] enumerates the top-K minimum vertex covers in a graph, but obviously can be used to enumerate the top-K
maximum independent sets in a graph using the standard reduction between vertex cover and independent set); this enumeration algorithm allows us to sift through the top SSE alignments for the protein
structure comparison problem, looking for the best amongst them in terms of accuracy. Compared with other graph-base approaches, the theoretical running-time O(1.47^rnn^2) of our approach ePC is the
current best, where n is the smaller number of SSEs of the two proteins, r is an introduced parameter of small values.
Many different approaches for protein structure comparison apply the secondary structure elements (SSEs) representation and database searching, such as deconSTRUCT [5], SSM [6], GANGSTA [7], MASS [8,
9], VAST [10], TOPS [11] and approaches in [12-19]. Our approach ePC utilizes the SSE-based representation of the protein structure, and takes into consideration the global 3D structural arrangements
of the SSEs of the proteins. We compare our approach with two other SSE-based approaches: deconSTRUCT, an approach for general purpose protein structure comparison and database search, and SSM, a
high-resolution structure comparison approach. Our approach has comparable performance as deconSTRUCT. With a more general and simplified representation and a unified graph enumeration algorithm, our
approach could detect a substructure or motif structure in a set of large structures, more than one common substructure shared by a set of proteins. It is very flexible. Our approach could use a wide
range of evaluation functions for protein structure comparison. It could be applied to handle sequential and non-sequential order of SSE alignment and be extended to handle challenging protein
multiple structure alignment and protein subset alignment.
A mixed graph for a protein structure is constructed from the PDB file as follows: each vertex represents a core/secondary structure element (i.e., an alpha helix element, or, a beta strand element),
each undirected edge represents the interaction between two cores, and each directed edge (arc) represents the loop between two consecutive cores (from the N-terminal to the C-terminal). A mixed
graph representation is used for protein structure prediction in [3]. The DSSP program [20,21] was used for the assignments of secondary structure elements for the protein entries from the Protein
Data Bank (PDB). Refer to the protein structure and the corresponding mixed graph representation in Figures Figures11 for protein with ID: 6ldh. Alpha helix elements are represented by circles and
beta strand elements are represented by squares. Therefore, a mixed graph can be represented as a triple G = (V (G), A (G), E (G)), where V (G) is the vertex-set of G, E (G) is the set of undirected
edges of G, and A (G) is the set of directed edges of G, which induces a directed path spanning all vertices of G, thus defining a linear order among the vertices of V (G). The aforementioned mixed
graph representation incorporates the SSE type, the sequential order of the SSEs, and the interactions of the SSEs. When comparing two protein structures, the problem could now be reduced to finding
the common subgraph of the two mixed graph.
Structure graph for 6ldh. Alpha helix elements are represented by circles and beta strand elements are represented by squares.
Goldman et al. [2] studied the protein comparison problem using the notion of contact maps. Contact maps are undirected graphs whose vertices are linearly ordered. Goldman et al. [2] formulated the
protein comparison problem as a CONTACT MAP OVERLAP problem, in which we are given two contact maps and we need to identify a subset of vertices S in the first contact map, a subset of vertices $S"$
in the second with $S=|S"$, and an order preserving (w.r.t. linear ordering) bijection $f:S→S"$, such that the number of edges in S (i.e., between the vertices in S) that correspond (under f) to
edges in $S"$ is maximized. In [2], the authors proved that the CONTACT MAP OVERLAP problem is MAXSNP-complete even when both contact maps have maximum degree one.
Song et al. [3] studied the problem of mixed-graph comparison, when each vertex v in the first mixed-graph is associated with a subset of vertices S[v ]in the second mixed-graph, and the bijection f
is restricted to map v to a vertex in S[v]. Song et al. [3] proved that this problem is NP-complete, even when the size of each subset S[v], referred to as the map width is at most 3. Our results in
the following section refine and extend the results in [3] in several aspects. We first prove that the problem defined in [3] is intractable with respect to many computational frameworks. For
example, we show that the problem: (1) is $NP-hard$ (even for very restricted instances), (2) cannot be approximated to a ratio $n12-ε$, for any ε > 0, unless $P=NP$, and (3) is W [1]-complete with
respect to the framework of parameterized complexity. We also show that a useful case of the problem is solvable in polynomial time by reducing it to the 2-CNF-SATISFIABILITY problem.
The graph embedding problem and complexity results
In this section, we study the complexity of the mixed graph embedding problem, which corresponds to the problem of identifying the query protein structure (e.g., a motif structure) as a substructure
in a larger protein structure.
We define the GRAPH EMBEDDING problem as follows:
Given two mixed graphs G = (V (G), A(G), E(G)) and H = (V (H), A(H), E(H)), where H is referred to as the host graph, such that each vertex v V (G) has a list L(v) V (H) of vertices in H that it can
be mapped to, decide if there exists an injection f: V (G) → V (H) such that:
(i) f (v) L(v) for every v V (G);
(ii) for any two vertices v, $ν"∈V(G)$, there is a directed path from v to $ν"$ in G if and only if there is a directed path from f(v) to $f(ν")$ in H; and
(iii) for any two vertices v, $ν"∈V(G)$, if $νν"∈E(G)$ then $f(ν)f(ν")∈E(H)$.
We shall call an injective embedding f satisfying properties (i)-(iii) above a valid embedding.
Informally speaking, the GRAPH EMBEDDING problem asks if we can embed G into H in such a way that the precedence order determined by the arcs of G is respected by this embedding, and the undirected
edges of G are respected by this embedding.
We define the restriction of the GRAPH EMBEDDING problem, denoted r-GRAPH EMBEDDING, where r is positive integer, by restricting the cardinality of the set L(v) to be at most r, for every v V (G);
that is, in the restrictions of the problems, a vertex in V (G) can be mapped to at most r vertices in H.
If one cannot embed the whole graph G into H, it is natural to seek an embedding that embeds the maximum number of vertices in G into H, while respecting conditions (i)-(iii) above. Therefore, we
define a version of GRAPH EMBEDDING, denoted GRAPH EMBEDDING[≥], by introducing a nonnegative parameter k, and asking whether there exists a subset S V (G) with |S|≥ k, and an injection f: S → V (H)
such that:
(i) f(v) L(v) for every v S;
(ii) for any two vertices v, $ν"∈S$, if there is a directed path from v to $ν"$ in G then there is a directed path from f(v) to $f(ν")$ in H; and
(iii) for any two vertices v, $ν"∈S$, if $νν"∈EG$ then $f(ν)f(ν")∈E(H)$.
The optimization/maximization version of the GRAPH EMBEDDING[≥ ]problem, denoted MAXIMUM GRAPH EMBEDDING, asks for a set S of maximum cardinality that satisfies conditions (i)-(iii) above. Similarly,
we can define the problems r-GRAPH EMBEDDING[≥ ]and MAXIMUM r-GRAPH EMBEDDING.
It was shown in [3] that a more general problem than r-GRAPH EMBEDDING, in which the set of edges A(G) do not necessarily induce a path, is $NP-complete$ for any r ≥ 3. The same proof actually shows
that the r-GRAPH EMBEDDING problem is $NP-complete$ for any r ≥ 3. We show next that the 2-GRAPH EMBEDDING is solvable in polynomial time.
Theorem 0.1 The 2-GRAPH EMBEDDING problem is solvable in polynomial time.
PROOF. We reduce the problem to 2-CNF-SATISFIABILITY, which is solvable in polynomial time (for example, see [22]. Recall that in the 2-CNF-SATISFIABILITY problem we are given a Boolean formula in
the conjunctive normal form (CNF) (i.e., the formula is the conjunction of clauses, and each clause is the disjunction of a literals, which are variables or negations of variables), in which each
clause contains at most two literals, and we are asked to decide whether or not the formula is satisfiable. Let (G, H) be an instance of 2-GRAPH EMBEDDING satisfying |L(v)| ≤ 2, for every v F of 2
-CNF-SATISFIABILITY such that G has a valid embedding into H if and only if F is satisfiable.
For every vertex v G: if $L(ν)={ν"}$ we add a variable $xνν"$ and add the clause ${xνν"}$ to F; and if $L(ν)={ν",ν″}$ we add the two variables $xνν"$, $xνν″$and the two clauses ${xνν",xνν"}$, $
{x¯νν",x¯νν″}$ to F. This ensures that every vertex v in G is mapped to one and only one vertex in H (i.e., the map is a well-defined function). (We assume that |L(v)| ≠ 0; otherwise, the instance
can be rejected.)
For every two vertices v and u in G such that there is a directed path from v to u in G (i.e., v appears before u in the directed path in G), and for very $ν"∈L(ν)$ and $u"∈L(u)$ such that $ν"=u"$ or
$u"$appears before u in the directed path in H, we add the clause ${x¯νν",x¯uu"}$ to F. This ensures that the desired mapping is injective, and ensures that the mapping respects the precedence order
among the vertices in G that is defined by the directed path in G (property (ii)).
For every two vertices v and u in G such that vu E(G), and for very $ν"∈L(ν)$ and $u"∈L(u)$ such that $ν"u'∉E(H)$, we add the clause ${x¯νν",x¯uu"}$ to F. This ensures that the desired mapping
respects the undirected edges of G (property (iii)).
This completes the construction of F. Clearly, this construction can be carried out in polynomial time.
It is not difficult to verify that (G, H) is a yes-instance of 2-GRAPH EMBEDDING if and only if F is a yes-instance of 2-CNF-SATISFIABILITY. This implies that 2-GRAPH EMBEDDING is polynomial-time
solvable. □
The above theorem, together with the result in [3], provides a complete characterization of the complexity (NP-hardness) of r-GRAPH EMBEDDING with respect to r.
If we consider the r-GRAPH EMBEDDING parameterized by r, the fact that the problem is NP-complete for r ≥ 3 [3] implies that the problem is not solvable in time O(n^r) unless $P=NP$, and hence, with
respect to the parameterized complexity framework, the problem is not in the class $XP$. Therefore, there is not much hope behind seeking parameterized algorithms (with respect to r) for the problem.
Moreover, the NP-hardness proof for r-GRAPH EMBEDDING (r ≥ 3) is via a reduction from 3-CNF-SATISFIABILITY (each clause contains at most three literals) that produces two graphs G and H, each of size
linear in the number of clauses of the 3-CNF-SATISFIABILITY instance. Therefore, based on the results in [23], we can conclude that r-GRAPH EMBEDDING (r ≥ 3) is not solvable in subexponential time
unless the exponential-time hypothesis (ETH) fails [23].
We investigate next the complexity of the r- GRAPH EMBEDDING[≥ ]problem.
Theorem 0.2 The r-GRAPH EMBEDDING[≥ ]problem is $NP-complete$, for any r ≥ 1.
PROOF. It suffices to prove the $NP-completeness$ of the 1-GRAPH EMBEDDING[≥ ]problem. We only prove the NP-hardness, as it is very easy to show the membership of the problem in $NP$. The proof is
via a reduction from the CLIQUE problem: Given a graph and a nonnegative integer k, determine if the graph has a clique (complete subgraph) of size k.
Let $(G",k)$ be an instance of CLIQUE, where $V(G")={ν1",...,νn"}$. We construct the instance (G, H, k) of 1- GRAPH EMBEDDING[≥ ]as follows. The set of vertices V (G) = {v[1], ... ,v[n]} and V (H) =
{u[1], ... ,u[n]} are copies of $V(G")$. We connect the vertices v[1 ],..., v[n ]in G by a directed path, and u[1], ... ,u[n ]in H by a directed path, and define $L(νi)={ui}$, for i = 1, ... ,n.
Finally, the undirected edges of G form a clique, and the undirected edges of H are those of $G"$; that is, v[i]v[j ]E(G) for every 1 ≤ i ≠ j ≤ n, and u[i]u[j ]E(H) if and only if $νi"νj'∈E(G")$.
This completes the reduction, which is obviously computable in polynomial time.
It is not difficult to verify that $(G",k)$ is a yes-instance of CLIQUE if and only if (G, H, k) is a yes-instance of 1- GRAPH EMBEDDING[≥]. This completes the proof. □
The reduction in the above theorem is an fpt-reduction, from the CLIQUE problem to 1- GRAPH EMBEDDING[≥], where the parameter is the size of the subgraph sought k. Since CLIQUE is known to be W [1]
-hard in the parameterized complexity hierarchy, we obtain:
Theorem 0.3 The r- GRAPH EMBEDDING[≥ ]problem is W [1]-complete, for any r ≥ 1. (Note that membership in W [1]follows from the results in the next section.)
Finally, we observe that the same reduction in Theorem 0.2 provides an L-reduction [24] (i.e., approximation-preserving reduction) from MAXIMUM CLIQUE (the problem of computing a clique of maximum
cardinality in a graph) to MAXIMUM 1-GRAPH EMBEDDING. It is well known that, unless $P=NP$, MAXIMUM CLIQUE cannot be approximated to a ratio $n12-ε$ for any ε > 0 [25]. It follows that:
Theorem 0.4 Unless $P=NP$, the MAXIMUM r-GRAPH EMBEDDING problem cannot be approximated to a ratio $n12-ε$for any ε > 0.
Graph embedding to independent set
In this section we show that the MAXIMUM r-GRAPH EMBEDDING problem can be modeled as an MAXIMUM INDEPENDENT SET problem. Recall that an independent set in a graph is set of vertices such that no two
of them are adjacent, and the MAXIMUM INDEPENDENT SET problem asks for an independent set of maximum cardinality in a graph.
Let (G, H) be an instance of MAXIMUM r-GRAPH EMBEDDING. Suppose that V (G) = {g[1], g[2], ... ,g[n]} with directed edges from g[i ]to g[i][+1], for 1 ≤ i ≤ n − 1, and suppose that V (H) = {h[1], h
[2], ... ,h[m]}, m ≥ n, and with directed edges from h[i ]to h[i][+1], for 1 ≤ i ≤ m − 1. Suppose that each vertex of G can be mapped to one of at most r vertices in H.
Theorem 0.5 If MAXIMUM INDEPENDENT SET is solvable in time 2^cn, then MAXIMUM r-GRAPH EMBEDDING is solvable in 2^crn time.
PROOF. Create an auxiliary graph X as follows. For each possible choice mapping g[i ]to h[j], create a vertex x[ij]. For any two vertices x[ij ]and x[kl], add an edge between them if and only if one
of the following conditions are true:
1. i = k or j = l.
2. i <k and j >l, or i >k and j <l.
3. There is an undirected edge between g[i ]and g[k ]in G, while there is no undirected edge between h[j ]and h[l ]in H.
Note that Condition 2 could be removed when the order of the mapped vertices are not required to be the same for the two graphs.
It is clear that any independent set of X corresponds to a common subgraph of G and H of the same size. So the problem of finding a maximum common subgraph of G and H is reduced to the problem of
finding a maximum independent set of X, which has rn vertices. In particular, to find if G is a subgraph of H it suffices to find an independent set of size n. Therefore if MAXIMUM INDEPENDENT SET is
solvable in time 2^cn, then MAXIMUM r-GRAPH EMBEDDING is solvable in 2^crn time. □
If we use the current-best exact algorithm for MAXIMUM INDEPENDENT SET by Robson [26] that runs in time O(2^n^/4), we conclude that:
Theorem 0.6 The MAXIMUM r-GRAPH EMBEDDING problem is solvable in time O(2^rn^/4), where n is the number of vertices in graph G.
Algorithm for structure comparison
The problem of protein structure comparison could be modeled as finding an independent set problem of an auxiliary graph. When aligning two protein structures, the auxiliary graph X is created as is
in the proof of Theorem 2.5. Note that when aligning three and more protein structures, the auxiliary graph X could be created similarly.
Refer to the following for the outline of the algorithm for protein structure comparison.
1 (Preprocessing). Generate the two structure graphs for the two proteins, based on both their secondary structure information (local structure) and tertiary structure (global structure) information.
2 (Auxiliary graph). Build the auxiliary graph from the two structure graphs;
3 (Top K independent sets). Generate the top K maximum independent sets of the auxiliary graph by applying the enumeration algorithm developed in [4].
4 (Matched SSEs). Evaluate the generated top K maximum independent sets and generate the SSE pairs with the best score of the two proteins.
We analyze the time complexity of the algorithm:
Step 1: The algorithm processes the two proteins to generate the corresponding two structure graphs, where each vertex of a graph represents an SSE of the corresponding protein. Suppose the number of
the vertices of each structure graph is bounded by n.
Step 2: We introduce a parameter r as the maximum number of pairs associated with each vertex of the structure graphs. The number of vertices of the auxiliary graph is bounded rn.
Step 3: Through calling the enumeration algorithm develop in [4], it takes time O(1.47^rn) to generate the top K independent sets of the auxiliary graph.
Step 4: It takes time O(1.47^rnn^2) to evaluate the generated independent sets and identify the independent set, which corresponding to the SSE pairs with the best score of the two proteins.
Refer to [27] for a discussion of the theoretical running times of several other graph-based approaches for protein structure comparison, which are of O((mn)^n) or O(m^n^+1)n), where m and n demote
the size of the structure graphs. The theoretical running-time O(1.47^rnn^2) of our approach ePC is the current best, where n is the smaller number of SSEs of the two proteins, r is a parameter of
small values.
Testing results
Our approach ePC is designed for general-purpose protein structure comparison. In this section we test our approach for this purpose using SABmark-sup and SABmark-twi [28], and specific novel folds
studied in the literature. Our approach is implemented using C++. The testing is mainly performed on a regular Macbook (8GB Mem). The running-time testing is conducted on a Dell server (PowerEdge
2950III, 32GB Mem). Due to the space limit, some testing results are not presented.
Given two proteins, A and B, the score of the a SSE pair is the sum of the L[ij ]of the residues for the SSE pair. L[ij ]is defined in [29] denoting the similarity between a segment centered around
residue i of one protein and a segment centered around residue i of the other protein, where $Lij=min{D(di-2,i+2A,dj-2,j+2B),D(di-2,i+1A,dj-2,j+1B),D(di-1,i+2A,dj-1,i+2B)}$, where D(d[1], d[2]) = 0.1
− |d[1 ]− d[2]|/(d[1 ]+ d[2]).
Let S be the sum of the scores of all the aligned SSEs. The normalized score $Sn=S/(lA*lB)$, where l[A ]and l[B ]are the lengths of the two proteins. A[c ]is the number of SSEs in A, B[c ]is the
number of SSEs in B and MCS[n ]is the size of the common subgraph of the two protein structure graphs, the CORE-COV is a percentage defined by: MCS[n ]/ min(A[c], B[c]).
Testing different parameter values
There are two important parameters of our algorithm r and K, where r is the maximum number of SSE pairs associated with each SSE of the structure graphs, and K is the number of enumerated independent
sets. Note that the score L[ij ]of the SSE pairs is the criteria for identifying the associated r SSEs. We test the impacts of the two parameter values on the running time and scoring for the protein
We present our testing results for accuracy (using the score S as a criteria) and running-time of our approach with different parameter r values. We have conducted the testing of 200 protein pairs
from SABmark-sup database with different parameter r values, where each SSE from one protein is matched with the top r SSEs from the other protein. Refer to Figure Figure22 for the average scores of
200 protein pairs from Sup database, when testing our approach with different parameter r values, r = 2, 3, 4, 5, 6, 7, 8, 9. Our testing results indicate that when the parameter r value increases,
the score has increased. Refer to Figure Figure33 for the average running times of 200 protein pairs from Sup database, when testing our approach with different parameter r values. When r increases,
the running time of our approach increases in general. However note that the running times when r = 5, 6, 7, 8, 9 are very similar; this is because trimming has been applied to reduce the sizes of
the auxiliary graphes before the enumeration of the independent sets, and also because the impact of the parameter K on the running time. Especially the running time when r = 2 is significantly lower
than the other cases, which matches our theoretical result that for r = 2 the r-GRAPH EMBEDDING problem is in P.
The running times for different r values. Note for all these testing, our approach use the same parameter K = 1000.
The scores for different r values. Note for all these testing, our approach use the same parameter K = 1000.
For the enumeration of independent sets, we have introduced a parameter K, which is the bound of the number of enumerated independent sets. Here we present our testing results for accuracy and
running-time of our approach with different parameter K values (See Table Table1).1). Similar as the testing for the parameter r, we have conducted the testing of 200 protein pairs from SABmark-sup
database with different parameter K values, K = 125, 250, 500, 1000. Our testing results indicate that when the parameter K value increases, the score has increased and the running time also
The running times and scores for different K values
Performing structure comparison
Self-querying in a large database of structures. As pointed in [5], a necessary condition for a approach to be of practical value for structure comparison and classification, it should be able to
find the query itself in a database of protein structures. To test this property of our approach, 1000 protein structures from the SABmark-sup database. Our approach with the normalized score
function can identify the query structure with ranking No. 1 with 100% accuracy.
Detecting a substructure in a set of larger structures. Our approach can detect a smaller query structure (or, a motif structure) within a larger target structure. We use the test set from the
previous test and required for each domain to be matched to the target domain embedded in the original full-protein structure. Our approach with the normalized score can identify the substructure
with ranking No. 1 with 100% accuracy.
Protein family classification. We compare the performance of our approach for protein family classification with deconSTRUCT, which is also an SSE-based method and designed for protein structure
database filtering. We have tested 1000 proteins pairs of the SABmark [28]. Due to the space limit, we only discuss some of the representative testing result. We align protein d1a6m (core size:7;
AAs: 151; from SABmark [28]) to proteins from 10 different families of the twi database with each family 10 proteins. Of the proteins in the top 10 ranking, 7 proteins identified through our approach
are the proteins from the same family as protein d1a6m. For deconSTRUCT, 7 proteins of the identified 10 proteins (without ranking) are the proteins from the same family as protein d1a6m. Form the
testing results, our approach has comparable performance with DeconSTRUCT for the general purpose protein structure comparison and structure classification. The mixed graph representation of our
approach ePC is much simpler compared with deconSTRUCT. Our approach ePC is more flexible than deconSTRUCT in that ePC can handle SSE alignments with and without respect to the order of SSEs, which
will be discussed in the next section for specific examples.
Specific examples
We test our approach on specific examples for common substructures and novel folds which share common substructures with non-sequential SSEs.
Detection of several different common substructures. We test our approach ePC using the four protein structures (PDB codes: 1a02N, 1iknA, 1nfiA, and 1a3qA) studied in [8,9]. The proteins share two
common domains: "p53-like transcription factors" and "E set domains". In [8,9] two different common substructures were detected, one for each domain. The first common substructure is part of the
"p53-like transcription factors" domain. It consists of 114 residues, and it forms a sandwich of nine beta-strands. The second common substructure is part of the "E set domains" domain. It consists
of 87 residues, and it forms a sandwich of seven beta-strands.
Please refer to the following testing results of our approaches, when 1a02N is compared with: 1iknA, 1nfiA, and 1a3qA. Our testing results match the results in [8,9]. Especially for the second common
substructure that is part of the "E set domains" domain with conserved matched SSEs: 12, 13, 14, 15, 16, 17 of 1a02N. Please refer to Figure Figure44 for its 3D structure and the two domains.
The 3D Structure of 1a02N with its two domains: p53-like transcription factors and E set domains. There are 18 cores/SSEs (0-17) with conserved SSEs marked with *. Matched SSEs of 1a02N and 1ikna:
(0,1) (1,2) (3,3) (7,5) (13,7) (14,8) (17,11); Matched ...
Three novel folds. The three novel folds were discussed in [7] to study the unique feature of GANGSTA+ to conduct non-sequential SSE alignment. Note that the protein structures that are structurally
similar to the listed three new folds were detected through scanning the ASTRAL40 database by GANGSTA+. The detected similar protein structures have non-sequential SSE alignments with the three novel
folds respectively. Please refer to our testing result in Table Table22 Figure Figure55 and and66.
Structure search and comparison of the three novel folds with the structural analogs
Structure alignment of PDB:2AJE and PDB:1J7NB. Structure alignment of the new fold PDB:2AJE and the structural analog PDB:1J7NB, showing nonsequential order of aligned SSEs.
Aligned SSEs of PDB:2AJE and PDB:1J7NB. The amino acid sequences of the new fold PDB:2AJE and the structural analog PDB:1J7NB, showing the non-sequential order of aligned SSEs of the two protein
We use an SSE-based graph model for general purpose protein structure comparison. We presented the computational complexity results related to the protein structure comparison problem. An effective
algorithm is developed integrating a novel enumeration of independent sets and parameterized computation for the problem. Our approach is tested for protein structure comparison using benchmark
testing sets. Compared with other SSE-based approaches, our approach has comparable performance for the general purpose protein structure comparison. We also demonstrate that our approach could be
applied to identify common substructure with non-sequential SSEs and proteins sharing more than one common substructure.
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
XH, IK and GX carried out the study on the complexity and the design of the approach for the protein structure comparison problem, and drafted the manuscript. CA, DJ and KW participated in the
implementation and the testing of the algorithm. All authors have approved the final manuscript.
This research is supported by the National Institute of Health grants from the National Center for Research Resources (5P20RR016460-11) and the National Institute of General Medical Sciences
The publication costs for this article were funded by the corresponding author's institution.
This article has been published as part of BMC Genomics Volume 14 Supplement 2, 2013: Selected articles from ISCB-Asia 2012. The full contents of the supplement are available online at http://
• Holm L, Sander C. Protein structure comparison by alignment of distance matrices. J of Molecular Biology. 1993;233:123–138. doi: 10.1006/jmbi.1993.1489. [PubMed] [Cross Ref]
• Goldman D, Istrail S, Papadimitriou CH. Algorithmic Aspects of Protein Structure Similarity. FOCS. 1999. pp. 512–522.
• Song Y, Liu C, Huang X, Malmberg RL, Xu Y, Cai L. Efficient parameterized algorithms for biopolymer structuresequence alignment. IEEE/ACM Trans Comput Biology Bioinform. 2006;3(4):423–432. [
• Chen J, Kanj I, Meng J, Xia G, Zhang F. On the effective enumerability of NP problems. Proceedings of the 2nd InternationalWorkshop on Parameterized and Exact Computation, volume 4169 of Lecture
Notes in Computer Science. 2006. pp. 215–226.
• Zhang ZH, Bharatham K, Sherman WA, Mihalek I. deconSTRUCT: general purpose protein database search on the substructure level. Nucleic Acids Research. 2010;38(Web Server):W590–W594. doi: 10.1093/
nar/gkq489. [PMC free article] [PubMed] [Cross Ref]
• Krissinel E, Henrick K. Secondary-structure matching (PDBeFold), a new tool for fast protein structure alignment in three dimensions. Acta Cryst D60. 2004. pp. 2256–2268. [PubMed]
• Guerler, Knapp. Novel Folds and their Nonsequential Structural Analogs. Protein Science. 2008;17:8:1374–1382. [PMC free article] [PubMed]
• Dror O, Benyamini H, Nussinov R, Wolfson H. MASS: Multiple structural alignment by secondary structures. Bioinformatics. 2003;19(Suppl 1):i95–i104. doi: 10.1093/bioinformatics/btg1012. [PubMed] [
Cross Ref]
• Dror O, Benyamini H, Nussinov R, Wolfson H. Multiple structural alignment by secondary structures: algorithm and applications. Protein Science. 2003;12:2492–2507. [PMC free article] [PubMed]
• Gibrat JF, Madej T, Bryant SH. Surprising similarities in structure comparison. Curr Opin Struct Biol. 1996;6(3):377–385. doi: 10.1016/S0959-440X(96)80058-3. [PubMed] [Cross Ref]
• Michalopoulos I, Torrance GM, Gilbert DR, Westhead DR. TOPS: an enhanced database of protein structural topology. Nucleic Acids Research. 2004;32:251–254. doi: 10.1093/nar/gkh060. [PMC free
article] [PubMed] [Cross Ref]
• Alesker V, Nussinov R, Wolfson H. Detection of non-topological motifs in protein structures. Protein Eng. 1996;9:1103–1119. doi: 10.1093/protein/9.12.1103. [PubMed] [Cross Ref]
• Alexandrov N, Fischer D. Analysis of topological and nontopological structural similarities in the PDB: New examples with old structures. Proteins. 1996;25:354–365. doi: 10.1002/(SICI)1097-0134
(199607)25:3<354::AID-PROT7>3.3.CO;2-W. [PubMed] [Cross Ref]
• Grindley H, Artymiuk P, Rice D, Willett P. Identification of tertiary structure resemblance in proteins using a maximal common subgraph isomorphism algorithm. J Mol Biol. 1993;229:707–721. doi:
10.1006/jmbi.1993.1074. [PubMed] [Cross Ref]
• Holm L, Sander C. 3-D lookup: Fast protein structure database searches at 90% reliability. The Third International Conference on Intelligent Systems for Molecular Biology. 1995. pp. 179–187. [
• Koch I, Lengauer T, Wanke E. An algorithm for finding maximal common subtopologies in a set of proteins. J Comp Biol. 1996;3:289–306. doi: 10.1089/cmb.1996.3.289. [PubMed] [Cross Ref]
• Lu G. TOP: A new method for protein structure comparisons and similarity searches. J Appl Crystallogr. 2000;33:176–183. doi: 10.1107/S0021889899012339. [Cross Ref]
• Mitchel E, Artymiuk P, Rice D, Willet P. Use of techniques derived from graph theory to compare secondary structure motifs in proteins. J Mol Biol. 1990;212:151–166. doi: 10.1016/0022-2836(90)
90312-A. [PubMed] [Cross Ref]
• Yang AS, Honig B. An integrated approach to the analysis and modeling of protein sequences and structures. I. Protein structural alignment and a quantitative measure for protein structural
distance. J Mol Biol. 2000;301:65–678. [PubMed]
• Joosten RP, Te Beek TAH, Krieger E, Hekkelman ML, Hooft RWW, Schneider R, Sander C, Vriend G. A series of PDB related databases for everyday needs. NAR. 2010. doi: 10.1093/nar/gkq1105. [PMC free
article] [PubMed]
• Kabsch W, Sander C. Dictionary of protein secondary structure: pattern recognition of hydrogen-bonded and geometrical features. Biopolymers. 1983;22:2577–2637. doi: 10.1002/bip.360221211. [PubMed
] [Cross Ref]
• Papadimitriou CH. Computational Complexity. Addison-Wesley; 1994.
• Impagliazzo R, Paturi R, Zane F. Which problems have strongly exponential complexity? Journal of Computer and System Sciences. 2001;63(4):512–530. doi: 10.1006/jcss.2001.1774. [Cross Ref]
• Papadimitriou CH, Yannakakis M. Optimization, approximation, and complexity classes. J Comput Syst Sci. 1991;43(3):425–440. doi: 10.1016/0022-0000(91)90023-X. [Cross Ref]
• Håstad Johan. Clique is Hard to Approximate Within n^1-epsilon. Proceedings of the 37th Annual Symposium on Foundations of Computer Science. 1996. pp. 627–636.
• Robson JM. Technical Report. LaBRI, Universite Bordeaux I; Finding a maximum independent set in time O(2^n/^4), 2001; pp. 1251–01.
• Krissinel E, Henrick K. Protein structure comparison service Fold at European Bioinformatics Institute. http://www.ebi.ac.uk/msd-srv/ssm
• Van Walle I. et al. SABmark: a benchmark for sequence alignment that covers the entire known fold space. Bioinformatics. 2005;21:1267–1268. doi: 10.1093/bioinformatics/bth493. [PubMed] [Cross Ref
• Zhu J, Weng Z. FAST: a novel protein structure alignment algorithm. Proteins. 2005;58(3):618–627. [PubMed]
Articles from BMC Genomics are provided here courtesy of BioMed Central
• PubMed
PubMed citations for these articles
Your browsing activity is empty.
Activity recording is turned off.
See more...
|
{"url":"http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3582452/?tool=pubmed","timestamp":"2014-04-20T17:06:16Z","content_type":null,"content_length":"132689","record_id":"<urn:uuid:48aa8bb4-bab8-48fb-ab19-27f39718b6b8>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00023-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A.: Chaitin Numbers and Strong Reducibilities
Results 1 - 10 of 11
"... Solovay showed that there are noncomputable reals ff such that H(ff _ n) 6 H(1n) + O(1), where H is prefix-free Kolmogorov complexity. Such H-trivial reals are interesting due to the connection
between algorithmic complexity and effective randomness. We give a new, easier construction of an H-trivi ..."
Cited by 57 (31 self)
Add to MetaCart
Solovay showed that there are noncomputable reals ff such that H(ff _ n) 6 H(1n) + O(1), where H is prefix-free Kolmogorov complexity. Such H-trivial reals are interesting due to the connection
between algorithmic complexity and effective randomness. We give a new, easier construction of an H-trivial real. We also analyze various computability-theoretic properties of the H-trivial reals,
showing for example that no H-trivial real can compute the halting problem. Therefore, our construction of an H-trivial computably enumerable set is an easy, injury-free construction of an incomplete
computably enumerable set. Finally, we relate the H-trivials to other classes of "highly nonrandom " reals that have been previously studied.
"... A real is called recursively enumerable if it is the limit of a recursive, increasing, converging sequence of rationals. Following Solovay [23] and Chaitin [10] we say that an r.e. real
dominates an r.e. real if from a good approximation of from below one can compute a good approximation of from b ..."
Cited by 34 (3 self)
Add to MetaCart
A real is called recursively enumerable if it is the limit of a recursive, increasing, converging sequence of rationals. Following Solovay [23] and Chaitin [10] we say that an r.e. real dominates an
r.e. real if from a good approximation of from below one can compute a good approximation of from below. We shall study this relation and characterize it in terms of relations between r.e. sets.
Solovay's [23]-like numbers are the maximal r.e. real numbers with respect to this order. They are random r.e. real numbers. The halting probability ofa universal self-delimiting Turing machine
(Chaitin's Ω number, [9]) is also a random r.e. real. Solovay showed that any Chaitin Ω number is-like. In this paper we show that the converse implication is true as well: any Ω-like real in the
unit interval is the halting probability of a universal self-delimiting Turing machine.
- the Lect. Notes Log. 18, Assoc. for Symbol. Logic , 2001
"... We study computably enumerable reals (i.e. their left cut is computably enumerable) in terms of their spectra of representations and presentations. ..."
- Fundamenta Informaticae , 1997
"... How fast can one approximate a real by a computable sequence of rationals? We show that the answer to this question depends very much on the information content in the finite prefixes of the
binary expansion of the real. Computable reals, whose binary expansions haveavery low information content, ca ..."
Cited by 10 (3 self)
Add to MetaCart
How fast can one approximate a real by a computable sequence of rationals? We show that the answer to this question depends very much on the information content in the finite prefixes of the binary
expansion of the real. Computable reals, whose binary expansions haveavery low information content, can be approximated (very fast) with a computable convergence rate. Random reals, whose binary
expansions contain very much information in their prefixes, can be approximated only very slowly by computable sequences of rationals (this is the case, for example, for Chaitin's \Omega numbers) if
they can be computably approximated at all. We show that one can computably approximate any computable real also very slowly, with a convergence rate slower than any computable function. However,
there is still a large gap between computable reals and random reals: any computable sequence of rationals which converges (monotonically) to a random real converges slower than any computable
sequence of rat...
- THEORETICAL COMPUTER SCIENCE , 1999
"... A real # is computably enumerable if it is the limit of a computable, increasing, converging sequence of rationals; # is random if its binary expansion is a random sequence. Our aim is to offer
a self-contained proof, based on the papers [7, 14, 4, 13], of the following theorem: areal is c.e. and ra ..."
Cited by 10 (0 self)
Add to MetaCart
A real # is computably enumerable if it is the limit of a computable, increasing, converging sequence of rationals; # is random if its binary expansion is a random sequence. Our aim is to offer a
self-contained proof, based on the papers [7, 14, 4, 13], of the following theorem: areal is c.e. and random if and only if it a Chaitin# real, i.e., the halting probability of some universal
self-delimiting Turing machine.
- Logic, Language and Computation, Volume 3, CSLI Series , 1999
"... This paper is a subjective, short overview of algorithmic information theory. We critically discuss various equivalent algorithmical models of randomness motivating a #randomness hypothesis".
Finally some recent results on computably enumerable random reals are reviewed. 1 Randomness: An Informa ..."
Cited by 6 (6 self)
Add to MetaCart
This paper is a subjective, short overview of algorithmic information theory. We critically discuss various equivalent algorithmical models of randomness motivating a #randomness hypothesis". Finally
some recent results on computably enumerable random reals are reviewed. 1 Randomness: An Informal Discussion In which we discuss some di#culties arising in de#ning randomness. Suppose that one is
watching a simple pendulum swing back and forth, recording 0 if it swings clockwise at a given instant and 1 if it swings counterclockwise. Suppose further that after some time the record looks as
follows: 10101010101010101010101010101010: At this point one would like to deduce a #theory" from the experiment. 1 The #theory" should account for the data presently available and make #predictions"
about future observations. How should one proceed? It is obvious that there are many #theories" that one could write-down for the given data, for example: 10101010101010101010101010101010000000000...
- Theoretical Computer Science , 2000
"... Computably enumerable (c.e.) reals can be coded by Chaitin machines through their halting probabilities. Tuning Solovay’s construction of a Chaitin universal machine for which ZFC (if
arithmetically sound) cannot determine any single bit of the binary expansion of its halting probability, we show th ..."
Cited by 2 (0 self)
Add to MetaCart
Computably enumerable (c.e.) reals can be coded by Chaitin machines through their halting probabilities. Tuning Solovay’s construction of a Chaitin universal machine for which ZFC (if arithmetically
sound) cannot determine any single bit of the binary expansion of its halting probability, we show that every c.e. random real is the halting probability of a universal Chaitin machine for which ZFC
cannot determine more than its initial block of 1 bits—as soon as you get a 0, it is all over. Finally, a constructive version of Chaitin information-theoretic incompleteness
- THEORETICAL COMPUTER SCIENCE , 2002
"... Computably enumerable (c.e.) reals can be coded by Chaitin machines through their halting probabilities. Tuning Solovay’s construction of a Chaitin universal machine for which ZFC (if
arithmetically sound) cannot determine any single bit of the binary expansion of its halting probability, we show th ..."
Cited by 2 (0 self)
Add to MetaCart
Computably enumerable (c.e.) reals can be coded by Chaitin machines through their halting probabilities. Tuning Solovay’s construction of a Chaitin universal machine for which ZFC (if arithmetically
sound) cannot determine any single bit of the binary expansion of its halting probability, we show that every c.e. random real is the halting probability of a universal Chaitin machine for which ZFC
cannot determine more than its initial block of 1 bits—as soon as you get a 0, it is all over. Finally, a constructive version of Chaitin information-theoretic incompleteness
- Mathematical Logic Quarterly
"... We study the relationship between a computably enumerable real and its presentations. A set A presents a computably enumerable real α if A is a computably enumerable prefix-free set of strings
such that α = ∑ σ∈A 2−|σ |. Note that ∑ σ∈A 2−|σ | is precisely the measure of the set of reals that have a ..."
Cited by 2 (2 self)
Add to MetaCart
We study the relationship between a computably enumerable real and its presentations. A set A presents a computably enumerable real α if A is a computably enumerable prefix-free set of strings such
that α = ∑ σ∈A 2−|σ |. Note that ∑ σ∈A 2−|σ | is precisely the measure of the set of reals that have a string in A as an initial segment. So we will simply abbreviate ∑ σ∈A 2−|σ | by µ(A). It is
known that whenever A so presents α then A ≤wtt α, where ≤wtt denotes weak truth table reducibility, and that the wtt degrees of presentations form an ideal I(α) in the computably enumerable wtt
degrees. We prove that any such ideal is Σ 0 3, and conversely that if I is any Σ 0 3 ideal in the computably enumerable wtt degrees then there is a computable enumerable real α such that I = I(α). 1
, 1998
"... Communicated byM. Ito A real is called recursivelyenumerable if it is the limit of a recursive, increasing, converging sequence of rationals. Following Solovay(unpublished manuscript, IBM Thomas
J. Watson ..."
Add to MetaCart
Communicated byM. Ito A real is called recursivelyenumerable if it is the limit of a recursive, increasing, converging sequence of rationals. Following Solovay(unpublished manuscript, IBM Thomas J.
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1582501","timestamp":"2014-04-16T16:29:53Z","content_type":null,"content_length":"36307","record_id":"<urn:uuid:a620d3ef-a443-419c-a4d8-47e9eee681f8>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00217-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Summary: COMPLETELY BOUNDED ISOMORPHISMS
Alvaro Arias
Abstract. In this paper the author proves that any two elements from one of the
following classes of operators are completely isomorphic to each other.
1. fVN(Fn) : n 2g. The II1 factors generated by the left regular representation
of the free group on n-generators.
2. fC (Fn) : n 2g. The reduced C -algebras of the free group on n-generators.
3. Some \non-commutative" analytic spaces introduced by G. Popescu Po].
The paper ends with some applications to Popescu's version of Von Neumann's in-
1. Introduction and preliminaries
E. Christensen and A. M. Sinclair CS] showed that any non-elementary injective
von Neumann algebra on a separable Hilbert space is completely isomorphic to
B(H), and A. G. Robertson and S. Wassemann RW] generalized the work on CS]
and proved that an in nite dimensional injective operator system on a separable
Hilbert space is completely isomorphic to either B(H) or `1.
The techniques on those papers depend on the injectivity of the spaces and do
not extend to interesting non-injective von Neumann algebras or operator algebras.
In the present note we address some of these examples. For instance, we prove
|
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/559/1466446.html","timestamp":"2014-04-18T09:08:06Z","content_type":null,"content_length":"8275","record_id":"<urn:uuid:1b4c135e-f250-4651-a2e3-b64c4386ab92>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00139-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Visual Illusions: Google vs Facebook vs Yahoo
The ability to
visualize data
, enabled by the advent of graphical computer tools, has been a great boon to Cap and Perf. The power derives from the way graphical displays provide an efficient impedance match to the visual system
in our brain. The weakness derives from the way graphical displays provide an efficient impedance match to the visual system in our brain. We can get carried away by visual representations alone.
Every marketing organization exploits that weakness. Numbers do have poor cognitive impedance, but that doesn't mean numbers should ignored altogether. In fact, we often need a combination of both
numerical and visual data representations so that we don't suffer visual miscues and thus jump to the wrong conclusion. The following presents an example of how easily this can happen.
Recently, Guerrilla alumnus, Scott J. pointed me at this Chart of the Day showing how Google revenue growth was outpacing both Facebook and Yahoo, when compared 7 years after launching the respective
Clearly, this chart is intended to be an attention getter for the Silicon Alley Insider website but, it looks about right and normally I might have just accepted the claim without giving it anymore
thought. The notion that Google growth is dominating, is also consistent with a lot of other things one sees. No surprises there.
Exponential doubling period
In this particular case, however, I was struck by the
of the data and curious to find out if the growth of GOOG and FB revenue follows an exponential trend or not. Exponential growth is not unexpected because it's the continuous analog of
compound interest
. If they are growing exponentially, I can compare their
doubling periods numerically
and determine by how their growth will look in the future.
The doubling period is an analysis technique that I use in Chapter 8 of my Guerrilla Capacity Planning book to determine the traffic growth of major websites. In section 8.7.5 the doubling time t[2]
is defined as:
t[2] = Ln(2) / A
where A is the growth parameter of the fitted
exponential curve
(the rate at which it bends upward) and Ln(2) is the
natural logarithm
of 2 (2 for doubling). The only fly in the ointment is that I don't have the actual numeric values used in the histogram chart, but that need not be a showstopper. There are only a half dozen data
points for each company, so I can estimate them visually. Then, I can use R to fit the exponential models and calculate the respective doubling times.
Analysis in R
First, we read the data (as eyeballed from the online chart) into R. Since the amount of data is small, I simply use the
trick to write the data in situ, rather than using an external file.
gd <- read.table(textConnection("Year GOOG FB\tYAH
1 0.001 0.002 0.001
2 0.01 0.02 0.01
3 0.1 0.2 0.1
4 0.5 0.45 0.3
5 1.5 0.75 0.6
6 3.2 2.0 1.1
7 6.1 4.0 0.75"),
I can now plot those estimated data points and compare them with the original chart.
main="Annual revenues for GOOG (green), FB (blue), YAH (red)",
xlab="Years after launch", ylab="$ billions")
The result looks like this:
The dashed lines simply connect related points together. The two solid lines are produced by performing the corresponding exponential fits to the GOOG and FB data.
# x-values for continuous exp curves
x<-seq(from=1, to=7, by=0.1)
ggfit<-nls(gd$GOOG ~ g0*exp(g1*gd$Year),data=gd,start=list(g0=1,g1=1))
fbfit<-nls(gd$FB ~ f0*exp(f1*gd$Year),data=gd,start=list(f0=1,f1=1))
# report the doubling periods
text(1,5.0,sprintf("%2s doubling time: %4.2f months", names(gd)[2],12*log(2)/gc[2]),adj=c(0,0))
text(1,4.5,sprintf("%2s doubling time: %4.2f months", names(gd)[3],12*log(2)/fc[2]),adj=c(0,0))
From the R analysis we see that the doubling period for Google (t
= 11.39 months) is slightly
than that for Facebook (t
= 10.94 months). Despite the banner claim made by
Silicon Alley Insider
, based on these estimated data, Google is growing revenue at a slightly
rate than Facebook. How can that be?
In the original histogram chart, it looks like Google is growing faster than Facebook. Well,
looks can be deceiving
. Your brain can be fooled (easily) by optical illusions. That's why we need to do analysis in the first place. Viewed uncritically, your brain can easily be led astray.
To resolve this paradox, let's do two things:
1. Project the growth models out further than the 7 years associated with the data
2. Plot the projected curves on log-linear axes (for reasons that will become clear shortly)
Here's the result (you might want to click on the image to magnify it).
The left-hand plot shows that the two curves cross somewhere between 7 years out and 40 years out. Whereas green (Google) is currently on top, according to the data, blue (Facebook) eventually ends
up on top according to the exponential models; assuming nothing else changes in the future. The right-hand plot uses a log-scaled y-axis to reveal more clearly that the crossover occurs at t = 23.9
years. Once again, if you rely purely on visuals, you might think the crossover doesn't occur until after 30 years (what looks like a "knee" in the left-hand plot), but you'd be misled. It occurs
almost 10 years earlier.
If, for example, you were only interested in short-term gains (as Wall St is wont to do), the original visual (histogram) is correct. If, on the other hand, you are in your 20s and investing longer
term, e.g., for your retirement, you might get a surprise.
By now, you might be thinking that these projections are not very accurate, and I wouldn't completely disagree with you. But what is accurate here? The original data in the histogram (even the really
real actual data) probably aren't very accurate either; we really can't know without deeper investigation. And that's my point: independent of the accuracy of the data, the numerical analysis can
cause you to pay attention to, and possibly ask questions about, something you might otherwise have taken for granted on purely visual grounds.
Even wrong expectations are better than no expectations
I'm a big fan of data visualization, but not to the exclusion of numerical analysis. We need both and we need both to be easily accessible.
The art is in the science
3 comments:
Larry C said...
While I agree with your point and the message you are trying to get across I think you need to look closer at your model fits.
Assuming you buy into the model the value of A for the Google fit is 0.731 with a standard error of 0.043 and the fit for the Facebook model the value of A is 0.760 with a standard error of
0.033. With such a small difference in the fitted parameters and the relatively large standard error the numerical analysis would not conclude the two model fits are different. Plus there is a
fairly wide range of what the doubling interval is. In short with seven data points it is real hard to trust the results of the model you have decided to use.
SteveJ said...
Excellent piece.
Might be one of your best.
Excellent piece.
Might be one of your best.
- clear, concise & well written, good logical progression etc.
- starts with "Something Really Obvious" we can all see for ourselves and agree with
- pose a problem (no datapoints), get over it
- then proceed with straight-forward analyses
- and end up with a surprising, even counter-intuitive, result
By demonstration you've told us:
- things aren't always what they seem, don't just take things on first appearances
- "digging deeper" can be quick and easy
- the tools/techniques to do this are quick and easy, and simple to master.
You might go as far as saying, "always check your results"... Which is Just Good Science.
But what I love is the way you've quietly demonstrated that Xerox PARC observation:
"Point of View is worth 40 to 60 IQ points".
Good one. works at many levels.
Neil Gunther said...
Belated response to Larry C's comment.
Point taken and indeed, I might express it a little differently. If we imagine that the modeled curves were drawn using fatter lines, then the whole notion of "crossing" comes into question. cf.
Confidence bands for USL scaling curves.
However, I deliberately didn't show the summary stats on the log fit because I view the whole procedure a bit differently---perversely, perhaps.
Although I've never actually written this down before, the idea is to barrel through to an end point and then review. Roughly put, the steps are:
- Look at the plot
- Question: Is it log growth?
- Problem: No numeric values
- Solution: Guesstimate them
- Do the fit
- Question: What's the doubling period?
- Do the calculation
- Problem: Opposite from claim
- Question: Why?
- Solution: Curves cross at ~20 years (modulo above qualification).
In other words, I don't want to get distracted by numerical details in this phase. The goal is simply to reach a self-consistent explantion to the question about log growth. We may have to go
back and revise the whole thing but, hopefully,
we'll know what needs revision and why.
|
{"url":"http://perfdynamics.blogspot.com/2011/10/visual-illusions-google-vs-facebook-vs.html","timestamp":"2014-04-19T09:55:51Z","content_type":null,"content_length":"129950","record_id":"<urn:uuid:79db92e0-f42e-4f9b-96e7-60ec8505f441>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00626-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The MD5 message-digest algorithm. IETF RFC 1321
- Advances in Cryptology – EUROCRYPT ’04, Lecture Notes in Computer Science , 2004
"... Abstract. Textbooks tell us that a birthday attack on a hash function h with range size r requires r 1/2 trials (hash computations) to find a collision. But this is quite misleading, being true
only if h is regular, meaning all points in the range have the same number of pre-images under h; if h is ..."
Cited by 27 (2 self)
Add to MetaCart
Abstract. Textbooks tell us that a birthday attack on a hash function h with range size r requires r 1/2 trials (hash computations) to find a collision. But this is quite misleading, being true only
if h is regular, meaning all points in the range have the same number of pre-images under h; if h is not regular, fewer trials may be required. But how much fewer? This paper addresses this question
by introducing a measure of the “amount of regularity ” of a hash function that we call its balance, and then providing estimates of the success-rate of the birthday attack, and the expected number
of trials to find a collision, as a function of the balance of the hash function being attacked. In particular, we will see that the number of trials can be significantly less than r 1/2 for hash
functions of low balance. This leads us to examine popular design principles, such as the MD (Merkle-Damg˚ard) transform, from the point of view of balance preservation, and to mount experiments to
determine the balance of popular hash functions. 1
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=3994171","timestamp":"2014-04-20T14:35:40Z","content_type":null,"content_length":"12826","record_id":"<urn:uuid:9906ed80-d4e6-466e-a847-37b405d7bd00>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00627-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Calculus with Pre-calculus Review I
Suggestions for Studying for the Final Exam
The final will cover the material typically covered in MCS 121. A good way to study for the final is to find some practice exams on the internet (google "calculus I final), work the problems that are
likely to be on our final, and check your solutions with your prof, one of the tutors, or a friend. The handouts we used over the year are another place to find problems for practicing. For last
semester's handouts, look here. You may use your calculator as long as it is a TI83/84 Plus or TI83/84 Plus Silver Edition, or has similar capabilities. You may not use a TI/89 on the exam. If you
have a calculator other than a TI83/84 Plus or TI83/84 Plus Silver Edition, please talk with me. You may prepare one page of handwritten notes (8.5 inches x 11 inches, one side of the page only) to
bring with you to the exam.
You will be tested on these topics:
• Limits
• Calculating derivatives from the definition
• Calculating derivatives using the formulas, from a graph, or from a table
• Calculating derivatives using implicit differentiation
• Solving a related rates problem
• Solving an optimization problem
• Doing marginal analysis
• Estimating the definite integral using a sum
• Finding the definite integral using area
• Finding the definite integral using FTC
• Finding antiderivatives analytically
• Integrating by substitutions
The pre-calculus material you should be comfortable with includes:
□ Polynomial and power functions
□ Exponential and logarithmic functions
□ Trig functions
□ Solving equations
□ Doing algebraic manipulations
□ Understanding tables and graphs
Students who have taken MCS118/119 in the past suggest that the best ways to study are:
□ Study frequently, in small doses.
□ Do the homework every night.
□ Go to class, pay attention, and ask questions.
□ Read the text sections to be covered before and after class.
□ Use office hours and tutors.
|
{"url":"http://homepages.gac.edu/~kaiser/mcs119/study-guide-final-exam.php","timestamp":"2014-04-17T16:07:13Z","content_type":null,"content_length":"6614","record_id":"<urn:uuid:24f86dd0-c4eb-41df-94cf-96acbab7a424>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00556-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Patent US5140239 - Non-contact tracer control device
This invention relates to a non-contact tracer control device, and more particularly, to a non-contact tracer control device having an improved tracing accuracy.
Recently, non-contact tracer control devices for tracing a profile of a model by using a non-contact distance detector have been developed, and in this non-contact distance detector, an optical
distance detector is used in such a way that it is fixed to the distal end of a tracer head and the distance of the model surface therefrom is detected to thereby provide a tracing. Accordingly,
since there is no fear of damage to the model, models made of a soft material can be used, and thus a wider application of the profile machining is expected.
Nevertheless, conventional non-contact tracer control devices have a problem in that the tracing accuracy is lowered at a portion of a model where the angle of inclination is large. Namely, at such a
portion, the measurement optical axis of the distance detector becomes almost parallel to the model surface and the spot on the model surface is enlarged and becomes an ellipsoidal shape, thus
lowering the resolution of the distance detector and the tracing accuracy. Particularly, in triangulation type distance detectors, a problem sometimes arises in that the measurement optical axis
interferes with the model surface, depending on the inclination angle, and thus the measurement becomes impossible.
This invention was created in view of the above circumstances, and an object thereof is to provide a non-contact tracer control device having an improved tracing accuracy.
To solve the above problems, this invention provides a non-contact tracer control device for carrying out a profile machining on a workpiece while tracing a profile of a model in a non-contact
fashion, comprising first and second non-contact distance detectors mounted to a tracer head, which is controlled through a predetermined straight axis and a rotary axis rotatable about the straight
axis, and inclined at a predetermined angle with respect to the straight axis, each detector individually measuring a distance of a surface of the model therefrom, sampling means for sampling
measurement values obtained by the respective first and second non-contact distance detectors at predetermined sampling intervals, storage means for storing a first measurement value obtained by the
first non-contact distance detector and a second measurement value obtained by the second non-contact distance detector, both sampled at a previous sampling time, vector calculating means for
calculating a vector normal to the surface of the model, based on at least three of four measurement values including the first and second measurement values, a third measurement value obtained by
the first non-contact distance detector sampled at a current sampling time, and a fourth measurement value obtained by the second non-contact distance detector sampled at the current sampling time,
angle calculating means for calculating an angle of a projected vector obtained by projecting the normal vector onto a plane perpendicular to the straight axis, and rotary axis driving means for
rotating the rotary axis in a direction of the angle.
Based on the values measured at the previous and current sampling times by the two non-contact distance detectors mounted to the tracer head, the coordinates of the four vertexes of a very small
rectangle on the model surface are obtained, and the normal vector is obtained by using the coordinates of three required vertexes out of the four vertexes. The tracer head is rotated in the
direction of a projected vector obtained by projecting the normal vector onto an X-Y plane. Accordingly, the measurement axes of the non-contact distance detectors are oriented in a direction as
perpendicular as possible to the model surface, whereby a high-accuracy distance measurement can be carried out.
FIG. 1 is a block diagram showing an arrangement of a non-contact tracer control device according to one embodiment of this invention;
FIG. 2 is a diagram showing in detail a tracer head according to the embodiment of the invention;
FIG. 3 is a diagram illustrating a method of calculating an angle of rotation of the tracer head according to the embodiment of the invention; and
FIG. 4 is a flowchart of a process for calculating the rotation angle according to the embodiment of the invention.
An embodiment of this invention will be described with reference to the drawings.
FIG. 1 is a block diagram showing an arrangement of a non-contact tracer control device according to this invention, as well as peripheral devices. In the figure, a processor 11 reads out a system
program stored in a ROM 12, and controls the whole operation of a tracer control device 1 in accordance with the system program, through a bus 10. A RAM 13, as a temporary memory, stores measurement
values supplied from distance detectors, described later, and other temporary data. A nonvolatile memory 14 is backed up by a battery, not shown, and stores various parameters, such as a tracing
direction and a tracing speed, etc., input thereto from an operator panel 2 via an interface 15.
Distance detectors 5a and 5b are mounted to a tracer head 4 of a tracing machine tool 3. The distance detectors 5a and 5b each comprise a reflected light quantity type distance detector using a
semiconductor laser or a light emitting diode as a light source, and measure the distance of a model 6 therefrom in a non-contact fashion. Measurement values La and Lb obtained from these distance
detectors are respectively converted to digital values by A/D converters 16a and 16b in the tracer control device 1, and are successively read by the processor 11.
The processor 11 calculates the amounts of displacement of the individual axes, based on the measurement values La and Lb and signals from current position registers 19x, 19y, and 19z described
later, and generates speed commands Vx, Vy and Vz for the individual axes, in accordance with the displacement amounts and the instructed direction and speed for tracing, by using a technique known
in the art. These speed commands are converted to analog values by D/A converters 17x, 17y, and 17z and input to servo amplifiers 18x, 18y, and 18z, respectively. Based on the speed commands, the
servo amplifiers 18x and 18y drive servomotors 32x and 32y of the tracing machine tool 3, whereby a table 31 is moved in an X-axis direction and in a Y-axis direction perpendicular to the surface of
the drawing sheet, and the servo amplifier 18z drives a servomotor 32z, whereby the tracer head 4 and a tool 33 are moved in a Z-axis direction.
The servomotors 32x, 32y, and 32z are respectively provided with pulse coders 33x, 33y, and 33z, which generate detection pulses FPx, FPy, and FPz upon detecting a rotation of the respective
servomotors by predetermined amounts. The current position registers 19x, 19y, and 19z in the tracer control device 1 count the detection pulses FPx, FPy, and FPz up or down, respectively, depending
on the direction of rotation, to obtain current position data Xa, Ya, and Za of the respective axes, which data is input to the processor 11.
Simultaneously with the above-described control of the individual axes, the processor 11 samples the measurement values La and Lb obtained from the distance detectors 5a and 5b at predetermined
sampling intervals, obtains a vector normal to the surface of the model, based on the sampling data and by a method described later, and generates a rotation command SC corresponding to the direction
of the normal vector projected onto an X-Y plane. The rotation command SC is converted to an analog value by a D/A converter 17c, and then input to a servo amplifier 18c, whereby a C-axis servomotor
32c is driven in accordance with this command.
Accordingly, the tracer head 4 is rotated to the instructed angle, and the distance of the model 6 therefrom is controlled to a constant value as described later. Simultaneously, the table 31 is
moved in the instructed tracing direction at the instructed tracing speed, and a workpiece 35 is machined to a shape similar to that of the model 6 by the tool 34, which is controlled through the Z
axis, similar to the tracer head 4.
FIG. 2 is a diagram illustrating the tracer head 4 in more detail. As shown in the figure, the distance detector 5a is mounted to the tracer head 4 such that it is inclined at an angle φ with respect
to the Z axis, and is rotated along the circumference of a circle having a predetermined radius over the command angle Θc of the rotation command SC by the C axis. The distance detector 5b is mounted
to the outside of the distance detector 5a, and is similarly rotated over the command angle Θc.
As mentioned above, the value measured by the distance detector 5a is fed back to the tracer control device, and accordingly, the distance La from the distance detector 5a to a measurement point P1
on the model 6 is maintained at a constant value. This distance La is set to be equal to the distance from the distance detector 5a to a point at which the measurement axis of the detector 5a
intersects the Z axis. Accordingly, even when the tracer head 4 is rotated through the C axis, the measurement point P1 remains at the same position, and thus the distance L between the tracer head 4
and the model 6 is maintained at a constant value.
The distance detector 5b measures the distance Lb of a measurement point P2 on the model 6 therefrom, and supplies the measurement result to the tracer control device.
The method of calculating the angle of rotation for the tracer head 4 will be now described with reference to FIG. 3. As shown in the figure, a tracing is carried out while the tracer head 4 is moved
in the X-axis direction at a predetermined speed relative to the model 6, and at the same time, the values measured by the distance detectors 5a and 5b are sampled at predetermined intervals. Based
on the measurement values and the current position data output from the current position registers, the coordinate values of points P1.sub.1, . . . , P1.sub.n-1 and P1.sub.n and points P2.sub.1, . .
. , P2.sub.n-1 and P2.sub.n on the model 6 are obtained.
Then, a surface vector S1n[X2.sub.n -X1.sub.n, Y2.sub.n -Y1.sub.n, Z2.sub.n -Z1.sub.n ] is obtained from, e.g., the coordinates (X1.sub.n, Y1.sub.n, Z1.sub.n) of the point P1.sub.n and the
coordinates (X2.sub.n, Y2.sub.n, Z2.sub.n) of the point P2.sub.n. Further, a surface vector S2n[X1.sub.n-1 -X1.sub.n, Y1.sub.n-1 -Y1.sub.n, Z1.sub.n-1 -Z1.sub.n ] is obtained from the coordinates
(X1.sub.n, Y1.sub.n, Z1.sub.n) of the point P1.sub.n and the coordinates (X1.sub.n-1, Y1.sub.n-1, Z1.sub.n-1) of the point P1.sub.n-1.
Subsequently, the outer product of the surface vectors S1n and S2n is calculated, in accordance with the equation given below, to obtain a normal vector Nn at the point Pn.
(Nn, S1n and S2n denote the respective vectors.)
Then, an angle Θcn between the X axis and a projected vector N1n obtained by projecting the normal vector Nn onto the X-Y plane is obtained by the following equation:
Θcn=tan.sup.-1 (Jn/In)
In: X-axis component of the vector Nn,
Jn: Y-axis component of the vector Nn, and this angle Θcn is output as a command value for the C axis.
This angle varies in accordance with the degree of inclination of the model 6, and is equal, e.g., to Θcq at a point P1q.
Accordingly, the tracer head 4 is orientated such that the measurement axes of the distance detectors are always set in a direction as perpendicular as possible to the surface of the model 6, whereby
the distance measurement can be carried out with high accuracy.
FIG. 4 is a flowchart illustrating the process of calculating the aforesaid angle of rotation. In the figure, the numbers following "S" represent step numbers.
[S1] The measurement values obtained from the distance detectors 5a and 5b are sampled at predetermined intervals.
[S2] The vector S1 is obtained from the current measurement values obtained from the respective distance detectors.
[S3] The vector S2 is obtained from the current measurement value and previous measurement value obtained from the distance detector 5a.
[S4] The surface vector N is obtained by calculating the outer product of the vectors S1 and S2.
[S5] The angle Θc between the X axis and the projected vector obtained by projecting the surface vector N onto the X-Y plane is calculated.
In the above embodiment, the normal vector is obtained based on the previous measurement value obtained from one of the distance detectors and the current measurement values obtained from the two
distance detectors. Note, this invention is not limited to this process, and the normal vector can be similarly obtained from other combinations of three measurement points out of the four points
detected at least at the previous and current sampling times.
Further, distance detectors other than the reflected light quantity type may be used; for example, a triangulation type optical detector, an eddy-current type detector, or an ultrasonic type detector
may be used for this purpose.
As described above, according to this invention, a vector normal to the model surface is obtained based on the previously sampled values and currently sampled values obtained from the two non-contact
distance detectors mounted to the tracer head, and the tracer head is controlled to be in alignment with the direction of the projected vector obtained by projecting the normal vector onto the
predetermined plane. Accordingly, the measurement axes of the non-contact distance detectors are always oriented in a direction as perpendicular as possible to the model surface, whereby the distance
measurement can be carried out with high accuracy and the tracing accuracy is improved. Moreover, a dead angle attributable to the interference of the model surface does not exist, and thus a complex
three-dimensional model can be traced.
|
{"url":"http://www.google.ca/patents/US5140239","timestamp":"2014-04-21T02:40:26Z","content_type":null,"content_length":"69828","record_id":"<urn:uuid:39e14311-f99d-496e-afae-41d506b463f9>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00538-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Patente US4744939 - Method for correcting for isotope burn-in effects in fission neutron dosimeters
The invention which is the subject of this application was created under a contract with the U.S. Department of Energy.
This invention relates to monitoring neutron fluence by dosimeters and, more particularly, to a method for correcting for "burn-in" or ingrowth of interfering fissioning isotopes during fission rate
measurements for neutron dosimetry purposes.
Federal regulations require that reactor coolant pressure boundaries have sufficient margin to ensure that the boundary behaves in a non-brittle manner when stressed under operating, maintenance,
testing, and postulated accident conditions, and that the probability of rapidly propagating fracture is minimized. These requirements necessitate prediction of the amount of radiation damage to the
reactor vessel throughout its service life, which in turn requires that the neutron exposure to the pressure vessel be monitored.
Fission neutron monitors are often used in neutron dosimetry, and can provide pivotal fast flux spectral information, such as for light water reactor pressure vessel surveillance.
In neutron dosimetry, a fission monitor of charge Z and mass number A is exposed to a neutron beam having and energy spectrum φ (t, E) which generally is a function of time t and neutron energy E.
During the irradiation, a higher order or larger atomic weight actinide isotope (Z', A') can be created by neutron capture in the (Z, A) isotope of the fission neutron dosimeter. Neutron capture
actually produces the isotope (Z, A+1) and subsequently decay processes then create the (Z', A') isotope, with A'=A+1.
Consequently, the total number of fissions per unit volume, F[T], observed with this fission neutron dosimeter is given by:
F[T] =F[Z], A +B[Z'], A'
where F[Z], A is the number of fissions per unit volume produced in the isotope (Z, A) and B[Z'], A' is the number of fissions per unit volume produced in the isotope (Z', A') as it ingrows during
the irradiation. Although the quantity F[Z], A is desired, F[T] is actually measured. The term B[Z'], A' represents a contribution from the higher order actinide (Z', A'), i.e., the so-called
"burn-in" effect. In light water reactor pressure vessel surveillance work, this contribution can be non-negligible for a ^238 U threshold fission monitor where burn-in effects arise from fission in
^239 Pu. In fact, recent analysis shows that the burn-in effect for ^238 U can be as high as about 30 percent in light water reactor pressure vessel environments.
In light of the above, a method is desired for efficiently and accurately correcting for burn-in effects in fission neutron dosimeters.
Accordingly, it is an object of the present invention to provide a method for correcting for isotope burn-in effects in fission neutron dosimeters, which method is capable of adaptation to diverse
It is another object of the present invention to provide a method for correcting for isotope burn-in effects in fission neutron dosimeters, wherein relatively small dosimeters are employed that are
capable of being used in situ with negligible perturbation of the environment.
It is another object of the present invention to provide a method for correcting for isotope burn-in effects in fission neutron dosimeters which is capable of high sensitivity and absolute accuracy.
It is another object of the present invention to provide a method for correcting for isotope burn-in effects in fission neutron dosimeters which is capable of quantifying background effects.
Finally, it is an object of the present invention to provide a method for correcting for isotope burn-in effects in fission neutron dosimeters which is capable of conducting measurements in extremely
high neutron fluences.
To achieve the foregoing and other objects of the present invention, and in accordance with the purposes of the invention, there is provided a method for correcting for the burn-in effect in fission
neutron dosimeters, wherein two quantities are measured in order to quantify the burn-in contribution, namely P[Z'], A', the amount of (Z', A') isotope that is burned in, and F[Z'], A', the fissions
per unit volume that would be produced from the start of the irridation in a dosimeter made of the (Z', A') isotope. Monitors used to measure these two quantities must experience the very irradiation
that the fission neutron dosimeter undergoes, i.e., the same location and flux-time history.
To measure the burn-in of the (Z', A') isotope, two solid state track recorder fission deposits are prepared from the very same material that comprises the fission neutron dosimeter and the two are
quantified, i.e., the mass and mass density are measured. One of these deposits is exposed along with the fission neutron dosimeter, whereas the second deposit is subsequently used for observation of
background, which is any fission track contribution from actinide impurities in the fission dosimeter. The amount of burn-in of the (Z', A') isotope is determined by conducting a second irradiation,
wherein both the irradiated and unirradiated fission deposits are used in solid state track recorder dosimeters for observation of the absolute number of fissions per unit volume. The difference
between these two absolute solid state track recorder measurements can be used to quantify the amount of burn-in since the neutron cross-section is known.
The fissions per unit volume of the (Z', A') isotope can be obtained by using a fission neutron dosimeter prepared specifically for this isotope. The (Z', A') fission dosimeter is exposed along with
the original threshold fission neutron dosimeter, so that it experiences exactly the same neutron flux-time history at the same location.
In order to determine B[Z'], A' from these observations, certain assumptions on the time dependence of the neutron field must hold. More specifically, the neutron field must generally be either:
(1) time independent, or
(2) a separable function of time t and neutron energy E.
Reactor irradiations can often be carried out at constant power, in which event assumption (1) would be valid. In the case that assumption (1) does not hold, assumption (2) is quite likely to be
valid. Moreover, for this case, the reactor power intrumentation can often be used to determine the separable time-dependent behavior of the neutron field.
As discussed above, when fission neutron dosimeters are applied to neutron dosimetry, such as those used for surveillance in light water reactor pressure vessels, higher order actinide isotopes can
be produced in the dosimeter by the neutron field. These higher order actinide isotopes can also undergo fission and thereby contribute to the number of fissions or fission rate that is observed with
the dosimeter. This application describes a novel method which uses solid state track recorders to correct for burn-in.
In this method, two quantities are measured in order to quantify the burn-in contribution, namely P[Z'], A', the amount of (Z', A') isotope that is burned in, and F[Z'], A', the fissions per unit
volume produced in the (Z', A') isotope. Monitors used to measure these two quantities must experience the very same irradiation that the fission neutron dosimeter undergoes, i.e., it must be at the
same location and exposed to the same flux-time history.
To measure the amount of burn-in of the (Z', A') isotope P[Z'], A', two solid state track recorder fission deposits are prepared from the very same material that comprises the fission neutron
dosimeter. These two solid state track recorders are then quantified by accurately measuring the mass and mass density thereof. One of these deposits is exposed by placing it against the fission
neutron dosimeter, whereas the second deposit is subsequently used for observation of background. The amount of burn-in of the (Z', A') isotope P[Z'], A' is determined by conducting a second
irradiation, wherein both the irradiated and unirradiated fission deposits are used in solid state track recorder dosimeters for observation of the absolute number of fissions per unit volume.
For example, the ^239 Pu burn-in produced by using a ^238 U fission neutron dosimeter is most efficiently observed by using a thermal neutron field for the second irradiation. Here the second solid
state track recorder dosimeter is used to measure any fission background that can arise in the original fission neutron dosimeter material. The difference between these two absolute solid state track
recorder measurements can be used to quantify the amount of ^239 Pu, since the thermal neutron cross section of ^239 Pu is well known.
The fissions per unit volume F[Z'], A' of the (Z', A') isotope can be obtained by using a fission neutron dosimeter prepared specifically for this isotope. For example, a radiometric fission
dosimeter could be used for this purpose. A radiometric fission recorder is a dosimeter which measures the radioactivity of a specific fission product isotope. From the absolute radioactivity of this
fission product isotope, the fission rate is determined. A solid state track recorder fission dosimeter could also be used for this purpose. In any event, the (Z', A') fission dosimeter is exposed
along with the original threshold fission neutron dosimeter, so that it experiences exactly the same neutron flux-time history at the same location.
In order to determine the burn-in from these observations, i.e., the term B[Z'], A' in the above equation, certain assumptions on the time dependence of the neutron field must hold. More
specifically, the neutron field must generally be either:
(1) time independent; or
(2) a separable function of time t and neutron energy E.
For the latter case, the term B[Z'], A' must be expressible as the product of two functions, one of which is a function of time t only and the other being a function of neutron energy E only.
Reactor irradiation can often be carried out at constant power, in which event assumption (1) would be valid. In the latter case that assumption (1) does not hold, assumption (2) is quite likely to
be valid. Moreover, for this latter case, the reactor power instrumentation record can often be used to determine the separable time-dependent behavior of the neutron field. The use of these
assumptions in the determination of the burn-in term, B[Z'], A', is discussed below in greater detail.
Let f[Z], A (t) and f[Z'], A' (t) be the fission rates of the (Z, A) and (Z', A') isotopes per nucleus, respectively. These fission rates can be expressed in the form: ##EQU1## where σ[Z] ^f.sub., A
(E) and σ[Z'] ^f, [A'] (E) are the fission cross section of the (Z, A) and (Z', A') isotopes, respectively. Here φ (t, E) represents the neutron spectrum which generally depends on time t as well as
neutron energy E.
For a reactor irradiation of duration τ, the total fissions per nucleus F[Z], A (τ) and F[Z'], A' (τ) produced in the (Z, A) and (Z', A') isotopes, respectively, can be obtained by integration of
Equations (1) and (2) over time t, so that: ##EQU2##
During this irradiation, let p[Z'], A' be the rate of burn-in, that is, the production of the isotope that is created by neutron capture in the (Z, A) isotope of the fission neutron dosimeter at time
t. Under the assumption that the half-life of the decay processes forming the (Z', A') isotope are negligible compared with reactor irradiation times, this production rate can be expressed in terms
of the neutron capture cross section σ[Z] ^C.sub., A (E) of the (Z, A) isotope: ##EQU3## where n[Z], A is the atom density of the (Z, A) isotope. Atom density can be expressed in atoms per unit
volume or atoms per unit mass.
Hence, the density P[Z'], A' (t) of the production of the isotope (Z', Z') at any time t≦τ during the irradiation interval τ, is given by: ##EQU4## which can be written as: ##EQU5##
Equation (7) assumes there is negligible loss in n[Z], A over the irradiation time. This assumption is usually satisfied. If it is not, the method described herein is still valid and the actual time
dependence of n[Z], A can be taken into account in going from Equation (6) to (7).
Consequently, the fission density B[Z'], A, produced by the burn-in of the (Z', A') isotope during the irradiation can be expressed as: ##EQU6## where P[Z'], A' (t) is given by Equation (7) and f
[Z'], A' (t) is given by Equation (2).
While B[Z'], A' is the desired burn-in term, what is actually measured is production during the irradiation of duration τ, P[Z'], A' (τ) and the total fissions per nucleus F[Z'], A' (τ) from the
irradiation of duration τ. Consequently, it is of interest to examine the conditions under which B[Z'], A' can be obtained from Equation (8) in terms of the observed quantities P[Z'], A' (τ) and F
[Z'], A' (τ) . For a neutron field independent of t, one has 100 (t,E)=φ(E) and Equation (8) becomes: ##EQU7##
For the time-independent case, Equations (4) and (7) give, respectively: ##EQU8##
Using Equations (10) and (11) in Equation (9), one finds for the time-independent case:
B[Z'],A' =1/2{P[Z'],A' (τ)·F[Z] [Z'],A' (τ)}(12)
For the case of a separable time-dependent behavior of the neutron field, one can write:
φ(t,E)=T(t)·φ[o] (E) (13)
Here the time-dependent term T(t), is known from reactor power instrumentation records. Using this assumption in Equation (8), one finds: ##EQU9## Also under this assumption, P[Z'], A' (τ) and F[Z'],
A' (τ) become, respectively: ##EQU10##
Using Equations (15) and (16) in Equation (14), one finds the burn-in term can be expressed in the form: ##EQU11##
Except for the time-dependent integrals, Equation (17) is of the same form as Equation (12). Indeed, for T(t)=constant, one finds Equation (17) reduces to Equation (12). More generally, the
coefficient in Equation (17) arising from the time-dependent integrals can be evaluated using known reactor power instrumentation measurements, which define the separable time-dependent function T
(t). That is, chart recorders, usually located in the reactor operations room, provide a record of the power time history of the reactor irradiation.
The method described above can be applied in a number of other ways. The original fission monitor for the (Z, A) isotope, in which F[T] fissions per unit volume are observed, may be either a
radiometric or solid state track recorder fission dosimeter. The fission rate in the (Z', A') isotope can also be observed with either a radiometric or solid state track recorder fission dosimeter.
The requirements of the specific application at hand will usually dictate which type of dosimeters are employed. For example, extremely thin solid state track recorders can be expressed for the (Z,
A) and (Z', A') isotopes, so that infinitely dilute fission rate measurements can be conducted. Hence in applications where resonance self-shielding is non-negligible, use of solid state track
recorder dosimeters are recommended for both (Z, A) and (Z', A') fission rate observations.
On the other hand, radiometric fission dosimeters do not possess the fluence limitations of solid state track recorders, and consequently can be used for very high fluence applications. In the event
resonance self-shielding is non-negligible, extremely thin solid state track recorder deposits of both (Z, A) and Z', A') isotopes can serve as radiometric dosimeters for infinitely dilute fission
rate measurements of very high fluence. The solid state track recorder deposit for the (Z, A) isotope would then be available for use in follow-on irradiations to determine the burn-in term.
This method may also be applied retroactively to correct fission rate measurements already conducted with radiometric dosimeters. This is particularly important for light water reactor pressure
vessel surveillance work, where extensive radiometric fission dosimeters already exist. In such cases, solid state track recorder deposits would be prepared from irradiated and unirradiated
radiometric dosimeters that were used for the original (Z, A) fission rate measurement. These deposits could then be used to determine the amount of the (Z', A') isotope that was burned-in. It must
be stressed that this burn-in determination can in principle account for any resonance self-shielding introduced by the radiometric dosimeter. To complete this correction, the fission rate in the
(Z', A') isotope for the original reactor irradiation must be known. If the fission rate in the (Z', A') isotope was not measured during the original reactor irradiation, it would have to be
determined by either measurement or calculation. Measurement would entail a second irradiation of a radiometric or solid state track recorder fission dosimeter, which should duplicate the original
irradiation as closely as possible. In either event, the observed (Z', A') fission rate would require a calculated correction for resonance self-shielding.
The method described above possesses all the advantages of passive techniques used for neutron dosimetry, such as radiometric dosimetry, and also includes the following advantages:
(1) easily adapted to diverse geometries;
(2) dosimeters can be small in size and therefore can be used in-situ with negligible perturbation of the environment;
(3) high sensitivity and absolute accuracy are available;
(4) background effects can be quantified; and
(5) since radiometric dosimetry can be applied to observe fissions per unit volume induced in the (Z', A') and (Z, A) isotopes during the irradiation, measurements can be conducted to extremely high
The foregoing is considered illustrative only of the principles of the invention. Further, since numerous modifications and changes will readily occur to those skilled in the art, it is not desired
to limit the invention to the exact construction and operation shown and described. Accordingly, all suitable modifications and equivalents, falling with the scope of the invention and the appended
claims may be resorted to.
|
{"url":"http://www.google.es/patents/US4744939?hl=es&dq=flatulence","timestamp":"2014-04-18T05:37:05Z","content_type":null,"content_length":"78213","record_id":"<urn:uuid:4c0c6239-ccba-47b0-96c1-22f0593ace7e>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00139-ip-10-147-4-33.ec2.internal.warc.gz"}
|
This Article
Bibliographic References
Add to:
Optimal Bucket Allocation Design of k-ary MKH Files for Partial Match Retrieval
January-February 1997 (vol. 9 no. 1)
pp. 148-160
ASCII Text x
C.y. Chen, H.f. Lin, C.c. Chang, R.c.t. Lee, "Optimal Bucket Allocation Design of k-ary MKH Files for Partial Match Retrieval," IEEE Transactions on Knowledge and Data Engineering, vol. 9, no. 1,
pp. 148-160, January-February, 1997.
BibTex x
@article{ 10.1109/69.567057,
author = {C.y. Chen and H.f. Lin and C.c. Chang and R.c.t. Lee},
title = {Optimal Bucket Allocation Design of k-ary MKH Files for Partial Match Retrieval},
journal ={IEEE Transactions on Knowledge and Data Engineering},
volume = {9},
number = {1},
issn = {1041-4347},
year = {1997},
pages = {148-160},
doi = {http://doi.ieeecomputersociety.org/10.1109/69.567057},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
RefWorks Procite/RefMan/Endnote x
TY - JOUR
JO - IEEE Transactions on Knowledge and Data Engineering
TI - Optimal Bucket Allocation Design of k-ary MKH Files for Partial Match Retrieval
IS - 1
SN - 1041-4347
EPD - 148-160
A1 - C.y. Chen,
A1 - H.f. Lin,
A1 - C.c. Chang,
A1 - R.c.t. Lee,
PY - 1997
KW - Multidisk file design
KW - bucket allocation problem
KW - multiple key hashing files
KW - partial match queries
KW - optimal performances.
VL - 9
JA - IEEE Transactions on Knowledge and Data Engineering
ER -
Abstract—This paper first shows that the bucket allocation problem of an MKH (Multiple Key Hashing) file for partial match retrieval can be reduced to that of a smaller sized subfile, called the
remainder of the file. And it is pointed out that the remainder type MKH file is the hardest MKH file for which to design an optimal allocation scheme. We then particularly concentrate on the
allocation of an important remainder type MKH file; namely, the k-ary MKH file. We present various sufficient conditions on the number of available disks and the number of attributes for a k-ary MKH
file to have a perfectly optimal allocation among the disks for partial match queries. Based upon these perfectly optimal allocations, we further present a heuristic method, called the CH (Cyclic
Hashing) method, to produce near optimal allocations for the general k-ary MKH files. Finally, a comparison, by experiment, between the performances of the proposed method and an "ideal" perfectly
optimal method, shows that the CH method is indeed satisfactorily good for the general k-ary MKH files.
[1] K.A.S. Abdel-Ghaffar and A. El. Abbadi, "Optimal Disk Allocation for Partial Match Queries," ACM Trans. Database Systems, vol. 18, no. 1, pp. 132-156, 1993.
[2] A.V. Aho and J.D. Ullman, "Optimal Partial-Match Retrieval When Fields Are Independently Specified," ACM Trans. Database Systems, vol. 4, no. 2, pp. 168-179, 1979.
[3] A. Bolour, "Optimality Properties of Multiple Key Hashing Functions," J. Assoc. Computing, vol. 26, no. 2, pp. 196-210, 1979.
[4] W.A. Burkhard, "Partial Match Hash Coding: Benefits of Redundancy," ACM Trans. Database Systems, vol. 4, no. 2, pp. 228-239, 1979.
[5] M.Y. Chan, "Multidisk File Design: An Analysis of Folding Buckets to Disks," BIT, vol. 24, pp. 262-268, 1984.
[6] M.Y. Chan, "A Note on Redundant Disk Allocation," IPL, vol. 20, pp. 121-123, 1985.
[7] C.C. Chang, "Optimal Information Retrieval When Queries Are Not Random," Information Sciences, vol. 34, pp. 199-223, 1984.
[8] C.C. Chang, "Application of Principal Component Analysis to Multidisk Concurrent Accessing," BIT, vol. 28, pp. 205-214, 1988.
[9] C.C. Chang and C.Y. Chen, "Gray Code as a Declustering Scheme for Concurrent Disk Retrieval," Information Science and Eng., vol. 13, no. 2, pp. 177-188, 1987.
[10] C.C. Chang and C.Y. Chen, "Symbolic Gray Code as a Data Allocation Scheme for Two-disk Systems," The Computer J.,U.K., vol. 35, no. 3, pp. 299-305, 1992.
[11] C.C. Chang, M.W. Du, and R.C.T. Lee, "Performance Analysis of Cartesian Product Files and Random Files," IEEE Trans. Software Eng., vol. 10, no. 1, pp. 88-99, 1984.
[12] C.C. Chang, R.C.T. Lee, and H.C. Du, "Some Properties of Cartesian Product Files," Proc. ACM-SIGMOD Conf., pp. 157-168, 1980.
[13] C.C. Chang and J.C. Shieh, "On the Complexity of File Allocation Problem," Proc. Int'l Conf. Foundation of Data Organization,Kyoto, Japan, pp. 113-115, May 1985.
[14] C.Y. Chen and H.F. Lin, "Optimality Criteria of the Disk Modulo Allocation Method for Cartesian Product Files," BIT, vol. 31, pp. 566-575, 1991.
[15] C.Y. Chen, H.F. Lin, R.C.T. Lee, and C.C. Chang, "Redundant MKH Files Design among Multiple Disks for Concurrent Partial Match Retrieval," The J. Systems and Software, 1996, to appear.
[16] H.C. Du, "Disk Allocation Methods for Binary Cartesian Product Files," BIT, vol. 26, pp. 138-147, 1986.
[17] H.C. Du and J.S. Sobolewski, "Disk Allocation for Product Files on Multiple Disk Systems," ACM Trans. Database Systems, vol. 7, Mar. 1982.
[18] C. Faloutsos and D. Metaxas, "Disk Allocation Methods Using Error Correcting Codes," IEEE Trans. Computers, Aug. 1991.
[19] M.T. Fang, R.C.T. Lee, and C.C. Chang, "The Idea of Declustering and its Applications," Proc. Int'l Conf. Very Large Databases, 1986.
[20] M.H. Kim and S. Pramanik, “Optimal File Distribution for Partial Match Retrieval,” Proc. ACM Int'l Conf. Management of Data, pp. 173-182, 1988.
[21] R.C.T. Lee and S.H. Tseng, "Multikey Sorting," Policy Analysis and Information Systems, vol. 3, no. 2, pp. 1-20, 1979.
[22] W.C. Lin, R.C.T. Lee, and H.C. Du, "Common Properties of Some Multi-Attribute File Systems," IEEE Trans. Software Eng., vol. 1, SE-5, no. 2, pp. 160-174, 1979.
[23] J.H. Liou and S.B. Yao, "Multi-Dimension Clustering for Database Organizations," Information Systems, vol. 2, no. 2, pp. 187-198, 1977.
[24] K. Ramamohanarao, J. Shepherd, and R. Sacks-Davis, "Multi-Attribute Hashing with Multiple File Copies for High Performance Partial-Match Retrieval," BIT, vol. 30, pp. 404-423, 1990.
[25] R.L. Rivest, "Partial-Match Retrieval Algorithms," SIAM J. Computing, vol. 14, no. 1, pp. 19-50, 1976.
[26] J.B. Rothnie and T. Lozano, “Attribute Based File Organization in a Paged Memory Environment,” Comm. ACM, vol. 17, no. 2, pp. 63–69, Feb. 1974.
[27] Y.Y. Sung, "Performance Analysis of Disk Allocation Method for Cartesian Product Files," IEEE Trans. Software Eng., vol. 13, no. 9, pp. 1,018- 1,026, 1987.
[28] C.Y. Tang, D.J. Buehrer, and R.C.T. Lee, "On the Complexity of Some Multiattribute File Design Problems," Information Systems, vol. 10, no. 1, pp. 21-25, 1985.
Index Terms:
Multidisk file design, bucket allocation problem, multiple key hashing files, partial match queries, optimal performances.
C.y. Chen, H.f. Lin, C.c. Chang, R.c.t. Lee, "Optimal Bucket Allocation Design of k-ary MKH Files for Partial Match Retrieval," IEEE Transactions on Knowledge and Data Engineering, vol. 9, no. 1, pp.
148-160, Jan.-Feb. 1997, doi:10.1109/69.567057
Usage of this product signifies your acceptance of the
Terms of Use
|
{"url":"http://www.computer.org/csdl/trans/tk/1997/01/k0148-abs.html","timestamp":"2014-04-18T14:12:00Z","content_type":null,"content_length":"59308","record_id":"<urn:uuid:6301ef9e-2c80-4996-8878-02190493f3e4>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00073-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Two Equations (solving for inverses)
Find the inverse equations of the following:
1. $f(x) = 3x + ln(x)$
2. $g(x) = \frac{x+1}{2x+1}$
So the equations become...
1. $x = 3y + ln(y)$
2. $x = \frac{y + 1}{2y + 1}$
Sovle for $y$.
I've played with the algebra a few different ways, especially on the rational function, but I still can't solve for 'y' in either equation!
Step-by-step algebra would be helpful - thanks!
|
{"url":"http://mathhelpforum.com/algebra/125963-two-equations-solving-inverses.html","timestamp":"2014-04-16T17:40:51Z","content_type":null,"content_length":"43761","record_id":"<urn:uuid:98cdd8ae-747e-4eae-98f0-60b5e2570d26>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00103-ip-10-147-4-33.ec2.internal.warc.gz"}
|
the encyclopedic entry of no growth
Exogenous growth model
, also known as the
Neo-classical growth model
Solow growth model
is a term used to sum up the contributions of various authors to a
of long-run
economic growth
within the framework of
neoclassical economics
Development of the model
The Neo-classical model was an extension to the 1946
Harrod-Domar model
that included a new term,
growth. The most important contribution was probably the work done by
Robert Solow
; in 1956, Solow and
T.W. Swan
developed a relatively simple growth model which fit available data on
economic growth with some success. Solow received the 1987
Nobel Prize in Economics
for his work on the model.
Solow also was the first to develop a growth model with different vintages of capital. The idea behind Solow's vintage capital growth model is that new capital is more valuable than old (vintage)
capital because capital is produced based on known technology and because technology is improving. Both Paul Romer and Robert Lucas, Jr. subsequently developed alternatives to Solow's neo-classical
growth model. Today, economists use Solow's sources-of-growth accounting to estimate the separate effects on economic growth of technological change, capital, and labor.
Extension to the Harrod-Domar model
Solow extended the Harrod-Domar model by:
• Adding labor as a factor of production;
• Requiring diminishing returns to labor and capital separately, and constant returns to scale for both factors combined;
• Introducing a time-varying technology variable distinct from capital and labor.
The capital-output and capital-labor ratios are not fixed as they are in the Harrod-Domar model. These refinements allow increasing capital intensity to be distinguished from technological progress.
Short run implications
• Policy measures like tax cuts or investment subsidies can affect the steady state level of output but not the long-run growth rate.
• Growth is affected only in the short-run as the economy converges to the new steady state output level.
• The rate of growth as the economy converges to the steady state is determined by the rate of capital accumulation.
• Capital accumulation is in turn determined by the savings rate (the proportion of output used to create more capital rather than being consumed) and the rate of capital depreciation.
Long run implications
In neoclassical growth models, the long-run rate of growth is
determined - in other words, it is determined outside of the model. A common prediction of these models is that an economy will always
towards a
steady state
rate of growth, which depends only on the rate of
technological progress
and the rate of labor force growth.
A country with a higher saving rate will experience faster growth, e.g. Singapore had a 40% saving rate in the period 1960 to 1996 and annual GDP growth of 5-6%, compared with Kenya in the same time
period which had a 15% saving rate and annual GDP growth of just 1%. This relationship was anticipated in the earlier models, and is retained in the Solow model; however, in the very long-run capital
accumulation appears to be less significant than technological innovation in the Solow model.
The key assumption of the neoclassical growth model is that capital is subject to
diminishing returns
. Given a fixed stock of labor, the impact on output of the last unit of capital accumulated will always be less than the one before. Assuming for simplicity no technological progress or labor force
growth, diminishing returns implies that at some point the amount of new capital produced is only just enough to make up for the amount of existing capital lost due to depreciation. At this point,
because of the assumptions of no technological progress or labor force growth, the economy ceases to grow.
Assuming non-zero rates of labor growth complicates matters somewhat, but the basic logic still applies - in the short-run the rate of growth slows as diminishing returns take effect and the economy
converges to a constant "steady-state" rate of growth (that is, no economic growth per-capita).
Including non-zero technological progress is very similar to the assumption of non-zero workforce growth, in terms of "effective labor": a new steady state is reached with constant output per
worker-hour required for a unit of output. However, in this case, per-capita output is growing at the rate of technological progress in the "steady-state" (that is, the rate of productivity growth).
Variations in productivity's effects
Within the Solow growth model, the
Solow residual
total factor productivity
is an often used measure of technological progress. The model can be reformulated in slightly different ways using different productivity assumptions, or different measurement metrics:
• Average Labor Productivity (ALP) is economic output per labor hour.
• Multifactor productivity (MFP) is output divided by a weighted average of capital and labor inputs. The weights used are usually based on the aggregate input shares either factor earns. This
ratio is often quoted as: 33% return to capital and 66% return to labor (in Western nations), but Robert J. Gordon says the weight to labor is more commonly assumed to be 75%.
In a growing economy, capital is accumulated faster than people are born, so the denominator in the growth function under the MFP calculation is growing faster than in the ALP calculation. Hence, MFP
growth is almost always lower than ALP growth. (Therefore, measuring in ALP terms increases the apparent capital deepening effect.)
Technically, MFP is measured by the "Solow residual", not ALP..
Empirical evidence
A key prediction of neoclassical growth models is that the income levels of
poor countries
will tend to
catch up
with or
towards the income levels of rich countries. Since the 1950s, the opposite empirical result has been observed
on average
. If the average growth rate of countries since, say, 1960 is plotted against initial GDP per capita (i.e. GDP per capita in 1960), one observes a positive relationship. In other words, the developed
world appears to have grown at a faster rate than the developing world, the opposite of what is expected according to a prediction of convergence. However, a few formerly poor countries, notably
, do appear to have converged with rich countries, and in the case of Japan actually exceeded other countries' productivity, some theorise that this is what has caused Japan's poor growth recently -
convergent growth rates are still expected, even after convergence has occurred
; leading to over-optimistic investment, and actual
The evidence is stronger for convergence within countries. For instance the per-capita income levels of the southern states of the United States have tended to converge to the levels in the Northern
states. These observations have led to the adoption of the conditional convergence concept. Whether convergence occurs or not depends on the characteristics of the country or region in question, such
Evidence for conditional convergence comes from multivariate, cross-country regressions.
If productivity were associated with high technology then the introduction of information technology should have led to a noticeable productivity acceleration over the past twenty years; but it has
not: see: Solow computer paradox.
Econometric analysis on Singapore and the other "East Asian Tigers" has produced the surprising result that although output per worker has been rising, almost none of their rapid growth had been due
to rising per-capita productivity (they have a low "Solow residual").
Criticisms of the model
Empirical evidence offers mixed support for the model. Limitations of the model include its failure to take account of entrepreneurship (which may be catalyst behind economic growth) and strength of
institutions (which facilitate economic growth). In addition, it does not explain how or why technological progress occurs. This failing has led to the development of
endogenous growth theory
, which endogenizes technological progress and/or knowledge accumulation.
Some critics suggest that Schumpeter’s 1939 Theory of Business Cycles, modern Institutionalism and Austrian economics offer an even better prospect of explaining how long run economic growth occur
than the later Lucas/Romer models.
Marxist critics of growth theory itself have questioned the model's underlying assertion that economic growth is necessarily a good thing. Furthermore, the use of a representative agent hides equity
Graphical representation of the model
The model starts with a neoclassical production function Y/L = F(K/L), rearranged to y = f(k), which is the orange curve on the graph. From the production function; output per worker is a function of
capital per worker. The production function assumes diminishing returns to capital in this model, as denoted by the slope of the production function.
n = population growth rate
d = depreciation
k = capital per worker
y = output/income per worker
L = labor force
s = saving rate
Capital per worker change is determined by three variables:
• Investment (saving) per worker
• Population growth, increasing population decreases the level of capital per worker.
• Depreciation - capital stock declines as it depreciates.
When sy > (n+d)k, in other words, when the savings rate is greater than the population growth rate plus the depreciation rate, when the green line is above the black line on the graph, then capital
(k) per worker is increasing, this is known as capital deepening. Where capital is increasing at a rate only enough to keep pace with population increase and depreciation it is known as capital
The curves intersect at point A, the "steady state". At the steady state, output per worker is constant. However total output is growing at the rate of n, the rate of population growth.
Left of point A, point k[1] for example, the saving per worker is greater than the amount needed to maintain a steady level of capital, so capital per worker increases. There is capital deepening
from y[1] to y[0], and thus output per worker increases.
Right of point A where sy < (n+d)k, point y[2] for example, capital per worker is falling, as investment is not enough to combat population growth and depreciation. Therefore output per worker falls
from y[2] to y[0].
The model and changes in the saving rate
The graph is very similar to the above, however, it now has a second savings function s[1]y, the blue curve. It demonstrates that an increase in the saving rate shifts the function up. Saving per
worker is now greater than population growth plus depreciation, so capital accumulation increases, shifting the steady state from point A to B. As can be seen on the graph, output per worker
correspondingly moves from y[0] to y[1]. Initially the economy expands faster, but eventually goes back to the steady state rate of growth which equals n.
There is now permanently higher capital and productivity per worker, but economic growth is the same as before the savings increase.
│ header 1 │ header 2 │ header 3 │
│row 1, cell 1 │row 1, cell 2 │row 1, cell 3 │
│row 2, cell 1 │row 2, cell 2 │row 2, cell 3 │
The model and changes in population
This graph is again very similar to the first one, however, the population growth rate has now increased from n to n[1], this introduces a new capital widening line (n[1]+d)
Mathematical framework
The Solow growth model can be described by the interaction of five basic macroeconomic equations:
• Macro-production function
• GDP equation
• Savings function
• Change in capital
• Change in workforce
Macro-production function
This is a Cobb-Douglas function where Y represents the total production in an economy. A represents multifactor productivity (often generalized as technology), K is capital and L is labor.
An important relation in the macro-production function:
$Y=AK^aL^{1-a} leftrightarrow y=Ak^a$
Which is the macro-production function divided by L to give total production per capita y and the capital intensity k
GDP equation
Where C is private consumption, G is public consumption, NX is net exports, and I represents investments, or savings. Note that in the Solow model, we represent public consumption and private
consumption simply as total consumption from both the public and government sector. Also notice that net exports and government spending are excluded from Solow's model. This equation is called the
GDP equation because it is calculated much the same way as is the Gross domestic product (or more precisely the Gross national product).
Savings function
This function depicts savings, I as a portion s of the total production Y.
Change in capital
$Delta K=sY-,$$delta K,$
The $delta,$ is the rate of depreciation.
Change in workforce
gL is the growth function for L.
The model's solution
First we'll need to define some growth functions.
1. Growth in capital $gK=frac{Delta K}{K}$
2. Growth in the GDP $gY=frac{Delta Y}{Y}$
3. Growth function for capital intensity $gk=gK-gL,$
Solution assuming no multifactor productivity growth
This simplification makes the solution's derivation more comprehensible, as it allows the following calculations:
$gK=frac{Delta K}{K}rightarrowfrac{sY-dK}{K}rightarrowfrac{sY}{K}-d$
When there is no growth in A then we can assume the following based on the first calculation:
Moving on:
$gK=frac{sY}{K}-d$ Divide the fraction by L and you will see that $gK=frac{sy}{k}-d$
By subtracting gL from gK we end up with:
If k is known in the year t then this formula can be used to calculate k in any given year.
In the first segment on the right side of the equation we see that $lim_{k to 0} gk=infty$ and $lim_{k to infty}gk=-(d+gL)$
Deriving the Steady-state equation:
$frac{dfrac{K}{L}}{dt}=frac{dk}{dt},$ where $k=K/L$ and k denotes capital per worker.
Differentiating we obtain:
$frac{dk}{dt}=frac{frac{dk}{dt}L-frac{dL}{dt}K}{L^2}$ which is
$frac{dk}{dt}=frac{frac{dK}{dt}}{L}-left(frac{frac{dL}{dt}}{L}frac{K}{L}right)$ we know that
$frac{frac{dL}{dt}}{L}$ is the population growth rate over time denoted by n.
Furthermore we know that
$frac{dK}{dt}=sY-delta K,$
where $delta,$ is the depreciation rate of capital.
Hence we obtain:
$dk=sy-(n+delta )k,$
which is the fundamental Solow equation. The same can be done if technological progress is included.
In the steady state the change in $dk$ must be 0.
$sy=(n+delta )k,$
The steady state consumption will then be:
$c=y-(n+delta )k,^*$
A Simple Explanation
Consider a simple case Cobb-Douglas production function:
where $Y$ is level output, $K$ level of capital, $N$ level of employment (given, fixed), and $0 is relative capital intensity (given, fixed). Net capital accumulation per capita in period t is given
by:frac{K_{t+1}}{N}-frac{K_t}{N}=sleft(frac{Y_t}{N}right)-{delta}left(frac{K_t}{N}right),where s is the savings rate, and {delta} the depreciation rate. The economy reaches a steady state level of
output and capital when net capital accumulation per capita is zero. That is,sleft(frac{Y}{N}right)={delta}left(frac{K}{N}right),the amount of total investment (left side) is equal to the amount of
capital depreciation (right side) in any given period. From the production function we know that output per capita is given by:frac{Y}{N}=left(frac{K}{N}right)^alpha,which implies that the steady
state levels of capital and output, denoted by asterisks, are:frac{K^*}{N}=left(frac{s}{delta}right)^{frac{1}{1-alpha}},andfrac{Y^*}{N}=left(frac{s}{delta}right)^{frac{alpha}{1-alpha}}.for given
values of s,delta, and alpha.Now consider output as a Cobb-Douglas function of capital and effective labor AN:Y=K^{alpha}left(ANright)^{1-alpha},,where increases in technology A positively affect
output Y by improving the efficiency of labor N. If technology grows at a constant positive rate of g_A, and labor at g_A, then their product AN grows at a rate approximately equal to g_A+g_N.
Consequently, the steady state level of output per unit of effective labor (derived from the original steady state condition)frac{Y^*}{AN}=left(frac{s}{delta}right)^{frac{alpha}{1-alpha}},is actually
declining since output Y is by definition growing at zero in the steady state (left side numerator), whereas (in the denominator) effective labor is growing at g_A+g_N>0. Therefore, in order to
offset this additional source of per unit erosion in steady state output, the steady state condition must be modified to read:sleft(frac{Y}{AN}right)=left(delta+g_A+g_Nright)left(frac{K}{AN}right)
total investment (left side) must equal the amount of growth in effective labor in addition to the amount of capital depreciation. This modification implies that the steady state level of output per
unit of effective labor isfrac{Y^*}{AN}=left(frac{s}{delta+g_A+g_N}right)^frac{alpha}{1-alpha}.Similarly, the steady state level of capital per unit of effecitve labor isfrac{K^*}{AN}=left(frac{s}
{delta+g_A+g_N}right)^frac{1}{1-alpha}.Note: Although per unit growth is zero, the absolute levels of output Y^* and capital K^* in the steady state are still growing at a constant positive rate
g_A+g_N>0. This result is sometimes referred to as balanced growth. Also note that neither the savings rate s, the depreciation rate delta, nor the relative intensity of capital alpha affects the
rate of growth of output in the steady state, although it does still contribute to the initial level of output and capital at the start of a period of balanced growth.The golden rule savings rate s^*
maximizes the steady state level of aggregate consumption C^* per unit of effective labor, as defined by the national income (GDP) identity:frac{Y^*}{AN}=frac{C^*}{AN}+frac{I^*}{AN}.Assuming that the
steady state level of investment I^* equals sY^*, the golden rule savings rate solves the unconstrained maximization problemmax_s frac{C^*}{AN}=left(frac{s}{delta+g_A+g_N}right)^frac{alpha}{1-alpha}
-sleft(frac{s}{delta+g_A+g_N}right)^frac{alpha}{1-alpha}.Sinceln{left(frac{C^*}{AN}right)}=ln{left(1-sright)}+left(frac{alpha}{1-alpha}right)ln{left(frac{s}{delta+g_A+g_N}right)},this impliesfrac
{partialln{left(frac{C^*}{AN}right)}}{partial s}=-frac{1}{1-s}+left(frac{alpha}{1-alpha}right)left(frac{delta+g_N+g_A}{s}right)left(frac{1}{delta+g_N+g_A}right),setting equal to zero and
simplifying,0=-s+(1-s)left(frac{alpha}{1-alpha}right),finally,s^*=alpha,.Note: This implies that aggregate consumption per unit of effective labor in the steady state is maximized when the savings
rate is exactly equal to the relative intensity of capital in the production function. See also Golden rule savings rate Growth theory - historical overview Endogenous growth theory Labor-augmenting
References External links Java applet where you can experiment with parameters and learn about Solow model Solow Growth Model by Fiona Maclachlan, The Wolfram Demonstrations Project. A step-by-step
explanation of how to understand the Solow Model$
|
{"url":"http://www.reference.com/browse/no+growth","timestamp":"2014-04-16T13:22:47Z","content_type":null,"content_length":"99777","record_id":"<urn:uuid:6942b6d2-c5ff-4f16-b50e-2982b14b88ea>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00541-ip-10-147-4-33.ec2.internal.warc.gz"}
|
DOCUMENTA MATHEMATICA, Vol. 4 (1999), 109-126
DOCUMENTA MATHEMATICA
, Vol. 4 (1999), 109-126
R. Weikard
On Rational and Periodic Solutions of Stationary KdV Equations
Stationary solutions of higher order KdV equations play an important role for the study of the KdV equation itself. They give rise to the coefficients of the associated Lax pair $(P,L)$ for which $P$
and $L$ have an algebraic relationship (and are therefore called algebro-geometric). This paper gives a sufficient condition for rational and simply periodic functions which are bounded at infinity
to be algebro-geometric as those potentials of $L$ for which $Ly=zy$ has only meromorphic solutions. It also gives a new elementary proof that this is a necessary condition for any meromorphic
function to be algebro-geometric.
1991 Mathematics Subject Classification: 35Q53, 34A20, 58F07
Keywords and Phrases: KdV equation, algebro-geometric solutions of integrable systems, meromorphic solutions of linear differential equations
Full text: dvi.gz 29 k, dvi 74 k, ps.gz 143 k.
Home Page of DOCUMENTA MATHEMATICA
|
{"url":"http://www.emis.de/journals/DMJDMV/vol-04/04.html","timestamp":"2014-04-21T14:44:01Z","content_type":null,"content_length":"1723","record_id":"<urn:uuid:ffb8ebb9-c705-4da4-9bf9-3c8757d70adf>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00102-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Local tail bounds for functions of independent random variables
Luc Devroye and Gábor Lugosi
Annals of Probability Volume to appear, , 2006.
It is shown that functions defined on $\{0,1,\ldots,r-1\}^n$ satisfying certain conditions of bounded differences that guarantee subgaussian tail behavior also satisfy a much stronger ``local''
subgaussian property. For self-bounding and configuration functions we derive analogous locally subexponential behavior. The key tool is Talagrand's (1994) variance inequality for functions defined
on the binary hypercube which we extend to functions of uniformly distributed random variables defined on $\{0,1,\ldots,r-1\}^n$ for $r\ge 2$.
PDF - Requires Adobe Acrobat Reader or other PDF viewer.
|
{"url":"http://eprints.pascal-network.org/archive/00002876/","timestamp":"2014-04-19T17:05:38Z","content_type":null,"content_length":"6845","record_id":"<urn:uuid:1ed07e37-8a79-4f68-985e-380166730dcb>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00509-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The falling body
October 15th 2007, 06:00 PM
The falling body
In the last of my relatively hard problems to figure out, this one applies to a falling body.
ln(mg / mg-cv) = (c/m) t
I need to solve for v(t) and what happens as t-->∞
Thanks, and yes im also new to ln so im a little embarassed if this is really easy:(. Thanks
October 15th 2007, 06:33 PM
$\ln \left( \frac {mg}{mg - cv} \right) = \frac cmt$
$\Rightarrow \frac {mg}{mg - cv} = e^{\frac cmt}$
can you continue?
October 16th 2007, 09:48 AM
Ok thanks again jhevon. What i came up with looks...messy.
V(t) = -1/C (mg/ e^((c/m)t) - mg)
And if this is right, i have no idea what this looks like as it approaches infinity...
October 16th 2007, 09:59 AM
$v = \frac{mg}{c} - \frac{mg}{c}e^{-ct/m}$
So as t approaches infinity, the argument of the exponential becomes very large and negative.
$\lim_{x \to \infty}e^{-x} =$?
October 16th 2007, 10:38 AM
Dan, is that equation what you got or is it written version of my answer? because mine is a little bit different, i have the e^((c/m)t) in the denominator. Maybe i forgot a set of parenthesis...
V(t) = -1/C ((mg/ e^((c/m)t)) - mg)..
OR = {-mg / Ce^[(c/m)t]} + {mg / C}
I could take a pic of my work if that makes things easier...
October 16th 2007, 01:50 PM
Dan, is that equation what you got or is it written version of my answer? because mine is a little bit different, i have the e^((c/m)t) in the denominator. Maybe i forgot a set of parenthesis...
V(t) = -1/C ((mg/ e^((c/m)t)) - mg)..
OR = {-mg / Ce^[(c/m)t]} + {mg / C}
I could take a pic of my work if that makes things easier...
Recall that $a^{-b} = \frac{1}{a^b}$, so our expressions are the same. Unless you meant to write
$v(t) = -\frac{1}{c} \cdot \frac{mg}{e^{ct/m} - mg}$
(which would be incorrect.)
Either way the question you need to answer is the same. I'm asking you for the value of
$\lim_{x \to \infty} e^{-x} = \lim_{x \to \infty} \frac{1}{e^x}$
(This last form might be easier for you to consider.)
October 16th 2007, 03:58 PM
Ah yes, that is the same answer i got, thank you!
And as for the limit...i would guess that as t approaches infinity, that thew function would get infinitely small as its in the denominator....Although with the final equation, its hard for me to
picture this in my head...:confused:
October 16th 2007, 04:05 PM
Ah yes, that is the same answer i got, thank you!
And as for the limit...i would guess that as t approaches infinity, that thew function would get infinitely small as its in the denominator....Although with the final equation, its hard for me to
picture this in my head...:confused:
Does this help?
$v = \frac{mg}{c} - \frac{mg}{c}e^{-ct/m}$
$\lim_{t \to \infty} \left ( \frac{mg}{c} - \frac{mg}{c}e^{-ct/m} \right ) = \frac{mg}{c} - \frac{mg}{c} \cdot 0 = \frac{mg}{c}$
|
{"url":"http://mathhelpforum.com/calculus/20667-falling-body-print.html","timestamp":"2014-04-18T12:31:13Z","content_type":null,"content_length":"13782","record_id":"<urn:uuid:794cb717-181b-409e-8662-c64eca36c7df>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00573-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Matrix chain multiplication question
Re: Matrix chain multiplication question
Okay I got the definitions of your 3 matrices.
What is M[2,3]?
You want the dimensions of A1 x A2 x A3?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
|
{"url":"http://www.mathisfunforum.com/viewtopic.php?id=19320","timestamp":"2014-04-19T09:48:49Z","content_type":null,"content_length":"11726","record_id":"<urn:uuid:0a611143-2275-4f4a-b331-ed30400d11a9>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00130-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Numpy-discussion] Sine Distribution on a 2D Array
Ian Mallett geometrian@gmail....
Sat Jul 24 23:38:36 CDT 2010
So I have a square 2D array, and I want to fill the array with sine values.
The values need to be generated by their coordinates within the array.
The center of the array should be treated as the angle 90 degrees. Each of
the four edges should be 0 degrees. The corners, therefore, ought to be
-sqrt(2)*90 degrees. The angle is equal to
(distance_from_center/(dimension_of_array/2))*90 degrees. Then take the
sine of this angle.
To describe another way, if the array is treated like a height-field, a
single mound of the sine wave should just fit inside the array.
Right now, I'm having trouble because I don't know how to operate on an
array's values based on the index of the values themselves.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.scipy.org/pipermail/numpy-discussion/attachments/20100724/99f4e677/attachment.html
More information about the NumPy-Discussion mailing list
|
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2010-July/051837.html","timestamp":"2014-04-17T18:38:30Z","content_type":null,"content_length":"3560","record_id":"<urn:uuid:7d862980-e873-4412-816a-444d52431d35>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00346-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Intelligent flight control of an autonomous quadrotor
Intelligent Flight Control
of an Autonomous Quadrotor
Syed Ali Raza and Wail Gueaieb
University of Ottawa,
1. Introduction
This chapter describes the different steps of designing, building, simulating, and testing an
intelligent flight control module for an increasingly popular unmanned aerial vehicle
(UAV), known as a quadrotor. It presents an in-depth view of the modeling of the
kinematics, dynamics, and control of such an interesting UAV. A quadrotor offers a
challenging control problem due to its highly unstable nature. An effective control
methodology is therefore needed for such a unique airborne vehicle.
The chapter starts with a brief overview on the quadrotor's background and its applications,
in light of its advantages. Comparisons with other UAVs are made to emphasize the
versatile capabilities of this special design. For a better understanding of the vehicle's
behavior, the quadrotor's kinematics and dynamics are then detailed. This yields the
equations of motion, which are used later as a guideline for developing the proposed
intelligent flight control scheme.
In this chapter, fuzzy logic is adopted for building the flight controller of the quadrotor. It
has been witnessed that fuzzy logic control offers several advantages over certain types of
conventional control methods, specifically in dealing with highly nonlinear systems and
modeling uncertainties. Two types of fuzzy inference engines are employed in the design of
the flight controller, each of which is explained and evaluated.
For testing the designed intelligent flight controller, a simulation environment was first
developed. The simulations were made as realistic as possible by incorporating
environmental disturbances such as wind gust and the ever-present sensor noise. The
proposed controller was then tested on a real test-bed built specifically for this project. Both
the simulator and the real quadrotor were later used for conducting different attitude
stabilization experiments to evaluate the performance of the proposed control strategy. The
controller's performance was also benchmarked against conventional control techniques
such as input-output linearization, backstepping and sliding mode control strategies.
Conclusions were then drawn based on the conducted experiments and their results.
1.1 Quadrotor background
Louis Bréguet and Jacques Bréguet, two brothers working under the guidance of Professor
Charles Richet, were the first to construct a quadrotor, which they named Bréguet Richet
Gyroplane No. 1 Breguet-Richet-1907. The first flight demonstration of Gyroplane No. 1
Source: Motion Control, Book edited by: Federico Casolo,
ISBN 978-953-7619-55-8, pp. 580, January 2010, INTECH, Croatia, downloaded from SCIYO.COM
246 Motion Control
with no control surfaces was achieved on 29 September 1907. Figure 1 shows the huge
quadrotor with double layered propellers being prepared for its first manned flight.
Fig. 1. Bréguet Richet Gyroplane No. 1 Rumerman (2002).
Later, two additional designs were developed and experimental flights were conducted. The
first, by Georges de Bothezat and Ivan Jerome in 1922, had six-bladed rotors placed at each
end of an X-shaped truss structure, as shown in Figure 2.
Fig. 2. Quadrotor designed by George De Bothezat, February 21, 1923 Rumerman (2002).
The second, shown in Figure 3, was built by Étienne Œ hmichen in 1924, and set distance
records, including achieving the first kilometer long helicopter flight.
Fig. 3. Œ hmichen quadrotor designed in 1924 Rumerman (2002).
At present, apart from military endeavours, UAVs are also being employed in various
commercial and industrial applications. In particular, these include the use of unmanned
helicopters for crop dusting or precision farming Sugiura et al. (2003), and microwave
Intelligent Flight Control of an Autonomous Quadrotor 247
autonomous copter systems for geological remote sensing Archer et al. (2004). STARMAC
Waslander et al. (2005) is a multi-agent autonomous rotorcraft, which has potential in
security-related tasks, such as remote inspections and surveillance. The commercially
available quadrotor kit called DraganFlyer Inc. (2008) has become a popular choice for aerial
mapping and cinematography.
UAVs are subdivided into two general categories, fixed wing UAVs and rotary wing UAVs.
Rotary winged crafts are superior to their fixed wing counterparts in terms of achieving
higher degree of freedom, low speed flying, stationary flights, and for indoor usage. A
quadrotor, as depicted in Figure 4, is a rotary wing UAV, consisting of four rotors located at
the ends of a cross structure. By varying the speeds of each rotor, the flight of the quadrotor
is controlled. Quadrotor vehicles possess certain essential characteristics, which highlight
their potential for use in search and rescue applications. Characteristics that provide a clear
advantage over other flying UAVs include their Vertical Take Off and Landing (VTOL) and
hovering capability, as well as their ability to make slow precise movements. There are also
definite advantages to having a four rotor based propulsion system, such as a higher
payload capacity, and impressive maneuverability, particularly in traversing through an
environment with many obstacles, or landing in small areas.
As illustrated by the conceptual diagram in Figure 4, the quadrotor attitude is controlled by
varying the rotation speed of each motor. The front rotor (Mf) and back rotor (Mb) pair
rotates in a clockwise direction, while the right rotor (Mr) and left rotor (Ml) pair rotates in a
counter-clockwise direction. This configuration is devised in order to balance the drag
created by each of the spinning rotor pairs. Figure 5 shows the basic four maneuvers that
can be accomplished by changing the speeds of the four rotors. By changing the relative
speed of the right and left rotors, the roll angle of the quadrotor is controlled. Similarly, the
pitch angle is controlled by varying the relative speeds of the front and back rotors, and the
yaw angle by varying the speeds of clockwise rotating pair and counter-clockwise rotating
pair. Increasing or decreasing the speeds of all four rotors simultaneously controls the
collective thrust generated by the robot. A roll motion can be achieved while hovering by
increasing the speed of the right rotor, while decreasing the speed of the left rotor by the
same amount. Hence, the overall thrust is kept constant.
Fig. 4. Conceptual diagram of a quadrotor.
248 Motion Control
Fig. 5. Quadrotor dynamics.
In the past few years, much research has already been conducted on the modeling and
control of quadrotors. Many control techniques, as summarized in Table 1, are proposed in
the literature, however, excluding STARMAC, their primary focus is mostly for indoor flight
control and therefore do not account for uncertainties and external disturbances. Lyapunov
stability theory is used for stabilization and control of the quadrotor in Bouabdallah et al.
(2004a) and Dzul et al. (2004). Conventional PD2 feedback and PID structures are used for
simpler implementation of control laws, and comparison with LQR based optimal control
theory is presented in Tayebi and McGilvray (2006) and Bouabdallah et al. (2004b).
Backstepping control is also proposed with the drawback of higher computational loads in
Guenard et al. (2005). Visual feedback is applied in many cases, using onboard or offboard
cameras for pose estimation by Altug et al. (2002) and Guenard et al. (2008). Fuzzy logic
control techniques have also been proposed Coza and Macnab (2006), along with neural
networks Tarbouchi et al. (2004) and reinforcement learning Waslander et al. (2005).
Many quadrotor test-beds have been constructed in different research projects, where
simulators are also developed for testing the control laws beforehand. In Kivrak (2006), LQR
is used for attitude stabilization of a commercially available Draganflyer Vti quadrotor
model in MATLAB Simulink. In another project, the modeling, design, and control of a
Miniature Flying Robot (MFR), named OS4 was accomplished Bouabdallah (2007), where a
mathematical model was developed for the simulation and control of a mini quadrotor
using linear and nonlinear control methods.
2. Quadrotor's kinematics and dynamics
Mathematical modelling provides a description of the behaviour of a system. The flight
behaviour of a quadrotor is determined by the speeds of each of the four motors, as they
vary in concert, or in opposition with each other. Hence, based on its inputs, a mathematical
representation of the system can be used to predict the position and orientation of the
quadrotor. The same can further be used to develop a control strategy, whereby
manipulating the speeds of individual motors results in achieving the desired motion.
Intelligent Flight Control of an Autonomous Quadrotor 249
Table 1. Quadrotor flight control techniques used in various projects.
To derive the full mathematical model of the quadrotor, we need to define its kinematics
and dynamics first. The kinematic equations provide a relation between the vehicle's
position and velocity, whereas the dynamic model defines the relation governing the
applied forces and the resulting accelerations.
2.1 Reference frames
Before getting into the equations of kinematics and dynamics of the quadrotor, it is
necessary to specify the adopted coordinate systems and frames of reference, as well as how
transformations between the different coordinate systems are carried out.
250 Motion Control
The use of different coordinate frames is essential for identifying the location and attitude of
the quadrotor in six degrees of freedom (6 DOF). For example, in order to evaluate the
equations of motion, a coordinate frame attached to the quadrotor is required. However, the
forces and moments acting on the quadrotor, along with the inertial measurement unit
(IMU) sensor values, are evaluated with reference to the body frame. Finally, the position
and speed of the quadrotor are evaluated using GPS measurements with respect to an
inertial frame located at the base station.
Thus, three main frames of reference are adopted, as shown in Figure 6:
1. The inertial frame, , is an earth-fixed coordinate system with the origin
located on the ground, for example, at the base station. By convention, the x-axis points
towards the north, the y-axis points towards the east, and the z-axis points towards the
center of the earth.
2. The body frame , with its origin located at the center of gravity (COG)
of the quadrotor, and its axes aligned with the quadrotor structure such that the x-axis
is along the arm with front motor, the y-axis is along the arm with right motor,
and the z-axis , where ‘x ’ denotes the cross product.
3. The vehicle frame, , is the inertial frame with the origin located at the
COG of the quadrotor. The vehicle frame has two variations, Fφ and Fθ. Fφ is the vehicle
frame, Fv, rotated about its z-axis by an angle ψ so that and are aligned with
and , respectively. Fθ is frame Fφ rotated about its y-axis, , by a pitching angle, θ,
such that and are aligned with and , respectively.
Fig. 6. The inertial, body and vehicle frames of reference.
Translation and rotation matrices are used to transform one coordinate reference frame into
another desired frame of reference. For example, the transformation from Fi to Fv provides
the displacement vector from the origin of the inertial frame to the center of gravity (COG)
of the quadrotor. Also, the transformation from Fv to Fb is rotational in nature, therefore
yielding the roll, pitch and yaw angles.
Intelligent Flight Control of an Autonomous Quadrotor 251
2.2 Quadrotor's kinematics
Let and denote the quadrotor's position and orientation
within a given frame F. The relation between the quadrotor's speed in the three predefined
frames is expressed as
Where is the rotation matrix that maps frame Fb to frame Fv and is defined by
with sθ = sinθ and cθ = cosθ. The same notation applies for sφ, cφ, sψ, and cψ.
The rotational motion relationship can therefore be derived using the appropriate state
variables, such as the vehicle frame angles (φ, θ, and ψ) and the body frame angular rate
( , and ). However, in order to do so, these variables need to be brought into one
common frame of reference. Using rotation matrices to transform vehicle frames Fφ, Fθ, and
Fv into the body frame of reference Fb, we get
where and .
It follows that,
Equations (1) and (2) represent the quadrotor’s equations of motion.
2.3 Quadrotor’s dynamics
To build the dynamic model of the quadrotor we will use Newton-Euler formalism, while
adopting the following assumptions:
1. The quadrotor structure is a rigid body.
2. The quadrotor frame is symmetrical.
252 Motion Control
3. The COG of the quadrotor coincides with the center of the rigid frame.
The moment of inertia is calculated by assuming the quadrotor as a central sphere of radius
r and mass Mo surrounded by four point masses representing the motors. Each motor is
supposed to have a mass m and attached to the central sphere through an arm of length l, as
shown in Figure 7.
Fig. 7. Moment of inertia.
Due to the symmetry of the quadrotor about all three axes, its inertial matrix becomes
symmetrical and is defined by
The dynamics of the quadrotor under external forces applied to its COG and expressed in
the body frame is derived by applying Newton-Euler formulation Beard (2008)
where M is the quadrotor’s total mass, and FT = [fx fy fz] and are the external
force and torque vectors applied on the quadrotor’s COG. The terms , , and are the
roll, pitch and yaw torques respectively.
Thus, the translational dynamic model can be written as
while the rotational model is
Intelligent Flight Control of an Autonomous Quadrotor 253
2.4 Aerodynamic forces and torques
With the derived kinematic and dynamic model, we will now define the forces and torques
acting on the quadrotor. The forces include the aerodynamic lift generated by each rotor,
and the gravitational pull acting in counter to the total lift generated. The moments are the
torques generated in order to achieve the roll, pitch and yaw movements. The following
forces and torques are produced:
(a) (b)
(c) (d)
Fig. 8. Forces and moments acting on the quadrotor: (a) Quadrotor thrust; (b) Rolling torque;
(c) Pitching torque; and (d) Yawing torque.
Upward Force (Thrust): The total quadrotor thrust is the sum of the thrust produced by
each propeller, as depicted in Figure 8(a):
Rolling Torque: This is the torque produced by increasing the left rotor’s thrust while
decreasing that of the right rotor, or vice versa, as shown in Figure 8(b):
Pitching Torque: The pitching torque in Figure 8(c) is produced by increasing the front
rotor’s thrust while decreasing that of the back rotor, or vice versa:
Yawing Torque: The yawing torque is the result of all four individual torques generated
due to the spinning rotors. The front and back rotors spin in the clockwise direction, while
the left and right rotors spin in the counterclockwise direction. As shown in Figure 8(d), an
imbalance between these two pairs results in a yawing torque causing the quadrotor to
rotate about its z-axis:
254 Motion Control
Gravitational Force (weight): Along with the other forces, the gravitational force acts on the
COG of the quadrotor. In the vehicle frame this force is expressed as
with g being the gravitational constant. Therefore, in the body frame, the weight can be
written as
Including the forces and torques acting on the system, the equations of motion become as
defined below.
3. Flight controller design
This section details the development of a fuzzy logic flight controller for the quadrotor. A
generalized overview of fuzzy logic control and the advantages it offers for nonlinear
control applications are presented. Based on the dynamics and kinematics derived in the
previous section, the autonomous flight control strategy is thereby introduced. The
proposed fuzzy logic controller is implemented with two types of inference engines for
3.1 Fuzzy logic control
Since its inception in Zadeh (1965), fuzzy logic has been applied to various fields of
engineering, manufacturing, business, and medicine, among others. Within the area of
engineering, control systems offer significant applications for fuzzy logic, designated as
fuzzy logic control. Before getting into details with regards to fuzzy logic control, we would
first like to provide some basic facts about fuzzy systems.
Fuzzy logic control offers a great advantage over some conventional control methods which
heavily depend on the exact mathematical model of the control system, specifically in
dealing with nonlinear systems subjected to various types of uncertainties. Being
independent of the plant’s parameters sets fuzzy controllers apart from their conventional
counterparts. Fuzzy controllers in general can be designed intuitively in light of the
Intelligent Flight Control of an Autonomous Quadrotor 255
knowledge acquired on the behavior of the system in hand. This knowledge is often gained
through experience and common sense, regardless of the mathematical model of the
dynamics governing this behavior. For example, in learning how to ride a bike, humans try
to build a set of common sense rules and learn from their failures without paying any
attention to the dynamic model of the bike. Fuzzy logic control tries to mimic this type of
human-like reasoning and embrace it within a pre-defined mathematical model to automate
the control of complex systems characterized by ill-defined mathematical models, for
3.2 Flight control algorithm
The quadrotor is an under-actuated system with four actuators controlling its six degrees-of-
freedom position/orientation. The flight controller is responsible for achieving two
challenging goals simultaneously: (i) controlling the quadrotor’s position, while (ii)
stabilizing its attitude, i.e., orientation (roll, pitch and yaw angles). More specifically, given a
desired position (px, py, pz) and yaw angle ψ, the goal is to design a controller to force these
control states to converge to their respective desired values, while maintaining the pitch and
roll angles as close to zero as possible.
Let PWMmot denote the PWM value of motor mot ∈ {f, r, b, l} for the front, right, back, and
left motors, respectively. Then, the thrust and torque applied on the quadrotor by motor mot
can be expressed as
where KT and Kτ are motor-dependent parameters. This yields
The above equations provide a basic understanding of how the angular speed of each motor
contribute to the overall thrust and torques exerted on the quadrotor. This knowledge will
serve as a guideline in developing a rule base of the direct fuzzy logic controller, as depicted
in Figure 9.
Three fuzzy controllers are designed to control the quadrotor’s roll (φ), pitch (θ) and yaw (ψ)
angles, denoted by FLCφ, FLCθ, and FLCψ, respectively, with the former two serving as
attitude stabilizers. Three fuzzy controllers, FLCx, FLCy and FLCz, are further designed to
control the quadrotor’s position. All six fuzzy controllers have identical inputs, (i) the error
, which is the difference between the desired signal (.)d and its actual
256 Motion Control
value (.), and (ii) the error rate e . The first input (error) is normalized to the interval [−1,+1],
while the second (error rate) is normalized to the interval [−3,+3].
Fig. 9. Control scheme.
Fig. 10. Flight controller block diagram.
In this control strategy, the desired pitch and roll angles, θd and φd, are not explicitly
provided to the controller. Instead, they are continuously anticipated by controllers FLCx
FLCy in such a way that they stabilize the quadrotor’s attitude. The input and output
membership functions (Figure 11) of each FLC are tuned empirically and are finalized as
A unified rule base comprising nine IF-THEN rules is developed and is presented in Table 2.
Intelligent Flight Control of an Autonomous Quadrotor 257
(a) Input variable error e. (b) Input variable error rate e . (c) Output variable U.
Fig. 11. Input and output membership functions.
Table 2. The rule base of the fuzzy controller.
For it to be modular and independent of the quadrotor’s parameters, the fuzzy logic
controllers are bounded by pre-processing and post-processing blocks (Figure 9). The pre-
processing module calculates the error e and error rate e and normalizes them to the
intervals [−1,+1] and [−3,+3], respectively. The post-processing block uses the controllers
output signals to calculate the PWM value of each motor as follows:
where ‘Offset’ is a priori-defined bias to counter balance the weight of the quadrotor. The
resultant PWM values are saturated to a maximum threshold that depends on the maximum
possible speed of the motors used.
It is important to note that this control scheme does not depend on the kinematic and
dynamic equations derived in section 2. Those equations are only used to build the
quadrotor’s model in the simulator, which would be unnecessary with a real quadrotor.
Being independent of the plant’s parameters sets the fuzzy controllers apart from
conventional control systems, which depend in one way or the other on the plant’s
mathematical model. The fuzzy controllers are designed in light of the knowledge acquired
on the quadrotor’s behavior and from its dynamic model. Therefore, changing the quadrotor
or some of its physical parameters like the mass and inertia does not require redesigning the
fuzzy logic controller. Instead, the postprocessing module may need to be fine-tuned to
optimize the controller’s performance, such as to calibrate the offset, for instance.
Two different fuzzy inference engines are implemented: (i) a Mamdani, and (ii) a Takagi-
Sugeno-Kang (TSK) fuzzy model. The Mamdani fuzzy inference method uses a min-max
operator for the aggregation and the centroid of area method for defuzzification. One
known problem with this type of controller is the high computational burden associated to
it, especially when implemented on an embedded system. To alleviate this problem, a zero
258 Motion Control
order TSK fuzzy inference engine is implemented for comparison. In this model, the output
membership functions of the Mamdani fuzzy controller are replaced with three fuzzy
singletons N = −1, Z = 0 and P = +1.
4. Numerical and experimental results
To test the proposed fuzzy logic flight controller with both inference engines and study their
performances, a simulation environment is first developed. After satisfactory performance
results are attained, the controller is implemented on a quadrotor testbed. The details of the
quadrotor simulator and the real-world test-bed are presented in the following subsections.
4.1 Simulation results
The quadrotor simulator is implemented in MATLAB Simulink, as shown in Figure 12. The
equations of motion derived earlier are used to model the quadrotor. The inputs to the
quadrotor are taken as the four PWM speed values of the motors. To make the simulations
more realistic, sensory noise and environmental disturbances such as wind are also taken
into account. Medium wind gust speeds are generated based on real data from Canada
Weather Statistics, Statistics (2009). The wind disturbance is incorporated as two further
inputs representing the north and east wind condition. The quadrotor model outputs are the
linear and angular accelerations that are integrated twice to obtain the position and
orientation vectors. The angular accelerations , , and are degraded with a white noise
and then used as a feedback to the fuzzy controller.
Fig. 12. MATLAB Simulink block diagram of the quadrotor simulator.
The values used for the quadrotor’s dynamic parameters are: M = 0.765 Kg, l = 0.61 m, Jx =
Jy = 0.08615 Kg·m2, Jz = 0.1712 Kg·m2, KT = 5.45, Ktau = 0.0549, Offset = Mg/(4KT ) = 0.344.
Intelligent Flight Control of an Autonomous Quadrotor 259
The six identical fuzzy controllers are developed using the MATLAB Fuzzy Logic Toolbox.
The input and output variables with membership functions described earlier are set
accordingly. The fuzzy controllers are used in a configuration as shown in Figure 10. The
inputs and outputs of the flight controller are pre- and post-processed, respectively.
The environmental disturbances are introduced such that a white noise is added to the
angular accelerations , , and for emulating the inertial measurement unit (IMU) sensor.
The IMU signals are further processed through rate transitions to incorporate the ADC
sampling rate.
The user-defined inputs are the desired translatory coordinates with respect to the
inertial frame, and the desired yaw angle ψ. The desired pitch and roll angles are implicitly
set to zero to achieve attitude stabilization.
Three simulations are conducted to test the performance of the proposed fuzzy logic flight
controllers with both inference engines. The system’s initial states are set to zero, while the
desired quadrotor’s position and orientation are set to = [10, 10, 25] m and = [0, 0, 30]
degrees in all three simulations. The purpose of the simulation is to assess the performance
of the fuzzy logic controller and compare the accuracy of the two fuzzy inference engines
under different disturbance conditions. The first simulation is run without any disturbances.
In the second simulation, the controller is subjected to sensor noise. In the third simulation,
it is subjected to sensor noise and medium north-east wind gust of 10 m/s.
The simulation results presented in Figures 13 and 14, demonstrate the satisfactory
performance of the proposed controller despite the presence of sensor noise and wind
disturbances. The Mamdani fuzzy controller converges to the desired states relatively faster
than its TSK counterpart. The yaw angle drift under wind disturbance is clearly visible with
the TSK controller.
(a) (b) (c)
(d) (e) (f)
Fig. 13. Simulation results of the Mamdani controller. Quadrotor states: (a) x-axis; (b) y-axis;
(c) z-axis (altitude); (d) pitch (θ); (e) roll (φ); and (f) yaw (ψ).
260 Motion Control
(a) (b) (c)
(d) (e) (f)
Fig. 14. Simulation results of the TSK controller. Quadrotor states: (a) x-axis; (b) y-axis; (c) z-
axis (altitude); (d) pitch (θ); (e) roll (φ); and (f) yaw (ψ).
4.2 Quadrotor test-bed and experimental results
The quadrotor test-bed, as shown in Figure 15, comprises light weight carbon fiber rods of
length 0.61 m connected to a center piece forming the desired cross structure for the frame.
The four propulsion units are made of brushless DC motors connected to electronic speed
controllers to provide actuation. Different propellers are also tested to find the optimal
thrust to power consumption ratio. The system is powered by a 2100 mAh, 11.1-Volt lithium
polymer battery. The quadrotor’s specifications are summarized in Table 3.
Fig. 15. The quadrotor test-bed.
The IMU is designed using a triple axis 3g accelerometer ADXL330, a dual axis 500 degree-
per-sec rate gyroscope IDG300 for pitch and roll angle rates. The yaw angle acceleration is
measured using a single axis 300 degree-per-sec ADXRS300 gyroscope. The bandwidths of
all the sensors are trimmed to 10 Hz. The MEMS inertial sensors are combined with
complementary filtering techniques for the online estimation of the attitude angles.
Intelligent Flight Control of an Autonomous Quadrotor 261
Table 3. The quadrotor test-bed component summary.
The processing is accomplished with a 34-gram Axon board based on ATmega640
microcontroller running at 16 MHz clock. The IMU data is sampled using an onboard 10-bit
ADC. The proposed flight control is implemented using the TSK fuzzy controllers converted
to a look-up table for a higher computational efficiency. The flight controller, elaborated in
Algorithm 1, is set to operate at a bandwidth of 100 Hz.
262 Motion Control
The first experiment is designed to test the quadrotor’s hovering and attitude stabilization
capabilities. So, the desired position is pre-defined as the quadrotor’s current position.
Both, experimental and simulation results are reported in Figure 16. In simulation, the
controller managed to keep the pitch and roll angles within the interval of [−3,+4] degrees.
However, in reality, the errors of these angles were fluctuating between −8 and +7 degrees
for the pitch, and −6 and +12 degrees for the roll. The main difference between the
simulation and experimental results stems from the vibration of the frame from which the
test-bed was made. In addition, the 0.61-m long carbon fiber arms were bending and
twisting when excited by the motors. This put the fuzzy logic controller to a higher
challenge than what was originally anticipated. Yet, it was successful in forcing the pitch
and roll angles to within an acceptable range.
(a) (b)
(c) (d)
Fig. 16. Simulation and experimental attitude stabilization results: (a) simulator pitch (θ); (b)
simulator roll (φ); (c) test-bed pitch (θ); and (d) test-bed roll (φ).
In the second experiment, the controller was tested under harsher conditions so as to
evaluate its behavior if the quadrotor collides into an obstacle, or if it is faced with other
types of disturbances. Hence, the previous experiment was repeated, but this time one of the
quadrotor’s arms is abruptly tapped in the middle of the flight. The results are shown in
Figure 17. As can be seen, the controller was able to quickly bring the pitch and roll angle
errors back to within a safe range.
Intelligent Flight Control of an Autonomous Quadrotor 263
Fig. 17. Experimental results of attitude stabilization under external disturbances: (a) test-
bed pitch θ); and (b) test-bed roll (φ).
5. Conclusion and future directions
This chapter addressed the problem of autonomous flight control of a quadrotor UAV. It
was conducted as a research project at the School of Information Technology and
Engineering (S.I.T.E.), University of Ottawa. Detailed mathematical modeling of the
quadrotor’s kinematics and dynamics was provided. A modular fuzzy logic approach was
proposed for the autonomous control of quadrotors in general, without the need for a
precise mathematical model of their complex and ill-defined dynamics. The fuzzy technique
was implemented through Mamdani and TSK inference engines for comparison purposes.
The controller comprises six individual fuzzy logic modules designated for the control of the
quadrotor’s position and orientation. The investigation on the two types of control
methodologies was conducted in a simulator environment, where disturbances such as
wind conditions and sensor noise were incorporated for a more realistic simulation. The
fuzzy flight controller was eventually implemented on a quadrotor test-bed specifically
designed and built for the project.
The experiments were first conducted on the simulator before being validated on the test-
bed. The results demonstrated a successful control performance especially with the
Mamdani inference engine. When compared to other conventional techniques applied for a
similar purpose Altug et al. (2002), Bouabdallah et al. (2004b), the proposed methodology
showed a higher robustness despite the induced disturbances.
The future work is directed towards achieving fully autonomous flight in outdoor
environments. Furthermore, adaptive fuzzy control techniques will be investigated to
automatically tune some of the controller’s parameters online, to further optimize its
6. References
E. Altug, J.P. Ostrowski, and R. Mahony. Control of a quadrotor helicopter using visual
feedback. In Proceedings of the 2002 IEEE International Conference on Robotics and
Automation, pages 72–77, 2002.
F. Archer, A. Shutko, T. Coleman, A. Haldin, E. Novichikhin, and I. Sidorov. Introduction,
overview, and status of the microwave autonomous copter system MACS.
Proceedings of the 2004 IEEE International Geoscience and Remote Sensing Symposium,
5:3574–3576, Sep 2004.
264 Motion Control
Randal W. Beard. Quadrotor dynamics and control. Brigham Young University, February 19
2008. URL http://www.et.byu.edu/groups/ece490quad/control/quadrotor.pdf.
lecture notes.
S. Bouabdallah. Design and control of quadrotors with application to autonomous flying.
Master’s thesis, Swiss Federal Institute of Technology, 2007.
S. Bouabdallah, P. Murrieri, and R. Siegwart. Design and control of an indoor micro quadrotor.
In Proceedings of the International Conference on Robotics and Automation, 2004a.
S. Bouabdallah, A. Noth, and R. Siegwart. PID vs LQ control techniques applied to an indoor
micro quadrotor. In International Conference on Intelligent Robots and Systems, 2004b.
M. Chen and M. Huzmezan. A combined MBPC/ 2 DOF H∞ controller for a quad rotor
UAV. In AIAA Guidance, Navigation, and Control Conference and Exhibit, 2003.
C. Coza and C.J.B. Macnab. A new robust adaptive-fuzzy control method applied to
quadrotor helicopter stabilization. In Annual meeting of the North American Fuzzy
Information Processing Society, pages 454–458, 2006.
A. Dzul, P. Castillo, and R. Lozano. Real-time stabilization and tracking of a four rotor mini
rotorcraft. IEEE Transaction on Control System Technology, 12(4): 510–516, 2004.
S. G. Fowers. Stabilization and control of a quad-rotor micro-UAV using vision sensors.
Master’s thesis, Brigham Young University, August 2008.
N. Guenard, T. Hamel, and V. Moreau. Dynamic modeling and intuitive control strategy for
an X4-flyer. In International Conference on Control and Automation, pages 141–146,
N. Guenard, T. Hamel, and R. Mahony. A practical visual servo control for an unmanned
aerial vehicle. IEEE Transactions on Robotics, 24(2):331–340, 2008.
S. D. Hanford. A small semi-autonomous rotary-wing unmanned air vehicle. Master’s
thesis, Pennsylvania State University, December 2005.
Draganfly Innovations Inc. Industrial aerial video systems and UAVs, October 2008. URL
A.Ő. Kivrak. Design of control systems for a quadrotor flight vehicle equipped with inertial
sensors. Master’s thesis, The Graduate School of Natural and Applied Sciences of
Atilim University, December 2006.
J. Rumerman. Helicopter development in the early twentieth century. [Online] Available,
December 2002. URL www.centennialofflight.gov.
Canada Weather Statistics. Courtesy of Environment Canada. [Online] Available, February
2009. http://www.weatherstats.ca/.
R. Sugiura, T. Fukagawa, N. Noguchi, K. Ishii, Y. Shibata, and K. Toriyama. Field
information system using an agricultural helicopter towards precision farming.
Proceedings of the 2003 IEEE/ASME International Conference on Advanced Intelligent
Mechatronics, 2:1073–1078, July 2003.
M. Tarbouchi, J. Dunfied, and G. Labonte. Neural network based control of a four rotor
helicopter. In International Conference on Industrial Technology, pages 1543– 1548, 2004.
A. Tayebi and S. McGilvray. Attitude stabilization of a VTOL quadrotor aircraft. IEEE
Transaction on Control System Technology, 14(3):562–571, May 2006.
S. L. Waslander, G. M. Hoffmann, J. S. Jang, and C. J. Tomlin. Multi-agent quadrotor testbed
control design: integral sliding mode vs. reinforcement learning. In International
Conference on Intelligent Robots and Systems, pages 468–473, 2005.
K. W. Weng and M. Shukri. Design and control of a quad-rotor flying robot for aerial
surveillance. 4th Student Conference on Research and Development (SCOReD 2006),
pages 173–177, 2006.
L.A. Zadeh. Fuzzy sets. Information and Control, 8:338–353, 1965.
Motion Control
Edited by Federico Casolo
ISBN 978-953-7619-55-8
Hard cover, 590 pages
Publisher InTech
Published online 01, January, 2010
Published in print edition January, 2010
The book reveals many different aspects of motion control and a wide multiplicity of approaches to the
problem as well. Despite the number of examples, however, this volume is not meant to be exhaustive: it
intends to offer some original insights for all researchers who will hopefully make their experience available for
a forthcoming publication on the subject.
How to reference
In order to correctly reference this scholarly work, feel free to copy and paste the following:
Syed Ali Raza and Wail Gueaieb (2010). Intelligent Flight Control of an Autonomous Quadrotor, Motion
Control, Federico Casolo (Ed.), ISBN: 978-953-7619-55-8, InTech, Available from:
InTech Europe InTech China
University Campus STeP Ri Unit 405, Office Block, Hotel Equatorial Shanghai
Slavka Krautzeka 83/A No.65, Yan An Road (West), Shanghai, 200040, China
51000 Rijeka, Croatia
Phone: +385 (51) 770 447 Phone: +86-21-62489820
Fax: +385 (51) 686 166 Fax: +86-21-62489821
|
{"url":"http://www.docstoc.com/docs/136672568/Intelligent-flight-control-of-an-autonomous-quadrotor","timestamp":"2014-04-20T14:16:05Z","content_type":null,"content_length":"92757","record_id":"<urn:uuid:255ba963-24b3-4b74-801c-a69046f6b001>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00309-ip-10-147-4-33.ec2.internal.warc.gz"}
|