How do I align the UVW_Modifier's Gizmo to a selected face?
fn alignUVGizmo theObj theFace =
(
    -- First, get the face normal vector (shown in blue in the original illustration):
    faceNormal = in coordsys theObj (getFaceNormal theObj theFace)
    -- This is the desired up vector in world space (shown in yellow):
    worldUpVector = [0,0,1]
    -- The cross-product of the world up vector and the face normal is perpendicular to the
    -- plane defined by the two. Normalizing it gives a unit vector pointing to the right
    -- (shown in red):
    rightVector = normalize (cross worldUpVector faceNormal)
    -- Crossing the right vector with the face normal gives the "local up vector": the
    -- projection (the shadow) of the world up vector onto the selected face (shown in green).
    -- It is perpendicular to both the face normal and the right vector, so the three vectors
    -- define the X, Y and Z axes of a new orthogonal coordinate system for the UVW gizmo:
    upVector = normalize (cross rightVector faceNormal)
    -- Using the three vectors, define a matrix3 value which represents the coordinate system
    -- of the gizmo: the right vector is the X axis, the local up vector is the Y axis, and
    -- the face normal is the Z axis:
    theMatrix = matrix3 rightVector upVector faceNormal [0,0,0]
    -- Add a UVW Map modifier to the current selection and assign the new transform to its gizmo:
    theMap = Uvwmap()
    modPanel.addModToSelection theMap ui:on
    theMap.gizmo.transform = theMatrix
)
Quick Question
March 19th 2007, 08:21 PM
How is it that I use the quotient rule for this particular problem and get the correct answer, then check it by splitting it up into 2 terms (since there is only one term in the denominator), yet
my 2nd answer is wrong? I know that I am right using the quotient rule in the 1st half; however, this is the first problem I have run into while checking my work for the lower half. This must be one
of those cases. If you could look over it for me I would greatly appreciate it. Again, the top is right, yet the bottom is giving me a slightly smaller answer.
March 19th 2007, 08:30 PM
Simplify the answer you got for the quotient rule (combine like terms, factor, then reduce). You will get the same answer.
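The original expression isn't reproduced above, but the check works the same way on any quotient with a single-term denominator. For a made-up example, take $f(x) = \dfrac{x^3 + 5x}{x^2}$. The quotient rule gives

$$f'(x) = \frac{(3x^2 + 5)\,x^2 - (x^3 + 5x)(2x)}{x^4} = \frac{x^4 - 5x^2}{x^4} = 1 - \frac{5}{x^2},$$

while splitting first gives

$$f(x) = x + 5x^{-1} \quad\Longrightarrow\quad f'(x) = 1 - 5x^{-2} = 1 - \frac{5}{x^2}.$$

The two answers only look different until the quotient-rule result is simplified.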
Entropy: Heat addition to the surroundings.
If 12007 kJ of heat is lost to the surroundings with an ambient temperature of 25 degrees centigrade during a cooling process, and the ambient temperature of the surroundings is unaffected by the
heat addition, what is the entropy change of the surroundings?
If ΔS = ∫ δQ/T, then ΔS = Q/T = 12007 kJ / (25 + 273.15) K = 40.272 kJ/K.
Is my thinking process here correct?
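To spell out the assumption being made: the surroundings stay at a constant 298.15 K, so they act as a thermal reservoir and the integral reduces to Q/T; the heat lost by the system is the heat gained by the surroundings, so Q_surr is positive and

$$\Delta S_{\text{surr}} = \int \frac{\delta Q}{T_{\text{surr}}} = \frac{Q_{\text{surr}}}{T_{\text{surr}}} = \frac{12007\ \text{kJ}}{298.15\ \text{K}} \approx 40.27\ \text{kJ/K}.$$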
Interval arithmetic in matrix computation
Results 1 - 10 of 25
- SIAM Journal on Numerical Analysis , 1997
"... This paper presents Newton, a branch & prune algorithm to find all isolated solutions of a system of polynomial constraints. Newton can be characterized as a global search method which uses
intervals for numerical correctness and for pruning the search space early. The pruning in Newton consists in ..."
Cited by 101 (7 self)
This paper presents Newton, a branch & prune algorithm to find all isolated solutions of a system of polynomial constraints. Newton can be characterized as a global search method which uses intervals
for numerical correctness and for pruning the search space early. The pruning in Newton consists in enforcing at each node of the search tree a unique local consistency condition, called
box-consistency, which approximates the notion of arc-consistency well-known in artificial intelligence. Box-consistency is parametrized by an interval extension of the constraint and can be
instantiated to produce the Hansen-Segupta's narrowing operator (used in interval methods) as well as new operators which are more effective when the computation is far from a solution. Newton has
been evaluated on a variety of benchmarks from kinematics, chemistry, combustion, economics, and mechanics. On these benchmarks, it outperforms the interval methods we are aware of and compares well
with state-of-the-art continuation methods. Limitations of Newton (e.g., a sensitivity to the size of the initial intervals on some problems) are also discussed. Of particular interest is the
mathematical and programming simplicity of the method.
, 1998
"... A classical circuit-design problem from Ebers and Moll [6] features a system of nine nonlinear equations in nine variables that is very challenging both for local and global methods. This system
was solved globally using an interval method by Ratschek and Rokne [23] in the box [0; 10]^9. Their ..."
Cited by 21 (1 self)
A classical circuit-design problem from Ebers and Moll [6] features a system of nine nonlinear equations in nine variables that is very challenging both for local and global methods. This system was
solved globally using an interval method by Ratschek and Rokne [23] in the box [0; 10]^9. Their algorithm had enormous costs (i.e., over 14 months using a network of 30 Sun Sparc-1 workstations) but
they state that "at this time, we know no other method which has been applied to this circuit design problem and which has led to the same guaranteed result of locating exactly one solution in this
huge domain, completed with a reliable error estimate." The present paper gives a novel branch-and-prune algorithm that obtains a unique safe box for the above system within reasonable computation
times. The algorithm combines traditional interval techniques with an adaptation of discrete constraint-satisfaction techniques to continuous problems. Of particular interest is the simplicity o...
- IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION , 2002
"... We investigate the application of genetic algorithms (GAs) for recognizing real two-dimensional (2-D) or three-dimensional (3-D) objects from 2-D intensity images, assuming that the viewpoint is
arbitrary. Our approach is model-based (i.e., we assume a predefined set of models), while our recognitio ..."
Cited by 18 (5 self)
We investigate the application of genetic algorithms (GAs) for recognizing real two-dimensional (2-D) or three-dimensional (3-D) objects from 2-D intensity images, assuming that the viewpoint is
arbitrary. Our approach is model-based (i.e., we assume a predefined set of models), while our recognition strategy relies on the recently proposed theory of algebraic functions of views. According to
this theory, the variety of 2-D views depicting an object can be expressed as a combination of a small number of 2-D views of the object. This implies a simple and powerful strategy for object
recognition: novel 2-D views of an object (2-D or 3-D) can be recognized by simply matching them to combinations of known 2-D views of the object. In other words, objects in a scene are recognized by
"predicting" their appearance through the combination of known views of the objects. This is an important idea, which is also supported by psychophysical findings indicating that the human visual
system works in a similar way. The main difficulty in implementing this idea is determining the parameters of the combination of views. This problem can be solved either in the space of feature
matches among the views ("image space") or the space of parameters ("transformation space"). In general, both of these spaces are very large, making the search very time consuming. In this paper, we
propose using GAs to search these spaces efficiently. To improve the efficiency of genetic search in the transformation space, we use singular value decomposition and interval arithmetic to restrict
genetic search in the most feasible regions of the transformation space. The effectiveness of the GA approaches is shown on a set of increasingly complex real scenes where exact and near-exact
matches are found reliably and q...
- Journal of Reliable Computing
"... Latest scientific and engineering advances have started to recognize the need of defining multiple types of uncertainty. Probabilistic modeling cannot handle situations with incomplete or little
information on which to evaluate a probability, or when that information is nonspecific, ambiguous, or co ..."
Cited by 13 (2 self)
Latest scientific and engineering advances have started to recognize the need of defining multiple types of uncertainty. Probabilistic modeling cannot handle situations with incomplete or little
information on which to evaluate a probability, or when that information is nonspecific, ambiguous, or conflicting [1]. Many generalized models of uncertainty have been developed to treat such
situations. Among them, there are five major frameworks that use interval-based representation of uncertainty, namely: imprecise probabilities, possibility theory, Dempster-Shafer theory of evidence,
Fuzzy set theory, and convex set modeling. Regardless of what model is adopted, the proper interval solution represents the first requirement for any further rigorous formulation. In this work an
interval technique is applied to Finite Element Methods. Finite Element Methods (FEM) are an essential and frequently indispensable part of engineering analysis and design. An Interval Finite Element
Method (IFEM) is presented that handles stiffness and load uncertainty in the linear static problems of mechanics. Uncertain parameters are introduced in the form of unknown but bounded quantities
(intervals). To avoid overestimation, the new formulation is based on an element-by-element (EBE) technique. Element matrices are
- Journal of Inequalities in Pure and Applied Mathematics 4, 2, Article
"... Abstract. This paper provides bounds for second-order linear recurrences with restricted coefficients. It is determined that whenever the coefficients of the associated monic equation are less
than the constant (1/3)^{1/3}, all solutions tend to zero at an exponential rate. This constant is optimal. ..."
Cited by 13 (6 self)
Abstract. This paper provides bounds for second-order linear recurrences with restricted coefficients. It is determined that whenever the coefficients of the associated monic equation are less than
the constant (1/3)^{1/3}, all solutions tend to zero at an exponential rate. This constant is optimal. Explicit inequalities are also provided, and some residue class structure is revealed.
- IN PROCEEDINGS OF THE TWELFTH INTERNATIONAL CONFERENCE ON LOGIC PROGRAMMING, LOGIC PROGRAMMING , 1994
"... We propose the use of the preconditioned interval Gauss-Seidel method as the backbone of an efficient linear equality solver in a CLP(Interval) language. The method, as originally designed,
works only on linear systems with square coefficient matrices. Even imposing such a restriction, a naive incor ..."
Cited by 12 (1 self)
We propose the use of the preconditioned interval Gauss-Seidel method as the backbone of an efficient linear equality solver in a CLP(Interval) language. The method, as originally designed, works
only on linear systems with square coefficient matrices. Even imposing such a restriction, a naive incorporation of the traditional preconditioning algorithm in a CLP language incurs a high
worst-case time complexity of O(n^4), where n is the number of variables in the linear system. In this paper, we generalize the algorithm for general linear systems with m constraints and n
variables, and give a novel incremental adaptation of preconditioning of O(n^2(n + m)) complexity. The efficiency of the incremental preconditioned interval Gauss-Seidel method is demonstrated using
large-scale linear systems.
- Industrial & Engineering Chemistry Research , 2001
"... In recent years, molecularly-based equations of state, as typified by the SAFT (statistical associating fluid theory) approach, have become increasingly popular tools for the modeling of phase
behavior. However, whether using this, or even much simpler models, the reliable calculation of phase behav ..."
Cited by 9 (6 self)
In recent years, molecularly-based equations of state, as typified by the SAFT (statistical associating fluid theory) approach, have become increasingly popular tools for the modeling of phase
behavior. However, whether using this, or even much simpler models, the reliable calculation of phase behavior from a given model can be a very challenging computational problem. A new methodology is
described that is the first completely reliable technique for computing phase stability and equilibrium from the SAFT model. The method is based on interval analysis, in particular an interval Newton
/generalized bisection algorithm, which provides a mathematical and computational guarantee of reliability, and is demonstrated using nonassociating, self-associating, and cross-associating systems.
New techniques are presented that can also be exploited when conventional point-valued solution methods are used. These include the use of a volume-based problem formulation, in which the core
thermodynamic function for phase equilibrium at constant temperature and pressure is the Helmholtz energy, and an approach for dealing with the internal iteration needed when there are association
effects. This provides for direct, as opposed to iterative, determination of the derivatives of the internal variables. 1
- COMPUTER VISION AND IMAGE UNDERSTANDING , 1998
"... this paper, we propose the use of algebraic functions of views for indexing-based object recognition. During indexing, we consider groups of model points and we represent all the views (i.e.,
images) that they can produce in a hash table. The images that a group of model points can produce are compu ..."
Cited by 6 (5 self)
this paper, we propose the use of algebraic functions of views for indexing-based object recognition. During indexing, we consider groups of model points and we represent all the views (i.e., images)
that they can produce in a hash table. The images that a group of model points can produce are computed by combining a small number of reference views which contain the group using algebraic
functions of views. Fundamental to this procedure is a methodology, based on Singular Value Decomposition and Interval Arithmetic, for estimating the allowable ranges of values that the parameters of
algebraic functions can assume. During recognition, scene groups are used to retrieve from the hash table the most feasible model groups that might have produced the scene groups. The use of
algebraic functions of views for indexing-based recognition offers a number of advantages. First of all, the hash table can be built using a small number of reference views per object. This is in
contrast to current approaches which build the hash table using either a large number of reference views or 3D models. Most importantly, recognition does not rely on the similarity between reference
views and novel views; all that is required for the novel views is to contain common groups of points with a small number of reference views. Second, verification becomes simpler. This is because
candidate models can now be back-projected onto the scene by applying a linear transformation on a small number of reference views of the candidate model. Finally, the proposed approach is more general
and extendible. This is because algebraic functions of views have been shown to exist over a wide range of transformations and projections. The recognition performance of the proposed approach is
demonstrated using both artific...
, 2006
"... Abstract. Current parametric CAD systems require geometric parameters to have fixed values. Specifying fixed parameter values implicitly adds rigid constraints on the geometry, which have the
potential to introduce conflicts during the design process. This paper presents a soft constraint representa ..."
Cited by 6 (5 self)
Abstract. Current parametric CAD systems require geometric parameters to have fixed values. Specifying fixed parameter values implicitly adds rigid constraints on the geometry, which have the
potential to introduce conflicts during the design process. This paper presents a soft constraint representation scheme based on nominal interval. Interval geometric parameters capture inexactness of
conceptual and embodiment design, uncertainty in detail design, as well as boundary information for design optimization. To accommodate under-constrained and over-constrained design problems, a
double-loop Gauss-Seidel method is developed to solve linear constraints. A symbolic preconditioning procedure transforms nonlinear equations to separable form. Inequalities are also transformed and
integrated with equalities. Nonlinear constraints can be bounded by piecewise linear enclosures and solved by linear methods iteratively. A sensitivity analysis method that differentiates active and
inactive constraints is presented for design refinement. 1.
- SIAM Jour-nal on Numerical Analysis , 1997
"... This paper presents Newton, a branch & prune algorithm to find all isolated solutions of a system of polynomial constraints. Newton can be characterized as a global search method which uses
intervals for numerical correctness and for pruning the search space early. The pruning in Newton consists in ..."
Cited by 2 (0 self)
This paper presents Newton, a branch & prune algorithm to find all isolated solutions of a system of polynomial constraints. Newton can be characterized as a global search method which uses intervals
for numerical correctness and for pruning the search space early. The pruning in Newton consists in enforcing at each node of the search tree a unique local consistency condition, called
box-consistency, which approximates the notion of arc-consistency well-known in artificial intelligence. Box-consistency is parametrized by an interval extension of the constraint and can be
instantiated to produce the Hansen-Segupta narrowing operator (used in interval methods) as well as new operators which are more effective when the computation is far from a solution. Newton has been
evaluated on a variety of benchmarks from kinematics, chemistry, combustion, economics, and mechanics. On these benchmarks, it outperforms the interval methods we are aware of and compares well with
state-of-the-art continuatio...
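The entries above all build on the same primitive: arithmetic on closed intervals [a, b] chosen so that the computed enclosure is guaranteed to contain every pointwise result. A minimal sketch of such an interval type (illustrative only; a rigorous implementation would also control the floating-point rounding direction):

#include <algorithm>
#include <iostream>

// Closed interval [lo, hi] with naive (round-to-nearest) endpoint arithmetic.
struct Interval {
    double lo, hi;
};

Interval operator+(Interval a, Interval b) {
    return {a.lo + b.lo, a.hi + b.hi};
}

Interval operator-(Interval a, Interval b) {
    return {a.lo - b.hi, a.hi - b.lo};
}

Interval operator*(Interval a, Interval b) {
    // The range of the product is spanned by the four endpoint products.
    double p[4] = {a.lo * b.lo, a.lo * b.hi, a.hi * b.lo, a.hi * b.hi};
    return {*std::min_element(p, p + 4), *std::max_element(p, p + 4)};
}

int main() {
    Interval x{1.0, 2.0}, y{-3.0, 0.5};
    Interval z = x * y + x;  // encloses { x*y + x : x in [1,2], y in [-3,0.5] }
    std::cout << "[" << z.lo << ", " << z.hi << "]\n";  // prints [-5, 3]
}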
Richard Askey
Born: 4 June 1933 in St Louis, Missouri, USA
Richard Askey's parents were Philip Edwin Askey and Bessie May Yates. He attended Washington University and graduated with a B.A. in 1955. Already attracted towards analysis by the strong analysis
school at Washington University, he then went to Harvard University to study for his Master's degree and in 1956 he received his M.A.
After graduating from Harvard, Askey moved to Princeton University to study for his doctorate. In 1958 he accepted a post of Instructor back at Washington University and he held this position until
1961, the year in which he graduated with his Ph.D. from Princeton University. His next move was to the University of Chicago where he was an Instructor for two years before accepting the post of
Assistant Professor of Mathematics at the University of Wisconsin-Madison in 1963. Two years later he was promoted to Associate Professor of Mathematics then, in 1968, he became a full Professor.
It is impossible in an article like this to give much in the way of details of the impressive publications by Askey on the harmonic analysis of special functions, orthogonal polynomials and special
functions, and special functions related to group theory. To date over 180 publications are listed under his name. The first of these publications was Weighted quadratic norms and ultraspherical
polynomials in 1959 which he wrote jointly with Isidore Hirschman Jr. It appeared in the Transactions of the American Mathematical Society. This was the first of two papers by Askey and Hirschman
which completed a programme of research initiated by Hirschman in 1955. In 1965 Askey published On some problems posed by Karlin and Szegö concerning orthogonal polynomials and by this time his major
contributions to special functions and orthogonal polynomials were well under way.
Askey published an important book Orthogonal polynomials and special functions in 1975. This work was based on ten lectures that he gave to the National Science Foundation Regional Conference at
Virginia Polytechnic Institute and State University in June of 1974. In the book hypergeometric functions, Bessel functions, the Jacobi orthogonal polynomials, the Hahn orthogonal polynomials,
Laguerre polynomials, Hermite polynomials, Meixner polynomials, Krawtchouk polynomials and Charlier polynomials all play their part in addition to other orthogonal polynomials and special functions.
Askey states clearly in this text why he is interested in special functions:
One studies special functions not for their own sake, but to be able to use them to solve problems.
G Gasper, in reviewing Askey's book, states:-
This is one of the best introductions to special functions for mathematicians, scientists, and engineers that I know of.
Twenty five years after he gave his series of ten lectures to the National Science Foundation Regional Conference, Askey published another major work on special functions. This 1999 book is
co-authored by George E Andrews and Ranjan Roy, and is called simply Special Functions. Published by Cambridge University Press, this new work is six times the length of the earlier one. Bruce C
Berndt, reviewing this important book, puts it in context:-
Special functions, which include the trigonometric functions, might be called "useful" functions. For some years, the field was somewhat dormant, but in the past three decades, special functions
have returned to the forefront of mathematical research. ... fresh new ideas, in large part due to the authors; the many new uses of special functions; and their connections with other branches
of mathematics, physics, and other sciences have played leading roles in this revival. ...
The book genuinely reflects the authors' vast accumulated insights. Most notably, the authors demonstrate a superb familiarity with the historical roots of their subject. Many of their historical
insights and references could have been provided only by them.
If one were to single out one paper by Askey as being of particular importance, it must be the one which contained a result which was used by Louis de Branges in giving his complete proof of the
Bieberbach Conjecture in 1984.
Askey is not only known for his mathematical research, however, for he is also a highly respected writer on mathematical education. Perhaps his most famous article on this topic is Good Intentions
are not Enough. He has also served the American Mathematical Society in many ways since he first became a member in 1966, sitting on numerous committees and being Vice President of the Society in
1986-87. In particular, his great interest in the history of mathematics is reflected by the fact that he served on the Committee on Mathematical History from 1987 to 1991.
We have already referred above to his lectures to the National Science Foundation Regional Conference in 1974. He also gave two series of lectures in 1984, namely the University of Illinois
Trjitzinsky Lectures and the Pennsylvania State University College of Science Lectures. In 1992 he gave the Turán Memorial Lectures in Budapest.
Among the many honours which have been given to Askey we can mention only a few. He is an Honorary Fellow of the Indian Academy of Sciences and a Fellow of the American Academy of Arts and Sciences. He
received what might be considered his greatest honour in 1999 when he was elected to the National Academy of Sciences.
Askey has remained at the University of Wisconsin-Madison since his first appointment in 1963, becoming Gabor Szegö Professor of Mathematics in 1986 and John Bascom Professor of Mathematics in 1995.
Article by: J J O'Connor and E F Robertson
April 2002
MacTutor History of Mathematics
QSet Class Reference
The QSet class is a template class that provides a hash-table-based set. More...
#include <QSet>
Note: All functions in this class are reentrant.
Public Types
class const_iterator
class iterator
typedef ConstIterator
typedef Iterator
typedef const_pointer
typedef const_reference
typedef difference_type
typedef key_type
typedef pointer
typedef reference
typedef size_type
typedef value_type
Public Functions
QSet ()
QSet ( const QSet<T> & other )
const_iterator begin () const
iterator begin ()
int capacity () const
void clear ()
const_iterator constBegin () const
const_iterator constEnd () const
const_iterator constFind ( const T & value ) const
bool contains ( const T & value ) const
bool contains ( const QSet<T> & other ) const
int count () const
bool empty () const
const_iterator end () const
iterator end ()
iterator erase ( iterator pos )
const_iterator find ( const T & value ) const
iterator find ( const T & value )
const_iterator insert ( const T & value )
QSet<T> & intersect ( const QSet<T> & other )
bool isEmpty () const
bool remove ( const T & value )
void reserve ( int size )
int size () const
void squeeze ()
QSet<T> & subtract ( const QSet<T> & other )
void swap ( QSet<T> & other )
QList<T> toList () const
QSet<T> & unite ( const QSet<T> & other )
QList<T> values () const
bool operator!= ( const QSet<T> & other ) const
QSet<T> operator& ( const QSet<T> & other ) const
QSet<T> & operator&= ( const QSet<T> & other )
QSet<T> & operator&= ( const T & value )
QSet<T> operator+ ( const QSet<T> & other ) const
QSet<T> & operator+= ( const QSet<T> & other )
QSet<T> & operator+= ( const T & value )
QSet<T> operator- ( const QSet<T> & other ) const
QSet<T> & operator-= ( const QSet<T> & other )
QSet<T> & operator-= ( const T & value )
QSet<T> & operator<< ( const T & value )
QSet<T> & operator= ( const QSet<T> & other )
bool operator== ( const QSet<T> & other ) const
QSet<T> operator| ( const QSet<T> & other ) const
QSet<T> & operator|= ( const QSet<T> & other )
QSet<T> & operator|= ( const T & value )
Static Public Members
QSet<T> fromList ( const QList<T> & list )
Related Non-Members
QDataStream & operator<< ( QDataStream & out, const QSet<T> & set )
QDataStream & operator>> ( QDataStream & in, QSet<T> & set )
Detailed Description
The QSet class is a template class that provides a hash-table-based set.
QSet<T> is one of Qt's generic container classes. It stores values in an unspecified order and provides very fast lookup of the values. Internally, QSet<T> is implemented as a QHash.
Here's an example QSet with QString values:
QSet<QString> set;
To insert a value into the set, use insert():
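set.insert("one");
set.insert("three");
set.insert("seven");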
Another way to insert items into the set is to use operator<<():
set << "twelve" << "fifteen" << "nineteen";
To test whether an item belongs to the set or not, use contains():
if (!set.contains("ninety-nine"))
If you want to navigate through all the values stored in a QSet, you can use an iterator. QSet supports both Java-style iterators (QSetIterator and QMutableSetIterator) and STL-style iterators (
QSet::iterator and QSet::const_iterator). Here's how to iterate over a QSet<QWidget *> using a Java-style iterator:
QSetIterator<QWidget *> i(set);
while (i.hasNext())
qDebug() << i.next();
Here's the same code, but using an STL-style iterator:
QSet<QWidget *>::const_iterator i = set.constBegin();
while (i != set.constEnd()) {
    qDebug() << *i;
    ++i;
}
QSet is unordered, so an iterator's sequence cannot be assumed to be predictable. If ordering by key is required, use a QMap.
To navigate through a QSet, you can also use foreach:
QSet<QString> set;
foreach (const QString &value, set)
qDebug() << value;
Items can be removed from the set using remove(). There is also a clear() function that removes all items.
QSet's value data type must be an assignable data type. You cannot, for example, store a QWidget as a value; instead, store a QWidget *. In addition, the type must provide operator==(), and there
must also be a global qHash() function that returns a hash value for an argument of the key's type. See the QHash documentation for a list of types supported by qHash().
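For example, a small user-defined type can be made storable in a QSet by providing both functions (the type below is purely illustrative and not part of Qt):
// Hypothetical value type.
struct Point2D {
    int x;
    int y;
};

bool operator==(const Point2D &a, const Point2D &b)
{
    return a.x == b.x && a.y == b.y;
}

// A global qHash() overload makes QSet<Point2D> usable.
uint qHash(const Point2D &p)
{
    return qHash(p.x) ^ (qHash(p.y) << 1);
}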
Internally, QSet uses a hash table to perform lookups. The hash table automatically grows and shrinks to provide fast lookups without wasting memory. You can still control the size of the hash table
by calling reserve(), if you already know approximately how many elements the QSet will contain, but this isn't necessary to obtain good performance. You can also call capacity() to retrieve the hash
table's size.
See also QSetIterator, QMutableSetIterator, QHash, and QMap.
Member Type Documentation
typedef QSet::ConstIterator
Qt-style synonym for QSet::const_iterator.
typedef QSet::Iterator
Qt-style synonym for QSet::iterator.
This typedef was introduced in Qt 4.2.
typedef QSet::const_pointer
Typedef for const T *. Provided for STL compatibility.
typedef QSet::const_reference
Typedef for const T &. Provided for STL compatibility.
typedef QSet::difference_type
Typedef for const ptrdiff_t. Provided for STL compatibility.
typedef QSet::key_type
Typedef for T. Provided for STL compatibility.
typedef QSet::pointer
Typedef for T *. Provided for STL compatibility.
typedef QSet::reference
Typedef for T &. Provided for STL compatibility.
typedef QSet::size_type
Typedef for int. Provided for STL compatibility.
typedef QSet::value_type
Typedef for T. Provided for STL compatibility.
Member Function Documentation
QSet::QSet ()
Constructs an empty set.
See also clear().
QSet::QSet ( const QSet<T> & other )
Constructs a copy of other.
This operation occurs in constant time, because QSet is implicitly shared. This makes returning a QSet from a function very fast. If a shared instance is modified, it will be copied (copy-on-write),
and this takes linear time.
See also operator=().
const_iterator QSet::begin () const
Returns a const STL-style iterator positioned at the first item in the set.
See also constBegin() and end().
iterator QSet::begin ()
This is an overloaded function.
Returns a non-const STL-style iterator positioned at the first item in the set.
This function was introduced in Qt 4.2.
int QSet::capacity () const
Returns the number of buckets in the set's internal hash table.
The sole purpose of this function is to provide a means of fine tuning QSet's memory usage. In general, you will rarely ever need to call this function. If you want to know how many items are in the
set, call size().
See also reserve() and squeeze().
void QSet::clear ()
Removes all elements from the set.
See also remove().
const_iterator QSet::constBegin () const
Returns a const STL-style iterator positioned at the first item in the set.
See also begin() and constEnd().
const_iterator QSet::constEnd () const
Returns a const STL-style iterator pointing to the imaginary item after the last item in the set.
See also constBegin() and end().
const_iterator QSet::constFind ( const T & value ) const
Returns a const iterator positioned at the item value in the set. If the set contains no item value, the function returns constEnd().
This function was introduced in Qt 4.2.
See also find() and contains().
bool QSet::contains ( const T & value ) const
Returns true if the set contains item value; otherwise returns false.
See also insert(), remove(), and find().
bool QSet::contains ( const QSet<T> & other ) const
Returns true if the set contains all items from the other set; otherwise returns false.
This function was introduced in Qt 4.6.
See also insert(), remove(), and find().
int QSet::count () const
Same as size().
bool QSet::empty () const
Returns true if the set is empty. This function is provided for STL compatibility. It is equivalent to isEmpty().
const_iterator QSet::end () const
Returns a const STL-style iterator positioned at the imaginary item after the last item in the set.
See also constEnd() and begin().
iterator QSet::end ()
This is an overloaded function.
Returns a non-const STL-style iterator pointing to the imaginary item after the last item in the set.
This function was introduced in Qt 4.2.
iterator QSet::erase ( iterator pos )
Removes the item at the iterator position pos from the set, and returns an iterator positioned at the next item in the set.
Unlike remove(), this function never causes QSet to rehash its internal data structure. This means that it can safely be called while iterating, and won't affect the order of items in the set.
This function was introduced in Qt 4.2.
See also remove() and find().
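For example, the following loop (illustrative only, assuming set is a QSet<int>) removes every even value while iterating:
QSet<int>::iterator it = set.begin();
while (it != set.end()) {
    if (*it % 2 == 0)
        it = set.erase(it);  // erase() returns an iterator to the next item
    else
        ++it;
}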
const_iterator QSet::find ( const T & value ) const
Returns a const iterator positioned at the item value in the set. If the set contains no item value, the function returns constEnd().
This function was introduced in Qt 4.2.
See also constFind() and contains().
iterator QSet::find ( const T & value )
This is an overloaded function.
Returns a non-const iterator positioned at the item value in the set. If the set contains no item value, the function returns end().
This function was introduced in Qt 4.2.
QSet<T> QSet::fromList ( const QList<T> & list ) [static]
Returns a new QSet object containing the data contained in list. Since QSet doesn't allow duplicates, the resulting QSet might be smaller than the list, because QList can contain duplicates.
QStringList list;
list << "Julia" << "Mike" << "Mike" << "Julia" << "Julia";
QSet<QString> set = QSet<QString>::fromList(list);
set.contains("Julia"); // returns true
set.contains("Mike"); // returns true
set.size(); // returns 2
See also toList() and QList::toSet().
const_iterator QSet::insert ( const T & value )
Inserts item value into the set, if value isn't already in the set, and returns an iterator pointing at the inserted item.
See also operator<<(), remove(), and contains().
QSet<T> & QSet::intersect ( const QSet<T> & other )
Removes all items from this set that are not contained in the other set. A reference to this set is returned.
See also operator&=(), unite(), and subtract().
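A minimal usage sketch (values chosen arbitrarily):
QSet<int> a, b;
a << 1 << 2 << 3;
b << 2 << 3 << 4;
a.intersect(b);  // a now contains 2 and 3; b is unchanged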
bool QSet::isEmpty () const
Returns true if the set contains no elements; otherwise returns false.
See also size().
bool QSet::remove ( const T & value )
Removes any occurrence of item value from the set. Returns true if an item was actually removed; otherwise returns false.
See also contains() and insert().
void QSet::reserve ( int size )
Ensures that the set's internal hash table consists of at least size buckets.
This function is useful for code that needs to build a huge set and wants to avoid repeated reallocation. For example:
QSet<QString> set;
set.reserve(20000);
for (int i = 0; i < 20000; ++i)
    set.insert(QString::number(i));  // illustrative loop body
Ideally, size should be slightly more than the maximum number of elements expected in the set. size doesn't have to be prime, because QSet will use a prime number internally anyway. If size is an
underestimate, the worst that will happen is that the QSet will be a bit slower.
In general, you will rarely ever need to call this function. QSet's internal hash table automatically shrinks or grows to provide good performance without wasting too much memory.
See also squeeze() and capacity().
int QSet::size () const
Returns the number of items in the set.
See also isEmpty() and count().
void QSet::squeeze ()
Reduces the size of the set's internal hash table to save memory.
The sole purpose of this function is to provide a means of fine tuning QSet's memory usage. In general, you will rarely ever need to call this function.
See also reserve() and capacity().
QSet<T> & QSet::subtract ( const QSet<T> & other )
Removes all items from this set that are contained in the other set. Returns a reference to this set.
See also operator-=(), unite(), and intersect().
void QSet::swap ( QSet<T> & other )
Swaps set other with this set. This operation is very fast and never fails.
This function was introduced in Qt 4.8.
QList<T> QSet::toList () const
Returns a new QList containing the elements in the set. The order of the elements in the QList is undefined.
QSet<QString> set;
set << "red" << "green" << "blue" << ... << "black";
QList<QString> list = set.toList();
See also fromList(), QList::fromSet(), and qSort().
QSet<T> & QSet::unite ( const QSet<T> & other )
Each item in the other set that isn't already in this set is inserted into this set. A reference to this set is returned.
See also operator|=(), intersect(), and subtract().
QList<T> QSet::values () const
Returns a new QList containing the elements in the set. The order of the elements in the QList is undefined.
This is the same as toList().
See also fromList(), QList::fromSet(), and qSort().
bool QSet::operator!= ( const QSet<T> & other ) const
Returns true if the other set is not equal to this set; otherwise returns false.
Two sets are considered equal if they contain the same elements.
This function requires the value type to implement operator==().
See also operator==().
QSet<T> QSet::operator& ( const QSet<T> & other ) const
Returns a new QSet that is the intersection of this set and the other set.
See also intersect(), operator&=(), operator|(), and operator-().
QSet<T> & QSet::operator&= ( const QSet<T> & other )
Same as intersect(other).
See also operator&(), operator|=(), and operator-=().
QSet<T> & QSet::operator&= ( const T & value )
This is an overloaded function.
Same as intersect(other), if we consider other to be a set that contains the singleton value.
QSet<T> QSet::operator+ ( const QSet<T> & other ) const
Returns a new QSet that is the union of this set and the other set.
See also unite(), operator|=(), operator&(), and operator-().
QSet<T> & QSet::operator+= ( const QSet<T> & other )
Same as unite(other).
See also operator|(), operator&=(), and operator-=().
QSet<T> & QSet::operator+= ( const T & value )
Inserts a new item value and returns a reference to the set. If value already exists in the set, the set is left unchanged.
See also insert().
QSet<T> QSet::operator- ( const QSet<T> & other ) const
Returns a new QSet that is the set difference of this set and the other set, i.e., this set - other set.
See also subtract(), operator-=(), operator|(), and operator&().
QSet<T> & QSet::operator-= ( const QSet<T> & other )
Same as subtract(other).
See also operator-(), operator|=(), and operator&=().
QSet<T> & QSet::operator-= ( const T & value )
Removes the occurrence of item value from the set, if it is found, and returns a reference to the set. If the value is not contained the set, nothing is removed.
See also remove().
QSet<T> & QSet::operator<< ( const T & value )
Inserts a new item value and returns a reference to the set. If value already exists in the set, the set is left unchanged.
See also insert().
QSet<T> & QSet::operator= ( const QSet<T> & other )
Assigns the other set to this set and returns a reference to this set.
bool QSet::operator== ( const QSet<T> & other ) const
Returns true if the other set is equal to this set; otherwise returns false.
Two sets are considered equal if they contain the same elements.
This function requires the value type to implement operator==().
See also operator!=().
QSet<T> QSet::operator| ( const QSet<T> & other ) const
Returns a new QSet that is the union of this set and the other set.
See also unite(), operator|=(), operator&(), and operator-().
QSet<T> & QSet::operator|= ( const QSet<T> & other )
Same as unite(other).
See also operator|(), operator&=(), and operator-=().
QSet<T> & QSet::operator|= ( const T & value )
Inserts a new item value and returns a reference to the set. If value already exists in the set, the set is left unchanged.
See also insert().
Related Non-Members
QDataStream & operator<< ( QDataStream & out, const QSet<T> & set )
Writes the set to stream out.
This function requires the value type to implement operator<<().
See also Format of the QDataStream operators.
QDataStream & operator>> ( QDataStream & in, QSet<T> & set )
Reads a set from stream in into set.
This function requires the value type to implement operator>>().
See also Format of the QDataStream operators.
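A minimal round-trip sketch (the file name and values are arbitrary):
QSet<QString> names;
names << "alpha" << "beta" << "gamma";

QFile file("names.dat");
if (file.open(QIODevice::WriteOnly)) {
    QDataStream out(&file);
    out << names;
    file.close();
}

QSet<QString> restored;
if (file.open(QIODevice::ReadOnly)) {
    QDataStream in(&file);
    in >> restored;  // restored now equals names
}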
IMA Newsletter #354
Aria Abubakar (Schlumberger) Imaging and inversion algorithms based on the source-type integral equation formulations
Abstract: Joint work with Tarek M. Habashy (Schlumberger-Doll Research, USA) and Peter van den Berg (Delft University of Technology, The Netherlands). In this presentation we present a class of
inversion algorithms to solve acoustic, electromagnetic and elastic inverse scattering problems of the constitutive material properties of bounded objects embedded in a known background medium. The
inversion utilizes measurements of the scattered field due to the illumination of the objects by a set of known wave-fields. By using the source-type integral equation formulations we arrive at two
sets of equations in terms of the contrast and the contrast sources (the product of the contrast and the fields). The first equation is the integral representation of the measured data (the data
equation) and the second equation is the integral equation over the scatterers (the object equation). These two integral equations are solved by recasting the problem as an optimization problem. The
main differences between the presented algorithms and other inversion algorithms available in the literature are: (1) We use the object equation itself to constrain the optimization process. (2) We do
not solve any full forward problem in each iterative step. We present three inversion algorithms with increasing complexity, namely, the regularized Born inversion (linear), the diagonalized contrast
source inversion (semi-linear) and the contrast source inversion (full non-linear). The difference between these three inversion algorithms is in the way we use the object equation in the cost
functional to be optimized. Although the inclusion of the object equation in the cost functional serves as a physical regularization of the ill-conditioned data equation, the inversion results can be
enhanced by introducing an additional regularizer that can help in accounting for any a priori information known about the contrast profile or in imposing any constraints such as limiting the spatial
variation of the contrast. We propose to use the multiplicative regularized inversion technique so that there is no necessity to determine the regularization parameter before the optimization is
started. This parameter is determined automatically during the optimization process. As numerical examples we present some synthetic and real data inversion from oilfield, biomedical and microwave
applications in both two- and three-dimensional configurations. We will show that by employing the above approach it is possible to solve the full non-linear inverse problem with a large number of unknowns
using a personal computer with a single processor.
Stephanie Allassonniere (Ecole Normale Superieure de Cachan) Generative Model and Consistent Estimation Algorithms for non-rigid Deformation Model
Abstract: The link between Bayesian and variational approaches is well known in the image analysis community in particular in the context of deformable models. However, true generative models and
consistent estimation procedures are usually not available and the current trend is the computation of statistics mainly based on PCA analysis. We advocate in this paper a careful statistical
modeling of deformable structures and we propose an effective and consistent estimation algorithm for the various parameters (geometric and photometric) appearing in the models.
Pablo Arias (Universidad de la Republica), Gregory J. Randall (Universidad de la Republica), Pablo Sprechmann (Universidad de la Republica) Segmentation of ultrasound images with shape priors. Application to automatic cattle rib-eye area estimation
Abstract: Automatic ultrasound (US) image segmentation is a difficult task due to the important amount of noise present in the images and to the lack of information in several zones produced by the
acquisition conditions. In this paper we propose a method that combines shape priors and image information in order to achieve this task. This algorithm was developed in the context of quality meat
assessment using US images. Two parameters that are highly correlated with the meat production quality of an animal are the under-skin fat and the rib eye area. In order to estimate the second
parameter we propose a shape prior based segmentation algorithm. We introduce the knowledge about the rib eye shape using an expert marked set of images. A method is proposed for the automatic
segmentation of new samples in which a closed curve is fitted taking into account both the US image information and the geodesic distance between the evolving and the estimated mean rib eye shape in a
shape space. We think that this method can be used to solve many similar problems that arise when dealing with US images in other fields. The method was successfully tested over a data base composed
of 600 US images, for which we have two expert manual segmentations.
Joint work with P. Arias, A. Pini, G. Sanguinetti, P. Cancela, A. Fernandez, and A.Gomez.
Evgeniy Bart (University of Minnesota) View-invariant recognition using corresponding object fragments
Abstract: In this work, invariant object recognition is achieved by learning to compensate for appearance variability of a set of class-specific features. For example, to compensate for pose
variations of a feature representing an eye, eye images under different poses are grouped together. This grouping is done automatically during training. Given a novel face in e.g. frontal pose, the
model for it can be constructed using existing frontal image patches. However, each frontal patch has profile patches associated with it, and these are also incorporated in the model. As a result,
the model built from just a single frontal view can generalize well to distinctly different views, such as profile.
Fred L. Bookstein (University of Washington) Symmetries of a mathematical model for deformation noise in realistic biometric contexts
Abstract: By the "realistic biometric context" of my title, I mean an investigation of well-calibrated images from a moderately large sample of organisms in order to evaluate some nontrivial
hypothesis about systematic form-factors (e.g., a group difference). One common approach to such problems today is "geometric morphometrics," a short name for the multivariate statistics of landmark
location data. The core formalism here, which handles data schemes that mix discrete points, curves, and surfaces, applies otherwise conventional linear statistical modeling strategies to
representatives of equivalence classes of these schemes under similarity transformations or relabeling maps. As this tradition has matured, algorithmic successes involving statistical manipulations
and the associated diagrams have directed our community's attention away from a serious underlying problem: Most biological processes operate not on the submanifolds of the data structure but in the
embedding space in-between. In that context constructs such as diffeomorphism, shape distance, and image energy are mainly metaphors, however visually compelling, that may have no particular
scientific authority when some actual biometrical hypothesis is being seriously weighed. Instead of phrasing this as a problem in the representation of a signal, it may be useful to recast the
problem as that of a suitable model for noise (so that signal becomes, in effect, whatever patterns rise above the amplitude of the noise). The Gaussian model of conventional statistics can be
derived as an expression of the symmetries of a plausible physical model (the Maxwell distribution in statistical mechanics), and it would be nice if some equally compelling symmetries could be
invoked to help us formulate biologically meaningful noise models for deformations. We have had initial success with a new model of self-similar isotropic noise borrowed from the field of stochastic
geometry. In this approach, a deformation is construed not as a deterministic mapping but as a distribution of mappings given by an intrinsic random process such that the plausibility of a meaningful
focal structural finding is the same regardless of physical scale. Simulations instantiating this process are graphically quite compelling--their self-similarity comes as a considerable (and
counterintuitive) surprise--and yet as a tool of data analysis, for teasing out interesting regions within an extended data set, the symmetries (and their breaking, which constitutes the signal being
sought) seem quite promising. My talk will review the core of geometric morphometrics as it is practiced today, sketch the deep difficulties that arise in even the most compelling biological
applications, and then introduce the formalisms that, I claim, sometimes permit a systematic circumvention of these problems when the context is one of a statistical data analysis of a serious
scientific hypothesis. This work is joint with K. V. Mardia.
Yan Cao (John Hopkins University) Axial Representation of Shapes Based on Principal Curves
Abstract: Generalized cylinders model uses hierarchies of cylinder-like modeling primitives to describe shapes. We propose a new definition of axis for cylindrical shapes based on principal curves.
In a 2D case, medial axis can be generated from the new axis, and vice versa. In a 3D case, the new axis gives the natural (intuitive) curve skeleton of the shape instead of complicated surfaces
generated as medial axis. This is illustrated by numerical experiments on 3D laser scan data.
Daniel Cremers (University of Bonn) Robust Variational Computation of Geodesics on a Shape Space
Abstract: Parametric shape representations are considered as orbits on an appropriate manifold. The distance between shapes is determined by computing geodesics between these orbits. We propose a
variational framework to compute geodesics on a manifold of shapes. In contrast to existing algorithms based on the shooting method, our method is more robust to the initial parameterization and less
prone to self-intersections of the contour. Moreover, computation times improve by a factor of about 1000 for typical resolutions.
Steven Benjamin Damelin (Georgia Southern University) Some open problems in Dimension Reduction, Inverse Scattering and Power Systems
Abstract: Nowadays, we are constantly flooded with information of all sorts and forms, and a common denominator of data analysis in many emerging fields of current interest is large amounts of
observations that have high dimensionality. In this talk, we will outline work in progress that relates to the idea of local dimension reduction in imaging and distributed power networks. In
particular, we will discuss joint work on Paley Weiner theorems in inverse scattering, learning on curved manifolds and terrain manifold estimation from localization graphs of sensor and neural
networks. This is joint and ongoing work with T. Devaney (Northeastern), R. Luke (Delaware), P.Grabner (Graz), M. Werman (Hebrew U), D. Wunsch (Missouri-Rolla) and Armit Argawal (Singapore).
Mathieu Desbrun (California Institute of Technology) Discrete Calculus for Shape Processing
Abstract: In this talk, we give an overview of a discrete exterior calculus and some of its multiple applications to computational modeling, ranging from geometry processing to physical simulation.
We will focus on discrete differential forms (the building blocks of this calculus) and show how they provide differential, yet readily discretizable computational foundations for shape spaces — a
crucial ingredient for numerical fidelity. Parameterization and quad meshing will be stressed as straightforward, yet powerful applications.
Persi Diaconis (Stanford University) Math Matters: IMA Public Lecture Series
Mathematics and Magic Tricks
Abstract: Sometimes the way a magic trick works is even more amazing than the trick itself. I will illustrate with some performable tricks that seem to fool magicians. The math involved has
application to breaking and entering, robot vision, cryptography, random number generation, and DNA sequence analysis.
Marc Droske (University of California - Los Angeles) Higher-order regularization of geometries and Mumford-Shah surfaces
Abstract: Active contours form a class of variational methods, based on nonlinear PDEs, for image segmentation. Typically these methods introduce a local smoothing of edges due to a length
minimization or minimization of a related energy. These methods have a tendency to smooth corners, which can be undesirable for tasks that involve identifying man-made objects with sharp corners. We
introduce a new method, based on image snakes, in which the local geometry of the curve is incorporated into the dynamics in a nonlinear way. Our method brings ideas from image denoising and
simplification of high contrast images - in which piecewise linear shapes are preserved - to the task of image segmentation. Specifically we introduce a new geometrically intrinsic dynamic equation
for the snake, which depends on the local curvature of the moving contour, designed in such a way that corners are much less penalized than for more classical segmentation methods. We will discuss
further extensions that allow segmentation based on geometric shape priors.
Joint work with A. Bertozzi.
Ian Dryden (University of Nottingham) Shape space smoothing splines for planar landmark data
Abstract: A method for fitting smooth curves through a series of shapes of landmarks in two dimensions is presented using unrolling and unwrapping procedures in Riemannian manifolds. An explicit
method of calculation is given which is analogous to that of Jupp and Kent (1987, Applied Statistics) for spherical data. The resulting splines are called shape space smoothing splines. The method
resembles that of fitting smoothing splines in Euclidean spaces in that: if the smoothing parameter is zero the resulting curve interpolates the data points, and if it is infinitely large the curve
is the geodesic line. The fitted path to the data is defined such that its unrolled version at the tangent space of the starting point is a cubic spline fitted to the unwrapped data with respect to
that path. Computation of the fitted path consists of an iterative procedure which converges quickly, and the resulting path is given in a discretized form in terms of a piecewise geodesic path. The
procedure is applied to the analysis of some human movement data. The work is joint with Alfred Kume and Huiling Le.
P. Thomas Fletcher (University of Utah) Riemannian Metrics on the Space of Solid Shapes
Abstract: We formulate the space of solid objects as an infinite-dimensional Riemannian manifold in which each point represents a smooth object with non-intersecting boundary. Geodesics between
shapes provide a foundation for shape comparison and statistical analysis. The metric on this space is chosen such that geodesics do not produce shapes with intersecting boundaries. This is possible
using only information of the velocities on the boundary of the object. We demonstrate the properties of this metric with examples of geodesics of 2D shapes. Joint work with Ross Whitaker.
Matthias Fuchs (University of Innsbruck), Otmar Scherzer (University of Innsbruck) Mumford-Shah with A-Priori Medial-Axis Information
Abstract: We minimize the Mumford-Shah functional over a space of parametric shape models. In addition we penalize large deviations from a mean shape prior. This mean shape is the average of shapes
obtained by segmenting a set of training images. The parametric description of our shape models is motivated by their medial axis representation. The central idea of our approach to image
segmentation is to represent the shapes as boundaries of a medial skeleton. The skeleton data is contained in a product of Lie-groups, which is a Lie-group itself. This means that our shape models
are elements of a Riemannian manifold. To segment an image we minimize a simplified version of the Mumford-Shah functional (as proposed by Chan & Vese) over this manifold. From a set of training
images we then obtain a mean shape (and the corresponding principal modes) by performing a Principal Geodesic Analysis. The metric structure of the shape manifold allows us to measure distances from
this mean shape. Thus, we regularize the original segmentation functional with a distance term to further segment incomplete/noisy image data.
Jooyoung Hahn (KAIST), Chang-Ock Lee (KAIST) Highly Accurate Segmentation Using Geometric Attraction-Driven Flow in Edge-Regions
Abstract: We propose a highly accurate segmentation algorithm for objects in an image that has simple background colors or simple object colors. There are two main concepts, "geometric
attraction-driven flow" and "edge-regions," which are combined to give an exact boundary. Geometric attraction-driven flow gives us the information of exact locations for segmentation and
edge-regions helps to make an initial curve quite close to an object. The method can be successfully done by a geometric analysis of eigenspace in a tensor field on a color image as a two-dimensional
manifold and a statistical analysis of finding edge-regions. There are two successful applications. One is to segment aphids in images of soybean leaves and the other is to extract a background from
a commercial product in order to make 3D virtual reality contents from many real photographs of the product. Until now, such work has been done by manual labor with the help of commercial programs such as Photoshop or Gimp, which is time-consuming and labor-intensive. Our segmentation algorithm requires no interaction with end users and no parameter manipulation in the middle of the process.
Darryl D. Holm (Los Alamos National Laboratory) Soliton Dynamics in Computational Anatomy
Abstract: Computational Anatomy (CA) introduces the idea that shapes may be transformed into each other by geodesic deformations on groups of diffeomorphisms. In particular, the template matching
approach involves Riemannian metrics on the tangent space of the diffeomorphism group and employs their projections onto specific landmark shapes, or image spaces. A singular momentum map provides an
isomorphism between landmarks (and outlines) for images and singular soliton solutions of the geodesic equation. This isomorphism suggests a new dynamical paradigm for CA, as well as a new data
representation. The main references for this talk are
Soliton Dynamics in Computational Anatomy,
D. D. Holm, J. T. Ratnanather, A. Trouvé, L. Younes, http://arxiv.org/abs/nlin.SI/0411014
Momentum Maps and Measure-valued Solutions for the EPDiff Equation,
D. D. Holm and J. E. Marsden, In The Breadth of Symplectic and Poisson Geometry, A Festschrift for Alan Weinstein, 203-235, Progr. Math., 232, J.E. Marsden and T.S. Ratiu, Editors, Birkhäuser Boston,
Boston, MA, 2004. Also at http://arxiv.org/abs/nlin.CD/0312048
D. D. Holm and M. F. Staley, Interaction Dynamics of Singular Wave Fronts, at Martin Staley's website, under "Recent Papers" at http://cnls.lanl.gov/~staley/
Stephan Huckemann (University of Goettingen) Principal Component Geodesics for Planar Shape Spaces
Abstract: Currently, principal component analysis for data on a manifold such as Kendall's landmark based shape spaces is performed by a Euclidean embedding. We propose a method for PCA based on the
intrinsic metric. In particular, for Kendall's shape spaces of planar configurations (i.e. complex projective spaces), numerical methods are derived that allow comparing PCA based on geodesics to PCA
based on Euclidean approximation. Joint work with Herbert Ziezold (Universitaet Kassel, Germany).
Sarang Joshi (University of North Carolina) Statistics of Shape: Simple Statistics on Interesting Spaces
Abstract: A primary goal of Computational Anatomy is the statistical analysis of anatomical variability. A natural question that arises is how does one define the image of an "Average Anatomy" given a collection of anatomical images. Such an average image must represent the intrinsic geometric anatomical variability present. Large Deformation Diffeomorphic transformations have been shown to accommodate the geometric variability, but performing statistics of Diffeomorphic transformations remains a challenge. Standard techniques for computing statistical descriptions such as mean and
principal component analysis only work for data lying in a Euclidean vector space. In this talk, using the Riemannian metric theory the ideas of mean and covariance estimation will be extended to
non-linear curved spaces, in particular for finite dimensional Lie-Groups and the space of Diffeomorphisms transformations. The covariance estimation problem on Riemannian manifolds is posed as a
metric estimation problem. Algorithms for estimating the "Average Anatomical" image as well as for estimating the second order geometrical variability will be presented.
Yoon Mo Jung (University of Minnesota), Jianhong Shen (University of Minnesota) First-Order Modeling and Analysis of Illusory Shapes/Contours
Abstract: In visual cognition, illusions help elucidate certain intriguing but latent perceptual functions of the human vision system, and their proper mathematical modeling and computational
simulation are therefore deeply beneficial to both biological and computer vision. Inspired by existent prior works, the current paper proposes a first-order energy-based model for analyzing and
simulating illusory shapes and contours. The lower complexity of the proposed model facilitates rigorous mathematical analysis on the detailed geometric structures of illusory shapes/contours. After
being asymptotically approximated by classical active contours (via Lebesgue Dominated Convergence), the proposed model is then robustly computed using the celebrated level-set method of Osher and
Sethian with a natural supervising scheme. Potential cognitive implications of the mathematical results are addressed, and generic computational examples are demonstrated and discussed. (Joint work
with Prof. Jackie Shen; Partially supported by NSF-DMS.)
Martin Kilian (Vienna University of Technology) 3D Shape Warping based on Geodesics in Shape Space
Abstract: In the context of Shape Spaces a warp between two objects becomes a curve in Shape Space. One way to construct such a curve is to compute a geodesic joining the initial shapes. We propose a
metric on the space of closed surfaces and present some morphs to illustrate the behavior of the metric.
Reinhard Klette (University of Auckland) Combinatorics on adjacency graphs and incidence pseudographs
Abstract: Adjacency graphs and incidence pseudographs are possible models for the "interaction" of pixels in digital images. Both are treated in a recent monograph [in Chapters 4 and 5 of R. Klette
and A. Rosenfeld: Digital Geometry. Morgan Kaufman, San Francisco, 2004]. The talk explains how these models relate to image analysis. Oriented adjacency graphs provide formulas (also in
generalization of studies on planar graphs) which are useful when analyzing connected regions in digital 2D images. For example, the formula of G. Pick (1899) can be generalized to oriented regular
adjacency graphs. Oriented adjacency graphs allow no straightforward generalization to three dimensions (or more). Incidence pseudographs are defined for any finite dimension, and they provide a
topological model for 2D or 3D image analysis. Combinatorial formulas for incidence pseudographs are, again, of value for image analysis. For example, boundary counts allow direct conclusions about
volumes or the calculation of the Euler number.
Kathryn Leonard (California Institute of Technology) Model Selection for 2D Shape
Abstract: We derive an intrinsic, quantitative measure of suitability of shape models for any shape bounded by a simple, twice-differentiable curve. Our criterion for suitability is efficiency of
representation in a deterministic setting, inspired by the work of Shannon and Rissanen in the probabilistic setting. We compare two shape models, the boundary curve and Blum's medial axis, and apply
our efficiency measure to choose the more efficient model for each of 2,322 shapes.
Hstau Liao (University of Minnesota) Local feature modeling in image reconstruction-segmentation
Abstract: Given some local features (shapes) of interest, we produce images that contain those features. This idea is used in image reconstruction-segmentation tasks, as motivated by electron
microscopy.
In such application, often it is necessary to segment the reconstructed volumes. We propose approaches that directly produce, from the tomograms (projections), a label (segmented) image with the
given local features.
Joint work with Gabor T. Herman, CUNY.
Z.Q. John Lu (National Institute for Standards and Technology) Statistics and Metrology for Geometry Measuring Machine (GEMM)
Abstract: NIST is developing the Geometry Measuring Machine (GEMM) for precision measurements of aspheric optical surfaces. Mathematical and statistical principles for GEMM will be presented. We
especially focus on the uncertainty theory of profile reconstruction from GEMM using nonparametric local polynomial regression. Newly developed metrology results in Machkour-Deshayes et al (2006) for
comparing GEMM to NIST Moore M-48 Coordinate Measuring Machine will also be presented.
Rolando Magnanini (Università di Firenze) Seminar: Invariant level surfaces of solutions of evolutionary PDE's
Abstract: I will first prove that, if the solutions of certain Cauchy and Cauchy-Dirichlet problems for the heat equation possess one invariant spatial level surface, then they must be symmetric. In
order to prove this result, short-times asymptotic estimates for the heat content of balls touching the boundary of the domain are needed. Secondly, I will show how these estimates and symmetry
results can be extended to relevant nonlinear settings such as the porous media and the evolutionary p-Laplace equations. Finally, I will show some connections to some related problems for the
Helmholtz equation.
Riccardo March (Consiglio Nazionale delle Ricerche) Asymptotic reconstruction properties of the Nitzberg-Mumford-Shiota model for image segmentation
with depth
Abstract: We consider the Nitzberg-Mumford-Shiota variational formulation of the segmentation with depth problem. This is an image segmentation model that allows regions to overlap in order to take
into account occlusions between different objects. The purpose of segmentation with depth is to recover the shapes of the objects in an image, as well as the occluded boundaries and the ordering of
the objects in space. We discuss a research in progress about qualitative properties of the Nitzberg-Mumford-Shiota functional within the framework of the relaxation methods of the Calculus of
Variations. We try to characterize minimizing segmentations of images made up of smooth overlapping regions, when the weight of the fidelity term in the functional becomes large. This should give
some theoretical information about the capability of the model to reconstruct both occluded boundaries and depth order.
Kanti Mardia (University of Leeds) Recent Advances in Unlabelled Shape Analysis
Abstract: We discuss some new statistical methods for matching configurations of points in space where the points are either unlabelled or have at most a partial labelling constraining the match. The
aim is to draw simultaneous inference about the matching and the transformation. Various questions arise: how to incorporate concomitant information? How to simulate realistic configurations? What are the implementation issues? What is the effect of multiple comparisons when a large database is used? And so on. Applications to protein bioinformatics and image analysis will be described. We
will also discuss some open problems and suggest directions for future work.
Stephen Marsland (Massey University) A Minimum Description Length Objective Function for Groupwise Non-Rigid Image Registration
Abstract: Groupwise non-rigid registration aims to find a dense correspondence across a set of images, so that analogous structures in the images are aligned. For purely automatic inter-subject
registration the meaning of correspondence should be derived purely from the available data (i.e., the full set of images), and can be considered as the problem of learning correspondences given the
set of example images. We demonstrate that the Minimum Description Length (MDL) approach is a suitable method of statistical inference for this problem, and give a brief description of applying the
MDL approach to transmitting both single images and sets of images, and show that the concept of a reference image (which is central to defining a consistent correspondence across a set of images)
appears naturally as a valid model choice in the MDL approach. This poster provides a proof-of-concept for the construction of objective functions for image registration based on the MDL principle.
Peter W. Michor (Universitat Wien) Geometries on the space of planar shapes - geodesics and curvatures
Abstract: The L^2 or H^0 metric on the space of smooth plane regular closed curves induces vanishing geodesic distance on the quotient Imm(S^1,R^2)/Diff(S^1). This is a general phenomenon and holds
on all full diffeomorphism groups and spaces Imm(M,N)/Diff(M) for a compact manifold M and a Riemannian manifold N. Thus we have to consider more complicated Riemannian metrics using length or curvature, and we do this in a systematic Hamiltonian way: we derive the geodesic equations and split them into horizontal and vertical parts, and compute all conserved quantities via the momentum
mappings of several invariance groups (Reparameterizations, motions, and even scalings). The resulting equations are relatives of well known completely integrable systems (Burgers, Camassa Holm,
Hunter Saxton).
Washington Mio (Florida State University) A Demo on Shape of Curves
Abstract: I will present a brief demo on shape geodesics between curves in Euclidean spaces and a few applications to shape clustering.
David Mumford (Brown University) On the metrics on the space of simple closed plane curves
Abstract: There are so many Riemannian metrics on the space of curves, it is worthwhile to compare them. I will take one fixed shape and contrast the shape of the unit ball in 5 of these metrics.
After that, I want to discuss in more detail one particular Riemannian metric which was first proposed by Younes and has recently been investigated by Mio-Srivastava and by Shah.
Peter J. Olver (University of Minnesota) Invariant Signatures for Recognition and Symmetry
Abstract: The mathematical foundations of invariant signatures for object recognition and symmetry detection are based on the Cartan theory of moving frames and its more recent extensions developed
with a series of students and collaborators. The moving frame calculus leads to mathematically rigorous differential invariant signatures for curves, surfaces, and moving objects. The theory is
readily adapted to the design of noise-resistant alternatives based on joint (or semi-)differential invariants and purely algebraic joint invariants. Such signatures can be effectively used in the
detection of exact and approximate symmetries, as well as recognition and reconstruction of partially occluded objects. Moving frames can also be employed to design symmetry-preserving numerical
approximations to the required differential and joint differential invariants.
Xavier Pennec (INRIA Sophia Antipolis) Statistical Computing on Manifolds: From Riemannian Geometry to Computational Anatomy
Abstract: Based on a Riemannian manifold structure, we have previously developed a consistent framework for simple statistical measurements on manifolds. Here, the Riemannian computing framework is
extended to several important algorithms like interpolation, filtering, diffusion and restoration of missing data. The methodology is exemplified on the joint estimation and regularization of
Diffusion Tensor MR Images (DTI), and on the modeling of the variability of the brain. More recent developments include new Log-Euclidean metrics on tensors, that give a vector space structure and a
very efficient computational framework; Riemannian elasticity, a statistical framework on deformations fields, and some new clinical insights in anatomic variability.
Wolfgang Ring (University of Graz) A Newton-type Total Variation Diminishing Flow
Abstract: A new type of geometric flow is derived from variational principles as a steepest descent flow for the total variation functional with respect to a variable, Newton-like metric. The
resulting flow is described by a coupled, non-linear system of differential equations. Geometric properties of the flow are investigated, the relation to inverse scale space methods is discussed, and
the question of appropriate boundary conditions is addressed. Numerical studies based on a finite element discretization are presented.
Martin Rumpf (University of Bonn) Joint Methods in Shape Matching and Motion Extraction
Abstract: Variational methods are presented which allow one to correlate pairs of implicit shapes in 2D and 3D images, to morph pairs of explicit surfaces, or to analyse motion patterns in movies. A
particular focus is on joint methods. Indeed, fundamental tasks in image processing are highly interdependent: Registration of image morphology significantly benefits from previous denoising and
structure segmentation. On the other hand, combined information of different image modalities makes shape segmentation significantly more robust. Furthermore, robustness in motion extraction of
shapes can be significantly enhanced via a coupling with the detection of edge surfaces in space time and a corresponding feature sensitive space time smoothing. The methods are based on a splitting
of image morphology into a singular part consisting of the edge geometry and a regular part represented by the field of normals on the ensemble of level sets. Mumford-Shah type free discontinuity
problems are applied to treat the singular morphology both in image matching and in motion extraction. For the discretization a multi scale finite element approach is considered. It is based on a
phase field approximation of the free discontinuity problems and leads to effective and efficient algorithms. Numerical experiments underline the robustness of the presented approaches.
Guillermo R. Sapiro (University of Minnesota) Comparing and Warping Shapes in a Metric Framework
Abstract: A geometric framework for comparing manifolds given by point clouds is first presented in this talk. The underlying theory is based on Gromov-Hausdorff distances, leading to isometry
invariant and completely geometric comparisons. This theory is embedded in a probabilistic setting as derived from random sampling of manifolds, and then combined with results on matrices of pairwise
geodesic distances to lead to a computational implementation of the framework. The theoretical and computational results described are complemented with experiments for real three dimensional shapes.
In the second part of the talk, based on the notion of Minimizing Lipschitz Extensions and its connection with the infinity Laplacian, a computational framework for surface warping and in particular
brain warping (the nonlinear registration of brain imaging data) is presented. The basic concept is to compute a map between surfaces that minimizes a distortion measure based on geodesic distances
while respecting the boundary conditions provided. In particular, the global Lipschitz constant of the map is minimized. This framework allows generic boundary conditions to be applied and allows
direct surface-to-surface warping. It avoids the need for intermediate maps that flatten the surface onto the plane or sphere, as is commonly done in the literature on surface-based non-rigid brain
image registration. The presentation of the framework is complemented with examples on synthetic geometric phantoms and cortical surfaces extracted from human brain MRI scans. Joint works with F.
Memoli and P. Thompson.
Emil Saucan (Technion - Israel Insititute of Technology) Metric Curvatures and Applications
Abstract: Various notions of metric curvature, such as Menger, Haantjes and Wald, were developed early in the 20th century. Their importance was emphasized again recently by the works of M. Gromov and other researchers. Thus metric differential geometry was revived as a thriving field of research. Here we consider a number of applications of metric curvature to a variety of problems. Amongst them we mention the following: (1) The problem of better approximating surfaces by triangular meshes. We suggest viewing the approximating triangulations (graphs) as finite metric spaces and the target smooth surface as their Hausdorff-Gromov limit. Here intrinsic, discrete, metric definitions of differentiable notions such as Gauss, mean and geodesic curvatures are considered. (2) Employing metric differential geometry for the analysis of weighted graphs/networks. In particular, we employ Haantjes curvature as a tool in communication networks and DNA microarray analysis.
This represents joint work with Eli Appleboim and Yehoshua Y. Zeevi.
Eitan Sharon (Brown University) A metric space of shapes — the conformal approach
Abstract: We introduce a metric hyperbolic space of shapes that allows shape classification by similarities. The distance between each pair of shapes is defined by the length of the shortest path
continuously morphing them into each other (a unique geodesic). Every simple closed curve in the plane (a "shape") is represented by a 'fingerprint' which is a differentiable and invertible
transformation of the unit circle onto itself (a 1D, real valued, periodic function). In this space of fingerprints, there exists a group operation carrying every shape into any other shape, while
preserving the metric distance when operating on each pair of shapes. We show how this can be used to define shape transformations, like for instance 'adding a protruding limb' to any shape. This
construction is the natural outcome of the existence and uniqueness of conformal mappings of 2D shapes into each other, as well as the existence of the remarkable homogeneous Weil-Petersson metric.
This is a joint work with David Mumford.
Anuj Srivastava (Florida State University) Statistical Analysis of Shapes of 2D Curves, 3D Curves, and Facial Surfaces
Abstract: Our previous work developed techniques for computing geodesics on shape spaces of planar closed curves, first with and later without restrictions to arc-length parameterizations. Using
tangent principal component analysis (TPCA), we have imposed probability models on these spaces and have used them in Bayesian shape estimation and classification of objects in images. Extending
these ideas to 3D problems, I will present a "path-straightening" approach for computing geodesics between closed curves in R3. The basic idea is to define a space of such closed curves, initialize a
path between the given two curves, and iteratively straighten it using the gradient of an energy whose critical points are geodesics. This computation of geodesics between 3D curves helps analyze
shapes of facial surfaces as follows. Using level sets of smooth functions, we represent any surface as an indexed collection of facial curves. We compare any two facial surfaces by registering their
facial curves, and by comparing shapes of corresponding curves. Note that these facial curves are not necessarily planar, and require tools for analyzing shapes of 3D curves. (This work is in
collaboration with E. Klassen, C. Samir, and M. Daoudi)
Sheshadri R. Thiruvenkadam (University of California - Los Angeles) Using Shape Based Models for Detecting Illusory Contours, Disocclusion, and Finding Nonrigid
Level-Curve Correspondences
Abstract: Illusory contours are intrinsic phenomena in human vision. In this work, we present two different level set based variational models to capture a typical class of illusory contours such as
Kanizsa triangle. The first model is based on the relative locations between illusory contours and objects as well as known shape information of the contours. The second approach uses curvature
information via Euler's elastica to complete missing boundaries. We follow this up with a short summary of our current work on disocclusion using prior shape information. Next, we look at the problem
of finding nonrigid correspondences between implicitly represented curves. Given two level-set functions, we search for a diffeomorphism between their zero-level sets that minimizes a
shape-similarity measure. The diffeomorphisms are generated as flows of vector fields, and curve-normals are chosen as the similarity criterion. The resulting correspondences are symmetric and the
energy functional is invariant with respect to rotation and scaling of the curves. We also show how this model can be used as a basis to compare curves of different topologies. Joint Work with: Tony
Chan, Wei Zhu, David Groisser, Yunmei Chen.
Alain Trouve (Ecole Normale Superieure de Cachan) Statistical modelling and estimation problems with deformable templates
Abstract: The link between Bayesian and variational approaches is well known in the image analysis community in particular in the context of deformable models. However, the current trend is the
computation of statistics mainly based on PCA or non-linear extensions on manifolds using local linearization through the exponential mapping. We will try to show in this talk that going from
statistics to statistical modelling in the context of deformable models leads to interesting new questions, mainly unsolved, about the statistical modelling itself but also about the derivation of
consistent and effective estimation algorithms.
Namrata Vaswani (Iowa State University) Statistical Models for Contour Tracking
Abstract: (based on joint work with Yogesh Rathi, Allen Tannenbaum, Anthony Yezzi) We consider the problem of sequentially segmenting an object(s) or more generally a "region of interest" (ROI) from
a sequence of images. This is formulated as the problem of "tracking" (computing a causal Bayesian estimate of) the boundary contour of a moving and deforming object(s) from a sequence of images. The
observed image is usually a noisy and nonlinear function of the contour. The image likelihood given the contour (the "observation likelihood") is often multimodal (due to multiple objects or background
clutter or partial occlusions) or heavy tailed (due to outliers or low contrast). Since the state space model is nonlinear and multimodal, we study particle filtering solutions to the tracking
problem. If the contour is represented as a continuous curve, contour deformation forms an infinite (in practice, very large), dimensional space. Particle filtering from such a large dimensional
space is impractical. But in most cases, one can assume that for a certain time period, "most of the contour deformation" occurs in a small number of dimensions. This "effective basis" for contour deformation can be assumed to be fixed (e.g. the space of affine deformations) or slowly time varying. We have proposed practically implementable particle filtering algorithms under both these assumptions.
Michael Wakin (Rice University) Manifold-based models for image processing
Abstract: The information contained in an image ("What does the image represent?") also has a geometric interpretation ("Where does the image reside in the ambient signal space?"). It is often
enlightening to consider this geometry in order to better understand the processes governing the specification, discrimination, or understanding of an image. We discuss manifold-based models for
image processing imposed, for example, by the geometric regularity of objects in images. We present an application in image compression, where we see sharper images coded at lower bitrates thanks to
an atomic dictionary designed to capture the low-dimensional geometry. We also discuss applications in computer vision, where we face a surprising barrier -- the image manifolds arising in many
interesting situations are in fact nondifferentiable. Although this appears to complicate the process of parameter estimation, we identify a multiscale tangent structure to these manifolds that
permits a coarse-to-fine Newton method. Finally, we discuss applications in the emerging field of Compressed Sensing, where in certain cases a manifold model can supplant sparsity as the key for
image recovery from incomplete information. This is joint work with Justin Romberg, David Donoho, Hyeokho Choi, and Richard Baraniuk.
Lei Wang (Washington University School of Medicine) Application of PCA and Geodesic 3D Evolution of Initial Velocity in Assessing Hippocampal Change
in Alzheimer's Disease
Abstract: In large-deformation diffeomorphic metric mapping (LDDMM), the diffeomorphic matching of given images are modeled as evolution in time, or a flow, of an associated smooth velocity vector
field V controlling the evolution. The geodesic length of the path in the space of diffeomorphic transformations connecting the given two images defines a metric distance between them. The initial
velocity field v0 parameterizes the whole geodesic path and encodes the shape and form of the target image (1). Thus, methods such as principal component analysis (PCA) of v0 lead to analysis of anatomical shape and form in target images without being restricted to the small-deformation assumption (1, 2). Further, specific subsets of the principal components (eigenfunctions) discriminate subject groups, the effect of which can be visualized by 3D geodesic evolution of the velocity field reconstructed from the subset of principal components. An application to Alzheimer's disease is presented.
Joint work with: Laurent Younes, M. Faisal Beg, J. Tilak Ratnanather.
1. Vaillant, M., Miller, M. I., Younes, L. & Trouve, A. (2004) Neuroimage 23 Suppl 1, S161-9.
2. Miller, M. I., Banerjee, A., Christensen, G. E., Joshi, S. C., Khaneja, N., Grenander, U. & Matejic, L. (1997) Statistical Methods in Medical Research 6, 267-299.
Todd Wittman (University of Minnesota) A variational approach to image and video super-resolution
Abstract: Super-resolution seeks to produce a high-resolution image from a set of low-resolution, possibly noisy, images such as in a video sequence. We present a method for combining data from
multiple images using the Total Variation (TV) and Mumford-Shah functionals. We discuss the problem of sub-pixel image registration and its effect on the final result.
Anthony J. Yezzi (Georgia Institute of Technology) Sobolev Active Contours
Abstract: Following the observation first noted by Michor and Mumford, that H^0 metrics on the space of curves lead to vanishing distances between curves, Yezzi and Mennucci proposed conformal
variants of H^0 using conformal factors dependent upon the total length of a given curve. The resulting metric was shown to yield non-vanishing distance at least when the conformal factor was greater
than or equal to the curve length. The motivation for the conformal structure was to preserve the directionality of the gradient of any functional defined over the space of curves when compared to
its H^0 gradient. This desire came in part due to the fact that the H^0 metric was the consistent choice of metric in all variational active contour methods proposed since the early 90's. Even the
well studied geometric heat flow is often referred to as the curve shrinking flow as it arises as the gradient descent of arclength with respect to the H^0 metric.
Changing strategies, we have decided to consider adapting contour optimization methods to a choice of metric on the space of curves rather than trying to constrain our metric choice in order to
conform to previous optimization methods. As such, we reformulate the gradient descent approach used for variational active contours by utilizing gradients with respect to H^1 metrics rather than H^0
metrics. We refer to this class of active contours as "Sobolev Active Contours" and discuss their strengths when compared to more classical active contours based on the same underlying energy
functionals. Not only do Sobolev active contours exhibit more regularity, regardless of the choice of energy to minimize, but they are also ideally suited for applications in computer vision such as
tracking, where it is common that a contour to be tracked changes primarily by simple translation from frame to frame (a motion which is almost free for many Sobolev metrics).
(Joint work with G. Sundaramoorthi and A. Mennucci.)
Laurent Younes (Johns Hopkins University) New algorithms for diffeomorphic shape analysis
Abstract: We present a series of applications of the Jacobi evolution equations along geodesics in groups of diffeomorphisms. We describe, in particular, how they can be used to perform feasible
gradient descent algorithms for image matching, in several situations, and illustrate this with 2D and 3D experiments. We also discuss parallel translation in the group, with its projections on shape
manifolds, and focus in particular on an implementation of the associated equations using iterated Jacobi fields.
Jean-Paul Zolesio (Institut National de Recherche en Informatique Automatique (INRIA)) Shape Tube Metric
Abstract: We introduce the TUBE connection for domains with finite perimeters, then a metric, and we characterise the necessary condition for the geodesic tube. We obtain a complete metric space of shapes with non-prescribed topology. That metric extends the Courant metric developed in the book Shape and Geometry (Delfour and Z.), SIAM 2001.
Washington Street, NJ Trigonometry Tutor
Find a Washington Street, NJ Trigonometry Tutor
...I've consistently scored 800, often without a single error. The dreaded GMAT. I've scored nearly perfect on this test, which is significantly tougher than the SAT.
34 Subjects: including trigonometry, English, physics, calculus
I am a degreed mechanical engineer who enjoys teaching others. I have helped all my kids from elementary through college with their homework. I have been in the industry for over 20 years.
20 Subjects: including trigonometry, geometry, ASVAB, algebra 1
...If you have interest, then you will want to achieve results yourself rather than someone pushing you to achieve them, and in the end you will be rewarded by having the best grades and
fulfilling your dreams. Algebra 1 is the first level of mathematics where it is critical to develop higher level t...
15 Subjects: including trigonometry, calculus, geometry, GRE
...I am a Mathematics major, minoring in Italian. For the entirety of my high school and college career, I have tutored friends and classmates in many areas of mathematics, including Alg 1+2,
Geometry, Trig, PreCalc, Calc 1+2. Although this has been mostly volunteer work, I nevertheless have learn...
9 Subjects: including trigonometry, calculus, geometry, algebra 1
...I have also designed a summer program for students who want to get a head-start in Geometry. I even wrote a mini-Geometry book for this purpose. I can cover most major topics in Geometry in as little as one month (it all depends on how motivated your child is), but most will take six weeks (5 days a week). A solid foundation in Prealgebra will help you breeze through Algebra I and
Algebra II.
11 Subjects: including trigonometry, calculus, algebra 1, geometry
On Near-Uniform URL Sampling
Monika R. Henzinger^1 - Allan Heydon^2 - Michael Mitzenmacher^3 - Marc Najork^2
^1Google, Inc. 2400 Bayshore Parkway, Mountain View, CA 94043.
^2Compaq Systems Research Center, 130 Lytton Avenue, Palo Alto, CA 94301.
^3Harvard University, Division of Engineering and Applied Sciences.
We consider the problem of sampling URLs uniformly at random from the web. A tool for sampling URLs uniformly can be used to estimate various properties of web pages, such as the fraction of pages in
various internet domains or written in various languages. Moreover, uniform URL sampling can be used to determine the sizes of various search engines relative to the entire web. In this paper, we
consider sampling approaches based on random walks of the web graph. In particular, we suggest ways of improving sampling based on random walks to make the samples closer to uniform. We suggest a
natural test bed based on random graphs for testing the effectiveness of our procedures. We then use our sampling approach to estimate the distribution of pages over various internet domains and to
estimate the coverage of various search engine indexes.
Keywords: URL sampling, random walks, internet domain distribution, search engine size.
1. Introduction
Suppose that we could choose a URL uniformly at random from the web. Such a tool would allow us to answer questions about the composition of the web using standard statistical methods based on
sampling. For example, we could use random URLs to estimate the distribution of the length of web pages, the fraction of documents in various internet domains, or the fraction of documents written in
various languages. We could also determine the fraction of web pages indexed by various search engines by testing for the presence of pages chosen uniformly at random. However, so far, no methodology
for sampling URLs uniformly, or even near-uniformly, at random from the web has been discovered.
The contributions of this paper are threefold. First, we consider several sampling approaches, including natural approaches based on random walks. Intuitively, the problem with using a random walk in
order to sample URLs from the web is that pages that are more highly connected tend to be chosen more often. We suggest an improvement to the standard random walk technique that mitigates this
effect, leading to a more uniform sample. Second, we describe a test bed for validating our technique. In particular, we apply our improved sampling approach to a synthetic random graph whose
connectivity was designed to resemble that of the web, and then analyze the distribution of these samples. This test bed may prove useful for testing other similar techniques. Finally, we apply our
sampling technique to three sizable random walks of the actual web. We then use these samples to estimate the distribution of pages over internet domains, and to estimate the coverage of various
search engine indexes.
1.1. Prior Work
For the purposes of this paper, the size of a search engine is the number of pages indexed by the search engine. Similarly, the size of the web corresponds to the number of publicly accessible,
static web pages, although, as we describe in Section 4, this is not a complete or clear definition.
The question of understanding the size of the web and the relative sizes of search engines has been studied previously, most notably by Lawrence and Giles [14,15] and Bharat and Broder [2]. Part of
the reason for the interest in the area is historical: when search engines first appeared, they were often compared by the number of pages they claimed to index. The question of whether size is in an
appropriate gauge of search engine utility, however, remains a subject of debate [19]. Another reason to study size is to learn more about the growth of the web, so that appropriate predictions can
be made and future trends can be spotted early.
In 1995, Bray simply created (in a non-disclosed way) a start set of about 40,000 web pages and crawled the web from them [3]. He estimated the size of the web to be the number of unique URLs the
crawl encountered.
The initial work by Lawrence and Giles used a sampling approach based on the results of queries chosen from the NEC query logs to compare relative sizes of search engines [14]. Based on published
size figures, the authors estimated the size of the web. The approach of sampling from NEC query logs leaves questions as to the statistical appropriateness of the sample, as well as questions about
the repeatability of the test by other researchers. In contrast, we seek tests that are repeatable by others (with sufficient resources).
Further work by Lawrence and Giles used an approach based on random testing of IP addresses to determine characteristics of hosts and pages found on the web, as well as to estimate the web's size [15
]. This technique appears to be a useful approach for determining characteristics of web hosts. Given the high variance in the number of pages per host, however, and the difficulties in accessing
pages from hosts by this approach, it is not clear that this technique provides a general methodology to accurately determine the size of the web. In particular, the scalability of this approach is
uncertain for future 128 bit IP-v6 addresses.
Bharat and Broder, with motivation similar to ours, suggested a methodology for finding a page near-uniformly at random from a search engine index [2]. Their approach is based on determining queries
using random words, according to their frequency. For example, in one experiment, they chose queries that were conjunctions of words, with the goal of finding a single page (or a small number of
pages) in the search engine index containing that set of words. They also introduced useful techniques for determining whether a page exists in a search engine index. This problem is not as obvious
as it might appear, as pages can be duplicated at mirror sites with varying URLs, pages might change over time, etc. Although Bharat and Broder used their techniques to find the relative overlap of
various search engines, the authors admit that their techniques are subject to various biases. For example, longer pages (with more words) are more likely to be selected by their query approach than
short pages.
This paper is also related to a previous paper of ours [9], in which we used random walks to gauge the weight of various search engine indexes. The weight of an index is a generalization of the
notion of its size. Each page can be assigned a weight, which corresponds to its importance. The weight of a search engine index is then defined to be the sum of the weights of the pages it contains.
If all pages have an equal weight, the weight of an index is proportional to its size. Another natural weight measure is, for example, the PageRank measure (described below). We used the standard
model of the web as a directed graph, where the pages are nodes and links between pages represent directed edges in the natural way. With this interpretation, we used random walks on the web graph
and search-engine probing techniques proposed by Bharat and Broder [2] to determine the weight of an index when the weight measure is given by the PageRank measure. The random walks are used to
generate random URLs according to a distribution that is nearly equal to the PageRank distribution. This paper extends that approach to generate URLs according to a more uniform distribution.
1.2. Random Walks and PageRank
We first provide some background on random walks. Let X = {s[1],s[2],...s[n]} be a set of states. A random walk on X corresponds to a sequence of states, one for each step of the walk. At each step,
the walk switches from its current state to a new state or remains at the current state. Random walks are usually Markovian, which means that the transition at each step is independent of the
previous steps and depends only on the current state.
For example, consider the following standard Markovian random walk on the integers over the range {0...j} that models a simple gambling game, such as blackjack, where a player bets the same amount on
each hand (i.e., step). We assume that if the player ever reaches 0, they have lost all their money and stop, and if they reach j, they have won enough money and stop. Hence the process will stop
whenever 0 or j is reached. Otherwise, at each step, one moves from state i (where i is not 0 or j) to i+1 with probability p (the probability of winning the game), to i-1 with probability q (the
probability of losing the game), and stays at the same state with probability 1-p-q (the probability of a draw).
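To make the dynamics concrete, here is a minimal Python sketch of the gambling walk just described. The function name and the particular parameter values are ours, chosen purely for illustration.

```python
import random

def gamblers_walk(start, j, p, q, rng=random.Random(0)):
    """Simulate the absorbing walk on {0, ..., j} described above.

    From state i (0 < i < j) the walk moves to i+1 with probability p,
    to i-1 with probability q, and stays put with probability 1-p-q.
    It stops on reaching 0 or j.  Returns (final state, number of steps).
    """
    state, steps = start, 0
    while 0 < state < j:
        u = rng.random()
        if u < p:
            state += 1
        elif u < p + q:
            state -= 1
        steps += 1
    return state, steps

# Estimate the chance of reaching j before going broke, for illustrative values.
wins = sum(gamblers_walk(5, 10, 0.45, 0.45)[0] == 10 for _ in range(10000))
print(wins / 10000)
```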
The PageRank is a measure of a page suggested by Brin and Page [4] that is fundamental to our sampling approach. Intuitively, the PageRank measure of a page is similar to its in-degree, which is a
possible measure of the importance of a page. The PageRank of a page is high if it is linked to by many pages with a high PageRank, and a page containing few outgoing links contributes more weight to
the pages it links to than a page containing many outgoing links. The PageRank of a page can be easily expressed mathematically. Suppose there are T total pages on the web. We choose a parameter d
such that 0 < d < 1; a typical value of d might lie in the range 0.1 < d < 0.15. Let pages p[1], p[2], ..., p[k] link to page p. Let R(p) be the PageRank of p and C(p) be the number of links out of p
. Then the PageRank R(p) of a page is defined to satisfy:
This equation defines R(p) uniquely, modulo a constant scaling factor. If we scale R(p) so that the PageRanks of all pages sum to 1, R(p) can be thought of as a probability distribution over pages.
The PageRank distribution has a simple interpretation in terms of a random walk. Imagine a web surfer who wanders the web. If the surfer visits page p, the random walk is in state p. At each step,
the web surfer either jumps to a page on the web chosen uniformly at random, or the web surfer follows a link chosen uniformly at random from those on the current page. The former occurs with
probability d, the latter with probability 1-d. The equilibrium probability that such a surfer is at page p is simply R(p). An alternative way to say this is that the average fraction of the steps
that a walk spends at page p is R(p) over sufficiently long walks. This means that pages with high PageRank are more likely to be visited than pages with low PageRank.
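As an illustration, the following short Python sketch simply iterates the defining equation above on a toy link graph. It is not the computation used by any search engine or by the authors; the function and variable names are ours, and pages with no out-links are handled by spreading their weight uniformly, a common convention assumed here so the values still sum to 1.

```python
def pagerank(links, d=0.15, iters=100):
    """Iterate the PageRank equation above on a small link graph.

    `links` maps each page to the list of pages it links to.  d plays the
    role of the jump probability in the text (a typical value), and T is
    the total number of pages.  Returns PageRank values summing to 1.
    """
    pages = list(links)
    T = len(pages)
    R = {p: 1.0 / T for p in pages}              # start from the uniform distribution
    for _ in range(iters):
        new = {p: d / T for p in pages}          # the random-jump contribution d/T
        for p, outs in links.items():
            if outs:
                share = (1 - d) * R[p] / len(outs)
                for q in outs:
                    new[q] += share              # each out-link passes on an equal share
            else:
                for q in pages:                  # dead-end pages: spread mass uniformly (assumption)
                    new[q] += (1 - d) * R[p] / T
        R = new
    return R

# A toy four-page "web"; the pages and links are invented for the example.
toy = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["a"]}
print(pagerank(toy))
```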
2. Sampling-Based Approaches
We motivate our approach for sampling a random page from the web by considering and improving on a sequence of approaches that clearly fail. Our approach also has potential flaws, which we discuss.
One natural approach would be to simply try to crawl the entire web, keeping track of all unique pages. The size of the web prevents this approach from being effective.
Instead, one may consider crawling only a part of the web. If one obtains a large enough subset of the web, then perhaps a uniform sample from this subset would be sufficient, depending on the
application. The question is how to obtain this sample. Notice that crawling the web in some fixed, deterministic manner is problematic, since then one obtains a fixed subset of the web. One goal of
a random sampling approach is variability; that is, one should be able to repeat the sampling procedure and obtain different random samples for different experiments. A sampling procedure based on a
deterministic crawl of the web would simply be taking uniform samples from a fixed subset of the web, making repeated experiments problematic. Moreover, it is not clear how to argue that a
sufficiently large subset of the web is representative. (Of course, the web might change, leading to different results in different deterministic experiments, but one should not count on changes over
which one has no control, and whose effect is unclear.)
Because of the problems with the deterministic crawling procedure, it is natural to consider randomized crawling procedures. For example, one may imagine a crawler that performs a random walk,
following a random link from the current page. In the case where there are no links from a page, the walk can restart from some page in its history. Similarly, restarts can be performed to prevent
the walk from becoming trapped in a cycle.
Before explaining our sampling approach, we describe our tool for performing PageRank-like random walks. We use Mercator, an extensible, multi-threaded web crawler written in Java [10,17]. We
configure Mercator to use one hundred crawling threads, so it actually performs one hundred random walks in parallel, each walk running in a separate thread of control. The crawl is seeded with a set
of 10,000 initial starting points chosen from a previous crawl. Each thread begins from a randomly chosen starting point. Recall that walks either proceed along a random link with probability 1-d, or
perform a random jump with probability d (and in the case where the out-degree is 0). When a walk randomly jumps to a random page instead of following a link, it chooses a page at random from all
pages visited by any thread so far (including the initial seeds).
Note that the random jumps our walk performs are different from the random jumps for the web surfer interpretation of PageRank. For PageRank, the random web surfer is supposed to jump to a page
chosen uniformly at random from the entire web. We cannot, however, choose a page uniformly at random; indeed, if we could do that, there would be no need for this paper! Hence we approximate this
behavior by choosing a random page visited by Mercator thus far (including the seed set). Because we use a relatively large seed set, this limitation does not mean that our walks tend to remain near
a single initial starting point (see Section 6 below). For this reason, we feel that this necessary approximation has a reasonably small effect.
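A single-threaded sketch of this walk, with the fetch-and-parse step abstracted behind a placeholder function, might look as follows. Mercator itself runs one hundred such walks in parallel; all names here are illustrative, not the crawler's actual API.

```python
import random

def random_walk(seeds, out_links, steps, d=0.15, rng=random.Random(0)):
    """Single-threaded sketch of the walk described above.

    With probability d, or whenever the current page has no out-links, the
    walk jumps to a page chosen uniformly from everything seen so far
    (including the seed set); otherwise it follows a random out-link.
    `out_links(url)` is a placeholder for fetching a page and extracting
    its links.  Returns the full visit history, in order.
    """
    visited = list(seeds)                  # the pool of candidate jump targets
    current = rng.choice(visited)
    for _ in range(steps):
        links = out_links(current)
        if rng.random() < d or not links:
            current = rng.choice(visited)  # random jump to a previously seen page
        else:
            current = rng.choice(links)    # follow a random out-link
        visited.append(current)
    return visited
```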
A problem with using the pages discovered by a random walk is that certain pages are more likely to be visited during the course of a random walk than other pages. For example, the site
www.microsoft.com/ie is very likely to appear even during the course of a very short walk, because so many other pages point to it. We must account for this discrepancy in determining how to sample
pages visited in the course of our random walk.
More concretely, consider a sampling technique in which we perform a random walk in order to crawl a portion of the web, and we then sample pages from the crawled portion in order to obtain a
near-uniform sample. For any page X,

    Pr(X is sampled) = Pr(X is crawled) · Pr(X is sampled | X is crawled).     (1)
We first concentrate on finding an approximation for the first term on the right hand side. Consider the following argument. As we have already stated, the fraction of the time that each page is
visited in equilibrium is proportional to its PageRank. Hence, for sufficiently long walks,

    E[number of times X is visited] ≈ L · R(X),     (2)

where L is the length of the walk.
Unfortunately, we cannot count on being able to do long walks (say, on the order of the number of pages), for the same reason we cannot simply crawl the entire web: the graph is too large. Let us
consider a page to be well-connected if it can be reached by almost every other page through several possible short paths. Under the assumption that the web graph consists primarily of well-connected
pages, approximation (2) is true for relatively short walks as well. (Here, by a short walk, we mean one whose length is small compared to n, the number of pages in the Web graph; see Section 4 below for more about this assumption.) This is because a random walk in a well-connected graph rapidly loses the memory of where it started, so the short-term behavior is like its long-term behavior in this regard.
Now, for short walks, a page is unlikely to be visited more than once, so the probability that it is crawled at least once is roughly its expected number of visits:

    Pr(X is crawled) ≈ E[number of times X is visited].     (3)

Combining approximations (2) and (3), we have

    Pr(X is crawled) ≈ L · R(X).     (4)
Our mathematical analysis therefore suggests that Pr(X is crawled) is proportional to its PageRank. Under this assumption, by equation (1) we will obtain a uniform sampling if we sample pages from
the crawled subset so that Pr(X is sampled | X is crawled) is inversely proportional to the PageRank of X. This is the main point, mathematically speaking, of our approach: we can obtain more nearly
uniform samples from the history of our random walk if we sample visited pages with a skewed probability distribution, namely by sampling inversely to each page's PageRank.
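In code, this skewed sampling step is straightforward. The sketch below assumes a dictionary of PageRank estimates for the crawled pages and samples with replacement for simplicity; the names are hypothetical and only the Python standard library is used.

```python
import random

def sample_near_uniform(rank, k, rng=random.Random(0)):
    """Draw k pages from the crawled set, each with probability inversely
    proportional to its estimated PageRank, as argued above.

    `rank` maps every crawled page to a PageRank estimate, e.g. the visit
    ratio VR(X) or the sample PageRank R'(X) discussed below.  Sampling is
    done with replacement, which is adequate for a sketch.
    """
    pages = list(rank)
    weights = [1.0 / rank[p] for p in pages]
    return rng.choices(pages, weights=weights, k=k)
```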
The question therefore arises of how best to find the PageRank of a page from the information obtained during the random walk. Our random walk provides us with two possible ways of estimating the
PageRank. The first is to estimate R(X) by what we call the visit ratio of the page, or VR(X), which is simply the fraction of times the page was visited during the walk. That is,

    VR(X) = (number of times X was visited during the walk) / L.
Our intuition for using the visit ratio is that if we run the walk for an arbitrarily long time, the visit ratio will approach the PageRank. If the graph consists of well-connected pages, we might
expect the visit ratio to be close to the PageRank over small intervals as well.
We also suggest a second possible means of estimating the PageRank of a page. Consider the graph consisting of all pages visited by the walk, along with all edges traversed during the course of the
walk. We may estimate the PageRank R(X) of a page by the sample PageRank R'(X) computed on this sample graph. Intuitively, the dominant factor in the value R'(X) is the in-degree, which is at least
the number of times the page was visited. We would not expect the in-degree to be significantly larger than the number of times the page was visited, since this would require the random walks to
cross the same edge several times. Hence R'(X) should be closely related to VR(X). However, the link information used in computing R'(X) appears to be useful in obtaining a better prediction. Note
that calculating the values R'(X) requires storing a significant amount of information during the course of the walk. In particular, it requires storing much more information than required to
calculate the visit ratio, since all the traversed edges must also be recorded. It is therefore not clear that in all cases computing R'(X) will be feasible or desirable.
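For concreteness, the following sketch computes the visit ratio VR(X) from a walk history and assembles a sample graph on which R'(X) could then be computed, for instance with the PageRank sketch given earlier. Treating consecutive pages in the history as edges also counts random jumps as links, so this is only an approximation; the crawler described here would record the links it actually traversed.

```python
from collections import Counter

def visit_ratio(visited):
    """VR(X): the fraction of the walk's page visits that landed on X."""
    counts = Counter(visited)
    L = len(visited)
    return {p: c / L for p, c in counts.items()}

def sample_graph(visited):
    """Assemble the graph of visited pages and traversed edges, on which the
    sample PageRank R'(X) can be computed.  Consecutive pages in the history
    stand in for traversed edges here, which over-counts random jumps; the
    real crawler would log only the links it actually followed.
    """
    links = {p: [] for p in visited}
    for a, b in zip(visited, visited[1:]):
        links[a].append(b)
    return links
```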
In this section, we consider the limitations in our analysis and framework given above. In particular, we consider biases that may impact the accuracy of our approach.
It is first important to emphasize that our use of random walks as described above limits the pages that can be obtained as samples. Hence, we must clarify the set of web pages from which our
approach is meant to sample. Properly defining which pages constitute the web is a challenging prospect in its own right. Many web pages lie behind corporate firewalls, and are hence inaccessible to
the general public. Also, pages can be dynamically created in response to user queries and actions, yielding an infinite number of potential but not truly extant web pages.
Our crawl-based approach finds pages that are accessible only through some sequence of links from our initial seed set. We describe this part of the web as the publicly accessible web. Implicitly, we
are assuming that the bulk of the web lies in a giant component reachable from major sites such as Yahoo. Furthermore, we avoid crawling dynamic content by stripping the query component from
discovered URLs, and we log only those pages whose content type is text/html.
Finally, because our random walk involves jumping to random locations frequently, it is very difficult for our random walk to discover pages accessible only through long chains of pages. For example,
if the only way to reach a page N is through a chain of links in which B is reachable only from A, C only from B, and so on, then N will almost never be discovered. Hence, our technique is implicitly biased against pages that are not
well-connected. If we assume that the giant component of publicly accessible pages is well-connected, then this is not a severe problem. Recent results, however, suggest that the graph structure of
the Web may be more complex, with several pages reachable only by long chains of links and a large component of pages that are not reachable from the remainder of the Web [5].
We therefore reiterate that our random walk approach is meant to sample from the publicly accessible, static, and well-connected web.
We now consider limitations that stem from the mathematical framework developed in Section 3. The mathematical argument is only approximate, for several reasons that we outline here.
• Initial bias. There is an initial bias based on the starting point. This bias is mitigated by choosing a large, diverse set of initial starting points for our crawl.
• Dependence. More generally, there is a dependence between pages in our random walk. Given a page on the walk, that page affects the probability that another page is visited. Therefore we cannot
treat pages independently, as the above analysis appears to suggest.
• Short cycles. This is a specific problem raised by the dependence problem. Some pages that lie in closed short cycles may have the property that if they are visited, they tend to be visited again
very soon. For these pages, our argument does not hold, since there is a strong dependence in the short term memory of the walk: if we see the page we are likely to see it again. In particular,
this implies that approximation (3) is inaccurate for these pages; however, we expect the approximation to be off by only a small constant factor, corresponding to the number of times we are
likely to visit the page in a short interval given we have visited it.
• Large PageRanks. Approximation (3) is inappropriate for long walks and pages with very high PageRank. For a page with very high PageRank, the probability that the page is visited is close to one,
the upper bound. Approximation (4) will therefore overestimate the probability that a high PageRank page is crawled, since the right hand side can be larger than 1.
• Random jumps. As previously mentioned, our random walk approximates the behavior of a random web surfer by jumping to a random page visited previously, rather than a completely random page. This
leads to another bias in our argument, since it increases the likelihood that a page will be visited two or more times during a crawl. This bias is similar to the initial bias.
Despite these problems, we feel that for most web pages X, our approximation (4) for Pr(X is crawled) will be reasonably accurate. However, approximation (4) does not guarantee uniform samples, since
there are other possible sources of error in using the visit ratio to estimate the PageRank of a page. In particular, the visit ratio yields poor approximations for pages with very small PageRank.
This is because the visit ratio is discrete and has large jumps compared to the smallest PageRank values.
To see this more clearly, consider the following related example. Suppose that we have two bins, one containing red balls and the other containing blue balls, representing pages with small and large
PageRanks, respectively. The balls have unique IDs and can therefore be identified. There are one million red balls and one hundred blue balls. We ``sample'' balls in the following manner: we flip a
fair coin to choose a bin, and we then choose a ball independently and uniformly at random from the selected bin. Suppose we collect ten thousand samples in this manner.
Let us treat this sampling process as a random walk. (Note that there are no links; however, this is a random process that gives us a sequence of balls, which we may treat just like the sequence of
pages visited during a random walk.) Suppose we use the visit ratio as an approximation for the long-term sample distribution. Our approximation will be quite good for the blue balls, as we take
sufficient samples that the visit ratio gives a fair approximation. For any red balls we choose, however, the visit ratio will be (at its smallest) 1 in 10,000, which is much too large, since we
expect to sample each red ball once in every 2,000,000 samples.
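A quick simulation of this two-bin example (illustrative only; the counts match the text) shows the effect: any red ball that is seen at all receives a visit ratio of at least 1 in 10,000, although its true per-sample probability is 1 in 2,000,000.

```python
import random
from collections import Counter

rng = random.Random(1)
RED_BALLS, BLUE_BALLS, SAMPLES = 1_000_000, 100, 10_000

draws = []
for _ in range(SAMPLES):
    if rng.random() < 0.5:                       # fair coin picks the red bin
        draws.append(("red", rng.randrange(RED_BALLS)))
    else:                                        # ... or the blue bin
        draws.append(("blue", rng.randrange(BLUE_BALLS)))

counts = Counter(draws)
red_ratios = [c / SAMPLES for (color, _), c in counts.items() if color == "red"]
blue_ratios = [c / SAMPLES for (color, _), c in counts.items() if color == "blue"]
print(min(red_ratios), 0.5 / RED_BALLS)          # about 1e-4 versus the true 5e-7
print(sum(blue_ratios) / len(blue_ratios))       # close to the true 5e-3 per blue ball
```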
The problem is that rarely visited pages (i.e., pages with small PageRank) cannot be properly accounted for, since over a short walk we have no chance to see the multitude of such pages. Hence, our
estimate for the PageRank of pages with small PageRank is too large, and hence our estimate for the inverse of the PageRank (which we use as Pr(X is sampled | X is crawled) in equation (1)) is too
small. The effect is that such pages will still be somewhat under-sampled by our sampling process. Computing R'(X) in place of VR(X) does not solve this problem; an analogous argument shows that
pages with small PageRank are also under-sampled if this estimate is used.
The best we can hope for is that this sampling procedure provides a more uniform distribution of pages. In what follows, we describe the experiments we performed to test the behavior of our sampling
procedure on a random graph model, as well as the results from random walks on the web.
In order to test our random walk approach, it is worthwhile to have a test bed in which we can gauge its performance. We suggest a test bed based on a class of random graphs, designed to share
important properties with the web.
It has been well-documented that the graph represented by the web has a distinguishing structure. For example, the in-degrees and out-degrees of the nodes appear to have a power-law (or Zipf-like)
distribution [13]. A random variable X is said to have a power-law distribution if Pr[X = i] is proportional to 1/i^k for some real number k. One explanation for this phenomenon is that the web graph can be thought of as a dynamic structure, where new pages tend to copy the links of other pages.
For our test bed, we therefore choose random graphs with in-degrees and out-degrees governed by power-law distributions. The in-degrees and out-degrees are chosen at random from a suitable
distribution, subject to the restriction that the total in-degree and out-degree must match. Random connections are then made from out links to in links via a random permutation. This model does not
capture some of the richer structure of the web. However, we are primarily interested in whether our sampling technique corrects for the variety of the PageRanks for the nodes. This model provides a
suitable variety of PageRanks as well as in-degrees and out-degrees, making it a useful test case.
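A sketch of this construction, sampling in- and out-degrees from truncated power laws and wiring out-link stubs to in-link stubs through a random permutation; the exponents and degree ranges mirror the test graph described next, but the code itself is illustrative rather than the generator actually used:

```python
import random

def power_law_degrees(n, exponent, lo, hi, rng):
    """Sample n degrees with Pr[degree = k] proportional to 1/k**exponent, lo <= k <= hi."""
    ks = list(range(lo, hi + 1))
    weights = [k ** (-exponent) for k in ks]
    return rng.choices(ks, weights=weights, k=n)

def random_power_law_graph(n, rng=random.Random(0)):
    out_deg = power_law_degrees(n, 2.38, 5, 20, rng)
    in_deg = power_law_degrees(n, 2.1, 5, 18, rng)
    # Nudge a few in-degrees so that total in-degree equals total out-degree.
    diff = sum(in_deg) - sum(out_deg)
    while diff != 0:
        i = rng.randrange(n)
        if diff > 0 and in_deg[i] > 1:
            in_deg[i] -= 1
            diff -= 1
        elif diff < 0:
            in_deg[i] += 1
            diff += 1
    # One stub per unit of degree; a random permutation of the in-stubs makes the connections.
    out_stubs = [u for u in range(n) for _ in range(out_deg[u])]
    in_stubs = [v for v in range(n) for _ in range(in_deg[v])]
    rng.shuffle(in_stubs)
    return list(zip(out_stubs, in_stubs))        # may contain self-loops and repeated edges
```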
We present results for a test graph. The probability of having out-degree k was set to be proportional to 1/k^2.38, for k in the range five to twenty. The probability of having in-degree k was set to
be proportional to 1/k^2.1. The range of the in-degrees was therefore set to lie between five and eighteen, so that the total in-degree would be close to the total out-degree. (There are a few nodes
with smaller in-degree, due to the restriction that the total in-degree and out-degree must match.) The exponents 2.38 and 2.1 were chosen based on experimental results [13]. Our final graph has
10,000,000 nodes and 82,086,395 edges. Note that we chose a relatively high minimum in-degree and out-degree to ensure that with high probability our graph is strongly connected. That is, there are
no small isolated components, and every page can reach every other page through some path.
To crawl this graph, we wrote a program that reads a description of the graph and acts as a web server that returns synthetic pages whose links correspond to those of the graph. We then used Mercator
to perform a random walk on this server, just as we would run Mercator on the real web. The walk visited 848,836 distinct nodes, or approximately 8.5% of the total graph. Three sets of two thousand
samples each were chosen from the visited nodes, using three different sampling techniques. A PR sample was obtained by sampling a crawled page X with probability inversely proportional to its
apparent PageRank R'(X). Similarly, a VR sample was obtained by sampling a crawled page X with probability inversely proportional to its visit ratio VR(X). Finally, a random sample was obtained by
simply choosing 2000 of the crawled pages independently and uniformly at random.
One way to test the efficacy of our sampling technique is to test if the sampled nodes are uniformly distributed according to certain graph attributes that may affect which nodes we sample. In
particular, it seems likely that a node's out-degree, in-degree, and PageRank might affect how likely it is to be sampled. For example, since a node's PageRank is so closely tied to the likelihood
that we crawl it, there is a good chance that the node's PageRank will be somewhat correlated with the probability that our sampling technique samples it, while this will of course not be the case if
our sampling technique is truly uniform. We therefore compared the proportions of the in-degrees, out-degrees, and PageRanks of our samples with their proportions in the original graph.
For example, we consider first the out-degrees, shown in Figure 1. The graph on the left shows the distributions of node out-degrees for the original graph and nodes collected by three sampling
techniques. The graph on the right hand side normalizes these distributions against the percentages from the original graph (shown as a horizontal blue line with value 1). In both graphs, sample
curves closer to the graph curve (shown in blue) are better. Although the distributions for the samples differ somewhat from that of the original graph, the differences are minor, and are due to the
variation inherent in any probabilistic experiment. As might be expected, there does not appear to be any systematic bias against nodes with high or low out-degree in our sampling process.
Figure 1: Out-degree distributions for the original graph and for nodes obtained by three different sampling techniques.
In contrast, when we compare our samples to the original graph in terms of the in-degree and PageRank, as shown in Figures 2 and 3, there does appear to be a systematic bias against pages with low
in-degree and low PageRank. (Note that in Figure 3, the PageRank values are scaled as multiples of the average PageRank, namely 10^-7, the inverse of the number of nodes in the graph. For example,
the percentage of pages in the PageRank range 1.0-1.2 corresponds to the percentage of pages whose PageRanks lie between 1.0 and 1.2 times the average.) This systematic bias against pages with low
in-degree or PageRank is naturally understood from our previous discussion in Section 4. Our random walk tends to discover pages with higher PageRank. Our skewed sampling is supposed to ameliorate
this effect but cannot completely correct for it. Our results verify this high-level analysis.
As can be seen from the right-hand graphs in Figures 2 and 3, the most biased results for in-degree and PageRank appear in the random samples. In other words, both PR and VR sampling produces a net
sampling that is more uniform than naive random sampling of the visited sub-graph.
Figure 2: In-degree distributions for the original graph and for nodes obtained by three different sampling techniques.
Figure 3: PageRank distributions for the original graph and for nodes obtained by three different sampling techniques.
We have similarly experimented with random graphs with broader ranges of in- and out-degrees, more similar to those found on the web. A potential problem with such experiments is that random graphs
constructed with small in- and out-degrees might contain disjoint pieces that are never sampled, or long trails that are not well-connected. Hence graphs constructed in this way are not guaranteed to
have all nodes publicly accessible or well-connected. In such graphs we again find that using the values VR(X) or R'(X) to re-scale sampling probabilities makes the resulting sample appear more
uniform. However, the results are not exactly uniform, as can be seen by comparing the distribution of in-degrees and PageRanks of the samples with those of the original graph.
To collect URLs for sampling, we performed three random walks of the web that lasted one day each. All walks were started from a seed set containing 10,258 URLs discovered by a previous, long-running
web crawl. From the logs of each walk, we then constructed a graph representation that included only those visited pages whose content type was text/html. Finally, we collected PR, VR, and uniformly
random samples for each walk using the algorithms described above.
Various attributes of these walks are shown in Table 1. For each walk, we give the walk's start date, the total number of downloaded HTML pages (some of which were fetched multiple times), as well as
the number of nodes and (non-dangling) edges in the graph of the downloaded pages. Note that Walk 3 downloaded pages at roughly twice the rate of the other two walks; we attribute this to the
variability inherent in network bandwidth and DNS resolution.
│ Name │ Date │ Downloads │ Nodes │ Edges │
│ Walk 1 │ 11/15/99 │ 2,702,939 │ 990,251 │ 6,865,567 │
│ Walk 2 │ 11/17/99 │ 2,507,004 │ 921,114 │ 6,438,577 │
│ Walk 3 │ 11/18/99 │ 5,006,745 │ 1,655,799 │ 12,050,411 │
Table 1: Attributes of our three random web walks.
Given any two random walks starting from the same seed set, one would hope that the intersection of the sets of URLs discovered by each walk would be small. To check how well our random walks live up
to this goal, we examined the overlaps between the sets of URLs discovered by the three walks. Figure 4 shows a Venn diagram representing this overlap. The regions enclosed by the blue, red, and
green lines represent the sets of URLs encountered by Walks 1, 2, and 3, respectively. The values in each region denote the number of URLs (in thousands), and the areas accurately reflect those
values. The main conclusion to be drawn from this figure is that 83.2% of all visited URLs were visited by only one walk. Hence, our walks seem to disperse well, and therefore stand a good chance of
discovering new corners of the web.
7. Applications
Having a set of near-uniformly sampled URLs enables a host of applications. Many of these applications measure properties of the web, and can be broadly divided into two groups: those that determine
characteristics of the URLs themselves, and those that determine characteristics of the documents referred to by the URLs. Examples of the former group include measuring distributions of the
following URL properties: length, number of arcs, port numbers, filename extensions, and top-level internet domains. Examples of the latter group include measuring distributions of the following
document properties: length, character set, language, number of out-links, and number of embedded images. In addition to measuring characteristics of the web itself, uniformly sampled URLs can also
be used to measure the fraction of all web pages indexed by a search engine. In this section we report on two such applications, top-level domain distribution and search engine coverage.
7.1. Estimating the Top-Level Domain Distribution
We analyzed the distribution of URL host components across top-level internet domains, and compared the results to the distribution we discovered during a much longer deterministic web crawl that
downloaded 80 million documents. Table 2 shows for each walk and each sampling method (using 10,000 URLs) the percentage of pages in the most popular internet domains.
│ │ Deterministic │ Uniform sample │ PR sample │ VR sample │
│ Domain │ Crawl │ Walk 1 │ Walk 2 │ Walk 3 │ Walk 1 │ Walk 2 │ Walk 3 │ Walk 1 │ Walk 2 │ Walk 3 │
│ com │ 47.03 │ 46.79 │ 46.48 │ 47.02 │ 46.59 │ 46.77 │ 47.53 │ 45.62 │ 46.01 │ 45.42 │
│ edu │ 10.25 │ 9.01 │ 9.02 │ 8.90 │ 9.31 │ 9.36 │ 9.13 │ 9.84 │ 9.08 │ 9.96 │
│ org │ 8.38 │ 8.51 │ 8.82 │ 8.99 │ 8.66 │ 8.74 │ 8.38 │ 9.12 │ 8.91 │ 8.65 │
│ net │ 6.41 │ 4.80 │ 4.52 │ 4.39 │ 4.96 │ 4.63 │ 4.62 │ 4.74 │ 4.50 │ 4.52 │
│ jp │ 3.99 │ 3.83 │ 3.74 │ 3.41 │ 3.70 │ 3.22 │ 3.61 │ 3.87 │ 3.62 │ 3.62 │
│ gov │ 2.75 │ 2.97 │ 3.04 │ 2.74 │ 3.13 │ 3.09 │ 2.53 │ 3.42 │ 3.53 │ 2.89 │
│ uk │ 2.53 │ 2.46 │ 2.65 │ 2.70 │ 2.73 │ 2.77 │ 2.76 │ 2.59 │ 3.08 │ 2.83 │
│ us │ 2.44 │ 1.73 │ 1.86 │ 1.53 │ 1.65 │ 1.73 │ 1.62 │ 1.77 │ 1.52 │ 1.80 │
│ de │ 2.14 │ 3.24 │ 2.93 │ 3.29 │ 3.21 │ 3.25 │ 3.06 │ 3.26 │ 3.13 │ 3.52 │
│ ca │ 1.93 │ 2.07 │ 2.31 │ 1.94 │ 2.13 │ 1.85 │ 1.86 │ 2.05 │ 1.89 │ 2.07 │
│ au │ 1.51 │ 1.85 │ 1.87 │ 1.64 │ 1.75 │ 1.66 │ 1.66 │ 1.74 │ 1.49 │ 1.71 │
│ fr │ 0.80 │ 0.96 │ 1.04 │ 0.99 │ 0.84 │ 0.69 │ 0.89 │ 0.99 │ 1.01 │ 0.90 │
│ se │ 0.72 │ 0.81 │ 1.33 │ 1.04 │ 0.86 │ 1.27 │ 1.06 │ 0.84 │ 1.10 │ 1.05 │
│ it │ 0.54 │ 0.65 │ 0.63 │ 0.80 │ 0.91 │ 0.82 │ 0.70 │ 0.82 │ 0.82 │ 0.83 │
│ ch │ 0.37 │ 0.87 │ 0.71 │ 0.99 │ 0.64 │ 0.71 │ 0.87 │ 0.92 │ 0.72 │ 0.89 │
│ Other │ 8.21 │ 9.45 │ 9.05 │ 9.63 │ 8.93 │ 9.44 │ 9.72 │ 8.41 │ 9.59 │ 9.34 │
Table 2: Percentage of sampled URLs in each top-level domain.
Note that the results are quite consistent over the three walks that are sampled in the same way. Also, as the size of the domain becomes smaller, the variance in percentages increases, as is to be
expected by our earlier discussion.
There appears to be a relatively small difference between the various sampling techniques in this exercise. Although this may be in part because our skewed sampling does not sufficiently discount
high PageRank pages, it also appears to be because the PageRank distributions across domains are sufficiently similar that we would expect little difference between sampling techniques here. We have
found in our samples, for example, that the average sample PageRank and visit ratio are very close (within 10%) across a wide range of domains.
7.2. Search Engine Coverage
This section describes how we have used URL samples (using 2,000 pages) to estimate search engine coverage. For each of the URL samples produced as described in Section 6 above, we attempt to
determine if the URL has been indexed by various search engines. If our samples were truly uniform over the set of all URLs, this would give an unbiased estimator of the fraction of all pages indexed
by each search engine.
To test whether a URL is indexed by a search engine, we adopt the approach used by Bharat and Broder [2]. Using a list of words that appear in web documents and an approximate measure of their
frequency, we find the r rarest words that appear in each document. We then query the search engine using a conjunction of these r rarest words and check for the appropriate URL. In our tests, we use
r = 10. Following their terminology, we call such a query a strong query, as the query is designed to strongly identify the page.
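A sketch of building such a strong query; the shape of the word-frequency table and the conjunctive query syntax are assumptions made for illustration and are not specified in this form by the paper:

```python
import re

def strong_query(page_text, approximate_web_frequency, r=10):
    """Form a conjunctive query from the r rarest words appearing in the page.
    approximate_web_frequency: dict mapping word -> rough frequency on the web."""
    words = set(re.findall(r"[a-z]+", page_text.lower()))
    known = [w for w in words if w in approximate_web_frequency]
    rarest = sorted(known, key=lambda w: approximate_web_frequency[w])[:r]
    return " AND ".join(rarest)      # one possible conjunctive syntax; engines differ
```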
In practice, strong queries do not always uniquely identify a page. First, some sampled pages contain few rare words; therefore, even a strong query may produce thousands of hits. Second, mirror
sites, duplicates or near-duplicates of the page, or other spurious matches can create difficulties. Third, some search engines (e.g., Northern Light) can return pages that do not contain all of the
words in the query, despite the fact that a conjunctive query was used.
To deal with some of these difficulties, we adopt an approach similar to one suggested by Bharat and Broder [2]. In trying to match a URL with results from a search engine, all URLs are normalized by
converting to lowercase, removing optional extensions such as index.htm[l] and home.htm[l], inserting defaulted port numbers if necessary, and removing relative references of the form ``#...''. We
also use multiple matching criteria. A match is exact if the search engine returns a URL that, when normalized, exactly matches the normalized target URL. A host match occurs if a search engine
returns a URL whose host component matches that of the target URL. Finally, a non-zero match occurs if a search engine returns any URL as a result of the strong query. Non-zero matches will
overestimate the number of actual matches; however, the number of exact matches may be an underestimate if a search engine removes duplicate pages or if the location of a web page has changed.
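A sketch of the normalization and matching rules just described; the default-port table and the exact list of stripped filenames are assumptions consistent with the text rather than a specification:

```python
from urllib.parse import urlsplit, urlunsplit

DEFAULT_PORTS = {"http": "80", "https": "443"}

def normalize(url):
    parts = urlsplit(url.lower())                      # convert to lowercase
    netloc = parts.netloc
    if ":" not in netloc and parts.scheme in DEFAULT_PORTS:
        netloc = netloc + ":" + DEFAULT_PORTS[parts.scheme]   # insert defaulted port
    path = parts.path
    for name in ("index.html", "index.htm", "home.html", "home.htm"):
        if path.endswith("/" + name):
            path = path[: -len(name)]                  # remove optional extensions
            break
    return urlunsplit((parts.scheme, netloc, path, parts.query, ""))  # drop '#...' part

def classify_match(target_url, result_urls):
    """result_urls: URLs returned by a search engine for the strong query."""
    target = normalize(target_url)
    if any(normalize(u) == target for u in result_urls):
        return "exact"
    host = urlsplit(target).netloc
    if any(urlsplit(normalize(u)).netloc == host for u in result_urls):
        return "host"
    return "non-zero" if result_urls else "no match"
```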
To measure the coverage of several popular search engines, we fetched the 12,000 pages corresponding to the URLs in the PR and VR samples of the three walks described in Section 6. We then determined
the 10 rarest words in each fetched page, and performed queries on the following eight search engines: AltaVista [1], Excite [6], FAST Search [7], Google [8], HotBot [11], Infoseek [12], Lycos [16],
and Northern Light [18].
The results of these experiments are shown in Figures 5, 6, and 7, which show the exact, host, and non-zero match percentages, respectively. Note that the results are quite consistent over the three
walks and the two sampling methods.
An issue worth remarking on is that Google appears to perform better than one might expect from reported results on search engine size [19]. One possible reason for the discrepancy is that Google
sometimes returns pages that it has not indexed based on key words in the anchor text pointing to the page. A second possibility is that Google's index may contain pages with higher PageRank than
other search engines, and the biases of our approach in favor of such pages may therefore be significant. These possibilities underscore the difficulties in performing accurate measurements of search
engine coverage.
Figure 5: Exact matches for the three walks.
Figure 6: Host matches for the three walks.
Figure 7: Non-zero matches for the three walks.
8. Conclusions
We have described a method for generating a near-uniform sample of URLs by sampling URLs discovered during a random walk of the Web. It is known that random walks tend to over-sample URLs with
higher connectivity, or PageRank. To ameliorate that effect, we have described how additional information obtained by the walk can be used to skew the sampling probability against pages with high
PageRank. In particular, we use the visit ratio or the PageRanks determined by the graph of the pages visited during the walk.
In order to test our ideas, we have implemented a simple test bed based on random graphs with Zipfian degree distributions. Testing on these graphs shows that our samples based on skewed sampling
probabilities yield samples that are more uniform over the entire graph than the samples obtained by sampling uniformly over pages visited during the random walk. Our samples, however, are still not perfectly uniform.
Currently, we have focused attention on making our approach universal, in that we do not take advantage of additional knowledge we may have about the web. Using additional knowledge could
significantly improve our performance, in terms of making our samples closer to uniform. For example, we could modify our sampling technique to more significantly lower the probability of sampling
pages with apparently high PageRank from our random walk, and similarly we could significantly increase the probability of sampling pages with apparently low PageRank from our random walk. Our
sampling probabilities could be based on information such as the distribution of in-degrees and out-degrees on the web. However, such an approach might incur other problems; for example, the changing
nature of the web makes it unclear whether additional information used for sampling can be trusted to remain accurate over time.
[1] AltaVista, http://www.altavista.com/
[2] K. Bharat and A. Broder. A Technique for Measuring the Relative Size and Overlap of Public Web Search Engines. In Proceedings of the 7th International World Wide Web Conference, Brisbane,
Australia, pages 379-388. Elsevier Science, April 1998.
[3] T. Bray. Measuring the Web. World Wide Web Journal, 1(3), summer 1996.
[4] S. Brin and L. Page. The Anatomy of a Large-Scale Hypertextual Web Search Engine. In Proceedings of the 7th International World Wide Web Conference, Brisbane, Australia, pages 107-117. Elsevier
Science, April 1998.
[5] A. Broder, R. Kumar, F. Maghoul, P. Raghavan, S. Rajagopalan, R. Stata, A. Tomkins, and J. Wiener. Graph Structure in the Web. These proceedings.
[6] Excite, http://www.excite.com/
[7] FAST Search, http://www.alltheweb.com/
[8] Google, http://www.google.com/
[9] M. Henzinger, A. Heydon, M. Mitzenmacher, and M. Najork. Measuring Search Engine Quality using Random Walks on the Web. In Proceedings of the Eighth International World Wide Web Conference, pages
213-225, May 1999.
[10] Allan Heydon and Marc Najork. Mercator: A Scalable, Extensible Web Crawler. World Wide Web, 2(4):219-229, December 1999.
[11] HotBot, http://www.hotbot.com/
[12] Infoseek, http://www.infoseek.com/
[13] S.R. Kumar, P. Raghavan, S. Rajagopalan, and A. Tomkins. Extracting Large Scale Knowledge Bases from the Web. IEEE International Conference on Very Large Databases (VLDB), Edinburgh, Scotland,
pages 639-650, September 1999.
[14] S. Lawrence and C. L. Giles. Searching the World Wide Web. Science, 280(5360):98-100, 1998.
[15] S. Lawrence and C. L. Giles. Accessibility of Information on the Web. Nature, 400:107-109, 1999.
[16] Lycos, http://www.lycos.com/
[17] Mercator Home Page, http://www.research.digital.com/SRC/mercator/
[18] Northern Light, http://www.northernlight.com/
[19] Search Engine Watch, http://www.searchenginewatch.com/reports/sizes.html
Monika R. Henzinger received her Ph.D. from Princeton University in 1993 under the supervision of Robert E. Tarjan. Afterwards, she was an assistant professor in Computer Science at Cornell
University. She joined the Digital Systems Research Center (now Compaq Computer Corporation's Systems Research Center) in 1996. Since September 1999 she has been the Director of Research at Google, Inc.
Her current research interests are information retrieval on the World Wide Web and algorithmic problems arising in this context.
Allan Heydon received his Ph.D. in Computer Science from Carnegie Mellon University, where he designed and implemented a system for processing visual specifications of file system security. In
addition to his recent work on web crawling, he has also worked on the Vesta software configuration management system, the Juno-2 constraint-based drawing editor, and algorithm animation. He is a
senior member of the research staff at Compaq Computer Corporation's Systems Research Center.
Michael Mitzenmacher received his Ph.D. in Computer Science from the University of California at Berkeley in 1996. He then joined the research staff of the Compaq Computer Corporation's Systems
Research Center. Currently he is an assistant professor at Harvard University. His research interests focus on algorithms and random processes. Current interests include error-correcting codes, the
Web, and distributed systems.
Marc Najork is a senior member of the research staff at Compaq Computer Corporation's Systems Research Center. His current research focuses on 3D animation, information visualization, algorithm
animation, and the Web. He received his Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign in 1994, where he developed Cube, a three-dimensional visual programming language.
guess my number
Topic closed
guess my number
1-100, evenly divisible by 3
Re: guess my number
Last edited by anonimnystefy (2012-05-02 09:12:51)
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: guess my number
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: guess my number
no no and no
Re: guess my number
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: guess my number
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: guess my number
Re: guess my number
Last edited by anonimnystefy (2012-05-06 00:11:27)
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: guess my number
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: guess my number
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: guess my number
bobbym yes new number 1-100, divisible by 3,
Re: guess my number
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: guess my number
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: guess my number
Re: guess my number
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: guess my number
no way
Re: guess my number
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: guess my number
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: guess my number
no no
Re: guess my number
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: guess my number
yikes 75 new one divisible by 6 up to 100
Last edited by mathgogocart (2012-05-27 02:32:14)
Re: guess my number
Is it 48?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Topic closed
Skewness in Venture Capital Returns
The following is a nice chart (courtesy of Focus Ventures) showing the large gaps in performance across quartiles in venture capital. Anyone surprised that most smart investors want to invest in the
first quartile of venture funds, or none at all? After all, risk-adjusted performance of the VC second-quartile is less than that of the public markets.
Related posts:
1. Specious Argument says:
Skewness in Returns
Paul Kedrosky has an interesting graphic on weblog post Skewness in Venture Capital Returns. It reminds me of when I used to look at junior oil and gas companies in Calgary. Arc Financial did and
probably still does release a…
2. Bill Burnahm says:
Hi Paul,
A few comments on the whole top quartile issue intended to stir the pot a bit:
1. It should not be surprising that the top quartile of VC funds produces superior returns. In fact, you don’t even need any actual data on VC returns to determine this; just simple statistics
and the basic constructs of Venture Capital. From a mathematics standpoint, what you have is a one-tailed distribution with a rather long tail representing the extreme variance that can occur in
a business where you can never lose more than 100% on a single deal but you can sometimes make a 10,000%+ return. When you divide such a tail into quartiles, the top quartile is guaranteed to
“perform” better than the other three in any kind of reasonable distribution. Thus, there’s no way that the 2nd and 3rd quartiles will ever catch up unless there is a change in the independence
of the individual outcomes or a substantial change in the shape of the distribution, both of which are possible but not likely.
2. As economists have found in other infamous quartile studies, perhaps the most prominent being studies of income distribution in the US, the question is not really “Can the other
quartiles catch the top quartile?”, because that’s a logical impossibility, but “Is there sufficient turnover in the top quartile to provide for the reasonable prospect of upward mobility?”, i.e.
how consistent is the cohort within a given quartile over time. In the VC space, you hear a lot about certain firms that are “consistent” top quartile performers, but I have never actually seen a
study that reliably quantifies this consistency. My guess is that while there are a number of consistent performers there are probably quite a few firms that wander between quartiles from
fund-to-fund. If this is the case, then for LPs life is a bit trickier than simply picking a top quartile fund because there’s always a decent chance that the firm that just turned in a second or
even a third quartile fund, will turn in a 1st quartile performance next time around and that their top quartile firm will turn in a real clunker next time.
Perhaps the most interesting studies to see then would be how consistent the cohorts are within each quartile are over a certain time period and what, if any, factors are most predictive of top
quartile performance over time. Such studies may prove to be better investment tools for LPs than their rearview mirrors.
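To make the statistical point in the comment above concrete, here is a toy simulation; it is not based on any actual fund data, and the lognormal return multiple is simply a stand-in for a distribution with a long right tail:

```python
import random
import statistics

rng = random.Random(42)
# 1,000 fund return multiples: most modest, a few very large.
multiples = sorted((rng.lognormvariate(0.0, 1.0) for _ in range(1000)), reverse=True)
for q in range(4):                                   # quartile 1 = top quartile
    vals = multiples[q * 250:(q + 1) * 250]
    print(f"quartile {q + 1}: mean multiple {statistics.mean(vals):.2f}")
# The top quartile's mean sits far above the rest purely because the
# distribution is skewed, before any question of manager skill arises.
```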
3. Nivi says:
Monsieur Burnham made a great point that there may be churn within each of the quartiles.
I believe that “A Random Walk Down Wall Street” refers to various studies that show that past performance is not an indicator of future performance for mutual funds.
Significant turnover in the quartiles could be the reason LPs are giving money to today’s poor performers. The zeros of today could be the heros of tommorrow. But are the LPs venture allocations
beating the S&P 500?
You could move this study of quartiles up a level and segment LP performance (just their venture allocation) by quartile. I wonder what percentile of LPs are beating the S&P 500?
And then, what is the turnover in the quartiles for LPs?
4. Daniel Nerezov says:
I am not a VC but I would like to know something.
Whilst your meetings with LP's are sure to include a healthy discussion of venture returns and the distribution thereof…
Isn't the primary motivation for LPs plain old diversification? In other words, to better practice modern portfolio theory, don't the LPs *have to* sign up for your funds as a measure to diversify
risk rather than get all hyped up about your returns?
Isn’t it just the matter of having exposure to *all* assets which is what drives the investment decision, rather than any kind of “let’s pick the winners” kind of funny business?
Proving an equivalence relation.
April 10th 2013, 11:01 PM #1
Feb 2013
Proving an equivalence relation.
Let A and B be subsets of the set Z of all integers, and let F denote the set of all functions f : A --> B. Define a relation R on F by: for any f, g in F, fRg if and only if there is a constant c such that f(x) - g(x) = c for all x in A.
I need to prove that R is an equivalence relation on F.
Re: Proving an equivalence relation.
So, what seems to be the problem?
Re: Proving an equivalence relation.
Ok so i believe i have to prove that it is reflexive symmetric and transitive. How would i prove that this is reflexive?
Re: Proving an equivalence relation.
Please confirm that you know what "reflexive" is. Because if you do, the fact that R is reflexive should be obvious to you. (You know that y - y = 0 for all y, don't you?) On the other hand, if
you don't know what "reflexive" means, asking a question that hides this lack of knowledge on a forum is not the best idea. Instead, you should read your textbook or lecture notes or look it up
in Wikipedia, MathWorld or a similar site.
Re: Proving an equivalence relation.
Alright i hope this is correct:
Reflexive: fRf = f(x)-f(x) for x (element of) A
=0 which is a constant. Therefore reflexive
Symmetric: There exists f,g (element of) F if fRg, then gRf
gRf = g(x) - f(x)
=-(f(x) - g(x))
Still a little confused on transitivity.
Re: Proving an equivalence relation.
If $f\mathcal{R}g~\&~g\mathcal{R}h$ what can be said of $(f-g)+(g-h)~?$
Re: Proving an equivalence relation.
The idea is correct but should be written differently. Equality = is mostly used between numbers, vectors, functions, matrices and other math objects. On the other hand, fRf is a proposition,
i.e., something that can be either true or false. It is customary to write "A ⇔ B" or "A iff B" to indicate that propositions A and B are equivalent. So here I would write:
R is reflexive iff for all f ∈ F, fRf. Fix an arbitrary f ∈ F. Then fRf iff there exists a c such that for all x ∈ A, f(x) - f(x) = c. Consider c = 0 and fix an arbitrary x ∈ A. Indeed, f(x) - f
(x) = 0, as required.
First, it's "for all f, g" and not "there exists "f, g". I would write:
R is symmetric iff for all f, g ∈ F, fRg implies gRf. Fix arbitrary f and g ∈ F and assume fRg. This means that there exists a c such that for all x ∈ A, f(x) - g(x) = c. We need to show gRf,
i.e., there exists a c' such that for all x ∈ A, g(x) - f(x) = c'. Take c' = -c and fix an arbitrary x ∈ A. Then g(x) - f(x) = -(f(x) - g(x)) = -c, as required.
Follow Plato's hint on transitivity.
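One possible write-up of that step, along the lines of Plato's hint and in the same style as the cases above (a sketch, not the only correct phrasing): R is transitive iff for all f, g, h ∈ F, fRg and gRh imply fRh. Fix arbitrary f, g, h ∈ F and assume fRg and gRh. Then there exist constants c and c' such that f(x) - g(x) = c and g(x) - h(x) = c' for all x ∈ A. Adding the two equations, as the hint suggests, f(x) - h(x) = (f(x) - g(x)) + (g(x) - h(x)) = c + c' for all x ∈ A, so taking c'' = c + c' shows fRh, as required.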
You may notice that this problem requires proving propositions as well as using assumptions of the form "for all x, A(x)", "there exists an x such that A(x)" and "A implies B". I recommend going
over how to do this again.
Golden Years of Moscow Mathematics
This is the second edition of a book first published in 1992, just after the collapse of the Soviet Union. The book collects reflections and memoirs about mathematics in Moscow during the Soviet
years, a period that was hardly "golden" for Russians in general. But mathematics in Moscow flourished during this time, particularly in the 1930s and then in the 1950s and 1960s. This was partly due
to the fact that mathematics was a "safe" topic, free from political interference, and partly due to the presence of a few great and inspiring mathematicians who created a school and a tradition.
The articles collected here mostly focus on mathematicians. There are articles on Kolmogorov (two of them) and on Markov, and two other articles are basically collections of stories about
mathematicians. Many photographs are included. Several articles have an autobiographical thrust. One article, by A. B. Sossinsky, tells the story of his return from the West to Russia, and ends with
a discussion of the dangers of the early 1990s. He asks whether perestroika would "destroy what the KGB could not" by allowing a massive "brain drain" to the West. The answer to that seems to have
been yes.
In 1992, a group of scholars had just established the Independent University of Moscow, an attempt to preserve, and maybe regenerate, the Moscow mathematical tradition. IUM is mentioned briefly in
the original preface, but it was clearly too recent a development to feature in the articles. The mood seemed somber. The past was golden; the future was uncertain.
The new edition adds a new preface and one new article, in addition to a list of errata and an index of names. (One can easily tell the new material, since it is in a slightly different typeface.)
The index of names and the errata are certainly very welcome. The new preface notes both the huge number of mathematicians who emigrated and the fact that many of them have preserved some sort of tie
to Russia, often by visiting frequently and giving lectures.
The new article, by V. M. Tikhomirov, is something of a disappointment. Rather than focus on the period since 1992, Tikhomirov has written an overview of mathematics in Moscow since the 1920s. That
is well and good, but it means that very little space is given to updating the story. The phrase "in the 15 years since the collapse of the USSR" occurs on page 281, and the article ends on page 283.
Those two pages contain a little bit of information, but not much. In particular, we learn little about the fortunes of IUM except for a list of those who have spoken at its "Globus" seminar.
One infers from the tone of Tikhomirov's article that these have been hard years for mathematics at Moscow. One cannot read this book without admiring what was achieved during those golden years, nor
without a bracing awareness of how easy it was for all that to come to an end.
Fernando Q. Gouvêa is Carter Professor of Mathematics at Colby College in Waterville, ME. He is the editor of MAA Reviews.
Identify the relationship between the corresponding sides in two similar figures and
their areas. Find area of similar figures.
Main Curriculum Tie:
Mathematics Grade 7
Draw, construct, and describe geometrical figures and describe the relationships between them.
1. Solve problems involving scale drawings of geometric figures, including computing
actual lengths and areas from a scale drawing and reproducing a scale drawing at a
different scale.
• Pattern Blocks for each team
• Centimeter Paper (attached)
• Worksheets: Growing Generations of Similar Figures, Area of Similar Figures
Background For Teachers:
Enduring Understanding (Big Ideas):
Indirect measures for similar figures
Essential Questions:
• How can we describe the relationship between the corresponding sides of similar
figures and their areas?
• How can this relationship help us determine the area for similar figures?
Skill Focus:
Use the relationship between corresponding sides of similar figures to find area.
Vocabulary Focus:
Corresponding sides, similar figures
Ways to Gain/Maintain Attention (Primacy):
Manipulatives, cheer/movement, game, writing
Instructional Procedures:
1. Find the perimeter of this rectangle.
2. Find the area for the rectangle
3. If these rectangles are similar, what is the measure of side EF?
4. What scale factor was used on the smaller rectangle to get the larger dimensions?
Lesson Segment 1: How can we describe the relationship between the corresponding
sides of similar figures and their areas?
Have students review the journal Frayer Models: Defining Similar Figures and Defining
Scale Factor. Ask students to read through their definitions, examples and
nonexamples. Tell them they will be using what they know about similar figures,
corresponding sides and scale factor for today’s learning goal.
Give each team of students 16 of each pattern block: orange squares, green triangles,
blue rhombi. They will be building growing patterns, or growing generations of
similar figures. Geometry Standard in the 7th Grade Utah Mathematics Core uses the
terms "scale drawings" rather than generations. Make clear to students that each
generation is a scale drawing of the original figure. Work with them in completing
the Growing Generations of Similar Figures investigation worksheet, discussing the
answers to the questions on page 2 of the worksheet through # 2 on the second page.
Ask them to be looking for a pattern that relates the scale factor to the number of
area units in each generation.
To help students visualize the figures in the table for # 3 on the second page of
Growing Generations of Similar Figures, give each a Centimeter Paper. Students can
sketch each of the generations listed in the table for the “5 cm²” shape (as shown in
red on the attached key for the Centimeter Paper) as well as the 2nd and 3rd
generation listed in the table for the “4 yd²” shape as shown in green. As they
sketch the figures, compare the areas, and complete these first six rows of the table
on the worksheet, have them focus on the relationship between the scale factor and
the original area. They should begin to see that the units of area in the larger
figure are always the original area multiplied by the square of the scale factor.
Help them make the connection between area being measured in square units and the
scale factor being squared when finding the area of the larger generation. Students
should then be able to complete the last four rows of the table by using the pattern
described rather than needing to sketch.
Lesson Segment 2: How can the relationship between areas of similar figures help us
find missing areas?
Teach the students the following cheer. Have teams create moves for the cheer or a
rap to present to class.
Similar figures? I’m not scared!
I’ll find area if I’m dared.
Multiply the scale factor squared
By the smaller area-I’m prepared!
Play Lie Detector for completing the worksheet, Area of Similar Figures.
Materials: Give each small group a Smart Pal with blank paper or large team board and marker.
Procedure: Divide the class into two teams, A and B. Team members work together to
complete one assigned part of the worksheet. Give students a little time to check
with team members and to work the part of the worksheet correctly. The team should
then decide if they are going to tell a lie or tell the truth. If they choose to tell
the truth, a scribe writes the responses to the worksheet correctly on the Smart Pal
or Team Board. If they decide to tell a lie, the scribe writes part of their response
incorrectly on the Smart Pal or team board. Teacher selects one person from team A to
be The Presenter. The Presenter stands at the front of the room, shows the team board
and explains what was done (either telling the truth, or telling a lie about the
problem). The class is given a little time to discuss the response in small groups.
The Presenter then chooses one person from Team B to be the Lie Detector and to tell
whether they believe the explanation was truth or lie. If The Lie Detector thinks the
explanation was a lie, he/she has to explain where the lie occurred and correct it.
If the Lie Detector is correct, Team B gets a point. If not, The Presenter tells
whether their explanation was the truth or a lie. If it was a lie, The Presenter
tells why. The game proceeds with each team taking a turn to be Presenter and Lie Detector.
Journal: Have students take the role of an expert explaining to a student who had just come
into the class. Students should write a complete explanation for how to answer the
following question using words, drawings and math symbols. An evaluation for math
writings has been attached. Students should do the writing on the back of the evaluation.
Question: You have a great photo that is 3” by 5”. You want to enlarge the area of
the picture using a scale factor of 3. What is the area of the larger picture? Will
it fit on your 9 x 12 scrapbook page?
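One possible worked answer, for the teacher's reference: the enlarged photo measures 3 × 3” by 3 × 5”, that is, 9” by 15”, so its area is (scale factor)² × original area = 3² × 15 in² = 135 in². Because one side of the enlargement is 15” and the scrapbook page is only 9” by 12”, the enlarged photo will not fit on the page.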
Assign text practice as needed.
□ squares.gif
□ man.gif
Assessment Plan:
Performance task, observation of student groups, written response
This lesson plan was created by Linda Bolin.
Utah LessonPlans
Created Date :
May 14 2009 13:30 PM
Relative Standard Error
Behavioral Risk Factor Survey
Survey results are estimates of population values and always contain some error because they are based on samples. Confidence intervals are one tool for assessing the reliability, or precision, of
survey estimates. Another tool for assessing reliability is the relative standard error (RSE) of an estimate. Estimates with large RSEs are considered less reliable than estimates with small RSEs.
How large is "large" when looking at RSE? There is no absolute cutoff point. The Office of Health Informatics follows guidelines used by the National Center for Health Statistics and recommends that estimates with RSEs above 30 percent should be considered unreliable.
How is RSE calculated? Relative standard error is calculated by dividing the standard error of the estimate by the estimate itself, then multiplying that result by 100. Relative standard error is
expressed as a percent of the estimate. For example, if the estimate of cigarette smokers is 20 percent and the standard error of the estimate is 3 percent, the RSE of the estimate = (3/20) * 100, or
15 percent.
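A minimal sketch of the same calculation in code (the function name and numbers are illustrative only):

```python
def relative_standard_error(estimate, standard_error):
    """Relative standard error, expressed as a percent of the estimate."""
    return (standard_error / estimate) * 100.0

# Example from the text: an estimate of 20 percent with a standard error of 3 percent
rse = relative_standard_error(20.0, 3.0)
print(rse)          # 15.0
print(rse > 30.0)   # False, so the estimate would not be flagged as unreliable
```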
How should RSE be applied to the estimates produced in this module? RSE is particularly helpful where the confidence interval is quite large; for example, 8%-9% or larger. In such a case, the
reliability of the estimate would be suspect in the absence of additional information; however, if RSE does not exceed 30%, the estimate may still be considered reliable.
Last Revised: March 07, 2013
|
{"url":"http://www.dhs.wisconsin.gov/wish/main/BRFS/rse.htm","timestamp":"2014-04-18T05:33:32Z","content_type":null,"content_length":"10421","record_id":"<urn:uuid:f708b39a-a5d9-465f-bcfa-460296f0bbcd>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00658-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Resolution of a free lie algebra as a module over its universal enveloping algebra.
Let $L=L(V)$ be a free Lie algebra on a vector space $V$ and $A=T(V)$ the tensor algebra. Make $L$ into a module over $A$ consistent with the formula $a\cdot \alpha=[a,\alpha]$ for $a\in V$ and $\
alpha\in L$.
What is a canonical resolution of $L$ by free $A$ modules? I'm really most interested in the case where there is a grading and a differential.
After thinking I realize there is the bar construction $B(A,A,L)$. Is there anything smaller in this special case?
lie-algebras differentials
2 Answers
There is a very small resolution. Everything is graded so we can in fact speak of minimal resolutions. The most refined version is to grade everything by the basis itself (which I for simplicity assume is finite of cardinality $n$), so that everything becomes $\mathbb N^n$-graded. (Assume that the generators are $X_1,\dots,X_n$.) Then the $X_i$ form a minimal set of generators of
$L$ as a $T$-module and the first step of a minimal resolution has the form $\bigoplus_iT[-e_i]\to L \to 0$ (where $e_i$ is the standard basis of $\mathbb N^n$). Now $T$ has global dimension
$1$ so that the kernel $K$ is projective and being graded is free graded. A quick look doesn't reveal any elements in the kernel but in principle we can tell in which degrees the generators
are using multi-degree Hilbert series:
We have that the Hilbert series $H_T$ of $T$ is $1/(1-\sigma_1)$, where $\sigma_1=\sum_ix_i$, and if the Hilbert series $L_T$ of $L$ is $\sum_{\alpha}a_\alpha x^\alpha$, then by PBW we have $1/(1-\sigma_1)=\prod_\alpha(1-x^\alpha)^{-a_\alpha}$, which determines the coefficients $a_\alpha$ (done explicitly using Möbius inversion, see for instance Bourbaki, Lie algebras, Chap. II). If $p$
is the generating series for the basis of $K$ we get $\sigma_1H_T=pH_T+L_T$ which determines $p$.
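(Editorial aside, not part of the original answer: as a quick numeric sanity check of the single-graded specialization of this identity, where all $n$ generators sit in degree one, Witt's formula and its Möbius-inverted form can be verified directly. The sketch below assumes exactly that setting.)

```python
def mobius(k):
    # Moebius function by trial division (fine for small k)
    result, p = 1, 2
    while p * p <= k:
        if k % p == 0:
            k //= p
            if k % p == 0:
                return 0
            result = -result
        p += 1
    if k > 1:
        result = -result
    return result

def divisors(m):
    return [d for d in range(1, m + 1) if m % d == 0]

def witt(m, n):
    # dimension of the degree-m piece of the free Lie algebra on n generators
    return sum(mobius(d) * n ** (m // d) for d in divisors(m)) // m

n = 2
for m in range(1, 9):
    # Moebius inversion: sum over d | m of d * dim L_d equals n^m, the dimension
    # of the degree-m piece of the tensor algebra, as PBW predicts
    assert sum(d * witt(d, n) for d in divisors(m)) == n ** m
    print(m, witt(m, n))
```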
In any case this only gives information on a possible resolution (even though my guess is people have done it otherwise you could perhaps guess what happens using the above to see where the
first relations would appear).
[Added later]
As penance for failing to find the first very simple relation I computed some terms of the generating series of the basis. Notice that it is clearly a symmetric function in the $x_i$ and in
fact we have a natural $GL(V)$-action on the span of a basis so that this symmetric function is in fact a character of $GL(V)$. We have the following:
In degree $2$ we have $\sigma_1^2-\sigma_2$ (not $\sigma_2$ as I claimed in a comment). This is the character of $S^2V$ and in fact the relations $X_ie_j+X_je_i$ account for that part.
In degree $3$ we have $\sigma_3$ which is the character of $\Lambda^3V$.
In degree $4$ we have $\sigma_2^2-\sigma_1 \sigma_3$.
I have put the Mathematica notebook with the calculations here.
[Added later still]
I did some calculations of the representations involved. Using the parametrisation of irreducible representations in terms of partitions I get the following (to possibly avoid confusion as
to whether I use a partition or its dual, $(n)$ is $S^nV$ and $(1^n)$ is $\Lambda^nV$):
2: $(2)$
3: $(1^3)$
4: $(2^2)$
5: $(3, 1^2)$
6: $(5,1) + 2(4,1^2) + 3(3,2,1) + (2^3) + 2(3,1^3) + (2^21^2) + (2,1^4)$
Not a very discernible pattern. (I don't think there is any reason why there should be one.) A different way (and in some sense more concrete) of thinking about the problem is to use
polynomial functors. Hence we have that the map $T(V)\bigotimes V \to L(V)$ is functorial in $V$. Macdonald's theory of polynomial functors (the consequence I'll be using can easily be proved directly) gives that everything is determined by what the map does on monomials where all the variables are distinct. Rather than going through the details of this let me give the concrete calculations in low degrees:
In degree $2$ we have the element $x\cdot y+y\cdot x$ in the kernel (I use the dot to distinguish the second factor in $T(V)\bigotimes V$) as $x\cdot y$ maps to $[x,y]$ and $y\cdot x$ to $
[y,x]=-[x,y]$. This implies that the kernel in general is spanned by tensors of the form $u\cdot v+v\cdot u$ for $u,v\in V$. Furthermore, the action of the symmetric group $\Sigma_2$ on $x\
cdot y+y\cdot x$ is trivial and then spans the trivial representation which under the Weyl-Macdonald equivalence corresponds to $S^2V$ so that the kernel has that form.
In degree $3$ we have the elements coming from degree $2$ which are $z(x\cdot y+y\cdot x)$, $y(x\cdot z+z\cdot x)$ and $x(y\cdot z+z\cdot y)$. We also have $xy\cdot z+zx\cdot y+yz\cdot x$
coming from the Jacobi identity. To see that they span the whole kernel it is convenient to compute in $L(V)$ using a Hall basis (see Bourbaki again). With the appropriate choice of ordering
on monomials (which is part of the building up of a Hall basis) we have a Hall basis for the monomials for which each variable occurs only once which is $[z,[x,y]]$ and $[y,[x,z]]$ and then
the different monomials map as follows to linear combinations of Hall monomials:
$xy\cdot z \mapsto [ y,[ x,z] ] -[ z,[ x,y]] $
$xz\cdot y \mapsto [ z,[ x,y] ] -[y,[ x,z] ]$
$ yx\cdot z \mapsto [ y,[ x,z] ]$
$ yz\cdot x \mapsto -[ y,[ x,z] ]$
$ zx\cdot y \mapsto [ z,[ x,y] ]$
$ zy\cdot x \mapsto -[ z,[ x,y] ] $
This shows that the kernel is indeed spanned by the relations we have found. Furthermore, we see that $xy\cdot z+zx\cdot y+yz\cdot x$ modulo the others is antisymmetric showing that the new
basis elements give a $\Lambda^3V$ (as the signature representations correspond to the exterior power). Note that the space spanned by $xy\cdot z+zx\cdot y+yz\cdot x$ is not $\
Sigma_3$-invariant so does not give an embedding of $\Lambda^3V$ in the kernel. However, anti-symmetrising it gives $xy\cdot z+zx\cdot y+yz\cdot x-yx\cdot z-zy\cdot x-xz\cdot y$ and hence
the space spanned by $uv\cdot w+wu\cdot v+vw\cdot u-vu\cdot w-wv\cdot u-uw\cdot v$ for $u,v,w\in V$ spans a $\Lambda^3V$.
This can be continued but I have not done so. Note, from the point of view of operads the Lie operad is the quotient by a free operad using the anti-symmetry and Jacobi relations. This may
make it somewhat confusing as to why we get more basis elements in degrees higher than $3$. The reason is that from the current point of view we allow ourselves only to generate new
relations (which are not part of the basis) by commuting with arbitrary elements whereas in the operadic description we also allow the substitution arbitrary monomials in old relations: For
instance, we get from $[x,y]=-[y,x]$ that $[[x,y],[z,w]]=-[[z,w],[x,y]]$.
I think you get a relation from $[X_1,X_2]=-[X_2,X_1]$ so that $X_1\cdot e_2=-X_2\cdot e_1$. – Don Stanley Mar 1 '10 at 14:07
Thanks this seems to be helpful and close to what I actually need. – Don Stanley Mar 1 '10 at 19:45
Silly me! I tried to get a relation from the Jacobi identity but it turned out to be a relation in the enveloping algebra, didn't occur to me to use anti-symmetry... In any case, the
generating series $p$ for the basis is a symmetric function and this shows that $\sigma_2$ appears in it. – Torsten Ekedahl Mar 2 '10 at 5:15
There is a very short resolution of $T(V)$ as a $T(V)$-bimodule, $$0\to T(V)\otimes V\otimes T(V)\to T(V)\otimes T(V)\twoheadrightarrow T(V)$$ with the first map being the unique map of $T(V)$-bimodules such that $1\otimes v\otimes 1\in T(V)\otimes V\otimes T(V)$ maps to $v\otimes 1-1\otimes v\in T(V)\otimes T(V)$, and the second one given simply by the product on $T(V)$. Now the Lie algebra $L(V)$ is a $T(V)$-module (on the left, say), so we can apply the functor $(\mathord-)\otimes_{T(V)}L(V)$ to our complex, getting, up to standard identifications, $$0\to T(V)\otimes V\otimes L(V)\to T(V)\otimes L(V)\twoheadrightarrow L(V),$$ with induced maps. This is a $T(V)$-projective resolution of $L(V)$.

This is a graded resolution, if you want to consider the natural grading on $T(V)$, $L(V)$ and the induced gradings on the complex. `Handling a differential' depends on what you mean by that.
This is great. Thanks a lot. I need to think about it for a bit. – Don Stanley Mar 1 '10 at 13:39
For the differential I mean that $T(V)$ is a differential graded algebra and $L(V)$ is a differential graded module over it. Then it would be great to have $L(V)$ as the cone on a map in
the category of $T(V)$ modules (as in your resolution), but I guess that's not possible. – Don Stanley Mar 1 '10 at 13:53
Slightly less good would be to have a $T(V)$ dg-module quasi-isomorphic to $L(V)$ that was free as a $T(V)$ module (like the bar construction). – Don Stanley Mar 1 '10 at 13:58
You can take the total complex of the resolution I constructed above: that is a free $T(V)$-module with a differential. The map induced by my map $T(V)\otimes L(V)\to L(V)$ is then a
quasi-iso. – Mariano Suárez-Alvarez♦ Mar 1 '10 at 14:05
What is the differential on the $T(V)\otimes V\otimes T(V)$ (or $T(V)\otimes V\otimes L(V)$) term? – Don Stanley Mar 1 '10 at 14:13
|
{"url":"http://mathoverflow.net/questions/16750/resolution-of-a-free-lie-algebra-as-a-module-over-its-universal-enveloping-algeb","timestamp":"2014-04-21T04:36:37Z","content_type":null,"content_length":"70764","record_id":"<urn:uuid:5db27335-611c-482b-9c8c-2890d3845e87>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00206-ip-10-147-4-33.ec2.internal.warc.gz"}
|
vertex, focus, and directrix of parabola
May 2nd 2009, 05:07 PM
<Any Help would be greatly appreciated>
This stuff confuses me
2. Find the vertex, focus, and directrix of the parabola: (x+5)+(y-1)²=0
3. Find the vertex, focus, and directrix of the parabola: x²-2x+8y+9=0
Last edited by mr fantastic; May 5th 2009 at 06:40 AM. Reason: Questions moved from original thread.
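Not a full solution, but here is a sketch of the standard approach (completing the square) for problem 3; problem 2 works the same way with the roles of x and y swapped.

x² - 2x + 8y + 9 = 0
x² - 2x + 1 = -8y - 8
(x - 1)² = -8(y + 1)

Comparing with the standard form (x - h)² = 4p(y - k) gives h = 1, k = -1 and 4p = -8, so p = -2. The vertex is (1, -1), the focus is (h, k + p) = (1, -3), and the directrix is the line y = k - p = 1.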
|
{"url":"http://mathhelpforum.com/pre-calculus/87590-vertex-focus-directrix-parabola.html","timestamp":"2014-04-19T23:33:56Z","content_type":null,"content_length":"29195","record_id":"<urn:uuid:b026ea44-ce3d-475d-b213-6e2e956ff589>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00530-ip-10-147-4-33.ec2.internal.warc.gz"}
|
3 Statistics and Data Analysis The National Highway Traffic Safety
Administration’s (NHTSA’s) final rule regarding consumer information on rollover resistance (Federal Register 2001) notes that “the effect of SSF [static stability factor] must be shown to have a
significant influence on the outcome of actual crashes (rollover vs. no rollover) to be worth using for consumer information.” To this end, the agency undertook a statistical study to investigate the
relationship between measured values of SSF for a range of vehicles and corresponding rollover rates determined from real-world crash data (Federal Register 2000). The agency subsequently conducted
further statistical analyses in response to public comment on the first study (Federal Register 2001). As noted in Chapter 1, rollover crashes are complex events influenced by driver characteristics,
the driving environment, and the vehicle and the interaction among the three. Therefore, one of the challenges in analyzing rollover crash data is to isolate the effect of a particular variable—such
as SSF—from that of other variables. Differences in rollover risk1 due to how, when, where, and by whom a vehicle is operated complicate comparisons of the rollover risk of different vehicles.
NHTSA’s analyses of crash data involved the use of binary-response models. A binary-response model is a regression model in which the dependent variable (outcome of the crash) is binary (“rollover”
or “no rollover”). NHTSA used such models to study the effect of various explanatory variables—such as driver characteristics, environmental conditions, and vehicle metrics—on the probability of
rollover. According to NHTSA, the results of its statistical analyses reveal that, in the event of a single-vehicle crash, the effect2 of SSF on the probability of rollover is highly important, even
when driver characteristics—such as age—and environmental characteristics—such as road and weather conditions—contribute to the crash. This statistical correlation between SSF and the probability of
rollover is the foundation for NHTSA’s star rating system for rollover resistance and for the one- to five-star ratings assigned to different vehicles. 1 For consistency with NHTSA’s analyses,
rollover risk is defined as the probability of rollover in the event of a single-vehicle crash. In the present context, rollover risk does not predict the likelihood of a single-vehicle crash or the
type or severity of injuries expected. 2 The analyses described in this chapter demonstrate that there is a causal relationship between SSF and the probability of rollover, but it is difficult to
isolate the causal effect from confounded effects using data on past crashes. Confounding occurs if variables that are correlated with both SSF and the probability of rollover are omitted.
This chapter presents the committee’s review of the statistical
analyses that form the basis for NHTSA’s rollover resistance rating system. The results of additional statistical analyses performed by the agency at the committee’s request are also discussed. The
purpose of all these analyses was to investigate what crash data indicate about the effect of SSF on a vehicle’s propensity to roll over. The chapter begins with a review of the available sources of
crash data and a description of the data selected by NHTSA for use in its analyses. Some basic statistical ideas and the notation used in the chapter are then presented. The next section describes
the binary-response models used by NHTSA in constructing a rating system. The influence of the driver and driving environment on the probability of rollover is then examined in depth, and a
preliminary estimate of a nonparametric version of the binary-response model for rollovers is presented. Next, the potential—from a statistical perspective—of the binary-response models used by NHTSA
to provide practical, useful information to the public is examined. The chapter concludes with a summary of the committee’s findings and recommendations in the area of statistics and data analysis.
ROLLOVER CRASH DATA This section begins with a brief overview of the major sources of data available to NHTSA for the purposes of its statistical analysis of rollover crashes. The rationale behind
the agency’s choice of crash data is then reviewed, with particular emphasis on the selection of data from six states for use in constructing the rollover resistance rating system. Crash Data Files
Four major databases maintained by NHTSA have the potential to support evaluation of rollover collisions, including rollover rates: State Data System (SDS); Fatality Analysis Reporting System (FARS);
General Estimates System (GES); and Crashworthiness Data System (CDS). Table 3-1 summarizes the key features of these databases. All four include some information on rollover crashes. As indicated in
the table, however, there are variations in the numbers of rollovers reported and in the level of detail provided about each crash (e.g., extent of injuries or information on crash site).
TABLE 3-1 Features of NHTSA’s Major Crash Databases Database Key
Features Data on Rollover Crashes State Data System (SDS) Contains police-reported crash data collected in 17 states Crash files developed and maintained by responsible agency in each state Large
amount of rollover crash data available Need to be aware of state-to-state differences in road characteristics, driver use patterns, and reporting practices Fatality Analysis Reporting System (FARS)
Data for all fatal crashes in the country occurring on public roads Data obtained from police crash reports, driver licensing files, vehicle registration files, hospital records, and other sources
Used to generate NHTSA’s annual publication Traffic Safety Facts Moderate number of rollover crashes Restriction to fatal crashes limits use for examining propensity of vehicles to roll over under
the full range of possible collision types General Estimates System (GES) Part of National Automotive Sampling System (NASS); became operational in 1988 Approximately 50,000 crashes included annually
Data acquired from sample of police-reported crashes in 400 jurisdictions within 60 areas across the United States Can be used to produce national estimates of crash-related safety problems at all
levels of injury severity, from property-damage–only to fatal System relies on sampling, so number of rollover crashes is relatively small compared with datasets within SDS database Estimates of
rollover rates, injury severity, and other characteristics associated with rollover crashes should provide reasonable national estimates of the problem, provided the sampling is not biased
Crashworthiness Data System (CDS) Part of NASS Includes detailed postcrash data collected by trained investigators 4,000–5,000 crashes included annually, selected randomly from a sample of national
jurisdictions; includes all levels of injury severity Data acquisition includes detailed review of crash site, examination of vehicle(s) involved, review of medical records of injured, and interviews
with crash victims Expensive to develop Contains most-detailed crash data available in any national file, including an entire subset of variables associated with rollover Does not contain sufficient
numbers of rollover crashes to be useful for modeling analysis Used by NHTSA to assess relative frequencies of “investigator defined” tripped and untripped rollovers Rationale Behind NHTSA’s
Selection of Data The crash data used by NHTSA to develop statistical models are derived from police crash reports and form part of the SDS. The decision to use data from specific states within the
SDS was driven largely by the desire to have a robust data set for the analysis. The GES and CDS were judged inappropriate because the numbers of rollover crashes reported in these databases are
relatively small. And although the FARS database includes a moderate number of rollovers, the restriction to fatal crashes limits the range of crash scenarios represented in which a vehicle may
overturn. Although the police-reported data in the SDS are the most important in understanding NHTSA’s modeling efforts, the three other databases man-
aged by NHTSA and listed in Table 3-1 are often referenced in reports
and documents addressing the rollover crash problem. All three of these databases were considered by NHTSA in the process of identifying appropriate data for statistical modeling. Thus although the
FARS, GES, and CDS databases were deemed inadequate, they were useful in informing NHTSA’s analyses. Importance of Single-Vehicle Crashes NHTSA’s analyses used SDS crash data relating to
single-vehicle events only (see below). Indeed, although FARS and GES data were not used to derive the statistical correlation between rollover rates and SSF, these data highlight the preponderance
of rollover-related deaths and injuries associated with single-vehicle crashes. For example: Analysis of 1999 FARS data shows that 82 percent of light-vehicle rollover fatalities were associated with
single-vehicle crashes. According to 1999 FARS data, rollover accounted for 55 percent of all occupant fatalities for single-vehicle crashes involving light vehicles. GES data for the period
1995–1999 indicate that, on average, 241,000 light vehicles rolled over each year nationwide. Of this total, 205,000 (85 percent) were single-vehicle events that resulted in 46,000 severe
(incapacitating) or fatal injuries. Tripped Versus Untripped Rollover The CDS database—part of the National Automotive Sampling System (NASS)—identifies many different categories of rollover,
including “tripover” (also known as tripped rollover) and “turn-over” (also known as untripped rollover).3 The different rollover types coded in the CDS database are determined primarily from crash
scene and vehicle inspections, with additional evidence derived from photographs, police reports, and interviews with drivers and others.4 It is widely acknowledged that the interpretation of crash
scene evidence can be problematic, with resulting uncertainties in distinguishing between tripped and untripped rollovers. In 1998, the coding of a number of crashes in the CDS database for the
period 1992–1996 was revisited, and revisions were made. In particular, many of crashes originally coded as untripped were recoded as tripped (NHTSA 1999). NHTSA has sought to demonstrate that the
vast majority of single-vehicle passenger-vehicle rollovers 3 See Chapter 2 for definitions of tripped and untripped rollover. 4 “Collection of NASS CDS Data Relating to Rollover,” presentation to
the Committee for the Study of a Motor Vehicle Rollover Rating System by Robert Woodill (Veridian Engineering) and John Brophy (NHTSA), Washington, D.C., May 29, 2001.
are tripped. According to the agency’s analysis of CDS data for the
period 1992 through 1996, more than 95 percent of single-vehicle crashes involving rollover were tripped (NHTSA 1999; Federal Register 2001). As noted in Chapter 2, it is the magnitude and duration
of the forces on a vehicle—rather than the tripping mechanism—that determine whether rollover occurs. In light of this observation, as well as the practical difficulties involved in distinguishing
the two categories of rollover, tripped and untripped rollovers are not addressed separately in the present discussion. Moreover, the SDS data used by NHTSA in developing its rollover probability
model and subsequent rating system do not distinguish between the two types of rollovers, and crash data for both types were included in the agency’s statistical analyses. State Selection NHTSA’s
analyses were based on SDS data for specific states, selected on the basis of the following criteria (Federal Register 2000): The state had to participate in the SDS and must have provided data for
1997.5 The vehicle identification number (VIN) had to be included in the electronic file. The file had to include a variable indicating whether a rollover occurred as either the first or a subsequent
event in the crash. NHTSA selected six states for modeling: Florida, Maryland, Missouri, North Carolina, Pennsylvania, and Utah. The corresponding single-vehicle crash data were used in the modeling
analysis that resulted in the curve used to establish the star rating values for individual vehicle models. Data from New Mexico and Ohio were also used for some of the supporting analyses, but were
not included in the modeling efforts because of differences in crash reporting practices. Single-vehicle crashes served as the exposure measure for assessing the relative magnitude of the rollover
problem (i.e., number of rollover events or number of single-vehicle crashes). The crashes included in the analysis were single-vehicle collisions for all light vehicles (less than 10,000 pounds
gross vehicle weight) between 1994 and 1998 (see Table 3-2). Such crashes were defined as not involving another motor vehicle, pedestrian, bicyclist, animal, or train. Special classes of vehicles
were also excluded from the analysis, notably emergency vehicles (e.g., fire, ambulance, police, or military), parked vehicles, and vehicles pulling a trailer. The total number of single-vehicle
crashes initially included in the dataset was 227,194. 5 The second analysis conducted by NHTSA in response to public comment also included 1998 data.
TABLE 3-2 Single-Vehicle Crash Frequencies for Six States Included in Modeling Analysis (Calendar Year of Data)

State            1994    1995    1996    1997    1998     Total
Florida         6,174   8,295   9,552  10,766  10,832    45,619
Maryland        3,795   4,296   5,079   4,957   4,974    23,101
Missouri        6,001   7,464   8,988   8,957   9,620    41,030
North Carolina  8,555  10,674  12,880  13,609  12,866    58,584
Pennsylvania    9,303  11,143  13,530  14,885       a    48,861
Utah            1,499   1,731   1,955   2,338   2,476     9,999
Total          35,327  43,603  51,984  55,512  40,768   227,194

a 1998 data for Pennsylvania were not used because they did not contain curve and grade variables.
SOURCE: Federal Register 2001.

NHTSA identified 100 different vehicle make and model
combinations, each with a unique SSF (see Federal Register 2001, Appendix I). All the 227,194 single-vehicle crashes in the dataset involved vehicles with VINs that matched one of the 100 groups.
However, any of the 100 make and model groups for which there were fewer than 25 crashes were excluded from the analysis. The final dataset used for analysis comprised 226,117 crashes in 87 make and
model groups. Of these crashes, 45,574 (20.16 percent) resulted in rollover. In light of NHTSA’s responsibilities for establishing national policy and providing information relevant at the national
level, it is important that the rollover crash data used to derive consumer information be representative of all states. Hence, the agency undertook an additional effort that involved using the GES
database to determine whether the rollover rate for a national sample of single-vehicle crashes was similar to the rate for the six states included in the original analysis. Using GES data for 1994
through 1998 (the same years as the SDS data), a total of 9,910 vehicles were identified that (1) had VINs that placed them in the group of 100 make/model categories, and (2) were involved in
single-vehicle crashes. Of these vehicles, 2,377 rolled over. After applying the appropriate weighting factors to account for the GES sampling scheme, NHTSA obtained national estimates for
single-vehicle crashes and subsequent rollover crashes of 1,185,474 and 236,335, respectively. The resulting rollover rate was 19.94 percent—essentially the same as the rate of 20.16 percent derived
for the six states used in the modeling analysis. BACKGROUND AND NOTATION This section reviews some basic statistical ideas relevant to the present discussion, together with the notation used in
statistical analyses of rollover crash data. As stated earlier, NHTSA’s rollover resistance rating system is
based on a binary-response model of rollover events. The dependent
variable and explanatory variables of the model are first described, and the specification of the relation between the dependent variable and explanatory variables is then discussed. Finally, the
concept of the rollover curve—the basis for NHTSA’s rating system—is introduced, and two interpretations of this curve are presented. Dependent and Explanatory Variables The binary-response model for
rollovers states that the probability of rollover, given that a single-vehicle crash has occurred, is a certain function of selected explanatory variables. Let Y denote the dependent variable in a
binary-response model of rollovers. This variable Y is equal to 1 if there is a rollover and 0 otherwise. Thus, the probability of a rollover is the probability that Y = 1. This probability depends
on the values of the explanatory variables incorporated in the model. The commonly used explanatory variables include driver characteristics, environmental variables, and vehicle metrics. An example
of a driver variable is YOUNG (Z1), where Z1 = 1 if the driver is under 25 years old and 0 otherwise. An example of a road condition variable is CURVE (Z2), where Z2 = 1 if the crash occurred on a
curve area and 0 otherwise. An example of a vehicle metric is SSF, which is denoted by X. The explanatory variables are typically divided into two groups: the vehicle metrics are in one group and the
driver characteristics and environmental variables in the other. This latter group defines what is called a scenario. Let Z denote the array of driver and environmental variables. To simplify the
exposition, suppose a scenario is defined by one driver variable and one environmental variable, unless noted otherwise. In this case, Z has only two components: Z = (Z1, Z2). If a scenario is
defined by the variables YOUNG and CURVE, there are four possible scenarios: (0,0), (0, 1), (1,0), (1,1). For example, the scenario (1,1) describes the case of a single-vehicle accident involving a
young driver on a curve. SSF is the only vehicle metric used by NHTSA for the purpose of constructing a rating system. However, because driver and environmental variables also may be important in
determining rollover risk, variables in these other categories were considered as well. These variables are explained in Table 3-3. The criterion for the selection of the driver and environmental
variables was the availability of appropriate data both within the GES and for the six SDS states used in NHTSA’s analysis. The variables ultimately considered in the models were DARK, STORM, FAST,
HILL, CURVE, BADSURF, MALE, YOUNG, OLD, and DRINK (see Table 3-3). NHTSA also included the six SDS states as explanatory variables. An example of a state variable is S1, say, where S1 = 1 if Florida
is the state in which the single-vehicle crash occurred and 0 otherwise. The need for these state-
TABLE 3-3 Variables Available for Inclusion in NHTSA’s SSF-Rollover
Rate Model Variable Definition ROLLa Proportion of single-vehicle crashes that involved a rollover SSFa Numeric value of static stability factor DARKa Proportion of single-vehicle crashes that
occurred during darkness STORMb Proportion of single-vehicle crashes that occurred during inclement weather RURAL Proportion of single-vehicle crashes that occurred in rural areas FASTb Proportion of
single-vehicle crashes that occurred on roadways where the speed limit was 50 mph or greater HILLb Proportion of single-vehicle crashes that occurred on a grade, at a summit, or at a dip CURVEb
Proportion of single-vehicle crashes that occurred on a curve BADROAD Proportion of single-vehicle crashes that occurred on roads with potholes or other bad road conditions BADSURFa Proportion of
single-vehicle crashes that occurred on wet, icy, or other bad surface conditions MALEb Proportion of single-vehicle crashes involving a male driver YOUNGb Proportion of single-vehicle crashes
involving a driver under 25 years old OLDb Proportion of single-vehicle crashes involving a driver age 70 or older NOINSURE Proportion of single-vehicle crashes involving an uninsured driver DRINKb
Proportion of single-vehicle crashes involving a driver who was drinking or using illegal drugs NUMOCC Average number of vehicle occupants a Variable included in models. b Environmental or driver
variable found statistically significant in models. SOURCES: Federal Register 2000, 2001 (Table 7). based variables is explained by the known differences among states in crash reporting practices
(see Federal Register 2001, Table 5), roadway characteristics, driver demographics and vehicle usage patterns, and other such factors. As discussed in Chapter 2, physics indicates that SSF is an
indicator of a vehicle’s rollover propensity. The purpose of the statistical analysis is to investigate what the crash data indicate about the effect of SSF on a vehicle’s propensity to roll over and
whether the magnitude of this effect depends on driver and environmental variables. The example of a double-decker bus illustrates the complexities involved in interpreting the results of such crash
data analyses. The double-decker bus has a low SSF. This fact does not automatically imply that accident data for the double-decker bus will show that SSF is strongly correlated with the incidence of
rollover, because the accident history depends on the bus driver and the driving conditions as well as on SSF. If a double-decker bus is normally driven by a professional driver in an urban area, the
number of accidents is likely to be low, and in the accidents that do occur, there are likely to be relatively few rollovers. This example illustrates that the scenario can attenuate the observed
effect of SSF. At the same time, however, the accident history in this example does not negate the fundamental physics of rollover. Thus SSF remains important in determining a vehicle’s
rollover propensity, as discussed in Chapter 2, although its influence
is not clearly manifested in the crash data because the double-decker bus is rarely involved in higher-risk scenarios, and these vehicles experience relatively few rollovers. Functional Forms The
statistical problem is to estimate the probability that Y = 1 (i.e., the probability of a rollover), considered as a function of the explanatory variables. For this purpose, the conventional approach
is to specify what is called a parametric binary-response model. In this approach, the form of the relation between the probability that Y = 1 and the explanatory variables is assumed known, while
the values of certain parameters in the relationship are to be determined. Linear regression analysis is a well-known example of this approach. In linear regression analysis, the relation between the
dependent and explanatory variables is assumed to be linear, but the values of the coefficients in the linear relation are assumed to be unknown. In the case of a binary-response model, the relation
between the probability that Y = 1 and the explanatory variables is generally assumed to be nonlinear. Following the parametric approach, suppose that the true probability that Y = 1 given that Z = z
and X = x is F(α0 + α1z1 + α2z2 + βx) (1), where the function F specifies the relation between the probability that Y = 1 and the explanatory variables. The assumption is that the functional form F is known and that the
values of the parameters α0, α1, α2, and β are unknown. The typical assumption is that F is a cumulative distribution function. The commonly used distribution functions are smooth S-shaped curves.
The most widely used binary-response models are logit and probit models. A binary-response model is referred to as a logit model if F is the cumulative logistic distribution function and as a probit
model if F is the cumulative normal distribution function. NHTSA employed a logit model in its statistical analysis of rollover crash data. Generally, both types of models produce highly similar
statistical results because the logistic and normal distributions are both symmetrical around zero and have very similar shapes, except that the logistic distribution has fatter tails. The problem is
to estimate the unknown parameters. The parameters of logit and probit models are typically estimated by maximum likelihood, and this is the estimation method used by NHTSA for its logit model. The
maximum-likelihood estimator has good properties in large samples.6 In par- 6 One criterion for judging whether a sample is large is to determine whether large sample approximations work. Such
approximations are assumed to work for the sample sizes used in the present analyses.
ticular, it is asymptotically efficient; that is, it is the precise
estimator in large samples. Rollover Curve and Interpretations The rating system proposed by NHTSA is based on SSF. Suppose that the (true) probability that Y = 1 given that X = x is G(β0 + β1x) (2), where the
functional form G is known, and the parameters β0 and β1 are unknown. This model gives the relation between the probability that Y = 1 and X. This relation is called the rollover curve. The physics
of rollover strongly suggests that the rollover curve is downward sloping; that is, the probability that Y = 1 decreases as SSF increases. The rollover curve has two interpretations, depending on how
the model G(β0 +β1x) is derived. In one interpretation, the rollover curve gives the average of the rollover probability for each value of SSF, where the average is taken over the scenarios. In this
case, the rollover curve can be estimated using data on only one explanatory variable, namely SSF. In the other interpretation, the rollover curve gives the rollover probability for the average
scenario. In this case, data on driver and environmental variables, as well as SSF, are used in estimating the curve. Either approach can be used to estimate the rollover curve, although the two
approaches yield different results (see Box 3-1). NHTSA has employed the second approach extensively in estimating the rollover curve. This is a reasonable choice provided the average scenario is an
empirically relevant baseline for comparing vehicles. STATISTICAL MODELS NHTSA’s initial analysis of single-vehicle crash data was based on an exponential model—a type of model that is little used in
the statistical literature. The current rating system for rollover resistance was constructed using an estimated rollover curve also based on an exponential model. The uncertainties associated with
this estimated rollover curve were not considered in deriving the star rating categories. Subsequently, NHTSA conducted further analyses using a logit model, which, as noted earlier, is a widely used
type of binary-response model. The results obtained using the logit model are presented below. Following a brief discussion of issues related to uncertainty in estimating statistical models, this
section describes exponential and logit parametric binary-response models. The rollover curves and associated confidence intervals obtained by NHTSA in its analyses are then considered.
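As a rough illustration of this type of model (a sketch on synthetic data, not NHTSA's dataset or code; the coefficients and variable names below are invented):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 50_000

# Synthetic single-vehicle crashes: SSF plus two binary scenario variables
ssf = rng.uniform(1.0, 1.5, n)
fast = rng.integers(0, 2, n)       # speed limit of 50 mph or greater
curve = rng.integers(0, 2, n)      # crash occurred on a curve

# Assumed "true" coefficients, chosen only so that rollover risk falls as SSF rises
lin = 6.0 - 6.0 * ssf + 0.8 * fast + 0.6 * curve
rollover = rng.binomial(1, 1.0 / (1.0 + np.exp(-lin)))

# Maximum-likelihood fit of the parametric binary-response (logit) model
X = sm.add_constant(np.column_stack([ssf, fast, curve]))
fit = sm.Logit(rollover, X).fit(disp=0)
print(fit.params)   # estimates should land roughly near (6.0, -6.0, 0.8, 0.6)
```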
BOX 3-1 Two Interpretations of the Rollover Curve Taking the average of the rollover probability for each value of SSF, where the average is taken over the scenarios, the average rollover probability is P(Y = 1|X = x) = E[F(α0 + α1Z1 + α2Z2 + βx) | X = x]. In contrast, the rollover probability for the average scenario is P*(Y = 1|X = x) = F(α0 + α1z̄1 + α2z̄2 + βx), where (z̄1, z̄2) is the array of the sample means of the scenario variables. The two formulas for the rollover curve do not produce the same result: The first formula can be written as P(Y = 1|X = x) = Σz P(Z = z|X = x) F(α0 + α1z1 + α2z2 + βx), which says that P(Y = 1|X = x) is a weighted average of functions. The weighted average P(Y = 1|X = x) = G(β0 + β1x) can be estimated using data on only one explanatory variable, namely SSF. The second formula can be expressed as P*(Y = 1|X = x) = F(α0 + α1z̄1 + α2z̄2 + βx), which says that P*(Y = 1|X = x) is a function of the average scenario. In this case, F(α0 + α1z1 + α2z2 + βx) has to be estimated to estimate P*(Y = 1|X = x); that is, the data on driver and environmental variables are also used in the estimation. The reason P(Y = 1|X = x) ≠ P*(Y = 1|X = x) is that the average of the function is not the function of the average when the function is nonlinear. NHTSA has employed the second formula extensively in estimating the rollover curve.
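A small numeric illustration of why the two formulas in Box 3-1 differ (invented coefficients; the only point is that a nonlinear F does not commute with averaging):

```python
import numpy as np

def F(t):
    # logistic cumulative distribution function
    return 1.0 / (1.0 + np.exp(-t))

# Two equally likely scenarios z = 0 and z = 1, with assumed coefficients
a0, a1, beta, x = 4.0, 2.0, -4.0, 1.2
z = np.array([0.0, 1.0])

avg_of_probs = F(a0 + a1 * z + beta * x).mean()   # average of the rollover probabilities
prob_at_avg = F(a0 + a1 * z.mean() + beta * x)    # rollover probability at the average scenario
print(avg_of_probs, prob_at_avg)                  # roughly 0.54 versus 0.55 -- not equal
```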
abilities estimated by maximum likelihood. The same is not the case for
the exponential model. The confidence intervals for the rollover curve based on the logit model are presented below. The committee asked NHTSA to calculate the large-sample 95 percent confidence
intervals for rollover probabilities based on maximum-likelihood estimation of the logit model using ungrouped binary data. The logit model included as explanatory variables SSF, driver and
environmental variables, scenario dummy variables, and five state dummy variables (Missouri, the sixth state, was used as the baseline). The formula for the large-sample confidence interval is
available in the statistical literature (see, for example, Greene 2000, p. 824). The estimated rollover curve and the 95 percent confidence intervals using the data for the six states combined are
shown in Figure 3-2.7 The maximum-likelihood estimates of the parameters of the logit model are reported in Appendix C.8 The first point to note is that an increase in SSF reduces the probability of
rollover. The second point is that the widths of the confidence intervals FIGURE 3-2 Estimated probability of rollover and 95 percent confidence intervals based on maximum-likelihood estimation of a
logit model using data from six states combined (n = 206,822). 7 The confidence intervals calculated by NHTSA were verified by independent review in selected cases. 8 Because the logit model is
nonlinear, the estimated parameters are not proportional to correlation coefficients.
are very narrow—about 0.01 or less for all values of SSF. These
confidence intervals for the rollover probabilities are very narrow because the size of the crash dataset is very large; as discussed above, the widths of the confidence intervals shrink as the size
of the crash dataset increases. The confidence intervals displayed in Figure 3-2 suggest that, from a statistical perspective, it is possible to discriminate meaningfully among the reported rollover
rates for vehicles within a single vehicle class using the logit model. The range of SSF for the four vehicle types used in the analysis is plotted in Figure 3-2 for comparison. These ranges are 1.00
to 1.20 for SUVs, 1.03 to 1.28 for pickup trucks, 1.08 to 1.24 for passenger vans, and 1.29 to 1.53 for passenger cars (Federal Register 2001, 3,412–3,415). SCENARIO EFFECTS In addition to SSF and
the six states, NHTSA included driver and environmental variables as explanatory variables. As discussed earlier, the driver and environmental variables define a scenario. In this section, a scenario
is defined by a unique combination of the following variables: STORM, FAST, HILL, CURVE, MALE, YOUNG, OLD, and DRINK (see Table 3-3 for definitions). Each of these variables takes on the value 1 or
0, that is, “yes” if it is present and “no” otherwise. Thus, a scenario designated “01001000” would indicate a crash that occurred on a roadway where the speed limit was 50 mph or greater (FAST) and
that involved a male driver (MALE). The rollover resistance rating system proposed by NHTSA using an exponential model is based on an “average” rollover curve for an “average” scenario. The average
is a measure of the location of a distribution, but another important feature of a distribution is its variance or dispersion. The greater the variance or dispersion, the less informative is the
average for decision making. Analysis of crash data indicates that, although an increase in SSF reduces the probability of rollover, the rollover curves are different for different scenarios. These
variations suggest that potentially useful information about the occurrence of rollovers is not captured by the average rollover curve. A plausible hypothesis—consistent with the double-decker bus
example discussed earlier—is that the influence of SSF on rollover rates in real-world crashes is more apparent in higher-risk than in lower-risk scenarios. To investigate this hypothesis, the
committee asked NHTSA to estimate rollover curves for specific scenarios using the data from all six states. Six scenarios were selected to represent the range of driver and environmental conditions
found in the database. The eight binary variables listed above define a theoretical total of 192 (or 26 × 3) unique scenarios. [The number of unique scenarios is fewer than the 256 (or 28) possible
combinations of variables because YOUNG and OLD are mutually exclusive.] In fact, only 188 scenarios were encountered in the database for the six states combined. The scenarios can be ordered by the
observed frequency of rollovers: when the fre-
quency is low, the scenarios are said to be low risk, and when the
frequency is high, the scenarios are high risk. The following key percentiles were selected: Low risk (close to minimum)—Scenario 00000010; 25th percentile—Scenario 00001100; Mean—Scenario 11000000;
Median—Scenario 01001000; 75th percentile—Scenario 11101000; and High risk (close to maximum)—Scenario 01011001. For example, using these definitions, the high-risk scenario would be the combination
of the NO STORM, FAST, NO HILL, CURVE, MALE, NOT YOUNG, NOT OLD, and DRINK variables. The logit model was used to estimate the probability of a single-vehicle rollover crash as a function of SSF and
state dummy variables for each of the six scenarios. The average scenario–average state logit model developed to estimate the probability of a single-vehicle rollover crash across all scenarios and
states is shown in Figure 3-2. The estimated rollover curves and their 95 percent confidence intervals for the six selected scenarios, averaged across states, are presented in Figures 3-3 through
3-8. The upper and lower 95 percent confidence limits for the probability of rollover were computed using the formula for asymptotic variance of the estimated probabilities given by Greene (2000,
824). The associated regression results are shown in Appendix C. Figures 3-3 through 3-8 reveal that the estimated rollover curves are indeed different for different scenarios. The curves tend to be
flat for low-risk scenarios, more steeply (negatively) sloped for scenarios with about average risk, and still more steeply (negatively) sloped for high-risk scenarios. Figures 3-3 through 3-8
illustrate that the observed effect on rollover rate of an increase in SSF depends on the scenario. For example, comparison of the rollover curves for low-risk and mean-risk scenarios (Figures 3-3
and 3-5, respectively) reveals some notable differences. For the low-risk scenario, an increase in SSF from 0.95 to 1.20 results in a decrease in rollover probability of about 0.07, whereas a
corresponding increase in SSF for the mean-risk scenario results in a decrease in rollover probability of about 0.20. The estimated reduction in rollover probability for the low-risk scenario is
subject to far greater uncertainty than that for the mean-risk scenario because the associated 95 percent confidence bands are far wider. Thus, Figures 3-3 through 3-8 show that assessment of the
importance of SSF in real-world crashes depends on which scenario is considered. 9 The rollover curve for the low-risk scenario shown in Figure 3-3 has wide confidence bands at low SSF. This is due,
in part, to three effects: the standard errors of the estimated coefficients for the logit model are large (see Appendix C); the “center of gravity” of the curve is at a relatively high value of SSF;
and all calculations are performed on the log scale and then transformed back to the original scale.
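A sketch of one common way to produce confidence bands like these: fit the logit model, form a Wald interval for the linear predictor, and map it through the logistic function. The report cites Greene's asymptotic-variance formula, which is a closely related delta-method computation; the data below are synthetic and the code is illustrative only.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
ssf = rng.uniform(1.0, 1.5, 20_000)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(6.0 - 6.0 * ssf))))   # synthetic rollovers

X = sm.add_constant(ssf)
fit = sm.Logit(y, X).fit(disp=0)
params = np.asarray(fit.params)
cov = np.asarray(fit.cov_params())

# 95 percent band for the predicted rollover probability over a grid of SSF values
grid = np.linspace(1.0, 1.5, 6)
Xg = sm.add_constant(grid)
eta = Xg @ params                                        # linear predictor (log-odds)
se_eta = np.sqrt(np.einsum("ij,jk,ik->i", Xg, cov, Xg))  # standard error of the log-odds
expit = lambda t: 1.0 / (1.0 + np.exp(-t))
for s, lo, p, hi in zip(grid, expit(eta - 1.96 * se_eta), expit(eta), expit(eta + 1.96 * se_eta)):
    print(f"SSF {s:.2f}: {p:.3f} ({lo:.3f}, {hi:.3f})")
```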
FIGURE 3-3 Estimated probability of rollover and 95 percent confidence
intervals based on maximum-likelihood estimation of a logit model using data from six states for low-risk scenario. [NOTE: (STORM, FAST, HILL, CURVE, MALE, YOUNG, OLD, DRINK) = 00000010; 908
observations (0.4 percent of total) and 28 rollovers.] FIGURE 3-4 Estimated probability of rollover and 95 percent confidence intervals based on maximum-likelihood estimation of a logit model using
data from six states for 25th-percentile–risk scenario. [NOTE: (STORM, FAST, HILL, CURVE, MALE, YOUNG, OLD, DRINK) = 00001100; 8,101 observations (3.9 percent of total) and 1,082 rollovers.]
FIGURE 3-5 Estimated probability of rollover and 95 percent confidence
intervals based on maximum-likelihood estimation of a logit model using data from six states for mean-risk scenario. [NOTE: (STORM, FAST, HILL, CURVE, MALE, YOUNG, OLD, DRINK) = 11000000; 3,346
observations (1.6 percent of total) and 694 rollovers.] FIGURE 3-6 Estimated probability of rollover and 95 percent confidence intervals based on maximum-likelihood estimation of a logit model using
data from six states for median-risk scenario. [NOTE: (STORM, FAST, HILL, CURVE, MALE, YOUNG, OLD, DRINK) = 01001000; 9,256 observations (4.5 percent of total) and 2,030 rollovers.]
FIGURE 3-7 Estimated probability of rollover and 95 percent confidence
intervals based on maximum-likelihood estimation of a logit model using data from six states for 75th-percentile–risk scenario. [NOTE: (STORM, FAST, HILL, CURVE, MALE, YOUNG, OLD, DRINK) = 11101000;
2,594 observations (1.3 percent of total) and 677 rollovers.] FIGURE 3-8 Estimated probability of rollover and 95 percent confidence intervals based on maximum-likelihood estimation of a logit model
using data from six states for high-risk scenario. [NOTE: (STORM, FAST, HILL, CURVE, MALE, YOUNG, OLD, DRINK) = 01011001; 1,270 observations (0.6 percent of total) and 537 rollovers.]
NONPARAMETRIC MODEL The confidence intervals calculated for the
rollover curve using the logit model assume that the logit model is correctly specified. If the functional form of a model is incorrectly specified, the analysis based on confidence intervals may be
misleading. The question addressed in this section is whether the logit model provides a satisfactory approximation to the true rollover curve. This amounts to asking whether F (see Equation 1) is
indeed the cumulative distribution function of the logistic distribution or some other function. The true, but unknown, functional form can be estimated using a non-parametric binary-response model—a
model in which the functional form F is not assumed to be known. Hence, it is of interest to compare the estimated logit model with the estimated nonparametric model. The objective of this comparison
is to reveal the extent to which the logistic cumulative distribution function provides a good approximation of the true, but unknown, functional form. Estimation of the nonparametric model is
challenging because it involves estimating the unknown functional form using the data. The non-parametric rollover curve was estimated by kernel regression, a well-known nonparametric estimation
method. This method is discussed briefly by Greene (2000, 844–846); a more detailed exposition is found in Härdle (1990). In this section, the nonparametric estimation is illustrated using the binary
data for Florida only. This nonparametric analysis was performed for illustrative purposes using a subset of the available data. A more extensive analysis using a larger dataset will be required if
the nonparametric model is to be used to obtain a rollover curve that provides information at the national level. Figure 3-9 presents the nonparametric estimate of the rollover curve and uniform 95
percent confidence intervals. This figure shows that an increase in SSF reduces the probability of rollover. The estimated rollover curve based on the logit model appears to be a reasonable
approximation to the nonparametric-based rollover curve using limited data, suggesting that the logit model is a sensible starting point for constructing a rollover rating system. ROLLOVER CURVE AND
STAR RATING SYSTEM NHTSA derived its five star rating categories for rollover resistance from the estimated rollover curve shown in Figure 3-1. Two features of the agency’s approach are of concern:
The lack of accuracy resulting from the representation of a continuous curve by an overly coarse discrete approximation, and The lack of resolution resulting from the choice of breakpoints between
star rating categories.
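To make the resolution point concrete, the sketch below inverts an assumed logit rollover curve to show how equally spaced cutoffs on the probability axis turn into unequal bands on the SSF axis (the coefficients are placeholders, not NHTSA's estimates):

```python
import numpy as np

# Assumed rollover curve P(rollover | SSF = x) = 1 / (1 + exp(-(b0 + b1 * x)))
b0, b1 = 7.0, -6.5   # placeholder values with the right qualitative shape

def ssf_at_probability(p):
    # Invert the curve: b0 + b1 * x = log(p / (1 - p))  =>  x = (log(p / (1 - p)) - b0) / b1
    return (np.log(p / (1.0 - p)) - b0) / b1

cutoffs = [0.10, 0.20, 0.30, 0.40]                 # equal cutoffs on the probability axis
breaks = [ssf_at_probability(p) for p in cutoffs]  # corresponding SSF breakpoints
print([round(b, 3) for b in breaks])               # descending, unevenly spaced SSF values
print([round(w, 3) for w in np.diff(breaks)])      # unequal band widths on the SSF axis
```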
FIGURE 3-9 Nonparametric estimate of probability of rollover using a
quartic kernel with a bandwidth of h = 0.07, n = 37,680. (NOTE: Both the logit and nonparametric curves illustrated in this figure are for Florida only.) These related problems of accuracy and
resolution need to be addressed in developing future consumer information on rollover to provide consumers with more useful and practical advice, commensurate with the evidence from real-world crash
data. Accuracy The approach adopted by NHTSA was to approximate a continuous curve— the estimated rollover curve—by a discrete approximation comprising five levels, or star rating categories. This is
a coarse approximation that results in a substantial loss of information, particularly at lower SSF values where the rollover curve is relatively steep. A more accurate approximation of the
continuous rollover curve would use more levels. There would still be artificial jumps at the breakpoints between the levels, but this is an inherent feature of all such discrete rating systems.
Figures 3-10 and 3-11 show two examples of defining breakpoints on the SSF axis. The first figure is an example of a coarse four-step approximation to the estimated curve—the lines are drawn at 10,
20, and 30 percent. The horizontal lines drawn at these points define four bands of SSF values. Note that these bands are not of equal width since the curve is not a straight
FIGURE 3-10 Example of using four SSF categories based on the model in Figure 3-2.
FIGURE 3-11 Example of using seven SSF categories based on the model in Figure 3-2.
line at a 45-degree angle. Figure 3-11 shows six lines drawn
horizontally at 10, 15, 20, 25, 30, and 35 percent. These lines provide a finer resolution on the SSF axis with seven bands. Of course, the more bands there are, the closer is the approximation to
the curve.
Resolution
A further problem with the star rating categories derives from the decision to select the breakpoints between categories by dividing the probability axis of the rollover curve
into four equal 10-percentage-point probability intervals, plus one additional interval above 40 percent probability. The first interval represents rollover probabilities of 0–10 percent (five
stars), the second represents probabilities of 10–20 percent (four stars), and so on up to probabilities greater than 40 percent (one star). However, equal intervals on the probability axis do not
produce equal intervals on the SSF axis because the rollover curve is not a straight line, and its slope changes with changing SSF. One important consequence is that the SSF intervals in the lower
SSF range (up to approximately 1.25), where rollover probability changes quite rapidly with changing SSF, are too wide to permit discrimination among vehicles, even though analysis using the logit
model indicates that such discrimination is statistically meaningful on the basis of real-world crash experience. The choice of breakpoints for the rating system does not exploit the richness of the
available data, and consequently the rating system is not as informative as it could be. For example, the rollover resistance ratings for both SUVs and passenger sedans each span two rating
categories: SUVs receive either two- or three-star ratings, whereas passenger sedans receive four- or five-star ratings. However, SUVs are more susceptible to rollover than are passenger sedans, and
the rate of reduction of rollover probability with increasing SSF is greater for SUVs. The lack of resolution for vehicles with higher rollover risk detracts from the usefulness of the rating system,
and a finer distinction among the rollover propensities of SUVs could be helpful in informing vehicle purchase decisions. Alternatively, as noted in Chapter 4, it may be possible to avoid the use of
categories altogether. This could be achieved by presenting the actual SSF values or rescaled SSF values—for example, on a scale of 0–100.
FINDINGS AND RECOMMENDATIONS
Findings
3-1. Analysis of single-vehicle crash data indicates that an increase in SSF reduces the likelihood of rollover.
3-2. NHTSA’s implementation of an exponential model does not provide sufficient accuracy to permit discrimination of the differences in rollover risk associated with different vehicles within a vehicle class.
3-3. The relation between rollover risk and SSF can be estimated accurately with available crash data and software using a logit model.
3-4. Given the richness of the available data, nonparametric analysis can provide a closer approximation of rollover risk.
3-5. The current practice of approximating the rollover curve with five discrete levels does not convey the richness of the information provided by available crash data.
Recommendations
3-1. Instead of using an exponential model, NHTSA should use a logit model as a starting point for analysis of the relation between rollover risk and SSF.
3-2. For future analysis of rollover risk, NHTSA should employ non-parametric methods.
3-3. NHTSA should consider a higher-resolution representation of the relation between rollover risk and SSF.
REFERENCES
Abbreviation
NHTSA National Highway Traffic Safety Administration
Donelson, A.C., and R.M. Ray. 2001. Motor Vehicle Rollover Ratings: Toward a Resolution of Statistical Issues. Exponent Failure Analysis Associates, Inc., Menlo Park, Calif.
Federal Register. 2000. Consumer Information Regulations; Federal Motor Vehicle Safety Standards; Rollover Prevention; Request for Comments. Vol. 65, No. 106, June 1, pp. 34,998–35,024.
Federal Register. 2001. Consumer Information Regulations; Federal Motor Vehicle Safety Standards; Rollover Prevention; Final Rule. Vol. 66, No. 9, Jan. 12, pp. 3,388–3,437.
Greene, W. H. 2000. Econometric Analysis, Fourth Edition. Prentice-Hall, Inc., N.J.
Härdle, W. 1990. Applied Nonparametric Regression. Cambridge University Press, Cambridge, U.K.
NHTSA. 1999. Passenger Vehicles in Untripped Rollovers. Research Note, National Center for Statistics and Analysis.
|
{"url":"http://www.nap.edu/openbook.php?record_id=10308&page=38","timestamp":"2014-04-20T06:52:46Z","content_type":null,"content_length":"98143","record_id":"<urn:uuid:d4ba0003-f932-4db1-b11c-26e3e04ce63b>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00141-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Dice (the plural of the word die, probably from the Latin dare: to give) are, in general, small polyhedral objects with the faces marked with numbers or other symbols, thrown in order to choose one
of the faces randomly. The most common dice are small cubes 1-2 cm across, whose faces are numbered from one to six (usually by patterns of dots, with opposite sides totalling seven, and numbers 1,
2 and 3 set in counterclockwise direction).
In Unicode, the faces of common cubical dice are ⚀ ⚁ ⚂ ⚃ ⚄ ⚅
Dice are thrown to provide (supposedly uniformly distributed) random numbers for gambling and other games (and thus are a type of hardware random number generator).
Dice are thrown, singly or in groups, from the hand or from a cup or box designed for the purpose, onto a flat surface. The face of each die that is uppermost when it comes to rest provides the value
of the throw. A typical dice game today is craps, wherein two dice are thrown at a time, and wagers are made on the total value of up-facing spots on the two dice. They are also frequently used to
randomize allowable moves in board games such as Backgammon.
"Loaded" or "gaffed" dice can be made in many ways to cheat at such games. Weights can be added, or some edges made round while others are sharp, or some faces made slightly off-square, to make some
outcomes more likely than would be predicted by pure chance. Dice used in casinos are often transparent to make loading more difficult.
In cooking, to dice means to chop into small cubes, in allusion to the dice used in games.
Dice probably evolved from knucklebones, which are approximately tetrahedral. Even today, dice are sometimes colloquially referred to as "bones". Ivory, bone, wood, metal, and stone materials have
been commonly used, though the use of plastics is now nearly universal. It is almost impossible to trace clearly the development of dice as distinguished from knucklebones, on account of the
confusing of the two games by the ancient writers. It is certain, however, that both were played in times antecedent to those of which we possess any written records.
The fact that dice have been used throughout the Orient from time immemorial, as has been proved by excavations from ancient tombs, seems to point clearly to an Asiatic origin. Dicing is mentioned as
an Indian game in the Rig-veda. In its primitive form knucklebones was essentially a game of skill played by women and children. In a derivative form of knucklebones, the four sides of the bones
received different values and were counted as with modern dice. Gambling with three or sometimes two dice was a very popular form of amusement in Greece, especially with the upper classes, and was an
almost invariable accompaniment to banquets (symposium).
The Romans were passionate gamblers, especially in the luxurious days of the Roman Empire, and dicing was a favourite form, though it was forbidden except during the Saturnalia. Horace derided the
youth of the period, who wasted his time amid the dangers of dicing instead of taming his charger and giving himself up to the hardships of the chase. Throwing dice for money was the cause of many
special laws in Rome. One of these stated that no suit could be brought by a person who allowed gambling in his house, even if he had been cheated or assaulted. Professional gamblers were common, and
some of their loaded dice are preserved in museums. The common public-houses were the resorts of gamblers, and a fresco is extant showing two quarrelling dicers being ejected by the indignant host.
Tacitus states that the Germans were passionately fond of dicing, so much so, indeed, that, having lost everything, they would even stake their personal liberty. Centuries later, during the middle
ages, dicing became the favourite pastime of the knights, and both dicing schools and guilds of dicers existed. After the downfall of feudalism the famous German mercenaries called landsknechts
established a reputation as the most notorious dicing gamblers of their time. Many of the dice of the period were curiously carved in the images of men and beasts. In France both knights and ladies
were given to dicing. This persisted through repeated legislation, including interdictions on the part of St. Louis in 1254 and 1256.
In Japan, China, Korea, India, and other Asiatic countries, dice have always been popular and are so still. The markings on Chinese dominoes evolved from the markings on dice, taken two at a time.
Other kinds of dice
Non-cubical dice
Dice with non-cubical shapes were once almost exclusively used by fortune-tellers and in other occult practices, but they have become popular lately among players of roleplaying games and wargames.
Such dice are typically plastic, and have faces bearing numerals rather than patterns of dots. Reciprocally symmetric numerals are distinguished with a dot in the lower right corner (6. vs 9.) or by
being underlined (6 vs 9).
The Platonic solids are commonly used to make dice of 4, 6, 8, 12, and 20 faces; other shapes can be found to make dice with 10, 30, and other numbers of faces. (See Zocchihedron and polyhedral dice.)
Dice with various numbers of faces are often described by their numbers of sides, with a d6 being a six-sided die, a d10 a ten-sided die, and so forth.
20, 10 and 4-sided dice
A large number of different probability distributions can be obtained using these dice in various ways; for example, 10-sided dice (or 20-sided dice labeled with single digits) are often used in
pairs to produce a linearly-distributed random percentage. Summing multiple dice approximates a normal distribution (a "bell curve"), while eliminating high or low throws can be used to skew the
distribution in various ways. Using these techniques, games can closely approximate the real probability distributions of the events they simulate.
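As a rough illustration, the short simulation below compares a single d20, the percentile roll built from two d10s, and the sum of three d6; the particular die sizes and the 3d6 example are arbitrary choices for the sketch.

import random
from collections import Counter

def roll(sides):
    return random.randint(1, sides)

def percentile():
    # Two d10s read as tens and units; by one common convention a double zero is read as 100.
    value = 10 * (roll(10) - 1) + (roll(10) - 1)
    return 100 if value == 0 else value

trials = 100_000
flat = Counter(roll(20) for _ in range(trials))                        # uniform on 1..20
pct = Counter(percentile() for _ in range(trials))                     # roughly uniform on 1..100
bell = Counter(sum(roll(6) for _ in range(3)) for _ in range(trials))  # bell-shaped around 10-11

print(flat.most_common(3), bell.most_common(3))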
Spherical dice also exist; these function like the plain cubic dice, but have some sort of internal cavity in which a weight moves which causes them to settle in one of six orientations when rolled.
Cowry shells or coins may be used as a kind of two-sided dice ("d2"). (In the case of cowries it is questionable if they yield a uniform distribution.)
Dice with other labels
Although most dice are labelled with numbers (starting at 1), all sorts of other symbols may be used. The most common ones include (probably among others):
• color dice (e.g., with the colors of the playing pieces used in a game)
• Poker dice, with the following labels somewhat reminiscent of the names of standard playing cards:
□ Nine (of spades; black)
□ Ten (of diamonds; red)
□ Jack (blue)
□ Queen (blue)
□ King (red)
□ Ace (of clubs; black)
• dice with letters (cf. Boggle)
This article incorporates text from the public domain 1911 Encyclopædia Britannica.
|
{"url":"http://july.fixedreference.org/en/20040724/wikipedia/Dice","timestamp":"2014-04-17T04:04:37Z","content_type":null,"content_length":"14766","record_id":"<urn:uuid:dcc3cd61-0538-4ead-935c-619ca0d5d45f>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00258-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A fully abstract translation between a λ-calculus with reference types and Standard ML
- Higher Order Operational Techniques in Semantics, Publications of the Newton Institute , 1998
"... Techniques of operational semantics do not apply universally to all language varieties: techniques that work for simple functional languages may not apply to more realistic languages with
features such as objects and memory effects. We focus mainly on the characterization of the so-called finite ele ..."
Cited by 4 (0 self)
Add to MetaCart
Techniques of operational semantics do not apply universally to all language varieties: techniques that work for simple functional languages may not apply to more realistic languages with features
such as objects and memory effects. We focus mainly on the characterization of the so-called finite elements. The presence of finite elements in a semantics allows for an additional powerful
induction mechanism. We show that in some languages a reasonable notion of finite element may be defined, but for other languages this is problematic, and we analyse the reasons for these
difficulties. We develop a formal theory of language embeddings and establish a number of properties of embeddings. More complex languages are given semantics by embedding them into simpler
languages. Embeddings may be used to establish more general results and avoid reproving some results. It also gives us a formal metric to describe the gap between different languages. Dimensions of
the untyped programming language design space addressed here include functions, injections, pairs, objects, and memories. 1
, 1997
"... This paper considers some theoretical and practical issues concerning the use of linear logic as a logical foundation of functional programming languages such as Haskell and SML. First I give an
operational theory for a linear PCF: the (typed) linear - calculus extended with booleans, conditional a ..."
Cited by 3 (1 self)
Add to MetaCart
This paper considers some theoretical and practical issues concerning the use of linear logic as a logical foundation of functional programming languages such as Haskell and SML. First I give an
operational theory for a linear PCF: the (typed) linear λ-calculus extended with booleans, conditional and non-termination. An operational semantics is given which corresponds in a precise way to the
process of β-reduction which originates from proof theory. Using this operational semantics I define notions of observational equivalence (sometimes called contextual equivalence). Surprisingly, the
linearity of the language forces a reworking of the traditional notion of a context (the details are given in an appendix). A co-inductively defined notion, applicative bisimilarity, is developed and
compared with observational equivalence using a variant of Howe's method. Interestingly the equivalence of these two notions is greatly complicated by the linearity of the language. These
equivalences ar...
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1657009&sort=cite&start=10","timestamp":"2014-04-24T14:10:22Z","content_type":null,"content_length":"15876","record_id":"<urn:uuid:69f8e283-f76c-4bd7-85a7-ba35030e3f6c>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00045-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Graph Theory with Proving
December 7th 2009, 08:32 PM
Graph Theory with Proving
Let F be a simple graph with $n \geq 2$ vertices.
Prove that F contains at least two vertices of the same degree.
well i know that A graph is called simple if there is at most one edge between any two points.
December 7th 2009, 10:12 PM
let $d_i$ be the degree of the vertex $v_i.$ then either $d_i \in \{1, 2, \cdots , n-1 \}, \ \forall i,$ or $d_i \in \{0,1, \cdots, n-2 \}, \ \forall i.$ you should be able to easily finish the
proof now.
December 7th 2009, 10:21 PM
thank you for this...just little bit more
well i see that you are making two sets, one less than the other, but how are you supposed to put that into proof language...
December 7th 2009, 10:34 PM
the number of elements of both sets is $n-1.$ so we have $n$ vertices and $n-1$ possible values for the degrees of the vertices. thus at least two vertices must have the same degree.
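To spell out why those are the only two possible cases: a vertex of degree $0$ (joined to nothing) and a vertex of degree $n-1$ (joined to every other vertex) cannot both occur in the same simple graph, since the degree-$(n-1)$ vertex would have to be adjacent to the degree-$0$ one. So either every $d_i$ lies in $\{0,1,\cdots,n-2\}$ or every $d_i$ lies in $\{1,2,\cdots,n-1\}$; each set has only $n-1$ elements, and by the pigeonhole principle two of the $n$ vertices must share a degree.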
December 7th 2009, 10:36 PM
|
{"url":"http://mathhelpforum.com/discrete-math/119240-graph-theory-prooving-print.html","timestamp":"2014-04-17T20:48:39Z","content_type":null,"content_length":"10213","record_id":"<urn:uuid:dc88b527-8cd9-4085-aaaa-84fab66d9236>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00122-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Reinvestigation of Relationship Between Macroeconomic Indexes and Energy Consumption in Iran
Although energy plays an important role in economic development and national welfare, the energy crisis of the 1970s and the high, unforeseen prices of energy carriers, especially oil, led to policies of limiting and conserving energy consumption, and many industrial countries moved toward gradual management of their growing energy use (Emadzadeh et al., 2002). At the end of the 1970s and the beginning of the 1980s, the relationship between energy consumption and economic growth became a focal point of attention for economic analysts, and many studies in this period examined, from different viewpoints, the effect of rising energy prices and the resulting limits on consumption on economic growth (Abrishami and Mostafaie, 2000). Final energy consumption in Iran was 53.4 million barrels of oil in 1958 and rose to 206.9 million barrels in 1968, an average annual growth of 14.6%. After the Iranian revolution and the accompanying political and economic changes, especially the imposed war, energy consumption continued its upward trend more gently, rising from 199.7 million barrels in 1969 to 331.4 million barrels in 1979. After restrictions on oil product consumption were lifted in 1979, energy consumption again began to grow quickly, at 5.88% over 1979 to 1983 (the period of the first development program); over 1980-2000 this growth slowed slightly, to 3.1%. Masih and Masih (1997) showed that energy consumption is not related to income for Malaysia, Singapore and the Philippines, while for India there is a unidirectional relationship running from energy consumption to GNP. Aqeel and Butt (2001), using Granger causality tests, showed that in Pakistan economic growth causes energy consumption and also causes growth in the consumption of petroleum products; in the gas sector there is no causal relationship between gas consumption and economic growth; and in the power sector electricity consumption causes economic growth, with no feedback effect in the opposite direction. Chiou-Wei et al. (2008) found evidence
supporting a neutrality hypothesis for the United States, Thailand and South Korea. However, empirical evidence on Philippines and Singapore revealed a unidirectional causality running from economic
growth to energy consumption while energy consumption may have affected economic growth for Taiwan, Hong Kong, Malaysia and Indonesia. In the low income group, there existed no causal relationship
between energy consumption and economic growth; in the middle income groups (lower and upper middle income groups), economic growth leads energy consumption positively; in the high income group
countries, economic growth leads energy consumption negatively. Jinke et al. (2008) discovered that unidirectional causality running from GDP to coal consumption exists in Japan and China and no
causality relationship exist between coal consumption and GDP in India, South Korea and South Africa while the series are not cointegrated in USA. The major OECD or non-OECD countries especially
China, India and South Africa should reduce their CO[2] emissions in coal consumption to reach sustainable development. Although economic growth and energy consumption lack short-run causality, there
is long-run unidirectional causality running from energy consumption to economic growth i.e., reducing energy consumption does not adversely affect GDP in the short-run but would in the long-run.
Zamani (2007) discovered that there is a long-run unidirectional relationship from GDP to total energy and a bidirectional relationship between GDP and gas as well as GDP and petroleum products
consumption for the whole economy. Causality is running from value added to total energy, electricity, gas and petroleum products consumption and from gas consumption to value added in industrial
sector. The long-run bidirectional relations hold between value added and total energy, electricity and petroleum products consumption in the agricultural sector. The short-run causality runs from
GDP to total energy and petroleum products consumption, and from industrial value added to total energy and petroleum products consumption in that sector. Fatia et al. (2004) showed that in New Zealand there is no causal relationship between oil, gas and coal consumption and real gross national product, and that the variables are endogenous with respect to one another. On the other hand, there is unidirectional Granger causality running from gross national product to total energy consumption and to energy consumption in the industrial sector. These studies show that the findings on the causal relationship between energy consumption and economic growth are not uniform, which may reflect structural and political differences among the countries studied as well as differences in research methodology. In addition, the Engle and Granger (1987) tests used in many of these studies have attracted considerable criticism: the time series properties of the data affect the sensitivity of the tests, and most studies assume that the series are stationary and therefore do not use suitable estimation methods. The main objective of this study is to find out whether, in Iran, economic growth is a driver of energy consumption, or whether energy consumption can instead promote economic growth through direct and indirect channels such as higher total consumption, higher profitability and improved efficiency. The study therefore examines the causal relationships between GNP and energy consumption in three areas: total consumption, housing and commercial consumption, and industrial consumption.
Theoretical foundation of relationship between total product and energy consumption: Nowadays, in addition to labor and capital inputs, energy is also regarded as one of the important inputs in macroeconomic models. Production is therefore a function of the labor, capital and energy inputs:
Q = F(K, L, E)
where
Q = Total product
K = Capital input
L = Labor input
E = Energy input (Abrishami and Mostafaie, 2000)
The energy input can be supplied by a collection of sources such as oil, gas and electricity, which are called energy carriers (Abrishami and Mostafaie, 2000). Pindyck (1979) argued that the effect of energy prices on economic growth depends on the role energy plays in the production structure. From his standpoint, in industries where energy is used as an intermediate input in production, a price increase and the consequent fall in energy consumption reduce national product. If a country wants to pursue its growth in spite of rising prices, it must adjust its economic structure and concentrate domestic supply on investment goods that use proportionally less energy. Within sectors, the changing demand for investment and non-investment goods then brings about structural changes and investments that reduce energy consumption while maintaining adequate supply, steering the economy toward lower energy intensity and higher efficiency.
Causality concept: Although regression analysis can describe the dependence of one variable on other variables, it does not by itself establish causality. The causality question is whether a statistical causal relationship, with a clear ordering in time, can be found between two variables. To answer this question, Granger (1986, 1988) defined a test that involves estimating the following regressions:
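(Equations 2 and 3 are not reproduced in the text; in the notation used below, with α denoting the coefficients on lagged y and δ the coefficients on lagged x, the standard bivariate Granger regressions they refer to have the form
x[t] = Σ(i=1..m) α[i] y[t-i] + Σ(j=1..m) β[j] x[t-j] + u[t]    (Eq. 2)
y[t] = Σ(i=1..m) δ[i] x[t-i] + Σ(j=1..m) γ[j] y[t-j] + v[t]    (Eq. 3)
where u[t] and v[t] are uncorrelated error terms and m is the lag length.)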
• If the sum of the estimated coefficients on lagged y[t] in Eq. 2 is statistically different from zero (Σα[i] ≠ 0) while the sum of the estimated coefficients on lagged x[t] in Eq. 3 equals zero (Σδ[i] = 0), there is unidirectional causality from y[t] to x[t]
• If the reverse condition holds, there is unidirectional causality from x[t] to y[t]
• If the sums of the lagged coefficients in both regressions are statistically different from zero there is bidirectional (feedback) causality, and if neither is, the two variables are independent
The data for this study cover 1970-2000 and were collected from the statistical yearbook of the Iranian power ministry; to improve the results, the variables are used in logarithmic form. In this study LTEC denotes total energy consumption, LREC housing and commercial energy consumption, LIEC industrial energy consumption, LGDP gross national product and LCPI the price index. First, the stationarity of the variables is examined with the Augmented Dickey-Fuller (ADF) unit root test (Dickey and Fuller, 1979, 1981). Then the Johansen cointegration test (Johansen, 1988, 1992) is used to determine the number of cointegration vectors among national product, energy consumption in the different sectors and the price level, and the direction of Granger causality is examined with three vector error correction models, each containing one of the energy consumption measures together with national product and the price level; from these models the exogeneity or endogeneity of each variable is assessed. This study was carried out for Iran in 2007.
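The same sequence of steps can be reproduced with standard open-source tools; the sketch below uses Python's statsmodels library (the estimation software actually used is not named in the paper), with a placeholder file name for the 1970-2000 logged series.

import pandas as pd
from statsmodels.tsa.stattools import adfuller, grangercausalitytests
from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM

# Placeholder file: annual logged series LTEC, LGDP, LCPI for 1970-2000.
df = pd.read_csv("iran_energy.csv", index_col="year")
model1 = df[["LTEC", "LGDP", "LCPI"]]

# 1. ADF unit-root tests on levels and first differences (Table 1): print p-values.
for col in model1.columns:
    print(col, adfuller(model1[col])[1], adfuller(model1[col].diff().dropna())[1])

# 2. Johansen test for the number of cointegration vectors (Table 2).
jo = coint_johansen(model1, det_order=0, k_ar_diff=1)
print(jo.lr1, jo.cvt)  # trace statistics and critical values
print(jo.lr2, jo.cvm)  # max-eigenvalue statistics and critical values

# 3. Vector error correction model with one cointegrating relation; Wald tests on its
#    lagged terms and error-correction coefficient underlie the results in Table 3.
vecm = VECM(model1, k_ar_diff=1, coint_rank=1).fit()
print(vecm.summary())

# 4. Pairwise short-run Granger causality as a cross-check.
grangercausalitytests(model1[["LTEC", "LGDP"]], maxlag=2)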
According to the Augmented Dickey-Fuller test (Table 1), all variables are non-stationary in levels and stationary in first differences; that is, the variables are I(1). In the next step the Johansen-Juselius cointegration test (based on the λ[max] and λ[trace] statistics) is carried out. Three models are studied, each containing three variables: the first includes LCPI, LGDP and LTEC; the second LREC, LGDP and LCPI; and the third LIEC, LGDP and LCPI. The test is applied to each model. According to Table 2, the results show that each model has one cointegration vector. After the number of cointegration vectors is determined for each model, the error correction equations can be estimated; these include both the short-run dynamics and the error correction component. The Wald test is then used to determine the causality relationships between the variables separately in each model, and the estimated results are reported in Table 3.
The first three columns (from the left) report the χ^2 statistics for the significance of the summed lags, which capture short-run causality. The fourth column shows the error correction coefficient and its t-test. The last three columns consider the short-run terms and the cointegration vector jointly, which can be interpreted as long-run causality. The results show that in the total energy consumption equation the null hypothesis that the sums of the lagged LCPI and LGDP coefficients are zero is rejected at the 5% significance level. Thus, in the short run, the price index and gross national income affect total energy consumption in Iran, and their changes influence total consumption. Given the positive relationship between total energy consumption and these two variables, an increase in the price index or in gross national product raises total energy consumption in the short run. The results also show that in the long run the price index and gross domestic product are effective factors on total energy consumption. GDP raises energy demand directly by increasing society's income, and rising investment likewise raises the demand for energy and therefore energy consumption. In the GDP and CPI equations of the first model, none of the variables has a significant effect in either the short or the long run; these two variables are therefore weakly exogenous in the first model, as reflected in the insignificance of the error correction term in each of those equations. In the second model, which examines housing and commercial energy consumption, total product and the price index, total product does not influence housing and commercial consumption in the short run, but it does affect it in the long run. This is fully consistent with reality, because rising incomes increase the demand for energy, directly and indirectly, in the long run. In the long run, housing and commercial consumption and gross domestic product also have a significant influence on the price index, which can be seen as the inflationary pressure of the higher total demand created by economic growth and rising energy consumption. The price index influences housing and commercial consumption in both the short and the long run. In the third model, industrial energy consumption is studied together with gross domestic product and the price index. Here, too, total product is an effective factor on industrial energy consumption in both the short and the long run, indicating that as domestic product grows the need for energy increases, especially in the industrial sector. Overall, the results of this study show that energy consumption in the three areas (total consumption, housing and commercial consumption, and industrial consumption) must not be treated as exogenous with respect to gross domestic product and the inflation rate. Energy consumption does not play a key role in Iran's gross domestic product; domestic production in Iran depends mainly on other factors, among which energy consumption is the least important. In fact, the results indicate that for Iran there is unidirectional causality running from domestic product to energy consumption, which confirms the findings of most previous studies. In light of these results, the government should adopt optimal pricing in the commercial, housing and industrial energy sub-sectors in parallel with economic growth and the increase in gross domestic product; such a strategy would allow optimal management in this area. In other words, energy consumption must not be considered exogenous to GDP and the inflation rate. Imposing a policy of fixed energy prices under conditions of economic growth is undesirable and will stimulate energy demand, so the government should move its energy pricing policy toward variable pricing based on the amount consumed, especially during peak load periods.
Table 1: Estimated results of stationary test
Table 2: Results of determining number of cointegration vectors
Table 3: Results of Wald test for determining of long and short run causality
*Significant at 5% level, **Significant at 10% level
Abrishami, H. and A. Mostafaie, 2000. Study relationship between economical growth and oil products in Iran. Knowledge Dev. J., 11: 46-46.
Aqeel, A. and M.S. Butt, 2001. The relationship between energy consumption and economic growth in Pakistan. Asia Pacific Dev. J., 8: 101-110.
Chiou-Wei, S.Z. and Ch. F. Chen and Z. Zhu, 2008. Economic growth and energy consumption revisited-Evidence from linear and nonlinear granger causality. Energy Econ., 30: 3063-3076.
Dickey, D.A. and W.A. Fuller, 1979. Distributions of the estimators for autoregressive time series with a unit root. J. Am. Stat. Assoc., 74: 427-431.
Dickey, D.A. and W.A. Fuller, 1981. Likelihood ratio statistics for autoregressive time series with a unit root. Econometrica, 49: 1057-1072.
Emadzadeh, M., A. Sharifi, R. Dalaliasfahani and M. Safdari, 2002. An analysis of energy growth trend in OECD countries. Commercial Res. J., 28: 95-118.
Engle, R.F. and C.W.J. Granger, 1987. Cointegration and error-correction: Representation, estimation and testing. Econometrica, 55: 251-276.
Fatia, K., O. Les and F.G. Scrimgeour, 2004. Modeling the causal relationship between energy consumption and GDP in New Zealand, Australia, India and Indonesia: The Philippines and Thailand. Math.
Comput. Simulat., 64: 431-445.
Granger, C.W.J., 1986. Developments in the study of cointegrated economics variables. Oxford Bull. Econ. Stat., 48: 213-228.
Granger, C.W.J., 1988. Developments in a concept of causality. J. Econometrics, 39: 199-211.
Jinke, L., S. Hualing and G. Dianming, 2008. Causality relationship between coal consumption and GDP: Difference of major OECD and non-OECD countries. Applied Energy, 85: 421-429.
Johansen, S., 1988. Statistical and hypothesis testing of cointegration vectors. J. Econ. Dyn. Control, 12: 231-254.
Johansen, S., 1992. Cointegration in partial systems and the efficiency of single equation analysis. J. Econometrics, 52: 389-402.
Masih, A.M.M. and R. Masih, 1997. On the temporal causal relationship between energy consumption, real income and prices: Some new evidence from Asian energy dependent NICs based on multivariate
cointegration/vector error correction approach. J. Policy Model., 19: 417-440.
Pindyck, R.S., 1979. Structure of World Energy Demand. MIT Press, Cambridge, MA., ISBN-10:0-262-66177-2.
Zamani, M., 2007. Energy consumption and economic activities in Iran. Energy Econ., 29: 1135-1140.
|
{"url":"http://scialert.net/fulltext/?doi=jas.2009.1578.1582","timestamp":"2014-04-20T18:27:51Z","content_type":null,"content_length":"82422","record_id":"<urn:uuid:362ca924-ad32-4738-b778-633b31a53341>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00321-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MathGroup Archive: April 2008 [00946]
Re: Probably memory problem
• To: mathgroup at smc.vnet.net
• Subject: [mg88075] Re: Probably memory problem
• From: Szabolcs Horvát <szhorvat at gmail.com>
• Date: Wed, 23 Apr 2008 06:05:10 -0400 (EDT)
• Organization: University of Bergen
• References: <fumr02$sef$1@smc.vnet.net>
Alejandra Lozada wrote:
> Dear MathGroup,
> _____________________________________
> Question:
> Do anyone knows if using the palletes (Alebraic Manipulation,
> BasicMathInput,
> BasicTypesetting) in Mathematica 6 needs more stack memory that writing
> algebra in full form?
> ______________________________________
> Problem:
> I am interested in check an equality:
> Side 1= Side 2 of the form:
> 24*n*y*y=-4*b+2*v*z-v*v/k*e^z
> With y,b,v,z,k,n variables which all but n
> are veeeeeery long expressions of the form:
> y:=
> b:=
> etc...
> and to make them look shorter I defined other
> expressions, so that y, b, v, etc, need
> other expressions to be complete.
> I know that the equality is analitically true, but
> I get using TrueQ(side 1, side2), PossibleZeroQ,
> that the equality is False.
First, you have to use correct syntax:
Equality is ==, not =
Did you mean E^z where you've written e^z?
Functions are written with square brackets, TrueQ[expr], not TrueQ(expr)
TrueQ is a structural (not mathematical) operation. It will only return
True if the input is explicitly True.
Use Simplify/FullSimplify with appropriate assumptions to check whether
the equality is true.
> __________________________________________
> Options:
> So, I think that the problem could be in:
> - Lack of memory, because Mathematica stops evaluating
> the notebook after the 16th definition,
What do you mean when you say that Mathematica stops evaluating the
notebook? This is very vague. Do you get an error message? Does the
kernel quit?
> although I asked Mathematica about the memory used:
> MemoryInUse = 6,314,760 bytes just before it stopped
> evaluating the notebook but I think Mathematica's limit
> is about 19,654,528 bytes.
> -Maybe I do not have enough computer memory, but
> I have 1 GByte of RAM = 1,073,741,824 bytes.
> -Maybe I did not write well the definitions.
> ___________________________________________
> Any comments or suggestions will be very appreciated, :)
> thank you very much in advance,
> Alejandra Lozada
|
{"url":"http://forums.wolfram.com/mathgroup/archive/2008/Apr/msg00946.html","timestamp":"2014-04-19T06:56:14Z","content_type":null,"content_length":"27294","record_id":"<urn:uuid:f9218fa4-7a44-44b5-9e2e-855112b4871f>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00642-ip-10-147-4-33.ec2.internal.warc.gz"}
|
My Research
(Last updated August 2009)
Cancer is a highly complex and heterogeneous set of diseases. Dynamic changes in the genome, epigenome, transcriptome and proteome that result in the gain of function of oncoproteins or the loss of
function of tumor suppressor proteins underlie the development of all cancers. While the underlying principles that govern the transformation of normal cells into malignant ones is rather well
understood (Hanahan and Weinberg, 2000). a knowledge of the changes that occur in a given cancer is not sufficient to deduce clinical outcome.
The difficulty in predicting clinical outcome arises because many factors other than the mutations responsible for oncogenesis determine tumor growth dynamics. Multidirectional feedback loops occur
between tumor cells and the stroma, immune cells, extracellular matrix and vasculature (Kitano, 2004). Given the number and nature of these interactions, it becomes increasingly difficult to reason
through the feedback loops and correctly predict tumor behavior. For this reason, a better understanding of tumor growth dynamics can be expected from a computational model.
The holy grail of computational tumor modeling is to develop a simulation tool that can be utilized in the clinic to predict neoplastic progression and response to treatment. Not only must such a
model incorporate the many feedback loops involved in neoplastic progression, the model must also account for the fact that cancer progression involves events occurring over a variety of time and
length scales. By developing individual models that focus on certain tumor-host interactions, validating these models and merging the individual pieces together, I aim to build one algorithm with the
potential to address a variety of questions about neoplastic growth and survival.
Model Background
My Ph.D. thesis advisor, Professor Salvatore Torquato, along with several others, has developed a novel cellular automaton (CA) model to simulate the mechanistic complexity of solid tumor growth. The
model takes into account four cell types: healthy cells, proliferating tumor cells, non-proliferating tumor cells and necrotic tumor cells. The algorithm they developed was able to successfully
predict three-dimensional tumor growth and composition using a simple set of automaton rules and a set of four microscopic parameters that account for the nutritional needs of the tumor,
cell-doubling time and an imposed spherical symmetry term (Kansal et al., 2000a). This simple model of tumor growth was applied to consider the effect that an emerging clone of novel genotype has on
neoplastic progression (Kansal et al., 2000b, Figure 1) and to study the response of a tumor mass to surgical resection followed by chemotherapy (Schmitz et al., 2002).
Figure 1
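The flavor of the automaton update can be conveyed with a short schematic; the published rule set and parameter values are in Kansal et al. (2000a), so the thresholds and neighborhood handling below are placeholders rather than the actual model.

import random

HEALTHY, PROLIFERATING, QUIESCENT, NECROTIC = 0, 1, 2, 3

def automaton_step(state, neighbors, depth, delta_p, delta_n):
    """One schematic update of a lattice tumor-growth automaton.

    state     -- dict: site -> one of the four cell types above
    neighbors -- dict: site -> list of adjacent sites
    depth     -- dict: site -> distance from the tumor edge (proxy for nutrient supply)
    delta_p   -- maximum depth at which tumor cells can still proliferate
    delta_n   -- depth beyond which quiescent cells become necrotic
    """
    new_state = dict(state)
    for site, cell in state.items():
        if cell == PROLIFERATING:
            if depth[site] > delta_p:
                new_state[site] = QUIESCENT  # starved of nutrients, stops dividing
            else:
                free = [n for n in neighbors[site] if state[n] == HEALTHY]
                if free:
                    new_state[random.choice(free)] = PROLIFERATING  # division into healthy tissue
        elif cell == QUIESCENT and depth[site] > delta_n:
            new_state[site] = NECROTIC  # prolonged deprivation kills the cell
    return new_state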
Vascular Growth Model
The success of the original CA model is in part related to its simplicity, and one of the simplifying assumptions is that the vasculature is implicitly present and evolves as the tumor grows. In
order to incorporate more biological detail in the model, I worked to modify the original algorithm to study the feedback that occurs between the growing tumor and the evolving host blood vessel
network (Gevertz and Torquato, 2006).
The computational algorithm is based on the co-option/regression/growth experimental model of tumor vasculature evolution. In this model, as a malignant mass grows, the tumor cells co-opt the mature
vessels of the surrounding tissue that express constant levels of bound Angiopoietin-1 (Ang-1). Vessel co-option leads to the upregulation of the antagonist of Ang-1, Angiopoietin-2 (Ang-2). In the
absence of the anti-apoptotic signal triggered by vascular endothelial growth factor (VEGF), this shift destabilizes the co-opted vessels within the tumor center and marks them for regression (Holash
et al., 1999). Vessel regression in the absence of vessel growth leads to the formation of hypoxic regions in the tumor mass. Hypoxia induces the expression of VEGF, stimulating the growth of new
blood vessels.
We developed a system of reaction-diffusion equations to track the spatial and temporal evolution of the aforementioned key factors involved in blood vessel growth and regression. Based on a set of
algorithmic rules, the concentration of each protein and bound receptor at a blood vessel determines if a vessel will divide, regress or remain stagnant. The structure of the blood vessel network, in
turn, is used to estimate the oxygen concentration at each cell site, and the oxygen concentration determines the proliferative capacity of each automaton cell.
The model proved to quantitatively agree with experimental observations on the growth of tumors when angiogenesis (growth of new blood vessels from existing vessels) is successfully initiated and
when angiogenesis is inhibited. In particular, the model exhibits an "angiogenic switch" such that in some parameter regimes the tumor can grow to a macroscopic size whereas in other parameter
regimes tumor growth is thwarted beyond a microscopic size (Gevertz and Torquato, 2006, Figure 2).
Figure 2
Growth in Confined, Heterogeneous Environments
Both the original CA model and the vascular growth model limit the effects of mechanical confinement to one parameter that imposes a maximum radius on a spherically symmetric tumor. However, tumors
can grow in organs of any shape with nonhomogeneous tissue structure, and in a study performed by Helmlinger et al (1997), it was demonstrated that neoplastic growth is spherically symmetric only
when the environment in which the tumor develops imposes no physical boundaries on growth. In particular, it was shown that human adenocarcinoma cells grown in a 0.7% gel inside a cylindrical glass
tube develop to take on an ellipsoidal shape, driven by the geometry of the capillary tube. However, when the same cells are grown outside the capillary tube, a spheroidal mass develops. This
experiment highlights that the assumption of radially symmetric growth is not valid in complex, non-spherically symmetric environments.
Since many organs, like the brain, impose non-radially symmetric physical confinement on tumor growth, we modified the original CA algorithm to incorporate boundary and heterogeneity effects on
neoplastic progression. The CA evolution rules are comparable to the rules used in the original model, except that all assumptions of radial symmetry are removed and replaced with rules that account
for tissue geometry and topology (Gevertz et al., 2008). In doing this, we showed that models that do not account for the structure of the confining boundary and organ heterogeneity lead to
inaccurate predictions on tumor size, shape and spread. In Figure 3, this is illustrated by studying tumor growth in a 2D representation of the cranium and comparing the size of the tumor as a
function of time when the original and modified algorithm is employed.
Figure 3
Heterogeneous Tumor Cell Population: Genetic Mutations
It is well known that each patient's tumor has its own genetic profile, and that this profile is important in determining growth dynamics and response to treatment. To account for tumor cell
heterogeneity, we begin with the assumption that tumors are monoclonal in origin; that is, a tumor mass arises from a single cell that accumulates genetic and epigenetic alterations over time
(Fialkow, 1979). In our algorithm, a tumor cell of one genotype/phenotype initiates tumor growth. As a tumor cell undergoes mitosis, we assume that there is a 1% chance that an error occurs in DNA
replication. When such an error occurs, the daughter cell will take on a mutant phenotype that differs in one respect from the mother cell. Given the tumor cell properties considered in the model, we
allow malignant cells to acquire altered phenotypes related to cell proliferation rates (which can correspond to cells altering the production rate of growth factor such as PDGF and TGF-α or tumor
suppressor proteins such as pRb or p53) and the length of time a cell can survive under sustained hypoxia (which can correspond to the expression levels of the X-box binding protein 1). We have
implemented the model such that a tumor cell can express one of eight mutant phenotypes (Gevertz and Torquato, 2008), all of which are defined in Figure 4.
Figure 4
Using the genetic mutations model, we have explored the probability of emergence of both beneficial and deleterious mutations in the tumor cell population. We have found that, as expected, mutations
which give cells a proliferative advantage arise frequently in our simulations (78% of the time). More surprisingly, we have also observed that the obviously deleterious mutation that decreases the
proliferation rate of a cell emerges in 7% of our simulations. Furthermore, the mutation that determines the hypoxic lifespan of a cell is not strongly selected for or against, provided the changes
in the lifespan are not too drastic in either direction (Gevertz and Torquato, submitted). Using this model, we can explore how growth dynamics are impacted by the emergence of one or more mutant phenotypes.
Merged Model of Tumor Growth
Each of the previously discussed algorithms were designed to address a particular set of questions and successfully served their purpose. Much can be gained, however, by merging the evolving
vasculature, confined growth and genetic mutations models. The resulting multiscale simulation tool will not only consider multiple forms of feedback and heterogeneity, it will also likely have
emergent properties not identifiable prior to the integration of the three models.
To pinpoint the effects that the environment has on tumor growth, here I show the environment in which the tumor grows (bottom row of Figure 5), along with the shape and spread of a tumor after
approximately four months of growth in this environment (top row of Figure 5). This figure allows us to see how and to what extent the structure of the environment alters tumor shape and spread.
Figure 5
Currently, this merged model is being used to test the impact that various vascular-targeting therapies have on tumor progression.
Future Work
1. Generalize the model to three dimensions. Currently, the original CA, confined and heterogeneous growth model, and the genetic mutations model have been implemented in both 2D and 3D. However,
the vascular algorithm has only been implemented in 2D, limiting the merged model to be two-dimensional as well.
2. Develop a more realistic model of the underlying capillary network in tissue. We only focus on modeling the capillaries because this is the level of the vascular tree where oxygen and nutrient
exchange occurs. The capillary network, it should be noted, does not exhibit the same branching structure that is seen in higher-order vessels.
3. Expand upon the confined-growth algorithm to examine not only how the host deforms the tumor, but also to consider the reciprocal deformities induced in the host by the growing neoplasm.
4. Incorporate other forms of tumor-host interactions: extracellular matrix, immune system...
5. Include the process of single cell invasion in which individual cancer cells break off the main tumor mass and invade healthy tissue (particularly important in brain tumor growth dynamics)
D. Hanahan and R.A. Weinberg, 2000. The hallmarks of cancer. Cell 100: 57-70.
H. Kitano, 2004. Cancer as a robust system: implications for anticancer therapy. Nature Rev. Cancer 4: 227-235.
A.R. Kansal et al., 2000a. Simulated brain tumor growth dynamics using a three-dimensional cellular automaton. J. Theor. Biol. 203: 367-382.
A.R. Kansal et al., 2000b. Emergence of a subpopulation in a computational model of tumor growth. J. Theor. Biol. 207: 431-441.
A.R. Kansal and S. Torquato, 2001. Globally and locally minimal weight spanning tree networks. Physica A 301: 601-619.
J.E. Schmitz et al., 2002. A cellular automaton model of brain tumor treatment and resistance. J. Theor. Medicine 2002(4): 223-239.
J.L. Gevertz and S. Torquato, 2006. Modeling the effects of vasculature evolution on early brain tumor growth. J. Theor. Biol. 243: 517-531.
J. Holash et al., 1999. Vessel cooption, regression, and growth in tumors mediated by angiopoietins and VEGF. Science 284: 1994-1998.
G. Helmlinger et al., 1997. Solid stress inhibits the growth of multicellular tumor spheroids. Nature Biotech. 15: 778-783.
J.L. Gevertz, G. Gillies and S. Torquato, 2008. Simulating tumor growth in confined heterogeneous environments. Submitted to Physical Biology.
P.J. Fialkow, 1979. Clonal origin of human tumors. Ann. Rev. Med. 30: 135-143.
J.L. Gevertz and S. Torquato. Growing heterogeneous tumors in silico. Submitted for publication.
|
{"url":"http://www.tcnj.edu/~gevertz/research.html","timestamp":"2014-04-20T05:42:36Z","content_type":null,"content_length":"20108","record_id":"<urn:uuid:093aa72b-5635-4ccc-afb8-49db2ec33bc4>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00568-ip-10-147-4-33.ec2.internal.warc.gz"}
|
VBA: how to check if the value of a cell is a number
Hi All, I am a newbie with VBA and I have 2 simple questions... but I couldn't find the answers in the forum or by googling it
1) check if in cell A1 there is a number. Something like
Code: If .Cells(1,1).Value is number then ....
2) how to get the "code" of cell A1 in VBA (i.e. =code(A1)). I tried
Code: .Cell(1,1).Code
but it didn't work. Many thanks for your help. Peace
#1 IsNumeric(Cells(1,1)) #2 What do you mean by code? Perhaps Application.WorkSheetFunction.Code(Range("A1")) lenze
Last edited by lenze; Jul 22nd, 2009 at 06:27 PM.
If you have to tell your boss you're good with Excel, you're NOT!! All I know about Excel I owe to my ignorance! Scotch: Because you don't solve great Excel problems over white wine
Hi 1) Can you be more specific? Do you mean like the worksheet function IsNumber()? Questions: - If the cell has text convertible to a number like the string "123" do you want True or False? - If the
cell has a date, the worksheet function IsNumber() would give you a True. Is this what you want? 2) I guess you mean the function Asc(). It can also be its big sister AscW() In case of a Unicode
Kind regards PGC To understand recursion, you must understand recursion.
Thank you for your answers! pgc01, - If it's a date I would like to get FALSE - If I get a string "123" I would prefer TRUE (but if it's too complicated, FALSE will do it). 2) by code I mean the
worksheet function =Code(), so I'm guessing that your suggestion Asc(Cell(1,1)) will do the job Thank you guys
I have a related question: 1a) I would like to count how many numbers (IsNumeric() will do) different from 0 I have in a given range, say A1:A5. In the worksheet I would write =SUMPRODUCT(--ISNUMBER(A1:A5), (A1:A5<>0)). To find the numbers (even 0), as a first step, I tried
Code: HowMany = Application.WorksheetFunction.SumProduct(--IsNumeric(rng))
but it didn't work. Any suggestions?
Last edited by fab54; Jul 22nd, 2009 at 07:26 PM.
Use IsNumeric() as Lenze suggested. It accepts numbers and strings convertible to numbers and refuses dates. Remark: it also accepts booleans, if you think it's relevant to your problem test against
it. P. S. Just saw your other post. Instead of the worksheet function SumProduct() you can use a loop.
Last edited by pgc01; Jul 22nd, 2009 at 07:40 PM.
Thank you! Managed to create the loop...yay!! (was quite easy though) Great suggestions guys
Take into account that string "1D2" is recognized as numeric, equal to 1*10^2 = 100. Therefore IsNumeric("1D2") = True. To avoid it you can use: IsNumeric(Replace(ActiveCell, "D", "?")). The code below can help you:
Code:
' Count the numerical values in the Rng range
' VBA usage:
'   NumsCount(Range("A1:A5"))        <-- Zeroes are not numerical
'   NumsCount(Range("A1:A5"), True)  <-- Zeroes are numerical
' Formula usage: =NumsCount(A1:A5)
Function NumsCount(Rng As Range, Optional UseZero As Boolean) As Long
  Dim arr, v
  arr = Rng
  If Not IsArray(arr) Then ReDim arr(0): arr(0) = Rng
  For Each v In arr
    If IsNum(v) Then
      If UseZero Then
        NumsCount = NumsCount + 1
      ElseIf v <> 0 Then
        NumsCount = NumsCount + 1
      End If
    End If
  Next
End Function

' The same as IsNumeric() but different for strings like "1D2", and skips the boolean values
' If NumOnly=True then strings are not recognised as numeric at all.
' If UseD=True then strings like "1D2" are recognised as numeric.
Function IsNum(TxtOrNum, Optional NumOnly As Boolean, Optional UseD As Boolean) As Boolean
  Select Case VarType(TxtOrNum)
    Case 2 To 6, 14 ' Any type of numbers
      IsNum = True
    Case 8 ' vbString
      If Not NumOnly Then
        Dim d As Double
        If Not UseD Then
          If InStr(UCase(TxtOrNum), "D") > 0 Then Exit Function
        End If
        On Error Resume Next
        d = TxtOrNum
        IsNum = Err = 0
      End If
  End Select
End Function
Regards,
Vladimir
Last edited by ZVI; Jul 22nd, 2009 at 08:23 PM.
Hi Vladimir Good idea, your IsNum(). Maybe you want to deal also with another problem. The vba does not care about commas in expressions when it converts to numbers. For example if in a cell you have
"12,3,45" you would say it's a list, but vba and your IsNum() will say it's a valid number. Also for "1,23,4.5,67", if you assign it to a double you get 1234.567, the commas don't matter to vba, but
you would not say that's a number. In case of a string maybe the best is to use regular expressions and check the possible formats.
Hi PGC, May be additional checking of comma in string is enough for solving the list recognising issue: Code: ' The same as IsNumeric() but different for strings like "1D2", and skips the boolean
values ' If NumOnly=True then strings are not recognised as numeric at all. ' If UseD=True then strings like "1D2" are recognised as numeric. ' Comma in string means the list and recognised as not
numerical Function IsNum(TxtOrNum, Optional NumOnly As Boolean, Optional UseD As Boolean) As Boolean Select Case VarType(TxtOrNum) Case 2 To 6, 14 ' Any type of numbers IsNum = True Case 8 ' vbString
If Not NumOnly Then If InStr(TxtOrNum, ",") > 0 Then Exit Function ' <- Comma means the list Dim d As Double If Not UseD Then If InStr(UCase(TxtOrNum), "D") > 0 Then Exit Function End If On Error
Resume Next d = TxtOrNum IsNum = Err = 0 End If End Select End Function Regards, Vladimir
Last edited by ZVI; Jul 22nd, 2009 at 09:16 PM.
|
{"url":"http://www.mrexcel.com/forum/excel-questions/404540-visual-basic-applications-how-check-if-value-cell-number.html","timestamp":"2014-04-19T15:31:16Z","content_type":null,"content_length":"100750","record_id":"<urn:uuid:0fc2c2ec-c617-410c-bfb4-8d0521d5fe2a>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00421-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Project Euler Questio 2
June 12th, 2012, 03:11 AM
Project Euler Questio 2
Hello, can any one please help me in problem 2 of project euler?
Here is the problem::::::::::::
Each new term in the Fibonacci sequence is generated by adding the previous two terms. By starting with 1 and 2, the first 10 terms will be:
1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...
By considering the terms in the Fibonacci sequence whose values do not exceed four million, find the sum of the even-valued terms.
My Code:
import java.io.*;
class pro2
public static void main(String args[])
int a=1,b=2,c=0,sum=0;
June 12th, 2012, 06:58 AM
Re: Project Euler Questio 2
Hello goldenpsycho!
The only problem I can see with your code is that you start adding the even-valued terms to sum only after the term 3, i.e. you never add the even term 2 to your sum. Just add 2 to your sum and you will get what you want.
Hope this helps.
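For reference, once the even term 2 is counted the whole computation fits in a few lines. A minimal Python sketch of the same logic (an illustration only, not the poster's Java):

    def even_fib_sum(limit=4_000_000):
        a, b = 1, 2            # the sequence as stated: 1, 2, 3, 5, 8, ...
        total = 0
        while a <= limit:
            if a % 2 == 0:     # even-valued terms, including the term 2 itself
                total += a
            a, b = b, a + b
        return total

    print(even_fib_sum())      # should print 4613732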
June 12th, 2012, 07:25 AM
Re: Project Euler Questio 2
Hi andread90, thank you so much for your help. I can't believe that I overlooked such a small thing. I had written this program 3 times before coming to this conclusion so I guess I lost a bit of
Thanks, your solution worked.
|
{"url":"http://www.javaprogrammingforums.com/%20whats-wrong-my-code/16117-project-euler-questio-2-a-printingthethread.html","timestamp":"2014-04-21T06:08:02Z","content_type":null,"content_length":"5335","record_id":"<urn:uuid:bc984431-e5fd-4eb2-8f67-9f9f79c0b94e>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00657-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Understanding the IP3 specification and linearity, Part 2 | EE Times
Design How-To
Understanding the IP3 specification and linearity, Part 2
Intercept point (IP) specifications provide a useful tool for determining the degree of linearity exhibited by electronic devices. In part one of this two-part story, the authors reviewed the basics
of intercept point specifications and linearity. For an expanded view of the equations, click here.
Intermodulation (IM) to intercept point (IP)
Now that we understand the origins of IM products, and particularly IM3, we are better prepared to determine its values and measure them with a common method and unit of measurement.
Please note: IMn are the intermodulation products, while IPn are the actual measures.
The previous discussion showed that the terms for i > 1 in the transfer function A are responsible for device nonlinearity. The larger they are, the greater the distortion. Thus we can simplify and only measure the values of A[2], A[3], ... A[i], ... A[n].
But such absolute values are meaningless because one does not know how they compare to the useful linear performance (A[1]). Therefore, it is more useful to know their deviation versus the good
parameter (A[1]), or more precisely, the ratio A[i]/A[1] or A[1]/A[i]. We will investigate the latter since it will yield a higher value for a high-linearity device.
We could start by trying to evaluate how the terms compare to A[0], or A[2], or any A[i]. But those parameters are not useful. We want a linear behavior (gain, attenuation, etc.), so only A[1]
interests the RF engineer.
Since the dynamic range of A[1] can be very large, it is convenient to use dB or dBm units for the ratio. We flag the different contributors in the original figure of y versus x, but this time the
two axes are logarithmic (Figure 5).
Figure 5. The individual contributions of the terms to y, plotted on log axes.
From Figure 5, we find that:
• The term A[0] is a constant value (offset) and independent of the value of x.
• The term A[1]x is the linear portion; in a double-log-scale graph, y versus x is a straight line with offset defined by A[1] and a slope of exactly 1dB/dB (doubling x doubles y).
• The term A[2]x² is the quadratic (second-order) term. It has an offset determined by A[2] and a slope exactly twice the previous one (2dB/dB); restated, doubling the input x quadruples this term's contribution to y.
• The term A[3]x³ is the third-order part. It is a straight line in the y versus x graph with offset determined by A[3]. The slope is exactly three times that of the linear term (3dB/dB); restated, doubling x multiplies this term's contribution to y by 8.
• The same logic applies to all the following terms, and the nth-order line has a slope of n dB/dB.
Since the higher-order terms have lines with a sharper slope, sooner or later there will be a moment (a point actually) where the high-order line will cross the first-order line. The crossing points
are called intercept points (IPn).
One can easily observe that the more linear a device is, the higher the first-order line sits in the graph (compared with the other lines). Therefore, the IP points take higher values.
Graphically, this is easy to see (Figure 6). The slope is fixed, so when the device is strongly linear, the nth-order terms will be very small. (The A[n] lines start from deeper values and, hence, will cross the first-order line much later, farther out along the axes.)
Figure 6. IPn as crossing points between nth-order and first-order curves.
From Figure 6 we see that IP2 is the point where the first-order and second-order lines cross. IP3 is the point where first-order and third-order lines cross. The process continues in this fashion.
The values are read in the x or y axis. There are thus two actual values for measuring the IP point: the input or output intercept point. They are noted as:
• IIPn for nth-order input intercept point, measured on the input power axis (x)
• OIPn for nth-order output intercept point, measured on the output power axis (y)
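In practice the intercept points are not measured by driving the device all the way to the crossing; they are extrapolated from a two-tone measurement using the fixed 1 dB/dB and 3 dB/dB slopes described above. A minimal Python sketch of the usual extrapolation, with made-up power levels (not taken from the article):

    def output_ip3(p_fund_dbm, p_im3_dbm):
        # OIP3 extrapolated from one two-tone measurement.
        # p_fund_dbm : per-tone fundamental output power (dBm)
        # p_im3_dbm  : third-order intermodulation product power (dBm)
        # Because the IM3 line rises 3 dB/dB while the fundamental rises 1 dB/dB,
        # the lines meet half the observed difference above the fundamental.
        return p_fund_dbm + (p_fund_dbm - p_im3_dbm) / 2.0

    def input_ip3(oip3_dbm, gain_db):
        # IIP3 is just OIP3 referred back to the input
        return oip3_dbm - gain_db

    # hypothetical amplifier measurement
    oip3 = output_ip3(p_fund_dbm=0.0, p_im3_dbm=-40.0)   # -> 20 dBm
    iip3 = input_ip3(oip3, gain_db=15.0)                  # ->  5 dBm
    print(oip3, iip3)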
|
{"url":"http://www.eetimes.com/document.asp?doc_id=1279788","timestamp":"2014-04-17T01:13:05Z","content_type":null,"content_length":"135598","record_id":"<urn:uuid:81082153-ec8d-473f-85dd-2e6784547f6f>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00463-ip-10-147-4-33.ec2.internal.warc.gz"}
|
group collinearity
Why does clogit sometimes report a coefficient but missing value for the standard error, confidence interval, etc.?
Why is there no intercept in the clogit model?
Why can’t I use covariates that are constant within panel?
Title Within group collinearity in conditional logistic regression
Author William Gould, StataCorp
Date November 1999; updated July 2011; minor revisions July 2013
The short answer is that the variable reported with the missing standard error is “within-group collinear” with other covariates in the model. You need to drop the within-group collinear variable and
reestimate. You can verify within-group collinearity is the problem by using fixed-effects regressions on the covariates.
All of this is explained below and, along the way, we also explain why clogit sometimes produces the messages “var omitted because of no within-group variance” and “var omitted because of
The contents of this FAQ are
1. The conditional logistic model
2. Model derivation
3. Recommendation
1. The conditional logistic model
Conditional logistic regression is similar to ordinary logistic regression except the data occur in groups,
group 1:
obs. 1 outcome=1 x1 = ... x2 = ...
obs. 2 outcome=0 x1 = ... x2 = ...
group 2:
obs. 3 outcome=1 x1 = ... x2 = ...
obs. 4 outcome=0 x1 = ... x2 = ...
obs. 5 outcome=0 x1 = ... x2 = ...
group 3:
Group G:
and we wish to condition on the number of positive outcomes within group. That is, we seek to fit a logistic model that explains why obs. 1 had a positive outcome in group 1 conditional on one of the
observations in the group having a positive outcome.
In biostatistical applications, this need arises because researchers collect data on the sick and infected (the so-called positive outcomes), and then match those cases with controls who are not sick
and infected. Thus the number of positive outcomes is not a random variable. Within each group, there had to be the observed number of positive outcomes because that is how the data were constructed.
Economists refer to this same model as the McFadden choice model. An individual is faced with an array of choices and must choose one.
Regardless of the justification, we are seeking to fit a model that explains why obs. 1 had a positive outcome in group 1, obs. 3 in group 2, and so on.
2. Model derivation
We assume the unconditional probability of a positive outcome is given by the standard logit equation
Pr(positive outcome) = G(x*b)
= e^(x*b)/(1+e^(x*b)) (1)
Equation (1) is not the appropriate probability for our data because it does not account for the conditioning. In the first group, for instance, we want
Pr(obs. 1 positive and obs. 2 negative | one positive outcome)
and that is easy enough to write down in terms of the unconditional probabilities. It is
Pr(1 positive)*Pr(2 negative)
------------------------------------------------------------- (2)
Pr(1 positive)*Pr(2 negative) + Pr(1 negative)*Pr(2 positive)
From now on, when I write Pr(1 positive) and Pr(2 negative), etc., I mean the probability that observation 1 had a positive outcome, the probability that observation 2 had a negative outcome, and so
Substituting (1) into (2), we obtain
Pr(1 positive and 2 negative | one positive outcome)
e^(x1*b)
= -------------------- (3)
e^(x1*b) + e^(x2*b)
So that is the model we seek to fit. (At least, that is the term for group 1, and there are similar terms for all the other groups. I have ignored the possibility of multiple positive outcomes within
group because that just complicates things and is irrelevant to my point.)
2.1 Notation
In this FAQ, we will use the following mathematical notation. If you wish, you can skip to the next section and return here if our notation confuses you.
• Pr(1 positive), Pr(2 negative), etc.
Probability obs. 1 had a positive outcome,
Probability obs. 2 had a negative outcome, etc.
• Pr(1 positive and 2 negative | one positive outcome)
Probability obs. 1 positive and obs. 2 negative given one positive outcome in the group.
• e
2.7182818...; we will write e^anything to mean exp(anything).
• x
Vector of values of explanatory variables for an observation.
• x1, x2, etc.
Vector of values of explanatory variables for obs. 1, obs. 2, etc.
• b
Vector of coefficients.
x*b is thus the summed product of the explanatory variables with their respective coefficients.
• var1, var2, etc.
variables in the x vector.
• var1_1, var1_2, var2_1, var2_2, etc.
var1_1: value of var1 in obs. 1.
var1_2: value of var1 in obs. 2.
var2_1: value of var2 in obs. 1.
var2_2: value of var2 in obs. 2.
• a, b, c
Scalars; elements of b.
x*b = a + b*var1 + c*var2 + ...
x1*b = a + b*var1_1 + c*var2_1 + ...
• A, B, d
More scalars.
• G(x*b)
Cumulative “logistic” distribution.
2.2 Intercept
Equation (3) has an unfortunate property. Let’s pretend x, the vector of explanatory variables, includes var1 and var2. Thus our model of the probabilities is, from (1),
Pr(positive outcome) = G(a + b*var1 + c*var2)
e^(a + b*var1 + c*var2)
= ---------------------------
1 + e^(a + b*var1 + c*var2)
Equation (3), the probability for the first group is similarly
e^(a + b*var1_1 + c*var2_1)
-----------------------------------------------------------
e^(a + b*var1_1 + c*var2_1) + e^(a + b*var1_2 + c*var2_2)
e^a e^(b*var1_1) e^(c*var2_1)
= --------------------------------------------------------------
e^a e^(b*var1_1) e^(c*var2_1) + e^a e^(b*var1_2) e^(c*var2_2)
e^(b*var1_1) e^(c*var2_1)
= ------------------------------------------------------
e^(b*var1_1) e^(c*var2_1) + e^(b*var1_2) e^(c*var2_2)
where var1_1 and var1_2 are the values of var1 in observations 1 and 2, respectively.
e^a cancelled in the numerator and denominator. Whatever is the true value of the intercept, it plays no role in determining the conditional probabilities of positive outcomes within groups. a could
be 0, −10, or 57.12, and it would make no difference.
Since a plays no role, we will not be able to estimate it. In our model for the unconditional probabilities, we have
Pr(positive outcome) = G(a + b*var1 + c*var2)
                         ^   ^        ^
                         |   |        |
                         |   can be estimated by
                         |   conditional logistic
                         |
                         cannot be estimated
                         by conditional logistic
That’s too bad but most researchers do not care much about the intercept anyway.
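A quick numeric check of the cancellation (a minimal NumPy sketch with made-up numbers, not part of the original FAQ): the within-group conditional probability is computed for several values of a, and the answers coincide.

    import numpy as np

    def cond_prob_first_positive(xb):
        # Pr(obs. 1 is the positive one | exactly one positive in the group),
        # given the linear indices x*b for the observations in one group
        e = np.exp(xb)
        return e[0] / e.sum()

    rng = np.random.default_rng(0)
    x = rng.normal(size=(3, 2))            # one group: 3 observations, 2 covariates
    b = np.array([0.5, -1.2])

    for a in (0.0, -10.0, 57.12):          # any value of the intercept gives the same answer
        print(a, cond_prob_first_positive(a + x @ b))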
2.3 Within-group constants
The problem, however, can be worse than that. Say var2 is constant within group. Remember, our term for the first group is
e^(b*var1_1) e^(c*var2_1)
------------------------------------------------------
e^(b*var1_1) e^(c*var2_1) + e^(b*var1_2) e^(c*var2_2)
If var2_1==var2_2 (var2 is equal for the first two observations), then e^(c*var2) cancels, and we are left with
e^(b*var1_1)
-----------------------------
e^(b*var1_1) + e^(b*var1_2)
If this same cancellation occurs in groups 2, 3, ...—if var2 is a constant value in each group—then whatever is the true value of c, it plays no role in our model. c could be anything, and it would
not change any part of our calculation. For this problem to arise, var2 does not have to be a single constant value, it merely has to be constant within group.
So now, in our unconditional model, we have
Pr(positive outcome) = G(a + b*var1 + c*var2)
                         ^            ^
                         |            |
                         |            cannot be estimated by conditional
                         |            logistic because var2 is constant
                         |            within group
                         |
                         cannot be estimated
                         by conditional logistic
                         because constant
None of this is very surprising. The conditional logistic model attempts to explain which observations within each group had positive outcomes, and things that do not vary within group play no role
in the explanation. Moreover, there can be a real advantage in this. I may think that var2 belongs in the Pr(positive outcome) model but not know how it should be specified. Does var2 have a linear
effect c*var2 or should it be quadratic c*var2+d*var2^2 or should be in the logs c*ln(var2) or how? In the conditional logistic model, if var2 is constant within group, it drops out no matter how the
effect ought to be parameterized. This is a great advantage if my interest is in the effect of var1 and not var2.
All of this is a long explanation for why, when you fit a conditional logistic model, Stata sometimes says
. clogit outcome var1 var2 var3 ..., group(id)
note: var2 omitted because of no within-group variance
Iteration 0: ...
(model without var2 reported)
2.4 Collinearity
I want to go back to our model
Pr(positive outcome) = G(a + b*var1 + c*var2)
for which, in the first group,
Pr(1 positive and 2 negative | one positive outcome)
e^(b*var1_1) e^(c*var2_1)
= ------------------------------------------------------
e^(b*var1_1) e^(c*var2_1) + e^(b*var1_2) e^(c*var2_2)
This time, let’s assume that var1 and var2 are collinear, meaning we can write
var2 = A + B*var1
It will not surprise you to learn that we will not be able to estimate b and c. Substituting var2 = A + B*var1 into our formula for the conditional probability for group 1, we obtain
e^(b*var1_1) e^(c*(A+B*var1_1))
------------------------------------------------------------------
e^(b*var1_1) e^(c*(A+B*var1_1)) + e^(b*var1_2) e^(c*(A+B*var1_2))
e^(b*var1_1) e^(c*A) e^(c*B*var1_1)
= --------------------------------------------------------------------------
e^(b*var1_1) e^(c*A) e^(c*B*var1_1) + e^(b*var1_2) e^(c*A) e^(c*B*var1_2)
e^(b*var1_1) e^(c*B*var1_1)
= -----------------------------------------------------------
e^(b*var1_1) e^(c*B*var1_1) + e^(b*var1_2) e^(c*B*var1_2)
e^((b+c*B)*var1_1)
= -----------------------------------------
e^((b+c*B)*var1_1) + e^((b+c*B)*var1_2)
Let us write d = b+c*B. The term can then be written
e^(d*var1_1)
-----------------------------
e^(d*var1_1) + e^(d*var1_2)
This is just what the term would look like if we estimated on var_1 alone. Thus to fit this model we could
1. Estimate on var1 alone to obtain d.
2. Solve d = b+c*B to obtain b and c.
The problem occurs in step 2. We have one equation and two unknowns (b and c).
All of this is a long explanation for why, when you fit a conditional logistic model, Stata sometimes says
. clogit outcome var1 var2 var3 ..., group(id)
note: var2 omitted because of collinearity
Iteration 0: ...
(model without var2 reported)
2.5 Within-group collinearity
The conditional logistic model is subject to another form of collinearity. As before, let us assume
Pr(positive outcome) = G(a + b*var1 + c*var2)
but this time var1 and var2 are *NOT* collinear,
var2 *IS NOT EQUAL TO* A + B*var1
Instead, however, let us assume that, for each group
var2 = A_g + B*var1
That is, var1 and var2 are linearly related in the first group, linearly related in the second group, and so on. The coefficient B multiplying var1 is the same across groups but the intercept A is
allowed to differ.
If you go back through the algebra for the simple collinearity case, you will note that it is all applicable because only the within-group collinearity of var1 and var2 were used.
The final equation still holds. The conditional probability for the first group can be written
e^((b+c*B)*var1_1)
-----------------------------------------
e^((b+c*B)*var1_1) + e^((b+c*B)*var1_2)
and again, this is just what the term would look like if we estimated on var_1 alone.
All of this is an explanation for why, when you fit a conditional logistic model, Stata sometimes says
. clogit outcome var1 var2
note: var2 omitted because of collinearity.
Iteration 0: ...
3. Recommendation
If you suspect this kind of collinearity,
1. Take the variable that was dropped—let’s call it var2—and estimate a fixed-effects regression on all the other independent variables using xtreg with the fe option:
. xtset group
. xtreg var2 ..., fe
2. If you obtain an R-sq within of 1, then you do have within-group collinearity. You will have to admit that you cannot estimate the var2 effect. Refit your clogit model, omitting the variable.
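If Stata is not at hand, the same check can be coded directly: demean var2 and the other covariates within group (the within transformation behind xtreg, fe), run OLS on the demeaned data, and look at the within R-squared. A rough NumPy sketch; the variable names in the commented call are hypothetical.

    import numpy as np

    def within_r2(y, X, group):
        # R-squared of the fixed-effects (within) regression of y on X
        y = np.asarray(y, dtype=float).copy()
        X = np.asarray(X, dtype=float).copy()
        group = np.asarray(group)
        for g in np.unique(group):
            m = group == g
            y[m] -= y[m].mean()
            X[m] -= X[m].mean(axis=0)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        tss = (y ** 2).sum()
        return 1.0 - (resid ** 2).sum() / tss if tss > 0 else 0.0

    # A within R-squared of (numerically) 1 signals within-group collinearity:
    # r2 = within_r2(var2, np.column_stack([var1]), group_id)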
|
{"url":"http://www.stata.com/support/faqs/statistics/within-group-collinearity-and-clogit/","timestamp":"2014-04-16T04:35:46Z","content_type":null,"content_length":"41013","record_id":"<urn:uuid:ea4c2fe4-625e-4c2e-9543-1b64c6995583>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00528-ip-10-147-4-33.ec2.internal.warc.gz"}
|
P(H heads in a row) given N coin flips
October 3rd 2009, 03:04 PM #1
Oct 2009
P(H heads in a row) given N coin flips
I recently thought of the following problem and my basic grasp of probability and combinatorics is not enough to handle it:
You flip a coin N times. What is the probability P(N,H) that you get at least one sequence of at least H heads in a row?
Since this question deals with the number of times a result will happen, I tried to use a Poisson Distribution, but I'm having trouble separating the problem into mutually exclusive events.
I wrote a program to do a simulation and found that P(20,7) ≈ 0.058, so please make sure your answer fits with this result.
Thank you for your help!
Last edited by lambda; October 3rd 2009 at 03:15 PM. Reason: added what I've tried
I would do it this way: treat the "H heads in a row" as a single object and call it "R" for "row". With N flips and "H heads in a row", there are N- H other flips. How many different ways are
there to order 1 object and N- H other objects?
Problem - counts certain cases twice
Thank you for your reply.
I tried this approach and got $P(n,h) = \frac{acceptable\_series}{total\_possible\_series} = \frac{2^{n-h}(n-h + 1)}{2^n} = \frac{n-h + 1}{2^h}$ but kept getting answers that were roughly 2x too large.
I've realized that this approach counts certain answers twice. For example, for n=7 and h=3, the series HHHHHHH is counted 5 times, as RHHHH, HRHHH, HHRHH, HHHRH, and HHHHR. Do you have any tips
on how to fix this?
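One standard way around the double counting is to count the complement instead: sequences of length n with no run of H heads satisfy f(n) = f(n-1) + ... + f(n-H), with f(n) = 2^n for n < H. A short Python sketch of that recurrence; it should land close to the simulated P(20,7) ≈ 0.058.

    def prob_run(N, H):
        # P(at least one run of H heads in N fair coin flips)
        f = [0] * (N + 1)                   # f[n] = number of length-n sequences with no run of H heads
        for n in range(N + 1):
            if n < H:
                f[n] = 2 ** n               # a run of H heads is impossible this early
            else:
                f[n] = sum(f[n - k] for k in range(1, H + 1))
        return 1 - f[N] / 2 ** N

    print(prob_run(20, 7))                  # roughly 0.058, matching the simulation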
|
{"url":"http://mathhelpforum.com/statistics/105884-p-h-heads-row-given-n-coin-flips.html","timestamp":"2014-04-19T01:49:26Z","content_type":null,"content_length":"38634","record_id":"<urn:uuid:02a86dea-8103-43e7-b5c5-fccb5e62a805>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00225-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[SciPy-User] Numpy pickle format
Robert Kern robert.kern@gmail....
Wed Nov 24 16:21:36 CST 2010
On Wed, Nov 24, 2010 at 16:00, David Baddeley
<david_baddeley@yahoo.com.au> wrote:
> I was wondering if anyone could point me to any documentation for the (binary)
> format of pickled numpy arrays.
> To put my request into context, I'm using Pyro to communicate between python and
> jython, and would like push numpy arrays into the python end and pull something
> I can work with in jython out the other end (I was thinking of a minimal class
> wrapping the std libraries array.array, and having some form of shape property
> (I can pretty much guarantee that the data going in is c-contiguous, so there
> shouldn't be any strides nastiness).
> The proper way to do this would be to convert my numpy arrays to this minimal
> wrapper before pushing them onto the wire, but I've already got a fair bit of
> python code which pushes arrays round using Pyro, which I'd prefer not to have
> to rewrite. The pickle representation of array.array is also slightly different
> (broken) between cPython and Jython, and although you can pickle and unpickle,
> you end up swapping the endedness, so to recover the data [in the Jython ->
> Python direction] you've got to create a numpy array and then a view of that
> with reversed endedness.
> What I was hoping to do instead was to construct a dummy numpy.ndarray class in
> jython which knew how to pickle/unpickle numpy arrays.
> The ultimate goal is to create a Python -> ImageJ bridge so I can push images
> from some python image processing code I've got across into ImageJ without
> having to manually save and open the files.
|3> a = np.arange(5)
|4> a.__reduce_ex__()
(<function numpy.core.multiarray._reconstruct>,
(numpy.ndarray, (0,), 'b'),
|6> a.dtype.__reduce_ex__()
(numpy.dtype, ('i4', 0, 1), (3, '<', None, None, None, -1, -1, 0))
See the pickle documentation for how these tuples are interpreted:
|12> x = np.core.multiarray._reconstruct(np.ndarray, (0,), 'b')
|13> x
array([], dtype=int8)
|14> x.__setstate__(Out[11][2])
|15> x
array([0, 1, 2, 3, 4])
|16> x.__setstate__?
Type: builtin_function_or_method
Base Class: <type 'builtin_function_or_method'>
String Form: <built-in method __setstate__ of numpy.ndarray object
at 0x387df40>
Namespace: Interactive
a.__setstate__(version, shape, dtype, isfortran, rawdata)
For unpickling.
version : int
optional pickle version. If omitted defaults to 0.
shape : tuple
dtype : data-type
isFortran : bool
rawdata : string or list
a binary string with the data (or a list if 'a' is an object array)
In order to get pickle to work, you need to stub out the types
numpy.dtype and numpy.ndarray, and the function
numpy.core.multiarray._reconstruct(). You need numpy.dtype and
numpy.ndarray to define appropriate __setstate__ methods.
Check the functions arraydescr_reduce() and arraydescr_setstate() in
numpy/core/src/multiarray/descriptor.c for how to interpret the state
tuple for dtypes. If you're just dealing with straightforward image
types, then you really only need to pay attention to the first element
(the data kind and width, 'i4') in the argument tuple and the second
element (byte order character, '<') in the state tuple.
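A compact way to see the whole protocol end to end (an illustrative sketch, plain CPython + numpy):

    import numpy as np

    a = np.arange(5, dtype=np.int32)

    # What pickle records for an ndarray: a reconstruction function,
    # its arguments, and a state tuple that __setstate__ consumes.
    reconstruct, args, state = a.__reduce_ex__(2)

    b = reconstruct(*args)       # empty placeholder array
    b.__setstate__(state)        # (version, shape, dtype, isFortran, rawdata)

    assert b.dtype == a.dtype and (b == a).all()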
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco
More information about the SciPy-User mailing list
|
{"url":"http://mail.scipy.org/pipermail/scipy-user/2010-November/027640.html","timestamp":"2014-04-20T13:48:52Z","content_type":null,"content_length":"6633","record_id":"<urn:uuid:8f4fbcf1-e5c7-4f99-a016-1049cbc95720>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00511-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Lifting a piece from Spanos’ contribution* will usefully add to the mix
The following two sections from Aris Spanos’ contribution to the RMM volume are relevant to the points raised by Gelman (as regards what I am calling the “two slogans”)**.
6.1 Objectivity in Inference (From Spanos, RMM 2011, pp. 166-7)
The traditional literature seems to suggest that ‘objectivity’ stems from the mere fact that one assumes a statistical model (a likelihood function), enabling one to accommodate highly complex
models. Worse, in Bayesian modeling it is often misleadingly claimed that as long as a prior is determined by the assumed statistical model—the so called reference prior—the resulting inference
procedures are objective, or at least as objective as the traditional frequentist procedures:
“Any statistical analysis contains a fair number of subjective elements; these include (among others) the data selected, the model assumptions, and the choice of the quantities of interest. Reference
analysis may be argued to provide an ‘objective’ Bayesian solution to statistical inference in just the same sense that conventional statistical methods claim to be ‘objective’: in that the solutions
only depend on model assumptions and observed data.” (Bernardo 2010, 117)
This claim brings out the unfathomable gap between the notion of ‘objectivity’ as understood in Bayesian statistics, and the error statistical viewpoint. As argued above, there is nothing
‘subjective’ about the choice of the statistical model M[θ](z) because it is chosen with a view to account for the statistical regularities in data z[0], and its validity can be objectively assessed
using trenchant M-S testing. Model validation, as understood in error statistics, plays a pivotal role in providing an ‘objective scrutiny’ of the reliability of the ensuing inductive procedures.
Objectivity does NOT stem from the mere fact that one ‘assumes’ a statistical model. It stems from establishing a sound link between the process generating the data z[0] and the assumed M[θ](z), by
securing statistical adequacy. The sound application and the objectivity of statistical methods turns on the validity of the assumed statistical model M[θ](z) for the particular data z[0.] Hence, in
the case of ‘reference’ priors, a misspecified statistical model M[θ](z) will also give rise to an inappropriate prior π(θ).
Moreover, there is nothing subjective or arbitrary about the ‘choice of the data and the quantities of interest’ either. The appropriateness of the data is assessed by how well data z[0] correspond
to the theoretical concepts underlying the substantive model in question. Indeed, one of the key problems in modeling observational data is the pertinent bridging of the gap between the theory
concepts and the available data z[0] (see Spanos 1995). The choice of the quantities of interest, i.e. the statistical parameters, should be assessed in terms of the statistical adequacy of the
statistical model in question and how well these parameters enable one to pose and answer the substantive questions of interest.
For error statisticians, objectivity in scientific inference is inextricably bound up with the reliability of their methods, and hence the emphasis on thorough probing of the different ways an
inference can go astray (see Cox and Mayo 2010). It is in this sense that M-S testing to secure statistical adequacy plays a pivotal role in providing an objective scrutiny of the reliability of
error statistical procedures.
In summary, the well-rehearsed claim that the only difference between frequentist and Bayesian inference is that they both share several subjective and arbitrary choices but the latter is more honest
about its presuppositions, constitutes a lame excuse for the ad hoc choices in the latter approach and highlights the huge gap between the two perspectives on modeling and inference. The
appropriateness of every choice made by an error statistician, including the statistical model M[θ](z) and the particular data z[0], is subject to independent scrutiny by other modelers.
6.2 ‘All models are wrong, but some are useful’
A related argument—widely used by Bayesians (see Gelman, this volume) and some frequentists—to debase the value of securing statistical adequacy, is that statistical misspecification is inevitable
and thus the problem is not as crucial as often claimed. After all, as George Box remarked:
“All models are false, but some are useful!”
A closer look at this locution, however, reveals that it is mired in confusion. First, in what sense ‘all models are wrong’?
This catchphrase alludes to the obvious simplification/idealization associated with any form of modeling: it does not represent the real-world phenomenon of interest in all its details. That,
however, is very different from claiming that the underlying statistical model is unavoidably misspecified vis-à-vis the data z[0.] In other words, this locution conflates two different aspects of
empirical modeling:
(a) the ‘realisticness’ of the substantive information (assumptions) comprising the structural model M[φ](z) (substantive premises), vis-à-vis the phenomenon of interest, with
(b) the validity of the probabilistic assumptions comprising the statistical model M[θ](z) (statistical premises), vis-à-vis the data z[0] in question.
It’s one thing to claim that a model is not an exact picture of reality in a substantive sense, and totally another to claim that this statistical model M[θ](z) could not have generated data z[0]
because the latter is statistically misspecified. The distinction is crucial for two reasons. To begin with, the types of errors one needs to probe for and guard against are very different in the two
cases. Substantive adequacy calls for additional probing of (potential) errors in bridging the gap between theory and data. Without securing statistical adequacy, however, probing for substantive
adequacy is likely to be misleading. Moreover, even though good fit/prediction is neither necessary nor sufficient for statistical adequacy, it is relevant for substantive adequacy in the sense that
it provides a measure of the structural model’s comprehensiveness (explanatory capacity) vis-à-vis the phenomenon of interest (see Spanos 2010a). This indicates that part of the confusion pertaining
to model validation and its connection (or lack of) to goodness-of-fit/prediction criteria stem from inadequate appreciation of the difference between substantive and statistical information.
Second, how wrong does a model have to be to not be useful? It turns out that the full quotation reflecting the view originally voiced by Box is given in Box and Draper (1987, 74):
“[. . . ] all models are wrong; the practical question is how wrong do they have to be to not be useful.”
In light of that, the only criterion for deciding when a misspecified model is or is not useful is to evaluate its potential unreliability: the implied discrepancy between the relevant actual and
nominal error probabilities for a particular inference. When this discrepancy is small enough, the estimated model can be useful for inference purposes, otherwise it is not. The onus, however, is on
the practitioner to demonstrate that. Invoking vague generic robustness claims, like ‘small’ departures from the model assumptions do not affect the reliability of inference, will not suffice because
they are often highly misleading when appraised using the error discrepancy criterion. Indeed, it’s not the discrepancy between models that matters for evaluating the robustness of inference
procedures, as often claimed in statistics textbooks, but the discrepancy between the relevant actual and nominal error probabilities (see Spanos 2009a).
In general, when the estimated model M[θ](z) is statistically misspecified, it is practically useless for inference purposes, unless one can demonstrate that its reliability is adequate for the
particular inferences.
*A. Spanos 2011, “Foundational Issues in Statistical Modeling: Statistical Model Specification and Validation, ” RMM Vol. 2, 2011, 146–178, Special Topic: Statistical Science and Philosophy of
**Note: Aspects of the on-line exchange between me and Senn are now published in RMM; comments you post or send for the blog (on any of the papers in this special RMM volume) can, if you wish, similarly be considered for inclusion in the discussions in RMM.
43 thoughts on “Lifting a piece from Spanos’ contribution* will usefully add to the mix”
Mayo, this is a nice discussion by Spanos.
But how does it solve Berkson paradox? Given enough data, isn’t every model going to be statiscally misspecified?
Let me try to be more clear. Let’s use Spanos example, CAPM.
At page 161, he rejects almost all assumptions of the model, the only exception is linearity. So, all the inferences made within the model are called into question, because the error
probabilities are not properly controlled.
Now let’s suppose for a minute that CAPM had passed all tests. Then, by analogy, all inferences would be correct (in the sense that errors probabilities were controlled).
But,like Berkson argued, it’s likely that our model isn’t just exactly right. We just didn’t have enough data to reject our assumption. Suppose, then, that some years after evaluating CAPM as
“statistically adequate”, we get more data. And then we end up rejecting every assumption (like Spanos did in page 161). Then, what we had thought as valid inferences before, wouldn’t be valid
inferences anymore, right?
Well, if a practitioner gets this kind of "disappointment" very often, like Berkson, then he would extrapolate this kind of thinking to every model he is evaluating.
The Berkson paradox stems from assuming that a rejection provides equally good evidence against the null hypothesis, irrespective of the sample size, which is false. Such reasoning ignores
the fact that when one rejects a null hypothesis with a smaller sample size (or a test with lower power), it provides better evidence for a particular departure d from the null than rejecting
it with a larger sample size. This intuition is harnessed by Mayo's severity assessment in order to quantify a rejection (or acceptance) in terms of the discrepancy from the null warranted
by the particular data; see Mayo and Spanos (2006).
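For readers who want the severity evaluation in symbols: for the one-sided Normal test T+ (H0: mu <= mu0, sigma known), with observed statistic d0 = sqrt(n)*(xbar - mu0)/sigma and a rejection, the severity with which the claim "mu > mu0 + gamma" passes is SEV(gamma) = Phi(d0 - sqrt(n)*gamma/sigma). A small Python sketch with purely made-up numbers (d0 = 2, sigma = 1), just to show how the warranted discrepancy shrinks as n grows; it is not tied to any of the examples discussed here.

    from math import sqrt
    from statistics import NormalDist

    def severity(d0, n, gamma, sigma=1.0):
        # SEV(mu > mu0 + gamma) after rejecting H0: mu <= mu0 with statistic d0
        return NormalDist().cdf(d0 - sqrt(n) * gamma / sigma)

    d0 = 2.0
    for n in (10, 10000):
        grid = [g / 1000 for g in range(0, 2001)]
        g90 = max(g for g in grid if severity(d0, n, g) >= 0.90)
        print(n, round(g90, 3))   # largest discrepancy passing with severity 0.90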
In relation to the CAPM example, I have used a much larger sample size (n=1318) to test the theory model by respecifying the underlying statistical model; the original n=64 was totally
inadequate for “capturing” the temporal structure and heterogeneity in the data. It turns out that one needs to go beyond the Normal/static family of models into a Student’s t dynamic model
with mean heterogeneity to find a statistically adequate model. On the basis of the latter model the substantive restrictions of the CAPM are strongly rejected. I will be happy to share the
data with anybody who wants to test the statistical model assumptions independently.
Final note: I will urge people to avoid expressions like “a model or a hypothesis is exactly right”; they are misplaced in the context of inductive (statistical) inference.
Thank you, Spanos.
I’m reading your 2007 paper on curve fitting and I got interested in Kepler’s example.
Berger’s book (1985) had mentioned Kepler as a model that it is seen as good but it would be statistically rejected today.
In your paper, you tested the statistical adequacy of Kepler’s model with n=28 and it passed.
With a bigger n, wouldn’t we eventually reject Kepler’s model just as you rejected Ptolemy’s geocentric model?
I have tested the Kepler model using modern data (n=700) and it passes all M-S tests with flying colors! By the way the Ptolemy model is statistically misspecified even when the M-S
testing is perfomed using much smaller samples sizes, say n=40! Hence, M-S testing is not about sample sizes, as long as there are enough observations to render the selected M-S tests
effective (poweful) enough!
Oh, I would like to ask something else too!
In Spanos review of Ziliak and McCloskey’s book
also page 161 (what are the odds?) there’s this passage:
“These two examples demonstrate how the same test result τ(x0)=1.82, arising from two different sample sizes, n=10000 and n=10, can give rise to widely different ‘severely passed’ claims
concerning the warranted substantive discrepancy: γ 1.1, respectively.”
But wouldn’t it be γ<1.1?
Because we didn't reject H0, so we could say that the discrepancy is, with high severity (0.9 or higher), below 1.1. Not above 1.1.
also the passage:
“[...] the minimum discrepancy warranted by data x0 is γ >1.1″
Wouldn’t it be
“[...] the maximum discrepancy warranted by data x0 is γ <1.1"
Never mind the last two questions hehe
I thought that Spanos had said the data supported a discrepancy higher than 1.1, and I thought it very odd… but he just said that values above 1.1 passed the severity test that u<γ. Hence, we could
say that the minimum discrepancy with "high" severity is 1.1.
Glad you straightened the last two questions out…but on the general question, I have this to say:
I think there is very often a confusion between asserting a hypothesis H is false (or , in some cases, asserting H is discrepant by more than such and such) and asserting that a procedure can or
will find it false. This occurs, for example, when it is proclaimed that any null hypothesizing 0 correlation of some sort, is invariably false.
That we use models, or language altogether for that matter, already means we are viewing reality through a “framework”, but given a linguistic (or modeling) framework, whether or not a hypothesis
is true/false approximately true/false, adequate/inadequate, etc, is a matter of what is the case.
If you’ve got a procedure that will always declare a hypothesis false, even when it is true, then it’s declaring H is false, is very poor evidence for it’s falsity. “H is false” passes a test
with minimal severity.
Note, by the way, that in order to be warranted in finding a scientific theory false, it is required to affirm as true (or approximately true) a “falsifying hypothesis” with severity. See “no
pain” philosophy. Sorry, to be dashing in undergrounds—did not want to ignore blog entirely.
I continue to be impressed with this framework, but every time I ask myself why I’m not doing this myself (or rather, teaching myself how to do it from Spanos’s papers) I think, “But what about
Cox’s theorem?”
Cox’s theorem establishes probability theory as an extension of classical logic to reasoning under uncertainty about the truth values of the propositions under consideration. In particular,
the theorem shows that any univariate real-valued measure of plausibility that is not equivalent to Bayes must violate some compelling (to me) desiderata for reasoning under uncertainty. I
regard science as the endeavour of reducing uncertainty in our knowledge of the way the universe works; Cox’s theorem shows me how to do the data analysis part of this job quantitatively.
To the extent that things like M-S tests and severity require p-values as univariate real-valued measures of plausibility, they must conflict with the desiderata.
Oy and No—yet very glad you raised this. That ASSUMES an ungodly amount of assumptions (e.g., about assigning probabilities to everything as measures of “plausibility”, taking all bets,
coherence, …on and on..and on…which Cox certainly does not accept. Please check the “ifs” of the “theorem”. Further, deductive updating via Bayes theory, like all deductive moves, always results
in no more info than the premises contained—so no growth of knowledge. But, on the other hand, there could be a challenge here…for classical Bayesians. Error Statisticians have an alternative
philosophy of scientific and inductive inference, do the Bayesians? Are they essentially seeking foundations under the error statistical umbrella.
The Cox in question isn’t Sir David Cox — it’s Richard T. Cox, an American physicist who studied electric eels.
Bets don’t enter into the desiderata. In this it is quite different from de Finetti’s “bets plus Dutch-book-style coherence” approach and Savage’s “bets plus rational preferences” approach.
Same argument, same problem. You’ll have to demonstrate why you think it applies to severity. I can show that probability logic is the wrong logic for assessment of well-testedness. Example:
Irrelevant conjunctions get some support from data that well-corroborates one conjunct, even if it is irrelevant to the other.
I’m not sure why you state “same argument, same problem.” You wrote that “the argument assumes an ungodly number of assumptions,” but the argument that proves Cox’s theorem doesn’t have a list of
premises that runs "on and on… and on". In fact, I'd be interested to know which specific premise(s) of the theorem you reject (you can find a nice intro to the entire argument here).
The other problems you state don’t trouble me: the growth of knowledge comes from the accumulation of data. Propositional logic is a set of rules of inference that can operate on previously
unknown data, generating novel deductions, and Bayes is an extension of propositional logic, generating novel plausible inferences.
As to conjunctions get some support from data that well-corroborates one conjunct, even if it is irrelevant to the other, I’m not seeing the problem — that’s just as it should be, as far as I can
tell. This is not dangerous because a conjunction cannot be more probable than any of its conjuncts. Perhaps you can point me to an example from one of your papers.
As for a specific example, someone posted a link in the comments of a previous post to a paper (by Morris deGroot, I think?) that shows how p-values fail in this regard. I’ll have to dig it up
and rephrase it in terms of severity.
Ah, no — the p-value paper was by Mark Schervish, referenced but not linked by commenter “Guest” in this post.
@Corey, if you haven’t seen it already, John Cook nicely restates Schervish’s example. But the two hypotheses are tested with different power (i.e. severity) which *should* also affect
the interpretation of their p-values …even though it’s no caricature to say this doesn’t happen much in practice.
Thanks for the link, Guest. (Severity isn't the same as power — in the normal model it's mathematically related to the confidence distribution.)
Sorry I missed this, I need to redirect comments from Elba to me. I see others more or less addressed points except for when you wrote (in your March 10 comment):
“As to conjunctions get some support from data that well-corroborates one conjunct, even if it is irrelevant to the other, I’m not seeing the problem — that’s just as it should be, as far as
I can tell. This is not dangerous because a conjunction cannot be more probable than any of its conjuncts. Perhaps you can point me to an example from one of your papers.”
The danger is that if x confirms H & J, merely because x well-corroborates H, but is utterly irrelevant to J, then x confirms J, even a little, and so anything that confirms something
confirms anything. Moreover, we want an account that lets us say/show that you have carried out a terrible test of J, one which will easily find some support for J even if J is false.
In general, as I have been arguing, probability logic, which is deductive, does a very poor job of capturing ampliative 9evidence-transcending) inference.
I can’t make any sense of your comment, which suggests that you and I are using the same words to mean different things. I’ll explain how I’m using the key words and why your statements
make no sense under my definitions, and then maybe we can isolate the problem. To disambiguate, I’ll use the prefix “p-” for my sense of the key words. All of my probability expressions
will explicitly condition on a prior state of information Z.
Evidence x is said to p-support (p-confirm, p-well-corroborate, etc.) proposition A (given prior information Z) iff Pr(A | x, Z) > Pr(A | Z).
Evidence x is said to be p-irrelevant to the plausibility of proposition A (given prior information Z) iff for any proposition B, Pr(A | x, B, Z) = Pr(A | B, Z). An immediate implication
is Pr(A | x, Z) = Pr(A | Z).
Now consider the p-support offered to a conjunction H & J by evidence x. By the definition of conditional probability,
Pr(H & J | x, Z) / Pr(H & J | Z) = [ Pr(H | x, Z)*Pr(J | x, H, Z) ] / [ Pr(H | Z)*Pr(J | H, Z) ].
Your claim was “if x confirms H & J, merely because x well-corroborates H, but is utterly irrelevant to J, then x confirms J, even a little, and so anything that confirms something
confirms anything.” Using my sense of the words, this claim is false. If x is p-irrelevant to J, then by definition Pr(J | x, H, Z) = Pr(J | x, Z) and
Pr(H & J | x, Z) / Pr(H & J | Z) = Pr(H | x, Z) / Pr(H | Z),
that is, x p-supports H & J to the precise degree that it p-supports H alone. Furthermore, as noted above, Pr(J | x, Z) = Pr(J | Z), that is, x does not p-support J.
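A small numerical illustration of the same point (my own toy joint distribution, not from the comment): x is constructed to be p-irrelevant to J but informative about H, and the three support ratios come out exactly as the algebra says.

    from itertools import product

    # Joint distribution over (H, J, x): J independent of (H, x); probabilities are made up.
    p = {}
    for h, j, x in product([0, 1], repeat=3):
        ph = 0.5
        pj = 0.3 if j else 0.7
        px_h = (0.8 if x else 0.2) if h else (0.4 if x else 0.6)
        p[(h, j, x)] = ph * pj * px_h

    def pr(pred):
        return sum(v for k, v in p.items() if pred(*k))

    def pr_given_x1(pred):
        return pr(lambda h, j, x: x == 1 and pred(h, j, x)) / pr(lambda h, j, x: x == 1)

    def support(pred):
        # Pr(A | x=1) / Pr(A): greater than 1 means x=1 p-supports A
        return pr_given_x1(pred) / pr(pred)

    print(support(lambda h, j, x: h == 1))             # > 1: x supports H
    print(support(lambda h, j, x: h == 1 and j == 1))  # same ratio: H & J supported exactly as much as H
    print(support(lambda h, j, x: j == 1))             # == 1: x does not support J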
I just noticed a typo. I wrote:
“If x is p-irrelevant to J, then by definition Pr(J | x, H, Z) = Pr(J | x, Z)…”
That sentence should read:
“”If x is p-irrelevant to J, then by definition Pr(J | x, H, Z) = Pr(J | H, Z)…”
Yes, we’re speaking a different language. SEV is not a probability assignment, and it gets the logic right as regards the problem of irrelevant conjuncts. My problem is already with
claiming x supports H & J, with J irrelevant. But one can also go further and notice that the conjunction entails the conjunct J. Will write when I am in one place: unexpected travel.
Thank you again, Spanos! I was goint to cite Berger’s example on my dissertation, but then I ran into your paper and now your comment definetely made me think twice about that.
But I’m still trying to understand some things here.
Let’s suppose our model is statistically adequate. Then, I can see how severity assessment can prevent one to make rejection or acception fallacies that are so commonly made nowadays (fruits of
the hybrid incoherent “null ritual” as defined by Gigerenzer).
But I still didn’t get how to assess severity in the very statistical adequacy we need !
For example, how can I distinguish “how far” my model is from normality? By looking only the p-value of a M-S test? How do I draw the line, when is it 5% or when is it, say, 19%?
Because, as far as I am making a judgement about a parameter, say, a price elasticity, I can, as an economist, say when it is small and when it is big. So, in the error statistics framework, I
could see how an economist could judge, with severity, if the rejection of a null has substantive meaning.
But as to departures from normality, what is small and what is big is still not very clear.
For example, in Kepler’s model, the non-normality p-value was arround 10%… why isn’t it significant?
Thanks agains!
I apologize for the delay in replying; I have been traveling for the last two days. The severity assessment has a role to play in the context of M-S testing. For example, in the case of testing
Normality the test statistic is based on a combination of: skewness=0 and kurtosis=3. One can establish the discrepancy from this null warranted by the particular data using the post-data
severity evaluation. What is “small” and “big” for this context is not particularly difficult to determine. For instance, a discrepancy from skewness bigger than .5 should be considered serious;
it will imply serious asymmetry. Similarly, a discrepancy of bigger than plus or minus 1 from kurtosis=3 is also considered serious enough. In the case of the empirical example with Kepler’s fist
law the warranted discrepancies from the skewness and kuortosis were considerably less than these thresholds ensuring that there is no serious departure from the symmetry or the mesokurtosis of
the Normal distribution. Of course, one should always apply several different tests for the different model assumptions, both individually and jointly, as a cross-check in order to ensure a more
reliable assessment. In the case of Kepler’s law I supplemented the skewness-kurtosis test with a nonparametric test [Shapiro-Wilks] which confirmed the result of no evidence against Normality.
Having done a lot of research with financial data, I can testify that Normality is almost never valid for such data because they often exhibit leptokurtosis [kurtosis > 3] and one has to use
distributions like the Student’s t when symmetry is not a problem.
Thank you, I think it’s getting clearer now!
Let me see if I got it right.
For instance, the Jarque-Bera test assumes S=0 and K=3. Let's suppose our sample is large enough for the asymptotics of the test to be "just fine".
Usually, what people do is to just check if the p-value<5% and reject normality. That, if I got it right, can be very misleading. A better practice would be to see what discrepancies of S and
K are warranted.
So, let's say my sample was huge, two hundred thousand.
Then, I could get a very small p-value, p<0.00001. But, still, if it turns out that the discrepancies warranted for S and K are around 0.1, then I could say that the departures, though statistically different from S=0 and K=3, are not "significant" in the substantive sense that, for inference purposes, the nominal error probabilities are not very different from the real error probabilities. Is that it?
If it is, I think this approach can indeed make classical testing more sensible.
You now have the gist of how one can avoid being misled by inferences results due to a large sample size! Indeed, it can work the other way around when the sample is small and the M-S
test used does not have enough power to detect an existing departure. The post data severity evaluation addresses both problems simultaneously.
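The large-sample point is easy to reproduce. In the sketch below (my own illustration using SciPy, not Spanos's data), two hundred thousand draws from a mildly skewed gamma distribution give a Jarque-Bera p-value that is essentially zero, even though the sample skewness (about 0.14) and excess kurtosis (about 0.03) are far inside the thresholds of 0.5 and 1 mentioned above:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    x = rng.gamma(shape=200.0, scale=1.0, size=200_000)    # population skewness = 2/sqrt(200), about 0.14

    jb_stat, jb_p = stats.jarque_bera(x)
    print("skewness        :", stats.skew(x))
    print("excess kurtosis :", stats.kurtosis(x))          # 0 under normality (Fisher definition)
    print("JB p-value      :", jb_p)                       # essentially zero despite trivial discrepancies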
Hi, Mayo,
What do you think about this paper from Hoenig?
See page 3.
He thinks it's "nonsensical" to do post-data power analysis because it contradicts the notion of "p-value" as a measure of evidence against the null.
His example is two estimates of the same effect size, but in one of them the standard error is lower. Then, in this one, the Z statistic is higher and the p-value is lower.
But, if one calculates post-data power, he would find out that the more precise estimate, with the higher Z, gives a lower upper bound for the true effect size. And he thinks this is "nonsense" because if the p-value is lower, then there should be more evidence against the null.
My opinion is that it is his notion of “p-value” as measure of evidence that is flawed – see Schervish (1996) or Spielman (1973).
What do you think?
Oy! Please see my posts on “shpower”. You will see that the fault lies not in power but in this completely wrong-headed invention “shpower”. His notion is equivalent to “shpower”.
Thanks, got it!
Reply to Mayo, April 2 (nesting limit reached):
Let me see if I understand your position. You claim that a reasonable definition of support ought to be such that if evidence x supports claim H but is irrelevant to J, then x does not support
the conjunction H & J. This is because H & J entails J, so under any reasonable definition of support, any evidence that supports H & J also supports J. Do I have that right?
If I have that right, how crucial would you say the above claim is to your motivation for developing the severity framework?
Cut out everything after “This is because”. Then write: this is because the experiment that produced x has done nothing to rule out the falsity of J. The entailment point, is just another
known consequence of all such accounts. Moving in car…will write more later.
When you wrote “Cut out everything before” did you mean everything *after*?
yes, “after”, fixed it (told you I was moving in a vehicle)
Is more forthcoming? (I ask because I have a response drafted, but I was waiting for you to complete your comment.)
Corey: Sorry, forgot to get back to this, but I plan to be talking about the very idea of using probability as an inductive support measure of some sort. It is a view I reject, but I am happy to
allow logicians of support or belief or the like to have their research program, and talk instead about an account of well-testedness, warranted, evidence, corroboration, severe testing or the
like. Hopefully statistically -minded scholars will consider the mileage that they can bring to this (admittedly) different philosophy of ampliative inference. If an account regards data x as
warranting hypothesis J, even a little bit, when in fact the procedure had 0 capability to have resulted in denying J is warranted, even were J false–, then the procedure producing x provides no
test at all for J. One can put it in zillions of ways but that is essentially the weakest requirement for scientific evidence in my philosophy. See my Popper posts. Of course, in practice, I do
not think scientists would contemplate the evidence x supplies to a conjunction when J is utterly irrelevant, and so the two do not share data models (e.g., general relativity and prion theory
entail light deflection. Ordinary first-order logic with material conditionals is quite inadequate for scientific reasoning (e.g., false antecedents yield true conditionals).
You’re simply wrong that probabilities gets conjunctions wrong. Corey demonstrated that they get it right explicitly above.
Also, probabilities get your “weakest requirement for scientific evidence” right as well. To see why note that it’s not possible for both of these to be true:
(1) There is no x that supports J being false.
(2) There is some x that supports J being true.
Using Corey’s notation these become:
(1) For all x, P(not J|x,Z) <= P(not J|Z)
(2) There is some x' such that P(J|x',Z) > P(J|Z)
Roughly these say "there is no x that would increase the chance that J is false" and "there is some x' that increased the chance J is true". You can show in a few lines that (1) and (2) together imply the contradiction P(J|Z) > P(J|Z).
So you can’t have (1) and (2) both be true.
Moreover, these are not isolated examples. There are famous books by Polya and Jaynes which give example after example where probabilities get these kinds of things right. Many of these are real
world and not just toy examples.
Of course all the examples in the world don't prove that probabilities will always work this way. But then there is Cox's theorem, which Corey mentioned above. Basically, anytime you try to reason by assigning real numbers to hypotheses (it doesn't matter what you call those real numbers: they could be "evidence" or "belief" or "severity" or whatever) then these numbers will fit into a
formalism identical to probability theory or lead to some embarrassing conclusions.
So regardless of how you feel about Cox’s theorem, at the very least it does imply that the formalism of probability is going to be able to handle or mimic how we do reason a surprising amount of
the time (otherwise nothing like Cox’s theorem would be true).
At any rate, the demonstrations me and Corey gave are a few lines of elementary mathematics. There shouldn’t be any disagreement about those.
I don’t assign numbers to hypotheses, only to methods, e.g., tests. The problem of “irrelevant conjunction” is a standard one for probabilists and it has been discussed in scads of articles
(e.g., (2004). Discussion: Re-Solving Irrelevant Conjunction with Probabilistic Independence. Philosophy of Science 71:505-514.). If you have a new way for probabilists to handle it, you
should notify them.
Severity, like the error probabilities associated with inferences (in error statistical tests and confidence intervals), is not a probabilistic assignment to the inferences. They don't obey probability relations. It is not I who have my logic wrong; recheck what you wrote.
For some reason the two conditions got cut off. Here they are again:
(1) For all x, P(not J|x,Z) <= P(not J|Z)
(2) There is some x’ such that P(J|x’,Z) > P(J|Z)
Third time’s a charm:
For all x, P(not J|x,Z) <= P(not J|Z)
It’s still not posting it. Not sure what the problem is, but the second condition should be
there exists an x_prime such that P(not J|x_prime,Z)>P(not J|Z)
Please delete the other ones. There must be some kind of formatting issue that is causing the blog to mangle the formulas. I think these will work:
First: For all x, P(not J|x,Z) <= P(not J|Z)
(1) for all x it’s true that P(not J|x,Z) is less than or equal to P(not J|Z)
(2) there is some x such that P(J|x,Z) is strictly greater than P(J|Z)
The last comment gets it right. There was a mangling of (1) and (2) and it wasn’t printing out right for some reason.
|
{"url":"http://errorstatistics.com/2012/03/08/lifting-a-piece-from-spanos-contribution-will-usefully-add-to-the-mix/","timestamp":"2014-04-21T02:31:59Z","content_type":null,"content_length":"142492","record_id":"<urn:uuid:d25c0416-1ed3-4d9e-b7dd-94cf4fc3bcb5>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00263-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Efficient Verified Red-Black Trees
Princeton University, Princeton NJ 08540, USA
(e-mail: appel@princeton.edu)
I present a new implementation of balanced binary search trees, compatible with the MSets interface
of the Coq Standard Library. Like the current Library implementation, mine is formally verified (in
Coq) to be correct with respect to the MSets specification, and to be balanced (which implies asymptotic efficiency guarantees). Benchmarks show that my implementation runs significantly faster than
the library implementation, because (1) Red-Black trees avoid the significant overhead of arithmetic
incurred by AVL trees for balancing computations; (2) a specialized delete-min operation makes
priority-queue operations much faster; and (3) dynamically choosing between three algorithms for
set union/intersection leads to better asymptotic efficiency.
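(As an aside, not from the paper: the implementation described there is written in Gallina and verified in Coq. The following is only a rough, unverified Python sketch of the kind of Okasaki-style rebalancing alluded to in point (1), where red-red violations are repaired purely by pattern matching on node colors, with no height arithmetic; it covers insertion only.)

R, B = 'R', 'B'   # node colors; a tree is None or a tuple (color, left, key, right)

def balance(color, l, k, r):
    # Repair the four possible red-red violations by matching on colors only.
    if color == B:
        if l and l[0] == R and l[1] and l[1][0] == R:
            _, (_, a, x, b), y, c = l
            return (R, (B, a, x, b), y, (B, c, k, r))
        if l and l[0] == R and l[3] and l[3][0] == R:
            _, a, x, (_, b, y, c) = l
            return (R, (B, a, x, b), y, (B, c, k, r))
        if r and r[0] == R and r[1] and r[1][0] == R:
            _, (_, b, y, c), z, d = r
            return (R, (B, l, k, b), y, (B, c, z, d))
        if r and r[0] == R and r[3] and r[3][0] == R:
            _, b, y, (_, c, z, d) = r
            return (R, (B, l, k, b), y, (B, c, z, d))
    return (color, l, k, r)

def insert(t, v):
    def ins(n):
        if n is None:
            return (R, None, v, None)       # new nodes start out red
        c, l, k, r = n
        if v < k:
            return balance(c, ins(l), k, r)
        if v > k:
            return balance(c, l, k, ins(r))
        return n                            # key already present
    _, l, k, r = ins(t)
    return (B, l, k, r)                     # the root is always recolored black

t = None
for v in [5, 1, 9, 3, 7]:
    t = insert(t, v)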
1 Introduction
An important and growing body of formally verified software (with machine-checked
proofs) is written in pure functional languages that are embedded in logics and theorem
provers; this is because such languages have tractable proof theories that greatly ease the
verification task. Examples of such languages are ML (embedded in Isabelle/HOL) and
Gallina (embedded in Coq). These embedded pure functional languages extract to ML
|
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/811/4870307.html","timestamp":"2014-04-17T13:00:21Z","content_type":null,"content_length":"8443","record_id":"<urn:uuid:918503d3-71d4-49bd-bad8-6ae669a77ffe>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00170-ip-10-147-4-33.ec2.internal.warc.gz"}
|
basic problems of algebraic topology
In algebraic topology one defines and uses functors from some category of topological spaces to some category of algebraic objects to help solve some existence or uniqueness problem for spaces or maps.
There are 4 basic problems of algebraic topology for the existence of maps: extension problems, retraction problems, lifting problems and section problems.
Since these problems make sense in any category, we will talk about objects and morphisms rather than spaces and maps.
Extension problem
Given morphisms $i:A\to X$, $f:A\to Y$, find an extension of $f$ to $X$, i.e. a morphism $\tilde{f}:X\to Y$ such that $\tilde{f}\circ i=f$. Notice that if $i:A\hookrightarrow X$ is a subobject, then $\tilde{f}\circ i$ is the restriction $\tilde{f}{|_A}$, and the condition is $\tilde{f}{|_A} = f$.
Retraction problem
Let $f:A\to Y$ be a morphism. Find a retraction of $f$, that is a morphism $r:Y\to A$ such that $r\circ f = id_A$.
The retraction problem is a special case of the extension problem: take the morphism $i$ of the extension problem to be $f : A\to Y$ (so $X = Y$) and extend $id_A : A\to A$ along it. Conversely, the general extension problem may (in Top and many other categories) be reduced to a retraction problem.
Proposition (Reducing an extension to a retraction)
If the pushout $Y\coprod_A X$ exists (for $i$, $f$ as above) then the extensions $\tilde{f}$ of $f$ along $i$ are in 1–1 correspondence with the retractions of $f_*(i) : Y\to Y\coprod_A X$.
Lifting problem
Given morphisms $p:E\to B$ and $g:Z\to B$, find a lifting of $g$ to $E$, i.e. a morphism $\tilde{g}:Z\to E$ such that $p\circ\tilde{g}=g$.
Section problem
For any $p:E\to B$ find a section $s: B\to E$, i.e. a morphism $s$ such that $p\circ s = id_B$.
The section problem is a special case of a lifting problem where $Z = B$ and $g = id_B : B\to B$. Then the lifting is the section: $\tilde{g} = s$. A converse is true in the sense
Proposition (Reducing a lifting to a section)
If the pullback $Z\times_B E$ exists then the liftings of $g$ along $p$ as above are in a bijection with the sections of $g^*(p)=Z\times_B p : Z\times_B E\to Z$.
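(Not part of the original entry, but for concreteness: the pullback square behind this last proposition, written in ordinary LaTeX array notation, together with the check that a section $s$ of $g^*(p)$ yields the lift $\tilde{g} = \pi_E\circ s$.)

$$\begin{array}{ccc} Z\times_B E & \overset{\pi_E}{\longrightarrow} & E \\ \big\downarrow{\scriptstyle g^*(p)} & & \big\downarrow{\scriptstyle p} \\ Z & \overset{g}{\longrightarrow} & B \end{array} \qquad p\circ\tilde{g} \;=\; p\circ\pi_E\circ s \;=\; g\circ g^*(p)\circ s \;=\; g\circ id_Z \;=\; g\,.$$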
|
{"url":"http://www.ncatlab.org/nlab/show/basic+problems+of+algebraic+topology","timestamp":"2014-04-18T05:29:29Z","content_type":null,"content_length":"24094","record_id":"<urn:uuid:69dd8554-e756-4d25-b7e7-60cdb540dbfb>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00232-ip-10-147-4-33.ec2.internal.warc.gz"}
|
scipy.special.sph_harm
Compute spherical harmonics.
This is a ufunc and may take scalar or array arguments like any other ufunc. The inputs will be broadcasted against each other.
Parameters:
m : int
    |m| <= n; the order of the harmonic.
n : int
    where n >= 0; the degree of the harmonic. This is often called l (lower case L) in descriptions of spherical harmonics.
theta : float
    [0, 2*pi]; the azimuthal (longitudinal) coordinate.
phi : float
    [0, pi]; the polar (colatitudinal) coordinate.
Returns:
y_mn : complex float
    The harmonic $Y^m_n$ sampled at theta and phi
There are different conventions for the meaning of input arguments theta and phi. We take theta to be the azimuthal angle and phi to be the polar angle. It is common to see the opposite
convention - that is theta as the polar angle and phi as the azimuthal angle.
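A minimal usage sketch (not part of the reference page itself; the particular degree, order and angles below are arbitrary):

import numpy as np
from scipy.special import sph_harm

theta = np.linspace(0, 2 * np.pi, 5)   # azimuthal angle, in [0, 2*pi]
phi = np.pi / 4                        # polar angle, in [0, pi]

y = sph_harm(1, 2, theta, phi)         # Y_2^1 broadcast over the theta values
print(y.shape, y.dtype)                # (5,) complex128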
|
{"url":"http://docs.scipy.org/doc/scipy-dev/reference/generated/scipy.special.sph_harm.html","timestamp":"2014-04-17T06:51:19Z","content_type":null,"content_length":"11110","record_id":"<urn:uuid:0faad7e0-b6ae-4f18-af04-40cfb0caf695>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00527-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Problem C1.1. Inviscid Flow through a Channel with a Smooth Bump
This problem is aimed at testing high-order methods for the computation of internal flow with a high-order curved boundary representation. In this subsonic flow problem, the geometry is smooth, and
so is the flow. Entropy should be a constant in the flow field. The L2 norm of the entropy error is then used as the indicator of solution accuracy since the analytical solution is unknown.
Governing Equations
The governing equations are the 2D Euler equations with a constant ratio of specific heats of 1.4.
Flow Conditions
The inflow Mach number is 0.5 with 0 angle of attack.
The computational domain is bounded between x = -1.5 and x = 1.5, and between the bump and y = 0.8. The bump is defined by the smooth analytic profile shown in Figure 1.1.
Figure 1.1 Channel with a Smooth Bump
Boundary Conditions
Left boundary: subsonic inflow
Right boundary: subsonic exit
Top boundary: symmetry
Bottom boundary: slip wall
1. Start the simulation from a uniform free stream with M = 0.5 everywhere, and monitor the L2 norm of the density residual. Compute the work units required to achieve a steady state. Use the prescribed non-dimensional entropy error as the accuracy indicator.
2. Perform this exercise for at least three different meshes and with different orders of accuracy to assess the performance of high-order schemes of various accuracy.
3. Plot the L2 entropy error vs. work units to evaluate efficiency, and the L2 entropy error vs. length scale to assess the numerical order of accuracy.
4. Submit two sets of data to the workshop contact for this case:
a. Error vs. work units for different h and p
b. Error vs. length scale for different h and p
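(An aside, not part of the problem statement: the exact definition of the entropy error is given by an equation that is not reproduced above. A commonly used form, assumed here rather than taken from this document, is the area-weighted L2 norm of the relative deviation of p/rho^gamma from its free-stream value, which could be computed roughly as follows.)

import numpy as np

def l2_entropy_error(rho, p, area, rho_inf, p_inf, gamma=1.4):
    # rho, p, area: arrays of cell-averaged density, pressure and cell area
    s = p / rho**gamma                   # entropy function per cell
    s_inf = p_inf / rho_inf**gamma       # free-stream value (exact for this smooth flow)
    err = s / s_inf - 1.0                # relative entropy deviation
    return np.sqrt(np.sum(err**2 * area) / np.sum(area))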
|
{"url":"http://www.as.dlr.de/hiocfd/case_c1.1.html","timestamp":"2014-04-18T06:48:28Z","content_type":null,"content_length":"7981","record_id":"<urn:uuid:89e18794-702c-4a9f-aaa7-f07b7c8c5ba3>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00191-ip-10-147-4-33.ec2.internal.warc.gz"}
|
peteg's blog
Vale, John C. Reynolds.
Tue, Apr 30, 2013./cs | Link
John C. Reynolds passed away recently. Lambda-the-Ultimate reflects on his scientific life.
Vijay told me of his passing. Sad I never saw him speak.
This book looks fantastic. Social choice, formal epistemics, game theory, so on and so forth. I really need a few spare lives right now.
What makes it even more fantastic is that Cambridge University Press has allowed them to distribute it as a free eBook too, making them surely the most progressive academic publisher.
OK, take a deep breath. Look at this page and tell me the world isn't crazy.
Say you want to talk to the world in Unicode, but you want to do it quickly. Well, obviously you're going to draft C's atoi and friends to convert numerals to your internal integer type, right?
That's great in theory, but when your code is running on someone else's webserver that you know little about, things might get a little tricky.
Haskell's FFI specifies that the functions in the CString module are subject to the current locale, which renders them unpredictable on the hitherto mentioned webserver. I can imagine a numeral
encoding that e.g. strtol_l understands with the locale setting of today that it fails to understand tomorrow. I don't think there are enough manpages in all the world to clarify this problem.
Solution? Use integers only for internal purposes, like user identifiers, render them in ASCII, and use Unicode strings for everything else. Don't use the CString module, carefully unpack UTF-8
ByteStrings into Haskell Strings, and don't expect warp speed. If you're (cough) putting this stuff in a library, hope like hell your users don't try anything too weird.
One day someone will resolve all the issues of implementing a proper Unicode I/O layer, and I will thank them for it.
Trees ain't trees.
Wed, Jan 03, 2007./cs | Link
Any idiot knows a tree has a branching structure, or at least that's what we've been telling the first years since time immemorial. There was a proper CS tree at UNSW back before I got there that the
internet doesn't appear to remember so well. This photo of the "right way up" fig is from Geoff Whale's data structures textbook, and carries the following attribution:
Evidence that trees sometimes really are as shown in data structure textbooks. Sculpture using the medium of a dead fig tree at UNSW. Photo by Russell Bastock.
I reproduce it here without permission. That must be the Applied Science building in the background and my guess is that it sat at the eastern end of where the Red Centre is now.
I spotted this purported "tree" on Centennial Avenue, near Avoca St (which runs between Queen and Centennial parks) several months ago but have only now got around to photographing it. I grant that
it appears to be a DAG, and is therefore for most purposes a tree.
...and my only excuse is that so many others have made the same mistake. To atone I present here law 11, so familiar to those who deal with modern universities:
The bigger the system, the narrower and more specialized the interface with individuals.
The most-recent edition has been renamed to The Systems Bible. I caved in and ordered a copy from Amazon. True to form, it won't be here until sometime in March.
I've been trying to acquire a copy since reading an excerpt in the knowledge book with some intriguing assertions about general systems. Alibris has screwed me around twice but now someone has put it
on the 'net.
Most of the workshop was quite technical, as I expected. Andrew Tridgell gave a good talk on his appreciation of the GPL, and Eben Moglen was quite amusing while arguing multifarious legal details.
The CSIRO finally got a decision on the legitimacy of their wireless patent in the US, and perhaps it will solve their perennial funding crisis. As this is about hardware protocols I'm not altogether
sure it's a terrible thing, being close to the original intended use of patents and all that. Anyway, I'm all for another Microsoft-esque tax on corporate America. Let's just hope they use it for
more than beer and skittles.
After finishing this book it struck me that it's helplessly Americentric; such giants of the field as Thierry Coquand and Gérard Huet, the entire COQ project, the notion of a "logical framework"
(thank you Gordon Plotkin and friends), and sundry other influential things fail to rate a mention. (OK, Per Martin-Löf gets a guernsey as the godfather of neo-constructivism, though the entire
Swedish-west-coast movement he inspired goes unremarked.) The problem of having to manually provide variable instantiations (due to the undecidability of higher-order unification) is alluded to
(p270), but where is the follow-up pointer to the pragmatic resolution used in e.g. Isabelle, with linear patterns, stylised rules and all that?
The continual reference to the issue of proof and to mathematical practice as normative or indicative is misleading; the issue is how to engineer computer systems as rigorously as we do other
artefacts, such as bridges and planes, and most of the epistemological issues involved are not unique to computer science. Similarly we need not think of these proof assistants as oracles so much as
mechanisations of other people's expertise, in much the same way that structured programming and libraries (or even design patterns if you like that sort of thing) guide us towards best-practice, and
employing algebraic structures leads to all sorts of nice things, like compositionality.
Putting it another way, why would one ever think that a computer system can be engineered to be more reliable than a motor car?
Still, the book does have some interesting discussions on just what a proof is. John Barwise's conception of formal proof as an impoverished subclass of mathematical proof rings true to me, though
the particular example (p324):
... [C]onsider proofs where one establishes one of several cases and then observes that the others follow by symmetry considerations. This is a perfectly valid (and ubiquitous) form of
mathematical reasoning, but I know of no system of formal deduction that admits such a general rule.
is a bit weak: in Isabelle, one could prove a case, prove that the others are in fact symmetric variants of it, then draw the conclusion. If the symmetry proof were abstract enough it could be
enshrined in a library.
I stole this from Tim's shelf while he wasn't watching. I find it really patchy, perhaps because the main reason I picked it up was his potted history of proof assistants (Chapter 8). Here's some
Chapter 6: Social Processes and Category Mistakes
I'm quite partial to James H. Fetzer's position that claiming computer programs can be verified is a category error. (I remarked to Kai a few weeks ago, self-evidently I thought, that proof at best
assures us that our reasoning about an artefact is sound, and that the disconnect between what we talk about and the actuality of putting the artefact in the environment can only be bridged by
testing, an issue our breathren in the empirical sciences have been ruminating about for centuries.) Note that I am firmly of the opinion that formal proof of just about anything is good if one can
get it, the effort of comprehending what's being said aside. This is about the epistemology of computer science, of examining the so-rarely articulated general issues of moving from theory and
My impression is that his position is dismissed (see, for example, this terse rebuttal by the usually erudite RB Jones) as being a category error itself; software-as-mathematical-artefact,
hardware-as-physical-process, read-the-proof-to-see-what's-proven. In context here, though, Fetzer's claims are set against the blue-eyed optimism of those who want to make the correctness of their
system contingent only on the laws of physics (p236):
"I'm a Pythagorean to the core," says Boyer, "Pythagoras taught that mathematics was the secret to the universe." Boyer's dream — not a hope for his lifetime, he admitted — is that mathematical
modeling of the laws of nature and of the features of technology will permit the scope of deductive reasoning to be far greater than it currently is. "If you have a nuclear power plant, you want
to prove that it's not going to melt down ... one fantasizes, one dreams that one could come up with a mathematical characterization ... and prove a giant theorem that says 'no melt-down.' In my
opinion, it is the historical destiny of the human race to achieve this mathematical structure ... This is a religious view of mine."
Unusually for me I'll say no more until I've read Fetzer's paper.
Chapter 8: Logics, Machines and Trust
• MacKenzie doesn't say anything much about the logic employed in PVS; this is another point of departure from the Boyer-Moore theorem prover (beyond the approach taken to automation). My
understanding, from Kai, is that it uses a version of higher-order logic.
• Cute: there's a "mathematically natural" example of a statement that is true but does not follow from the Peano axioms (cf Gödel incompleteness). Check out the Paris-Harrington theorem.
More as I get around to it.
How cool is this, someone's organised a family tree of Linear Temporal Logic (LTL) translations. Who would have thought there's so damn many.
Tim told me about this talkfest. I've only watched half of Amir Pnueli's presentation so far, and it seems to be mostly a recapitulation of old, old stuff (cf Kai's presentation in his undergraduate
concurrency class).
Anyway, the big problem is the downloadable videos don't include the slides, so it's pretty tedious trying to follow what's going on. I'm told the streaming versions don't suffer from this.
This book has an excellent introduction to complexity, especially space complexity. Thanks mrak.
The copy in the UNSW Library has already been nicked, it's that good.
Professor Comer at Purdue shares his searing insights into the social dimensions of computer science.
Gerwin has been busy again.
Full Abstraction Factory. What a great name for a website.
|
{"url":"http://peteg.org/blog/cs/index.autumn","timestamp":"2014-04-21T00:00:05Z","content_type":null,"content_length":"43402","record_id":"<urn:uuid:281a2d40-d601-434b-be11-eba9c88eae5e>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00533-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Lecture 6: Driven Coupled Oscillators and Cramer's Rule
PROFESSOR: I will start today calculating for you the normal mode frequencies of a double pendulum. And then, I will drive that double pendulum. And then, we'll see very dramatic things that are
So let us start here with the double pendulum. This is the equilibrium position if it hangs straight. Length is l, mass is m. This angle will be theta 1. And this angle, theta 2. I call this position
x1, and I call this position x2.
I'm going to introduce a shorthand notation that omega 0 squared is g/l. I want to remind you that the sine of theta 1 is x1 divided by l and that the sine of theta 2 is going to be x2 minus x1
divided by l.
For small angle approximation, we do know the tensions. Here, we have a tension which I will call T2. That is here, of course, mg. So it has also a tension T2 here. Action equals minus reaction. And
then, there is here the tension, T1. And here is mg. Those are the only forces on these objects.
At small angles, I do know that T1 must be approximately 2mg because it's carrying both loads, so to speak. And we know that T2 is very close to mg. And we're going to use that in our approximation.
So let me first start with the bottom one that has only two forces on it, so that may be the easiest, m2x2 double dot. And so the only force that is driving it back to equilibrium in the small angle
approximation is, then, the horizontal component of the T2. So that equals minus T2 times the sine of theta 2.
And for that, I can write minus mg divided by l times theta 2 x2 minus x1. We divide out m. And we use our shorthand notation, so we get that x2 double dot. And now we have here minus mg over lx2 So
that comes in. That becomes a plus. So I got plus omega 0 squared times x2. I have here plus x1. When that comes in, that becomes a minus. So I get minus omega 0 squared x1. And that equals 0.
So this is my first differential equation for the second object. So now, I'm going to my first object. So now, we get mx1 double dot. And now, there is one force that is driving it back to
equilibrium. That is the horizontal component of T1. But the horizontal component of T2 is driving it away from equilibrium in the drawing that I have made here.
So you get minus T1 times the sine of theta 1 plus T2 times the sine of theta 2. And so I'm going to substitute in there now the sines. And I'm going to substitute in there T1 equals 2mg and T2
equals mg. So this becomes equal to minus 2mg times the sine of theta 1, which is x1 divided by l. And then, I get plus T2, which is mg times the sine of x2 minus x1 divided by l.
And I'm going to divide by m. And I'm going to use my shorthand notation. And when I do that, I will come out here. x1 double dot is this one. Then, the m goes. g/l becomes omega 0 squared. But now,
look closely. There is here a 2 times x1 and here there is a 1 times x1. And both have a minus sign.
So when they come out, I get 3 omega 0 squared. So I get plus 3 omega 0 squared times x1. And then, I have to bring x2 out. That becomes minus omega 0 squared times x2. And that is a 0.
And so we have to solve now this differential equation coupled with x1 and x2 together with this one. My solution has to satisfy both. I want to go over that one to make sure that it is correct. It
probably pays off because one sign wrong and you hang. The whole thing falls apart. You're dead in the water. So it pays off to think about it again.
So we have x2 double dot. I can live with that, plus omega 0 squared x2. That has the right smell for me. Minus omega 0 squared x1. I can live with that differential equation.
And I go to this one. Of course, the 3 is well known in a system like this that you get the 3. I have an x1 double dot plus 3 omega 0 squared x1. I think we are in good shape.
And so now, we're going to put in our trial solutions, x1 is C1 times cosine omega t and x2 is C2 times cosine omega t. And we're going to search for the frequencies. So this omega, we have to solve
for this omega. This is not a given.
Since we're looking for normal mode solutions, these two omegas must be the same-- can't write down omega 1 and omega 2. And I don't have to worry about any phase angles because since we have no
damping, either they're in phase or they're 180 degrees out of phase. And 180 degrees out of phase simply means a minus sign.
So that's the great thing about this. The signs will take care, then, of the possible phase angles. So now, we're going to substitute this solution in these two differential equations.
And I'm going to write it down in the form that I put the C1 to the left and the C2 to the right. And so I'm first going through this one that is my object number one. I take the second derivative.
So I get C1 times minus omega squared because the second derivative here gives me a minus omega squared.
I ditch the cosine omega t terms because I'm going to have a cosine omega t everywhere. So I'm not going to write down the cosine omega t. I have here plus 3 omega 0 squared times C1. I have here
minus omega 0 squared times C2, and that is 0. Notice I put the C1's here and the C2 there.
I'm going to the other equation. I'm going to put this C1 on the left. So we get minus omega 0 squared times C1. That's the term you have here. And then, we have plus C2 times omega 0 squared minus
omega squared. And this minus omega squared comes from the second derivative of this one, second derivative of this one, minus omega squared.
You get omega squared comes out. And then, the cosine goes to a sine. That gives you the minus sign. So now, we have here two equations with three unknowns. That is typical for normal mode solutions
of a system with two oscillations.
We don't know C1. We don't know C2. And we do not know omega. And so we remember from last time that you can always solve for the ratio C1/C2, and you can solve for omega. You can only find C1 if you
also know the initial conditions, which I have not given. Instead of solving it the high school way that I did last week, the simple way, the fast way, I'm going to do it now using Cramer's Rule.
And the reason why I want to do this once, even though in this case it really is not necessary, when you have three coupled oscillators or four, there's just no way that the high school method will
do it for you. And you have to have a more general approach. I've sent this to you by email, all of you. And I also assume that you have worked on this a little bit since I requested that you would
prepare for that.
So I'm first going to write down what D is, which is the determinant of these columns that you see. There are a, b, and c. Of course, we only have here two columns, the C1's and we have the C2's. So
let me write down here what D is.
So I'm going to get 3 omega 0 squared. That is the C1. Then, I get here minus omega squared. That's the C2.
And then, I get here minus omega 0 squared. And here I get omega 0 squared minus omega squared. And the determinant of this, that's D. And I presume you all know how to get the determinant of this
very simple matrix. We'll do that together, of course.
So following Cramer's Rule, then, my C1 which is x there, that's the one I want to solve for. So my C1, I now have to take this 0-- this is also a 0, by the way. I forgot to mention that this is a 0.
This equation is a 0.
So this column now, 0, 0, has to come first. So I get here 0, 0. And then, the second column is the same as it was here, minus omega 0 squared. And then I get omega 0 squared minus omega squared. And
that's divided by D. So we now know that that is C1.
And then, we go to C2. So now the 0, 0 column shifts towards the right, goes here. And the first column is unchanged. So we have here 3 omega 0 squared minus omega squared minus omega 0 squared. 0,
0, divided by D.
You've got to admit that the upstairs here, the determinant of this matrix, is 0 because you have two 0's here. And it's also 0 for this one. But clearly, zero solutions for C1 and C2 are
meaningless. They're not incorrect because you will get, in this differential equation, that 0 is 0, which is rather obvious. So we don't want the solutions that C1 and C2 are 0.
And the only way that we can avoid that is to demand that D becomes 0. Because now, you get 0 divided by 0. And that's a whole different story. That's not necessarily 0. And that, then, is the idea
behind getting the solutions to the searched for normal mode frequencies.
So we must demand in this case that this becomes 0. Otherwise, you get trivial solutions which are of no interest. So when we make D equal 0, we do get that the determinant of that matrix becomes
three omega 0 squared minus omega squared times omega 0 squared minus omega squared, and then minus the product of that becomes a plus-- that becomes a minus, not a plus, becomes a minus. Because
minus minus is already plus, so I get minus omega 0 to the 4 equals 0.
And this equation is an equation in omega to the 4. You can solve that. You can solve that for omega squared. I will leave you with that solution. This is utterly trivial. And out comes, then, that
omega minus squared, which is the lowest frequency of the two, is 2 minus the square root of 2 times omega zero squared. And omega plus squared is 2 plus the square root of 2 times omega 0 squared.
So this step for you will take you no more than maybe half a minute. But be careful, because you can easily slip up of course. So we now have a solution that omega minus is approximately, if I
calculate the 2 minus square root of 2, it's approximately 0.77, 0.76 omega 0. Not intuitive at all, so it's lower than the resonance frequency of a single pendulum. And then, I can substitute this
value for omega either back into my equations, or I substitute it back in this if you want to.
You get 0 divided by 0, which is not going to be 0. And you're going to find, then, that C2 divided by C1 in that minus mode, in that lowest possible mode, you will find that it is 1 plus the square
root of 2, which is approximately plus 2.4.
And the omega plus solution gives you a frequency which is about 1.85 omega 0. That solution, you can put back into your differential equations here. Or not the differential equations, I mean into
this one. Or you put it back into this if you want to. And you will find now that C2/C1 for that plus mode-- I call this the plus mode, that's my shorthand notation-- is going to be minus 1 divided
by 1 plus the square root of 2.
So it's going to be minus 1 divided by 2.4. So it's going to be roughly minus 0.41. So those are, then, the formal solutions for the normal modes. And if I gave you the initial conditions, then of
course you could calculate C1. And then, you know everything because you know the ratio C2/C1. But without the initial conditions, you cannot do that.
Each of these normal modes solutions satisfies both differential equations. So therefore, a linear superposition of the two normal mode solutions is a general solution. So when you start that system
off at time t equals 0, you specify x1. You specify the velocity of object one. You specify x2, and you specify the velocity of that object. Then, it's going to oscillate in the superposition, the
linear superposition of two normal modes solutions.
One you see here has omega minus. And this will be the ratio of the amplitudes. And this is the omega plus. And that'll be the ratio of the amplitudes. And that is non-negotiable. That's what the
system will do.
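(A numerical cross-check, not part of the spoken lecture: writing the two equations above as x-double-dot = -M x with M = omega_0^2 [[3, -1], [-1, 1]], the normal mode frequencies squared are the eigenvalues of M, and the amplitude ratios come from the eigenvectors. A short Python sketch:)

import numpy as np

omega0 = 1.0                                   # take omega_0 = 1 for the check
M = omega0**2 * np.array([[3.0, -1.0],
                          [-1.0,  1.0]])       # from the two coupled equations above

evals, evecs = np.linalg.eigh(M)               # eigenvalues are the omega^2 of the two modes
print(np.sqrt(evals))                          # ~[0.765, 1.848]*omega0, i.e. sqrt(2 -/+ sqrt(2))
print(evecs[1] / evecs[0])                     # C2/C1 per mode: ~[2.414, -0.414]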
Now, we're going to make a dramatic change. Now, we're going to drive this system. So now, this is the equilibrium position now of the whole system. And I am going to drive it back and forth, holding
it in my hand like this. And I'm going to shake it. I call the position of my hand eta equals eta 0 times the cosine omega t.
So eta 0 is the amplitude of my motion of my hand. That is what I'm going to do. And so when I look now at this equation, at this figure, not the equation but at the figure, I call this now x1. I
call this now x2 because you should always call x1 and x2 the distance from equilibrium. And this is now equilibrium.
If I don't shake at all, I have the pendulum here. And so it's hanging straight down. So this is now equilibrium. And this location here at a random moment in time is now eta.
What has changed? Well, at first sight, very little has changed. The only thing that has changed now is that the sine of theta 1 is now x1 minus eta divided by l. So I have here a minus eta. That
looks rather innocent. But the consequences will be unbelievable.
So I can leave everything on the blackboard the way I have it. All I have to do is to put instead of the sine theta x1 divided by l, I have to put in x1 minus eta divided by l. See, nature was very
kind to me. It already left some space there. You see that? Nature anticipated that I was going to do that.
So you get a minus eta there. Well, if now you divide m out and you're going to use the shorthand notation, then leaving eta on the right side, which is nice to do, you get minus minus eta. So this 0
now becomes 2 omega 0 squared times eta 0 times the cosine of omega t. It becomes 2 omega 0 squared times eta. You see?
So it is no longer a 0. And so when we now substitute in there these trial functions, they now have a completely different meaning. Omega is no longer negotiable. Omega, as set by me, is a driver. So
we're not going to solve for omega. Omega is a given.
That means, if omega is a given, that we're going to end up not with two equations with three unknowns-- C1, C2, and omega-- but we're going to end up with two equations with two unknowns, only C1
and C2 because omega is a given.
And so in the steady state solution, you get a number for C1. And you get a number for C2 in terms of eta 0, of course. You will see how that works.
So when we put in these functions now, these omegas are fixed, are no longer something that we search for. They're my omegas. I can make them 0. I can make them infinite. I can make them anything I
want to. And that's what we want to study.
So if we want to go back now to these two equations, what changes here since we have put in this trial function, the only thing that disappears is this cosine omega t. So we end up here with 2 omega
0 squared times eta 0. And nothing else changes. But now, we're looking now at our solution for C1. And we're going to look at our solution for C2 in steady state.
It's easy, right? We apply Cramer's Rule. And the only thing that changes is this one, which has to be replaced by this, 2 omega 0 squared times eta 0. Eta 0, remember, is the amplitude of my hand.
Eta is the displacement of my hand at any moment in time. Eta 0 is the amplitude.
And so here, we get then also 2 omega 0 squared times eta 0. And so now, we can solve for C1 and C2. We get an answer-- not just a ratio only, we get an answer. We know exactly what C1 is going to
be. And we know exactly what C2 is going to be because omega is non-negotiable. Omega is now a known. When we solved this, we were searching for omega. And out came these omegas. That's not the case
anymore. We know omega. It's called omega, and that's it.
So I can write down now C1. So I take the determinant there of the upstairs. So that gives me 2 omega 0 squared times eta 0 times omega 0 squared minus omega squared. That is this diagonal. And this
one is 0.
And I have to divide that by D. Now, I could write D in that form. And I could do that. There's nothing wrong with that. But I can write it in a form which is a little bit more transparent.
We do know that omega minus and omega plus will make D 0. So you can write this then in the following way. You can write here omega squared minus omega minus squared times omega squared minus omega
plus squared. That must be the same as what I have there because you see that this one becomes omega minus, then this goes to 0. And when this one becomes omega plus, that is also 0. So it's a
different way of writing. Gives you a little bit more insight.
It reminds you that the downstairs will go to 0 when you hit those resonance frequencies. And so I will also write down C2, then. This one is 0. I get minus this one. So I get plus 2 omega 0 to the 4
times eta 0 divided by that same D. You can write this for it.
So let me check that, see whether I'm happy with that. 2 omega 0 squared, eta 0, omega 0 squared, right? Is omega squared? That looks good. This looks good. This looks good. And I have here-- I'm
Our task now is not to look at these equations but to see through them. And they are by no means trivial, what they're going to do as a function of omega. It's an extraordinarily complicated
dependence on omega.
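(Another aside, not part of the spoken lecture: the steady-state amplitudes written above can be evaluated directly. The sketch below, with omega_0 and eta_0 set to 1, reproduces the limits discussed next: C1 = C2 = eta_0 as omega goes to 0, the blow-up near the two resonances, and C1 = 0, C2 = -2*eta_0 at omega = omega_0.)

import numpy as np

omega0, eta0 = 1.0, 1.0
omega = np.array([0.01, 0.5, 0.76, 1.0, 1.5, 1.85, 2.5]) * omega0   # sample driving frequencies

D  = (3*omega0**2 - omega**2) * (omega0**2 - omega**2) - omega0**4  # the determinant D
C1 = 2 * omega0**2 * eta0 * (omega0**2 - omega**2) / D
C2 = 2 * omega0**4 * eta0 / D

for w, c1, c2 in zip(omega, C1, C2):
    print(f"omega/omega0 = {w:4.2f}   C1/eta0 = {c1:8.3f}   C2/eta0 = {c2:8.3f}")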
And I have plotted for you these values of C1 and C2 for which you get an answer now. You also know C1/C2 of course because you know C1 and you know C2. And I'm going to show you this as a function
of omega. And then, we will try to digest that together.
And this plot, like the other plots that I will show you later today, will be on the 803 website. They will be part of lecture notes. So you have to click on Lecture Notes. And then, you will see
these plots. This is the first one, which is the double pendulum.
What is plotted here horizontally is omega divided by omega 0. So that's the omega 0 that we mentioned there. And so you see the first resonance here is about at 0.76 omega 0. And the second
resonance here is at that 1.85 omega 0.
And what we plot here is the amplitude, C, divided by eta 0. Because obviously if you make eta 0 larger, you expect that has an effect on the C's, of course. If you drive it to the larger amplitude,
of course the object will also respond accordingly. And so therefore, we have it as a function of eta 0. You see the eta 0 here? C1 is linearly proportional with eta 0. C2 is linearly proportional to
eta 0. That's no surprise, of course. So we divided by eta 0.
When we plot it upstairs, it means that the amplitude is in phase with the driver. That's the way we have written it. And if it is below the zero line, it means that the amplitude of the object is
out of phase with the driver. That's all it means. That's the sign convention.
So let us now look at this and try to digest it and use, to some degree, our intuition and see whether that agrees with our intuition. And let us start when omega goes to 0. And don't look now at the
solutions. If omega is 0, I have a double pendulum in my hand. And I'm going to move it to the left. And 25 years from now, I'm going to go back. And I move it to the right again.
So the pendulum is always straight. And it is clear that C1 and C2 to both must be eta 0. And you better believe it that if you substitute in there omega 0, that's what you will find.
So C1 must be C2 and must be eta 0. And it must even be plus eta 0. It must be in phase with the driver. And you see that the ratio-- not the ratio, which is C divided by eta 0. That is a ratio that
is plotted at plus 1. And so both are in phase with the driver.
So that's a trivial result. It means that the pendulum which is here, 25 years from now is here. That's omega 0, almost 0. So notice that we now know the ratio C2/C1, which is plus 1. The ratio C1/C2
or C2/C1 are now entirely dictated by this. Nothing to do with normal modes anymore, so don't be surprised that C1/C2 is now plus 1.
Now, look what happens. I'm going to increase my omega. And what you see is that the red curve, which is the second object, the lowest one of the two, is going to have a larger amplitude than the top
one. You see, it already begins to grow very rapidly.
And when you reach resonance, the ratio is going to be plus 2.4, of course. That's obvious. Now at resonance, you get an infinite amplitude for each. That, of course, is nonsense. It has physically
no meaning. So you should think of it as going to infinity. For one thing, if C1 became anything, that means if this point here ends up there in the hole, it's hard to argue that it's a small angle,
So in any case, the solution wouldn't even hold apart from the fact, of course, it has always some damping. And so we never go completely off scale, so to speak. However, you will see that when you
drive the system, the double pendulum, and you approach resonance, that you will very quickly see that the ratio of the amplitude of the second one over the top one will grow, will become 1 and 1/2,
will become 2. And then, in the limiting case, you will hit plus 2.4.
So as omega goes up, you're going to see that C2/C1 is going to be larger than one, right there, this one. And then ultimately, it will reach that 2.4. But that is that extreme case of resonance.
And when you look at the motion of the pendulum, if you drive it somewhere here, notice that they are both in phase with the driver. And C2 is larger than C1. So what you will see is this is C1. And
then, this is C2. And so C2 is larger than C1. And they are in phase with the driver. And so when they return, you will see the pendulum like this. So that's the sweeping that you will see.
There's something truly bizarre. I go a little higher in frequency. I go over the resonance. And I see here a point whereby the frequency happens to be exactly the frequency of a single pendulum--
omega divided by omega 0 is 1-- and the top one refuses to move. But the bottom one does.
The bottom one here has an amplitude which is twice the amplitude of the driver. You look, you see a 2 here. And it is out of phase with the driver. That is unimaginable. It's unimaginable what
you're going to see.
So when omega is omega 0, C1 becomes 0. C2 becomes a minus 2 eta 0. And so this pendulum is going to look, then, as follows. I'll make a drawing here. I have a little bit more space.
So if my hand is here at eta 0, then number 1 will stand still, won't do anything. But number two will have an amplitude which is twice eta 0. But it's on the other side. So this is 2 eta 0.
And then, if you look half a period later, it will look like this. So this is eta 0. And then, this is 2 eta 0. And this one doesn't move. That's what it tells you.
How on Earth can the lower pendulum oscillate if the upper one stands still? I want you to think about that. I'm still having sleepless nights about it. And so maybe you will have some tool. And if
you have some clever ideas, come to see me.
But the logical consequence of what we did is that this one would stand still. And the other one will still oscillate and be driven by my hand. I have to keep moving this. Otherwise, this will go to
pieces. I must be doing this all the time at that frequency, which is exactly the resonance frequency of a single pendulum. It is that omega 0. I must keep doing this. And this one does nothing. And
this one has double the swing of this and is out of phase.
If, then, you go even higher, then you will see that the two objects will go out of phase. The upper one will be in phase with the driver. And the lower one will be out of phase with the driver. And
there, you hit that second resonance when things will get out of hand. And you get that ratio, minus 0.42, back of course.
I will want to demonstrate to you this situation here and this situation to see whether they make sense. And so now, I'm going to use a double pendulum and drive it with my own frequency, which I
determine. I'm the boss. I determine omega, not looking for normal mode solutions. I determine omega.
And I'm going to first drive it with omega 0 with omega is about 0. In other words, I'm going to drive it here. And what you're going to see is, of course, fantastic, absolutely fantastic. It's
hanging straight down now. And I'm moving it over a distance of one foot. So eta 0 is one foot. And C1 is one foot, and C2 is one foot. And they're in phase with the driver. Physics works.
If I go a little higher up here, so I go somewhere here approaching resonance, then you will see that C2 becomes larger than C1. This is C2 and this is C1. And you get to see a picture which is very
much like this. And I'll try that.
So I'll drive it below resonance but not too far below. And there you see it. You see? C1 is smaller than C2. No longer 1 plus 1. If this was 1, your C1/C2 is 1. That's no longer the case now. You
really see that C2 is getting ahead-- not in terms of phase ahead, but in terms of amplitude, very clear.
Now, I'm going to attempt to do the impossible. And the impossible is to try to hit this point, the point whereby the upper one stands still and whereby the lower one will have an amplitude which is
twice that of my hand but out of phase with my hand. How on Earth can I ever drive this system with that frequency, omega 0, which is the frequency of a single pendulum?
Well, maybe I can't. But I will try. And the way that I'm going to try this is the following. I know what the resonance frequency of a single pendulum is. That's this. I can feel it in my hands. I
can feel it in my stomach. I can feel it in my brain. I feel it all over my body.
I can burn this frequency into my chips here. And then, I can close my eyes while you are looking and generate that frequency which was burned here and drive the system as a double pendulum. If I
succeed, you will see the upper one stand still and the bottom one will have twice the amplitude of my hands. So the success of this depends exclusively on how accurately I can burn this frequency
into my chips.
So you have to be quiet. So I'm going to count. One, two, three-- I'm burning now-- one, two, three, four, one, two, three, four. I'm closing my eyes. One, two, three, four. One, two, three, four.
One, two, three, four. One, two, three, four. One-- you're not saying anything! Didn't you see this one standing still? Did you see it? Did you also see that the other one had twice the amplitude of
my hands and out of phase with my hands? You didn't see that, right? Admit it. You didn't see it because you were not looking for it. Especially for you, I'll do it again.
So you really have to see number one that this one will practically standstill. Number two, that this one has double the amplitude of my hands but out of phase. The decay time of burning is only one
minute. So I have to burn in again. One, two, three, four, five. One, two, three, four, five. One-- uh-oh. These things happen. You have to start all over with the burning.
There we go. One, two, three, four, five. One, two, three, four, five. One, two, three, four, five. One, two, three--- are you seeing it?
AUDIENCE: Yes!
PROFESSOR: All right.
PROFESSOR: This is an ideal moment for the break. I'm going to hand out the mini-quiz. And we will reconvene. I'll give you six minutes this time, so you can even stretch your legs. So I would like
some help handing this out. And then, you bring it back. And I'll put some boxes out there.
So if you can help me handing it out here, you can start right away. Hand this out here. For those of you who have no seats, come forward and get some seats. Nicole, we are still friends, right? So
why don't you hand that out. And why don't you hand this out here? You can also give it to people here, here, if you do that. You can start right away.
I'm now going to do something perhaps even more ambitious. And I'm going to now couple three oscillators-- not pendulums yet, but I'm going to couple three oscillators which I connect with four
springs. I'm going to work on this. Three masses, equal masses, four springs, spring constant, k. And the spring constants are the same.
And I'm going to drive that system one, two, three, four. And this is the end. In other words, I have here a spring. And here's the first mass, second mass, third mass. And here, it is fixed. And I'm
going to drive it here with a displacement, eta, which is eta 0 times cosine omega t.
So at a random moment in time, this is where my hands will be. So this is eta. This is where the first mass will be. Remember, you always call the displacement x1 from its equilibrium, that is from
its dotted line.
So here's the spring. This one is here. So I call this x2. So here is the spring. And this one is here. So this is x3. So here is the spring, and here is a spring. You may have noticed more than once
now that I have a certain discipline that I always offset them in the same direction. Do you have to do that? No.
If you don't do it, your chance of a mistake on a sign slip is much larger than if you always set them off in the same direction. You'll see shortly why. So that is certainly something that is not a
must, but it's a smart thing to do. I define this as my positive direction, but that, of course, is a complete free choice.
Now, let at this situation at this moment in time, let x1 be larger than eta. Let x2 be larger than x1. And let x3 be larger than x2. And this assumption will have no consequences for what follows,
at least for the differential equations.
If x1 is larger than eta, that first spring is longer than it wants to be because I've assumed that x1 is larger than eta. And so that means there will be a force in this direction because this
spring is longer than it wants to be.
If x2 is larger than x1, this spring is also longer than it wants to be. So it will contract. So there's a force in this direction. And so I can write down now the differential equation for my first
So that's going to be nx1 double dot, that equals minus k times x1 minus eta because that's the amount by which it is longer than it wants to be. So times x1 minus eta, that is this force. And this
force is now in the plus direction, is plus k times-- this spring here is longer than it wants to be by an amount x2 minus x1. Not omega 1, but x1.
That's my differential equation for the first object. And this one is always correct, even if x1 is not larger than eta because if x1 is not larger than eta, then this force flips over. Well, this
will also flip over. So that's why it's always kosher and advisable to make that assumption to start with because, again, it reduces the probability of making mistakes. That's all. There's nothing
else to it, just reduce the chance of slipping up.
So let's now go to this object. If this spring is longer than it wants to be, it wants to contract. So this object will see a force to contract. But if this spring is longer than it wants to be
because x3 is larger than x2, it will experience a force to the right. So I can write down now the differential equation for object number two.
mx2 double dot, notice that the one that is here to the left is the same one that is here to the right, right? Action equals minus reaction. This pull is the same as this pull. So it is going to be
this term which now has a minus sign. And you always see that in coupled oscillators that what was a plus here is going to come out here as a minus sign. You see, that comes out nicely because this
spring is longer than it wants to be by an amount x2 minus x1. And the force is in the minus direction.
And this one is now going to be plus k times x3 minus x2. So now, I go to the next spring, to the next object. So this object here will experience a force to the left because this thing is longer
than it wants to be. So it wants to contract. But this one is pushing. So therefore, the force due to this spring is now also in this direction because the end is fixed.
And so we get for the third object mx3 double dot equals minus k times x3 minus x2, which is this term, but it switches signs. And then in addition, I get minus k times x3.
When you reach this point on an exam, you pause, take a deep breath, and you go over every term and every sign. If you slip up on one sign, one casual mistake that you just even though you know it
you casually write here for instance a 1 instead of a 3, it's all over. You're dead in the water. The problem will fall apart. And it may not even oscillate in a simple harmonic way.
So therefore, let's look at it. mx1 double dot-- x1 is larger than eta. Therefore, force is in this direction. I love it. The other one is in this direction, perfect, x2 minus x1. That same force
here is going to pull on the second one. So if this is correct, this is also correct. This one is driving it away from equilibrium, x3 minus x2, got to be right. This term shows up here with a minus
sign, can't go wrong there. And since this spring is always shorter here, if it's pushed to the right, I am happy with my differential equations.
So now, you're going to substitute in here x1 is C1 cosine omega t, trial functions. X2 is C2 cosine of omega t. And x3 is C3 cosine omega t. Are we looking for omegas? Oh, no, oh, no. Omega is given
by me. I am telling you what omega is. You're not going to negotiate that with me.
We are only solving for C1, C2, and C3. And in the steady state, you will be able to do that because omega is nonnegotiable. You're going to get three equations with three unknowns--- C1, C2, C3. You
don't have to settle to only calculate the ratios of the amplitude. No, you're going to get a real answer for C1, for C2, and for C3 which, of course, will depend on eta 0-- sure, if you know eta 0.
Do we worry about phase angles here? No, because there's no damping. And if there's no damping, either the objects are in phase or they're out of phase because it is the damping that gives these
phase angles in between. And 180 degrees out of phase is a minus sign. So we have the power to introduce 180 phase changes and 0 phase. For that, we have plus and minus signs.
Now, you are going to do some grinding. I did all the grinding on every detail of the double pendulum. Now, you're going to do the grinding. However, I want to make sure that if you go through that
effort to make the grinding that you indeed end up with the right solution. So in that sense, I'm going to help you a little bit by giving you the D, which is the D that we have here.
But you have to bring me to the D. And so the D is going to be-- so we have to divide by m. You have to also-- omega s squared, k/m, shorthand notation. Some of you may want to call it omega 0.
That's fine because there's only one-- only springs, there are no pendulums and springs. But I still call it an s to remind you that it is the resonance frequency or a single spring.
Then, my D becomes minus omega squared. That's always the result of that second derivative, remember? You always get that minus omega squared out. Then you get plus 2 omega s squared. Then, you get
in the second column minus omega s squared. And in the third column, you get a 0.
No surprise that you get a 0 in the third column because the first differential equation has no connection with x3 at all. And so you never see anything in the third column that will be a 0. But if
you look at the second differential equation, that has an x1, an x2, and an x3 in it. So now, you don't see 0's. So what you're going to see is minus omega s squared. That's going to be the C1 term.
And then, you get here minus omega squared. I think it is plus 2 omega s squared. And your last column is going to be minus omega s squared.
Now, the third differentiable equation, there's no x1. Therefore, that is a 0 here. And then, you get minus omega s squared. And then here, you get minus omega squared plus 2 omega s squared. And you
have to take the determinant of this matrix. That is D.
Let me check it to make sure that I didn't slip up with a minus sign so when you get home that you don't wonder, why didn't you get that result? And I think that looks good to me. Remember, all those
omega squareds always come from those second derivatives because you have to take the second derivative of cosine omega t. That always brings out the minus omega squared. So no surprise that this is
a minus, that this is a minus, and that that is a minus.
So now, we want to know what C1 is. And the first column will reflect this eta because the right side now is not going to be 0, remember, like the double pendulum? So you're going to get here in the
first column, you're going to get omega s squared times eta 0. And there, you're going to get a 0 and a 0.
And then, this column comes here. And this column comes here. That is the determinant of the upstairs. And you divide it by D.
That, then, is C1. And of course, I will only go one step further to go to C2. But that becomes a little boring now. C2, then, you get that's this one. And then, the second column according to
Cramer's Rule is going to be omega s squared, eta 0, 0, 0. And then, the third one is going to be this. And then, you go on, and we divide it by D, of course. And then, you can write down C3. You're
on your own. I'll help you.
So when you do this, you could, if you wanted to, first solve for what we call the resonance frequencies. The resonance frequencies are the ones which are the normal mode frequencies. That's a
resonance. And so you may want to put in D equals 0. So you make that determinant equal 0, which gives you, then, the three resonance frequencies, which earlier we would have called normal modes in
case we are not driving.
And so for those of you who worked this out, the lowest frequency, resonance frequency which was a normal mode, is 2 minus the square root of 2 times omega s squared. The one that follows is 2 omega
0 squared. And the one that follows then, which I will call omega plus plus, is going to be 2 plus the square root of 2 times omega s squared. None of these are, of course, intuitive. But none of the
resonance frequencies of our coupled double pendulum were intuitive.
There's no way that you could even look at this and say oh yeah, of course. Excuse me? What did I do wrong?
AUDIENCE: Omega 0, or--
PROFESSOR: Yes, thank you very much. I have an omega 0 there. You deserve extra credit. I've called that omega s. If you want to change all the omega s's and omega 0's, that's fine. But you cannot
have one omega 0 and the other omega s. Yes, so this is the square of the frequency or a single spring oscillating mass m. Thank you very much for pointing that out.
PROFESSOR: Ah, boy, you also deserve extra credit. I really set you up with this, didn't I? I wanted two people to get extra credit. You deserve one. Come see me later. And who was the other one? You
did one, thank you. Yeah, there's a square here. Oh, there are more people than one who claim this one. All right, thank you very much.
OK, so now for any given value of omega you find three answers. You get C1. You get C2. And you get C3 in the steady state solution.
At the resonance frequencies, of course, you get infinite amplitude, which is physically meaningless. So you've got to stay away in your solutions from the infinities. But you'll see, if you don't come too close to the infinities, that
the results you get are quite accurate for high q systems. And so now I will show you the three amplitudes for this system, which will also be put on the web this afternoon. And so that is the three
car system which we have set up there.
The only difference with the previous one is that we don't plot here omega divided by omega 0 but the squares. This plot was provided to me by Professor Wyslouch who lectured 803 a year ago. I was a
recitation instructor at the time. And it was very kind of him that he gave me this plot. He even added this at my request, which is very nice.
Car 1 is the first car, green. And then, the car 2 is the red one, is the second car. And this is supposed to be blue. You don't see it. But in any case, if you think it's black, that's fine. That's
then the black line. So horizontally, it is the ratio of the frequency squared. So you see that the second resonance is indeed at a 2 here. You notice that 2 that I have there on the blackboard.
If we plot it above the 0, it means that it is in phase with the driver. If we plot it below the 0, it means it's out of phase with the driver. If now you look at this, then already at omega 0 you
see something that is by no means intuitive.
Notice that C1, C2, and C3 are all substantially lower than eta 0 because this is in units of eta 0. And they're not even the same. They're all three different. Would I have anticipated that? No, I wouldn't have.
Maybe two would be the same for me, but not all three different. And that's the case. They're all three different. When we approach resonance, things go out of hand. All three are in phase. And when
you just cross over the first resonance, they are all three again in phase but out of phase with the driver. That's not so surprising all by itself.
But now, look at this ridiculous point. If I drive that system with the resonance frequency of the individual spring with one mass on it-- because remember, if this squared is 1, then omega divided
by omega 0 is also 1, but that's exactly the square root of k/m-- then this one will stand still. And these two have roughly the same amplitude. You could eyeball it here. It looks like it almost
crosses over there. And it's about minus eta 0 because it's about minus 1.
Let me write that on the blackboard. That's a quite bizarre situation. So at that one frequency, so omega equals the square root of k/m, I get C1 equals 0. And my C2 is about C3, maybe even exactly
C3. I never checked that. And that is roughly minus eta 0. You can just see that there.
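[A quick numerical check, not from the lecture itself: the C# sketch below works in units where omega_s^2 = k/m = 1 and eta_0 = 1, confirms that the determinant D vanishes at omega^2 = 2 - sqrt(2), 2 and 2 + sqrt(2), and evaluates the Cramer's rule amplitudes at omega^2 = 1, where they come out as C1 = 0 and C2 = C3 = -eta_0.]

using System;

class DrivenThreeCarCheck
{
    // Coefficient matrix of the steady-state equations, in units where
    // omega_s^2 = k/m = 1. The diagonal entries are (2 - omega^2), the couplings -1.
    static double[,] M(double w2)
    {
        return new double[,]
        {
            { 2 - w2, -1,     0      },
            { -1,     2 - w2, -1     },
            { 0,      -1,     2 - w2 }
        };
    }

    static double Det3(double[,] a)
    {
        return a[0, 0] * (a[1, 1] * a[2, 2] - a[1, 2] * a[2, 1])
             - a[0, 1] * (a[1, 0] * a[2, 2] - a[1, 2] * a[2, 0])
             + a[0, 2] * (a[1, 0] * a[2, 1] - a[1, 1] * a[2, 0]);
    }

    // Cramer's rule: replace one column by the right-hand side (eta_0, 0, 0), here with eta_0 = 1.
    static double Amplitude(double w2, int column)
    {
        double[,] numerator = M(w2);
        double[] rhs = { 1.0, 0.0, 0.0 };
        for (int r = 0; r < 3; r++) numerator[r, column] = rhs[r];
        return Det3(numerator) / Det3(M(w2));
    }

    static void Main()
    {
        // The determinant D vanishes at the three resonances quoted earlier.
        foreach (double w2 in new[] { 2 - Math.Sqrt(2), 2.0, 2 + Math.Sqrt(2) })
            Console.WriteLine("D at omega^2 = {0:F4}: {1:E1}", w2, Det3(M(w2)));

        // Drive at omega^2 = omega_s^2 = 1: car 1 stands still, while cars 2 and 3
        // both come out with amplitude -1, i.e. -eta_0, out of phase with the driver.
        for (int i = 0; i < 3; i++)
            Console.WriteLine("C{0} = {1:F4}", i + 1, Amplitude(1.0, i));
    }
}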
So it means that this car will stand still. It's closest to the driver. These two are in phase with each other, have the same amplitude. But they're out of phase with the driver. Crazy! Got to be
wrong, right? How on Earth can this one stand still, and these happily go hand in hand and go--? It can't be right. But I will demonstrate to you that it is right, at least to a high degree of accuracy.
I'm going to concentrate on two points in that graph. The first point that I want to concentrate on is this one, the hardest one of the two. I'm going to drive this system at a frequency that we
think is almost there. I'm going to turn it on right now because it will take three to four minutes for the transience to die out. And then in the meantime, I'll explain what you're going to see.
I'm driving it now with that frequency. So don't look at it now. It looks chaotic. It's going into a ridiculous transient mode. This one is also oscillating. It's not supposed to oscillate. Look
there. It's oscillating, and it looks chaotic. The amplitudes are changing. Just let it cook.
Now, what are you going to see? If the transients die out which can really take three minutes, you will see that this is my eta 0. This is 2 eta 0. You see this arrow? That is 2 eta 0. That's the
driver. So you can calibrate the amplitude eta 0.
You will see, then, that these two cars have the same displacement but out of phase. So when this one goes to the right for me, they go to the left for me and vice versa. They have an amplitude which
is very closely the same. And this one is going to stand still.
What you're going to see is not something truly spectacular. But it's extremely subtle. You have never seen it before and I don't think this has ever been demonstrated in any lecture hall. It's
extremely subtle, first of all, to have the patience to wait four minutes for this to happen, number one. And then, number two, to make such a daring prediction that this one will almost come to a
halt and that the other will have an amplitude which is the same as the driver but 180 degrees out of phase.
So be patient. And it will pay off. Don't fall asleep now. Let's look at this middle one to see whether its amplitude is becoming constant. As long as the amplitudes are not constant, there is still
transience. Well, these two are already going hand in hand. You see that? And I would say very much with the same amplitude.
And look at this. Hey, [INAUDIBLE]. They are already out of phase with each other. You see that? Now this one, oh boy. That amplitude is nowhere nearly as large as this one. And yet, when we started
it, it was even larger than this one because of the transient phenomenon.
Look at this one. It's almost standing still already. Can we find that precise frequency? No. We have to set a dial somehow. We do the best we can. I think it's fantastic. It's blowing my mind.
I can't believe it. Physics is working. This one is practically standing still. If I had to estimate this amplitude, I would say maybe 1/10 of eta 0. And these, you can mark, you see the marks here.
This one is really eta 0. For those of you who see these marks, that is the same as this. It's really eta 0. And boy, they go hand in hand. Aren't you thrilled?
PROFESSOR: Now, I'm going to try to make you see this point. I'm going to start it. And then, I'll explain to you what is so special about that point. Because again, we have to be patient. And we
have to wait for the transience to die out.
I'm going to drive it very close to resonance right below to resonance. You get a huge amplitude of C1 and C3. But look what C2 is going to do. C2 is going to have an amplitude which you and I may
have thought is 0, namely the outer ones do this and the middle one stands still. No, no, look at 1 and 3, by the way. They go nuts already. You see? That is this. This is 1 and this is 3.
Now you see the transient phenomenon. Now, it's picking up again. So we have to be patient. But there's something very special about this C2. Why is this C2 not 0, which is what you would expect? You
would expect that the outer two go like this and the middle one would just stand still.
PROFESSOR: That's one answer, yes, of course. But isn't it so that this must be 0? You don't believe that this upstairs is going to be 0? Just say you don't believe it.
AUDIENCE: No.
PROFESSOR: It is 0! Why now, even though the upstairs is 0, why do we get an amplitude which is half eta 0 but with a minus sign? Because we get 0 divided by 0 because right at that frequency here,
that is a resonance frequency. So D is 0. So you get an amazing coincidence that the upstairs is 0, the downstairs is 0. But that's what we have 18.01 for. The ratio is not 0. The ratio is minus 1/2
eta 0. And that's what you see, it's minus 1/2 eta 0.
Now look. 1 and 3 are happy. They oscillate happily out of phase with each other. Number 1 has to be in phase with the driver. It's doing that.
Number 3, out of phase with the driver, it's doing that. And look at the middle one. The middle one, for those of you who can see, the arrows given are eta 0 to eta 0; it's right at half of it. Just walk
up to it and you can see. It's amazing, this.
We hit that right on the nose. The reason why this is easy is look, my omega doesn't have to be precisely here. Even if it's a little bit to the left, I'm still OK. That amplitude of number 2 is not
changing very much. You see that? This is so nicely horizontal. So this, for me, was a piece of cake. This was hard. Piece of cake. So you see there?
Now, I'm going to move to the triple pendulum. In other words, now you've seen the structure of my talk. We first did the double pendulum. The double pendulum was-- I worked out all the way for you.
Then, we did the three cars with the four springs. I set it up for you so that you can't go wrong anymore. I gave you the final solution. And I demonstrated that indeed it is working the way we predicted.
Now, I'm going to simply show you the results of a triple pendulum. And the plots that I'm going to show you will be on the web, but no calculations at all. So that triple pendulum, or that-- let me
put these down so that we have not so much shadow on this board.
Triple pendulum, this is a triple pendulum. And we're going to drive it here, eta equals eta 0 times the cosine of omega t. The top one is going to be green. The middle one is going to red. And the
bottom one is going to be blue even though it may look black there, but I'll make it blue. That is the color code that we have on our plot, which will be put on the web this afternoon.
So here it comes. Horizontally, we plot again omega divided by omega 0 where omega 0 is the square root of g/l, the resonance frequency of a single pendulum. And vertically, we do the same thing. We
plot C divided by eta 0. Everything has the same meaning, namely plus 1 means in phase with the driver and the same amplitude as eta 0. And minus is out of phase with the driver.
Now, if you take this pendulum and you move it with 0 frequency, you don't have to know any differential equations to know that it will be here. And 20 years from now, it will be there. And this separation
is, then, eta 0. And so you expect that C1, C2, and C3 will all three be eta 0. And they will be plus eta 0-- not minus, but plus, in phase with the driver. And look, that is what you see there. So
that is by no means a surprise.
You go closer to resonance. And you see that the bottom one, which is the blue one, black here, is picking up an amplitude which is larger than the middle one, the red one. And the red one, larger
than the top one.
So you're going then into the domain where you're going to see something like this. So this one, C1, C2, C3. And then finally, when you hit resonance, I do not know what the ratios are. But
everything gets out of hand anyhow.
And now, there comes a point which boggles the mind right here. The top one is not moving, stands still. And there's nothing really so special about that frequency. That frequency for which the top
one stands still, so C1 goes to 0, that omega if I try to eyeball it, I would say it's about 0.75, 0.77. 0.77 times omega 0.
And somehow at that frequency, the top one will not move. The other two will move. They have an amplitude which is not stunning. It's nothing to write your mother about. But it is about eta 0. It is
a minus sign, so it's out of phase with the driver. And this one is a little more. Bizarre.
Could I demonstrate this? No way on Earth can I generate 0.77 times omega 0. I can burn into my chip omega 0, as I did. But I cannot do 0.77. There's no way. So forget it. I have to disappoint you.
But now comes the good news. Look at this point here. That is a point where the middle one will stand still. And that is truly amazing. The middle one stands still. The top one has an amplitude of
about 0.7 eta 0. I just eyeball that. And the bottom one has an amplitude of about minus 1.5 eta 0. And let me try to make a drawing of that.
So this now is at omega equals omega 0. So my hand is here, which is eta 0. The top one has an amplitude which is about 0.7 eta 0, this one. So it is roughly here. In phase, so this is connected. So
this is roughly 0.7 eta 0.
The next one stands still, believe it or not. It's here. And then, the next one is out of phase with me and has roughly an amplitude of 1 and 1/2 times this. So that's 1 and 1/2. So I call that minus
1.5 times eta 0.
And then half a period later, my hand is here. And this object is here. And this one stands still. And this one is here.
Truly, these are almost not believable. And then, there's another frequency whereby the top one would stand still. I will attempt-- I will make the daring attempt to aim for this solution. And the
reason why I can try that is because the frequency is omega 0, which I can burn into my chips. And that's the last attempt I will make today.
So this is the triple pendulum. So I have to go through the exercise of learning again the period. And then, as I close my eyes and drive it with this frequency, the idea then is that you would see
the top one move in phase with my hand, the one below that will stand still. And then, the one below that will have an even larger amplitude.
So one, two, three, four, five, six, seven, one, two, three, four, five, one, two, three, four, five, one-- you see it?
AUDIENCE: Yes.
PROFESSOR: Three. You see it? Did you see that one stand still? Isn't it amazing that physics works? OK, see you Thursday.
|
{"url":"http://ocw.mit.edu/courses/physics/8-03-physics-iii-vibrations-and-waves-fall-2004/video-lectures/lecture-6/","timestamp":"2014-04-16T10:15:27Z","content_type":null,"content_length":"99322","record_id":"<urn:uuid:60af61a6-2d16-4666-9ef3-0285fcaa64de>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00293-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Posts from August 2010 on Quomodocumque
New paper on the arXiv with Chris Hall and Emmanuel Kowalski.
Suppose you have a 1-dimensional family of polarized abelian varieties — or, just to make things concrete, an abelian variety A over Q(t) with no isotrivial factor.
You might have some intuition that abelian varieties over Q don’t usually have rational p-torsion points — to make this precise you might ask that A_t[p](Q) be empty for “most” t.
In fact, we prove (among other results of a similar flavor) the following strong version of this statement. Let d be an integer, K a number field, and A/K(t) an abelian variety. Then there is a
constant p(A,d) such that, for each prime p > p(A,d), there are only finitely many t such that A_t[p] has a point over a degree-d extension of K.
The idea is to study the geometry of the curve U_p parametrizing pairs (t,S) where S is a p-torsion point of A_t. This curve is a finite cover of the projective line; if you can show it has genus
bigger than 1, then you know U_p has only finitely many K-rational points, by Faltings’ theorem.
But we want more — we want to know that U_p has only finitely many points over degree-d extensions of K. This can fail even for high-genus curves: for instance, the curve
C: y^2 = x^100000 + x + 1
has really massive genus, but choosing any rational value of x yields a point on C defined over a quadratic extension of Q. The problem is that C is hyperelliptic — it has a degree-2 map to the
projective line. More generally, if U_p has a degree-d map to P^1, then U_p has lots of points over degree-d extensions of K. In fact, Faltings’ theorem can be leveraged to show that a kind of
converse is true.
So the relevant task is to show that U_p admits no map to P^1 of degree less than d; in other words, its gonality is at least d.
Now how do you show a curve has large gonality? Unlike genus, gonality isn’t a topological invariant; somehow you really have to use the geometry of the curve. The technique that works here is one
we learned from a paper of Abramovich; via a theorem of Li and Yau, you can show that the gonality of U_p is big if you can show that the Laplacian operator on the Riemann surface U_p(C) has a
spectral gap. (Abramovich uses this technique to prove the g=1 version of our theorem: the gonality of classical modular curves increases with the level.)
We get a grip on this Laplacian by approximating it with something discrete. Namely: if U is the open subvariety of P^1 over which A has good reduction, then U_p(C) is an unramified cover of U(C),
and can be identified with a finite-index subgroup H_p of the fundamental group G = pi_1(U(C)), which is just a free group on finitely many generators g_1, … g_n. From this data you can cook up a
Cayley-Schreier graph, whose vertices are cosets of H_p in G, and whose edges connect g H with g_i g H. Thanks to work of Burger, we know that this graph is a good “combinatorial model” of U_p(C);
in particular, the Laplacian of U_p(C) has a spectral gap if and only if the adjacency matrix of this Cayley-Schreier graph does.
At this point, we have reduced to a spectral problem having to do with special subgroups of free groups. And if it were 2009, we would be completely stuck. But it’s 2010! And we have at hand a
whole spray of brand-new results thanks to Helfgott, Gill, Pyber, Szabo, Breuillard, Green, Tao, and others, which guarantee precisely that Cayley-Schreier graphs of this kind (corresponding to
finite covers of U(C) whose Galois closure has Galois group a perfect linear group over a finite field) have spectral gap; that is, they are expander graphs. (Actually, a slightly weaker condition
than spectral gap, which we call esperantism, is all we need.)
Sometimes you think about a problem at just the right time. We would never have guessed that the burst of progress in sum-product estimates in linear groups would make this the right time to think
about Galois representations in 1-dimensional families of abelian varieties, but so it turned out to be. Our good luck.
Iranian election statistics — never mind the digits?
I blogged last year about claims that fraud in the 2009 Iranian election could be detected by studying irregularities in the distribution of terminal digits. Eric A. Brill just e-mailed me an
article of his which argues against this methodology, pointing out that the provincial vote totals (the ones with the fishy last digits) agree with the sums of the county totals, which in turn agree
with the sums of the district totals. In order for the provincial totals to have been made up, you’d have to change a lot of county totals too (changing the total in just one county by a believable
amount presumably wouldn’t make a big enough difference in the provincial totals.) But if you add Ahmadinejad votes to a county here and a county there, the provincial total would be the sum of a
bunch of human-chosen numbers, and there’s no reason to expect such a sum to have non-uniformly distributed last digits. The Beber-Scacco model requires that the culprits start with a target number
at the provincial level and then carefully modify county and district level numbers to make the sums match. But why would they?
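(A toy illustration of that last point, with invented numbers rather than any election data: in the C# sketch below every simulated county tally has a deliberately biased last digit, never 0 and with a surplus of 7s, yet the last digit of each provincial sum still comes out essentially uniform.)

using System;

class LastDigitToy
{
    static void Main()
    {
        var rng = new Random(1);
        const int trials = 100000;     // simulated "provinces"
        const int counties = 30;       // invented number of counties per province
        var sumLastDigit = new int[10];

        for (int t = 0; t < trials; t++)
        {
            int provincialTotal = 0;
            for (int c = 0; c < counties; c++)
            {
                int tally = rng.Next(1000, 50000);
                if (tally % 10 == 0) tally += 7;   // bias the tally: never ends in 0, extra 7s
                provincialTotal += tally;
            }
            sumLastDigit[provincialTotal % 10]++;
        }

        // Each digit of the sums comes out near 10% even though the summands are biased.
        for (int d = 0; d < 10; d++)
            Console.WriteLine("last digit {0}: {1:P1}", d, (double)sumLastDigit[d] / trials);
    }
}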
Tagged beber, brill, digits, election, iran, scacco
Counting Canadians
Canada’s chief statistician resigned last month in protest of the government’s decision to replace the long-form census questionnaire, previously mandatory for 20 percent of the population, with a
voluntary version. I imagine the point is that a voluntary questionnaire can’t possibly be delivering anything like a random sample of the population, though the linked article doesn’t make the
statistical issues very clear. The new head census-taker says the voluntary survey will be “usable and useful” but not comparable to the previous census. Do I have any Canadian readers who can
explain to me why the government thought this was a good idea?
And for Americans: did you know that the response rate for the Canadian census is 96%? Whoa.
Tagged canada, census, sampling
In which I make oblique reference to a change in my personal circumstances
When you fill out a birth certificate in Wisconsin, there’s a “Mother’s Information” section and a “Husband’s Information” section. If you’re unmarried, you’re not allowed to put the father’s name
on the birth certificate. You have to leave it blank, and petition the State Vital Records Office after the fact to get the father included. And if you are married, you have to put the husband on
the birth certificate, whether or not he’s the father of the child. In fact, if you’re married to Mr. X, conceive a child by Mr. Y, and subsequently get divorced from Mr. X, the ex-Mrs.X still has
to put Mr. X on the birth certificate. He can only be removed by court order.
What can the rationale for this be? I guess it must arise from acrimonious cases where the paternity of baby X is really unknown, but Mr. X, angry at having been cheated on and dumped, insists,
rightly or not, that the baby is not his, and refuses to pay child support.
It is not at all clear how you’re supposed to fill out the form if you’ve been married to more than one man over the course of the pregnancy.
(Mrs. Q would like me to clarify that the abovementioned change in my personal circumstances is the one which entails filling out a birth certificate request with the State of Wisconsin, and does not
involve any alterations in marital status, unknown biological parentage, or outstanding claims of child support.)
Tagged birth certificate, wisconsin
Math busking
Now I know what to say next time the dean asks us for some innovative fundraising ideas for the math department.
Tagged busking
A clip of Charles Fleischer, a stand-up comic, wearing an endigitted blazer and performing a routine with a lot of numerology in it:
I think the very first joke in this is funny and concise, but it quickly degenerates into a kind of sub-Robin-Williams “I talk loudly and quickly and change accents a lot and am kind of manic, is it
funny yet? No? LOUDER, QUICKER, MORE ACCENTS!” schtick.
But the joke is on us, because Fleischer’s not kidding about his theory of “moleeds.” In 2005 he gave a TED talk about it. This is a weird and in some ways uncomfortable thing to watch — the
audience still thinks they’re watching a comedy routine, and just keeps chuckling while Fleischer argues, with ever-increasing fervor, that the equation 27 x 37 = 999 somehow explains mirror
symmetry and the theory of Calabi-Yau manifolds.
The talk doesn’t cast TED in the best light, to be honest. Don’t they have someone on staff who can do some minimal vetting of talks that claim to be about math?
(Note: there is always the possibility that Fleischer’s whole act is an extravagantly thorough Kaufmannesque send-up of people’s tendency to attach themselves to meaningless patterns and theories.
But it doesn’t read that way to me.)
Tagged andy kaufmann, charles fleischer, moleeds, numerology, stand-up, ted, ted talks
California prayer agency, Earth-2 Lawrence v. Texas
Continuing from the last post, here’s a test case for the view that judges applying the “rational basis” must defer completely to referenda.
Suppose the voters of California pass a referendum instituting a state agency which employed people to pray to Jesus for the health of sick Californians. Can a judge declare this a 1st amendment
violation, or not? Surely prolonging the lives of thousands of citizens constitutes a legitimate state interest, and, per Kennedy’s opinion in the last post, it is not the government’s
responsibility to provide evidence that the referendum would aid that interest, nor the judge’s responsibility to consider such evidence. On what basis could the referendum be unconstitutional?
Actually, looking this up, it seems that a law violating the Establishment Clause triggers (at least sometimes) “strict scrutiny,” a more stringent requirement than “rational basis.” I expect the
referendum above would be axed on that basis. Racially discriminatory laws have to pass strict scrutiny as well. But discrimination against gays triggers only the weaker rational basis test.
Justice O’Connor wrote in her concurrence in Lawrence v. Texas that Texas’s law forbidding same-sex sodomy failed the rational basis test, because it was motivated solely by moral disapproval, rather
than by a legitimate state interest.” (By the way, O’Connor writes in the same opinion that “preserving the traditional institution of marriage” is a legitimate state interest.)
Question: Suppose the lawyers for Texas had argued, without providing any evidence, that the state felt same-sex sodomy was more likely than opposite-sex sodomy to promote unspecified disease. Would
the law have still been held unconstitutional? Or would it have met the rational basis standard?
Tagged gay marriage, law, lawrence v. texas, rational basis, strict scrutiny, supreme court
Gay marriage and the null hypothesis
Two controversial topics in one post!
Orin Kerr this week on Perry vs. Schwarzenegger:
Several of the key factual findings in Judge Walker’s opinion are in the form of predictions, not facts. For example, Judge Walker finds that “permitting same-sex couples to marry will not . . .
otherwise affect the stability of opposite-sex marriages.” But real predictions have confidence levels. You might think you’re going to get an “A” on an exam next week, but that’s not a fact.
It’s just a prediction, and there’s a hidden confidence level: Maybe there’s an 80% chance you’ll get that grade, or a 60% chance. Judge Walker’s prediction-facts have no confidence levels,
however. He doesn’t say that there is an 87% chance that permitting same-sex marriage will not affect the stability of opposite-sex marriages. He says that it is now a fact — with 100% certainty
— that that will happen.
I think Kerr is incorrect about Walker’s meaning. When we say, for instance, that a clinical trial shows that a treatment “has no effect” on a disease, we are certainly not saying that, with 100%
certainty, the treatment will not change a patient’s condition in any way. How could we be? We’re saying, instead, that the evidence before us gives us no compelling reason to rule out “the null
hypothesis” that the drug has no effect. Elliott Sober writes well about this in Evidence and Evolution. It’s unsettling at first — the meat and potatoes of statistical analysis is deciding whether
or not to rule out the null hypothesis, which as a literal assertion is certainly false! It’s not the case that not a single opposite-sex marriage, potential or actual, will be affected by the
legality of same-sex marriage; Walker is making the more modest claim that the evidence we have doesn’t provide us any ability to meaningfully predict the size of that effect, or whether it will on
the whole be positive or negative.
This doesn’t speak to Kerr’s larger point, which is that Walker’s finding of fact might not be relevant to the case — California can outlaw whatever it wants without any evidence that the outlawed
thing causes any harm, as long as it has a “rational basis” for doing so. The key ruling here seems to be Justice Kennedy’s in Heller v. Doe, which says:
A State, moreover, has no obligation to produce evidence to sustain the rationality of a statutory classification. “[A] legislative choice is not subject to courtroom factfinding and may be based
on rational speculation unsupported by evidence or empirical data.”
and later:
True, even the standard of rationality as we so often have defined it must find some footing in the realities of the subject addressed by the legislation.
I’m in the dark about what Kennedy can mean here. If speculation is unsupported by evidence, in what sense is it rational? And what “footing in the realities of the subject” can it be said to have?
More confusing still: in the present case, the legislation at issue comes from a referendum, not the legislature. So we have no record to tell us what kind of speculation, rational or not, lies
behind it — or, for that matter, whether the law is intended to serve a legitimate government interest at all. Maybe there is no choice under the circumstances but for the “rational basis” test to
be no test at all, and for the courts to defer completely to referenda, however irrational they may seem to the judge?
(Good, long discussion of related points, esp. “to what extend should judges try to read voters’ minds,” at Crooked Timber.)
Tagged marriage, gay marriage, law, same-sex marriage, walker, rational basis, supreme court, null hypothesis
Turkey burgers, gazpacho, Paul Robeson
It’s hard to make a turkey burger taste good. You kind of need to season the hell out of it. We mixed a pound of ground turkey with a minced half-onion, a couple of cloves of garlic, an egg, and —
CJ’s idea — 1/2 tsp each cinnamon and cumin. Kind of a turkofta. Onions keep it from getting dry, spices keep it from getting bland. I blog it in order to remember it.
In other news, this New York Times gazpacho smoothie is ace and we’ve been making it three times a week. The suggested pecorino crackers are too salty and unnecessary.
We’re in the heart of tomato season now and I’m buying about 10 lb a week. Did you know there was a Paul Robeson tomato? Once you sang on Broadway and battled for civil rights, Paul Robeson. Now
you are in my smoothie.
Tagged cinnamon, cumin, gazpacho, paul robeson, tomatoes, turkofta
Show report: New Pornographers at the Orpheum
New Pornographers played the Orpheum last night.
• Boston’s “Foreplay” on the sound system before the band comes on. Comes off as witty.
• On the records Dan Bejar’s singing doesn’t stand out as much as it does live. Something about the way he approaches the microphone makes him look like he’s always about to rap rather than sing.
Bejar leaves the stage during the songs he’s not singing. This seems churlish to me. He couldn’t just stand there and bang a tambourine on his hip?
• “My Slow Descent into Alcoholism,” the best song they ever wrote — probably my favorite song anybody released last decade — appears in the encore. It is great, but all live versions lack the
precision which is part of the glory of the studio version — precision married to absolutely unmoderated rocking-out-ness. See:
• Kathryn Calder, once an occasional vocal stand-in for Neko Case, is now a full member of the band, playing keyboards and singing backup. Both facially and in manner she reminds me very
powerfully of Doris Finsecker.
• Show ends with “Testament to Youth in Verse.” Openers the Dodos come on stage, everybody’s singing the big “no no no…” at the end of the song, swaying, waving goodbye, drinking beers. The
cellist in the back picks up a saxophone. It’s an almost exact replica of the credit sequence of Saturday Night Live. On purpose?
Tagged dan bejar, doris finsecker, fame, kathryn calder, new pornographers, orpheum
|
{"url":"http://quomodocumque.wordpress.com/2010/08/","timestamp":"2014-04-17T06:41:09Z","content_type":null,"content_length":"101285","record_id":"<urn:uuid:0013e574-00d7-41f6-8ae5-4453fe013508>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00065-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Roswell, GA Prealgebra Tutor
Find a Roswell, GA Prealgebra Tutor
Dear student or parent, I am a professional academic tutor with 20+ years of experience tutoring most subjects at the high school, college and graduate school levels. These include most * Math
courses including algebra, geometry and pre-calc, calculus, linear and coordinate math, statistics differ...
126 Subjects: including prealgebra, English, reading, chemistry
...If my students do not understand the way I am teaching I will adjust my teachings to be more suitable for my students. I am flexible with my schedule and I am always punctual. I am a strong
believer that practice makes perfect and that homework is necessary for students to achieve greatness.
14 Subjects: including prealgebra, chemistry, geometry, algebra 1
I hold a bachelor's degree in Secondary Education and a master's degree in Education. I am certified to teach in both PA and GA. I have teaching experience at both the Middle School and High
School level in both private and public schools.
10 Subjects: including prealgebra, geometry, algebra 1, algebra 2
...I know that is difficult, so I'm here to help you study. In my Algebra 2 course, I had to work hard to remember all of the formulas and operations of Algebra 1. It was not the easiest class I
ever completed, but I made it through and understood it much better than other math courses.
16 Subjects: including prealgebra, Spanish, writing, geometry
...My goal is to show each child that they can succeed and then show them how to do it. My students have experienced tremendous success, regardless of the struggles they were facing or the
difficulties they had in the past. I am a certified Teacher in the State of Georgia in Early Childhood Education (K-5) and Middle Grades (6-8). My certification number is 629898.
7 Subjects: including prealgebra, reading, geometry, algebra 1
Related Roswell, GA Tutors
Roswell, GA Accounting Tutors
Roswell, GA ACT Tutors
Roswell, GA Algebra Tutors
Roswell, GA Algebra 2 Tutors
Roswell, GA Calculus Tutors
Roswell, GA Geometry Tutors
Roswell, GA Math Tutors
Roswell, GA Prealgebra Tutors
Roswell, GA Precalculus Tutors
Roswell, GA SAT Tutors
Roswell, GA SAT Math Tutors
Roswell, GA Science Tutors
Roswell, GA Statistics Tutors
Roswell, GA Trigonometry Tutors
Nearby Cities With prealgebra Tutor
Alpharetta prealgebra Tutors
College Park, GA prealgebra Tutors
Decatur, GA prealgebra Tutors
Doraville, GA prealgebra Tutors
Duluth, GA prealgebra Tutors
Dunwoody, GA prealgebra Tutors
Johns Creek, GA prealgebra Tutors
Mableton prealgebra Tutors
Marietta, GA prealgebra Tutors
Milton, GA prealgebra Tutors
Norcross, GA prealgebra Tutors
Sandy Springs, GA prealgebra Tutors
Smyrna, GA prealgebra Tutors
Snellville prealgebra Tutors
Woodstock, GA prealgebra Tutors
|
{"url":"http://www.purplemath.com/roswell_ga_prealgebra_tutors.php","timestamp":"2014-04-21T10:50:15Z","content_type":null,"content_length":"24156","record_id":"<urn:uuid:4bacb5f5-9421-42b4-8f12-c84b9c83fe20>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00399-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The union of the totally split primes
Let $R$ be a Dedekind domain with quotient field $K$, let $L$ be a finite separable extension of $K$, and let $S$ be the integral closure of $R$ in $L$. If $\mathfrak{p}$ is a nonzero prime ideal of
$R$ that is contained in the union of the prime ideals of $R$ that split completely in $S$, does it follow that $\mathfrak{p}$ splits completely in $S$? It follows easily if $\mathfrak{p}$ is
principal or (by prime avoidance) if only finitely many prime ideals of $R$ split completely in $S$. If necessary, assume that $R$ is the ring of integers in a number field and/or $L/K$ is Galois.
nt.number-theory algebraic-number-theory dedekind-domains number-fields
Isn't it immediate that the only prime ideals contained in $\bigcup_i \mathfrak{p}_i$, where $\mathfrak{p}_i$ are prime, are these $\mathfrak{p}_i$ themselves? – Alex B. Aug 31 '11 at 3:36
2 @Alex, not quite - If $\mathfrak{p}$ has infinite order in the class group, then every element of $\mathfrak{p}$ is contained in a prime ideal $\mathfrak{q} \ne \mathfrak{p}$, and so $\mathfrak{p}
$ is contained in the union of all the other prime ideals. – Michael Aug 31 '11 at 6:24
1 Answer
If the class group is finite, then writing $\mathfrak{p}^h = (\alpha)$, it follows that $\alpha$ is contained in a prime ideal $\mathfrak{q}$ which splits completely in $S$, and thus $\mathfrak{p}^h \subseteq \mathfrak{q} \Rightarrow \mathfrak{p} = \mathfrak{q}$ (because $R$ has dimension one). It sounds like that suffices for your purposes.
More generally that works if the class group is torsion. That suffices for me. Thanks! – Jesse Elliott Aug 31 '11 at 7:33
|
{"url":"http://mathoverflow.net/questions/74130/the-union-of-the-totally-split-primes","timestamp":"2014-04-16T19:45:30Z","content_type":null,"content_length":"53289","record_id":"<urn:uuid:59360ecd-6adb-4a58-b1ab-975fc4b44b6b>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00494-ip-10-147-4-33.ec2.internal.warc.gz"}
|
This paper comprises the results that I have been able to accumulate on regular star polygons. The results have been broken up into five sections. The first section covers some definitions and the
preliminary result of how many regular star polygons there are. The second section discusses the difference between simple stars, those which can be traced out going from point to point and
contacting all the points before getting back to the starting point, and composite stars, those where if you try that, you will get back to the starting point before you have touched all the points.
This section contains applications to subgroups of finite cyclic groups and can be used to illustrate Lagrange's theorem in the case of cyclic groups as well as methods for solving bucket problems.
In the third section it is noted that each star contains all the stars with the same number of points with a smaller index. These ideas are illustrated in the examples to which one can link from the
table of contents through an examples menu page.
The fourth section provides general formulas for the angles at points in the figures and distances between intersection points on the lines that make up the figure. One of the most entertaining features of studying regular star polygons is the number of ways that these problems can be attacked, and the way that, through the magic of trigonometry, the formulas obtained by different methods can be shown to be equivalent. This is particularly true in the fifth section on areas and perimeters, which culminates in the remarkable fact that the area of a regular star polygon is one half of the perimeter times the length of the apothem. This very important result in the study of regular convex polygons, which is used to show that the same number can be used for pi in the circumference of a circle as in the area, is also true for the non-convex regular star polygons.
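(An illustrative sketch rather than part of the paper: the C# program below builds the star {7/3} from its vertices, sums the central triangles standing on its edges, and compares that total with one half of the perimeter times the apothem; the two numbers agree. Whether that sum matches the paper's precise definition of area depends on the conventions laid out in the fifth section.)

using System;

class StarPolygonCheck
{
    static void Main()
    {
        int n = 7, k = 3;          // the regular star polygon {7/3}
        double R = 1.0;            // circumradius

        // Vertices on the circumscribed circle.
        var x = new double[n];
        var y = new double[n];
        for (int j = 0; j < n; j++)
        {
            x[j] = R * Math.Cos(2 * Math.PI * j / n);
            y[j] = R * Math.Sin(2 * Math.PI * j / n);
        }

        double perimeter = 0, centralTriangles = 0;
        for (int j = 0; j < n; j++)
        {
            int m = (j + k) % n;   // each edge joins vertex j to vertex j + k
            double dx = x[m] - x[j], dy = y[m] - y[j];
            perimeter += Math.Sqrt(dx * dx + dy * dy);
            // Triangle (center, V_j, V_m), computed from the cross product.
            centralTriangles += 0.5 * Math.Abs(x[j] * y[m] - x[m] * y[j]);
        }

        double apothem = R * Math.Cos(Math.PI * k / n);   // distance from the center to an edge

        Console.WriteLine("half perimeter times apothem: {0:F6}", 0.5 * perimeter * apothem);
        Console.WriteLine("sum of central triangles:     {0:F6}", centralTriangles);
    }
}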
The page for each section contains only definitions and statements of theorems with a small amount of discussion. To find the proofs, click on the theorem number. This is a link to the proof. At the
end of the proof, there is a link labeled "return to text" which will take you back to the statement of the theorem in the main page for the section. Any place you see a link from "Regular Star
Polygons", it will take you back to the Table of Contents where you will find links to all of the sections, this Forward and the example menu page. Many of the calculations are presented using
graphics. If you set your browser to a 12 point New York font, the transition between text and graphics will appear more seamless.
I hope you have as much fun with these figures as I have.
|
{"url":"http://www.sonoma.edu/users/w/wilsonst/Papers/Stars/Forward.html","timestamp":"2014-04-20T06:49:07Z","content_type":null,"content_length":"3888","record_id":"<urn:uuid:3b4b9193-d462-45ed-bb83-9ff22aee1fcb>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00281-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Is the solution bounded Diophantine problem NP-complete?
Let a problem instance be given as $(\phi(x_1,x_2,\dots, x_J),M)$ where $\phi$ is a diophantine equation, $J\leq 9$, and $M$ is a natural number. The decision problem is whether or not a given
instance has a solution in natural numbers such that $\sum_{j=1}^J x_j \leq M$. With no upper bound M, the problem is undecidable (if I have the literature correct). With the bound, what is the
computational complexity? If the equation does have such a solution, then the solution itself serves as a polytime certificate, putting it in NP. What else can be said about the complexity of this
computability-theory computational-complexity
@R Hahn: As Diophantine Mathematician answered, the problem in the revised (v2) question is NP-complete. – Tsuyoshi Ito Aug 23 '10 at 11:54
2 Answers
A particular quadratic Diophantine equation is NP-complete.
$R(a,b,c) \Leftrightarrow \exists X \exists Y :aX^2 + bY - c = 0$
is NP-complete. ($a$, $b$, and $c$ are given in their binary representations. $a$, $b$, $c$, $X$, and $Y$ are positive integers).
Note that there are trivial bounds on the sizes of $X$ and $Y$ in terms of $a$, $b$, and $c$.
Kenneth L. Manders, Leonard M. Adleman: NP-Complete Decision Problems for Quadratic Polynomials. STOC 1976: 23-29
Thank you. I have edited the problem to make it more precise. – R Hahn Aug 23 '10 at 5:21
What about the sub-problem for $a=1$? $R(b,c) \Leftrightarrow \exists X \exists Y :X^2 + bY - c = 0$ – user22042 Mar 28 '12 at 15:12
And the answer to this question is that it is also NP-complete. The problem with general $a$ can be seen to be reducible to this special case with not much effort. – Emil
Jeřábek Nov 28 '12 at 16:59
Seems to me that you could encode SAT in the usual polynomial manner, with variables restricted to being 0 or 1.
Do you mean we can take such a Diophantine problem and encode as an SAT instance? This seems right, but the other direction is the more interesting one and it isn't obvious to me:
that any SAT formula can be encoded as such a norm-bounded Diophantine equation. – R Hahn Aug 23 '10 at 3:14
no i meant it the correct way. Take a SAT formula and encode it as a polynomial using $x$ for a variable, $1-x$ for its negation, and so on. – Suresh Venkat Aug 23 '10 at 4:45
note also that it's easy to encode the bounded norm constraint as well, since the total sum of all variables is at most $n$, in addition to the integer constraint. – Suresh Venkat Aug
23 '10 at 4:46
right, of course. thank you. If I rephrase the problem in terms of at most 9 unknowns -- which is sufficient so that the unbounded decision problem is undecidable -- this reduction
isn't so straightforward. I am editing the question to reflect this more specific case. – R Hahn Aug 23 '10 at 5:16
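(A worked illustration of the encoding described in the comments above, not part of the original thread: the C# sketch turns the CNF formula (x1 OR NOT x2) AND (NOT x1 OR x2 OR x3) into a single polynomial equation over the naturals, using x for a variable, 1-x for its negation, terms x_i(1-x_i) to force 0/1 values, and a sum of squares to combine everything into one equation, then brute-forces the solutions under the norm bound x1+x2+x3 <= 3. It spends one unknown per SAT variable, so, as noted in the last comment, it says nothing about the version with at most 9 unknowns.)

using System;

class SatAsDiophantine
{
    // P = 0 exactly when (x1, x2, x3) is a 0/1 assignment satisfying
    // (x1 OR NOT x2) AND (NOT x1 OR x2 OR x3).
    static long P(long x1, long x2, long x3)
    {
        long b1 = x1 * (1 - x1);             // forces x1 into {0, 1}
        long b2 = x2 * (1 - x2);
        long b3 = x3 * (1 - x3);
        long c1 = (1 - x1) * x2;             // clause (x1 OR NOT x2) fails iff this is nonzero
        long c2 = x1 * (1 - x2) * (1 - x3);  // clause (NOT x1 OR x2 OR x3) fails iff this is nonzero
        return b1 * b1 + b2 * b2 + b3 * b3 + c1 * c1 + c2 * c2;
    }

    static void Main()
    {
        // Natural-number solutions with x1 + x2 + x3 <= 3 (the norm bound M = 3).
        for (long x1 = 0; x1 <= 3; x1++)
            for (long x2 = 0; x1 + x2 <= 3; x2++)
                for (long x3 = 0; x1 + x2 + x3 <= 3; x3++)
                    if (P(x1, x2, x3) == 0)
                        Console.WriteLine("({0}, {1}, {2})", x1, x2, x3);
    }
}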
|
{"url":"http://mathoverflow.net/questions/36420/is-the-solution-bounded-diophantine-problem-np-complete/36422","timestamp":"2014-04-21T04:52:29Z","content_type":null,"content_length":"60929","record_id":"<urn:uuid:43e564c3-23e1-4a07-a2b7-56fd0915907d>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00402-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Seven of the World’s Incredible Mathematicians
Colin R October 22, 2013
Most of us have enough computing power in our pockets these days to render the work of many of history’s mathematical magicians the work of a few seconds. But, much of the modern world is built on
math in its many varieties – statistics, geometry, algebra and many more complex branches – and without it your house would fall down, your car wouldn’t run and girls in eastern Europe wouldn’t be
able to send you Facebook messages telling you how much they love your profile and that you should really start talking.
We’ll let you know if math ever becomes as sexy as it should be – about 100 times sexier than sports stars, pop musicians and social media gurus – but till then why not just marvel at these seven
giants of the world of numbers.
1 – Grigori Yakovlevich Perelman (born 1966)
Photos of Mr Perelman show a man who looks like he’d be equally at home in an Orthodox monastery as a great academic institution and the Russian mathematical giant has fallen out with the world he
could dominate and now lives, unemployed, at home with his mum.
Despite his hatred of prizes, the mathematical world has attempted to shower him with them, most of which he has turned down and belittled. He’s an expert in Riemannian geometry and geometric
topology and was responsible for solving the most important open question in his field, the Poincare conjecture, which had been sitting waiting for a genius to come up with an answer since 1904 until
Perelman came along in 2004.
He was spotted as a high-flyer from a young age and hot-housed in the best Soviet fashion until he became a world-wide legend. Typically of this man he turned down big US jobs to return to research
in St Petersburg.
He’s been offered the Fields Medal, one of the most prestigious prizes in mathematics, but turned it down, not wanting to become ‘like an animal in the zoo’. He was then put up for a Millennium
Prize worth $1 million and said sod off to that judging panel too, taking a swipe at the whole world of mathematics in the process. In 2003 he turned his back on mathematics, but is now believed to
be working on his own, at home with mom.
2 – Benoit Mandelbrot (1924-2010)
Chances are you’ve seen some fractals. If at any time you’ve been involved in the drug-soaked world of raves you certainly have. If you haven’t take a look on YouTube – steel yourself for some
ambient music – they’re really stunning to look at. More importantly, they help computers work, make your cell phone pick up a signal, help make pretty computer pictures in movies and games and have
fundamentally altered our understanding of the world of nature.
Born in Poland, Mandelbrot – who was Jewish – was forced by the Nazis to flee with his family to France and after the war headed further west to the United States and started studying in earnest. By
the end of the 50s he had hooked up with IBM and was using their computers to design the images that became fractals – he wanted to prove that nature’s seemingly random and undesigned beauty (things
like clouds and shorelines) did in fact have an order to them.
A master of applying his work to the real world, Mandelbrot also worked in economics, fluid dynamics, information theory and cosmology.
He has an asteroid named after him and was awarded dozens of prizes, and thousands of stoned students staring at screens thank him every day.
3 – Terry Tao (born 1975)
Terry Tao may well be a natural born genius. He certainly started early and was spotted as a prodigy and was doing degree-level study from the age of nine; by the age of 24 he was a professor (the
youngest ever) at UCLA after moving from his native Australia.
His prizes are too numerous to list here, but top of the tree is his Fields Medal he won in 2006. The citation lists contributions to ‘partial differential equations, combinatorics, harmonic analysis
and additive number theory’.
Tao’s natural ability and steady flow of new discoveries have led to him being called the Mozart of Mathematics, but Tao himself (and you can follow his blog) insists solving the world’s toughest
math problems is a very logical process with no Eureka moments.
At the moment, Tao’s work is largely in the world of theory and has little application, but in time it almost certainly will prove vital to some other area of science or technology.
4 – Andrew Wiles (born 1953)
Fermat’s Last Theorem was probably the most famous unanswered question in mathematics and it took Andrew Wiles to solve it.
The legendary problem was left by Pierre de Fermat in 1637 along with a note that he had a proof to his problem but no room to write it down. In the following centuries the best mathematicians in the
world tried to match what Fermat had claimed. Wiles’ interest in math was stimulated by the idea of this unsolved problem, which he first came across aged 10.
He worked for six years in absolute secret on the proof, only to have a mistake pointed out when he published it. He went back to it for another year and sorted it out.
Rather like Tao, his work is so advanced (very few mathematicians knew enough to even check his proof of Fermat’s Last Theorem) that it awaits the arrival of uses. However, the solving of this famous
riddle has made him one of the most famous mathematicians in the world, featuring in a couple of rock songs, an episode of Star Trek: Deep Space Nine and some of Stieg Larsson’s thrillers.
5 – John Tate (born 1925)
The Nobel Prize for Mathematics doesn’t exist; however, the Norwegian government does honor mathematicians, and in 2010 it awarded the Abel Prize to John Tate.
His work has had major implications in the mathematical world. He helped found the theory of automorphic forms and he was a big noise in number theory. His name is scattered across a range of
theories and proofs as a measure of his influence.
The Abel Prize was awarded for “his vast and lasting impact on the theory of numbers”, crediting Tate with a “conspicuous imprint on modern mathematics.”
6 – James Maxwell (1831 – 1879)
Crossing the boundary between mathematics and physics, James Maxwell’s most important work was his theory of electromagnetism – work that has a huge impact on the modern world.
He published the key results in 1865, showing that electric and magnetic fields travel together as waves, zooming along at the speed of light.
Incredibly, he was dismissed as slow by his first tutor, but after moving to the best school in Edinburgh (his family wasn’t short of money) he had published his first paper by the time he was 14.
His interests were wide-ranging, and we can add the first color photograph – from 1861 – to his achievements in electromagnetics. He also worked on gases and structural problems.
Although he was said to be socially awkward, he also loved writing his own poetry which he sang to his own guitar accompaniment.
7 – Leonhard Euler (1707 – 1783)
Euler is a giant of mathematics, rated by many as the greatest ever, and churned out work at an extraordinary rate, most of it extraordinarily brilliant.
He was born in Switzerland but spent much of his life in Russia where Peter the Great was busy trying to bring his country up to educational speed. In his spare time Euler also worked as a navy doctor.
He later moved to Berlin, but fell out with his colleagues and returned to St Petersburg where he was to die of a brain hemorrhage suffered during a discussion about Uranus.
His work was extraordinarily broad, covering algebra, geometry, calculus and number theory as well as physics. He has two numbers named after him, a unique distinction. Euler’s work is the foundation
of much modern mathematics and he also put his enormous brain to work looking at music, astronomy and engineering.
A towering genius whose work is worthy of further reading.
|
{"url":"http://kizaz.com/2013/10/22/seven-of-the-worlds-incredible-mathematicians/","timestamp":"2014-04-21T15:28:59Z","content_type":null,"content_length":"37746","record_id":"<urn:uuid:5aa11289-735b-46d4-ab85-85137548be99>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00525-ip-10-147-4-33.ec2.internal.warc.gz"}
|
C# Best way to determine the # of times a single product could fit within a box - Programming (C#, C++, JAVA, VB, .NET etc.)
Math time!
So I'm working on a project where you have those plastic adjustable compartment boxes, and you want to fill one of those compartments with a certain product. I want to figure out how many can fit
inside said box.
At the most basic level, I could do:
public static int VolCalc(int mLength, int mWidth, int mHeight, out int mVol)
{
    // Solve for product and compartment volume.
    mVol = mLength * mWidth * mHeight;
    return mVol;
}

public static int TotalCalc(int mPVol, int mBVol, out int mTot)
{
    // Solve for # of times a product can fill the box.
    var pMultiply = 0;
    for (mTot = 0; pMultiply <= mBVol;)
    {
        mTot++;                    // the for-header has an empty increment, so advance the counter here
        pMultiply = mPVol * mTot;
    }
    // Since I'm using integers, remove the last one to account for <=.
    mTot -= 1;
    return mTot;
}
Would you consider this the best way to approach this problem?
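For comparison, and not from the original thread, here is the same volume count done with a single integer division, which drops the loop and the trailing correction; the names below are made up for illustration:

public static int CompartmentVolume(int length, int width, int height)
{
    return length * width * height;
}

public static int CountThatFitByVolume(int productVolume, int boxVolume)
{
    if (productVolume <= 0)
        throw new ArgumentOutOfRangeException("productVolume");
    // Integer division discards the remainder, so no "remove the last one" step is needed.
    return boxVolume / productVolume;
}

Like the original, this only compares volumes; it does not check whether the product's actual dimensions can be arranged to fit inside the compartment, which is a separate packing question.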
|
{"url":"http://www.neowin.net/forum/topic/1130232-c-best-way-to-determine-the-of-times-a-single-product-could-fit-within-a-box/","timestamp":"2014-04-21T04:34:53Z","content_type":null,"content_length":"77346","record_id":"<urn:uuid:aa99430b-617e-4497-9fac-bfb8f0b27c13>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00354-ip-10-147-4-33.ec2.internal.warc.gz"}
|
What is…? Seminar new videos
One of the most valuable experiences during my time as a PhD student lay in helping to establish a ‘What is…?’ seminar at the Freie Universität Berlin and later/now at the Berlin Mathematical School
I originally came into contact with the concept while visiting the University of Michigan in the winter 2007/2008. However, back in Berlin I wanted to use the theme for a different purpose. In
conversations with a couple of friends we developed the idea to create a seminar by PhD students for PhD students.
This idea became central since the regular colloquium never attracted PhD students nor did the PhD students ever gather together (which thankfully now changes with the BMS). In particular, we were
looking for something with a more open atmosphere.
Looking at Harvard University’s experience with (from what I have been told) first having a ‘Basic Notions’ seminar, the non-trivial nature of which led the students to compensate with a ‘Trivial Notions’ seminar, we decided to exclude professors at first. This in fact got us some really negative responses when we sent out emails looking for all the PhD students hidden in workgroups outside
our own fields (one professor in particular simply could not fathom that the presence of your “boss” might hinder a free discussion). It was rather shocking that even professors actively popularizing
mathematics simply reacted with “these things only last as long as a single person is behind them” (and this was before we even started — talk about support…).
Nevertheless, the seminar got on its way. The first semester was tough, with lots of, shall we say, “experiments”, trying to find our way (and above all speakers from other fields). In the second
semester a PhD student from the BMS joined us with the idea of making the seminar part of the biweekly BMS Friday. This semester has seen yet another expansion with some talks taking place at the BMS lounge at the Technische Universität Berlin.
Since I’m now leaving Berlin it has been a pleasure to see the next generation take over. However — and this was the whole point of the post before this melancholic rambling took over — I still am
involved in making video recordings of the talks available whenever possible. I want to stress how much I am indebted to the speakers for allowing the publication of their talks. This is especially
important since the videos are sometimes not very good (see my own soon to be put up and very bad talk about topological dynamics). The point is that the seminar is a platform to experiment and test
oneself, which is something that students of mathematics do not get to do a lot. Therefore I think we can be very happy that so many speakers are ready to put themselves out there and learn from the experience.
Anyway, yesterday I published two more videos, Carsten Schultz’s ‘What is Morse theory?’ and Inna Lukyanenko’s ‘What is a quantum group?’. The good user experience of vimeo might lead to all of the
videos eventually appearing there, but so far Inna’s video is the first on vimeo and the rest is on SciVee (but another one might end up on vimeo next week, we’ll see…).
|
{"url":"http://boolesrings.org/krautzberger/2010/01/28/what-is-seminar-new-videos/","timestamp":"2014-04-18T23:15:32Z","content_type":null,"content_length":"23918","record_id":"<urn:uuid:94eea300-871c-426c-98dd-885ef404c934>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00537-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Programme Theme
A notable aspect of many of the most important recent developments is the importance of increasingly sophisticated techniques from mathematics and theoretical computer science: examples include the
use of random states and operations, techniques from operator theory and functional analysis, and convex geometry. The range of mathematical techniques already employed is diverse, but the expertise
is rather scattered within the community. We also see other areas of mathematics that offer the potential to make a major future impact in the field, random matrix theory being an example of
particular current interest.
Among the mathematical challenges addressed during the semester will be some of the big open questions in the field, as well as recently opened up directions:
• Additivity violations of capacities and minimal output entropies; “weak additivity” for certain quantities?
• Existence of bound entanglement with non-positive partial transpose? The question, originally of information theoretic origin, can be cast as a problem about positivity and 2-positivity of matrix maps.
• Abstract positivity and complete positivity: In quantum zero-error communication a link to operator systems was exhibited, promising a functional analytic generalisation of graph theory.
• Techniques from convexity and convex optimisation have had high impact in non-local games. One question of particular importance is whether non-local games with shared entanglement obey a
parallel repetition theorem (Raz) – this is known with only classical correlation and with arbitrary no-signalling help, but for shared entanglement it is only proved in special cases. Another
one is the complexity of maximum quantum violations of Bell inequalities.
• Is there a quantum version of the PCP theorem? Likewise, it is open whether in QMA, witnesses can be made unique (echoing a well-known classical probabilistic reduction of Valiant-Vazirani).
• Measurement-based quantum computation poses questions on characterising the complexity of quantum computations via the properties of the “resource states” used.
• Random matrix theory: starting with its use in proofs of non-additivity, new problems motivated by quantum information have emerged. These include largest eigenvalue fluctuations and spectra of
higher tensor equivalents of Wishart ensembles; perturbations of Wigner ensembles by diagonal matrices with fixed or deterministic statistics – information on the distribution of eigenvectors of
such matrices would have deep implications on quantum statistical mechanics.
We plan to hold a week-long workshop at the beginning, drawing together all topics of the above proposal. In addition we propose to hold a smaller and more focussed workshop, in the middle of the
meeting. Finally we will hold a workshop at the end of the programme that will survey the state of the field as it stands following the work during the programme; there will be an emphasis on open
problems and directions for the future.
Details on all workshops can be found on the Workshop page.
Image of Claude Shannon copyright MFO
|
{"url":"http://www.newton.ac.uk/programmes/MQI/","timestamp":"2014-04-19T17:11:12Z","content_type":null,"content_length":"9594","record_id":"<urn:uuid:ac8be824-eaa1-4d6d-8be6-c0dfe7bd6f28>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00522-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Going Superlinear
A Word About Sequential Efficiency versus Complexity
At this point, it's important to consider and understand a potential objection. "But wait," someone could complain, "your example so far is unfair because you've stacked the deck. The truth is that,
when you find a superlinear speedup, what you've really found is an inefficiency in the sequential algorithm." Taking a deep breath, the dissenter might correctly point out: "For any P-way parallel
algorithm, we can find a sequential algorithm that is only P times slower (that is, the parallel advantage is linear, not superlinear) as follows: Just visit the nodes in the same order as the
parallel algorithm, only one at a time instead of P at a time. Visit the set of nodes visited first by each worker, then the set of nodes visited second by each worker, and so on. Obviously, that
will be slower than the parallel version by only a factor of P." Give yourself bonus points if you noticed that, and more bonus points if you noticed that even Amdahl's Law (covered last month [3])
implies that the maximum speedup through the use of P processors cannot be more than P.
It is indeed important and useful to know we can turn any parallel algorithm into a sequential one by just doing the steps of the work one at a time instead of P at a time. This is a simple and cool
technique that often helps us reason clearly about the performance of our parallel algorithms; we definitely need this one in our algorithm design toolchests.
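To make that resequentialization concrete, here is a minimal sketch (mine, not from the article; the helper names and P = 4 are illustrative) of visiting an array in the same order a P-way parallel search over contiguous subranges would, one element at a time:

# Illustrative sketch: visit elements sequentially in the order a P-way
# parallel search over contiguous subranges would touch them.
def interleaved_order(n, p):
    chunk = (n + p - 1) // p                 # size of each worker's subrange
    starts = [w * chunk for w in range(p)]   # first index owned by each worker
    for step in range(chunk):                # the step-th element of every worker
        for s in starts:
            i = s + step
            if i < n:
                yield i

def resequentialized_search(data, target, p=4):
    for i in interleaved_order(len(data), p):
        if data[i] == target:
            return i
    return -1

By construction this is slower than the P-way parallel version by at most a factor of P, which is exactly the dissenter's point; the locality and prefetching problems discussed next are the cost of jumping between the P subranges on every step.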
But we cannot therefore conclude that superlinear speedups are simply due to inefficiency in the sequential algorithm ("you should have written the sequential one better"). Let's consider a few
pitfalls in the proposed resequentialized algorithm.
First, it has worse memory behavior, especially if (a) the algorithm makes multiple passes over the same region of data and/or (b) the collection is an array. The proposed algorithm has worse memory
and cache locality because it deliberately keeps jumping around through the search space. Further, its traversal order is hostile to prefetching: Hardware prefetchers try to hide the cost of
accessing memory by noticing that if your program is asking for memory location X now, it's likely to want memory locations like X+1 or X-1 next, so it can choose to speculatively request those at
the same time without actually waiting for the program to ask for them. This typically gives a big performance boost to code that traverses memory in order, either forward or backward, instead of
jumping around. (This applies less if the collection is a node-based data structure like a tree or graph, because node-based structures typically make their traversals jump around in memory anyway.)
Note that the parallel algorithm avoids this problem because each worker naturally maintains both linear traversal order and locality within its contiguous subrange. You might expect that if the
workers all share the same cache anyway, the analysis could come out the same, but the problem doesn't bite us for reasons we'll consider in more detail next month when we cover hardware
considerations. (Hint: The workers typically do not share the same cache...)
Second, it's more complex. It has to do more bookkeeping than a simple linear traversal. This additional work can be a small additional source of performance overhead, but the more important effect
is that algorithms that are more complex require more work to write, test, and maintain.
Third, it will sometimes be a pessimization. Even when the values are nonuniform (which is what the complexified algorithm is designed to exploit), sometimes there will be high-probability areas near
the front of the collection, and the original sequential search would have searched them thoroughly first whereas the modified version will keep jumping away from them. After all, there's no way to
tell what visitation order is best without knowing in advance where the high-probability regions are.
Finally, and perhaps most importantly, when we're comparing the proposed algorithm with simple parallel search, we're not really comparing apples with apples. We are comparing:
• A complex sequential algorithm that has been designed to optimize for certain expected data distributions, and
• A simple parallel algorithm that doesn't make assumptions about distributions, works well for a wide range of distributions, and naturally takes advantage of the special ones the optimized one is
trying to exploit...and still gets linear speedup over its optimized sequential competition on the latter's home turf.
On Deck
There are two main ways into the superlinear stratosphere:
• Do disproportionately less work.
• Harness disproportionately more resources.
This month, I focused on the first point. Next month, I conclude that with a few more examples, then consider how to get superlinear speedups by harnessing more resources quite literally, running on
a bigger machine without any change in the hardware.
Thanks to Tim Harris, Stephan Lavavej, and Joe Duffy for their input on drafts of this article.
[1] V.N. Rao and V. Kumar. "Superlinear speedup in parallel state-space search," Foundations of Software Technology and Theoretical Computer Science (Springer, 1988).
[2] Si. Pi Ravikumar. Parallel Methods for VLSI Layout Design, 4.3.2 (Ablex/Greenwood, 1996).
[3] H. Sutter. "Break Amdahl's Law!" (DDJ, February 2008).
|
{"url":"http://www.drdobbs.com/cpp/going-superlinear/206100542?pgno=3","timestamp":"2014-04-20T08:20:16Z","content_type":null,"content_length":"93910","record_id":"<urn:uuid:a74702b0-1859-4b21-9d4a-7b0cddb26031>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00523-ip-10-147-4-33.ec2.internal.warc.gz"}
|
predator prey model
April 13th 2009, 12:47 PM
predator prey model
How can you find the critical points of a predator-prey model and solve the linear comparison system corresponding to each critical point?
Then sketch the trajectories in the vicinity of each?
April 13th 2009, 12:50 PM
Presumably, the predator-prey model has an equation that models a relationship. Without that, you can't do anything.
April 13th 2009, 12:52 PM
you can use
x' = 5x − xy, y' = −2y + 3xy.
April 13th 2009, 12:58 PM
Given a function $f(x,y)$, the critical points occur at $\frac{df}{dx} = 0, \frac{df}{dy} = 0$. Using your notation, this is where x' = 0 and y' = 0. So you have to solve that system of equations
to find the critical points.
April 13th 2009, 01:05 PM
so you sub these in and then find out x and y to be
x=2/3 and y=5?
April 13th 2009, 01:26 PM
April 13th 2009, 01:40 PM
April 13th 2009, 01:49 PM
Yes, there is only one critical point, which is the one you found. I'm afraid I don't know what "solving the linear comparison" refers to. The tangent plane to the surface $z(x, y)$ at that point
is going to be $z = c$ for some constant c, because the surface is flat at a critical point. I don't know what else to say about it.
April 13th 2009, 01:53 PM
Yes, there is only one critical point, which is the one you found. I'm afraid I don't know what "solving the linear comparison" refers to. The tangent plane to the surface $z(x, y)$ at that point
is going to be $z = c$ for some constant c, because the surface is flat at a critical point. I don't know what else to say about it.
it means to determine a relationship
between x and y (or between u and v for translated critical points).
Determine the type and stability of each critical point, and sketch the
trajectories in the vicinity of each.
Which I'm not too sure about
It's probably obvious
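For reference, here is a small sketch (added for illustration, not part of the thread; it assumes the sympy library) that finds the critical points of x' = 5x − xy, y' = −2y + 3xy and classifies them through the eigenvalues of the Jacobian:

# Illustrative: critical points and linearization of the system above.
import sympy as sp

x, y = sp.symbols('x y')
f = 5*x - x*y
g = -2*y + 3*x*y

critical_points = sp.solve([f, g], [x, y], dict=True)   # (0, 0) and (2/3, 5)
J = sp.Matrix([f, g]).jacobian([x, y])

for cp in critical_points:
    print(cp, J.subs(cp).eigenvals())
    # at (0,0): eigenvalues 5 and -2, a saddle;
    # at (2/3,5): purely imaginary eigenvalues, a center of the linearized system

The trajectories near each critical point can then be sketched from the type and stability that these eigenvalues indicate.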
|
{"url":"http://mathhelpforum.com/calculus/83546-predator-prey-model-print.html","timestamp":"2014-04-18T12:09:48Z","content_type":null,"content_length":"9627","record_id":"<urn:uuid:1f934880-5cb9-4444-9d6f-1bf432e17f78>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00170-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Math Forum Internet News
NSDL Online Workshops
The Math Forum has been hosting a series of online workshops for teachers who work with students in 5th through 9th grade. Teachers will explore the Math Tools digital library and several software
tools that contribute in some way to mathematical understanding, problem solving, reflection and discussion.
Applications are now open for:
Using Technology and Problem Solving to Build Algebraic Reasoning
Technology Tools for Thinking and Reasoning about Probability
Both workshops run June 15 - July 27, 2009. Applications are due by June 8.
Currently the $100 workshop fees are paid for participants by a grant from the National Science Foundation (NSF). For a fee of $25 to cover administrative costs, 1.5 Continuing Education Units (15
contact hours) can be provided in the form of a certificate from the Drexel University School of Education to any participants who meet the minimum requirements.
Tutor.com Homework Help Resources
Find worksheets, lessons, sample problems, and more in this new collection of free online resources, such as the Math Forum's Ask Dr. Math archives.
Top-level math categories include:
• Elementary
• Middle Grades
• Algebra
• Algebra II
• Geometry
• Trigonometry
• Calculus
• Statistics
Tutor.com has partnered with the Math Forum, offering their tutoring services to users of Ask Dr. Math who need more immediate help than our volunteer math doctors can provide.
Conteúdos Digitais em Matemática para o Ensino Médio
Ana Maria Kaleff and Humberto José Bortolossi of the Universidade Federal Fluminense offer Java applets and lesson plans in Portuguese, predominantly for the middle school level.
Topics include:
• real functions
• algebra
• permutations
• polygons
• tangrams
• conic sections
• solids of revolution
• other two- and three-dimensional geometry
Kaleff and Bortolossi offer two English-language geometry resources: A Plethora of Polyhedra and Trip-Lets.
|
{"url":"http://mathforum.org/electronic.newsletter/mf.intnews14.21.html","timestamp":"2014-04-16T16:16:02Z","content_type":null,"content_length":"11757","record_id":"<urn:uuid:4b004dde-31eb-40e4-80a9-0d565bbad65e>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00129-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Jo Ann Curls Page Books,$$Compare Prices at 110 Stores! Author Search by Best Selling, Page 1
1. By: Jo Ann Curls Page
ISBN: 0788423282 (0-788-42328-2)
ISBN-13: 9780788423284 (978-0-788423-28-4/978-0788423284)
Pub. Date: 2003-03-01
List Price: N/A
<= Click to Compare 110+ Bookstores Prices!
[Snapshot Info.] [Books Similar to Extract of the Rejected Applications of the Guion Miller Roll of the Eastern Cherokee, Volume 2]
2. By: Jo Ann Curls Page
ISBN: 0788423509 (0-788-42350-9)
ISBN-13: 9780788423505 (978-0-788423-50-5/978-0788423505)
Pub. Date: 2003-07
List Price: $38.00
<= Click to Compare 110+ Bookstores Prices!
[Snapshot Info.] [Books Similar to Extract of the Rejected Applications of the Guion Miller Roll of the Eastern Cherokee (Volume 3)]
3. By: Jo Ann Curls Page
ISBN: 0788404954 (0-788-40495-4)
ISBN-13: 9780788404955 (978-0-788404-95-5/978-0788404955)
Pub. Date: 2009-05-01
List Price: $21.50
<= Click to Compare 110+ Bookstores Prices!
[Snapshot Info.] [Books Similar to Index to the Cherokee Freedmen Enrollment Cards of the Dawes Commission, 1901-1906]
4. By: Jo Ann Curls Page
ISBN: 0788413155 (0-788-41315-5)
ISBN-13: 9780788413155 (978-0-788413-15-5/978-0788413155)
Pub. Date: 2009-05-01
List Price: $46.00
<= Click to Compare 110+ Bookstores Prices!
[Snapshot Info.] [Books Similar to Extract of Rejected Applications of the Guion Miller Roll of the Eastern Cherokee, Volume 1]
5. By: Jo Ann Curls. Page
Amazon ASIN: B007HE145M
Pub. Date: 1994
List Price: N/A
<= Click to Compare 110+ Bookstores Prices!
[Snapshot Info.] [Books Similar to Descendants of Samuel & Maria Riley]
6. By: Jo Ann Curls Page
Amazon ASIN: B0006F5SAG
Pub. Date: 1994
List Price: N/A
<= Click to Compare 110+ Bookstores Prices!
[Snapshot Info.] [Books Similar to Descendants of Joseph Lynch & Sophie Ross]
|
{"url":"http://www.alldiscountbooks.net/_Jo_Ann_Curls_Page_Books_a_.html","timestamp":"2014-04-19T18:02:22Z","content_type":null,"content_length":"48751","record_id":"<urn:uuid:d5b649c5-763b-4990-915d-a5e83550eea5>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00119-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Find Radius of Smaller Circle
Let $O_1, \ O_2$ be the centers of the circles. (The bigger circle has the center $O_1$), $A\in(O_1), \ B\in(O_2)$ Then $O_1A\perp AB, \ O_2B\perp AB$. Let $BC\parallel O_1O_2, \ C\in (O_1A)$ In the
right triangle CAB apply the Pythagorean theorem: $BC^2=AC^2+AB^2\Rightarrow 400=(11-r)^2+19^2$ Now solve the quadratic. Remember that $r<11$
Hi magentarita, Ok, here's what I think. In my diagram I have drawn a line from the center of the larger gear to the tangent point of the smaller gear. Use the Pythagorean Theorem to find its length.
$c^2=11^2+19^2$ $c=\sqrt{482}$ Find angle DBA using Arctan. $\arctan \frac{19}{11}\approx 59.9314$ Angle BAC is also 59.9314 since alternate interior angles are congruent (BD and CA are parallel
because we have two lines perpendicular to the same line) Now look at triangle ABC. We know $c=\sqrt{482}$, a = 20, and angle BAC = 59.9314. Use the Law of Cosines. $a^2=b^2+c^2-2bc \cos A$ $20^2=b^2
+ (\sqrt{482})^2-2b(\sqrt{482}) \cos 59.9314$ This all boils down to $b^2-22b+82=0$ Apply the quadratic formula to get your 2 results. One you have to throw away because it is bigger than the larger
gear radius. Now you have your answer.
Actually you are dealing with a right triangle. (See attachment) 1. $r = 11-x$ 2. $x^2+19^2=20^2$ Thus the correct answer is b)
I want to thank all of you for your reply.
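(Added note, not part of the thread: a quick numerical check of the quadratic above.)

# Illustrative check of 400 = (11 - r)^2 + 19^2, keeping the root with r < 11.
import math
r = 11 - math.sqrt(20**2 - 19**2)
print(r)   # about 4.755, consistent with the roots 11 ± sqrt(39) of b^2 - 22b + 82 = 0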
|
{"url":"http://mathhelpforum.com/geometry/95194-find-radius-smaller-circle-print.html","timestamp":"2014-04-18T05:32:33Z","content_type":null,"content_length":"9762","record_id":"<urn:uuid:8a6d8e6a-60e2-436e-85c8-a928a895675e>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00044-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Fourier Series Basics Problem
February 20th 2007, 04:25 PM #1
Dec 2006
Fourier Series Basics Problem
I am learning to calculate fourier series, but am having difficulty with this basic problem. The problem is to find the fourier series for the given function (extended periodically in this case
for the sine series only)
*please excuse notation...sorry
f(x)={ x , 0<x<1
1 , 1<x<2
This function is then extended periodically to have a period of 4, with L = 2. when calculating the b_n fourier coefficients I am using the form:
b_n = (1/2) ∫ f(x)sin(n*pi*x/L)dx
I integrated over the interval (-1,3). So from (-1,1) i am integrating xsin(ax) and a square wave centered around the x-axis from (1,3). The square wave integral is an odd function over a
symmetric interval so goes to zero....I am left with the xsin(ax) portion from (-1,1). I get:
(-(2/(n*pi))cos((n*pi*x)/2)+(4/((n*pi)^2)sin((n*pi*x)/2) evaluated from -1 to 1
using cos(-x) = cos(x) and sin(-x) = -sin(x), it looks to me as though the sine terms will cancel off, and the cosine terms should double. but this is not the correct answer (solving just for the
coefficient not the fourier series). Any help would be greatly appreciated.
February 21st 2007, 07:01 AM #2
Global Moderator
Nov 2005
New York City
The period is 2 not 4.
February 21st 2007, 08:33 AM #3
Dec 2006
well, maybe I didn't explain the problem very clearly. I am given a function f(x), which is defined over an interval (a,b). I am then told to extend this periodically as either a sine series or cosine series, which will essentially make the period (-b,b).
So the initial function is defined from (0,2), but after being extended as an odd function for the sine series representation it is periodic from (-2,2)... at least this is how I understood the problem.
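(Added for illustration, not part of the thread: a quick numerical check of the half-range sine coefficients, assuming numpy and scipy are available.)

# b_n = (2/L) * integral_0^L f(x) sin(n*pi*x/L) dx for the sine series with L = 2,
# where f(x) = x on (0,1) and f(x) = 1 on (1,2).
import numpy as np
from scipy.integrate import quad

L = 2.0

def f(x):
    return x if x < 1 else 1.0

def b(n):
    val, _ = quad(lambda x: f(x) * np.sin(n * np.pi * x / L), 0, L)
    return 2.0 / L * val

for n in range(1, 6):
    print(n, b(n))   # compare against the coefficients obtained by hand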
|
{"url":"http://mathhelpforum.com/calculus/11782-fourier-series-basics-problem.html","timestamp":"2014-04-17T20:53:55Z","content_type":null,"content_length":"35772","record_id":"<urn:uuid:34788ffd-323c-4085-a616-c0106d096b1c>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00141-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Posts about mathematics on Is all about math Weblog
Tag Archives: mathematics
Princeton University Press just published the Princeton Companion to Mathematics.
I learned about this book while I was reading Timothy Gowers’s first blog post.
I have read quite a bit and I think it is a wonderful book. While it does lack a lot of detail (by this I mean there is practically no demonstration in the book), it does excel at giving a general view of what mathematics is all about.
If you ever wonder
What is mathematics?
or what do mathematicians do. This book is a good start. The book is a compilation of essays on different topics in mathematics, many written by first-class mathematicians including Timothy Gowers and Terence Tao, both recipients of the Fields Medal in mathematics, the equivalent of the Nobel prize, and many others.
I believe the book should be part of any mathematician’s or aspiring mathematician’s library.
The book covers mathematical history and also mathematics itself. Some parts of the book could be read by high school students, but for the great majority it is necessary to have at least an undergrad degree to be able to understand it. I wonder, if a new Ramanujan found this book, whether he would be able to reinvent the whole of mathematics from it.
The book is selling for 66 dollars at amazon, down from the 99 dollar publisher’s price, so it is a good bargain.
Curiously the shell image on the cover of this book depicts a section of the Nautilus shell, long believed to be related to the Fibonacci golden ratio, but I have learned from God plays dice that this is not so!
I assume that the believed relation between the Chambered Nautilus shell and the Fibonacci golden ratio was the motivation to place the image on the cover as an example of Mathematics appearing in Nature.
It is interesting to see also this video hosted at the Clay mathematics site
Timothy Gowers The importance of Mathematics and it is also available at Clay Videos.
for other post click isallaboutmath
The Chambered Nautilus shape does seems to be related to the Logarithmic Spiral.
In the back flap of the dust jacket is stated that the reason to place the Chambered Nautilus Shell image on the cover is due to its relation with the Fibonacci Sequence something I had speculated in
my review above.
A few posts ago we used Mathematica to draw the altitudes of an arbitrary triangle of given coordinates. Now we are solving these other two problems. In the first figure the red lines represent the medians and in the second figure the yellow lines represent the angle bisectors. So our problem consists of finding the Mathematica code to produce these figures.
Let us do the easy one first. To draw a triangle and the medians we will use the Linear Bezier equation we have used before, and since we are interested in finding the mid points on each side the
resulting Mathematica code is very simple.
a = {0, 0};
b = {3, 0};
c = {1, 2};
BAB[t_, a_, b_] := (1 - t) a + t b;
Graphics[{Background -> Black,
{{Thick, Blue, Line[{a, b, c, a}]},
{Thick, Red, Line[{c, BAB[1/2, a, b]}]},
{Thick, Red, Line[{a, BAB[1/2, c, b]}]},
{Thick, Red, Line[{b, BAB[1/2, c, a]}]}
Inset[Text[Style["A", White, Italic, Large]], {-.1, 0}],
Inset[Text[Style["B", White, Italic, Large]], {3.1, 0}],
Inset[Text[Style["C", White, Italic, Large]], {1.1, 2.1}]}]
Now for the angle bisectors it is a bit harder. We are going to need some property the bisectors satisfy that will allow us to get the coordinates of the intersection of each bisector with each of
the sides. One property that could help us is this one.
Theorem: For a triangle ABC, if CQ is the bisector through the angle ACB then AC/CB=AQ/QB.
Notice we will need to find some distances between sides that are given by coordinates, so the function EuclideanDistance will be of help. Since we can easily compute AC/CB, we need to find Q in AB such that AQ/QB is equal to AC/CB, but this is not difficult to achieve if we use the function Nearest. We can guess the best value of Q by building a table where Q goes from A to B and then we pick the best value that approaches the ratio AC/CB. We can accomplish that with the following Mathematica code.
a = {0, 0};
b = {3, 0};
c = {1, 2};
ac = EuclideanDistance[a, c];
bc = EuclideanDistance[b, c];
ab = EuclideanDistance[a, b];
cc = ac/bc;
BAB[t_, a_, b_] := (1 - t) a + t b;
tabl = Table[
EuclideanDistance[a, BAB[t, a, b]]/
EuclideanDistance[b, BAB[t, a, b]], {t, 0.000001, 1, 0.001}];
nearest = Nearest[tabl, N[cc]];
Flatten[Position[tabl, First[nearest]]]
that will give us the value 443, which we will use in conjunction with the 0.001 step. Doing the same for the other sides of the triangle, we get the very compact solution
a = {0, 0};
b = {3, 0};
c = {1, 2};
BAB[t_, a_, b_] := (1 - t) a + t b;
Graphics[{Background -> Black,
{{Thick, Blue, Line[{a, b, c, a}]},
{Thick, Yellow, Line[{c, BAB[443*0.001, a, b]}]},
{Thick, Yellow, Line[{a, BAB[428*0.001, c, b]}]},
{Thick, Yellow, Line[{b, BAB[486*0.001, c, a]}]}
Inset[Text[Style["A", White, Italic, Large]], {-.1, 0}],
Inset[Text[Style["B", White, Italic, Large]], {3.1, 0}],
Inset[Text[Style["C", White, Italic, Large]], {1.1, 2.1}]}]
Again this time Nearest comes to the rescue and helps us get the best value from a list of possible values!
These are the links to the Phun scenes on our lecture
Are you having phun Yet?
Binary Computer with decimal conversion
Livio’s Multiplication Machine
If you like to play or modify the Scene above in the Phun simulator you can download the Phun program from
Posting from
Have Phun!
Going back once more to the original problem of finding a formula to compute the $n$ triangular number.
$T_n=1+2+3+ \cdots +n$
Triangular numbers suggest by their name the idea of geometry and it should be interesting to try apply some geometric reasoning to solve the problem. As we explained in the first lecture triangular
numbers are obtained when we arrange a number of stones in an equilateral triangular shape. It just happens that it is very convenient for us to re arrange those stones in another triangular shape.
That is right angle triangle. In that way we are able to produce triangular numbers and it become very simple to arrange the stones in a two dimensional array of stones that could be easily counted.
Now we will see how the geometric demonstration works for $T_4.$
First, we duplicate the number of stones so we have $2 T_4$ and, conveniently rotating the second $T_4$, we can join it to the first $T_4$ and we have as a result a rectangular shape.
The number of stones in that rectangular shape is very easy to compute. It will be the product of the number of stones in two sides. In this case $2 T_4=4(4+1)$ and since we have duplicated the original number $T_4$ now we need to divide by 2 and that will give us the value $T_4=4(4+1)/2.$ The same procedure could be carried out for 100 stones or any other number. We do not suggest you do this
literally for 100 stones, but you should be able to play this same argument in your mind.
In the previous example we were extremely close to form a square number. We actually missed this by just one row. That is why we have the $(4+1)$ factor. A natural question to ask is.
What should we add to any triangular number $T_k$ to get a square number?
We can see it in the diagram. What we need is to add the prior triangular number! We can express this algebraically with the following relation
$T_{k-1}+T_{k}=k^2$
What this formula is saying is that the sum of two consecutive triangular numbers is a square number. We can verify that this is true since $T_1+T_{2}=1+3=4$ and also $T_2+T_{3}=3+6=9.$ It is easy to see from the geometric configuration why the sum of two consecutive triangular numbers is a square. Since $T_k$ is something we are interested in finding, the relation seems to be also a kind of equation where the unknown to be found is $T_k$. Notice also that $T_{k-1}$ appears in the equation. Naturally if we know how to compute $T_k$ we then also know how to compute $T_{k-1}.$ Equations of this type are known as recursive equations. The last row of $T_k$ will be the diagonal, since subtracting the diagonal the resulting triangular number is equal to $T_{k-1}.$ We can also write another relation
$T_{k}=T_{k-1}+k$
Now these two equations are defining $T_k$ in terms of the prior triangular number $T_{k-1}.$ Relations of this type are called recurrence equations and we will discuss them in more detail in some
other lecture. For now let us try and see if we can solve the first equation.
(This is a partial transcription of the Video Lecture Triangular Numbers (III) the video continues displaying a Solution)
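(Added here, not part of the lecture: the closed form and the two relations above are easy to check numerically.)

# Minimal check of T_n = n(n+1)/2, T_k = T_{k-1} + k and T_{k-1} + T_k = k^2.
def T(n):
    return n * (n + 1) // 2

for k in range(1, 10):
    assert T(k) == T(k - 1) + k
    assert T(k - 1) + T(k) == k * k
print("all checks pass")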
The complete video lecture can be seen at Triangular Numbers (III)
There is a prior post similar to this at Triangular Numbers (I)
There is a prior post similar to this at Triangular Numbers (II)
This is a blog posting from www.isallaboutmath.com
In the prior lecture we have shown how Gauss solved the problem of
finding the sum $1+2+3+ \cdots +100.$
In Gauss solution we reduce the problem of finding the sum of different natural numbers to the problem of finding the sum of 50 equal numbers. It is very natural in mathematics to generalized
concepts and results. A natural small generalization of the prior result will be to
find the sum $1+2+3+ \cdots +n.$
of the first $n$ natural numbers. It is also a natural impulse to try and solve similar problems with analogous solutions. In this case we will try to solve the problem using the same idea as Gauss
with a very small modification. As in the prior lecture let us call $T_n=1+2+3+ \cdots +n$ the $n$ triangular number. We can easily re arrange the order of the elements in the addition in reverse
order. Therefore we can write $T_n=n+(n-1)+(n-2)+ \cdots +1.$ If we add this two equalities we get
$2 T_n=(n+1)+(n+1)+(n+1) \cdots +(n+1)$
as in Gauss’s solution to the problem we have found again that adding a number from the beginning of the sequence to one number from the end of the sequence the sum stays constant. The right hand side of the equality is also very easy to compute. Since we have $n$ of those terms, therefore
$2 T_n=n(n+1)$ and so $T_n=\frac{n(n+1)}{2}.$
This solution was easy to find because the solution is based on the same idea as Gauss’s solution. For the special case $T_n=1+2+3+ \cdots +100$ we have already pointed out why Gauss’s solution works
and the same is true here. The solution works by translating the problem of finding the sum of different numbers to the problem of finding the sum of equal numbers and we are able to produce the
equal numbers by conveniently re arranging the numbers in the sequence. In both cases we are dealing with sequences of consecutive natural numbers.
Could we generalize this a little bit more?
Yes, we can. What if instead of a sequence that starts on one we get a sequence that starts on $a_1$ and we obtain the next element by adding a constant natural number $d.$ So the elements of this
progression will be
$a_1, a_{2}=a_1+d, a_{3}=a_1+2d, ..., a_{n}=a_1+(n-1)d$
this progression is a bit more general than the sequence of natural numbers. First, it does start on an arbitrary number and the difference of two consecutive terms is $d$ instead of 1. Progressions
that satisfy this conditions are called arithmetic progressions. Example of arithmetic progressions are
$1, 2, 3, ...,100$
In this case the first element is 1 and the value for $d$ is also 1.
Another example is
$2,4,6, ...,200$
In this other example the first element is 2 and the increment is by $d=2$. So we obtained each term by adding 2.
Can we find a formula for the sum of the first $n$ terms of the arithmetic progression?
That is to find $a_1+a_2+a_3 \cdots +a_n$
where $a_k=a_1+(k-1)d$ for $k$ from 1 to $n$ to find this formula we suspect that we may be able to find some invariant as before …
(This is a partial transcription of the Video Lecture Triangular Numbers (II) the video continues displaying a Solution)
The complete video lecture can be seen at Triangular Numbers (II)
There is a prior post similar to this at Triangular Numbers (I)
This is a blog posting from www.isallaboutmath.com
On Project Gutenberg, Math Books and Google Books and DJVU.
As many of you know Project Gutenberg is the brain child of Michael S. Hart.
The web site for Project Gutenberg is at
It collects in one place public domain works. The number of works of literature they have collected so far at Project Gutenberg is about 22,000. They use volunteers to proof read the converted works.
The current process is a laborious one even with the use of very sophisticated technology.
A book is scanned and the imaged pages are fed into a computer OCR program, usually ABBY Reader; then a text file is produced, and the page images and the text file are presented to volunteers in a process called distributed proof reading.
Project Gutenberg mainly includes works in the English language; only very few works in other languages are included. Another big disappointment with Project Gutenberg is that very few mathematical works are part of the project. This is not for lack of available material but, I will guess, for the difficulty in translating mathematical notations into $\LaTeX$ or MathML.
Meanwhile some other options have appear that are trying to filled this vacuum. One other choice is the project initiated by Google. The project that I am referring to here is the Google Books. In
this project Google has put a large collection of works this time including a big collection of mathematical books. The books from big university libraries are being scanned. Giving us back this
treasures of old.
The works available for free on the net in Adobe PDF Reader format are the works of the greatest mathematicians in history. From the completed works of Gauss to Euler’s to Augustin L Cauchy,Evariste
Galois, J Lagrange, Camille Jordan, Serre etc.
While the work done by Google is commendable there are still some issues. Some of the works have not been scanned properly, the pages are crooked, and on some of the pages the fonts are not readable.
On the other hand it is great to be able to do a Google search on all the available classical books! There will be hopefully a day when we can just do research on existing material from the comfort
of one’s own home.
Another similar effort is done by
similar in spirit to Google Books and to Project Gutenberg. Mainly all this projects are using the established Adobe PDF file format to publish the works. Unfortunately the file size for adobe PDF
files is quite large since most of them are stored as images. For the average book the file size goes from 10 to 25 megabytes. This will be a problem for those with very slow connections to the
internet to be able to access this great works!
On the other hand another technology similar to Adobe PDF have emerge. It’s produced by Lizardtech and name of the file format is DJVU (Pronounced “Deja Vu”!)
Their reader can be downloaded freely from
A big collection of mathematical works using the DJVU technology is freely accessible to those able to read Russian.
One is able to find a wonderful collection of math and physics Russian books at
ИНТЕРНЕТ БИБЛИОТЕКА (Internet Library)
there you can find books like
Р.Курант, Г.Роббинс. Что такое математика?
translation to the Russian of Courant and Robbins What is Mathematics? and many more jewels for free.
The majority of the books in the collection are oriented towards elementary mathematics.
You may encounter wonderful books by Yaglom, Perelman, Kolmogorov, Lobachevsky, Euclid, and even more modern books by Prasolov on Geometry and Topology; all that is needed is the DJVU reader, a good internet connection and a bit of patience.
Below you will find a few samples of some classics available at Google Books.
Galois Completed Works at Google
Euler’s Differential Calculus
Agustin Louis Cauchy Cours D’Analyse
Joseph Louis Lagrange Traité de la résolution des équations numériques de tous les degrés.
Camille Jordan Traité des substitutions et des équations algébriques
and many more classics works by N. H. Abel, Jacobi, S. Lie, just to name a few more.
Another important source of mathematical classics is at
La bibliothèque numérique Gallica de la Bibliothèque Nationale de France .
Hope you will enjoy the free availability of very high quality mathematical material!
This is a blog posting from www.isallaboutmath.com
A while back while surfing the web I discovered Google Reader and it was nice to use Google Reader to place links to web sites and blogs I was interested in reading. Google Reader is what they call an aggregator. Once you set up a folder and add a few items to it, then by clicking on that folder you get to see a list containing all the items you have added to the folder.
The analog to this is like having an input box for reading and placing papers to read there. Some advantages of this computer box or folder is that it gets updated with new entries from the sources
you have selected.
Here is a screen shoot of what Google Reader looks like
well, all is fine with Google Reader until you decide to filter or order the information or in general display the information in any of the many different ways that it can be displayed.
Then again while searching on the web I discovered yahoo pipes
this is another free internet service but this one is by yahoo and this service allows you to graphically create a mash up feed that could contain all the things I was missing from using Google
Reader alone!
Here is screen shoot of what a yahoo pipes program looks like
So as you can see you are actually programming when you use Yahoo pipes but you are doing it with a graphical interface and using pipes to make connections.
In this particular case I wanted to create a Yahoo pipes that will contain a few feeds I was interested in reading and I wanted to sort them following the publication date.
Only takes about a minute to do such programming task with yahoo pipes!
and then I grab my pipe url
something that looks like this
and inserted that into one of my Google Reader folders and Voilà!
I am now able to sort and filter and do much more to RSS feeds!
So amazingly we can now combine the technology from two different companies
to produce with very little effort things that could make the web do the searching and retrieving of information for us!
One more thing
Your Yahoo Pipes can be shared with other people, so you can send them the Url of your pipes so others could use it! They could even clone and change it, modifying and customizing it to their own needs.
Obviously this can be applied to any other topic.
hope you enjoy reading this is a blog posting from www.isallaboutmath.com
for more info on Yahoo pipes check out Tim O’Reilly’s blog on Yahoo pipes.
This is the isallaboutmath.com blog!
|
{"url":"http://blog.isallaboutmath.com/tag/mathematics/","timestamp":"2014-04-20T15:50:22Z","content_type":null,"content_length":"88689","record_id":"<urn:uuid:905be46a-0146-4300-8704-cace174ceddf>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00001-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Solution to a Modified Binomial Probability Distribution
October 21st 2012, 01:33 PM #1
Oct 2012
Solution to a Modified Binomial Probability Distribution
I could use help determining whether the following can be solved analytically.
I have an equation $\sum_{s=c}^{Q}\frac{s-c+1}{s+1}{Q\choose s}p^{s}(1-p)^{Q-s}=\frac{a-b}{a-d}$.
Notice this equation is the binomial probability distribution from c to Q (as opposed to going from 0 to Q), and it is multiplied by $\frac{s-c+1}{s+1}$.
I understand that $\sum_{s=0}^{Q}s{Q\choose s}p^{s}(1-p)^{Q-s}$ simplifies to $pQ$. Can a similar simplification be made to the equation I've presented above? Any thoughts on how I might go about this?
Thank you for your help.
Last edited by aberrantProtagonist; October 21st 2012 at 01:40 PM.
October 21st 2012, 01:55 PM #2
$\sum _{s=c}^Q \frac{(-c+s+1) p^s \binom{Q}{s} (1-p)^{Q-s}}{s+1} =$
$=p^c \Gamma (c+2) (1-p)^{Q-c} \left(\frac{\binom{Q}{c} \, _2\tilde{F}_1\left(1,c-Q;c+2;\frac{p}{p-1}\right)}{c+1}-\frac{p \binom{Q}{c+1} \, _2\tilde{F}_1\left(2,c-Q+1;c+3;\frac{p}{p-1}\right)}
Here $_2\tilde{F}_1$ is the regularized hypergeometric function, $\left._2F_1(a,b;c;z)\right/\Gamma (c)$ (ref).
Last edited by MaxJasper; October 21st 2012 at 02:01 PM.
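(Added for illustration, not part of the thread.) For an expression like this, a direct numerical evaluation of the left-hand sum is often the quickest sanity check against any proposed closed form; the parameter values below are made up:

# Evaluate sum_{s=c}^{Q} (s-c+1)/(s+1) * C(Q,s) * p^s * (1-p)^(Q-s) directly.
from math import comb

def lhs(Q, c, p):
    return sum((s - c + 1) / (s + 1) * comb(Q, s) * p**s * (1 - p)**(Q - s)
               for s in range(c, Q + 1))

print(lhs(Q=20, c=5, p=0.3))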
|
{"url":"http://mathhelpforum.com/advanced-statistics/205823-solution-modified-binomial-probability-distribution.html","timestamp":"2014-04-16T16:11:07Z","content_type":null,"content_length":"34826","record_id":"<urn:uuid:2d65a0cd-5c7a-43c9-af0e-9bfb44069728>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00316-ip-10-147-4-33.ec2.internal.warc.gz"}
|
From The Collaborative International Dictionary of English v.0.48:
Dimension \Di*men"sion\, n. [L. dimensio, fr. dimensus, p. p. of
dimetiri to measure out; di- = dis- + metiri to measure: cf.
F. dimension. See Measure.]
1. Measure in a single line, as length, breadth, height,
thickness, or circumference; extension; measurement; --
usually, in the plural, measure in length and breadth, or
in length, breadth, and thickness; extent; size; as, the
dimensions of a room, or of a ship; the dimensions of a
farm, of a kingdom.
[1913 Webster]
Gentlemen of more than ordinary dimensions. --W.
[1913 Webster]
Space of dimension, extension that has length but no
breadth or thickness; a straight or curved line.
Space of two dimensions, extension which has length and
breadth, but no thickness; a plane or curved surface.
Space of three dimensions, extension which has length,
breadth, and thickness; a solid.
Space of four dimensions, as imaginary kind of extension,
which is assumed to have length, breadth, thickness, and
also a fourth imaginary dimension. Space of five or six,
or more dimensions is also sometimes assumed in
[1913 Webster]
2. Extent; reach; scope; importance; as, a project of large
[1913 Webster]
3. (Math.) The degree of manifoldness of a quantity; as, time
is quantity having one dimension; volume has three
dimensions, relative to extension.
[1913 Webster]
4. (Alg.) A literal factor, as numbered in characterizing a
term. The term dimensions forms with the cardinal numbers
a phrase equivalent to degree with the ordinal; thus,
a^2b^2c is a term of five dimensions, or of the fifth
[1913 Webster]
5. pl. (Phys.) The manifoldness with which the fundamental
units of time, length, and mass are involved in
determining the units of other physical quantities.
Note: Thus, since the unit of velocity varies directly as the
unit of length and inversely as the unit of time, the
dimensions of velocity are said to be length [divby]
time; the dimensions of work are mass [times]
(length)^2 [divby] (time)^2; the dimensions of
density are mass [divby] (length)^3.
Dimensional lumber, Dimension lumber, {Dimension
scantling}, or Dimension stock (Carp.), lumber for
building, etc., cut to the sizes usually in demand, or to
special sizes as ordered.
Dimension stone, stone delivered from the quarry rough, but
brought to such sizes as are requisite for cutting to
dimensions given.
[1913 Webster]
|
{"url":"http://www.crosswordpuzzlehelp.net/old/dictionary.php?q=Dimension","timestamp":"2014-04-19T17:09:13Z","content_type":null,"content_length":"8042","record_id":"<urn:uuid:2c72d6cd-c3c8-428d-b46f-07f1e511e768>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00330-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A math question
you took a spelling test with 50 questions. you got 7 wrong. what is the % you got wrong.
by on Feb. 12, 2013 at 8:42 PM
Replies (1-8):
on Feb. 12, 2013 at 9:00 PM
assuming the test is equal to 100 points every question would be 2 points. so 7 times 2 = 14, 100 - 14 = 86 if I did the math right. I can divide, multiply, do fractions and percentages but simple adding and subtracting is really hard. Sadly Hannah's the same way.
Your math to that point is correct. However, I need the answer to what % is wrong. I say if you got an 86, that's 86% correct so that means you got 14% wrong. But when I asked someone else they say the answer is 7% but can't explain why that's the answer. Unless it's because 7 is half of 14 and the test has 50 questions which is half of 100, but yes, the test is worth 100 points and each question is worth 2 points.
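(Added note: the direct computation is 7 ÷ 50 = 0.14, so 14% of the questions were wrong; 7% would only be right if the test had 100 questions.)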
Quoting lady-J-Rock:
assuming the test is equal to 100 points every question would be 2 points. so 7 times 2 = 14, 100 - 14 = 86 if I did the math right. I can divide, multiply, do fractions and percentages but simple adding and subtracting is really hard. Sadly Hannah's the same way.
14% of the questions were incorrect.
Thank you
Quoting Kmary:
14% of the questions were incorrect.
Thank you everyone for your answers! I'm the one who isn't great at math and I knew the answer immediately, I think others were over thinking it.
|
{"url":"http://www.cafemom.com/group/107955/forums/read/18066325/A_math_question?last","timestamp":"2014-04-17T09:52:45Z","content_type":null,"content_length":"67768","record_id":"<urn:uuid:5b787073-657d-4b4c-a515-68e450104695>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00340-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions
Topic: What math should I learn if computers can do it all? Mathematica,
Replies: 20 Last Post: Nov 18, 2013 2:11 PM
fom Re: What math should I learn if computers can do it all? Mathematica,
Posts: 1,969 Posted: Nov 13, 2013 11:46 PM
Registered: 12/4/12
On 11/13/2013 12:10 PM, Peter Percival wrote:
> Hetware wrote:
>> I'm in a conundrum twixt the use of computers to do my thinking for me,
>> and learning to think for myself. Should a child learn his times
>> tables, or learn to use a computer to do it for him?
> It is a well-known fact that if a child comes within a mile of a
> computer he or she will be corrupted by porn and groomed by paedophiles.
Well, there is certainly a dose of reality
in that response! It might just be off-topic,
|
{"url":"http://mathforum.org/kb/message.jspa?messageID=9324316","timestamp":"2014-04-20T19:30:46Z","content_type":null,"content_length":"41112","record_id":"<urn:uuid:00e26e9d-54e1-4211-aa2e-65275306bc47>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00258-ip-10-147-4-33.ec2.internal.warc.gz"}
|
PharmPK Discussion
PharmPK Discussion - Allometry and Dedrick Plots
PharmPK Discussion List Archive Index page
• On 16 Apr 2007 at 12:09:33, Tonika Bohnert (tonika.bohnert.aaa.biogenidec.com) sent the message
I am new to the field of allometry and am having some troubles with
one particular prediction that I am working on. For this one
compound, we have IV CL(mL/min) and Vss(L/kg) respectively as
follows: Mouse (0.0555, 0.778), Rat (0.65, 0.548), Dog (20.28, 1.1).
When I plot log Cl vs log BW, I get very nice straight lines for both
Cl and Vss with equations being : log CL = 0.9488 log BW + 0.3106 and
log Vss = 1.069 log BW - 0.837 with R-squared values of 0.999 for
both CL and Vss. However, after this for concentration projections,
when I convert the concentrations and times to plot Complex Dedrick
plots (I am using Complex Dedrick since exponent of Vss is greater
than 1) , instead of overlapping plots , I get almost 3 parallel
plots with C0 for Dog about 10-fold higher than rat and C0-rat being
10 fold higher than C0-mouse (on a log scale). When the plots are
overlapping, then concentration vs time projections calculations are
straight forward but when the plots are parallel with about 10-fold
difference in their C0 values , then how does one proceed to do these
projections? Does this mean allometry fails to predict conc vs time
(for a specified dose & BW of another species in question/
prediction). Do we need to get data from another species to make any
further predictions?
For Dedrick plots I am plotting C/(Dose/BW^d) vs Time/BW^(d-b) where
d = 1.069 and b = 0.9488 from above equations.
For another compound in the same series we got v.similar data but for
that we also had monkey data and surprisingly, the monkey and rat
plots were exactly superimposible and the dog had 10-fold higher C0
(but parallel plot) and mouse had 10-fold lower C0 (again parallel
plot). So what does this tell us?
Thanks so much for any advice/input on this. Shall be eagerly looking
forward to it !
Tonika Bohnert, Ph.D
Biogen Idec Inc.,
15 Cambridge Center
Cambridge, MA 02142
Back to the Top
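Added illustration, not part of the list discussion: a minimal sketch of the two steps described in the message above, assuming numpy and typical body weights of 0.02, 0.25 and 10 kg for mouse, rat and dog (the weights are not given in the message).

# Fit allometric exponents from the reported CL and Vss, then apply the
# complex Dedrick transform y = C/(Dose/W^d), x = t/W^(d-b).
import numpy as np

bw  = np.array([0.02, 0.25, 10.0])        # assumed body weights (kg)
cl  = np.array([0.0555, 0.65, 20.28])     # CL, mL/min
vss = np.array([0.778, 0.548, 1.1]) * bw  # Vss converted from L/kg to L

b, _ = np.polyfit(np.log10(bw), np.log10(cl), 1)    # exponent of CL (~0.95)
d, _ = np.polyfit(np.log10(bw), np.log10(vss), 1)   # exponent of Vss (~1.07)

def dedrick(conc, t, dose, w):
    return conc / (dose / w**d), t / w**(d - b)

If the transformed concentration-time profiles still sit roughly 10-fold apart, the superposition assumption behind the Dedrick approach is simply not holding for these data.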
• On 17 Apr 2007 at 08:57:44, Nick Holford (n.holford.-a-.auckland.ac.nz) sent the message
The following message was posted to: PharmPK
It is essentially impossible with such small amounts of data to
estimate allometric model exponents with any hope they might be
unbiased and precise. The theoretically correct values for the
exponent of Clearance of 0.75 and for volume is 1.0.
What are you trying to do with your allometric predictions?
Nick Holford, Dept Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New
email:n.holford.at.auckland.ac.nz tel:+64(9)373-7599x86730 fax:373-7556
Back to the Top
• On 17 Apr 2007 at 10:16:18, "Ma Guangli" (guanglima.-a-.gmail.com) sent the message
Dear Tonika,
Although there are many allmetric methods, why not to examine
the molecular properties of your compound first? The studied
molecular descriptors are simple.
Jolivette LJ, Ward KW 2005. Extrapolation of human pharmacokinetic
parameters from rat, dog, and monkey data: Molecular properties
associated with extrapolative success or failure. J Pharm Sci 94(7):
Evans CA, Jolivette LJ, Nagilla R, Ward KW 2006. Extrapolation of
preclinical pharmacokinetics and molecular feature analysis of
"discovery-like" molecules to predict human pharmacokinetics. Drug
Metab Dispos 34(7):1255-1265.
Back to the Top
Copyright 1995-2011 David W. A. Bourne (david@boomer.org)
|
{"url":"http://www.pharmpk.com/PK07/PK2007019.html","timestamp":"2014-04-20T08:15:23Z","content_type":null,"content_length":"5188","record_id":"<urn:uuid:09efe174-4027-43b3-b231-0f34e8017b53>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00425-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Trigonometry identities help
November 3rd 2012, 02:52 PM
Trigonometry identities help
Section B
Number 9) II
How do I go about answering this?
Please see attached pdf
What i did
cos 2x = cos (x+x)
cos (x+x) = cosA*CosB-SinA*SinB
cos( 60 + 90)= cos60*cos90 - sin60*cos90
answer i got was wrong
section B part III number 9
cos 2x = cos^2x - sin^2x
cos( x + x) = cosA*CosB-SinA*SinB
cos(90 + 45)= cos90*cos45-sin90*sin45
I attempted this one similarly.. wrong answer... help!
November 3rd 2012, 02:57 PM
Re: Trigonometry identities help
9. II: Note that $1 - 2 \sin^2 {75^{\circ}} = \cos(150^{\circ})$.
November 3rd 2012, 03:02 PM
Re: Trigonometry identities help
I know, sorry i made a mistake in labelling and typing , now fixed,
November 3rd 2012, 03:09 PM
Re: Trigonometry identities help
Oh okay, I was a little confused with the 45/60 thing.
Also, $\sin(60^{\circ}) = \frac{\sqrt{3}}{2}$, not $\frac{2}{\sqrt{3}}$. The sine/cosine of an angle can only be from -1 to 1 inclusive. Since $-\frac{2}{\sqrt{3}}$ is outside the set [-1,1],
this answer cannot be correct.
November 3rd 2012, 03:46 PM
Re: Trigonometry identities help
got it! thanks !
November 3rd 2012, 04:02 PM
Re: Trigonometry identities help
what about 9) III? my answer is -1/square root of 2
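(Added for illustration, not part of the thread: a quick numeric check of both identities discussed above.)

# 1 - 2 sin^2(75°) = cos(150°) = -sqrt(3)/2, and cos(135°) = -1/sqrt(2).
import math

deg = math.pi / 180
print(1 - 2 * math.sin(75 * deg)**2, math.cos(150 * deg))
print(math.cos(135 * deg), -1 / math.sqrt(2))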
|
{"url":"http://mathhelpforum.com/trigonometry/206691-trigonometry-identities-help-print.html","timestamp":"2014-04-17T03:51:06Z","content_type":null,"content_length":"6469","record_id":"<urn:uuid:2f0aecdd-ff6b-4b6e-baae-bc78f44837d3>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00633-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Computational and Theoretical Analysis of Null Space and Orthogonal Linear Discriminant Analysis
Venue: JOURNAL OF MACHINE LEARNING RESEARCH 7 (2006) 1183--1204
Citations: 15 - 5 self
@ARTICLE{Ye06computationaland,
  author  = {Jieping Ye and Tao Xiong},
  title   = {Computational and Theoretical Analysis of Null Space and Orthogonal Linear Discriminant Analysis},
  journal = {Journal of Machine Learning Research},
  year    = {2006},
  volume  = {7},
  pages   = {1183--1204}}
Dimensionality reduction is an important pre-processing step in many applications. Linear discriminant analysis (LDA) is a classical statistical approach for supervised dimensionality reduction. It
aims to maximize the ratio of the between-class distance to the within-class distance, thus maximizing the class discrimination. It has been used widely in many applications. However, the classical
LDA formulation requires the nonsingularity of the scatter matrices involved. For undersampled problems, where the data dimensionality is much larger than the sample size, all scatter matrices are
singular and classical LDA fails. Many extensions, including null space LDA (NLDA) and orthogonal LDA (OLDA), have been proposed in the past to overcome this problem. NLDA aims to maximize the
between-class distance in the null space of the within-class scatter matrix, while OLDA computes a set of orthogonal discriminant vectors via the simultaneous diagonalization of the scatter matrices.
They have been applied successfully in various applications. In this
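For orientation, a rough sketch of the classical LDA computation the abstract refers to (a generic illustration, not the NLDA or OLDA algorithms analyzed in the paper; it assumes the within-class scatter matrix is nonsingular):

import numpy as np

def lda_directions(X, y, k):
    # X: (n_samples, n_features) data matrix; y: integer class labels; returns k discriminant vectors.
    mean_all = X.mean(axis=0)
    Sw = np.zeros((X.shape[1], X.shape[1]))      # within-class scatter
    Sb = np.zeros_like(Sw)                       # between-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        d = (mc - mean_all)[:, None]
        Sb += Xc.shape[0] * (d @ d.T)
    # Maximize between-class over within-class distance: generalized eigenproblem Sb w = lambda Sw w.
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(evals.real)[::-1]
    return evecs.real[:, order[:k]]

For undersampled data the solve step above fails because the within-class scatter is singular; that is exactly the situation NLDA and OLDA are designed to handle.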
|
{"url":"http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.61.5557","timestamp":"2014-04-18T06:18:41Z","content_type":null,"content_length":"29152","record_id":"<urn:uuid:125ae03e-389c-450c-8b92-20110a4be0d2>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00086-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Find a Savage, MD Tutor
...There are some difficult questions about functions, usually linear and quadratic. There are also quite a few Geometry questions. It is important students practice so that they have the
formulas they need memorized and they have some strategies in place.
24 Subjects: including reading, calculus, geometry, GRE
...When I started working at MSSD, I had no knowledge of American Sign Language (ASL). It was a life changing experience for me in many ways. I realized that I was not only responsible for
learning the proper signs to communicate, but also the importance of sign in the culture of deaf individuals. ...
3 Subjects: including Japanese, sign language, TOEFL
...Most importantly, I stress that the student take as many full practice tests as possible within the available time before his/her chosen test date. Some parts of the practice tests are taken
during lessons. We evaluate errors and develop the best strategies for that particular student for avoiding these errors.
22 Subjects: including English, reading, writing, French
...My name is Eulalie. I have extensive experience tutoring French and teaching/tutoring English as a Second or foreign language. My students ranged from age 4-5 to over 50.
6 Subjects: including reading, English, French, writing
...I had been a tutor for dynamic meteorology, thermodynamics, and physics laboratory courses in college. Also I had been a substitute teacher for maths and science in an elementary school. My
students like me for being friendly and nice.
9 Subjects: including physics, calculus, geometry, SAT math
|
{"url":"http://www.purplemath.com/Savage_MD_tutors.php","timestamp":"2014-04-20T11:06:14Z","content_type":null,"content_length":"23278","record_id":"<urn:uuid:0e8d27ec-0aff-41c6-a46c-8582d3fb2009>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00591-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Monotonic Increase of the Ratios of Generalized Stirling Functions of the Second Kind
up vote 0 down vote favorite
My motivation to the following question stems from the discussion at Complex Zeroes of Stirling functions of the second kind about the location of the complex zeroes of Stirling functions of the
second kind, I am curious in exploring the monotonic increase of a ratio of two consecutive Stirling functions of the second kind.
As a reminder, $S_{(x,n)}=\frac{1}{n!}\sum\limits_{k=1}^{n}{n\choose k}(-1)^{n-k}k^{x}$.
I believe that $f_{n}(x)=\frac{S_{(x,n+1)}}{S_{(x,n)}}$ is an increasing function for increasing real parts of $x$. Furthermore, my motivation for this conjecture stems from the infinite series
representation $g(x)=\sum\limits_{n=1}^{\infty}S_{x,n}a_{n}$ with $a_{n}=\sum\limits_{k=1}^{n}s_{(n,k)}g(k)$ where $s_{(n,k)}$ are Stirling numbers of the first kind. There are a class of functions
that satisfy this infinite series representation that maps a sequence to a complex valued function.
In the case of $g(x)=\frac{1}{x}$ this infinite series seems to converge for $\Re{(x)}>0$. Furthermore, there is the property of this series that I can shift the series if $g(x)$ converges for $\Re
{(x)}>\Re{(c_{1})}$ and write $g(x+c_{2})$ converges for $\Re{(x)}>\Re{(c_{1}-c_{2})}$. There are some series in practice that this infinite series representation works for. It works for the Lerch
Transcendent (up to a pole), the Hurwitz Zeta Function (up to a pole), The Riemann Zeta function (up to a pole), a large class of other Dirichlet series (up to a pole), all polynomials, inverse
polynomials (up to a pole), exponential functions, and other functions. I am trying to prove that the $\mathcal{S}$-representation of function converges for $\Re{(x)}>\Re{(a)}$ where $a$ is the pole
of the function with the largest real part. My conjecture is important because it implies that if an $\mathcal{S}$-representation converges for $x=c_{1}$ it converges for $\Re{(x)}>\Re{(c_{1})}$.
There may be a range of exceptions to this conjecture. These exceptions are the exceptions I would like to know about now.
These infinite series I have derived can be shown to be equivalent in some cases to already existing formulas for these functions. For example, in the case of the Riemann zeta function, it is
possible to show by direct manipulation that the $\mathcal{S}$-representation of $\zeta(2-x)(1-x)$ is equivalent to the slower of Hasse's two globally convergent formulas for the Riemann zeta
function, which converges everywhere except at the pole $x=1$.
However, it would also be nice even if these representation only work for $x$ real.
Furthermore, I conjecture $f_{n}(x)$ is increasing in value, but decreasing in absolute value for increasing integer $n$. This makes sense because of what it says about when I try to apply the ratio
test to the $\mathcal{S}$-representation of $g(x)$.
This is the motivation behind asking this question. Except I have had a lot of trouble with finding a proof of this property of Stirling functions of the second kind. I have tried looking at
$\frac{d}{dx}f_{n}(x)=h(x)$: $h(x)>0$ iff $S_{(x,n)}\frac{d}{dx}S_{(x,n+1)}-S_{(x,n+1)}\frac{d}{dx}S_{(x,n)}>0 \;\forall x$. Then I can multiply the sum representation for $S_{(x,n)}$, $S_{(x,n+1)}$, and their derivatives. It is
easy to see that this function is eventually increasing. However, it is very difficult to show that this function is increasing for all increasing real parts of $x$.
I have also looked at work by Kilbas, Butzer, and Trujillo who define generalized Stirling functions of the second kind in a similar manner for $\Re{(x)}>0$ in order to find another representation
for this approximation:
Therefore, $f_{n}(x)=\frac{-1}{n+1}\frac{\int\limits_{0}^{\infty}(1-e^{-t})^{n+1}\frac{dt}{t^{1+x}}}{\int\limits_{0}^{\infty}(1-e^{-t})^{n}\frac{dt}{t^{1+x}}}$.
However, it also seems like a difficult task to prove that $f_{n}(x)$ is monotonically increasing for increasing real parts of $x$ in this way. I was wondering if anyone had any feedback or insight?
Thank you in advance for any help. At least in my opinion, this is a difficult question, so I very much appreciate help from anyone who can shed some light on the validity of my conjecture and if it
is wrong how wrong it is.
For now the information I am most interested in whether my conjecture holds true for real $x$. This is what has in my experience been most supported by graphical data.
In the real case, I conjecture that $f_{n}(x)$ is a strictly increasing function for all increasing real $x$.
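For what it's worth, here is a quick numerical probe of the real-x version of the conjecture; it only evaluates the finite sum on a small grid and is a sanity check, not evidence of a proof:

import math

def S(x, n):
    # Generalized Stirling function of the second kind, evaluated at real x.
    return sum(math.comb(n, k) * (-1) ** (n - k) * k ** x for k in range(1, n + 1)) / math.factorial(n)

def f(n, x):
    # Ratio of consecutive Stirling functions, f_n(x) = S_(x, n+1) / S_(x, n).
    return S(x, n + 1) / S(x, n)

n = 5
xs = [n + 0.5 + 0.25 * i for i in range(20)]          # real x just above n, where S_(x, n) is nonzero
vals = [f(n, x) for x in xs]
print(all(b > a for a, b in zip(vals, vals[1:])))     # True on this grid is consistent with the conjecture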
complex-analysis co.combinatorics
What does it mean for a complex-valued function to be "increasing"? – Noam D. Elkies Jan 5 '13 at 1:55
Sorry, let me be more specific. The complex-valued function is decreasing in absolute value. However, its real part should be negative and increasing. Furthermore, I am not sure about this, but I
think that the imaginary part is also negative and increasing. This is a strange phenomenon, but that is what I mean by a complex-valued function being "increasing" in this case: increasing in both
real and imaginary part. Hopefully this makes more sense. – Daniel Niv Jan 5 '13 at 4:07
Oh, the most important other fact is that the ratio is increasing for $n\ge x$. This is the most important fact. – Daniel Niv Jan 5 '13 at 4:09
About what I just said that the ratio is increasing for $n\ge x$. I stated that incorrectly. The ratio is increasing for all increasing real parts of $x$. However, what I mean is that the ratio is
negative, decreasing in absolute value, but increasing for $n\ge x$. The ratio is $0$ at $x=n$. Then the ratio is increasing and positive for $x>n$. In the case where the imaginary part of $x$ is
not zero, this becomes a more difficult question. However, I still hope a similar phenomenon occurs. – Daniel Niv Jan 5 '13 at 19:50
|
{"url":"http://mathoverflow.net/questions/118096/monotonic-increase-of-the-ratios-of-generalized-stirling-functions-of-the-second","timestamp":"2014-04-19T22:14:44Z","content_type":null,"content_length":"56427","record_id":"<urn:uuid:dc3c5626-6f41-40b8-90ab-f6f9d930b0b9>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00434-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mathematics of two-dimensional turbulence / Sergei Kuksin, Armen Shirikyan.
Publication date:
Cambridge, [England] ; New York : Cambridge University Press, 2012.
• Book
• xvi, 320 p. : ill. ; 24 cm.
Includes bibliographical references (p. 307-318) and index.
• Preliminaries
• Two-dimensional Navier-Stokes equations
• Uniqueness of stationary measure and mixing
• Ergodicity and limiting theorems
• Inviscid limit
• Miscellanies.
"This book is dedicated to the mathematical study of two-dimensional statistical hydrodynamics and turbulence, described by the 2D Navier-Stokes system with a random force. The authors' main goal
is to justify the statistical properties of a fluid's velocity field u(t, x) that physicists assume in their work. They rigorously prove that u(t, x) converges, as time grows, to a statistical
equilibrium, independent of initial data. They use this to study ergodic properties of u(t, x) - proving, in particular, that observables f(u(t, .)) satisfy the strong law of large numbers and
central limit theorem. They also discuss the inviscid limit when viscosity goes to zero, normalising the force so that the energy of solutions stays constant, while their Reynolds numbers grow to
infinity. They show that then the statistical equilibria converge to invariant measures of the 2D Euler equation and study these measures. The methods apply to other nonlinear PDEs perturbed by
random forces"-- Provided by publisher. "This book deals with basic problems and questions, interesting for physicists and engineers working in the theory of turbulence. Accordingly Chapters 3-5
(which form the main part of this book) end with sections, where we explain the physical relevance of the obtained results. These sections also provide brief summaries of the corresponding
chapters. In Chapters 3 and 4, our main goal is to justify, for the 2D case, the statistical properties of fluid's velocity"-- Provided by publisher.
|
{"url":"http://searchworks.stanford.edu/view/9803493","timestamp":"2014-04-16T18:57:31Z","content_type":null,"content_length":"27766","record_id":"<urn:uuid:5e691eaf-1c3c-4792-8806-4fb88e72c085>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00091-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Berkeley Lake, GA ACT Tutor
Find a Berkeley Lake, GA ACT Tutor
I have concentrated on preparing students for the ACT for the last year. I have worked with them on time management as well as test taking strategies. Student scores have improved upon their
21 Subjects: including ACT Math, calculus, geometry, algebra 1
...I have been doing private math tutoring since I was a sophomore in high school. I believe in guiding students to the answers through prompt questions. This makes sure that when the student
leaves he or she is equipped to answer the problems on their own for tests and quizzes.
9 Subjects: including ACT Math, geometry, algebra 1, algebra 2
...I have taken a few linear algebra courses at the college level (Math 250 and a proof based course, Math 350) and received A's in both courses. In addition, as a math and physics major, I have
regularly been using concepts from linear algebra in many different contexts and courses. As a math major I have taken many courses which involve logic and also courses just on mathematical
41 Subjects: including ACT Math, reading, physics, writing
...I have been involved in private and company tutoring for nearly 9 years. I worked for Kaplan Test Prep company where I taught 3+ classes of 20 students every week how to master the SAT. I have
also helped students with English as a second language perform well enough on college admissions exams...
27 Subjects: including ACT Math, reading, English, geometry
Hello. My name is Nikki. I attended Prairie View A&M University where I received my Bachelor's degree in Biology, and a minor in Chemistry.
29 Subjects: including ACT Math, chemistry, reading, physics
|
{"url":"http://www.purplemath.com/Berkeley_Lake_GA_ACT_tutors.php","timestamp":"2014-04-18T19:02:07Z","content_type":null,"content_length":"23870","record_id":"<urn:uuid:42f23ee8-1a25-467e-b263-f893859959dd>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00305-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How Far Could a Drug Cannon Shoot? | Science Blogs | WIRED
By Rhett Allain | 03.04.13 | 11:06 am
Drug cannon? What’s that? So, apparently some drug smugglers decided not to smuggle drugs but instead shoot them over the border. OK, drug smuggling is not a good thing, but this is an awesome idea.
(But don’t do it.) It works basically just like a potato gun. Compressed air is stored in a tank and then released very quickly. This air pushes a payload out the cannon and BOOM – there go the
What kind of range could you get from something like this? This is just like the Punkin Chunkin contest, but it doesn’t use pumpkins.
Before the calculations, some estimations about this drug cannon. From this Washington Post article, it says the device could be used to shoot a load of marijuana weighing up to 30 pounds. The
other thing to estimate is the diameter of the launch tube. Just a guess, but I would say it has a radius of 10 cm.
Estimated Range
Based on this design, how far could this sucker (well, blower) shoot? Let’s first consider the current technology of Punkin Chunkin pneumatic pumpkin launchers (as seen above). The first difference
between a pumpkin shooter and the drug launcher is the size. Notice how long the barrel length is on these pumpkin launchers? That’s so that the pumpkin can be accelerated by the compressed air over
a longer distance. This does two things. First, it increases the speed. (Longer is better.) Second, the longer barrel allows for a higher final speed with a lower acceleration. This is important for
pumpkins because if the acceleration is too high, they go splat. (The pressure will break them on launch.)
OK, so just for a comparison, these pumpkin shooters have a final speed around 600 mph using a 100 psi pressure tank. Could the drug shooter have values similar to this? First, on the pressure side, I
would say no. The tank in the picture just doesn't look as sturdy as the Punkin Chunkin tanks. With the shorter barrel, my rough guess is a launch speed of 300 mph instead of 600
mph. Even this is probably an overestimation. One more thing: the launch angle. It isn't exactly clear how this device was used, but from the picture I would say the launch angle would be about 25°.
What about the payload? If it is a 30-pound package of marijuana, I am going to say it is a cylinder with a 10 cm radius and a length of 40 cm. (Just guessing.)
Now, how far would it go? This isn’t such a simple question. Why? Air resistance. Let me draw a diagram showing the drugs right after they are shot from the cannon.
Here you can see that a common model for the air resistance force depends on the magnitude of the velocity of the object. This means that there is not a constant acceleration for the drugload and you
can’t just use kinematic equations to find out how far it goes. No, instead the best way to determine the range is with a numerical model. The basic idea of a numerical model is to break the motion
of the drug projectile into small time intervals. During each time interval I can assume the air resistance force is constant, which means I can use intro-level physics to determine the motion – but
just during this short time interval. Next I have to update all my values and then do it again for the next time interval. Of course this process is quite tedious. So tedious that I will make a
computer do it.
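A minimal sketch of this kind of time-stepping calculation is below; the drag coefficient and air density are my own assumed values, not numbers taken from the post:

import math

m = 13.6               # ~30 lb package, in kg
r = 0.10               # package radius, m
A = math.pi * r**2     # cross-sectional area, m^2
C, rho, g = 0.5, 1.2, 9.8    # assumed drag coefficient, air density (kg/m^3), gravity (m/s^2)
dt = 0.001             # time step, s

v0, angle = 134.0, math.radians(25)          # 300 mph launch at 25 degrees
x, y = 0.0, 0.0
vx, vy = v0 * math.cos(angle), v0 * math.sin(angle)

while y >= 0.0:
    v = math.hypot(vx, vy)
    Fd = 0.5 * rho * A * C * v**2            # drag magnitude, directed opposite the velocity
    ax = -Fd * vx / (v * m)
    ay = -g - Fd * vy / (v * m)
    vx, vy = vx + ax * dt, vy + ay * dt
    x, y = x + vx * dt, y + vy * dt

print(f"range with these assumptions: about {x:.0f} m")

Different assumed drag coefficients will shift the computed range, which is why this is only a sketch of the method rather than a reproduction of the plots.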
Here is a plot of a drug package shot at 300 mph (134 m/s) at a 25° angle.
So it goes about 700 meters. Not too bad, but not too good either. That is less than half a mile. That would mean the launcher would have to get less than a quarter mile from the border to shoot the
drugs a quarter mile into the USA. It seems like you could spot this pretty easily.
A Better Range
What if these drug smugglers went for a bigger design? What would happen as they increased the launch speed of the drug package? First, let me show one thing. What happens as you change the launch
angle? If there is no air resistance, then a launch angle of 45° gives the best range. This is not true when you include air resistance. Here is a plot showing the drugs shot at 300 mph at 3
different angles.
You can see that the package would actually go farther at an angle of 35° than at 45°. This best angle depends on the launch speed.
I don’t want to change the angle – just because I don’t want to help out the drug smugglers too much. But let’s say they keep a 25° launch angle and increase the speed. What happens to the range?
Here is a plot of range versus launch speed.
Even with a launch speed of 1000 m/s (2,200 mph), the drugs only go 1.5 miles. Yes, you might be able to get a better range with a different angle. The point is that you can't just keep increasing the
speed and expect a much better range.
|
{"url":"http://www.wired.com/2013/03/how-far-could-a-drug-cannon-shoot/","timestamp":"2014-04-20T14:11:13Z","content_type":null,"content_length":"106405","record_id":"<urn:uuid:7c86cc9c-ec6a-4a73-887e-3043b587080a>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00280-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Department of Mathematics and Statistics
Mission of the Department of Mathematics and Statistics
The mission of the Department of Mathematics and Statistics is to provide students with a solid foundation in the theory and applications of mathematics and statistics and to produce mathematically
intuitive majors with expertise in problem-solving and numerical and geometric reasoning who are prepared to pursue graduate degrees or to successfully compete in the job market.
About the Department
The Department of Mathematics and Statistics at the University of Central Oklahoma is one of seven departments in the College of Mathematics and Science. The faculty represents a broad array of
scholastic interests, including algebraic topology, cryptography, history of mathematics, differential equations, numerical analysis, mathematical biology, complex analysis, biostatistics,
mathematics education, matrix theory, mathematical physics, dynamical systems and geometry. The members of the faculty are committed to excellence in teaching and pride themselves on their
collaborative endeavors with students to learn and conduct research.
Check current and upcoming courses at the registrar.
|
{"url":"http://www.math.uco.edu/","timestamp":"2014-04-20T13:18:56Z","content_type":null,"content_length":"14234","record_id":"<urn:uuid:b99f7bc7-164b-4288-8d98-82330dbd5991>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00348-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Graphing Functions
As we know, in a function each input value is linked with exactly one corresponding output value. The general concept of a graph comes from the graph of a relation. A function is recognized by its graph, but the two are not the same thing, because two functions with different co-domains can have the same graph. To test whether a graph represents a function we use the vertical line test, and to test whether a function is one-to-one we use the horizontal line test. If a function has an inverse, the graph of the inverse can be obtained by reflecting the graph of the function across the line v = u (here 'u' is along the x-axis and 'v' is along the y-axis). A curve is the graph of a one-to-one function if and only if it passes both the vertical and horizontal line tests. Now we will try to understand graphing and functions with the help of an example.
Suppose we have the function y = x – x². We can plot its graph as shown below.
We need to follow some steps for graphing functions:
Step 1: Our function here is y = x – x^2.
Step 2: We choose different values of the x-coordinate and compute the corresponding y-coordinates.
If we take the x-coordinate to be 0, the y-coordinate is 0. In the same way, if we take x-coordinates of -2, -1, 1, 2, 3, we get y-coordinates of -6, -2, 0, -2, -6 respectively.
│ X │ -2 │ -1 │ 0 │ 1 │ 2 │ 3 │
│ Y │ -6 │ -2 │ 0 │ 0 │ -2 │ -6 │
Step 3: Now plot these coordinates on the graph; the graph of the given function is shown below.
That is the basic procedure for graphing a function.
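The same tabulate-and-plot procedure can be done programmatically; here is a small matplotlib sketch (the choice of plotting library is arbitrary):

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-2, 3, 200)
y = x - x**2                                                # the function from the example

plt.plot(x, y)                                              # smooth curve
plt.scatter([-2, -1, 0, 1, 2, 3], [-6, -2, 0, 0, -2, -6])   # the tabulated points
plt.axhline(0, color="gray", linewidth=0.5)
plt.axvline(0, color="gray", linewidth=0.5)
plt.xlabel("x")
plt.ylabel("y")
plt.show()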
Graphing can be defined as the process of plotting functions on a graph. Abstract equations, functions, and expressions become much clearer when projected onto an x-y coordinate plane, also known as a Cartesian coordinate system. As the mathematics gets deeper, the method for determining the x- and y-coordinates of an equation becomes more involved.
The first step in graphing any mathematical object is to draw the x- and y-axes of a coordinate (rectangular) plane. The plane looks like a big plus sign, with numbers marked on the axes at regular intervals and arrowheads at the ends of the lines. The horizontal line is the x-axis, on which the x-coordinates are plotted, and the vertical line is the y-axis, for the y-coordinates. The point where the two axes cross is called the origin; it is the fixed reference point.
In the coordinate system, every point is represented by two numbers written in parentheses and separated by a comma: (x, y), where x is the coordinate measured along the x-axis and y is the coordinate measured along the y-axis. Since these numbers are just measures, we count units along the x-axis (right or left) and along the y-axis (up or down), starting from the origin (0, 0). For example, to graph the point (5, 12), count five units to the right of the origin, then move 12 units up and mark the point.
Sometimes you need to generate coordinates using algebra. For instance, given the formula y = 2x + 4, pick values for x, say 0, 1, -2 and 3, and substitute them into the formula to get 4, 6, 0 and 10 respectively, giving the coordinates (0, 4), (1, 6), (-2, 0) and (3, 10). Plot these points and draw a line through them to graph the expression.
An equation represents a line if the highest power of the variable it contains is 1.
The most general form of a linear equation representing a line is y = mx + c. Remember to rearrange the equation into this form if it is given differently. Graphing a line is an easy task if you work through the following steps:
In the equation above, c is the y-intercept. Start graphing by drawing a dot on the y-axis at a distance y = c from the origin; this is the point where the finished graph will cross the y-axis. Next, pick any value of x other than 0 and mark it on the horizontal x-axis you have drawn. Draw a vertical line through this point on the x-axis to help with plotting. Now substitute this value of x into the equation y = mx + c and solve for y. Once you have the resulting y-value, mark it on the y-axis and draw a horizontal line through it.
Put a dot where the two lines (the vertical and the horizontal one) intersect. From this point draw a straight line to the y-intercept. Extending this line in both directions towards the edges of the plot gives the graph of the given equation.
A few points are worth remembering: the value of m in y = mx + c is the slope of the graph, and the slope can be calculated as the ratio of the vertical change to the horizontal change between two points on the graph.
For example, the graph of the equation y = 4x + 10 has a slope of 4 and crosses the y-axis at the intercept y = 10.
A circle is a conic section whose two axes (major and minor) are equal in length. The key features of a circle are its radius (half of the diameter), its circumference, its chords (of which the diameter is the longest), its center, arcs, sectors, and so on. In other words, a circle can be defined as the set of all points that are equidistant from a center point. A circle can also be obtained as the intersection of a plane and a cone. Graphing circles means graphing the equations of circles. The general form of a circle equation is:
(x – h)² + (y – k)² = r²     ..... (equation 1)
Here h and k are constants and r is the radius of the circle; the point (h, k) is the center of the circle. For the origin (0, 0) to be the center, h and k must both equal zero, giving the simple equation of a circle:
(x – 0)² + (y – 0)² = r², that is, x² + y² = r².
To graph equation 1, first plot the center (h, k). Next mark points r units away from the center, for example by substituting values of x and solving for the corresponding values of y. You can plot a few of them and then connect the dots in a circular shape using a compass.
As an example, take two circles whose equations are:
x² + y² = 25 and (x – 4)² + (y – 4)² = 25.
The first circle has its center at (0, 0) while the second has its center at (4, 4). The radii of both circles are equal. These circles can be graphed as shown in the figure:
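A short sketch of how these two circles could be drawn programmatically, again using matplotlib for illustration:

import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 2 * np.pi, 400)
for h, k, r in [(0, 0, 5), (4, 4, 5)]:                 # centers (0, 0) and (4, 4), radius 5
    plt.plot(h + r * np.cos(t), k + r * np.sin(t), label=f"center ({h}, {k})")
plt.gca().set_aspect("equal")                          # equal scaling so the circles look round
plt.legend()
plt.show()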
The definition of a function in mathematics is a representation of a relationship between a set of variables and constants. Functions are used to solve equations for unknown variables. A function has exactly one output value for each input given to it. A function in its simplest form can be written as y = x: for any value of x we insert into the function, y takes the same value, so y can be said to depend on x. More complex functions involve mathematical operations applied to x to determine the final value of y. For example, take y = x² + 5x + 6. The expression in x can be factored as (x + 2)(x + 3); setting y = 0 then gives the two values x = –2 and x = –3. In a function the values may change, but the relationship between the variables remains constant.
A well-known way to write a function is f(x). Functions are usually written with f(x) in place of y. For example, f(x) = 4x; in this notation, the function of x equals four times the value of x, so for x = 2 the value of f(x) is 8.
Evaluating a function means computing its value for a given input. For each input value of the variable x there can be only one output of the function.
For example, for the function f(x) = 20x, the inputs may be given as x = 2, x = 3 and x = 5, and the corresponding outputs are:
x = 2, f(x) = 40,
x = 3, f(x) = 60,
x = 5, f(x) = 100.
Functions are important in many fields, such as mathematics, physics and the other sciences.
In algebraic calculations you often deal with systems of linear equations, which can also be related to combining two functions.
Combining functions can mean several possible operations: adding two functions together, subtracting one function from the other, multiplying one function by the other, dividing one function by the other, or substituting one function into the other.
To combine functions, arrange them so that like terms appear in the same order. For instance, consider the two functions
F(x) = x² – 4x + 5 and G(x) = 4x + 4 – y,
where y denotes the function G(x) itself. Rearranging the second equation by adding y to both sides gives 2G(x) = 4x + 4, and dividing both sides by 2 gives G(x) = 2x + 2.
Now let us perform the combining operations on F(x) and G(x). First we do addition, where the like terms of G(x) are added to those of F(x): we add 2x to –4x to obtain –2x, and we add 2 to +5 to obtain 7. Thus,
F(x) + G(x) = x² – 2x + 7
F(x) – G(x) = x² – 4x + 5 – (2x + 2) = x² – 6x + 3
F(x) · G(x) = (x² – 4x + 5)(2x + 2) = 2x³ – 8x² + 10x + 2x² – 8x + 10 = 2x³ – 6x² + 2x + 10
F(G(x)) = (2x + 2)² – 4(2x + 2) + 5 = 4x² + 4 + 8x – 8x – 8 + 5 = 4x² + 1
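These combinations can be checked with a computer algebra system; a quick sympy sketch for verification:

from sympy import symbols, expand, simplify

x = symbols("x")
F = x**2 - 4*x + 5
G = 2*x + 2

print(expand(F + G))          # x**2 - 2*x + 7
print(expand(F - G))          # x**2 - 6*x + 3
print(expand(F * G))          # 2*x**3 - 6*x**2 + 2*x + 10
print(simplify(F / G))        # a rational expression; it does not reduce to a polynomial
print(expand(F.subs(x, G)))   # 4*x**2 + 1, the composition F(G(x))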
In a function, each input value is directly linked with one corresponding output value. For example, if we have a relation defined by the function rule f(s) = s², then it maps each real input s to its square. If we put in the input value –8, we get the output 64; in function notation, f(–8) = 64. The graph of a real-valued function is commonly identified with the function itself, although this identification does not apply to more general functions.
As another example, suppose we have a function defined on the inputs 1, 2, 3 by
F(1) = p, F(2) = q, F(3) = r.
Graphing this function gives the points (1, p), (2, q), (3, r). We can also write a function in cubic form, for example
F(p) = p³ – 9p;
on plotting the graph of this function, we get the curve shown below:
This is how we plot functions on a graph.
|
{"url":"http://math.tutorcircle.com/algebra/graphing-and-functions.html","timestamp":"2014-04-16T20:12:55Z","content_type":null,"content_length":"56973","record_id":"<urn:uuid:d1fff0cd-79fe-4667-a444-87d2e6c826d0>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00579-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Calculating SNR of an acoustic signal
Calculating SNR of an acoustic signal - stojakapimp - 2007-02-14 15:50:00
I'm wondering what a good method is to calculate the SNR of a signal. For
example, I'm emitting a 60-130 kHz 50 ms chirp in a tank full of water and
recording the sound waves with different hydrophones. I'm curious to see
how changing the chirp length affects the SNR, but I'm not real clear on
how to exactly calculate the SNR.
I know that SNR is the ratio of signal to noise power levels, so does that
mean that in my received signal I can plot the power density spectrum and
then subtract the average noise-floor power from the peak power? To be more
precise, I'm using the pwelch command in Matlab to plot the power spectral
density of the received signal, which has units of dB/Hz, but I wasn't sure
if I can use that to calculate the SNR.
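One hedged way to do what is being asked, sketched in Python with scipy's Welch PSD estimate rather than Matlab's pwelch (the sample rate, band and placeholder data below are assumptions, not values from this setup):

import numpy as np
from scipy.signal import welch

fs = 500_000                                  # assumed sample rate, Hz
band = (60e3, 130e3)                          # the chirp band

def band_power(x):
    f, pxx = welch(x, fs=fs, nperseg=4096)    # PSD in units^2/Hz
    mask = (f >= band[0]) & (f <= band[1])
    return np.sum(pxx[mask]) * (f[1] - f[0])  # integrate the PSD over the band -> power

# noise_only and signal_plus_noise stand in for hydrophone recordings (1-D arrays).
t = np.arange(50_000) / fs
noise_only = np.random.randn(t.size)
signal_plus_noise = noise_only + 0.5 * np.sin(2 * np.pi * 100e3 * t)

p_noise = band_power(noise_only)
p_signal = band_power(signal_plus_noise) - p_noise     # remove the noise contribution
snr_db = 10 * np.log10(max(p_signal, 1e-30) / p_noise)
print(f"estimated in-band SNR = {snr_db:.1f} dB")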
Re: Calculating SNR of an acoustic signal - Rune Allnor - 2007-02-14 16:40:00
On 14 Feb, 21:50, "stojakapimp" <jrenf...@gmail.com>
> I'm wondering what a good method is to calculate the SNR of a signal. For
> example, I'm emitting a 60-130 kHz 50 ms chirp in a tank full of water and
> recording the sound waves with different hydrophones. I'm curious to see
> how changing the chirp length affects the SNR, but I'm not real clear on
> how to exactly calculate the SNR.
Changing the chirp length does NOT change the SNR, assuming that
the chirp length is the only system parameter you change.
> I know that SNR is the ratio of signal to noise power levels, so does that
> mean that in my received signal I can plot the power density spectrum and
> then subtract the average low power by the peak power? To be more
> precise, I'm using the pwelch command in Matlab to plot the power density
> spectrum for the received signal which has units of dB/frequency, but
> wasn't sure on if I can do that to calculate SNR.
Estimating the SNR is not necessarily easy. If you can, measure the PSD
of the background noise. Then, if signal parameters permit, measure the
PSD of the signal and noise together. From that, compute the SNR.
This poses two problems. The first is to measure the background noise.
Most active sonar systems sync the receiver with the transmitter, making
it all but impossible to make a measurement without sending a pulse.
The second is that most sonar pulses are transients, not stationary,
meaning that it is very difficult to measure the *power* of the signal.
Re: Calculating SNR of an acoustic signal - stojakapimp - 2007-02-14 17:50:00
Well there are two parameters that I set...the chirp length of the signal,
and the amount of time that I want to record. So if my chirp length is
50ms, then I usually record for about 100ms in order to obtain the chirp +
its reverberation. So you're saying that increasing the chirp length will
not increase the SNR?
Also, for these measurements, I am emitting with one transducer and
receiving with 3 hydrophones. So yes, I can record sound in the tank
without having to emit a signal. Are you recommending that I make
measurements in the tank without producing a sound in order to obtain the
noise signal levels? Then make measurement with a ping and then compare
that to the noise levels?
>On 14 Feb, 21:50, "stojakapimp" <jrenf...@gmail.com> wrote:
>> I'm wondering what a good method is to calculate the SNR of a signal.
>> example, I'm emitting a 60-130 kHz 50 ms chirp in a tank full of water
>> recording the sound waves with different hydrophones. I'm curious to
>> how changing the chirp length affects the SNR, but I'm not real clear
>> how to exactly calculate the SNR.
>Changing the chirp length does NOT change the SNR, assuming that
>the chirp length is the only system parameter you change.
>> I know that SNR is the ratio of signal to noise power levels, so does
>> mean that in my received signal I can plot the power density spectrum
>> then subtract the average low power by the peak power? To be more
>> precise, I'm using the pwelch command in Matlab to plot the power
>> spectrum for the received signal which has units of dB/frequency, but
>> wasn't sure on if I can do that to calculate SNR.
>Estimating the SNR is not necessarily easy. If you can, measure the
>of the background noise. Then, if signal parameters permit, measure
>PSD of the signal and noise together. From that, comute the SNR.
>This poses two problems. The first is to measure the background
>Most active sonar systems sync the receiver with the transmitter,
>it all but impossible to make a measurement without sending a pulse.
>The second is that most sonar pulses are transients, not stationary,
>meaning that it is very difficult to make the *power* of the signal.
Re: Calculating SNR of an acoustic signal - Fred Marshall - 2007-02-16 19:31:00
"stojakapimp" <j...@gmail.com> wrote in message
> Well there are two parameters that I set...the chirp length of the signal,
> and the amount of time that I want to record. So if my chirp length is
> 50ms, then I usually record for about 100ms in order to obtain the chirp +
> its reverberation. So you're saying that increasing the chirp length will
> not increase the SNR?
> Also, for these measurements, I am emitting with one transducer and
> receiving with 3 hydrophones. So yes, I can record sound in the tank
> without having to emit a signal. Are you recommending that I make
> measurements in the tank without producing a sound in order to obtain the
> noise signal levels? Then make measurement with a ping and then compare
> that to the noise levels?
> Thanks!
It's a lot more complicated than that because active sonar has more than one
"noise" to deal with, and you have to be particular about what you're talking
about and when.
about and when.
Sometimes, usually long after the transmit pulse, the receiver noise floor
will be the ambient noise - whatever that is.... affected by flow noise if
there's platform motion or strong current and, best case, limited by the
receiver electronics.
The rest of the time the receiver noise floor will be due to reverberation.
Then, you need to consider *where* in the receiver you want to measure SNR.
You might measure it in a broadband sense over the entire receiver
passband - and get the worst possible SNR. Or, you might measure it at the
output of a matched filter or bank of filters, or ...... A simple example
is a tone pulse of bandwidth "W" and measuring SNR with bandwidth W,
2W, 4W, etc.: as the receiver or filter bandwidth gets wider, the SNR
gets 3 dB worse with each doubling for white noise.
Since you're using a chirp then I assume there's a matched filter. So, that
helps decide part of the answer above.
The tank has to be large compared to the extent of the pulse in water and
you'll probably have to do something to observe the signal *before* boundary
reverberation messes everything up at the receiver. This is common practice.
One approach might be to ping away with no object of interest present and
characterize the "noise" coming out of the receiver as a function of
Then, introduce an object of interest and compare its signal level to the
noise you've characterized at the same range - assuming there is enough SNR
for detection and measurement!
Another similar but real time approach might be to measure the "noise"
times just before and just after the echo in order to make the comparison.
A simple plot of signal plus noise with time at the receiver output will
usually allow this to be done. So, your 100msec record needs to be around
the echo time *and* the echoing object of interest has to be of very limited
physical extent in range. Otherwise the echo may be longer than 100msec. A
sphere that's not *too* big relative to the wavelength would be good. You
could look up radar cross section of a sphere vs. size and wavelength to
find the point where cross section levels off (or sonar target size ... same idea).
Re: Calculating SNR of an acoustic signal - Rune Allnor - 2007-02-17 03:08:00
On 14 Feb, 23:50, "stojakapimp" <jrenf...@gmail.com>
> Well there are two parameters that I set...the chirp length of the signal,
> and the amount of time that I want to record. So if my chirp length is
> 50ms, then I usually record for about 100ms in order to obtain the chirp +
> its reverberation. So you're saying that increasing the chirp length will
> not increase the SNR?
Keeping the power constant, increasing transmit time will increase
the energy of the signal. SNR is expressed in terms of power.
> Also, for these measurements, I am emitting with one transducer and
> receiving with 3 hydrophones. So yes, I can record sound in the tank
> without having to emit a signal. Are you recommending that I make
> measurements in the tank without producing a sound in order to obtain the
> noise signal levels? Then make measurement with a ping and then compare
> that to the noise levels?
As Fred said, pay attention to propagation paths. 50 ms pings correspond
to some 75 m of propagation. Quite a bit larger than your tank, right?
I would be surprised if you measure a very big difference when setting
100 ms lengths; you are probably only measuring reverberation inside the
tank.
Re: Calculating SNR of an acoustic signal - John Herman - 2007-03-18 18:54:00
The SNR is the ratio of signal energy to noise power for matched filters.
This means that when you double the length of the pulse, you double the energy
and increase the SNR by 3 dB.
Rune is right. You cannot do what you want to do in that tank unless you
shorten the pulses to a few (two or three) milliseconds in length.
In article <1...@p10g2000cwp.googlegroups.com>, "Rune Allnor" <a...@tele.ntnu.no> wrote:
>On 14 Feb, 23:50, "stojakapimp" <jrenf...@gmail.com> wrote:
>> Well there are two parameters that I set...the chirp length of the
>> and the amount of time that I want to record. So if my chirp length
>> 50ms, then I usually record for about 100ms in order to obtain the
chirp +
>> its reverberation. So you're saying that increasing the chirp length
>> not increase the SNR?
>Keeping the power constant, increasing transmit time will increase
>the energy of the signal. SNR is expressed in terms of power.
>> Also, for these measurements, I am emitting with one transducer and
>> receiving with 3 hydrophones. So yes, I can record sound in the tank
>> without having to emit a signal. Are you recommending that I make
>> measurements in the tank without producing a sound in order to obtain
>> noise signal levels? Then make measurement with a ping and then
>> that to the noise levels?
>As Fred said, pay attention to propagation paths. 50 ms pings
>to some 75 m of propagation. Quite a bit larger than your tank, right?
>I would
>be surprised if you measure a very big difference when setting 100ms
>lengths; you are probably only measuring reverberation inside the
Re: Calculating SNR of an acoustic signal - Jerry Avins - 2007-03-18 21:28:00
John Herman wrote:
> The SNR is the ratio of signal energy to noise power for matched filters.
> This means that when you double the leagth of the pulse, you double the
> and increase the SNR by 3 dB.
I'm puzzled here. How can a ratio of energy to power be dimensionless?
Engineering is the art of making what you want from things you can get.
Re: Calculating SNR of an acoustic signal - Rune Allnor - 2007-03-19 03:08:00
On 19 Mar, 02:28, Jerry Avins <j...@ieee.org> wrote:
> John Herman wrote:
> > The SNR is the ratio of signal energy to noise power for matched
> > This means that when you double the leagth of the pulse, you double
the energy
> > and increase the SNR by 3 dB.
> ...
> I'm puzzled here. How can a ratio of energy to power be dimensionless?
It can't. But then, SNR is a measure of power ratios, which
makes little sense for transient signals.
The way I understand it, SNR for transients is computed as
some RMS power value averaged over some "typical" time
frame of the transient. For a transient with constant
amplitude, then, the SNR can be computed from the peak
power, the SNR being valid "for as long as the pulse lasts".
Since the matched filter works with energy, pulse duration
influences the processing gain of the matched filter.
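A small sketch illustrating the energy-versus-power point made in this thread: for a matched filter in white noise, the output SNR scales with pulse energy, so doubling the chirp length at the same amplitude buys about 3 dB (the sample rate, chirp parameters and noise level below are assumptions, not values from the thread):

import numpy as np

fs = 400_000                                     # assumed sample rate, Hz
rng = np.random.default_rng(1)

def output_snr_db(T, f0=60e3, f1=130e3, sigma=1.0):
    t = np.arange(int(T * fs)) / fs
    s = np.sin(2 * np.pi * (f0 * t + 0.5 * (f1 - f0) / T * t**2))   # linear chirp replica
    x = s + rng.normal(0.0, sigma, size=s.size)                     # received pulse in white noise
    peak = np.dot(x, s)                          # matched-filter output at the true lag
    noise_var = sigma**2 * np.dot(s, s)          # variance of the noise term at the output
    return 10 * np.log10(peak**2 / noise_var)

for T in (0.025, 0.05):                          # 25 ms vs 50 ms pulse, same amplitude
    print(f"T = {T*1e3:.0f} ms: matched-filter output SNR = {output_snr_db(T):.1f} dB")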
|
{"url":"http://www.dsprelated.com/showmessage/72272/1.php","timestamp":"2014-04-18T15:38:28Z","content_type":null,"content_length":"38069","record_id":"<urn:uuid:38e9377b-1780-4f5d-9d44-0edf22cc2c9a>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00455-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Monday 12-06-10 – The Museum of Mathematics
Math Monday: Bagel Cutting Revisited
DECEMBER 6, 2010
by George Hart
It has now been one year since I started writing these Math Monday columns on MAKE for The Museum of Mathematics. Thinking back to the first column on cutting linked bagel halves, I thought an appropriate anniversary column would show yet another interesting way to cut a bagel. Slicing on a slanted plane which is tangent to the surface at two places reveals a geometric surprise:
The planar cross section is two overlapping circles called Villarceau circles after the French mathematician, Yvon Villarceau, who wrote about them in the mid 1800s.
I’ve indicated them here with colored markers, but you can see they are not perfectly round on a real bagel, because of its flat bottom and other irregularities. On an ideal torus, this slanted slice
gives two perfect overlapping circles.
The proper position and slant of the slice will depend on the size of the bagel's hole. As the above side-view shows, the slicing plane (red) must be chosen so it is tangent to the bagel at two points.
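A quick numerical check of the ideal-torus statement (not from the article; the torus radii below are arbitrary): the points of one Villarceau circle satisfy both the torus equation and the doubly tangent plane.

import numpy as np

R, r = 3.0, 1.0                                  # assumed major and minor radii, R > r
t = np.linspace(0.0, 2.0 * np.pi, 1000)

# One Villarceau circle: radius R, center (0, r, 0), lying in the doubly tangent plane.
x = np.sqrt(R**2 - r**2) * np.cos(t)
y = r + R * np.sin(t)
z = r * np.cos(t)

torus_residual = (np.sqrt(x**2 + y**2) - R)**2 + z**2 - r**2     # torus: (sqrt(x^2+y^2)-R)^2 + z^2 = r^2
plane_residual = z * np.sqrt(R**2 - r**2) - r * x                # the slanted plane tangent to the tube at two points
print(np.allclose(torus_residual, 0.0), np.allclose(plane_residual, 0.0))   # True True
# The second circle is the mirror image under y -> -y, centered at (0, -r, 0).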
This article first appeared on Make: Online, December 6, 2010.
Return to Math Monday Archive.
|
{"url":"http://momath.org/home/math-monday-12-06-10/","timestamp":"2014-04-19T22:13:53Z","content_type":null,"content_length":"12787","record_id":"<urn:uuid:38fea52f-b4fd-437c-bc5f-bee9bf53b54d>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00043-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Augment or Push: A Computational Study of Bipartite Matching and Unit-Capacity Flow Algorithms
Results 1 - 10 of 22
- J. Supercomputing , 2002
"... A phylogeny is the evolutionary history of a group of organisms; systematists (and other biologists) attempt to reconstruct this history from various forms of data about contemporary organisms.
Phylogeny reconstruction is a crucial step in the understanding of evolution as well as an important tool ..."
Cited by 21 (7 self)
A phylogeny is the evolutionary history of a group of organisms; systematists (and other biologists) attempt to reconstruct this history from various forms of data about contemporary organisms.
Phylogeny reconstruction is a crucial step in the understanding of evolution as well as an important tool in biological, pharmaceutical, and medical research. Phylogeny reconstruction from molecular
data is very difficult: almost all optimization models give rise to NP-hard (and thus computationally intractable) problems. Yet approximations must be of very high quality in order to avoid outright
biological nonsense. Thus many biologists have been willing to run farms of processors for many months in order to analyze just one dataset. High-performance algorithm engineering offers a battery of
tools that can reduce, sometimes spectacularly, the running time of existing phylogenetic algorithms, as well as help designers produce better algorithms. We present an overview of algorithm
engineering techniques, illustrating them with an application to the "breakpoint analysis" method of Sankoff et al., which resulted in the GRAPPA software suite. GRAPPA demonstrated a speedup in
running time by over eight orders of magnitude over the original implementation on a variety of real and simulated datasets. We show how these algorithmic engineering techniques are directly
applicable to a large variety of challenging combinatorial problems in computational biology.
- In Proc. 8th WADS , 2003
"... We consider the problem of fairly matching the left-hand vertices of a bipartite graph to the right-hand vertices. We refer to this problem as the optimal semimatching problem; it is a
relaxation of the known bipartite matching problem. We present a way to evaluate the quality of a given semi-matchi ..."
Cited by 13 (0 self)
We consider the problem of fairly matching the left-hand vertices of a bipartite graph to the right-hand vertices. We refer to this problem as the optimal semimatching problem; it is a relaxation of
the known bipartite matching problem. We present a way to evaluate the quality of a given semi-matching and show that, under this measure, an optimal semi-matching balances the load on the right hand
vertices with respect to any Lp-norm. In particular, when modeling a job assignment system, an optimal semi-matching achieves the minimal makespan and the minimal flow time for the system. The
problem of finding optimal semi-matchings is a special case of certain scheduling problems for which known solutions exist. However, these known solutions are based on general network optimization
algorithms, and are not the most efficient way to solve the optimal semi-matching problem. To compute optimal semi-matchings efficiently, we present and analyze two new algorithms. The first
algorithm generalizes the Hungarian method for computing maximum bipartite matchings, while the second, more efficient algorithm is based on a new notion of cost-reducing paths. Our experimental
results demonstrate that the second algorithm is vastly superior to using known network optimization algorithms to solve the optimal semi-matching problem. Furthermore, this same algorithm can also
be used to find maximum bipartite matchings and is shown to be roughly as efficient as the best known algorithms for this goal. Key words: bipartite graphs, load-balancing, matching algorithms,
optimal algorithms, semi-matching
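As background for the augmenting-path style algorithms discussed in these abstracts, here is a minimal sketch of the classical augmenting-path maximum bipartite matching (a generic illustration, not the semi-matching algorithm of the paper):

def max_bipartite_matching(adj, n_left, n_right):
    # adj[u] lists the right-side vertices adjacent to left-side vertex u.
    match_right = [-1] * n_right                 # right vertex -> matched left vertex, or -1

    def try_augment(u, seen):
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                # v is free, or its current partner can be re-matched along another path
                if match_right[v] == -1 or try_augment(match_right[v], seen):
                    match_right[v] = u
                    return True
        return False

    return sum(try_augment(u, [False] * n_right) for u in range(n_left))

# Tiny example: three left vertices, two right vertices; the maximum matching has size 2.
print(max_bipartite_matching({0: [0], 1: [0, 1], 2: [1]}, 3, 2))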
- PROC. 2ND WORKSHOP ON ALGORITHM ENG. WAE 98, MAX-PLANCK INST. FÜR INFORMATIK, 1998, IN TR MPI-I-98-1-019 , 1998
"... We study the eects of caches on basic graph and hashing algorithms and show how cache effects inuence the best solutions to these problems. We study the performance of basic data structures for
storing lists of values and use these results to design and evaluate algorithms for hashing, Breadth-Fi ..."
Cited by 12 (0 self)
We study the effects of caches on basic graph and hashing algorithms and show how cache effects influence the best solutions to these problems. We study the performance of basic data structures for
storing lists of values and use these results to design and evaluate algorithms for hashing, Breadth-First-Search (BFS) and Depth-First-Search (DFS). For the basic
- J. Univ. Comput. Sci , 2001
"... The last twenty years have seen enormous progress in the design of algorithms, but little of it has been put into practice. Because many recently developed algorithms are hard to characterize
theoretically and have large running-time coefficients, the gap between theory and practice has widened over ..."
Cited by 9 (4 self)
The last twenty years have seen enormous progress in the design of algorithms, but little of it has been put into practice. Because many recently developed algorithms are hard to characterize
theoretically and have large running-time coefficients, the gap between theory and practice has widened over these years. Experimentation is indispensable in the assessment of heuristics for hard
problems, in the characterization of asymptotic behavior of complex algorithms, and in the comparison of competing designs for tractable problems. Implementation, although perhaps not rigorous
experimentation, was characteristic of early work in algorithms and data structures. Donald Knuth has throughout insisted on testing every algorithm and conducting analyses that can predict behavior
on actual data; more recently, Jon Bentley has vividly illustrated the difficulty of implementation and the value of testing. Numerical analysts have long understood the need for standardized test
suites to ensure robustness, precision and efficiency of numerical libraries. It is only recently, however, that the algorithms community has shown signs of returning to implementation and testing as
an integral part of algorithm development. The emerging disciplines of experimental algorithmics and algorithm engineering have revived and are extending many of the approaches used by computing
pioneers such as Floyd and Knuth and are placing on a formal basis many of Bentley's observations. We reflect on these issues, looking back at the last thirty years of algorithm development and
forward to new challenges: designing cache-aware algorithms, algorithms for mixed models of computation, algorithms for external memory, and algorithms for scientific research.
- Proceedings of WWW , 2002
"... A large fraction of the useful web comprises of specification documents that largely consist of hattribute name, numeric valuei pairs embedded in text. Examples include product information,
classified advertisements, resumes, etc. The approach taken in the past to search these documents by first est ..."
Cited by 9 (0 self)
Add to MetaCart
A large fraction of the useful web comprises of specification documents that largely consist of ⟨attribute name, numeric value⟩ pairs embedded in text. Examples include product information,
classified advertisements, resumes, etc. The approach taken in the past to search these documents by first establishing correspondences between values and their names has achieved limited success
because of the difficulty of extracting this information from free text. We propose a new approach that does not require this correspondence to be accurately established. Provided the data has "low
reflectivity ", we can do effective search even if the values in the data have not been assigned attribute names and the user has omitted attribute names in the query. We give algorithms and indexing
structures for implementing the search. We also show how hints (i.e., imprecise, partial correspondences) from automatic data extraction techniques can be incorporated into our approach for better
accuracy on high reflectivity datasets. Finally, we validate our approach by showing that we get high precision in our answers on real datasets from a variety of domains.
, 2010
"... It is a well-established result that improved pivoting in linear solvers can be achieved by computing a bipartite matching between matrix entries and positions on the main diagonal. With the
availability of increasingly faster linear solvers, the speed of bipartite matching computations must keep up ..."
Cited by 9 (5 self)
Add to MetaCart
It is a well-established result that improved pivoting in linear solvers can be achieved by computing a bipartite matching between matrix entries and positions on the main diagonal. With the
availability of increasingly faster linear solvers, the speed of bipartite matching computations must keep up to avoid slowing down the main computation. Fast algorithms for bipartite matching, which
are usually initialized with simple heuristics, have been known for a long time. However, the performance of these algorithms is largely dependent on the quality of the heuristic. We compare
combinations of several known heuristics and exact algorithms to find fast combined methods, using real-world matrices as well as randomly generated instances. In addition, we present a new heuristic
aimed at obtaining high-quality matchings and compare its impact on bipartite matching algorithms with that of other heuristics. The experiments suggest that its performance compares favorably to the
best-known heuristics, and that it is especially suited for application in linear solvers.
- PROC. OF THE 21ST ANNUAL SYMPOSIUM ON THEORETICAL ASPECTS OF COMPUTER SCIENCE (STACS), LNCS 2996 , 2006
"... We present an improved average case analysis of the maximum cardinality matching problem. We show that in a bipartite or general random graph on n vertices, with high probability every
nonmaximum matching has an augmenting path of length O(log n). This implies that augmenting path algorithms like th ..."
Cited by 7 (0 self)
Add to MetaCart
We present an improved average case analysis of the maximum cardinality matching problem. We show that in a bipartite or general random graph on n vertices, with high probability every nonmaximum
matching has an augmenting path of length O(log n). This implies that augmenting path algorithms like the Hopcroft–Karp algorithm for bipartite graphs and the Micali–Vazirani algorithm for general
graphs, which have a worst case running time of O(m √ n), run in time O(m log n) with high probability, where m is the number of edges in the graph. Motwani proved these results for random graphs
when the average degree is at least ln(n) [Average Case Analysis of Algorithms for Matchings and Related Problems, Journal of the ACM, 41(6), 1994]. Our results hold, if only the average degree is a
large enough constant. At the same time we simplify the analysis of Motwani.
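For readers who want to see the augmenting-path idea these abstracts keep referring to in runnable form, here is a minimal sketch (my own illustration, not code from any of the cited papers) of Kuhn's algorithm in R; adj is assumed to be a list in which adj[[u]] holds the right-side vertices adjacent to left vertex u:

max_bipartite_matching <- function(adj, n_right) {
  match_right <- rep(NA_integer_, n_right)   # left partner of each right vertex
  augment <- function(u) {
    for (v in adj[[u]]) {
      if (!visited[v]) {
        visited[v] <<- TRUE
        # v is free, or its current partner can be re-matched along another path
        if (is.na(match_right[v]) || augment(match_right[v])) {
          match_right[v] <<- u
          return(TRUE)
        }
      }
    }
    FALSE
  }
  size <- 0
  for (u in seq_along(adj)) {
    visited <- rep(FALSE, n_right)           # reset the visited marks per left vertex
    if (augment(u)) size <- size + 1
  }
  list(size = size, match_right = match_right)
}
# toy example: left vertices {1,2,3}, right vertices {1,2}
max_bipartite_matching(list(c(1, 2), 1, 2), 2)$size   # 2

This simple version runs in O(V*E) time; the Hopcroft–Karp algorithm discussed above improves on it by augmenting along many shortest paths at once.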
, 2010
"... We report on careful implementations of seven algorithms for solving the problem of finding a maximum transversal of a sparse matrix. We analyse the algorithms and discuss the design choices. To
the best of our knowledge, this is the most comprehensive comparison of maximum transversal algorithms ba ..."
Cited by 6 (4 self)
Add to MetaCart
We report on careful implementations of seven algorithms for solving the problem of finding a maximum transversal of a sparse matrix. We analyse the algorithms and discuss the design choices. To the
best of our knowledge, this is the most comprehensive comparison of maximum transversal algorithms based on augmenting paths. Previous papers with the same objective either do not have all the
algorithms discussed in this paper or they used non-uniform implementations from different researchers. We use a common base to implement all of the algorithms and compare their relative performance
on a wide range of graphs and matrices. We systematize, develop and use several ideas for enhancing performance. One of these ideas improves the performance of one of the existing algorithms in most
cases, sometimes significantly. So much so that we use this as the eighth algorithm in comparisons. 1
"... Abstract. We describe a two-level push-relabel algorithm for the maximum flow problem and compare it to the competing codes. The algorithm generalizes a practical algorithm for bipartite flows.
Experiments show that the algorithm performs well on several problem families. 1 ..."
Cited by 3 (1 self)
Add to MetaCart
Abstract. We describe a two-level push-relabel algorithm for the maximum flow problem and compare it to the competing codes. The algorithm generalizes a practical algorithm for bipartite flows.
Experiments show that the algorithm performs well on several problem families. 1
- York University , 1999
"... iii Preface Needless to say, this work would not have been possible without the continuing support of Robert Hummel and Benjamin Goldberg. To them goes my deepest gratitude. iv Table of Contents
Acknowledgements............................................................................. iii ..."
Cited by 2 (0 self)
Add to MetaCart
iii Preface Needless to say, this work would not have been possible without the continuing support of Robert Hummel and Benjamin Goldberg. To them goes my deepest gratitude. iv Table of Contents
Acknowledgements............................................................................. iii
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=236950","timestamp":"2014-04-17T07:14:19Z","content_type":null,"content_length":"41218","record_id":"<urn:uuid:467c6a77-8257-4282-97e1-e6d876d4b70a>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00614-ip-10-147-4-33.ec2.internal.warc.gz"}
|
precent hall math.com
Author Message
H-Sal Posted: Saturday 16th of Sep 15:47
Hi there, I'm a freshman in college and I am experiencing trouble with my assignments. One of my issues is coping with precent hall math.com; Could someone help me to comprehend what it
is all about? I require to finish this very quickly. Thanking You In Advance .
Vofj Posted: Sunday 17th of Sep 15:21
I comprehend that problem, as I suffered comparable difficulties when I attended high school. I was genuinely behind in algebra, particularly in precent hall math.com. My marks were extraordinarily poor. I began trying Algebra Buster to help me with my math courses as well as with all of the homework exercises. Before too long I started getting all As in mathematics. It is a remarkably capable piece of software, as it details the questions in a stepwise presentation, so anybody should be able to understand them. I'm certain that anybody will find out how useful it can be.
Vild Posted: Sunday 17th of Sep 17:51
I can advise further if you can present additional particulars about those courses. In the meantime, you might assess Algebra Buster, which is an exceptional piece of software that helps to figure out algebra courses. It lays everything out systematically and makes the topics look genuinely simple. I really can state that it is without question worth every dime.
Marcos Posted: Monday 18th of Sep 18:38
Hot dog! I imagine that is exactly what I require. Could you tell me a way to acquire this software?
Mov Posted: Tuesday 19th of Sep 10:02
You can get it from http://www.algebra-online.com/order-algebra-online.htm. There are other information offered at the site. You could search to see if it meets your requirements.
|
{"url":"http://www.algebra-online.com/algebra-homework-1/precent-hall-math.com.html","timestamp":"2014-04-17T13:59:12Z","content_type":null,"content_length":"27387","record_id":"<urn:uuid:38689f83-bb68-423f-b933-9497808dca75>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00334-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Farmersville, TX Math Tutor
Find a Farmersville, TX Math Tutor
...I am confident in my abilities to relate complex chemical concepts to students in easy to understand ways. I received an A+ and an A in both semesters of biochemistry and recently graduated
Cum Laude from the University of Texas at Dallas with a B.S. in Biology. I believe that success in biochemistry really boils down to truly functional understanding of both chemistry and biology.
15 Subjects: including algebra 1, algebra 2, chemistry, physics
...I have taken courses in pre-algebra, algebra I and II, Matrix Algebra, Trigonometry, pre-calculus, Calculus I and II, Geometry and Analytical Geometry, Differential Equations. I was a tutor in
college for students that needed help in math. I have a Master's degree in civil engineering and have practiced engineering for almost 40 years where math important to performing my job.
11 Subjects: including algebra 1, algebra 2, American history, geometry
...I taught Geometry for two months as a long term substitute in Wylie. Unfortunately I picked the wrong year of the century to become a teacher in Texas, so I accepted a position working in the
Dallas County Community College District Service Center and plan to be a tutor. I am highly qualified to teach grades 7-12 Mathematics in Texas.
4 Subjects: including algebra 1, algebra 2, geometry, prealgebra
...While I'm new to this website, I'm not new to tutoring. I've worked as a GRE and GMAT instructor for Kaplan for over three years. Scored in the top 90th percentile in both verbal and math on
the GRE (won't mention the score, but if you really want to know you can ask me personally!). I've also worked as a math, physics, and statistics tutor for a tutoring company.
41 Subjects: including statistics, linear algebra, differential equations, logic
...I've taken many literature courses. In fact I was only 3-6hrs shy of having Literature as a second minor. I'm educated in World, Religious, and English (British and American) Literature.
14 Subjects: including algebra 2, algebra 1, biology, chemistry
Related Farmersville, TX Tutors
Farmersville, TX Accounting Tutors
Farmersville, TX ACT Tutors
Farmersville, TX Algebra Tutors
Farmersville, TX Algebra 2 Tutors
Farmersville, TX Calculus Tutors
Farmersville, TX Geometry Tutors
Farmersville, TX Math Tutors
Farmersville, TX Prealgebra Tutors
Farmersville, TX Precalculus Tutors
Farmersville, TX SAT Tutors
Farmersville, TX SAT Math Tutors
Farmersville, TX Science Tutors
Farmersville, TX Statistics Tutors
Farmersville, TX Trigonometry Tutors
|
{"url":"http://www.purplemath.com/farmersville_tx_math_tutors.php","timestamp":"2014-04-19T17:37:25Z","content_type":null,"content_length":"24147","record_id":"<urn:uuid:5c68a883-b6ce-4c2f-b2bd-a21bda0d1a58>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00272-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Graph theory matching size
September 30th 2013, 12:26 PM
Graph theory matching size
Prove that every graph G without isolated vertices has a matching of size at least n(G)/(1+∆(G)). (Hint: Apply induction on e(G)).
I have started this problem several times, but this is my latest attempt:
For the base case, let every edge of G be incident to a vertex with degree = 1. Then each component of G has, at most, one vertex with degree > 1 which implies that each component is a star. A
matching can be formed by using one edge from each component. The number of components is at least n(G)/(∆(G)+1), since each component has at most 1 + dG(v) ≤ 1 + ∆(G) vertices.
Now suppose the hypothesis is true for G with k edges and consider a graph H with k+1 edges. Then ∆(G) would have to be k, right? And then apply the IH?
Thanks for any help.
September 30th 2013, 03:09 PM
Re: Graph theory matching size
If there are no isolated vertices, then all vertices have degree at least 1. Hence, $e(G) \ge \dfrac{n(G)}{2}$. Now, $\Delta(G)$ is the maximum degree of any vertex. If $G$ has $k$ edges and $\Delta(G) = k$, then there exists a vertex incident to every edge in $G$. This would imply that $G$ has only one component. Why are you limiting to only connected graphs that also happen to be stars?
Instead, assume the hypothesis holds for any graph with fewer than $k$ edges. Let $G$ be a graph with $k$ edges, and let $H = G-e$ for some $e \in E(G)$. Now, by the IH, $H$ satisfies the
hypothesis. Add back $e$. If the smallest integer greater than or equal to $\dfrac{n(G)}{1+\Delta(G)}$ equals the smallest integer greater than or equal to $\dfrac{n(H)}{1+\Delta(H)}$, then a
matching in $H$ that satisfies the hypothesis is a matching in $G$ that satisfies the hypothesis. So, figure out when adding $e$ to $H$ will make that number increase. That will help you figure
out what you need to prove. (Hint: $\Delta(G) \ge \Delta(H)$)
September 30th 2013, 03:34 PM
Re: Graph theory matching size
The number will increase when you add e back to a vertex in H with the maximum degree. Is that what you're thinking?
September 30th 2013, 03:43 PM
Re: Graph theory matching size
If $\Delta(G) \ge \Delta(H)$ and $n(G) = n(H)$ then $\dfrac{n(H)}{1+\Delta(H)} \ge \dfrac{n(H)}{1+\Delta(G)} = \dfrac{n(G)}{1+\Delta(G)}$. So, any matching of $H$ that satisfies the hypothesis
will still be a matching in $G$ that will satisfy the hypothesis.
September 30th 2013, 03:45 PM
Re: Graph theory matching size
(Oh, unless deleting $e$ from $G$ leaves an isolated vertex)... so, you need to be careful when picking $e$. If removing any $e$ will leave an isolated vertex, then you know something else about
$G$, and you can still show that the graph satisfies the hypothesis.
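A sketch of how that last case can be finished (my addition for readers following along, not necessarily the intended route): if deleting any edge of $G$ isolates a vertex, then every edge of $G$ has an endpoint of degree 1, so every component of $G$ is a star. A star with centre $v$ has $1 + d_G(v) \le 1 + \Delta(G)$ vertices, so $G$ has at least $\dfrac{n(G)}{1+\Delta(G)}$ components, and choosing one edge from each component gives a matching of the required size.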
|
{"url":"http://mathhelpforum.com/advanced-math-topics/222444-graph-theory-matching-size-print.html","timestamp":"2014-04-18T07:22:41Z","content_type":null,"content_length":"12006","record_id":"<urn:uuid:8c124000-ddfd-4204-8d3d-2a99ac2e66f7>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00522-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Are there "motivic" proofs of Weil conjectures in special cases?
This is a question meant as a first step to get into reading more on Weil conjectures and standard conjectures. It is known that the standard conjectures on vanishing of cycles would imply the Weil
conjectures. So, are there proofs of Weil conjectures in special cases using partial results on the standard conjectures? If so, which cases, and what are the references?
Background: Borcherds mentions here that Manin proved a few special cases in higher dimensions using motives.
ag.algebraic-geometry weil-conjectures motives
3 Answers
Of course, there's Serre's Analogues kählériens de certaines conjectures de Weil, Annals 1960, where he deduces an analogue of the Weil-Riemann hypothesis over $\mathbb{C}$ using standard
facts from Hodge theory. This is technically not an answer at all, but I thought I'd mention it since I had the (perhaps mistaken) impression that this was partly the inspiration for the
standard conjectures.
The other more relevant comment is that one can give an elementary proof of the Weil conjecture for any smooth variety whose Grothendieck motive lies in the tensor category generated by curves. I should explain, especially in light of Minhyong's comments, that this could be understood as shorthand for saying the variety can be built up from curves by taking products, taking images, blow-ups along centres which are of the same type, and so on. Actually, for such varieties, the Frobenius can be seen to act semisimply. I think this is open in general. So perhaps there's some value in this.
If you believe the Tate conjectures, then every motive belongs to the tensor category generated by motives of curves. I recall from somewhere that Grothendieck had the idea to prove the
Weil conjectures by covering varieties with products of curves; then Serre found a surface that couldn't be covered rationally by a product of curves (see Grothendieck-Serre
correspondence, 31/3/1964). – Tony Scholl Jul 28 '10 at 19:36
The paper by Manin is MR0258836: Manin, Ju. I., Correspondences, motifs and monoidal transformations. Mat. Sb. (N.S.) 77 (119) 1968 475--507, where he uses motives to prove the Weil conjectures for unirational projective 3-folds.
It's a nice paper. But I have to disagree that Manin 'uses' motives in any real sense. What he uses are: A unirational three-fold $Y$ admits a generically finite proper map from a variety $X$ that is obtained from $P^3$ using a sequence of blow-ups along smooth centers that are curves or points. From the computation of the cohomology of such a blow-up, $X$ satisfies the Weil conjecture. (Because projective spaces and curves do.) Since the cohomology of $Y$ embeds into that of $X$, $Y$ satisfies the Weil conjecture. – Minhyong Kim Jul 28 '10 at 15:38
I realize this is something of a hard line to take, but it seems to be mathematically more fruitful to be somewhat stricter when speaking of 'using motives' in a proof. At least, this
stance sets up more interesting challenges to people working on motives. – Minhyong Kim Jul 28 '10 at 15:41
I agree with Minhyong's hard line stance, but want to add a complementary remark: the sense in which Manin uses motives, and the reason they appear in the title of his paper, is that he is using the idea that, since the Weil conjectures are essentially homological in nature, one can study them using the idea of cutting up spaces (in this case, cutting up a unirational
3-fold into pieces related to curves and projective spaces). This was always a basic idea of algebraic topology, but the idea of using it in algebraic geometry is part of the yoga of
motives. (cont'd ...) – Emerton Jul 28 '10 at 15:47
I agree with Matthew in a philosophical sense. Even though they're unnecessary mathematically, the paper is inspired by the following two motivic ideas (in the notation I used above): (1) $Y$ is a direct summand of $X$; (2) $X$ is the sum of Tate motives and Tate twists of curves. – Minhyong Kim Jul 28 '10 at 15:55
@Emerton: algebraic topology = algebraic geometry? :) – David Hansen Jul 28 '10 at 16:21
Weil's proofs for curves and abelian varieties essentially use special cases of the standard conjectures, and the framework of the standard conjectures (like many other conjectures of a
motivic nature) is suggested by trying to generalize the abelian variety case (or if you like, the case of H^1) to general varieties (or if you like, to cohomology in higher degrees).
Deligne's proof for K3 surfaces uses a motivic relation between K3s and abelian varieties (which is most easily seen on the level of Hodge structures) to import the result for abelian
varieties into the context of K3 surfaces. This is not so different in spirit to Manin's proof for unirational 3-folds, except that the relationship between the K3 and associated abelian variety (the so-called Kuga-Satake variety) is not quite as transparent.
[Added, in light of the comments by Donu Arapura and Tony Scholl below:] In the K3 example, it would be better to write "a conjectural motivic relation ... (which can be observed rigorously
on the level of Hodge structures) ...".
For the sake of completeness: Here's the MR link for Deligne's proof for K3 surfaces: ams.org/mathscinet-getitem?mr=296076 – Anweshi Jul 28 '10 at 16:12
When you talk about the motivic relation between K3s and abelian varieties, I assume you mean at the level of absolute Hodge cycles. Isn't it unknown in general as to whether it's given
by a genuine correspondence? – Donu Arapura Jul 28 '10 at 18:05
Donu: In fact, Deligne explains in the introduction that his (quite miraculous) proof is inspired by motivic considerations, but that he does not construct an identity between motives.
Apart from some trivial cases (eg Kummer surfaces) there are no proven identities between K3 motives and abelian variety motives, except in the absolute Hodge category - which doesn't behave well under reduction mod p. It is the rigidity of families that enables Deligne to get reduction mod p to work. One should read his proof in full - even if the result has been superseded by Weil I/II. – Tony Scholl Jul 28 '10 at 19:21
Tony, thanks for shedding some light. I had wondered how Deligne got mod p consequences out of this very transcendental construction. I should look at it (one day...). – Donu Arapura Jul
28 '10 at 19:49
|
{"url":"http://mathoverflow.net/questions/33665/are-there-motivic-proofs-of-weil-conjectures-in-special-cases?sort=newest","timestamp":"2014-04-17T10:04:57Z","content_type":null,"content_length":"73049","record_id":"<urn:uuid:058e4a0e-dab0-4de1-9aef-d9ed8675b47a>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00254-ip-10-147-4-33.ec2.internal.warc.gz"}
|
existence of an integrable product of fg
May 3rd 2010, 08:38 AM #1
Do there exist strictly positive non-integrable functions f,g:[0,1]-->R such that their product fg is integrable?
Please provide an example or prove that they don't exist.
just a question i was pondering
Oops, I missed the strictly positive part.
Last edited by JG89; May 3rd 2010 at 10:07 AM.
Let f(x) = 1/2 if x is rational and 2 if x is irrational, and let g(x) = 2 if x is rational and 1/2 if x is irrational. Both f and g are strictly positive, and neither is Riemann-integrable on [0,1] (I'm assuming this is the type of integrability you're talking about), since each is discontinuous at every point of the interval, so its set of discontinuities has positive measure. But fg = 1, which is integrable.
|
{"url":"http://mathhelpforum.com/differential-geometry/142818-existance-integrable-product-fg.html","timestamp":"2014-04-16T16:18:09Z","content_type":null,"content_length":"33872","record_id":"<urn:uuid:35e2b8e6-72ac-4a2a-8e58-3be574008445>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00532-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Using Simpson's rule to find the work done on an object
April 29th 2013, 05:45 PM
Using Simpson's rule to find the work done on an object
The table shows values of a force function f(x), where x is measured in meters and f(x) in newtons. Use Simpson's Rule to estimate the work done by the force in moving an object a distance of 18
x (m):    0    3    6    9    12   15   18
f(x) (N): 9.5  9.4  8.4  8    7.9  7.5  7.3
S = (Δx/3)[f(x_0) + 4f(x_1) + 2f(x_2) + 4f(x_3) + ... + 2f(x_{n-2}) + 4f(x_{n-1}) + f(x_n)]
How do I relate this to finding the work done?
I thought maybe I could take integral from 0 to 18 of 149xdx and I got 24138 as an answer, which is wrong. Am I making a silly error? Thanks for any help.
April 29th 2013, 10:45 PM
Re: Using Simpson's rule to find the work done on an object
Hey Steelers72.
The work done is the force applied over some distance. In this particular case it will be the integral over the path (which is the x direction vector).
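To make that concrete, here is a quick numerical check (mine, not part of the thread) of plugging the tabulated values straight into Simpson's rule in R, using only the table quoted above with Δx = 3:

f <- c(9.5, 9.4, 8.4, 8, 7.9, 7.5, 7.3)   # force values in newtons at x = 0, 3, ..., 18
w <- c(1, 4, 2, 4, 2, 4, 1)               # Simpson weights for the 6 subintervals
(3 / 3) * sum(w * f)                      # (Δx/3) * weighted sum, about 149

So the rule itself already gives the work estimate, roughly 149 joules; there is no further integral of 149x to take.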
May 1st 2013, 12:02 PM
Re: Using Simpson's rule to find the work done on an object
|
{"url":"http://mathhelpforum.com/calculus/218382-using-simpsons-rule-find-work-done-object-print.html","timestamp":"2014-04-20T20:24:12Z","content_type":null,"content_length":"7024","record_id":"<urn:uuid:3ca0e193-5e18-4cb5-a5cc-b15eaee61708>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00439-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Here's the question you clicked on:
Find the degree of 9m^4n^2 show work
It'd be 6: for a single term, the degree is the sum of the exponents on its variables. The term is 9m^4n^2, so 4 + 2 = 6.
|
{"url":"http://openstudy.com/updates/510adf90e4b070d859bf5310","timestamp":"2014-04-18T18:40:40Z","content_type":null,"content_length":"27611","record_id":"<urn:uuid:6503e080-67b0-42d5-a7fd-4b0bb8a94e58>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00144-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Instrumental Variables without Traditional Instruments
Typically, regression models in empirical economic research suffer from at least one form of endogeneity bias.
The classic example is economic returns to schooling, where researchers want to know how much increased levels of education affect income. Estimation using a simple linear model, regressing income on
schooling, alongside a bunch of control variables, will typically not yield education’s true effect on income. The problem here is one of omitted variables – notably unobserved ability. People who
are more educated may be more motivated or have other unobserved characteristics which simultaneously affect schooling and future lifetime earnings.
Endogeneity bias plagues empirical research. However, there are solutions, the most common being instrumental variables (IVs). Unfortunately, the exclusion restrictions needed to justify the use of
traditional IV methodology may be impossible to find.
So, what if you have an interesting research question, some data, but endogeneity with no IVs. You should give up, right? Wrong. According to Lewbel (forthcoming in Journal of Business and Economic
Statistics), it is possible to overcome the endogeneity problem without the use of a traditional IV approach.
Lewbel’s paper demonstrates how higher order moment restrictions can be used to tackle endogeneity in triangular systems. Without going into too much detail (interested readers can consult Lewbel’s
paper), this method is like the traditional two-stage instrumental variable approach, except the first-stage exclusion restriction is generated by the control, or exogenous, variables which we know
are heteroskedastic (interested practitioners can test for this in the usual way, i.e. a White test).
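For instance, one way to run that check (my own illustration, not part of the original post) is a White-style test on the first-stage regression using bptest from the lmtest package, where y1 is the endogenous regressor and x1, x2 are the exogenous variables in a data frame dat:

library(lmtest)
first.stage <- lm(y1 ~ x1 + x2, data = dat)
# auxiliary regression on the regressors, their squares and cross-product
bptest(first.stage, ~ x1 + x2 + I(x1^2) + I(x2^2) + x1:x2, data = dat)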
In the code below, I demonstrate how one could employ this approach in R using the GMM framework outlined by Lewbel. My code only relates to a simple example with one endogenous variable and two
exogenous variables. However, it would be easy to modify this code depending on the model.
# gmm function for 1 endog variable with 2 hetero exogenous variable
# outcome in the first column of 'datmat', endog variable in second
# constant and exog variables in the next three
# hetero exog in the last two (i.e no constant)
g1 <- function(theta, datmat) {
#set up data
y1 <- matrix(datmat[,1],ncol=1)
y2 <- matrix(datmat[,2],ncol=1)
x1 <- matrix(datmat[,3:5],ncol=3)
z1 <- matrix(datmat[,4:5],ncol=2)
# if the variable in the 4th col was not hetero
# this could be modified so:
# z1 <- matrix(datmat[,5],ncol=1)
#set up moment conditions
in1 <- (y1 - theta[1]*x1[,1] - theta[2]*x1[,2] - theta[3]*x1[,3])
in2 <- (y2 - theta[4]*x1[,1] - theta[5]*x1[,2] - theta[6]*x1[,3] - theta[7]*y1)
M <- NULL
# moment matrix: mean-centred hetero regressors, the standard exogeneity moments,
# and the Lewbel moments (centred regressors times the product of the two residuals)
for(i in 1:dim(z1)[2]){M <- cbind(M,(z1[,i]-mean(z1[,i])))}
for(i in 1:dim(x1)[2]){M <- cbind(M,in1*x1[,i])}
for(i in 1:dim(x1)[2]){M <- cbind(M,in2*x1[,i])}
for(i in 1:dim(z1)[2]){M <- cbind(M,in2*((z1[,i]-mean(z1[,i]))*in1))}
return(M)
}
# so estimation is easy (using the gmm package):
# gmm(moment function, data matrix, initial values vector)
# e.g.: gmm(g1, x = as.matrix(dat), c(1,1,1,1,1,1,1))
I also tested the performance of Lewbel’s GMM estimator in comparison a mis-specified OLS estimator. In the code below, I perform 500 simulations of a triangular system containing an omitted
variable. For the GMM estimator, it is useful to have good initial starting values. In this simple example, I use the OLS coefficients. In more complicated settings, it is advisable to use the
estimates from the 2SLS procedure outlined in Lewbel’s paper. The distributions of the coefficient estimates are shown in the plot below. The true value, indicated by the vertical line, is one. It is
pretty evident that the Lewbel approach works very well. I think this method could be very useful in a number of research disciplines.
library(gmm)   # needed for gmm() below
beta1 <- beta2 <- NULL
for(k in 1:500){
#generate data (including intercept)
x1 <- rnorm(1000,0,1)
x2 <- rnorm(1000,0,1)
u <- rnorm(1000,0,1)
s1 <- rnorm(1000,0,1)
s2 <- rnorm(1000,0,1)
ov <- rnorm(1000,0,1)
e1 <- u + exp(x1)*s1 + exp(x2)*s1
e2 <- u + exp(-x1)*s2 + exp(-x2)*s2
y1 <- 1 + x1 + x2 + ov + e2
y2 <- 1 + x1 + x2 + y1 + 2*ov + e1
x3 <- rep(1,1000)
dat <- cbind(y1,y2,x3,x1,x2)
#record ols estimate
beta1 <- c(beta1,coef(lm(y2~x1+x2+y1))[4])
#init values for iv-gmm
init <- c(coef(lm(y2~x1+x2+y1)),coef(lm(y1~x1+x2)))
#record gmm estimate
beta2 <- c(beta2,coef(gmm(g1, x = as.matrix(dat), init))[7])
}
d <- data.frame(rbind(cbind(beta1,"OLS"),cbind(beta2,"IV-GMM")))
d$beta1 <- as.numeric(as.character(d$beta1))
library(sm)   # provides sm.density.compare
sm.density.compare(d$beta1, d$V2, xlab=("Endogenous Coefficient"))
title("Lewbel and OLS Estimates")
legend("topright", levels(d$V2),lty=c(1,2,3),col=c(2,3,4),bty="n")
16 thoughts on “Instrumental Variables without Traditional Instruments”
1. This is great! thanks for sharing.
□ Thanks!
2. Nice job!
□ Thanks, I hope you found it useful!
3. How does this compare to Amemya and McCurdy’s method? Sounds similar.
□ AFAIK A & McC is a panel method. I would have to check that though, so I am willing to be corrected. Thanks for the comment!
4. Great to see this econometric topics discussed in R, thanks!
5. Thanks a lot for the illustrative example. I have two questions:
Say, I wanted to estimate the correct OLS (including the ommitted variable), just for the sake of curiosity. So I replace line 18 in the second code panel with
beta1 <- c(beta1,coef(lm(y2~x1+x2+y1+ov))[4])
(Note the additional regressor "ov" in the equation.) Why are the results for y1 still upward biased? Actually, they do not differ significantly from the restricted case at all.
I tried to compare Lewbel's method to the Klein & Vella (2010) estimator (JEconometrics 154 (2),154-164). It also uses heteroskedasticity for identification. In their paper they do MC simulations
with the following data generating process:
x1 <- rnorm(1000,0,1)
x2 <- rnorm(1000,0,1)
vstar <- rnorm(1000,0,1)
ustar <- .33*vstar+rnorm(1000,0,1)
u <- 1 + exp(.2*x1+.6*x2)*ustar
v <- 1 + exp(.6*x1+.2*x2)*vstar
y1 <- 1 + x1 + x2 + v
y2 <- 1 + x1 + x2 + y1 + u
In this DGP endogeneity doesn't result from omitted variables but from the correlated error terms, u and v, with a degree of .33. This corresponds to their "multiplicative error structure", for which they prove their estimator to be consistent. They discuss omitted variables in a way that they can be included as a factor in the composite error terms:
u=S_u * w * u^star
v=S_v * w * v^star
where S are heteroskedasticity functions that "scale" the homoskedastic (and correlated) errors, u^star and v^star, to being heteroskedastic errors. "w" is meant to be a common element,
representing ommitted variables like, for example, "ability" in wage regressions.
Using this data generating process for your simulation example shows that Lewbel's estimator performs very poorly. (You can see this by just replacing the DGP in the provided code above.)
What went wrong? Any ideas? Does Lewbel's estimator not allow for such DGPs?
□ Hey Nils,
Thanks for the detailed comment.
I will try and answer as concisely as possible.
On 1/, you are right. This problem seems to be caused by the error terms, the distributions for which are far from normal. However, when I ran the MC analysis the OLS estimate that includes
the omitted variable is 1.049 and the OLS without is 1.140 (as in the example). So the upward bias is still there, but not as bad. Using robust regressions (rlm in the MASS package) reduces
this bias further to 1.03919.
Your point on 2/ is something I have been thinking about. I can’t give you a full reply now. What I want to do is write up a function that estimates Klein and Vella’s method, then compare
both the KV and Lewbel approaches using the MC simulations undertaken in both the KV and the Lewbel papers. I think this would be a useful and informative exercise. Hopefully, I get some time
to do this in the coming weeks. Bear with me on this!
☆ Thanks for your reply. I just found a typo in my program that caused Lewbel to go wild. So “very poor performance” was a too harsh judgement.
Instead I find that Lewbel reduces the bias, but not all of it. The betas’ distributions centers at roughly 1.12, which is closer to the true value than OLS.
I am also working on testing Klein & Vella in comparison to Lewbel. Though I start with the parametrized version of Farré, Klein & Vella (2010, EmpEconomics). I keep you updated.
☆ Great. Keep me posted, and I will do likewise!
6. great. Thanks for the post.
7. Thanks a lot for your post. It is very helpful. In order to fully understand what Lewbel IV does, I am trying to replicate the Stata results that I get for ivreg2h for my data and cannot manage
to get the same numbers. How would your code look like without the gmm black box? Have you alreadu done that? Thanks a lot for your help!
□ Hi Catalina,
Thanks for the question. I am actually in the process of writing up some functions that perform the Lewbel calculation. I will email you and let you know when they are ready to go.
I am very busy at the moment, so it might be a while before I get around to it. However, it is worth pointing out that the Stata GMM estimator is not the GMM estimator that Lewbel describes
in his paper. The Stata GMM estimator is just a wrapper for ivreg2 gmm, where the generated IVs are just included as regular regressors. This estimator is just like the simple one proposed by
Lewbel. The Stata function ignores the uncertainty generated by estimating the model with generated instruments so the standard errors in the Stata function will/could (correct me if
incorrect) be wrong (how wrong, I am not sure what difference it makes). The upside to this is that you should be able to use the gmm or AER packages in R to generate comparable estimates and
standard errors to the Stata version.
Send me an email if you need any further help.
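In the meantime, for anyone who wants to try the simple (non-GMM) version by hand, here is a minimal sketch of the two-step construction described above; it is my own illustration rather than code from the post, and it assumes a data frame dat with outcome y2, endogenous regressor y1 and exogenous variables x1 and x2:

library(AER)
first <- lm(y1 ~ x1 + x2, data = dat)        # first-stage regression of the endogenous variable
e1 <- residuals(first)
z1 <- (dat$x1 - mean(dat$x1)) * e1           # generated instruments
z2 <- (dat$x2 - mean(dat$x2)) * e1
lew <- ivreg(y2 ~ x1 + x2 + y1 | x1 + x2 + z1 + z2, data = dat)
summary(lew)

As noted above, the standard errors from this shortcut ignore the fact that the instruments are generated, so treat them with some caution.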
☆ Thanks a lot for your reply, it is really helpful! I am now trying to create a code in R. I would like to be able to play around with the parameters to see how the bias changes and up to
what extent Lewbel works or not with credible assumptions for my data. I will take into account the gmm or AER packages for getting the right SE. I will let you know if I have questions
and if I get something interesting! Thanks again!
|
{"url":"http://diffuseprior.wordpress.com/2012/04/14/instrumental-variables-without-traditional-instruments/","timestamp":"2014-04-18T11:58:12Z","content_type":null,"content_length":"88487","record_id":"<urn:uuid:615a03fe-1885-4a3e-9ecf-e43fc21cc157>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00494-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Edmonton's hot hand over Saskatchewan ends after 29 seasons
In 2006, for the first time in 30 years, the CFL’s Saskatchewan Roughriders will finish with a better record than the Edmonton Eskimos.
In this article (link may expire), Brian Grest, a math teacher from the Saskatoon area, quotes the odds against 29 straight 50-50 underachievings at 536,870,912 to 1.
Actually, the correct odds are 536,870,911 to 1. But I bet that error is the reporter's fault, not Grest's. (Also, the reporter calls this "nearly half-a-billion to one" -- the word "nearly"
seems kind of inappropriate here.)
A quick check of the CFL website shows that the two teams actually tied in the standings twice in that 29-year span. In both cases, the CFL website shows Edmonton ahead. The article says the Eskimos
got the nod in 2004 based on points differential, which was the CFL tiebreaker rule. However, in 1988, it was Saskatchewan that had the better differential. Maybe the tiebreaker rule was
different then, but I’m too lazy to try to find out.
Mr. Grest seems like he would have thought of ties, and I'm betting he got it right. But if we count ties as just ties, and they happen 5% of the time, the probability of 29 straight wins or ties is
“only” 1 in 130,430,813. If ties happen twice in 29 years on average, the chance is 1 in 77,610,895.
Finally, I think we have a new winner of the “reporter attributing every mathematical fact to someone else, just in case” award:
“Grest calculated that, all things being equal, Edmonton has a one in two chance of finishing ahead of Saskatchewan in any given season.”
Not to mention that that the word “calculated” also seems kind of inappropriate here.
By the way, I don’t know whether 29 consecutive years of domination is exceedingly rare, or just plain rare. Did the Yankees ever finish ahead of anyone for 29 consecutive years? Maybe the A’s?
|
{"url":"http://blog.philbirnbaum.com/2006/10/edmontons-hot-hand-over-saskatchewan.html","timestamp":"2014-04-17T13:24:59Z","content_type":null,"content_length":"25764","record_id":"<urn:uuid:7d5bba8e-ccb3-46e5-810d-e8814ad645b4>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00112-ip-10-147-4-33.ec2.internal.warc.gz"}
|
C14 : Elliptic integrals
s21bac nag_elliptic_integral_rc
Degenerate symmetrised elliptic integral of 1st kind R[C](x,y)
s21bbc nag_elliptic_integral_rf
Symmetrised elliptic integral of 1st kind R[F](x,y,z)
s21bcc nag_elliptic_integral_rd
Symmetrised elliptic integral of 2nd kind R[D](x,y,z)
s21bdc nag_elliptic_integral_rj
Symmetrised elliptic integral of 3rd kind R[J](x,y,z,r)
s21bec nag_elliptic_integral_F
Elliptic integral of 1st kind, Legendre form, F(φ ∣ m)
s21bfc nag_elliptic_integral_E
Elliptic integral of 2nd kind, Legendre form, E (φ ∣ m)
s21bgc nag_elliptic_integral_pi
Elliptic integral of 3rd kind, Legendre form, Π (n ; φ ∣ m)
s21bhc nag_elliptic_integral_complete_K
Complete elliptic integral of 1st kind, Legendre form, K (m)
s21bjc nag_elliptic_integral_complete_E
Complete elliptic integral of 2nd kind, Legendre form, E (m)
s21dac nag_general_elliptic_integral_f
Elliptic integrals of the second kind with complex arguments
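For orientation only (this note is not part of the NAG index): the Legendre-form integrals above can be expressed in terms of the symmetrised (Carlson) forms, for example F(φ ∣ m) = sin(φ) · R[F](cos²φ, 1 − m sin²φ, 1) and K(m) = R[F](0, 1 − m, 1).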
© The Numerical Algorithms Group Ltd, Oxford UK. 2012
|
{"url":"http://www.nag.co.uk/numeric/CL/nagdoc_cl23/pdf/INDEXES/GAMS/c14.html","timestamp":"2014-04-17T01:18:55Z","content_type":null,"content_length":"6498","record_id":"<urn:uuid:a769967b-a48e-418a-9684-0d5b588a2baf>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00425-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Establish formula for approximation
September 23rd 2010, 10:50 PM #1
Establish formula for approximation
If x=a is an approximation for the equation f(x)=0, then $x=a-\frac{f(a)}{f'(a)}$ is a closer approximation, establish the formula
$x_{r+1}=x_r (2-Nx_r)$ as a method of successive approximations to the reciprocal of N.
I don't know where to begin.
Thank you
As an iterative formula, Newton's method takes the form,
$\displaystyle x_{n+1}=x_{n}-\frac{f(x_{n})}{f^{\prime}(x_{n})}$.
To use the method to find the reciprocal of the number $N$, you have to solve the equation
$\displaystyle \frac{1}{x}-N=0,$
in which case
$\displaystyle f(x)= \frac{1}{x}-N.$
Substitute this into the RHS of the formula and simplify.
how did you arrive at the $\frac{1}{x}-N=0$?
can you give me like the reason, logic behind it?
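(A sketch added here for later readers, not part of the original exchange: the equation $\frac{1}{x}-N=0$ is chosen because its root is $x=\frac{1}{N}$, so Newton's method applied to it converges to the reciprocal of $N$ without ever dividing. With $f(x)=\frac{1}{x}-N$ we have $f'(x)=-\frac{1}{x^2}$, so
$\displaystyle x_{r+1}=x_r-\frac{\frac{1}{x_r}-N}{-\frac{1}{x_r^2}}=x_r+x_r^2\left(\frac{1}{x_r}-N\right)=2x_r-Nx_r^2=x_r(2-Nx_r).$)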
|
{"url":"http://mathhelpforum.com/calculus/157263-establish-formula-approximation.html","timestamp":"2014-04-19T21:06:09Z","content_type":null,"content_length":"38692","record_id":"<urn:uuid:0670c1c6-6700-49ef-8557-bebc551c45fd>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00454-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Denominator
In mathematics, a fraction consists of one quantity divided by another quantity. The fraction "three divided by four" or "three over four" or "three fourths" can be written as
\frac{3}{4}
or 3 ÷ 4
or 3/4
In this article, we will use the latter notation. Other typical fractions include -2/7 and (x+1)/(x-1). The first quantity, the number "on top of the fraction", is called the numerator, and the other
number is called the denominator. The denominator can never be zero. A fraction consisting of two integers is called a rational number.
Several rules for the calculation with fractions are useful:
Cancelling. If both the numerator and the denominator of a fraction are multiplied or divided by the same nonzero number, then the fraction does not change its value. For instance, 4/6 = 2/3 and 1/x = x/x^2.
Adding fractions. To add or subtract two fractions, you first need to change the two fractions so that they have a common denominator; then you can add or subtract the numerators. For instance, 2/3 +
1/4 = 8/12 + 3/12 = 11/12.
Multiplying fractions. To multiply two fractions, multiply the numerators to get the new numerator, and multiply the denominators to get the new denominator. For instance, 2/3 × 1/4 = (2×1) / (3× 4)
= 2 / 12 = 1 / 6.
Dividing fractions. To divide one fraction by another one, flip numerator and denominator of the second one, and then multiply the two fractions. For instance, (2/3) / (4/5) = 2/3 × 5/4 = (2×5) /
(3×4) = 10/12 = 5/6.
In abstract algebra, these rules can be proved to hold in any field. Furthermore, if one starts with any integral domain R, one can always construct a field consisting of all fractions of elements of
R, the field of fractions of R.
All Wikipedia text is available under the terms of the GNU Free Documentation License
|
{"url":"http://encyclopedia.kids.net.au/page/de/Denominator","timestamp":"2014-04-19T09:42:03Z","content_type":null,"content_length":"14290","record_id":"<urn:uuid:4ac8cb84-d213-4118-a511-21ccd1b92093>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00426-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Princeton Township, NJ Geometry Tutor
Find a Princeton Township, NJ Geometry Tutor
...I obtained my MS in Computer Science at Pennsylvania State University in 1972. The Computer Science Program at Pennsylvania State University included (1) Programming Languages, and Language
theory, (2) Compiler, Operating System, and Data Base design, (3) Mathematical Logic, Number Theory, etc. ...
11 Subjects: including geometry, calculus, algebra 1, algebra 2
My background is mathematical physics (PhD). I also have a strong background in mathematics. I have been tutoring mathematics and physics for the past ten years. I like to make learning math and
physics fun for the students I tutor, giving them examples to which they can relate.
12 Subjects: including geometry, calculus, physics, algebra 1
...I've also worked at a sleep-away camp in Pennsylvania for 9 weeks over the summer, so I have a lot of experience working with kids. I enjoy math (especially physics) and consider myself blessed
with the understanding of these subjects. I also play soccer for my school.
20 Subjects: including geometry, chemistry, physics, calculus
...One of my daughters, who is a successful WyzAnt tutor, has encouraged me to become a tutor and help others the way I've helped her. I can teach all levels of math up to pre-calculus and all
levels of general and organic chemistry up to the graduate level. Even in my research laboratory, I train...
10 Subjects: including geometry, chemistry, algebra 1, organic chemistry
...Also available for general science, biology and chemistry. I was an Adjunct Professor of Chemistry at Rider University where I taught Chem to non-science majors for about 5 years. I can show
your student the applications of all this science in everyday life.
13 Subjects: including geometry, chemistry, biology, algebra 1
|
{"url":"http://www.purplemath.com/Princeton_Township_NJ_geometry_tutors.php","timestamp":"2014-04-16T19:22:05Z","content_type":null,"content_length":"24489","record_id":"<urn:uuid:c0cd683b-8a70-4006-9ba8-5a4ecd998f0b>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00549-ip-10-147-4-33.ec2.internal.warc.gz"}
|
end-point extrema
October 7th 2011, 01:22 AM
Stuck Man
end-point extrema
Is this book wrong about point B? I think it is an end-point but not an end-point minimum. The function at 0 has a lower value.
October 7th 2011, 02:03 AM
Re: end-point extrema
Book is correct here:
An end-point minimum occurs if there exists a region in the domain for which B is an end-point (you agreed it's true) and for which $f(B)\leq f(x), \forall x$ in that region. Note: it only has to hold for some region, not for every region (that would be the definition of a minimum of the whole function); in this example use $[3,4]$.
October 7th 2011, 04:11 AM
Stuck Man
Re: end-point extrema
Thanks. I had started to think it is correct. Surely you are talking about B not A?
October 7th 2011, 04:19 AM
Re: end-point extrema
yes, my mistake, I've edited previous post!
October 7th 2011, 05:50 AM
Stuck Man
Re: end-point extrema
In this example C and D are end-point minimums aren't they?
October 7th 2011, 07:12 AM
Re: end-point extrema
Nope; even though they satisfy second condition they are not endpoints!
October 7th 2011, 07:44 AM
Stuck Man
Re: end-point extrema
They are end-points of the second and third functions. The book definitely describes these as end-points.
October 7th 2011, 07:55 AM
Re: end-point extrema
Could you please copy here how end-point is exactly defined?
And you are looking at endpoints of $f(x)$ itself, not of every piece of it. By that logic you could split the function above (the one defined on -2 to 4) into infinitely many parts and have infinitely many end-points.
October 7th 2011, 08:07 AM
Stuck Man
Re: end-point extrema
Unfortunately it is not defined.
October 7th 2011, 08:10 AM
Stuck Man
Re: end-point extrema
The book does talk about C and D as being at end-points of the subdomains.
October 7th 2011, 08:35 AM
Stuck Man
Re: end-point extrema
This is a good guide to piecewise functions and extrema: Powered by Google Docs
|
{"url":"http://mathhelpforum.com/pre-calculus/189725-end-point-extrema-print.html","timestamp":"2014-04-21T00:13:54Z","content_type":null,"content_length":"8249","record_id":"<urn:uuid:ecb69581-4c13-4bdd-afeb-892bd17362a7>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00351-ip-10-147-4-33.ec2.internal.warc.gz"}
|
how to sketch a graph of the product of two trig identities?
April 14th 2013, 02:14 PM #1
how to sketch a graph of the product of two trig identities?
Hey all,
so, I have encountered a problem that asks me to sketch the graph of the following: f(x) = sin(x)cos(12x). I know how to sketch them both separately, but what is the procedure so that I can do
this properly without using a calculator?
another type of problem asks me to sketch something like this: 4cos(x)-3sin(x). I have the same issue here as I would the above example.
thanks in advance for your help!
Re: how to sketch a graph of the product of two trig identities?
Test the f(x) values incrementally, to get a rough idea on the structure of the graph.
If this is between [0, $2\pi$], test:
0, $\frac{\pi}{4}$, $\frac{\pi}{2}$, $\frac{3\pi}{4}$, $\pi$, $\frac{5\pi}{4}$, $\frac{3\pi}{2}$, $\frac{7\pi}{4}$, $2\pi$.
Given that the frequency of the cosine function is 12, you may want to do $\frac{\pi}{3}$, $\frac{\pi}{6}$, etc as well..
Essentially, do what you did in high school when you first learnt how to graph linear equations, have a row of values for x, and a row for f(x).
Of course, there is always Wolfram|Alpha; it can give you some guidance.
sin(x)cos(12x) - Wolfram|Alpha
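If you are allowed a quick software check of your hand sketch, a short R snippet (mine, just for checking) also shows the useful structure: sin(x) acts as an envelope for the fast cos(12x) oscillation, so you can draw ±sin(x) first and fit the wiggles inside.

curve(sin(x) * cos(12 * x), from = 0, to = 2 * pi, n = 1000, ylab = "f(x)")
curve(sin(x), add = TRUE, lty = 2)    # envelope guides
curve(-sin(x), add = TRUE, lty = 2)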
Re: how to sketch a graph of the product of two trig identities?
Additionally, you might make use of trig identities, when convenient.
Your second example, for example, can be written in the form
$5\left(\frac{4}{5}\cos x - \frac{3}{5}\sin x \right)=5\cos (x +\alpha), \text{ where } \alpha = \tan^{\, -1}(3/4).$
The first one can be written in the form
$\frac{1}{2}\left( \sin 13x - \sin 11x \right)$
which doesn't help quite so much.
|
{"url":"http://mathhelpforum.com/trigonometry/217477-how-sketch-graph-product-two-trig-identities.html","timestamp":"2014-04-19T00:38:09Z","content_type":null,"content_length":"37780","record_id":"<urn:uuid:82a4e98b-f981-4f0f-9082-40d4f7980d56>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00328-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Assignment 2,3 Solution
Below is a small sample set of documents:
University of Waterloo, Waterloo - ECON - 101
onopoly as producersurplus.Rent SeekingAny surplus—consumer surplus, producer surplus, or economic profit—is called economicrent.Rent seeking is the pursuit of wealth by capturing economic
rent.Rent seekers pursue their goals in
SUNY Plattsburgh - MGB - 201
Delaware Tech - ECON - 103
Ashlee GrangerFinal AssignmentMarch 19, 20131. Vacate means to do away with something . In court a judgment becomes final when thecase is over and a judge has signed a piece of paper. The judgment is
then effective untilit is satisfied (by payment, o
Sekolah Tinggi Akuntansi Negara - AKUNTANSI - 001
www.bpkp.go.idUNDANG-UNDANG REPUBLIK INDONESIANOMOR 37 TAHUN 2004TENTANGKEPAILITAN DAN PENUNDAAN KEWAJIBAN PEMBAYARAN UTANGDENGAN RAHMAT TUHAN YANG MAHA ESAPRESIDEN REPUBLIK INDONESIA,Menimbang:a.
bahwa pembangunan hukum nasional dalam rangka mewu
American InterContinental University - PRESS111 - 03
UNIT ONE DB- RHETORICAL TRIANGLEJanelle ClayAmerican InterContinentalThe four ways for developing a presentation and they are; A Speech or Lecture, Aworkshop, a discussion, and a group activity.
These groups have the three Rhetorical Triangle incommo
University of Economics and Technology - ECONOMICS - 400
REVIEW QUESTIONS FOR MID-SEMESTER TEST1. What is the main difference between a merger and an acquisition, specify 4typical ways in M&A deal structure that you have known? Give an exampleand show
clearly this/these M or A belong to what type of M &A?S
Virtual University of Pakistan - FINANCE - 701
MARKETJUNE 08-JUNE 10, 2004IntroductionThe bond ma
CUNY John Jay - ENGLISH - 102
|
{"url":"http://www.coursehero.com/file/7835828/Assignment-23-Solution/","timestamp":"2014-04-16T04:33:10Z","content_type":null,"content_length":"56613","record_id":"<urn:uuid:d275cb91-9161-44a7-92c6-f42b412f61f6>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00282-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Why Study Sequences and Series?
Date: 02/26/99 at 12:52:15
From: Tara Leiviska
Subject: Purpose of Arithmetic Sequences and Series
I am teaching arithmetic sequences and series to 10th graders. They
keep asking me why they need to know sequences and series. I have no
idea why. I have never been told myself. I cannot find a Web site or
a book that will tell me some applications of arithmetic sequences or
series. Please help!
Date: 03/02/99 at 18:52:42
From: Doctor Nick
Subject: Re: Purpose of Arithmetic Sequences and Series
Sequences and series are useful in the same way geometry is useful.
There are things in the world that can be represented by circles and
squares, and things that can be represented as sequences and series.
For the variety of things that sequences can represent, take a
look at Sloane's On-Line Encyclopedia of Integer Sequences:
This is a gigantic collection of sequences that come from all kinds of
applications. Take a look at the Fibonacci sequence (a classic) at Ron
Knott's site:
Series involve one of the most powerful ideas in mathematics. In
particular, power series are amazingly useful for doing all kinds of
things. One thing they are very useful for is approximations of
functions, which is necessary in virtually all computer applications
involving evaluations of functions, an interesting computer
application. If you are not very familiar with power series, I suggest
reading up on them.
Another kind of series that is amazingly useful is Fourier series.
This is a somewhat advanced topic, but certainly could be introduced to
willing high school students, at least as an example of the cool things
that you can do with series. Fourier series, and ideas connected with
them, are used in many many signal processing applications (sound,
video, etc.). Ask your students to explain how the bass and treble
knobs on their stereos work. Those who do a good job answering the
question will run into Fourier series.
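In case it helps to see what a Fourier series actually looks like (this
formula is not part of the original reply; it is just the standard
textbook form), a periodic signal f with period T can be written as

   f(t) = a0/2 + sum for n >= 1 of [ an*cos(2*pi*n*t/T) + bn*sin(2*pi*n*t/T) ]

and, roughly speaking, turning the bass or treble knob amounts to
scaling the small-n or large-n terms of that sum.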
Another thing to consider is musical synthesis: how does a synthesizer
work? The answer involves Fourier series (along with other interesting
materials, of course). For some neat stuff related to Fourier series
(Fourier transforms, guitars, and drums), take a look around Dan
Russell's page on Vibration and Waves Animations:
Series are useful as a way of creating functions. Once you go through
rational, exponential, and trigonometric functions, what is next?
Functions created from series, with power series and Fourier series
being two of the more popular kinds. One place such functions arise is
as the solutions to differential equations.
In probability, one runs into series quite a bit. Consider the
following problem. You throw a 6-sided die until either a six comes
up, and you win, or a one comes up, and you lose. By symmetry, it is
clear that you have a 50% chance of winning. Another way to look
at it is the following. The probability that you will win in exactly
n throws of the die is
((4/6)^(n-1)) * (1/6)
So, the probability that you will win is the infinite series, where we
sum the above expression as n runs from 1 to infinity. This is a nice
geometric series, and its sum is (as it has to be) 1/2.
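To spell that sum out (this little bit of arithmetic is an added
illustration, not part of the original reply):

   sum for n >= 1 of ((4/6)^(n-1)) * (1/6)
      = (1/6) * 1/(1 - 4/6)
      = (1/6) / (2/6)
      = 1/2

using the usual geometric series formula: the sum of r^(n-1) for
n >= 1 is 1/(1-r) whenever |r| < 1.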
Now, in a probability problem, where we do not have this nice kind of
symmetry, the series may be the only way to go. (Even in these simpler
cases, it is always nice to have more than one way to solve a problem).
I could go on and on. Series are used in many, many applications.
Have fun,
- Doctor Nick, The Math Forum
|
{"url":"http://mathforum.org/library/drmath/view/56618.html","timestamp":"2014-04-17T10:06:05Z","content_type":null,"content_length":"8850","record_id":"<urn:uuid:205a9d32-2e93-4c94-ad57-e0b05a23f19c>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00195-ip-10-147-4-33.ec2.internal.warc.gz"}
|
percent problems formulas
XeNo® Posted: Saturday 26th of Aug 09:59
Hi folks, I would really cherish some support with percent problems formulas, on which I'm really stuck. I have this math assignment and don't know how to solve multiplying fractions and side-angle-side similarity. I would sure appreciate your suggestion rather than hiring a math tutor, which is very costly.
Vofj Timidrov Posted: Monday 28th of Aug 07:00
What in particular is your difficulty with percent problems formulas? Can you give some more details? A good way of surmounting your difficulty with unearthing a tutor at an affordable cost is for you to go in for a suitable program. There are a variety of programs in algebra that are to be had. Of all those that I have tried out, the top one is Algebrator. Not only does it work out the math problems, the good thing about it is that it explains every step in an easy-to-follow manner. This makes certain that not only do you get the right answer but also that you get to be trained in how to get to the answer.
Hiinidam Posted: Wednesday 30th of Aug 08:43
Algebrator truly is a must-have for us math students. As my dear friend said in the preceding post, it solves questions and it also explains all the intermediary steps involved in
reaching that final solution. That way you don’t just get to know the final answer but also learn how to go about solving questions right from the first step till the last, and it helps a
lot in working on assignments.
Ashe Posted: Wednesday 30th of Aug 13:16
I remember having problems with factoring polynomials, rational inequalities and adding numerators. Algebrator is a really great piece of math software. I have used it through several
algebra classes - Pre Algebra, Remedial Algebra and Algebra 2. I would simply type in the problem from a workbook and by clicking on Solve, a step-by-step solution would appear. The program
is highly recommended.
Tlochi Posted: Friday 01st of Sep 10:51
Of course. My aim is to learn Math. I would use it only as a tool to clear my concepts. Can I get the link to the software?
Admilal`Leker Posted: Saturday 02nd of Sep 09:14
Here: http://www.softmath.com/faqs-regarding-algebra.html. Please let us know if this has been of any benefit to you.
|
{"url":"http://www.softmath.com/algebra-software-7/percent-problems-formulas.html","timestamp":"2014-04-19T11:56:56Z","content_type":null,"content_length":"38030","record_id":"<urn:uuid:70cbbc40-2e79-4ec9-bc4f-2597526ff211>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00210-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Credit Card minimum payment
June 12th 2009, 06:35 PM #1
Your Visa card says its annual interest rate is 18%. Of course, you are asked to make monthly payments. If you have a balance of $1,000, how long would it take you to pay off the balance assuming
you make the minimum payment of 3.0% a month?
I understand it's the reverse of a bank account where you make constant deposits and the interest accumulates, but I am having a tough time with this. Thanks.
At some point you'll need to make an absolute minimum payment of an unchanging amount.
Otherwise the postage will cost more than the minimum 3% payment to be made, for a long time.
Without a set-or-fixed minimum $ amount at some point the exercise is not useful.
It would be more meaningful to say that the minimum payment is 3% or $5 (or $10), whichever is greater. Then you have a believable predicament.
Ok. I got confirmation standard rules apply.
IE. Minimum payment is $10 or 3%, whichever is greater.
18% annual interest is 18/12 = 3/2 = 1.5% monthly interest. To start with, ignore that $10 minimum. Suppose at some month the amount left to pay off is A. Then you would make the minimum payment of 0.03A, and the interest for that month, 0.015A, would be subtracted, leaving an actual payment on the principal of (0.03 - 0.015)A = 0.015A. Subtracting that payment from the principal leaves (1 - 0.015)A = 0.985A still to be paid off.
That is, each month the principal amount will be multiplied by 0.985. After n months, the initial amount will have been multiplied by $(0.985)^n$ and the account will have been paid off when $(0.985)^nA = 0$. Of course, that's impossible! That would be the same as saying $(0.985)^n = 0$, and an exponential is never 0. That was aidan's point and why that $10 minimum is required.
The 3% payment will drop below $10 when 0.03A = 10, or A = 10/0.03 = $333.33. So use the above calculation until $(0.985)^nA \le 333.33$, then see how many $10 payments are required to finish paying off the balance.
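To put some numbers on this (the following worked example is an added illustration, not part of the original thread; it follows the simplified model above, in which the 3% minimum payment already contains the 1.5% monthly interest):
Phase 1 (3% payments): the balance after n months is $1000(0.985)^n$, and this drops below $333.33 when $(0.985)^n \le 1/3$, i.e. when $n \ge \ln(3)/(-\ln(0.985)) \approx 72.7$, so roughly 73 months of percentage-based payments.
Phase 2 ($10 payments): with monthly rate i = 0.015, a fixed payment P = 10 and a starting balance B = 333.33, the balance reaches zero after $n = \ln(P/(P - iB))/\ln(1+i) = \ln(10/5)/\ln(1.015) \approx 46.6$, so roughly 47 more months.
Altogether that is about 120 months, i.e. roughly 10 years, to clear the $1,000 balance under these assumptions.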
I understand your line of thinking, but upon doing the question, I discovered something different. Even though the card says 18% a year, you are really paying 19.56% a year. Because it is a
Visa credit card bill and you are to pay monthly, you are correct in that you pay 1.5% a month in interest. Consider this: you have a balance of $1 on your bill. Assuming no payments are
made for an entire year, you get:
$1 x 1.015^12 = $1.195618
So, in reality, we are paying an effective interest rate of 19.56% a year. It makes it confusing which one to go with, I guess. But I'd use the same formula given, right?
In the US, you may have an additional floor:
1% + Actual Charges in the Billing Period
|
{"url":"http://mathhelpforum.com/business-math/92679-credit-card-minimum-payment.html","timestamp":"2014-04-20T22:02:59Z","content_type":null,"content_length":"46338","record_id":"<urn:uuid:7a045415-807a-48b8-8092-77753d93f54d>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00464-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Dungeon Crawler
I am trying to program a Dungeon Crawler in C++ but I am having trouble coding a Random Dungeon Generator.
To create a dungeon, I figured that I would randomly place Rooms, then connect the rooms together with passages.
However, I am having trouble figuring out how to connect the rooms together.
I posted some outputs of the generator I got up to this point... I told it to create a 64x64 dungeon and mark the traversable paths with 'O'.
The dungeon information is stored in a 2D array of 0s and 1s, where 1 marks the traversable paths and 0 marks a dead end.
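For what it's worth, here is a minimal sketch (an added illustration, not from the original thread) of one common way to connect two rooms: carve an L-shaped corridor between their centre cells by walking horizontally first, then vertically, marking every visited cell as traversable. The Grid alias, the room-centre coordinates and the function name are illustrative assumptions rather than the poster's actual code, and the coordinates are assumed to lie inside the 64x64 grid.

#include <vector>

// dungeon[y][x] == 1 means traversable, 0 means not traversable (as described above)
using Grid = std::vector<std::vector<int>>;

// Carve an L-shaped corridor from (x1, y1) to (x2, y2), inclusive.
void connectRooms(Grid& dungeon, int x1, int y1, int x2, int y2)
{
    // Walk horizontally along row y1 until we reach column x2...
    int step = (x2 >= x1) ? 1 : -1;
    for (int x = x1; x != x2; x += step)
        dungeon[y1][x] = 1;

    // ...then walk vertically along column x2 until we reach row y2.
    step = (y2 >= y1) ? 1 : -1;
    for (int y = y1; y != y2; y += step)
        dungeon[y][x2] = 1;

    dungeon[y2][x2] = 1;   // make sure the end cell itself is carved
}

Calling this once for each pair of consecutive rooms (or for each room and its nearest already-connected neighbour) guarantees every room ends up reachable; randomly choosing horizontal-first or vertical-first makes the corridors look less uniform.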
|
{"url":"http://www.dreamincode.net/forums/topic/304949-dungeon-crawler/page__pid__1773642__st__0","timestamp":"2014-04-18T19:24:27Z","content_type":null,"content_length":"154092","record_id":"<urn:uuid:e8bf8578-5e1f-4be5-a288-df2fbc62c28d>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00399-ip-10-147-4-33.ec2.internal.warc.gz"}
|
As your English teacher would say, good writers vary their sentence structure. The same is true of conditional statements: after a while, the If-Then formula becomes a real snoozefest. Some ways to
mix it up are: "All things satisfying hypothesis are conclusion" and "Conclusion whenever hypothesis."
However, mathematicians can be drier than the Sahara desert: they tend to write conditional statements as a formula p → q, where p is the hypothesis and q the conclusion. In fact, the old saying,
"Mind your p's and q's," has its origins in this sort of mathematical logic.
Sample Problem
Identify p and q in the following statements, translating them into p → q form.
(A) If it rains outside, then flowers will grow tomorrow.
(B) I cut off a finger whenever I peel rutabagas.
(C) All dogs go to heaven.
For (A), p = "it rains outside" and q = "flowers will grow tomorrow."
In (B), we may rewrite the statement as "If I peel rutabagas, then I cut off a finger," telling us that p = "I peel rutabagas" and q = "I cut off a finger."
Finally, we may rewrite (C) as "If it is a dog, then it will go to heaven," yielding p = "it is a dog" and q = "it will go to heaven."
The hypothesis and conclusion play very different roles in conditional statements. Duh. In other words, p → q and q → p mean very different things. It's kind of like subtraction: 5 – 3 gives a
different answer than 3 – 5. To highlight this distinction, mathematicians have given a special name to the statement q → p: it is called the converse of p → q.
No, not those Converse.
Sample Problem
Write the converse of the statement, "If something is a watermelon, then it has seeds."
We want to switch the hypothesis and the conclusion, which will give us: "If something has seeds, then it is a watermelon." Of course, this converse is obviously false, since apples, cucumbers, and
sunflowers all have seeds and are not watermelons. At least not during their day jobs.
There are some other special ways of modifying implications. For example, if you negate (that means stick a "not" in front of) both the hypothesis and conclusion, you get the inverse: in symbols, not
p → not q is the inverse of p → q. Finally, if you negate everything and flip p and q (taking the inverse of the converse, if you're fond of wordplay) then you get the contrapositive. Again in
symbols, the contrapositive of p → q is the statement not q → not p. Fancy.
Sample Problem
What is the inverse of the statement "All mirrors are shiny?" What is its contrapositive?
If we abbreviate the first statement as mirror → shiny, then the inverse would be not mirror → not shiny and the contrapositive would be not shiny → not mirror. Written in English, the inverse is,
"If it is not a mirror, then it is not shiny," while the contrapositive is, "If it is not shiny, then it is not a mirror."
While we've seen that it's possible for a statement to be true while its converse is false, it turns out that the contrapositive is better behaved. Whenever a conditional statement is true, its
contrapositive is also true and vice versa. Similarly, a statement's converse and its inverse are always either both true or both false. (Note that the inverse is the contrapositive of the converse.
Can you show that?)
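One quick way to check that claim (this little table is an added illustration, not part of the original text): p → q is false only in the single row where p is true and q is false, and not q → not p is false in exactly the same row.

p     q     p → q     not q → not p
T     T       T             T
T     F       F             F
F     T       T             T
F     F       T             T

Since the last two columns match row for row, a conditional and its contrapositive are always both true or both false; swapping the roles of the two statements gives the matching result for the converse and the inverse.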
|
{"url":"http://www.shmoop.com/logic-proof/converse-inverse-contrapositive-help.html","timestamp":"2014-04-20T10:50:12Z","content_type":null,"content_length":"33345","record_id":"<urn:uuid:4eb6e0c7-523f-4697-878c-b3f4343cc209>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00646-ip-10-147-4-33.ec2.internal.warc.gz"}
|
可计算性逻辑课程 (Lecture Course on Computability Logic)
Computability Logic
Graduate course taught at the Institute of Artificial Intelligence of Xiamen University in June-July 2007 by
Giorgi Japaridze
Giorgi Japaridze was born in Georgia, former Soviet Union. He received two PhDs, one from Moscow State University (in logic) and the other from the University of Pennsylvania (in computer science).
Currently he works in the Computer Science Department of Villanova University, USA. The primary area of his research interests is logic and its applications in computer science. Computability logic -
the subject of the present course - is an approach recently introduced by him.
Giorgi Japaridze 出生于前苏联的格鲁吉亚。他先后获得两个博士学位：一个来自莫斯科国立大学（逻辑学），另一个来自宾夕法尼亚大学（计算机科学）。他目前在美国维拉诺瓦大学（Villanova University）计算机科学系工作，主要研究方向是逻辑及其在计算机科学中的应用。可计算性逻辑（这次课程的题目）是他近年提出来的。这是他第一次对中国进行学术访问。
Briefly about computability logic
Computability is certainly one of the most interesting and fundamental concepts in mathematics, philosophy and computer science, and it would be more than natural to ask what logic it induces. Let us
face it: this question has not only never been answered, but never even been asked within a reasonably coherent and comprehensive formal framework. This is where Computability Logic (CL) comes in. It
is a formal theory of computability in the same sense as classical logic is a formal theory of truth. In a broader and more proper sense, computability logic is not just a particular theory but an
ambitious and challenging program for redeveloping logic following the scheme "from truth to computability". It was introduced by Japaridze in 2003 and, at present, still remains in its infancy
stage, with open problems prevailing over answered questions. It is largely a virgin soil offering plenty of research opportunities, with good chances of interesting findings, for those with
interests in logic and its applications in computer science, including theory of computation and artificial intelligence.
Computation and computational problems in CL are understood in their most general, interactive sense, and are precisely seen as games played by a machine (computer, agent, robot) against its
environment (user, nature, or the devil himself). Computability of such problems means existence of a machine that always wins the game. Logical operators stand for operations on computational
problems, and validity of a logical formula means being a scheme of "always computable" problems.
Remarkably, classical, intuitionistic and linear (in a broad sense) logics turn out to be three natural fragments of CL. This is no accident. The classical concept of truth is nothing but a special
case of computability -- computability restricted to problems of zero interactivity degree. Correspondingly, classical logic is nothing but a special fragment of CL. One of the main -- so far rather
abstract -- intuitions associated with intuitionistic logic is that it must be a logic of problems (Kolmogorov 1932); this is exactly what CL is, only in a much more expressive language than
intuitionistic logic. And one of the main -- again, so far rather abstract -- claims of linear logic is that it is a logic of resources. Reversing of the roles of the machine and its environment
turns computational problems into computational resources, which makes CL a logic of resources, only, again, in a more expressive language than that of linear logic, and based on an intuitively
convincing and mathematically strict resource semantics rather than some naive philosophy and unreliable syntactic arguments. For more about CL vs. linear logic, visit Game Semantics or Linear Logic?
The CL paradigm is not only about "what can be computed", but equally about "how can be computed". This opens potential application areas far beyond pure computation theory, such as:
• Knowledgebase systems. CL is an appealing alternative to the traditional knowledge base, knowledge representation and query logics. Its advantages include:
□ CL is a logic of interaction, and this is what a good knowledgebase logic needs to be. After all, most of the real knowledgebase and information systems are interactive.
□ CL is resource-conscious: the potential knowledge provided by a disposable pregnancy test device is knowledge of only one (even though an arbitrary one) woman's pregnancy status, but not two.
CL naturally captures this sort of extremely relevant differences while the traditional systems fail to account for them.
□ CL naturally differentiates between truth and an agent's actual ability to find/compute/tell what is true. To achieve similar effects, traditional knowledgebase logics have to appeal to the
controversial, messy and troublemaking epistemic constructs.
• Systems for planning and action. CL might be a reasonable alternative to the traditional AI planning logics. Its advantages include:
□ CL is a logic of interaction, and planning systems should be well able to account for an agent's interaction with the environment.
□ A good planning logic should be able to naturally account for the fact that with one ballistic missile one can destroy only one (even though an arbitrary one) target but not two. The
resource-conscious CL fits the bill.
□ CL is a logic of knowledge and informational resources, which automatically takes care of the knowledge preconditions problem.
□ CL-based planning systems appear to be immune to the notorious frame problem.
• Constructive applied theories. CL is a conservative extension of classical logic, which makes it a reasonable alternative to the latter in its most traditional and unchallenged application areas.
In particular, it makes perfect sense to base applied theories (such as, say, Peano arithmetic) on CL instead of classical logic. The advantages of doing so include that CL-based applied
theories, compared with their classical-logic-based counterparts, are:
□ Expressive: the language of classical logic is only a modest fraction of the language of CL.
□ Computationally meaningful: every theorem of a CL-based theory represents a computable problem rather than just a true statement.
□ Constructive: any proof of a formula F in a CL-based theory effectively encodes a solution to the computational problem represented by F.
The primary online source on the subject is maintained at the Computability Logic Homepage.
• 知识库系统。可计算性逻辑对于传统的知识库、知识表示和查询逻辑是个有吸引力的选择，它的优势包括：
□ 可计算性逻辑是交互的，这是好的知识库逻辑需要具备的，毕竟现实中多数的知识库和信息系统是交互的。
□ 可计算性逻辑是有资源意识的：一次性的验孕设备提供的潜在信息只是一个（即使任意一个）妇女的怀孕情况，而不是两个。可计算性逻辑自然地捕捉到了这种极微小的区别，但传统的系统无法说明这些。
□ 可计算性逻辑很容易区分真理和一个主体发现/计算/认知什么是正确的实际能力。为达到相似的效果，传统的知识库逻辑必须借助有争议的、混乱的和惹麻烦的认知构造。
• 规划和行为系统。可计算性逻辑可能是传统人工智能规划逻辑的合理替代，它的优点如下：
□ 可计算性逻辑是交互的，规划系统应该能很好地解决一个主体和环境的交互。
□ 一个良好的规划逻辑应该能够容易地说明这样的情况：同一个弹道导弹只能摧毁一个（即使任意一个）目标，而不是两个；有资源意识的可计算性逻辑正好符合要求。
□ 可计算性逻辑是知识和信息资源的逻辑，它能自动地处理知识前条件（knowledge preconditions）问题。
□ 基于可计算性逻辑的规划系统似乎能避免臭名昭著的框架（frame）问题。
• 构造的应用理论。可计算性逻辑是经典逻辑的保守扩展，这使得可计算性逻辑在多数传统和没有挑战性的应用领域里可以取代经典逻辑，尤其是很值得让可计算性逻辑代替经典逻辑作为应用理论（比如皮埃诺算术）的基础。这么做的好处包括：
□ 富有表达力：经典逻辑的语言只是可计算性逻辑语言的一小部分。
□ 可计算的意义：基于可计算性逻辑的理论的定理代表的是可计算的问题，而不仅是真语句。
□ 建设性的：基于可计算性逻辑的理论中公式F的任何证明都可以有效地编码成由F所代表的可计算问题的解答。
Selected papers on computability logic
The official journal versions of the following papers may be slightly different from the corresponding online preprints. It is recommended that you try to use the journal version first. Access to
some journals may be restricted though, in which case you may download the online preprint.
1. G.Japaridze, Introduction to computability logic. Annals of Pure and Applied Logic 123 (2003), pp.1-99. [SCI,AMS]
Official journal version Online preprint Erratum
2. G.Japaridze, Propositional computability logic I. ACM Transactions on Computational Logic 7 (2006), No. 2, pp. 302-330. [SCI,AMS]
Official journal version Online preprint
3. G.Japaridze, Propositional computability logic II. ACM Transactions on Computational Logic 7 (2006), No. 2, pp. 331-362. [SCI,AMS]
Official journal version Online preprint
4. G.Japaridze, Introduction to cirquent calculus and abstract resource semantics. Journal of Logic and Computation 16 (2006), No.4, pp. 489-532. [SCI,AMS]
Official journal version Online preprint
5. G.Japaridze, Computability logic: a formal theory of interaction. In: Interactive Computation: The New Paradigm. D.Goldin, S.Smolka and P.Wegner, eds. Springer Verlag, Berlin 2006, pp. 183-223.
Official book version Online preprint
6. G.Japaridze, From truth to computability I. Theoretical Computer Science 357 (2006), pp. 100-135. [SCI,AMS]
Official journal version Online preprint
7. G.Japaridze, From truth to computability II. Theoretical Computer Science 379 (2007), pp. 20-52. [SCI,AMS]
Official journal version Online preprint
8. G.Japaridze, Intuitionistic computability logic. Acta Cybernetica 18 (2007), No.1, pp. 77-113. [AMS]
Official journal version Online preprint
9. G.Japaridze, The logic of interactive Turing reduction. Journal of Symbolic Logic 72 (2007), No.1, pp. 243-276. [SCI,AMS]
Official journal version Online preprint
10. G.Japaridze, The intuitionistic fragment of computability logic at the propositional level. Annals of Pure and Applied Logic 147 (2007), No.3, pp. 187-227. [SCI,AMS]
Official journal version Online preprint
11. G.Japaridze, In the beginning was game semantics. In: Games: Unifying Logic, Language and Philosophy. O. Majer, A.-V. Pietarinen and T. Tulenheimo, eds. Springer 2009, pp. 249-350. [AMS]
Official book version Online preprint
12. G.Japaridze, Many concepts and two logics of algorithmic reduction. Studia Logica 91 (2009), No.1, pp. 1-24. [AMS]
Official journal version Online preprint
Precursors of computability logic (most relevant pre-2003 papers)
可计算性逻辑的“先驱” (2003年以前最有关系的论文)
1. A series of papers on game semantics by Lorenzen and his student Lorenz, written at the end of the 1950s and during the 1960s.
2. A series of papers on game semantics by Hintikka and his school, beginning from the 1960s.
3. A.Blass, Degrees of indeterminacy of games. Fundamenta Mathematicae 77 (1972), pp. 151-166.
4. J.Y.Girard, Linear logic. Theoretical Computer Science 50 (1987), pp. 1-102.
5. A.Blass, A game semantics for linear logic. Annals of Pure and Applied Logic 56 (1992), pp. 183-220.
6. G.Japaridze, A constructive game semantics for the language of linear logic. Annals of Pure and Applied Logic 85 (1997), No.2, pp.87-156.
7. G.Japaridze, The logic of tasks. Annals of Pure and Applied Logic 117 (2002), pp.263-295.
Some related papers by Chinese authors
The following papers are primarily or partially devoted to the logic of tasks (任务逻辑), introduced in item [7] (2002) of the previous list. The logic of tasks is essentially nothing but a special,
relatively simple fragment of computability logic in a somewhat naive form. The author treats it as an "experimental stage" on the way leading to developing computability logic. The latter is a
generalization and substantial refinement of the former, but otherwise the two are fully consistent, and results concerning the logic of tasks would typically automatically extend to computability
logic as well.
Meeting times: Tuesdays and Thursdays 4:30-6:10.
授课时间:星期二和星期四 4:30-6:10.
课室 (Classroom): 教学楼 202
There is no textbook for this course. All written materials that you may need (including lecture notes) will be made available through this web site.
Below is a tentative list of topics. Each topic is expected to take one or two lectures.
• Overview of the theory of computation:
Turing machines; The Church-Turing thesis; The traditional notions of computability, decidability, recursive enumerability; The fundamental limitations of algorithmic methods; Mapping
reducibility; Turing reducibility; Kolmogorov complexity.
• Overview of classical logic:
Propositional logic; Predicate (first-order) logic; The classical concepts of truth and validity; Hilbert- and Gentzen-style axiomatizations; The deduction theorem; Gödel's completeness theorem;
Peano arithmetic; Gödel's incompleteness theorems.
• Games:
Games as universal models of an agent's interaction with the environment; Static vs. dynamic games; Interactive Turing machines; Interactive computability; Interactive version of the
Church-Turing thesis.
• Game operations:
Negation; Choice operations; Blind operations; Parallel operations; Recurrence operations; Sequential operations.
• Computability and validity:
The computational meanings of logical formulas; "Truth" as computability; Validity versus uniform validity.
• Deductive systems for computability logic:
The propositional logics CL1 and CL2; The first-order logics CL3 and CL4; Affine logic; Intuitionistic logic.
• Applied theories based on computability logic:
The advantages of such theories; The philosophy of constructivism; Computability-logic-based arithmetic.
• Knowledge base systems based on computability logic:
Computability logic as a logic of knowledge; Computability logic as a declarative programming language; Resource-consciousness and constructiveness; Computability logic vs. epistemic logics.
• Systems for planning and action based on computability logic:
The logic of tasks; Reflections on the frame and knowledge preconditions problems.
• Cirquent calculus:
Linear logic; Shallow cirquent calculus; Abstract resource semantics; Deep cirquent calculus.
• Open problems and future directions:
The need for developing better and richer syntax; Thoughts about a to-be-developed theory of interactive complexity; Potential applications in artificial intelligence - from theory to practice; A
hot list of specific open problems.
Grading: A term paper should be submitted at the end of the semester, whose topic can be chosen by a student and approved by the instructor. Alternatively, a student may opt for taking a final
examination instead of writing a term paper. The term paper or examination will contribute 50% toward your final grade. The remaining 50% will come from quizzes that will be given 5 to 10 times
without warning, during the first 10-15 minutes of the class. Homework will be assigned after every meeting. It will not be collected or graded. However, the questions asked on a quiz will be
typically based on (or be identical with) some questions from the latest 2 homework assignments. Students who listen actively, participate in discussions and volunteer answering questions may receive
some extra credit at the time of deriving final grades.
NOTE: The following lecture notes are NOT copyrighted. Anyone in the world is permitted to copy and use them with or without modifications, and with or without acknowledging the source.
1. Introduction [6月19日]: What is computability logic. Computability logic versus classical logic. The current state of developement.
2. Mathematical preliminaries [6月19日]: Sets. Sequences. Functions. Relations. Strings.
3. Overview of the theory of computation [6月19日 - 6月21日]: Turing machines. The traditional concepts of computability, decidability and recursive enumerability. The limitations of the power of
Turing machines. The Church-Turing thesis. Mapping reducibility. Turing reducibility. Kolmogorov complexity.
4. Classical propositional logic (quick review) [6月21日 - 6月26日]: What logic is or should be. Propositions. Boolean operations. The language of classical propositional logic. Interpretation and
truth. Validity (tautologicity). Truth tables. The NP-completeness of the nontautologicity problem. Gentzen-style axiomatizations (sequent calculus systems).
5. Classical first-order logic (quick review) [6月26日 - 6月28日]: Propositional logic versus first-order (predicate) logic. The universe of discourse. Constants, variables, terms and valuations.
Predicates as generalized propositions. Boolean operations as operations on predicates. Substitution of variables. Quantifiers. The language of first-order logic. Interpretation, truth and
validity. The undecidability of the validity problem. A Gentzen-style deductive system. Soundness and Gödel's completeness.
6. Games [6月28日 - 7月3日]: Games as models of interactive computational tasks. Constant games. Prefixation. Traditional computational problems as games. Departing from functionality. Departing
from the input-output scheme. Departing from the depth-2 restriction. Propositions as games. Not-necessarily-constant games. Games as generalized predicates. Substitution of variables in games.
7. Negation and choice operations [7月3日]: About the operations studied in computability logic. Negation. The double negation principle. Choice conjunction and disjunction. Choice quantifiers.
DeMorgan's laws for choice operations. The constructive character of choice operations. Failure of the principle of the excluded middle for choice disjunction.
8. Parallel operations [7月3日 - 7月5日]: Parallel conjunction and disjunction. Free versus strict games. The law of the excluded middle for parallel disjunction. Resource-consciousness. Differences
with linear logic. Parallel quantifiers. DeMorgan's laws for parallel operations. Evolution trees and evolution sequences.
9. Reduction [7月5日 - 7月10日]: The operation of reduction and the relation of reducibility. Examples of reductions. The variety of reduction concepts and their systematization in computability
logic. Mapping reducibility is properly stronger than (simply) reducibility.
10. Blind quantifiers [7月10日]: Unistructurality. The blind universal and existential quantifiers. DeMorgan's laws for blind quantifiers. The hierarchy of quantifiers. How blind quantification
affects game trees.
11. Recurrence operations [7月12日 - 7月17日]: What the recurrences are all about. An informal look at the main types of recurrences. The parallel versus branching recurrences. Formal definition of
the parallel recurrence and corecurrence. Evolution sequences for parallel recurrences. Weak reductions. Weakly reducing multiplication to addition. Weakly reducing the RELATIVES problem to the
PARENTS problem. Weakly reducing the Kolmogorov complexity problem to the halting problem. The ultimate concept of algorithmic reduction. Definition of the branching recurrence and corecurrence.
Evolution sequences for branching recurrences. A look at some valid and invalid principles with recurrences.
12. Static games [7月17日]: Some intuitive characterizations of static games. Dynamic games. Pure computational problems = static games. Delays. Definition of static games.
13. Interactive computability [7月17日]: Hard-play machines. Easy-play machines. Definition of interactive computability. The interactive version of the Church-Turing thesis.
14. The language and formal semantics of computability logic [7月19日]: The formal language. Interpretations. Definitions of validity and uniform validity. Validity or uniform validity? The
extensional equivalence between validity and uniform validity. Computability versus "knowability". Closure under Modus Ponens. Other closure theorems. Uniform-constructive closure.
15. Cirquent calculus [7月19日]: About cirquent calculus in general. The language of CL5. Cirquents. Cirquents as circuits. Formulas as cirquents. Operations on cirquents. The rules of inference of
CL5. The soundness and completeness of CL5. A cirquent calculus system for classical logic. CL5 versus affine logic.
16. Logic CL4 [7月24日]: The language of CL4. The rules of CL4. CL4 as a conservative extension of classical logic. The soundness and completeness of CL4. The decidability of the
blind-quantifier-free fragment of CL4. Other axiomatizations (affine and intuitionistic logics).
17. Applied systems based on computability logic [7月26日]: Computability logic as a problem-solving tool. Knowledgebase systems based on computability logic. Constructiveness, interactivity and
resource-consciousness. Systems for planning and action based on computability logic. Applied theories based on computability logic. Computability-logic-based arithmetic.
Homework assignments
│星期二│6月 19日│6月 26日│7月 3日│7月 10日│7月 17日│7月 24日│
│星期四│6月 21日│6月 28日│7月 5日│7月 12日│7月 19日│7月 26日│
|
{"url":"http://www.csc.villanova.edu/~japaridz/CL/clx.html","timestamp":"2014-04-19T04:36:46Z","content_type":null,"content_length":"54494","record_id":"<urn:uuid:a3ade7d3-0b93-4854-bb36-0d1f1159d5bc>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00588-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Adding and Subtracting Decimals
More Free Printable Decimal Worksheets:
Equivalent Decimals and Fractions
Decimal Place Value
Compare Decimals and Fractions
More 5th Grade Math Worksheets:
Coordinate Plane
Order of Operations - 4 Basic Operations
Adding and Subtracting Decimals
Free printable math worksheets for adding and subtracting decimals to hundredths.
Answer keys included. Recommended level: 4th or 5th grade math
More FREE K - 8 Worksheets at www.worksheetsPlus.com
Free Online Decimal Math Learning Games:
Decimal Solitaire
Compare Decimals to Tenths and Hundredths
Decimal Solitaire - Thousandths
Decimal Sum One
A Fun Interactive Math Game to Practice Adding Decimals
Blackjack 2.2
Add Tenths and Hundredths as Fractions and Decimals
|
{"url":"http://www.worksheetsplus.com/AddAndSubtractDecimalsWorksheets.html","timestamp":"2014-04-19T01:51:04Z","content_type":null,"content_length":"7420","record_id":"<urn:uuid:dd0ca9a5-75d8-47ee-a77a-ebac4863b2bb>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00099-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Summary: Problem Set 12
For this entire problem set, R = k[x1, x2, . . . , xn] with k a field.
Recall that R = k[x1, x2, . . . , xn] is a graded ring by the decomposition R = ⊕_{d≥0} R_d, where R_d is the space spanned by the monomials of degree d. Given a graded R-module M, we can define the Hilbert function φ_M of M by φ_M(t) = dim_k(M_t) for each t ∈ Z. We have the following theorem:
Theorem 1. (Hilbert-Serre) Let M be a finitely generated graded R = k[x0, x1, . . . , xn]-module. There is a unique polynomial P_M(t) ∈ Q[t] such that φ_M(t) = P_M(t) for t ≫ 0.
The polynomial that appears in the theorem above is called the Hilbert Polynomial of M. In the near future, we will have tools that will enable us to compute the Hilbert Polynomial. For now, we must be satisfied to compute the Hilbert function.
For now I will let dimension be defined intuitively: the empty set will have dimension -1, points will have dimension 0, curves will have dimension 1, etc. Later we will give a more precise definition of dimension.
Recall the following lemma from Problem Set 9:
Lemma 2. Let R = k[x1, x2, . . . , xn]. Let I ⊆ R be an ideal and let F ∈ R. There is a short exact sequence of the form
0 → R/(I : F) → R/I → R/(I, F) → 0.
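As a small illustrative example (added here; it is not part of the original problem set): take M = R itself with the standard grading. The monomials of degree t in n variables form a basis of R_t, so
φ_R(t) = dim_k(R_t) = the binomial coefficient C(t + n - 1, n - 1) for t ≥ 0,
which is a polynomial in t of degree n - 1. For R = k[x, y] this gives φ_R(t) = t + 1, and here the Hilbert polynomial P_R(t) = t + 1 agrees with φ_R(t) for every t ≥ 0.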
|
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/471/2275937.html","timestamp":"2014-04-20T19:28:18Z","content_type":null,"content_length":"8262","record_id":"<urn:uuid:f29670c2-ea54-4fc2-aa0a-e73e240d3ccf>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00658-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Epact: Scientific Instruments of Medieval and Renaissance Europe
pin gnomon: the part of a sundial which casts the shadow, where this part is in the form of a pin.
plane table: type of surveying instrument, see article on the plane table.
plane table alidade: an alidade specially adapted for use with a plane table.
planetary hours: system of hour reckoning, see article on time and date.
planetary temperaments: astrological character of the fixed stars; as assigned in classical astrology, each star had a nature and effect similar to one or more of the planets.
planisphere: a representation of a spherical body on a flat surface, commonly a map of the earth or of the heavens.
planispheric astrolabe: astronomical instrument based on a planispheric projection of the heavens, see article on the astrolabe.
plate: part of an astrolabe with a projection of altitude and azimuth lines on to the equatorial plane, see article on the astrolabe.
plumb level: device for determining a horizontal level or an angle of elevation by a plumb line or plummet.
plumb line: a suspended thread with a weight at its end, indicating the vertical.
plummet: a form of plumb line in which the 'line' and weight are a single rigid piece.
polar dial: type of sundial, see article on the sundial.
polyhedral dial: sundial with hour lines on various faces of a solid figure, see article on the sundial.
prime vertical: celestial great circle passing through the east and west points and the zenith.
primum mobile: instrument for finding the sines and versed signs of angles, see article on the primum mobile.
projection: translation of a figure on to a plane or curved surface using straight lines in a systematic way. For example, a spherical surface can be projected on to a plane (the plane of projection)
by means of straight lines drawn from all points on the surface to a certain defined point (the point of projection) and marking where they intersect the plane. In a stereographic projection, such as
is used for the ordinary astrolabe, points on a containing circle are projected on to an equatorial plane from one pole; in an orthographic projection, such as is used in a Rojas design of universal
astrolabe, the point of projection is at infinity and the projection lines are parallel.
proportional compasses: drawing instrument consisting of two legs each with points at either end; used for transferring dimensions in enlarged or reduced ratio.
proportional dividers: drawing instrument consisting of two legs each with points at either end; used for transferring dimensions in enlarged or reduced ratio.
proportional instrument: unusual instrument used to mechanically perform functions in trigonometry.
protractor: instrument for setting out and measuring degrees.
quadragesima: the 40 days of Lent or the first Sunday in Lent.
quadrans vetus: type of horary quadrant, see article on the quadrant.
quadrant: instrument based on a quarter of a circle, see article on the quadrant.
quadratum nauticum: diagram for finding course directions from latitudes and longitudes, see article on the astrolabe.
quatrefoil: decorative form with four leaves or petals.
radio latino: instrument for measuring angles in surveying and gunnery, see article on the radio latino.
ramming rods: long rods, often in wood, intended for use in gunnery and used to push the projectile inside a cannon, or to compress the powder in an arquebus.
reduction compass: a drafting instrument with two pivoting arms which sits parallel to a drawing surface, held up by its fixed points. The arms have no scales but can be divided at any given position
by moveable points.
Regiomontanus-type dial: design of portable altitude dial adjustable for any latitude.
rete: skeletal star map representing the rotation of the heavens on an astrolabe. The rete normally features a projection of the ecliptic and pointers for prominent stars, and can be rotated over a
chosen latitude plate. see article on the astrolabe.
right ascension: angle parallel to the celestial equator, measured eastwards from the spring equinox.
ring dial: simple form of altitude dial in the shape of a ring, for use in one latitude.
rule (or ruler): an instrument of multifarious functions in the form of a straight rod of metal, wood or other material; a folding rule is an alternative form with two or more jointed legs.
Saints' days: anniversary days for Christian saints.
scaphe dial: sundial where the hour lines are marked on a concave hemisphere or part hemisphere.
sector: calculating instrument using pairs of lines on the faces of two hinged arms, see article on the sector.
sexagenarium: an instrument in the form of a volvelle for planetary calculations.
shadow square: a form of geometrical quadrant, where two sides of a square are divided into equal parts and a plumb line or alidade from the opposite corner is used for measuring angles in terms of
ratios (that is, tangents). Where a plumb line is used, one side has a pair of fixed sights. The name comes from the fact that a measurement of the altitude of the sun, expressed as a ratio, applied
to the length of the shadow cast by an upright structure, yields its height. See also under astrolabe.
sight: a device through which an object of interest can be viewed. Sights appear on surveying and astronomical instruments and in a more specialised form in the gunner's sight.
simple theodolite: surveying instrument for measuring horizontal angles, see article on the theodolite.
sinical quadrant: a quarter of a circle with a scale of degrees at its circumference which carries a pattern of criss-crossing vertical and horizontal lines. The ratio of the length of a given line
to the quadrant's radius gives the sine or cosine of the corresponding angle.
solar time: time measured directly from the position of the sun, see article on the sundial.
solstice: either of two points on the ecliptic where the sun achieves its maximum declination. The term also refers to the two dates when the sun reaches this position in its annual cycle. The winter
solstice corresponds to the shortest day, the summer solstice to the longest. Near these points the sun's declination changes only slowly, hence the etymological meaning of solstice as 'standing
star pointer: point marking the position of a star, see article on the astrolabe.
string gnomon: the part of a sundial which casts the shadow, where this part is in the form of a taut string.
sun and moon dial: sundial that can also tell the time from the shadow cast by the moon, see article on the sundial.
sundial: instrument of many different forms for finding the time from the position of the sun, usually by measuring the position of a shadow, see article on the sundial.
sundial and dividers: a compound instrument combining the functions of both a sundial and dividers.
surveying and gunnery instrument: compound instrument combining functions required in the closely-associated fields of surveying and gunnery, such as measuring bearings and finding ranges.
surveying compass: magnetic compass with fixed sights used for measuring horizontal bearings by magnetic azimuth. See also circumferentor.
surveying instrument: surveyors used instruments to take bearings, elevations, distances and other associated measures, as well as, for example, laying out plans and drawing topographical surveys.
Included are such instruments as the surveying compass, surveying quadrant, surveying rod, surveying staff, surveyor's cross, surveyor's line, surveyor's square and theodolite. Surveying instruments
also include compound instruments embodying several such functions.
surveying instruments: see surveying instrument.
surveying quadrant: quadrant used for land surveying, usually for measuring altitude and azimuth.
surveying rod: an instrument used by surveyors for taking distances and other associated measurements.
surveying staff: an instrument used in surveying to measure elevations.
surveying staffs: see surveying staff.
surveyor's cross: instrument for establishing right-angled lines of sight, by means either of open sights set at the ends of an equal-arm cross or slits in a cylinder.
surveyor's line: a long string or rope used to measure distances for surveying.
surveyor's square: another name for a surveyor's cross.
theodolite: surveying instrument for measuring horizontal angles, and perhaps vertical angles, see article on the theodolite.
theodolite and sundial: an instrument combining the functions of a sundial and a theodolite.
throne: part of an astrolabe, connecting the instrument to the suspension shackle, see article on the astrolabe.
triangulation: surveying technique involving the measurement of a baseline, the location of other stations by taking angles from either end, and perhaps the extension of the survey through the
addition of further triangles.
triangulation instrument: instrument with three jointed arms with scales for surveying or range-finding, see article on the triangulation instrument.
trigonus: triangular element in the type of altitude dial known as the 'organum Ptolemai'.
tripod base: base of a tripod (three-legged support) on which to stand an instrument.
tripod legs: legs of a tripod (three-legged support) on which to stand an instrument.
tropic of Capricorn: line of geographical latitude and corresponding line of declination coinciding with the sun's position at the winter solstice, its most southerly position in the sky.
tympanum: Latin name (plural tympana) for a latitude plate of an astrolabe.
unequal hours: system of hour reckoning, see article on time and date.
universal dial: a sundial that can be used in any latitude.
universal projection: a type of projection of the celestial sphere appropriate for use in any latitude.
vane: upright piece, in this context usually forming a sight and often pierced with a pinhole, a slit, or a slot with a vertical wire.
vertical dial: sundial where the hour lines are marked on a vertical surface, see article on the sundial.
vertical disc dial: sundial where the hour lines are marked on a vertical surface in the form of a disc, see article on the sundial.
volvelle: a device which rotates; usually referring to one or more discs which turn within a circular scale.
wedge: the part of an astrolabe which secures the pin; also called the 'horse'.
wegweiser: a circle marked with the points of the compass and a rotating index.
wind names: a nomenclature for dividing an azimuth circle into degrees, where eight 45-degree scales are allocated to eight traditional named winds of the Mediterranean.
wind rose: type of compass rose, where the directions are indicated by the names or initials of the traditional winds of the Mediterranean. May also refer to a whole instrument serving only to
indicate the names of the wind directions.
wind vane: flag-like vane mounted on a vertical post free to align itself with, and indicate the direction of, the wind. Also an instrument performing the same function.
zenith: the point on the celestial sphere directly above the observer.
zodiac: a band of 12 constellations of stars straddling the ecliptic; the ecliptic and zodiac are conventionally divided into these constellations, 30 degrees being allocated to each.
zodiacal calendar: pair of circular scales, one of the signs and degrees of the zodiac, the other of the days and months of the year, which yields the date of the sun's position in its annual cycle.
Commonly included on the back of an astrolabe.
zodiacal sign: one of the star constellations included in the zodiac.
Find all natural numbers X, Y
A is given in decimal representation: find all natural numbers X, Y such that A is divisible by 25.
Maybe I am misunderstanding something, but your number can be written $A = 100\cdot \overline{xyxyxyxyx} + \overline{y5}$, where the bars denote decimal digit strings. Since $25 \mid 100$, you only need to worry
about when $25 \mid \overline{y5}$, which happens only for 25 or 75. So $y \in \{2,7\}$ and $x \in \{0,1,2,3,4,5,6,7,8,9\}$.
Unless x and y can represent multi-digit numbers? Then $y \in \{\,n \in \mathbb{N} \mid n \text{ ends in 2 or 7}\,\}$ and $x \in \mathbb{N}$.
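A quick brute-force check of the reasoning above, as a minimal Python sketch. The digit pattern of A (x y x y x y x y x y 5) and the helper name divisible_by_25 are assumptions taken from the reply, not from the original problem statement.

# Brute-force check: for which digits y is the number xyxyxyxyxy5 divisible by 25?
# (Digit pattern assumed from the reply above.)
def divisible_by_25(x, y):
    digits = "".join(str(d) for d in [x, y] * 5) + "5"   # x y x y x y x y x y 5
    return int(digits) % 25 == 0

good_y = sorted({y for x in range(10) for y in range(10) if divisible_by_25(x, y)})
print(good_y)   # [2, 7] -- only the last two digits y5 matter, as argued above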
Rates of Change and Differential Equations: Filling and Leaking Water Tank
Problem Statement:
A 50 gallon tank initially contains 20 gallons of pure water. Salt water solution containing 0.5 lb/gallon is poured in at the rate of 4 gallons/minute. At the same time, a drain at the bottom
allows the well mixed salt solution to exit at 2 gallons/minute. How many pounds of salt are in the tank at the precise time the tank is full?
What I've done so far:
□ At t = 0, the tank holds x(0) = 0 pounds of salt in its 20 gallons of pure water
□ Salt water flows in at 4 gallons/min with 0.5 pounds/gallon --> 2 pounds/min = rate of salt coming in
□ Drain takes 2 gallons/min of the well-mixed contents --> 2x(t)/(20+2t) pounds/min = rate of salt leaving
□ Want to first find f, the precise time at which the tank is full
□ Then solve for the general solution x(t) and plug in t = f to get x(f) (a numerical check of this setup is sketched below)
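As a quick numerical check of this setup (a sketch, not a full solution): the tank is full when 20 + 2t = 50, i.e. t = 15 min, and the salt satisfies dx/dt = 2 - 2x/(20 + 2t) with x(0) = 0. The Python/SciPy sketch below, with the hypothetical helper name salt_rate, integrates this directly; solving the linear ODE with integrating factor (10 + t) gives x(t) = (10 + t) - 100/(10 + t), so both routes should land near x(15) = 21 lb.

# Minimal numerical check of the mixing model above (assumes SciPy is available).
from scipy.integrate import solve_ivp

def salt_rate(t, x):
    # salt in: 4 gal/min * 0.5 lb/gal = 2 lb/min; salt out: 2 gal/min * x/(20 + 2t) lb/gal
    return [2.0 - 2.0 * x[0] / (20.0 + 2.0 * t)]

t_full = 15.0                                   # 20 + 2t = 50 gallons when t = 15 min
sol = solve_ivp(salt_rate, (0.0, t_full), [0.0], rtol=1e-8, atol=1e-10)
print(sol.y[0, -1])                             # approximately 21 (pounds of salt when full)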
Linear regression and R-squared
----- Original Message -----
From: [hidden email]
Date: 06/12/2010 21:19:
using "reglin" or "regress" a linear regression of measurement data can be done. But how is it possible to get R-squared of the regression as a parameter for the "goodness of fit"? Is there
another function for doing this?
The RMS deviations between data and the fit are given on the diagonal of the third
output argument of reglin(). For instance.
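The example at the end of this message appears to have been cut off. As a general illustration (plain Python/NumPy rather than Scilab, with made-up data), R-squared can always be computed from the fit residuals as R^2 = 1 - SS_res/SS_tot; the same arithmetic can be applied to the residuals of a reglin() fit.

# General R-squared computation from a least-squares line fit (NumPy sketch, made-up data).
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

a, b = np.polyfit(x, y, 1)                   # least-squares line y = a*x + b
residuals = y - (a * x + b)
ss_res = np.sum(residuals ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
print(r_squared)                             # close to 1 means a good linear fit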
QUADRATIC EQUATIONS TUTORIAL for Physical Sciences
In physics, mathematics is used continuously. As the maths gets more tricky, it seems to lose familiarity and maths classes seem to have nothing to do with the maths in physics.
• Maths classes seem only to use "x, y, z" whereas physics seems to use "s, v, t, a" and other unfamiliar letters which are confusing.
• Physics seems to use these letters in unfamiliar orders from the familiar maths problems.
• Physics uses strange numbers eg decimals, not nice integers!
Quadratic equations are usually the first point at which physics starts using a "familiar" but otherwise purely "mathematics" type of idea, and suddenly this hoary old topic becomes unfamiliar.
Basics - A QUADRATIC FUNCTION = anything like
y = 3x^2 - 2x + 3 or y = 3 - 2x + 3x^2
or y = -3.42q^2 + 1.003q
or s = 30.2t - 4.9t^2
(NOTICE that the ORDER of writing terms in equations is unimportant; we often forget this and tend to think that, to be a quadratic equation, it has to look like ----------.)
• These graphically are all PARABOLAE - any "power two" or "squared" function is a quadratic and graphs as a parabola. The highest power present must be 2 (not three or any other number).
In physics, we MODEL real phenomena. That means we find mathematical equations that "fit" the graphs that measuring nature creates for us. We then use the algebra that "fits" to make new
predictions. This is the process of modeling. The algebra we use has EXACTLY the same rules as in maths. That is why you do so much apparently useless stuff in maths - because it is useful elsewhere,
eg physics or economics.
Lets work with the quadratic function
s = 30.2t - 4.9t^2
In maths, this is a formula ( a "function" ) which you might be expected to graph.
You would
• Make the vertical axis "s".
• Make the horizontal axis "t"
• Find the "s" intercept by making t = 0.
s = 30.2 x 0 - 4.9 x 0 x 0 = 0 , thus when t = 0, s = 0 so the graph crosses the "s" axis at (0,0)
• Find zeros. When s = 0 , 0 = 30.2t - 4.9t^2, (now what ? - oh yes, see if it factorises! ) 0 = t ( 30.2 - 4.9t )
( OK! To get nothing, I recognise that nothing times anything is nothing, that is 0 = nothing x something !!!! Tricky. )
Applying this to the equation, either 0 = 0 x ( 30.2 - 4.9t ) or 0 = t x 0
That is either t = 0 or ( 30.2 - 4.9t ) = 0
If 30.2 - 4.9t = 0 it means that
30.2 = 4.9t ( remember switching addition to the other side changes signs )
so 30.2/ 4.9 = t ( remember switching multiplication turns to dividing )
giving this answer as t = 6.16.
Two answers for zeroes t = 0 and t = 6.16
• Dominant Term is the last piece of evidence we need to sketch this graph. This is the "-4.9t^2" part, which means that for t large in size (positive or negative), "-4.9t^2" will be large and negative.
The graph is thus an inverted parabola as shown.
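The same sketching evidence can be checked numerically. A minimal Python sketch (the function name s is just shorthand for the model above):

# Checking the graph-sketching evidence for s = 30.2t - 4.9t^2.
def s(t):
    return 30.2 * t - 4.9 * t ** 2

print(s(0.0))                 # s-intercept: 0, so the curve passes through (0, 0)
print(30.2 / 4.9)             # the non-zero root of t*(30.2 - 4.9t) = 0: about 6.16
print(s(100.0), s(-100.0))    # dominant term: large |t| gives large negative s, an inverted parabola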
In physics, this function models a vertically moving ball which started moving at 30.2 ms^-1 upwards and is accelerating down at 9.8 ms^-2. "s" is its vertical displacement ( read "height" ) and "t"
is the time in the air after starting.
We can use both the graph and the function to calculate useful details.
If we know a value of "t" we can substitute in the function ( or draw on the graph a line ) to find "s". That is - knowing time, we can calculate height.
However, if I know height "s", I can, with a little more difficulty, calculate time, either graphically or through the algebra of quadratic equations.
Why more difficulty?
Look at the example below where I graphically wish to find the times when the height is 20m above the start ( +20). I get two times, one when going up, the other when coming down.
Now, how do we do this algebraically.
We take the function, s = 30.2t - 4.9t^2
And we say, "s = height takes the value, +20" that is
+20 = 30.2t - 4.9t^2
that is
+4.9t^2 - 30.2t +20 = 0 ( remember rules for moving across equals signs )
• Physically, what is the value of "s" when the height is ground level?
• What do the signs of "s" become when a ball falls BELOW ground level ( eg if off a cliff into water below)?
• What physically do the "zeros" of graph sketching mean?
What we now have is a QUADRATIC EQUATION.
To solve this little equation - we need either to factorise ( unlikely to happen easily here) or to do some fancy work with a rote formula.
Solving +4.9t^2 - 30.2t +20 = 0 for t.
Simple versions of these are solved by factorizing, but complicated versions don’t easily factorize. We need to use a "formula" approach. In maths one is developed by playing at algebra.
Maths teachers start with the most "general" version of the equation that is possible.
" ax^2 + bx +c = 0" ( numbers a, b, c are called coefficients.)
They then show that x will always be calculated by the formula
x = ( -b ± {b^2 - 4ac}^1/2 ) / 2a (remember the index "1/2" means square root!)
In the above case a = +4.9, b = - 30.2, c = +20 so
t = ( - -30.2 ± {[-30.2]^2 - 4x4.9x20}^1/2) / 2x4.9
= (30.2 ± {912 - 392}^1/2 ) / 9.8 = (30.2 ± 520^1/2) / 9.8 = ( 30.2 ± 22.80) / 9.8
giving two answers which can be seen on the graph above
t = 5.41 seconds and 0.75 seconds.
The 0.75 s corresponds to going through +20 m on the way up
while the 5.41 s corresponds to +20 m on the way down.
This is clear from the graph above.
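To check this arithmetic, the formula can be coded directly. A minimal Python sketch using the same coefficients (the variable names are arbitrary):

# Quadratic formula for 4.9t^2 - 30.2t + 20 = 0.
import math

a, b, c = 4.9, -30.2, 20.0
disc = b ** 2 - 4 * a * c                  # 912.04 - 392 = 520.04
t_down = (-b + math.sqrt(disc)) / (2 * a)  # about 5.41 s: passing +20 m on the way down
t_up = (-b - math.sqrt(disc)) / (2 * a)    # about 0.75 s: passing +20 m on the way up
print(t_up, t_down)
# For s = +60 m the discriminant would be negative, so math.sqrt would fail:
# there are no real roots and the ball never reaches that height (see below).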
Calculate the times for the function s = 30.2t - 4.9t^2 when s = +10m and when s = - 10m . See if these agree with the graph.
Sometimes the answers will not make sense, meaning that no real answers exist. An example would occur for our s = 30.2t - 4.9t^2 if s = +60m. The algebra would ask you for the square root of a
negative number, so there are no real answers. Physically it means that the ball does not reach +60m. ( Check both by calculating using the formula and by looking at the graph).
At present in physics you will not meet these cases.
Not all answers have physical meanings. Negative time answers are simply artificial artifacts of using algebra and not looking at the physical situation.
Northridge Geometry Tutor
Find a Northridge Geometry Tutor
...I believe that building a good foundation is the key to success in any subject. I have come up with many different tricks for helping students remember key ideas in math and the biggest
compliment I received was when one client told me she was showing her class my "wedding cake" trick (one that ...
11 Subjects: including geometry, physics, algebra 1, GED
...I have routinely worked through the night to edit students' college application essays as deadlines fast-approached, and I have frequently talked to and assuaged nerve-racked students in the
early morning hours before their various standardized tests (e.g., SAT, ACT, SAT-2). If nothing else, I ...
58 Subjects: including geometry, English, reading, writing
...Since I was in AP Calculus BC in high school I began tutoring younger students in Algebra and Geometry. Once I began my college career I returned to my Alma Mater, San Fernando High School, to
tutor through the non-profit organization Project GRAD. I was an in class tutor in various math classes including Algebra 1, Geometry, Pre-Calculus and Calculus.
7 Subjects: including geometry, calculus, algebra 1, algebra 2
...Also, in high school I started my school's first boy's volleyball team with a couple friends. Usually, I found myself playing the position of setter, but have played all positions. I received
many awards like Coach's Award and Most Valuable Player.
20 Subjects: including geometry, English, writing, biology
...During college is where the bulk of my tutoring experience is derived. I began tutoring my peers in some of the harder classes such as Calculus, Intermediate Spanish, and General and Organic
Chemistry. What I found is that no particular subject is "too hard" for anyone, it's just a matter of explaining a concept in unique ways most appealing to the particular student.
31 Subjects: including geometry, English, reading, physics
|