Henry Reich
NEW DISCOVERY About the Big Bang! (and a new video explaining that discovery!)
How much mass energy is there in a raisin? Or a mosquito? Or the earth? Find out in the newest lab from MinuteLabs
Interesting video. I think monogamy is a word invented for the human species for practical purposes, since human babies need to be cared for and nurtured. We also invented the concepts of home, nation, and country, which have to be sustained by taxes. Most importantly, there are sexually transmitted diseases. The Old Testament has stories showing that males were not monogamous and that this was the accepted practice at the time. Without the institution of monogamy, there would be no cheating. Good or bad, the institution has been established.
What should you do when it's really really cold outside? Go swimming, for one...
Exercise Comparison
In this exercise you have to compare two given fractions. You have to choose the larger of the two fractions by selecting the correct comparison sign.
First choose the correct comparison sign. After you have chosen it, the result will show on the right: a green square with Correct will tell you that your answer was right, while a red square with Incorrect will indicate that your answer was wrong. You will get to the next task by clicking the button.
In this exercise only the option Mixed number is enabled. If checked, the fractions will appear as mixed numbers.
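The comparison the exercise asks for can be checked by hand with integer cross-multiplication. Here is a minimal sketch (the fractions 3/4 and 5/6 are made-up examples, not taken from the program):

```python
from fractions import Fraction

# Compare a/b and c/d with integer cross-multiplication: for positive
# denominators, a/b < c/d exactly when a*d < c*b, with no rounding error.
def comparison_sign(a, b, c, d):
    lhs, rhs = a * d, c * b
    return "<" if lhs < rhs else ">" if lhs > rhs else "="

print("3/4", comparison_sign(3, 4, 5, 6), "5/6")  # prints: 3/4 < 5/6
print(Fraction(3, 4) < Fraction(5, 6))            # cross-check: True
```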
solving as a product of two factors
I have been given this expression to factor as the product of two factors:
5x^3 - 10xy^2
I have been struggling with it and can't seem to get it right. Can someone explain how to do this, please?
Thanks in advance.
Factors of a number are two things that multiply to give that number.
So if we pull out 5x from the above, we get:
$5x^3 - 10xy^2=(5x)(x^2-2y^2)$
where $5x$ and $x^2-2y^2$ are both factors of $5x^3 - 10xy^2$.
Does this make sense?
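For readers who want a machine check of this factorization, here is a minimal sketch using the sympy library (not part of the original thread):

```python
from sympy import symbols, factor, expand

x, y = symbols("x y")
expr = 5*x**3 - 10*x*y**2

print(factor(expr))  # 5*x*(x**2 - 2*y**2), matching (5x)(x^2 - 2y^2)
# Expanding the factored form recovers the original expression exactly.
print(expand(5*x*(x**2 - 2*y**2)) == expand(expr))  # True
```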
ECCC - Reports tagged with algorithm
We consider worst case time bounds for NP-complete problems
including 3-SAT, 3-coloring, 3-edge-coloring, and 3-list-coloring.
Our algorithms are based on a common generalization of these problems,
called symbol-system satisfiability or, briefly, SSS [R. Floyd &
R. Beigel, The Language of Machines]. 3-SAT is equivalent to
(2,3)-SSS while the other problems ...
The Volatility Watcher's Toolkit
The CBOE's VIX index gets mainstream exposure as the "fear index", but there's a lot more to volatility watching than the VIX. The VIX does a good job of measuring the current level of anxiety in the market, but it has some problems. Among other things it is:
• Not a good predictor of the future
• Often not moving in the direction people expect (opposite the S&P 500): historically its average percentage move is -4.77x that of the S&P 500, but around 20% of the time it moves in the same direction
• Not investable: it's a measurement, not a security
• Quirky around weekends and holidays
• Prone to jumps/dips the Monday before the 3rd Friday of the month
There's an almost overwhelming number of things to watch in volatility, but a few straightforward concepts can help you observe intelligently.
Option Implied Volatility (IV)
• The market's estimate of a security's future volatility is reflected in the price of its options. In practice the market isn't always logical about this; for example, out-of-the-money (OTM) puts usually have higher expected volatility than at-the-money (ATM) puts even though they are all based on the same security.
• If an option expires in 60 days, we assume that its pricing reflects a 60-day expectation of volatility in the underlying security.
Implied Volatility Skew
• When the market is especially fearful, the IV of OTM puts goes way up because investors are buying options for portfolio insurance. The difference between IVs at different option strike prices is called skew. A partial measure of SPX (S&P 500) option skew is incorporated into the calculation of the VIX index. The CBOE also publishes an explicit skew calculation; details of that calculation are given here, and a spreadsheet with historical daily values can be downloaded here.
Duration
• Volatility measures are either variable or constant duration. The VIX is a constant 30-day forward estimate of volatility. SPX options, VIX futures, and VIX options all have variable durations because they expire on specific dates. While the volatility directly or indirectly expressed by these securities does not necessarily go up or down with the passage of time, they tend to get tweaky a few days before expiration.
Blends or Single Security
• The VIX is an index that blends together the volatility characteristics of a wide range of SPX options. Most measures (e.g., VIX futures prices) give a volatility measure for a single security.
Term Structure
• While the VIX has a 30-day duration, it's useful to look further out in time for other volatility estimates. The CBOE also provides VXV, which is a 93-day estimate. SPX options give estimates up to two years out, VIX futures 9 months, VIX options 6 months. Different patterns over time (e.g., steadily rising estimates of volatility vs. declining ones) indicate how nervous the market is, regardless of the absolute values of volatility.
• When the term structure is climbing over time (positive slope) it is said to be in contango; if declining over time, backwardation.
Rolling Indexes
• All exchange-traded volatility funds (ETFs/ETNs) rely on volatility rolling indexes that blend together various VIX futures. This blending achieves a constant-duration estimate of volatility and is done in a way that can be physically traded. Unfortunately, the typical term structure of the VIX futures creates an erosion effect on this mix of futures. The end result is that these indexes are only good at tracking volatility moves in the short term; long term, these rolling indexes inexorably trend towards zero.
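To make the rolling mechanics concrete, here is a minimal sketch of a constant-maturity blend of two futures. The prices, days-to-expiration, and 30-day target are made-up assumptions for illustration, not any official index methodology:

```python
# Minimal sketch of a constant-maturity (30-day) blend of two VIX futures.
# Real rolling indexes (e.g., SPVXSTR) use a precise business-day weighting;
# this only illustrates the idea.
def rolling_blend(front_price, second_price, front_days, target_days=30):
    # Weight on the front contract shrinks as it approaches expiration,
    # so the blend's average maturity stays near the target.
    w_front = min(1.0, max(0.0, front_days / target_days))
    return w_front * front_price + (1.0 - w_front) * second_price

# In contango (second month priced above the front month), the daily roll
# from front to second month is what erodes a long rolling-index position.
print(rolling_blend(15.0, 16.5, front_days=20))  # 15.5 (2/3 front, 1/3 second)
```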
The table below summarizes the S&P 500 volatility measures that I monitor. Click on the ticker for quotes (Yahoo! Finance or Bloomberg) or more information.
| Duration | Securities | Term Structure |
|---|---|---|
| Constant | Blend | VIX (30 day); VXV (3 month); SPVXMTR (VXZ) (mid-term, 4-7 month); VIX/VXV ratio (when below 1 indicates contango) |
| Constant | Single | Historical volatility (e.g., 22 day retrospective) |
| Variable | Blend | VIN (1 month) and VIF (2 months); per-expiration-month SPX option volatility computed using the VIX's option strike composite approach for months 3 through 10 |
| Variable | Single | VIX futures and VIX options (each expires on a specific date) |
• VVIX is a VIX-like index derived from the IV of VIX options, creating a mind-bending volatility-of-volatility measure.
• Historical or realized volatility is computed using past changes in the security and is often compared to the implied volatility to see the difference; implied volatility is almost always higher than historical volatility. (A minimal computation sketch follows this list.)
• VIN and VIF are variable-duration metrics created by the CBOE for use in calculating the VIX index. The "N" in VIN stands for "near", and the "F" in VIF stands for "far". The VIN is calculated from the next set of SPX monthly options to expire until there are fewer than 7 days left to expiration; then the expiration month after that is used. The VIF is calculated using the SPX options that expire the month after the VIN options. The switch in SPX option expirations used for VIN and VIF occurs on the Monday before the 3rd Friday of the month and sometimes glitches the VIX index. See this post for calculation details.
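Here is a minimal sketch of the historical (realized) volatility computation mentioned above: the annualized standard deviation of daily log returns over a 22-day window. The price series is a made-up illustration, not market data:

```python
import math

# Minimal sketch: 22-day historical (realized) volatility from closing
# prices, annualized with the usual sqrt(252) trading-day convention.
def realized_vol(prices, window=22):
    rets = [math.log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]
    recent = rets[-window:]
    mean = sum(recent) / len(recent)
    var = sum((r - mean) ** 2 for r in recent) / (len(recent) - 1)
    return math.sqrt(var) * math.sqrt(252)

# Made-up price path (alternating +/-1% around a slow drift).
prices = [100 * (1.001 ** i) * (1 + 0.01 * (-1) ** i) for i in range(40)]
print(f"22-day annualized historical volatility: {realized_vol(prices):.1%}")
```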
Vance Harwood | Saturday, March 22nd, 2014
[FOM] Extending the Language of Set Theory
Aatu Koskensilta aatu.koskensilta at xortec.fi
Tue Mar 1 02:34:56 EST 2005
On Feb 27, 2005, at 8:07 PM, Dmytro Taranovsky wrote:
> Meaningfully extending the language of set theory opens new horizons
> for
> mathematicians, but any such endeavor also raises a host of
> philosophical issues. I plan to discuss some of the issues in future
> FOM postings.
Lately, I've been playing with ideas closely resembling yours, but from
a slightly different angle. My emphasis has not really been on
addressing the deficiencies of the language of set theory per se, but
rather to see what we can get by trying to push two kinds of reflection
- epistemological and set theoretical - as far as possible. By
epistemological reflection I refer to the kind of reflection formalized
by e.g. various proof theoretical reflection principles, iterated truth
predicates and the like. By set theoretical reflection principles I
refer to principles which are, in some sense, formalizations of the
maxim UNIFY:
Every possible mathematical structure should be exemplified as a set.
There are basically two kinds of attitudes one can adopt in study of
extensions of ZFC by means of such reflection principles. One might be
interested merely to provide an explication of what is "implicit in
acceptance of ZFC" or some particular philosophical and mathematical
position. On the other hand, one might be interested in actually
establishing new mathematical results that are in some sense acceptable
on the basis of ZFC and intuitively plausible reasoning. The former leads
to the kind of analysis exemplified in the classical results about
predicative justifiability, the ordinal Gamma_0 and so forth. The
latter is a more risky endeavor, but to me at least more interesting in
the grand scheme of things.
When trying to formally capture UNIFY at least partially one hits the
problem of defining what exactly is a possible mathematical structure.
Any axiomatization of this concept seems to lead to just a new theory
of sets so a more circumspect approach is called for. Luckily, as you
have noted in your paper, various non-set collections - which can be
seen as structures - make sense based on acceptance of ZFC. For
example, the class of all true sentences of the language of set theory
with a constant for every set makes sense, since the concept of set
theoretic truth does. (If we don't accept that there is a determinate
matter of fact as to the truth or falsity of set theoretic sentences,
what reason do we have to accept replacement for anything but upwards
absolute formulae?).
Without further ado, let me present the formal system to which such
musings naturally lead. The language is a two-sorted language with a sort for
sets and a sort for classes and a binary predicate for membership. As
axioms for sets we have those of ZFC with replacement and separation as
Pi^1_1 axioms. As to classes, we have an axiom saying that V exists,
an axiom saying that to every set there corresponds a class of all
classes corresponding to the members of the set, and a rule of
inference saying that if A is provably a class ordinal, then for all
classes B, the class version of the constructible hierarchy relative
to B up to A exists. This axiom
basically says that if B makes sense, then anything constructible from
it along a well-ordering which makes sense makes sense. As you note,
the resulting theory is interpretable in ZFC+There is an inaccessible
by taking V_kappa as V and the members of L[V_kappa]_alpha as classes,
where alpha is the least ordinal, s.t.
1. alpha > kappa
2. if delta in L[V_kappa]_beta and beta < alpha, then delta < alpha
The advantage of this theory over ZFC+There is an inaccessible is that
"in principle" all of its theorems are acceptable on basis of
acceptance of ZFC. In addition, talk about Skolem functions of V,
elementary substructures of V and so forth can be carried out. And
since classes can be members of other classes, there is no need for
tedious coding trickery.
We come now to set theoretical reflection. Since classes are certainly
possible mathematical structures, there should be, for every class, a
set that is in some sense structurally equivalent to the class. This I
have, tentatively, formalized as follows. We add to the language a new
binary function symbol c which takes a class structure A and a class
ordinal B into a set structure of form <V_kappa, L[a]_alpha, a> with
the following property
for all x in V_kappa( <V,L[A]_B,A> |= phi(x) <=> <V_kappa,L[a]_alpha,a> |= phi(x) )
Again, this is expressed as a rule of inference: if we have established
that B is a class ordinal, then we can infer the above. In addition, we
add the following rule of inference:
Q_1A_1...Q_nA_n( <V,L[A]_B,A,A_1,...,A_n> |= phi )
---------------------------------------------------
Q_1a_1...Q_na_n( <V_kappa, L[a]_alpha, a, a_1,...,a_n> |= phi )
where Q_1...Q_n is a sequence of alternating quantifiers, the A_i are class
variables, and the a_i are set variables.
It's a rather trivial exercise to derive various small large cardinal
axioms in this system. However, I don't know how far one can go. There
are several directions for further extensions: the most obvious one is
that the above analysis seems perfectly sensible and acceptable and
hence there should be a natural model of the theory (by UNIFY). The
problem is defining what exactly is a natural model of the theory,
which is muddled by the presence of the special rules of inference.
I'd like to thank you for bringing up this interesting subject. Also,
I'd be interested in the relation between the theory outlined above and the
unfolding of set theory sketched by Solomon Feferman. I'll return to
your paper in more detail once I've had a few moments to digest it all.
Aatu Koskensilta (aatu.koskensilta at xortec.fi)
"Wovon man nicht sprechen kann, daruber muss man schweigen"
- Ludwig Wittgenstein, Tractatus Logico-Philosophicus
Re: prime numbers and African artifact
Michael Jennings (M.J.Jennings@amtp.cam.ac.uk)
13 Jul 1995 20:06:05 GMT
In article <Pine.HPP.3.91.950713005007.12887H-100000@weber.ucsd.edu>,
Daniel Kian Mc Kiernan <dmckiern@weber.ucsd.edu> wrote:
>On Tue, 11 Jul 1995, Alistair J. R. Young wrote:
>>Rick Hawkins writes:
>>> But only half- credit, since it's the wrong answer. 1 is not prime.
>> Correct me if I'm wrong but if a prime number is only divisible by itself
>> and 1, 1 is prime. What else is it divisible by?
>I'm familiar with three definitions of "prime number".
>[1] A positive integer divisible only by itself and by 1.
>[2] Same as [1] except that the number must also be greater than 1.
>[3] Same as [1] or [2] except that the number must also be greater
> than 2.
>For my part, I don't care for definitions [2] or [3].
Why not? Definition two is the only sensible one, as
using this definition it is possible to express any positive integer
other than one as a unique product of prime numbers. This is why prime
numbers are useful - this result isn't called the fundamental theorem of
arithmetic for nothing. In fact, in one way it is best to use this as a
definition:
"The prime numbers are the set of positive integers such
that any positive integer other than one can be created as a unique
product of elements (which can be used more than once each) of the set"
This definition explains why we have such a thing as a 'prime
number'. Unfortunately this is a totally non-constructive definition.
Other equivalent definitions, such as (2) above are more useful in
determining such things as whether a number is prime, and what the
prime numbers actually are, and are therefore more common.
The number 1 is the multiplicative identity, something quite special
and very important, but something entirely different from a prime number.
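A small sketch of why definition [2] is the convenient one: if 1 were allowed as a prime, factorizations would no longer be unique. This Python illustration is an editorial addition, not part of the original post:

```python
# Sketch: trial-division factorization into primes > 1. Uniqueness of the
# result (up to ordering) is exactly the fundamental theorem of arithmetic.
def prime_factors(n):
    factors, p = [], 2
    while p * p <= n:
        while n % p == 0:
            factors.append(p)
            n //= p
        p += 1
    if n > 1:
        factors.append(n)
    return factors

print(prime_factors(360))  # [2, 2, 2, 3, 3, 5] -- the unique multiset
# If 1 counted as prime, 360 = 1*2*2*2*3*3*5 = 1*1*2*2*2*3*3*5 = ...
# would all be "prime factorizations", destroying uniqueness.
```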
Michael Jennings
Department of Applied Mathematics and Theoretical Physics
The University of Cambridge. mjj12@damtp.cambridge.ac.uk
"Forrest Gump!! Man, I violently *hated* that reactionary piece of subtle
pseudohip drivel... Then again, I don't even like movies. But Jesus -- a
movie that really makes the audience wish they were obedient and stupid??
What gives?? It's like something out of the depths of a Stalinist purge."
- Bruce Sterling
GMAT Reading Comprehension
Invest your time in reading the passage at a comfortable speed and identifying key information such as the main idea and passage structure. Doing so will allow you to quickly answer the related
questions without rereading the passage.
In the GMAT Reading Comprehension module, you will learn GMAT-specific skills related to:
• Answering common question types
• Eliminating incorrect answers
• Engaging in each passage
• Passage-specific questions
• Summarizing paragraphs
• Adjusting your strategy
• Identifying the main idea
• Common myths
• Identifying common structures
• Recommended readings
*To view more free GMAT prep videos, please see individual lesson modules
Two questions about products
Charles Greathouse on Wed, 21 Aug 2013 15:43:29 +0200
I seem to remember that prod() used to use binary splitting, forming subproducts of roughly equal size so that
prod(i=1, #v, v[i])
was substantially faster than
s=1; for(i=1, #v, s*=v[i]); s
for v a decent-sized vector of integers > 1. But this is no longer true; in particular, emulating (what I remember to be) the old behavior
fakeprod(v,start=1,end=#v)=if(end-start<3, prod(i=start,end,v[i]), fakeprod(v,start,(start+end)\2)*fakeprod(v,(start+end)\2+1,end))
is substantially faster, even though large vectors are passed repeatedly:
default(timer, 1)
When did this change -- or do I misremember? Can binary splitting be implemented again?
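For readers unfamiliar with the trick: binary splitting keeps the two operands of each multiplication at comparable bit-lengths, which is where subquadratic big-integer multiplication pays off. Here is a minimal Python sketch of the same idea (an illustration, not PARI's internal implementation):

```python
# Minimal sketch of binary-splitting product: multiply balanced halves
# recursively instead of accumulating one huge running product.
def bsplit_prod(v, lo=0, hi=None):
    if hi is None:
        hi = len(v)
    if hi - lo <= 3:                    # small base case, as in fakeprod
        out = 1
        for x in v[lo:hi]:
            out *= x
        return out
    mid = (lo + hi) // 2
    return bsplit_prod(v, lo, mid) * bsplit_prod(v, mid, hi)

# e.g. a factorial: operands stay balanced, so the big multiplies are cheaper
print(bsplit_prod(list(range(1, 11))))  # 3628800 == 10!
```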
A quick second question: I notice that prodeuler() works like prod(X=a,b, 1.) rather than prod(X=a,b). Is this intentional? Would it be possible to give it a third argument so that prodeuler(p=a, b,
1) could be used if an integer result was desired?
Charles Greathouse
Case Western Reserve University
Newport, RI Algebra Tutor
Find a Newport, RI Algebra Tutor
I have been a mathematics educator for more than forty years. I have taught middle school and high school mathematics in Rhode Island, Maine, and the African country of Zambia. I have also held lecturer positions at the Community College of Rhode Island and Gibbs College, Cranston, RI.
15 Subjects: including algebra 1, algebra 2, calculus, trigonometry
...I have also been tutoring for many years, from elementary subjects to chemistry and test preparation, and I enjoy working with students who need that little extra boost. I use a hands-on approach when tutoring, trying many different methods to get the content across. Learning should be relevant and fun.
31 Subjects: including algebra 2, grammar, reading, geometry
...I tutored in algebra and precalculus. When I transferred to UMass Dartmouth, I continued tutoring in math from algebra I to calculus. Right now I tutor Algebra I on Mondays in UMD Primes, a UMass Dartmouth program.
13 Subjects: including algebra 1, algebra 2, calculus, geometry
...Additionally, I was a member of the Rhode Island standard pilot program responsible for developing standards-based teaching for adult learners. I am a member of the Commission on Adult Basic
Education, a registered agent for CASAS Implementation, and an assessor for the National External Diploma...
30 Subjects: including algebra 2, algebra 1, English, reading
I'm a certified tutor for the North Kingstown School Department and also for the Literacy Volunteers of Washington County. I get along well with middle-school age students and am especially
patient with them. I enjoy helping students overcome problem areas with their schoolwork.
7 Subjects: including algebra 1, reading, English, prealgebra
5 Bezier curves
Each interior node of a cubic spline may be given a direction prefix or suffix {dir}: the direction of the pair dir specifies the direction of the incoming or outgoing tangent, respectively, to the
curve at that node. Exterior nodes may be given direction specifiers only on their interior side.
A cubic spline between the node z_0, with postcontrol point c_0, and the node z_1, with precontrol point c_1, is computed as the Bezier curve

(1-t)^3 z_0 + 3t(1-t)^2 c_0 + 3t^2(1-t) c_1 + t^3 z_1 for 0 <= t <= 1.
As illustrated in the diagram below, the third-order midpoint (m_5), constructed from two endpoints z_0 and z_1 and two control points c_0 and c_1, is the point corresponding to t=1/2 on the Bezier
curve formed by the quadruple (z_0, c_0, c_1, z_1). This allows one to recursively construct the desired curve, by using the newly extracted third-order midpoint as an endpoint and the respective
second- and first-order midpoints as control points:
Here m_0, m_1 and m_2 are the first-order midpoints, m_3 and m_4 are the second-order midpoints, and m_5 is the third-order midpoint. The curve is then constructed by recursively applying the
algorithm to (z_0, m_0, m_3, m_5) and (m_5, m_4, m_2, z_1).
In fact, an analogous property holds for points located at any fraction t in [0,1] of each segment, not just for midpoints (t=1/2).
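The recursive construction just described is de Casteljau subdivision. Here is a minimal sketch of one subdivision step in plain Python (points as (x, y) tuples; an illustration, not Asymptote's internal code):

```python
# One de Casteljau subdivision step at parameter t: returns the two
# half-curves (z0, m0, m3, m5) and (m5, m4, m2, z1) described above.
def lerp(p, q, t):
    return (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))

def subdivide(z0, c0, c1, z1, t=0.5):
    m0, m1, m2 = lerp(z0, c0, t), lerp(c0, c1, t), lerp(c1, z1, t)
    m3, m4 = lerp(m0, m1, t), lerp(m1, m2, t)
    m5 = lerp(m3, m4, t)  # the point on the curve at parameter t
    return (z0, m0, m3, m5), (m5, m4, m2, z1)

left, right = subdivide((0, 0), (0, 100), (100, 100), (100, 0))
print(left[3])  # (50.0, 75.0), the curve point at t = 1/2
```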
The Bezier curve constructed in this manner has the following properties:
โข It is entirely contained in the convex hull of the given four points.
โข It starts heading from the first endpoint to the first control point and finishes heading from the second control point to the second endpoint.
The user can specify explicit control points between two nodes like this:
draw((0,0)..controls (0,100) and (100,100)..(100,0));
However, it is usually more convenient to just use the .. operator, which tells Asymptote to choose its own control points using the algorithms described in Donald Knuth's monograph, The
MetaFontbook, Chapter 14. The user can still customize the guide (or path) by specifying direction, tension, and curl values.
The higher the tension, the straighter the curve is, and the more it approximates a straight line. One can change the spline tension from its default value of 1 to any real value greater than or
equal to 0.75 (cf. John D. Hobby, Discrete and Computational Geometry 1, 1986):
draw((100,0)..tension 2 ..(100,100)..(0,100));
draw((100,0)..tension 3 and 2 ..(100,100)..(0,100));
draw((100,0)..tension atleast 2 ..(100,100)..(0,100));
In these examples there is a space between the 2 and the following .. connector. This is needed because 2. would otherwise be interpreted as a numerical constant.
The curl parameter specifies the curvature at the endpoints of a path (0 means straight; the default value of 1 means approximately circular):
draw((100,0){curl 0}..(100,100)..{curl 0}(0,100));
The MetaPost ... path connector, which requests, when possible, an inflection-free curve confined to a triangle defined by the endpoints and directions, is implemented in Asymptote as the convenient
abbreviation :: for ..tension atleast 1 .. (the ellipsis ... is used in Asymptote to indicate a variable number of arguments; see Rest arguments). For example, compare
The --- connector is an abbreviation for ..tension atleast infinity.. and the & connector concatenates two paths, after first stripping off the last node of the first path (which normally should
coincide with the first node of the second path). | {"url":"http://asymptote.sourceforge.net/doc/Bezier-curves.html","timestamp":"2014-04-19T10:24:28Z","content_type":null,"content_length":"7322","record_id":"<urn:uuid:20bb6864-3956-42e3-bb86-590dc9ad93df>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00571-ip-10-147-4-33.ec2.internal.warc.gz"} |
NOTE: International Finance Discussion Papers are preliminary materials circulated to stimulate discussion and critical comment. References in publications to International Finance Discussion Papers
(other than an acknowledgment that the writer has had access to unpublished material) should be cleared with the author or authors. Recent IFDPs are available on the Web at http://www.federalreserve.gov/pubs/ifdp/. This paper can be downloaded without charge from the Social Science Research Network electronic library at http://www.ssrn.com/.
This interview for Econometric Theory explores David Hendry's research. Issues discussed include estimation and inference for nonstationary time series; econometric methodology; strategies, concepts,
and criteria for empirical modeling; the general-to-specific approach, as implemented in the computer packages PcGive and PcGets; computer-automated model selection procedures; David's textbook
Dynamic Econometrics; Monte Carlo techniques (PcNaive); evaluation of these developments in simulation studies and in empirical investigations of consumer expenditure, money demand, inflation, and
the housing and mortgage markets; economic forecasting and policy analysis; the history of econometric thought; and the use of computers for live empirical and Monte Carlo econometrics.
Keywords: cointegration, conditional models, consumers' expenditure, diagnostic testing, dynamic specification, encompassing, equilibrium-correction models, error-correction models, exogeneity,
forecasting, general-to-specific modeling, housing market, inflation, model design, model evaluation, money demand, Monte Carlo, mortgage market, parameter constancy, PcGets, PcGive, PcNaive,
sequential reduction.
JEL Classifications: C1, C5
1 Educational Background, Career, and Interests
Let's start with your educational background and interests. Tell me about your schooling, your original interest in economics and econometrics, and the principal people, events, and books that
influenced you at the time.
I went to Glasgow High School but left at 17, when my parents migrated to the north of Scotland. I was delighted to have quit education.
What didn't you like about it?
The basics that we were taught paled into insignificance when compared to untaught issues such as nuclear warfare, independence of post-colonial countries, and so on. We had an informal group that
discussed these issues in the playground. Even so, I left school with rather inadequate qualifications: Glasgow University simply returned my application.
That was not a promising start.
No, it wasn't. However, as barman at my parents' fishing hotel in Ross-shire, I met the local Chief Education Officer, who told me that the University of Aberdeen admitted students from
``educationally deprived areas'' such as Ross-shire, and would ignore my Glasgow background. I was in fact accepted by Aberdeen for a 3-year general MA degree (which is a first degree in Scotland)--a
``civilizing'' education that is the historical basis for a liberal arts education.
Why did you return to education when you had been so discouraged earlier?
Working from early in the morning till late at night in a hotel makes one consider alternatives! I had wanted to be an accountant, and an MA opened the door to doing so. At Aberdeen, I studied maths,
French, history, psychology, economic history, philosophy, and economics, as these seemed useful for accountancy. I stayed on because they were taught in a completely different way from school,
emphasizing understanding and relevance, not rote learning.
What swayed you off of accountancy?
My ``moral tutor'' was Peter Fisk ...
Ah, I remember talking with Peter (author of Fisk (1967)) at Royal Statistical Society meetings in London, but I had not realized that connection.
Peter persuaded me to think about other subjects. Meeting him later, he claimed to have suggested economics, and even econometrics, but I did not recall that.
Were you enrolled in economics?
No, I was reading French, history, and maths. My squash partner, Ian Souter, suggested that I try political economy and psychology as ``easy subjects,'' so I enrolled in them after scraping though my
first year.
Were they easy?
I thought psychology was wonderful. Rex and Margaret Knight taught really interesting material. However, economics was taught by Professor Hamilton, who had retired some years before but continued
part time because his post remained unfilled. I did not enjoy his course, and I stopped attending lectures. Shortly before the first term's exam, Ian suggested that I catch up by reading Paul
Samuelson's (1961) textbook, which I did (fortunately, not Samuelson's (1947) Foundations!). From page one, I found it marvelous, learning how economics affected our lives. I discovered that I had
been thinking economics without realizing it.
You had called it accountancy rather than economics?
Partly, but also, I was naive about the coverage of intellectual disciplines.
Why hadn't you encountered Samuelson's text before?
We were using a textbook by Sir Alec Cairncross, the government chief economic advisor at the time and a famous Scots economist. Ian was in second-year economics, where Samuelson was recommended. I
read Samuelson from cover to cover before the term exam, which then seemed elementary. Decades later, that exam came back to haunt me when I presented the ``Quincentennial Lecture in Economics'' at
Aberdeen in 1995. Bert Shaw, who had marked my exam paper, retold that I had written ``Poly Con'' at the top of the paper. The course was called ``PolEcon,'' but I had never seen it written. He had
drawn a huge red ring around ``Poly Con'' with the comment: ``You don't even know what this course is called, so how do you know all about it?'' That's when I decided to become an economist. My
squash partner Ian, however, become an accountant.
Were you also taking psychology at the time?
Yes. I transferred to a 4-year program during my second year, reading joint honors in psychology and economics. The Scottish Education Department generously extended my funding to 5 years, which
probably does not happen today for other ``late developers.'' There remain few routes to university such as the one that Aberdeen offered or funding bodies willing to support such an education.
Psychology was interesting, though immensely challenging--studying how people actually behaved, and eschewing assumptions strong enough to sustain analytical deductions. I enjoyed the statistics,
which focused on design and analysis of experiments, as well as conducting experiments, but I dropped psychology in my final year.
You published your first paper, [1], while an undergraduate. How did that come about?
I investigated student income and expenditure in Aberdeen over two years to evaluate changing living standards. To put this in perspective, only about 5% of each cohort went to university then, with
most being government funded, whereas about 40% now undertake higher or further education. The real value of such funding was falling, so I analyzed its effects on expenditure patterns (books,
clothes, food, lodging, travel, etc.): the paper later helped in planning social investment between student and holiday accommodation.
What happened after Aberdeen?
I applied to work with Dick Stone in Cambridge. Unfortunately he declined, so I did an MSc in econometrics at LSE with Denis Sargan--the Aberdeen faculty thought highly of his work. My econometrics
knowledge was woefully inadequate, but I only discovered that after starting the MSc.
Had you taken econometrics at Aberdeen?
Econometrics was not part of the usual undergraduate program, but my desk in Aberdeen's beautiful late-medieval library was by chance in a section that had books on econometrics. I tried to read
Lawrence Klein's (1953) A Textbook of Econometrics and to use Jan Tinbergen's (1951) Business Cycles in the United Kingdom 1870-1914 in my economic history course. That led the economics department
to arrange for Derek Pearce in the statistics department to help me: he and I worked through Jim Thomas's (1964) Notes on the Theory of Multiple Regression Analysis. Derek later said that he had been
keeping just about a week ahead of me, having had no previous contact with problems in econometrics like simultaneous equations and residual autocorrelation.
Was teaching at LSE a shock relative to Aberdeen?
The first lecture was by Jim Durbin on periodograms and spectral analysis, and it was incomprehensible. Jim was proving that the periodogram was inconsistent, but that typical spectral estimators are
well-behaved. As we left the lecture, I asked the student next to me, ``What is a likelihood?'' and got the reply ``You're in trouble!''. But luck was on my side. Dennis Anderson was a physicist
learning econometrics to forecast future electricity demand, so he and I helped each other through econometrics and economics respectively. Dennis has been a friend ever since, and is now a neighbor
in Oxford after working at the World Bank.
Did Bill Phillips teach any of your courses?
Yes, although Bill was only at LSE in my first year. When we discussed my inadequate knowledge of statistical theory, he was reassuring, and I did eventually come to grips with the material. Bill,
along with Meghnad Desai, Jan Tymes, and Denis Sargan, ran the quantitative economics seminar, which was half of the degree. They had erudite arguments about autoregressive and moving-average
representations, matching Denis's and Bill's respective interests. They also debated whether a Phillips curve or a real-wage relation was the better model for the United Kingdom. That discussion was
comprehensible, given my economics background.
What do you recall of your first encounters with Denis Sargan?
Denis was always charming and patient, but he never understood the knowledge gap between himself and his students. He answered questions about five levels above the target, and he knew the material
so well that he rarely used lecture notes. I once saw him in the coffee bar scribbling down a few notes on the back of an envelope--they constituted his entire lecture. Also, while the material was
brilliant, the notation changed several times in the course of the lecture: became , then , and back to , while had become and then ; and and got swapped as well. Sorting out one's notes proved
invaluable, however, and eventually ensured comprehension of Denis's lectures. Our present teaching-quality assessment agency would no doubt regard his approach as disastrous, given their blinkered
view of pedagogy.
That sort of lecturing could be discouraging to students, whereas it didn't bother Denis.
One got used to Denis's approach. For Denis, notation was just a vehicle, with the ideas standing above it.
My own recollection of Denis's lectures is that some were crystal clear, whereas others were confusing. For instance, his expositions of instrumental variables and LIML were superb. Who else taught
the MSc? Did Jim Durbin?
Yes, Jim taught the time-series course, which reflected his immense understanding of both time- and frequency-domain approaches to econometrics. He was a clear lecturer. I have no recollection of Jim
ever inadvertently changing notation--in complete contrast to Denis--so years later Jim's lecture notes remain clear.
What led you to write a PhD after the MSc?
The academic world was expanding rapidly in the United Kingdom after the (Lionel) Robbins report. Previously, many bright scholars had received tenured posts after undergraduate degrees, and Denis
was an example. However, as in the United States, a doctorate was becoming essential. I had a summer job in the Labour government's new Department of Economic Affairs, modeling the second-hand car
market. That work revealed to me the gap between econometric theory and practice, and the difficulty of making economics operational, so I thought that a doctorate might improve my research skills.
Having read George Katona's research, including Katona and Mueller (1968), I wanted to investigate economic psychology in order to integrate the psychologist's approach to human behavior with the
economist's utility-optimization inter-temporal models. Individuals play little role in the latter--agents' decisions could be made by computers. By contrast, Katona's models of human behavior
incorporated anticipations, plans, and mistakes.
Had you read John Muth (1961) on expectations by then?
Yes, in the quantitative economics seminar, but his results seemed specific to the given time-series model, rather than being a general approach to expectations formation. Models with adaptive and
other backward-looking expectations were being criticized at the time, although little was known about how individuals actually formed expectations. However, Denis guided me into modeling dynamic
systems with vector autoregressive errors for my PhD.
What was your initial reaction to that topic?
I admired Sargan (1964), and I knew that mis-specifying autocorrelation in a single equation induced modeling problems. Generalizing that result to systems with vector autoregressive errors appeared
useful. Denis's approach entailed formulating the ``solved-out'' form with white-noise errors, and then partitioning dynamics between observables and errors. Because any given polynomial matrix
could be factorized in many ways, with all factorizations being observationally equivalent in a stationary world, a sufficient number of (strongly) exogenous variables were needed to identify the
partition. The longer lag length induced by the autoregressive error generalized the model, but error autocorrelation per se imposed restrictions on dynamics, so the autoregressive-error
representation was testable: see [4], [14], and [22], the last with Andy Tremayne.
Did you consider the relationship between the system and the conditional model as an issue of exogeneity?
No. I took it for granted that the variables called ``exogenous'' were independent of the errors, as in strict exogeneity. Bill Phillips (1956) had considered whether the joint distribution of the
endogenous and potentially exogenous variables factorized, such that the parameters of interest in the conditional distribution didn't enter the marginal distribution. On differentiating the joint
distribution with respect to the parameters of interest, only the conditional distribution would contribute. Unfortunately, I didn't realize the importance of conditioning for model specification at
the time.
What other issues arose in your thesis?
Computing and modeling. Econometric methods are pointless unless operational, but implementing the new procedures that I developed required considerable computer programming. The IBM 360/65 at
University College London (UCL) facilitated calculations. I tried the methods on a small macro-model of the United Kingdom, investigating aggregate consumption, investment, and output; see [15].
At the time, Denis had several PhD students working on specific sectors of the economy, whereas you were working on the economy as a whole. How much did you interact with the other students?
The student rebellion at the LSE was at its height in 1968-1969; and most of Denis's students worked on the computer at UCL, an ocean of calm. It was a wonderful group to be with. Grayham Mizon wrote
code for optimization applied to investment equations, Pravin Trivedi for efficient Monte Carlo methods and modelling inventories, Mike Feiner for ``ratchet'' models for imports, and Ross Williams
for nonlinear estimation of durables expenditure. Also, Cliff Wymer was working on continuous-time simultaneous systems, Ray Byron on systems of demand equations, and William Mikhail on finite-sample
approximations. We shared ideas and code, and Denis met with us regularly in a workshop where each student presented his or her research. Most theses involved econometric theory, computing, an
empirical application, and perhaps a simulation study.
1.1 The London School of Economics
After finishing your PhD at the LSE, you stayed on as a Lecturer, then as a Reader, and eventually as a Professor of Econometrics. Was Denis Sargan the main influence on you at the LSE--as a mentor,
as a colleague, as an econometrician, and as an economist?
Yes, he was. And not just for me, but for a whole generation of British econometricians. He was a wonderful colleague. For instance, after struggling with a problem for months, a chat with Denis
often elicited a handwritten note later that afternoon, sketching the solution. I remember discussing Monte Carlo control variates with Denis over lunch after not getting far with them. He came to my
office an hour later, suggesting a general computable asymptotic approximation for the control variate that guaranteed an efficiency gain as the sample size increased. That exchange resulted in [16]
and [27]. Denis was inclined to suggest a solution and leave you to complete the analysis. Occasionally, our flailings stimulated him to publish, as with my attempt to extract th-order autoregressive
errors from th-order dynamics. Denis requested me to repeat my presentation on it to the econometrics workshop--the kiss of death to an idea! Then he formulated the common-factor approach in Sargan
How did Jim Durbin and other people at LSE influence you?
In 1973, I was programming GIVE--the Generalized Instrumental Variable Estimator [33]--including an algorithm for FIML. I used the FIML formula from Jim's 1963 paper, which was published much later
as Durbin (1988) in Econometric Theory. While explaining Jim's formula in a lecture, I noticed that it subsumed all known simultaneous equations estimators. The students later claimed that I stood
silently looking at the blackboard for some time, then turned around and said ``this covers everything.'' That insight led to [21] on estimator generating equations, from which all simultaneous
equations estimators and their asymptotic properties could be derived with ease. When Ted Anderson was visiting LSE in the mid-1970s and writing Anderson (1976), he interested me in developing an
analog for measurement-error models, leading to [20].
What were your teaching assignments at the LSE?
I taught the advanced econometrics option for the undergraduate degree, and the first year of the two-year MSc. It was an exciting time because LSE was then at the forefront of econometric theory and
its applications. I also taught control theory based on Bill Phillips's course notes and the book by Peter Whittle (1963).
Interactions between teaching, research, and software have been important in your work.
Indeed. Writing operational programs was a major theme at LSE because Denis was keen to have computable econometric methods. The mainframe program GIVE was my response. Meghnad Desai called GIVE a
``model destruction program'' because at least one of its diagnostic tests usually rejected anyone's pet empirical specification.
1.2 Overseas Visits
During 1975-1976, you split a year-long sabbatical between Yale--where I first met you--and Berkeley. What experiences would you like to share from those visits?
There were three surprises. The first was that the developments at LSE following Denis's 1964 paper were almost unknown in the United States. Few econometricians therefore realized that
autoregressive errors were a testable restriction and typically indicated mis-specification, and Denis's equilibrium-correction (or ``error-correction'') model was unknown. The second surprise was
the divergence appearing in the role attributed to economic theory in empirical modeling: from pure data-basing, through using theory as a guideline--which nevertheless attracted the accusation of
``measurement without theory''--to the increasingly dominant fitting of theory models. Conversely, little attention was given to which theory to use, and to bridging the gap between abstract models
and data by empirical modeling. The final surprise was how foreign the East Coast seemed, an impression enhanced by the apparently common language. The West Coast proved more familiar--we realized
how much we had been conditioned by movies! I enjoyed the entire sabbatical. At Yale, the Koopmans, Tobins, and Klevoricks were very hospitable; and in Berkeley, colleagues were kind. I ended that
year at Australian National University (ANU), where I first met Ted Hannan, Adrian Pagan, and Deane Terrell.
One of the academic highlights was the November 1975 conference in Minnesota held by Chris Sims.
Yes, it was, although Chris called my comments in [25] ``acerbic.'' In [25], I concurred with Clive Granger and Paul Newbold's critique of poor econometrics, particularly that a high and a low
Durbin-Watson statistic were diagnostic of an incorrect model. However, I thought that the common-factor interpretation of error autocorrelation, in combination with equilibrium-correction models,
resolved the nonsense-regressions problem better than differencing, and it retained the economics. My invited paper [26] at the 1975 Toronto Econometric Society World Congress had discussed a system
of equilibrium corrections that could offset nonstationarity.
George Box and Gwilym Jenkins's book (initially published as Box and Jenkins (1970)) had appeared a few years earlier. What effect was that having on econometrics?
The debate between the Box-Jenkins approach and the standard econometrics approach was at its height, yet the ideas just noted seemed unknown. In the United States, criticisms by Phillip Cooper and
Charles Nelson (1975) of macro-forecasters had stimulated debate about model forms--specifically, about simultaneous systems versus ARIMA representations. However, my Monte Carlo work with Pravin in
[8] on estimating dynamic models with moving-average or autoregressive errors had shown that matching the lag length was more important than choosing the correct form; and neither lag length nor
model form was very accurately estimated from the sample sizes of 40-80 observations then available. Thus, to me, the only extra ingredients in the Box-Jenkins approach over Bill Phillips's work on
dynamic models with moving-average errors (Phillips (2000)) were differencing and data-based modeling. Differencing threw away steady-state economics--the long-run information--so it was unhelpful. I
suspected that Box-Jenkins models were winning because of their modeling approach, not their model form; and if a similar approach was adopted in econometrics--ensuring white-noise errors in a good
representation of the time series--econometric systems would do much better.
1.3 Oxford University
Why did you decide to move to Nuffield College in January 1982?
Oxford provided a good research environment with many excellent economists, it had bright students, and it was a lovely place to live. Our daughter Vivien was about to start school, and Oxford
schools were preferable to those in central London. Amartya Sen, Terence Gorman, and John Muellbauer had all recently moved to Oxford, and Jim Mirrlees was already there. In Oxford, I was initially
also acting director of their Institute of Economics and Statistics because academic cutbacks under Margaret Thatcher meant that the University could not afford a paid director. In 1999, the
Institute transmogrified into the Oxford economics department.
That sounds strange--not to have had an economics department at a major UK university.
No economics department, and no undergraduate economics degree. Economics was college-based rather than university-based, it lacked a building, and it had little secretarial support. PPE--short for
``Politics, Philosophy, and Economics''--was the major vehicle through which Oxford undergraduates learnt economics. The joke at the time was that LSE students knew everything, but could do nothing
with it, whereas Oxford students knew nothing, and could do everything with it.
How did your teaching responsibilities differ between LSE and Nuffield?
At Oxford, I taught the second-year optional econometrics course for the MPhil in economics--36 hours of lectures per year. Oxford students didn't have a strong background in econometrics,
mathematics, or statistics, but they were interested in empirical econometric modeling. With the creation of a department of economics, we have now integrated the teaching programs at both the
graduate and the undergraduate levels.
1.4 Research Funding
Throughout your academic career, research funding has been important. You've received grants from the Economic and Social Research Council (ESRC, formerly the SSRC), defended the funding of economics
generally, chaired the 1995-1996 economics national research evaluation panel for the Higher Education Funding Council for England (HEFCE), and just recently received a highly competitive ESRC-funded
research professorship.
On the first, applied econometrics requires software, computers, research assistants, and data resources, so it needs funding. Fortunately, I have received substantial ESRC support over the years,
enabling me to employ Frank Srba, Yock Chong, Adrian Neale, Mike Clements, Jurgen Doornik, Hans-Martin Krolzig, and yourself, who together revolutionized my productivity. That said, I have also been
critical of current funding allocations, particularly the drift away from fundamental research towards ``user-oriented'' research. ``Near-market'' projects should really be funded by commercial
companies, leaving the ESRC to focus on funding what the best researchers think is worthwhile, even if the payoff might be years later. The ESRC seems pushed by government to fund research on
immediate problems such as poverty and inner-city squalor--which we would certainly love to solve--but the opportunity cost is reduced research on the tools required for a solution. My work on the
fundamental concepts of forecasting would have been impossible without support from the Leverhulme Foundation. I still have more than half of my applications for funding rejected, and I regret that
so many exciting projects die. In an odd way, these prolific rejections may reassure younger scholars suffering similar outcomes.
Nevertheless, you have also defended the funding of economics against outside challenges.
In the mid-1980s, the UK meteorologists wanted another super-computer, which would have cost about as much as the ESRC's entire budget. There was an enquiry into the value of social science research,
threatening the ESRC's existence. I testified in the ESRC's favor, applying PcGive live to modeling UK house prices to demonstrate how economists analyzed empirical evidence; see [52]. The scientists
at the enquiry were fascinated by the predictability of such an important asset price, as well as the use of a cubic differential equation to describe its behavior. Fortunately, the enquiry
established that economics wasn't merely assertion.
I remember that one of the deciding arguments in favor of ESRC funding was not by an economist, but by a psychiatrist.
Yes. Griffith Edwards worked in the addiction research unit at the Maudsley on a program for preventing smoking. An economist had asked him if lung-cancer operations were worthwhile. Checking, he
found that many patients did not have a good life post-operation. This role of economics in making people think about what they were doing persuaded the committee of inquiry of our value. Thatcher
clearly attached zero weight to insights like Keynes's (1936) General Theory, whereas I suspect that the output saved thereby over the last half century could fund economics in perpetuity.
There also seems to be a difference in attitudes towards, say, a failure in forecasting by economists and a failure in forecasting by the weathermen.
The British press has often quoted my statement that, when weathermen get it wrong, they get a new computer, whereas when economists get it wrong, they get their budgets cut. That difference in
attitude has serious consequences, and it ignores that one may learn from one's mistakes. Forecast failure is as informative for us as it is for meteorologists.
That difference in attitude may also reflect how some members of our profession ignore the failures of their own models.
Possibly. Sometimes they just start another research program.
Let's talk about your work on HEFCE.
Core research funding in UK universities is based on HEFCE's research assessment exercise. Peer-group panels evaluate research in each discipline. The panel for economics and econometrics has been
chaired in the past by Jim Mirrlees, Tony Atkinson, and myself. It is a huge task. Every five years, more than a thousand economists from UK universities submit four publications each to the panel,
which judges their quality. This assessment is the main determinant of future research funding, as few UK universities have adequate endowments. It also unfortunately facilitates excessive government
``micro-management.'' Through the Royal Economic Society, I have tried to advise the funding council about designing such evaluation exercises, both to create appropriate incentives and to adopt a
measurement structure that focuses on quality.
1.5 Professional Societies and Journals
Professional societies have several important roles for economists, and you have been particularly active in both the Econometric Society and the Royal Economic Society.
As a life member of the Econometric Society, and as a Fellow since 1976, I know that the Econometric Society plays a valuable role in our profession, but I believe that it should be more democratic
by allowing members, and not just Fellows, to have a voice in the affairs of the Society. I was the first competitively elected President of the Royal Economic Society. After empowering its members,
the Society became much more active, especially through financing scholarships and funding travel. I persuaded the RES to start up the Econometrics Journal, which is free to members and inexpensive
for libraries. Neil Shephard has been a brilliant and energetic first managing editor, helping to rapidly establish a presence for the Econometrics Journal. I also helped found a committee on the
role of women in economics, prompted by Karen Mumford and steered to a formal basis by Denise Osborn, with Carol Propper as its first chairperson. The committee has created a network and undertaken a
series of useful studies, as well as examined issues such as potential biases in promotions. Some women had also felt that there was bias in journal rejections and were surprised that (e.g.) I still
received referee reports that comprised just a couple of rude remarks.
Almost from the start of your professional career, you have been active in journal editing.
Yes. In 1971, Alan Walters (who had the office next door to mine at LSE) nominated me as the econometrics editor for the Review of Economic Studies. Geoff Heal was the Review's economics editor, and
we were both in our twenties at the time. I have no idea how Alan persuaded the Society for Economic Analysis to agree to my appointment, although the Review was previously known as the ``Children's
Newspaper'' in some sections of our profession. Editing was invaluable for broadening my knowledge of econometrics. I read every submission, as I did later when editing for the Economic Journal and
the Oxford Bulletin. An editor must judge each paper and evaluate the referee reports, not just act as a post box. All too often, editors' letters merely say that one of the referees didn't ``like''
the paper, and so reject it. If my referees didn't like a paper that I liked, I would accept the paper nonetheless, reporting the most serious criticisms from the referee reports for the author to
rebut. Active editing also requires soliciting papers that one likes, which can be arduous when still handling 100-150 submissions a year.
I then edited the Economic Journal with John Flemming (who regrettably died last year) and covered a wider range of more applied papers. When I began editing the Oxford Bulletin, a shift to the
mainstream was needed, and this was helped by commissioning two timely special issues on cointegration that attracted the profession's attention; see [63] and [97].
Some people then nicknamed it the Oxford Bulletin of Cointegration! Let's move on to conferences. You organized the Oslo meeting of the Econometric Society, and you helped create the Econometrics
Conferences of the European Community (EC²).
EC² was conceived by Jan Kiviet and Herman van Dijk as a specialized forum, and I was delighted to help. Starting in Amsterdam in 1991, EC² has been very successful, and it has definitely enhanced European econometrics. We attract about a hundred expert participants, with no parallel sessions, although EC² does have poster sessions.
Poster sessions have been a success in the scientific community, but they generally have not worked well at American economics meetings. That has puzzled me, but I gather they succeeded at EC²?
We encouraged ``big names'' to present posters, we provided champagne to encourage attendance, and we gave prizes to the best posters. Some of the presentations have been a delight, showing how a
paper can be communicated in four square meters of wall space, and allowing the presenter to meet the researchers they most want to talk to. At a conference the size of EC², about twenty people
present posters at once, so there are two to three audience members per presenter.
That said, in the natural sciences, poster sessions also work at large conferences, so perhaps the ratio is important, not the absolute numbers.
1.6 Long-term Collaborations
Your extensive list of long-term collaborators includes Pravin Trivedi, Frank Srba, James Davidson, Grayham Mizon, Jean-François Richard, Rob Engle, Aris Spanos, Mary Morgan, myself, Julia Campos,
John Muellbauer, Mike Clements, Jurgen Doornik, Anindya Banerjee, and, more recently, Katarina Juselius and Hans-Martin Krolzig. What were your reasons for collaboration, and what benefits did they bring?
The obvious ones were a shared viewpoint yet complementary skills, my co-authors' brilliance, energy, and creativity, and that the sum exceeded the parts. Beyond that, the reasons were different in
every case. Any research involving economics, statistics, programming, history, and empirical analysis provides scope for complementarities. The benefits are clear to me, at least. Pravin was widely
read, and stimulated my interest in Monte Carlo. Frank greatly raised my productivity--our independently written computer code would work when combined, which must be a rarity. When I had tried this
with Andy Tremayne, we initially defined Kronecker products differently, inducing chaos! James brought different insights into our work, and insisted (like you) on clarity.
Grayham and I have investigated a wide range of issues. Like yourself, Rob, Jean-François, Katarina, and Mike (and also Søren Johansen, although we have not yet published together), Grayham shares a
willingness to discuss econometrics at any time, in any place. On the telephone or over dinner, we have started exchanging ideas about each other's research, usually to our spouses' dismay. I find
such discussions very productive. Jean-François and Rob are both great at stimulating new developments and clarifying half-baked ideas, leading to important notions and formalizations. Aris has
always been a kindred spirit in questioning conventional econometric approaches and having an interest in the history of econometrics.
Mary is an historian, as well as an econometrician, and so stops me from writing ``Whig history'' (i.e., history as written from the perspective of the victors). With yourself, we have long arguments
ending in new ideas, and then write the paper. Julia rigorously checks all derivations and frequently corrects me. John has a clear understanding of economics, so keeps me right in that arena. Mike
and I have pushed ahead on investigating a shared interest in the fundamentals of economic forecasting, despite a reluctance of funding agencies to believe that it is a worthwhile activity.
In addition to his substantial econometrics skills, Jurgen is one of the world's great programmers, with an extraordinary ability to conjure code that is almost infallible. He ported PcGive across to
C++ after persuading me that there was no future in FORTRAN. We interact on a host of issues, such as on how methodology impinges on the design and structure of programs. Anindya brings great
mathematical skills, and Katarina has superb intuition about empirical modeling. Hans-Martin has revived my interest in methodology with automatic model-selection procedures, which he pursues in
addition to his ``regime-switching'' research. Ken Wallis and I have regularly commented on each other's work, although we have rarely published together. And, of course, Denis Sargan was also a
long-term collaborator, but he almost never needed co-authors, except for [55], which was written jointly with Adrian Pagan and myself. As the acknowledgments in my publications testify, many others
have also helped at various stages, most recently Bent Nielsen and Neil Shephard, who are wonderful colleagues at Nuffield.
2 Research Strategy
I want to separate our discussion of research strategy into the role of economics in empirical modeling, the role of econometrics in economics, and the LSE approach to empirical econometric modeling.
2.1 The Role of Economics in Empirical Modeling
I studied economics because unemployment, living standards, and equity are important issues--as noted above, Paul Samuelson was a catalyst in that--and I remain an economist. However, a scientific
approach requires quantification, which led me to econometrics. Then I branched into methodology to understand what could be learnt from non-experimental empirical evidence. If econometrics could
develop good models of economic reality, economic policy decisions could be significantly improved. Since policy requires causal links, economic theory must play a central role in model formulation,
but economic theory is not the sole basis of model formulation. Economic theory is too abstract and simplified, so data and their analysis are also crucial. I have long endorsed the views in Ragnar
Frisch's (1933) editorial in the first issue of Econometrica, particularly his emphasis on unifying economic theory, economic statistics (data), and mathematics. That still leaves open the key
question as to `` which economic theory.'' `` High-level'' theory must be tested against data, contingent on `` well-established'' lower-level theories. For example, despite the emphasis on agents'
expectations by some economists, they devote negligible effort to collecting expectations data and checking their theories. Historically, much of the data variation is not due to economic factors,
but to `` special events'' such as wars and major changes in policy, institutions, and legislation. The findings in [205] and [208] are typical of my experience. A failure to account for these
special events can elide the role of economic forces in an empirical model.
2.2 The Role of Econometrics in Economics
Is the role of econometrics in economics that of a tool, just as Monte Carlo is a tool within econometrics?
Econometrics is our instrument, as telescopes and microscopes are instruments in other disciplines. Econometric theory and, within it, Monte Carlo, evaluates whether that instrument is functioning as
expected. Econometric methodology studies how such methods work when applied.
Too often, a study in economics starts afresh, postulating and then fitting a theory-based model, failing to build on previous findings. Because investigators revise their models and rewrite a priori
theories in light of the evidence, it is unclear how to interpret their results. That route of forcing theoretical models onto data is subject to the criticisms in Larry Summers (1991) about the ``
illusion of econometrics.'' I admire what Jan Tinbergen called `` kitchen-sink econometrics,'' being explicit about every step of the process. It starts with what the data are; how they are
collected, measured, and changed in the light of theory; what that theory is; why it takes the claimed form and is neither more general nor more explicit; and how one formulates the resulting
empirical relationship, and then fits it by a rule (an estimator) derived from the theoretical model. Next comes the modeling process, because the initial specification rarely works, given the many
features of reality that are ignored by the theory. Finally, ex post evaluation checks the outcome.
That approach suggests a difference between being primarily interested in the economic theory--where data check that the theory makes sense--and trying to understand the data--where the theory helps
interpret the evidence rather than act as a straitjacket.
Yes. To derive explicit results, economic theory usually abstracts from many complexities, including how the data are measured. There is a vast difference between such theory being invaluable, and
its being optimal. At best, the theory is a highly imperfect abstraction of reality, so one must take the data and the theory equally seriously in order to build useful empirical representations. The
instrument of econometrics can be used in a coherent way to interpret the data, build models, and underpin a progressive research strategy, thereby providing the next investigator with a starting
2.3 The LSE Approach
What is meant by the LSE approach? It is often associated with you in particular, although many other individuals have contributed to it, and not all of them have been at the LSE.
There are four basic stages, beginning with an economic analysis to delineate the most important factors. The next stage embeds those factors in a general model that also allows for other potential
determinants and relevant special features. Then, the congruence of that model is tested. Finally, that model is simplified to a parsimonious undominated congruent final selection that encompasses
the original model, thereby ensuring that all reductions are valid.
When developing the approach, the first tractable cases were linear dynamic single equations, where the appropriate lag length was an open issue. However, the principle applies to all econometric
modeling, albeit with greater difficulty in nonlinear settings; see Trivedi (1970) and Mizon (1977) for early empirical and theoretical contributions. Many other aspects followed, such as developing
a taxonomy for model evaluation, orthogonalizing variables, and re-commencing an analysis at the general model if a rejection occurs. Additional developments generalized this approach to system
modeling, in which several (or even all) variables are treated as endogenous. Multiple cointegration is easily analyzed as a reduction in this framework, as is encompassing of the VAR and whether a
conditional model entails a valid reduction. Mizon (1995) and [157] provide discussions.
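As a concrete illustration of the reduction stage, here is a minimal single-path sketch in Python. It is a deliberate simplification rather than PcGets or Autometrics: only one search path is followed, congruence is proxied by a single residual-autocorrelation check, and the function and variable names are hypothetical.

```python
# Minimal single-path general-to-specific (Gets) sketch.
# Illustrative only: real implementations search many paths, use a full
# battery of diagnostics, and compare terminal models by encompassing.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_breusch_godfrey

def gets_single_path(y, X, alpha=0.05):
    """Backward elimination from a general linear model to a parsimonious one."""
    general = sm.OLS(y, X).fit()                    # the general unrestricted model
    keep = list(range(X.shape[1]))
    while len(keep) > 1:
        fit = sm.OLS(y, X[:, keep]).fit()
        worst = int(np.argmin(np.abs(fit.tvalues)))
        if fit.pvalues[worst] < alpha:
            break                                   # every remaining term is significant
        trial = keep[:worst] + keep[worst + 1:]
        trial_fit = sm.OLS(y, X[:, trial]).fit()
        # the reduction must stay congruent (here, only a residual-autocorrelation check) ...
        congruent = acorr_breusch_godfrey(trial_fit, nlags=4)[1] > alpha
        # ... and must be accepted against the general model (joint F-test of all deletions)
        dropped = [i for i in range(X.shape[1]) if i not in trial]
        valid_reduction = general.f_test(np.eye(X.shape[1])[dropped]).pvalue > alpha
        if congruent and valid_reduction:
            keep = trial
        else:
            break
    return sm.OLS(y, X[:, keep]).fit(), keep
```

Multi-path procedures in the spirit of Hoover and Perez (1999) follow many such simplification paths and then choose among the surviving terminal models by encompassing tests.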
Do you agree with Chris Gilbert (1986) that there is a marked contrast between the `` North American approach'' to modeling and the `` European approach'' ?
Historically, American economists were the pragmatists, but Koopmans (1947) seems to mark a turning point. Many American economists now rely heavily on abstract economic reasoning, often ignoring
institutional aspects and inter-agent heterogeneity, as well as inherent conflicts of interest between agents on different sides of the market. Some economists believe their theories to such an
extent that they retain them, even when they are strongly rejected by the data. There are precedents in the history of science for maintaining research programs despite conflicts with empirical
evidence, but only when there was no better theory. For economics, however, Werner Hildenbrand (1994), Jean-Pierre Benassy (1986), and many others highlight alternative theoretical approaches that
seem to accord better with empirical evidence.
3 Research Highlights
We discussed estimator generation already. Let's now turn to some other highlights of your research program, including equilibrium correction, exogeneity, model evaluation and design, encompassing,
Dynamic Econometrics, and Gets. These issues have often arisen from empirical work, so let's consider them in their context, focusing on consumers' expenditure and money demand, including the
Friedman-Schwartz debate. We should also discuss Monte Carlo as a tool in econometrics, the history of econometrics, and your recent interest in ex ante forecasting, which has emphasized the
difference between error correction and equilibrium correction.
3.1 Consumers' Expenditure
Your paper [28] with James Davidson, Frank Srba, and Stephen Yeo models UK consumers' expenditure. This paper is now commonly known by the acronym DHSY, which is derived from the authors' initials.
Some background is necessary. I first had access to computer graphics in the early 1970s, and I was astonished at the picture for real consumers' expenditure and income in the United Kingdom.
Expenditure manifested vast seasonality, with double-digit percentage changes between quarters, whereas income had virtually no seasonality. Those seasonal patterns meant that consumption was much
more volatile than income on a quarter-to-quarter basis. Two implications followed. First, it would not work to fit first-order lags (as I had done earlier) and hope that dummies plus the seasonality
in income would explain the seasonality in consumption. Second, the general class of consumption-smoothing theories like the permanent-income and life-cycle hypotheses seemed mis-focused. Consumers
were inducing volatility into the economy by large inter-quarter shifts in their expenditure, so the business sector must be a stabilizing influence.
Moreover, the consumption equation in my macro-model [15] had dramatically mis-forecasted the first two quarters of 1968. In 1968Q1, the Chancellor of the Exchequer announced that he would greatly
increase purchase (i.e., sales) taxes unless consumers' expenditure fell, the response to which was a jump in consumers' expenditure, followed in the next quarter by the Chancellor's tax increase and
a resulting fall in expenditure. I wrongly attributed my model's forecast failure to model mis-specification. In retrospect, that failure signalled that forecasting problems with econometric models
come from unanticipated changes.
At about this time, Gordon Anderson and I were modeling building societies, which are the British analogue of the US savings and loans associations. In [26], we nested the long-run solutions of
existing empirical equations, using a formulation related to Sargan (1964), although I did not see the link to Denis's work until much later; see [50]. I adopted a similar approach for modeling
consumers' expenditure, seeking a consumption function that could interpret the equations from the major UK macro-models and explain why their proprietors had picked the wrong models. In DHSY [28],
we adopted a `` detective story'' approach, using a nesting model for the different variables, valid for both seasonally-adjusted and unadjusted data, with up to 5 lags in all the variables to
capture the dynamics. Reformulation of that nesting model delivered an equation that [39] later related to Phillips (1957) and called an error-correction model. Under error correction, if consumers
made an error relative to their plan by over-spending in a given quarter, they would later correct that error.
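In stylized form (lower-case letters denoting logarithms and Δ4 the four-quarter difference), an equilibrium-correction equation of the DHSY type can be written as below; the coefficients and exact lag structure are illustrative rather than the published specification, which also included further dynamics and, as discussed next, inflation terms.

```latex
\Delta_4 c_t = \beta_0 + \beta_1\,\Delta_4 y_t + \beta_2\,\Delta_1\Delta_4 y_t
             + \gamma\,(c - y)_{t-4} + \varepsilon_t , \qquad \gamma < 0 .
```

With γ negative, spending above its equilibrium relation to income in the same quarter a year earlier pulls expenditure growth back down, which is the error-correcting behaviour just described.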
Even with DHSY, a significant change in model formulation occurred just before publication. Angus Deaton (1977) had just established a role for inflation if agents were uncertain as to whether
relative or absolute prices were changing.
The first DHSY equation explained real consumers' expenditure given real income, and it significantly over-predicted expenditure through the 1973-1974 oil crisis. Angus's paper suggested including
inflation and changes therein. Adding these variables to our equation explained the under-spending. This result was the opposite of what the first-round economic theory suggested, namely, that high
inflation should induce pre-emptive spending, given the opportunity costs of holding money. Inflation did not reflect money illusion. Rather, it implied the erosion of the real value of liquid
assets. Consumers did not treat the nominal component of after-tax interest as income, whereas the Statistical Office did, so disposable income was being mis-measured. Adding inflation to our
equation corrected that. As ever, theory did not have a unique prediction.
DHSY explained why other modelers selected their models, in addition to evaluating your model against theirs. Why haven't you applied that approach in your recent work?
It was difficult to do. Several ingredients were necessary to explain other modelers' model selections: their modeling approaches, data measurements, seasonal adjustment procedures, choice of
estimators, maximum lag lengths, and mis-specification tests. We first standardized on unadjusted data and replicated models on that. While seasonal filters leave a model invariant when the model is
known, they can distort the lag patterns if the model is data-based. We then investigated both OLS and IV but found little difference. Few of the then reported evaluation statistics were valid for
dynamic models, so such tests could mislead. Most extant models had a maximum lag of one and low short-run marginal propensities to consume, which seemed too small to reflect agent behavior. We tried
many blind alleys (including measurement errors) to explain these low marginal propensities to consume. Then we found that equilibrium correction explained them by induced biases in
partial-adjustment models. We designed a nesting model, which explained all the previous findings, but with the paradox that it simplified to a differenced specification, with no long-run term in the
levels of the variables. Resolving that conundrum led to the error-correction mechanism. While this ``Sherlock Holmes'' approach was extremely time-consuming, it did stimulate research into
encompassing, i.e., trying to explain other models' results from a given model.
Were you aware of Phillips (1954) and Phillips (1957)?
Now the interview becomes embarrassing! I had taken over Bill Phillips's lecture course on control theory and forecasting, so I was teaching how proportional, integral, and derivative control rules
can stabilize the economy. However, I did not think of such rules as an econometric modeling device in behavioral equations.
What other important issues did you miss at the time?
Cointegration! Gordon Anderson's and my work on building societies showed that combinations of levels variables could be stationary, as in the discussion by Klein (1953) of the `` great ratios.''
Granger (1981, 1986) later formalized that property as cointegration removing unit roots. Grayham Mizon and I were debating with Gene Savin whether unit roots changed the distributions of estimators
and tests, but bad luck intervened. Grayham and I found no changes in several Monte Carlos, but, unknowingly, our data generation processes had strong growth rates.
Rather than unit-root processes with a zero mean?
Yes. We found that estimators were nearly normally distributed, and we falsely concluded that unit roots did not matter; see West (1988).
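A small simulation, assuming Gaussian shocks and illustrative settings, shows why such Monte Carlos can mislead: with a strong drift, the t-statistic on the lagged level in a first-order autoregression is close to standard normal (the West, 1988, result), whereas without drift it has the left-shifted, skewed Dickey-Fuller shape.

```python
# Monte Carlo: distribution of the t-statistic for rho = 1 in
#   y_t = mu + rho * y_{t-1} + e_t,
# when the DGP is a random walk with and without drift (T and reps illustrative).
import numpy as np

rng = np.random.default_rng(0)
T, reps = 100, 10_000

def t_stat_rho_equals_one(drift):
    e = rng.standard_normal(T + 1)
    y = np.cumsum(drift + e)                              # random walk, drift per period
    Y, X = y[1:], np.column_stack([np.ones(T), y[:-1]])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    s2 = resid @ resid / (T - 2)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return (beta[1] - 1.0) / se

for drift in (0.0, 1.0):
    t = np.array([t_stat_rho_equals_one(drift) for _ in range(reps)])
    skew = ((t - t.mean()) ** 3).mean() / t.std() ** 3
    # drift = 0: mean well below zero, skewed (Dickey-Fuller shape);
    # drift = 1: mean near zero, roughly symmetric (close to standard normal)
    print(f"drift={drift}: mean t = {t.mean():.2f}, skewness = {skew:.2f}")
```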
The next missed issue concerned seasonality and annual differences. In DHSY, the equilibrium correction was the four-quarter lag of the log of the ratio of consumption to income, and it was highly
seasonal. However, seasonal dummy variables were insignificant if one used the Scheffé procedure; see Savin (1980). About a week after DHSY's publication, Thomas von Ungern-Sternberg added seasonal dummies to our equation and, with conventional tests, found that they were highly significant, leading to the ``HUS'' paper, [39]. Care is clearly required with multiple-testing procedures!
Those results on seasonality stimulated an industry on time-varying seasonal patterns, periodic seasonality, and periodic behavior, with many contributions by Denise Osborn (1988, 1991).
Indeed. The final mistake in DHSY was our treatment of liquid assets. HUS showed that, in an equilibrium-correction formulation, imposing a unit elasticity of consumption with respect to income
leaves no room for liquid assets. Logically speaking, DHSY went from simple to general. On de-restricting their equation, liquid assets were significant, which HUS interpreted as an integral
correction mechanism. The combined effect of liquid assets and real income on expenditure added up to unity in the long run.
The DHSY and HUS models appeared at almost the same time as the Euler-equation approach in Bob Hall (1978). Bob emphasized consumption smoothing, where changes in consumption were due to the
innovations in permanent income and so should be ex ante unpredictable. A large literature has tested if changes in consumers' expenditure are predictable in Hall's model. How did your models compare
with his?
In [35], James Davidson and I found that lagged variables, as derived from HUS, were significant in explaining changes in UK consumers' expenditure. HUS's model thus encompassed Hall's model.
``Excess volatility'' and ``excess smoothing'' have been found in various models, but few authors using an Euler-equation framework test whether their model encompasses other models.
You produced a whole series of papers on consumers' expenditure.
After DHSY, HUS, and [35], there were four more papers. They were written in part to check the constancy of the models, and in part to extend them. [46] modeled annual inter-war UK consumers'
expenditure, obtaining results similar to the post-war relation in DHSY and HUS, despite large changes in the correlation structure of the data. [88] followed up on DHSY, [101] developed a model of
consumers' expenditure in France, and [119] revisited HUS with additional data.
The 1990 paper [88] with Anthony Murphy and John Muellbauer finds that additional variables matter.
We would expect that to happen. As the sample size grows, noncentral t-statistics become more significant, so models expand. That's another topic that Denis worked on; see Sargan (1975), and the
interesting follow-up by Robinson (2003).
It also fits in with the work on m-testing by Hal White (1990).
Yes. Mis-specification evidence against a given formulation accumulates, which unfortunately takes one down a simple-to-general path. That is one reason empirical work is difficult. (The other is
that the economy changes.) A `` reject'' outcome on a test rejects the model, but it does not reveal why. Bernt Stigum (1990) has proposed a methodology to delineate the source of failure from each
test, but when a test rejects, it still takes a creative discovery to improve a model. That insight may come from theory, institutional evidence, data knowledge, or inspiration. While
general-to-specific methodology provides guidelines for building encompassing models, advances between studies are inevitably simple-to-general, putting a premium on creative thinking.
A good initial specification of the general model is a major source of value added, making the rest relatively easy, and incredibly difficult otherwise.
That's correct. Research can be wasted if a key variable is omitted.
3.2 Equilibrium-correction Models and Cointegration
You already mentioned that you had presented an equilibrium-correction model at Sims's 1975 conference.
Yes, in [25], I presented an example that was derived from the long-run economic theory of consumers' expenditure, and I merely asserted that there were other ways to obtain stationarity than
differencing. Nonsense regressions are only a problem for static models, or for those patched up with autoregressive errors. If one begins with a general dynamic specification, it is relatively easy
to detect that there is no relationship between two unrelated random walks, y_t and z_t (say). A significant drawback of being away from the LSE was the difficulty of transporting software, so I did not run a Monte Carlo simulation to check this. Now it is easy to do so, and [229, Figure 1] shows the distributions of the t-statistics for the coefficients in the regression of:

Δy_t = β0 + β1 Δz_t + β2 y_{t-1} + β3 z_{t-1} + u_t,

where Δy_t = ε_t, Δz_t = ν_t, and ε_t and ν_t are each normal serially independent and are independent of each other. This simulation confirms my earlier claim about detecting nonsense regressions, but the t-statistic for the coefficient on the lagged dependent variable is skewed. While differencing the data imposes a common factor with a unit root, a model with differences and an equilibrium-correction term remains in
levels because it allows for a long-run relation. To explain this, DHSY explicitly distinguished between differencing as an operator and differencing as a linear transformation.
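A minimal simulation of that point, in the spirit of the experiment reported in [229, Figure 1] but with illustrative sample size and replications: the static levels regression of one random walk on another rejects far too often, whereas in the general dynamic (equilibrium-correction) form the t-statistic on the genuinely irrelevant Δz_t is roughly correctly sized.

```python
# Static "nonsense" regression of one random walk on another versus the same
# data analysed in a general dynamic (equilibrium-correction) form.
import numpy as np

rng = np.random.default_rng(1)
T, reps = 100, 5_000

def t_stats(y, X):
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b
    s2 = resid @ resid / (len(y) - X.shape[1])
    return b / np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))

static_t, dynamic_t = [], []
for _ in range(reps):
    y = np.cumsum(rng.standard_normal(T + 1))   # two unrelated random walks
    z = np.cumsum(rng.standard_normal(T + 1))
    # static levels regression: y_t on (1, z_t) -- the classic nonsense regression
    static_t.append(t_stats(y[1:], np.column_stack([np.ones(T), z[1:]]))[1])
    # general dynamic form: dy_t on (1, dz_t, y_{t-1}, z_{t-1})
    Xd = np.column_stack([np.ones(T), np.diff(z), y[:-1], z[:-1]])
    dynamic_t.append(t_stats(np.diff(y), Xd)[1])   # t-statistic on dz_t
    # (the t-statistic on y_{t-1}, column index 2, is the skewed one noted above)

print("P(|t| > 2), static levels regression:", (np.abs(static_t) > 2).mean())   # far above 0.05
print("P(|t| > 2), dz_t in dynamic form    :", (np.abs(dynamic_t) > 2).mean())  # near 0.05
```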
What was the connection between [25] and Clive's first papers on cointegration--Granger (1981) and Granger and Weiss (1983)?
At Sims's conference, Clive was skeptical about relating differences to lagged levels and doubted that the correction in levels could be stationary: differences of the data did not have a unit root,
whereas their lagged levels did. Investigating that issue helped Clive discover cointegration; see his discussion of [49], and see Phillips (1997).
Your interest in cointegration led to two special issues of the Oxford Bulletin, your book [104], and a number of papers--[61], [64], [78], [95], [98], and [136]--the last three also addressing
structural breaks.
The key insight was that fewer equilibrium corrections (say, r) than the number of decision variables (n) induced integrated-cointegrated data, which Søren Johansen (1988) formalized as reduced-rank feedbacks of combinations of levels onto growth rates. In the Granger representation theorem in Engle and Granger (1987), the data are I(1) because r < n, a situation that I had not thought about. So,
although DHSY was close in some ways, it was far off in others. In fact, I missed cointegration for a second time in [32], where I showed that `` nonsense regressions'' could be created and detected,
but I failed to formalize the latter. Cointegration explained many earlier results. For instance, in Denis's 1964 equilibrium relationship involving real wages relative to productivity, the measured
disequilibrium fed back to determine future wage rates, given current inflation rates.
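A toy bivariate example of that insight, with illustrative parameter values: one equilibrium correction (r = 1) among two variables (n = 2) makes each series I(1) while their difference is stationary.

```python
# Bivariate vector equilibrium-correction model with reduced-rank feedback.
import numpy as np

rng = np.random.default_rng(6)
T = 500
alpha = np.array([-0.3, 0.1])     # feedback coefficients on the disequilibrium
beta = np.array([1.0, -1.0])      # cointegrating vector: y1 - y2

x = np.zeros((T, 2))
for t in range(1, T):
    ecm = beta @ x[t - 1]                          # lagged disequilibrium
    x[t] = x[t - 1] + alpha * ecm + rng.standard_normal(2)

y1, y2 = x[:, 0], x[:, 1]
print("sample std of the levels (wander widely):", round(y1.std(), 1), round(y2.std(), 1))
print("sample std of y1 - y2 (stays bounded)   :", round((y1 - y2).std(), 2))
```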
Peter Phillips (1986, 1987), Jim Stock (1987), and others (such as Chan and Wei (1988)) were also changing the mathematical technology by using Weiner integrals to represent the limiting
distributions of unit-root processes. Anindya Banerjee, Juan Dolado, John Galbraith, and I thought that the power and generality of that new approach would dominate the future of econometrics,
especially since some proofs became easier, as with the forecast-error distributions in [139]. Our interest in cointegration resulted in [104], following Benjamin Disraeli's reputed remark that `` if
you want to learn about a subject, write a book about it.''
Or edit a special issue on it!
3.3 Exogeneity
Exogeneity takes us back to Vienna in August 1977 at the European Econometric Society Meeting.
Discussions of the concept of exogeneity abounded in the econometrics literature, but for me, the real insight came from the paper presented in Vienna by Jean-François Richard and published as Richard (1980). Although the concept of exogeneity needed clarifying, the audience at the Econometric Society meeting seemed bewildered, since few could relate to Jean-François's likelihood
factorizations and sequential cuts. Rob Engle was also interested in exogeneity, so, when he visited LSE and CORE shortly after the Vienna meeting, the three of us analyzed the distinctions between
various kinds of exogeneity and developed more precise definitions. We all attended a Warwick workshop, with Chris Sims and Ed Prescott among the other econometricians, and we argued endlessly.
Reactions to our formalization of exogeneity suggested that fundamental methodological issues were in dispute, including how one should model, what the form of models should be, what modeling
concepts were, and even what appropriate model concepts were. Since I was working with Jean-François and Rob, I visited their respective institutions (CORE and UCSD) during 1980-1981. My time at both
locations was very stimulating. The coffee lounge at CORE saw many long discussions about the fundamentals of modeling with Knud Munk, Louis Phlips, Jean-Pierre Florens, Michel Mouchart, and Jacques
Drèze (plus Angus Deaton during his visit). In San Diego, we argued more about technique.
Your paper [44] with Rob and Jean-François on exogeneity went through several revisions before being published, and many of the examples from the CORE discussion paper were dropped.
Regrettably so. Exogeneity is a difficult notion and is prone to ambiguities, whereas examples can help reduce the confusion. The CORE version was written in a cottage in Brittany, which the Hendrys
and Richards shared that summer. Jean-François even worked on it while moving along the dining table as supper was being laid. The extension to unit-root processes in [130] shows that exogeneity has
yet further interesting implications.
How did your paper [106] on super exogeneity with Rob Engle come about?
Parameter constancy is a fundamental attribute of a model, yet predictive failure was all too common empirically. The ideal condition was super exogeneity, which meant valid conditioning for
parameters of interest that were invariant to changes in the distributions of the conditioning variables. Rob correctly argued that tests for super exogeneity and invariance were required, so we
developed some tests and investigated whether conditioning variables were valid, or whether they were proxies for agents' expectations. Invalid conditioning should induce nonconstancy, and that
suggested how to test whether agents were forward-looking or contingent planners, as in [76].
The idea is a powerful one logically, but there is no formal work on the class of paired parameter constancy tests in which we seek rejection for the forcing variables' model and non-rejection for
the conditional model.
That has not been formalized. Following Trevor Breusch (1986), tests of super exogeneity reject if there is nonconstancy in the conditional model, ensuring refutability. The interpretation of
non-rejection is less clear.
You reported simulation evidence in [100] with Carlo Favero.
That work was based on my realization in [76] that feedback and feedforward models are not observationally equivalent when structural breaks occur in marginal processes. Intercept shifts in the
marginal distributions delivered high power, but changes in the parameters of mean-zero variables were barely detectable. At the time, I failed to realize two key implications: the Lucas (1976)
critique could only matter if it induced location shifts; and predictive failure was rarely due to changed coefficients of zero-mean variables. More recently, I have developed these ideas in [183]
and [188].
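The logic can be illustrated with a small simulation in the spirit of those experiments; the DGPs, magnitudes, and the Chow-type predictive-failure test used here are assumptions for the sketch, not the designs in [100]. When the marginal process for z shifts, a conditional model stays constant under valid conditioning (contingent plans) but fails if y is really driven by agents' expectations of z.

```python
# Super-exogeneity logic in miniature: constancy of a conditional model when
# the marginal process for z shifts, under feedback versus feedforward DGPs.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
T, T1, beta, reps = 100, 80, 1.0, 2_000

def chow_predictive_p(y, X, T1):
    """p-value of the predictive-failure F-test over observations T1+1..T."""
    b, *_ = np.linalg.lstsq(X[:T1], y[:T1], rcond=None)
    rss1 = np.sum((y[:T1] - X[:T1] @ b) ** 2)
    bf, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ bf) ** 2)
    n2, k = len(y) - T1, X.shape[1]
    F = ((rss - rss1) / n2) / (rss1 / (T1 - k))
    return 1 - stats.f.cdf(F, n2, T1 - k)

def rejection_rate(feedforward):
    rej = 0
    for _ in range(reps):
        mu = np.where(np.arange(T) < T1, 0.0, 2.0)   # location shift in the marginal process
        z = mu + rng.standard_normal(T)
        e = 0.5 * rng.standard_normal(T)
        y = beta * (mu if feedforward else z) + e    # expectations- vs contingent-plan DGP
        X = np.column_stack([np.ones(T), z])
        rej += chow_predictive_p(y, X, T1) < 0.05
    return rej / reps

print("conditional model, valid conditioning :", rejection_rate(False))  # near the nominal 5%
print("conditional model, feedforward DGP    :", rejection_rate(True))   # rejects almost always
```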
In your forecasting books with Mike Clements--[163] and [170]--you discuss how shifts in the equilibrium's mean are the driving force for empirically detectable nonconstancy.
Interestingly, such a shift was present in DHSY, since inflation was needed to model the falling consumption-income ratio, which was the equilibrium correction. When inflation was excluded from our
model, predictive failure occurred because the equilibrium mean had shifted. However, we did not realize that logic at the time.
3.4 Model Development and Design
There are four aspects to model development. The first is model evaluation, as epitomized by GIVE (or what is now PcGive) in its role as a ``model destruction program.'' The second aspect is model
design. The third is encompassing, which is closely related to the theory of reduction and to the general-to-specific modeling strategy. The fourth concerns a practical difficulty that arises because
we may model locally by general to specific, but over time we are forced to model specific to general as new variables are suggested, new data accrue, and so forth.
On the first issue, Denis Sargan taught us that `` problems'' with residuals usually revealed model mis-specification, so tests were needed to detect residual autocorrelation, heteroscedasticity,
non-normality, and so on. Consequently, my mainframe econometrics program GIVE printed many model evaluation statistics. Initially, they were usually likelihood ratio statistics, but many were
switched to their Lagrange multiplier form, following the implementation of Silvey (1959) in econometrics by Ray Byron, Adrian Pagan, Rob Engle, Andrew Harvey, and others; see Godfrey (1988).
Why doesn't repeated testing lead to too many false rejections?
Model evaluation statistics play two distinct roles. In the first, the statistics generate one-off mis-specification tests on the general model. Because the general model usually has four or five
relevant, nearly orthogonal, aspects to check, a 1% significance level for each test entails an overall size of about 5% under the null hypothesis that the general model is well-specified.
Alternatively, a combined test could be used, and both approaches seem unproblematic. However, for any given nominal size for each test statistic, more tests must raise rejection frequencies under
the null. This cost has to be balanced against the probability of detecting a problem that might seriously impugn inference, where repeated testing (i.e., more tests) raises the latter probability.
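As a rough check on that figure, treating five checks each at the 1% level as independent gives an overall null rejection frequency of 1 − (1 − 0.01)^5 ≈ 0.049, which is the ``about 5%'' quoted above; independence is only an approximation, since the checks are merely nearly orthogonal.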
The second role of model evaluation statistics is to reveal invalid reductions from a congruent general model. Those invalid reductions are then not followed, so repeated testing here does not alter
the rejection frequencies of the model evaluation tests.
The main difficulty with model evaluation in the first sense is that rejection merely reveals an inappropriate model. It does not show how to fix the problem. Generalizing a model in the rejected
direction might work, but that inference is a non sequitur. Usually, creative insight is required, and re-examining the underlying economics may provide that. Still, the statistical properties of any
new model must await new data for a Neyman-Pearson quality-control check.
The empirical econometrics literature of the 1960s manifested covert design. For instance, when journal editors required that Durbin-Watson statistics be close to two, residual autocorrelation was
removed by fitting autoregressive errors. Such difficulties prompted the concept of explicit model design, leading us to consider what characteristics a model should have. In [43], Jean-François and
I formalized model concepts and the information sets against which to evaluate models, and we also elucidated the design characteristics needed for congruence.
If we knew the data generation process (DGP) and estimated its parameters appropriately, we would also obtain insignificant tests with the stated probabilities. So, as an alternative complementary
interpretation, successful model design restricts the model class to congruent outcomes, of which the DGP is a member.
Right. Congruence (a name suggested by Chris Allsopp) denotes that a model matches the evidence in all the directions of evaluation; and so the DGP is congruent with itself. Surprisingly, the concept
of the DGP once caused considerable dispute, even though (by analogy) all Monte Carlo studies needed a mechanism for generating their data. The concept's acceptance was helped by clarifying that
constant parameters are not an intrinsic property of an economics DGP. Also, the theory of reduction explains how marginalization, sequential factorization, and conditioning in the enormous DGP for
the entire economy entails the joint density of the subset of variables under analysis; see [69] and also [113] with Steven Cook.
That joint density of the subset of variables is what Christophe Bontemps and Mizon (2003) have since called the local DGP. The local DGP can be transformed to have homoscedastic innovation errors,
so congruent models are the class to search; and Bontemps and Mizon prove that a model is congruent if it encompasses the local DGP. Changes at a higher level in the full DGP can induce nonconstant
parameters in the local DGP, putting a premium on good selection of the variables.
One criticism of the model design approach, which is also applicable to pre-testing, is that test statistics no longer have their usual distributions. How do you respond to that?
For evaluation tests, that view is clearly correct, whether the testing is within a given study or between different studies. When a test's rejection leads to model revision and only
``insignificant'' tests are reported, tests are clearly design criteria. However, their insignificance on the initial model is informative about that model's goodness.
So, in model design, insignificant test statistics are evidence of having successfully built the model. What role does encompassing play in such a strategy?
In experimental disciplines, most researchers work on the data generated by their own experiments. In macroeconomics, there is one data set with a proliferation of models thereof, which raises the
question of congruence between any given model and the evidence provided by rival models. The concept of encompassing was present in DHSY and HUS, but primarily as a tool for reducing model
proliferation. The concept became clearer in [43] and [45], but it was only formalized as a test procedure in Mizon and Richard (1986). Although the idea surfaced in David Cox (1962), David
emphasized single degree-of-freedom tests for comparing non-nested models, as did Hashem Pesaran (1974), whose paper I had handled as editor for the Review of Economic Studies. I remain convinced of
the central role of encompassing in model evaluation, as argued in [75], [83], [118], and [142]. Kevin Hoover and Stephen Perez (1999) suggested that encompassing be used to select a dominant final
model from the set of terminal models obtained by general-to-specific simplifications along different paths. That insight sustains multi-path searches and has been implemented in [175] and [206].
More generally, in a progressive research strategy, encompassing leads to a well-established body of empirical knowledge, so new studies need not start from scratch.
As new data accumulate, however, we may be forced to model specific to general. How do we reconcile that with a progressive research strategy?
As data accrue over time, we can uncover both spurious and relevant effects because spurious variables have central t-statistics, whereas relevant variables have noncentral t-statistics that drift in
one direction. By letting the model expand appropriately and by letting the significance level go to zero at a suitable rate, the probability of retaining the spurious effects tends to zero
asymptotically, whereas the probability of retaining the relevant variables tends to unity; see Hannan and Quinn (1979) and White (1990) for stationary processes. Thus, modeling from specific to
general between studies is not problematic for a progressive research strategy, provided one returns to the general model each time. Otherwise, [172] showed that successively corroborating a sequence
of results can imply the model's refutation. Still, we know little about how well a progressive research strategy performs when there are intermittent structural breaks.
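A small simulation of that argument, with an illustrative effect size and a Schwarz/BIC-type critical value that grows slowly with the sample size: retention of the irrelevant regressor falls towards zero while retention of the relevant one rises towards one.

```python
# Retention probabilities as the sample grows and the significance level shrinks.
import numpy as np

rng = np.random.default_rng(4)
reps = 2_000

for T in (50, 200, 800, 3200):
    crit = np.sqrt(np.log(T))               # BIC-type |t| threshold for one parameter
    keep_relevant = keep_irrelevant = 0
    for _ in range(reps):
        x1 = rng.standard_normal(T)          # relevant regressor (coefficient 0.2)
        x2 = rng.standard_normal(T)          # irrelevant regressor
        y = 0.2 * x1 + rng.standard_normal(T)
        X = np.column_stack([np.ones(T), x1, x2])
        b, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ b
        se = np.sqrt(resid @ resid / (T - 3) * np.diag(np.linalg.inv(X.T @ X)))
        t = np.abs(b / se)
        keep_relevant += t[1] > crit
        keep_irrelevant += t[2] > crit
    print(f"T={T:5d}: P(keep relevant) = {keep_relevant / reps:.2f}, "
          f"P(keep irrelevant) = {keep_irrelevant / reps:.2f}")
```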
3.5 Money Demand
You have analyzed UK broad money demand on both quarterly and annual data, and quarterly narrow money demand for both the United Kingdom and the United States. In your first money-demand study [29],
you and Grayham Mizon were responding to work by Graham Hacche (1974) at the Bank of England. How did that arise?
Tony Courakis (1978) had submitted a comment to the Economic Journal criticizing Hacche for differencing data in order to achieve stationarity. Grayham Mizon and I proposed testing the restrictions
imposed by differencing as an example of Denis's new common-factor tests--later published as Sargan (1980)--and we developed an equilibrium-correction representation for money demand, using the
Bank's data. The common-factor restriction in Hacche (1974) was rejected, and the equilibrium-correction term in our model was significant.
So, you assumed that the data were stationary, even though differencing was needed.
We implicitly assumed that both the equilibrium-correction term and the differences would be stationary, despite no concept of cointegration; and we assumed that the significance of the
equilibrium-correction term was equivalent to rejecting the common factor from differencing. Also, the Bank study was specific to general in its approach, whereas we argued for general-to-specific
modeling, which was the natural way to test common-factor restrictions using Denis's determinantal conditions. Denis's COMFAC algorithm was already included in GIVE, although Grayham's and my Monte
Carlo study of COMFAC only appeared two years later in [34].
Did Courakis (1978) and [29] change modeling strategies in the United Kingdom? What was the Bank of England's reaction?
The next Bank study--of M1 by Richard Coghlan (1978)--considered general dynamic specifications, but they still lacked an equilibrium-correction term. As I discussed in my follow-up [31], narrow
money acts as a buffer for agents' expenditures, but with target ratios for money relative to expenditure, deviations from which prompt adjustment. That target ratio should depend on the opportunity
costs of holding money relative to alternative financial assets and to goods, as measured by interest rates and inflation respectively. Also, because some agents are taxed on interest earnings, and
other agents are not, the Fisher equation cannot hold.
So your interest rate measure did not adjust for tax.
Right. [31] also highlighted the problems confronting a simple-to-general approach. Those problems include the misinterpretation of earlier results in the modeling sequence, the impossibility of
constructively interpreting test rejections, the many expansion paths faced, the unknown stopping point, the collapse of the strategy if later mis-specifications are detected, and the poor properties
that result from stopping at the first non-rejection--a criticism dating back to Anderson (1962).
A key difficulty with earlier UK money-demand equations had been parameter nonconstancy. However, my equilibrium-correction model was constant over a sample with considerable turbulence after
Competition and Credit Control regulations in 1971.
[31] also served as the starting point for a sequence of papers on UK and US M1. You returned to modeling UK M1 again in [60] and [94].
That research resulted in a simple representation for UK M1 demand, despite a very general initial model, with only four variables representing opportunity costs against goods and other assets,
adjustment costs, and equilibrium adjustment.
In 1982, Milton Friedman and Anna Schwartz published their book Monetary Trends in the United States and the United Kingdom, and it had many potential policy implications. Early the following year,
the Bank asked you to evaluate the econometrics in Friedman and Schwartz (1982) for the Bank's panel of academic consultants, leading to Hendry and Ericsson (1983) and eventually to [93].
You were my research officer then. Friedman and Schwartz's approach was deliberately simple-to-general, commencing with bivariate regressions, generalizing to trivariate regressions, etc. By the
early 1980s, most British econometricians had realized that such an approach was not a good modeling strategy. However, replicating their results revealed numerous other problems as well.
Figure 1: A comparison of Friedman and Schwartz's graph of UK velocity with Hendry and Ericsson's graph of UK velocity.
I recall that one of those was simply graphing velocity.
Yes. The graph in Friedman and Schwartz (1982, p. 178, Chart 5.5) made UK velocity look constant over their century of data. I initially questioned your plot of UK velocity--using Friedman and
Schwartz's own annual data--because your graph showed considerable nonconstancy in velocity. We discovered that the discrepancy between the two graphs arose mainly because Friedman and Schwartz
plotted velocity allowing for a range of 1 to 10, whereas UK velocity itself only varied between 1 and 2.4. Figure 1 reproduces the comparison.
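The scaling point is easy to reproduce; the sketch below uses a synthetic stand-in series (the actual Friedman and Schwartz data are not included here) that varies between roughly 1 and 2.4, plotted once on a 1-10 axis and once on a tight axis.

```python
# How the choice of y-axis range can make a variable look constant.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(5)
years = np.arange(1875, 1976)
# synthetic stand-in: slowly varying between roughly 1 and 2.4
velocity = 1.7 + 0.7 * np.sin((years - 1875) / 16.0) + 0.1 * rng.standard_normal(years.size)

fig, axes = plt.subplots(1, 2, figsize=(9, 3), sharex=True)
for ax, ylim in zip(axes, [(1, 10), (1, 2.5)]):
    ax.plot(years, velocity)
    ax.set_ylim(*ylim)
    ax.set_title(f"Same series, y-axis range {ylim[0]}-{ylim[1]}")
plt.tight_layout()
plt.show()
```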
Testing Friedman and Schwartz's equations revealed a considerable lack of congruence. Friedman and Schwartz phase-averaged their annual data in an attempt to remove the business cycle, but phase
averaging still left highly autocorrelated, nonstationary processes. Because filtering (such as phase averaging) imposes dynamic restrictions, we analyzed the original annual data. Our paper for the
Bank of England panel started a modeling sequence, with contributions from Andrew Longbottom and Sean Holly (1985) and Alvaro Escribano (1985).
Shortly after the meeting of the Bank's panel of academic consultants, there was considerable press coverage. Do you recall how that occurred? The Guardian newspaper started the debate.
As background, monetarism was at its peak. Margaret Thatcher--the Prime Minister--had instituted a regime of monetary control, as she believed that money caused inflation, precisely the view put
forward by Friedman and Schwartz. From this perspective, a credible monetary tightening would rapidly reduce inflation because expectations were rational. In fact, inflation fell slowly, whereas
unemployment leapt to levels not seen since the 1930s. The Treasury and Civil Service Committee on Monetary Policy (which I had advised in [36] and [37]) had found no evidence that monetary expansion
was the cause of the post-oil-crisis inflation. If anything, inflation caused money, whereas money was almost an epiphenomenon. The structure of the British banking system made the Bank of England a
``lender of the first resort,'' and so the Bank could only control the quantity of money by varying interest rates.
At the time, Christopher Huhne was the economics editor at the Guardian. He had seen our critique, and he deemed our evidence central to the policy debate.
As I recall, when Huhne's article hit the press, your phone rang for hours on end.
That it did. There were actually two articles about Friedman and Schwartz (1982) in the Guardian on December 15, 1983. On page 19, Huhne had written an article that summarized--in layman's terms--our
critique of Friedman and Schwartz (1982). Huhne and I had talked at length about this piece, and it provided an accurate statement of Hendry and Ericsson (1983) and its implications. In addition--and
unknown to us--the Guardian decided to run a front-page editorial on Friedman and Schwartz with the headline Monetarism's guru `distorts his evidence'. That headline summarized Huhne's view that it
was unacceptable for Friedman and Schwartz to use their data-based dummy variable for 1921-1955 and still claim parameter constancy of their money-demand equation. Rather, that dummy variable
actually implied nonconstancy because the regression results were substantively different in its absence. That nonconstancy undermined Friedman and Schwartz's policy conclusions.
Charles Goodhart (1982) had also questioned that dummy.
It is legitimate to question any data-based dummy selected for a period unrelated to historical events. Whether that dummy ``distorted the evidence'' is less obvious, since econometricians often use
indicators to clarify evidence or to proxy for unobserved variables. In its place, we used a nonlinear equilibrium correction, which had two equilibria, one for normal times and one for disturbed
times (although one could hardly call the First World War ``normal''). Like Friedman and Schwartz, we did include a dummy for the two world wars that captured an increase in demand, probably due to
increased risks. Huhne later did a TV program about the debate, spending a day at my house filming.
Hendry and Ericsson (1983) was finally published nearly eight years later in [93], after a prolonged editorial process. Just when we thought the issue was laid to rest, Chris Attfield, David Demery,
and Nigel Duck (1995) claimed that our equation had broken down on data extended to the early 1990s whereas the Friedman and Schwartz specification was constant.
To compile a coherent statistical series over a long run of history, Attfield, Demery, and Duck had spliced several different money measures together; but they had not adjusted the corresponding
measures of the opportunity cost. With that combination, our model did indeed fail. However, as shown in [166], our model remained constant over the whole sample once we used an appropriate measure
of opportunity cost, whereas the updated Friedman and Schwartz model failed. Escribano (2004) updates our equation through 2000 and confirms its continued constancy.
Your model of US narrow money demand also generated controversy, as when you presented it at the Fed.
Yes, that research appeared as [96] with Yoshi Baba and Ross Starr. After the supposed break-down in US money demand recorded by Steve Goldfeld (1976), it was natural to implement similar models for
the United States. Many new financial instruments had been introduced, including money market mutual funds, CDs, and NOW and SuperNOW accounts, so we hypothesized that these non-modeled financial
innovations were the cause of the instability in money demand. Ross also thought that long-term interest-rate volatility had changed the maturity structure of the bond market, especially when the Fed
implemented its New Operating Procedures. A high long rate was no longer a signal to buy because high interest rates were associated with high variances, and interest rates might go higher still and
induce capital losses. This situation suggested calculating a certainty-equivalent long-run interest rate--that is, the interest rate adjusted for risk.
Otherwise, the basic approach and specifications were similar. We treated M1 as being determined by the private sector, conditional on interest rates set by the Fed, although the income elasticity
was one half, rather than unity, as in the United Kingdom. Seminars at the Fed indeed produced a number of challenges, including the claim that the Fed engineered a monetary expansion for Richard
Nixon's re-election. Dummies for that period were insignificant, so agents were willing to hold that money at the interest rates set, confirming valid conditioning. Another criticism concerned the
lag structure, which represented average adjustment speeds in a large and complex economy.
Some economists still regard the final formulation in [96] as too complicated. Sometimes, I think that they believe the world is inherently simple. Other times, I think that they are concerned about
data mining. Have you had similar reactions?
Data mining could never spuriously produce the sizes of t-values we found, however many search paths were explored. The variables might proxy unmodeled effects, but their large t-statistics could not
arise by chance.
3.6 Dynamic Econometrics
That takes us to your book Dynamic Econometrics [127], perhaps the largest single project of your professional career so far. This book had several false starts, dating back to just after you had
finished your PhD.
In 1972, the Italian public company IRI invited Pravin Trivedi and myself to publish (in Italian) a set of lectures on dynamic modeling. In preparing those lectures, we became concerned that
conventional econometric approaches camouflaged mis-specification. Unfortunately, the required revisions took more than two decades!
Your lectures with Pravin set out a research agenda that included a general analysis of mis-specification (as in [18]), the plethora of estimators (unified in [21]), and empirical model design
(systematized in [43], [46], [49], and [69]).
Building on the success of [11] in explaining the simulation results in Goldfeld and Quandt (1972), [18] used a simple analytic framework to investigate the consequences of various
mis-specifications. As I mentioned earlier (in Section 1.1), I had discovered the estimator generating equation while teaching. To round off the book, I developed some substantive illustrations of
empirical modeling, including consumers' expenditure, and housing and the construction sector (which appeared as [59] and [65]). However, new econometric issues continually appeared. For instance,
how do we model capital rationing, or the demand for mortgages when only the supply is observed, or the stocks and flows of durables? I realized that I could not teach students how to do applied
econometrics until I had sorted out at least some of these problems.
Did you see that as the challenge in writing the book?
Yes. The conventional approach to modeling was to write down the economic theory, collect variables with the same names (such as consumers' expenditure for consumption), develop mappings between the
theory constructs and the observations, and then estimate the resulting equations. I had learned that that approach did not work. The straitjacket of the prevailing approach meant that one understood
neither the data processes nor the behavior of the economy. I tried a more data-based approach, in which theory provided guidance rather than a complete structure, but that approach required
developing concepts of model design and modeling strategy.
You again attempted to write the book when you were visiting Duke University annually in the mid- to late-1980s.
Yes, with Bob Marshall and Jean-François Richard. By that time, common factors, the theory of reduction, equilibrium correction and cointegration, encompassing, and exogeneity had clarified the
empirical analysis of individual equations; and powerful software with recursive estimators implemented the ideas. However, modeling complete systems raised new issues, all of which had to be made
operational. Writing the software package PcFiml enforced beginning from the unrestricted system, checking its congruence, reducing to a model thereof, testing over-identification, and encompassing
the VAR; see [79], [110], and [114]. This work matched parallel developments on system cointegration by Søren, Katarina, and others in Copenhagen.
Analyses were still needed of general-to-specific modeling and diagnostic testing in systems (which eventually came in [122]), judging model reliability (my still unpublished Walras-Bowley lecture),
and clarifying the role of inter-temporal optimization theory. That was a daunting list! Bob and Jean-François became more interested in auctions and experimental economics, so their co-authorship lapsed.
I remember receiving your first full draft of Dynamic Econometrics for comment in the late 1980s.
That draft would not have appeared without help from Duo Qin and Carlo Favero. Duo transcribed my lectures, based on draft chapters, and Carlo drafted answers for the solved exercises. The final
manuscript still took years more to complete.
Dynamic Econometrics lacks an extensive discussion of cointegration. That is a surprising omission, given your interest in cointegration and equilibrium correction.
All the main omissions in Dynamic Econometrics were deliberate, as they were addressed in other books. Cointegration had been treated in [104]; Monte Carlo in [53] and [95]; numerical issues and
software in [81], [99], and [115]; the history of econometrics in [132]; and forecasting was to come, presaged by [112]. That distribution of topics let Dynamic Econometrics focus on modeling.
Because (co)integrated series can be reduced to stationarity, much of Dynamic Econometrics assumes stationarity. Other forms of nonstationarity would be treated later in [163] and [170]. Even as it
stood, Dynamic Econometrics was almost 1,000 pages long when published!
You dedicated Dynamic Econometrics to your wife Evelyn and your daughter Vivien. How have they contributed to your work on econometrics?
I fear that we tread on thin ice here, whatever I say! Evelyn and Vivien have helped in numerous ways, both directly and indirectly, such as by facilitating time to work on ideas and time to visit
collaborators. They have also tolerated numerous discussions on econometrics, corrected my grammar, and, in Vivien's case, questioned my analyses and helped debug the software. As you know, Vivien is
now a professional economist in her own right.
3.7 Monte Carlo Methodology
Let's now turn to three of the omissions from Dynamic Econometrics: Monte Carlo, the history of econometrics, and forecasting.
Pravin introduced me to the concepts of Monte Carlo analysis, based on Hammersley and Handscomb (1964). I implemented some of their procedures, particularly antithetic variates (AVs) in [8] with
Pravin, and later control variates in [16] with Robin Harrison.
I think that it is worth repeating your story about antithetic variates.
Pravin and I were graduate students at the time. We were investigating forecasts from estimated dynamic models and were using AVs to reduce simulation uncertainty. Approximating moving-average errors
by autoregressive errors entailed inconsistent parameter estimates and hence, we thought, biased forecasts. To check, we printed the estimated AV bias for each Monte Carlo simulation of a static
model with a moving-average error. We got page upon page of zeros, and a scolding from the computing center for wasting paper and computer time. In fact, we had inadvertently discovered that, when an
estimator is invariant to the sign of the data but forecast errors change sign when the data do, then the average of AV pairs of forecast errors is precisely zero: see [8]. The idea works for
symmetric distributions and hence for generalized least squares with estimated covariance matrices; see Kakwani (1967). I have since tried other approaches, as in [34] and [58].
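A minimal sketch of that sign-invariance argument in Python, assuming a simple AR(1) design rather than the models studied in [8]: flipping the sign of every disturbance flips the sign of the data and of the forecast error, but leaves the OLS estimate unchanged, so each antithetic pair of forecast errors averages to exactly zero.

import numpy as np

rng = np.random.default_rng(42)
T, rho = 50, 0.5

def simulate(eps, rho):
    """AR(1) path y_t = rho*y_{t-1} + eps_t with y_0 = 0."""
    y = np.zeros(len(eps))
    for t in range(1, len(eps)):
        y[t] = rho * y[t - 1] + eps[t]
    return y

def forecast_error(eps, eps_next, rho):
    y = simulate(eps, rho)
    # OLS estimate of rho: invariant to replacing y by -y
    rho_hat = (y[1:] @ y[:-1]) / (y[:-1] @ y[:-1])
    actual = rho * y[-1] + eps_next          # one-step-ahead realization
    return actual - rho_hat * y[-1]          # forecast error

eps, eps_next = rng.standard_normal(T), rng.standard_normal()
e_plus = forecast_error(eps, eps_next, rho)
e_minus = forecast_error(-eps, -eps_next, rho)   # antithetic replication
print(e_plus, e_minus, e_plus + e_minus)         # the sum is exactly zero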
Monte Carlo has been important for developing econometric methodology--by emphasizing the role of the DGP--and in your teaching, as reported in [73] and [92].
In Monte Carlo, knowledge of the DGP entails all subsequent results using data from that DGP. The same logic applies to economic DGPs, providing an essential step in the theory of reduction, and
clarifying mis-specification analysis and encompassing. Monte Carlo also convinced me that the key issue was specification, rather than estimation. In Monte Carlo response surfaces, the relative
efficiencies of estimators were dominated by variations between models, a view reinforced by my later forecasting research. Moreover, deriving control variates yielded insights into what determined
the accuracy of asymptotic distribution theory. The software package PcNaive facilitates the live classroom use of Monte Carlo simulation to illustrate and test propositions from econometric theory;
see [196]. A final major purpose of Monte Carlo was to check software accuracy by simulating econometric programs for cases where results were known.
Did you also use different software packages to check them against each other?
Yes. The Monte Carlo package itself had to be checked, of course, especially to ensure that its random number generator was i.i.d. uniform.
3.8 The History of Econometrics
How did you become interested in the history of econometrics?
Harry Johnson and Roy Allen sold me their old copies of Econometrica, which went back to the first volume in 1933. Reading early papers such as Haavelmo (1944) showed that textbooks focused on a
small subset of the interesting ideas and ignored the evolution of our discipline. Dick Stone agreed, and he helped me to obtain funding from the ESRC. By coincidence, Mary Morgan had lost her job at
the Bank of England when Margaret Thatcher abolished exchange controls in 1979, so Mary and I commenced work together. Mary was the optimal person to investigate the history objectively, undertaking
extensive archival research and leading to her superb book, Morgan (1990). We had the privilege of (often jointly) interviewing many of our discipline's founding fathers, including Tjalling Koopmans,
Ted Anderson, Gerhard Tintner, Jack Johnston, Trygve Haavelmo, Herman Wold, and Jan Tinbergen. The interviews with the latter three provided the basis for [84], [123], and [146]. Mary and I worked on
[82] and also collated many of the most interesting papers for [132]. Shortly afterwards, Duo Qin (1993) studied the more recent history of econometrics through to about the mid-1970s.
Your interest must have also stimulated some of Chris Gilbert's work.
I held a series of seminars at Nuffield to discuss the history of econometrics with many who published on the topic, such as John Aldrich, Chris, Mary, and Duo. It was fascinating to re-examine the
debates about Frisch's confluence analysis, between Keynes and Tinbergen, etc. On the latter, I concluded that Keynes was wrong, rather than right, as many believe. Keynes assumed that empirical
econometrics was impossible without knowing the answer in advance. If that were true generally, science could never have progressed, whereas in fact it has.
You also differ markedly with the profession's view on another major debate--the one between Koopmans and Vining on ``measurement without theory.''
As [132] reveals, the profession has wrongly interpreted that debate's implications. Perhaps this has occurred because the debate is a ``classic''--something that nobody reads but everybody cites.
Koopmans (1947) assumed that economic theory was complete, correct, and unchanging, and hence formed an optimal basis for econometrics. However, as Rutledge Vining (1949) noted, economic theory is
actually incomplete, abstract, and evolving, so the opposite inference can be deduced. Koopmans's assumption is surprising because Koopmans himself was changing economic theory radically through his
own research. Economists today often use theories that differ from those that Koopmans alluded to, but still without concluding that Koopmans was wrong. However, absent Koopmans's assumption, one
cannot justify forcing economic-theory specifications on data.
3.9 Economic Policy and Government Interactions
London gave ready access to government organizations, and LSE fostered frequent interactions with government economists. There is no equivalent academic institution in Washington with such close
government contacts. You have had long-standing relationships with both the Treasury and the Bank of England.
The Treasury's macro-econometric model had a central role in economic policy analysis and forecasting, so it was important to keep its quality as high as feasible with the resources available. The
Treasury created an academic panel to advise on their model, and that panel met regularly for many years, introducing developments in economics and econometrics, and teaching modeling to their
recently hired economists.
Also, DHSY attracted the Treasury's attention. The negative effect of inflation on consumers' expenditure--approximating the erosion of wealth--entailed that if stimulatory fiscal policy increased
inflation, the overall outcome was deflationary. Upon replacing the Treasury's previous consumption function with DHSY, many multipliers in the Treasury model changed sign, and debates followed about
what were the correct and wrong signs for such multipliers. Some economists rationalized these signs as being due to forward-looking agents pre-empting government policy, which then had the opposite
effect from the previous ``Keynesian'' predictions.
The Bank of England also had an advisory panel. My housing model showed large effects on house prices from changes in outstanding mortgages because the mortgage market was credit-constrained, so (in
the mid-1980s) I served on the Bank's panel, examining equity withdrawal from the housing market and the consequential effect of housing wealth on expenditure and inflation. Civil servants and
ministers interacted with LSE faculty on parliamentary select committees as well. Once, in a deputation with Denis Sargan and other LSE economists, we visited Prime Minister Callaghan to explain the
consequences of expansionary policies in a small open economy.
You participated in two select committees, one on monetary policy and one on economic forecasting.
I suspect that my notoriety was established by [32], my paper nicknamed ``Alchemy,'' which was even discussed in Parliament for deriding the role of money. Shortly after [32] appeared, a Treasury and
Civil Service Committee on monetary policy was initiated because many Members of Parliament were unconvinced by Margaret Thatcher's policy of monetary control, and they sought the evidential basis
for that policy. The committee heard from many of the world's foremost economists. Most of the evidence was not empirical but purely theoretical, being derived from simplified economic models from
which their proprietor deduced what must happen. As the committee's econometric advisor, I collected what little empirical evidence there was, most of it from the Treasury. The Treasury, despite
arguing the government's case, could not establish that money caused inflation. Instead, it found evidence that devaluations, wage-price spirals, excess demands, and commodity-price shocks mattered;
see [36] and [37].
Those testimonies emphasized theory relative to empirical evidence--a more North American approach.
Many of those presenting evidence were North American, but several UK economists also used pure theory. Developing sustainable econometric evidence requires considerable time and effort, which is
problematic for preparing memoranda to a parliamentary committee. Most of my empirical studies have taken years.
Surprisingly, evidence dominated theory in the 1991 enquiry into official economic forecasting; see [91]. There was little relevant theory, but there was no shortage of actual forecasts or studies of
them. There were many papers on statistical forecasting, but few explicitly on economic forecasting for large, complex, nonstationary systems in which agents could change their behavior. Forecasts
from different models frequently conflicted, and the underlying models often suffered forecast failure. As Makridakis and Hibon (2000) and [191] argue, those realities could not be explained within
the standard paradigm that forecasts were the conditional expectations. That enquiry triggered my interest in developing a viable theory of forecasting. Even after numerous papers--starting with
[124], [125], [137], [138], [139], and [141]--that research program is still ongoing.
You have also interacted with government on the preparation and quality of national statistics.
In the mid-1960s, I worked on National Accounts at the Central Statistical Office with Jack Hibbert and David Flaxen. Attributing components of output to sectors, calculating output in constant
prices, and aggregating the components to measure GNP was an enlightening experience. Most series were neither chained nor Divisia, but Laspeyres, and updated only intermittently, often inducing
changes in estimated relationships. More recently, in [179] and [190] with Andreas Beyer and Jurgen Doornik, I have helped create aggregate data series for a synthetic Euroland. Data accuracy is
obviously important to any approach that emphasizes empirical evidence, and I had learned that, although macro statistics were imperfect, they were usable for statistical analysis. For example,
consumption and income were revised jointly, essentially maintaining cointegration between them.
Is that because the relationship is primarily between their nominal values--which alter less on updating--and involves prices only secondarily?
Yes. Ian Harnett (1984) showed that the price indices nearly cancel in the log ratio, which approximates the long-run outcome. However, occasional large revisions can warp the evidence. In the early
1990s, the Central Statistical Office revised savings rates by as much as 8 percentage points in some quarters (from 12% to 4%, say), compared to equation standard errors of about 1%.
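The near-cancellation that Harnett documented follows from an accounting identity. Writing nominal consumption and income as real quantities times deflators, C = c P_c and Y = y P_y,

\log(C/Y) = \log(c/y) + \log(P_c/P_y),

so revisions that move the two deflators together leave the log ratio--and hence the long-run relation--largely intact.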
In unraveling why these revisions were made, we uncovered mistakes in how the data were constructed. In particular, the doubling of the value-added tax (VAT) in the early 1980s changed the relation
between the expenditure, output, and income measures of GNP. Prior to the increase in VAT, some individuals had cheated on their income tax but could not do so on expenditure taxes, so the
expenditure measure had been the larger. That relationship reversed after VAT rose to 17.5%, but the statisticians wrongly assumed that they had mis-measured income earlier. Such drastic revisions to
the data led me to propose that the recently created Office of National Statistics form a panel on the quality of economic statistics, and the ONS agreed. The panel has since discussed such issues as
data measurement, revision, seasonal adjustment, and national income accounting.
3.10 The Theory of Economic Forecasting
The forecast failure in 1968 motivated your research on methodology. What has led you back to investigate ex ante forecasting?
That early failure dissuaded me from real-time forecasting, and it took 25 years to understand its message. In the late 1970s, I investigated ex post predictive failure in [31]. Later, in [62] with
Yock Chong and also in [67], I looked at forecasting from dynamic systems, mainly to improve our power to test models. In retrospect, these two papers suggest much more insight than we had at the
time--we failed to realize the implications of many of our ideas.
In an important sense, policy rekindled my interest in forecasting. The Treasury missed the sharp downturn in 1989, having previously missed the boom from 1987, and the resulting policy mistakes
combined to induce high inflation and high unemployment. Mike Clements and I then sought analytical foundations for ex ante forecast failure when the economy is subject to structural breaks, and
forecasts are from mis-specified and inconsistently estimated models that are based on incorrect economic theories and selected from inaccurate data. Everything was allowed to be wrong, but the
investigator did not know that. Despite the generality of this framework, we derived some interesting theorems about economic forecasting, as shown in [105], [120], and [121]. The theory's empirical
content matched the historical record, and it suggested how to improve forecasting methods.
Surprisingly, estimation per se was not a key issue. The two important features were allowing for mis-specified models and incorporating structural change in the DGP.
Yes. Given that combination, we could disprove the theorem that causal variables must beat non-causal variables at forecasting. Hence, extrapolative methods could win at forecasting, as shown in
[171]. As [187] and [188] considered, that result suggests different roles for econometric models in forecasting and in economic policy, with causality clearly being essential in the latter.
The implications are fundamental. Ex ante forecast failure should not be used to reject models, as happened after the first oil crisis; see [159]. An almost perfect model could both forecast badly
and be worse than an extrapolative procedure, so the debate between Box-Jenkins models and econometric models needs reinterpretation. In [162], we also came to realize a difference between
equilibrium correction and error correction. The first induces cointegration, whereas in the latter a model adjusts to eliminate forecast errors. Devices like random walks and exponentially weighted
moving averages embody error correction, whereas cointegrated systems--which have equilibrium correction--will forecast systematically badly when an equilibrium mean shifts, since they continue to
converge to the old equilibrium. This explained why the Treasury's cointegrated system had performed so badly in the mid-1980s, following the sharp reduction in UK credit rationing. It also helped us
demonstrate in [138] the properties of intercept corrections to offset such shifts. Most recently, [204] offers an exposition and [210] a compendium.
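A minimal simulation sketch of that distinction in Python (an illustrative scalar DGP, not a model from [162] or [138]): once the equilibrium mean shifts, a forecaster that keeps correcting toward the old equilibrium is systematically biased, whereas the random walk, which error-corrects by construction, is not.

import numpy as np

rng = np.random.default_rng(7)
T, alpha = 200, 0.3
mu = np.where(np.arange(T) < 150, 0.0, 5.0)   # equilibrium mean shifts at t = 150
y = np.zeros(T)
for t in range(1, T):
    y[t] = y[t - 1] - alpha * (y[t - 1] - mu[t]) + rng.standard_normal()

post = np.arange(151, T)                      # one-step forecasts after the shift
eqcm_fc = y[post - 1] - alpha * (y[post - 1] - 0.0)  # still targets the OLD mean
rw_fc = y[post - 1]                           # random walk: no equilibrium to miss
print("EqCM mean forecast error:", np.mean(y[post] - eqcm_fc))
print("RW   mean forecast error:", np.mean(y[post] - rw_fc))

With these settings the equilibrium-correction errors average about alpha times the shift (here roughly 1.5), while the random-walk errors average near zero.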
Are you troubled that the best explanatory model need not be the best for forecasting, and that the best policy model could conceivably be different from both, as suggested in [187]?
Some structural breaks--such as shifts in equilibrium means--are inimical to forecasts from econometric models but not from robust devices, which do not explain behavior. Such shifts might not affect
the relevant policy derivatives. For example, the effect of interest rates on consumers' expenditure could be constant, despite a shift in the target level of savings due to (say) changed government
provisions for health in old age. After the shift, changing the interest rate still will have the expected policy effect, even though the econometric model is mis-forecasting. Because we could
robustify econometric models against such forecast failures, it may prove possible to use the same baseline causal econometric model for forecasting and for policy. If the econometric model alters
after a policy experiment, then at least we learn that super exogeneity is lacking.
There was considerable initial reluctance to fund such research on forecasting, with referees deeming the ideas as unimplementable. Unfortunately, such attitudes have returned, as the ESRC has
recently declined to support our research on this topic. One worries about their judgement, given the importance of forecasting in modern policy processes, and the lack of understanding of many
aspects of the problem even after a decade of considerable advances.
4 Econometric Software
4.1 The History and Roles of GIVE and PcGive
In my MSc course, you enumerated three reasons for having written the computer package GIVE. The first was to facilitate your own research, seeing as many techniques were not available in other
packages. The second was to ensure that other researchers did not have the excuse of unavailability--more controversial! The third was for teaching.
Non-operational econometric methods are pointless, so computer software must be written. Early versions of GIVE demonstrated the computability of FIML for systems with high-order vector
autoregressive errors and latent-variable structures, as in [33]; [174] and [218] provide a brief history. In those days, code was on punched cards. I once dropped my box off a bus and spent days
sorting it out.
You dropped your box of cards off a bus?
The IBM 360/65 was at UCL, so I took buses to and from LSE. Once, when rounding the Aldwych, the bus cornered faster than I anticipated, and my box of cards went flying. The program could only be
re-created because I had numbered every one of the cards.
I trust that it wasn't a rainy London day!
That would have been a disaster. After moving to Oxford, I ported GIVE to a menu-driven form (called PcGive) on an IBM PC 8088, using a rudimentary FORTRAN compiler; see [81]. That took about four
years, with Adrian Neale writing graphics in Assembler. A Windows version appeared after Jurgen Doornik translated PcGive to C++, leading to [195], [201], [197], and [194].
An attractive feature of PcGive has been its rapid incorporation of new tests and estimators--sometimes before they appeared in print, as with the Johansen (1988) reduced-rank cointegration procedure.
Adding routines initially required control of the software, but Jurgen recently converted PcGive to his Ox language, so that developments could be added by anyone writing Ox packages accessible from
GiveWin; see Doornik (2001). The two other important features of the software are its flexibility and its accuracy, with the latter checked by standard examples and by Monte Carlo.
Earlier versions of PcGive were certainly less flexible: the menus defined everything that could be done, even while the program's interactive nature was well-suited to empirical model design. The
use of Ox and the development of a batch language have alleviated that. I was astounded by a feature that Jurgen recently introduced. At the end of an interactive session, PcGive can generate batch
code for the entire session. I am not aware of any other program that has such a facility.
Batch code helps replication. Our latest Monte Carlo package (PcNaive) is just an experimental design front end that defines the DGP, the model specification, sample size, etc., and then writes out
an Ox program for that formulation. If desired, that program can be edited independently; and then it is run by Ox to calculate the Monte Carlo simulations. While this approach is mainly menu-driven,
it delivers complete flexibility in Monte Carlo. For teaching, it is invaluable to have easy-to-use, uncrashable, menu-driven programs, whereas complicated batch code is a disaster waiting to happen.
In writing PcGive, you sought to design a program that was not only numerically accurate, but also reasonably bug-proof. I wonder how many graduate students have mis-programmed GMM or some other
estimator using GAUSS or RATS.
Coding mistakes and inefficient programs can certainly produce inaccurate output. Jurgen found that the RESET F-statistic can differ by a factor of a hundred, depending upon whether it is calculated
by direct implementation in regression or by partitioned inversion using singular value decomposition. Bruce McCullough has long been concerned about accurate output, and with good reason, as his
comparison in McCullough (1998) shows.
The latest development is the software package PcGets, designed with Hans-Martin Krolzig. ``Gets'' stands for ``general-to-specific,'' and PcGets now automatically selects an undominated congruent
regression model from a general specification. Its simulation properties confirm many of the earlier methodological claims about general-to-specific modeling, and PcGets is a great time-saver for
large problems; see [175], [206], [209], and [226].
PcGets still requires the economist's value added in terms of the choice of variables and in terms of transformations of the unrestricted model.
The algorithm indeed confirms the advantages of good economic analysis, both through excluding irrelevant effects and (especially) through including relevant ones. Still, excessive simplification--as
might be justified by some economic theory--will lead to a false general specification with no good model choice. Fortunately, there seems little power loss from some over-specification with
orthogonal regressors, and the empirical size remains close to the nominal.
4.2 The Role of Computing Facilities
More generally, computing has played a central role in the development of econometrics.
Historically, it has been fundamental. Estimators that were infeasible in the 1940s are now routine. Excellent color graphics are also a major boon. Computation can still be a limiting factor,
though. Simulation estimation and Monte Carlo studies of model selection strain today's fastest PCs. Parallel computation thus remains of interest, as discussed in [214] with Neil Shephard and Jurgen Doornik.
There is an additional close link between computing and econometrics: different estimators are often different algorithms for approximating the same likelihood, as with the estimator generating
equation. Also, inefficient numerical procedures can produce inefficient statistical estimates, as with Cochrane-Orcutt estimates for dynamic models with autoregressive errors. In this example,
step-wise optimization and the corresponding statistical method are both inefficient because the coefficient covariance matrix is non-diagonal. Much can be learned about our statistical procedures
from their numerical properties.
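Schematically (a textbook-style sketch, not a result from the papers cited above): for y_t = \beta' z_t + u_t with u_t = \rho u_{t-1} + \epsilon_t, the information matrix is

I(\beta, \rho) = \begin{pmatrix} I_{\beta\beta} & I_{\beta\rho} \\ I_{\rho\beta} & I_{\rho\rho} \end{pmatrix},

and I_{\beta\rho} \neq 0 once z_t contains lagged values of y. Step-wise maximization over \beta given \rho and then \rho given \beta--the Cochrane-Orcutt iteration--attains full efficiency only in the block-diagonal case.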
4.3 The Role of Computing in Teaching
Was it difficult to use computers in teaching when only batch jobs could be run?
Indeed it was. My first computer-based teaching was with Ken Wallis using the Wharton model for macroeconomic experiments; see McCarthy (1972). The students gave us their experimental inputs, which
we ran, receiving the results several hours later. Now such illustrations are live and virtually instantaneous, and so can immediately resolve questions and check conjectures. The absorption of
interactive computing into teaching has been slow, even though it has been feasible for nearly two decades. I first did such presentations in the mid-1980s, and my first interactive-teaching article
was [68], with updates in [70] and [131].
Even now, few people use PCs interactively in seminars, although some do in teaching. Perhaps interactive computer-based presentations require familiarity with the software, reliability of the
software, and confidence in the model being presented. When I have made such presentations, they have often led to testing the model in ways that I hadn't previously thought of. If the model fails on
such tests, that is informative for me because it implies room for model improvement. If the model doesn't fail, then that is additional evidence in favor of the model.
Some conjectures involve unavailable data, but Internet access to data banks will improve that. Also, models that were once thought too complicated to model live--such as dynamic panels with awkward
instrumental variable structures, allowing for heterogeneity, etc.--are now included in PcGive. In live Monte Carlo simulations, students often gain important insights from experiments where they
choose the parameter values.
5 Concluding Remarks
5.1 Achievements and Failures
What do you see as your most important achievements, and what were your biggest failures?
Achievements are hard to pin down, even retrospectively, but the ones that have given me most pleasure were (a) consolidating estimation theory through the estimator generating equation; (b)
formalizing the methodology and model concepts to sustain general-to-specific modeling; (c) producing a theory of economic forecasting that has substantive content; (d) successfully designing
computer automation of general-to-specific model selection in PcGets; (e) developing efficient Monte Carlo methods; (f) building useful empirical models of housing, consumers' expenditure, and money
demand; and (g) stimulating a resurgence of interest in the history of our discipline.
I now see automatic model selection as a new instrument for the social sciences, akin to the microscope in the biological sciences. Already, PcGets has demonstrated remarkable performance across
different (unknown) states of nature, with the empirical data generating process being found almost as often by commencing from a general model as from the DGP itself. Retention of relevant variables
is close to the theoretical maximum, and elimination of irrelevant variables occurs at the rate set by the chosen significance level. The selected estimates have the appropriate reported standard
errors, and they can be bias-corrected if desired, which also down-weights adventitiously significant coefficients. These results essentially resuscitate traditional econometrics, despite data-based
selection; see [226] and [231]. Peter Phillips (1996) has made great strides in the automation of model selection using a related approach; see also [221].
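As a rough illustration of what such automation involves, here is a minimal single-path sketch in Python; PcGets itself explores many deletion paths and runs a full battery of diagnostic tests, whereas this toy version assumes X already contains a constant and uses only a back-testing F-check against the general model.

import numpy as np
from scipy import stats

def ols(y, X):
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ b
    s2 = (e @ e) / (len(y) - X.shape[1])
    se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))
    return b, se, e

def gets(y, X, alpha=0.05):
    """Backward elimination from the general model (GUM), retaining only
    deletions that the GUM does not reject: a toy stand-in for PcGets."""
    T, k_gum = X.shape
    keep = list(range(k_gum))
    e_gum = ols(y, X)[2]
    rss_gum = e_gum @ e_gum
    while len(keep) > 1:
        b, se, _ = ols(y, X[:, keep])
        t = np.abs(b / se)
        j = int(t.argmin())
        if t[j] > stats.t.ppf(1 - alpha / 2, T - len(keep)):
            break                       # every remaining variable is significant
        trial = keep[:j] + keep[j + 1:]
        e_r = ols(y, X[:, trial])[2]
        q = k_gum - len(trial)
        F = ((e_r @ e_r - rss_gum) / q) / (rss_gum / (T - k_gum))
        if F > stats.f.ppf(1 - alpha, q, T - k_gum):
            break                       # accumulated deletions rejected by the GUM
        keep = trial
    return keep                         # column indices of the selected model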
The biggest failure is not having persuaded more economists of the value of data-based econometrics in empirical economics, although that failure has stimulated improvements in modeling and model
formulations. This reaction is certainly not uniform. Many empirical researchers in Europe adopt a general-to-specific modeling approach--which may be because they are regularly exposed to its
applications--whereas elsewhere other views are dominant, and are virtually enforced by some journals.
What role does failure play in econometrics and empirical modeling?
As a psychology student, I learned that failure was the route to success. Looking for positive instances of a concept is a slow way to acquire it when compared to seeking rejections.
Because macroeconomic data are non-experimental, aren't economists correctly hesitant about over-emphasizing the role of data in empirical modeling?
Such data are the outcome of governmental administrative processes, of which we can only observe one realization. We cannot re-run an economy under a different state of nature. The analysis of
non-experimental data raises many interesting issues, but lack of experimentation merely removes a tool, and its lack does not preclude a scientific approach or prevent progress.
It certainly hasn't stopped astronomers, environmental biologists, or meteorologists from analyzing their data.
Indeed. Historically, there are many natural, albeit uncontrolled, experiments. Governments experiment with policies; new legislation has unanticipated consequences; and physical and political
turmoil through violent weather, earthquakes, and war are ongoing. It is not easy to persuade governments to conduct controlled, small-scale, regular experiments. I once unsuccessfully suggested
randomly perturbing the Treasury bill tender at a regular frequency to test its effects on the discount and money markets and on the banking system.
You have worked almost exclusively with macroeconomic time series, rather than with micro data in cross-sections or in panels. Why did you make that choice?
My first empirical study analyzed panel data, and it helped convince me to focus on macroeconomic time series instead. I was consulting for British Petroleum on bidding behavior, and I had about a
million observations in total for oil products on about a thousand outlets for every Canton in Switzerland, monthly, over a decade. BP's linear programming system took prices as parametric, and they
wanted to endogenize price determination. The Swiss study sought to estimate demand functions. Even allowing for fixed effects, dynamics dominated, with near-unit roots, despite the (now known)
downward biases. We built optimized models to determine bids, assuming that the winning margin had a Weibull distribution, estimated from information on the winning bid and our own bid, which might
coincide. I also wrote a panel-data analysis program with Chris Gilbert to study voting behavior in York. The program tested for pooling the cross-sections, the time series, and both. It was
difficult to get much out of such panels, as only a tiny percentage of the variation was explained. It seemed unlikely that the remaining variation was random, so much of the explanation must be
missing. Because omitted variables would rarely be orthogonal to the included variables, the estimated coefficients would not correspond to the behavioral parameters. With macroeconomic data, the
problem is the converse of fitting too well. A difficulty with cross-sections is their dependence on time, so the errors are not independent, due to common effects. Quite early on, I thus decided to
first understand time series and then come back to analyzing micro data, but I haven't reached the end of the road on time series yet.
Your view on cross-section modeling differs from the conventional view that it reveals the long run.
I have not seen a proof of that claim. As a counter-example, suppose that a recent shock places all agents in disequilibrium during the measured cross-section.
5.2 Directions for the Future
What directions will your research explore?
A gold mine of new results awaits discovery from extending the theory of economic forecasting in the face of rare events, and from delineating what aspects of models are most important in
forecasting. Also, much remains to be understood about modeling procedures. Both are worthwhile topics, especially as new developments are likely to have practical value. The econometrics of economic
policy analysis also remains under-developed. For instance, it would help to understand which structural changes affect forecasting but not policy in order to clarify the relationship between
forecasting models and policy models. Given the difficulties with impulse response analyses documented in [128], [165], and [188], open models would repay a visit. Policy analyses require congruent
models with constant parameters, so more powerful tests of changes in dynamic coefficients are needed.
Many further advances are already in progress for automatic model selection, such as dealing with cointegration, with systems, and with nonlinear models. This new tool resolves a hitherto intractable
problem, namely, estimating a regression when there are more candidate variables than observations, as can occur when there are many potential interactions. Provided that the DGP has fewer variables
than observations, repeated application of the multi-path search process to feasible blocks is likely to deliver a model with the appropriate properties.
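A schematic sketch of that feasible-block idea, reusing a single-block selector such as the gets() sketch above (the actual algorithm also re-partitions the blocks across passes, so that variables are compared against different rivals):

import numpy as np

def block_select(y, X, block_size, select):
    """Apply `select` (e.g. gets above) to blocks of at most block_size
    candidates, keep the union of survivors, and repeat until stable.
    Assumes the DGP involves fewer variables than observations."""
    survivors = np.arange(X.shape[1])
    while True:
        kept = []
        for start in range(0, len(survivors), block_size):
            block = survivors[start:start + block_size]
            kept.extend(block[select(y, X[:, block])])
        kept = np.array(sorted(set(int(i) for i in kept)))
        if len(kept) == len(survivors):   # a full pass deleted nothing: done
            return kept
        survivors = kept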
That should keep you busy!
Anderson, T. W. (1962) ``The Choice of the Degree of a Polynomial Regression as a Multiple Decision Problem'', Annals of Mathematical Statistics, 33, 1, 255--265.
Anderson, T. W. (1976) ``Estimation of Linear Functional Relationships: Approximate Distributions and Connections with Simultaneous Equations in Econometrics'', Journal of the Royal Statistical
Society, Series B, 38, 1, 1--20 (with discussion).
Attfield, C. L. F., D. Demery, and N. W. Duck (1995) ``Estimating the UK Demand for Money Function: A Test of Two Approaches'', Mimeo, Department of Economics, University of Bristol, Bristol,
England, November.
Benassy, J.-P. (1986) Macroeconomics: An Introduction to the Non-Walrasian Approach, Academic Press, Orlando.
Bontemps, C., and G. E. Mizon (2003) ``Congruence and Encompassing'', Chapter 15 in B. P. Stigum (ed.) Econometrics and the Philosophy of Economics: Theory-Data Confrontations in Economics, Princeton
University Press, Princeton, 354--378.
Box, G. E. P., and G. M. Jenkins (1970) Time Series Analysis: Forecasting and Control, Holden-Day, San Francisco.
Breusch, T. S. (1986) ``Hypothesis Testing in Unidentified Models'', Review of Economic Studies, 53, 4, 635--651.
Chan, N. H., and C. Z. Wei (1988) ``Limiting Distributions of Least Squares Estimates of Unstable Autoregressive Processes'', Annals of Statistics, 16, 1, 367--401.
Coghlan, R. T. (1978) ``A Transactions Demand for Money'', Bank of England Quarterly Bulletin, 18, 1, 48--60.
Cooper, J. P., and C. R. Nelson (1975) ``The Ex Ante Prediction Performance of the St. Louis and FRB-MIT-PENN Econometric Models and Some Results on Composite Predictors'', Journal of Money, Credit,
and Banking, 7, 1, 1--32.
Courakis, A. S. (1978) ``Serial Correlation and a Bank of England Study of the Demand for Money: An Exercise in Measurement Without Theory'', Economic Journal, 88, 351, 537--548.
Cox, D. R. (1962) ``Further Results on Tests of Separate Families of Hypotheses'', Journal of the Royal Statistical Society, Series B, 24, 2, 406--424.
Deaton, A. S. (1977) ``Involuntary Saving Through Unanticipated Inflation'', American Economic Review, 67, 5, 899--910.
Doornik, J. A. (2001) Ox 3.0: An Object-oriented Matrix Programming Language, Timberlake Consultants Press, London.
Durbin, J. (1988) ``Maximum Likelihood Estimation of the Parameters of a System of Simultaneous Regression Equations'', Econometric Theory, 4, 1, 159--170 (Paper presented to the European Meetings of
the Econometric Society, Copenhagen, 1963).
Engle, R. F., and C. W. J. Granger (1987) ``Co-integration and Error Correction: Representation, Estimation, and Testing'', Econometrica, 55, 2, 251--276.
Escribano, A. (1985) ``Non-linear Error-correction: The Case of Money Demand in the U.K. (1878--1970)'', Mimeo, University of California at San Diego, La Jolla, California, December.
Escribano, A. (2004) ``Nonlinear Error Correction: The Case of Money Demand in the United Kingdom (1878--2000)'', Macroeconomic Dynamics, 8, 1, 76--116.
Fisk, P. R. (1967) Stochastically Dependent Equations: An Introductory Text for Econometricians, Charles Griffin, London (Griffin's Statistical Monographs and Courses, No. 21).
Friedman, M., and A. J. Schwartz (1982) Monetary Trends in the United States and the United Kingdom: Their Relation to Income, Prices, and Interest Rates, 1867--1975, University of Chicago Press, Chicago.
Frisch, R. (1933) ``Editorial'', Econometrica, 1, 1, 1--4.
Gilbert, C. L. (1986) ``Professor Hendry's Econometric Methodology'', Oxford Bulletin of Economics and Statistics, 48, 3, 283--307.
Godfrey, L. G. (1988) Misspecification Tests in Econometrics, Cambridge University Press, Cambridge.
Goldfeld, S. M. (1976) ``The Case of the Missing Money'', Brookings Papers on Economic Activity, 1976, 3, 683--730 (with discussion).
Goldfeld, S. M., and R. E. Quandt (1972) Nonlinear Methods in Econometrics, North-Holland, Amsterdam.
Goodhart, C. A. E. (1982) ``Monetary Trends in the United States and the United Kingdom: A British Review'', Journal of Economic Literature, 20, 4, 1540--1551.
Granger, C. W. J. (1981) ``Some Properties of Time Series Data and Their Use in Econometric Model Specification'', Journal of Econometrics, 16, 1, 121--130.
Granger, C. W. J. (1986) ``Developments in the Study of Cointegrated Economic Variables'', Oxford Bulletin of Economics and Statistics, 48, 3, 213--228.
Granger, C. W. J., and A. A. Weiss (1983) ``Time Series Analysis of Error-correction Models'', in S. Karlin, T. Amemiya, and L. A. Goodman (eds.) Studies in Econometrics, Time Series, and
Multivariate Statistics: In Honor of Theodore W. Anderson, Academic Press, New York, 255--278.
Haavelmo, T. (1944) ``The Probability Approach in Econometrics'', Econometrica, 12, Supplement, i--viii, 1--118.
Hacche, G. (1974) ``The Demand for Money in the United Kingdom: Experience Since 1971'', Bank of England Quarterly Bulletin, 14, 3, 284--305.
Hall, R. E. (1978) ``Stochastic Implications of the Life Cycle-Permanent Income Hypothesis: Theory and Evidence'', Journal of Political Economy, 86, 6, 971--987.
Hammersley, J. M., and D. C. Handscomb (1964) Monte Carlo Methods, Chapman and Hall, London.
Hannan, E. J., and B. G. Quinn (1979) ``The Determination of the Order of an Autoregression'', Journal of the Royal Statistical Society, Series B, 41, 2, 190--195.
Harnett, I. (1984) An Econometric Investigation into Recent Changes of UK Personal Sector Consumption Expenditure, University of Oxford, Oxford (Unpublished M. Phil. Thesis).
Hendry, D. F., and N. R. Ericsson (1983) ``Assertion Without Empirical Basis: An Econometric Appraisal of `Monetary Trends in . . . the United Kingdom' by Milton Friedman and Anna Schwartz'', in
Monetary Trends in the United Kingdom, Bank of England Panel of Academic Consultants, Panel Paper No. 22, Bank of England, London, October, 45--101.
Hildenbrand, W. (1994) Market Demand: Theory and Empirical Evidence, Princeton University Press, Princeton.
Hoover, K. D., and S. J. Perez (1999) ``Data Mining Reconsidered: Encompassing and the General-to-specific Approach to Specification Search'', Econometrics Journal, 2, 2, 167--191 (with discussion).
Johansen, S. (1988) ``Statistical Analysis of Cointegration Vectors'', Journal of Economic Dynamics and Control, 12, 2/3, 231--254.
Kakwani, N. C. (1967) ``The Unbiasedness of Zellner's Seemingly Unrelated Regression Equations Estimators'', Journal of the American Statistical Association, 62, 317, 141--142.
Katona, G., and E. Mueller (1968) Consumer Response to Income Increases, Brookings Institution, Washington, D.C.
Keynes, J. M. (1936) The General Theory of Employment, Interest and Money, Harcourt, Brace and Company, New York.
Klein, L. R. (1953) A Textbook of Econometrics, Row, Peterson and Company, Evanston.
Koopmans, T. C. (1947) ``Measurement Without Theory'', Review of Economics and Statistics (formerly the Review of Economic Statistics), 29, 3, 161--172.
Longbottom, A., and S. Holly (1985) ``Econometric Methodology and Monetarism: Professor Friedman and Professor Hendry on the Demand for Money'', Discussion Paper No. 131, London Business School,
London, February.
Lucas, Jr., R. E. (1976) ``Econometric Policy Evaluation: A Critique'', in K. Brunner and A. H. Meltzer (eds.) The Phillips Curve and Labor Markets, North-Holland, Amsterdam, Carnegie-Rochester
Conference Series on Public Policy, Volume 1, Journal of Monetary Economics, Supplement, 19--46 (with discussion).
Makridakis, S., and M. Hibon (2000) ``The M3-Competition: Results, Conclusions and Implications'', International Journal of Forecasting, 16, 4, 451--476.
McCarthy, M. D. (1972) The Wharton Quarterly Econometric Forecasting Model Mark III, University of Pennsylvania, Philadelphia (Studies in Quantitative Economics No. 6).
McCullough, B. D. (1998) ``Assessing the Reliability of Statistical Software: Part I'', American Statistician, 52, 4, 358--366.
Mizon, G. E. (1977) ``Inferential Procedures in Nonlinear Models: An Application in a UK Industrial Cross Section Study of Factor Substitution and Returns to Scale'', Econometrica, 45, 5, 1221--1242.
Mizon, G. E. (1995) ``Progressive Modeling of Macroeconomic Time Series: The LSE Methodology'', Chapter 4 in K. D. Hoover (ed.) Macroeconometrics: Developments, Tensions, and Prospects, Kluwer
Academic Publishers, Boston, 107--170 (with discussion).
Mizon, G. E., and J.-F. Richard (1986) ``The Encompassing Principle and its Application to Testing Non-nested Hypotheses'', Econometrica, 54, 3, 657--678.
Morgan, M. S. (1990) The History of Econometric Ideas, Cambridge University Press, Cambridge.
Muth, J. F. (1961) ``Rational Expectations and the Theory of Price Movements'', Econometrica, 29, 3, 315--335.
Osborn, D. R. (1988) ``Seasonality and Habit Persistence in a Life Cycle Model of Consumption'', Journal of Applied Econometrics, 3, 4, 255--266.
Osborn, D. R. (1991) ``The Implications of Periodically Varying Coefficients for Seasonal Time-series Processes'', Journal of Econometrics, 48, 3, 373--384.
Pesaran, M. H. (1974) ``On the General Problem of Model Selection'', Review of Economic Studies, 41, 2, 153--171.
Phillips, A. W. (1954) ``Stabilisation Policy in a Closed Economy'', Economic Journal, 64, 254, 290--323.
Phillips, A. W. (1956) ``Some Notes on the Estimation of Time-forms of Reactions in Interdependent Dynamic Systems'', Economica, 23, 90, 99--113.
Phillips, A. W. (1957) ``Stabilisation Policy and the Time-forms of Lagged Responses'', Economic Journal, 67, 266, 265--277.
Phillips, A. W. (2000) ``Estimation of Systems of Difference Equations with Moving Average Disturbances'', Chapter 45 in R. Leeson (ed.) A. W. H. Phillips: Collected Works in Contemporary
Perspective, Cambridge University Press, Cambridge, 423--444 (Walras--Bowley Lecture, Econometric Society Meeting, San Francisco, December 1966).
Phillips, P. C. B. (1986) ``Understanding Spurious Regressions in Econometrics'', Journal of Econometrics, 33, 3, 311--340.
Phillips, P. C. B. (1987) ``Time Series Regression with a Unit Root'', Econometrica, 55, 2, 277--301.
Phillips, P. C. B. (1996) ``Econometric Model Determination'', Econometrica, 64, 4, 763--812.
Phillips, P. C. B. (1997) ``The ET Interview: Professor Clive Granger'', Econometric Theory, 13, 2, 253--303.
Qin, D. (1993) The Formation of Econometrics: A Historical Perspective, Clarendon Press, Oxford.
Richard, J.-F. (1980) ``Models with Several Regimes and Changes in Exogeneity'', Review of Economic Studies, 47, 1, 1--20.
Robinson, P. M. (2003) ``Denis Sargan: Some Perspectives'', Econometric Theory, 19, 3, 481--494.
Samuelson, P. A. (1947) Foundations of Economic Analysis, Harvard University Press, Cambridge.
Samuelson, P. A. (1961) Economics: An Introductory Analysis, McGraw-Hill Book Company, New York, Fifth Edition.
Sargan, J. D. (1964) ``Wages and Prices in the United Kingdom: A Study in Econometric Methodology'', in P. E. Hart, G. Mills, and J. K. Whitaker (eds.) Econometric Analysis for National Economic
Planning, Volume 16 of Colston Papers, Butterworths, London, 25--54 (with discussion).
Sargan, J. D. (1975) ``Asymptotic Theory and Large Models'', International Economic Review, 16, 1, 75--91.
Sargan, J. D. (1980) ``Some Tests of Dynamic Specification for a Single Equation'', Econometrica, 48, 4, 879--897.
Savin, N. E. (1980) ``The Bonferroni and the Scheffé Multiple Comparison Procedures'', Review of Economic Studies, 47, 1, 255--273.
Silvey, S. D. (1959) ``The Lagrangian Multiplier Test'', Annals of Mathematical Statistics, 30, 2, 389--407.
Stigum, B. P. (1990) Toward a Formal Science of Economics: The Axiomatic Method in Economics and Econometrics, MIT Press, Cambridge.
Stock, J. H. (1987) ``Asymptotic Properties of Least Squares Estimators of Cointegrating Vectors'', Econometrica, 55, 5, 1035--1056.
Summers, L. H. (1991) ``The Scientific Illusion in Empirical Macroeconomics'', Scandinavian Journal of Economics, 93, 2, 129--148.
Thomas, J. J. (1964) Notes on the Theory of Multiple Regression Analysis, Contos Press, Athens (Center of Economic Research, Training Seminar Series, No. 4).
Tinbergen, J. (1951) Business Cycles in the United Kingdom, 1870--1914, North-Holland, Amsterdam.
Trivedi, P. K. (1970) ``The Relation Between the Order-Delivery Lag and the Rate of Capacity Utilization in the Engineering Industry in the United Kingdom, 1958--1967'', Economica, 37, 145, 54--67.
Vining, R. (1949) ``Koopmans on the Choice of Variables To Be Studied and of Methods of Measurement'', Review of Economics and Statistics, 31, 2, 77--86.
West, K. D. (1988) ``Asymptotic Normality, When Regressors Have a Unit Root'', Econometrica, 56, 6, 1397--1417.
White, H. (1990) ``A Consistent Model Selection Procedure Based on m-Testing'', Chapter 16 in C. W. J. Granger (ed.) Modelling Economic Series: Readings in Econometric Methodology, Oxford University
Press, Oxford, 369--383.
Whittle, P. (1963) Prediction and Regulation by Linear Least-square Methods, D. Van Nostrand, Princeton.
1. Hendry, D. F. (1966) Survey of student income and expenditure at Aberdeen University, 1963--64 and 1964--65. Scottish Journal of Political Economy 13, 363--376.
2. Hendry, D. F. (1970) Book review of Introduction to Linear Algebra for Social Scientists by Gordon Mills. Economica 37, 217--218.
3. Hendry, D. F. (1971a) Discussion. Journal of the Royal Statistical Society, Series A 134, 315.
4. Hendry, D. F. (1971b) Maximum likelihood estimation of systems of simultaneous regression equations with errors generated by a vector autoregressive process. International Economic Review 12,
5. Hendry, D. F. (1972a) Book review of Elements of Econometrics by J. Kmenta. Economic Journal 82, 221--222.
6. Hendry, D. F. (1972b) Book review of Regression and Econometric Methods by David S. Huang. Economica 39, 104--105.
7. Hendry, D. F. (1972c) Book review of The Analysis and Forecasting of the British Economy by M. J. C. Surrey. Economica 39, 346.
8. Hendry, D. F., & P. K. Trivedi (1972) Maximum likelihood estimation of difference equations with moving average errors: A simulation study. Review of Economic Studies 39, 117--145.
9. Hendry, D. F. (1973a) Book review of Econometric Models of Cyclical Behaviour, edited by Bert G. Hickman. Economic Journal 83, 944--946.
10. Hendry, D. F. (1973b) Discussion. Journal of the Royal Statistical Society, Series A 136, 385--386.
11. Hendry, D. F. (1973c) On asymptotic theory and finite sample experiments. Economica 40, 210--217.
12. Hendry, D. F. (1974a) Book review of A Textbook of Econometrics by L. R. Klein. Economic Journal 84, 688--689.
13. Hendry, D. F. (1974b) Book review of Optimal Planning for Economic Stabilization: The Application of Control Theory to Stabilization Policy by Robert S. Pindyck. Economica 41, 353.
14. Hendry, D. F. (1974c) Maximum likelihood estimation of systems of simultaneous regression equations with errors generated by a vector autoregressive process: A correction. International Economic
Review 15, 260.
15. Hendry, D. F. (1974d) Stochastic specification in an aggregate demand model of the United Kingdom. Econometrica 42, 559--578.
16. Hendry, D. F., & R. W. Harrison (1974) Monte Carlo methodology and the small sample behaviour of ordinary and two-stage least squares. Journal of Econometrics 2, 151--174.
17. Hendry, D. F. (1975a) Book review of Forecasting the U.K. Economy by J. C. K. Ash and D. J. Smyth. Economica 42, 223--224.
18. Hendry, D. F. (1975b) The consequences of mis-specification of dynamic structure, autocorrelation, and simultaneity in a simple model with an application to the demand for imports. In G. A.
Renton (ed.), Modelling the Economy, pp. 286--320 (with discussion). London: Heinemann Educational Books.
19. Hendry, D. F. (1976a) Discussion. Journal of the Royal Statistical Society, Series A 139, 494--495.
20. Hendry, D. F. (1976b) Discussion. Journal of the Royal Statistical Society, Series B 38, 24--25.
21. Hendry, D. F. (1976c) The structure of simultaneous equations estimators. Journal of Econometrics 4, 51--88.
22. Hendry, D. F., & A. R. Tremayne (1976) Estimating systems of dynamic reduced form equations with vector autoregressive errors. International Economic Review 17, 463--471.
23. Hendry, D. F. (1977a) Book review of Studies in Nonlinear Estimation, edited by Stephen M. Goldfeld and Richard E. Quandt. Economica 44, 317--318.
24. Hendry, D. F. (1977b) Book review of The Models of Project LINK, edited by J. L. Waelbroeck. Journal of the Royal Statistical Society, Series A 140, 561--562.
25. Hendry, D. F. (1977c) Comments on Granger-Newbold's `Time series approach to econometric model building' and Sargent-Sims' `Business cycle modeling without pretending to have too much a priori
economic theory'. In C. A. Sims (ed.), New Methods in Business Cycle Research: Proceedings from a Conference, pp. 183--202. Minneapolis: Federal Reserve Bank of Minneapolis.
26. Hendry, D. F., & G. J. Anderson (1977) Testing dynamic specification in small simultaneous systems: An application to a model of building society behavior in the United Kingdom. In M. D.
Intriligator (ed.), Frontiers of Quantitative Economics, vol. 3A, pp. 361--383. Amsterdam: North-Holland.
27. Hendry, D. F., & F. Srba (1977) The properties of autoregressive instrumental variables estimators in dynamic systems. Econometrica 45, 969--990.
28. Davidson, J. E. H., D. F. Hendry, F. Srba, & S. Yeo (1978) Econometric modelling of the aggregate time-series relationship between consumers' expenditure and income in the United Kingdom.
Economic Journal 88, 661--692.
29. Hendry, D. F., & G. E. Mizon (1978) Serial correlation as a convenient simplification, not a nuisance: A comment on a study of the demand for money by the Bank of England. Economic Journal 88,
549--563.
30. Hendry, D. F. (1979a) The behaviour of inconsistent instrumental variables estimators in dynamic systems with autocorrelated errors. Journal of Econometrics 9, 295--314.
31. Hendry, D. F. (1979b) Predictive failure and econometric modelling in macroeconomics: The transactions demand for money. In P. Ormerod (ed.), Economic Modelling: Current Issues and Problems in
Macroeconomic Modelling in the UK and the US, pp. 217--242. London: Heinemann Educational Books.
32. Hendry, D. F. (1980) Econometrics--Alchemy or science? Economica 47, 387--406.
33. Hendry, D. F., & F. Srba (1980) AUTOREG: A computer program library for dynamic econometric models with autoregressive errors. Journal of Econometrics 12, 85--102.
34. Mizon, G. E., & D. F. Hendry (1980) An empirical application and Monte Carlo analysis of tests of dynamic specification. Review of Economic Studies 47, 21--45.
35. Davidson, J. E. H., & D. F. Hendry (1981) Interpreting econometric evidence: The behaviour of consumers' expenditure in the UK. European Economic Review 16, 177--192 (with discussion).
36. Hendry, D. F. (1981a) Comment on HM Treasury's memorandum, `Background to the Government's economic policy' . In House of Commons (ed.), Third Report from the Treasury and Civil Service
Committee, Session 1980--81, Monetary Policy, vol. 3, pp. 94--96 (Appendix 4). London: Her Majesty's Stationery Office.
37. Hendry, D. F. (1981b) Econometric evidence in the appraisal of monetary policy. In House of Commons (ed.), Third Report from the Treasury and Civil Service Committee, Session 1980--81, Monetary
Policy, vol. 3, pp. 1--21 (Appendix 1). London: Her Majesty's Stationery Office.
38. Hendry, D. F., & J.-F. Richard (1981) Model formulation to simplify selection when specification is uncertain. Journal of Econometrics 16, 159.
39. Hendry, D. F., & T. von Ungern-Sternberg (1981) Liquidity and inflation effects on consumers' expenditure. In A. S. Deaton (ed.), Essays in the Theory and Measurement of Consumer Behaviour: In
Honour of Sir Richard Stone, pp. 237--260. Cambridge: Cambridge University Press.
40. Hendry, D. F. (1982a) Comment: Whither disequilibrium econometrics? Econometric Reviews 1, 65--70.
41. Hendry, D. F. (1982b) A reply to Professors Maasoumi and Phillips. Journal of Econometrics 19, 203--213.
42. Hendry, D. F. (1982c) The role of econometrics in macro-economic analysis. UK Economic Prospect 1982, 26--38.
43. Hendry, D. F., & J.-F. Richard (1982) On the formulation of empirical models in dynamic econometrics. Journal of Econometrics 20, 3--33.
44. Engle, R. F., D. F. Hendry, & J.-F. Richard (1983) Exogeneity. Econometrica 51, 277--304.
45. Hendry, D. F. (1983a) Comment. Econometric Reviews 2, 111--114.
46. Hendry, D. F. (1983b) Econometric modelling: The `consumption function' in retrospect. Scottish Journal of Political Economy 30, 193--220.
47. Hendry, D. F. (1983c) On Keynesian model building and the rational expectations critique: A question of methodology. Cambridge Journal of Economics 7, 69--75.
48. Hendry, D. F., & R. C. Marshall (1983) On high and low R^2 contributions. Oxford Bulletin of Economics and Statistics 45, 313--316.
49. Hendry, D. F., & J.-F. Richard (1983) The econometric analysis of economic time series. International Statistical Review 51, 111--148 (with discussion).
50. Anderson, G. J., & D. F. Hendry (1984) An econometric model of United Kingdom building societies. Oxford Bulletin of Economics and Statistics 46, 185--210.
51. Hendry, D. F. (1984a) Book review of Advances in Econometrics: Invited Papers for the 4th World Congress of the Econometric Society, edited by Werner Hildenbrand. Economic Journal 94, 403--405.
52. Hendry, D. F. (1984b) Econometric modelling of house prices in the United Kingdom. In D. F. Hendry and K. F. Wallis (eds.), Econometrics and Quantitative Economics, pp. 211--252. Oxford: Basil Blackwell.
53. Hendry, D. F. (1984c) Monte Carlo experimentation in econometrics. In Z. Griliches and M. D. Intriligator (eds.), Handbook of Econometrics, vol. 2, pp. 937--976. Amsterdam: North-Holland.
54. Hendry, D. F. (1984d) Present position and potential developments: Some personal views [on] time-series econometrics. Journal of the Royal Statistical Society, Series A 147, 327--338 (with discussion).
55. Hendry, D. F., A. Pagan, & J. D. Sargan (1984) Dynamic specification. In Z. Griliches and M. D. Intriligator (eds.), Handbook of Econometrics, vol. 2, pp. 1023--1100. Amsterdam: North-Holland.
56. Hendry, D. F., & K. F.Wallis (eds.) (1984a) Econometrics and Quantitative Economics. Oxford: Basil Blackwell.
57. Hendry, D. F., & K. F. Wallis (1984b) Editors' introduction. In D. F. Hendry and K. F. Wallis (eds.), Econometrics and Quantitative Economics, pp. 1--12. Oxford: Basil Blackwell.
58. Engle, R. F., D. F. Hendry, & D. Trumble (1985) Small-sample properties of ARCH estimators and tests. Canadian Journal of Economics 18, 66--93.
59. Ericsson, N. R., & D. F. Hendry (1985) Conditional econometric modeling: An application to new house prices in the United Kingdom. In A. C. Atkinson and S. E. Fienberg (eds.), A Celebration of
Statistics: The ISI Centenary Volume, pp. 251--285. New York: Springer-Verlag.
60. Hendry, D. F. (1985) Monetary economic myth and econometric reality. Oxford Review of Economic Policy 1, 72--84.
61. Banerjee, A., J. J. Dolado, D. F. Hendry, & G. W. Smith (1986) Exploring equilibrium relationships in econometrics through static models: Some Monte Carlo evidence. Oxford Bulletin of Economics
and Statistics 48, 253--277.
62. Chong, Y. Y., & D. F. Hendry (1986) Econometric evaluation of linear macro-economic models. Review of Economic Studies 53, 671--690.
63. Hendry, D. F. (ed.) (1986a) Econometric Modelling with Cointegrated Variables. Special Issue, Oxford Bulletin of Economics and Statistics, 48, 3, August.
64. Hendry, D. F. (1986b) Econometric modelling with cointegrated variables: An overview. Oxford Bulletin of Economics and Statistics 48, 201--212.
65. Hendry, D. F. (1986c) Empirical modeling in dynamic econometrics. Applied Mathematics and Computation 20, 201--236.
66. Hendry, D. F. (1986d) An excursion into conditional varianceland. Econometric Reviews 5, 63--69.
67. Hendry, D. F. (1986e) The role of prediction in evaluating econometric models. Proceedings of the Royal Society, London, Series A 407, 25--33.
68. Hendry, D. F. (1986f) Using PC-GIVE in econometrics teaching. Oxford Bulletin of Economics and Statistics 48, 87--98.
69. Hendry, D. F. (1987a) Econometric methodology: A personal perspective. In T. F. Bewley (ed.), Advances in Econometrics: Fifth World Congress, vol. 2, pp. 29--48. Cambridge: Cambridge University Press.
70. Hendry, D. F. (1987b) Econometrics in action. Empirica (Austrian Economic Papers) 14, 135--156.
71. Hendry, D. F. (1987c) PC-GIVE: An Interactive Menu-driven Econometric Modelling Program for IBM-compatible PC's, Version 4.2. Oxford: Institute of Economics and Statistics and Nuffield College,
University of Oxford, January.
72. Hendry, D. F. (1987d) PC-GIVE: An Interactive Menu-driven Econometric Modelling Program for IBM-compatible PC's, Version 5.0. Oxford: Institute of Economics and Statistics and Nuffield College,
University of Oxford, November.
73. Hendry, D. F., & A. J. Neale (1987) Monte Carlo experimentation using PC-NAIVE. In T. B. Fomby and G. F. Rhodes, Jr. (eds.), Advances in Econometrics: A Research Annual, vol. 6, pp. 91--125.
Greenwich: JAI Press.
74. Campos, J., N. R. Ericsson, & D. F. Hendry (1988) Comment on Telser. Journal of the American Statistical Association 83, 581.
75. Hendry, D. F. (1988a) Encompassing. National Institute Economic Review 3/88, 88--92.
76. Hendry, D. F. (1988b) The encompassing implications of feedback versus feedforward mechanisms in econometrics. Oxford Economic Papers 40, 132--149.
77. Hendry, D. F. (1988c) Some foreign observations on macro-economic model evaluation activities at INSEE--DP. In INSEE (ed.), Groupes d'Études Macroeconometriques Concertées: Document
Complémentaire de Synthèse, pp. 71--106. Paris: INSEE.
78. Hendry, D. F., & A. J. Neale (1988) Interpreting long-run equilibrium solutions in conventional macro models: A comment. Economic Journal 98, 808--817.
79. Hendry, D. F., A. J. Neale, & F. Srba (1988) Econometric analysis of small linear systems using PC-FIML. Journal of Econometrics 38, 203--226.
80. Hendry, D. F. (1989a) Comment. Econometric Reviews 8, 111--121.
81. Hendry, D. F. (1989b) PC-GIVE: An Interactive Econometric Modelling System, Version 6.0/6.01. Oxford: Institute of Economics and Statistics and Nuffield College, University of Oxford, January.
82. Hendry, D. F., & M. S. Morgan (1989) A re-analysis of confluence analysis. Oxford Economic Papers 41, 35--52.
83. Hendry, D. F., & J.-F. Richard (1989) Recent developments in the theory of encompassing. In B. Cornet and H. Tulkens (eds.), Contributions to Operations Research and Economics: The Twentieth
Anniversary of CORE, pp. 393--440. Cambridge: MIT Press.
84. Hendry, D. F., A. Spanos, & N. R. Ericsson (1989) The contributions to econometrics in Trygve Haavelmo's The Probability Approach in Econometrics. Sosialøkonomen 43, 12--17.
85. Campos, J., N. R. Ericsson, & D. F. Hendry (1990) An analogue model of phaseaveraging procedures. Journal of Econometrics 43, 275--292.
86. Hendry, D. F., E. E. Leamer, & D. J. Poirier (1990) The ET dialogue: A conversation on econometric methodology. Econometric Theory 6, 171--261.
87. Hendry, D. F., & G. E. Mizon (1990) Procrustean econometrics: Or stretching and squeezing data. In C. W. J. Granger (ed.), Modelling Economic Series: Readings in Econometric Methodology, pp.
121--136. Oxford: Oxford University Press.
88. Hendry, D. F., J. N. J. Muellbauer, & A. Murphy (1990) The econometrics of DHSY. In J. D. Hey and D. Winch (eds.), A Century of Economics: 100 Years of the Royal Economic Society and the Economic
Journal, pp. 298--334. Oxford: Basil Blackwell.
89. Hendry, D. F., A. J. Neale, & N. R. Ericsson (1990) PC-NAIVE: An Interactive Program for Monte Carlo Experimentation in Econometrics, Version 6.01. Oxford: Institute of Economics and Statistics
and Nuffield College, University of Oxford.
90. Hendry, D. F. (1991a) Comments: `The response of consumption to income: A cross-country investigation' by John Y. Campbell and N. Gregory Mankiw. European Economic Review 35, 764--767.
91. Hendry, D. F. (1991b) Economic forecasting. In House of Commons (ed.), Memoranda on Official Economic Forecasting, Treasury and Civil Service Committee, Session 1990--
91. London: Her Majesty's Stationery Office.
92. Hendry, D. F. (1991c) Using PC-NAIVE in teaching econometrics. Oxford Bulletin of Economics and Statistics 53, 199--223.
93. Hendry, D. F., & N. R. Ericsson (1991a) An econometric analysis of U.K. money demand in Monetary Trends in the United States and the United Kingdom by Milton Friedman and Anna J. Schwartz.
American Economic Review 81, 8--38.
94. Hendry, D. F., & N. R. Ericsson (1991b) Modeling the demand for narrow money in the United Kingdom and the United States. European Economic Review 35, 833--881 (with discussion).
95. Hendry, D. F., & A. J. Neale (1991) A Monte Carlo study of the effects of structural breaks on tests for unit roots. In P. Hackl and A. H. Westlund (eds.), Economic Structural Change: Analysis
and Forecasting, pp. 95--119. Berlin: Springer-Verlag.
96. Baba, Y., D. F. Hendry, & R. M. Starr (1992) The demand for M1 in the U.S.A., 1960--1988. Review of Economic Studies 59, 25--61.
97. Banerjee, A., & D. F. Hendry (eds.) (1992a) Testing Integration and Cointegration. Special Issue, Oxford Bulletin of Economics and Statistics, 54, 3, August.
98. Banerjee, A., & D. F. Hendry (1992b) Testing integration and cointegration: An overview. Oxford Bulletin of Economics and Statistics 54, 225--255.
99. Doornik, J. A., & D. F. Hendry (1992) PcGive Version 7: An Interactive Econometric Modelling System. Oxford: Institute of Economics and Statistics, University of Oxford.
100. Favero, C., & D. F. Hendry (1992) Testing the Lucas critique: A review. Econometric Reviews 11, 265--306 (with discussion).
101. Hendry, D. F. (1992a) Assessing empirical evidence in macroeconometrics with an application to consumers' expenditure in France. In A. Vercelli and N. Dimitri (eds.), Macroeconomics: A Survey of
Research Strategies, pp. 363--392. Oxford: Oxford University Press.
102. Hendry, D. F. (1992b) An econometric analysis of TV advertising expenditure in the United Kingdom. Journal of Policy Modeling 14, 281--311.
103. Hendry, D. F., & J.-F. Richard (1992) Likelihood evaluation for dynamic latent variables models. In H. M. Amman, D. A. Belsley, and L. F. Pau (eds.), Computational Economics and Econometrics,
pp. 3--17. Dordrecht: Kluwer Academic Publishers.
104. Banerjee, A., J. J. Dolado, J.W. Galbraith, & D. F. Hendry (1993) Co-integration, Error Correction, and the Econometric Analysis of Non-stationary Data. Oxford: Oxford University Press.
105. Clements, M. P., & D. F. Hendry (1993) On the limitations of comparing mean square forecast errors. Journal of Forecasting 12, 617--637 (with discussion).
106. Engle, R. F., & D. F. Hendry (1993) Testing super exogeneity and invariance in regression models. Journal of Econometrics 56, 119--139.
107. Hendry, D. F. (1993a) Econometrics: Alchemy or Science? Essays in Econometric Methodology. Oxford: Blackwell Publishers.
108. Hendry, D. F. (1993b) Introduction. In D. F. Hendry (ed.), Econometrics: Alchemy or Science? Essays in Econometric Methodology, pp. 1--7. Oxford: Blackwell Publishers.
109. Hendry, D. F. (1993c) Postscript: The econometrics of PC-GIVE. In D. F. Hendry (ed.), Econometrics: Alchemy or Science? Essays in Econometric Methodology, pp. 444--466. Oxford: Blackwell
110. Hendry, D. F., & G. E. Mizon (1993) Evaluating dynamic econometric models by encompassing the VAR. In P. C. B. Phillips (ed.), Models, Methods, and Applications of Econometrics: Essays in Honor
of A. R. Bergstrom, pp. 272--300. Cambridge: Basil Blackwell.
111. Hendry, D. F., & R. M. Starr (1993) The demand for M1 in the USA: A reply to James M. Boughton. Economic Journal 103, 1158--1169.
112. Clements, M. P., & D. F. Hendry (1994) Towards a theory of economic forecasting. In C. P. Hargreaves (ed.), Nonstationary Time Series Analysis and Cointegration, pp. 9--52. Oxford: Oxford
University Press.
113. Cook, S., & D. F. Hendry (1994) The theory of reduction in econometrics. In B. Hamminga and N. B. De Marchi (eds.), Idealization VI: Idealization in Economics, vol. 38 of Poznań Studies in the
Philosophy of the Sciences and the Humanities, pp. 71--100. Amsterdam: Rodopi.
114. Doornik, J. A., & D. F. Hendry (1994a) PcFiml 8.0: Interactive Econometric Modelling of Dynamic Systems. London: International Thomson Publishing.
115. Doornik, J. A., & D. F. Hendry (1994b) PcGive 8.0: An Interactive Econometric Modelling System. London: International Thomson Publishing.
116. Engle, R. F., & D. F. Hendry (1994) Appendix: The reverse regression (Appendix to `Testing super exogeneity and invariance in regression models'). In N. R. Ericsson and J. S. Irons (eds.),
Testing Exogeneity, pp. 110--116. Oxford: Oxford University Press.
117. Ericsson, N. R., D. F. Hendry, & H.-A. Tran (1994) Cointegration, seasonality, encompassing, and the demand for money in the United Kingdom. In C. P. Hargreaves (ed.), Nonstationary Time Series
Analysis and Cointegration, pp. 179--224. Oxford: Oxford University Press.
118. Govaerts, B., D. F. Hendry, & J.-F. Richard (1994) Encompassing in stationary linear dynamic models. Journal of Econometrics 63, 245--270.
119. Hendry, D. F. (1994) HUS revisited. Oxford Review of Economic Policy 10, 86--106.
120. Hendry, D. F., & M. P. Clements (1994a) Can econometrics improve economic forecasting? Swiss Journal of Economics and Statistics 130, 267--298.
121. Hendry, D. F., & M. P. Clements (1994b) On a theory of intercept corrections in macroeconometric forecasting. In S. Holly (ed.), Money, Inflation and Employment: Essays in Honour of James Ball,
pp. 160--182. Aldershot: Edward Elgar.
122. Hendry, D. F., & J. A. Doornik (1994) Modelling linear dynamic econometric systems. Scottish Journal of Political Economy 41, 1--33.
123. Hendry, D. F., & M. S. Morgan (1994) The ET interview: Professor H. O. A. Wold: 1908--1992. Econometric Theory 10, 419--433.
124. Clements, M. P., & D. F. Hendry (1995a) Forecasting in cointegrated systems. Journal of Applied Econometrics 10, 127--146.
125. Clements, M. P., & D. F. Hendry (1995b) Macro-economic forecasting and modelling. Economic Journal 105, 1001--1013.
126. Clements, M. P., & D. F. Hendry (1995c) A reply to Armstrong and Fildes. Journal of Forecasting 14, 73--75.
127. Hendry, D. F. (1995a) Dynamic Econometrics. Oxford: Oxford University Press.
128. Hendry, D. F. (1995b) Econometrics and business cycle empirics. Economic Journal 105, 1622--1636.
129. Hendry, D. F. (1995c) Le rôle de l'économétrie dans l'économie scientifique. In A. d'Autume and J. Cartelier (eds.), L'Économie Devient-Elle Une Science Dure?, pp. 172--196. Paris: Economica.
130. Hendry, D. F. (1995d) On the interactions of unit roots and exogeneity. Econometric Reviews 14, 383--419.
131. Hendry, D. F., & J. A. Doornik (1995) A window on econometrics. Cyprus Journal of Economics 8, 77--104.
132. Hendry, D. F., & M. S. Morgan (eds.) (1995a) The Foundations of Econometric Analysis. Cambridge: Cambridge University Press.
133. Hendry, D. F., & M. S. Morgan (1995b) Introduction. In D. F. Hendry and M. S. Morgan (eds.), The Foundations of Econometric Analysis, pp. 1--82. Cambridge: Cambridge University Press.
134. Banerjee, A., & D. F. Hendry (eds.) (1996) The Econometrics of Economic Policy. Special Issue, Oxford Bulletin of Economics and Statistics, 58, 4, November.
135. Banerjee, A., D. F. Hendry, & G. E. Mizon (1996) The econometric analysis of economic policy. Oxford Bulletin of Economics and Statistics 58, 573--600.
136. Campos, J., N. R. Ericsson, & D. F. Hendry (1996) Cointegration tests in the presence of structural breaks. Journal of Econometrics 70, 187--220.
137. Clements, M. P., & D. F. Hendry (1996a) Forecasting in macro-economics. In D. R. Cox, D. V. Hinkley, and O. E. Barndorff-Nielsen (eds.), Time Series Models: In Econometrics, Finance and Other
Fields, pp. 101--141. London: Chapman and Hall.
138. Clements, M. P., & D. F. Hendry (1996b) Intercept corrections and structural change. Journal of Applied Econometrics 11, 475--494.
139. Clements, M. P., & D. F. Hendry (1996c) Multi-step estimation for forecasting. Oxford Bulletin of Economics and Statistics 58, 657--684.
140. Doornik, J. A., & D. F. Hendry (1996) GiveWin: An Interface to Empirical Modelling, Version 1.0. London: International Thomson Business Press.
141. Emerson, R. A., & D. F. Hendry (1996) An evaluation of forecasting using leading indicators. Journal of Forecasting 15, 271--291.
142. Florens, J.-P., D. F. Hendry, & J.-F. Richard (1996) Encompassing and specificity. Econometric Theory 12, 620--656.
143. Hendry, D. F. (1996a) On the constancy of time-series econometric equations. Economic and Social Review 27, 401--422.
144. Hendry, D. F. (1996b) Typologies of linear dynamic systems and models. Journal of Statistical Planning and Inference 49, 177--201.
145. Hendry, D. F., & J. A. Doornik (1996) Empirical Econometric Modelling Using PcGive 9.0 for Windows. London: International Thomson Business Press.
146. Hendry, D. F., & M. S. Morgan (1996) Obituary: Jan Tinbergen, 1903--94. Journal of the Royal Statistical Society, Series A 159, 614--616.
147. Banerjee, A., & D. F. Hendry (eds.) (1997) The Econometrics of Economic Policy. Oxford: Blackwell Publishers.
148. Barrow, L., J. Campos, N. R. Ericsson, D. F. Hendry, H.-A. Tran, & W. Veloce (1997) Cointegration. In D. Glasner (ed.), Business Cycles and Depressions: An Encyclopedia, pp. 101--106. New York:
Garland Publishing.
149. Campos, J., N. R. Ericsson, & D. F. Hendry (1997) Phase averaging. In D. Glasner (ed.), Business Cycles and Depressions: An Encyclopedia, pp. 525--527. New York: Garland Publishing.
150. Clements, M. P., & D. F. Hendry (1997) An empirical study of seasonal unit roots in forecasting. International Journal of Forecasting 13, 341--355.
151. Desai, M. J., D. F. Hendry, & G. E. Mizon (1997) John Denis Sargan. Economic Journal 107, 1121--1125.
152. Doornik, J. A., & D. F. Hendry (1997) Modelling Dynamic Systems Using PcFiml 9.0 for Windows. London: International Thomson Business Press.
153. Ericsson, N. R., & D. F. Hendry (1997) Lucas critique. In D. Glasner (ed.), Business Cycles and Depressions: An Encyclopedia, pp. 410--413. New York: Garland Publishing.
154. Hendry, D. F. (1997a) Book review of Doing Economic Research: Essays on the Applied Methodology of Economics by Thomas Mayer. Economic Journal 107, 845--847.
155. Hendry, D. F. (1997b) Cointegration analysis: An international enterprise. In H. Jeppesen and E. Starup-Jensen (eds.), University of Copenhagen: Centre of Excellence, pp. 190--208. Copenhagen:
University of Copenhagen.
156. Hendry, D. F. (1997c) The econometrics of macroeconomic forecasting. Economic Journal 107, 1330--1357.
157. Hendry, D. F. (1997d) On congruent econometric relations: A comment. Carnegie- Rochester Conference Series on Public Policy 47, 163--190.
158. Hendry, D. F. (1997e) The role of econometrics in scientific economics. In A. d'Autume and J. Cartelier (eds.), Is Economics Becoming a Hard Science?, pp. 165--186. Cheltenham: Edward Elgar.
159. Hendry, D. F., & J. A. Doornik (1997) The implications for econometric modelling of forecast failure. Scottish Journal of Political Economy 44, 437--461.
160. Hendry, D. F., & N. Shephard (eds.) (1997a) Cointegration and Dynamics in Economics. Special Issue, Journal of Econometrics, 80, 2, October.
161. Hendry, D. F., & N. Shephard (1997b) Editors' introduction. Journal of Econometrics 80, 195--197.
162. Clements, M. P., & D. F. Hendry (1998a) Forecasting economic processes. International Journal of Forecasting 14, 111--131 (with discussion).
163. Clements, M. P., & D. F. Hendry (1998b) Forecasting Economic Time Series. Cambridge: Cambridge University Press.
164. Doornik, J. A., D. F. Hendry, & B. Nielsen (1998) Inference in cointegrating models: UK M1 revisited. Journal of Economic Surveys 12, 533--572.
165. Ericsson, N. R., D. F. Hendry, & G. E. Mizon (1998) Exogeneity, cointegration, and economic policy analysis. Journal of Business and Economic Statistics 16, 370--387.
166. Ericsson, N. R., D. F. Hendry, & K. M. Prestwich (1998a) The demand for broad money in the United Kingdom, 1878--1993. Scandinavian Journal of Economics 100, 289--324 (with discussion).
167. Ericsson, N. R., D. F. Hendry, & K. M. Prestwich (1998b) Friedman and Schwartz (1982) revisited: Assessing annual and phase-average models of money demand in the United Kingdom. Empirical
Economics 23, 401--415.
168. Hendry, D. F., & G. E. Mizon (1998) Exogeneity, causality, and co-breaking in economic policy analysis of a small econometric model of money in the UK. Empirical Economics 23, 267--294.
169. Hendry, D. F., & N. Shephard (1998) The Econometrics Journal of the Royal Economic Society: Foreword. Econometrics Journal 1, i--ii.
170. Clements, M. P., & D. F. Hendry (1999a) Forecasting Non-stationary Economic Time Series. Cambridge: MIT Press.
171. Clements, M. P., & D. F. Hendry (1999b) On winning forecasting competitions in economics. Spanish Economic Review 1, 123--160.
172. Ericsson, N. R., & D. F. Hendry (1999) Encompassing and rational expectations: How sequential corroboration can imply refutation. Empirical Economics 24, 1--21.
173. Hendry, D. F. (1999) An econometric analysis of US food expenditure, 1931--1989. In J. R. Magnus and M. S. Morgan (eds.), Methodology and Tacit Knowledge: Two Experiments in Econometrics, pp.
341--361. Chichester: John Wiley and Sons.
174. Hendry, D. F., & J. A. Doornik (1999) The impact of computational tools on time-series econometrics. In T. Coppock (ed.), Information Technology and Scholarship: Applications in the Humanities
and Social Sciences, pp. 257--269. Oxford: Oxford University Press.
175. Hendry, D. F., & H.-M. Krolzig (1999) Improving on `Data mining reconsidered' by K. D. Hoover and S. J. Perez. Econometrics Journal 2, 202--219.
176. Hendry, D. F., & G. E. Mizon (1999) The pervasiveness of Granger causality in econometrics. In R. F. Engle and H. White (eds.), Cointegration, Causality, and Forecasting: A Festschrift in Honour
of Clive W. J. Granger, pp. 102--134. Oxford: Oxford University Press.
177. Barnett, W. A., D. F. Hendry, S. Hylleberg, T. Teräsvirta, D. Tjøstheim, & A. Würtz (2000a) Introduction and overview. In W. A. Barnett, D. F. Hendry, S. Hylleberg, T. Teräsvirta, D. Tjøstheim,
and A. Würtz (eds.), Nonlinear Econometric Modeling in Time Series: Proceedings of the Eleventh International Symposium in Economic Theory, pp. 1--8. Cambridge: Cambridge University Press.
178. Barnett, W. A., D. F. Hendry, S. Hylleberg, T. Teräsvirta, D. Tjøstheim, & A. Würtz (eds.) (2000b) Nonlinear Econometric Modeling in Time Series: Proceedings of the Eleventh International
Symposium in Economic Theory. Cambridge: Cambridge University Press.
179. Beyer, A., J. A. Doornik, & D. F. Hendry (2000) Reconstructing aggregate Euro-zone data. Journal of Common Market Studies 38, 613--624.
180. Hendry, D. F. (2000a) Does money determine UK inflation over the long run? In R. E. Backhouse and A. Salanti (eds.), Macroeconomics and the Real World, vol. 1, pp. 85--114. Oxford: Oxford
University Press.
181. Hendry, D. F. (2000b) Econometrics: Alchemy or Science? Essays in Econometric Methodology. Oxford: Oxford University Press, New Edition.
182. Hendry, D. F. (2000c) Epilogue: The success of general-to-specific model selection. In D. F. Hendry (ed.), Econometrics: Alchemy or Science? Essays in Econometric Methodology, New Edition, pp.
467--490. Oxford: Oxford University Press.
183. Hendry, D. F. (2000d) On detectable and non-detectable structural change. Structural Change and Economic Dynamics 11, 45--65.
184. Hendry, D. F., & M. P. Clements (2000) Economic forecasting in the face of structural breaks. In S. Holly and M. Weale (eds.), Econometric Modelling: Techniques and Applications, pp. 3--37.
Cambridge: Cambridge University Press.
185. Hendry, D. F., & K. Juselius (2000) Explaining cointegration analysis: Part I. Energy Journal 21, 1--42.
186. Hendry, D. F., & G. E. Mizon (2000a) The influence of A.W. Phillips on econometrics. In R. Leeson (ed.), A. W. H. Phillips: Collected Works in Contemporary Perspective, pp. 353--364. Cambridge:
Cambridge University Press.
187. Hendry, D. F., & G. E. Mizon (2000b) On selecting policy analysis models by forecast accuracy. In A. B. Atkinson, H. Glennerster, and N. H. Stern (eds.), Putting Economics to Work: Volume in
Honour of Michio Morishima, pp. 71--119. London: STICERD, London School of Economics.
188. Hendry, D. F., & G. E. Mizon (2000c) Reformulating empirical macroeconometric modelling. Oxford Review of Economic Policy 16, 138--159.
189. Hendry, D. F., & R. Williams (2000) Distinguished fellow of the Economic Society of Australia, 1999: Adrian R. Pagan. Economic Record 76, 113--115.
190. Beyer, A., J. A. Doornik, & D. F. Hendry (2001) Constructing historical Euro-zone data. Economic Journal 111, F102--F121.
191. Clements, M. P., & D. F. Hendry (2001a) Explaining the results of the M3 forecasting competition. International Journal of Forecasting 17, 550--554.
192. Clements, M. P., & D. F. Hendry (2001b) Forecasting with difference-stationary and trend-stationary models. Econometrics Journal 4, S1--S19.
193. Clements, M. P., & D. F. Hendry (2001c) An historical perspective on forecast errors. National Institute Economic Review 2001, 100--112.
194. Doornik, J. A., & D. F. Hendry (2001a) Econometric Modelling Using PcGive 10 . Vol. 3, London: Timberlake Consultants Press (with Manuel Arellano, Stephen Bond, H. Peter Boswijk, and Marius
195. Doornik, J. A., & D. F. Hendry (2001b) GiveWin Version 2: An Interface to Empirical Modelling. London: Timberlake Consultants Press.
196. Doornik, J. A., & D. F. Hendry (2001c) Interactive Monte Carlo Experimentation in Econometrics Using PcNaive 2 . London: Timberlake Consultants Press.
197. Doornik, J. A., & D. F. Hendry (2001d) Modelling Dynamic Systems Using PcGive 10 . Vol. 2, London: Timberlake Consultants Press.
198. Hendry, D. F. (2001a) Achievements and challenges in econometric methodology. Journal of Econometrics 100, 7--10.
199. Hendry, D. F. (2001b) How economists forecast. In D. F. Hendry and N. R. Ericsson (eds.), Understanding Economic Forecasts, pp. 15--41. Cambridge: MIT Press.
200. Hendry, D. F. (2001c) Modelling UK inflation, 1875--1991. Journal of Applied Econometrics 16, 255--275.
201. Hendry, D. F., & J. A. Doornik (2001) Empirical Econometric Modelling Using PcGive 10 . Vol. 1, London: Timberlake Consultants Press.
202. Hendry, D. F., & N. R. Ericsson (2001a) Editors' introduction. In D. F. Hendry and N. R. Ericsson (eds.), Understanding Economic Forecasts, pp. 1--14. Cambridge: MIT Press.
203. Hendry, D. F., & N. R. Ericsson (2001b) Epilogue. In D. F. Hendry and N. R. Ericsson (eds.), Understanding Economic Forecasts, pp. 185--191. Cambridge: MIT Press.
204. Hendry, D. F., & N. R. Ericsson (eds.) (2001c) Understanding Economic Forecasts. Cambridge: MIT Press.
205. Hendry, D. F., & K. Juselius (2001) Explaining cointegration analysis: Part II. Energy Journal 22, 75--120.
206. Hendry, D. F., & H.-M. Krolzig (2001) Automatic Econometric Model Selection Using PcGets 1.0 . London: Timberlake Consultants Press.
207. Hendry, D. F., & M. H. Pesaran (2001a) Introduction: A special issue in memory of John Denis Sargan: Studies in empirical macroeconometrics. Journal of Applied Econometrics 16, 197--202.
208. Hendry, D. F., & M. H. Pesaran (eds.) (2001b) Special Issue in Memory of John Denis Sargan 1924--1996: Studies in Empirical Macroeconometrics. Special Issue, Journal of Applied Econometrics, 16,
3, May--June.
209. Krolzig, H.-M., & D. F. Hendry (2001) Computer automation of general-to-specific model selection procedures. Journal of Economic Dynamics and Control 25, 831--866.
210. Clements, M. P., & D. F. Hendry (eds.) (2002a) A Companion to Economic Forecasting. Oxford: Blackwell Publishers.
211. Clements, M. P., & D. F. Hendry (2002b) Explaining forecast failure in macroeconomics. In M. P. Clements and D. F. Hendry (eds.), A Companion to Economic Forecasting, pp. 539--571. Oxford:
Blackwell Publishers.
212. Clements, M. P., & D. F. Hendry (2002c) Modelling methodology and forecast failure. Econometrics Journal 5, 319--344.
213. Clements, M. P., & D. F. Hendry (2002d) An overview of economic forecasting. In M. P. Clements and D. F. Hendry (eds.), A Companion to Economic Forecasting, pp. 1--18. Oxford: Blackwell
214. Doornik, J. A., D. F. Hendry, & N. Shephard (2002) Computationally intensive econometrics using a distributed matrix-programming language. Philosophical Transactions of the Royal Society,
London, Series A 360, 1245--1266.
215. Hendry, D. F. (2002a) Applied econometrics without sinning. Journal of Economic Surveys 16, 591--604.
216. Hendry, D. F. (2002b) Forecast failure, expectations formation and the Lucas Critique. Annales d'Économie et de Statistique 2002, 21--40.
217. Campos, J., D. F. Hendry, & H.-M. Krolzig (2003) Consistent model selection by an automatic Gets approach. Oxford Bulletin of Economics and Statistics 65, 803--819.
218. Doornik, J. A., & D. F. Hendry (2003a) PcGive. In C. G. Renfro (ed.), ``A Compendium of Existing Econometric Software Packages'', Journal of Economic and Social Measurement, 26, forthcoming.
219. Doornik, J. A., & D. F. Hendry (2003b) PcNaive. In C. G. Renfro (ed.), ``A Compendium of Existing Econometric Software Packages'', Journal of Economic and Social Measurement, 26, forthcoming.
220. Haldrup, N., D. F. Hendry, & H. K. van Dijk (2003a) Guest editors' introduction: Model selection and evaluation in econometrics. Oxford Bulletin of Economics and Statistics 65, 681--688.
221. Haldrup, N., D. F. Hendry, & H. K. van Dijk (eds.) (2003b) Model Selection and Evaluation. Special Issue, Oxford Bulletin of Economics and Statistics, 65, supplement.
222. Hendry, D. F. (2003a) Book review of Causality in Macroeconomics by Kevin D. Hoover. Economica 70, 375--377.
223. Hendry, D. F. (2003b) Forecasting pitfalls. Bulletin of E.U. and U.S. Inflation and Macroeconomic Analysis 2003, 65--82.
224. Hendry, D. F. (2003c) J. Denis Sargan and the origins of LSE econometric methodology. Econometric Theory 19, 457--480.
225. Hendry, D. F., & M. P. Clements (2003) Economic forecasting: Some lessons from recent research. Economic Modelling 20, 301--329.
226. Hendry, D. F., & H.-M. Krolzig (2003a) New developments in automatic general-to-specific modeling. In B. P. Stigum (ed.), Econometrics and the Philosophy of Economics: Theory-Data
Confrontations in Economics, pp. 379--419. Princeton: Princeton University Press.
227. Hendry, D. F., & H.-M. Krolzig (2003b) PcGets. In C. G. Renfro (ed.), ``A Compendium of Existing Econometric Software Packages'', Journal of Economic and Social Measurement, 26, forthcoming.
228. Campos, J., N. R. Ericsson, & D. F. Hendry (eds.) (2004) Readings on General-to- Specific Modeling. Cheltenham: Edward Elgar, forthcoming.
229. Hendry, D. F. (2004) The Nobel memorial prize for Clive W. J. Granger. Scandinavian Journal of Economics 106, forthcoming.
230. Hendry, D. F., & M. P. Clements (2004) Pooling of forecasts. Econometrics Journal 7, forthcoming.
231. Hendry, D. F., & H.-M. Krolzig (2004) Sub-sample model selection procedures in general-to-specific modelling. In R. Becker and S. Hurn (eds.), Contemporary Issues in Economics and Econometrics:
Theory and Application, pp. 53--75. Cheltenham: Edward Elgar.
newtype Octonion k
Eq k => Eq (Octonion k)
(Ord k, Num k, Fractional k) => Fractional (Octonion k)
(Ord k, Num k) => Num (Octonion k)
Ord k => Ord (Octonion k)
Show k => Show (Octonion k)
g2_3 :: [Permutation (Octonion F3)]
Generators for G2(3), a finite simple group of order 4245696, as a permutation group on the 702 unit imaginary octonions over F3 | {"url":"http://hackage.haskell.org/package/HaskellForMaths-0.4.5/docs/Math-Projects-ChevalleyGroup-Exceptional.html","timestamp":"2014-04-17T19:03:55Z","content_type":null,"content_length":"24128","record_id":"<urn:uuid:ba6e6325-1ba8-4580-85b0-27c57f8d07c2>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00050-ip-10-147-4-33.ec2.internal.warc.gz"} |
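As a quick sanity check on the quoted order (plain arithmetic only; nothing here calls the library), 4245696 is exactly what the standard order formula for the Chevalley group G2(q) gives at q = 3:

```haskell
-- |G2(q)| = q^6 (q^6 - 1) (q^2 - 1); at q = 3 this is 729 * 728 * 8 = 4245696.
g2Order :: Integer -> Integer
g2Order q = q ^ 6 * (q ^ 6 - 1) * (q ^ 2 - 1)

-- ghci> g2Order 3
-- 4245696
```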
Verification of control flow based security properties
- In Proceedings of the 9th ACM Conference on Computer and Communications Security , 2002
"... We describe a formal approach for finding bugs in security-relevant software and verifying their absence. The idea is as follows: we identify rules of safe programming practice, encode them as
safety properties, and verify whether these properties are obeyed. Because manual verification is too expen ..."
Cited by 196 (7 self)
We describe a formal approach for finding bugs in security-relevant software and verifying their absence. The idea is as follows: we identify rules of safe programming practice, encode them as safety
properties, and verify whether these properties are obeyed. Because manual verification is too expensive, we have built a program analysis tool to automate this process. Our program analysis models
the program to be verified as a pushdown automaton, represents the security property as a finite state automaton, and uses model checking techniques to identify whether any state violating the
desired security goal is reachable in the program. The major advantages of this approach are that it is sound in verifying the absence of certain classes of vulnerabilities, that it is fully
interprocedural, and that it is efficient and scalable. Experience suggests that this approach will be useful in finding a wide range of security vulnerabilities in large programs efficiently.
- Proc. of CAV'2000 , 2000
"... We study model checking problems for pushdown systems and linear time logics. We show that the global model checking problem (computing the set of configurations, reachable or not, that violate
the formula) can be solved in O(gP 3 ..."
Cited by 145 (25 self)
We study model checking problems for pushdown systems and linear time logics. We show that the global model checking problem (computing the set of configurations, reachable or not, that violate the
formula) can be solved in O(gP 3
, 2004
"... Abstract. We study congruences on words in order to characterize the class of visibly pushdown languages (Vpl), a subclass of context-free languages. For any language L, we define a natural
congruence on words that resembles the syntactic congruence for regular languages, such that this congruence i ..."
Cited by 133 (15 self)
Abstract. We study congruences on words in order to characterize the class of visibly pushdown languages (Vpl), a subclass of context-free languages. For any language L, we define a natural
congruence on words that resembles the syntactic congruence for regular languages, such that this congruence is of finite index if, and only if, L is a Vpl. We then study the problem of finding
canonical minimal deterministic automata for Vpls. Though Vpls in general do not have unique minimal automata, we consider a subclass of VPAs called k-module single-entry VPAs that correspond to
programs with recursive procedures without input parameters, and show that the class of well-matched Vpls do indeed have unique minimal k-module single-entry automata. We also give a polynomial time
algorithm that minimizes such k-module single-entry VPAs. 1 Introduction The class of visibly pushdown languages (Vpl), introduced in [1], is a subclass of context-free languages accepted by pushdown
automata in which the input letter determines the type of operation permitted on the stack. Visibly push-down languages are closed under all boolean operations, and problems such as inclusion, that
are undecidable for context-free languages, are decidable for Vpl. Vpls are relevant to several applications that use context-free languages such as the model-checking of software programs using their
pushdown models [1-3]. Recent work has shown applications in other contexts: in modeling semantics of effects in processing XML streams [4], in game semantics for programming languages [5], and in
identifying larger classes of pushdown specifications that admit decidable problems for infinite games on pushdown graphs [6].
- In Proc. CAV'01, LNCS 2102, 2001
"... We present a model-checker for boolean programs with (possibly recursive) procedures and the temporal logic LTL. The checker is guaranteed to terminate even for (usually faulty) programs in
which the depth of the recursion is not bounded. The algorithm uses automata to finitely represent possibly in ..."
Cited by 68 (8 self)
We present a model-checker for boolean programs with (possibly recursive) procedures and the temporal logic LTL. The checker is guaranteed to terminate even for (usually faulty) programs in which the
depth of the recursion is not bounded. The algorithm uses automata to finitely represent possibly infinite sets of stack contents and BDDs to compactly represent finite sets of values of boolean
variables. We illustrate the checker on some examples and compare it with the Bebop tool of Ball and Rajamani.
, 2004
"... Model checking of linear temporal logic (LTL) speci cations with respect to pushdown systems has been shown to be a useful tool for analysis of programs with potentially recursive procedures.
LTL, however, can specify only regular properties, and properties such as correctness of procedures wit ..."
Cited by 54 (11 self)
Model checking of linear temporal logic (LTL) specifications with respect to pushdown systems has been shown to be a useful tool for analysis of programs with potentially recursive procedures. LTL,
however, can specify only regular properties, and properties such as correctness of procedures with respect to pre and post conditions, that require matching of calls and returns, are not regular. We
introduce a temporal logic of calls and returns (CaRet) for specification and algorithmic verification of correctness requirements of structured programs. The formulas of CaRet are interpreted over
sequences of propositional valuations tagged with special symbols call and ret. Besides the standard global temporal modalities, CaRet admits the abstract-next operator that allows a path to jump
from a call to the matching return. This operator can be used to specify a variety of non-regular properties such as partial and total correctness of program blocks with respect to pre and post
conditions. The abstract versions of the other temporal modalities can be used to specify regular properties of local paths within a procedure that skip over calls to other procedures. CaRet also
admits the caller modality that jumps to the most recent pending call, and such caller modalities allow specification of a variety of security properties that involve inspection of the call-stack.
Even though verifying context-free properties of pushdown systems is undecidable, we show that model checking CaRet formulas against a pushdown model is decidable. We present a tableau construction
that reduces our model checking problem to the emptiness problem for a Büchi pushdown system. The complexity of model checking CaRet formulas is the same as that of checking LTL formulas, namely,
Take the Zimbardo Time Perspective Inventory (ZTPI) to get an idea of your scores in the different time perspectives.
You can also take the Transcendental-future Time Perspective Inventory (TTPI).
Once you've taken the ZTPI or TTPI you may want to compare your scores to others by checking out the graph below:
When looking at all data collected so far, the average score on each of the time perspectives is different. The average score for each time perspective lines up with 50% on the graph. For example, on the past negative time perspective, people's average score is 3.0. On the past positive it is 3.22. We've taken into account the dispersion of scores, so a 4.7 on the past negative sits at the 99th percentile, while it takes a 4.11 on the past positive to reach the 99th percentile.
The red dots and lines are not associated with the data in any way. It is simply our idea of what an ideal time perspective looks like. We have included it, so you can have an indication of how to
improve your time perspective. You can print out and plot your scores on the image, then connect the dots, and see how your time perspective compares with that of others. | {"url":"http://www.thetimeparadox.com/surveys/","timestamp":"2014-04-18T01:44:09Z","content_type":null,"content_length":"18626","record_id":"<urn:uuid:0874944d-3356-4784-a82c-f9d3f7c76799>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00627-ip-10-147-4-33.ec2.internal.warc.gz"} |
Haledon Science Tutor
Find a Haledon Science Tutor
...I have a strong background in biochemistry from my PhD, and during my Master's received an 'A' in graduate biochemistry and a 'high pass' on my qualification exam which included a biochemistry
component. I have previously tutored students in graduate-level biochemistry. During the final year of my PhD, I had the opportunity to design and implement a semester-long course on ecology.
6 Subjects: including biostatistics, biology, chemistry, biochemistry
Hey there! I am a pre-med student in my second year of college and I have 3 years of tutoring experience. I currently work at the Math Center in South Orange, NJ.
29 Subjects: including chemistry, anatomy, biology, English
...I have worked for 3 years as a private tutor teaching biology to High School students. I am a native Greek speaker. I was born and grew up in Greece and I studied in Greece and France.
2 Subjects: including biology, Greek
...Have further taken math classes involving upper level math, such as differential equations and linear algebra. Engineering graduate. Have taken years of mathematics including calculus.
12 Subjects: including chemistry, physics, statistics, probability
...I have my master's degree in education and have four years of experience teaching science in New York City, in addition to several years coaching new math and science teachers. I use engaging
and effective instruction strategies to help you truly understand the reasons and the science behind all...
8 Subjects: including astronomy, physical science, Regents, nutrition
Nearby Cities With Science Tutor
Allendale, NJ Science Tutors
Fairfield, NJ Science Tutors
Glen Rock, NJ Science Tutors
Hawthorne, NJ Science Tutors
Ho Ho Kus Science Tutors
Midland Park Science Tutors
North Haledon, NJ Science Tutors
Paterson, NJ Science Tutors
Pequannock Science Tutors
Pequannock Township, NJ Science Tutors
Prospect Park, NJ Science Tutors
Totowa Science Tutors
Totowa Boro, NJ Science Tutors
Wayne, NJ Science Tutors
Woodland Park, NJ Science Tutors | {"url":"http://www.purplemath.com/haledon_nj_science_tutors.php","timestamp":"2014-04-16T16:00:27Z","content_type":null,"content_length":"23518","record_id":"<urn:uuid:059256e4-ea0a-4ccb-989f-b09dfd852b63>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00335-ip-10-147-4-33.ec2.internal.warc.gz"} |
Given that `n!>=2^(n-1)`
Deduce that `sum_(k=1)^n 1/(k!) <= 2-1/2^(n-1)`
Hence show that e<=3, where e is the base of natural logarithms.
Since `k! >= 2^(k-1)` for every positive integer k, we have `1/(k!) <= 1/2^(k-1)`.
Summing both sides from k = 1 to n gives:
`sum_(k=1)^n 1/(k!) <= sum_(k=1)^n 1/2^(k-1)`
`sum_(k=1)^n 1/2^(k-1) = 1+(1/2)+(1/2)^2+...+(1/2)^(n-1)`
This is the sum of a geometric series with first term 1 and common ratio 1/2, so
`sum_(k=1)^n 1/2^(k-1) = (1-(1/2)^n)/(1-1/2) = 2-2/2^n`
`sum_(k=1)^n 1/(k!) <= sum_(k=1)^n 1/2^(k-1)`
`sum_(k=1)^n 1/(k!) <= 2-2/2^n`
`sum_(k=1)^n 1/(k!) <= 2-1/2^(n-1)`
So the first part is proved.
We know that:
`e=lim_(n->oo) sum_(k=0)^n 1/(k!) `
`e= lim_(n->oo) [1+sum_(k=1)^n 1/(k!)]`
`e=1+ lim_(n->oo) sum_(k=1)^n 1/(k!)`
We proved that:
`sum_(k=1)^n (1/k!) <= 2-1/2^(n-1)`
`e = 1+ lim_(n->oo) sum_(k=1)^n 1/(k!) <= 1+ lim_(n->oo) [2-1/2^(n-1)] = 1+2 = 3`
So `e <= 3`, as required.
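As an aside (not needed for the question, which takes the premise as given), the assumed inequality `n! >= 2^(n-1)` follows by a quick induction on n:

$$1! = 1 = 2^{0}, \qquad (n+1)! = (n+1)\cdot n! \;\ge\; 2 \cdot 2^{\,n-1} = 2^{n}, \quad \text{since } n+1 \ge 2.$$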
Finding the area
What shape is this in the graph so I'll know what formula to use.....
I have a question and a possibility; for the far right point, can I assume the y-coordinate is 8.5? If so, I suggest that you use the shoelace formula.
Seriously? I see three sides and three angles, so that's a triangle!
I can't distinguish the base and the height... But that's okay I'll just use the shoelace formula
I know it's a triangle. I guess what I was aiming to ask for is the formula.
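Since the shoelace formula came up: it gives the area of any simple polygon directly from the vertex coordinates, with no need to pick out a base and a height. A minimal sketch (the coordinates below are made up for illustration; they are not the points from the graph in this thread):

```haskell
-- Shoelace formula: area of a simple polygon from its vertices, taken in order.
shoelace :: [(Double, Double)] -> Double
shoelace vs = abs total / 2
  where
    total = sum [ x1 * y2 - x2 * y1
                | ((x1, y1), (x2, y2)) <- zip vs (tail vs ++ [head vs]) ]

-- Example: a right triangle with legs 3 and 4 has area 6.
main :: IO ()
main = print (shoelace [(0, 0), (3, 0), (0, 4)])
```

For a triangle with vertices (x1,y1), (x2,y2), (x3,y3) this reduces to |x1(y2-y3) + x2(y3-y1) + x3(y1-y2)| / 2.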
Quant analytics: Using Excel LINEST() function for multivariate regression
Written by Administrator Monday, 14 March 2011 20:41
An example of data from an antique clock auction
# Obs Y Price($) X2 Age X3 Bids
Y (Price) is the dependent variable: the price of the antique clock. X2 (Age) is the age of the clock. X3 (# Bids) makes it no longer a simple regression. X2 and X3 are the independent, or explanatory, variables. The more bidders, or the older the clock, the higher the price.
Excel's LINEST() returns the array for the multiple regression.
Legend to Output Below
b1 b2 b3
se(b1) se(b2) se(b3)
R^2 se(y)
F d.f. <-degrees of freedom
ESS RSS <- explained sum of squares and residual sum of squares
Regression output (from LINEST(), see below)
X3 X2 intercept
85.764 12.741 -1336.049 <- coefficients see below
8.803 .0912 15.272 <-this row is standard error
0.891 134.608 #n/a <-see legend above
R^2 is the coefficient of determination, so .89 is quite high
se(y) is the standard error of the regression (the standard error of the estimate)
There are 3 standard errors because the regression has an intercept plus two explanatory variables
You need to select a range. Use =LINEST(known Y's, known X's (both X2 & X3 above),,TRUE) and press Ctrl+Shift+Enter -> Excel returns the array in the range you selected
The returned results are the coefficients in the order X3, X2, intercept, as above in the regression output. This is specified right to left. The intercept is for the regression line. X2 is the age and X3 is the # of bids. The fitted equation is
y = -1336.049 + 12.741*x2 + 85.764*x3 <- formula for the multiple regression
TRUE in the 4th argument requests the statistics; note that the 3rd argument (the constant option) is left blank, so the intercept is estimated as usual
Because you choose y and both x's, it becomes a multivariate regression not just x&y.
To judge significance, calculate the t-statistic (not the critical value, which is roughly 2): 85.764/8.803 is approximately 9.74 > 2, so the bids coefficient is statistically significant
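For reference, here is what LINEST is computing underneath (ordinary least squares; the notation is mine, not from the article). With X the matrix containing a column of ones plus the Age and Bids columns, and y the vector of prices:

$$\hat{\beta} = (X^\top X)^{-1} X^\top y, \qquad t_j = \frac{\hat{\beta}_j}{\widehat{\mathrm{se}}(\hat{\beta}_j)}, \qquad R^2 = \frac{\mathrm{ESS}}{\mathrm{ESS} + \mathrm{RSS}}.$$

The t-ratio quoted above is just the middle formula applied to the bids coefficient and its standard error from the LINEST output.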
A Cart losing Mass
When you say 'initial mass M' does that include the load of sand? donjennix's approach looks OK to me (but then again, I'm no physics god so...). Using momentum, we have
dp/dt = d/dt (mv)
and since m and v are functions of time, then
dp/dt = m dv/dt + v dm/dt.
Rearranging gives
dv/dt = (1/m)(dp/dt - v dm/dt)
Since F is producing the change in momentum, we can substitute it for dp/dt. dm/dt is just k so
dv/dt = (1/m)(F - vk)
As I said, I'm no physics guru, so let's see what others say.
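If you want to see what that last equation predicts numerically, a few lines of forward-Euler integration will do it. To be clear, this just integrates dv/dt = (F - kv)/m(t) with m(t) = M - kt exactly as written in the post above (whether that is the right equation for a leaking cart is precisely what the thread is debating), and all the parameter values are made up for illustration:

```haskell
-- Forward Euler for dv/dt = (f - k*v) / (m0 - k*t), the equation as written above.
-- f = applied force, m0 = initial mass, k = leak rate; all values illustrative.
simulate :: Double -> Double -> Double -> Double -> Double -> [(Double, Double)]
simulate f m0 k dt tMax =
    takeWhile (\(t, _) -> t <= tMax && m0 - k * t > 0) (iterate step (0, 0))
  where
    step (t, v) = (t + dt, v + dt * (f - k * v) / (m0 - k * t))

main :: IO ()
main = print (last (simulate 10 5 0.5 0.01 4))   -- (time, velocity) at the end
```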
A Math Puzzle Coming From Chemistry
I posed this puzzle a while back, and nobody solved it. That's okay -- now that I think about it, I'm not sure how to solve it either!
It seems to involve group theory. But instead of working on it, solving it and telling you the answer, I'd rather dump all the clues in your lap, so we can figure it out together.
Suppose we have an ethyl cation. We'll pretend it looks like this:
As I explained before, it actually doesn't -- not in real life. But never mind! Realism should never stand in the way of a good puzzle.
Continuing on in this unrealistic vein, we'll pretend that the two black carbon atoms are distinguishable, and so are the five white hydrogen atoms. As you can see, 2 of the hydrogens are bonded to one carbon, and 3 to the other. We don't care how the hydrogens are arranged, apart from which carbon each hydrogen is attached to. Given this, there are
$2 \times \displaystyle{ \binom{5}{2} = 20 }$
ways to arrange the hydrogens. Let's call these arrangements states.
Now draw a dot for each of these 20 states. Draw an edge connecting two dots whenever you can get from one state to another by having a hydrogen hop from the carbon with 2 hydrogens to the carbon
with 3. You'll get this picture, called the Desargues graph:
The red dots are states where the first carbon has 2 hydrogens attached to it; the blue ones are states where the second carbon has 2 hydrogens attached to it. So, each edge goes between a red and a
blue dot. And there are 3 edges coming out of each dot, since there are 3 hydrogens that can make the jump!
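Here is a small sketch (mine, not from the original post; it is in the same spirit as the Haskell program that shows up later in the comments) that just enumerates those 20 states and the allowed hops, and checks the counts: 20 vertices, 30 edges, 3 edges at each dot, and every edge joining a red state to a blue one:

```haskell
import Data.List ((\\), subsequences)

-- A state: which carbon (1 or 2) currently carries only 2 hydrogens,
-- together with the labels of those 2 hydrogens.
type State = (Int, [Int])

states :: [State]
states = [ (c, p) | c <- [1, 2], p <- subsequences [1 .. 5], length p == 2 ]

-- One transition: a hydrogen h hops off the 3-hydrogen carbon, so the roles swap.
hops :: State -> [State]
hops (c, p) = [ (other c, three \\ [h]) | h <- three ]
  where
    three = [1 .. 5] \\ p
    other 1 = 2
    other _ = 1

main :: IO ()
main = do
  print (length states)                              -- 20 states
  print (sum (map (length . hops) states) `div` 2)   -- 30 edges
  print (all ((== 3) . length . hops) states)        -- every dot has 3 edges
  print (and [ c /= c' | st@(c, _) <- states, (c', _) <- hops st ])  -- red-blue only
```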
Now, the puzzle is to show that you can also get the Desargues graph from a different kind of molecule. Any molecule shaped like this will do:
The 2 balls on top and bottom are called axial, while the 3 around the middle are called equatorial.
There are various molecules like this. For example, phosphorus pentachloride. Let's use that.
Like the ethyl cation, phosphorus pentachloride also has 20 states... but only if we count them a certain way! We have to treat all 5 chlorines as distinguishable, but think of two arrangements of them as the same if we can rotate one to get the other. Again, I'm not claiming this is physically realistic: it's just for the sake of the puzzle.
Phosphorus pentachloride has 6 rotational symmetries, since you can turn it around its axis 3 ways, but also flip it over. So, it has
$\displaystyle{ \frac{5!}{6} = 20}$
That's good: exactly the number of dots in the Desargues graph! But how about the edges? We get these from certain transitions between states. These transitions are called pseudorotations, and they
look like this:
Phosphorus pentachloride really does this! First the 2 axial guys move towards each other to become equatorial. Beware: now the equatorial ones are no longer in the horizontal plane: they're in the
plane facing us. Then 2 of the 3 equatorial guys swing out to become axial.
To get from one state to another this way, we have to pick 2 of the 3 equatorial guys to swing out and become axial. There are 3 choices here. So, we again get a graph with 20 vertices and 3 edges
coming out of each vertex.
Puzzle. Is this graph the Desargues graph? If so, show it is.
I read in some chemistry papers that it is. But is it really? And if so, why? David Corfield suggested a promising strategy. He pointed out that we just need to get a 1-1 correspondence between
• states of the ethyl cation and states of phosphorus pentachloride,
together with a compatible 1-1 correspondence between
• transitions of the ethyl cation and transitions of phosphorus pentachloride.
And he suggested that to do this, we should think of the split of hydrogens into a bunch of 2 and a bunch of 3 as analogous to the split of chlorines into a bunch of 2 (the 'axial' ones) and a bunch of 3 (the 'equatorial' ones).
It's a promising idea. There's a problem, though! In the ethyl cation, a single hydrogen hops from the bunch of 3 to the bunch of 2. But in a pseudorotation, two chlorines go from the bunch of 2 to the bunch of 3... and meanwhile, two go back from the bunch of 3 to the bunch of 2.
And if you think about it, there's another problem too. In the ethyl cation, there are 2 distinguishable carbons. One of them has 3 hydrogens attached, and one doesn't. But in phosphorus pentachloride it's not like that. The 3 equatorial chlorines are just that: equatorial. They don't have 2 choices about how to be that way. Or do they?
Well, there's more to say, but this should already make it clear that getting 'natural' one-to-one correspondences is a bit tricky... if it's even possible at all!
If you know some group theory, we could try solving the problem using the ideas behind Felix Klein's 'Erlangen program'. The group of permutations of 5 things, say $S_5,$ acts as symmetries of either
molecule. For the ethyl cation the set of states will be $X = S_5/G$ for some subgroup $G.$ You can think of $X$ as a set of structures of some sort on a 5-element set. The group $S_5$ acts on $X,$
and the transitions will give an invariant binary relation on $X$. For phosphorus pentachloride we'll have some set of states $X' = S_5/G'$ for some other subgroup $G'$, and the transitions will give
an invariant relation on $X'$.
We could start by trying to see if $G$ is the same as $G'$ -- or more precisely, conjugate. If they are, that's a good sign. If not, it's bad: it probably means there's no 'natural' way to show the
graph for phosphorus pentachloride is the Desargues graph.
I could say more, but I'll stop here. In case you're wondering, all this is just a trick to get more mathematicians interested in chemistry. A few may then go on to do useful things.
37 Responses to A Math Puzzle Coming From Chemistry
1. Does it not become very natural if you label the hydrogens on the ethyl cation in a different way?
Instead of labelling them as which carbon they're on, ignore the carbons altogether and label them according to whether they're in a set of 3 or a set of 2 (maybe set is the wrong word, I just thought it would be less confusing than 'group' -- interpret in the non-maths-y sense) -- this is exactly the same labelling as for phosphorus pentachloride.
The pseudorotations and the hydrogen-swapping are then both just the action of turning the 3-set into a 2-set by moving one hydrogen. This is a little more difficult to see with the phosphorus pentachloride, but the two equatorial hydrogens that you choose to physically 'move' are the ones 'staying in their set', and the one that stays physically stationary is the one that 'swaps into' the other set.
No group theory needed, just thinking about them in a very slightly different way and it's obvious.
Nice!!! That was fast!
I think there's a wee bit more work to be done, though...
Labelling each hydrogen according to whether it's in the set of 3 or the set of 2 does not completely specify a state of the ethyl cation. Since we're treating the carbons as
distinguishable, one extra bit of information is needed: which carbon has 3 hydrogens attached to it and which has 2?
Here 'bit' is used in the technical sense: binary digit. This extra bit is what brings the number of states up from
$\displaystyle{ \binom{5}{2} = 10 }$
$2 \times \displaystyle{ \binom{5}{2} = 20 }$
However, this bit of information always changes when we make a transition. Indeed, this bit says the color of the dots here:
and each edge goes between a red dot and a blue one.
Over on the phosphorus pentachloride side, thereโs also an extra bit of information, in addition to the labelling saying which chlorines are axial and which are equatorial.
So, weโve reduced the puzzle to this mini-puzzle: what is this extra bit, and does it always change when we do a pseudorotation? If it does, weโre done.
By the way, thereโs another way of counting states in both molecules, where we ignore this extra bit. If we used that way, weโd have 10 states instead of 20โฆ and weโd be done with this
But thereโs another fun question to ask about this other way of counting states: what graph would we get then?
โ Does the extra bit come from the reflectional symmetry of the phosphorus pentachloride? So, if a and b are axial, with a above, then the equatorial (b c d) clockwise is not the same as (d
c b) clockwise.
โ Yes, this extra bit does not change when we rotate the phosphorus pentachloride moleculeโsee my reply to Peter Morgan belowโbut it does change when we reflect it. The question is
whether it changes when we pseudorotate it!
โ Just thinking out loud.
Let me write [a,b,c,d,e] for the first image in this post labeled in reading order. So a and e are the axial atoms, bcd are equatorial in counter clockwise order. We have by
rotational symmetry that: [a,b,c,d,e] = [a,c,d,b,e] = [a,d,b,c,e] = [e,d,c,b,a] = [e,c,b,d,a] = [e,b,d,c,a]. I will always use the lexicographically smallest representation.
Then the pseudorotation in the image in this post is [a,b,c,d,e] |-> [b,a,e,c,d].
According to a simple program I wrote, there are 10 different states reachable from [1,2,3,4,5] in an even number of moves:
and 10 in an odd number of moves:
The bit could be the parity of the permutation (a,b,c,d,e). That is because pseudorotation has odd parity, while all symmetry permutations have even parity.
{-# LANGUAGE NoMonomorphismRestriction #-}
import Data.List
import qualified Data.Set as Set

-- the 6 labelings of a state that are equivalent under rotation
-- (a and e axial; b, c, d equatorial)
perm xs = sort $ perm1 xs ++ perm1 (reverse xs)
perm1 [a,b,c,d,e] = [[a,b,c,d,e],[a,c,d,b,e],[a,d,b,c,e]]

-- canonical representative: the lexicographically smallest labeling
cannon = head . perm

-- one pseudorotation
rot [a,b,c,d,e] = [b,a,e,c,d]

-- close a starting point under a nondeterministic step function
close :: Ord a => (a -> [a]) -> a -> Set.Set a
close f = go Set.empty
  where
    go s a
      | a `Set.member` s = s
      | otherwise        = foldl go (Set.insert a s) (f a)

-- all states reachable in one pseudorotation, in canonical form
nextStates = map (cannon . rot) . perm

s0 = [1..5]
states = close nextStates s0
sEven  = close (concatMap nextStates . nextStates) s0
sOdd   = close (concatMap nextStates . nextStates) (nextStates s0 !! 0)

-- parity of a permutation: number of inversions mod 2
parity xs = (`mod` 2) $ sum [ if x > y then 1 else 0 | x:ys <- tails xs, y <- ys ]
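A quick sanity check one can run in GHCi (this session is mine, not part of the original comment): Set.size states comes out to 20, Set.size sEven and Set.size sOdd each come out to 10, and Set.map parity sEven and Set.map parity sOdd evaluate to fromList [0] and fromList [1] respectively, so the parity bit does separate the two halves of the graph.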
โ Excellent! The puzzle is solved!
More laterโฆ now itโs my bedtime. Iโd like to try to find a way to solve this puzzle using plain English.
โ Good way of looking at it, Twan. Now, however, the pseudorotation looks to me to be a bit of a red herring as far as counting states is concerned. States are the same if they can be
reached by rotations; rotations generate cycles containing 6 elements, [a,b,c,d,e] = [a,c,d,b,e] = [a,d,b,c,e] = [e,d,c,b,a] = [e,c,b,d,a] = [e,b,d,c,a], therefore there are 20
equivalence classes=states.
The pseudorotation plus *either* the rotation [a,b,c,d,e]->[a,c,d,b,e] *or* the rotation [a,b,c,d,e]->[e,b,d,c,a] generate cycles with 120 elements, with no grading. The
pseudorotation plus the reflection [a,b,c,d,e]->[e,b,c,d,a] generates a cycle with 120 elements, with an odd-even grading. All of these facts have no bearing on the counting of
states, but they show that the pseudorotation is sufficient to generate all transformations between the 20 equivalence classes under rotations.
There are also, a propos of nothing, 20 equivalence classes under pseudorotations, which generate cycles containing 6 elements, or 40 equivalence classes under even numbers of
pseudorotations, which generate cycles containing 3 elements, [a,b,c,d,e] = [a,b,e,c,d] = [a,b,d,e,c].
I hope this rehearsal as I see it now looks OK. Thanks for the puzzle, John. Not sure yet how to present equivalence classes in plain English.
For the ethyl cation, a more-or-less comparable formalism has a list of two lists, one containing 2 elements, the other 3, [[1,2],[3,4,5]], which we take *not* to be equivalent to
[[3,4,5],[1,2]]. On the other hand, the permutations of 3,4,5 and of 1,2 form an equivalence class of 12 elements. Hence, we have $2\times 5!/3!/2!$ states. Not sure how to introduce
an equivalence to the phosphorus pentachloride case.
2. We get 20 in the ethyl cation case by โpretendingโ that the carbon atoms are distinguishable, so that we get the *2 in Binomial(5,2)*2 as the number of states, and the set of states is $S_5/S_3/
S_2\times S_2$. Equally, we can โpretendโ that the two axial positions in potassium pentachloride are distinguishable, so that the number of states is multinomial(5,1,1), and the set of states is
$S_5/S_3$. Alternatively, we can โpretendโ that even and odd permutations of the equatorial positions in potassium pentachloride are distinguishable, so that the number of states is binomial(5,2)
*2 and the set of states is $S_5/((S_3/S_2)\times S_2)$.
In all three cases, other things being equal, all 20 states are equivalent to each other. You seem to be postulating a model dynamics of discrete moves between states at each discrete time step
(without any of the probabilistic weighting that weโve seen in earlier posts). Within a class of models of this type, we could reasonably define the ethyl cation model and the potassium
pentachloride model as equivalent, as isolated discrete dynamical models, just if the single step transition graphs for the two discrete dynamics are, on some definition of graph equivalence,
equivalent graphs (your geometrical description of the potassium pentachloride dynamics, however, introduces intermediate states that have a different symmetry structure, aiming at an equivalence
of the ethyl cation dynamics with a subgraph of the potassium pentachloride dynamics, which has what I suppose could be called a $\mathbb{Z}_2$ grading).
I suppose that for any sort of detailed modeling we will have to find ways to modify these structures to add the effect of another molecule, of an electromagnetic field, or of some other
perturbation to the single-molecule model, and that the 20 states will then not be equivalent, or that the number of states might change as the model symmetries change. In any case, I hope that
at some level of detail an empirically accurate model will distinguish between an ethyl cation and a potassium pentachloride molecule.
Iโm not sure youโll think this solves the puzzle?
โก Peter wrote:
โฆ we can โpretendโ that the two axial positions in phosphorus pentachloride are distinguishable, so that the number of states is multinomial(5,1,1), and the set of states is $S_5/S_3$.
Alternatively, we can โpretendโ that even and odd permutations of the equatorial positions in potassium pentachloride are distinguishable,
Nice!!! Butโฆ
You just mentioned two very tempting ways to introduce an extra bit of information into the phosphorus pentachloride problem. But we donโt get to pick how to introduce that extra bit of
information: the puzzle as stated tells us how we have to do it!
The puzzle says that two molecules of phosphorus pentachloride with chlorine atoms labelled 1,2,3,4,5 count as being in the same state if we can rotate one to the other, carrying the
labelling of one to the labelling of the other.
So, the two axial positions are not distinguishable: we can always rotate the molecule so that the โtopโ chlorine becomes the โbottomโ one, and vice versa.
Similarly, the two cyclic orderings of the equatorial chlorines are not distinguishable: we can always rotate the molecule so that the โclockwiseโ cyclic ordering becomes the
โcounterclockwiseโ one, and vice versa.
However, youโll note that flipping the molecule over to make the top chlorine the bottom one also changes the clockwise cyclic ordering into the counterclockwise one.
So while neither of your ways of introducing an extra bit of information is the way prescribed by the problem, the sum of these two bits is. (Here Iโm adding bits mod 2.)
So the mini-puzzle is whether this bit inevitably changes when we do a pseudorotation.
I was just about to work this out when I came back to my computer and saw your comment. Itโs nice to see weโre thinking about the same stuff. This problem has been bugging me for weeks, but
now itโs almost solved.
โ So, there are 40 states in the state space for the potassium pentachloride case, which contains a 20 state subspace that is closed under pairs of reflections. Pseudorotations are not
rotations, but they are one (of many) paths that implement reflections. I note that the pseudorotations are paths that preserve the lengths of the potassium-chlorine bonds, but not,
except at the endpoints, the angles between them.
โ My state space for phosphorus pentachloride has 20 states, not 40. I defined it here and also in Part 14. Iโm sure youโre doing something interesting, but itโs dinner-time so Iโll
read it later!
By the way itโs phosphorus, not potassium. Potassium canโt hang on to 5 chlorines!
โ Phosphorous-Potassium, Duh!
Thereโs a sense in which your state-space is infinite-dimensional, if you include pseudo-rotations as continuous paths between the 20.
I hope learning how to divide and multiply by 2 in different ways is doing me some good. Forget the 40.
โก Peter wrote:
You seem to be postulating a model dynamics of discrete moves between states at each discrete time step (without any of the probabilistic weighting that weโve seen in earlier posts).
Well, my plan for โPart 15โฒ of the Network Theory series is to look at:
โข a Markov chain model, where time is discrete, and the molecule has probabilities to hop from one state to another at each time step,
and also
โข a Markov process model, where time is continuous, and the molecule has โprobabilistic ratesโ of hopping from one state to another.
The second one is a bit more realistic, but the two models are mathematically related in a nice way. I also plan to look at quantum analogues of both models, where instead of probabilities we
have amplitudes.
None of this is especially tied to the particular molecules Iโm talking about now. We could start with any graph and study models of this sort. However, one nice feature of these particular
molecules is that theyโre so symmetrical that the transition probabilities, or rates, must be the same for every edge of the graph! This simplifies these models a bitโฆ and if I went far
enough, which I probably wonโt, itโd let us use the representation theory of the group $S_5 \times S_2$ to solve these models.
Within a class of models of this type, we could reasonably define the ethyl cation model and the potassium pentachloride model as equivalent, as isolated discrete dynamical models, just
if the single step transition graphs for the two discrete dynamics are, on some definition of graph equivalence, equivalent graphsโฆ
Yes, thatโs the idea here: abstract away from the molecule and look at the graph.
In fact the real-world ethyl cation is nothing like what Iโve drawn here. As I explained in Part 14, it really looks like this:
However, transitions in the purely imaginary ethyl cation I discussed in this puzzle are easier to visualize than pseudorotations of phosphorus pentachloride! So, Iโll talk about ethyl
cations that look like this:
even though they donโt exist. Translate the results to the phosphorus pentachloride picture, if you likeโฆ
โ Just a quick noteโฆ
Who is to say Markov processes are more realistic than Markov chains? Markov processes could represent continuum approximations to something that is fundamentally finitary.
3. maybe you should substitute a chlorine atom for the positive charge in the ethyl cation โ think about ethyl chloride insteadโฆ
4. the trouble is that the three hydrogen atoms on the one carbon of the ethyl cation are *not* distinguishable (C3 symmetry), and the two hydrogens on the other carbon are not distinguishable
either (C2 symmetry); similarly, the two axial chlorines on the PCl5 molecule have C2 symmetry, and the three equatorial chlorines have C3 symmetry, but the connectivity is different โ in ethyl
cation youโve got a two-carbon chain, while in PCl5 youโve got single P atom with 5 chlorine atoms on it. Substituents (chlorines, hydrogens, etc) tend to situate themselves so as to minimize
Coulombic repulsions. Your picture with the โcyclicโ C-C-H structure is an average โ it makes the two carbons indistinguishable โ they have โhalf-ownershipโ of one of the hydrogens โ not the
highly strained full ownership that the picture depicts.
โก Thanks for all the info, Hudson! Yes, Bacharach claims the ethyl cation actually looks like this:
and Iโm treating all sorts of things as distinguishable that actually arenโt. So, this was a math puzzle with chemistry serving as a rather flimsy excuse, not an actual chemistry puzzle.
Still it may get some mathematicians interested in graph theory problems coming from chemistry!
โข Danail Bonchev and D.H. Rouvray, eds., Chemical Graph Theory: Introduction and Fundamentals, Taylor and Francis, 1991.
โข Nenad Trinajstic, Chemical Graph Theory, CRC Press, 1992.
โข R. Bruce King, Applications of Graph Theory Topology in Inorganic Cluster Coordination Chemistry, CRC Press, 1993.
The second one is apparently the magisterial tome of the subject. The prices on these books are absurd: for example, Amazon sells the first for $300, and the second for $222. Luckily some
universities will have themโฆ
โ The experimental evidence favours the non-classical structure.
Andrei, H.-S.; Solcà, N.; Dopfer, O., "IR Spectrum of the Ethyl Cation: Evidence for the Nonclassical Structure," Angew. Chem. Int. Ed. 2008, 47, 395-397.
5. I looked at the Bacharach reference and see that his calculations represent the ethyl cation as an ethylene molecule, H2C=CH2, with a proton associated halfway between the two carbons, and bonded
to neither. This gives the ethyl cation the same symmetry on average as ethylene, which I think is D2h, C2 with a mirror plane of symmetry between the two carbons (C2(z) C2(y) C2(x) i σ(xy) σ(xz) σ(yz)), whereas phosphorus pentachloride is D3h (2C3 3C2 σh 2S3 3σv).
6. If you decide on a way to uniquely identify states, you can match them up to your picture of the Desargues graph and each one will match either a blue or a red dot. So the extra bit is whether
the state belongs to the blue dot set or the red dot set. This requires that you manually work out all of the possible transitions, but it does work. Unsatisfying perhaps.
I can also see a way to calculate a bit that will flip for each transition. (I teach high school physics and my group theory and abstract algebra is very rusty, so excuse me if I express things
clumsily). Arrange the chlorine atoms in a cycle (1 2 3 4 5). The axial atoms are either adjacent in the cycle or not. Call that bit A.
Using the cycle we can assign any two atoms an order by requiring that the distance in the cycle between the earlier and later atoms is at most two. Rotate the molecule so that the axis of symmetry
is vertical and so that the top atom is the earlier of the two axial atoms. Then, looking down, the equatorial atoms are in clockwise order or not. Call that bit B.
The bit defined by A XOR B will flip in any transition.
I donโt have a good proof of this, but staring at some representative examples is convincing:
Here 1 and 2 are axial (and adjacent in the cycle) and 345 are equatorial in clockwise order looking down (to make the representation unique Iโll start with the smallest integer). There are
exactly 3 possible transitions. If we replace 1 and 2 with two adjacent atoms (3 and 4 or 4 and 5), the equatorial order will reverse. If we replace them with the two non-adjacent atoms (3 and 5)
then the equatorial order stays the same, but adjacency flips.
Another example, starting from non-adjacent axial atoms:
Again, flipping adjacency preserves equatorial order. If adjacency stays the same, order flips. I hope this makes sense to someone other than me.
โก In the second example, remember that 5 is earlier than 2 in the cycle.
7. There is a way to represent each of the 20 states of the ethyl cation as an ordered pair made with the numbers {1, 2, 3, 4, 5}. Also, there is a way to represent each of the 20 states of the
phosphorus pentachloride as such an ordered pair. And the transitions correspond via these representations.
The key is given by this wu xing diagram:
An ordered pair corresponds to an arrow from the wu xing diagram, together with a sign: a plus sign telling that the pair has the same order as the arrow, or a minus sign if the order is the
inverse one.
Letโs label the five hydrogen atoms of the the ethyl cation by the numbers {1, 2, 3, 4, 5}, and the two carbon ones by the plus and minus signs {+, -}. We associate to each of the 20 states the
pair formed by the two numbers representing the two hydrogen atoms linked to the carbon with two hydrogen atoms. To find the order, we look up in the wu xing diagram the arrow labeled with the
same numbers as the pair of hydrogen atoms. If the carbon atom is labeled with the plus sign, we order them by the order given by the arrow. If the carbon atom is labeled with the minus sign, we
take the inverse order. For example, if the hydrogen atoms labeled with {1, 2, 3} are connected to the carbon atom labeled with the plus sign, and the hydrogen atoms labeled with {4, 5} are
connected to the carbon atom labeled with the minus sign, the ordered pair used to represent the state of the molecule is (5, 4).
Let's now represent the 20 states of the phosphorus pentachloride. We first need to define the sign of a triple in the wu xing diagram. It is the product of two signs. The first sign is given by the orientation of the triangle made by the triple: it is plus iff the orientation is counterclockwise. The second sign is minus one raised to the number of arrows from the triangle
which are on the contour of the wu xing diagram. There are two kinds of triangles: the obtuse ones, with two arrows on the contour, and the acute ones, with one arrow on the contour. We multiply
the signs and obtain the sign of the triple.
Back to the 20 states of the phosphorus pentachloride. We label with the same numbers {1, 2, 3, 4, 5} the five atoms. We represent the state by the pair given by the two axial atoms. The order is
obtained by the following rule. We consider in the 3D representation of the molecule, a vector connecting the two axial atoms, oriented as in the wu xing diagram. With the wu xing diagram, we
calculate the sign of the triple made by the three numbers labeling the equatorial atoms. We then multiply with the orientation of the triangle made by the three equatorial atoms in the 3D
picture, seen from the direction in which the vector is pointing. If the resulting sign is plus, the pair representing the state is given by the orientation of the arrow connecting the labels of
the two axial atoms. If it is minus, we reverse the order indicated by the arrow.
For example, letโs say that the bottom axial atom is labeled with 1, the top one with 2, and the three equatorial atoms are labeled in counterclockwise order with 3, 4, 5. The wu xing diagram
shows that the arrow goes from 1 to 2. The orientation of the triangle made with 3, 4, 5 is clockwise in the wu xing diagram, and it is obtuse, so its sign is minus. This is unlike the 3D diagram, in which the sign is plus, the triangle being counterclockwise. Therefore, the state should be represented by the pair (2, 1).
The next step is to see that the transitions coincide. The rule of the transition, common to the two types of molecules, is the following: an ordered pair can go in another ordered pair, if they
donโt have common elements. In addition, we invert the sign. In other words, if a pair is oriented the same as the arrow represented it, then the transition will give a pair which is oriented in
the inverse direction than that given by the arrow representing it, and conversely.
โก Cool! Why do you call that a โwu xing diagramโ? Was some diagram like that used in China for some reason? My wife studies Chinese philosophy, religion, and history, so Iโm curious. It looks
almost like the Petersen graph:
but not quite, since the Petersen graph has 10 vertices instead of just 5.
โ Oh, I guessed it and confirmed it: wu xing refers to โfive phase theoryโ, roughly the Chinese version of the Western โfour elementsโ, although the phases were more mutable than elements.
โ Thanks! And you guessed well.
There is a simpler way to make the correspondence between the 20 states of the ethyl cation and those of the phosphorus pentachloride. The idea is that, instead of labeling the carbon
atoms of the ethyl cation, we can use the two possible orientations of the three hydrogen atoms connected to the same carbon atom.
We start with a phosphorus pentachloride state. We replace the central atom with a carbon, and the five terminal atoms with hydrogens. We draw the vector connecting the axial atoms,
oriented as in the wu xing diagram. Then, we replace the axial atom towards which the vector points, with a carbon atom to which we link the two axial atoms. This operation transforms
a phosphorus pentachloride state in a ethyl cation state, with the triple of hydrogen atoms oriented the same as in the phosphorus pentachloride state. We now can check that the
transition diagrams are isomorphic.
P.S. I have thought of 1) one proof using a regular 5-cell in four-dimensional space, and hyperplanes separating its vertices, and 2) another one using the 20 vertices of a
labeled dodecahedron which I constructed to represent the multiplication table of the even permutations of 5 elements (http://www.unitaryflow.com/2009/06/polyhedra-and-groups.html). I
can write them down if you are interested, but I see that you already have enough proofs!
โ Just wondering: Earth absorbs water, water extinguishes fire, fire melts metal, metal cuts wood, but how does wood overcome earth?
Makes for a nice rock-paper-scissors game.
โ โฆok, I looked it up: wood parts earth, as in tree roots cracking stone.
โ โMakes for a nice rock-paper-scissors game.โ
Well, maybe because the rock-paper-scissors game is Chinese too (http://en.wikipedia.org/wiki/Rock_paper_scissor#History). Indeed, for 4 or 5 players the rock-paper-scissors game does
not offer enough options to avoid repetitions, but wu-xing does :)
8. It was sort of silly for me to post this puzzle both here and on the n-Category Cafรฉ, but I was desperate for help. Hereโs a really nice answer from Tracy Hall over at the n-Cafรฉ. He answers the
main puzzle and also my subsidiary puzzle, namely: what graph do we get if we discard the extra bit of information that says which carbon here has 3 hydrogens attached to it and which has 2?
The answer is the Petersen graph:
Tracy wrote:
As some comments have pointed out over on Azimuth, in both cases there are ten underlying states which simply pick out two of the five pendant atoms as special, together with an extra parity
bit (which can take either value for any of the ten states), giving twenty states in total. The correspondence of the ten states is clear: an edge exists between state A and state B, in
either case, if and only if the two special atoms of state A are disjoint from the two special atoms of state B. This is precisely one definition of the Petersen graph (a famous 3-valent
graph on 10 vertices that shows up as a small counterexample to lots of naïve conjectures). Thus the graph in either case is a double cover of the Petersen graph, but that does not uniquely
specify it, since, for example, both the Desargues graph and the dodecahedron graph are double covers of the Petersen graph.
For a labeled graph, each double cover corresponds uniquely to an element of the Z/2Z cohomology of the graph (for an unlabeled graph, some of the double covers defined in this way may turn
out to be isomorphic). Cohomology over Z/2Z takes any cycle as input and returns either 0 or 1, in a consistent way (the output of a Z/2Z sum of cycles is the sum of the outputs on each
cycle). The double cover has two copies of everything in the base (Petersen) graph, and as you follow all the way around a cycle in the base, the element of cohomology tells you whether you
come back to the same copy (for 0) or the other copy (for 1) in the double cover, compared to where you started.
One well-defined double cover for any graph is the one which simply switches copies for every single edge (this corresponding to the element of cohomology which is 1 on all odd cycles and 0
on all even cycles). This always gives a double cover which is a bipartite graph, and which is connected if and only if the base graph is connected and not bipartite. So if we can show that
in both cases (the fictitious ethyl cation and phosphorus pentachloride) the extra parity bit can be defined in such a way that it switches on every transition, that will show that we get the
Desargues graph in both cases.
The fictitious ethyl cation is easy: the parity bit records which carbon is which, so we can define it as saying which carbon has three neighbors. This switches on every transition, so we are
done. Phosphorus pentachloride is a bit trickier; the parity bit distinguishes a labeled molecule from its mirror image, or enantiomer. As has already been pointed out on both sites, we can
use the parity of a permutation to distinguish this, since it happens that the orientation-preserving rotations of the molecule, generated by a three-fold rotation acting as a three-cycle and
by a two-fold rotation acting as a pair of two-cycles, are all even permutations, while the mirror image that switches only the two special atoms is an odd permutation. The pseudorotation can
be followed by a quarter turn to return the five chlorine atoms to the five places previously occupied by chlorine atoms, which makes it act as a four-cycle, an odd permutation. Since the
parity bit in this case also can be defined in such a way that it switches on every transition, the particular double cover in each case is the Desargues graphโa graph I was surprised to come
across here, since just this past week I have been working out some combinatorial matrix theory for the same graph!
The five chlorine atoms in phosphorus pentachloride lie in six triangles which give a triangulation of the 2-sphere, and another way of thinking of the pseudorotation is that it corresponds
to a Pachner move or bistellar flip on this triangulationโin particular, any bistellar flip on this triangulation that preserves the number of triangles and the property that all vertices in
the triangulation have degree at least three corresponds to a pseudorotation as described.
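To make the double cover Tracy describes concrete, here is a small Haskell sketch in the spirit of Twan's program above. The code and the names in it (pairs, petersenEdges, desarguesEdges) are mine, added purely for illustration:

import Data.List (tails)

-- the 10 two-element subsets of {1,...,5}
pairs :: [[Int]]
pairs = [ [a,b] | a:rest <- tails [1..5], b <- rest ]

disjoint :: [Int] -> [Int] -> Bool
disjoint xs ys = null [ x | x <- xs, x `elem` ys ]

-- Petersen graph: an edge joins two subsets iff they are disjoint (15 edges)
petersenEdges :: [([Int],[Int])]
petersenEdges = [ (p,q) | p:rest <- tails pairs, q <- rest, disjoint p q ]

-- Desargues graph: the double cover in which every edge switches copies
-- (20 vertices of the form (subset, bit), 30 edges)
desarguesEdges :: [(([Int],Int),([Int],Int))]
desarguesEdges = concat [ [((p,0),(q,1)), ((p,1),(q,0))] | (p,q) <- petersenEdges ]

Every vertex (p, b) then has degree 3, and every edge joins a vertex with b = 0 to one with b = 1, which is exactly the red/blue bipartite structure in the pictures above.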
โก Is there a nice place for the uninitiated to learn about cohomology of graphs?
โ Do you know about cohomology of anything else, like topological spaces or simplicial complexes? A graph is a special case of either of those.
Iโm not trying to intimidate you by boosting the level of generalityโhonest. Itโs just that if you happen to know some other kinds of cohomology, learning about the cohomology of graphs
may be a lot easier than you think! But cohomology of graphs is, in fact, about the easiest kind of all.
If you only want to learn about the cohomology of graphs, you could learn it from the end of week293, where I apply it to electrical circuits, or from this book:
โข P. Bamberg and S. Sternberg, A Course of Mathematics for Students of Physics vol. 2, Chap. 12: The theory of electrical circuits, Cambridge University, Cambridge, 1982.
which takes more time but goes into more detail.
Sometime fairly soon my Network Theory notes on this blog will get into electrical circuits, and then Iโll have to explain the cohomology of graphs.
9. Hereโs a sketch of a solution thatโs not too technical.
Puzzle. Show that the graph with states of a trigonal bipyramidal molecule as vertices and pseudorotations as edges is indeed the Desargues graph.
Answer. To be specific, letโs use iron pentacarbonyl as our example of a trigonal bipyramidal molecule:
It suffices to construct a 1-1 correspondence between the states of this molecule and those of the ethyl cation, such that two states of this molecule are connected by a transition if and only if
the same holds for the corresponding states of the ethyl cation.
Hereโs the key idea: the ethyl cation has 5 hydrogens, with 2 attached to one carbon and 3 attached to the other. Similarly, iron carbonyl has 5 carbonyl groups, with 2 axial and 3 equatorial.
Weโll use this resemblance to set up our correspondence.
There are various ways to describe states of the ethyl cation, but hereโs the best one for us. Number the hydrogens 1,2,3,4,5. Then a state of the ethyl cation consists of a partition of the set
{1,2,3,4,5} into a 2-element set and a 3-element set, together with one extra bit of information, saying which carbon has 2 hydrogens attached to it. This extra bit is the color here:
What do transitions look like in this description? When a transition occurs, two hydrogens that belonged to the 3-element set now become part of the 2-element set. Meanwhile, both hydrogens that
belonged to the 2-element set now become part of the 3-element set. (Ironically, the one hydrogen that hops is the one that stays in the 3-element set.) Moreover, the extra bit of information
changes. Thatโs why every edge goes from a red dot to a blue one, or vice versa.
So, to solve the puzzle, we need to show that the same description also works for the states and transitions of iron pentacarbonyl!
In other words, we need to describe its states as ways of partitioning the set {1,2,3,4,5} into a 2-element set and a 3-element set, together with one extra bit of information. And we need its
transitions to switch two elements of the 2-element set with two of the 3-element set, while changing that extra bit.
To do this, number the carbonyl groups {1,2,3,4,5}. The 2-element set consists of the axial ones, while the 3-element set consists of the equatorial ones. When a transition occurs, two of the axial
ones trade places with two of the equatorial ones, like this:
So, now we just need to figure out what that extra bit of information is, and why it always changes when a transition occurs!
Hereโs how you calculate this extra bit. Hold the iron pentacarbonyl so that the axial guy with the lower number is pointing up, like this:
In this example the axial guys are numbered 2 and 4, so we hold the molecule so that the lower number, 2, is pointing up.
Then, looking down, see whether you can get the numbers of the three equatorial guys to increase as you read them going around clockwise, or counterclockwise. Thatโs your bit of information. In
this example, you can read the numbers 1, 3, 5 as you go clockwise.
Itโs easy to see that this bit of information doesnโt change when we rotate the iron carbonyl molecule, so we have a well-defined way of getting a bit from a state. On the other hand, this bit
always changes when a transition occurs. For example:
At the end if you hold this molecule so the guy labelled 1 is pointing up, you can read the numbers 2, 4, 5 as you go around counterclockwise. So, the bit has changed.
This completes the proof except for checking that the bit always changes when a transition occurs. We leave this as a further small puzzle for the reader.
10. That bit doesnโt always change. Try the one with 1 and 2 as axial and 3, 4, 5 clockwise looking down. Make the transition to 3 and 5 as axial and youโll get 1, 2, 4 clockwise looking down, so no
bit change. Itโs more complicated than this. I started out thinking like this, but ended up with what I posted in my last comment.
โก Ugh, you're right! When I try this example I start with
(in your notation)
and wind up with
โ It seems to me that thereโs a necessary symmetry missing when you use the numerical order. Without it the relationship between up/down order and clockwise/counterclockwise order is
inconsistent. Using a cycle and defining order in terms of shortest distance in the cycle (so 5 is after 3 but before 1 and 2) fixes this. But then there is also a difference if the axial
atoms are consecutive in the cycle or 2 steps apart (those are the only two possibilities). If you think of that and the cw/ccw as bits, then one of those two bits flips in each
transition. Thereโs probably a nicer way to formulate that but I think itโs correct.
11. Okay, I think the following solution is correct. This time I used Twan van Laarhovenโs idea for computing that extra bit of information. It not only works better than my earlier wrong approach;
itโs also easy to give a full proof that it works.
Puzzle. Show that the graph with states of a trigonal bipyramidal molecule as vertices and pseudorotations as edges is indeed the Desargues graph.
Answer. To be specific, letโs use iron pentacarbonyl as our example of a trigonal bipyramidal molecule:
It suffices to construct a 1-1 correspondence between the states of this molecule and those of the ethyl cation, such that two states of this molecule are connected by a transition if and only if
the same holds for the corresponding states of the ethyl cation.
Here's the key idea: the ethyl cation has 5 hydrogens, with 2 attached to one carbon and 3 attached to the other. Similarly, the trigonal bipyramidal molecule has 5 carbonyl groups, with 2 axial
and 3 equatorial. Weโll use this resemblance to set up our correspondence.
There are various ways to describe states of the ethyl cation, but this is the best for us. Number the hydrogens 1,2,3,4,5. Then a state of the ethyl cation consists of a partition of the set
{1,2,3,4,5} into a 2-element set and a 3-element set, together with one extra bit of information, saying which carbon has 2 hydrogens attached to it. This extra bit is the color here:
What do transitions look like in this description? When a transition occurs, two hydrogens that belonged to the 3-element set now become part of the 2-element set. Meanwhile, both hydrogens that
belonged to the 2-element set now become part of the 3-element set. (Ironically, the one hydrogen that hops is the one that stays in the 3-element set.) Moreover, the extra bit of information
changes. Thatโs why every edge goes from a red dot to a blue one, or vice versa.
So, to solve the puzzle, we need to show that the same description also works for the states and transitions of iron pentacarbonyl!
In other words, we need to describe its states as ways of partitioning the set {1,2,3,4,5} into a 2-element set and a 3-element set, together with one extra bit of information. And we need its
transitions to switch two elements of the 2-element set with two of the 3-element set, while changing that extra bit.
To do this, number the carbonyl groups 1,2,3,4,5. The 2-element set consists of the axial ones, while the 3-element set consists of the equatorial ones. When a transition occurs, two of the axial
ones trade places with two of the equatorial ones, like this:
So, now we just need to figure out what that extra bit of information is, and why it always changes when a transition occurs.
Hereโs how we calculate that extra bit. Hold the iron pentacarbonyl molecule vertically with one of the equatorial carbonyl groups pointing to your left. Remember, the carbonyl groups are
numbered. So, write a list of these numbers, say (a,b,c,d,e), where a is the top axial one, b,c,d are the equatorial ones listed in counterclockwise order starting from the one pointing left, and
e is the bottom axial one. This list is some permutation of the list (1,2,3,4,5). Take the sign of this permutation to be our bit!
Letโs do an example:
Here we get the list (2,5,3,1,4) since 2 is on top, 4 is on bottom, and 5,3,1 are the equatorial guys listed counterclockwise starting from the one at left. The list (2,5,3,1,4) is an odd
permutation of (1,2,3,4,5), so our bit of information is odd.
Of course we must check that this bit is well-defined: namely, that it doesnโt change if we rotate the molecule. Rotating it a third of a turn gives an even permutation of the equatorial guys and
leaves the axial ones alone, so this is an even permutation. Flipping it over gives an odd permutation of the equatorial guys, but it also gives an odd permutation of the axial ones, so it too is
an even permutation. So, rotating the molecule doesnโt change the sign of the permutation we compute from it. The sign is thus a well-defined function of the state of the molecule.
Next we must check that this sign changes whenever our molecule undergoes a transition. For this we need to check that any transition changes our list of numbers by an odd permutation. Since
all transitions are conjugate in the permutation group, it suffices to consider one example:
Here we started with a state giving the list (2,5,3,1,4). The transition took us to a state that gives the list (3,5,4,2,1) if we hold the molecule so that 3 is pointing up and 5 to the left. The
reader can check that going from one list to another requires an odd permutation. So weโre done.
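One can verify that last step mechanically with the inversion-counting parity function from Twan van Laarhoven's program above; the little check below is mine, added for illustration:

import Data.List (tails)

-- parity of a list: number of inversions mod 2 (0 = even, 1 = odd)
parity :: Ord a => [a] -> Int
parity xs = (`mod` 2) $ sum [ 1 | x:ys <- tails xs, y <- ys, x > y ]

-- parity [2,5,3,1,4] == 1 and parity [3,5,4,2,1] == 0,
-- so going from one list to the other is indeed an odd permutation.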
12. I made a system of labels for each molecule:
Label the hydrogens of the ethyl cation with letters a through e. The state is labeled with the hydrogen pair and an orientation: if it was ab → cde, this is ab+. If it was cde → ab, label it as ab-.
Fill out the graph this way, and label transitions (edges) with the hydrogen that changes place.
For the pentachloride we'll have a similar system. Label each of the outer atoms with letters a through e. Each state gets a label that is the axis pair, in alphabetical order. Use the alphabetical order of the axial atoms to determine an orientation, up or down. If the ordering of the axis matches the orientation, it's +. If it does not, it's -.
Pseudorotations will be labeled by the axial atom that does not participate.
Using the diagram for the original system, plot the edges and their labels on a second graph. Put the ab+ state in the same position as the original graph. Now, just go around using the moves already written as pseudorotations. These will be valid pseudorotations and will not repeat. If you fill in the missing edges, they'll match the original graph.
This does not distinguish the states of the pentachloride into two sets the way the original system did: the original graph alternates the +/- around the graph (blue/red states), while this one does not.
It's possible some similar system would fix this, but I'm done with it for now.
Upper and lower bounds help?
February 5th 2010, 12:23 PM
I think this is algebra..
I have been given a lot of questions to do with upper and lower bounds; an example is:
Two sides of a rectangle were measured to the nearest mm, as 4.6 cm and 7.9 cm.
a) Find the least and greatest possible values of the perimeter.
b) Find the least and greatest possible values of the area.
It's probably simple but I haven't done perimeter versions yet.
Thanks all!
February 5th 2010, 01:07 PM
Note that since each side was measured to the nearest mm, each measurement has an error of at most $\pm 0.05\mbox{ cm}$ (half a millimetre either way). With this knowledge, employ your rules for addition and multiplication when dealing with error.
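Concretely (working the bounds through with that half-millimetre convention): the sides lie in $[4.55,\,4.65]$ cm and $[7.85,\,7.95]$ cm. So the least and greatest perimeters are $2(4.55+7.85)=24.8$ cm and $2(4.65+7.95)=25.2$ cm, and the least and greatest areas are $4.55\times 7.85 = 35.7175$ cm$^2$ and $4.65\times 7.95 = 36.9675$ cm$^2$.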
Unit interval
See also unit interval (telecommunications).
In mathematics, the unit interval is the interval [0,1], that is the set of all real numbers x such that zero is less than or equal to x and x is less than or equal to one. The unit interval plays a
fundamental role in homotopy theory, a major branch of topology. It is a metric space, compact, contractible, path connected and locally path connected. As a topological space, it is homeomorphic to
the extended real number line. The unit interval is a one-dimensional analytical manifold with boundary {0,1}, carrying a standard orientation from 0 to 1. As a subset of the real numbers, its
Lebesgue measure is 1. It is a totally ordered set and a complete lattice (every subset of the unit interval has a supremum and an infimum).
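In symbols, the unit interval is the set [0,1] = { x ∈ ℝ : 0 ≤ x ≤ 1 }.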
In the literature, the term "unit interval" is also sometimes applied to the other shapes that an interval from 0 to 1 could take, that is (0,1], [0,1), and (0,1). However, it's most commonly
reserved for the closed interval [0,1], and Wikipedia follows this convention.
Sometimes, the term "unit interval" is used to refer to objects that play a role in various branches of mathematics analogous to the role that [0,1] plays in homotopy theory. For example, in the
theory of quivers, the (analogue of the) unit interval is the graph whose vertex set is {0,1} and which contains a single edge e whose source is 0 and whose target is 1. One can then define a notion
of homotopy between quiver homomorphisms analogous to the notion of homotopy between continuous maps.
In all of its guises, the unit interval is almost always written I, and the following ASCII picture suffices in almost any context: | {"url":"http://july.fixedreference.org/en/20040724/wikipedia/Unit_interval","timestamp":"2014-04-16T17:19:19Z","content_type":null,"content_length":"4888","record_id":"<urn:uuid:df59cedc-9245-4ab4-ae16-3ed7ed5426b2>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00485-ip-10-147-4-33.ec2.internal.warc.gz"} |
Tabulae eclipsium
The topic Tabulae eclipsium is discussed in the following articles:
discussed in biography
โข Peuerbach also computed an influential set of eclipse tables, Tabulae eclipsium (c. 1459), based on the Alfonsine Tables, that circulated widely in manuscript before the first Viennese edition
(1514). Peuerbach composed other treatises, most still in manuscript, devoted to elementary arithmetic, sine tables, calculating devices, and the construction of astronomical instruments... | {"url":"http://www.britannica.com/print/topic/724980","timestamp":"2014-04-23T12:33:03Z","content_type":null,"content_length":"6567","record_id":"<urn:uuid:da5a670b-7a33-4d4e-bd1d-150eace9394c>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00429-ip-10-147-4-33.ec2.internal.warc.gz"} |
STAT 103: Elementary Probability
An elementary introduction to the concepts of probability, including: sets, Venn diagrams, definition of probability, algebra of probabilities, counting principles, some discrete random variables
and their distributions, graphical displays, expected values, the normal distribution, the Central Limit Theorem, applications, some statistical concepts.
Credit units
Term description
Arts and Science
Mathematics and Statistics
Mathematics B30 or Foundations of Mathematics 30 or Pre-Calculus 30.
Credit will not be granted for STAT 103 if it is taken concurrently with or after STAT 241. Please refer to the Statistics Course Regulations in the Arts & Science section of the Course and
Program Catalogue. | {"url":"http://www.usask.ca/programs/course.php?csubj_code=STAT&cnum=103","timestamp":"2014-04-16T19:33:50Z","content_type":null,"content_length":"1295","record_id":"<urn:uuid:e941090b-6937-43f0-b726-89a234660df8>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00366-ip-10-147-4-33.ec2.internal.warc.gz"} |
Precision based floating-point
07-15-2006 #1
Well... I couldn't think of anything better to do with my time. So I decided to build a class that wraps around the concept of floating-point numbers with an emphasis on precision at the expense of accuracy.
#include <string>

const int STRDOUBLE_MAX_PRECISION = 15;

class strdouble {
public:
    strdouble(): integer(0), frac(0), exp(0), prec(1) {}
    explicit strdouble(const std::string&);
    strdouble& operator=(const std::string&);
    strdouble& operator++();
    strdouble& operator--();
    strdouble operator++(int);
    strdouble operator--(int);
    friend bool operator==(const strdouble&, const strdouble&);
    friend bool operator<(const strdouble&, const strdouble&);
    friend bool operator>(const strdouble&, const strdouble&);
    double value() const;

private:
    double integer; /// integer portion of fractional number
    double frac;    /// normalized fractional portion of fractional number
    int exp;        /// base 10 exponent of fractional portion
    int prec;       /// overall precision (significant digits)
};
The basic idea is to receive a well-formatted string and decompose it into the integer and fractional parts. The fractional part is in fact stored as an integer too; exp is a base 10 exponent to be applied to it. exp is expected to always be less than 0.
I was planning to also define a constructor accepting a double.
integer + frac * pow(10, exp) will reconstruct the floating-point number.
I have all of the above operator overloads already defined, plus some more. Other overloads would eventually be defined, of course, namely the arithmetic operators. Also, some member function replacements for the arithmetic operators would be supplied. These would differ in the sense that the user could specify the precision of the result through either truncation or rounding.
Anyways... I don't think this approach is the best. Each instance is too big, the class will grow to become slow and clumsy, and every operation is subject to conversions...
The question is... do you think this has some use? Or is it best if perhaps I concentrate more on how doubles are stored in memory and work from there in an attempt to create a precision-based floating-point type?
I take it, this is a useless class
Not useless, but overkill when you can just use a double data type to do the same thing with less code and less overhead.
My problem with that was having the class instantiate with a number that was probably not what the user expected.

double bar = 1.0 / 3; // floating-point division; a plain 1 / 3 would be integer division and yield 0
CFixedDouble foo(bar, precision);
I don't think that example best proves the point you were going to make. The division will always happen before the double is initialized, and while you could force the double to some artificial precision from the beginning, it's okay to use double's full range; that's what it's there for. If bar were smaller in bytes when it held 0.3334 instead of 0.3 repeating, you'd have a stronger point, I'd think, but it's not.
Turnovers are poison
December 20, 2012
This is probably a slightly useless post, but a bit of fun all the same. If nothing else, it allows me to take a stab at learning a bit more about logistic regression.
I'm still trying to unravel the mystery of why the Bears lost to the Vikings two weeks ago. This mystery is compounded by my attempts to understand how the Patriots lost to the Jets in the playoffs in the 2010 season. Very similar stories. Chicago and New England had defeated Minnesota and New York, respectively, several games prior. In the case of the Patriots, they had bludgeoned the Jets, allowing only a field goal, while racking up 45 points for themselves.
So what happened? I say that the answer is turnovers. Below, I show a graph of the difference in turnovers for each game the Bears played from 1985 through 2011. I've jittered the points so that the volume of observations is more clear. (This is the part where I note that I took this visual presentation and code from Andrew Gelman and Jennifer Hill's fantastic book "Data Analysis Using Regression and Multilevel/Hierarchical Models". Anything absurd is my doing.)
Note that I calculate turnover difference as the opponent's turnovers minus Chicago's turnovers. This way a positive number is a good thing. In other words, if Chicago has 3 turnovers and their opponent has 4, then the turnover difference is equal to 1.
The fit line is a logistic regression of the results. The trend is obvious and shouldn't surprise anyone with a nodding understanding of football. If you turn the ball over more often than your opponent, it's more difficult to win a game.
We can use the coefficients of the fit to determine how this affects their (modeled) chance of victory. If the turnover difference is zero, the fit line suggests that the Bears have about a 55% chance of winning (again, this is a fit result over many seasons). If they turn the ball over one more time than their opponent, that probability drops to 40%.
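In formula terms (my gloss, using the notation of the code below rather than wording from the original post): the fitted win probability at turnover difference x is p(x) = inv.logit(b0 + b1*x), so the two figures quoted above are p(0) = inv.logit(b0), about 0.55, and p(-1) = inv.logit(b0 - b1), about 0.40; the difference of those two numbers is exactly what the OneGameDrop function computes.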
How about other teams? Same story. Analysis of all teams shows a drop of at least 14% and as much as 26% if the turnover difference is -1. (The Ravens are a bit of an outlier, possibly because they're a newer team. Or because their defense sucks. One or the other.)
The following is a very spartan graph of that point, which I'll get around to replacing one of these days.
So, how do we predict turnovers? I donโt know. As I said, this may be a slightly useless post.
Almost forgot the code:
# Packages used below: XML (readHTMLTable), lubridate (mdy, month, year), boot (inv.logit)
library(XML)
library(lubridate)
library(boot)

GetTeamSeasonResults = function(year, team)
{
  games.URL.stem = "http://www.pro-football-reference.com/teams/"
  URL = paste(games.URL.stem, team, "/", year, "_games.htm", sep="")
  games = readHTMLTable(URL)
  if (length(games) == 0) {
    return (NULL)
  }
  df = games[[1]]
  df = df[,1:21]
  # Clean up the df
  df[,4] = NULL
  emptyRow = which(df$Tm == "")
  if (length(emptyRow) > 0) {
    df = df[-emptyRow,]
  }
  row.names(df) = seq(nrow(df))
  colnames(df) = c("Week", "Day", "Date", "Outcome", "OT", "Record", "Home", "Opponent", "ThisTeamScore", "OpponentScore"
                   , "ThisTeam1D", "ThisTeamTotalYards", "ThisTeamPassYards", "ThisTeamRushYards", "ThisTeamTO"
                   , "Opponent1D", "OpponentTotalYards", "OpponentPassYards", "OpponentRushYards", "OpponentTO")
  df$GameDate = mdy(paste(df$Date, year), quiet=T)
  year(df$GameDate) = with(df, ifelse(month(GameDate) <= 6, year(GameDate) + 1, year(GameDate)))
  df$Date = as.character(df$Date)
  df$Home = with(df, ifelse(Home == "@", F, T))
  # Turnover difference: opponent's turnovers minus this team's, so positive is good
  df$TODiff = with(df, as.integer(OpponentTO) - as.integer(ThisTeamTO))
  df$Win = ifelse(df$Outcome == "W", 1, 0)
  return (df)
}

years = 1985:2011
teams = c("nyj", "mia", "nwe", "buf"
        , "rav", "cin", "pit", "cle"
        , "htx", "clt", "oti", "jax"
        , "den", "sdg", "rai", "kan"
        , "dal", "nyg", "was", "phi"
        , "gnb", "chi", "min", "det"
        , "atl", "tam", "nor", "car"
        , "sfo", "sea", "ram", "crd")

numTeams = length(teams)
teamList = vector("list", numTeams)
for (iTeam in 1:numTeams)
{
  aList = lapply(years, GetTeamSeasonResults, teams[iTeam])
  df = do.call("rbind", aList)
  teamList[[iTeam]] = df
}

# Logistic regression of win/loss on turnover difference
FitTODiff = function(df)
{
  fit = glm(df$Win ~ df$TODiff, family=binomial(link="logit"))
  print(inv.logit(coef(fit)))
  return (fit)
}
fits = lapply(teamList, FitTODiff)

# Drop in fitted win probability when turnover difference goes from 0 to -1
OneGameDrop = function(fit)
{
  drop = inv.logit(coef(fit)[1]) - inv.logit(coef(fit)[1] - coef(fit)[2])
  return (drop)
}
drops = sapply(fits, OneGameDrop)
plot(drops)
The Relationship Between the Phillips Curve and AD-AS - Boundless Open Textbook
The Phillips Curve Related to Aggregate Demand
The Phillips curve shows the inverse trade-off between rates of inflation and rates of unemployment. If unemployment is high, inflation will be low; if unemployment is low, inflation will be high.
The Phillips curve and aggregate demand share similar components. The Phillips curve is the relationship between inflation, which affects the price level aspect of aggregate demand, and unemployment,
which is dependent on the real output portion of aggregate demand. Consequently, it is not far-fetched to say that the Phillips curve and aggregate demand are actually closely related.
To see the connection more clearly, consider the example illustrated in the figure below. Let's assume that aggregate supply, AS, is stationary, and that aggregate demand starts with the curve AD1. There is an initial equilibrium price level and real GDP output at point A. Now, imagine there are increases in aggregate demand, causing the curve to shift right to curves AD2 through AD4. As aggregate demand
increases, unemployment decreases as more workers are hired, real GDP output increases, and the price level increases; this situation describes a demand-pull inflation scenario.
Phillips Curve and Aggregate Demand
As aggregate demand increases from AD1 to AD4, the price level and real GDP increases. This translates to corresponding movements along the Phillips curve as inflation increases and unemployment
As more workers are hired, unemployment decreases. Moreover, the price level increases, leading to increases in inflation. These two factors are captured as equivalent movements along the Phillips
curve from points A to D. At the initial equilibrium point A in the aggregate demand and supply graph, there is a corresponding inflation rate and unemployment rate represented by point A in the
Phillips curve graph. For every new equilibrium point (points B, C, and D) in the aggregate graph, there is a corresponding point in the Phillips curve. This illustrates an important point: changes
in aggregate demand cause movements along the Phillips curve. | {"url":"https://www.boundless.com/economics/inflation-and-unemployment/the-relationship-between-inflation-and-unemployment/the-relationship-between-the-phillips-curve-and-ad-ad/","timestamp":"2014-04-19T06:53:03Z","content_type":null,"content_length":"73121","record_id":"<urn:uuid:7a1c2634-b498-466c-9b02-3bc4daa37bfc>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00605-ip-10-147-4-33.ec2.internal.warc.gz"} |
if we throw a die once what is the probability that an even number will show on the upper face of the die?
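One way to work it out (assuming a fair six-sided die, which the question implies): the sample space is {1, 2, 3, 4, 5, 6}, and the even faces are {2, 4, 6}, so the probability is 3/6 = 1/2.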
Riverside, RI Geometry Tutor
Find a Riverside, RI Geometry Tutor
...I truly believe that students thrive when they feel accepted and believed in, and I believe that every student can learn. As a paraprofessional I worked with students in English 1,2,3 and 4 as
well as Algebra 1 and Geometry. I have also helped students with psychology, US History and World history.
30 Subjects: including geometry, reading, English, dyslexia
...I proofread for grammar, spelling, punctuation, and flow. In addition, I have over 10 years of experience in proofreading numerous high school and college essays and research papers. I
received my TEFL Certification from TEFL Worldwide Prague in 2005.
28 Subjects: including geometry, reading, English, writing
...You want an analyst, coach and encourager for your child. I've had experience with students who are several years behind their grade level in math. I have often brought them back to grade level.
10 Subjects: including geometry, algebra 1, algebra 2, ACT Math
I have successfully taught middle and high school science and math for over 20 years in Vermont, Connecticut and Massachusetts, and continue to do so today. I have also been tutoring for many
years from elementary subjects, to chemistry and test preparation and enjoy working with students who need...
31 Subjects: including geometry, reading, biology, chemistry
...Several of my English professors as an undergraduate wanted me to switch to the English department because of my writing ability. And last but not least, I am a guitar player of approximately
20 years through lessons as well as self-taught instruction. I also have experience with playing bass and own a stand-up double bass.
36 Subjects: including geometry, reading, calculus, English
number of subgroups of an infinite abelian group
April 11th 2013, 05:03 AM
number of subgroups of an infinite abelian group
Is the number of subgroups of an infinite abelian group always infinite ?
( or Is there any infinite abelian group having only finite number of subgroups ? )
April 11th 2013, 05:26 AM
Re: number of subgroups of an infinite abelian group
Yes. This is trivially true if the abelian group is not finitely generated (just take the cyclic groups generated by each generating element). In the case where your infinite abelian group is
finitely generated, the fundamental theorem of finitely generated abelian groups shows that Z (which has infinitely many subgroups) is a subgroup of your group.
Less precisely, but more intuitively, just take the cyclic groups generated by each element of your infinite abelian group.
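One way to make this precise: if some element has infinite order, it generates a copy of Z, whose subgroups nZ for n = 1, 2, 3, ... are pairwise distinct. If instead every element has finite order, then every cyclic subgroup is finite; were there only finitely many cyclic subgroups, the group (being the union of its cyclic subgroups) would be a finite union of finite sets and hence finite, a contradiction.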
April 11th 2013, 07:50 AM
Re: number of subgroups of an infinite abelian group
Thank you Gusbob ! | {"url":"http://mathhelpforum.com/advanced-algebra/217245-number-subgroups-infinite-abelian-group-print.html","timestamp":"2014-04-17T05:53:01Z","content_type":null,"content_length":"4360","record_id":"<urn:uuid:22e11b19-c6a2-4833-b4a3-12f90606947d>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00106-ip-10-147-4-33.ec2.internal.warc.gz"} |
Science in Christian Perspective
The Paradoxes of Mathematics*
R. P. DILWORTH
California Institute of Technology, Pasadena, Calif.
From JASA 8 (June 1956): 3-5.

In popular usage a paradox is a true statement which apparently has false consequences. The explanation of the paradox consists in showing that the false consequences do
not, in fact, follow from the statement. Now, from the point of view of mathematics, which treats the strictly logical consequences of propositions, there is no difficulty at all with such
statements. Hence, for mathematics, the term paradox has a sharper meaning; namely, a self-contradictory proposition. At first glance it would appear unlikely that self-contradictory statements could
occur in a rigorous, deductive, mathematical system. Unfortunately, they do indeed occur and this paper will be devoted to a description of some of the more important paradoxes which have arisen to
plague the mathematician.
It will be instructive to mention first a non-mathematical paradox which illustrates the basic principle underlying most of the mathematical paradoxes. This is the so-called "Barber's paradox."
The barber in a certain military unit is ordered by his commanding officer to shave those and only those members of the company who do not shave themselves. Now apply this order to the
barber. If he does not shave himself then he fails to obey the order since he should then shave himself but if he shaves himself he is likewise failing to obey the order.
The difficulty with the Barber's paradox is simply that the officer's order does not determine unambiguously the class of men who will be shaved by the barber. It fails, in fact, for the barber and
whether or not the barber shaves himself must then be specified by the officer.
A quite similar non-mathematical paradox can be formulated as follows: Let us agree to call a word "autological" if it modifies itself. For example, the word "short" is autological. On the other
hand we will call a word "heterological" if it does not modify itself. Thus the word "long" is heterological. According to a basic principle of logic every word should be either autological or
heterological. Let us try to determine whether "heterological" is heterological or not. If "heterological" is heterological then it does not modify itself; but since "does not modify itself" is exactly what heterological means, the word then truly describes itself and so must be autological. But if it is autological it must modify itself and hence be heterological. The explanation of this paradox is similar to that of the Barber's paradox.
*Presented at the Tenth Annual Convention of the American Scientific Affiliation, Colorado Springs, August, 1955.
We turn now to the simplest of the mathematical paradoxes, the "Russell paradox." In order to describe this paradox we must first explain the notion of "class" or "set" as it is used in mathematics.
A class is simply a collection of objects. Frequently it is the aggregate of all objects having some specified property. For example, the class of men is the aggregate of all objects which are both
human and male. The class of even numbers is characterized by the property of being a whole number and also being divisible by two. The fundamental notion in connection with classes is that of class
membership, that is, the relationship of an object to a class to which it belongs. Classes themselves may be members of a class. Thus the class of audiences in the various concert halls of the nation
on a particular evening has classes of people as its members. Now, let us consider the class of all classes which are not members of themselves. The class of men clearly belongs to this class since
its members are men, not classes. We then ask, is this class a member of itself ? If it is a member of itself then it does not have the defining property and hence is not a member of itself. On the
other hand, if it is not a member of itself it does have the defining property and hence is a member of itself. Thus we have formulated a self-contradictory proposition.
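In symbols: write R = {x : x ∉ x}. The definition of R gives, for every x, x ∈ R if and only if x ∉ x; taking x = R yields R ∈ R if and only if R ∉ R, which is precisely the self-contradictory proposition described above.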
Now it may be argued that the class of all classes which are not members of themselves is indeed not a well-defined class just as is the case of the Barber's paradox where the officer's order was
ambiguous with regard to the barber himself. But if we adopt this point of view then we have a property, namely, that of not being a member of itself which does not determine a class. This
immediately raises a question concerning the validity of other classes. For example, can we be sure that the class of all integers is a well-defined class which will not lead us to contradictions?
Clearly, a wide variety of classes are needed for the purposes of mathematics. On the other hand, too wide a flexibility in the definition of classes leads to a contradiction. Thus Russell's paradox
emphasizes the need for a formulation of the language underlying mathematics which is sufficient to express the propositions of mathematics and yet which is consistent, that is contains no
self-contradictory propositions.
The next paradox to be considered is due to Richard. It arises from considering the names of the integers in the English language. Now, since there are only a finite number of words in the English
language and since the integers form an infinite set it is clear that not every integer can be nameable in English in less than 13 words. Hence, "the least integer not nameable in English in less
than 13 words" is a definite integer and is a name consisting of only 12 words. But then this integer is indeed nameable in English in less than 13 words and we have a self-contradictory statement.
As in the case of Russell's paradox any consistent formulation of the language of mathematics must be such that sentences like that of Richard cannot be formulated in the system. It is interesting
that a number of systems which have been proposed for the foundations of mathematics have later been shown to have such a flexibility of expression that they were susceptible to paradoxes analogous
to Richard's.
What then is the present situation in regard to the consistency of mathematics? Systems of language have been proposed which are adequate for all of mathematics and in which no contradictions have
been detected. Furthermore at least one of these systems has been proved to be consistent. The proof, however, involves methods which cannot be expressed in the system itself and, indeed, the
validity of these methods is questioned by some mathematicians. On the other hand, this result seems to be about the best that can be hoped for, since Gödel [3] has proved that any system which is
sufficient for all of mathematics cannot be proved consistent by methods expressible within the system. This rather paradoxical result appears to close the door as far as a completely satisfactory
logical foundation of mathematics is concerned. Nevertheless, the gap between adequacy and consistency is very narrow since systems have been developed which are adequate for a large part of
mathematics and which can be proved consistent by methods expressible in the system itself (Church [2]). This curious situation with regard to the foundations of mathematics has prompted André Weil
to remark, "God exists since mathematics is consistent and the Devil exists since we cannot prove it".
The inherent complexity of these questions makes it difficult to go further into the construction of the various systems. It will suffice to mention that the central difficulty in the Russell paradox
is the innocent little word "all". This word also occurs implicitly in the Richard paradox. For an alternative statement of that paradox is that the collection of all integers not nameable in
English in less than 13 words is not a valid class. Though, intuitively, the word "all" seems above reproach, it has nevertheless been necessary to limit its application in order to obtain consistent
languages for mathematics.
It has been mentioned in a preceding paragraph that there are principles frequently used in mathematics concerning which there is strong disagreement among mathematicians concerning their validity.
One of these principles which has played an important role in the development of mathematics and which has been used in the construction of consistency proofs is the "axiom of choice" first
formulated by the mathematician, Zermelo. In order to describe this axiom, let us consider a class of mutually disjoint, non-empty classes. The axiom of choice postulates the existence of a class
which has the property that it contains exactly one element from each of the classes in the original class. Or putting it in another way, the axiom of choice asserts that it is possible to pick one
element out of each of the classes in the collection and put them together to form a single class. Intuitively, this principle seems quite harmless. Nevertheless, the principle has far-reaching
consequences, some of which even contradict our basic intuitions. One such consequence is a theorem due to Banach and Tarski [1] which is certainly paradoxical in the usual sense of the word. This
theorem asserts that a sphere of radius one can be decomposed into five parts which can then be put together again in such a way as to form two spheres of radius one. Of course, the parts into which
the sphere is decomposed have an exceedingly complicated and complex structure. As a matter of fact the parts cannot be constructed in a finite number of operations. And it is here that the axiom of
choice comes into play. Nevertheless, the conclusion of the theorem seems to be contrary to our intuitions of three dimensional bodies. In spite of these consequences Gödel [4] has proved it is
possible to adjoin the axiom of choice to one of the standard systems which is sufficient for mathematics and if the original system is consistent then the new system will also be consistent. Many
mathematicians feel that this theorem justifies the use of the Zermelo principle as a standard part of mathematical methodology. On the other hand, there are some mathematicians who feel that a proof
using this principle is, in fact, no proof at all. In view of the nature of the problem it seems unlikely that this controversy will be resolved in the near future.
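Stated compactly, the axiom says: for every class K of mutually disjoint, non-empty classes there exists a class C such that, for each class A in K, the intersection of C with A contains exactly one element.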
Finally, we turn to the question of the implications of these considerations concerning the foundations of mathematics for philosophy in general and, in particular, for Christian philosophy.
Now if there are serious difficulties associated with the logical foundations of mathematics, where very precise and rigorous methods are available for exploring the consequences of propositions, it
would be presumptuous to suppose that basic difficulties of a similar nature are not present in other areas of knowledge. In fact, it is because of the high precision associated with the concepts and
deductive procedures of mathematics that the detection of the subtle contradictions becomes possible. In a field where the basic ideas are not so carefully formulated, fundamental logical
difficulties may be obscured by ambiguities in the definition of terms. Furthermore, since the language required for mathematics is, in many respects, similar to the language of philosophy, these
considerations indicate points at which trouble is likely to occur. For example, use of the word "all" in philosophical or theological arguments should be carefully examined to insure that there are
no hidden inconsistencies. In point of fact, many classical theological controversies have centered about words with a similar inclusive connotation.
Next, it should be noted that while the reasoning of mathematics is formally deductive, much of the reasoning of philosophy and theology is intuitive in character. The formalization of the reasoning
would, in many cases, be very difficult indeed. Now we have already pointed out the unreliability of intuition even in the domain of the foundations of mathematics where it would be expected to be
accurate. Again, it is the existence of a rigorous deductive method which enables the mathematician to detect the errors in an intuitive argument. It seems reasonable, therefore, to suppose that
errors in intuitive reasoning are just as likely to occur in areas where a rigorous method of checking the argument is not available. If this is the case, it emphasizes the need for a critical and
tentative attitude toward intuitive thinking. This applies both to the professional philosopher in his ivory tower and to man in his daily conversations. In particular, Christian folk have a special
obligation in this regard. For if their words betray a foolish and careless habit of mind, serious damage may be done to the Christian cause. By way of example, consider the very common practice
among evangelical Christians of interpreting as the working of God the occurrence of an unexpectedly pleasant or, perhaps, longed for event. This is clearly an intuitive conclusion. If it were
formalized it would probably run as follows: God is good; this event is good; hence God is responsible for this event. When it is presented in this form, the weakness of the argument is obvious even
though, in some instances, the conclusion itself may be true. However, in many cases, a little careful reflection shows that what at the moment appeared to be good would, from a long range point of
view, indeed be evil. Thus in place of having been honored, God has been dishonored.
Clearly there are only a few who have the time and ability to acquire the intellectual sophistication of the professional logician. On the other hand, there is available to everyone the opportunity
to acquire the modest amount of critical judgment and logical habit of mind which distinguishes the wise man from the foolish.
1. St. Banach and A. Tarski, "Sur la décomposition des ensembles de points en parties respectivement congruentes", Fundamenta Mathematicae, vol. 6 (1924), pp. 244-277.
2. A. Church, "A proof of freedom from contradiction", Proc. Nat. Acad. Sci., vol. 21 (1935), pp. 275-281.
3. K. Gödel, "Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme", Monatshefte für Mathematik und Physik, vol. 38 (1931), pp. 173-198.
4. K. Gödel, "The consistency of the axiom of choice and the generalized continuum-hypothesis", Princeton University Press, Princeton (1940).
5. B. Russell, "Introduction to Mathematical Philosophy", London (1920).
6. P. Rosenbloom, "The Elements of Mathematical Logic", Dover Publications, New York (1950).
Using LINQ to Calculate Basic Statistics
Up to date source on GitHub
While working on another project, I found myself needing to calculate basic statistics on various sets of data of various underlying types. LINQ has Count, Min, Max, and Average, but no other
statistical aggregates. As I always do in a case like this, I started with Google, figuring someone else must have written some handy extension methods for this already. There are plenty of
statistical and numerical processing packages out there, but what I want is a simple and lightweight implementation for the basic stats: variance (sample and population), standard deviation (sample
and population), covariance, Pearson correlation, range, median, and mode.
I've modeled the API on the various overloads of Enumerable.Average, so you are able to use these methods on the same types of collections that those methods accept. Hopefully, this will make the
usage familiar and easy to use.
That means overloads for collections of the common numerical data types and their Nullable counterparts, as well as convenient selector overloads.
public static decimal? StandardDeviation(this IEnumerable<decimal?> source);
public static decimal StandardDeviation(this IEnumerable<decimal> source);
public static double? StandardDeviation(this IEnumerable<double?> source);
public static double StandardDeviation(this IEnumerable<double> source);
public static float? StandardDeviation(this IEnumerable<float?> source);
public static float StandardDeviation(this IEnumerable<float> source);
public static double? StandardDeviation(this IEnumerable<int?> source);
public static double StandardDeviation(this IEnumerable<int> source);
public static double? StandardDeviation(this IEnumerable<long?> source);
public static double StandardDeviation(this IEnumerable<long> source);
public static decimal? StandardDeviation<TSource>
(this IEnumerable<TSource> source, Func<TSource, decimal?> selector);
public static decimal StandardDeviation<TSource>
(this IEnumerable<TSource> source, Func<TSource, decimal> selector);
public static double? StandardDeviation<TSource>
(this IEnumerable<TSource> source, Func<TSource, double?> selector);
public static double StandardDeviation<TSource>
(this IEnumerable<TSource> source, Func<TSource, double> selector);
public static float? StandardDeviation<TSource>
(this IEnumerable<TSource> source, Func<TSource, float?> selector);
public static float StandardDeviation<TSource>
(this IEnumerable<TSource> source, Func<TSource, float> selector);
public static double? StandardDeviation<TSource>
(this IEnumerable<TSource> source, Func<TSource, int?> selector);
public static double StandardDeviation<TSource>
(this IEnumerable<TSource> source, Func<TSource, int> selector);
public static double? StandardDeviation<TSource>
(this IEnumerable<TSource> source, Func<TSource, long?> selector);
public static double StandardDeviation<TSource>
(this IEnumerable<TSource> source, Func<TSource, long> selector);
All of the overloads that take a collection of Nullable types only include actual values in the calculated result. For example:
public static double? StandardDeviation(this IEnumerable<double?> source)
{
    IEnumerable<double> values = source.Coalesce();
    if (values.Any())
        return values.StandardDeviation();

    return null;
}
where the Coalesce method is:
public static IEnumerable<T> Coalesce<T>(this IEnumerable<T?> source) where T : struct
{
    Debug.Assert(source != null);
    return source.Where(x => x.HasValue).Select(x => (T)x);
}
Since a distribution of values may not have a mode, all of the Mode methods return a Nullable type. For instance, in the series { 1, 2, 3, 4 }, no single value appears more than once. In cases such
as this, the return value will be null.
In the case where there are multiple modes, Mode returns the maximum mode (i.e., the value that appears the most times). If there is a tie for the maximum mode, it returns the smallest value in the
set of maximum modes.
There are also two methods for calculating all modes in a series. These return an IEnumerable of all of the modes in descending order of modality.
The Statistics Calculations
Links, descriptions, and mathematical images from Wikipedia.
Variance is the measure of the amount of variation of all the scores for a variable (not just the extremes which give the range).
Sample variance is typically denoted by s², and population variance by the lower case sigma squared: σ².
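For reference, the sample variance the method below computes is

\[ s^2 = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2. \]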
public static double Variance(this IEnumerable<double> source)
{
    int n = 0;
    double mean = 0;
    double M2 = 0;

    foreach (double x in source)
    {
        n = n + 1;
        double delta = x - mean;
        mean = mean + delta / n;
        M2 += delta * (x - mean);
    }

    return M2 / (n - 1);
}
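The single-pass loop above is Welford's online algorithm: after reading the n-th value it updates the running mean and the running sum of squared deviations via

\[ \bar{x}_n = \bar{x}_{n-1} + \frac{x_n - \bar{x}_{n-1}}{n}, \qquad M_{2,n} = M_{2,n-1} + (x_n - \bar{x}_{n-1})(x_n - \bar{x}_n), \]

so that s² = M2/(n-1) at the end. Note that, as written, an empty sequence yields 0 and a single-element sequence yields NaN, so callers may want to guard those cases.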
Standard Deviation
The Standard Deviation of a statistical population, a data set, or a probability distribution is the square root of its variance.
Standard deviation is typically denoted by the lower case sigma: ฯ.
public static double StandardDeviation(this IEnumerable<double> source)
{
    return Math.Sqrt(source.Variance());
}
Median is the number separating the higher half of a sample, a population, or a probability distribution, from the lower half.
public static double Median(this IEnumerable<double> source)
{
    var sortedList = from number in source
                     orderby number
                     select number;

    int count = sortedList.Count();
    int itemIndex = count / 2;

    if (count % 2 == 0) // Even number of items.
        return (sortedList.ElementAt(itemIndex) +
                sortedList.ElementAt(itemIndex - 1)) / 2;

    // Odd number of items.
    return sortedList.ElementAt(itemIndex);
}
Mode is the value that occurs the most frequently in a data set or a probability distribution.
public static T? Mode<T>(this IEnumerable<T> source) where T : struct
{
    var sortedList = from number in source
                     orderby number
                     select number;

    int count = 0;
    int max = 0;
    T current = default(T);
    T? mode = new T?();

    foreach (T next in sortedList)
    {
        if (current.Equals(next) == false)
        {
            current = next;
            count = 1;
        }
        else
        {
            // Same value as the previous (sorted) element; extend its run.
            count++;
        }

        if (count > max)
        {
            max = count;
            mode = current;
        }
    }

    if (max > 1)
        return mode;

    return null;
}
Range is the length of the smallest interval which contains all the data.
public static double Range(this IEnumerable<double> source)
{
    return source.Max() - source.Min();
}
Covariance is a measure of how much two variables change together.
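The implementation below computes the population covariance of two equal-length samples,

\[ \operatorname{cov}(X, Y) = \frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y}). \]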
public static double Covariance(this IEnumerable<double> source, IEnumerable<double> other)
{
    int len = source.Count();

    double avgSource = source.Average();
    double avgOther = other.Average();
    double covariance = 0;

    for (int i = 0; i < len; i++)
        covariance += (source.ElementAt(i) - avgSource) * (other.ElementAt(i) - avgOther);

    return covariance / len;
}
Pearson Correlation Coefficient
Pearson's correlation coefficient measures the degree of linear correlation between two variables, always taking a value between -1 and 1.
In other words, it captures how closely one sample distribution tracks another, and it is often used in scientific
applications to test the strength of a hypothesized relationship between two quantities.
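It is the covariance normalized by the two population standard deviations:

\[ r = \frac{\operatorname{cov}(X, Y)}{\sigma_X \, \sigma_Y}. \]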
public static double Pearson(this IEnumerable<double> source,
                             IEnumerable<double> other)
{
    return source.Covariance(other) / (source.StandardDeviationP() *
                                       other.StandardDeviationP());
}
Using the Code
The included Unit Tests should provide plenty of examples for how to use these methods, but at its simplest, they behave like other enumerable extension methods. The following program...
static void Main(string[] args)
{
    IEnumerable<int> data = new int[] { 1, 2, 5, 6, 6, 8, 9, 9, 9 };

    Console.WriteLine("Count = {0}", data.Count());
    Console.WriteLine("Average = {0}", data.Average());
    Console.WriteLine("Median = {0}", data.Median());
    Console.WriteLine("Mode = {0}", data.Mode());
    Console.WriteLine("Sample Variance = {0}", data.Variance());
    Console.WriteLine("Sample Standard Deviation = {0}", data.StandardDeviation());
    Console.WriteLine("Population Variance = {0}", data.VarianceP());
    Console.WriteLine("Population Standard Deviation = {0}",
        data.StandardDeviationP());
    Console.WriteLine("Range = {0}", data.Range());
}
... produces:
Count = 9
Average = 6.11111111111111
Median = 6
Mode = 9
Sample Variance = 9.11111111111111
Sample Standard Deviation = 3.01846171271247
Population Variance = 8.09876543209877
Population Standard Deviation = 2.8458329944146
Range = 8
Points of Interest
I didn't spend much time optimizing the calculations, so be careful if you are evaluating extremely large data sets. If you come up with an optimization in any of the attached code, drop me a note
and I'll update the source.
Hopefully, you'll find this code handy the next time you need some simple statistics calculation.
โข Version 1.0 - Initial upload, 9/19/2009.
โข Version 1.1 - Added Covariance and Pearson as well as a couple of fixes/optimizations, 10/26/2009.
โข Version 1.2 - Updated variance implementation and added GitHub and NuGet links 12/3/2013 | {"url":"http://www.codeproject.com/Articles/42492/Using-LINQ-to-Calculate-Basic-Statistics?fid=1549548&df=90&mpp=10&noise=1&prof=True&sort=Position&view=Quick&spc=Relaxed&fr=11","timestamp":"2014-04-19T12:08:12Z","content_type":null,"content_length":"114405","record_id":"<urn:uuid:42e6607f-6bf9-4ec3-9ba8-498438e436d8>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00271-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions - Re: Please help me with the following question
Date: Mar 11, 2013 3:59 PM
Author: Robert Hansen
Subject: Re: Please help me with the following question
On Mar 11, 2013, at 11:49 AM, Joe Niederberger <niederberger@comcast.net> wrote:
>> At the same time I knew that the final step had to involve 3 coins or less
> Except that's not true.
The balanced strategy gives
1. AAAA BBBB CCCC <- 3 groups of 4 coins.
2. ABBB CCCB <- This is the next weighing if in step 1 AAAA != BBBB (otherwise it is trivial)
3. AB or BBB <- this can be resolved in one weighing AND also knowing the result of step 2.
You cannot end up with a group of 4 coins in step 3 and in one more weighing know which single coin is counterfeit, even if you know which way it leans.
Bob Hansen | {"url":"http://mathforum.org/kb/plaintext.jspa?messageID=8598693","timestamp":"2014-04-18T21:20:22Z","content_type":null,"content_length":"1734","record_id":"<urn:uuid:fb760741-685a-485b-8381-cff8fba660c5>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00570-ip-10-147-4-33.ec2.internal.warc.gz"} |
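(For reference, the counting behind the "3 coins or less" claim: each weighing has three outcomes - left heavy, right heavy, or balanced - so a single weighing can distinguish at most 3 cases. Even knowing which way the counterfeit leans, 4 remaining candidate coins present 4 possibilities, more than one 3-outcome weighing can separate; 3 candidates is the most a final weighing can resolve.)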
On Saturday Alan Turing would have celebrated his 100th birthday. In his short life he revolutionised the scientific world and so 2012 has been declared Turing Year to celebrate his life and
scientific achievements. You can join the celebrations by visiting the special exhibition at the Science Museum or attending the Turing Educational Day at Bletchley Park. Turing is also being
honoured in this year's Manchester Pride Parade and the LGBT History Month. And here at Plus, apart from getting to work on building our own Turing machine out of LEGO, we're also celebrating with
these favourites:
Alan Turing: ahead of his time
Alan Turing is the father of computer science and contributed significantly to the WW2 effort, but his life came to a tragic end. This article explores his story.
Another look at Turing's life and work. Find out what types of numbers we can't count and why there are limits on what can be achieved with Turing machines.
How does the uniform ball of cells that make up an embryo differentiate to create the dramatic patterns of a zebra or leopard? How come there are spotty animals with stripy tails, but no stripy
animals with spotty tails? The answer comes from an ingenious mathematical model developed by Alan Turing.
Omega and why maths has no TOEs
Is there a Theory of Everything for mathematics? Gregory Chaitin thinks there isn't and Turing's famous halting problem plays an important part in his work.
Turing is most famous for his work as a WWII code breaker. This article looks at the efforts of all the code breakers at Bletchley Park, which historians believe shortened the war by two years.
A version of Turing's famous test โ the "Completely automated public Turing test to tell computers and humans apart", or CAPTCHA for short โ helps in the fight against the everyday evil of spam
Turing's scientific legacy is going stronger than ever. An example is an announcement from February this year that scientists have devised a biological computer, based on an idea first described by
Turing in the 1930s.
Did a philosopher kill WALL-E?
AI has become big business in Hollywood, but will we ever see the computers reliably pass the Turing test? Or is it philosophically impossible? | {"url":"http://plus.maths.org/content/comment/reply/5720","timestamp":"2014-04-20T06:03:56Z","content_type":null,"content_length":"25371","record_id":"<urn:uuid:0bb6a83d-cab0-42ff-9f71-29d568c3b31f>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00468-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Tutors
Port Neches, TX 77651
Middle and High School Science Tutor
...I am currently student teaching 6th grade science, but we have a study hall class of 6th graders. Most of the time, they work on their math
in study hall. The teacher is often busy, so I have been helping the students with their
math. I took up to pre-calculus...
Offering 6 subjects including prealgebra | {"url":"http://www.wyzant.com/Groves_Math_tutors.aspx","timestamp":"2014-04-17T04:37:00Z","content_type":null,"content_length":"56169","record_id":"<urn:uuid:74a59b90-2ff4-44ef-aed5-5e1ba687b9e0>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00118-ip-10-147-4-33.ec2.internal.warc.gz"} |
What is the symbol of a differential operator?
I find Wikipedia's discussion of symbols of differential operators a bit impenetrable, and Google doesn't seem to turn up useful links, so I'm hoping someone can point me to a more pedantic discussion.
I think I understand the basic idea on $\mathbb{R}^n$, so for readers who know as little as I do, I will provide some ideas. Any differential operator on $\mathbb{R}^n$ is (uniquely) of the form $\sum p_{i_1,\ldots,i_k}(x)\, \partial^k / (\partial x_{i_1} \cdots \partial x_{i_k})$, where $x_1,\ldots,x_n$ are the canonical coordinate functions on $\mathbb{R}^n$, the $p_{i_1,\ldots,i_k}(x)$ are smooth functions, and the sum ranges over (finitely many) possible indexes (of varying length). Then the symbol of such an operator is $\sum p_{i_1,\ldots,i_k}(x)\, \xi^{i_1} \cdots \xi^{i_k}$, where $\xi^1,\ldots,\xi^n$ are new variables; the symbol is a polynomial in the variables $\{\xi^1,\ldots,\xi^n\}$ with coefficients in the algebra of smooth functions on $\mathbb{R}^n$.
Ok, great. So symbols are well-defined for $\mathbb{R}^n$. But most spaces are not $\mathbb{R}^n$ - most spaces are formed by gluing together copies of (open sets in) $\mathbb{R}^n$ along smooth maps. So what happens to symbols under changes of coordinates? An affine change of coordinates is a map $y_j(x) = a_j + \sum_i Y_j^i x_i$ for some vector $(a_1,\ldots,a_n)$ and some invertible matrix $Y$. It's straightforward to describe how the differential operators change under such a transformation, and thus how their symbols transform. In fact, you can forget about the fact that indices range $1,\ldots,n$, and think of them as keeping track of tensor contraction; then everything transforms as tensors under affine coordinate changes, e.g. the variables $\xi^i$ transform as coordinates on the cotangent bundle.
On the other hand, consider the operator $D = \partial^2/\partial x^2$ on $\mathbb{R}$, with symbol $\xi^2$; and consider the change of coordinates $y = f(x)$. By the chain rule, the operator $D$ transforms to $(f'(y))^2\, \partial^2/\partial y^2 + f''(y)\, \partial/\partial y$, with symbol $(f'(y))^2 \psi^2 + f''(y) \psi$. In particular, the symbol did not transform as a function on the cotangent space. Which is to say that I don't actually understand where the symbol of a differential operator lives in a coordinate-free way.
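Explicitly, the chain rule computation behind this: with $y = f(x)$ one has $\frac{\partial}{\partial x} = f' \frac{\partial}{\partial y}$, so

$$\frac{\partial^2}{\partial x^2} = f' \frac{\partial}{\partial y}\left(f' \frac{\partial}{\partial y}\right) = (f')^2 \frac{\partial^2}{\partial y^2} + f'' \frac{\partial}{\partial y},$$

and only the top-order term $(f')^2 \psi^2$ of the symbol transforms tensorially.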
Why I care
One reason I care is because I'm interested in quantum mechanics. If the symbol of a differential operator on a space X were canonically a function on the cotangent space T*X, then the inverse of
this Symbol map would determine a "quantization" of the functions on T*X, corresponding to the QP quantization of $\mathbb{R}^n$.
But the main reason I was thinking about this is from Lie algebras. I'd like to understand the following proof of the PBW theorem:
Let L be a Lie algebra over $\mathbb{R}$ or $\mathbb{C}$, G a group integrating the Lie algebra, UL the universal enveloping algebra of L and SL the symmetric algebra of the vector space L. Then UL is naturally the
space of left-invariant differential operators on G, and SL is naturally the space of symbols of left-invariant differential operators on G. Thus the map Symbol defines a canonical vector-space
(and in fact coalgebra) isomorphism UL ≅ SL.
8 Answers
The principal symbol of a differential operator $\sum_{|\alpha| \leq m} a_\alpha(x) \partial_x^\alpha$ is by definition the function $\sum_{|\alpha| = m} a_\alpha(x) (i\xi)^\alpha$. Here $\alpha$ is a multi-index (so $\partial_x^\alpha$ denotes $\alpha_1$ derivatives with respect to $x_1$, etc.) At this point, the vector $\xi = (\xi_1, \ldots, \xi_n)$ is merely a formal
variable. The power of this definition is that if one interprets $(x,\xi)$ as variables in the cotangent bundle in the usual way -- i.e. $x$ is any local coordinate chart, then $\xi$ is
the linear coordinate in each tangent space using the basis $dx^1, \ldots, dx^n$, then the principal symbol is an invariantly defined function on $T^*X$, where $X$ is the manifold on
which the operator is initially defined, which is homogeneous of degree $m$ in the cotangent variables.
Here is a more invariant way of defining it: fix $(x_0,\xi_0)$ to be any point in $T^*X$ and choose a function $\phi(x)$ so that $d\phi(x_0) = \xi_0$. If $L$ is the differential operator,
then $L( e^{i\lambda \phi})$ is some complicated sum of derivatives of $\phi$, multiplied together, but always with a common factor of $e^{i\lambda \phi}$. The `top order part' is the one
which has a $\lambda^m$, and if we take only this, then its coefficient has only first derivatives of $\phi$ (lower order powers of $\lambda$ can be multiplied by higher derivatives of $\phi$). Hence if we take the limit as $\lambda \to \infty$ of $\lambda^{-m} L( e^{i\lambda \phi})$ and evaluate at $x = x_0$, we get something which turns out to be exactly the principal
symbol of $L$ at the point $(x_0, \xi_0)$.
There are many reasons the principal symbol is useful. There is indeed a `quantization map' which takes a principal symbol to any operator of the correct order which has this as its principal symbol. This is not well defined, but it is if we mod out by operators of one order lower. Hence the comment in a previous reply about this being an isomorphism between filtered algebras.
In special situations, e.g. on a Riemannian manifold where one has preferred coordinate charts (Riemann normal coordinates), one can define a total symbol in an invariant fashion (albeit
depending on the metric). There are also other ways to take the symbol, e.g. corresponding to the Weyl quantization, but that's another story.
In microlocal analysis, the symbol captures some very strong properties of the operator $L$. For example, $L$ is called elliptic if and only if the symbol is invertible (whenever $\xi \neq 0$). We can even talk about the operator being elliptic in certain directions if the principal symbol is nonvanishing in an open cone (in the $\xi$ variables) about those directions.
Another interesting story is wave propagation: the characteristic set of the operator is the set of $(x,\xi)$ where the principal symbol $p(L)$ vanishes. If its differential (as a
function on the cotangent bundle) is nonvanishing there, then the integral curves of the Hamiltonian flow associated to $p(L)$, i.e. for the Hamiltonian vector field determined by $p(L)$
using the standard symplectic structure on $T^*X$, ``carries'' the singularities of solutions of $Lu = 0$. This is the generalization of the classical fact that singularities of solutions
of the wave equation propagate along light rays.
One way to understand the symbol of a differential operator (or more generally, a pseudodifferential operator) is to see what the operator does to "wave packets" - functions that are
strongly localised in both space and frequency.
Suppose, for instance, that one is working in R^n, and one takes a function psi which is localised to a small neighbourhood of a point x0, and whose Fourier transform is localised to a
small neighbourhood of xi0/hbar, for some frequency xi0 (or more geometrically, think of (x0,xi0) as an element of the cotangent bundle of R^n). Such functions exist when hbar is small,
e.g. psi(x) = eta( (x-x0)/eps ) e^{i xi0 . (x-x0) / hbar} for some smooth cutoff eta and some small eps (but not as small as hbar).
Now apply a differential operator L of degree d to this wave packet. When one does so (using the chain rule and product rule as appropriate), one obtains a bunch of terms with different
powers of 1/hbar attached to them, with the top order term being 1/hbar^d times some quantity a(x0,xi0) times the original wave packet. This number a(x0,xi0) is the principal symbol of L at (x0,xi0). (The lower order terms are related to the lower order components of the symbol, but the precise relationship is icky.)
Basically, when viewed in a wave packet basis, (pseudo)differential operators are diagonal to top order. (This is why one has a pseudodifferential calculus.) The diagonal coefficients are
essentially the principal symbol of the operator. [While on this topic: Fourier integral operators (FIO) are essentially diagonal matrices times permutation matrices in the wave packet
basis, so they have a symbol as well as a relation (the canonical relation of the FIO, which happens to be a Lagrangian submanifold of phase space).]
One can construct wave packets in arbitrary smooth manifolds, basically because they look flat at small scales, and one can define the inner product xi0 . (x-x0) invariantly (up to lower
order corrections) in the asymptotic limit when x is close to x0 and (x0,xi0) is in the cotangent bundle. This gives a way to define the principal symbol on manifolds, which of course
agrees with the standard definition.
I think you have misunderstood the definition of "symbol." You should only take the term of highest order in the vector fields. Then the symbol is well defined. (EDIT: well, I guess I
should have read Wikipedia first. I stick by my assertion that the symbol map one should consider is the leading order one).
More to the point, the symbol map isn't from differential operators to functions on the cotangent bundle, it's from the associated graded of differential operators for the order filtration to functions on the cotangent bundle. So, on operators of order less than n, you can do the operation you described to the highest order term, and you get something coordinate independent.
That's what I thought. But then the "proof" of PBW (which I got from TWF) fails, unless there's something special going on for left-invariant things. - Theo Johnson-Freyd Oct 31 '09
it doesn't really fail, because there's no proof there. Baez just says there's a map and calls it "symbol." But there is no one symbol map. However, there is a unique G-invariant
isomorphism of SL -> UL which sends a homogeneous function to a differential operator with that element as principal symbol. You can't blame Baez for calling that map "symbol." - Ben Webster Oct 31 '09 at 18:00
The phrases 'principal symbol' (the highest degree part) and 'total symbol' (for every part) are pretty useful for distinguishing between the two. - Greg Muller Nov 2 '09 at 5:51
It must also be remembered that the total symbol of (say, for concreteness) a scalar linear partial differential operator doesn't live in general on the cotangent bundle, but on the
bundle of jets of scalar-valued maps of the same order as the order of the operator. This can also be seen from the extension of the chain rule to higher-order derivatives. There the
notion of a total symbol becomes coordinate-invariant. - Pedro Lauridsen Ribeiro Sep 7 '11 at 22:38
The D-module course notes of Dragan Milicic contain a detailed construction of the symbol map -- they can be found on his webpage www.math.utah.edu/~milicic. There may be several versions
linked there -- the 2007-2008 course should be most thorough. Start reading in Chapter 1, section 5. This goes through the construction of the filtration Ben mentions, then constructs the
graded module and symbol map explicitly. Of course, this section only covers the (complex) affine case you already describe. The coordinate-free generalization for say smooth quasi-projective
varieties over the complex numbers is done in Chapter 2, section 3. Basically, you look at the sheaf of differential operators on your variety, construct a degree filtration of that sheaf,
then the corresponding graded sheaf is isomorphic to the direct image of the sheaf of regular functions on the cotangent bundle via the symbol map.
In the case G in your question is an algebraic group, the sheaf of differential operators on G is formed by localizing UL, and the pushforward of regular functions on the cotangent bundle is the localization of SL. Then UL and SL can be recovered by pulling these sheaves back to the identity element in G. I don't think the isomorphism the way you have stated it is true as-is. I think the content of any statement along these lines (as it relates to the proof of the PBW theorem) probably has to do with the construction of the filtration by top degree being coordinate-independent.
The original questioner already knows this, but anyone else who is interested in this question should check out the conversation at John Baez's blog.
John Baez's week282 might help you out, although I can't say I really understand what's going on.
Funny, it was week282 that prompted me to ask the question. - Theo Johnson-Freyd Oct 31 '09 at 3:59
I think that one shouldn't insist on the invariance of symbols. A symbol is just a realization of a differential operator on the cotangent bundle. If the symbol were invariant under some transformation it would restrict the corresponding operator to some subset which may be less interesting. As a finite dimensional example, the subspace of linear transformations of a finite dimensional vector space invariant under the group of unitary transformations would be the dull subspace of multiples of the unit operator.
The definition of symbol as presented in Wikipedia is not invariant - only the highest order terms are. Some textbooks call those higher order terms symbols (Wikipedia suggests the name principal symbol); hence Ben's answer, which refers to that definition.
The highest order terms are clearly most important for the properties of the differential equations, e.g. their positiveness allows to prove the existence of solutions (it's related to the
fact that positively definite linear operators are invertible in linear algebra).
As for "Thus the map Symbol defines a canonical vector-space (and in fact coalgebra) isomorphism UL โ SL.", this statement should be proved by induction order-by-order in a fixed coordinate
system. It should be true in any coordinate system, but the homomorphism depends on it.
There is, however, a canonical coordinate system given by the exp map, and this, I think (not sure here), is the canonical map referred to in the question.
As for quantum mechanics, while I have only some general knowledge, I think "inverse of this Symbol map would determine a "quantization" of the functions on T*X, corresponding to the QP
quantization of $\mathbb{R}^n$" is true, but somewhat too optimistic. Yes, there are quantizations, but they are canonical to all terms only when you restrict yourself to linear changes of coordinates
(or when you do some additional constructions).
I risk being viewed as stepping on a slippery stone here, but my limited understanding is that you're actually hitting a fundamental question here - even making some quantum field theories play nicely with diffeomorphisms is far from a simple exercise. Moreover, the theory that includes them as the degrees of freedom would be called quantum gravity and is the Holy Grail of high-energy physics rather than a theorem of Lie group geometry (though the latter is extensively used for the former).
I think the accepted terminology these days is such that a 'quantum field theory' does not include gravity. A theory of quantum gravity is indeed a primary goal of research in hep-th and
gr-qc, but it is questionable whether it is in the form of a quantum field theory. The current thinking is that quantum field theories are effective theories of a more fundamental theory
(perhaps a string/M theory, perhaps something else). - José Figueroa-O'Farrill Nov 2 '09 at 13:24
Another comment is that Chern-Simons theory is an example of a quantum field theory which "plays nicely" with diffeomorphisms, but it is not a gravity theory because it has no metric
degrees of freedom. In other words, whereas a quantum gravity theory presumably should "play nicely" with diffeomorphisms, not every theory which does is a theory of gravity. - José Figueroa-O'Farrill Nov 2 '09 at 13:26
Agreed, but the last paragraph is quite vague anyway... I'll rewrite it still. - Ilya Nikokoshev Nov 2 '09 at 18:43
Find \(V_{out}\) of this amplifier...
โข one year ago
โข one year ago
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
1. The +ve input is tied to Vin, so the -ve input will attempt to maintain the same voltage. 2. This means that the voltage drop across, and thus the current through, the first 10k resistor is
zero. 3. if there's no current through the 1st 10k resistor, then there is no current through the feedback resistor. 4. no current through the feedback resistor means no voltage drop. With no
voltage drops, the output will see Vin. Are you sure something isn't grounded in there somewhere?
Should the + not go to ground? If so then it looks like an op-amp inverting amplifier. If resistance is very high (I hope 10k is enough) then: \[V_{out} \approx -V_{in} \frac{R_f}{R_{in}}\]
Here's a link to info on similar amps: http://www.renesas.com/edge_ol/engineer/03/index.jsp
The resistance Rf can also be named R2, it is the 2nd resistance from the left or the top one if you like.
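(If both resistors in the figure are the 10k values mentioned above, so Rf = Rin = 10k, the inverting-amplifier formula gives a gain of -1, i.e. \(V_{out} \approx -V_{in}\).)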
Sausalito SAT Math Tutor
Find a Sausalito SAT Math Tutor
...In addition, I have significant experience tutoring students in lower division college mathematics courses such as calculus, multivariable calculus, linear algebra and differential equations,
as well as lower division physics. Teaching math and physics is exciting for me because I am passionate ...
25 Subjects: including SAT math, calculus, physics, statistics
...My degrees are all in this area, and I have taught this subject at all levels, from fourth graders to graduate students, for thirty years. I have taught grammar to elementary, middle, high
school, and college students. As a college professor, I required essay tests in all of my courses -- in order to ensure that my students would learn more about writing.
49 Subjects: including SAT math, English, reading, writing
...I have been tutoring students since I was in high school myself. Learning science and math can be difficult at times but with a little help anyone can master the principles and discover a vast,
exciting, and ever expanding body of knowledge! I would be honored to help you in your quest for this knowledge.
12 Subjects: including SAT math, chemistry, physics, calculus
...I have taken college calculus, differential equations and statistics. I was nominated to attend the NC School of Math & Science when I was in high school. Later, I went on to attend NC State to
receive a bachelor's degree in mechanical engineering.
13 Subjects: including SAT math, chemistry, calculus, geometry
I tutor from the perspective that mastery is not knowing all the right answers, but asking the right questions. Especially in science and math, taking the right approach is far more important than
the solution for any one problem. I graduated from UCLA with a degree in psychobiology, and have been tutoring chemistry, biology, and math students from high school to college level.
15 Subjects: including SAT math, English, chemistry, geometry | {"url":"http://www.purplemath.com/Sausalito_SAT_Math_tutors.php","timestamp":"2014-04-21T12:49:44Z","content_type":null,"content_length":"24107","record_id":"<urn:uuid:7d817319-5505-405b-8867-4cb5b1b6f4db>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00195-ip-10-147-4-33.ec2.internal.warc.gz"} |
Circumference-Diameter Ratio of a Circle
Circumference-Diameter Ratio of a circle.
circum. / diam.
circum. / rad.
Determine at least two different methods of making the measurements. Be sure you include ways to measure the circumference of the cylinder in each method. Keep in mind that you must measure each
quantity directly; no values can be found through calculations.
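For reference, the ratios this lab is designed to reveal: for any circle, circumference/diameter = pi (about 3.14159) and circumference/radius = 2*pi (about 6.28319).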
MCNP planar source (rectangular)
Hello to everybody,
I need some explanation on how to use SDEF variables to correctly define a planar rectangular source. Let's say this source is emitting in all directions, but I am interested in one side of the source
surface, where a point detector is located. I used in my example VEC (VEC =001) but not DIR.
How do I take into account the fact that in an analog simulation the source emits equally toward the opposite surface plane?
I would also like to understand the cosine distribution mentioned in the primer for a surface source, where it is written p(μ) = 2μ for μ in [0,1]. That means that p(μ) can take the value 2 when μ=1. Isn't it
a probability, which is supposed to be less than 1?
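(Note on the last point: p(μ) = 2μ is a probability density, not a probability, so its values may exceed 1; what must equal 1 is its integral over the range, and indeed the integral of 2μ over [0,1] is μ² evaluated from 0 to 1, which is 1.)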
Here is my example:
SDEF SUR= 51 POS 0 0 0 X=0 Y=d1 Z=d2 PAR=2 ERG 1.25 VEC 1 0 0 $ Plane rectangular source
SI1 -45 45
SP1 0 1
SI2 -70 70
Thank you in advance | {"url":"http://www.physicsforums.com/showthread.php?t=723907","timestamp":"2014-04-21T07:21:49Z","content_type":null,"content_length":"20161","record_id":"<urn:uuid:b2e76a76-bcbf-4c68-b4a6-d078398d49c9>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00452-ip-10-147-4-33.ec2.internal.warc.gz"} |
Possible Answer
If you assume that the ground is very stiff relative to your structure you can use the following formula: G = sqrt((2*h*k)/W), where ... x1, x2 = displacements of the structure on which the impact load
Shock load is a term used to describe an intense, sudden force impacting a fixed object. Typically, these loads move at fast speeds and thereby carry very large amounts of force.
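A standard energy-balance sketch behind formulas of this kind (assuming a weight W dropped from height h onto a linear spring of stiffness k): equating the potential energy lost to the strain energy stored gives W*(h + d) = (1/2)*k*d^2, and solving for the maximum deflection d gives d = (W/k)*(1 + sqrt(1 + 2*h*k/W)). The dynamic load is thus the static load times the impact factor 1 + sqrt(1 + 2*h/d_st), where d_st = W/k is the static deflection.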
SIR Model - The Flu Season - Dynamic Programming
May 14, 2013
By Francis Smart
# The SIR (susceptible, infected, and recovered) model is a common and useful tool in epidemiological modelling.
# In this post and in future posts I hope to explore how this basic model can be enriched by including different population groups or disease vectors.
# Simulation Population Parameters:
# Proportion Susceptible
Sp = .9
# Proportion Infected
Ip = .1
# Population
N = 1000
# Number of periods
r = 200
# Number of pieces in each time period.
# A dynamic model can be simulated by dividing each dynamic period into a sufficient number of discrete pieces.
# As the number of pieces approaches infinity then the differences between the simulated outcome and the outcome achieved by solving the dynamic equations approaches zero.
np = 1
# Model - Dynamic Change
DS = function() -B*C*S*I/N
DI = function() (B*C*S*I/N) - v*I
DZ = function() v*I
# I is the number of people infected, N the number of people in total, S is the number of people susceptible for infection, and Z is the number of people immune to the infection (from already
recovering from the infection).
# Model Parameters:
# Transmission rate from contact with an infected individual.
B = .2
# Contact rate. The number of people that someone comes into contact with sufficiently to receive transmission.
C = .5
# Recovery rate. Meaning the average person will recover in 20 days (about 3 weeks).
# This would have to be a particularly virulent form of the flu (not impossible at all).
v = .05
# Initial populations:
# Susceptible population; Sv is a vector while S is the population value at the current period
Sv = S = Sp*N
# Infected; Iv is a vector while I is the population value at the current period
Iv = I = Ip*N
# Initial immunity.
Zv = Z = 0
# Now let's see how the model works.
# Loop through periods
for (p in 1:r) {
# Loop through parts of periods
for (pp in 1:np) {
# Calculate the change values
ds = DS()/np
di = DI()/np
dz = DZ()/np
# Change the total populations
S = S + ds
I = I + di
Z = Z + dz
# Save the changes in vector form
Sv = c(Sv, S)
Iv = c(Iv, I)
Zv = c(Zv, Z)
}
}
# ggplot2 easily generates high quality graphics
library(ggplot2)
# Save the data to a data frame for easy manipulation with ggplot
mydata = data.frame(Period=rep((1:length(Sv))/np,3), Population = c(Sv, Iv, Zv), Indicator=rep(c("Uninfected", "Infected", "Recovered"), each=length(Sv)))
# This sets up the plot but does not actually graph anything yet.
p = ggplot(mydata, aes(x=Period, y=Population, group=Indicator))
# This graphs the first plot just by the use of the p command.
# Adding the geom_line plots the lines changing the color or the plot for each indicator (population group)
p + geom_line(aes(colour = Indicator)) + ggtitle("Flu Season")
# Save initial graph:
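ggsave("flu-season.png")  # presumably a ggsave() call like this; the filename is a guess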
# Let's do some back of the envelope cost calculations.
# Let's say the cost of being infected with the flu is about $10 a day (a low estimate) in terms of lost productivity as well as expenses on treatment.
# This amounts to:
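sum(Iv) * 10 / np  # presumably: total infected person-days times $10 per day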
# Which is a cost of $165,663.40 over an entire flu season for the thousand people in our simulated sample.
# Or about $165 per person.
# Imagine if we could now do a public service intervention.
# Telling people to wash their hands, practice social distancing, and avoid touching their noses and eyes, and staying at home when ill.
# Let's say people take up these practices and it reduces the number of potential exposure periods per contact by half.
C = .25
# ....
p + geom_line(aes(colour = Indicator)) + ggtitle("Flu Season with Prevention")
# Save initial graph:
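ggsave("flu-season-prevention.png")  # again, a guessed stand-in for the original save call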
# ....
# Which is a cost of $76,331.58 over an entire flu season for the thousand people in our simulated sample or about 76 dollars per person.
# The difference in costs is about 89 thousand dollars for the whole population, or on average $89 per person. The argument is therefore: so long as a public service intervention that reduces personal
# contact costs less than 89 thousand dollars for those 1000 people, it is an efficient intervention (at least by the made-up parameters I have here).
Quantum Field Theory
P622 Quantum Field Theory
Department of Physics, Indiana University
Taught by: Steven Gottlieb
Meets: Tuesday and Thursday, 12:20 p.m. to 2:15 p.m. in Swain West 220
This course is the second semester of a two semester sequence in Quantum Field Theory. You may examine the syllabus for the course or the syllabus for the second semester of the last previous year I
taught P622 by clicking on either hyperlink. Note that I am using a different text this time.
About the Textbook
The textbook Quantum Field Theory by M. Srednicki was first published in 2007. You should get the latest printing (2009) from the bookstore. Corrections to the text may be found here.
Material Relating to Lectures Will Appear Here
Slides from lectures 1-4, covering Srednicki Secs. 13-19
A Mathematica notebook (in pdf format) from lecture 3, doing the self-energy integral in Eq. (14.43)
Slides from lectures 4-6, covering Srednicki Secs. 20-28
Slides from lectures 6-8, covering Srednicki Secs. 51-52
Slides from lectures 9-10, covering Srednicki Secs. 61-63 (didn't get all the way to end)
Slides from lectures 11-12, covering Srednicki Secs. 64-65 (didn't get all the way to end)
Slides from lectures 13-14, covering Srednicki Secs. 66-68 and 44
Slides from lectures 14-17, covering Srednicki Secs. 53, 69, 70. We did not complete Sec. 70 and there is also some review material not covered in class.
Slides from lectures 17-18, covering Srednicki Secs. 71-72 and some review
Slides from lecture 20, covering Srednicki Sec. 73 (a Nobel prize winning calculation)
Ultraviolet Behavior of Non-Abelian Gauge Theories, by DJ Gross and F Wilczek
Reliable Perturbative Results for Strong Interactions?, by HD Politzer
Slides from lectures 21-22, covering Srednicki Secs. 74 and 32
Slides from lectures 22-23, covering Srednicki Secs. 84 to 87
Broken Symmetry and the Mass of Gauge Vector Mesons, by F. Englert and R. Brout
Broken Symmetries and the Masses of Gauge Bosons, by PW Higgs
Global Conservation Laws and Massless Particles, by GS Guralnik, CR Hagen, and TW Kibble
Slides from lecture 24, covering Srednicki Sec. 88
A Model of Leptons, by S. Weinberg
Slides from lecture 25, covering Srednicki Sec. 89
CP-Violation in the Renormalizable Theory of Weak Interaction, by M. Kobayashi and T. Maskawa
Lattice Field Theory
Confinement of quarks, by K. Wilson
Confinement and the Critical Dimensionality of Space-Time, by M. Creutz
Nonperturbative QCD simulations with 2+1 flavors of improved staggered quarks by A. Bazavov et al.
Computerized Feynman Diagrams
A comprehensive approach to new physics simulations by ND Christensen et al.
Homework Assignments Will Appear Below As They Are Assigned
Homework 6 (due February 1)
Homework 7 (due February 17)
Homework 8 (due March 8)
Homework 9 (due April 7)
Homework 10 (due April 21)
Feel free to look at the problems assigned previously. Since we are using a new textbook, don't expect old assignments to be of great relevance as the order of topics has changed.
Homework assignments from Spring 1997
Homework 7 (posted January 14)
Homework 8 (posted February 1; due February 13)
Homework 9 (handed out in class; due March 6)
Homework 10 (handed out in class; due April 22)
Homework 11 (handed out in class; due May 6)
Homework assignments from 1996
Homework 7 (posted January 16)
Homework 8 (posted February 5)
Homework 9 (posted February 22)
Homework 10 (posted March 24)
Homework 11 (posted April 22) | {"url":"http://physics.indiana.edu/~sg/p622.html","timestamp":"2014-04-16T10:35:27Z","content_type":null,"content_length":"8321","record_id":"<urn:uuid:52bb5317-9e89-4a83-8e38-2607b990d5cd>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00649-ip-10-147-4-33.ec2.internal.warc.gz"} |
LaTeX and Miscellaneous
Published February 6, 2013 linux
I have for long ignored blogging, but today I came across the useful linux command 'paste' and I thought I should write it down. In the past I have used quite a few smart tricks to handle data or
output files, but I failed to document these tricks. I really should do it this time.
So here is the story. I have an output file test.o1551739 which contains lines such as "iter = 125" scattered everywhere. I need to get the sum of all the numbers in these lines. How can I achieve
this without writing a program? The following is a little thought process.
First, I am so used to grep that I can grab all such lines by doing:
grep iter test.o1551739
Then, use sed to get rid of the prefix "iter = ":
sed s/"iter = "//
Now that I have a bunch of numbers, one per line, how can I sum them up? Yah, here comes the usage of paste. The command paste allows me to merge the lines and put a delimiter in between. I chose to
use the plus sign, because then it gives me an arithmetic expression. Try this:
paste -sd+
Once I have the arithmetic expression, I'll just use bc to calculate the result. So piping all these steps, here is the one line command:
grep iter test.o1551739 | sed s/"iter = "// | paste -sd+ | bc
Nice. I realize that there must be tons of ways to do the same job. But I really love my solution.
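For instance, awk alone should be able to do the whole job in one pass, since the number is the third field of each matching line:

awk '/iter/ {s += $3} END {print s}' test.o1551739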
Published November 28, 2011 latex
The cases environment defined by the amsmath package sets each case to be inline math style. This will look ugly when the formulas in the cases are complicated, such as fractions or integrals.
Consider the following sample code:
f(x) =
\begin{cases}
\frac{1}{2(x-1)}, & x>1 \\
\frac{\Gamma(x)}{2^{x-1}}, & 0<x<1
\end{cases}
The result looks cramped: the fractions come out tiny, set in inline style.
The solution is to use the dcases environment provided by the mathtools package instead. The prefix `dโ means `displayโ. It will set the cases in displayed math style (exactly the same as \frac
versus \dfrac). So if we write the following code
f(x) =
\begin{dcases}
\frac{1}{2(x-1)}, & x>1 \\
\frac{\Gamma(x)}{2^{x-1}}, & 0<x<1
\end{dcases}
then the fractions are set at full display size, as desired.
Published April 27, 2011 math
A colleague of mine mentioned to me today Bocher's formula for computing the coefficients of the characteristic polynomial of a matrix. It seems that this formula does not appear too often in
textbooks or literature. I'll just write down the formula and the idea of a simple proof here.
Let the characteristic polynomial of a matrix $A$ be

$\displaystyle{p(\lambda)=\det(\lambda I-A)=\lambda^n+a_1\lambda^{n-1}+\cdots+a_{n-1}\lambda+a_n.}$
Then the coefficients can be computed by the recursion

$\displaystyle{a_j=-\frac{1}{j}\left(tr(A^j)+a_1 tr(A^{j-1})+\cdots+a_{j-1}tr(A)\right),\qquad j=1,\dots,n.}$
To prove the formula, note that the coefficient $a_j$ is the summation of all possible products of j eigenvalues, i.e.,
$\displaystyle{a_j=(-1)^j\sum_{\{t_1\cdots t_j\}\in C_n^j}\lambda_{t_1}\cdots\lambda_{t_j},}$
where $C_n^j$ denotes the j-combinations of numbers from 1 to n, and the trace of $A^i$ is the sum of the $i$th powers of the eigenvalues, i.e.,

$\displaystyle{tr(A^i)=\sum_{t=1}^{n}\lambda_{t}^{i}.}$
In addition, we have
$\displaystyle{\left(\sum_{\{t_1\cdots t_j\}\in C_n^j}\lambda_{t_1}\cdots\lambda_{t_j}\right)\left(\sum_{t_{j+1}=1}^n\lambda_{t_{j+1}}^i\right)=\sum_{\{t_1\cdots t_{j+1}\}\in C_n^{j+1}}\lambda_{t_1}\
cdots\lambda_{t_j}\lambda_{t_{j+1}}^i+\sum_{\{t_1\cdots t_j\}\in C_n^j}\lambda_{t_1}\cdots\lambda_{t_{j-1}}\lambda_{t_j}^{i+1}.}$
The above indicates that the first part of $(-1)^ja_jtr(A^i)$ cancels the second part of $(-1)^{j-1}a_{j-1}tr(A^{i+1})$, whereas the second part of $(-1)^ja_jtr(A^i)$ cancels the first part of $(-1)^
{j+1}a_{j+1}tr(A^{i-1}).$ The rest of the proof becomes obvious now.
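One can sanity-check the recursion numerically against Matlab's built-in poly() -- here is a small illustrative script (assuming the conventions above, with a(1) holding $a_0$):

A = rand(4);
s = arrayfun(@(i) trace(A^i), 1:4);       % power sums tr(A^i)
a = 1;                                    % a_0 = 1
for j = 1:4
    a(j+1) = -(a(j:-1:1) * s(1:j).') / j; % the recursion above
end
disp([a; poly(A)])                        % the two rows should agree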
Published April 15, 2011 latex, matlab
You do not want to mess up, right? When writing a LaTeX document, you may once in a while want to include some Matlab codes and/or outputs (preferably typeset using typewriter font if you have the
same taste as me) during the course of your writing. What I used to do was to copy and paste the Matlab codes into my LaTeX file, execute the codes in Matlab, then do another copy and paste to place
the results in my LaTeX file, and finally decorate them in a verbatim block or something like that. Guess what, Matlab provides a command, called publish, that helps you do all these in a simpler way.
In a nutshell, the way to use publish is to first type in the texts (as in your usual LaTeX editing), including Matlab codes, in a single .m file. Let's say the file name is example.m. Then, in
Matlab, you issue the command
publish('example.m', struct('format','latex','outputDir','ltx-src'));
It means that you want Matlab to process example.m and output a LaTeX file example.tex (that you can compile to get pdf) in the sub-directory ltx-src. This is it. Instead of writing example.tex, you
write a file example.m.
So, how should I write example.m? It is best to give an example. See the following:
% <latex>
% The eigenvalues of a circulant matrix can be
% obtained by performing FFT on the first column
% of the matrix. First, let us construct a
% $5\times5$ circulant matrix \verb|C| whose first
% column \verb|c| is generated with random input:
% </latex>
c = rand(5,1);
% sad that Matlab does not provide a circulant()
% command...
C = toeplitz(c, c([1 end:-1:2]))
% <latex>
% The eigenvalues of \verb|C| are nothing but
% </latex>
lambda = fft(c)
% <latex>
% Check it out! The output is the same as using
% the \verb|eig| command:
% </latex>
% Fun, isn't it?
It is nothing but a script file that Matlab can execute, right? The tricky part is that all the texts and LaTeX markups are buried in comment blocks. How the Matlab command publish makes a LaTeX
output is that whenever it meets a whole block of comments starting with "%%", it strips the comment signs and decorates the whole block using the pair \begin{par} and \end{par}. On the other hand,
whenever it meets a block of code that does not start with "%%", Matlab knows that it contains executable commands. Matlab uses \begin{verbatim} and \end{verbatim} to typeset these command texts, and
automatically adds the Matlab outputs of the commands, which are also decorated by the \begin{verbatim} and \end{verbatim} pair, in the LaTeX file. Something I am not satisfied with is that Matlab does not
recognize LaTeX commands such as \verb||. I have to put <latex></latex> so that Matlab can do a verbatim copy of \verb||, instead of expanding the text \verb|| in some weird way, in the output LaTeX file.
It is time to try the above example yourself. Have fun.
Gauge Theory, Maxwell's equations, and the Maxwellian
There is an important difference related to the mathematical formulation of Self-Field Theory (SFT) and that is the complete absence of gauge. We know from the various quantum theories that gauge
plays a crucial role. To understand this difference we must first look at the basics of gauge theory. And to look at the basics of gauge theory we must first define some terms we have already used in
discussing SFT. First let's define Maxwell's equations.
If we just look at the equations above (1a-d), Maxwell's equations (the inhomogeneous Maxwell's equations), then we have a basis for examining gauge in various systems. The reason being, as they
stand, these equations cannot be solved uniquely. Since all equations involve differentials (without the actual variable) there is no definitive form of the equations. In other words, there is a
family of solutions, all isomorphs of each other; we do not have a unique solution. We can add any constant we like to them and this will be part of the overall family. Now this is the basis of gauge
theory with its symmetries. In terms of SFT, two arbitrary spinors could be a solution as long as they obey the Maxwell equations in (1a-d) above; the solutions 'float' about constants of
integration. There is a freedom to choose the constants any way we like.
So what is a Maxwellian?
In addition to the inhomogeneous Maxwell equations, we use the Lorentz equation that describes the field forces acting on the particles, which is written as
where the constitutive equations in free space are
. (1g)
the relationship between the speed of light [i] and the ratio of the fields
and finally the atomic energy density per volume is
which depends upon the E- and H-fields in the atomic region. Equations (1a-i) are termed the Maxwellian, or sometimes the EM field equations.[ii] In these equations, v is the particle velocity, m is
its mass. It is assumed that the volume of integration v[n] over which the charge density is evaluated, and the area the charge circulates normal to its motion S[n], are calculated during successive
periods over which the internal motions of the atom take place.
In the Maxwellian, (1a-d), where we use the inhomogeneous forms for (1a) and (1d), there is no gauge because now the fields are completely defined. We have a unique system of equations. There is
no family of isomorphs because we have tied the bispinorial solution down; it is no longer a floating family of similar shaped solutions, but a single unique solution.
This is a crucial difference between quantum theory and SFT, a complete absence of gauge, basically due to the use of the Lorentz equation within SFT along with the other defining equations (1e-i).
That said, gauge theory has been a very important tool when searching for links between the forces found within nature, for instance when it is used within the Standard Model of particle physics. The
Maxwellian has an analog within quantum theory and that is the Lagrangian.
[i] In SFT, the speed of light is not proscribed from being variable. Depending on the energy density of the region being studied, and the photon state, c can vary.
[ii] Where a nebular current density is used in (2.1d), the factor 4π comes about from an application of Green's theorem leading to a surface over the volume enclosed by the charge density. For the
case of discrete charges, the factor π represents the area enclosed by the moving charge point.
As you see Maxwell does not hold Einstein's postulate in reverence and awe and I suggest any modern scientist should think about Albert's postulate deeply from a Maxwell's equations viewpoint. (For a
start he died before Albert's work) For instance the speed of light within the body is NOT the speed of light we know as c since the energy density is different to that of free space. Again in the
inflation era just after the Big Bang, there is a superluminal speed of expansion; this is known as the inflationary period. Where does this superluminal speed come from if you hold to Einstein's
postulate about the invariance of the speed of light?
Albert's postulate about the invariance of the speed of light was an assumption, nothing more, nothing less. Yes it fits the postulates of relativity but there's no other cosmological reason based on
physics. To my mind it's an observation of the local region around our part of the Universe. The speed of light as shown in the Maxwellian depends on the energy density at any point in space and this
includes the regions where the weak and strong nuclear forces hold within the atom.
Let me put in a footnote I've just been writing tonight about Newton's gravity, quantum theory, GR; it seems appropriate. Read it and see if you can understand it.
Newton empirically determined the gravitational force as a coupling between masses, a mutual interaction similar to SFT. He obtained the inverse square form of the gravitational field first via
parabolic calculus. Newton reasoned that if a cannon ball was projected with the right velocity, it would travel completely around the Earth, effectively forming an orbit. A particular
gravitational field would lead to a period of revolution. He then validated his results using observations of the Moon around the Earth and the planets around the Sun. The gravitational constant
was eventually measured directly by Cavendish in 1796. The mutual fields between the two masses are equal and opposite.
Einstein expanded Newton's concept of gravitation via the use of a relativistic Lagrangian. This resulted in a theoretical form of gravity that could be used in exotic regions of the Cosmos, not
just Newton's 'gravitostatics'. In GR the Hilbert action yields Einstein's field equations through the principle of least action. While Einstein's equations are called 'field' equations they are
based on wave equations and written in terms of potential components. In Einstein's formulation the 'field', in reality the potential around a single particle, is determined via the curvature of space-time.
As with Newton's law, SFT obtains the mutual effect between two particles via the equations known as the Maxwellian (see Appendix A on the difference between Maxwell's equations and the
more defined Maxwellian that includes the Lorentz equation). SFT is thus based on field equations and not potential wave equations. Like Newton's gravity, SFT is a mutual effect. As with GR, the
various quantum field theories are based on wave equations that seek to determine the potential around a single particle. A restatement of GR and quantum theories as mutual effects between
particles can be applied via the mathematics of SFT. The result of such a reformulation is the replacement of the Lagrangians that involve wave equations of order 2 by Maxwellian equations in the
electric and magnetic field variables of order 1. Using the Maxwellian field equations compared with the Lagrangian potential equations induces a significant decrease in complexity. Practitioners
and students of quantum theory and GR will attest to the degree of difficulty of these computational methods.
The way SFT is a mutual description of particle - field interaction is another reason why we can talk about a possible 'cosmostasis'. What we are saying is that just like the hydrogen atom which is a
model of the dynamic equilibrium between two particles, the electron and the proton, and the fields between them, can we model the Universe in the same way that provides a system that comes to an
equilibrium over time? This gives us more room to compute than previously where we considered the solutions as either flat, open, or closed and we spoke of a critical density; now we also have a
possible equilibrium state.
Tony Fleming Biophotonics Research Institute tfleming@unifiedphysics.com
20 Dec 12:42 2012
Categories (cont.)
Christopher Howard <christopher.howard <at> frigidcode.com>
2012-12-20 11:42:09 GMT
I've perhaps been trying everyone's patience with my noobish CT
questions, but if you'll bear with me a little longer: I happened to
notice that there is in fact a Category class in Haskell base, in Control.Category:
notice that there is in fact a Category class in Haskell base, in
class Category cat where
A class for categories. id and (.) must form a monoid.
id :: cat a a
the identity morphism
(.) :: cat b c -> cat a b -> cat a c
morphism composition
However, the documentation lists only two instances of Category,
functions (->) and Kleisli Monad. For instruction purposes, could
someone show me an example or two of how to make instances of this
class, perhaps for a few of the common types? My initial thoughts were
something like so:
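For reference, a minimal instance of this shape (just a sketch using a
hypothetical Fun wrapper around plain functions, not an authoritative
example) would be:

import Prelude hiding (id, (.))
import Control.Category

-- plain functions wrapped in a newtype, so we can write the instance out
newtype Fun a b = Fun { runFun :: a -> b }

instance Category Fun where
  id            = Fun (\x -> x)           -- the identity morphism
  Fun g . Fun f = Fun (\x -> g (f x))     -- morphism composition

-- ghci> runFun (Fun (+1) . Fun (*2)) 3
-- 7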
Re: RFC: Exploring Technical Debt
in reply to RFC: Exploring Technical Debt
You would probably think of developing a method of measuring technical debt (rather than a simple equation or model) because it must necessarily be project-specific, and the "true cost" of each item
(ex. deferred refactoring) is best known by those that would implement them. Having experience with similar projects would aid in more accurate estimates.
Simplified, the problem can be stated: "Solving debt item D now would take N man-hours. D makes it X% longer to fix certain kinds of bugs on average.* We can expect to encounter these sorts of bugs Y
times per year. The average expected time it takes to fix these bugs is Z man-hours. Given the staffing on this project I would weight this figure by W." You should similarly add the man-hours saved
for implementing new features. D's per annum opportunity cost (in man-hours) is then the sum of X * Y * Z * W for all such bugs + (similar sum for new features). This can be thought of as future
"earnings" for fixing D now.
It then becomes an intertemporal cash flow problem. You must consider the time cost involved to make the fix now versus how much time is involved in repayment for the future. You only consider those
years you expect the project to be maintained (the longer the time the less important this estimate is). The ultimate comparison of each item should be the cost of fixing D now (N) vs. the present
value of all future productivity losses from D in maintenance and new features (F). If N - F < 0 it's better to fix the problem now, otherwise we should shelve it for later, if ever.
To calculate F you determine a discount rate, a non-trivial task based on a variety of difficult to estimate factors. In general it's the necessary return on investment of paying developers now to be
more productive in the future, plus risk (ex. bankruptcy risk of delaying a shipped product, risk of employee turnover, security risks, inflation risk, etc). It's difficult to do this simply and in a
suitably general way. If developer time would otherwise go to adding new features on the same product, the value of those features should be used to calculate an internal rate of return on those
features (based on an analysis of how much higher the product would sell for or how many more copies it would sell). Similarly, if they would otherwise go to another project, that project's marginal
IRR should be calculated and used. Alternatively, the company's WACC may or may not be appropriate. Calculating F is now a matter of running an NPV algorithm in a spreadsheet.
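The arithmetic is small enough to script, too. A toy sketch (every figure below is a made-up placeholder, not a recommendation):

use strict;
use warnings;

my $N           = 120;            # man-hours to fix debt item D now
my $annual_loss = 0.25 * 6 * 40;  # X * Y * Z, taking the weight W = 1
my $rate        = 0.10;           # risk-adjusted discount rate
my $years       = 5;              # expected maintenance horizon

my $F = 0;                        # present value of future losses from D
$F += $annual_loss / (1 + $rate) ** $_ for 1 .. $years;

printf "N = %d h, F = %.1f h => %s\n",
    $N, $F, ($N - $F < 0 ? "fix it now" : "defer");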
Since each input is an estimate, it would be prudent to repeat this calculation using a variety of values for your estimates, picked based on how confident you are in them (how well their cost is
known) and for worst-case scenarios. For example you may consider a steep discount rate if your company's survival is strongly determined by the success of your product as it's shipped in 4 months.
**Updated for clarity.
*X must be determined inside the technical domain, so consulting existing studies might be misleading. | {"url":"http://www.perlmonks.org/?node_id=797313","timestamp":"2014-04-18T19:06:08Z","content_type":null,"content_length":"34610","record_id":"<urn:uuid:1673198b-58fa-46b4-9ade-358fccf451d7>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00192-ip-10-147-4-33.ec2.internal.warc.gz"} |
Extending Reedy dimension to augmented chain complexes of abelian groups
Recall that a normal continuously-graded finite interval is given by a pair $a=([a],f)$ consisting of:
1.) A finite totally ordered set $[a]=[a_0< \dots < a_n]$
2.) A grading function $f:U[a] \to \mathbf{N}$ (where $U[a]$ denotes the underlying set of $[a]$)
satisfying the constraints
i.) Continuity: $f(a_j)=f(a_{j-1})\pm 1$ for $1\leq j \leq n$
ii.) Normality: $f(a_0)=f(a_n)=0$.
For an element $x\in [a]$, define even and odd boundaries: $$\partial^+(x)=\operatorname{min} \{y>x : f(y)=f(x)-1\}$$ and $$\partial^-(x)=\operatorname{max} \{y<x : f(y)=f(x)-1\}$$
We define the Reedy dimension $\operatorname{RDim}$ of $a$ to be the sum of the local maxima of $f$ minus the sum of the local minima of $f$.
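(A quick illustration, added for concreteness and not part of the original definitions: take $[a]=[a_0<a_1<a_2<a_3<a_4]$ with grading $f=(0,1,0,1,0)$. The local maxima of $f$ are the two $1$'s and the only interior local minimum is the $0$ at $a_2$, so $\operatorname{RDim}(a)=1+1-0=2$.)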
From such an object, we may construct an augmented chain complex of free abelian groups, taking $C(a)_m$ to be the free abelian group on the generators $\{x\in[a] : f(x)=m\}$,
with the boundary map $\partial:C(a)_{m+1}\to C(a)_m$ defined on the generators by the formula $$\partial(x)=\partial^+(x)-\partial^-(x)$$
and with the augmentation $\varepsilon:C(a)_0\to \mathbf{Z}$ defined by sending all generators to $1\in \mathbf{Z}$.
Then the question: Does there exist a "natural way" to extend the Reedy dimension to a dimension function $\operatorname{RDim}:\operatorname{Ob}(\operatorname{AugCh}_{\geq 0})\to \mathbf{N}$ such
a.) For any normal continuously-graded finite interval $a$, $\operatorname{RDim}(C(a))=\operatorname{RDim}(a)$
b.) It is additive on tensor products: $\operatorname{RDim}(C\otimes D)=\operatorname{RDim}(C)+\operatorname{RDim}(D)$
c.) It satisfies the "dimension condition" on sums: $\operatorname{RDim}(C + D)=\operatorname{RDim}(C)+\operatorname{RDim}(D) - \operatorname{RDim}(C\cap D)$?
That is, can such a dimension map be defined using some kind of homological construction? It seems like it might be related to some kind of additive Euler characteristic, which is the reason why I
homological-algebra co.combinatorics chain-complexes
Math Help
July 2nd 2013, 03:25 PM #1
What is this?
A friend of mine gave this to me as a challenge. I almost have no clue what I'm supposed to do
he says "Solve the following equation for f(x)"
$f(x) = x + \lambda \int_0^1 f(\xi) \,\,d \xi$
can anyone tell me what type of problem this is so I can do a bit of research on my own? My understanding of calculus only spans from Calc I to Calc II
Re: What is this?
Hey ReneG.
Hint: Try differentiating both sides to get a differential equation and solve from there.
(These kinds of problems are known as integro-differential equations)
Integro-differential equation - Wikipedia, the free encyclopedia
Re: What is this?
Thank you for pointing me in the right direction
Re: What is this?
Hi ReneG,
If the upper limit of your integral is x and not 1, I agree with Chiro. However as written, the integral is just a number c. So integrate both sides of your equation from 0 to 1 and get:
$c={1\over2}+\lambda c$ or $c={1\over2(1-\lambda)}$
So $f(x)=x+{\lambda\over2(1-\lambda)}$
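(As a quick check: $\int_0^1 f(x)\,dx=\tfrac12+\tfrac{\lambda}{2(1-\lambda)}=\tfrac{1}{2(1-\lambda)}=c$, so this $f$ indeed satisfies the original equation.)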
Re: What is this?
No typos. How did you integrate both sides though?
\begin{align*} f(x) &= x + \lambda \int_{0}^{1}f(\xi)\, d\xi \\ \int_{0}^{1}f(x)\,dx &= \int_0^1 x \,dx + \lambda \int_{0}^{1} \left[ \int_{0}^{1} f(\xi)\,d\xi \right ] \,dx \\ \int_{0}^{1}f(x)\,dx &= \frac{1}{2} + \lambda \int_{0}^{1} \left[ \int_{0}^{1} f(\xi)\,d\xi \right ]\,dx \end{align*}
I'm lost.
Last edited by ReneG; July 2nd 2013 at 11:18 PM.
Re: What is this?
Accokeek Math Tutor
Find an Accokeek Math Tutor
...I can come to your home or at a public venue (e.g. a library or coffee shop), whichever is more comfortable and convenient for your learning. My hours are also very flexible, including a wide
mix of days, evenings, nights and weekends. I truly believe that math can be fun and easy if it's broken down for you in a way that you can comprehend it.
15 Subjects: including linear algebra, organic chemistry, algebra 1, algebra 2
...Tutoring at this level will reinforce these basic skills and will fill in the gaps sometimes missed as a student advances in their studies. I began my teaching career in the 4th through 6th
grade levels. I soon discovered that many students were losing ground in advancing because of gaps in the very fundamentals of their education.
20 Subjects: including SAT math, prealgebra, reading, English
...I also coach Varsity Soccer at George C. Marshall High School, where I coached JV for 2 years. I am currently teaching Special Education job skills and independent living skills.
16 Subjects: including statistics, probability, algebra 1, algebra 2
...My highest level of Math is Differential Equations. I have also taken Linear Algebra. The highest level of chemistry taken is Organic Chemistry.
10 Subjects: including geometry, physics, precalculus, trigonometry
...I am currently attending George Mason University to continue my education and use technology even better in my classroom. I've also attended Florida A&M University where I received my
undergraduate degree in Elementary Education with an English Speakers of Other Languages endorsement. I star...
22 Subjects: including prealgebra, geometry, grammar, reading | {"url":"http://www.purplemath.com/accokeek_math_tutors.php","timestamp":"2014-04-20T16:14:26Z","content_type":null,"content_length":"23584","record_id":"<urn:uuid:155ad0e9-50fd-4d56-90ea-f26dc3d9f6af>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00469-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Moment of Zen
Note: This is a repost from an old weblog.
The Shell Sort algorithm is designed to move items over large distances each iteration. The idea behind this is that it will get each item closer to its final destination quicker saving a lot of
shuffling by comparing items farther apart.
The way it works is it subdivides the dataset into smaller groups where each item in the group is a set distance apart. For example, if we use h to represent our distance and R to represent an item,
we might have groups: { R[1], R[h+1], R[2h+1], ... }, { R[2], R[h+2], R[2h+2], ... }, ...
We then sort each subgroup individually.
Keep repeating the above process, continually reducing h, until h becomes 1. After one last run-through where h is a value of 1, we stop.
At this point, I'm sure you are wondering where h comes from and what values to reduce it by each time though. If and when you ever figure it out, let me know - you might also want to publish a paper
and submit it to ACM so that your name might go down in the history (and algorithm) books. That's right, as far as I'm aware, no one knows the answer to this question, except, perhaps, God Himself
If you are interested, you might take a look at Donald Knuth's book, Art of Computer Programming, Volume 3: Sorting and Searching (starting on page 83), for some mathematical discussion on the
As far as I understand from Sorting and Searching, it is theoretically possible to get the Shell Sort algorithm to approach O(n^1.5) given an ideal increment table which is quite impressive.
Knuth gives us a couple of tables to start with: [8 4 2 1] and [7 5 3 1] which seem to work okay, but are far from being ideal for our 100,000 item array that we are trying to sort in this exercise,
however, for the sake of keeping our first implementation simple, we'll use the [7 5 3 1] table since it has the charming property where each increment size is 2 smaller than the previous. Yay for
static void
ShellSort (int a[], int n)
{
    int h, i, j;
    int tmp;

    for (h = 7; h >= 1; h -= 2) {
        for (i = h; i < n; i++) {
            tmp = a[i];
            for (j = i; j >= h && a[j - h] > tmp; j -= h)
                a[j] = a[j - h];
            a[j] = tmp;
        }
    }
}
The nice thing about Shell Sort is that, while the increment table is a complete mystery, the algorithm itself is quite simple and well within our grasp.
Let's plug this into our sort program and see how well it does.
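The sort program itself isn't shown here, so in case you want to follow along, a minimal stand-in driver (a sketch, assuming ShellSort() above is compiled into the same file) might be:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 100000

int main (void)
{
    static int a[N];
    clock_t start, end;
    int i;

    srand (1);
    for (i = 0; i < N; i++)
        a[i] = rand ();

    start = clock ();
    ShellSort (a, N);
    end = clock ();

    for (i = 1; i < N; i++) {
        if (a[i - 1] > a[i]) {
            fprintf (stderr, "not sorted!\n");
            return 1;
        }
    }

    printf ("sorted %d items in %.3f seconds\n", N,
            (double) (end - start) / CLOCKS_PER_SEC);

    return 0;
}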
I seem to get a pretty consistent 6.3 seconds for 100,000 items on my AMD Athlon XP 2500 system which is almost as good as the results we were getting from our Binary Insertion Sort implementation.
Now for some optimizations. We know that an ideal set of increment sizes will get us down to close to O(n^1.5) and that it is unlikely that the [7 5 3 1] set is ideal, so I suggest we start there.
On a hunch, I just started adding more and more primes to our [7 5 3 1] table and noticed that with each new prime added, it seemed to get a little faster. At some point I decided to experiment a bit
and tried using a set of primes farther apart and noticed that with a much smaller set of increments, I was able to get about the same performance as my much larger set of primes. This spurred me on
some more and I eventually came up with the following set:
{ 14057, 9371, 6247, 4177, 2777, 1861, 1237, 823, 557, 367, 251, 163, 109, 73, 37, 19, 11, 7, 5, 3, 1 }
In order to use this set, however, we need a slightly more complicated method for determining the next value for h than just h -= 2, so I used a lookup table instead:
static int htab[] = { 14057, 9371, 6247, 4177, 2777, 1861, 1237, 823, 557, 367, 251, 163, 109, 73, 37, 19, 11, 7, 5, 3, 1, 0 };
static void
ShellSort (int a[], int n)
{
    int *h, i, j;
    int tmp;

    for (h = htab; *h; h++) {
        for (i = *h; i < n; i++) {
            tmp = a[i];
            for (j = i; j >= *h && a[j - *h] > tmp; j -= *h)
                a[j] = a[j - *h];
            a[j] = tmp;
        }
    }
}
With this new table of increments, I was able to achieve an average sort time of about 0.09 seconds. In fact, ShellSort() in combination with the above increment table will sort an array of 2 million
items in about the same amount of time as our optimized BinaryInsertionSort() took to sort 100 thousand items. That's quite an improvement, wouldn't you say!?
To learn more about sorting, I would recommend reading Art of Computer Programming, Volume 3: Sorting and Searching (2nd Edition) | {"url":"http://jeffreystedfast.blogspot.com/2007/02/shell-sort.html","timestamp":"2014-04-18T08:02:00Z","content_type":null,"content_length":"100777","record_id":"<urn:uuid:9c3f123b-50ce-4653-8880-e3f31ad303af>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00395-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts by
Total # Posts: 2,802
what would the equation be in slope intercept of 1 and is parallel to the given line
Write an equation in slope intercept form with a y-intercept of 3 and is perpendicular to the given line
algebra 2
can someone please answer this
algebra 2
(This homework question was removed due to a copyright claim submitted by NNDS.) I messed this up when I asked earlier is the right answer 3/4 ..please let me know
algebra 2
A cafeteria has 5 turkey sandwiches, 6 cheese sandwiches, and 4 tuna sandwiches. There are two students in line and each will take a sandwich. What is the probability that the first student takes
a cheese sandwich and the next student takes a turkey sandwich? the answers I ...
Pick the correct letter for the analogy below. Please correct me if I'm wrong!! meditation: church:: A. calibration: math B. skating: rink C. watch: time D. collaboration : army Is it A?
Pick the correct letter for the analogy below. Please correct me if I'm wrong!! thermometer: heat:: A. monograph: books B. barograph : stocks C. seismograph : tremors D. telegraph: sound Is it B?
Pick the correct letter for the analogy below. Please correct me if I'm wrong!! A. thermometer: heat B. barograph : stocks C. seismograph : tremors D. telegraph: sound Is it B?
Is it C?
Pick the correct letter for the analogy below. Please correct me if I'm wrong!! toe: foot:: A. nose: face B. hand: arm C. finger: hand D. limb: tree
algebra 2
typo that was a choice f or the answers
algebra 2
A builder has 8 lots available for sale. · Six lots are greater than one acre. · Two lots are less than one acre. answers I have to choose from 3/4 5/14 27/64 3/4 please help
I did not see it . sorry
. There is a 10% chance it will rain on Saturday and a 30% chance it will rain on Sunday. What percent chance is there that it will rain on both Saturday and Sunday? A. 3% B. 15% C. 20% D. 40% would
the answer be c
There is a 10% chance it will rain on Saturday and a 30% chance it will rain on Sunday. What percent chance is there that it will rain on both Saturday and Sunday? A. 3% B. 15% C. 20% D. 40%
y-6 = -3(x + 1) A. The slope is 3, and the y-intercept is -6. B. The slope is 3, and the y-intercept is 1. C. The slope is -3, and the y-intercept is 3. D. The slope is -3, and the y-intercept is 1.
Which is the equation of the line that contains the points (0,3) and (-2,4)? A. x+2y=6 B. 2x+y=3 C. 2x+y=0 D. x-2y=
A car's gas tank holds 15.2 gallons of gasoline (density = 0.66 g/ml). How many pounds of gasoline are present in a full tank?
3. The half-life of a certain drug in the human body is 8 hours. How long would it take for a dose of this drug to decay to 0.1% of its initial level?
2. A bacteria population grows exponentially with growth rate k=2.5. How long would it take for the bacteria population to double its size?
1. The remains of an old campfire are unearthed and it is found that there is only 80% as much radioactive carbon-14 in the charcoal samples from the campfire as there is in modern living trees. If the
half-life of carbon-14 is 5730 years, how long ago did the camp...
A cafeteria has 5 turkey sandwiches, 6 cheese sandwiches, and 4 tuna sandwiches. There are two students in line and each will take a sandwich. What is the probability that the first student takes a
cheese sandwich and the next student takes a turkey sandwich?
algebra 2
In a shipment of alarm clocks, the probability that one alarm clock is defective is 0.04. Charlie selects three alarm clocks at random. If he puts each clock back with the rest of the shipment before
selecting the next one, what is the probability that all three alarm clocks a...
algebra 2
would this be 20 %
algebra 2
There is a 10% chance it will rain on Saturday and a 30% chance it will rain on Sunday. What percent chance is there that it will rain on both Saturday and Sunday
What is the y-intercept for the equation x=3y-6=0?
What is the y-intercept for the equation x=3y-6=0?
Write the equation of a line whose slope is the same as the line y=-2x=7 and whose y-intercept is the same as the line 2y=x-8.
During road construction a section of rock is cleared which contains many fossils each shovel of rock removed from the trench uncovers several fossils some of the fossils That are exposed include a
prehistoric fish petrified wood water plant imprints and jawbone from a giant c...
Enzymes in the human body work in a very wide range of conditions. A. True B. False
please help
Substance X is converted to substance Y in a chemical reaction. What role would a catalyst play in this scenario? A. It would increase the solubility of substance X into the solvent needed to speed
up the reaction. B. It would speed up the reaction by becoming part of the prod...
an experiment, a scientist notices that an enzyme s catalytic rate decreases when the pH of the environment is changed. Which of the following best summarizes the effect of pH on the enzyme? A. pH
changes the shape of the enzyme. B. pH takes energy away from the enzyme, t...
2. How do enzymes impact biochemical reactions in the body? A. Enzymes are catalysts to biochemical reactions. B. Enzymes lower the activation energy of biochemical reactions. C. Enzymes can speed up
biochemical reactions. D. All of the above are correct.
1. In an experiment, a scientist notices that an enzyme s catalytic rate decreases when the pH of the environment is changed. Which of the following best summarizes the effect of pH on the enzyme? A.
pH changes the shape of the enzyme. B. pH takes energy away from the en...
Sulfur has six electrons in its outermost energy level. In ionic bonding, it would tend to _____________________________. A. take on two more electrons B. give away two electrons C. give
away six electrons D. not take on anymore el...
Which of the following involve electronic charges? A. hydrogen bonds B. ionic bonds C. polar molecules D. all of the above
I have a problem with this: y=1.5x Questions: What is the rate of change between the variables? State whether the y values are increasing or decreasing, or neither, as x increases Give the y
intercept List the coordinates of two points that lie on the graph of the line in the ...
Which of the following involve electronic charges? A. hydrogen bonds B. ionic bonds C. polar molecules D. all of the above
please help
Sulfur has six electrons in its outermost energy level. In ionic bonding, it would tend to _____________________________. A. take on two more electrons B. give away two electrons C. give
away six electrons D. not take on anymore el...
social studies
What is a possible explanation of why China abandoned overseas exploration in 1435? A. China could not develop advanced naval technology. B. The voyages of Zheng He failed to impress local rulers. C.
Confucian scholars wanted to protect China s traditions. D. The Chinese ...
algebra 2
Isabella shipped a 5-pound package for $4. She shipped a 3-pound package for $3.60 and a 1-pound package for $3.20. A 10-pound package cost $5 to ship. The shipping cost follows a pattern based on
the weight of the package. Which expression can be used to calculate the shippin...
algebra 2
1. Marissa is a photographer. She sells framed photographs for $100 each and greeting cards for $5 each. The materials for each framed photograph cost $30, and the materials for each greeting card
cost $2. Marissa can sell up to 8 framed photographs and 40 greeting cards each ...
Physics (Please check)
So I just want to make sure I did the math corretly: My value for angle of incidence was 20 degrees and for refraction it was 15 so I did Sin(20) / sin (15) = 1.32 which is very close to the accepted
value. Is this corret?
Physics (Please check)
Calculate the index (n water) of refraction of water given the angle of incidence and the angle of refraction. I know that the accepted value for n is 1.33 but I am not sure how to calculate the
index. Would I use Snell's law: na sin Qa = nb sin Qb? Thank you!
What would this look like in an equation in slope intercept form? through (-3,4) and (3,-4) Would it be Y=4/3x+0 ??
What would this look like in an equation in slope intercept form? X-intercept -4, Y-intercept -2 Would it be Y=-1/2x+2 ??
What would this look like in an equation in slope intercept form? through (0,7) and (1,9) Would it be Y= 2/1x+7 ?
How would you put this in an equation in slope intercept form? what would it look like? X-intercept -4, Y-intercept -2
How would you put this in an equation in slope intercept form? what would it look like? Y-intercept -5, X-intercept 3
How would you put this in an equation in slope intercept form? what would it look like? Slope -3; x-intercept 3
How would you put this in an equation in slope intercept form? what would it look like? Slope 1/2 ; x-intercept 4
Is that correct?
Whitman, Bradstreet, and Wheatley included sensory details in their poetry. True/False
please help with my other ones
last one: write an equation of the line that passes through (10,-4) and is perpendicular to the line whose equation is y=2/5x+8 help
I am confused on what to do with this?
find an equation of the line that passes through (0,-6) and (5,3)
write an equation of the line through (5,-3) and (-2,-8) in slope intercept form
one line passes through (-3,-2) and (-2,1) and another line passes through (-1,2) and (2,1). what is the relationship between the lines? are they parallel, perpendicular, or neither?
If a line with a slope of 1/2 is perpendicular to another line, then the slope of the other line is -2. true or false
algebra 2
by 10 there are 25 people admitted to a park by 11 there are 50 what would a slope of zero between 2 consecutive hours mean and can the line between 2 consecutive numbers have a negative slope
math 5th grade
How to solve 547X73
algebra 2
the 2 inequalities -26<4k-2 and 2k-1<6. A. Write a compound inequality to combine the inequalities shown previously. B. Solve the compound inequality for values of k. Show your work. Write your final answer as one inequality. please help
final answer as one inequality. please help
A ladder 12 m in length rests against a wall. The foot of the ladder is 3 m from the wall. What is the measure of the angle the ladder forms with the floor?
is 3/8 more,less than or equal to 40%
algebra 2
please help me with this
algebra 2
A garden supply store sells two types of lawn mowers. The cashiers kept a tally chart for the total number of mowers sold. They tallied 30 mowers sold. Total sales of mowers for the year were
$8379.70. The small mowers cost $249.99 and the large mowers cost $329.99. Find the n...
algebra 2
You have $22 in your bank account, and you deposit $11.50 per week. Your cousin has $218 in his bank account and is withdrawing $13 per week. The graph of this problem situation intersects at x=8.
What does this mean? A. In 8 weeks, you will have triple the amount of money in...
algebra 2
A garden supply store sells two types of lawn mowers. The cashiers kept a tally chart for the total number of mowers sold. They tallied 30 mowers sold. Total sales of mowers for the year were
$8379.70. The small mowers cost $249.99 and the large mowers cost $329.99. Find the n...
algebra 2
I need to graph these what would be the equations x=300 y=-300 x=200 y = 400/3 are these the correct answers
algebra 2
algebra 2
Ms. Tweed earns $12 an hour. Deductions for taxes and insurance take 25 percent of her earnings. Which equation could be solved to find Ms. Tweed s take-home pay, p, after 80 hours of work?
algebra 2
A. 75 = 10d + 5n B. 75 = 10d · 5n C. 75 = dn D. 75 = d + n
algebra 2
please help me with this would it be 75=10D+5N
algebra 2
Marshall has $1.25 in dimes, d, and nickels, n, in his pocket. Which equation could be solved to find the possible combinations of dimes and nickels Marshall has?
Covalent bonds are not affected by chemical reactions. true ir false
please help
What are the reactants in the following chemical formula? C6H12O6 + 6O2 → 6CO2 + 6H2O
algebra 2
would 90=3y+2-5y be right
algebra 2
. A tile setter is joining the angles of two tiles, A and B, to make a 90-degree angle. The degree measure of Angle A can be represented as 3y + 2 and of Angle B as 5y. Which equation represents this
situation? these are the one I neeeed help with
algebra 2
One-fourth of the distance between two cities is 100 miles less than two-thirds the distance between the cities. Which equation expresses this situation?
algebra 2
do you guys tutor
algebra 2
7.15 =1/3 P+4.50 or 7.16=1/3 (p+4.50)
algebra 2
algebra 2
Three friends share the cost of a pizza. The base price of the pizza is p and the extra toppings cost $4.50. If each person s share was $7.15, which equation could be used to find p, the base price
of the pizza?
algebra 2
Lisa buys a carpet for $230. The price is $3.50 per square foot. If Lisa had a special discount of $50 off, which linear equation could be used to find the area, A, of the carpet?
A restaurant meal for a group of people cost $85 total. This amount included a 6% tax and an 18% tip, both based on the price of the food. Which equation could be used to find f, the cost of the food
Algebra 1
I'm having trouble putting these equations in standard form. PLEASE HELP ME!!! Y=12x+4 Y= -8x+-5 Y=-2x+8 Y=5x+5 Y=18x+-3
algebra 2
that is a division sign
algebra 2
how do you simplify (5-x)/(x^2-20) = 4
Mutual information, Fisher information and population coding
Results 1 - 10 of 50
- Proceedings of the International Conference on Spoken Language Processing , 1998
"... Statistical techniques based on hidden Markov Models (HMMs) with Gaussian emission densities have dominated signal processing and pattern recognition literature for the past 20 years. However,
HMMs trained using maximum likelihood techniques suffer from an inability to learn discriminative informati ..."
Cited by 74 (2 self)
Statistical techniques based on hidden Markov Models (HMMs) with Gaussian emission densities have dominated signal processing and pattern recognition literature for the past 20 years. However, HMMs
trained using maximum likelihood techniques suffer from an inability to learn discriminative information and are prone to overfitting and over-parameterization. Recent work in machine learning has
focused on models, such as the support vector machine (SVM), that automatically control generalization and parameterization as part of the overall optimization process. In this paper, we show that
SVMs provide a significant improvement in performance on a static pattern classification task based on the Deterding vowel data. We also describe an application of SVMs to large vocabulary speech
recognition, and demonstrate an improvement in error rate on a continuous alphadigit task (OGI Aphadigits) and a large vocabulary conversational speech task (Switchboard). Issues related to the
development and optimization of an SVM/HMM hybrid system are discussed.
, 1999
"... The effectiveness of various stimulus identification (decoding) procedures for extracting the information carried by the responses of a population of neurons to a set of repeatedly presented
stimuli is studied analytically, in the limit of short time windows. It is shown that in this limit, the enti ..."
Cited by 33 (3 self)
The effectiveness of various stimulus identification (decoding) procedures for extracting the information carried by the responses of a population of neurons to a set of repeatedly presented stimuli
is studied analytically, in the limit of short time windows. It is shown that in this limit, the entire information content of the responses can sometimes be decoded, and when this is not the case,
the lost information is quantified. In particular, the mutual information extracted by taking into account only the most likely stimulus in each trial turns out to be, if not equal, much closer to
the true value than that calculated from all the probabilities that each of the possible stimuli in the set was the actual one. The relation between the mutual information extracted by decoding and
the percentage of correct stimulus decodings is also derived analytically in the same limit, showing that the metric content index can be estimated reliably from a few cells recorded from brief
periods. Computer simulations as well as the activity of real neurons recorded in the primate hippocampus serve to confirm these results and illustrate the utility and limitations of the approach.
, 2001
"... We define predictive information Ipred(T) as the mutual information between the past and the future of a time series. Three qualitatively different behaviors are found in the limit of large
observation times T: Ipred(T) can remain finite, grow logarithmically, or grow as a fractional power law. If t ..."
Cited by 30 (2 self)
Add to MetaCart
We define predictive information Ipred(T) as the mutual information between the past and the future of a time series. Three qualitatively different behaviors are found in the limit of large
observation times T: Ipred(T) can remain finite, grow logarithmically, or grow as a fractional power law. If the time series allows us to learn a model with a finite number of parameters, then Ipred
(T) grows logarithmically with a coefficient that counts the dimensionality of the model space. In contrast, power-law growth is associated, for example, with the learning of infinite parameter (or
nonparametric) models such as continuous functions with smoothness constraints. There are connections between the predictive information and measures of complexity that have been defined both in
learning theory and the analysis of physical systems through statistical mechanics and dynamical systems theory. Furthermore, in the same way that entropy provides the unique measure of available
information consistent with some simple and plausible conditions, we argue that the divergent part of Ipred(T) provides the unique measure for the complexity of dynamics underlying a time series.
Finally, we discuss how these ideas may be useful in problems in physics, statistics, and biology.
- The Journal of Neuroscience , 2003
"... A key issue in understanding the neural code for an ensemble of neurons is the nature and strength of correlations between neurons and how these correlations are related to the stimulus. The
issue is complicated by the fact that there is not a single notion of independence or lack of correlation. We ..."
Cited by 29 (0 self)
Add to MetaCart
A key issue in understanding the neural code for an ensemble of neurons is the nature and strength of correlations between neurons and how these correlations are related to the stimulus. The issue is
complicated by the fact that there is not a single notion of independence or lack of correlation. We distinguish three kinds: (1) activity independence; (2) conditional independence; and (3)
information independence. Each notion is related to an information measure: the information between cells, the information between cells given the stimulus, and the synergy of cells about the
stimulus, respectively. We show that these measures form an interrelated framework for evaluating contributions of signal and noise correlations to the joint information conveyed about the stimulus
and that at least two of the three measures must be calculated to characterize a population code. This framework is compared with others recently proposed in the literature. In addition, we
distinguish questions about how information is encoded by a population of neurons from how that information can be decoded. Although information theory is natural and powerful for questions of
encoding, it is not sufficient for characterizing the process of decoding. Decoding fundamentally requires an error measure that quantifies the importance of the deviations of estimated stimuli from
actual stimuli. Because there is no a priori choice of error measure, questions about decoding cannot be put on the same level of generality as for encoding.
, 1999
"... Neurophysiologists are often faced with the problem of evaluating the quality of a code for a sensory or motor variable, either to relate it to the performance of the animal in a simple
discrimination task or to compare the codes at various stages along the neuronal pathway. One common belief that h ..."
Cited by 26 (0 self)
Add to MetaCart
Neurophysiologists are often faced with the problem of evaluating the quality of a code for a sensory or motor variable, either to relate it to the performance of the animal in a simple
discrimination task or to compare the codes at various stages along the neuronal pathway. One common belief that has emerged from such studies is that sharpening of tuning curves improves the quality
of the code, although only to a certain point; sharpening beyond that is believed to be harmful. We show that this belief relies on either problematic technical analysis or improper assumptions about
the noise. We conclude that one cannot tell, in the general case, whether narrow tuning curves are better than wide ones; the answer depends critically on the covariance of the noise. The same
conclusion applies to other manipulations of the tuning curve profiles such as gain increase.
, 2001
"... this article that the choice of a variability model has a major, nontrivial impact on the encoding properties of the neural population. The immense variability of individual response parameters,
such as tuning widths or correlation coef#cients, has also been neglected in most previous work. Although ..."
Cited by 23 (4 self)
Add to MetaCart
this article that the choice of a variability model has a major, nontrivial impact on the encoding properties of the neural population. The immense variability of individual response parameters, such
as tuning widths or correlation coefficients, has also been neglected in most previous work. Although these parameter variations are always found in empirical data, they were considered functionally
insignificant, and hence theoretical studies have almost always assumed uniform parameters throughout the population. We will show here that this uniform case is unfavorable in the sense that the
introduction of parameter variability improves the encoding performance
- Neural Computation , 2000
"... Neural responses in sensory systems are typically triggered by a multitude of stimulus features. Using information theory, we study the encoding accuracy of a population of stochastically
spiking neurons characterized by different tuning widths for the different features. The optimal encoding strate ..."
Cited by 21 (5 self)
Add to MetaCart
Neural responses in sensory systems are typically triggered by a multitude of stimulus features. Using information theory, we study the encoding accuracy of a population of stochastically spiking
neurons characterized by different tuning widths for the different features. The optimal encoding strategy for representing one feature most accurately consists of (i) narrow tuning in the dimension
to be encoded to increase the single-neuron Fisher information, and (ii) broad tuning in all other dimensions to increase the number of active neurons. Extremely narrow tuning without sufficient
receptive field overlap will severely worsen the coding. This implies the existence of an optimal tuning width for the feature to be encoded. Empirically, only a subset of all stimulus features will
normally be accessible. In this case, relative encoding errors can be calculated which yield a criterion for the function of a neural population based on the measured tuning curves.
- UNDER REVIEW, NEURAL COMPUTATION , 2007
"... Understanding how stimulus information is encoded in spike trains is a central problem in computational neuroscience. Decoding methods provide an important tool for addressing this problem, by
allowing us to explicitly read out the information contained in spike responses. Here we introduce several ..."
Cited by 19 (12 self)
Add to MetaCart
Understanding how stimulus information is encoded in spike trains is a central problem in computational neuroscience. Decoding methods provide an important tool for addressing this problem, by
allowing us to explicitly read out the information contained in spike responses. Here we introduce several decoding methods based on point-process neural encoding models (i.e. 'forward' models that
predict spike responses to novel stimuli). These models have concave log-likelihood functions, allowing for efficient fitting via maximum likelihood. Moreover, we may use the likelihood of the
observed spike trains under the model to perform optimal decoding. We present: (1) a tractable algorithm for computing the maximum a posteriori (MAP) estimate of the stimulus โ the most probable
stimulus to have generated the observed single- or multiple-spike train response, given some prior distribution over the stimulus; (2) a Gaussian approximation to the posterior distribution, which
allows us to quantify the fidelity with which various stimulus features are encoded; (3) an efficient method for estimating the mutual information between the stimulus and the response; and (4) a
framework for the detection of change-point times (e.g. the time at which the stimulus undergoes a change in mean or variance), by marginalizing over the posterior distribution of stimuli. We show
several examples illustrating the performance of these estimators with simulated data.
- Neural Computation , 2007
"... Uncertainty coming from the noise in its neurons and the ill-posed nature of many tasks plagues neural computations. Maybe surprisingly, many studies show that the brain manipulates these forms
of uncertainty in a probabilistically consistent and normative manner, and there is now a rich theoretical ..."
Cited by 18 (4 self)
Add to MetaCart
Uncertainty coming from the noise in its neurons and the ill-posed nature of many tasks plagues neural computations. Maybe surprisingly, many studies show that the brain manipulates these forms of
uncertainty in a probabilistically consistent and normative manner, and there is now a rich theoretical literature on the capabilities of populations of neurons to implement computations in the face
of uncertainty. However, one major facet of uncertainty has received comparatively little attention: time. In a dynamic, rapidly changing world, data are only temporarily relevant. Here, we analyze
the computational consequences of encoding stimulus trajectories in populations of neurons. For the most obvious, simple, instantaneous encoder, the correlations induced by natural, smooth stimuli
engender a decoder that requires access to information that is nonlocal both in time and across neurons. This formally amounts to a ruinous representation. We show that there is an alternative
encoder that is computationally and representationally powerful in which each spike contributes independent information; it is independently decodable, in other words. We suggest this as an
appropriate foundation for understanding time-varying population codes. Furthermore, we show how adaptation to | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=614833","timestamp":"2014-04-19T00:48:39Z","content_type":null,"content_length":"40903","record_id":"<urn:uuid:1b72f88d-1dff-4dc7-8d1b-b7af4b6838e0>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00406-ip-10-147-4-33.ec2.internal.warc.gz"} |
Binary to Hex Conversion by Formula BIN2HEX() in Excel
BIN2HEX() Excel Formula
The Excel formula BIN2HEX() converts a binary number to a hexadecimal number. The argument of this function is the binary number, which cannot contain more than 10 characters (10 bits); the function
returns the #NUM! error when the input exceeds 10 bits. The most significant bit (the first bit) is the sign bit, the remaining 9 bits are magnitude bits, and negative numbers are represented
using two's-complement notation.
Places is the number of characters to use; it should be a positive numeric value, and if it is not an integer, it is truncated. If places is omitted, BIN2HEX uses the minimum number of characters
necessary. Places is useful for padding the return value with leading 0s.
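For readers who want to mirror this outside Excel, here is a small Python sketch of the documented behavior (the #NUM! strings, the 10-bit two's-complement rule, and zero-padding via places). It is an illustration, not Microsoft's implementation, and it skips Excel's truncation of non-integer places:

    def bin2hex(binary, places=None):
        # At most 10 binary digits, 0/1 only; otherwise Excel returns #NUM!.
        if len(binary) > 10 or any(c not in "01" for c in binary):
            return "#NUM!"
        value = int(binary, 2)
        # A 10-digit input with the first (sign) bit set is negative,
        # read in 10-bit two's complement.
        if len(binary) == 10 and binary[0] == "1":
            value -= 1 << 10
        if value < 0:
            # Excel renders negatives as 10 hex digits (40-bit two's complement);
            # places is ignored in that case.
            return format(value + (1 << 40), "X")
        result = format(value, "X")
        if places is not None:
            if places < len(result):
                return "#NUM!"
            result = result.zfill(places)
        return result

    # bin2hex("11111011", 4) -> "00FB"; bin2hex("1110") -> "E";
    # bin2hex("1111111111") -> "FFFFFFFFFF" (i.e. -1)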
BIN2HEX() Syntax
Excel Function to Convert Binary Number to HEX | {"url":"http://nscraps.com/Windows/1369-binary-hex-conversion-formula-bin2hex-excel.htm","timestamp":"2014-04-20T23:53:30Z","content_type":null,"content_length":"12187","record_id":"<urn:uuid:3e88dded-50ad-4d2a-8c0c-6c1820729797>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00462-ip-10-147-4-33.ec2.internal.warc.gz"} |
two Hermitian problems
February 10th 2011, 04:24 PM #1
Junior Member
Oct 2008
two Hermitian problems
First problem: Assume T: V -> V is a Hermitian transformation
Prove T^-1 is Hermitian if T is invertible.
Here, I can prove that T^n is Hermitian if n > 0 but I'm stuck for n = -1.
Second Problem: C(0, 1) is linear Space. Inner product is given by:
(f, g) = integral( f * g, t, 0, 1).
Let V be the subspace of all f such that integral(f, t, 0, 1) = 0.
Let T: V -> C(0, 1) and T(f(x)) = integral(f, t, 0, x). Prove T is skew - symmetric.
For this problem, I would apply the transformation, do the inner product, then I would have an integral in an integral. I don't know if I should use F and G for their antiderivatives or what.
First Problem: Assume, as the problem indicates, that $T:V\to V$ is Hermitian and invertible. Then we know that
$\langle x|Ty\rangle=\langle Tx|y\rangle$ for all $x,y\in V.$
Let $x,y\in V.$ Then, because the inverse exists, we may define $s=T^{-1}x$ and $t=T^{-1}y$ such that both $s,t\in V.$ Now play around with
$\langle x|T^{-1}y\rangle.$ You'd like it to equal $\langle T^{-1}x|y\rangle.$
Can you get that to happen?
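One way the hint plays out (a sketch): since $x=Ts$ and $y=Tt,$

$\langle x|T^{-1}y\rangle=\langle Ts|t\rangle=\langle s|Tt\rangle=\langle T^{-1}x|y\rangle,$

where the middle equality is exactly the Hermitian property of $T$ applied to $s$ and $t.$ Hence $T^{-1}$ is Hermitian.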
Second Problem. Let me rephrase your question using more standard notation.
$W=C(0,1)$ is a linear space with inner product
$\displaystyle \langle f|g\rangle=\int_{0}^{1}\overline{f(t)}\,g(t)\,dt.$
Let $V\subseteq W$ be the subspace consisting of the following:
$\displaystyle V=\left\{f|\int_{0}^{1}f(t)\,dt=0\right\}.$
Let $T:V\to W$ be defined by
$\displaystyle Tf(x)=\int_{0}^{x}f(t)\,dt.$
Prove that $T$ is skew-symmetric.
Is this a correct re-statement of the problem? If so, I have an extremely strong feeling that integration by parts is going to be the key to solving this problem. The subspace you're in indicates
that the typical boundary term of
$\displaystyle \int_{a}^{b}v\,du=\underbrace{(uv)|_{a}^{b}}_{\text{Boundary Term}}-\int_{a}^{b}u\,dv$
will be zero, which means you'll just pick up that minus sign when you try to slap the integral on the other term. Try that and see if it doesn't get you where you need to go.
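Carrying that out (a sketch, for $f,g\in V$ and real-valued functions so the conjugate can be dropped): write $F=Tf$ and $G=Tg,$ so that $F'=f,$ $G'=g,$ and $F(0)=G(0)=0.$ Then

$\displaystyle \langle Tf|g\rangle=\int_{0}^{1}F\,G'\,dx=(FG)\big|_{0}^{1}-\int_{0}^{1}F'\,G\,dx=F(1)G(1)-\langle f|Tg\rangle,$

and $F(1)=\int_{0}^{1}f(t)\,dt=0$ precisely because $f\in V,$ leaving $\langle Tf|g\rangle=-\langle f|Tg\rangle.$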
February 11th 2011, 05:06 AM #2 | {"url":"http://mathhelpforum.com/advanced-algebra/170824-two-hermitian-problems.html","timestamp":"2014-04-21T11:43:42Z","content_type":null,"content_length":"37502","record_id":"<urn:uuid:bbe04725-a9aa-4c45-a1cd-8303ab585fec>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00602-ip-10-147-4-33.ec2.internal.warc.gz"} |
Imperial Beach Algebra 2 Tutor
...I hope to be able to help someone in any course that they are struggling in soon. In obtaining both my physics and engineering degrees, calculus has been a necessary part of my everyday life. I
can give examples of why the subject is useful as well as explain the best way to apply the different co...
19 Subjects: including algebra 2, chemistry, calculus, writing
I am a San Diego native who chose to stay in this beautiful city and go to the University of California, San Diego (UCSD). I graduated from high school as a lifetime member of the California Scholars
Federation, as an AP Scholar with Distinction, as a National AP Scholar, and as an IB Diploma recipient....
42 Subjects: including algebra 2, reading, English, Spanish
...As a tutor, I have led 87% of my SAT students to score increases of at least 100 total points and 40% to score increases of more than 200 total points. According to the CollegeBoard website,
only about 4% of students improve their SAT scores by at least 100 points. After graduating from college...
54 Subjects: including algebra 2, reading, English, chemistry
...For my PhD in Complexity Science, I earned this degree through research contributions based around computer programming. I work now as a post doc, where computer programming is a major
component of my work. I hold a master's degree in computer science and a PhD in computational neuroscience.
26 Subjects: including algebra 2, physics, calculus, statistics
I am passionate about teaching and learning mathematics. I've been working in the classroom and with students one-on-one for 14 years. One of the greatest thrills in life is to see the spark of
understanding in a student's eyes when a new concept is learned.
6 Subjects: including algebra 2, algebra 1, GED, trigonometry | {"url":"http://www.purplemath.com/imperial_beach_algebra_2_tutors.php","timestamp":"2014-04-16T13:48:46Z","content_type":null,"content_length":"24219","record_id":"<urn:uuid:acff00e4-80b6-414c-bdbd-8ffc11fa1949>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00347-ip-10-147-4-33.ec2.internal.warc.gz"} |
Angular Frequency
A selection of articles related to angular frequency.
Original articles from our library related to the Angular Frequency. See Table of Contents for further available material (downloadable resources) on Angular Frequency.
Angular Frequency is described in multiple online sources; in addition to our editors' articles, see the section below for printable documents, Angular Frequency books, and related discussion.
Suggested News Resources
(8) where ω is the Earth's angular frequency (Hz), m is the thermal admittance (J m^-2 s^-0.5 K^-1), and Tg/b is either the deep-soil or internal building temperature (K).
Multi-prover verification of C programs
Results 11 - 20 of 48
- In 3rd IEEE Intl. Conf. SEFM'05 , 2005
"... We describe an experiment of formal verification of C source code, using the CADUCEUS tool. We performed a full formal proof of the classical Schorr-Waite graph-marking algorithm, which has
already been used several times as a case study for formal reasoning on pointer programs. Our study is origina ..."
Cited by 13 (0 self)
Add to MetaCart
We describe an experiment of formal verification of C source code, using the CADUCEUS tool. We performed a full formal proof of the classical Schorr-Waite graph-marking algorithm, which has already
been used several times as a case study for formal reasoning on pointer programs. Our study is original with respect to previous experiments for several reasons. First, we use a general-purpose tool
for C programs: we start from a real source code written in C, specified using an annotation language for arbitrary C programs. Second, we use several theorem provers as backends, both automatic and
interactive. Third, we indeed formally establish more properties of the algorithm than previous works; in particular a formal proof of termination is made. Keywords: Formal verification,
Floyd-Hoare logic, Pointer programs, Aliasing, C programming language. "The Schorr-Waite algorithm is the first mountain that any formalism for pointer aliasing should climb." (Richard Bornat, [4],
page 121)
"... Abstract. Formal verification of numerical programs is notoriously difficult. On the one hand, there exist automatic tools specialized in floatingpoint arithmetic, such as Gappa, but they target
very restrictive logics. On the other hand, there are interactive theorem provers based on the LCF approa ..."
Cited by 12 (1 self)
Add to MetaCart
Abstract. Formal verification of numerical programs is notoriously difficult. On the one hand, there exist automatic tools specialized in floating-point arithmetic, such as Gappa, but they target very
restrictive logics. On the other hand, there are interactive theorem provers based on the LCF approach, such as Coq, that handle a general-purpose logic but that lack proof automation for
floating-point properties. To alleviate these issues, we have implemented a mechanism for calling Gappa from a Coq interactive proof. This paper presents this combination and shows on several
examples how this approach offers a significant speedup in the process of verifying floating-point programs. 1
- In 21st International Conference on Automated Deduction (CADE-21), volume 4603 of LNCS (LNAI), 2007
"... Abstract. Polymorphism has become a common way of designing short and reusable programs by abstracting generic definitions from typespecific ones. Such a convenience is valuable in logic as
well, because it unburdens the specifier from writing redundant declarations of logical symbols. However, top ..."
Cited by 12 (1 self)
Add to MetaCart
Abstract. Polymorphism has become a common way of designing short and reusable programs by abstracting generic definitions from typespecific ones. Such a convenience is valuable in logic as well,
because it unburdens the specifier from writing redundant declarations of logical symbols. However, top shelf automated theorem provers such as Simplify, Yices or other SMT-LIB ones do not handle
polymorphism. To this end, we present efficient reductions of polymorphism in both unsorted and many-sorted first order logics. For each encoding, we show that the formulas and their encoded
counterparts are logically equivalent in the context of automated theorem proving. The efficiency keynote is to disturb the prover as little as possible, especially the internal decision procedures
used for special sorts, e.g. integer linear arithmetic, to which we apply a special treatment. The corresponding implementations are presented in the framework of the Why/Caduceus toolkit. 1
- BYTECODE , 2007
"... Many modern program verifiers translate the program to be verified and its specification into a simple intermediate representation and then compute verification conditions on this
representation. Using an intermediate language improves the interoperability of tools and facilitates the computation of ..."
Cited by 10 (1 self)
Add to MetaCart
Many modern program verifiers translate the program to be verified and its specification into a simple intermediate representation and then compute verification conditions on this representation.
Using an intermediate language improves the interoperability of tools and facilitates the computation of small verification conditions. Even though the translation into an intermediate representation
is critical for the soundness of a verifier, this step has not been formally verified. In this paper, we formalize the translation of a small subset of Java bytecode into an imperative intermediate
language similar to BoogiePL. We prove soundness of the translation by showing that each bytecode method whose BoogiePL translation can be verified, can also be verified in a logic that operates
directly on bytecode.
, 2011
"... Abstract. In this paper, we study translation from a first-order logic with polymorphic types ร la ML (of which we give a formal description) to a many-sorted or one-sorted logic as accepted by
mainstream automated theorem provers. We consider a three-stage scheme where the last stage eliminates pol ..."
Cited by 10 (1 self)
Add to MetaCart
Abstract. In this paper, we study translation from a first-order logic with polymorphic types à la ML (of which we give a formal description) to a many-sorted or one-sorted logic as accepted by
mainstream automated theorem provers. We consider a three-stage scheme where the last stage eliminates polymorphic types while adding the necessary "annotations" to preserve soundness, and the first
two stages serve to protect certain terms so that they can keep their original unannotated form. This protection allows us to make use of provers โ built-in theories and operations. We present two
existing translation procedures as sound and complete instances of this generic scheme. Our formulation generalizes over the previous ones by allowing us to protect terms of arbitrary monomorphic
types. In particular, we can benefit from the built-in theory of arrays in SMT solvers such as Z3, CVC3, and Yices. The proposed methods are implemented in the Why3 tool and we compare their
performance in combination with several automated provers. 1
"... Based on our experience with the development of Alt-Ergo, we show a small number of modifications needed to bring parametric polymorphism to our SMT solver. The first one occurs in the typing
module where unification is now necessary for solving polymorphic constraints over types. The second one con ..."
Cited by 10 (1 self)
Add to MetaCart
Based on our experience with the development of Alt-Ergo, we show a small number of modifications needed to bring parametric polymorphism to our SMT solver. The first one occurs in the typing module
where unification is now necessary for solving polymorphic constraints over types. The second one consists in extending triggers โ definition in order to deal with both term and type variables. Last,
the matching module must be modified to account for the instantiation of type variables. We hope that this experience is convincing enough to raise interest for polymorphism in the SMT community. 1
"... Boogie is a program verification condition generator for an imperative core language. It has front-ends for the programming languages C# and C enriched by annotations in first-order logic. Its
verification conditions โ constructed via a wp calculus from these annotations โ are usually transferred to ..."
Cited by 9 (1 self)
Add to MetaCart
Boogie is a program verification condition generator for an imperative core language. It has front-ends for the programming languages C# and C enriched by annotations in first-order logic. Its
verification conditions, constructed via a wp calculus from these annotations, are usually transferred to automated theorem provers such as Simplify or Z3. In this paper, however, we present a
proof environment, HOL-Boogie, that combines Boogie with the interactive theorem prover Isabelle/HOL. In particular, we present specific techniques combining automated and interactive proof methods
for code verification. We will exploit our proof-environment in two ways: First, we present scenarios to "debug" annotations (in particular: invariants) by interactive proofs. Second, we use our
environment also to verify "background theories", i.e. theories for data-types used in annotations as well as memory and machine models underlying the verification method for C.
"... Abstract. This paper presents the formal Isabelle/HOL framework we use to prove refinement between an executable, monadic specification and the C implementation of the seL4 microkernel. We
describe the refinement framework itself, the automated tactics it supports, and the connection to our previous ..."
Cited by 9 (5 self)
Add to MetaCart
Abstract. This paper presents the formal Isabelle/HOL framework we use to prove refinement between an executable, monadic specification and the C implementation of the seL4 microkernel. We describe
the refinement framework itself, the automated tactics it supports, and the connection to our previous C verification framework. We also report on our experience in applying the framework to seL4.
The characteristics of this microkernel verification are the size of the target (8,700 lines of C code), the treatment of low-level programming constructs, the focus on high performance, and the
large subset of the C programming language addressed, which includes pointer arithmetic and type-unsafe code. 1
"... The goal of the Verifying C Compiler project is to bring design by contract to C. More specifically, we are developing a verifying compiler, code name vcc, that takes annotated C programs,
generates logical verification conditions from them and passes those verification conditions on to an automatic ..."
Cited by 9 (2 self)
Add to MetaCart
The goal of the Verifying C Compiler project is to bring design by contract to C. More specifically, we are developing a verifying compiler, code name vcc, that takes annotated C programs, generates
logical verification conditions from them and passes those verification conditions on to an automatic theorem prover to either prove the correctness of the program or find errors in it. C
Intricacies. The vcc compiler is designed to support the verification of operating system code. As a consequence it does not only handle the type safe subset of C, but also deals with pointer
arithmetic, reinterpretation of data and volatile data access. This flexibility is for example needed to verify low level system code like memory allocators, where data is interpreted in different
ways by different parts of the system, or to verify algorithms implemented over polymorphic compare and swap operations. The vcc compiler uses different background axiomatizations to abstract from
C's implementation defined behavior. For example the size of the character type, or how integers are implemented (typically two's complement), is dealt with not by
- Special issue of ACM TOCL on Implicit Computational Complexity , 2010
"... We extend Meyer and Ritchieโs Loop language with higher-order procedures and procedural variables and we show that the resulting programming language (called Loop ฯ) is a natural imperative
counterpart of Gรถdel System T. The argument is two-fold: 1. we define a translation of the Loop ฯ language int ..."
Cited by 9 (6 self)
Add to MetaCart
We extend Meyer and Ritchie's Loop language with higher-order procedures and procedural variables and we show that the resulting programming language (called Loop^ω) is a natural imperative
counterpart of Gödel System T. The argument is two-fold: 1. we define a translation of the Loop^ω language into System T and we prove that this translation actually provides a lock-step simulation,
2. using a converse translation, we show that Loop^ω is expressive enough to encode any term of System T. Moreover, we define the "iteration rank" of a Loop^ω program, which corresponds to the
classical notion of "recursion rank" in System T, and we show that both translations preserve ranks. Two applications of these results in the area of implicit complexity are described.
prime ideals in a product of rings
June 28th 2009, 07:30 AM #1
Mar 2009
São Paulo - Brazil
prime ideals in a product of rings
Let $B=A_{1}\times\cdots\times A_{n}$ be a product of rings. Any prime ideal of B is of the form $A_{1}\times\cdots\times I_{i}\times\cdots \times A_{n}$, where $i\in\{1,\cdots,n\}$ and $I_{i}$
is a prime ideal of $A_{i}$.
Thanks in advance.
you should always mention what kind of rings do you have? for example, are they commutative? are they unitary? since you didn't mention that, i'll assume that your rings are commutative.
we first need an important fact:
Fact: every ideal of $B$ is in the form $I=I_1 \times \cdots \times I_n,$ where each $I_j$ is an ideal of $A_j.$
Proof: it's clear that $I$ is an ideal of $B.$ conversely, suppose $I$ is any ideal of $B.$ for any $1 \leq j \leq n$ consider the map $I \overset{\iota} \longrightarrow B \overset{\pi_j} \
longrightarrow A_j,$ where $\iota$ and $\pi_j$ are the inclusion and the projection maps
respectively. let $I_j=\pi_j \iota(I).$ see that $I_j$ is an ideal of $A_j$ and $I=I_1 \times \cdots \times I_n.$ Q.E.D.
now let $I=I_1 \times \cdots \times I_n$ be any ideal of $B.$ we have $\frac{B}{I} \cong \frac{A_1}{I_1} \times \cdots \times \frac{A_n}{I_n}.$ we know that $I$ is prime iff $\frac{B}{I}$ is a
domain. now suppose $I_i \neq A_i, \ I_j \neq A_j,$ for some $i \neq j.$ choose $a_i \in A_i-I_i$ and
$a_j \in A_j - I_j.$ let $x=(0, \cdots, 0, a_i + I_i, 0 , \cdots , 0)$ and $y=(0, \cdots, 0, a_j + I_j, 0 , \cdots , 0).$ then $xy=0 \in \frac{A_1}{I_1} \times \cdots \times \frac{A_n}{I_n}$ and
$x \neq 0, y \neq 0.$ so in this case $\frac{B}{I}$ is not a domain. thus in order for $\frac{B}{I}$
to be a domain, we must have $I_j=A_j$ for all but one $j,$ which we'll call it $i.$ then $I=A_1 \times \cdots \times I_i \times \cdots \times A_n$ and $\frac{B}{I} \cong \frac{A_i}{I_i}.$
clearly $I$ is a prime ideal of $B$ iff $I_i$ is a prime ideal of $A_i. \ \Box$
Last edited by NonCommAlg; June 28th 2009 at 11:10 AM.
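For a concrete instance of the result: in $B=\mathbb{Z}\times\mathbb{Z}$ the prime ideals are exactly $p\mathbb{Z}\times\mathbb{Z}$ and $\mathbb{Z}\times p\mathbb{Z}$ ($p$ prime), together with $\{0\}\times\mathbb{Z}$ and $\mathbb{Z}\times\{0\},$ each of the form described above with one prime factor and the full ring in the other coordinate.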
June 28th 2009, 10:57 AM #2
MHF Contributor
May 2008 | {"url":"http://mathhelpforum.com/advanced-algebra/93914-prime-ideals-product-rings.html","timestamp":"2014-04-16T11:45:43Z","content_type":null,"content_length":"43890","record_id":"<urn:uuid:9a4122d1-02e4-4e6e-a513-54560d56f3ab>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00229-ip-10-147-4-33.ec2.internal.warc.gz"} |
Putting It Into Context
For a while now I have been reading through my course material and textbooks, which are very good at explaining methods. For example, I have plenty of material for the stats module I am currently
studying that is very good at explaining how to calculate measures of dispersion and central tendency, and it explains exactly what a cumulative frequency distribution is, but the big problem
is that absolutely none of it is put into context.
Some of it can be put into context using my own intuition. For example using the derivative to find the rate of change of a function or the definite integral to find the area under a function. Other
subjects are less obvious. How should I be expected to know what benefits the cumulative distribution function actually provides? Despite the method of calculation being so simple it is absolutely no
use if I cannot see how the cumulative frequency distribution could be used to solve problems. Maybe I haven't attempted enough example problems or maybe I just don't have the level of intuition
required to make use of it by myself which may be true. Surely there is a book of some sort out there that assumes you are capable of doing all the arithmetic necessary to solve problems but will
give examples of how it all can be put into context? | {"url":"http://www.physicsforums.com/showthread.php?t=712710","timestamp":"2014-04-19T07:39:07Z","content_type":null,"content_length":"22875","record_id":"<urn:uuid:616ab61f-4890-4d40-bfbf-3ca6047edd82>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00074-ip-10-147-4-33.ec2.internal.warc.gz"} |
Rafail Ostrovsky - Publications
Deniable Encryption.
Ran Canetti, Cynthia Dwork, Moni Naor, Rafail Ostrovsky
Abstract: Consider a situation in which the transmission of encrypted messages is intercepted by an adversary who can later ask the sender to reveal the random choices (and also the secret key, if
one exists) used in generating the ciphertext, thereby exposing the cleartext. An encryption scheme is {\sf deniable} if the sender can generate `fake random choices' that will make the ciphertext
`look like' an encryption of a different cleartext, thus keeping the real cleartext private. Analogous requirements can be formulated with respect to attacking the receiver and with respect to
attacking both parties.
In this paper we introduce deniable encryption and propose constructions of schemes with polynomial deniability. In addition to being interesting by itself, and having several applications, deniable
encryption provides a simplified and elegant construction of adaptively secure multiparty computation.
comment: Appeared in Proceedings of Advances in Cryptology (CRYPTO '97), Springer-Verlag Lecture Notes in Computer Science.
Implementation of Password Based Key Derivation Function, from RSA labs.
See PKCS #5 / RFC 2898 from RSA Labs, and the haskell-cafe discussion on why password hashing is a good idea for web apps and a suggestion that this be implemented.
hashedpass = pbkdf2 ( Password . toOctets $ "password" ) ( Salt . toOctets $ "salt" )
pbkdf2 :: Password -> Salt -> HashedPass
A reasonable default for rsa pbkdf2.
pbkdf2 = pbkdf2' (prfSHA512,64) 5000 64
SHA512 outputs 64 bytes. At least 1000 iters is suggested by PKCS#5 (rsa link above). I chose 5000 because this takes my computer a little over a second to compute a simple key derivation (see t test
function in source)
Dklen of 64 seemed reasonable to me: if this is being stored in a database, doesn't take too much space.
Computational barriers can be raised by increasing number of iters
pbkdf2' :: ([Word8] -> [Word8] -> [Word8], Integer) -> Integer -> Integer -> Password -> Salt -> HashedPass
Password Based Key Derivation Function, from RSA labs.
pbkdf2' (prf,hlen) cIters dklen (Password pass) (Salt salt)
prf: pseudo random function
hlen: length of prf output
cIters: Number of iterations of prf
dklen: Length of the derived key (hashed password)
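If prfSHA512 here is HMAC-SHA512 (the PRF that RFC 2898 specifies) -- an assumption about this library, not something stated above -- the same derivation can be cross-checked against Python's standard library:

    import hashlib
    # 5000 iterations and a 64-byte derived key: the defaults chosen by pbkdf2 above.
    dk = hashlib.pbkdf2_hmac("sha512", b"password", b"salt", 5000, dklen=64)
    print(dk.hex())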
newtype Password
Eq Password
Data Password
Ord Password
Read Password
Show Password
Typeable Password
newtype Salt
Eq Salt
Data Salt
Ord Salt
Read Salt
Show Salt
Typeable Salt
newtype HashedPass
Eq HashedPass
Data HashedPass
Ord HashedPass
Read HashedPass
Show HashedPass
Typeable HashedPass | {"url":"http://hackage.haskell.org/package/PBKDF2-0.3.1.3/docs/Crypto-PBKDF2.html","timestamp":"2014-04-17T20:08:56Z","content_type":null,"content_length":"12706","record_id":"<urn:uuid:2afe46e2-61e2-47d2-8573-b28b23b75aa0>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00617-ip-10-147-4-33.ec2.internal.warc.gz"} |
Statistical Pattern Processing - Module 4F10
The lectures of this part of the course aim to describe the basic concepts of statistical pattern processing and some of the standard techniques used in pattern classification.
Any queries, problems, or errors in the handouts, please contact me by email mjfg@eng.cam.ac.uk .
A related 4th year module is 4F13 Machine Learning
[ Handouts | Examples Papers | Exam Questions | On-line Material | Further Reading ]
The lecture notes should be available online just before the lectures.
Examples papers
The examples class for the first examples paper is planned for the end of week 4. For the second examples paper the end of week 8.
โข The first examples paper for 4F10 statistical pattern processing are available in [pdf].
Solutions for examples paper 1 are available in [pdf].
โข The second examples paper for 4F10 statistical pattern processing are available in [pdf].
Solutions for examples paper 2 are available in [pdf].
Exam Questions
The following past 4F10 exam questions, which require knowledge of Gaussian Processes (see 4F13), are NOT covered by the current 4F10 syllabus.
โข 2012: Qu 4
โข 2011: Qu 5
โข 2010: Qu 3
โข 2009: Qu 3
โข 2008: Qu 3
Past exam questions are available at the CUED web-page.
On-line material
WARNING: I have no control over any website beyond the engineering department. The links provided were valid when I checked them. If a link has changed, or the contents are inappropriate, please email
me ASAP.
Recommended reading
โข Judea Pearl, Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference, Morgan Kaufmann, San Mateo, CA, 1997. ISBN 1558604790.
โข (*) Richard Duda, Peter Hart and David Stork: Pattern Classification, Second Edition, John Wiley & Sons Inc, 2000. ISBN 0471056693
โข (*) Christopher Bishop, Neural Networks for Pattern Recognition, Clarendon Press, 1995. ISBN 0198538642
โข (*) Christopher Bishop, Pattern Recognition and Machine Learning, Springer 2006.
Algebra 1 Tutors
Miami, FL 33186
Francisco; Civil Engineering, Math., Science, Spanish, Computers.
...The goal is to help the student improve in knowledge and in grades as well. The most important is knowledge. The subjects that I am offering as a tutor are: Prealgebra, Algebra 1, Linear Algebra, Microsoft Word and Excel, Trigonometry and Spanish. I have 5 years of...
Offering 10+ subjects including algebra 1 | {"url":"http://www.wyzant.com/Palmetto_Bay_FL_Algebra_1_tutors.aspx","timestamp":"2014-04-19T08:19:25Z","content_type":null,"content_length":"60717","record_id":"<urn:uuid:2377f03a-e359-4014-9d46-77bc118f6bb6>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00328-ip-10-147-4-33.ec2.internal.warc.gz"} |
Junction Loss Experiments: Laboratory Report
Publication Number: FHWA-HRT-07-036
Date: March 2007
5. CONCLUSIONS
One concern when conducting small-scale experiments is the scaling issue. Comparing the old base run data to the smaller scale base runs confirmed that small-scale models can be used with reasonable
confidence to evaluate and develop the proposed junction loss method. Small-scale tests are much more efficient and reduce many of the physical and geometrical constraints. This is the primary reason
why the experiments were able to determine that K[i] equals 0.43, K[o] equals 0.16, and the coefficient in equation 6 should be equal to 1.0 (i.e., equation 7). These values are remarkably close to
Kilgore's values of 0.4 for K[i] and 0.2 for K[o]. The difference in values produces only minor differences in energy loss for pipe velocities less than 3.05 m/s (10 ft/s). It should also be noted
that Kilgore's coefficients slightly overestimate the energy level in the access hole, which makes his coefficients slightly more conservative than the lab-determined values.
This new and revised methodology addresses the problem of supercritical flows in outflow pipes. The use of inlet controlled culvert equations to estimate the initial depth in the access hole for
these situations appears to work very well. Kilgore proposed a relatively simple equation to compute additional energy loss for plunging flows that accounts for the proportion of the flow that is
plunging and the drop height. The experiments show that the new junction loss method is applicable for plunge-height ratios (i.e., plunge height divided by outlet pipe diameter) up to 10.
Characterizing the kinetic energy in the access hole remains the most rational procedure for estimating energy losses in access holes and distributing those losses among several inflow pipes. The two
approaches involving PIV and 3-D numerical modeling to analyze the energy level in the access hole, however, proved too difficult due to the extremely chaotic flow inside the access hole. This was
the primary reason that the research focused on the more organized flow in the contracted area of the outflow pipe. The area of maximum velocity near the contraction zone was successfully used as an
indirect measure of the energy loss in the outflow pipe (an entrance loss), which was then used to backcalculate the energy loss in the inflow pipe. This procedure showed that the entrance and exit
losses predicted by the new junction loss method are remarkably accurate.
What is the co-ordinates of sw15 1eu?
51°28'00"N and 0°13'14"W
the group of objects: the latitude 51 degrees 28 minutes and 0 seconds north and the longitude 0 degrees 13 minutes and 14 seconds west
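For anyone converting the answer to decimal degrees, the arithmetic is just degrees + minutes/60 + seconds/3600, with west longitudes negative:

    lat = 51 + 28/60 + 0/3600      # 51.4667 (north, positive)
    lon = -(0 + 13/60 + 14/3600)   # -0.2206 (west, negative)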
7th International Conference on Multiphase Flow,
ICMF 2010, Tampa, FL, May 30 -June 4, 2010
Large Eddy Simulation of the Breakup of a Kerosene Jet in Crossflow
N. Spyrou*, D. Choi*, A. Sadiki* and J. Janicka*
Institute for Energy and Powerplant Technology, TU Darmstadt, Petersenstr. 30, 64287 Darmstadt, Germany
spyrou@ekt.tu-darmstadt.de, choi@ekt.tu-darmstadt.de, sadiki@ekt.tu-darmstadt.de and janicka@ekt.tu-darmstadt.de
Keywords: Breakup, Interface Capturing, Jet in Crossflow
This paper presents numerical simulation results of the breakup of a turbulent liquid jet injected into a turbulent
gaseous crossflow. Three calculations are performed in order to separate effects from a variation of the liquid
Weber number Weliq and a grid resolution variation. The numerical method for this investigation employs a surface
capturing model based on the volume fraction as indicator function but without an explicit reconstruction of the phase
interface in the framework of the finite volume method. To ensure a sharp phase interface resolution an additional
convective term is introduced into the transport equation for the volume fraction suitable to avoid numerical smearing
of the phase interface. Starting from a base case a second calculation on the same grid is performed with Wei;, being
the only varied parameter. A third calculation resembles the base case but with a refined mesh by a factor of two in
terms of total grid cell amount.
Typical applications where a liquid jet is injected into
a gaseous crossflow are gas turbines. They are of im-
portance in e.g. lean premixed prevaporized (LPP) com-
bustion and in afterburners for gas turbines and also in
ramjets. Since combustion quality, i.e. efficiency and
pollutant formation is directly related to fuel atomiza-
tion, strong efforts are being made to control the struc-
ture of the generated fuel spray in terms of achieving the
desired spray angle, spray penetration and droplet size
Several experimental studies subjected to liquid jets
in crossflow (LJCF) have been carried out, delivering in-
formation about penetration of the liquid jet, penetration
of the resulting spray and phenomenological breakup
modes depending on dimensionless groups, see [1]-[6].
For nonturbulent liquid jets Wu et al. (1997) observed
that two different breakup modes can be identified being
termed "surface breakup" and "column breakup". Col-
umn breakup is characterized by growing waves gener-
ated on the liquid's surface on the windward side which
leads to the formation of bag-like structures that sep-
arate from the liquid column. In the surface breakup
mode fine structures are stripped by shear from the
liquid column's surface. In laminar jets the breakup
modes occur separately and Wu et al. (1997) generated
a breakup map distinguishing between column and sur-
face breakup mode in dependence of the momentum flux
ratio and the crossflow Weber number. However if the
liquid jet is turbulent the breakup modes occur not in a
separated manner but both mechanisms exist in parallel,
see Lee et al. (2007) and Sallam et al. (2004). Besides
the visualization of breakup mechanisms experimental
investigations also provide several correlations, e.g. for
the near field penetration of the liquid jet, jet trajectory
and Sauter mean diameter of the droplets. The experi-
mental investigations by Becker and Hassa (2002) and
Bellofiore (2006) are two among the few that focused
on the breakup at elevated air pressure. From their ex-
perimental data they provided correlations for the near
field penetration and for the trajectory of the liquid jet.
Yet, aiming at predictive models for LJCF the under-
standing of the primary breakup process is not sufficient
to deliver such models. Several mechanisms that lead
to breakup might occur simultaneous in the close prox-
imity of the jet's surface, a region where the optical ac-
cess is not suited for traditional experimental techniques.
The use of detailed numerical simulations can provide
additional information of the processes at the phase in-
terface of the jet. Experimental correlations are suited
and helpful to verify simulation results and the reliabil-
ity of numerical methods up to a certain point. From
there on the numerics have to break new ground and de-
tailed simulations of the phase interface dynamics are
necessary to advance the understanding of the precur-
sors of primary breakup. Herrmann (2009) and Pai et al.
(2008) carried out detailed simulations of turbulent liq-
uid jets in subsonic crossflows at ambient air pressure.
Both their numerical methods were based on a level set
approach with enhanced resolution of the phase inter-
face resulting in promising results. For the conditions
chosen in their study the numerical predictions by Pai et
al. (2008) showed that the crossflow Weber number has
little impact on the size of the disturbances on the wind-
ward side of the liquid jet and that the smallest liquid
length scales seem to be controlled by the liquid Weber
number. Herrmann (2009) transferred small scale drops
into a Lagrangian point particle description and provided
drop size distributions.
Governing equations and interface capturing
Single field formulation
The investigated flow is mathematically modelled by
the Navier Stokes equations for incompressible fluids in-
cluding the force due to surface tension at the phase in-
terface. The continuity and momentum equations read
\nabla \cdot \mathbf{U} = 0, \qquad (1)

\frac{\partial (\rho \mathbf{U})}{\partial t} + \nabla \cdot (\rho \mathbf{U} \mathbf{U}) = -\nabla p + \nabla \cdot \mathbf{T} + \rho \mathbf{g} + \mathbf{f}_{\sigma}, \qquad (2)

with \rho, \mathbf{U}, p and \mathbf{g} being the density, velocity, pressure and gravitational vector, respectively. \mathbf{T} represents the viscous stress tensor, which reduces for incompressible flows to \mathbf{T} = \mu [\nabla \mathbf{U} + (\nabla \mathbf{U})^{T}], and \mathbf{f}_{\sigma} accounts for the force due to surface tension at the phase interface.
Since the two phase flow is described by a single-field representation with one set of equations for both phases, an indicator function is needed to account for the phase present at a certain location at a certain time. Following the Volume of Fluid (VOF) approach the indicator function is defined as the volume fraction \gamma, whose evolution in time and space is described by an advection equation:

\frac{\partial \gamma}{\partial t} + \nabla \cdot (\mathbf{U} \gamma) = 0. \qquad (3)

Based on the distribution of the liquid volume fraction the physical properties of the two-phase mixture are calculated as weighted averages:

\rho = \rho_{l} \gamma + \rho_{g} (1 - \gamma), \qquad (4)

\mu = \mu_{l} \gamma + \mu_{g} (1 - \gamma), \qquad (5)

where the subscripts g and l denote the physical property related to the gas and liquid phase, respectively. The surface tension force \mathbf{f}_{\sigma} can be approximated by the continuum surface force (CSF) model by Brackbill et al. (1992), which represents the surface tension effects as a volumetric force. For constant surface tension \sigma the CSF model states

\mathbf{f}_{\sigma} = \sigma \kappa \nabla \gamma, \qquad (6)

where \kappa is the curvature of the interface, expressed by

\kappa = -\nabla \cdot \left( \frac{\nabla \gamma}{|\nabla \gamma|} \right). \qquad (7)

To derive the Large Eddy Simulation (LES) formulation of the governing equations a filtering procedure must be applied to equations (1), (2) and (3), which corresponds to volume averaging of the phase weighted properties. After filtering, due to the nonlinearity of the convective term in the momentum equation (2), the unknown subgrid scale (SGS) stress tensor \tau_{sgs} arises, which has the form:

\tau_{sgs} = \overline{\mathbf{U} \mathbf{U}} - \overline{\mathbf{U}} \, \overline{\mathbf{U}}, \qquad (8)

with the filter operation denoted by the overbar. To close the unknown SGS stress tensor it is approximated through the eddy viscosity assumption:

\tau_{sgs} = \frac{2}{3} k_{sgs} \mathbf{I} - \nu_{sgs} \left[ \nabla \overline{\mathbf{U}} + (\nabla \overline{\mathbf{U}})^{T} \right], \qquad (9)
where k_{sgs} and \nu_{sgs} are the SGS turbulent kinetic energy and SGS viscosity, and \mathbf{I} corresponds to the Kronecker delta \delta_{ij}. To determine k_{sgs} and \nu_{sgs} the one-equation transport model for k_{sgs} by Yoshizawa and Horiuti (1985) is used:

\frac{\partial k_{sgs}}{\partial t} + \nabla \cdot (k_{sgs} \overline{\mathbf{U}}) = \nabla \cdot \left[ (\nu + \nu_{sgs}) \nabla k_{sgs} \right] - \tau_{sgs} : \overline{\mathbf{S}} - \varepsilon, \qquad (10)

where \varepsilon = C_{\varepsilon} (k_{sgs})^{3/2} / \Delta is the dissipation of k_{sgs} and \nu_{sgs} = C_{k} \Delta (k_{sgs})^{1/2}, with \Delta being the SGS length scale and \overline{\mathbf{S}} = \frac{1}{2} ( \nabla \overline{\mathbf{U}} + (\nabla \overline{\mathbf{U}})^{T} ) the filtered rate of strain tensor. The model constants are C_{k} = 0.07 and C_{\varepsilon} = 1.05.
This LES formulation corresponds to a single phase
formulation, since the filtering of the equations (1),
(2) and (3) produces additional terms arising from
the surface tension and the transport of the volume
fraction which are neglected in this study. Due to the
grid refinement in the regions near the phase inter-
face it is assumed that the SGS contribution of these
terms is small and can be neglected. In addition the
effects of the neglected terms are oppositional and will
tend to attenuate each other [16], de Villiers et al. (2004).
Artificial compression term
Typical VOF methods solve equation (3) either in a
geometrical manner where the interface is reconstructed
or in an interface compression manner, in which case
special discretization techniques like e.g. Ubbink (1997)
or Muzaferija et al. (1998) are utilized. In the present
study a modified approach similar to the one proposed
in Rusche (2002) is used with an advanced model for-
mulation by OpenCFD Ltd. (2007). Reconstruction of
the interphase is not performed and instead of using spe-
cial compressive discretization techniques an additional
convective term is introduced into equation (3):
\frac{\partial \gamma}{\partial t} + \nabla \cdot (\mathbf{U} \gamma) + \nabla \cdot \left[ \mathbf{U}_{r} \gamma (1 - \gamma) \right] = 0. \qquad (11)
The third term on the lhs is designated artificial com-
pression term containing the compression velocity Ur
which is computed in a way, suitable to avoid smear-
ing of the phase interface. Because of the multiplication
with \gamma (1 - \gamma) this term acts only in the close proxim-
ity of the phase interface and vanishes in regions away
from it. Introducing the compression term into the ad-
vection equation for the volume fraction is numerically
motivated, hence shifting the challenging task to avoid
smearing of the interface by compressive discretization
techniques to the formulation of the compressive veloc-
ity Ur thus enabling the use of standard differencing
techniques for the volume fraction. The proposed re-
lationship by OpenCFD Ltd. (2007) for the compressive
velocity formulates Ur at the cell faces, based on the
maximum velocity magnitude at the interface region and
its direction:
\mathbf{U}_{r,f} = \mathbf{n}_{f} \min \left[ C_{\gamma} \frac{|\phi_{f}|}{|\mathbf{S}_{f}|}, \ \max \left( \frac{|\phi_{f}|}{|\mathbf{S}_{f}|} \right) \right], \qquad (12)
where the subscript f denotes values identified at cell
faces. In equation (12) \phi_{f}, \mathbf{S}_{f}, C_{\gamma} and \mathbf{n}_{f} are the
face volume flux, cell face area vector, compression co-
efficient and face unit normal flux respectively. The face
unit normal flux is defined by:
\mathbf{n}_{f} = \frac{(\nabla \gamma)_{f}}{|(\nabla \gamma)_{f}| + \delta_{n}} \cdot \mathbf{S}_{f}, \qquad (13)

with \delta_{n} being a stabilization factor. The intensity of the interface compression is controlled by the constant C_{\gamma}, so that the influence of the compression term can be discarded, act in a conservative or enhanced manner by setting the constant to zero, unity or greater than one, respectively. In the present study C_{\gamma} was set to unity.
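The following is a minimal one-dimensional sketch of Eqs. (12)-(13) in Python; the array names, the uniform grid, and the reduction of the face normal to a sign are simplifications for illustration, not part of the paper:

    import numpy as np

    def compression_velocity(phi_f, S_f, grad_gamma_f, C_gamma=1.0, delta_n=1e-8):
        # Eq. (13) collapsed to 1D: the face "normal" reduces to a sign.
        n_f = grad_gamma_f / (np.abs(grad_gamma_f) + delta_n)
        # Eq. (12): local compressive speed, limited by the global maximum.
        speed = np.abs(phi_f) / S_f
        return n_f * np.minimum(C_gamma * speed, speed.max())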
Figure 1: Geometry of the computational domain
Numerical investigation
Test case description
The numerical investigation focuses on the injection
of a kerosene jet into a crossing airflow at elevated air
pressure and the subsequent breakup of the liquid jet.
Figure 1 shows the relevant geometrical information
of the computational domain. The liquid jet is injected
along the z-axis through a plain jet nozzle mounted flush
with the bottom wall of the duct. For a detailed descrip-
tion of the experimental setup the reader is referred to
Becker and Hassa (2002). The liquid jet in crossflow
(LJCF) is parametrized by five independent dimension-
less groups:
* Liquid Weber number

$$We_{liq} = \frac{\rho_l U_{b,l}^2\, d}{\sigma} \qquad (14)$$

* Crossflow Weber number

$$We_{cf} = \frac{\rho_g U_{b,g}^2\, d}{\sigma} \qquad (15)$$

* Liquid Reynolds number

$$Re_{liq} = \frac{U_{b,l}\, d}{\nu_l} \qquad (16)$$

* Crossflow Reynolds number

$$Re_{cf} = \frac{U_{b,g}\, D_h}{\nu_g} \qquad (17)$$

* Density ratio

$$\epsilon = \frac{\rho_l}{\rho_g} \qquad (18)$$
Here d stands for the nozzle diameter, Dh is a charac-
teristic length scale of the crossflow (e.g. the hydraulic
diameter of the duct from the experimental investigation
Becker and Hassa (2002)) and the velocities Ub,l and
Ub,g are bulk values along the z-axis and x-axis, respec-
tively. In addition, the momentum flux ratio $q = We_{liq}/We_{cf} = \rho_l U_{b,l}^2 / (\rho_g U_{b,g}^2)$ and the Ohnesorge number $Oh = \sqrt{We_{liq}}/Re_{liq}$ can be expressed by the already mentioned dimensionless groups.
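As a quick consistency check, the dimensionless groups can be evaluated directly from the operating conditions; the short Python snippet below reproduces the case A values of Table 1 (all numbers are taken from that table):

# Dimensionless groups of the LJCF, evaluated for case A of Table 1
rho_l, rho_g = 795.0, 12.05      # liquid / gas density [kg/m^3]
u_l, u_g = 13.1, 53.1            # bulk velocities [m/s]
d, sigma = 4.5e-4, 0.022         # nozzle diameter [m], surface tension [N/m]
nu_l = 1.96e-6                   # liquid kinematic viscosity [m^2/s]

we_liq = rho_l * u_l**2 * d / sigma
we_cf = rho_g * u_g**2 * d / sigma
re_liq = u_l * d / nu_l
q = we_liq / we_cf               # momentum flux ratio
oh = we_liq**0.5 / re_liq        # Ohnesorge number

print(f"We_liq = {we_liq:.0f}, We_cf = {we_cf:.0f}, "
      f"Re_liq = {re_liq:.0f}, q = {q:.1f}, Oh = {oh:.4f}")
# -> We_liq ~ 2790, We_cf ~ 695, Re_liq ~ 3000, q ~ 4.0, matching Table 1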
Operating conditions and computational mesh
Three calculations (cases A, B and C) were performed
in the present study. The characteristic parameters of the
LJCF are summarized in Table 1 and the information of
the computational grids is summarized in Table 2.
The comparison AB focuses on the effect of the liquid Weber number $We_{liq}$ on the breakup process. All other independent dimensionless numbers and the computational grid are the same in both cases A and B. The altering of $We_{liq}$ in case B is reached through increasing the liquid bulk velocity $U_{b,l}$. To maintain the value for the liquid Reynolds number $Re_{liq}$ the viscosity of the liquid $\nu_l$ is adjusted in case B.
Comparison AC focuses on the effect of the grid res-
olution along the z-axis to the breakup process. All
dimensionless groups in cases A and C are identical.
While the computational grid for case A resolved the
phase interface regions downstream of the nozzle with
cells of size (15 × 15 × 30) μm, the grid cells in computation C were of size (15 × 15 × 15) μm.
To save computational costs the mesh provides
the mentioned grid spacing only in a bounding box
surrounding the evolving jet. This is achieved through
local grid refinement in the interesting regions. Figure
2 shows a clipped part of the mesh in order to illustrate
the refined grid. The dimensions of the refined region
are -1d ... 5.5d x -2.8d... 2.8d x 0d... 6.7d.
Table 1: Summary of operating conditions

Variable   Case A     Case B     Case C     Units
We_liq     2782.0     4204.0     2782.0     -
We_cf      695.0      695.0      695.0      -
Re_liq     3000.0     3000.0     3000.0     -
Re_cf      1.1e6      1.1e6      1.1e6      -
epsilon    66.0       66.0       66.0       -
q          4.0        6.0        4.0        -
d          4.5e-4     4.5e-4     4.5e-4     m
sigma      0.022      0.022      0.022      N/m
U_b,l      13.1       16.1       13.1       m/s
U_b,g      53.1       53.1       53.1       m/s
rho_l      795.0      795.0      795.0      kg/m^3
rho_g      12.05      12.05      12.05      kg/m^3
nu_l       1.96e-6    2.41e-6    1.96e-6    m^2/s
nu_g       1.5e-6     1.5e-6     1.5e-6     m^2/s
Table 2: Summary of mesh parameters

Parameter      Case A    Case B    Case C
Delta x_min    d/30      d/30      d/30
Delta y_min    d/30      d/30      d/30
Delta z_min    d/15      d/15      d/30
cell count     5.55e6    5.55e6    10.12e6
Setup details

The employed crossflow duct in the experimental investigation by Becker and Hassa (2002) had a cross section of 25mm by 40mm. To investigate the LJCF numerically only a subregion is modeled. Therefore a number of precalculations were performed to provide proper inlet conditions for the air and jet flow. For the airflow a Reynolds Averaged Navier Stokes calculation of the whole experimental duct was performed to obtain a mean velocity profile providing the correct air mass flow through the duct. According to the cross section of the duct for the computational domain, the corresponding part of the mean velocity profile was mapped to the computational inlet, and slip boundary conditions were assigned to the upper and both lateral patches. To provide a transient turbulent inlet condition into the duct the inflow generator by Klein et al. (2003) was used, producing a time series of fluctuations correlated in space and time. By superposing the mean velocity profile and the time series of the fluctuations a data base for transient turbulent inlet conditions was used for the present investigation. In a similar fashion the inflow conditions for the nozzle were generated.

Figure 2: Refined mesh
Figure 3: Temporal evolution of total liquid mass in the
computational domain during the averaging
Results and discussion
All presented results in this study were obtained after a certain calculation time in order to ensure that a fully developed state is reached. This is confirmed by Figure 3, where the temporal evolution of the total liquid mass in the computational domain ranges close to the mean value.
In Figure 4 the liquid jet trajectory is compared to cor-
relations derived from experimental investigations. For
the comparison correlations are used where the exper-
iments were performed at elevated air pressures. This
was the case for the experimental investigations from
Becker and Hassa (2002) and Bellofiore (2006). Note
that in Figure 4 the images of the liquid jet are not an
instantaneous snapshot but a superposition of multiple
snapshots. The phase interface is depicted as isosurface
of the volume fraction $\gamma = 0.5$. The red and green lines
represent the correlations by Becker and Hassa (2002)
and Bellofiore (2006), respectively. The latter shows
better agreement with the numerical results but it should
be mentioned that the correlation by Bellofiore (2006)
lies inside the range of the standard deviation specified
by Becker and Hassa (2002) for their correlation. The
agreement between simulation and correlation for cases
A and C (q = 4) is better than for case B, where q = 6.
The lateral dispersion of the jet is prescribed by small droplets which cannot be resolved with the employed grid. This is emphasized in Figure 5, which depicts the lateral dispersion as an isosurface of $\gamma = 0.1$.
Figure 6 shows for each case A, B and C a time se-
quence from left to right visualizing the development
and amplification of interfacial instabilities on the wind-
ward side of the liquid jet. The arrows highlight initially
small waves climbing up along the liquid surface in the
direction of the jet axis and being amplified by aerody-
namic forces. That process, denoted as column breakup,
results in large bag-like structures that break off and pro-
duce a wide range of droplets not resolvable in their en-
tirety by the computational grid. The presence of the
large scale instabilities in cases A and B is also captured
by the coarse grid. Furthermore, in each case the onset
Figure 4: Liquid jet trajectory based on superposed snapshots, depicted by the isosurface of $\gamma = 0.5$
of wave amplitude amplification is not located close to
the nozzle exit but rather at a considerable distance to
it. In conjunction with elevated air pressure the effect
of mass shedding close to the nozzle exit by means of
stripping mechanisms on the jets surface increases, thus
producing ligaments and subsequent detached structures
with a wide size distribution that are not resolved by
the employed grids. Further downstream along the jet
axis the observed large scale disturbances develop due
to combined effects by means of loss of mass, flatten-
ing and bending of the liquid column, Bellofiore (2006).
Note that the location of onset of growing wave struc-
tures denoted in the left column in Figure 6 coincides
with noticeable bending of the liquid column.
Figure 5: Lateral dispersion of the jet, case B; correlation by Becker and Hassa (2002)

The instantaneous snapshots do not show a significant impact of the liquid Weber number $We_{liq}$ on the resulting resolvable liquid structures. But as the detached
structures from the liquid column are suspected to have a
wide range of sizes, an ultimate statement about the influence of $We_{liq}$ in this study cannot be made until the nu-
merical grid is further refined. As expected the compar-
ison between cases A and C shows that the refined grid
captures finer liquid structures. Nevertheless the coarser
grid captures the behaviour of the liquid column.
The results of the computations have shown that the
present interface capturing approach, coupled with a
LES formalism can be used to investigate LJCF. The jet
trajectory and liquid column behaviour are reproduced
in accordance to experimental data and phenomenologi-
cal descriptions. In a next step further grid refinement is
necessary to isolate effects of characteristic parameters
like e.g. the liquid Weber number.
This work is part of the Graduiertenkolleg 1344 at TU-
Darmstadt and financially supported by the DFG.
[1] Becker J. and Hassa C., Breakup and Atomization of
a Kerosene Jet in Crossflow at Elevated Pressure, Atom-
ization and Sprays 12:49-67, 2002
[2] Stenzler J. N., Lee J. G. and Santavicca D.A., Pene-
tration of Liquid Jets in a Cross-Flow, Atomization and
Sprays 16:887-906, 2006
[3] Wu P-K., Kirkendall K. A. and Fuller R. P., Breakup
Processes of Liquid Jets in Subsonic Crossflows, Journal
of Propulsion and Power, Vol. 13, pp. 64-73, 1997
Figure 6: Side view snapshots at different times (in-
creasing left to right) for cases A, B and C
[4] Sallam K. A., Aalburg C. and Faeth G. M., Breakup
of Round Nonturbulent Liquid Jets in Gaseous Cross-
flow, AIAA Journal, Vol. 42, pp. 2529-2540, 2004
[5] Lee K., Aalburg C., Diez F. J., Faeth G. M. and Sal-
lam K. A., Primary Breakup of Turbulent Round Liquid
Jets in Uniform Crossflows, AIAA Journal, Vol. 45, pp.
1907-1916, 2007
[6] Bellofiore A., Experimental and Numerical Study of
Liquid Jets Injected in High-Density Air Crossflow, PhD
thesis, University of Naples, 2006
[7] Herrmann M., Detailed Simulations of the Breakup
Processes of Turbulent Liquid Jets in Subsonic Cross-
flows, Paper No. ICLASS2009-188, 11th Triennial Con-
ference on Liquid Atomization and Spray Systems, Vail,
CO, 2009
[8] Pai M. G., Desjardins O. and Pitsch H., Detailed
Simulations of Primary Breakup of Turbulent Liquid
Jets in Crossflow, Center for Turbulence Research, An-
nual Research Briefs, 2008
[9] Klein M., Sadiki A. and Janicka J., A Digital Fil-
ter Based Generation of Inflow Data for Spatially De-
veloping Direct Numerical or Large Eddy Simulations,
Journal of Computational Physics, Vol. 186, pp. 652-665, 2003
[10] Yoshizawa A. and Horiuti K., A Statistically-
Derived Subgrid-Scale Kinetic-Energy Model for the
Large-Eddy Simulation of Turbulent Flows, Journal of
the Physical Society of Japan, Vol. 54:2834-2839, 1985
[11] Rusche H., Computational Fluid Dynamics of Dis-
persed Two-Phase Flows at High Phase Fractions, PhD
thesis, Imperial College, University of London, 2002
[12] Ubbink O., Numerical Prediction of Two Fluid Sys-
tems with Sharp Interfaces, PhD thesis, Imperial Col-
lege, University of London, 1997
[13] Muzaferija S., Peric M., Sames P and Schelin T., A
Two-Fluid Navier-Stokes Solver to Simulate Water En-
try, Proc. 22nd Symp. on Naval Hydrodynamics, 1998
[14] Brackbill J. U., Kothe D. B. and Zemach C., A Con-
tinuum Method for Modeling Surface-Tension, Journal
of Computational Physics, Vol. 100, pp. 335-354, 1992
[15] OpenCFD Ltd., http://www.opencfd.co.uk/, 2007
[16] de Villiers E., Gosman A. D. and Weller H. G.,
Large Eddy Simulation of Primary Diesel Spray Atom-
ization, SAE technical paper, 2004-01-0100, 2004 | {"url":"http://ufdc.ufl.edu/UF00102023/00294","timestamp":"2014-04-18T11:21:43Z","content_type":null,"content_length":"41656","record_id":"<urn:uuid:fc240f95-9a2c-4aa0-af11-454589a749e1>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00014-ip-10-147-4-33.ec2.internal.warc.gz"} |
Question about ideals and integral domains.
Suppose that A is a ring and I is an ideal of A. Prove that the quotient ring A/I is an integral domain if and only if I satisfies the following:
$I \neq A$ and $xy \in I \implies (x\in I$ or $y\in I)$.
I have tried it both ways using a contradictory argument but to no avail. Help much appreciated.
Re: Question about ideals and integral domains.
An ideal which satisfies the second condition is said to be prime. Use the fact that $x\in I$ is the same thing as the class of $x$ modulo $I$ is the class of $0$, and the product of two classes
is the class of the product.
Re: Question about ideals and integral domains.
this is pretty basic: suppose I is a prime ideal of A. then if (x+I)(y+I) = I in A/I, and x is not in I, then xy + I = I, so xy is in I, and since I is prime, and x is not in I, y is in I.
but this means that y+I = I, that is, A/I has no zero divisors. provided that A was a commutative ring with unity in the first place, A/I is an integral domain (some authors do not require
commutativity nor a unit).
(the condition I ≠ A ensures we have some other element besides I = 0 + I in A/I, so that we have nonzero elements at all).
the converse is proven similarly: using a direct approach, rather than contradiction, works well.
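For completeness, here is a sketch of that converse in the same notation: suppose $A/I$ is an integral domain. Since $A/I$ contains a nonzero element, $I \neq A$. Now if $xy \in I$, then $(x+I)(y+I) = xy + I = I$ in $A/I$, and since $A/I$ has no zero divisors, $x + I = I$ or $y + I = I$, i.e. $x \in I$ or $y \in I$. Hence $I$ is prime.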
Help with equations of motion
I am having a little bit of conceptual trouble with this problem and would appreciate your help. The problem setup is given in the figure. Let's say we have a slender uniform rigid arm(mass m, length
l) in space, with a coordinate system [itex]B[/itex] attached to the left end of the arm as shown. C is the center of mass of the arm. We have a moment [itex]M_{z_b}[/itex] acting about the [itex]\
hat{z}_{b}[/itex] axis.
Let [itex](u,v,w)[/itex] and [itex](p,q,r)[/itex] be the inertial velocity and inertial angular velocity vectors expressed in [itex]B[/itex]. I get the scalar equations of motion as (assuming that
the angular velocity is only along [itex]\hat{z}_b[/itex]):
[itex]m \dot{u} - \frac{ml}{2} r^2 = F_{x_b}[/itex]
[itex]m \dot{v} + \frac{ml}{2} \dot{r} = F_{y_b}[/itex]
[itex]m \dot{w} = F_{z_b}[/itex]
[itex]0 = M_{x_b}[/itex]
[itex]-\frac{ml}{2} \dot{w} = M_{y_b}[/itex]
[itex]\frac{ml^2}{3} \dot{r} + \frac{ml}{2} \dot{v} = M_{z_b}[/itex]
The applied moment is given as : [itex]M_{z_b}(t) = 160 \left(1 - \cos \left(\frac{2 \pi t}{15} \right) \right)[/itex]. For [itex]t > 15, M_{z_b} = 0[/itex]. See figure below :
Integrating these equations using MATLAB's ode45, I get the following plot :
From the above figure :
1) There is only one component of angular velocity (yaw rate) which is as expected. But is the magnitude correct (ie should it reach 24 rad/s)?
2) I am not able to figure out what's going on with u. Why is it increasing so rapidly?
Any help would be really appreciated. | {"url":"http://www.physicsforums.com/showthread.php?s=ac2cffb385ff16a0b4794e3a7169fa22&p=4438675","timestamp":"2014-04-20T05:51:24Z","content_type":null,"content_length":"24740","record_id":"<urn:uuid:8af58478-fc57-4ec6-8c12-e2465c7fb8b7>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00513-ip-10-147-4-33.ec2.internal.warc.gz"} |
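For what it's worth, here is a small Python sketch (scipy's solve_ivp in place of MATLAB's ode45; m and l are arbitrary placeholder values, since the thread gives neither) that integrates the equations exactly as written, with F = 0 and w = p = q = 0. Note that once M_z returns to zero, r is constant but u' = (l/2) r^2 stays positive, so u grows linearly without bound — consistent with the plot described above. Whether that is physical may hinge on whether the omega × v transport terms belong in the x–y force equations, which is worth checking.

import numpy as np
from scipy.integrate import solve_ivp

m, l = 1.0, 1.0   # kg, m -- placeholder values; the thread gives neither

def Mz(t):
    # applied yaw moment: 160*(1 - cos(2*pi*t/15)) for t < 15, zero afterwards
    return 160.0 * (1.0 - np.cos(2.0 * np.pi * t / 15.0)) if t < 15.0 else 0.0

def rhs(t, y):
    u, v, r = y
    # combining the v- and r-equations (F = 0) gives (m l^2 / 12) r' = Mz,
    # i.e. the moment acts on the inertia about the center of mass
    rdot = 12.0 * Mz(t) / (m * l**2)
    vdot = -(l / 2.0) * rdot          # from m v' + (m l / 2) r' = 0
    udot = (l / 2.0) * r**2           # from m u' - (m l / 2) r^2 = 0
    return [udot, vdot, rdot]

sol = solve_ivp(rhs, (0.0, 30.0), [0.0, 0.0, 0.0], max_step=0.05)
u, v, r = sol.y
# r settles to a constant after t = 15, while u keeps growing linearly,
# because u' = (l/2) r^2 stays positive; the magnitudes scale with 1/(m l^2)
print(f"r(30) = {r[-1]:.1f} rad/s, u(30) = {u[-1]:.1f} m/s")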
GNU Octave Manual Version 3
by John W. Eaton, David Bateman, Sรธren Hauberg
10.5 The for Statement
The for statement makes it more convenient to count iterations of a loop. The general form of the for statement looks like this:
for var = expression
  body
endfor
where body stands for any statement or list of statements, expression is any valid expression, and var may take several forms. Usually it is a simple variable name or an indexed variable. If the
value of expression is a structure, var may also be a vector with two elements. See section 10.5.1 Looping Over Structure Elements, below.
The assignment expression in the for statement works a bit differently than Octave's normal assignment statement. Instead of assigning the complete result of the expression, it assigns each column of
the expression to var in turn. If the value of expression is a range, a row vector, or a scalar, the value of var will be a scalar each time the loop body is executed. If expression is a column vector or a matrix, var
will be a column vector each time the loop body is executed.
The following example shows another way to create a vector containing the first ten elements of the Fibonacci sequence, this time using the for statement:
fib = ones (1, 10);
for i = 3:10
  fib (i) = fib (i-1) + fib (i-2);
endfor
This code works by first evaluating the expression 3:10, to produce a range of values from 3 to 10 inclusive. Then the variable i is assigned the first element of the range and the body of the loop
is executed once. When the end of the loop body is reached, the next value in the range is assigned to the variable i, and the loop body is executed again. This process continues until there are no
more elements to assign.
Within Octave is it also possible to iterate over matrices or cell arrays using the for statement. For example consider
disp("Loop over a matrix")
for i = [1,3;2,4]
disp("Loop over a cell array")
for i = {1,"two";"three",4}
In this case the variable i takes on the value of the columns of the matrix or cell matrix. So the first loop iterates twice, producing two column vectors [1;2], followed by [3;4], and likewise for
the loop over the cell array. This can be extended to loops over multidimensional arrays. For example
a = [1,3;2,4]; b = cat(3, a, 2*a);
c = cat(4, b, 3*b);
for i = c
  i
endfor
In the above case, the multidimensional matrix c is reshaped to a two dimensional matrix as reshape (c, rows(c), prod(size(c)(2:end))) and then the same behavior as a loop over a two dimensional
matrix is produced.
Although it is possible to rewrite all for loops as while loops, the Octave language has both statements because often a for loop is both less work to type and more natural to think of. Counting the
number of iterations is very common in loops and it can be easier to think of this counting as part of looping rather than as something to do inside the loop.
Maximum entropy models (scipy.maxentropy)

Routines for fitting maximum entropy models

Contains two classes for fitting maximum entropy models (also known as "exponential family" models) subject to linear constraints on the expectations of arbitrary feature statistics. One class, "model", is for small discrete sample spaces, using explicit summation. The other, "bigmodel", is for sample spaces that are either continuous (and perhaps high-dimensional) or discrete but too large to sum over, and uses importance sampling conditional Monte Carlo methods.

The maximum entropy model has exponential form

p(x) = exp(theta^T f(x)) / Z(theta),

with a real parameter vector theta of the same length as the feature statistic f(x). For more background, see, for example, Cover and Thomas (1991), Elements of Information Theory.
See the file bergerexample.py for a walk-through of how to use these routines when the sample space is small enough to be enumerated.
See bergerexamplesimulated.py for a similar walk-through using simulation.
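Since this module has long since been removed from SciPy, the following is a small pure-NumPy sketch of the same fitting idea for a tiny discrete sample space; the feature functions, target expectations and step size are made up for illustration:

import numpy as np

# Find theta so that E_p[f(x)] matches target expectations K,
# with p(x) = exp(theta . f(x)) / Z(theta), on x in {0,...,5}.
samplespace = np.arange(6)
F = np.stack([samplespace == 0,                       # feature 1: x is 0
              samplespace % 2 == 0]).astype(float)    # feature 2: x is even
K = np.array([0.3, 0.5])                              # target expectations

theta = np.zeros(2)
for _ in range(2000):
    logp = theta @ F
    p = np.exp(logp - logp.max())
    p /= p.sum()
    grad = K - F @ p        # gradient of the dual objective: K - E_p[f]
    theta += 0.5 * grad     # fixed step size (illustrative)

print("theta  =", theta.round(3))
print("E_p[f] =", (F @ p).round(3), " target:", K)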
Copyright: Ed Schofield, 2003-2006 License: BSD-style (see LICENSE.txt in main source directory) | {"url":"http://docs.scipy.org/doc/scipy-0.8.x/reference/maxentropy.html","timestamp":"2014-04-17T22:41:45Z","content_type":null,"content_length":"46682","record_id":"<urn:uuid:87423582-b38e-479e-a693-2f0be68f67b7>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00234-ip-10-147-4-33.ec2.internal.warc.gz"} |
What does the p-value in SPSS research mean?
The p-value, or probability value, is also known as the significance value.

In SPSS research the p-value is a measure of how much evidence we have against the null hypothesis.

When we compute a statistic (e.g. an F-statistic, Z-statistic or t-statistic) we compare it with the critical value from the statistical table, taking into account the sample size and the degrees of freedom. In SPSS we use the significance value instead.

Let me introduce the α-value, which represents the acceptable probability of error, given by α = 1 − confidence level.

The confidence level expresses how confident (in %) we can be in the conclusions drawn from the data.

In SPSS research we use a 99%, 95% or, at worst, 90% confidence level. This implies that the corresponding significance level α is 0.01, 0.05 or 0.10 respectively.

Most SPSS researchers use a 95% confidence level in most analysis tests. For this particular case our α is 0.05. If we carry out a statistical test and our significance value P < 0.05, we reject the null hypothesis, indicating there is a significant difference between the means. If the significance value P ≥ 0.05, we don't have enough evidence to reject the null hypothesis: the difference between the means is not significant.
SPSS researchers should know that the word significant does not mean "important"; in statistics it means "probably true".
Condition of an exponent for convergence of an integral
I must find for which values of $p \in \mathbb{R}$ the following integral converges : $\int_e^{+\infty} \frac{dx}{x\ln^p|x|}$. Mathstud28 already done that (well, something pretty similar) in one
of my earlier threads, but I didn't understand a step he did, so I would like a little bit more explanations :
[quote]How about when
Hmm, sorry the quote doesn't work. I copy and past it, but it only shows the first line and all the other is cut, so I quote it by parts :[quote].
Still doesn't work!
Anyway, he was working with $\int_2^{+\infty}\frac{dx}{x\ln^p|x|}$. He said that if $0<p<1$, then the indefinite integral is equal to $\frac{\ln^{-p+1}(x)}{-p+1}$. This is what I missed to
understand. Did he do integration by parts? How could he reach this result? Thanks!!
If $p \neq 1$,

then $\int\frac{dx}{x\ln^p(x)}=\int\frac{\frac{dx}{x}}{\ln^p(x)}$.

Now let $u=\ln(x)$, so that $du = \frac{dx}{x}$ and the integral becomes $\int u^{-p}\,du = \frac{u^{1-p}}{1-p}$, which is the result above.
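A quick numerical sanity check of the conclusion (convergence exactly when $p > 1$) — just an illustration with scipy, not part of the original thread:

from scipy.integrate import quad
import numpy as np

f = lambda x, p: 1.0 / (x * np.log(x) ** p)

for p in (0.5, 1.0, 2.0):
    # integrate from e to increasing upper bounds and watch the trend:
    # the values settle only for p > 1 (here they approach 1 for p = 2)
    vals = [quad(f, np.e, b, args=(p,))[0] for b in (1e2, 1e4, 1e8)]
    print(f"p = {p}: " + ", ".join(f"{v:.3f}" for v in vals))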
This is mathstud's baby, so I will leave him answer. I am sure he is typing as I write this. Anyway, here is a site with a list of integrals you may find helpful instead of deriving them each
time. Unless you want to.
Definite Integrals, General Formulas Involving Definite Integrals
Thanks mathstud28, I reached it! I thought it would have been much more complicated.
Hey mathstud, you know what might be fun?. Go to the link I posted above and actually derive the solutions they give for the integrals. You have probably done that already.
Some of them are probably very challenging.
Lincoln Acres Prealgebra Tutor
Find a Lincoln Acres Prealgebra Tutor
...I also love tutoring computer science and physics. I create a different lesson plan for each student based on their needs and what they struggle with, and then set them up for success. I
understand that sometimes life gets hectic, so I only require 6 hours for cancellation.
37 Subjects: including prealgebra, calculus, algebra 2, algebra 1
...I also spent one year tutoring in Seattle, WA, working with special needs students pursuing their GEDs. I am an effective tutor because of my skill in assessing my student's needs, but also
because of my ability to empathize with young learners. I believe there is always an unseen angle that each learner can use to make the subject material more accessible and interesting.
14 Subjects: including prealgebra, French, geometry, ESL/ESOL
I have over 11 years in the educational field. I have worked with mostly high school and middle school children. I have experience many different subjects even though I am a history major, my
experience in working in many subjects has helped me be familiar in many other subjects.
32 Subjects: including prealgebra, reading, Spanish, writing
...The ACT is part knowledge but also part test strategy. Identifying the types of questions that are the most tricky, where the test seeks to confuse a student, and strategy to finish the test in
the allotted time became the focus of our studying. We worked for 3 days and her score improved significantly.
9 Subjects: including prealgebra, algebra 1, algebra 2, economics
...There are few jobs more rewarding than tutoring, and I have been lucky enough to tutor a diverse group of students throughout my career. I specialize in tutoring English at all levels,
particularly in writing skills and reading comprehension. I also specialize in tutoring for GRE, SAT, and ACT ...
29 Subjects: including prealgebra, English, chemistry, geometry | {"url":"http://www.purplemath.com/Lincoln_Acres_prealgebra_tutors.php","timestamp":"2014-04-17T13:37:24Z","content_type":null,"content_length":"24300","record_id":"<urn:uuid:9ec24b3b-bf1b-4b14-8c27-9e870ba36b6d>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00120-ip-10-147-4-33.ec2.internal.warc.gz"} |
Tortilla Flat Science Tutor
Find a Tortilla Flat Science Tutor
...I love to teach spelling and grammar, and know how to make it clear and easy. I write children's books, and have had poetry and articles published online. I also teach basic math through Pre
Algebra. I have taught complete pre-algebra courses for over two years.
18 Subjects: including biology, physical science, English, reading
...I am also very good at math and have taken up to Calculus 1, which I received a grade of 4.0 in. Although I'd like to tutor mainly science, math, or English, I will tutor most other subjects as
well. My tutoring style is to find out how best the student learns and cater my tutoring style to the way that individual learns best.
22 Subjects: including chemistry, ecology, physical science, biology
...I graduated Cum Laude with my BS in Biology from the University of Texas at Arlington. I was a substitute teacher in Azle, TX where I taught children from Kindergarten to High School. While I
was in college, I ran numerous study groups and assisted classmates in various subjects.
26 Subjects: including botany, reading, writing, microbiology
...I also instructed the following styles and classes for adults: small circle Jujitsu, hand to hand combat, Kali close fighting system, Tae Kwon Do, and an effective/interactive weapon fighting
system involving sticks and knives from Philippine Modern Arnis style. In August 2003, I was inducted in...
9 Subjects: including sociology, public speaking, psychology, pharmacology
...My credentials include an undergraduate teaching certification, graduate work in reading and a full SEI (structured English immersion) endorsement. My individualized approach is based on the
studentโs preferred learning style. So many of my students find that, for the first time, they arenโt simply trying to memorize; they actually understand.
40 Subjects: including sociology, geology, psychology, English | {"url":"http://www.purplemath.com/tortilla_flat_science_tutors.php","timestamp":"2014-04-19T05:12:34Z","content_type":null,"content_length":"24082","record_id":"<urn:uuid:e12dac4f-cf99-42b3-8500-3caed236a3a5>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00286-ip-10-147-4-33.ec2.internal.warc.gz"} |
See also: one-sample chi-square test, F-distribution, survey on statistical tests
Two-Sample F-Test
In order to compare two methods, it is often important to know whether the variabilities for both methods are the same. In order to compare two variances v1 and v2, one has to calculate the ratio of the two variances. This ratio is called the F-statistic (in honor of R.A. Fisher) and follows an F distribution:

F = v1/v2

The null hypothesis H0 assumes that the variances are equal and that the ratio F is therefore one. The alternative hypothesis H1 assumes that v1 and v2 are different, and that the ratio deviates from unity. The F-test is based on two assumptions: (1) the samples are normally distributed, and (2) the samples are independent of each other. When these assumptions are fulfilled and H0 is true, the statistic F follows an F-distribution. The following is a decision table for the application of an F-test. In order to calculate the F-quantile, or an associated probability, refer to an F table, or to the distribution calculator of Teach/Me.
โข When the normality assumption is not fulfilled, one should use a non-parametric method. In general the F-test is more sensitive to deviations from normality than the t-test.
โข The F-test can be used to check the equal variance assumption needed for the two sample t-test, but the non-rejection of H0 does not imply that the assumption (of equal variance) is valid, since
the probability of the type 2 error is unknown.
Suppose you have two series of measurements, one with 10 observations, and one with 13 observations. The variance of the first series is 0.88, and the variance of the second series is 1.79. Is the
variance of the second series significantly larger than the variance of the first series (at a significance level of 0.05)?
In order to check this, we assume the null hypothesis that the variance of the second series is not larger than the variance of the first series. The alternative hypothesis would be that the second
variance is indeed larger than the first one. Next we have to calculate the F statistic: F = 1.79/0.88 = 2.034. Now we can compare the F statistic with the critical value at a 5 percent level of
significance. By using the distribution calculator we find a critical value of 3.073. Since F is only 2.034 we cannot reject our null hypothesis (the second variance is not significantly larger than
the first one).
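The same example can be reproduced with a few lines of Python using scipy (shown here as an illustration; it yields the same critical value as the distribution calculator):

from scipy import stats

v1, v2 = 0.88, 1.79       # sample variances
n1, n2 = 10, 13           # sample sizes
F = v2 / v1               # larger variance in the numerator
df_num, df_den = n2 - 1, n1 - 1

f_crit = stats.f.ppf(0.95, df_num, df_den)   # one-sided, alpha = 0.05
p_value = stats.f.sf(F, df_num, df_den)
print(f"F = {F:.3f}, critical value = {f_crit:.3f}, p = {p_value:.3f}")
# F = 2.034 < 3.073, so H0 (second variance not larger) is not rejected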
Last Update: 2005-Jul-16 | {"url":"http://www.vias.org/tmdatanaleng/cc_test_2sample_ftest.html","timestamp":"2014-04-17T18:46:09Z","content_type":null,"content_length":"6532","record_id":"<urn:uuid:a7cd073b-d811-46a7-abf5-f270d76bb96b>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00085-ip-10-147-4-33.ec2.internal.warc.gz"} |
Parkandbush, NJ Prealgebra Tutor
Find a Parkandbush, NJ Prealgebra Tutor
...Geometry being a prerequisite for algebra 2, I feel very confident that I can successfully help students in geometry. I have a strong math background from college (I was a chemistry major). I
have always enjoyed math and I have been successfully tutoring several students from 5th to 8th grade ...
7 Subjects: including prealgebra, chemistry, French, geometry
...I have been working as an SAT tutor since August 2013 and math tends to be my students' biggest focus. My students all see an improvement in their scores and I have some who have stuck with me
for many months leading up to their test. Math comes very naturally to me and I know how to put the abstract concept in more concrete terms for my students.
15 Subjects: including prealgebra, chemistry, physics, geometry
...After not being in school for twelve years, the first math course that I needed to take was this subject. At first I wasn't sure If I would perform well in this class. But thanks to having a
great teacher, I was able to understand every concept that was taught, and even better, I knew how to explain whatever problems there were to the other students.
15 Subjects: including prealgebra, geometry, statistics, algebra 1
...I did my undergraduate in Physics and Astronomy at Vassar, and did an Engineering degree at Dartmouth. I'm now a PhD student at Columbia in Astronomy (have completed two Masters by now) and
will be done in a year. I have a lot of experience tutoring physics and math at all levels.
11 Subjects: including prealgebra, Spanish, calculus, physics
...I was also name MVP that same year. I've played basketball since I was 14. I played Division 1 for Lehigh University.
16 Subjects: including prealgebra, statistics, precalculus, elementary math
Why the NHL's new conference alignment is unfair for Eastern teams
On Friday the NHL released the 2013-2014 regular season schedule. This season will be the first played under the new conference and division alignment which sent Winnipeg west and Columbus and
Detroit east. A key aspect of the new alignment is that there is an imbalance in the number of teams in the East (16) and in the West (14). In this post I'll discuss a paper I wrote a couple months
back (
How the West will be Won: Using Monte Carlo Simulations to Estimate the Effects of NHL Realignment
). In the paper, I show that because 8 teams make the playoffs from each conference, the new alignment and playoff qualification rules unfairly disadvantage Eastern Conference teams.
Specifically, I'll show that the 8^th seed in the East will (on average) be 2 or 3 points better in the standings than the 8^th seed in the West. And I'll show that about 40% of the time, the 9^th
seed in the East would have made the playoffs if they were in the West (compared to just 20% of the time when the inverse is true).
This research was my first foray into hockey analytics. Back in March the NHL announced it had settled on a plan to realign the conference and division structure. Starting this upcoming season, the
NHL will have two conferences, and each conference will feature two divisions. The Eastern Conference will have 16 teams, and the Western Conference will have 14 teams. Each conference will continue
to feature an 8 team playoff, with the conference winners meeting in the Stanley Cup Final.
It was this final feature of the realignment that caught my attention. How is it that the NHL owners (especially in the East) agreed to a plan in which Eastern teams have a 50% chance of making the
playoffs, compared 57% among Western teams? Almost automatically, it has become harder to qualify in the East than in the West. So this got me thinking about how this new alignment will actually play
out and whether or not it will actually be unfair. I was specifically interested in what I call in the paper the "conference gap," which is the number of end-of-season points by the 8^th seed in the
East minus the points by the 8^th seed in the West.
The obvious problem with learning about the conference gap is that the NHL hasn't yet played a single game under these new rules, let alone a whole season. So, being a good statistician who knows how
to write fancy R code, I decided to simulate full NHL seasons and then calculate the conference gap. And not just 5 or 10 simulated seasons. I simulated 10,000 of them.
The intuition behind my simulation was pretty straightforward. To simulate one season, I randomly draw 30 numeric values from a normal (bell-shaped) distribution. These 30 values correspond to the
underlying quality or ability of the 30 teams. Then I go through and simulate all 1,230 games in the NHL schedule (adjusted appropriately to take into account the new scheduling matrix). For each
game, I draw a number from two normal distributions (each centered at the value of two teams' underlying ability). These two numbers can be thought of the amount of effort or skill the two teams put
forth during this game. The game is "won" by whichever team drew the higher number for their game performance value. I also account for games going into overtime, which I discuss this more thoroughly
on page 12 of the paper.
After I've simulated all 1,230 games, I calculate the final standings as well as the conference gap. Then as a point of comparison, I use the same 30 underlying ability values to rerun the season,
only this time I apply the old schedule matrix, old alignment, and old playoff qualification rules. I repeat all these steps 10,000 times (essentially simulating 10,000 seasons) to get a nice
understanding of how big the conference gap will be. In the paper, I go into more detail about how I tweaked my algorithm, so that I don't end up with the best team having 150 points and the worst
having 15. That discussion starts on page 16 of the
. It suffices to say though that you'd have a pretty tough time telling the difference between results from my simulations and the results from a real NHL season.
The graph below plots the conference gap for the 10,000 seasons simulated under the old and new alignment structures. The blue parts of the graph correspond with the new rules, and the red parts
correspond with the old.
What you can see is that under the old rules the average conference gap was 0.143 points in favor of the Western Conference, although this value is not statistically significantly different from
zero. Under the new rules, however, the average conference gap is 2.76 points. This means that, on average, the 8^th seed in the East had 2.76 more points than the 8^th seed in the West.
The graphs also contain the information of how often the conference gap favored the East versus the West. In the old alignment, 47% of seasons had the better 8^th seed in the West, and 46% of seasons
had a better Eastern 8^th. In the new alignment, this difference gets huge. 62% of the time, the East's 8^th had a better record than the West's 8^th, compared to just 32% of the time the inverse was true
. This means that starting next year, we're about twice as likely to see a better Eastern 8^th. Under a fair set of rules in which your geographic location shouldn't affect your chances of making the
playoffs, these numbers should be 50/50.
My simulations also show that 38% of the time, the 9^th seed in the East would have qualified for the playoffs if they had only been located in the West. The opposite, in which the 9^th Western team
would have made the playoffs in the East, occurs only 21% of the time. Even more striking, my simulations suggest that 21% of the time the 10^th (!) seed in the East would make the playoffs if they
were in the Western Conference.
I might do another post in the next couple days, detailing a couple other interesting things that are in the paper. But until then, here's the takeaway... Expect that in about 62% of seasons, the 8^th seed in the East will have a better record than the 8^th seed in the West. And in 38% of seasons, the 9^th seed in the East will have a better record than the 8^th seed in the West.
4 comments:
1. Your Monte Carlo estimation assumes the distribution of talent is identical across conferences. But lately the West has been better than the East, as indicated by the fact that each of the last
few pre-strike years one or more western conference teams have missed the playoffs with point totals higher than those of Eastern conference teams that got in. If you buy this, then the
unfairness is baked in to the current system based on conferences, the question is just how it will be manifest (in unbalanced conferences) or in allocating playoff spots by conference.
2. Actually, it didn't happen in 11-12 (when the Kings won as the 8 seed in the West) but it did in the two preceding years.
This doesn't take away from your nice statistical analysis, just from the fairness argument. I think the best way to address this is to have the top 16 point total teams make the playoffs. Of
course then the unbalanced schedule would come in to play, but it would probably be better.
3. Thanks for the feedback. I definitely agree that this might serve to balance out the fact that the West is better. In an ideal world though we'd have great balance between the two conferences,
but if this were the case the West would benefit from this structural advantage. I think you're onto something with the suggestion that the top 16 teams should make the playoffs. That would be a
pretty radical change for the NHL to make, but they seem to like to make changes that bring attention to the league. Maybe some day we'll see that happen.
4. Make two more teams, then it will even out.
Dividing a Line Segment into Equal Parts
Date: 11/22/2004 at 08:23:05
From: Iain
Subject: Determining coordinates of equal parts of a line segment
I have two endpoints of a line segment with coordinates A(2, 7) and
B(-4, -2). I am looking for the coordinates of the points that divide
AB into 3 equal parts.
If drawn on a graph, the points that divide AB are shown clearly. But
what if the coordinates of AB do not allow such convenient results?
Date: 11/22/2004 at 09:41:20
From: Doctor Barrus
Subject: Re: Determining coordinates of equal parts of a line segment
Hi, Iain!
Let's look at a picture of a line segment:
O A(2, 7)
O B(-4, -2)
Say we want to split segment AB into 3 equal parts, like this:
O A(2, 7)
. D(?, ?)
. C(?, ?)
O B(-4, -2)
If I understand you correctly, you want to find out the coordinates
of points C and D, right? Well, in order to help you understand my
answer, I'm going to add a little bit more to my drawing. First I'll
draw a vertical line through B and a horizontal line through A, which
will form a triangle:
O A(2, 7)
D . |
/ |
C. |
/ |
O-----O E(2,-2)
B(-4, -2)
Notice that since E is directly below A, its x-coordinate will be 2,
the same as A's. Since E is directly to the right of B, its y-
coordinate will be -2, the same as B's.
Now I'm going to mark points on segment BE directly below points C and
D, and I'm going to mark points on segment AE directly to the right of
C and D:
O A
D . . H
/ |
C. . I
/ |
O-.-.-O E
B F G
Now the x-coordinate of point C is the same as the x-coordinate of
point F, right? So I'm going to try to find the x-coordinate of point
F, and when I do, the answer will also be the x-coordinate of point C.
Let's just look at that bottom side of the triangle, segment BE:
-4 2
B F G E
The x-coordinate of B is -4, and the x-coordinate of E is 2. So the
distance between B and E is 6, because 2 - (-4) = 6.
-4 2
B F G E
<--------- 6 --------->
Now F is 1/3 of the way from B to E. Since the total distance from B
to E is 6, the distance from B to F is (1/3)*6 = 2 (Here * means
multiplication). So F is 2 units away from B, and G is 2 units away
from F. So we get the picture
-4 -2 0 2
B F G E
<- 2 -> <- 2 -> <- 2 ->
So the x-coordinate of F is -2, and the x-coordinate of G is 0. If
we look back the triangle we drew, we see that C also has to have x-
coordinate -2, and D has to have x-coordinate 0.
O A
D . . H
/ |
C. . I
/ |
O-.-.-O E
B F G
Now look at the segment AE with its y-coordinates:
7 O A
O H
O I
-2 O E
Can you find out what the y-coordinates for H and I should be? The
answers should tell you what the y-coordinates for D and C should be.
I hope this has helped. If you'd like a little more explanation,
please write us back with your questions. Good luck!
- Doctor Barrus, The Math Forum | {"url":"http://mathforum.org/library/drmath/view/66794.html","timestamp":"2014-04-21T05:03:35Z","content_type":null,"content_length":"8301","record_id":"<urn:uuid:3017a01c-5cc5-4eea-9d7e-4218abfe8fc2>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00509-ip-10-147-4-33.ec2.internal.warc.gz"} |
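As a compact illustration of the general rule: a point k/n of the way from A to B has coordinates (ax + k(bx - ax)/n, ay + k(by - ay)/n). A short Python sketch of this (an illustration, not part of the original exchange):

def divide_segment(a, b, n):
    """Points that divide segment AB into n equal parts (endpoints excluded),
    listed in order from A toward B."""
    (ax, ay), (bx, by) = a, b
    return [(ax + k * (bx - ax) / n, ay + k * (by - ay) / n)
            for k in range(1, n)]

# the example above: A(2, 7), B(-4, -2), three equal parts
print(divide_segment((2, 7), (-4, -2), 3))   # [(0.0, 4.0), (-2.0, 1.0)]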
Math Forum Discussions
Topic: Discrete Fourier Transform in 2D
Replies: 2 Last Post: May 7, 2011 10:36 PM
Re: Discrete Fourier Transform in 2D
Posted: Apr 16, 2011 12:47 AM
On Apr 15, 10:36 pm, "Will C." <will53...@gmail.com> wrote:
> Hi,
> I am trying to creating an algorithm to compute the fourier transform
> of a 2D array for use in a program which compares the performance of
> image filters in the spatial vs frequency domain.
> Part of the transform equation contains the term: exp(-j * 2 * pi *
> ((u * x) / M + (v * y) /N))
> My question is, how can I get a real solution from this? Since u, v,
> x, and y are indexes they are positive, and since M and N are the
> dimensions of the original array they are also positive.
> This leaves something like: exp(-j * c), where c is a positive
> constant which is calculated from the above givens. How can I ever
> get a real solution from this?
Why do you expect to get a real solution? Typically, the DFT of a real
function is a complex conjugate-symmetric function (see, e.g., the
conjugate-symmetry property of the DFT).
The inverse DFT of a complex conjugate-symmetric function is real.
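This is easy to verify numerically; a short NumPy check (illustrative, with a random real 8x8 input):

import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))          # real 2D signal

X = np.fft.fft2(x)
# conjugate symmetry: X[-u, -v] == conj(X[u, v]) for a real input
idx = (-np.arange(8)) % 8
sym = np.conj(X[idx][:, idx])
print("conjugate-symmetric:", np.allclose(X, sym))

# the inverse transform of such a spectrum is real (up to round-off)
print("max |imag| of ifft2:", np.abs(np.fft.ifft2(X).imag).max())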
Date Subject Author
4/15/11 Discrete Fourier Transform in 2D Will C.
4/16/11 Re: Discrete Fourier Transform in 2D Dave Dodson
5/7/11 Re: Discrete Fourier Transform in 2D vjp2.at@at.BioStrategist.dot.dot.com | {"url":"http://mathforum.org/kb/message.jspa?messageID=7433448","timestamp":"2014-04-16T10:41:50Z","content_type":null,"content_length":"19510","record_id":"<urn:uuid:0fa4402f-49d4-4ae7-b1e2-638c97dc00f3>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00273-ip-10-147-4-33.ec2.internal.warc.gz"} |
Summary: Biased Random Walks

Yossi Azar, Andrei Z. Broder, Anna R. Karlin, Nathan Linial, Steven Phillips

How much can an imperfect source of randomness affect an algorithm? We examine several simple questions of this type concerning the long-term behavior of a random walk on a finite graph. In our setup, at each step of the random walk a "controller" can, with a certain small probability, fix the next step, thus introducing a bias. We analyze the extent to which the bias can affect the limit behavior of the walk. The controller is assumed to associate a real, nonnegative, "benefit" with each state, and to strive to maximize the long-term expected benefit. We derive tight bounds on the maximum of this objective function over all controller's strategies, and present polynomial time algorithms for computing the optimal controller strategy.

1 Introduction

Ever since the introduction of randomness into computing, people have been studying how imperfections in the sources of randomness affect the outcome

* Department of Computer Science, Tel Aviv University, Israel. This research was supported in part by the Alon Fellowship and the Israel Science Foundation administered
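As a toy illustration of the setup (not from the paper — the graph, bias probability, and controller rule below are all made up), consider a controller on an n-cycle who always pushes the walk toward a single high-benefit node:

import numpy as np

rng = np.random.default_rng(0)

def biased_walk_on_cycle(n=20, eps=0.05, steps=200_000):
    # with probability eps the controller fixes the step, pushing the walk
    # along the shorter arc toward node 0 (the single high-benefit state)
    visits = np.zeros(n, dtype=int)
    v = 0
    for _ in range(steps):
        if rng.random() < eps and v != 0:
            v = (v - 1) % n if v <= n // 2 else (v + 1) % n
        else:
            v = (v + rng.choice((-1, 1))) % n
        visits[v] += 1
    return visits / steps

occ = biased_walk_on_cycle()
# even a small bias concentrates long-run occupancy near the favored node
print("occupancy at nodes 0, 1, n-1:", occ[[0, 1, -1]].round(3).tolist())
print("uniform baseline:", round(1 / 20, 3))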
Summary: Polynomial Time Approximation Schemes for Euclidean Traveling Salesman and other Geometric Problems

Sanjeev Arora
Princeton University

We present a polynomial time approximation scheme for Euclidean TSP in fixed dimensions. For every fixed c > 1 and given any n nodes in $\mathbb{R}^2$, a randomized version of the scheme finds a (1 + 1/c)-approximation to the optimum traveling salesman tour in $O(n (\log n)^{O(c)})$ time. When the nodes are in $\mathbb{R}^d$, the running time increases to $O(n (\log n)^{(O(\sqrt{d}\,c))^{d-1}})$. For every fixed c, d the running time is n · poly(log n), i.e., nearly linear in n. The algorithm can be derandomized, but this increases the running time by a factor $O(n^d)$. The previous best approximation algorithm for the problem (due to Christofides) achieves a 3/2-approximation in polynomial time.

We also give similar approximation schemes for some other NP-hard Euclidean problems: Minimum Steiner Tree, k-TSP, and k-MST. (The running times of the algorithm for k-TSP and k-MST involve an additional multiplicative factor k.) The previous best approximation algorithms for all these problems achieved a constant-factor approximation. We also give efficient approximation schemes for Euclidean Min-Cost Matching, a problem that can be solved exactly in polynomial time.
Prisoner's Dilemma

A classic example of game theory.

Two criminal partners are captured and interrogated separately. If one confesses and the other stays silent, the confessor will get off scot free and the tight-lipped one will get a heavy sentence. If neither confesses, they each get a light sentence. If both confess, they both get a heavy sentence.

What would you do?

The strategies involved in this game, when analysed mathematically, are complex and fascinating.
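A tiny illustration of those strategies: the sentence lengths below are conventional stand-ins (the text above gives no numbers), and the simulation pits tit-for-tat against unconditional confession in the repeated game:

# sentences in years, lower is better; C = stay silent, D = confess
SENTENCE = {('C', 'C'): 1, ('C', 'D'): 10, ('D', 'C'): 0, ('D', 'D'): 8}

def play(strategy_a, strategy_b, rounds=10):
    history_a, history_b, total_a, total_b = [], [], 0, 0
    for _ in range(rounds):
        # each strategy sees only the opponent's past moves
        a, b = strategy_a(history_b), strategy_b(history_a)
        total_a += SENTENCE[(a, b)]
        total_b += SENTENCE[(b, a)]
        history_a.append(a)
        history_b.append(b)
    return total_a, total_b

always_defect = lambda opp: 'D'
tit_for_tat = lambda opp: opp[-1] if opp else 'C'

print("TFT vs TFT:   ", play(tit_for_tat, tit_for_tat))    # mutual silence
print("TFT vs defect:", play(tit_for_tat, always_defect))  # punished once, then mirrors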
Re: st: how can i make my loop run faster?
From Partho Sarkar <partho.ss+lists@gmail.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: how can i make my loop run faster?
Date Tue, 20 Sep 2011 10:54:01 +0530
Hello Stefano
I am not quite sure I understand what you are proposing, but I suppose
this ought to be do-able, though I can't offhand think of any direct
way to do this. I think this would also be an extremely cumbersome
way to do this, and almost certainly even slower than any of the
others we have talked about. However, I had thought of another
possible approach:
Since you have a "short" panel- "only" 200 periods, but many more
firms, I would think dividing up your sample period into, say, 10
(non-overlapping) subsets, and doing a statsby regression for each
would give you the results for all (or selected subset of) firms
within each time sub-period. You would still have to
combine(merge/append) the results.
But why not try the rolling loop as suggested in the thread I cited,
first splitting the firms into manageable subsets (which I think you
would have to do in any case, if you want to run this routine- unless
you have access to a super-computer!) ?
By the way, I am curious to know what exactly the source of your data is!
Hope this helps
On Tue, Sep 20, 2011 at 10:20 AM, Stefano Rossi <sr525@cornell.edu> wrote:
> Dear Partho,
> many thanks for this, which is very useful. I can see how "rolling" works, and I can see how it can generate efficiency gains, but I agree the whole procedure may still be quite slow and require splitting the sample into subsets to get a faster procedure in some way.
> I am currently considering a different path, namely generating a cross-section of observations by firm-period, whereby each firm-period unit contains 12 observations, from -1 to -12 (I would also have a separate data by +1 to +12). This procedure would effectively produce a dataset 12 times larger than my current one. This procedure would get around the "rolling" issue, and would allow me to use the "statsby" (or equivalent) command without worrying of the length of the estimation sample, with potentially large efficiency improvements (i.e., no "ifs").
> Provided my intuition is correct, my one concern here is how to create such dataset, which is 12 times bigger than the current one. Is there a built-in Stata command that allows to do this efficiently?
> Many thanks for your support.
> Kind regards,
> Stefano
> ________________________________________
> From: owner-statalist@hsphsun2.harvard.edu [owner-statalist@hsphsun2.harvard.edu] On Behalf Of Partho Sarkar [partho.ss+lists@gmail.com]
> Sent: Tuesday, September 20, 2011 12:33 AM
> To: statalist@hsphsun2.harvard.edu
> Subject: Re: st: how can i make my loop run faster?
> I guess Stefano might have solved his problem by now, but just to
> complete this, here is a post by Brian R. Landy from an older thread
> which gives the complete code for -rolling-, including merging the
> results files.
> http://www.stata.com/statalist/archive/2009-09/msg01239.html
> The thread also points out the speed problems with rolling for panel data.
> P.Sarkar
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2011-09/msg00828.html","timestamp":"2014-04-19T20:46:26Z","content_type":null,"content_length":"12704","record_id":"<urn:uuid:4d2a2d31-b177-4929-81b8-f0d883f38cb7>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00173-ip-10-147-4-33.ec2.internal.warc.gz"} |
Wolfram Demonstrations Project
Comparing Two Isochronous Oscillators
This Demonstration compares the motions of two well-known one-dimensional isochronous oscillators, in a simulation that uses analytical expressions for the trajectories. One of the potentials considered is parabolic (harmonic) and the other is sheared (anharmonic) [1, 2, 3]. You can obtain the time evolution using analytic solutions for the second potential. For simplicity, dimensionless units are used, with the mass set equal to 1. Arrows are shown to represent the forces. Students are encouraged to modify the simulation in order to obtain animations for modified energies. They can thus verify the isochronicity of the sheared potential, which follows from its closed-form solution.
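The Demonstration itself is interactive, but the defining property (the period is independent of amplitude) is easy to check numerically for the harmonic case. A minimal sketch, in MATLAB rather than the Demonstration's Mathematica, with mass and stiffness set to 1 so the exact period is 2*pi, and with arbitrarily chosen amplitudes:

    % Integrate x'' = -x over exactly one analytic period for several
    % amplitudes; the position should return to its starting value each time.
    f = @(t, y) [y(2); -y(1)];              % state y = [position; velocity]
    for A = [0.5 1 2]
        [~, y] = ode45(f, [0 2*pi], [A; 0]);
        fprintf('Amplitude %.1f: x(2*pi) = %.4f (started at %.1f)\n', ...
                A, y(end,1), A);
    end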
[1] C. Antón and J. L. Brun, "Isochronous Oscillations: Potentials Derived from a Parabola by Shearing," American Journal of Physics, 76(6), 2008 pp. 537–540.
[2] A. B. Pippard, The Physics of Vibration, Vol. 1, Cambridge, UK: Cambridge University Press, 1978.
[3] T. W. B. Kibble and F. H. Berkshire, Classical Mechanics, London: Imperial College Press, 2004.
Definition of Hurewicz map relating $SH(k)$ with $DM_\_^{eff}(k)$
In "Motivic Homotopy Theory" on page 153 it is stated that there exists a canonical Hurewicz map relating the motivic stable homotopy category with the category $\operatorname{DM}_\_^{eff}(k)$.
Unfortunatly, I was not able to find a definition of such a map and every attempt to define one myself ended in vain.
1 Answer
There's a free-forgetful adjunction between $SH_s(k)$ and $DM^{eff}(k)$, where $SH_s(k)$ is the category of $S^1$-spectra (as opposed to $\mathbb{P}^1$-spectra). The right adjoint simply takes a sheaf of chain complexes with transfers in $DM^{eff}(k)$ to its underlying sheaf of spectra (i.e. view chain complexes as spectra, and forget transfers).
You can then upgrade this adjunction to an adjunction between $SH(k)$ (= $SH_s(k)$ with $\Sigma^\infty\mathbb{G}_m$ inverted) and $DM(k)$ (= $DM^{eff}(k)$ with $\mathbb{Z}(1)[1]$ inverted). This works because the left adjoint above is symmetric monoidal and sends $\Sigma^\infty\mathbb{G}_m$ to $\mathbb{Z}(1)[1]$. The motivic Hurewicz map is the unit of this adjunction.
To get a functor $DM^{eff}\to SH$ you would first map $DM^{eff}$ to $DM$ (this is an embedding if $k$ is perfect), and then use the right adjoint $DM\to SH$.
For detailed constructions with model categories see section 2.2 in Modules over motivic cohomology by Röndigs and Østvær.
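In symbols (the notation here is ad hoc: $c$ for the left adjoint adding transfers, $u$ for its forgetful right adjoint):
$$c : SH(k) \rightleftarrows DM(k) : u, \qquad \text{Hurewicz map} = \eta_E : E \longrightarrow u\,c(E).$$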
Thanks a lot! I am not quite sure if I got how the right adjoint works. Is the structure of the spectra induced by the map $C\to\Sigma\Omega C\to\Sigma C[-1]$? – Felix Wellen Feb 6 '13 at 23:32
No, it goes as follows. An unbounded chain complex is an infinite delooping of a connective chain complex. A connective chain complex is the same thing as a simplicial abelian group (Dold-Kan). So if you forget the abelian group structure, you get an infinite delooping of a simplicial set, i.e., a spectrum. This is called the stable Dold-Kan correspondence, see e.g. ncatlab.org/nlab/show/… – Marc Hoyois Feb 7 '13 at 3:58
Thanks again! After some thought, I think I got it. – Felix Wellen Feb 12 '13 at 14:00
Kurt Gödel
Who the hell is Kurt Gödel, and why should I care?
The Austrian mathematician and logician Kurt Gödel is mostly known nowadays for his incompleteness theorem. Simply put, his theorem states that any sufficiently strong axiomatic system is either inconsistent or incomplete. The implications of this discovery for the fields of mathematics and logic were massive: logicians had long been trying to explain more or less the entire universe from a few simple logical axioms. Notably, Bertrand Russell and Alfred North Whitehead's mammoth work Principia Mathematica was an attempt to explain all of mathematics from such a logical, axiomatic system. Gödel's theorem, it can be said, forced logic to learn humility. His discovery was a landmark, showing that mathematics is not a finished (and possibly not finishable) object. The way he arrived at the theorem (using a modified version of Epimenides' paradox) seems almost provokingly simple in hindsight. Most people who know of him nowadays do so either because they themselves are mathematicians, or because they've read Douglas Hofstadter's famous work Gödel, Escher, Bach: An Eternal Golden Braid (which would typically imply that they're geeks anyway).
Gödel's Childhood and Education
Kurt Gödel was born on the 28th of April 1906 in Brünn, in what was then Austria-Hungary (the town is now called Brno, and is part of the Czech Republic). He went to school in his town of birth, and completed the Gymnasium (a European senior secondary school, roughly equivalent to a US high school) in 1923. At that time, he had already mastered university-level mathematics. According to his brother Rudolf Gödel, he had gained himself quite a reputation not only because of his mathematical talent, but also for the fact that he was a quite adept linguist, reputedly never having made a single grammatical error in Latin during his entire time at school. He entered the University of Vienna the same year he finished the Gymnasium, doing undergraduate work in the field of mathematical philosophy. From there, he quickly developed an intense interest in formal logic, and according to his fellow students, showed incredible talent in this field right from the start. He completed his doctoral dissertation under the supervision of professor Hans Hahn, and until 1938 he belonged to the philosophical school of logical positivism.
The Incompleteness Theorem
One of the things that caught Gödel's interest during his time at university was Bertrand Russell's work. As an undergraduate he had studied Russell's book Introduction to Mathematical Philosophy, and he later made an intensive study of Russell's main work, the Principia Mathematica. In 1931, Gödel published his paper Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme ("On Formally Undecidable Propositions of Principia Mathematica and Related Systems"), in which he presented his famous incompleteness theorem. His work was like a kick in the face of hundreds of years of desperate attempts to place the entire structure of mathematics on an axiomatic foundation. One of the "related systems" demolished by Gödel's theorem was David Hilbert's formalism, which attempted to describe mathematics as a formal system. One of the more practical implications of Gödel's incompleteness theorem is that it is impossible to program a computer to answer all mathematical questions. He arrived at his theorem using a modified form of Epimenides' so-called "liar paradox", which in its normal form says "this sentence is false". The paradox should be obvious, but for the benefit of those who forgot to load logic.so while getting out of bed:
• If the sentence "this sentence is false" is indeed false, it is in fact speaking the truth. It says it's false, after all.
• If, on the other hand, it is true, the sentence is a dirty liar. It claims to be false!
Gödel's theorem was expressed in a more complicated mathematical lingo (and I'll admit I've never personally read his aforementioned paper; but that's because I'm a lazy-assed non-mathematician. So sue me.), but it boils down to the idea that in any sufficiently strong formal system, it is possible to express a theorem which says "this theorem is not provable". If the theorem is indeed provable, the system is inconsistent, because it houses a self-contradiction. If the theorem is not provable, it is saying that the system is incomplete. Ha-ha! A smack in the face, a logic bomb planted straight into the foundation of those systems that tried to place mathematics on an axiomatic foundation. A related implication is that provability is a weaker notion than truth.
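For the mathematically inclined, the trick can be written schematically in modern notation (a standard textbook rendering, not Gödel's own 1931 formalism): for a sufficiently strong, recursively axiomatized theory $T$, one constructs a sentence $G$ with

$$T \vdash G \leftrightarrow \neg\,\mathrm{Prov}_T(\ulcorner G \urcorner),$$

so that if $T$ is consistent, it can prove neither $G$ nor $\neg G$.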
Gödel's Life during World War II
When Adolf Hitler seized political power in Germany in 1933, Gödel didn't particularly care. He didn't live in Germany, and in the time-honoured tradition of geeks everywhere, he didn't particularly care about politics. That changed in 1936, when a former student of Gödel's old teacher Moritz Schlick (who had taught Gödel's undergraduate class in mathematical philosophy) murdered Schlick. Schlick had been the man to spur Gödel's interest in logic, and the event triggered a full-scale emotional breakdown in Gödel, one of several nervous breakdowns he suffered during the 1930s. He had already spent time at Princeton as a visitor; his 1934 lectures there have been published by Stephen Cole Kleene under the title "On undecidable propositions of formal mathematical systems". In 1938, Gödel returned to Vienna to marry Adele Porkert. War broke out shortly afterwards, leaving the two trapped in Nazi-"reunited" Austria. Determined to return to teach at Princeton and just as determined to stay alive, Gödel decided to flee Europe. Crossing the Atlantic was impossible, so he took the long way: travelling through Russia and then Japan, Gödel and his wife eventually found themselves in the US in 1940, at which time they formally emigrated there.
Death of a Logician: Paranoid delusions
From 1953 to his death in 1978, Gödel held a chair at Princeton's Institute for Advanced Study, and received the US National Medal of Science in 1974. He had written and published several acclaimed scientific works, most famously including the aforementioned "On Formally Undecidable....." as well as "Consistency of the axiom of choice and of the generalized continuum hypothesis with the axioms of set theory" (released 1940). His works are considered classics of modern mathematical logic, and he had much to be proud of. He had many quite intelligent friends with whom he loved to debate philosophy, famously including Albert Einstein and John von Neumann.
Unfortunately, he was also gloriously insane. He was strongly opinionated about everything in his life, not only mathematics but also things he really didn't have much knowledge about, such as medicine. Added to that, he had a quite profound case of paranoia, and believed that unseen enemies were stalking him and trying to kill him. He suffered a duodenal ulcer with severe bleeding, and put together an extremely strict diet for himself, which defied the advice his doctors had given him and caused him to slowly lose weight. Near the end of Gödel's life, his wife Adele was hospitalized for cardiac problems, and Gödel refused to eat. He himself could not cook, and he trusted nobody other than Adele to cook for him, believing that they would put poison in his food. He was found dead in his bed in 1978, curled up in a fetal position, having starved to death. He was survived by Adele; they never had any children.
"Either mathematics is too big for the human mind or the human mind is more than a machine."
--Kurt Gรถdel
• http://www-gap.dcs.st-and.ac.uk/~history/Mathematicians/Godel.html
• http://www.andrews.edu/~calkins/math/biograph/biogodel.htm
• Douglas R. Hofstadter: "Gödel, Escher, Bach: An Eternal Golden Braid", ISBN 0465026850
Note: His name is pronounced roughly like "Koort GURdle". | {"url":"http://everything2.com/title/Kurt+Godel?author_id=1386733","timestamp":"2014-04-17T12:40:51Z","content_type":null,"content_length":"33152","record_id":"<urn:uuid:622d555f-615a-4f89-91eb-f5b80097b982>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00219-ip-10-147-4-33.ec2.internal.warc.gz"} |
Anderson, TX Math Tutor
Find an Anderson, TX Math Tutor
...Academically, I received the Phi Eta Sigma Freshman Scholastic Honorary and the Phi Kappa Phi Honor Society awards for academic achievement. My broad education brings a variety of approaches
to tutoring. I am at ease in communicating the practical approaches from engineering in tutoring as well as the more theoretical approaches of math and physics.
54 Subjects: including statistics, Praxis, MCAT, public speaking
I stress the learning of problem-solving skills above and beyond just learning facts. If you are willing to try and are open to new approaches, I can help you develop skills that can last a
lifetime. I have a Ph.D. in Analytical Chemistry and have many years of industry experience using both analytical and organic chemistry.
6 Subjects: including algebra 1, algebra 2, trigonometry, geometry
...I have taught Algebra II for over 4 years with a high success rate. All of my students continued to Precalculus and were successful in both subjects. I teach several different methods so that
the students can have options on how to solve the problems.
8 Subjects: including algebra 1, algebra 2, biology, geometry
...I specialize in tutoring math (elementary math, geometry, prealgebra, algebra 1 & 2, trigonometry, precalculus, etc.), Microsoft Word, Excel, PowerPoint, and VBA programming. I'd love to talk
more about tutoring for your specific situation and look forward to hearing from you.During my time at T...
17 Subjects: including geometry, elementary math, reading, ACT Math
...I have experience in lab techniques for all of these subjects. I also have a well-rounded, successful writing background, in both general and scientific research, as well as persuasive,
problem/solution, and informative essays. I have been a certified pharmacy technician for several years and, ...
15 Subjects: including algebra 1, prealgebra, chemistry, reading
User Noah Stein
Website: mit.edu/~nstein · Location: MIT · Age: 30
I completed my graduate studies in MIT's LIDS, an applied math-ish program within the EE department. Now I do research in industry at Analog Devices | Lyric Labs, primarily on algorithms for the
audio source separation ("cocktail party") problem.
My thesis research was on the mathematical side of game theory with a view towards computation. In terms of mathematical classifications, I consider myself more of a "theory builder" (at least
aspirationally) than a "problem solver" and I have a preference for "soft analysis" over "hard analysis".
Tony directs the machine learning lab at Columbia. Our research area is machine learning, a field which develops novel algorithms that use data to model complex real-world phenomena and to make accurate predictions about them. Our work spans both the applied and the fundamental aspects of the field. We have made contributions by migrating generalized matching, graph-theoretic methods and combinatorial methods into the field of machine learning. We have also introduced unifying frameworks that combine two separate methodologies in the field: discriminative approaches and generative approaches. The machine learning applications we have worked on have had real-world impact, achieving state-of-the-art results in vision, face recognition and spatio-temporal tracking, and leading to a successful startup which applies machine learning to spatio-temporal data. Our papers are available online.
Learning with Matchings and Perfect Graphs
Learning with Generalized Matchings
We are interested in the application of generalized matching and other graph-theoretic / combinatorial methods to machine learning problems. Matching is an important tool in many fields and settings.
One famous example of generalized matching is Google's AdWords system which approximately matches advertisers to search terms (AdWords generates a significant portion of Google's revenue). Until
recently, matching has had limited impact in machine learning. In the past 5 years, we have introduced and explored the connections between machine learning problems and generalized matching. In
1965, Jack Edmonds began pioneering work into the combinatorics of matching and its many extensions including b-matching. This work received the following citation during Edmonds' receipt of the 1985
John von Neumann Theory Prize, Jack Edmonds has been one of the creators of the field of combinatorial optimization and polyhedral combinatorics. His 1965 paper "Paths, Trees, and Flowers" was one of
the first papers to suggest the possibility of establishing a mathematical theory of efficient combinatorial algorithms.
Matchings and generalized matchings are a family of graphs (such as trees) that enjoy fascinating combinatorics and fast algorithms. Matching goes by many names and variations including permutation
(or permutation-invariance), assignment, auction, alignment, correspondence, bipartite matching, unipartite matching, generalized matching and b-matching. The group of matchings is called the
symmetric group in algebra. We introduced the use of matching in machine learning problems and showed how it improves the state of the art in machine learning problems like classification,
clustering, semisupervised learning, collaborative filtering, tracking, and visualization. We have also shown how matching leads to theoretical and algorithmic breakthroughs in MAP estimation,
Bayesian networks and graphical modeling which are important topics in machine learning.
We initially explored matching from a computer vision perspective where it was already a popular tool. For instance, to compare two images for recognition purposes, it is natural to first match
pixels or patches in the images by solving a bipartite matching problem. We first explored matching for this type of image alignment and interleaved it into unsupervised and supervised learning
problems. For instance, we explored matching within unsupervised Gaussian modeling and principal components analysis settings (ICCV 03, AISTAT 03, COLT 04). We also explored matching in
classification frameworks by building kernels that were invariant to correspondence (ICML 03) and classifiers that were invariant to permutation (ICML 06b). These methods significantly outperformed
other approaches particularly on image datasets when images were represented as vector sets or bags of pixels (ICCV 03). The paper A Kernel between Sets of Vectors (ICML 03) won an award at the
International Conference on Machine Learning in 2003 out of a total of 370 submitted papers. Given the empirical success of matching in image problems, we considered its application to other machine
learning problems. One key insight was that matching acts as a general tool for turning data into graphs. Once data is in the form of a graph, a variety of machine learning problems can be directly
handled by algorithms from the graph theory community. We began pioneering the use of b-matching, a natural extension to matching in a variety of machine learning problems. We showed how it can lead
to state of the art results in visualization or graph embedding (ICML 09, AISTATS 07d), clustering (ECML 06), classification (AISTAT 07b), semisupervised learning (ICML 09b), collaborative filtering
(NIPS 08b), and tracking (AISTAT 07a).
We were initially motivated by the KDD 2005 Challenge, a competition organized by the CIA and NSA to cluster anonymized documents by authorship. By using b-matching prior to spectral clustering algorithms, we obtained the best average accuracy in the ER1B competition, competing against 8 funded teams from various other universities. The work was published in a classified journal (Jebara, Shchogolev and Kondor in JICRD 2006) and a related non-classified publication can be found in (ECML 06). A simple way of seeing the usefulness of b-matching is to consider it as a principled competitor to the most popular and simplest machine learning algorithm: k-nearest neighbors. For instance, consider connecting the points in the figure below using 2-nearest-neighbors. Typically, when computing k-nearest-neighbors, many nodes will actually end up with more than k neighbors since, in addition to choosing k neighbors, each node itself can be selected by other nodes during the greedy procedure. This situation is depicted in Figure (a) below. Here, several two-ring datasets are shown and should be clustered into the two obvious components: the inner ring and the outer ring. However, when the smaller ring's radius is similar to the larger ring's radius or when the number of samples on each ring is low, k-nearest-neighbors does not recover the two-ring connectivity correctly. This is because nodes in the smaller inner ring get over-selected by nodes in the outer ring and eventually have large neighborhoods. Ultimately, k-nearest-neighbor algorithms do not produce regular graphs as an output, and the degree for many nodes (especially on the inner ring) is larger than k. This irregularity in k-nearest-neighbors can get exponentially worse in higher-dimensional datasets, as shown by (Devroye and Wagner, 1979). This limits the stability and generalization power of k-nearest neighbors via the so-called kissing number in d-dimensional space.
Conversely, b-matching methods guarantee that a truly regular graph (of minimum total edge length) is produced. Figure (b) shows the output of b-matching and therein it is clear that each node has
exactly 2 neighbors. Even spectral clustering will not tease apart these two rings at some scales and sampling configurations.
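A toy numerical illustration of this irregularity (the point cloud and the value of k are arbitrary choices; a b-matching with b = k would instead return degree exactly k for every node):

    % Build a symmetrized k-nearest-neighbor graph and inspect its degrees.
    rng(0);
    n = 60;  k = 2;
    X = randn(n, 2);                                       % random 2-D points
    D2 = max(sum(X.^2,2) + sum(X.^2,2).' - 2*(X*X'), 0);   % squared distances
    D2(1:n+1:end) = inf;                                   % exclude self-matches
    [~, idx] = sort(D2, 2);
    A = zeros(n);
    for i = 1:n
        A(i, idx(i,1:k)) = 1;                  % each node picks its k nearest
    end
    A = max(A, A');                  % being picked by other nodes adds edges
    deg = sum(A, 2);
    fprintf('k-NN degrees range from %d to %d (k = %d).\n', ...
            min(deg), max(deg), k);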
After showing how matching improves spectral clustering in practice (ECML 06), we investigated its use in other machine learning settings. For instance, in (AISTAT 07b), we showed how it can perform
more accurate classification than k-nearest neighbors without requiring much more computational effort. In visualization problems where high dimensional datasets are typically turned into k-nearest
neighbor graphs prior to low dimensional embedding (AISTATS 07d), we showed how b-matching yields more balanced connectivity and more faithful visualizations (Snowbird 06, ICML 09). The latter paper
Structure Preserving Embedding won the best paper award at the International Conference on Machine Learning in 2009 out of a total of 600 submitted papers. In addition, in semisupervised learning and
so-called graph transduction methods (ICML 08), we showed how b-matching consistently improves accuracy (ICML 09b). In collaborative filtering problems, we applied a generalization of b-matching
where each node need not have constant degree b but rather any customizable distribution over degrees and this yielded more accurate recommendation results (NIPS 08b). Finally, by realizing that
matchings form a finite group in algebra known as the symmetric group, we have managed to apply group-theoretic tools such as generalized fast Fourier transforms to solve problems over distributions
on matchings (AISTAT 07a). Such distributions over matchings are useful in multi-target tracking since, at each time point, the identity of current targets needs to be matched to previously tracked
trajectories. However, since there are n! matchings, these distributions need to be band-limited using Fourier analysis to remain efficient (polynomial). To our knowledge, this was the first
application of group theory to matchings in a machine learning and or tracking setting and led to improvement over previous methods.
Matching, MAP Estimation and Message Passing
One of the obstacles in applying off the shelf generalized matching solvers to real machine learning problems is computation time. For instance, the fastest solvers from the combinatorial
optimization community were too computationally demanding to handle more than a few hundred data points (ECML 06). This led us to build our own algorithms for solving b-matchings by trying a
generally fast machine learning approach known as message passing (or belief propagation which is used interchangeably here). This method is extremely popular in machine learning problems and is used
to perform approximate inference in graphical models and decomposable probability distributions. In (AISTAT 07b), we realized that b-matching problems could be written as a graphical model, a
probability density function that decomposes into a product of cliques in a sparse graph. To find the maximum a posteriori (MAP) estimate of this graphical model, we implemented message passing, a
clever and fast way of approximating the b-matching problem by sending messages along the graph. Message passing works for MAP estimation on tree-structured graphical models but only gives
approximate answers on graphical models with loops. Since the b-matching graphical model had many loops, all we could hope for was suboptimal convergence some of the time. Amazingly, message passing
on this loopy graphical model efficiently produced the correct answer all the time. We published a proof guaranteeing the efficient and exact convergence of message passing for b-matching problems in
(AISTAT 07b), and showed one of the few cases where the optimality is provably preserved despite loops in the graphical model. The optimality of message passing for trees (Pearl 1988) and single-loop
graphs (Weiss 2000) were previously known however the extension to b-matching represents an important special case. This was significant since defining which graphical models admit exact inference
has been an open and exciting research area in the field of machine learning (and other communities such as decoding) for many years. Graphical models corresponding to matching, b-matching and
generalized matching extend known guarantees beyond trees and single loop graphs to a wider set of models and problems. Today, the connection between message passing and matching is a promising area
of theoretical research with contributions from multiple theory researchers (Bayati, Shah and Sharma 2005, Sanghavi, Malioutov and Willsky 2008, as well as Wainwright and Jordan 2008).
Simultaneously, several applied researchers have downloaded our code (available online) and are efficiently solving large scale (bipartite and unipartite) b-matching problems using message passing.
Our group uses the code regularly for the various papers described in the previous section. In our original article we proved that the algorithm requires O(n^3) time for solving maximum weight
b-matchings on dense graphs with n nodes yet empirically our code seemed to be much more efficient. Recently, (Salez and Shah 2009) proved that, under mild assumptions, message passing methods take O
(n^2) on dense graphs. This result indicates that our implementation is one of the fastest around and applies to problems of comparable scale as k-nearest neighbors. Currently, we are working on
other theoretical guarantees for b-matching in terms of its generalization performance when applied to independently sampled data. To do so, we are attempting to characterize the stability of
b-matching compared to k-nearest neighbors by following the approach of (Devroye and Wagner 1979) which should yield theoretical guarantees on its accuracy in machine learning settings.
Matchings, Trees and Perfect Graphs
Matchings, b-matchings and generalized matchings are just families of graphs in the space of all possible undirected graphs on n nodes. There are n! possible matchings and even more possible
b-matchings. Amazingly, however, maximum weight matchings can be recovered efficiently either by using Edmonds' algorithm or by using matching-structured graphical models which admit exact inference
via message passing. Similarly, there are n^(n-2) trees and tree-structured graphical models admit exact inference via message passing.
We have worked on tree structured graphical modeling in several settings because of these interesting computational advantages. For instance, in (UAI 04), we developed tree hierarchies of dynamical
systems that allow efficient structured mean field inference. These were useful for multivariate time series tracking and other applied temporal data. Another remarkable property of trees is that
summation over a distribution over all n^(n-2) trees can be done in cubic time using Kirchhoff's and Tutte's theorems. In (NIPS 99) we proposed inference over the distribution of all possible undirected trees over n nodes using Kirchhoff's matrix tree theorem. In (UAI 08) we developed inference over the distribution of all possible out-directed trees over n nodes using Tutte's matrix tree theorem. These methods are useful in modeling non-i.i.d. data such as phylogenetic data, disease spread, language arborescence models and so forth.
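As a concrete illustration, Kirchhoff's matrix tree theorem reduces counting the spanning trees of a graph to a single determinant; a toy check (the graph is an arbitrary example):

    A = [0 1 1 1;                          % adjacency matrix of a 4-node graph
         1 0 1 0;
         1 1 0 1;
         1 0 1 0];
    L = diag(sum(A,2)) - A;                % graph Laplacian
    nTrees = round(det(L(2:end,2:end)));   % drop any one row/column, take det
    fprintf('The example graph has %d spanning trees.\n', nTrees);  % prints 8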
Not only do trees have fascinating combinatorial and computational properties, they lead to important empirical results on real world machine learning problems. Similarly, not only do matchings have
fascinating combinatorial and computational properties, they lead to important empirical results on real world machine learning problems. Is there some larger family of graphs which subsumes these
two families of graphs? The answer is yes, and it leads to the most exciting topic in combinatorics: perfect graphs. In (UAI 09) we were the first to identify this contact point between perfect graphs and machine learning and probabilistic inference.
Recently, mathematicians and combinatorics experts have had a breakthrough in their field through the work of (Chudnovsky et al. 2006) who proved the decades-old strong perfect graph conjecture and
developed algorithms for testing graphs to determine if they are perfect. We have shown that perfect graphs have important implications in the field of machine learning and decoding by using perfect
graphs to identify when inference in general graphical models is exact and when it can be solved efficiently via message passing (UAI 09). This is done by converting graphical models (with loops) and
the corresponding MAP estimation problem into what we call a nand Markov random field . The topology of this structure is then efficiently diagnosed to see if it forms a perfect graph. If it is
indeed perfect, then MAP estimation is guaranteed exact and efficient and message passing will provide the optimal solution. This generalizes known results on trees and the results on generalized
matchings to the larger class of perfect graphs. Defining which graphical models admit exact inference has been an open and exciting research area in the field of machine learning (and other
communities such as decoding) for many years. Perfect graph theory is now a valuable tool in the field's arsenal. This recent work extends graphical model and Bayesian network inference guarantees
that are known about trees and matchings to the more general and more fundamental family of perfect graphs.
Discriminative and Generative Learning
Before our work in matchings and graphs, we began contributing to machine learning by combining discriminative (large margin) learning with generative (probabilistic) models. The machine learning
community had two schools of thought: discriminative methods (such as large margin learning support vector classification) and generative methods (such as Bayesian networks, Bayesian learning (
Pattern Recognition 00) and probability density estimation). Generative models summarize data and are more flexible since the practitioner can introduce various conditional independence assumptions,
priors, hidden variables and myriad parametric assumptions. Meanwhile, discriminative models only learn from data to make accurate input to output predictions and offer less modeling flexibility,
limiting the practitioner to explore kernel methods which can be less natural.
The combination of generative and discriminative learning was explored in the book (Jebara 04) and provided several tools for combining these goals. These include variational approaches to maximum
conditional likelihood estimation (NIPS 98, NIPS 00) as well as maximum entropy discrimination or MED (NIPS 99, UAI 00, ICML 04). These methods work directly with generative models yet estimate the
models' parameters to reduce classification error and maximize the resulting classifier's margin. In fact, the MED approach subsumes support vector machines and also leads to important SVM extensions
by using probabilistic machinery. This MED framework has led to the first convex multitask SVM approach which performs shared feature and kernel selection while estimating multiple SVM classifiers (
ICML 04, JMLR 09). Similarly, nonstationary kernel selection and classification with hidden variables is also possible to derive using this framework (ICML 06). In current work, we are exploring
variants of conic kernel selection by learning mixtures of transformations, a method that also subsumes SVM kernel learning (NIPS 07).
Another approach to combining discriminative and generative learning is to use probability distributions to form kernels which are then used directly by standard large margin learning algorithms such
as support vector machines. We proposed probability product kernels (ICML 03, COLT 03, JMLR 04) and hyperkernels (NIPS 06) which involve the integral of two probability functions over the sample
space. This kernel is the affinity corresponding to the Hellinger divergence. This approach allows kernels to be built from exponential family models, mixtures, Bayesian networks and hidden Markov
models. Since then, other researchers have extended this idea and developed kernels from probabilities by exploring other information divergences beyond the Hellinger divergence.
While large margin estimation has led to consistent improvement in the classification accuracy of generative models and probabilistic kernels, our most recent work has been investigating even more
aggressive discriminative criteria. This has led to a more powerful criterion for discrimination we call large relative margin. There, the margin formed by the classifier is maximized while also
controlling the spread of the projections of the data onto the classification. This work has led to significant improvement over the performance of support vector machines without making any
additional assumptions about the data. The extension of large margin approaches to the relative margin case is straightforward, efficient, has theoretical guarantees and empirically strong
performance (AISTATS 07c, NIPS 08). We are currently investigating its applicability to large margin structured prediction problems, variants of boosting as well as the estimation of even more
discriminative generative models.
Application Areas
In addition to the above fundamental machine learning work, we are also dedicated to real-world applications of machine learning and its widespread impact on industry and other fields. In older work,
we have built machine-learning inspired computer vision systems that achieved top-performing face recognition performance and obtained an honorable mention from the Pattern Recognition Society (
Pattern Recognition 00, NIPS 98b). We have also worked on highly cited (300 citations) vision-based person tracking (CVPR 97), behavior recognition systems (ICVS 99) as well as mobile and wearable
computing (ISWC 97, IUW 98). In early 2006, we helped found Sense Networks, a startup that applies machine learning to location and mobile phone data. Other startups we are involved in include Agolo, Ninoh and Bookt.
This page was last updated July 10, 2009.
Old research statement. | {"url":"http://www1.cs.columbia.edu/~jebara/research.html","timestamp":"2014-04-19T22:23:43Z","content_type":null,"content_length":"35463","record_id":"<urn:uuid:d1f3d7ef-aea6-461d-8e7f-cffda1ce368d>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00084-ip-10-147-4-33.ec2.internal.warc.gz"} |
[17P.07] Measurements of transport coefficients of granular gases using computer simulations
DPS Meeting, Madison, October 1998
Session 17P. Rings I, II
Contributed Poster Session, Tuesday, October 13, 1998, 4:15-5:20pm, Hall of Ideas
O. Petzschmann (University of Potsdam, Germany), M. Sremcevic (University of Belgrad, Yugoslavia), J. Schmidt, F. Spahn (University of Potsdam, Germany)
It is well known that the equilibrium state of a planetary ring is determined by a balance of viscous heating and collisional cooling. The ring material consists of granular particles, i.e. the
inter-particle collisions are dissipative. Therefore, the knowledge of the transport coefficients of granular gases is of crucial interest for the understanding of the ring dynamics.
As a first step, we concentrate our work on a granular gas of smooth spheres of uniform size and with a constant coefficient of restitution.
We investigate the transport coefficients of granular gases by using N-body simulations and compare the results with analytic expressions derived by Jenkins and Richman (1985) in the framework of
kinetic theory.
We find good agreement with the results of Jenkins and Richman, which are restricted to nearly elastic collisions and purely Newtonian fluids. Using our simulations, we check the limitations of their theory.
Furthermore, we investigate a sheared granular gas with variable restitution, in order to get a more realistic expression for the viscosity of the material of a planetary ring.
A 2.0 g Particle Moving at 5.6 m/s Makes a Perfectly ... | Chegg.com
A 2.0 g particle moving at 5.6 m/s makes a perfectly elastic head-on collision with a resting 1.0 g object.
(a) Find the speed of each particle after the collision.
2.0 g particle: ___ m/s
1.0 g particle: ___ m/s
(b) Find the speed of each particle after the collision if the stationary particle instead has a mass of 10 g.
2.0 g particle: ___ m/s
10 g particle: ___ m/s
(c) Find the final kinetic energy of the incident 2.0 g particle in the situations described in (a) and (b).
KE in part (a): ___ J
KE in part (b): ___ J
In which case does the incident particle lose more kinetic energy? | {"url":"http://www.chegg.com/homework-help/questions-and-answers/20-g-particle-moving-56-m-smakes-perfectly-elastic-head-collision-resting-10-gobject-find--q217669","timestamp":"2014-04-24T04:59:08Z","content_type":null,"content_length":"29227","record_id":"<urn:uuid:1061f8c7-623b-461e-a814-a61b67a3fa06>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00313-ip-10-147-4-33.ec2.internal.warc.gz"} |
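For reference, a one-dimensional perfectly elastic collision with the target at rest gives v1' = (m1 - m2)/(m1 + m2) * v1 and v2' = 2*m1/(m1 + m2) * v1. The sketch below runs both cases; it is an illustration of the formulas, not a verified answer key:

    m1 = 2.0;  v1 = 5.6;                    % incident particle: grams, m/s
    for m2 = [1.0 10.0]                     % target masses for parts (a) and (b)
        v1f = (m1 - m2)/(m1 + m2) * v1;     % negative means it bounces back
        v2f = 2*m1/(m1 + m2) * v1;
        KE1f = 0.5*(m1/1000)*v1f^2;         % kinetic energy in joules
        fprintf('m2 = %4.1f g: v1'' = %+.2f m/s, v2'' = %.2f m/s, KE1'' = %.4f J\n', ...
                m2, v1f, v2f, KE1f);
    end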
Matrix Calculator
Matrix Calculator applet
The matrix calculator below computes inverses, eigenvalues and eigenvectors of 2 x 2, 3 x 3, 4 x 4 and 5 x 5 matrices, multiplies a matrix and a vector, and solves the matrix-vector equation Ax = b.
You can change the entries in the matrix A and vector b by clicking on them and typing.
• Press the Invert button to see A^-1. This requires the matrix to be nonsingular.
• Press the Eigen button to see the eigenvalues, with an eigenvector below each eigenvalue. Note that in the case of complex eigenvalues, only one of each complex-conjugate pair is shown.
• Press the Solve button to see the solution of Ax = b. This requires the matrix to be nonsingular.
• Press the Multiply button to see the product Ab.
• You can keep three different matrices A, B, C and three different vectors a, b and c. To switch among them, click on the name and choose another one. The buttons always operate on the currently-chosen matrix and vector.
• The Copy button copies the latest result (if any) into the currently shown matrix or vector.
  ◦ An inverse matrix will be copied into the current matrix.
  ◦ For the result of Eigen, the eigenvectors are copied into the columns of the current matrix. In the case of complex eigenvalues, the real part of a complex eigenvector is copied into one column and the imaginary part into the next column.
  ◦ For the result of Solve, the solution vector is copied into the current vector.
• Click on 5 x 5 to change the matrix and vector sizes. Note that this sets all entries of the matrices and vectors to 0.
This applet uses the JAMA linear algebra package. | {"url":"http://www.math.ubc.ca/~israel/applet/mcalc/matcalc.html","timestamp":"2014-04-17T04:17:54Z","content_type":null,"content_length":"2316","record_id":"<urn:uuid:45a8a538-9704-4f23-808d-209972e7708f>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00002-ip-10-147-4-33.ec2.internal.warc.gz"} |
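For comparison, each button corresponds to a one-line matrix operation in most numerical environments; in MATLAB, for instance (the matrix and vector values below are arbitrary):

    A = [4 -2 1; -2 4 -2; 1 -2 4];   % example 3-by-3 matrix
    b = [11; -16; 17];               % example right-hand side

    Ainv   = inv(A);                 % "Invert" button (A must be nonsingular)
    [V, D] = eig(A);                 % "Eigen": eigenvectors in columns of V
    xsol   = A \ b;                  % "Solve": solution of A*x = b
    p      = A * b;                  % "Multiply" button

Note that A\b solves the system by factorization rather than by forming the inverse, which is generally faster and more accurate than computing inv(A)*b.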
Molded Dimensions - Engineered Elastomer Solutions to help you win!
Mounting and suspension assemblies generally require the loading of elastomers in shear. Elastomers deflect more under a given load in shear than in compression. Since shear is essentially a
combination of tensile and compression forces acting at right angles to each other, the stress-strain curve for an elastomer in shear is similar to the tensile and compressive stress-strain curves.
Shear is the ratio of linear deformation (d) to elastomer thickness (t) as illustrated in Figure 1.
Figure 2 shows typical shear stress-strain curves for urethane ranging in hardness from 55A to 75D durometer.
Because of its high load bearing capacity in tension and compression, urethane has a high load bearing capacity in shear. The factor which limits the use of urethanes in shear loading is the strength
of the metal adhesive bond rather than the shear strength of the polymer.
Improvements in bonding urethane to metal will permit greater stress than those shown in Figure 2. Presently, 300 pli adhesion can be achieved compared to those values shown which are based on 100
Past practice has limited shear strain (t) to 0.5; that is, the thickness of the rubber is twice the horizontal deflection. No specific reasons can be cited for this limitation. Some static
applications of shear loading have been deformed to strains of 1.0 or more. However, under high strain, bond failures can occur imposing high stresses on the part. Useful hardnesses of urethanes are
limited from 65A to 90A durometer. Below 65A conventional rubber can be used, and above 90A stresses are very unpredictable.
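As a quick numerical check of that guideline (the dimensions below are assumed purely for illustration):

    t = 0.50;              % elastomer thickness, in
    d = 0.20;              % horizontal deflection under load, in
    strain = d / t;        % shear strain, dimensionless
    if strain <= 0.5
        fprintf('Shear strain %.2f is within the customary 0.5 limit.\n', strain);
    else
        fprintf('Shear strain %.2f exceeds 0.5; increase t or reduce the load.\n', strain);
    end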
It is common practice to enclose a shear mounting and move the loading surfaces closer together to provide a compressive load on the elastomers. Compression of 5% of the free thickness is commonly
used. The effect of shear loading for a double shear pad is shown in Figure 3
With load, the rubber tends to leave the supporting walls at the top. As the angle decreases, diagonal A decreases in length thus creating compression at X. But diagonal B increases in length causing
tension at Y. Therefore, by moving the loading surface closer together, the tensile stresses are reduced.
To achieve stability, the ratio of width and length to thickness should be at least four. Lower ratios probably can be used with urethane and still be stable. If a shear pad were so designed that
the height of the rubber equaled its thickness, the rubber would tend to bend as a cantilever beam rather than as a shear mounting.
If larger deflections are required than can be accommodated by one thickness, it may be necessary to make several sandwiches in shear as shown on Figure 4.
FIG 4
However, the total width of the part between supports cannot be made too wide. Even though the elastomer is broken up into several sandwiches between supports, instability results in deflections
greater than calculated from plain shear.
Shear bonds are affected by the thickness of the sandwich. The greater the thickness, the higher the tensile component in shear which results in less bond strength as shown in Figure 5. | {"url":"http://www.moldeddimensions.com/shear.htm","timestamp":"2014-04-19T19:32:54Z","content_type":null,"content_length":"11753","record_id":"<urn:uuid:a1638862-8807-4fca-b7da-d7bedcf87f15>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00342-ip-10-147-4-33.ec2.internal.warc.gz"} |
Conshohocken Trigonometry Tutors
...I am an experienced tutor teaching subjects like physics, calculus, algebra, Spanish and even guitar. I was originally an engineer for a helicopter company for nearly 4 years and I resigned to
start a career in education. I found little fulfillment in the business world especially because I didn't believe I was having a strong positive impact on society.
16 Subjects: including trigonometry, Spanish, calculus, physics
...I have worked with students on many courses which applies differential equations as well as tutoring the course itself. I enjoy helping students identify the easiest path to a solution and the
steps they should employ to get there. My first teaching job was in a school that specifically serviced special needs students.
58 Subjects: including trigonometry, reading, chemistry, calculus
...As a teaching assistant for four years in graduate school and a tutor as an undergraduate, I have a solid base of experience in tutoring undergraduate students in various levels of chemistry.
The material I have covered ranges from general chemistry (for majors and non-majors) to analytical chem...
9 Subjects: including trigonometry, chemistry, algebra 2, geometry
...I look forward to meeting and working with students and helping them achieve their academic goals. Thank you. Sincerely,Jonathan H.
9 Subjects: including trigonometry, geometry, algebra 2, algebra 1
...With this knowledge, and with a good knowledge of many of the texts that come out of Ancient Greece (Homer, the Tragedians, Plato, Aristotle, and philosophy and math generally), I could
proficiently tutor introductory students in Ancient Greek. I have taken a few courses which deal with linear a...
26 Subjects: including trigonometry, reading, English, algebra 2 | {"url":"http://www.algebrahelp.com/Conshohocken_trigonometry_tutors.jsp","timestamp":"2014-04-19T12:15:03Z","content_type":null,"content_length":"25319","record_id":"<urn:uuid:c634ef59-b141-48e4-a3ba-25b031858fce>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00610-ip-10-147-4-33.ec2.internal.warc.gz"} |
Optimization Using Symbolic Derivatives
Most Optimization Toolbox™ solvers run faster and more accurately when your objective and constraint function files include derivative calculations. Some solvers also benefit from second derivatives,
or Hessians. While calculating a derivative is straightforward, it is also quite tedious and error-prone. Calculating second derivatives is even more tedious and fraught with opportunities for error.
How can you get your solver to run faster and more accurately without the pain of computing derivatives manually?
This article demonstrates how to ease the calculation and use of gradients using Symbolic Math Toolbox™. The techniques described here are applicable to almost any optimization problem where the
objective or constraint functions can be defined analytically. This means that you can use them if your objective and constraint functions are not simulations or black-box functions.
Running a Symbolically Defined Optimization
Suppose we want to minimize the function x + y + cosh(x − 1.1y) + sinh(z/4) over the region defined by the implicit equation z^2 = sin(z − x^2y^2), −1 ≤ x ≤ 1, −1 ≤ y ≤ 1, 0 ≤ z ≤ 1.
The region is shown in Figure 1.
The fmincon solver from Optimization Toolbox solves nonlinear optimization problems with nonlinear constraints. To formulate our problem for fmincon, we first write the objective and constraint
functions symbolically.
We then generate function handles for numerical computation with matlabFunction from Symbolic Math Toolbox.
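A minimal sketch of these two steps and the subsequent fmincon call (the starting point, the bounds handling, and the variable names here are assumptions rather than the article's exact listing):

    syms x y z
    obj = x + y + cosh(x - 1.1*y) + sinh(z/4);     % objective
    w   = z^2 - sin(z - x^2*y^2);                  % nonlinear equality: w = 0

    funObj = matlabFunction(obj, 'vars', {[x,y,z]});
    funW   = matlabFunction(w,   'vars', {[x,y,z]});
    nlcon  = @(v) deal([], funW(v));               % fmincon expects [c, ceq]

    x0 = [-0.5, 0.5, 0.5];
    lb = [-1, -1, 0];   ub = [1, 1, 1];
    [xsol, fval, exitflag, output] = fmincon(funObj, x0, ...
        [], [], [], [], lb, ub, nlcon)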
The returned output structure shows that it took fmincon 20 iterations and 99 function evaluations to solve the problem. The solution point x (the yellow sphere in the plot in Figure 3) is [-0.8013; ...].
Solving the Problem with Gradients
To include derivatives of the objective and constraint functions in the calculation, we simply perform three steps:
1. Compute the derivatives using the Symbolic Math Toolbox jacobian function.
2. Generate objective and constraint functions that include the derivatives with matlabFunction.
3. Set fmincon options to use the derivatives.
The following code shows how to include gradients for the example.
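A sketch of the gradient version follows (the variable names are assumptions; the legacy optimset flags shown are the interface this generation of the toolbox used):

    gradobj = jacobian(obj, [x,y,z]).';        % column gradient of the objective
    gradw   = jacobian(w,   [x,y,z]).';        % column gradient of the constraint

    funObj = matlabFunction(obj, gradobj, 'vars', {[x,y,z]});  % returns [f, gradf]
    funW   = matlabFunction(w,   gradw,   'vars', {[x,y,z]});

    options = optimset('Algorithm','interior-point', ...
                       'GradObj','on', 'GradConstr','on');
    [xsol, fval, exitflag, output] = fmincon(funObj, x0, ...
        [], [], [], [], lb, ub, @(v) nlconGrad(funW, v), options);

    % Helper (place at the end of the script or in its own file):
    function [c, ceq, gradc, gradceq] = nlconGrad(funW, v)
    % No inequalities; one equality constraint with its gradient.
    c = [];  gradc = [];
    [ceq, gradceq] = funW(v);
    end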
Notice that the jacobian function is followed by .'. This transpose ensures that gradw and gradobj are column vectors, the preferred orientation for Optimization Toolbox solvers. matlabFunction
creates a function handle for evaluating both the function and its gradient. Notice, too, that we were able to calculate the gradient of the constraint function even though the function is implicit.
The output structure shows that fmincon computed the solution in 20 iterations, just as it did without gradients. fmincon with gradients evaluated the nonlinear functions at 36 points, compared to 99
points without gradients.
Including the Hessian
A Hessian function lets us solve the problem even more efficiently. For the interior-point algorithm, we write a function that is the Hessian of the Lagrangian. This means that if ƒ is the objective function, c is the vector of nonlinear inequality constraints, ceq is the vector of nonlinear equality constraints, and λ is the vector of associated Lagrange multipliers, the Hessian H is

H = ∇²ƒ(x) + Σ_i λ_i ∇²c_i(x) + Σ_j λ_j ∇²ceq_j(x).

∇²u represents the matrix of second derivatives with respect to x of the function u.
fmincon generates the Lagrange multipliers in a MATLAB® structure. The relevant multipliers are lambda.ineqnonlin and lambda.eqnonlin, corresponding to indices i and j in the equation for H. We
include multipliers in the Hessian function, and then run the optimization^1.
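A sketch of the Hessian step (the 'user-supplied'/'HessFcn' pair is the legacy interior-point interface; names continue from the sketches above):

    hessobj = jacobian(gradobj, [x,y,z]);      % 3-by-3 symbolic Hessians
    hessw   = jacobian(gradw,   [x,y,z]);
    hObj = matlabFunction(hessobj, 'vars', {[x,y,z]});
    hW   = matlabFunction(hessw,   'vars', {[x,y,z]});

    % Hessian of the Lagrangian: objective Hessian plus the multiplier-weighted
    % constraint Hessian (see the footnote below about the sign in old releases).
    hessfun = @(v, lambda) hObj(v) + lambda.eqnonlin(1) * hW(v);

    options = optimset(options, 'Hessian','user-supplied', 'HessFcn', hessfun);
    [xsol, fval, exitflag, output] = fmincon(funObj, x0, ...
        [], [], [], [], lb, ub, @(v) nlconGrad(funW, v), options);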
The output structure shows that including a Hessian results in fewer iterations (10 instead of 20), a lower function count (11 instead of 36), and a better first-order optimality measure (2e-8
instead of 8e-8).
^1 For nonlinear equality constraints in Optimization Toolbox R2009b or earlier, you must subtract, not add, the Lagrange multiplier. See bug report.
Math Forum Discussions
Topic: updating loop index
Replies: 1 Last Post: Jul 21, 2013 3:18 PM
updating loop index
Posted: Jul 13, 2013 3:03 PM
Hello all,
I am doing a step in my code where I generate a 5-element random vector using (randperm), with numbers from 1 to 3. Then I organize the values using (unique) and calculate how many times each number is repeated.
After that, if a number appears only once, I update the code to generate a 4-element random vector, or even fewer, depending on how many numbers appeared exactly once, until I reach zero.
My problem is: a 5-element vector is generated every time ??!!
Is there a way to fix this? Here is a part of the code:
is there a way to fix this; here is a part of the code:
for i=1:NT
x(:,i) = g;
f=sum(n==1); % calculate if a number mentioned once | {"url":"http://mathforum.org/kb/thread.jspa?threadID=2581826","timestamp":"2014-04-16T07:47:26Z","content_type":null,"content_length":"17639","record_id":"<urn:uuid:523bae5e-6895-49c7-9053-4d919e52c314>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00368-ip-10-147-4-33.ec2.internal.warc.gz"} |