Using current events to explain statistical ideas
It is often examples that make ideas understandable to students, and current events can be a good source of examples. Case in point: today in Wisconsin, the issue of the day is the outcome of the
recall elections and problems with the exit polling. As a tutor, I don't find the outcome itself interesting, but exit polling, like all surveys, is key to the usefulness of statistics! In fact, it gives a great
opportunity to illustrate some of the basic (and non-mathematical) ideas and concepts of statistics, usually the ideas presented at the beginning of most introduction-to-statistics courses.
Statistical inferences are grounded in some basic definitions and assumptions (in bold). A population is a defined collection of individuals that we want to know some data about and a sample is a
group taken from the population that we are going to actually collect data from (Sullivan, 2010, p. 5; Triola, 2010, p. 4). If we wanted to know the actual data about a population, which is called a
parameter, we would need to undertake a census, "the collection of data from every member of the population" (Triola, 2010, p. 4). If the population is large, this would be a daunting and expensive
task. Instead, statistics can be used (1) to select a sample of the population that is representative of the population, (2) to collect data about the sample which is summarized in a statistic, and
(3) to determine how to extrapolate the population’s parameter from the sample’s statistic. (Sullivan, 2010; Triola, 2010).
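The three steps can be sketched with a tiny simulation (hypothetical numbers; the electorate size, sample size, and seed here are made up for illustration):

```python
import random

random.seed(42)

# Hypothetical electorate: 1 = voted for candidate A, 0 = otherwise.
population = [1] * 53_000 + [0] * 47_000
parameter = sum(population) / len(population)   # the true (usually unknown) value

sample = random.sample(population, 1_000)       # step 1: select a random sample
statistic = sum(sample) / len(sample)           # step 2: summarize the sample

# Step 3: use the statistic (plus a margin of error) to estimate the parameter.
print(f"parameter = {parameter:.3f}, statistic = {statistic:.3f}")
```

With a genuinely random sample the statistic lands close to the parameter; the point of what follows is that real exit polls rarely achieve that kind of randomness.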
But people (including individuals who should know better) often make a critical error by focusing only on the second step, because it seems to be what we want to find out, usually as quickly as
possible. So for the example of the day: we want to know how people voted, so that we know the result right away. It seems to make sense that the easiest way to find the result would be to use an exit poll,
i.e., ask a subset of the voters (a sample) how they voted, calculate a summary of how they voted (a statistic), and use that statistic to estimate the parameter: how all the voters (the population)
voted. But over-focusing on the second step creates real problems, because sample selection is crucial both to ensure the sample is representative of the population (step 1) and to ensure the
mathematical reliability of extrapolating the parameter from the statistic (step 3).
For exit polls in particular, steps 1 and 3 are difficult. First, representativeness can involve a catch-22: we need to know what the population is like to pick a good sample, but we want to pick a
good sample to find out what the population is like. This issue often appears in the press as individuals complaining about the make-up of the survey, e.g., complaining that the sample included too
many individuals reporting a particular political affiliation. It is possible, however, to substitute a random sample for a truly representative sample and use mathematical techniques to show that a
statistic from a random sample is sufficiently reliable (using step 3). Second, unfortunately, human beings are really bad at true "randomness," and thus our samples are usually much less random
than we think. For something like an exit poll, people volunteer to answer, so even if a surveyor were to "randomly" select voters as they leave the polls, the only data collected would be about
people who "conveniently" choose to answer the questions. Additionally, other factors can interfere with randomness, such as when people vote or whether they vote in person or absentee. So, to the extent
our sample deviates from randomness or representativeness, the reliability of using the statistic to estimate the parameter diminishes.
To be fair, most statisticians understand these problems, but when the statistics are presented for wider circulation, these problems are ignored, de-emphasized, or misunderstood. Clarifying these
issues is often what the first chapter of a good statistics class is all about, so that future users of statistics can be aware of these potential problems. Exit polls are particularly
vulnerable to having their problems exposed because they are always followed by an actual census. After all, the vote count is what we are trying to predict using the exit poll, so it isn't
surprising that an estimate of that count would be less accurate, all the more so given the difficulties of randomness and representativeness.
Works Cited
Sullivan, Michael (2010). Statistics: Informed Decisions Using Data (3rd ed.). Prentice Hall: Upper Saddle River, NJ.
Triola, Mario (2010). Elementary Statistics (11th ed.). Addison-Wesley: Boston, MA.
triangle ABC
∠B = ?
AB = `sqrt(7)`
AC = 4
BC = 6
Given triangle ABC with AB=`sqrt(7)`, AC=4 and BC=6, find the measure of angle B:
Use the law of cosines: `c^2=a^2+b^2-2abcosC`
The side opposite angle B is AC, so `AC^2=AB^2+BC^2-2(AB)(BC)cosB`, i.e. `16=7+36-12sqrt(7)cosB`, which gives `cosB=27/(12sqrt(7))~~0.8504`.
The angle has measure of approximately 31.7 degrees.
Representation of $*$-automorphism on finite dimensional matrix algebras
Let $\phi$ define a $*$-automorphism from the matrix algebra $M_n(\mathbb{C})$ to $M_n(\mathbb{C})$ such that $\phi(I) = I$. Is it true that any such map $\phi$ can be represented as $\phi(x) = U x U^{\dagger}$ (where $U$ is a suitable unitary matrix)? If not, what is the most general expression?
It is redundant to require $\phi(I)=I$. – Jonas Meyer Aug 30 '10 at 19:25
how can this be shown? – kett Aug 30 '10 at 20:41
Let $A$ be an element of $M_n$, and let $B=\phi^{-1}(A)$. Then $\phi(I)A=\phi(I)\phi(B)=\phi(IB)=\phi(B)=A$. Similarly, $A\phi(I)=A$, so $\phi(I)$ is an identity for $M_n$. More generally, if $f:R\to S$ is a surjective ring homomorphism and $R$ is unital, then $f(1_R)$ is an identity for $S$, and the only difference in the proof is that you take $B\in f^{-1}(A)$ in case $f$ is not injective. – Jonas Meyer Aug 30 '10 at 20:56
Another way to see it in the $M_n(\mathbb{C})$ case is to take a family $\{e_i\}$ of $n$ mutually orthogonal rank-one projections (which necessarily add to $I$, because the sum is a projection with rank $n$). Then the images also form a family $\{\phi(e_i)\}$ of orthogonal projections, and the sum is a projection with trace $n$, that is, $I$. – Martin Argerami Sep 1 '10 at 14:43
5 Answers
Here is one generalization:
Every $*$-automorphism of the algebra of compact operators on a Hilbert space is conjugation by a unitary operator on that space.
Using the fact that the algebra of compact operators is irreducible, this can be seen as a special case of:
Every irreducible $*$-representation of the algebra of compact operators on a Hilbert space is unitarily equivalent to the identity representation.
A proof can be found for instance in Section 1.4 of Arveson's An invitation to C* algebras. Another proof of the first assertion that gives more information can be found in
Proposition 1.6 of Raeburn and Williams's Morita equivalence and continuous trace C*-algebras.
The first part is still true if you take all bounded operators instead of only the compact ones. (And these are the same thing in the finite dimensional case.)
If $\phi$ is a $*$-automorphism then $\psi:A\mapsto\phi(\overline A)$ is a $\mathbb{C}$-automorphism. By the Skolem-Noether theorem every $\mathbb{C}$-automorphism of $M_n(\mathbb{C})$ is inner, that is, of the form $\psi(A)=UAU^{-1}$. This must commute with the $*$-operation $A\mapsto\overline{A}^t$. This leads to $UAU^{-1}=\overline{U^t}^{-1}A\overline{U^t}$ for all $A$. This implies that $U$ and $\overline{U^t}^{-1}$ are the same up to a constant multiple. By multiplying $U$ by a constant we may make $U$ unitary.
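A quick numerical sanity check of the direction all the answers take for granted: conjugation by a unitary really is a unital $*$-automorphism of $M_n(\mathbb{C})$. A NumPy sketch (the size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

# A random unitary from the QR decomposition of a complex Gaussian matrix.
Z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
U, _ = np.linalg.qr(Z)

def phi(A):
    # Conjugation by U; U.conj().T is the adjoint U^dagger.
    return U @ A @ U.conj().T

A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

assert np.allclose(phi(A @ B), phi(A) @ phi(B))       # multiplicative
assert np.allclose(phi(A + B), phi(A) + phi(B))       # linear
assert np.allclose(phi(A.conj().T), phi(A).conj().T)  # commutes with *
assert np.allclose(phi(np.eye(n)), np.eye(n))         # unital
print("phi is a unital *-homomorphism on these samples")
```

Of course this checks examples rather than proving anything; the content of the question is the converse, that every $*$-automorphism arises this way.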
As an alternative to Robin Chapman's solution, I would like to state Exercise 7.8 from Rørdam's, Larsen's and Laustsen's "Introduction to the K-theory of C*-algebras":
For every unital AF-algebra $A$ there is a short exact sequence $$ 1\to\overline{\mathrm{Inn}}(A)\to\mathrm{Aut}(A)\to\mathrm{Aut}(K_0(A))\to 1, $$ where $\overline{\mathrm{Inn}}(A)$
denotes approximately inner automorphisms and $\mathrm{Aut}(K_0(A))$ denotes group automorphisms preserving the unit class and the positive cone in $K_0(A)$.
If $A$ is the matrix ring, then $\mathrm{Aut}(K_0(A))$ is trivial and hence every automorphism of $A$ is approximately inner. Since $A$ is separable, every approximately inner automorphism is the pointwise limit of a sequence of inner automorphisms. And I think the finite-dimensionality of $A$ implies that the pointwise limit of a sequence of inner automorphisms is again inner.
Using the statement above, one immediately sees that, for instance, $\mathbb C\oplus\mathbb C$ possesses an automorphism which is not approximately inner.
Does anyone know how I can actually prove that the pointwise limit of a sequence of inner automorphisms of a finite-dimensional C*-algebra is again inner? – Rasmus Bentmann Aug 30 '10 at
At a guess: compactness? (take a cluster point/limit of your sequence of implementing elements) – Yemon Choi Aug 30 '10 at 19:49
Yemon's argument works, because your sequence of unitaries is bounded in a finite dimensional space, and so it has a convergent subsequence. On the other hand, already between the
answers and the comments there are several proofs that any automorphism of a finite-dimensional C$^*$-algebra is inner. So all you need to convince yourself about is that the pointwise
limit of inner automorphisms is an automorphism, and that's easy. – Martin Argerami Sep 1 '10 at 15:04
@M. Argerami: Well, I don't want to use that any automorphism of a finite-dimensional C∗-algebra is inner because that's what I want to prove in the end. – Rasmus Bentmann Sep 1 '10 at
I see. My bad, then. Regarding the last statement in your answer, if your $A$ is abelian, then the only inner automorphism is the identity, and then of course the only approximately
inner automorphism is the identity; and a simple example of a non-inner automorphism of $\mathbb{C}\oplus\mathbb{C}$ is the ``flip'', $(a,b)\mapsto(b,a)$. – Martin Argerami Sep 2 '10 at
Another proof can be obtained using the fact that $M_n(\mathbb{C})$ is singly generated (and finite-dimensional). So $M_n(\mathbb{C})=C^*(s)$ for some $s$ (the shift, for example). Now, of course, $\phi(s)$ is a generator for the image. And by Specht's theorem, $\phi(s)$ and $s$ are unitarily equivalent (because $\phi$ is multiplicative and it preserves the trace). Then there exists a unitary $U\in M_n(\mathbb{C})$ with $\phi(s)=UsU^{-1}$. If you now take any $a\in M_n(\mathbb{C})$, we have $a=\sum_{j=0}^{n-1} \alpha_js^j+\sum_{j=1}^{n-1}\beta_j(s^*)^j$ for coefficients $\alpha_j,\beta_j$, and so $$ \phi(a)=\sum_{j=0}^{n-1} \alpha_j\phi(s)^j+\sum_{j=1}^{n-1}\beta_j\phi(s^*)^j =\sum_{j=0}^{n-1} \alpha_j(UsU^{-1})^j+\sum_{j=1}^{n-1}\beta_j(Us^*U^{-1})^j=UaU^{-1}. $$
Yet another proof would be to consider a system $(e_{kj})$ of matrix units in $M_n(\mathbb{C})$, coming from some orthonormal basis (the canonical one, say). It is then easy to check that $(\phi(e_{kj}))$ is another system of matrix units, and so it corresponds to another orthonormal basis. The unitary implementing the change of basis is the one implementing $\phi$.
Gear Guru
Joined: Aug 2003
Location: Cork Ireland
Posts: 10,487
Thread Starter
Q 4 Avare
Hi Andre, I thought it best to ask this in public. I hope you don't mind or even better appreciate why I chose this.
Could you please give my logic a health check here and finally answer the punchline question.
Regular medium density semi rigid batt. Say 703, no FRK.
The LF performance changes when distance from the boundary is introduced.
There is a slight peak of absorption, accompanied by a slight lack of overall linearity further up in frequency. The frequency of this peak of course changes with the distance from the boundary.
Some have said that an air gap equal to the panel thickness is optimum.
This suggests that any increase of the gap beyond the panel's own thickness starts to diminish the LF improvement/peak.
Does it?
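For a rough feel for where the peak sits, a common quarter-wavelength rule of thumb (an approximation I'm assuming here: peak absorption where the distance from the rigid boundary to roughly the middle of the panel equals a quarter wavelength) predicts how the peak moves as the gap grows:

```python
# Quarter-wavelength sketch of where a spaced porous absorber peaks.
# Assumption (rule of thumb, not a measurement): the absorption peak falls
# where the distance from the wall to roughly the middle of the panel
# equals a quarter wavelength.
SPEED_OF_SOUND = 343.0  # m/s, air at ~20 C

def peak_frequency_hz(panel_thickness_m, air_gap_m):
    distance_m = air_gap_m + panel_thickness_m / 2.0
    return SPEED_OF_SOUND / (4.0 * distance_m)

# A 50 mm (2") panel such as 703, with increasing air gaps:
for gap in (0.0, 0.05, 0.10, 0.20):
    f = peak_frequency_hz(0.05, gap)
    print(f"air gap {gap * 100:5.1f} cm -> peak near {f:6.0f} Hz")
```

By this rule of thumb the peak simply keeps moving lower as the gap grows; whether absorption above the peak becomes less even (the linearity issue mentioned above) is a separate question this simple model doesn't capture.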
Find Values of x...
Date: 9/4/96 at 22:33:30
From: michael atienza
Subject: Find Values of x...
Determine all real values of x for which

   (x^2 - 5x + 5)^(x^2 - 2x - 48) = 1
Date: 9/5/96 at 9:35:8
From: Doctor Jerry
Subject: Re: Find Values of x...
Dear Margarette,
You might consider for which real numbers a and b is a^b = 1
(the carat (^) is often used to indicate powers). Provided that
a is not 0, a^0 = 1. So, you could seek values of x for which
x^2 - 2x - 48 = 0.
If you know about logarithms, you can get another approach to the
problem. In the equation a^b = 1, take logs of both sides to get
b*ln(a) = ln(1) = 0. So, either b or ln(a) must be zero.
Hopefully, you can now finish the problem.
-Doctor Jerry, The Math Forum
Check out our web site! http://mathforum.org/dr.math/
Date: 9/5/96 at 18:39:15
From: m.atienza
Subject: Re: Find Values of x...
Unfortunately I am still stuck on this problem.
I factored the exponent (x^2 - 2x - 48) and came up with
x = 8 and x = -6.
The thing I don't understand is what do I do with the base x^2 - 5x + 5
with regards to the exponent that I factored? Will there be more than
one value for x?
Thanks for your help!
Date: 9/5/96 at 20:49:22
From: Doctor Robert
Subject: Re: Find Values of x...
x^2 - 5x + 5 is the base, x^2-2x-48 is the exponent, and 1 is the
result, if I read your problem correctly. Now, the only way that a
number raised to a power can be 1 is if the exponent is zero, that is,
z^0 = 1. In your problem this means that x^2-2x-48 = 0. You solve
this by factoring
(x-8)(x+6) = 0
which leads to the conclusion that x=8 or x=-6.
The only thing you need to check is that these values of x do not make
the base zero, for zero raised to the zero power is not defined.
-Doctor Robert, The Math Forum
Date: 09/09/2000 at 02:15:07
From: Tim Greene
Subject: Re: Find Values of x...
In the solution above, the writer only finds the solutions
x = -6 and x = 8
by finding solutions of the form a^0 = 1 (a not = 0). There are three
other solutions to the problem.
Finding solutions of the form 1^a = 1 (no restrictions on a), we have
(x^2 - 5x + 5) = 1
which gives the two solutions:
x = 1 and x = 4.
Finding solutions of the form (-1)^a = 1 (where a is an even integer),
we have:
(x^2 - 5x + 5) = -1
this gives the two potential solutions:
x = 2 and x = 3
however, for x = 2 the exponent (x^2 - 2x - 48) is an even integer but
for x = 3 that exponent is odd.
Thus, there are five solutions in all: -6, 1, 2, 4, and 8.
Dr. Greenie | {"url":"http://mathforum.org/library/drmath/view/52995.html","timestamp":"2014-04-19T08:33:45Z","content_type":null,"content_length":"7800","record_id":"<urn:uuid:5aed2549-f747-4937-81d9-4ea63473cee0>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00615-ip-10-147-4-33.ec2.internal.warc.gz"} |
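Since all three cases above (zero exponent with nonzero base, base 1, base -1 with even integer exponent) reduce to quadratics with integer roots, a small integer scan confirms the full solution set:

```python
# Brute-force check of (x^2 - 5x + 5)^(x^2 - 2x - 48) = 1 over integers.
# Covers the three cases discussed above: exponent 0 with nonzero base,
# base 1, and base -1 with an even integer exponent.
def is_solution(x):
    base = x * x - 5 * x + 5
    expo = x * x - 2 * x - 48
    if base != 0 and expo == 0:
        return True
    if base == 1:
        return True
    if base == -1 and expo % 2 == 0:
        return True
    return False

solutions = [x for x in range(-50, 51) if is_solution(x)]
print(solutions)   # -> [-6, 1, 2, 4, 8]
```

The scan over integers suffices here because each case equation (x^2 - 2x - 48 = 0, x^2 - 5x + 4 = 0, x^2 - 5x + 6 = 0) happens to have only integer roots.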
Clifford the Big Red Dog's 50th BIG Birthday Sweepstakes
I grew up with Clifford the Big Red Dog. When I was a school teacher, I read the books to my Kindergarten and first graders. When I had my first child, I stocked up on them to read at bedtime.
When my second child was born I found the cartoon of Clifford and his friends on PBS. Now, my 3rd child is in first grade and is reading Clifford to us! It’s Clifford’s birthday and Scholastic is
throwing him the Biggest birthday ever! Check out www.scholastic.com/clifford where you can send CLIFFORD a personalized birthday card and enter Clifford’s BIG Birthday Sweepstakes for a chance to
win a birthday party of your own!
Clifford the Big Red Dog has been entertaining children, parents and teachers for 50 years with his BIG adventures, using humor to teach the importance of social values. These ‘BIG Ideas’ focus
on simple, tangible life lessons that help young children navigate their world as they become active members of their communities. Clifford has taught generations of kids how to be kind, work
well with others, play fair, and more. Through relatable storytelling Clifford also reinforces the importance of literacy, letting kids know that reading is fun. In his beloved dog, Norman
Bridwell has created a literary classic embraced by generations around the world.
For more information on Clifford the Big Red Dog, visit http://www.scholastic.com/clifford/ or PBSKIDS.org/clifford.
Scholastic provided the prizing; all opinions are mine.
we like cliffords halloween.
Oops, Clifford!
i love the holiday books and clifford grows up
Clifford goes to the circus
I like Clifford’s Birthday Party
We own Cliffords Phonics Fun Set and I love all the books in the set! (reading through with kid #2 now!)
I like the book Clifford The Big Red Dog
One of my favorites is Clifford Cares.
Happy Birthday, Clifford is our favorite
We like the Snow Dog Clifford reader book
My favorite growing up was Clifford’s Halloween.
I like the original Clifford book-I think it was just called Clifford
I like Clifford’s ABC Book.
clifford taking a trip
I love all the Clifford books where Clifford is a pup. so cute
My favorite is Clifford’s Kitten. Thanks.
Clifford Goes to the Park is cute and my son enjoys it
Clifford goes to school.
Clifford’s Good Deeds!
Clifford’s Christmas presents is my favorite book
My son loves all Clifford books!
Thank you!
Clifford’s busy week
pedidentalasst at yahoo dot com
My son always liked the book Clifford’s Christmas Presents.
I love Cliffords Good Deeds
Clifford Goes to the Park
I like Clifford’s Christmas
We like Clifford’s Bathtime.
Clifford and the Grouchy Neighbors
cliffords halloween
We love Clifford’s Puppy Days!
i like the original clifford book
clifford goes to school
the very first book when he was a puppy =]
i think it’s called clifford to the rescue
Clifford’s Good Deeds
Clifford’s Birthday Party
My daughter likes to read the Deck the Halls Clifford Book over and over.
Clifford’s 1st Halloween!
We like Clifford’s Christmas.
Clifford’s buried treasure
Clifford’s Birthday!
My favorite Clifford book?! You mean I can only pick one?! I really like the ‘original’ Clifford the Big Red Dog book. It brings back memories of my childhood… reading it over and over.
Cliffords bathtime book…too cute!
Cliffords halloween. I owned it as a child
Clifford’s Birthday
Clifford’s Kitten
My childrens favorite is Clifford’s Day with Dad
Happy Birthday, Clifford is both my favorite and my son’s favorite.
My favorite is Clifford’s Birthday Party!
I’m partial to the original book. I used to read it to my little brother all the time. Now I have a toddler and get to read the stories to him.
I like clifford the firehouse dog
My 7 year old son loves them ALL!
I think when I was younger, we just had “Clifford, The Big Red Dog”
My daughter would love this!
• Make sure you answer the mandatory question to have a valid first entry.
I love Clifford the big red dog & the good deeds books.
My favorite book is Clifford’s Christmas
I love Cliffords good deeds!
I like the Clifford’s Birthday Party
Birthday Party.
I like cliffords good deeds
Snow dog clifford or buried treasure clifford
My boys tend to love all the holiday based ones- both adult and puppy clifford versions
We like Cliffords Birthday Party
we like Clifford’s Halloween
We love Clifford ~ the christmas one is good
Clifford’s Birthday Party is my favorite
we like clifford’s birthday party
We like the Christmas one.
my son liked Clifford’s Buried Treasure..thanks
I like Clifford The Big Red Dog the best
Clifford’s Good Deeds is my favorite. I remember reading this book at my grandmas house when I was little.
Clifford goes to school.
Thanks again! Yay!
Constructing Elliptic Curves with Prescribed Embedding Degrees
Pairing-based cryptosystems depend on the existence of groups where the Decision Diffie-Hellman problem is easy to solve, but the Computational Diffie-Hellman problem is hard. Such is the case of
elliptic curve groups whose embedding degree is large enough to maintain a good security level, but small enough for arithmetic operations to be feasible. However, the embedding degree for most
elliptic curves is enormous, and the few previously known suitable elliptic curves have embedding degree k ≤ 6. In this paper, we examine criteria for curves with larger k that generalize prior work
by Miyaji et al. based on the properties of cyclotomic polynomials, and propose efficient representations for the underlying algebraic structures.
1. A. Agashe, K. Lauter, R. Venkatesan, “Constructing elliptic curves with a given number of points over a finite field,” Cryptology ePrint Archive, Report 2001/096, http://eprint.iacr.org/2001/096/
2. R. Balasubramanian, N. Koblitz, “The improbability that an Elliptic Curve has Subexponential Discrete Log Problem under the Menezes-Okamoto-Vanstone Algorithm,” Journal of Cryptology, Vol. 11,
No. 2, 1998, pp. 141–145.
3. P. S. L. M. Barreto, H. Y. Kim, B. Lynn, M. Scott, “Efficient Algorithms for Pairing-Based Cryptosystems,” Cryptology ePrint Archive, Report 2002/008, http://eprint.iacr.org/2002/008/.
4. I. Blake, G. Seroussi and N. Smart, “Elliptic Curves in Cryptography,” Cambridge University Press, 1999.
5. D. Boneh and M. Franklin, “Identity-based encryption from the Weil pairing,” Advances in Cryptology-Crypto’2001, Lecture Notes in Computer Science 2139, pp. 213–229, Springer-Verlag, 2001.
6. D. Boneh, B. Lynn, and H. Shacham, “Short signatures from the Weil pairing,” Asiacrypt’2001, Lecture Notes in Computer Science 2248, pp. 514–532, Springer-Verlag, 2002.
7. R. Crandall and C. Pomerance, “Prime Numbers: a Computational Perspective,” Springer-Verlag, 2001.
8. R. Dupont, A. Enge, F. Morain “Building curves with arbitrary small MOV degree over finite prime fields,” Cryptology ePrint Archive, Report 2002/094, available at http://eprint.iacr.org/2002/094.
9. G. Frey, M. Müller, and H. Rück, “The Tate Pairing and the Discrete Logarithm Applied to Elliptic Curve Cryptosystems,” IEEE Transactions on Information Theory, 45(5), pp. 1717–1719, 1999.
10. G. Frey and H. Rück, “A Remark Concerning m-Divisibility and the Discrete Logarithm in the Divisor Class Group of Curves,” Mathematics of Computation, 62 (1994), pp. 865–874.
11. S. D. Galbraith, K. Harrison, D. Soldera, “Implementing the Tate pairing,” Algorithmic Number Theory-ANTS V, 2002, to appear.
12. F. Hess, “Exponent Group Signature Schemes and Efficient Identity Based Signature Schemes Based on Pairings,” Cryptology ePrint Archive, Report 2002/012, available at http://eprint.iacr.org/2002/
13. IEEE Std 1363-2000, “Standard Specifications for Public Key Cryptography,” 2000.
14. A. Joux, “A one-round protocol for tripartite Diffie-Hellman,” Algorithm Number Theory Symposium-ANTS IV, Lecture Notes in Computer Science 1838, pp. 385–394, Springer-Verlag, 2000.
15. A. Joux and K. Nguyen, “Separating Decision Diffie-Hellman from Diffie-Hellman in Cryptographic Groups,” Cryptology ePrint Archive, Report 2001/003, http://eprint.iacr.org/2001/003/.
16. G. J. Lay, H. G. Zimmer, “Constructing Elliptic Curves with Given Group Order over Large Finite Fields,” Algorithmic Number Theory Symposium-ANTS I, Lecture Notes in Computer Science 877 (1994),
pp. 250–263.
17. R. Lidl and H. Niederreiter, “Introduction to finite fields and their applications,” Cambridge University Press, 1986.
18. A. Menezes, T. Okamoto and S. Vanstone, “Reducing elliptic curve logarithms to logarithms in a finite field,” IEEE Transactions on Information Theory 39 (1993), pp. 1639–1646.
19. A. Miyaji, M. Nakabayashi, and S. Takano, “New explicit conditions of elliptic curve traces for FR-reduction,” IEICE Trans. Fundamentals, Vol. E84 A, no. 5, May 2001.
20. F. Morain, “Building cyclic elliptic curves modulo large primes,” Advances in Cryptology-Eurocrypt’91, Lecture Notes in Computer Science 547 (1991), pp. 328–336.
21. T. Nagell, “Introduction to Number Theory,” 2nd reprint edition, Chelsea Publishing, 2001.
22. K. G. Paterson, “ID-based signatures from pairings on elliptic curves,” Cryptology ePrint Archive, Report 2002/004, available at http://eprint.iacr.org/2002/004/.
23. R. Sakai, K. Ohgishi and M. Kasahara, “Cryptosystems based on pairing,” 2000 Symposium on Cryptography and Information Security (SCIS2000), Okinawa, Japan, Jan. 26–28, 2000.
24. O. Schirokauer, D. Weber and T. Denny, “Discrete Logarithms: the Effectiveness of the Index Calculus Method,” ANTS, pp. 337–361, 1996.
25. J. H. Silverman, “Elliptic curve discrete logarithms and the index calculus,” Workshop on Elliptic Curve Cryptography (ECC’98), September 14–16, 1998.
26. N. P. Smart, “The Algorithmic Resolution of Diophantine Equations,” London Mathematical Society Student Text 41, Cambridge University Press, 1998.
27. N. Smart, “An Identity Based Authenticated Key Agreement Protocol Based on the Weil Pairing,” Cryptology ePrint Archive, Report 2001/111, available at http://eprint.iacr.org/2001/111/.
28. N. Tzanakis, “Solving elliptic diophantine equations by estimating linear forms in elliptic logarithms. The case of quartic equations,” Acta Arithmetica 75 (1996), pp. 165–190.
29. E. Verheul, “Self-blindable Credential Certificates from the Weil Pairing,” Advances in Cryptology-Asiacrypt’2001, Lecture Notes in Computer Science 2248 (2002), pp. 533–551.
Book Subtitle: Third International Conference, SCN 2002, Amalfi, Italy, September 11–13, 2002, Revised Papers
Pages: 257–267
Publisher: Springer Berlin Heidelberg
Copyright Holder: Springer-Verlag Berlin Heidelberg
Editor Affiliations
• Dipartimento di Informatica ed Applicazioni, Università di Salerno
• Dept. of Computer Engineering and Informatics, Computer Technology Institute and University of Patras
Author Affiliations
• Laboratório de Arquitetura e Redes de Computadores (LARC), Escola Politécnica, Universidade de São Paulo, Brazil
• Computer Science Department, Stanford University, USA
• School of Computer Applications, Dublin City University, Dublin 9, Ballymun, Ireland
Enhanced Privacy ID
Overview of EPID
In our EPID scheme, there are three types of entities: issuer, members, and verifiers. There are two revocation lists: a list of corrupted private keys, denoted as PRIV-RL, and a list of signatures
made from suspected extracted keys, denoted as SIG-RL. An EPID scheme has the following operations:
• Setup. The issuer creates a public key and an issuing private key. The issuer publishes and distributes the public key to everyone (that is, to every member and every verifier).
• Join. This is an interactive protocol between an issuer and a member, the result of which is that the member obtains a unique private key.
• Sign. Given a message m and a SIG-RL, a member creates an EPID signature on m by using its private key.
• Verify. The verifier verifies the correctness of an EPID signature by using the public key. The verifier also checks that the key used to generate the signature has not been revoked in PRIV-RL or
Figure 2 depicts the interaction flows between the issuer, a member, and a verifier.
Zero-knowledge Proofs. In our EPID scheme, we use zero-knowledge proofs of knowledge [10] extensively. In a zero-knowledge proof system, a prover proves the knowledge of some secret information to a
verifier such that (1) the verifier is convinced of the proof and yet (2) the proof does not leak any information about the secret to the verifier. In this article, we use the following notation for
proof of knowledge of discrete logarithms. For example,

   PK{ (x) : y[1] = g[1]^x ∧ y[2] = g[2]^x }

denotes a proof of knowledge of an integer x such that y[1] = g[1]^x and y[2] = g[2]^x hold, where x is known only to the prover, and g[1], y[1], g[2], y[2] are known to both the prover and verifier. In
the above equation, PK stands for proof of knowledge and ∧ stands for logical conjunction.
Proof of knowledge protocols can be turned into signature schemes by using the Fiat-Shamir heuristic [9]. In our EPID scheme, we develop several efficient zero-knowledge proof protocols for proving
the knowledge of a valid EPID private key. In addition, we use an efficient zero-knowledge proof protocol developed by Camenisch and Shoup [8] for proving the inequality of the discrete logarithms of two
group elements y[1], y[2] to bases z[1] and z[2], respectively, denoted as

   PK{ (x) : y[1] = z[1]^x ∧ y[2] ≠ z[2]^x }
Overview of Our Construction
We begin with a high-level overview of our construction. In our scheme, each member chooses a unique membership key f. The issuer then issues a membership credential on f in a blind fashion such that
the issuer does not acquire knowledge of the membership key f. The membership key and the membership credential together form the private key of the member. To sign a signature, the member proves in
zero-knowledge that it has a membership credential on f. To verify a group signature, the verifier verifies the zero-knowledge proof.
In addition, each member chooses a base value B and computes K = B^f. This (B, K) pair serves the purpose of a revocation check. We call B the base and K the pseudonym. To create a signature, the member must prove not only that it has a valid membership credential, but also that it constructed the (B, K) pair correctly, all in zero-knowledge.
In EPID, there are two options to compute the base B: the random base option and the name base option.
• Random base option. B is chosen randomly each time by the member. Under the decisional Diffie-Hellman assumption, no verifier can link two EPID signatures based on the (B, K) pairs in the two signatures.
• Name base option. B is derived from the verifier's basename; for example, B = Hash (verifier's basename). Note that in this option, the value K becomes a pseudonym of the member with regard to
the verifier's basename, as the member will always use the same K in the EPID signature to the verifier.
We first explain how membership can be revoked based on a compromised private key. Given a private key that has been revealed to the public, the issuer extracts the membership key f from the private
key and inserts f into the private-key-based revocation list PRIV-RL. The issuer then distributes PRIV-RL to all the verifiers. Given an EPID signature, any verifier can check whether it was created
with the corrupted private keys in PRIV-RL as follows:
Let (B, K) be the base-pseudonym pair in the EPID signature. The verifier can check that K ≠ B^f' for every f' in PRIV-RL. If there exists an f' in PRIV-RL, such that K = B^f', it means that the
signature was created with a revoked private key. Therefore the verifier can reject the signature.
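The verifier-side PRIV-RL check described above amounts to a simple loop of exponentiations. A minimal sketch (toy group modulus and key values; the names are assumptions for illustration, not the EPID specification):

```python
# Toy sketch of the PRIV-RL check (illustrative modulus and key values).
p = 2879   # toy prime modulus for the group

def rejected_by_priv_rl(B, K, priv_rl):
    """True if the signature's (B, K) pair was produced by any revoked key f'."""
    return any(pow(B, f_rev, p) == K for f_rev in priv_rl)

# A member whose key f = 77 has been revoked signs with base B = 4:
B, f = 4, 77
K = pow(B, f, p)                                 # pseudonym K = B^f
print(rejected_by_priv_rl(B, K, [12, 77, 301]))  # True: 77 is on the list
print(rejected_by_priv_rl(B, K, [12, 301]))      # False: signature accepted
```

The cost is linear in the length of PRIV-RL, one exponentiation per revoked key, which is why keeping revocation lists short matters in practice.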
We now explain how membership is revoked, based on a transaction that a member was involved in. We call this kind of revocation signature-based revocation. Suppose a member's private key has been
compromised by an attacker and has been used in some transaction. If the issuer has collected enough evidence to show that the private key used in the transaction was corrupted, the issuer can
identify the EPID signature in the transaction and revoke the key, based on the signature. To do this, the issuer extracts the (B, K) pair from the signature and inserts the pair into the
signature-based revocation list SIG-RL. The issuer then distributes the SIG-RL to all the verifiers. Before a member performs the membership proof, the verifier sends the latest SIG-RL to the member,
so that the member can prove that it did not perform those transactions. More specifically, the member proves that it is not revoked in SIG-RL by proving, in zero-knowledge, that

    K' ≠ B'^f

for each (B', K') pair in SIG-RL. If the zero-knowledge proof holds, the verifier is convinced that the member has not conducted those transactions and that membership has not been revoked.
Boston Calculus Tutor
Find a Boston Calculus Tutor
...In this program I aided freshmen engineering students with balancing the intense workload with the transition to college life, in addition to offering free tutoring in freshmen classes. I
currently serve with City Year where I teach reading and writing in a low income fourth grade classroom. My...
9 Subjects: including calculus, chemistry, statistics, biology
I have been a student all my life, long after I didn't have to be! The excitement and reward of approaching a new, unknown subject, slowly unraveling its details and nuances, and finally feeling like I have mastered it: this is what drives me. Studying science can be boring or intimidating, but in the right hands it can be fun and exciting.
13 Subjects: including calculus, physics, geometry, algebra 1
I have a Master's degree in Mechanical Engineering and a Bachelor's in Material Science Engineering. I can cover any engineering topic and Math, Physics and Chemistry for all levels. During my
education I have the experience of teacher assisting for more than 4 years both in college and grad school.
23 Subjects: including calculus, chemistry, physics, statistics
Hi! My name is Elena and I graduated from Cornell University in Ithaca, New York, with a degree in Computational Biology, a combination of mathematics, computer science, and biology. I was born
and bred in Texas and I attended Plano West Senior High School.
36 Subjects: including calculus, chemistry, English, reading
...I also tutored students in physics and math for 2 years while at UCSD. Because of my background, my particular areas of tutoring expertise are in physics and mathematics. I'm a great tutor
because I love to share my knowledge of physics and math and find it very rewarding when a student understands a concept he/she didn't previously.
12 Subjects: including calculus, physics, statistics, geometry
Algebra 2 Tutors
San Pedro, CA 90732
Will tutor any math subject, any time!
...ello! My name is David, and I hope to be the tutor you are looking for. I have over 5 years of tutoring experience in all math subjects, including Algebra, Geometry, Trigonometry, Pre-Calculus,
Calculus, Probability and Statistics. I have also helped students out...
Offering 10+ subjects including algebra 2
Teaching When No One Is Watching
The subject of this post is an essay commonly referred to as “Lockhart’s Lament” (actually “A Mathematician’s Lament”) about mathematics education, and some of my observations and opinions relating
those ideas to homeschooling and volunteering.
Before going any further, if you have not read the essay, or the articles about the essay in Keith Devlin’s MAA column, then stop reading here, and start reading there. There are better words there,
including an interesting follow-up article with a response from Lockhart to reader comments. Even if you do not agree with all of his ideas, “Lockhart’s Lament” is a must read for anyone with a
stake in mathematics education, whether you are a teacher, student, parent, or even administrator (!).
Lockhart’s essay rings very true to me. My focus here is on two ideas in particular. First, as I have said before, mathematics is not a vocational skill. It seems fashionable today to try to
motivate students with applications of mathematics that they will be able to use in the “real world.” Compute the tax and tip on a restaurant bill; compute the interest earned on a savings account.
Such applications typically end up consisting of problems that, in the real world, people do not solve with pencil and paper. They solve them with calculators, lookup tables, handbooks, or web searches.
It is this requirement of application that bothers me. Who cares if there is an application? I will admit that at times an application might be useful as a means of making a problem less abstract
and more concrete, but why can’t the application be fun? What kid cares about the tax on a restaurant bill? As Lockhart suggests:
Play games! Teach them Chess and Go, Hex and Backgammon, Sprouts and Nim, whatever. Make up a game. Do puzzles. Expose them to situations where deductive reasoning is necessary. Don’t worry about
notation and technique, help them to become active and creative mathematical thinkers.
How can we possibly get away with something like this? This sounds like a mathematical environment where there is no fixed curriculum, there is no pre-planned road ahead, but instead the students
have at least some influence on where the road leads, with that influence stemming from the directions in which their interest takes them.
This brings me to my second and more important point: maybe we can’t have such an environment… at least in a school system. As one reader responded, “educational systems almost inevitably entail
measuring results, an activity from which Lockhart clearly recoils.” I completely agree here… because it is that anticipation of measurement that causes the problem. When it is known that a
school’s effectiveness will be evaluated based on student performance on specific tests of specific “skills,” the curricula naturally adapt to teach to the test, and student interest no longer
has any influence on the direction of study. It is an inadequate metaphor, but mathematics education is rather quantum mechanical; the very act of measuring disturbs that which is being measured.
(It is not just my own observation that standardized tests measure the school much more than they measure the student. I was amused to find that, at least here in Maryland, the description of the
High School Assessment program seems to acknowledge this.)
All of this suggests to me that environments with more freedom of direction have a lot of potential for engaging students. It is a lot easier to teach when no one is watching. Homeschooling seems
like an attractive example of this… but one of which I am suspicious for several reasons. Homeschooling is attractive because of its simple efficiency. Whether using a fixed curriculum or not, the
student:teacher ratio is so much lower that there is little to hold the student back. But accuracy is another matter. The parent/teacher has to be able to react knowledgeably and creatively to the
student’s evolving interest; as Lockhart points out:
Teaching is a messy human relationship; it does not require a method. Or rather I should say, if you need a method you’re probably not a very good teacher. If you don’t have enough of a feeling
for your subject to be able to talk about it in your own voice, in a natural and spontaneous way, how well could you understand it?
How many parents are equipped to think on their mathematical feet in this way? While we’re at it, how many school teachers are equipped to do so? This is where I think there is great potential for
something like a compromise: mathematical professionals volunteering, working with students in public schools in an independent study environment. It works– and I speak from experience, both as a
student and as a wannabe teacher. Mathematics can indeed be beautiful and useful, pure and applied, rigorous and recreational, all at the same time.
2 Responses to Teaching When No One Is Watching
1. Nice post.
It’s a good point you make about homeschooling efficiency. The student-teacher ratio is fantastic and there is no need for anybody to slow down (or speed up) to move with the herd. That counts
for a lot.
Also a valid point – how many parents are skilled enough in [pick a subject] to do the job? I’d say that “almost surely” there is some subject for which a set of parents does not have the
necessary skill set to do a good job. I think homeschoolers have recognized this too, often forming co-ops to pool skill sets and teach each others’ kids.
More interestingly, to me (as an ardent defender of homeschooling), is the point about school teachers not having the necessary capacity for various subjects (mathematics, in this case). I would
say that this and general inefficiency of the public education system is what has driven a significant rise in homeschooling. We’d love to send the kids off to school during the day – but we just
can’t stomach the massive waste of time it incurs. Many homeschoolers only have a few hours of instruction during the day – and still get just as much educating done.
I point this out simply to note that I believe the education system is fundamentally broken (or significantly damaged). If it were fixed, then I suspect you would see a drop in homeschooling. The
homeschooling parents are out to defend the best interests of their children – and will gladly send them off to school once it becomes a palatable alternative.
2. “If it were fixed, then I suspect you would see a drop in homeschooling.”
It is worth emphasizing Lockhart’s comment in his sequel, that his essay was “a lament, not a proposal.” It is not clear to me that the system can be fixed, since by its nature publicly funded
support of education requires accountability– and thus evaluation– of the schools providing that education. And that evaluation leads to “teaching to the test.” Both homeschooling and
volunteering by professionals have the advantage of not requiring that evaluation… but the smaller numbers that make either option so attractive also imply that such an education almost
certainly cannot be provided to all children. It’s Most Children Left Behind.
Lest I seem too cozy with homeschooling, I should clarify that although I like the idea, I tend to not trust the implementation. That is, parents may homeschool because they feel they can teach
subject X more efficiently, where in this post X = mathematics. In that case I am simply suspicious of the individual subject matter expertise. But parents may also homeschool because they want
their children to learn Y instead of Z, where typically Y = creation and Z = evolution. This also does not seem useful; learning pseudoscience quickly does not sound more attractive to me than
learning science more slowly.
Suitland Algebra 2 Tutor
Find a Suitland Algebra 2 Tutor
My time as a graduate TA has given me a lot of experience teaching general and organic chemistry. Each year I look forward to seeing the students do well in the course and often employ metaphors, examples, and models in order to make chemistry less mysterious and ensure...
6 Subjects: including algebra 2, chemistry, algebra 1, prealgebra
...There are numerous topic areas that must be worked. In a meeting with a new student in Algebra 2, I talk with the student in an attempt to identify the ones for which he/she may need
assistance. To be sure, there are links between the topic areas.
13 Subjects: including algebra 2, chemistry, calculus, physics
...I am a biological physics major at Georgetown University and so I have a lot of interdisciplinary science experience, most especially with mathematics (Geometry, Algebra, Precalculus,
Trigonometry, Calculus I and II). Additionally, I have tutored people in French and Chemistry, even though they a...
11 Subjects: including algebra 2, chemistry, calculus, French
...I graduated with a Bachelor of Science in Computer Science from the George Washington University in May 2012. I had more than 3 years' intense training in programming, especially in C and Java,
both of which have been widely used in my daily job. I also tutored C and Java courses when I was an undergraduate.
27 Subjects: including algebra 2, chemistry, calculus, physics
...I have tutored students in preparation for the test. At Williams College I studied Statistics, and the course of study included probability. I have tutored both statistics and math students in
the field of probability.
21 Subjects: including algebra 2, statistics, geometry, algebra 1
Data Clustering
Data Clustering (Final Report)
Mentors: Gilad Lerman, Mark Iwen
IMA REU Program: Schedule
Generally, clustering methods attempt to separate input data into several coherent types, or clusters, based on a meaningful metric. As an example of a clustering problem, consider the data points
pictured below:
Assuming we want to cluster this data based on Euclidean distance, most reasonable individuals would agree there are three clusters. However, in more complicated settings it can become very difficult
to both determine how many clusters there are, and then to assign each data point to the proper cluster. This summer we will investigate both of these issues, focusing primarily on the first question
above: "How many clusters are there?"
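To make the question concrete, here is a small self-contained sketch (in Python rather than Matlab, with arbitrary toy parameters) that generates three well-separated 2D clusters and tracks the within-cluster sum of squared errors (SSE) as the assumed number of clusters k grows; the sharp drop at the true k is the classic "elbow" heuristic for choosing the number of clusters:

```python
import random

random.seed(0)

# Three well-separated 2D Gaussian clusters (centers and spread are arbitrary).
centers = [(0.0, 0.0), (8.0, 0.0), (4.0, 7.0)]
data = [(cx + random.gauss(0, 0.5), cy + random.gauss(0, 0.5))
        for cx, cy in centers for _ in range(50)]

def d2(p, q):
    """Squared Euclidean distance between 2D points p and q."""
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def kmeans_sse(points, k, iters=30):
    """Lloyd's algorithm with deterministic farthest-first seeding.
    Returns the within-cluster sum of squared errors (SSE)."""
    cents = [points[0]]
    while len(cents) < k:   # farthest-first: spread the initial centroids out
        cents.append(max(points, key=lambda p: min(d2(p, c) for c in cents)))
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:    # assignment step: nearest centroid wins
            groups[min(range(k), key=lambda i: d2(p, cents[i]))].append(p)
        cents = [(sum(x for x, _ in g) / len(g), sum(y for _, y in g) / len(g))
                 if g else cents[i] for i, g in enumerate(groups)]  # update step
    return sum(min(d2(p, c) for c in cents) for p in points)

# SSE falls sharply until k reaches the true number of clusters, then levels off.
for k in range(1, 6):
    print(k, round(kmeans_sse(data, k), 1))
```

For this well-separated example the drop from k = 2 to k = 3 is dramatic, while further increases in k buy almost nothing; real data rarely gives so clean an elbow, which is exactly why the "how many clusters?" question is hard.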
Introductory Talk: Power point slides.
Final Report Template: Latex Example.
Slide Presentation Template: Latex Example.
Poster Presentation Templates: Latex Example 1, Latex Example 2, and Power Point Example.
Group Goals
We will focus on methods for determining the number of clusters in a dataset. In the process of studying these methods we will try to accomplish as many of the following goals as we can:
• Conduct a short literature survey based on the papers below. What is currently known about when various methods will correctly determine the number of clusters in a dataset?
• Compare existing methods on artificial and real data. Do your experiments agree with what you found in your literature survey?
• Use your experiments to conclude which methods perform best on which types of data sets. Can you prove some method of your choice will perform well/poorly on particular types of data?
• Using what you have learned, propose and test a new clustering method which behaves better than existing methods on some type(s) of data. What can you prove?
Of course, you should not expect to accomplish all of these goals! The course of study and pace of work is ultimately up to you.
Papers on K-Means and variants
Papers on general clustering problems
Code for determining number of clusters:
Data for comparing clustering methods:
Matlab Code for Generating Random Datasets
• An example `.m' file that creates a 2D dataset with 3 clusters. It can also be modified to generate other artificial data (with different numbers of clusters, dimensions, and underlying distributions).
• The following matlab package contains a file called "generate_samples.m" for generating hybrid linear models. It is part of the larger GPCA package. In order to avoid intersection of
subspaces (so that standard clustering could be applied) one needs to set the parameter avoidIntersection = TRUE (and also have affine subspaces instead of linear).
Other Data and Data repositories
• Clustering datasets at UCI Repository
• Complete UCI Machine Learning Repository
• Yale Face Database B
• Some processed face datasets saved as Matlab data can be found here. Two matrices, X and Y, are included. If you plot Y(1:3,:) you will see three clearly separated clusters. The first 64
points are in one cluster, the next 64 points in another cluster, etc.. The original files are on the Yale Face Database B webpage (above). The folder names are yaleB5_P00, yaleB8_P00,
yaleB10_P00. They have been processed following the steps described in Section 4.2.2 of the following paper. The matlab code used for processing them is here.
• Here is an example of spectral clustering data. It contains points from 2 noisy circles: after loading the `.mat' file type "plot(X(:,1),X(:,2),'LineStyle','.');" to see them. You can embed
them into 2D space for clustering with EmbedCircles.m. Note that changing sigma in this file will lead to different problems.
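For intuition about why embedding helps here, the following rough, self-contained sketch (pure Python with toy parameters; a real implementation would use a proper eigensolver) builds the normalized affinity matrix for two noisy circles and approximates its second eigenvector by deflated power iteration. The sign of that eigenvector separates the two circles, which plain Euclidean k-means cannot do:

```python
import math
import random

random.seed(1)

def circle(radius, n, noise):
    """n points on a circle of the given radius, with radial Gaussian noise."""
    pts = []
    for i in range(n):
        a = 2 * math.pi * i / n
        r = radius + random.gauss(0, noise)
        pts.append((r * math.cos(a), r * math.sin(a)))
    return pts

n = 30
pts = circle(1.0, n, 0.05) + circle(3.0, n, 0.05)
true = [0] * n + [1] * n
m = len(pts)

# Gaussian (RBF) affinity; sigma sets the neighbourhood scale.
sigma = 0.4
W = [[math.exp(-((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) / (2 * sigma ** 2))
      for q in pts] for p in pts]
d = [sum(row) for row in W]

# Normalized affinity A = D^(-1/2) W D^(-1/2). Its top eigenvector is known in
# closed form (proportional to the square roots of the degrees), so deflate it
# and use power iteration to approximate the second eigenvector.
A = [[W[i][j] / math.sqrt(d[i] * d[j]) for j in range(m)] for i in range(m)]
v1 = [math.sqrt(di) for di in d]
nrm = math.sqrt(sum(x * x for x in v1))
v1 = [x / nrm for x in v1]

v = [random.gauss(0, 1) for _ in range(m)]
for _ in range(1200):
    v = [sum(A[i][j] * v[j] for j in range(m)) for i in range(m)]
    dot = sum(vi * wi for vi, wi in zip(v, v1))
    v = [vi - dot * wi for vi, wi in zip(v, v1)]   # deflate the top eigenvector
    nrm = math.sqrt(sum(x * x for x in v))
    v = [x / nrm for x in v]

# The sign of the second eigenvector splits the two circles.
labels = [1 if x > 0 else 0 for x in v]
agree = sum(a == b for a, b in zip(labels, true)) / m
print(max(agree, 1 - agree))   # essentially 1.0 for this easy example
```

With a suitable sigma the affinity graph nearly disconnects into one component per circle, which is why the second eigenvector carries the cluster structure; changing sigma (as noted above for EmbedCircles.m) changes how well this works.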
For after work, or on the weekends -- A list of Fun Stuff to do.
Camano Island Math Tutor
...It doesn't matter where they should be. We can get them there. I try to present the material in a way that is different from the way the student is having it presented in the formal setting.
17 Subjects: including statistics, ACT Math, SAT math, algebra 2
...I will be going to nursing school in the fall. I am looking to tutor in math and sciences for mainly middle school and high school age group. I find math to be the easiest for me to tutor in.
23 Subjects: including algebra 1, ACT Math, precalculus, GED
...I'm currently a Math and Science tutor at a school. In college, I completed math through Calculus 3 and am proficient in Advanced Trigonometry, Calculus 1 and 2. My favorite types of math are
Trigonometry, Geometry, Algebra 1 and 2.
26 Subjects: including algebra 1, ACT Math, probability, SAT math
...Our sessions can then be tailored to these objectives.I have been operating a full-time tutoring service in this region for the last 7 years, and have worked with both high school and college
students, not only in their chosen fields, but in study skills and exam prep as well. I have trained in ...
25 Subjects: including algebra 1, algebra 2, vocabulary, grammar
...I have also taken a leadership program at the University of Berkeley and through it gained skills to successfully lead others through their challenges. Challenges such as ropes courses where we
encouraged others to cross the rope and not stopping our support until they reached the end. I have a...
15 Subjects: including algebra 2, geometry, precalculus, prealgebra
MathGroup Archive: September 1992 [00020]
[Date Index] [Thread Index] [Author Index]
Re: Integrating Normal Distributions gives the wrong answer
• To: mathgroup at yoda.physics.unc.edu
• Subject: Re: Integrating Normal Distributions gives the wrong answer
• From: mek at guinan.psu.edu (Mark E. Kotanchek)
• Date: Wed, 16 Sep 92 14:17:52 EDT
Thanks to Gordon, Charlie, Steve, Pat, Tom, and Jason for confirming
the misbehavior of Mma w.r.t. integration. I've ordered the 2.1
upgrade and am eagerly waiting to integrate properly (and stop my
colleague next door from chortling about the relative merits of
In any case, if I execute
I get an answer of
rather than the expected "1". Following the discussion of "Adding a
conditional def to Sqrt", I implemented
Sqrt[x_^y_] := x^(y/2)/;EvenQ[y];
Power[x_^y_,z_] := x^(y/2) /; EvenQ[y] && z==1/2;
after which I got the desired result of "1". I don't know WHY this
worked and was wondering if y'all could explain it and whether v2.1
has such a "fix" implemented or whether I need to remember to execute
this sequence every time I wanted to do symbolic computations....
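As a sanity check independent of any computer algebra system, the expected answer can be confirmed numerically; here is a small sketch (Python standard library only; the mean and standard deviation values are arbitrary) that integrates a normal density with Simpson's rule:

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of the normal distribution N(mu, sigma^2)."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    total = f(a) + f(b)
    total += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    total += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return total * h / 3

mu, sigma = 1.7, 0.6      # arbitrary example values
area = simpson(lambda x: normal_pdf(x, mu, sigma), mu - 10 * sigma, mu + 10 * sigma)
print(round(area, 6))     # 1.0
```

The same check can also be phrased through Erf, since the normal CDF is expressible in terms of the error function.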
This leads me to the question of how to define bounds on a variable.
For example, if "theta" is defined to exist on the interval between -Pi/2 and
Pi/2, how do I tell Mma this? In this case I would expect
"ArcSin[Sin[z]]" to simplify to "z" rather than "ArcSin[Sin[z]]".
I've tried looking through Blachman's books, the manual, the
tutorials, etc. under the headings {conditionals, limits, boundaries,
..., etc.} but nothing so far has raised a detection flag. Is there
another book or files someplace wherein an ignorant soul like myself
could learn such things or do I simply have to get into the karma of
Mark Kotanchek
Guidance & Control Dept - 363 ASB
Applied Research Lab/Penn State
P.O. Box 30
State College, PA 16804
e-mail: mek at guinan.psu.edu (NeXTmail)
TEL: (814)863-0682
FAX: (814)863-7843
Math Forum Discussions - compact
Date: May 15, 2013 12:15 AM
Author: William Elliot
Subject: compact
A point x is an accumulation point of A
when for all open U nhood x, U /\ A is infinite.
If S is compact, then every infinite subset A of S has an accumulation point.
If not, then for all x, there's some open U_x nhood x with finite U_x /\ A.
Since C = { U_x | x in S } covers S,
there's a finite subcover { U_x1,.. U_xj }.
Thus A = A /\ (U_x1 \/..\/ U_xj) = (U_x1 /\ A) \/..\/ (U_xj /\ A)
is finite, which of course it isn't.
If every infinite subset A of S has an accumulation point, is S compact?
A question of model uncertainty
Ron Pearson (aka TheNoodleDoodler), Exploring Data Blog, March 9, 2014
15693640298594791682noreply@blogger.comBlogger38125tag:blogger.com,1999:blog-9179325420174899779.post-58685583139159282322014-03-09T11:13:00.000-07:002014-03-09T11:13:20.167-07:00<div class=
"MsoNormal">It has been several months since my last post on classification tree models, because two things have been consuming all of my spare time. The first is that I taught a night class
for the <st1:place w:st="on"><st1:placetype w:st="on">University</st1:placetype> of <st1:placename w:st="on">Connecticut</st1:placename></st1:place>’s Graduate School of Business, introducing R to
students with little or no prior exposure to either R or programming. My hope is that the students learned something useful – I can say with certainty that I did – but preparing for the class
and teaching it took a lot of time. The other activity, that has taken essentially all of my time since the class ended, is the completion of a book on nonlinear digital filtering using Python,
joint work with my colleague Moncef Gabbouj of the Tampere University of Technology in <st1:place w:st="on"><st1:city w:st="on">Tampere</st1:city>, <st1:country-region w:st="on">Finland</
st1:country-region></st1:place>. I will have more to say about both of these activities in the future, but for now I wanted to respond to a question raised about my last post.</div><div class=
"MsoNormal"><br /></div><div class="MsoNormal">Specifically, Professor Frank Harrell, the developer of the extremely useful <b>Hmisc</b> package, asked the following:</div><div class="MsoNormal"><br
/></div><blockquote class="tr_bq"> How did you take into account model uncertainty? The uncertainty resulting from data mining
to find nodes and thresholds for continuous predictors has a massive impact on confidence intervals for estimates from recursive partitioning.</blockquote><div class="MsoNormal"><br /></div><div
class="MsoNormal">The short answer is that model uncertainty was not accounted for in the results I presented last time, primarily because – as Professor Harrell’s comments indicate – this is a
complicated issue for tree-based models. The primary objective of this post and the next few is to discuss this issue.</div><div class="MsoNormal"><br /></div><div class="MsoNormal">So first,
what exactly is model uncertainty? Any time we fit an empirical model to data, the results we obtain inherit some of the uncertainty present in the data. For the specific example of
linear regression models, the magnitude of this uncertainty is partially characterized by the standard errors included in the results returned by R’s <b>summary()</b> function. This magnitude
depends on both the uncertainty inherent in the data and the algorithm we use to fit the model. Sometimes – and classification tree models are a case in point – this uncertainty is not
restricted to variations in the values of a fixed set of parameters, but it can manifest itself in substantial structural variations. That is, if we fit classification tree models to two
similar but not identical datasets, the results may differ in the number of terminal nodes, the depths of these terminal nodes, the variables that determine the path to each one, and the values of
these variables that determine the split at each intermediate node. This is the issue Professor Harrell raised in his comments, and the primary point of this post is to present some simple
examples to illustrate its nature and severity.</div><div class="MsoNormal"><br /></div><div class="MsoNormal">In addition, this post has two other objectives. The first is to make amends for a
very bad practice demonstrated in my last two posts. Specifically, the classification tree models described there were fit to a relatively large dataset and then evaluated with respect to that
same dataset. This is bad practice because it can lead to overfitting, a problem that I will discuss in detail in my next post. (For a simple example that illustrates this problem, see
the discussion in Section 1.5.3 of <a href="http://www.amazon.com/Exploring-Data-Engineering-Sciences-Medicine/dp/0195089650">Exploring Data in Engineering, the Sciences, and Medicine</a>.) In
the machine learning community, this issue is typically addressed by splitting the original dataset randomly into three parts: a training subset (Tr) used for model-fitting, a validation subset (V)
used for intermediate modeling decisions (e.g., which variables to include in the model), and a test subset (Te) used for final model evaluation. This approach is described in Section 7.2 of <a
href="http://statweb.stanford.edu/~tibs/ElemStatLearn/">The Elements of Statistical Learning</a> by Hastie, Tibshirani, and Friedman, who suggest 50% training, 25% validation, and 25% test as a
typical choice.</div><div class="MsoNormal"><br /></div><div class="MsoNormal">The other point of this post is to say something about the different roles of model uncertainty and data uncertainty in
the practice of predictive modeling. I will say a little more at the end, but whether we are considering business applications like predicting customer behavior or industrial process control
applications to predict the influence of changes in control valve settings, the basic predictive modeling process consists of three steps: build a prediction model; fix (i.e., “finalize”) this model;
and apply it to generate predictions from data not seen in the model-building process. In these applications, model uncertainty plays an important role in the model development process, but
once we have fixed the model, we have eliminated this uncertainty by fiat. Uncertainty remains an important issue in these applications, but the source of this uncertainty is in the data from
which the model generates its predictions and not in the model itself once we have fixed it. At the same time, as George Box famously said, “all models are wrong, but some are useful,” and this point
is crucial here: if the model uncertainty is great enough, it may be difficult or impossible to select a fixed model that is good enough to be useful in practice.</div><div class="MsoNormal"><br /></
div><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-X3GeYf3OqkY/UxylhzQA5xI/AAAAAAAAAN4/QxD1U1bNrY0/s1600/TreeFull.png" imageanchor="1" style=
"margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-X3GeYf3OqkY/UxylhzQA5xI/AAAAAAAAAN4/QxD1U1bNrY0/s1600/TreeFull.png" height="319" width="320" /></a></div><div
class="MsoNormal"><br /></div><div class="MsoNormal"><br /></div><div class="MsoNormal">Returning to the topic of uncertainty in tree-based models, the above plot is a graphical representation of a
classification tree model repeated from my previous two posts. This model was fit using the <b>ctree</b> procedure in the R package <b>party</b>, taking all optional parameters at their default
values. As before, the dataset used to generate this model was the Australian vehicle insurance dataset <b>car.csv</b>, obtained from the website associated with the book <a href="http://
www.businessandeconomics.mq.edu.au/our_departments/Applied_Finance_and_Actuarial_Studies/research/books/GLMsforInsuranceData">Generalized Linear Models for Insurance Data</a>, by Piet de Jong and
Gillian Z. Heller. This model – and all of the others considered in this post – was fit using the same formula as before:</div><div class="MsoNormal"><br /></div><div class="MsoNormal">&nbsp;&nbsp;&nbsp;Fmla = clm ~ veh_value + veh_body + veh_age + gender + area + agecat</div><div class="MsoNormal"><br /></div><div class="MsoNormal">Each
record in this dataset describes a single-vehicle, single-driver insurance policy, and clm is a binary response variable taking the value 1 if the policy filed one or more claims during the observation
period and 0 otherwise. The other variables (on the right side of “~”) represent covariates that are either numeric (veh_value, the value of the vehicle) or categorical (all other variables,
representing the vehicle body type, its age, the gender of the driver, the region where the vehicle is driven, and the driver’s age).</div><div class="MsoNormal"><br /></div><div class="MsoNormal">As
I noted above, this model was fit to the entire dataset, a practice that is to be discouraged since it does not leave independent datasets of similar character for validation and testing. To
address this problem, I randomly partitioned the original dataset into a 50% training subset, a 25% validation subset, and a 25% test subset as suggested by Hastie, Tibshirani and Friedman. The
plot shown below represents the <b>ctree</b> model we obtain using exactly the same fitting procedure as before, but applied to the 50% random training subset instead of the complete dataset.
Comparing these plots reveals substantial differences in the overall structure of the trees we obtain, strictly as a function of the data used to fit the models. In particular, while the
original model has seven terminal nodes (i.e., the tree assigns every record to one of seven “leaves”), the model obtained from the training data subset has only four. Also, note that the
branches in the original tree model are determined by the three variables agecat, veh_body, and veh_value, while the branches in the model built from the training subset are determined by the two
variables agecat and veh_value only.</div><div class="MsoNormal"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-8-rb7qgiujo/UxymjpPBTII
/AAAAAAAAAOE/z_PySdRZQSo/s1600/TreeT.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-8-rb7qgiujo/UxymjpPBTII/AAAAAAAAAOE/z_PySdRZQSo/
s1600/TreeT.png" height="319" width="320" /></a></div><div class="MsoNormal"><o:p><br /></o:p></div><div class="MsoNormal"><br /></div><div class="MsoNormal">These differences illustrate the point
noted above about the strong dependence of classification tree model structure on the data used in model-building. One could object that since the two datasets used here differ by a factor of
two in size, the comparison isn’t exactly “apples-to-apples.” To see that this is not really the issue, consider the following two cases, based on the idea of bootstrap resampling. I
won’t attempt a detailed discussion of the bootstrap approach here, but the basic idea is to assess the effects of data variability on a computational procedure by applying that procedure to multiple
datasets, each obtained by sampling with replacement from a single source dataset. (For a comprehensive discussion of the bootstrap and some of its many applications, refer to the book <a href=
"http://www.amazon.com/Bootstrap-Application-Statistical-Probabilistic-Mathematics/dp/0521574714">Bootstrap Methods and their Application</a> by A.C. Davison and D.V. Hinkley.) The
essential motivation is that these datasets – called bootstrap resamples – all have the same statistical character as the original dataset. Thus, by comparing the results obtained
from different bootstrap resamples, we can assess the variability in results for which exact statistical characterizations are either unknown or impractical to compute. Here, I use this idea to
obtain datasets that should address the “apples-to-apples” concern raised above. More specifically, I start with the training data subset used to generate the model described in the previous
figure, and I use R’s built-in <b>sample()</b> function to sample the rows of this dataframe with replacement. For an arbitrary dataframe DF, the code to do this is simple:</div><div class=
"MsoNormal"><br /></div><blockquote class="tr_bq" style="margin-left: .5in;">> set.seed(iseed) </blockquote><blockquote class="tr_bq" style="margin-left: .5in;">> BootstrapIndex = sample(seq(1,nrow(DF),1), size=nrow(DF), replace=TRUE) </blockquote><blockquote class="tr_bq" style="margin-left: .5in;">> ResampleFrame = DF[BootstrapIndex,]</blockquote><div class="MsoNormal"><br />
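For convenience, these three lines can be wrapped in a small helper so that each bootstrap resample is reproducible from its seed. This is a sketch of my own, not code from the original analysis; DF and iseed are as described in the surrounding text:

```r
# Sketch: reproducible bootstrap resample of an arbitrary dataframe DF.
BootstrapResample <- function(DF, iseed){
  set.seed(iseed)                            # make the resample reproducible
  BootstrapIndex <- sample(seq(1, nrow(DF), 1),
                           size = nrow(DF),
                           replace = TRUE)   # sample row indices with replacement
  DF[BootstrapIndex, ]                       # rows of DF, with duplicates
}
```

The bootstrap trees discussed below were obtained by fitting the same model formula to resamples of this kind for iseed values 1 through 8.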
</div><div class="MsoNormal">The only variable in this procedure is the seed for the random sampling function <b>sample()</b>, which I have denoted as iseed. The extremely complicated figure
below shows the <b>ctree</b> model obtained using the bootstrap resample generated from the training subset with iseed = 5.</div><div class="MsoNormal"><br /></div><div class="separator" style=
"clear: both; text-align: center;"><br /></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-lxTrGZk7n90/Uxyn_2pfB2I/AAAAAAAAAOY/ekhilLyXVyw
/s1600/Tree5.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-lxTrGZk7n90/Uxyn_2pfB2I/AAAAAAAAAOY/ekhilLyXVyw/s1600/Tree5.png" height=
"319" width="320" /></a></div><div class="MsoNormal"><br /></div><div class="MsoNormal"><br /></div><div class="MsoNormal">Comparing this model with the previous one – both built from datasets of the
same size, with the same general data characteristics – we see that the differences are even more dramatic than those between the original model (built from the complete dataset) and the second one
(built from the training subset). Specifically, while the training subset model has four terminal nodes, determined by two variables, the bootstrap subsample model uses all six of the variables
included in the model formula, yielding a tree with 16 terminal nodes. But wait – sampling with replacement generates a significant number of duplicated records (for large datasets, each
bootstrap resample contains approximately 63.2% of the original data values, meaning that the other 36.8% of the resample values must be duplicates). Could this be the reason the results are so
different? The following example shows that this is not the issue.</div><div class="MsoNormal"><br /></div><div class="MsoNormal"></div><div class="MsoNormal"><o:p></o:p></div><div class=
"separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-w6tbbQzqWn0/Uxyn8VuoH-I/AAAAAAAAAOU/RJTj_PhMFsM/s1600/Tree6.png" imageanchor="1" style="margin-left: 1em;
margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-w6tbbQzqWn0/Uxyn8VuoH-I/AAAAAAAAAOU/RJTj_PhMFsM/s1600/Tree6.png" height="319" width="320" /></a></div><div class="MsoNormal"><br />
</div><div class="MsoNormal"><o:p><br /></o:p></div><div class="MsoNormal">This plot shows the <b>ctree</b> model obtained from another bootstrap resample of the training data subset, obtained by
specifying iseed = 6 instead of iseed = 5. This second bootstrap resample tree is much simpler, with only 7 terminal nodes instead of 16, and the branches of the tree are based on only four of
the prediction variables instead of all six (specifically, neither gender nor veh_body appear in this model). While I don’t include all of the corresponding plots, I have also constructed and
compared the <b>ctree</b> models obtained from the bootstrap resamples generated for all iseed values between 1 and 8, giving final models involving between four and six variables, with between 7 and
16 terminal nodes. In all cases, the datasets used in building these models were exactly the same size and had the same statistical character. The key point is that, as Professor Harrell
noted in his comments, the structural variability of these classification tree models across similar datasets is substantial. In fact, this variability of individual tree-based models was one
of the key motivations for developing the random forest method, which achieves substantially reduced model uncertainty by averaging over many randomly generated trees. Unfortunately, the price
we pay for this improved model stability is a complete loss of interpretability. That is, looking at any one of the plots shown here, we can construct a simple description (e.g., node 12 in the
above figure represents older drivers – agecat > 4 – with less expensive cars, and it has the lowest risk of any of the groups identified there). While we may obtain less variable
predictions by averaging over a large number of these trees, such simple intuitive explanations of the resulting model are no longer possible.</div><div class="MsoNormal"><br /></div><div class=
"MsoNormal">I noted earlier that predictive modeling applications typically involve a three-step strategy: fit the model, fix the model, and apply the model. I also argued that once we fix the
model, we have eliminated model uncertainty when we apply it to new data. Unfortunately, if the inherent model uncertainty is large, as in the examples presented here, this greatly complicates
the “fix the model” step. That is, if small variations in our training data subset can cause large changes in the structure of our prediction model, it is likely that very different models will
exhibit similar performance when applied to our validation data subset. How, then, do we choose? I will examine this issue further in my next post when I discuss overfitting and the
training/validation/test split in more detail. </div><br /><div class="MsoNormal"><br /></div>Ron Pearson (aka TheNoodleDoodler)<br /><br />Posted 2013-08-06<div class="MsoNormal">My last
post focused on the use of the <i>ctree</i> procedure in the R package <i>party</i> to build classification tree models. These models map each record in a dataset into one of M mutually
exclusive groups, which are characterized by their average response. For responses coded as 0 or 1, this average may be regarded as an estimate of the probability that a record in the group
exhibits a “positive response.” This interpretation leads to the idea discussed here, which is to replace this estimate with the size-corrected probability estimate I discussed in my previous
post (<a href="http://exploringdatablog.blogspot.com/2011/04/screening-for-predictive.html">Screening for predictive characteristics</a>). Also, as discussed in that post, these estimates
provide the basis for confidence intervals that quantify their precision, particularly for small groups.</div><div class="MsoNormal"><br /></div><div class="MsoNormal"> </div><div class="MsoNormal">
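Before getting into the R packages involved, here is a minimal sketch of the size-correction itself, written out as a standalone function. This is my own illustration of the idea, not the PropCIs implementation, and the clamping of the limits to [0, 1] mirrors the behavior described later in this post:

```r
# Sketch (not the PropCIs source): size-corrected binomial estimate.
# The counts of 1's and 0's are each augmented by z^2/2 before forming
# the usual normal-approximation confidence interval.
AddZ2Sketch <- function(x, n, conf.level = 0.95){
  z <- qnorm(1 - (1 - conf.level)/2)   # approximately 1.96 for 95% limits
  nTilde <- n + z^2                    # both counts augmented by z^2/2
  pTilde <- (x + z^2/2)/nTilde         # size-corrected probability estimate
  half <- z * sqrt(pTilde * (1 - pTilde)/nTilde)
  c(lower = max(0, pTilde - half),     # clamp to the feasible range
    estimate = pTilde,
    upper = min(1, pTilde + half))
}
# For the 27-record "RDSTR" group discussed later in this post (2 claims),
# AddZ2Sketch(2, 27) gives roughly 0.0096, 0.1271, 0.2447, versus the
# classical estimate 2/27 = 0.0741 with limits -0.0247 and 0.1729.
```

The PropCIs procedure used in the rest of this post packages this same correction, together with several alternative interval estimators.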
In this post, the basis for these estimates is the R package <i>PropCIs</i>, which includes several procedures for estimating binomial probabilities and their confidence intervals, including an
implementation of the method discussed in my previous post. Specifically, the procedure used here is <i>addz2ci</i>, discussed in Chapter 9 of <a href="http://www.amazon.com/
Exploring-Data-Engineering-Sciences-Medicine/dp/0195089650">Exploring Data in Engineering, the Sciences, and Medicine</a>. As noted in both that discussion and in my previous post, this
estimator is described in a paper by Brown, Cai and DasGupta in 2002, but the documentation for the <i>PropCIs</i> package cites an earlier paper by Agresti and Coull (“Approximate is better than
exact for interval estimation of binomial proportions,” in <i>The American Statistician,</i> vol. 52, 1998, pp. 119-126). The essential idea is to modify the classical estimator, augmenting the
counts of 0’s and 1’s in the data by <i>z<sup>2</sup>/2</i>, where <i>z</i> is the normal z-score associated with the significance level. As a specific example, <i>z</i> is approximately 1.96
for 95% confidence limits, so this modification adds approximately 2 to each count. In cases where both of these counts are large, this correction has negligible effect, so the size-corrected
estimates and their corresponding confidence intervals are essentially identical with the classical results. In cases where either the sample is small or one of the possible responses is rare,
these size-corrected results are much more reasonable than the classical results, which motivated their use both here and in my earlier post.</div><div class="MsoNormal"><br /></div><div class=
"separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-x_lzlloipQg/UgGoBHq-A3I/AAAAAAAAALk/faiVi488bWs/s1600/Tree2PostFig01.png" imageanchor="1" style="margin-left:
1em; margin-right: 1em;"><img border="0" height="283" src="http://4.bp.blogspot.com/-x_lzlloipQg/UgGoBHq-A3I/AAAAAAAAALk/faiVi488bWs/s320/Tree2PostFig01.png" width="320" /></a></div><div class=
"MsoNormal"><br /></div><div class="MsoNormal"><br /></div><div class="MsoNormal">The above plot provides a simple illustration of the results that can be obtained using the <i>addz2ci</i> procedure,
in a case where some groups are small enough for these size-corrections to matter. More specifically, this plot is based on the Australian vehicle insurance dataset that I discussed in my last
post, and it characterizes the probability that a policy files a claim (i.e., that the variable <i>clm</i> has the value 1), for each of the 13 vehicle types included in the dataset. The heavy
horizontal line segments in this plot represent the size-corrected claim probability estimates for each vehicle type, while the open triangles connected by dotted lines represent the upper and lower
95% confidence limits around these probability estimates, computed as described above. The solid horizontal line represents the overall claim probability for the dataset, to serve as a
reference value for the individual subset results.</div><div class="MsoNormal"> </div><div class="MsoNormal"><br /></div><div class="MsoNormal">An important observation here is that although this
dataset is reasonably large (there are a total of 67,856 records), the subgroups are quite heterogeneous in size, spanning the range from 27 records listing “RDSTR” as the vehicle type to 22,233
listing “SEDAN”. As a consequence, although the classical and size-adjusted claim probability estimates and their confidence intervals are essentially identical
for the dataset overall, the extent of this agreement varies substantially across the different vehicle types. Taking the extremes, the results for the largest group (“SEDAN”) are, as with the
dataset overall, almost identical: the classical estimate is 0.0665, while the size-adjusted estimate is 0.0664; the lower 95% confidence limit also differs by one in the fourth decimal place
(classical 0.0631 versus size-corrected 0.0632), and the upper limit is identical to four decimal places, at 0.0697. In marked contrast, the classical and size-corrected estimates for the
“RDSTR” group are 0.0741 versus 0.1271, the upper 95% confidence limits are 0.1729 versus 0.2447, and the lower confidence limits are -0.0247 versus 0.0096. Note that in this case, the lower
classical confidence limit violates the requirement that probabilities must be positive, something that is not possible for the <i>addz2ci</i> confidence limits (specifically, negative lower limits are less likely to arise in the first place, and if they do arise, they are replaced with zero, the smallest feasible value for the lower confidence limit; similarly for upper confidence limits
that exceed 1). As is often the case, the primary advantage of plotting these results is that it gives us a much more immediate indication of the relative precision of the probability
estimates, particularly in cases like “RDSTR” where these confidence intervals are quite wide.</div><div class="MsoNormal"><br /></div><div class="MsoNormal">The R code used to generate these results
uses both the <i>addz2ci</i> procedure from the <i>PropCIs</i> package, and the <i>summaryBy</i> procedure from the <i>doBy</i> package. Specifically, the following function returns a dataframe
with one row for each distinct value of the variable <i>GroupingVar</i>. The columns of this dataframe include this value, the total number of records listing this value, the number of these
records for which the binary response variable <i>BinVar</i> is equal to 1, the lower confidence limit, the upper confidence limit, and the size-corrected estimate. The function is called with
<i>BinVar</i>, <i>GroupingVar</i>, and the significance level, with a default of 95%. The first two lines of the function require the <i>doBy</i> and <i>PropCIs</i> packages. The third
line constructs an internal dataframe, passed to the <i>summaryBy</i> function in the <i>doBy</i> package, which applies the <i>length</i> and <i>sum</i> functions to the subset of <i>BinVar</i>
values defined by each level of <i>GroupingVar</i>, giving the total number of records and the total number of records with <i>BinVar</i> = 1. The main loop in this program applies the <i>
addz2ci</i> function to these two numbers, for each value of <i>GroupingVar</i>, which returns a two-element list. The element <i>$estimate</i> gives the size-corrected probability estimate,
and the element <i>$conf.int</i> is a vector of length 2 with the lower and upper confidence limits for this estimate. The rest of the program appends these values to the internal dataframe
created by the <i>summaryBy</i> function, which is returned as the final result. The code listing follows:</div><div class="MsoNormal"><br /></div><blockquote class="tr_bq">
BinomialCIbyGroupFunction <- function(BinVar, GroupingVar, SigLevel = 0.95){<br />  require(doBy)<br />  require(PropCIs)<br />  # Count records and positive responses within each group<br />  IntFrame = data.frame(b = BinVar, g = as.factor(GroupingVar))<br />  SumFrame = summaryBy(b ~ g, data = IntFrame, FUN = c(length, sum))<br />  n = nrow(SumFrame)<br />  EstVec = vector("numeric", n)<br />  LowVec = vector("numeric", n)<br />  UpVec = vector("numeric", n)<br />  # Size-corrected estimate and confidence limits for each group<br />  for (i in 1:n){<br />    Rslt = addz2ci(x = SumFrame$b.sum[i], n = SumFrame$b.length[i], conf.level = SigLevel)<br />    EstVec[i] = Rslt$estimate<br />    CI = Rslt$conf.int<br />    LowVec[i] = CI[1]<br />    UpVec[i] = CI[2]<br />  }<br />  SumFrame$LowerCI = LowVec<br />  SumFrame$UpperCI = UpVec<br />  SumFrame$Estimate = EstVec<br />  return(SumFrame)<br />}</blockquote><div class=
"separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-I5hspeKvPcI/UgGqALxgzEI/AAAAAAAAAL0/bRuw6CXTG1M/s1600/Tree2PostFig02.png" imageanchor="1" style="margin-left:
1em; margin-right: 1em;"><img border="0" height="283" src="http://4.bp.blogspot.com/-I5hspeKvPcI/UgGqALxgzEI/AAAAAAAAAL0/bRuw6CXTG1M/s320/Tree2PostFig02.png" width="320" /></a></div><br /><br />The binary response characterization tools just described can be applied to the results obtained from a classification tree model. Specifically, since a classification tree assigns every
record to a unique terminal node, we can characterize the response across these nodes, treating the node numbers as the data groups, analogous to the vehicle body types in the previous example.
As a specific illustration, the figure above gives a graphical representation of the <i>ctree</i> model considered in my previous post, built using the <i>ctree</i> command from the <i>party</i>
package with the following formula:<br /> <div class="MsoNormal"><br /></div><div class="MsoNormal"> Fmla = clm ~ veh_value +
veh_body + veh_age + gender + area + agecat</div><div class="MsoNormal"><br /></div><div class="MsoNormal">Recall that this formula specifies we want a classification tree that predicts the binary
claim indicator <i>clm</i> from the six variables on the right-hand side of the tilde, separated by “+” signs. Each of the terminal nodes in the resulting <i>ctree</i> model is characterized
with a rectangular box in the above figure, giving the number of records in each group <i>(n)</i> and the average positive response <i>(y)</i>, corresponding to the classical claim probability
estimate. Note that the product <i>ny</i> corresponds to the total number of claims in each group, so these products and the group sizes together provide all of the information we need to
compute the size-corrected claim probability estimates and their confidence limits for each terminal node. Alternatively, we can use the <i>where</i> method associated with the binary tree
object that <i>ctree</i> returns to extract the terminal nodes associated with each observation. Then, we simply use the terminal node in place of vehicle body type in exactly the same analysis
as before.</div><div class="MsoNormal"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-WAAQNC5-2IU/UgGq_xDPi_I/AAAAAAAAAME/V85l5kYcm_s/
s1600/Tree2PostFig03.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="283" src="http://1.bp.blogspot.com/-WAAQNC5-2IU/UgGq_xDPi_I/AAAAAAAAAME/V85l5kYcm_s/s320
/Tree2PostFig03.png" width="320" /></a></div><div class="MsoNormal"><br /></div><div class="MsoNormal"><br /></div><div class="MsoNormal">The above figure shows these estimates, in the same format as
the original plot of claim probability broken down by vehicle body type given earlier. Here, the range of confidence interval widths is much less extreme than before, but it is still clearly
evident: the largest group (Node 10, with 23,315 records) exhibits the narrowest confidence interval, while the smallest groups (Node 9, with 1,361 records, and Node 13, with 1,932 records) exhibit
the widest confidence intervals. Despite its small size, however, the smallest group does exhibit a significantly lower claim probability than any of the other groups defined by this
classification tree model.</div><div class="MsoNormal"><br /></div><div class="MsoNormal"> </div><div class="MsoNormal">The primary point of this post has been to demonstrate that binomial confidence
intervals can be used to help interpret and explain classification tree results, especially when displayed graphically as in the above figure. These displays provide a useful basis for
comparing classification tree models obtained in different ways (e.g., by different algorithms like <i>rpart</i> and <i>ctree</i>, or by different tuning parameters for one specific algorithm).
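The workflow just described can be sketched in a few lines. This is an illustration rather than code from the post: TreeModel stands for the fitted <i>ctree</i> object and carFrame for the insurance dataframe, and both names are mine:

```r
# Sketch: size-corrected claim probabilities by terminal node,
# using the BinomialCIbyGroupFunction listed above.  TreeModel (a
# fitted ctree object) and carFrame (the insurance data) are assumed
# to be available; the names are illustrative.
library(party)
NodeVec <- where(TreeModel)    # terminal node number for each record
NodeCIs <- BinomialCIbyGroupFunction(BinVar = carFrame$clm,
                                     GroupingVar = NodeVec)
NodeCIs                        # one row per terminal node
```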
Comparisons of this sort will form the basis for my next post.</div><div class="MsoNormal"> </div>Ron Pearson (aka TheNoodleDoodler)<br /><br />Posted 2013-04-13<div class="MsoNormal" style=
"margin: 0in 0in 0pt;">On March 26, I attended the Connecticut R Meetup in New Haven, which featured a talk by Illya Mowerman on decision trees in <em>R</em>. I have gone to these Meetups before, and I have always found them to be interesting and informative. Attendees range from those who are just starting to explore <em>R</em> to those who have multiple CRAN packages to their credit. Each session is organized around a talk that focuses on some aspect of <em>R</em>, and both the talks and the discussion that follows are typically lively and useful. More information about the Connecticut R Meetup can be found <a href="http://www.meetup.com/Conneticut-R-Users-Group/messages/47523342/">here</a>, and information about R Meetups in other areas can be found with a Google search on “R Meetup” with a location.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="separator" style=
"clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-9rNn3CpjFYE/UWlfCb5uNMI/AAAAAAAAAK8/bsMaO9uyttM/s1600/ctreeFig01.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;
"><img border="0" height="319" src="http://3.bp.blogspot.com/-9rNn3CpjFYE/UWlfCb5uNMI/AAAAAAAAAK8/bsMaO9uyttM/s320/ctreeFig01.png" width="320" /></a></div><div class="MsoNormal" style=
"margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">Mowerman’s talk focused on decision trees like the one shown in the figure above. I give a somewhat more detailed discussion of this example below, but the basic idea is that the tree assigns every record in a dataset to a unique group, and a predicted response is generated for each group. The basic decision tree models are either classification trees, appropriate to binary response variables, or regression tree models, appropriate to numeric response variables. The figure above represents a classification tree model that predicts the probability that an automobile insurance policyholder will file a claim, based on a publicly available insurance dataset discussed further below. Two advantages of classification tree models that Mowerman emphasized in his talk are, first, their simplicity of interpretation, and second, their ability to generate predictions from a mix of numerical and categorical covariates. The above example illustrates both of these points – the decision tree is based on both categorical variables like <strong>veh_body</strong> (vehicle body type) and numerical variables like <strong>veh_value</strong> (the vehicle value in units of 10,000 Australian dollars).
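This second advantage is easy to see in code: factors and numeric covariates enter the model formula in exactly the same way. A minimal sketch of the fit, assuming the <strong>party</strong> package and the car.csv file discussed below are available:

```r
# Sketch: fitting a classification tree like the one shown above.
# Categorical covariates (read in as factors) and numeric covariates
# appear side by side in the same formula, with no special encoding.
library(party)
carFrame <- read.csv("car.csv")           # Australian insurance dataset
Fmla <- clm ~ veh_value + veh_body + veh_age + gender + area + agecat
TreeModel <- ctree(Fmla, data = carFrame)
table(where(TreeModel))                   # record counts per terminal node
```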
</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">To interpret this tree, begin by reading from the top down, with the root node, numbered 1, which partitions the dataset into two subsets based on the variable <strong>agecat</strong>. This variable is an integer-coded driver age group with six levels, ranging from 1 for the youngest drivers to 6 for the oldest drivers. The root node splits the dataset into a younger driver subgroup (to the left, with <strong>agecat</strong> values 1 through 4) and an older driver subgroup (to the right, with <strong>agecat</strong> values 5 and 6). Going to the right, node 11 splits the older driver group on the basis of vehicle value, with node 12 consisting of older drivers with <strong>veh_value</strong> less than or equal to 2.89, corresponding to vehicle values of not more than 28,900 Australian dollars. This subgroup contains 15,351 policy records, of which 5.3% file claims. Similarly, node 13 corresponds to older drivers with vehicles valued at more than 28,900 Australian dollars; this is a smaller group (1,932 policy records) with a higher fraction filing claims (8.3%). Going to the left, we partition the younger driver group first on vehicle body type (node 2), then possibly a second time on driver age (node 4), possibly further on vehicle value (node 6), and finally again on vehicle body type (node 7). The key point is that every record in the dataset is ultimately assigned to one of the seven terminal nodes of this tree (the “leaves,” numbered 3, 5, 8, 9, 10, 12, and 13). The numbers associated with these nodes give their size and the fraction of each group that files a claim, which may be viewed as an estimate of the conditional probability that a driver from each group will file a claim.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class
="MsoNormal" style="margin: 0in 0in 0pt;">Classification trees can be fit to data using a number of different algorithms, several of which are included in various <em>R</em> packages. Mowerman’s talk focused primarily on the <strong>rpart</strong> package that is part of the standard <em>R</em> distribution and includes a procedure, also named <strong>rpart</strong>, based on what is probably the best-known algorithm for fitting classification and regression trees. In addition, Mowerman also discussed the <strong>rpart.plot</strong> package, a very useful adjunct to <strong>rpart</strong> that provides a lot of flexibility in representing the resulting tree models graphically. In particular, this package can be used to make much nicer plots than the one shown above; I haven’t done that here largely because I have used a different tree-fitting procedure, for reasons discussed in the next paragraph. Another classification package that Mowerman mentioned in his talk is <strong>C50</strong>, which implements the C5.0 algorithm popular in the machine learning community. The primary focus of this post is the <strong>ctree</strong> procedure in the <strong>party</strong> package, which was used to fit the tree shown here.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">The
reason I have used the <strong>ctree</strong> procedure instead of the <strong>rpart</strong> procedure is that for the dataset I consider here, the <strong>rpart</strong> procedure returns a trivial
tree.<span style="mso-spacerun: yes;"> </span>That is, when I attempt to fit a tree to the dataset using <strong>rpart</strong> with the response variable and covariates described below, the
resulting “tree” assigns the entire dataset to a single node, declaring the overall fraction of positive responses in the dataset to be the common prediction for all records.<span style=
"mso-spacerun: yes;"> </span>Applying the <strong>ctree</strong> procedure (the code is listed below) yields the nontrivial tree shown in the plot above.<span style="mso-spacerun: yes;">
</span>The reason for the difference in these results is that the <strong>rpart</strong> and <strong>ctree</strong> procedures use different tree-fitting algorithms.<span style="mso-spacerun: yes;">&
nbsp; </span>Very likely, the reason <strong>rpart</strong> has such difficulty with this dataset is its high degree of <em>class imbalance:</em> the positive response (i.e., “policy filed one or
more claims”) occurs in only 4,624 of 67,856 data records, representing 6.81% of the total.<span style="mso-spacerun: yes;"> </span>This imbalance problem is known to make classification
difficult, enough so that it has become the focus of a specialized technical literature.<span style="mso-spacerun: yes;"> </span>For a rather technical survey of this topic, refer to the paper
“The Class Imbalance Problem: A Systematic Study,” by Japkowicz and Stephen <a href="http://iospress.metapress.com/content/mxug8cjkjylnk3n0/">(Intelligent Data Analysis, volume 6, number 5, November,
2002).</a> (So far, I have not been able to find a free version of this paper, but if you are interested in the topic, a search on this title turns up a number of other useful papers on the
topic, although generally more specialized than this broad survey.)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">To obtain
the tree shown in the plot above, I used the following R commands:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><o:p><blockquote class="tr_bq"><div class="MsoNormal" style=
"margin: 0in 0in 0pt;"><o:p>> library(party)</o:p></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><o:p>> carFrame = read.csv("car.csv")</o:p></div><div class="MsoNormal" style=
"margin: 0in 0in 0pt;"><o:p>> Fmla = clm ~ veh_value + veh_body + veh_age + gender + area + agecat</o:p></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><o:p>> TreeModel = ctree(Fmla,
data = carFrame)</o:p></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><o:p>> plot(TreeModel, type="simple")</o:p></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div></
blockquote><div class="MsoNormal" style="margin: 0in 0in 0pt;"></div></o:p><o:p></o:p> </span><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style=
"margin: 0in 0in 0pt;"><o:p></o:p>The first line loads the <strong>party</strong> package to make the <strong>ctree</strong> procedure available for our use, and the second line reads the data file
described below into the dataframe <strong>carFrame</strong> (note that this assumes the data file "car.csv" has been loaded into <em>R's</em> current working directory, which can be shown using the
<strong>getwd()</strong> command). The third line defines the formula that specifies the response as the binary variable <strong>clm</strong> (on the left side of "~") and the six other
variables listed above as potential predictors, each separated by the "+" symbol. The fourth line invokes the <strong>ctree</strong> procedure to fit the model and the last line displays the
results.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">The dataset I used here is <strong>car.csv</strong>, available from
the <a href="http://www.businessandeconomics.mq.edu.au/our_departments/Applied_Finance_and_Actuarial_Studies/research/books/GLMsforInsuranceData/data_sets">website</a> associated with the book <em>Generalized Linear Models for Insurance Data</em>, by Piet de Jong and Gillian Z. Heller.<span style="mso-spacerun: yes;"> </span>As noted, this dataset contains 67,856 records, each
characterizing an automobile insurance policy associated with one vehicle and one driver.<span style="mso-spacerun: yes;"> </span>The dataset has 10 columns, each representing an observed value
for a policy characteristic, including claim and loss information, vehicle characteristics, driver characteristics, and certain other variables (e.g., a categorical variable characterizing the type
of region where the vehicle is driven).<span style="mso-spacerun: yes;"> </span>The <strong>ctree</strong> model shown above was built to predict the binary response variable <strong>clm</
strong> (where <strong>clm</strong> = 1 if one or more claims have been filed by the policyholder, and 0 otherwise), based on the following prediction variables:</div><div class="MsoNormal" style=
"margin: 0in 0in 0pt;"><br /></div><o:p><o:p><blockquote class="tr_bq"><o:p><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt 0.75in;
mso-list: l0 level1 lfo1; tab-stops: list .75in; text-indent: -0.25in;"><span style="mso-list: Ignore;">-<span style="font: 7pt 'Times New Roman';"> </
span></span>the numeric variable veh_value;</div><div class="MsoNormal" style="margin: 0in 0in 0pt 0.75in; mso-list: l0 level1 lfo1; tab-stops: list .75in; text-indent: -0.25in;"><span style=
"mso-list: Ignore;">-<span style="font: 7pt 'Times New Roman';"> </span></span>veh_body, a categorical variable with 13 levels;</div><div class=
"MsoNormal" style="margin: 0in 0in 0pt 0.75in; mso-list: l0 level1 lfo1; tab-stops: list .75in; text-indent: -0.25in;"><span style="mso-list: Ignore;">-<span style="font: 7pt 'Times New Roman';">&
nbsp; </span></span>veh_age, an integer-coded categorical variable with 4 levels;</div><div class="MsoNormal" style="margin: 0in 0in 0pt 0.75in; mso-list: l0
level1 lfo1; tab-stops: list .75in; text-indent: -0.25in;"><span style="mso-list: Ignore;">-<span style="font: 7pt 'Times New Roman';"> </span></span>
gender, a binary indicator of driver gender;</div><div class="MsoNormal" style="margin: 0in 0in 0pt 0.75in; mso-list: l0 level1 lfo1; tab-stops: list .75in; text-indent: -0.25in;"><span style=
"mso-list: Ignore;">-<span style="font: 7pt 'Times New Roman';"> </span></span>area, a categorical variable with six levels;</div><div class=
"MsoNormal" style="margin: 0in 0in 0pt 0.75in; mso-list: l0 level1 lfo1; tab-stops: list .75in; text-indent: -0.25in;"><span style="mso-list: Ignore;">-<span style="font: 7pt 'Times New Roman';">&
nbsp; </span></span>agecat, an integer-coded driver age variable.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div></o:p></blockquote></
o:p></o:p><o:p><div class="MsoNormal" style="margin: 0in 0in 0pt;">The tree model shown above illustrates one of the points Mowerman made in his talk, that classification tree models can easily
handle mixed covariate types: here, these covariates include one numeric variable (<strong>veh_value</strong>), one binary variable (<strong>gender</strong>), and four categorical variables.<span
style="mso-spacerun: yes;"> </span>In principle, tree models can be built using categorical variables with an arbitrary number of levels, but in practice procedures like <strong>ctree</strong>
will fail if the number of levels becomes too large.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">One of the tuning
parameters in tree-fitting procedures like <strong>rpart</strong> and <strong>ctree</strong> is the minimum node size.<span style="mso-spacerun: yes;"> </span>In his R Meetup talk, Mowerman
showed that increasing this value from the default limit of 7 yielded simpler trees for the dataset he considered (the <strong>churn</strong> dataset from the <strong>C50</strong> package).<span
style="mso-spacerun: yes;"> </span>Specifically, increasing the minimum node size parameter eliminated very small nodes from the tree, nodes whose practical utility was questionable due to
their small size.<span style="mso-spacerun: yes;"> </span>In my next post, I will show how a graphical tool for displaying binomial probability confidence limits can be used to help interpret
classification tree results by explicitly displaying the prediction uncertainties.<span style="mso-spacerun: yes;"> </span>The tool I use is <strong>GroupedBinomialPlot</strong>, one of those
included in the <strong>ExploringData</strong> package that I am developing.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">
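The node-size effect Mowerman demonstrated can be sketched with <strong>rpart</strong>, which ships with standard <em>R</em> together with its bundled <strong>kyphosis</strong> dataset; for <strong>ctree</strong>, the analogous setting is the <strong>minbucket</strong> argument of <strong>ctree_control</strong> in the <strong>party</strong> package. The dataset and parameter values below are my own illustrative choices, not those from Mowerman's talk:

```r
# Illustrative sketch: effect of the minimum terminal node size
# ("minbucket", default 7 in both rpart and ctree) on tree complexity,
# using rpart and its bundled kyphosis dataset.  For ctree, the
# analogous call is ctree(..., controls = ctree_control(minbucket = 20)).
library(rpart)

fitDefault <- rpart(Kyphosis ~ Age + Number + Start, data = kyphosis)
fitCoarse  <- rpart(Kyphosis ~ Age + Number + Start, data = kyphosis,
                    control = rpart.control(minbucket = 20))

# Each row of the frame component is one node of the fitted tree, so
# comparing row counts shows how raising the minimum node size
# eliminates the small nodes of questionable practical utility.
nrow(fitDefault$frame)
nrow(fitCoarse$frame)
```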
Finally, I should say in response to a question about my last post that, sadly, I do not yet have a beta test version of the <strong>ExploringData</strong> package.</div></o:p>Ron Pearson (aka TheNoodleDoodler), 2013-02-16<div class="MsoNormal" style=
"margin: 0in 0in 0pt;">One of the topics emphasized in <a href="http://www.amazon.com/Exploring-Data-Engineering-Sciences-Medicine/dp/0195089650">Exploring Data in Engineering, the Sciences and
Medicine</a> is the damage outliers can do to traditional data characterizations.<span style="mso-spacerun: yes;"> </span>Consequently, one of the procedures to be included in the <strong>
ExploringData</strong> package is <strong>FindOutliers</strong>, described in this post.<span style="mso-spacerun: yes;"> </span>Given a vector of numeric values, this procedure supports four
different methods for identifying possible outliers.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">Before describing these
methods, it is important to emphasize two points.<span style="mso-spacerun: yes;"> </span>First, the <i style="mso-bidi-font-style: normal;">detection</i> of outliers in a sequence of numbers
can be approached as a mathematical problem, but the <i style="mso-bidi-font-style: normal;">interpretation</i> of these data observations cannot.<span style="mso-spacerun: yes;"> </span>That
is, mathematical outlier detection procedures implement various rules for identifying points that appear to be anomalous with respect to the nominal behavior of the data, but they cannot explain <i
style="mso-bidi-font-style: normal;">why</i> these points appear to be anomalous.<span style="mso-spacerun: yes;"> </span>The second point is closely related to the first: one possible source
of outliers in a data sequence is gross measurement errors or other data quality problems, but other sources of outliers are also possible so it is important to keep an open mind.<span style=
"mso-spacerun: yes;"> </span>The terms “outlier” and “bad data” are <i style="mso-bidi-font-style: normal;">not</i> synonymous.<span style="mso-spacerun: yes;"> </span>Chapter 7 of <em>
Exploring Data</em> briefly describes two examples of outliers whose detection and interpretation led to a Nobel Prize and to a major new industrial product (Teflon, a registered trademark of the
DuPont Company).</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">In the case of a single sequence of numbers, the typical
approach to outlier detection is to first determine upper and lower limits on the nominal range of data variation, and then declare any point falling outside this range to be an outlier.<span style=
"mso-spacerun: yes;"> </span>The <strong>FindOutliers</strong> procedure implements the following methods of computing the upper and lower limits of the nominal data range:</div><div class=
"MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt 1in; mso-list: l0 level1 lfo1; tab-stops: list 1.0in; text-indent: -0.5in;"><span style=
"mso-list: Ignore;">1.<span style="font: 7pt 'Times New Roman';"> </span></span>The ESD
identifier, more commonly known as the “three-sigma edit rule,” which is well known but unreliable;</div><div class="MsoNormal" style="margin: 0in 0in 0pt 1in; mso-list: l0 level1 lfo1; tab-stops: list 1.0in;
text-indent: -0.5in;"><span style="mso-list: Ignore;">2.<span style="font: 7pt 'Times New Roman';"> &
nbsp; </span></span>The Hampel identifier, a more reliable procedure based on the median and the MADM scale estimate;</div><div class="MsoNormal" style="margin: 0in 0in 0pt 1in; mso-list: l0 level1
lfo1; tab-stops: list 1.0in; text-indent: -0.5in;"><span style="mso-list: Ignore;">3.<span style="font: 7pt 'Times New Roman';"> &
nbsp; </span></span>The standard boxplot rule, based on the upper and lower quartiles of the data distribution;</div><div class="MsoNormal" style="margin: 0in 0in 0pt
1in; mso-list: l0 level1 lfo1; tab-stops: list 1.0in; text-indent: -0.5in;"><span style="mso-list: Ignore;">4.<span style="font: 7pt 'Times New Roman';"> &
nbsp; </span></span>An adjusted boxplot rule, based on the upper and lower quartiles, along with a robust skewness estimator called the <i style=
"mso-bidi-font-style: normal;">medcouple</i>.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">The rest of this post briefly
describes these four outlier detection rules and illustrates their application to two real data examples.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal"
style="margin: 0in 0in 0pt;">Without question, the most popular outlier detection rule is the ESD identifier (an abbreviation for “extreme Studentized deviation”), which declares any point more than
<i style="mso-bidi-font-style: normal;">t </i>standard deviations from the mean to be an outlier, where the threshold value <i style="mso-bidi-font-style: normal;">t</i> is most commonly taken to be
3.<span style="mso-spacerun: yes;"> </span>In other words, the nominal range used by this outlier detection procedure is the closed interval:</div><div class="MsoNormal" style="margin: 0in 0in
0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-tab-count: 1;"> </span>[mean – t * SD, mean +
t * SD]</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">where SD is the estimated standard deviation of the data sequence.
<span style="mso-spacerun: yes;"> </span>Motivation for the threshold choice t = 3 comes from the fact that for normally distributed data, the probability of observing a value more than three
standard deviations from the mean is only about 0.3%.<span style="mso-spacerun: yes;"> </span>The problem with this outlier detection procedure is that both the mean and the standard deviation
are themselves extremely sensitive to the presence of outliers in the data.<span style="mso-spacerun: yes;"> </span>As a consequence, this procedure is likely to miss outliers that are present
in the data.<span style="mso-spacerun: yes;"> </span>In fact, it can be shown that for a contamination level greater than 10%, this rule fails completely, detecting no outliers at all, no
matter how extreme they are (for details, see the discussion in Sec. 3.2.1 of <a href="http://www.amazon.com/Mining-Imperfect-Data-Contamination-Incomplete/dp/0898715822">Mining Imperfect Data</a>).
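This failure mode is easy to reproduce in base <em>R</em>. The data below are artificial, constructed by me to mimic the makeup flow rate situation discussed later in this post (nominal values near 400, plus 20% gross outliers at zero), but they show the three-sigma limits swallowing every point:

```r
# Artificial data with 20% contamination: 80 nominal values near 400
# and 20 gross outliers at zero.
x <- c(seq(398, 402, length.out = 80), rep(0, 20))

# ESD (three-sigma) nominal range: [mean - 3*SD, mean + 3*SD]
t <- 3
esdLimits <- c(mean(x) - t * sd(x), mean(x) + t * sd(x))

# The outliers drag the mean down to 320 and inflate the standard
# deviation to roughly 160, so the interval is about [-162, 802] and
# contains every point: the rule detects no outliers at all.
sum(x < esdLimits[1] | x > esdLimits[2])    # 0
```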
</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">The default option for the <strong>FindOutliers</strong> procedure is the
Hampel identifier, which replaces the mean with the median and the standard deviation with the MAD (or MADM)<span style="mso-spacerun: yes;"> </span>scale estimate.<span style="mso-spacerun:
yes;"> </span>The nominal data range for this outlier detection procedure is:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in
0pt;"><span style="mso-tab-count: 1;"> </span>[median – t * MAD, median + t * MAD]</div><div class="MsoNormal" style="margin: 0in 0in
0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">As I have discussed in previous posts, the median and the MAD scale are much more resistant to the influence of outliers than the
mean and standard deviation.<span style="mso-spacerun: yes;"> </span>As a consequence, the Hampel identifier is generally more effective than the ESD identifier, although the Hampel identifier
can be too aggressive, declaring too many points as outliers.<span style="mso-spacerun: yes;"> </span>For detailed comparisons of the ESD and Hampel identifiers, refer to Sec. 7.5 of <i style=
"mso-bidi-font-style: normal;">Exploring Data</i> or Sec. 3.3 of <i style="mso-bidi-font-style: normal;">Mining Imperfect Data</i>.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></
div><div class="MsoNormal" style="margin: 0in 0in 0pt;">The third method option for the <strong>FindOutliers</strong> procedure is the standard boxplot rule, based on the following nominal data
range:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-tab-count: 1;"> &
nbsp; </span>[Q1 – c * IQD, Q3 + c * IQD]</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">where Q1 and
Q3 represent the lower and upper quartiles, respectively, of the data distribution, and IQD = Q3 – Q1 is the interquartile distance, a measure of the spread of the data similar to the standard
deviation.<span style="mso-spacerun: yes;"> </span>The threshold parameter <i style="mso-bidi-font-style: normal;">c</i> is analogous to <i style="mso-bidi-font-style: normal;">t</i> in the
first two outlier detection rules, and the value most commonly used in this outlier detection rule is c = 1.5.<span style="mso-spacerun: yes;"> </span>This outlier detection rule is much less
sensitive to the presence of outliers than the ESD identifier, but more sensitive than the Hampel identifier, and, like the Hampel identifier, it can be somewhat too aggressive, declaring nominal
data observations to be outliers.<span style="mso-spacerun: yes;"> </span>An advantage of the boxplot rule over these two alternatives is that, because it does not depend on an estimate of the
“center” of the data (e.g., the mean in the ESD identifier or the median in the Hampel identifier), it is better suited to distributions that are moderately asymmetric.</div><div class="MsoNormal"
style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">The fourth method option is an extension of the standard boxplot rule, developed for data distributions
that may be strongly asymmetric.<span style="mso-spacerun: yes;"> </span>Basically, this procedure modifies the threshold parameter <i style="mso-bidi-font-style: normal;">c</i> by an amount
that depends on the asymmetry of the distribution, modifying the upper threshold and the lower threshold differently.<span style="mso-spacerun: yes;"> </span>Because the standard moment-based
skewness estimator is <i style="mso-bidi-font-style: normal;">extremely</i> outlier-sensitive (for an illustration of this point, see the discussion in Sec. 7.1.1 of <i style="mso-bidi-font-style:
normal;">Exploring Data</i>), it is necessary to use an outlier-resistant alternative to assess distributional asymmetry.<span style="mso-spacerun: yes;"> </span>The asymmetry measure used here
is the <i style="mso-bidi-font-style: normal;">medcouple</i>, a robust skewness measure available in the <b style="mso-bidi-font-weight: normal;">robustbase</b> package in <em>R</em>, which I have
discussed in a previous post (<a href="http://exploringdatablog.blogspot.com/2011/02/boxplots-and-beyond-part-ii-asymmetry.html">Boxplots and Beyond - Part II: Asymmetry</a><span style="mso-spacerun:
yes;"> </span>).<span style="mso-spacerun: yes;"> </span>An important point about the medcouple is that it can be either positive or negative, depending on the direction of the
distributional asymmetry; positive values arise more frequently in practice, but negative values can occur and the sign of the medcouple influences the definition of the asymmetric boxplot rule.<span
style="mso-spacerun: yes;"> </span>Specifically, for positive values of the medcouple MC, the adjusted boxplot rule’s nominal data range is:</div><div class="MsoNormal" style="margin: 0in 0in
0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-tab-count: 1;"> </span>[Q1 – c * exp(a * MC) *
IQD, Q3 + c * exp(b * MC) * IQD ]</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">while for negative medcouple values, the
nominal data range is:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-tab-count: 1;"> &
nbsp; </span>[Q1 – c * exp(-b * MC) * IQD, Q3 + c * exp(-a * MC) * IQD ]</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class=
"MsoNormal" style="margin: 0in 0in 0pt;">An important observation here is that for symmetric data distributions, MC should be zero, reducing the adjusted boxplot rule to the standard boxplot rule
described above.<span style="mso-spacerun: yes;"> </span>As in the standard boxplot rule, the threshold parameter is typically taken as c = 1.5, while the other two parameters are typically
taken as a = -4 and b = 3.<span style="mso-spacerun: yes;"> </span>In particular, these are the default values for the procedure <b style="mso-bidi-font-weight: normal;">adjboxStats</b> in the
<b style="mso-bidi-font-weight: normal;">robustbase</b> package.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="separator" style="clear: both; text-align: center;">
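To make these nominal ranges concrete, here is a base-<em>R</em> sketch of the rules described above. The function names are mine, chosen for illustration (this is not the interface of the forthcoming <strong>FindOutliers</strong> procedure), and the medcouple MC must be supplied externally, e.g. from the <strong>mc</strong> function in <strong>robustbase</strong>:

```r
# Base-R sketches of the nominal-range computations described above.
# These helper names are illustrative only, not the FindOutliers API.

esdLimits <- function(x, t = 3) {
  c(mean(x) - t * sd(x), mean(x) + t * sd(x))
}

hampelLimits <- function(x, t = 3) {
  m <- median(x)
  s <- mad(x)  # base R's mad() includes the 1.4826 consistency factor
  c(m - t * s, m + t * s)
}

boxplotLimits <- function(x, c = 1.5) {
  q <- quantile(x, c(0.25, 0.75), names = FALSE)
  iqd <- q[2] - q[1]
  c(q[1] - c * iqd, q[2] + c * iqd)
}

# Adjusted boxplot rule: the medcouple MC is not available in base R;
# robustbase::mc(x) computes it.  For MC = 0 this reduces exactly to
# the standard boxplot rule, since exp(0) = 1.
adjustedBoxplotLimits <- function(x, MC, c = 1.5, a = -4, b = 3) {
  q <- quantile(x, c(0.25, 0.75), names = FALSE)
  iqd <- q[2] - q[1]
  if (MC >= 0) {
    c(q[1] - c * exp(a * MC) * iqd, q[2] + c * exp(b * MC) * iqd)
  } else {
    c(q[1] - c * exp(-b * MC) * iqd, q[2] + c * exp(-a * MC) * iqd)
  }
}
```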
<a href="http://4.bp.blogspot.com/-SPUmE3GgKEc/UR_jamXVJaI/AAAAAAAAAKs/ZmL_a4g1Pg4/s1600/FindOutliersFig01Makeup.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0"
height="319" src="http://4.bp.blogspot.com/-SPUmE3GgKEc/UR_jamXVJaI/AAAAAAAAAKs/ZmL_a4g1Pg4/s320/FindOutliersFig01Makeup.png" uea="true" width="320" /></a></div><div class="MsoNormal" style="margin:
0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">To illustrate how these outlier detection methods
compare, the above pair of plots shows the results of applying all four of them to the makeup flow rate dataset discussed in <em>Exploring Data</em> (Sec. 7.1.2) in connection with the failure of the
ESD identifier.<span style="mso-spacerun: yes;"> </span>The points in these plots represent approximately 2,500 regularly sampled flow rate measurements from an industrial manufacturing
process.<span style="mso-spacerun: yes;"> </span>These measurements were taken over a long enough period of time to contain both periods of regular process operation – during which the
measurements fluctuate around a value of approximately 400 – and periods when the process was shut down, was being shut down, or was being restarted, during which the measurements exhibit values near
zero.<span style="mso-spacerun: yes;"> </span>If we wish to characterize normal process operation, these shut down episodes represent outliers, and they correspond to about 20% of the data.
<span style="mso-spacerun: yes;"> </span>The left-hand plot shows the outlier detection limits for the ESD identifier (lighter, dashed lines) and the Hampel identifier (darker, dotted lines).
<span style="mso-spacerun: yes;"> </span>As discussed in <em>Exploring Data</em>, the ESD limits are wide enough that they do not detect any outliers in this data sequence, while the Hampel
identifier nicely separates the data into normal operating data and outliers that correspond to the shut down episodes.<span style="mso-spacerun: yes;"> </span>The right-hand plot shows the&
nbsp;analogous results obtained with the standard boxplot method (lighter, dashed lines) and the adjusted boxplot method (darker, dotted lines).<span style="mso-spacerun: yes;"> </
span>Here, the standard boxplot rule gives results very similar to the Hampel identifier, again nicely separating the dataset into normal operating data and shut down episodes.<span style=
"mso-spacerun: yes;"> </span>Unfortunately, the adjusted boxplot rule does not perform very well here, placing its lower nominal data limit in about the middle of the shut down data and its
upper nominal data limit in about the middle of the normal operating data.<span style="mso-spacerun: yes;"> </span>The likely cause of this behavior is the relatively large fraction of lower tail outliers, which introduces a fairly strong negative skewness (the medcouple value for this example is -0.589).</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div
class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-Sa2YMBtbOwA/UR_jSga68dI/AAAAAAAAAKk/9hbC2kIJWbw/s1600/FindOutliersFig01.png" imageanchor="1" style=
"margin-left: 1em; margin-right: 1em;"><img border="0" height="319" src="http://1.bp.blogspot.com/-Sa2YMBtbOwA/UR_jSga68dI/AAAAAAAAAKk/9hbC2kIJWbw/s320/FindOutliersFig01.png" uea="true" width="320"
/></a></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">The
second example considered here is the industrial pressure data sequence shown in the above figure, in the same format as the previous figure.<span style="mso-spacerun: yes;"> </span>This data
sequence was discussed in <em>Exploring Data</em> (pp. 326-327) as a troublesome case because the two smallest values in this data sequence – near the right-hand end of the plots – appear to be
downward outliers in a sequence with generally positive skewness (here, the medcouple value is 0.162).<span style="mso-spacerun: yes;"> </span>As a consequence, neither the ESD identifier nor
the Hampel identifier gives fully satisfactory performance, in both cases declaring only one of these points as a downward outlier and arguably detecting too many upward outliers.<span style=
"mso-spacerun: yes;"> </span>In fact, because the Hampel identifier is more aggressive here, it actually declares more upward outliers, making its performance worse for this example.<span style
="mso-spacerun: yes;"> </span>The right-hand plot in the above figure shows the outlier detection limits for the standard boxplot rule (lighter, dashed lines) and the adjusted boxplot rule
(darker, dotted lines).<span style="mso-spacerun: yes;"> </span>As in the previous example, the limits for the standard boxplot rule are almost the same as those for the Hampel identifier (the
darker, dotted lines in the left-hand plot), but here the adjusted boxplot rule gives much better results, identifying both of the visually evident downward outliers and declaring far fewer points as
upward outliers.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><span style="font-family: 'Times New Roman'; font-size: 12pt; mso-ansi-language: EN-US; mso-bidi-language: AR-SA;
mso-fareast-font-family: 'Times New Roman'; mso-fareast-language: EN-US;">The primary point of this post has been to describe and demonstrate the outlier detection methods to be included in the
<strong>FindOutliers</strong> procedure in the forthcoming <strong>ExploringData</strong> <em>R</em> package.<span style="mso-spacerun: yes;"> </span>It should be clear from these results that,
when it comes to outlier detection, “one size does not fit all” – method matters, and the choice of method requires a comparison of the results obtained by each one.<span style="mso-spacerun: yes;">&
nbsp; </span>I have not included the code for the <strong>FindOutliers</strong> procedure here, but that will be the subject of my next post.</span>Ron Pearson (aka TheNoodleDoodler)<div 
class="MsoNormal" style="margin: 0in 0in 0pt;">The October 2012 issue of <i style="mso-bidi-font-style: normal;">Harvard Business Review</i> prominently features the words “Getting Control of Big
Data” on the cover, and the magazine includes these three related articles:</div><o:p><blockquote class="tr_bq"><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><ol style="margin-top:
0in;" type="1"><li class="MsoNormal" style="margin: 0in 0in 0pt; mso-list: l4 level1 lfo1; tab-stops: list .5in;">“Big Data: The Management Revolution,” by Andrew McAfee and Erik Brynjolfsson, pages
61 – 68;</li><li class="MsoNormal" style="margin: 0in 0in 0pt; mso-list: l4 level1 lfo1; tab-stops: list .5in;">“Data Scientist: The Sexiest Job of the 21<sup>st</sup> Century,” by Thomas H.
Davenport and D.J. Patil, pages 70 – 76;</li><li class="MsoNormal" style="margin: 0in 0in 0pt; mso-list: l4 level1 lfo1; tab-stops: list .5in;">“Making Advanced Analytics Work For You,” by Dominic
Barton and David Court, pages 79 – 83.</li></ol><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div></blockquote>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"></div></o:p>All three provide food for thought; this post presents a brief summary of some of those thoughts. <div class="MsoNormal" style="margin:
0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt; text-indent: 0.5in;">One point made in the first article is that the “size” of a dataset – i.e., what constitutes “Big
Data” – can be measured in at least three very different ways: volume, velocity, and variety.<span style="mso-spacerun: yes;"> </span>Each of these aspects affects the data characterization problem differently:</div><blockquote class="tr_bq"><div class="MsoNormal" style="margin: 0in 0in 0pt; text-indent: 0.5in;"><br /></div><div class="MsoNormal" style="margin: 0in 0in
0pt 78pt; mso-list: l1 level1 lfo4; tab-stops: list 78.0pt; text-indent: -0.25in;"><span style="font-family: Symbol; mso-bidi-font-family: Symbol; mso-fareast-font-family: Symbol;"><span style=
"mso-list: Ignore;">·<span style="font: 7pt 'Times New Roman';"> </span></span></span>For very large data volumes, one fundamental issue is the
incomprehensibility of the raw data itself.<span style="mso-spacerun: yes;"> </span>Even if you could display a data table with several million, billion, or trillion rows and hundreds or
thousands of columns, making any sense of this display would be a hopeless task.<span style="mso-spacerun: yes;"> </span></div><div class="MsoNormal" style="margin: 0in 0in 0pt 78pt; mso-list:
l1 level1 lfo4; tab-stops: list 78.0pt; text-indent: -0.25in;"><span style="font-family: Symbol; mso-bidi-font-family: Symbol; mso-fareast-font-family: Symbol;"><span style="mso-list: Ignore;">·<span
style="font: 7pt 'Times New Roman';"> </span></span></span>For high velocity datasets – e.g., real-time, Internet-based data sources – the data volume is
determined by the observation time: at a fixed rate, the longer you observe, the more you collect.<span style="mso-spacerun: yes;"> </span>If you are attempting to generate a real-time
characterization that keeps up with this input data rate, you face a fundamental trade-off between exploiting richer datasets acquired over longer observation periods, and the longer computation
times required to process those datasets, making you less likely to keep up with the input data rate.<span style="mso-spacerun: yes;"> </span></div><div class="MsoNormal" style="margin: 0in 0in
0pt 78pt; mso-list: l1 level1 lfo4; tab-stops: list 78.0pt; text-indent: -0.25in;"><span style="font-family: Symbol; mso-bidi-font-family: Symbol; mso-fareast-font-family: Symbol;"><span style=
"mso-list: Ignore;">·<span style="font: 7pt 'Times New Roman';"> </span></span></span>For high-variety datasets, a key challenge lies in finding useful ways
to combine very different data sources into something amenable to a common analysis (e.g., combining images, text, and numerical data into a single joint analysis framework).</div><div class=
"MsoNormal" style="margin: 0in 0in 0pt;"><br /></div></blockquote><div class="MsoNormal" style="margin: 0in 0in 0pt; text-indent: 0.5in;">One practical corollary to these observations is the need for
a computer-based data reduction process or “data funnel” that matches the volume, velocity, and/or variety of the original data sources with the ultimate needs of the organization.<span style=
"mso-spacerun: yes;"> </span>In large organizations, this data funnel generally involves a mix of different technologies and people.<span style="mso-spacerun: yes;"> </span>While it is
not a complete characterization, some of these differences are evident from the primary software platforms used in the different stages of this data funnel: languages like HTML for dealing with
web-based data sources; typically, some variant of SQL for dealing with large databases; a package like R for complex quantitative analysis; and often something like Microsoft Word, Excel, or PowerPoint for delivering the final results.<span style="mso-spacerun: yes;"> </span>In addition, to help coordinate some of these tasks, there are likely to be scripts, written either in the shell of an operating system like UNIX or in a platform-independent scripting language like Perl or Python.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;
text-indent: 0.5in;">An important point omitted from all three articles is that there are at least two distinct application areas for Big Data:</div><blockquote class="tr_bq"><div class="MsoNormal"
style="margin: 0in 0in 0pt; text-indent: 0.5in;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt 1in; mso-list: l2 level1 lfo5; tab-stops: list 1.0in; text-indent: -0.25in;"><span style
="mso-list: Ignore;">1.<span style="font: 7pt 'Times New Roman';"> </span></span>The class of “production applications,” which were discussed in these articles and
illustrated with examples like the un-named U.S. airline described by McAfee and Brynjolfsson that adopted a vendor-supplied procedure to obtain better estimates of flight arrival times, improving
their ability to schedule ground crews and saving several million dollars per year at each airport.<span style="mso-spacerun: yes;"> </span>Similarly, the article by Barton and Court described
a shipping company (again, un-named) that used real-time weather forecast data and shipping port status data, developing an automated system to improve the on-time performance of its fleet.<span
style="mso-spacerun: yes;"> </span>Examples like these describe automated systems put in place to continuously exploit a large but fixed data source.<span style="mso-spacerun: yes;"> </
span></div><div class="MsoNormal" style="margin: 0in 0in 0pt 1in; mso-list: l2 level1 lfo5; tab-stops: list 1.0in; text-indent: -0.25in;"><span style="mso-list: Ignore;">2.<span style="font: 7pt
'Times New Roman';"> </span></span>The exploitation of Big Data for “one-off” analyses: a question is posed, and the data science team scrambles to find an answer.<span
style="mso-spacerun: yes;"> </span>This use is not represented by any of the examples described in these articles.<span style="mso-spacerun: yes;"> </span>In fact, this second type of
application overlaps a lot with the development process required to create a production application, although the end results are very different.<span style="mso-spacerun: yes;"> </span>In
particular, the end result of a one-off analysis is a single set of results, ultimately summarized to address the question originally posed.<span style="mso-spacerun: yes;"> </span>In contrast,
a production application requires continuing support and often has to meet challenging interface requirements between the IT systems that collect and preprocess the Big Data sources and those that
are already in use by the end-users of the tool (e.g., a Hadoop cluster running in a UNIX environment versus periodic reports generated either automatically or on demand from a Microsoft Access
database of summary information).</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div></blockquote><div class="MsoNormal" style="margin: 0in 0in 0pt; text-indent: 0.5in;">A key point
of Davenport and Patil’s article is that data science involves more than just the analysis of data: it is also necessary to identify data sources, acquire what is needed from them, restructure the
results into a form amenable to analysis, clean them up, and, in the end, present the analytical results in a usable form.<span style="mso-spacerun: yes;"> </span>In fact, the subtitle of their
article is “Meet the people who can coax treasure out of messy, unstructured data,” and this statement forms the core of the article’s working definition for the term “data scientist.” (The authors
indicate that the term was coined in 2008 by D.J. Patil, who holds a position with that title at Greylock Partners.)<span style="mso-spacerun: yes;"> </span>Also, two particularly interesting
tidbits from this article were the authors’ suggestion that a good place to find data scientists is at R User Groups, and their description of R as “an open-source statistical tool favored by data
scientists.”</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt; text-indent: 0.5in;">Davenport and Patil emphasize the difference between structured and unstructured data, a distinction especially relevant to the R community, since most of R’s procedures are designed to work
with the structured data types discussed in Chapter 2 of <a href="http://www.amazon.com/Exploring-Data-Engineering-Sciences-Medicine/dp/0195089650">Exploring Data in Engineering, the Sciences and
Medicine</a>: continuous, integer, nominal, ordinal, and binary.<span style="mso-spacerun: yes;"> </span>More specifically, note that these variable types can all be included in dataframes, the
data object type that is best supported by R’s vast and expanding collection of add-on packages.<span style="mso-spacerun: yes;"> </span>Certainly, there is some support for other data types,
and the level of this support is growing – the <b style="mso-bidi-font-weight: normal;">tm</b> package and a variety of other related packages support the analysis of text data, the <b style=
"mso-bidi-font-weight: normal;">twitteR</b> package provides support for analyzing Twitter tweets, and the <b style="mso-bidi-font-weight: normal;">scrapeR</b> package supports web scraping – but the
acquisition and reformatting of unstructured data sources is not R’s primary strength.<span style="mso-spacerun: yes;"> </span>Yet it is a key component of data science, as Davenport and Patil emphasize:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><blockquote class="tr_bq"><div class="MsoNormal" style=
"margin: 0in 0in 0pt;">“A quantitative analyst can be great at analyzing data but not at subduing a mass of unstructured data and getting it into a form in which it can be analyzed.<span style=
"mso-spacerun: yes;"> </span>A data management expert might be great at generating and organizing data in structured form but not at turning unstructured data into structured data – and also
not at actually analyzing the data.”</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div></blockquote><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class=
"MsoNormal" style="margin: 0in 0in 0pt;">To better understand the distinction between the quantitative analyst and the data scientist implied by this quote, consider mathematician George Polya’s
book, <a href="http://www.amazon.com/How-Solve-Aspect-Mathematical-Method/dp/4871878309/ref=sr_1_1?s=books&ie=UTF8&qid=1352581828&sr=1-1&keywords=polya+how+to+solve+it#_">How To Solve
It</a>.<span style="mso-spacerun: yes;"> </span>Originally published in 1945 and most recently re-issued in 2009, 24 years after the author’s death, this book is a very useful guide to
solving math problems.<span style="mso-spacerun: yes;"> </span>Polya’s basic approach consists of these four steps:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div>
<blockquote class="tr_bq"><div class="MsoNormal" style="margin: 0in 0in 0pt;"></div><ol style="margin-top: 0in;" type="1"><li class="MsoNormal" style="margin: 0in 0in 0pt; mso-list: l0 level1 lfo2;
tab-stops: list .5in;">Understand the problem;</li><li class="MsoNormal" style="margin: 0in 0in 0pt; mso-list: l0 level1 lfo2; tab-stops: list .5in;">Formulate a plan for solving the problem;</li><li
class="MsoNormal" style="margin: 0in 0in 0pt; mso-list: l0 level1 lfo2; tab-stops: list .5in;">Carry out this plan;</li><li class="MsoNormal" style="margin: 0in 0in 0pt; mso-list: l0 level1 lfo2;
tab-stops: list .5in;">Check the results.</li></ol><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div></blockquote><div class="MsoNormal" style="margin: 0in 0in 0pt;"></div>It is
important to note what is <i style="mso-bidi-font-style: normal;">not</i> included in the scope of Polya’s four steps: Step 1 assumes a problem has been stated precisely, and Step 4 assumes the final
result is well-defined, verifiable, and requires no further explanation.<span style="mso-spacerun: yes;"> </span>While quantitative analysis problems are generally neither as precisely
formulated as Polya’s method assumes, nor as clear in their ultimate objective, the class of “quantitative analyst” problems that Davenport and Patil assume in the previous quote corresponds very roughly to problems of this type.<span style="mso-spacerun: yes;"> </span>They begin with something like an R dataframe and a
reasonably clear idea of what analytical results are desired; they end by summarizing the problem and presenting the results.<span style="mso-spacerun: yes;"> </span>In contrast, the class of
“data scientist” problems implied in Davenport and Patil’s quote comprises an expanded set of steps: <div class="MsoNormal" style=
"margin: 0in 0in 0pt;"><br /></div><blockquote class="tr_bq"><div class="MsoNormal" style="margin: 0in 0in 0pt;"></div><ol style="margin-top: 0in;" type="1"><li class="MsoNormal" style="margin: 0in
0in 0pt; mso-list: l3 level1 lfo3; tab-stops: list .5in;">Formulate the analytical problem: decide what kinds of questions could and should be asked in a way that is likely to yield useful,
quantitative answers;</li><li class="MsoNormal" style="margin: 0in 0in 0pt; mso-list: l3 level1 lfo3; tab-stops: list .5in;">Identify and evaluate potential data sources: what is available in-house,
from the Internet, from vendors?<span style="mso-spacerun: yes;"> </span>How complete are these data sources?<span style="mso-spacerun: yes;"> </span>What would it cost to use them?<span
style="mso-spacerun: yes;"> </span>Are there significant constraints on how they can be used?<span style="mso-spacerun: yes;"> </span>Are some of these data sources strongly incompatible?
<span style="mso-spacerun: yes;"> </span>If so, does it make sense to try to merge them approximately, or is it more reasonable to omit some of them?</li><li class="MsoNormal" style="margin:
0in 0in 0pt; mso-list: l3 level1 lfo3; tab-stops: list .5in;">Acquire the data and transform it into a form that is useful for analysis; note that for sufficiently large data collections, part of
this data will almost certainly be stored in some form of relational database, probably administered by others, and extracting what is needed for analysis will likely involve writing SQL queries
against this database;</li><li class="MsoNormal" style="margin: 0in 0in 0pt; mso-list: l3 level1 lfo3; tab-stops: list .5in;">Once the relevant collection of data has been acquired and prepared,
examine the results carefully to make sure it meets analytical expectations: do the formats look right?<span style="mso-spacerun: yes;"> </span>Are the ranges consistent with expectations?<span
style="mso-spacerun: yes;"> </span>Do the relationships seen between key variables seem to make sense?</li><li class="MsoNormal" style="margin: 0in 0in 0pt; mso-list: l3 level1 lfo3; tab-stops:
list .5in;">Do the analysis: by lumping all of the steps of data analysis into this simple statement, I am not attempting to minimize the effort involved, but rather emphasizing the other aspects of
the Big Data analysis problem;</li><li class="MsoNormal" style="margin: 0in 0in 0pt; mso-list: l3 level1 lfo3; tab-stops: list .5in;">After the analysis is complete, develop a concise summary of the
results that clearly and succinctly states the motivating problem, highlights what has been assumed, what has been neglected and why, and gives the simplest useful summary of the data analysis
results.<span style="mso-spacerun: yes;"> </span>(Note that this will often involve several different summaries, with different levels of detail and/or emphases, intended for different
audiences.)</li></ol><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div></blockquote><div class="MsoNormal" style="margin: 0in 0in 0pt;"></div>Here, Steps 1 and 6 necessarily involve
close interaction with the end users of the data analysis results, and they lie mostly outside the domain of R.<span style="mso-spacerun: yes;"> </span>(Conversely, knowing what is available in
R can be extremely useful in formulating analytical problems that are reasonable to solve, and the graphical procedures available in R can be extremely useful in putting together meaningful summaries
of the results.)<span style="mso-spacerun: yes;"> </span>The primary domain of R is Step 5: given a dataframe containing what are believed to be the relevant variables, we generate, validate,
and refine the analytical results that will form the basis for the summary in Step 6.<span style="mso-spacerun: yes;"> </span>Part of Step 4 also lies clearly within the domain of R: examining
the data once it has been acquired to make sure it meets expectations.<span style="mso-spacerun: yes;"> </span>In particular, once we have a dataset or a collection of datasets that can be
converted easily into one or more R dataframes (e.g., csv files or possibly relational databases), a preliminary look at the data is greatly facilitated by the vast array of R procedures available
for graphical characterizations (e.g., nonparametric density estimates, quantile-quantile plots, boxplots and variants like beanplots or bagplots, and much more); for constructing simple descriptive
statistics (e.g., means, medians, and quantiles for numerical variables, tabulations of level counts for categorical variables, etc.); and for preliminary multivariate characterizations (e.g.,
scatter plots, classical and robust covariance ellipses, classical and robust principal component plots, etc.).<span style="mso-spacerun: yes;"> </span><div class="MsoNormal" style=
"margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt; text-indent: 0.5in;">The rest of this post discusses those parts of Steps 2, 3, and 4 above that fall outside the
domain of R.<span style="mso-spacerun: yes;"> </span>First, however, I have two observations.<span style="mso-spacerun: yes;"> </span>My first observation is that because R is evolving
fairly rapidly, some tasks which are “outside the domain of R” today may very well move “inside the domain of R” in the near future.<span style="mso-spacerun: yes;"> </span>The packages
<strong>twitteR</strong> and <strong>scrapeR</strong>, mentioned earlier, are cases in point, as are the continued improvements in packages that simplify the use of R with databases.<span style=
"mso-spacerun: yes;"> </span>My second observation is that, just because something is possible within a particular software environment doesn’t make it a good idea.<span style="mso-spacerun:
yes;"> </span>A number of years ago, I attended a student talk given at an industry/university consortium.<span style="mso-spacerun: yes;"> </span>The speaker set up and solved a simple
linear program (i.e., he implemented the <a href="http://en.wikipedia.org/wiki/Simplex_algorithm">simplex algorithm</a> to solve a simple linear optimization problem with linear constraints) using an
industrial programmable controller.<span style="mso-spacerun: yes;"> </span>At the time, programming those controllers was done via <a href="http://en.wikipedia.org/wiki/Ladder_logic">relay
ladder logic</a>, a diagrammatic approach used by electricians to configure complicated electrical wiring systems.<span style="mso-spacerun: yes;"> </span>I left the talk impressed by the
student’s skill, creativity and persistence, but I felt his efforts were extremely misguided.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style=
"margin: 0in 0in 0pt; text-indent: 0.5in;">Although it does not address every aspect of the “extra-R” components of Steps 2, 3, and 4 defined above – indeed, some of these aspects are so
application-specific that no single book could possibly cover them all – Paul Murrell’s book <a href="http://www.amazon.com/Introduction-Technologies-Chapman-Computer-Analysis/dp/1420065173">Introduction to Data
Technologies</a> provides an excellent introduction to many of them.<span style="mso-spacerun: yes;"> </span>(This book is also available as a free <a href="http://www.stat.auckland.ac.nz/~paul
/ItDT/itdt-2012-07-29.pdf">PDF file</a> under a Creative Commons license.)<span style="mso-spacerun: yes;"> </span>A point made in the book’s preface mirrors one in Davenport and Patil’s article:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><blockquote class="tr_bq"><div class="MsoNormal" style="margin:
0in 0in 0pt;">“Data sets never pop into existence in a fully mature and reliable state; they must be cleaned and massaged into an appropriate form.<span style="mso-spacerun: yes;"> </span>Just
getting the data ready for analysis often represents a significant component of a research project.”</div></blockquote><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class=
"MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>Since Murrell is the developer of R’s grid graphics system that I have discussed in previous posts, it is no
surprise that his book has an R-centric data analysis focus, but the book’s main emphasis is on the tasks of getting data from the outside world – specifically, from the Internet – into a dataframe
suitable for analysis in R.<span style="mso-spacerun: yes;"> </span>Murrell therefore gives detailed treatments of topics like HTML and Cascading Style Sheets (CSS) for working with Internet
web pages; XML for storing and sharing data; and relational databases and their associated query language SQL for efficiently organizing data collections with complex structures.<span style=
"mso-spacerun: yes;"> </span>Murrell states in his preface that these are things researchers – the target audience of the book – typically aren’t taught, but pick up in bits and pieces as they
go along. <span style="mso-spacerun: yes;"> </span>He adds:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><blockquote class="tr_bq"><div class="MsoNormal" style="margin:
0in 0in 0pt;"></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-tab-count: 1;"> </span>“A great deal of
information on these topics already exists in books and on the internet; the value of this book is in collecting only the important subset of this information that is necessary to begin applying
these technologies within a research setting.”</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div></blockquote><div class="MsoNormal" style="margin: 0in 0in 0pt;"></div>My one
quibble with Murrell’s book is that he gives Python only a passing mention.<span style="mso-spacerun: yes;"> </span>While I greatly prefer R to Python for data analysis, I have found Python to
be more suitable than R for a variety of extra-analytical tasks, including preliminary explorations of the contents of weakly structured data sources, as well as certain important reformatting and
preprocessing tasks.<span style="mso-spacerun: yes;"> </span>Like R, <a href="http://www.python.org/">Python</a> is an open-source language, freely available for a wide variety of computing
environments.<span style="mso-spacerun: yes;"> </span>Also like R, Python has numerous add-on packages that support an enormous variety of computational tasks (over 25,000 at this writing).
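As a small, entirely hypothetical illustration of the kind of preliminary exploration of weakly structured data just mentioned, a few lines of Python can profile a text source whose delimiter and field structure are unknown; the function name and candidate delimiters here are my own inventions, not part of any particular package:

```python
# Hypothetical sketch: tally how many fields each line of a weakly structured
# text source yields under several candidate delimiters.  A delimiter that
# splits most lines into the same number of fields is a good guess for the
# file's real format.  Names and delimiter choices are illustrative only.
from collections import Counter

def profile_delimiters(lines, delimiters=(",", "\t", "|", ";")):
    """Return, for each candidate delimiter, a Counter of field counts per line."""
    profile = {}
    for d in delimiters:
        profile[d] = Counter(len(line.rstrip("\n").split(d)) for line in lines)
    return profile
```

Applied to the first few thousand lines of a file, a summary like this quickly reveals whether a source is consistently delimited or needs heavier preprocessing before it can become an R dataframe.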
<span style="mso-spacerun: yes;"> </span>In my day job in a SAS-centric environment, I commonly face tasks like the following: I need to create several nearly-identical SAS batch jobs, each to
read a different SAS dataset that is selected on the basis of information contained in the file name; submit these jobs, each of which creates a CSV file; harvest and merge the resulting CSV files;
run an R batch job to read this combined CSV file and perform computations on its contents.<span style="mso-spacerun: yes;"> </span>I can do all of these things with a Python script, which also
provides a detailed recipe of what I have done, so when I have to modify the procedure slightly and run it again six months later, I can quickly re-construct what I did before.<span style=
"mso-spacerun: yes;"> </span>I have found Python to be better suited than R to tasks that involve a combination of automatically generating simple programs in another language, data file
management, text processing, simple data manipulation, and batch job scheduling. <div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">
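The flavor of such a glue script can be sketched in a few lines of Python. Everything here is a deliberately simplified stand-in: the job template, file names, and merge logic are illustrative inventions, not my actual SAS and R production scripts:

```python
# Hypothetical sketch of the glue script described above: generate one batch
# job per input dataset, then harvest and merge the CSV files those jobs
# produce.  The job template and file-naming conventions are invented stand-ins
# for the real SAS programs referred to in the text.
import csv
import subprocess

JOB_TEMPLATE = "process_dataset('{name}');"   # stand-in for a generated SAS program

def write_jobs(dataset_paths):
    """Create one nearly identical batch job file per input dataset."""
    job_paths = []
    for path in dataset_paths:
        job_path = path + ".job"
        with open(job_path, "w") as f:
            f.write(JOB_TEMPLATE.format(name=path))
        job_paths.append(job_path)
    return job_paths

def merge_csv_files(csv_paths, out_path):
    """Concatenate CSV files sharing a common header into one combined file."""
    header_written = False
    with open(out_path, "w", newline="") as out:
        writer = csv.writer(out)
        for path in sorted(csv_paths):
            with open(path, newline="") as f:
                rows = list(csv.reader(f))
            if not rows:
                continue
            if not header_written:
                writer.writerow(rows[0])   # keep the first header only
                header_written = True
            writer.writerows(rows[1:])     # append the data rows
    return out_path

# A driver would then hand the combined file to an R batch job, e.g.:
# subprocess.run(["R", "CMD", "BATCH", "analyze.R"], check=True)
```

The script itself then doubles as the detailed recipe mentioned above: re-running or modifying the workflow six months later starts from a complete, executable record of what was done.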
Despite my Python quibble, Murrell’s book represents an excellent first step toward filling the knowledge gap that Davenport and Patil
note between quantitative analysts and data scientists; in fact, it is the only book I know addressing this gap.<span style="mso-spacerun: yes;"> </span>If you are an R aficionado interested in
positioning yourself for “the sexiest job of the 21<sup>st</sup> century,” Murrell’s book is an excellent place to start.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div>Ron
Pearson (aka TheNoodleDoodler)
2012-10-27<div class="MsoNormal" style=
"margin: 0in 0in 0pt;">In my last post, I promised a further examination of the spacing measures I described there, and I still promise to do that, but I am changing the order of topics slightly.
<span style="mso-spacerun: yes;"> </span>So, instead of spacing measures, today’s post is about the <strong>DataframeSummary</strong> procedure to be included in the <strong>ExploringData</
strong> package, which I also mentioned in my last post and promised to describe later.<span style="mso-spacerun: yes;"> </span>My next post will be a special one on Big Data and Data Science,
followed by another one about the <strong>DataframeSummary</strong> procedure (additional features of the procedure and the code used to implement it), after which I will come back to the spacing
measures I discussed last time.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">A task that arises frequently in exploratory
data analysis is the initial characterization of a new dataset.<span style="mso-spacerun: yes;"> </span>Ideally, everything we could want to know about a dataset <em>should</em> come from the
accompanying metadata, but this is rarely the case.<span style="mso-spacerun: yes;"> </span>As I discuss in Chapter 2 of <a href="http://www.amazon.com/
Exploring-Data-Engineering-Sciences-Medicine/dp/0195089650">Exploring Data in Engineering, the Sciences, and Medicine</a>, <em>metadata</em> is the available “data about data” that (usually)
accompanies a data source.<span style="mso-spacerun: yes;"> </span>In practice, however, the available metadata is almost never as complete as we would like, and it is sometimes wrong in
important respects.<span style="mso-spacerun: yes;"> </span>This is particularly the case when numeric codes are used for missing data, without accompanying notes describing the coding.<span
style="mso-spacerun: yes;"> </span>An example, illustrating the consequent problem of <i style="mso-bidi-font-style: normal;">disguised missing data</i> is described in my paper <a href="http:/
/www.sigkdd.org/explorations/issues/8-1-2006-06/12-Pearson.pdf">The Problem of Disguised Missing Data</a>.<span style="mso-spacerun: yes;"> </span>(It should be noted that the original source
of one of the problems described there – a comment in the UCI Machine Learning Repository header file for the Pima Indians diabetes dataset that there were no missing data records – has since been <a
href="http://archive.ics.uci.edu/ml/datasets/Pima+Indians+Diabetes">corrected.)</a></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in
0pt;">Once we have converted our data source into an <em>R</em> data frame (e.g., via the <strong>read.csv</strong> function for an external csv file), there are a number of useful tools to help us
begin this characterization process.<span style="mso-spacerun: yes;"> </span>Probably the most general is the <strong>str</strong> command, applicable to essentially any <em>R</em> object.<span
style="mso-spacerun: yes;"> </span>Applied to a dataframe, this command first tells us that the object <i style="mso-bidi-font-style: normal;">is </i>a dataframe, second, gives us the
dimensions of the dataframe, and third, presents a brief summary of its contents, including the variable names, their type (specifically, the results of R’s <strong>class</strong> function), and
the values of their first few observations.<span style="mso-spacerun: yes;"> </span>As a specific example, if we apply this command to the <strong>rent</strong> dataset from the <strong>gamlss
</strong> package, we obtain the following summary:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><blockquote class="tr_bq"><div class="MsoNormal" style="margin: 0in 0in 0pt;">
> str(rent)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">'data.frame': 1969 obs. of 9 variables:</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">$ R  : num 693 422 737 732 1295 ...</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">$ Fl : num 50 54 70 50 55 59 46 94 93 65 ...</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">$ A  : num 1972 1972 1972 1972 1893 ...</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">$ Sp : num 0 0 0 0 0 0 0 0 0 0 ...</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">$ Sm : num 0 0 0 0 0 0 0 0 0 0 ...</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">$ B  : Factor w/ 2 levels "0","1": 1 1 1 1 1 1 1 1 1 1 ...</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">$ H  : Factor w/ 2 levels "0","1": 1 1 1 1 1 1 2 1 1 1 ...</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">$ L  : Factor w/ 2 levels "0","1": 1 1 1 1 1 1 1 1 1 1 ...</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">$ loc: Factor w/ 3 levels "1","2","3": 2 2 2 2 2 2 2 2 2 2 ...</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">></div></blockquote><div class="MsoNormal" style="margin: 0in 0in
0pt;">This dataset summarizes a 1993 random sample of housing rental prices in <st1:place w:st="on"><st1:city w:st="on">Munich</st1:city></st1:place>, including a number of important characteristics
about each one (e.g., year of construction, floor space in square meters, etc.).<span style="mso-spacerun: yes;"> </span>A more detailed description can be obtained via the command “<strong>
help(rent)</strong>”.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">The <strong>head</strong> command provides similar
information to the <strong>str</strong> command, in slightly less detail (e.g., it doesn’t give us the variable types), but in a format that some will find more natural:</div><div class="MsoNormal"
style="margin: 0in 0in 0pt;"><br /></div><blockquote class="tr_bq"><div class="MsoNormal" style="margin: 0in 0in 0pt;">> head(rent)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span
style="mso-spacerun: yes;"> </span>R Fl<span style="mso-spacerun: yes;"> </span>A Sp Sm B H L loc</div><div class="MsoNormal" style="margin: 0in
0in 0pt;">1<span style="mso-spacerun: yes;"> </span>693.3 50 1972<span style="mso-spacerun: yes;"> </span>0<span style="mso-spacerun: yes;"> </span>0 0 0 0<span style="mso-spacerun:
yes;"> </span>2</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">2<span style="mso-spacerun: yes;"> </span>422.0 54 1972<span style="mso-spacerun: yes;"> </span>0<span
style="mso-spacerun: yes;"> </span>0 0 0 0<span style="mso-spacerun: yes;"> </span>2</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">3<span style="mso-spacerun: yes;">&
nbsp; </span>736.6 70 1972<span style="mso-spacerun: yes;"> </span>0<span style="mso-spacerun: yes;"> </span>0 0 0 0<span style="mso-spacerun: yes;"> </span>2</div><div class=
"MsoNormal" style="margin: 0in 0in 0pt;">4<span style="mso-spacerun: yes;"> </span>732.2 50 1972<span style="mso-spacerun: yes;"> </span>0<span style="mso-spacerun: yes;"> </span>0
0 0 0<span style="mso-spacerun: yes;"> </span>2</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">5 1295.1 55 1893<span style="mso-spacerun: yes;"> </span>0<span style=
"mso-spacerun: yes;"> </span>0 0 0 0<span style="mso-spacerun: yes;"> </span>2</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">6 1195.9 59 1893<span style="mso-spacerun:
yes;"> </span>0<span style="mso-spacerun: yes;"> </span>0 0 0 0<span style="mso-spacerun: yes;"> </span>2</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">><o:p>&
nbsp;</o:p></div></blockquote><div class="MsoNormal" style="margin: 0in 0in 0pt;"><o:p> (An important difference between these representations is that <strong>str</strong> characterizes factor
variables by their level <em>number</em> and not their level <em>value:</em> thus the first few observations of the factor B assume the first level of the factor, which is the value 0. As a
consequence, while it may appear that <strong>str</strong> is telling us that the first few records list the value 1 for the variable B while <strong>head</strong> is indicating a zero, this is not
the case. This is one reason that data analysts may prefer the <strong>head</strong> characterization.)</o:p></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class=
"MsoNormal" style="margin: 0in 0in 0pt;">While the <em>R</em> data types for each variable can be useful to know – particularly in cases where they aren’t what we expect them to be, as when integers
are coded as factors – this characterization doesn’t really tell us the whole story.<span style="mso-spacerun: yes;"> </span>In particular, note that <em>R</em> has commands like “<strong>
as.character</strong>” and “<strong>as.factor</strong>” that can easily convert numeric variables to character or factor data types.<span style="mso-spacerun: yes;"> </span>Even beyond this,
the range of inherent behaviors that numerically-coded data can exhibit cannot be fully described by a simple data type designation.<span style="mso-spacerun: yes;"> </span>As a specific
example, one of the variables in the <strong>rent</strong> dataframe is “A,” described in the metadata available from the help command as “year of construction.”<span style="mso-spacerun: yes;">&
nbsp; </span>While this variable is coded as type “numeric,” in fact it takes integer values from 1890 to 1988, with some values in this range repeated many times and others absent.<span style=
"mso-spacerun: yes;"> </span>This point is important, since analysis tools designed for continuous variables – especially outlier-resistant ones like medians and other rank-based methods –
sometimes perform poorly in the face of data sequences with many repeated values (i.e., “ties,” which have zero probability for continuous data distributions).<span style="mso-spacerun: yes;">
</span>In extreme cases, these techniques may fail completely, as in the case of the MADM scale estimate, discussed in Chapter 7 of <em>Exploring Data</em>.<span style="mso-spacerun: yes;"> </span>This scale estimate <em>implodes</em> if more than 50% of the data values are identical, returning the useless value zero in that case, regardless of the values of all of the other data points.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">These observations motivate the <strong>DataframeSummary</strong>
procedure described here, to be included in the <strong>ExploringData</strong> package.<span style="mso-spacerun: yes;"> </span>This function is called with the name of the dataframe to be
characterized and an optional parameter <strong>Option</strong>, which can take any one of the following four values:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><ol style=
"margin-top: 0in;" type="1"><li class="MsoNormal" style="margin: 0in 0in 0pt; mso-list: l2 level1 lfo1; tab-stops: list .5in;">“Brief” (the default value)</li><li class="MsoNormal" style="margin: 0in
0in 0pt; mso-list: l2 level1 lfo1; tab-stops: list .5in;">“NumericOnly”</li><li class="MsoNormal" style="margin: 0in 0in 0pt; mso-list: l2 level1 lfo1; tab-stops: list .5in;">“FactorOnly”</li><li
class="MsoNormal" style="margin: 0in 0in 0pt; mso-list: l2 level1 lfo1; tab-stops: list .5in;">“AllAsFactor”</li></ol><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class=
"MsoNormal" style="margin: 0in 0in 0pt;">In all cases, this function returns a summary dataframe with one row for each column in the dataframe to be characterized.<span style="mso-spacerun: yes;">&
nbsp; </span>Like the <strong>str</strong> command, these results include the name of each variable and its type.<span style="mso-spacerun: yes;"> </span>Under the default option “Brief,” this
function also returns the following characteristics for each variable:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><ul style="margin-top: 0in;" type="disc"><li class=
"MsoNormal" style="margin: 0in 0in 0pt; mso-list: l1 level1 lfo2; tab-stops: list .5in;">Levels = the number of distinct values the variable exhibits;</li><li class="MsoNormal" style="margin: 0in 0in
0pt; mso-list: l1 level1 lfo2; tab-stops: list .5in;">AvgFreq = the average number of records listing each value;</li><li class="MsoNormal" style="margin: 0in 0in 0pt; mso-list: l1 level1 lfo2;
tab-stops: list .5in;">TopLevel = the most frequently occurring value;</li><li class="MsoNormal" style="margin: 0in 0in 0pt; mso-list: l1 level1 lfo2; tab-stops: list .5in;">TopFreq = the number of
records listing this most frequent value;</li><li class="MsoNormal" style="margin: 0in 0in 0pt; mso-list: l1 level1 lfo2; tab-stops: list .5in;">TopPct = the percentage of records listing this most
frequent value;</li><li class="MsoNormal" style="margin: 0in 0in 0pt; mso-list: l1 level1 lfo2; tab-stops: list .5in;">MissFreq = the number of missing or blank records;</li><li class="MsoNormal"
style="margin: 0in 0in 0pt; mso-list: l1 level1 lfo2; tab-stops: list .5in;">MissPct = the percentage of missing or blank records.</li></ul><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br />
</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">For the <strong>rent</strong> dataframe, this function (under the default “Brief” option) gives the following summary:</div><div class=
"MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><blockquote class="tr_bq">
<div class="MsoNormal" style="margin: 0in 0in 0pt;">> DataframeSummary(rent)</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">  Variable    Type Levels AvgFreq TopLevel TopFreq TopPct MissFreq MissPct</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">3        A numeric     73   26.97     1957     551  27.98        0       0</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">6        B  factor      2  984.50        0    1925  97.77        0       0</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">2       Fl numeric     91   21.64       60      71   3.61        0       0</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">7        H  factor      2  984.50        0    1580  80.24        0       0</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">8        L  factor      2  984.50        0    1808  91.82        0       0</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">9      loc  factor      3  656.33        2    1247  63.33        0       0</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">1        R numeric   1762    1.12      900       7   0.36        0       0</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">5       Sm numeric      2  984.50        0    1797  91.26        0       0</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">4       Sp numeric      2  984.50        0    1419  72.07        0       0</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">><o:p> </o:p></div></blockquote><div class="MsoNormal" style=
"margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">The variable names and types appear essentially as they do in the results obtained with the <strong>str</strong>
function, and the numbers to the far left indicate the column numbers from the dataframe <strong>rent</strong> for each variable, since the variable names are listed alphabetically for convenience.
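</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">As a rough sketch of how such per-variable characteristics can be computed, consider the following (this is an illustration only, not the <strong>DataframeSummary</strong> code itself, and the helper name <strong>BriefSummary</strong> is hypothetical):</div>

```r
# Illustrative sketch (not the actual DataframeSummary implementation):
# compute the "Brief" characteristics for a single variable x
BriefSummary <- function(x) {
  tbl <- table(x)                  # frequency of each distinct non-missing value
  n <- length(x)                   # total number of records
  nObs <- sum(!is.na(x))           # records with non-missing values
  data.frame(Type     = class(x)[1],
             Levels   = length(tbl),                     # number of distinct values
             AvgFreq  = round(nObs / length(tbl), 2),    # average records per value
             TopLevel = names(tbl)[which.max(tbl)],      # most frequent value
             TopFreq  = max(tbl),                        # its frequency
             TopPct   = round(100 * max(tbl) / nObs, 2), # its percentage
             MissFreq = n - nObs,                        # missing records
             MissPct  = round(100 * (n - nObs) / n, 2))
}
```

<div class="MsoNormal" style="margin: 0in 0in 0pt;">Applying a function like this to each column of a dataframe (e.g., via <strong>lapply</strong>) and stacking the resulting rows yields a summary of the form shown above.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">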
<span style="mso-spacerun: yes;"> </span>The “Levels” column of this summary dataframe gives the number of unique values for each variable, and it is clear that this can vary widely even within
a given data type.<span style="mso-spacerun: yes;"> </span>For example, the variable “R” (monthly rent in DM) exhibits 1,762 unique values in 1,969 data observations, so its values are nearly all unique,
while the variables “Sm” and “Sp” exhibit only two possible values, even though all three of these variables are of type “numeric.”<span style="mso-spacerun: yes;"> </span>The AvgFreq column
gives the average number of times each level should appear, assuming a uniform distribution over all possible values.<span style="mso-spacerun: yes;"> </span>This number is included as a
reference value for assessing the other frequencies (i.e., TopFreq for the most frequently occurring value and MissFreq for missing data values).<span style="mso-spacerun: yes;"> </span>Thus,
for the first variable, “A,” AvgFreq is 26.97, meaning that if all 73 possible values for this variable were equally represented, each one would occur about 27 times in the dataset.<span style=
"mso-spacerun: yes;"> </span>The most frequently occurring level (TopLevel) is “1957,” which occurs 551 times, suggesting a highly nonuniform distribution of values for this variable.<span
style="mso-spacerun: yes;"> </span>In contrast, for the variable “R,” AvgFreq is 1.12, meaning that each value of this variable is almost unique.<span style="mso-spacerun: yes;"> </span>
The TopPct column gives the percentage of records in the dataset exhibiting the most frequent value for each variable, which varies from 0.36% for the numeric variable “R” to 97.77% for the factor
variable “B.”<span style="mso-spacerun: yes;"> </span>It is interesting to note that this variable is of type “factor” but is coded as 0 or 1, while the variables “Sm” and “Sp” are also binary,
coded as 0 or 1, but are of type “numeric.”<span style="mso-spacerun: yes;"> </span>This illustrates the point noted above that the <em>R</em> data type is not always as informative as we might
like it to be.<span style="mso-spacerun: yes;"> </span>(This is not a criticism of <em>R</em>, but rather a caution about the fact that, in preparing data, we are free to choose many different
representations, and the original logic behind the choice may not be obvious to all ultimate users of the data.)<span style="mso-spacerun: yes;"> </span>In addition, comparing the available
metadata for the variable “B” illustrates the point about metadata errors noted earlier: of the 1,969 data records, 1,925 have the value “0” (97.77%), while 44 have the value “1” (2.23%), but the
information returned by the help command indicates exactly the opposite proportion of values: 1,925 should have the value “1” (indicating the presence of a bathroom), while 44 should have the value
“0” (indicating the absence of a bathroom).<span style="mso-spacerun: yes;"> </span>Since the interpretation of the variables that enter any analysis is important in explaining our final
analytical results, it is useful to detect this type of mismatch between the data and the available metadata as early as possible.<span style="mso-spacerun: yes;"> Here, comparing the average
rents for records with B = 1 (DM 424.95) against those with B = 0 (DM 820.72) suggests that the levels have been reversed relative to the metadata: the relatively few housing units without bathrooms
are represented by B = 1, renting for less than the majority of those units, which have bathrooms and are represented by B = 0. </span>Finally, the last two columns of the above summary
give the number of records with missing or blank values (MissFreq) and the corresponding percentage (MissPct); here, all records are complete so these numbers are zero.</div><div class="MsoNormal"
style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">In my next post on this topic, I will present results for the other three options of the <strong>
DataframeSummary </strong>procedure, along with the code that implements it.<span style="mso-spacerun: yes;"> </span>In all cases, the results include those generated by the “Brief” option just
presented, but the difference between the other options lies first, in what additional characterizations are included, and second, in which subset of variables are included in the summary.<span style
="mso-spacerun: yes;"> </span>Specifically, for the <strong>rent</strong> dataframe, we obtain:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><ul style="margin-top: 0in;"
type="disc"><li class="MsoNormal" style="margin: 0in 0in 0pt; mso-list: l0 level1 lfo3; tab-stops: list .5in;">Under the “NumericOnly” option, a summary of the five numeric variables R, Fl, A, Sp,
and Sm results, giving characteristics that are appropriate to numeric data types, like the spacing measures described in my last post;</li><li class="MsoNormal" style="margin: 0in 0in 0pt; mso-list:
l0 level1 lfo3; tab-stops: list .5in;">Under the “FactorOnly” option, a summary of the four factor variables B, H, L, and loc results, giving measures that are appropriate to categorical data types,
like the normalized Shannon entropy measure discussed in several previous posts;</li><li class="MsoNormal" style="margin: 0in 0in 0pt; mso-list: l0 level1 lfo3; tab-stops: list .5in;">Under the
“AllAsFactor” option, all variables in the dataframe are first converted to factors and then characterized using the same measures as in the “FactorOnly” option.</li></ul><div class="MsoNormal" style
="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">The advantage of the “AllAsFactor” option is that it characterizes all variables in the dataframe, but as I
discussed in my last post, the characterization of numerical variables with measures like Shannon entropy is not always terribly useful.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">Ron Pearson (posted 2012-09-22)</div><div class="MsoNormal" style=
"margin: 0in 0in 0pt;">Numerically-coded data sequences can exhibit a very wide range of distributional characteristics, including near-Gaussian (historically, the most popular working assumption),
strongly asymmetric, light- or heavy-tailed, multi-modal, or discrete (e.g., count data).<span style="mso-spacerun: yes;"> </span>In addition, numerically coded values can be effectively
categorical, either ordered or unordered.<span style="mso-spacerun: yes;"> </span>A specific example that illustrates the range of distributional behavior often seen in a collection of
numerical variables is the <st1:city w:st="on">Boston</st1:city> housing dataframe (<st1:city w:st="on"><st1:place w:st="on"><strong>Boston</strong></st1:place></st1:city>) from the <strong>MASS</
strong> package in <em>R</em>.<span style="mso-spacerun: yes;"> </span>This dataframe includes 14 numerical variables that characterize 506 suburban housing tracts in the <st1:city w:st="on">
<st1:place w:st="on">Boston</st1:place></st1:city> area: 12 of these variables have class “numeric” and the remaining two have class “integer”.<span style="mso-spacerun: yes;"> </span>The
integer variable <strong>chas</strong> is in fact a binary flag, taking the value 1 if the tract bounds the Charles river and 0 otherwise, and the integer variable <strong>rad</strong> is described
as “an index of accessibility to radial highways,” assuming one of nine values: the integers 1 through 8, and 24.<span style="mso-spacerun: yes;"> </span>The other 12 variables assume anywhere
from 26 unique values (for the zoning variable <strong>zn</strong>) to 504 unique values (for the per capita crime rate <strong>crim</strong>). The figure below shows nonparametric density
estimates for four of these variables: the per-capita crime rate (<strong>crim</strong>, upper left plot), the percentage of the population designated “lower status” by the researchers who provided
the data (<strong>lstat</strong>, upper right plot), the average number of rooms per dwelling (<strong>rm</strong>, lower left plot), and the zoning variable (<strong>zn</strong>, lower right plot).
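</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">(The distinct-value counts quoted above are easy to check directly; this snippet assumes only that the <strong>MASS</strong> package, which ships with <em>R</em>, is available:)</div>

```r
# Count the distinct values exhibited by each variable in the Boston dataframe
library(MASS)          # provides the Boston housing dataframe
uniqueCounts <- sapply(Boston, function(x) length(unique(x)))
uniqueCounts["chas"]   # the binary Charles river flag: 2 distinct values
uniqueCounts["rad"]    # the radial highway index: 9 distinct values
uniqueCounts["zn"]     # the zoning variable: 26 distinct values
uniqueCounts["crim"]   # the per capita crime rate: 504 distinct values
```

<div class="MsoNormal" style="margin: 0in 0in 0pt;">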
<span style="mso-spacerun: yes;"> </span>Comparing the appearances of these density estimates, considerable variability is evident: the distribution of <strong>crim</strong> is very asymmetric
with an extremely heavy right tail, the distribution of <strong>lstat</strong> is also clearly asymmetric but far less so, while the distribution of <strong>rm</strong> appears to be almost Gaussian.
<span style="mso-spacerun: yes;"> </span>Finally, the distribution of <strong>zn</strong> appears to be tri-modal, mostly concentrated around zero, but with clear secondary peaks at around 20
and 80.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-71PKem_pq00/UF4hdvnxB9I
/AAAAAAAAAJ4/cs7iMj7gQOI/s1600/HolesFig01a.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" hea="true" height="319" src="http://4.bp.blogspot.com/-71PKem_pq00/
UF4hdvnxB9I/AAAAAAAAAJ4/cs7iMj7gQOI/s320/HolesFig01a.png" width="320" /></a></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">
<br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">Each of these four plots also includes some additional information about the corresponding variable: three vertical reference lines at
the mean (the solid line) and the mean offset by plus or minus three standard deviations (the dotted lines), and the value of the normalized <st1:place w:st="on">Shannon</st1:place> entropy, listed
in the title of each plot.<span style="mso-spacerun: yes;"> </span>This normalized entropy value is discussed in detail in Chapter 3 of <a href="http://www.amazon.com/
Exploring-Data-Engineering-Sciences-Medicine/dp/0195089650">Exploring Data in Engineering, the Sciences, and Medicine</a> and in two of my previous posts (<a href="http://
exploringdatablog.blogspot.com/2011/04/interestingness-measures.html">April 3, 2011</a> and <a href="http://exploringdatablog.blogspot.com/2011_05_01_archive.html">May 21, 2011</a>), and it
forms the basis for the spacing measure described below.<span style="mso-spacerun: yes;"> </span>First, however, the reason for including the three vertical reference lines on the density plots
is to illustrate that, while popular “Gaussian expectations” for data are approximately met for some numerical variables (the <strong>rm</strong> variable is a case in point here), often these
expectations are violated so much that they are useless.<span style="mso-spacerun: yes;"> </span>Specifically, note that under approximately Gaussian working assumptions, most of the observed
values for the data sequence should fall between the two dotted reference lines, which should correspond approximately to the smallest and largest values seen in the dataset.<span style=
"mso-spacerun: yes;"> </span>This description is reasonably accurate for the variable <strong>rm</strong>, and the upper limit appears fairly reasonable for the variable <strong>lstat</strong>,
but the lower limit is substantially negative here, which is not reasonable for this variable since it is defined as a percentage.<span style="mso-spacerun: yes;"> </span>These reference
lines appear even more divergent from the general shapes of the distributions for the <strong>crim</strong> and <strong>zn</strong> data, where again, the lower reference lines are substantially
negative, infeasible values for both of these variables.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">The reason the reference values defined by these lines are not particularly representative is the extremely heterogeneous nature of the data distributions, particularly for the variables <strong>crim</
strong> – where the distribution exhibits a very long right tail – and <strong>zn</strong> – where the distribution exhibits multiple modes. <span style="mso-spacerun: yes;"> </span>For
categorical variables, distributional heterogeneity can be assessed by measures like the normalized Shannon entropy, which varies between 0 and 1, taking the value zero when all levels of the
variable are equally represented, and taking the value 1 when only one of several possible values is present.<span style="mso-spacerun: yes;"> </span>This measure is easily computed and, while
it is intended for use with categorical variables, the procedures used to compute it will return results for numerical variables as well.<span style="mso-spacerun: yes;"> </span>These
values are shown in the figure captions of each of the above four plots, and it is clear from these results that the <st1:place w:st="on">Shannon</st1:place> measure does not give a reliable
indication of distributional heterogeneity here.<span style="mso-spacerun: yes;"> </span>In particular, note that the Shannon measure for the <strong>crim</strong> variable is zero to three
decimal places, suggesting a very homogeneous distribution, while the variables <strong>lstat</strong> and <strong>rm</strong> – both arguably less heterogeneous than <strong>crim</strong> – exhibit
slightly larger values of 0.006 and 0.007, respectively.<span style="mso-spacerun: yes;"> </span>Further, the variable <strong>zn</strong>, whose density estimate resembles that of <strong>crim
</strong> more than that of either of the other two variables, exhibits the much larger <st1:place w:st="on">Shannon</st1:place> entropy value of 0.585.</div><div class="MsoNormal" style="margin: 0in
0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">The basic difficulty here is that all observations of a continuously distributed random variable <em>should</em> be unique.
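</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">(A simplified re-implementation of the heterogeneity measure used here makes this difficulty concrete; note the convention, opposite to the usual entropy normalization, that equally represented levels give the value zero:)</div>

```r
# Normalized Shannon heterogeneity measure, as used in this post:
# 0 when all observed levels are equally represented, approaching 1 as a
# single level comes to dominate.  (Illustrative sketch only.)
normShannon <- function(x) {
  p <- table(x) / length(x)   # fraction of observations at each level
  H <- -sum(p * log(p))       # Shannon entropy of these fractions
  1 - H / log(length(p))      # reversed and normalized to [0, 1]
}

normShannon(1:100)            # all values unique: numerically zero
```

<div class="MsoNormal" style="margin: 0in 0in 0pt;">Any all-unique data sequence gives level fractions of 1/N, so this measure returns zero no matter what the underlying distribution looks like.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">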
<span style="mso-spacerun: yes;"> </span>The normalized <st1:place w:st="on">Shannon</st1:place> entropy – along with the other heterogeneity measures discussed in Chapter 3 of <em>Exploring
Data</em> – effectively treats variables as categorical, returning a value that is computed from the fractions of total observations assigned to each possible value for the variable.<span style=
"mso-spacerun: yes;"> </span>Thus, for an ideal continuously-distributed variable, every observed value appears once and only once, so these fractions should be 1/N for each of the N distinct
values observed for the variable.<span style="mso-spacerun: yes;"> </span>This means that the normalized Shannon measure – along with all of the alternative measures just noted – should be
identically zero for this case, regardless of whether the continuous distribution in question is Gaussian, Cauchy, Pareto, uniform, or anything else.<span style="mso-spacerun: yes;"> </span>In
fact, the <strong>crim</strong> variable considered here almost meets this ideal requirement: in 506 observations, <strong>crim</strong> exhibits 504 unique values, which is why its normalized
Shannon entropy value is zero to three decimal places.<span style="mso-spacerun: yes;"> </span>In marked contrast, the variable <strong>zn</strong>
exhibits only 26 distinct values, meaning that each of these values occurs, on average, just over 19 times.<span style="mso-spacerun: yes;"> </span>However, this average behavior is not
representative of the data in this case, since the smallest possible value (0) occurs 372 times, while the largest possible value (100) occurs only once.<span style="mso-spacerun: yes;"> </
span>It is because of the discrete character of this distribution that the normalized <st1:place w:st="on">Shannon</st1:place> entropy is much larger here, accurately reflecting the pronounced
distributional heterogeneity of this variable.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">Taken together, these
observations suggest a simple extension of the normalized <st1:place w:st="on">Shannon</st1:place> entropy that can give us a more adequate characterization of distributional differences for
numerical variables.<span style="mso-spacerun: yes;"> </span>Specifically, the idea is this: begin by dividing the total range of a numerical variable <em>x</em> into M equal intervals.<span
style="mso-spacerun: yes;"> </span>Then, count the number of observations that fall into each of these intervals and divide by the total number of observations N to obtain the fraction of
observations falling into each group.<span style="mso-spacerun: yes;"> </span>By doing this, we have effectively converted the original numerical variable into an M-level categorical variable,
to which we can apply heterogeneity measures like the normalized <st1:place w:st="on">Shannon</st1:place> entropy.<span style="mso-spacerun: yes;"> </span>The four plots below illustrate this
basic idea for the four <st1:city w:st="on"><st1:place w:st="on">Boston</st1:place></st1:city> housing variables considered above.<span style="mso-spacerun: yes;"> </span>Specifically, each
plot shows the fraction of observations falling into each of 10 equally spaced intervals, spanning the range from the smallest observed value of the variable to the largest.</div><div class
="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-5f9HSlzZrKk/UF4nSLcji2I/AAAAAAAAAKM/
PUf-UnLJZlI/s1600/HolesFig03a.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" hea="true" height="319" src="http://4.bp.blogspot.com/-5f9HSlzZrKk/UF4nSLcji2I/
AAAAAAAAAKM/PUf-UnLJZlI/s320/HolesFig03a.png" width="320" /></a></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div>
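<div class="MsoNormal" style="margin: 0in 0in 0pt;">The grouping-plus-entropy recipe just described can be sketched in a few lines of <em>R</em> (an illustrative re-implementation, not the code from the <strong>ExploringData</strong> package; in particular, how empty groups are handled and the normalization by log(M) are my assumptions here):</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div>

```r
# Spacing measure sketch: grouped normalized Shannon heterogeneity.
# Divide the range of x into M equal-width intervals, compute the fraction
# of observations in each, and apply the reversed, normalized Shannon
# entropy (near 0 for equal occupation, near 1 for strong concentration).
SpacingMeasure <- function(x, M = 10) {
  breaks <- seq(min(x), max(x), length.out = M + 1)
  groups <- cut(x, breaks = breaks, include.lowest = TRUE)
  p <- table(groups) / length(x)   # occupation fraction of each interval
  p <- p[p > 0]                    # drop empty intervals before taking logs
  1 - (-sum(p * log(p))) / log(M)  # normalization by log(M) is an assumption
}
```

<div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">Applied to a variable whose observations spread evenly across their range, a measure like this is near zero; applied to one like <strong>crim</strong>, whose observations pile up in the first few intervals, it is much larger.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div>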
<div class="MsoNormal" style="margin: 0in 0in 0pt;">As a specific example, consider the results shown in the upper left plot for the variable <strong>crim</strong>, which varies from a minimum of
0.00632 to a maximum of 89.0.<span style="mso-spacerun: yes;"> </span>Almost 87% of the observations fall into the smallest 10% of this range, from 0.00632 to 8.9, while the next two groups
account for almost all of the remaining observations.<span style="mso-spacerun: yes;"> </span>In fact, none of the other groups (4 through 10) account for more than 1% of the observations,
and one of these groups – group 7 – is completely empty.<span style="mso-spacerun: yes;"> </span>Computing the normalized <st1:place w:st="on">Shannon</st1:place> entropy from this ten-level
categorical variable yields 0.767, as indicated in the title of the upper left plot.<span style="mso-spacerun: yes;"> </span>In contrast, the corresponding plot for the <strong>lstat</
strong> variable, shown in the upper right, is much more uniform, with the first five groups exhibiting roughly the same fractional occupation.<span style="mso-spacerun: yes;"> </span>As a
consequence, the normalized <st1:place w:st="on">Shannon</st1:place> entropy for this grouped variable is much smaller than that for the more heterogeneously distributed <strong>crim</strong>
variable: 0.138 versus 0.767.<span style="mso-spacerun: yes;"> </span>Because the distribution is more sharply peaked for the <strong>rm</strong> variable than for <strong>lstat</strong>, the
occupation fractions for the grouped version of this variable (lower left plot) are less homogeneous, and the normalized <st1:place w:st="on">Shannon</st1:place> entropy is correspondingly larger, at
0.272.<span style="mso-spacerun: yes;"> </span>Finally, for the <strong>zn</strong> variable (lower right plot), the grouped distribution appears similar to that for the <strong>crim</strong>
variable, and the normalized <st1:place w:st="on">Shannon</st1:place> entropy values are also similar: 0.525 versus 0.767.<span style="mso-spacerun: yes;"> </span></div><div class="MsoNormal"
style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">The key point here is that, in contrast to the normalized Shannon entropy applied directly to the
numerical variables in the <strong>Boston</strong> dataframe, grouping these values into 10 equally-spaced intervals and then computing the normalized Shannon entropy gives a number that seems to be
more consistent with the distributional differences between these variables that can be seen clearly in their density plots.<span style="mso-spacerun: yes;"> </span>Motivation for this
numerical measure (i.e., why not just look at the density plots?) comes from the fact that we are sometimes faced with the task of characterizing a new dataset that we have not seen before.<span
style="mso-spacerun: yes;"> </span>While we can – and should – examine graphical representations of these variables, in cases where we have <em>many</em> such variables, it is desirable to have
a few, easily computed numerical measures to use as screening tools, guiding us in deciding which variables to look at first, and which techniques to apply to them.<span style="mso-spacerun: yes;">&
nbsp; </span>The spacing measure described here – i.e., the normalized <st1:place w:st="on">Shannon</st1:place> entropy measure applied to a grouped version of the numerical variable – appears to be
a potentially useful measure for this type of preliminary data characterization.<span style="mso-spacerun: yes;"> </span>For this reason, I am including it – along with a few other numerical
characterizations – in the <strong>DataFrameSummary</strong> procedure I am implementing as part of the <strong>ExploringData</strong> package, which I will describe in a later post.<span style=
"mso-spacerun: yes;"> </span>Next time, however, I will explore two obvious extensions of the procedure described here: different choices of the heterogeneity measure, and different choices of
the number of grouping levels.<span style="mso-spacerun: yes;"> </span>In particular, as I have shown in previous posts on interestingness measures, the normalized Bray, Gini, and Simpson
measures all behave somewhat differently than the Shannon measure considered here, raising the question of which one would be most effective in this application.<span
style="mso-spacerun: yes;"> </span>In addition, the choice of 10 grouping levels considered here was arbitrary, and it is by no means clear that this choice is the best one.<span style=
"mso-spacerun: yes;"> </span>In my next post, I will explore how sensitive the Boston housing results are to changes in these two key design parameters.</div><div class="MsoNormal" style=
"margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">Finally, it is worth saying something about how the grouping used here was implemented.<span style=
"mso-spacerun: yes;"> </span>The <em>R </em>code listed below is the function I used to convert a numerical variable <em>x</em> into the grouped variable from which I computed the normalized
Shannon entropy.<span style="mso-spacerun: yes;"> </span>The three key components of this function are the <strong>classIntervals</strong> function from the
<em>R</em> package <strong>classInt</strong> (which must be loaded before use; hence, the “library(classInt)” statement at the beginning of the function), and the <strong>cut</strong> and <strong>
table</strong> functions from base <em>R.</em><span style="mso-spacerun: yes;"> </span>The <strong>classIntervals</strong> function generates a two-element list with components <strong>var</
strong>, which contains the original observations, and <strong>brks</strong>, which contains the M+1 boundary values for the M groups to be generated.<span style="mso-spacerun: yes;"> </span>
Note that the <strong>style = “equal”</strong> argument is important here, since we want M equal-width groups.<span style="mso-spacerun: yes;"> </span>The <strong>cut</strong> function then
takes these results and converts them into an M-level categorical variable, assigning each original data value to the interval into which it falls.<span style="mso-spacerun: yes;"> </span>The
<strong>table</strong> function counts the number of times each of the M possible levels occurs for this categorical variable.<span style="mso-spacerun: yes;"> </span>Dividing this vector&
nbsp;by the sum of all entries then gives the fraction of observations falling into each group.<span style="mso-spacerun: yes;"> </span>Plotting the results obtained from this function and reformatting them slightly yields the four plots shown in the second figure above, and applying the <strong>shannon.proc</strong> procedure available from the <a href="http://www.oup.com/us/
static/companion.websites/9780195089653/TextFiles/shannonproc.txt">OUP companion website</a> for <em>Exploring Data</em> yields the Shannon entropy values listed in the figure titles.</div><div class
="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><blockquote class="tr_bq"><div class="MsoNormal" style="margin: 0in 0in 0pt;">
UniformSpacingFunction <- function(x, nLvls = 10){</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">&nbsp;&nbsp;# Group x into nLvls equal-width intervals and return the</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">&nbsp;&nbsp;# fraction of observations falling into each interval</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">&nbsp;&nbsp;library(classInt)</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">&nbsp;&nbsp;xsum = classIntervals(x, n = nLvls, style = "equal")</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">&nbsp;&nbsp;xcut = cut(xsum$var, breaks = xsum$brks, include.lowest = TRUE)</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">&nbsp;&nbsp;xtbl = table(xcut)</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">&nbsp;&nbsp;pvec = xtbl/sum(xtbl)</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">&nbsp;&nbsp;pvec</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">}</div></blockquote><div class="MsoNormal" style=
"margin: 0in 0in 0pt;"><br /></div>Ron Pearson (aka TheNoodleDoodler)http://www.blogger.com/profile/
15693640298594791682noreply@blogger.com0tag:blogger.com,1999:blog-9179325420174899779.post-5718125434476277962012-09-08T11:53:00.000-07:002012-09-08T11:53:23.397-07:00<div class="MsoNormal" style=
"margin: 0in 0in 0pt;">In my last post, I described and demonstrated the <strong>CountSummary</strong> procedure to be included in the <strong>ExploringData</strong> package that I am in the process
of developing.<span style="mso-spacerun: yes;"> </span>This procedure generates a collection of graphical data summaries for a count data sequence, based on the <strong>distplot</strong>,
<strong>Ord_plot</strong>, and <strong>Ord_estimate</strong> functions from the <strong>vcd</strong> package.<span style="mso-spacerun: yes;"> </span>The <strong>distplot</strong> function
generates both the <em>Poissonness plot</em> and the <em>negative-binomialness plot</em> discussed in Chapters 8 and 9 of <a href="http://www.amazon.com/Exploring-Data-Engineering-Sciences-Medicine/
dp/0195089650">Exploring Data in Engineering, the Sciences and Medicine</a>.<span style="mso-spacerun: yes;"> </span>These plots provide informal graphical assessments of the conformance of a
count data sequence with the two most popular distribution models for count data, the Poisson distribution and the negative-binomial distribution.<span style="mso-spacerun: yes;"> </span>As
promised, this post describes the <em>R</em> code needed to implement the <strong>CountSummary</strong> procedure, based on these functions from the <strong>vcd</strong> package.</div><div class
="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">The key to this implementation lies in the use of the <strong>grid</strong> package, a set of
low-level graphics primitives included in base <em>R</em>.<span style="mso-spacerun: yes;"> </span>As I mentioned in my last post, the reason this was necessary – instead of using higher-level graphics packages like <strong>lattice</strong> or <strong>ggplot2</strong> – was that the <strong>vcd</strong> package is based on grid graphics, making it incompatible with base graphics commands like those used to generate arrays of multiple plots. The <strong>grid</strong> package was developed by Paul Murrell, who provides a lot of extremely useful information about both
<em>R</em> graphics in general and grid graphics in particular on his <a href="http://www.stat.auckland.ac.nz/~paul/">home page</a>, including the article “Drawing Diagrams with R,” which provides a
nicely focused introduction to grid graphics.<span style="mso-spacerun: yes;"> </span>The first example I present here is basically a composite of the first two examples presented in this
paper.<span style="mso-spacerun: yes;"> </span>Specifically, the code for this example is:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style=
"margin: 0in 0in 0pt;"><br /></div><blockquote class="tr_bq"><div class="MsoNormal" style="margin: 0in 0in 0pt;">library(grid)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">grid.newpage()
</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">pushViewport(viewport(width = 0.8, height = 0.4))</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">grid.roundrect()</div><div class=
"MsoNormal" style="margin: 0in 0in 0pt;">grid.text("This is text in a box")</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">popViewport()</div></blockquote><div class="MsoNormal" style=
"margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">The first line of this R code loads the <strong>grid</strong> package and the second tells this package to clear
the plot window; failing to do this will cause this particular piece of code to overwrite whatever was there before, which usually isn’t what you want.<span style="mso-spacerun: yes;"> </span>
The third line creates a <em>viewport</em>, into which the plot will be placed.<span style="mso-spacerun: yes;"> </span>In this particular example, we specify a width of 0.8, or 80% of the
total plot window width, and a height of 0.4, corresponding to 40% of the total window height. <span style="mso-spacerun: yes;"> </span>The next two lines draw a rectangular box with rounded
corners and put “This is text in a box” in the center of this box.<span style="mso-spacerun: yes;"> </span>The advantage of the <strong>grid </strong>package is that it provides us with simple
graphics primitives to draw this kind of figure, without having to compute exact positions (e.g., in inches) for the different figure components.<span style="mso-spacerun: yes;"> </span>
Commands like <strong>grid.text</strong> provide useful defaults (i.e., put the text in the center of the viewport), which can be overridden by specifying positional parameters in a variety of ways
(e.g., left- or right-justified, offsets in inches or lines of text, etc.).<span style="mso-spacerun: yes;"> </span>The results obtained using these commands are shown in the figure below.</
div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-It8lzfxyfvc/UEtohPq67jI/
AAAAAAAAAJU/Z7VZG2enuO4/s1600/ImplementingFig01.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="319" src="http://2.bp.blogspot.com/-It8lzfxyfvc/
UEtohPq67jI/AAAAAAAAAJU/Z7VZG2enuO4/s320/ImplementingFig01.png" width="320" /></a></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in
0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">The code for the second example is a simple extension of the first one, essentially consisting of the added initial code required
to create the desired two-by-two plot array, followed by four slightly modified copies of the above code.<span style="mso-spacerun: yes;"> </span>Specifically, this code is:</div><div class=
"MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><blockquote class="tr_bq"><div class="MsoNormal" style="margin: 0in 0in 0pt;">grid.newpage()</div><div class="MsoNormal" style="margin: 0in 0in
0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">pushViewport(viewport(layout=grid.layout(nrow=2,ncol=2)))</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">pushViewport(viewport(layout.pos.row=1,layout.pos.col=1))</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">grid.roundrect(width = 0.8,
height=0.4)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">grid.text("Plot 1 goes here")</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">popViewport()</div><div class="MsoNormal"
style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">pushViewport(viewport(layout.pos.row=
1,layout.pos.col=2))</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">grid.roundrect(width = 0.8, height=0.4)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">grid.text("Plot 2 goes
here")</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">popViewport()</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">
<br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">pushViewport(viewport(layout.pos.row=2,layout.pos.col=1))</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">grid.roundrect(width
= 0.8, height=0.4)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">grid.text("Plot 3 goes here")</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">popViewport()</div><div class=
"MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">pushViewport(viewport
(layout.pos.row=2,layout.pos.col=2))</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">grid.roundrect(width = 0.8, height=0.4)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">
grid.text("Plot 4 goes here")</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">popViewport()</div></blockquote><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class=
"MsoNormal" style="margin: 0in 0in 0pt;">Here, note that the first “pushViewport” command creates the two-by-two plot array we want, by specifying “layout = grid.layout(nrow=2,ncol=2)”.<span style=
"mso-spacerun: yes;"> </span>As in initializing a data frame in <em>R</em>, we can create an arbitrary two-dimensional array of grid graphics viewports – say m by n – by specifying “layout =
grid.layout(nrow=m, ncol=n)”.<span style="mso-spacerun: yes;"> </span>Once we have done this, we can use whatever <strong>grid</strong> commands – or grid-compatible commands, such as those
generated by the <strong>vcd</strong> package – we want, to create the individual elements in our array of plots.<span style="mso-spacerun: yes;"> </span>In this example, I have basically
repeated the code from the first example to put text into rounded rectangular boxes in each position of the plot array.<span style="mso-spacerun: yes;"> </span>The two most important details are, first, the “pushViewport” command at the beginning of each of these individual plot blocks, which specifies which of the four array elements the following plot will go in, and second, the “popViewport()” command at the end of each block, which tells the <strong>grid</strong> package that we are finished with this element of the array.<span style="mso-spacerun: yes;"> </span>If we leave this
command out, the next “pushViewport” command will not move to the desired plot element, but will simply overwrite the previous plot.<span style="mso-spacerun: yes;"> </span>Executing this code
yields the plot shown below.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/
--w8pVClO-Wk/UEtpZ4Fq8mI/AAAAAAAAAJc/IoDInNuG7t8/s1600/ImplementingFig02.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="319" src="http://
4.bp.blogspot.com/--w8pVClO-Wk/UEtpZ4Fq8mI/AAAAAAAAAJc/IoDInNuG7t8/s320/ImplementingFig02.png" width="320" /></a></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class=
"MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">The final example replaces the text in the above two-by-two example with the plots I want for
the <strong>CountSummary</strong> procedure.<span style="mso-spacerun: yes;"> </span>Before presenting this code, it is important to say something about the structure of the resulting plot and
the <strong>vcd</strong> commands used to generate the different plot elements.<span style="mso-spacerun: yes;"> </span>The first plot – in the upper left position of the array shown below – is
an Ord plot, generated by the <strong>Ord_plot</strong> command, which does two things.<span style="mso-spacerun: yes;"> </span>The first is to generate the desired plot, but the second is to
return estimates of the intercept and slope of one of the two reference lines in the plot.<span style="mso-spacerun: yes;"> </span>The first of these lines is fit to the points in the plot via
ordinary least squares, while the second – the one whose parameters are returned – is fit via weighted least squares, to down-weight the widely scattered points seen in this plot that correspond to
cases with very few observations.<span style="mso-spacerun: yes;"> </span>The intent of the Ord plot is to help us decide which of several alternative distributions – including both the Poisson
and the negative-binomial – fits our count data sequence better.<span style="mso-spacerun: yes;"> </span>This guidance is based on the reference
line parameters, and the <strong>Ord_estimate</strong> function in the <strong>vcd</strong> package transforms these parameter estimates into distributional recommendations and the distribution
parameter values needed by the <strong>distplot</strong> function in the <strong>vcd</strong> package to generate either the Poissonness plot or the negative-binomialness plot for the count data
sequence.<span style="mso-spacerun: yes;"> </span>Although these recommendations are sometimes useful, it is important to emphasize the caution given in the vcd package documentation:</div><div
class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><blockquote class="tr_bq">“Be careful with the conclusions from Ord_estimate as it implements just some simple heuristics!”</blockquote><div class="MsoNormal" style="margin:
0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/
-j1m2Fw17_Fw/UEtqTNjZs6I/AAAAAAAAAJk/YpIHlA1JYi4/s1600/ImplementingFig03.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="319" src="http://
1.bp.blogspot.com/-j1m2Fw17_Fw/UEtqTNjZs6I/AAAAAAAAAJk/YpIHlA1JYi4/s320/ImplementingFig03.png" width="320" /></a></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class=
"MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">In the <strong>CountSummary</strong> procedure, I use these results both to generate part of
the text summary in the upper right element of the plot array, and to decide which type of plot to display in the lower right element of this array.<span style="mso-spacerun: yes;"> </span>Both
this plot and the Poissonness reference plot in the lower left element of the display are created using the <strong>distplot</strong> command in the <strong>vcd</strong> package.<span style=
"mso-spacerun: yes;"> </span>I include the Poissonness reference plot because the Poisson distribution is the most commonly assumed distribution for count data – analogous in many ways to the
Gaussian distribution so often assumed for continuous-valued data – and, by not specifying the single parameter for this distribution, I allow the function to determine it by fitting the data.<span
style="mso-spacerun: yes;"> </span>In cases where the Ord plot heuristic recommends the Poissonness plot, it also provides this parameter, which I provide to the <strong>distplot</strong>
function for the lower right plot. Thus, while both the lower right and lower left plots are Poissonness plots in this case, they are generally based on different distribution parameters.
<span style="mso-spacerun: yes;"> </span>In the particular example shown here – constructed from the “number of times pregnant” variable in the Pima Indians diabetes dataset that I have
discussed in several previous posts (available from the <a href="http://archive.ics.uci.edu/ml/datasets/Pima+Indians+Diabetes">UCI Machine Learning Repository</a>) – the Ord plot heuristic recommends
the negative binomial distribution.<span style="mso-spacerun: yes;"> </span>Comparing the Poissonness and negative-binomialness plots in the bottom row of the above plot array, it does appear
that the negative binomial distribution fits the data better.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">Finally, before
examining the code for the <strong>CountSummary</strong> procedure, it is worth noting that the <strong>vcd</strong> package’s implementation of the <strong>Ord_plot</strong> and <strong>Ord_estimate
</strong> procedures can generate four different distributional recommendations: the Poisson and negative-binomial distributions discussed here, along with the binomial distribution and the much less
well-known <em>log-series distribution</em>.<span style="mso-spacerun: yes;"> </span>The <strong>distplot</strong> procedure is flexible enough to generate plots for the first three of these
distributions, but not the fourth, so in cases where the Ord plot heuristic recommends this last distribution, the <strong>CountSummary</strong> procedure displays the recommended distribution and
parameter, but displays a warning message that no distribution plot is available for this case in the lower right plot position.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div>
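<div class="MsoNormal" style="margin: 0in 0in 0pt;">As a minimal sketch of how these three <strong>vcd</strong> functions fit together – shown here with simulated negative-binomial counts rather than any of the datasets discussed above, so the heuristic’s recommendation may differ from run to run – the basic call sequence is:</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div>
<blockquote class="tr_bq">
<div class="MsoNormal" style="margin: 0in 0in 0pt;">library(vcd)</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"># simulated count data, for illustration only</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">set.seed(33)</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">xCount <- rnbinom(500, size = 2, prob = 0.3)</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"># draw the Ord plot and capture the reference line parameters</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">OrdLine <- Ord_plot(xCount)</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"># convert them into a distributional recommendation</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">OrdType <- Ord_estimate(OrdLine)</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"># generate the corresponding distribution plot, if one is available</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">if (OrdType$type == "nbinomial"){</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">&nbsp;&nbsp;distplot(xCount, type = "nbinomial", size = 1/OrdType$estimate - 1)</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">}</div>
</blockquote>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">If the heuristic recommends one of the other distributions, the corresponding type argument can be substituted in the <strong>distplot</strong> call, which is exactly the logic automated by the <strong>CountSummary</strong> procedure.</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div>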
<div class="MsoNormal" style="margin: 0in 0in 0pt;">The code for the <strong>CountSummary</strong> procedure looks like this:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div>
<blockquote class="tr_bq"><div class="MsoNormal" style="margin: 0in 0in 0pt;">CountSummary <- function(xCount,TitleString){</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style=
"mso-spacerun: yes;"> </span>#</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>#<span style="mso-spacerun: yes;"> </span>Initial
setup</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>#</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun:
yes;"> </span>library(vcd)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>grid.newpage()</div><div class="MsoNormal" style="margin: 0in
0in 0pt;"><span style="mso-spacerun: yes;"> </span>#</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>#<span style="mso-spacerun: yes;">&
nbsp; </span>Set up 2x2 array of plots</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>#</div><div class="MsoNormal" style="margin: 0in 0in
0pt;"><span style="mso-spacerun: yes;"> </span>pushViewport(viewport(layout=grid.layout(nrow=2,ncol=2)))</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;
"> </span>#</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>#<span style="mso-spacerun: yes;"> </span>Generate the plots:</div><div
class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>#</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </
span>#<span style="mso-spacerun: yes;"> </span>1 - upper left = Ord plot</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>#</div><div
class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>pushViewport(viewport(layout.pos.row=1,layout.pos.col=1))</div><div class="MsoNormal" style="margin: 0in
0in 0pt;"><span style="mso-spacerun: yes;"> </span>OrdLine = Ord_plot(xCount, newpage = FALSE, pop=FALSE, legend=FALSE)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style=
"mso-spacerun: yes;"> </span>OrdType = Ord_estimate(OrdLine)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>popViewport()</div><div
class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>#</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </
span>#<span style="mso-spacerun: yes;"> </span>2 - upper right = text summary</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>#</div><div
class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>OrdTypeText = paste("Type = ",OrdType$type,sep=" ")</div><div class="MsoNormal" style="margin: 0in 0in
0pt;"><span style="mso-spacerun: yes;"> </span>if (OrdType$type == "poisson"){</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </
span>OrdPar = "Lambda = "</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>}</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span
style="mso-spacerun: yes;">&nbsp; </span>else if (OrdType$type == "nbinomial"){</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;">&nbsp;&nbsp;&nbsp;&nbsp; </span>OrdPar = "Prob = "</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;">&nbsp; </span>}</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;">&nbsp; </span>else if (OrdType$type == "log-series"){</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;">&nbsp;&nbsp;&nbsp;&nbsp; </span>OrdPar = "Theta = "</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;">&nbsp; </span>}</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;">&nbsp; </span>else{</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;">&nbsp;&nbsp;&nbsp;&nbsp; </span>OrdPar = "Parameter = "</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;">&nbsp; </span>}</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;">&nbsp; </span>OrdEstText = paste(OrdPar, round(OrdType$estimate, digits=3), sep=" ")</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;">&nbsp; </span>TextSummary = paste("Ord plot heuristic results:", OrdTypeText, OrdEstText, sep="\n")</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;">&nbsp; </span>pushViewport(viewport(layout.pos.row=1, layout.pos.col=2))</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;">&nbsp; </span>grid.text(TitleString, y=2/3, gp=gpar(fontface="bold"))</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;">&nbsp; </span>grid.text(TextSummary, y=1/3)</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;">&nbsp; </span>popViewport()</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;">&nbsp; </span>#</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;">&nbsp; </span>#&nbsp;&nbsp;3 - lower left = standard Poissonness plot</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;">&nbsp; </span>#</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;">&nbsp; </span>pushViewport(viewport(layout.pos.row=2, layout.pos.col=1))</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;">&nbsp; </span>distplot(xCount, type="poisson", newpage=FALSE, pop=FALSE, legend=FALSE)</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;">&nbsp; </span>popViewport()</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;">&nbsp; </span>#</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;">&nbsp; </span>#&nbsp;&nbsp;4 - lower right = plot suggested by Ord results</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;">&nbsp; </span>#</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;">&nbsp; </span>pushViewport(viewport(layout.pos.row=2, layout.pos.col=2))</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;">&nbsp; </span>if (OrdType$type == "poisson"){</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;">&nbsp;&nbsp;&nbsp;&nbsp; </span>distplot(xCount, type="poisson", lambda=OrdType$estimate, newpage=FALSE, pop=FALSE, legend=FALSE)</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;">&nbsp; </span>}</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;">&nbsp; </span>else if (OrdType$type == "nbinomial"){</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;">&nbsp;&nbsp;&nbsp;&nbsp; </span>prob = OrdType$estimate</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;">&nbsp;&nbsp;&nbsp;&nbsp; </span>size = 1/prob - 1</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;">&nbsp;&nbsp;&nbsp;&nbsp; </span>distplot(xCount, type="nbinomial", size=size, newpage=FALSE, pop=FALSE, legend=FALSE)</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;">&nbsp; </span>}</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;">&nbsp; </span>else if (OrdType$type == "binomial"){</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;">&nbsp;&nbsp;&nbsp;&nbsp; </span>distplot(xCount, type="binomial", newpage=FALSE, pop=FALSE, legend=FALSE)</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;">&nbsp; </span>}</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;">&nbsp; </span>else{</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;">&nbsp;&nbsp;&nbsp;&nbsp; </span>Message = paste("No distribution plot","available","for this case",sep="\n")</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;">&nbsp;&nbsp;&nbsp;&nbsp; </span>grid.text(Message)</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;">&nbsp; </span>}</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;">&nbsp; </span>popViewport()</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;">&nbsp; </span>#</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">}</div></blockquote><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div
class="MsoNormal" style="margin: 0in 0in 0pt;">This procedure is a function called with two arguments: the sequence of count values, <strong>xCount</strong>, and <strong>TitleString</strong>, a text
string that is displayed in the upper right text box in the plot array, along with the recommendations from the Ord plot heuristic.<span style="mso-spacerun: yes;"> </span>When called, the
function first loads the <strong>vcd </strong>library to make the <strong>Ord_plot</strong>, <strong>Ord_estimate</strong>, and <strong>distplot</strong> functions available for use, and it executes
the <strong>grid.newpage()</strong> command to clear the display.<span style="mso-spacerun: yes;"> </span>(Note that we don’t have to include “library(grid)” here to load the <strong>grid </
strong>package, since loading the <strong>vcd</strong> package automatically does this.)<span style="mso-spacerun: yes;"> </span>As in the previous example, the first “pushViewport”
command creates the two-by-two plot array, and this is again followed by four code segments, one to generate each of the four displays in this array.<span style="mso-spacerun: yes;"> </span>The
first of these segments invokes the <strong>Ord_plot</strong> and <strong>Ord_estimate</strong> commands as discussed above, first to generate the upper left plot (a side-effect of the <strong>
Ord_plot</strong> command) and second, to obtain the Ord plot heuristic recommendations, to be used in structuring the rest of the display.<span style="mso-spacerun: yes;"> </span>The second
segment creates a text display as in the first example considered here, but the structure of this display depends on the Ord plot heuristic results (i.e., the names of the parameters for the four
possible recommended distributions are different, and the logic in this code block matches the display text to this distribution).<span style="mso-spacerun: yes;"> </span>As noted in the
preceding discussion, the third plot (lower left) is the Poissonness plot generated by the <strong>distplot</strong> function from the <strong>vcd</strong> package.<span style="mso-spacerun: yes;">&
nbsp; </span>In this case, the function is called specifying only ‘type = “poisson”’, without the optional distribution parameter lambda, which is instead obtained by fitting the data.<span style=
"mso-spacerun: yes;"> </span>The final element of this plot array, in the lower right, is also generated via a call to the <strong>distplot</strong> function, but here, the results from the Ord
plot heuristic are used to specify both the type parameter and any optional or required shape parameters for the distribution.<span style="mso-spacerun: yes;"> </span>As with the displayed
text, simple if-then-else logic is required here to match the plot generated with the Ord plot heuristic recommendations.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div
class="MsoNormal" style="margin: 0in 0in 0pt;">Finally, it is important to note that in all of the calls made to <strong>Ord_plot</strong> or <strong>distplot</strong> in the <strong>CountSummary</
strong> procedure, the parameters <strong>newpage</strong>, <strong>pop</strong>, and <strong>legend</strong> are all specified as FALSE.<span style="mso-spacerun: yes;"> </span>Specifying
“newpage = FALSE” prevents these <strong>vcd</strong> plot commands from clearing the display page and erasing everything we have done so far.<span style="mso-spacerun: yes;"> </span>Similarly,
specifying “pop = FALSE” allows us to continue working in the current plot window until we notify the grid graphics system that we are done with it by issuing our own “popViewport()” command.<span
style="mso-spacerun: yes;"> </span>Specifying “legend = FALSE” tells <strong>Ord_plot</strong> and <strong>distplot</strong> not to write the default informational legend on each plot.<span
style="mso-spacerun: yes;"> </span>This is important here because, given the relatively small size of the plots generated in this two-by-two array, including the default legends would obscure
important details.</div>Ron Pearson (aka TheNoodleDoodler)http://www.blogger.com/profile/
15693640298594791682noreply@blogger.com0tag:blogger.com,1999:blog-9179325420174899779.post-22023757427221913632012-07-21T13:33:00.000-07:002012-07-21T13:33:51.861-07:00In a comment in response to my
latest post, Robert Young took issue with my characterization of <strong>grid</strong> as an <em>R</em> graphics package. Perhaps <strong>grid</strong> is better described as a “graphics
support package,” but my primary point – and the main point of this post – is that to generate the display you want, it is sometimes necessary to use commands from this package. In my case, the
necessity to learn something about grid graphics came as the result of my attempt to implement the <strong>CountSummary</strong> procedure to be included in the <strong>ExploringData</strong> package
that I am developing. <strong>CountSummary</strong> is a graphical summary procedure for count data, based on <em>Poissonness plots, negative binomialness plots</em>, and <em>Ord plots</em>, all
discussed in Chapter 8 of <a href="http://www.amazon.com/Exploring-Data-Engineering-Sciences-Medicine/dp/0195089650">Exploring Data in Engineering, the Sciences and Medicine</a>. My original idea was
to implement these plots myself, but then I discovered that all three were already available in the <strong>vcd</strong> package. One of the great things about <em>R</em> is that you are encouraged
to build on what already exists, so using the <strong>vcd</strong> implementations seemed like a no-brainer. Unfortunately, my first attempt at creating a two-by-two array of plots from the <strong>
vcd</strong> package failed, and I didn’t understand why. The reason turned out to be that I was attempting to mix the base graphics command “<strong>par(mfrow=c(2,2))</strong>” that sets up a
two-by-two array with various plotting commands from <strong>vcd</strong>, which are based on grid graphics. Because these two graphics systems don’t play well together, I didn’t get the results I
wanted. In the end, however, by learning a little about the <strong>grid</strong> package and its commands, I was able to generate my two-by-two plot array without a great deal of difficulty. Since
grid graphics isn’t even mentioned in my favorite <em>R</em> reference book (Michael Crawley’s <a href="http://www.amazon.com/The-Book-Michael-J-Crawley/dp/0470510242">The R Book</a>), I wanted to
say a little here about what the <strong>grid</strong> package is and why you might need to know something about it. To do this, I will describe the ideas that went into the development of the
<strong>CountSummary</strong> procedure and conclude this post with an example that shows what the output looks like. Next time, I will give a detailed discussion of the <em>R</em> code that
generated these results. (For those wanting a preliminary view of what the code looks like, load the <strong>vcd</strong> package with the <strong>library</strong> command and run “<strong>example(Ord_plot)</strong>” – in addition to generating the plots, this example displays the grid commands needed to construct the two-by-two array.)<br /><br /><br /><br /><br />Count variables –
non-negative integer variables like the “number of times pregnant” (NPG) variable from the Pima Indians database described below – are often assumed to obey a Poisson distribution, in much the same
way that continuous-valued variables are often assumed to obey a Gaussian (normal) distribution. Like this normality assumption for continuous variables, the Poisson assumption for count data is
sometimes reasonable, but sometimes it isn’t. Normal quantile-quantile plots like those generated by the <strong>qqnorm</strong> command in base <em>R</em> or the <strong>qqPlot</strong> command from
the <strong>car</strong> package are useful in informally assessing the reasonableness of the normality assumption for continuous data. Similarly, Poissonness plots are the corresponding graphical
tool for informally evaluating the Poisson hypothesis for count data. The construction and interpretation of these plots is discussed in some detail in Chapters 8 and 9 of <em>Exploring Data</em>,
but briefly, this plot constructs a variable called the <em>Poissonness count metameter</em> from the number of times each possible count value occurs in the data; if the data sequence conforms to
the Poisson distribution, the points on this plot should fall approximately on a straight line. A simple <em>R</em> function that constructs Poissonness plots is available on the <a href="http:/
/www.oup.com/us/companion.websites/9780195089653/rprogram/?view=usa">OUP companion website</a> for the book, but an implementation that is both more conveniently available and more flexible is the
<strong>distplot</strong> function in the <strong>vcd</strong> package, which also generates the negative binomialness plot discussed below.<br /><br /><br /><br /><br /><div class="separator" style=
"clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-eZyHMtgdR00/UAsBSv87yEI/AAAAAAAAAI8/orrB7rCvnAI/s1600/PoissonnessPlot.png" imageanchor="1" style="margin-left: 1em; margin-right:
1em;"><img border="0" height="319" src="http://3.bp.blogspot.com/-eZyHMtgdR00/UAsBSv87yEI/AAAAAAAAAI8/orrB7rCvnAI/s320/PoissonnessPlot.png" width="320" /></a></div>
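The count metameter just described can be sketched in a few lines of base R (a rough paraphrase of the construction on my part, not the OUP or vcd implementation; the function name is mine):

```r
# Poissonness count metameter: log(k! * n_k / N), where n_k is the number
# of times the count value k occurs among the N observations.  Under a
# Poisson model with mean lambda, this quantity is approximately linear
# in k, with slope log(lambda) and intercept -lambda.
poissonnessMetameter <- function(x) {
  N <- length(x)
  counts <- table(x)
  k <- as.numeric(names(counts))    # distinct count values observed
  nk <- as.vector(counts)           # number of occurrences of each value
  data.frame(k = k, metameter = lfactorial(k) + log(nk) - log(N))
}
```

Plotting the metameter against k and judging the points against a straight reference line is essentially what the <strong>distplot</strong> function automates, with confidence bounds added.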
<br /><br /><br />The figure above is the Poissonness plot constructed using the <strong>distplot</strong> procedure from the <strong>vcd</strong> package for the NPG variable from the Pima Indians
diabetes dataset mentioned above. I have discussed this dataset in previous posts and have used it as the basis for several examples in <em>Exploring Data</em>. It is available from the <a href=
"http://archive.ics.uci.edu/ml/datasets/Pima+Indians+Diabetes">UCI Machine Learning Repository</a> and it has been incorporated in various forms as an example dataset in a number of <em>R</em>
packages, including a cleaned-up version in the <strong>MASS</strong> package (dataset <strong>Pima.tr</strong>). The full version considered here contains nine characteristics for 768 female members
of the Pima Indian tribe, including their age, medical characteristics like diastolic blood pressure, and the number of times each woman has been pregnant. If this NPG count sequence obeyed the
Poisson distribution, the points in the above plot would fall approximately on the reference line included there. The fact that these points do not conform well to this line – note, in particular,
the departure at the lower left end of the plot where most of the counts occur – calls the Poisson working assumption into question.<br /><br /><br /><br />A fundamental feature of the Poisson
distribution is that it is defined by a single parameter that determines all distributional characteristics, including both the mean and the variance. In fact, a key characteristic of the Poisson
distribution is that the variance is equal to the mean. This constraint is not satisfied by all count data sequences we encounter, however, and these deviations are important enough to receive
special designations: integer sequences whose variance is larger than their mean are commonly called <em>overdispersed</em>, while those whose variance is smaller than their mean are commonly called
<em>underdispersed</em>. In practice, overdispersion seems to occur more frequently, and a popular distributional alternative for overdispersed sequences is the negative binomial distribution. This
distribution is defined by two parameters and it is capable of matching both the mean and variance of arbitrary overdispersed count data sequences. For a detailed discussion of this distribution,
refer to Chapter 3 of <em>Exploring Data</em>.<br /><br /><br /><br />Like the Poisson distribution, it is possible to evaluate the reasonableness of the negative binomial distribution graphically,
via the negative binomialness plot. Like the Poissonness plot, this plot is based on a quantity called the <em>negative binomialness metameter</em>, computed from the number of times each count value
occurs, plotted against those count values. To construct this plot, it is necessary to specify a numerical value for the distribution’s second parameter (the <em>size</em> parameter in the <strong>
distplot</strong> command, corresponding to the <em>r</em> parameter in the discussion of this distribution given in Chapter 8 of <em>Exploring Data</em>). This can be done in several different ways,
including the specification of trial values, the approach taken in the negative binomialness plot procedure that is available from the OUP companion website. This option is also
available with the <strong>distplot</strong> command from the <strong>vcd</strong> package: to obtain a negative binomialness plot, specify the <em>type</em> parameter as “nbinomial” and, if a fixed
<em>size</em> parameter is desired, it is specified by giving a numerical value for the <em>size</em> parameter in the <strong>distplot</strong> function call. Alternatively, if this parameter is not
specified, the <strong>distplot</strong> procedure will estimate it via the method of maximum likelihood, an extremely useful feature, although it is important to note that this estimation process
can be time-consuming, especially for long data sequences. Finally, a third approach that can be adopted is to use the Ord plot described next to obtain an estimate of this parameter based on a
simple heuristic. In addition, this heuristic suggests which of these two candidate distributions – the Poisson or the negative binomial – is more appropriate for the data sequence. <br /><br /><br
/><br />Like the Poissonness plot, the Ord plot computes a simple derived quantity from the original count data sequence – specifically, the <em>frequency ratio, </em>defined for each count
value as that value multiplied by the ratio of the number of times it occurs to the number of times the next smaller count occurs – and plots this versus the counts. If the data sequence obeys
the negative binomial distribution, these points should conform reasonably well to a line with positive slope, and this slope can be used to determine the <em>size</em> parameter for the
distribution. Conversely, if the Poisson distribution is appropriate, the best fit reference line for the Ord plot should have zero slope. In addition, Ord plots can also be used to suggest two
additional discrete distributions (specifically, the binomial distribution and the log-series distribution), and the <strong>vcd </strong>package provides dataset examples to illustrate all four of
these cases.<br /><br /><br /><br />For my <strong>CountSummary</strong> procedure, I decided to construct a two-by-two array with the following four components. First, in the upper left, I used the
<strong>Ord_plot</strong> command in <strong>vcd</strong> to generate an Ord plot. This command returns the intercept and slope parameters for the reference line in the plot, and the <strong>
Ord_estimate</strong> command can then be used to convert these values into a type specification and an estimate of the distribution parameter needed to construct the appropriate discrete
distribution plot. I will discuss these results in more detail in my next post, but for the case of the NPG count sequence considered here, the Ord plot results suggest the negative binomial
distribution as the most appropriate choice, returning a parameter <em>prob</em>, from which the <em>size</em> parameter required to generate the negative binomialness plot may be generated
(specifically, <em>size = 1/prob – 1</em>). The upper right quadrant of this display gives a text summary identifying the variable being characterized and listing the Ord plot recommendations and
parameter estimate. Since the Poisson distribution is “the default” assumption for count data, the lower left plot shows a Poissonness plot for the data sequence, while the lower right plot is the
“distribution-ness plot” for the distribution recommended by the Ord plot results. The results obtained by the <strong>CountSummary</strong> procedure for the NPG sequence are shown below. Next time,
I will present the code used to generate this plot.<br /><br /><br /><br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-8avnjKu1QWw/
UAsEgiaIysI/AAAAAAAAAJI/q7Z52I_U0-k/s1600/CountSummaryExample.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="319" src=
"http://3.bp.blogspot.com/-8avnjKu1QWw/UAsEgiaIysI/AAAAAAAAAJI/q7Z52I_U0-k/s320/CountSummaryExample.png" width="320" /></a></div><br /><br /><br /><br /><br />Ron Pearson (aka TheNoodleDoodler)<div
class="MsoNormal" style="margin: 0in 0in 0pt;">About this time last month, I attended the 2012 UseR! Meeting.<span style="mso-spacerun: yes;"> </span>Now an annual event, this series of
conferences started in Europe in 2004 as an every-other-year gathering that now seems to alternate between the U.S. and Europe.<span style="mso-spacerun: yes;"> </span>This year’s meeting was held on the Vanderbilt University campus in Nashville, TN, and it was attended by about 500 <i style="mso-bidi-font-style: normal;">R </i>aficionados, ranging from beginners who have just
learned about <i style="mso-bidi-font-style: normal;">R</i> to members of the original group of developers and the R Core Team that continues to maintain it.<span style="mso-spacerun: yes;
"> </span>Many different topics were discussed, but one given particular emphasis was data visualization, which forms the primary focus of this post.<span style="mso-spacerun: yes;"> </
span>For a more complete view of the range of topics discussed and who discussed them, the conference program is available as a <a href="http://biostat.mc.vanderbilt.edu/wiki/pub/Main/UseR-2012/
useR-2012-program.pdf">PDF file</a> that includes short abstracts of the talks.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;
">All attendees were invited to present a Lightning Talk, and about 20 of us did.<span style="mso-spacerun: yes;"> </span>The format is essentially the technical equivalent of the 50-yard dash:
before the talk, you provide the organizers exactly 15 slides, each of which is displayed for 20 seconds.<span style="mso-spacerun: yes;"> </span>The speaker’s challenge is first, to try to
keep up with the slides, and second, to try to convey some useful information about each one.<span style="mso-spacerun: yes;"> </span>For my Lightning Talk, I described the <b style=
"mso-bidi-font-weight: normal;">ExploringData</b> <i style="mso-bidi-font-style: normal;">R</i> package that I am in the process of developing, as a companion to both this blog and my book, <a href=
"http://www.amazon.com/Exploring-Data-Engineering-Sciences-Medicine/dp/0195089650">Exploring Data in Engineering, the Sciences, and Medicine</a>.<span style="mso-spacerun: yes;"> </span>The
intent of the package is first, to make the <i style="mso-bidi-font-style: normal;">R</i> procedures and datasets from the <a href="http://www.oup.com/us/companion.websites/9780195089653/rprogram/?
view=usa">OUP companion site</a> for the book more readily accessible, and second, to provide some additional useful tools for exploratory data analysis, incorporating some of the extensions I have
discussed in previous blog posts.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">Originally, I had hoped to have the package
complete by the time I gave my Lightning Talk, but in retrospect, it is just as well that the package is still in the development stage, because I picked up some extremely useful tips on what
constitutes a good package at the meeting.<span style="mso-spacerun: yes;"> </span>As a specific example, Hadley Wickham, Professor of Statistics at Rice University and the developer of the <b style="mso-bidi-font-weight: normal;">ggplot2</b> package (more on this later), gave a standing-room-only
talk on package development, featuring the <b style="mso-bidi-font-weight: normal;">devtools</b> package, something he developed to make the <i style="mso-bidi-font-style: normal;">R</i> package
development process easier.<span style="mso-spacerun: yes;"> </span>In addition, the CRC vendor display at the meeting gave me the opportunity to browse and purchase Paul Murrell’s book, <a
href="http://www.amazon.com/Graphics-Second-Chapman-Hall-CRC/dp/1439831769/ref=sr_1_1?s=books&ie=UTF8&qid=1341672263&sr=1-1&keywords=R+Graphics">R Graphics</a>, which provides an
extremely useful, detailed, and well-written treatment of the four different approaches to graphics in <i style="mso-bidi-font-style: normal;">R</i> that I will say a bit more about below.</div><div
class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">Because I am still deciding what to include in the <b style="mso-bidi-font-weight:
normal;">ExploringData</b> package, one of the most valuable sessions for me was the invited talk by Di Cook, Professor of Statistics at Iowa State University, who emphasized the importance of meaningful graphical displays in understanding the contents of a dataset,
particularly if it is new to us.<span style="mso-spacerun: yes;"> </span>One of her key points – illustrated with examples from some widely used <i style="mso-bidi-font-style: normal;">R</i> packages – was that the “examples” associated with datasets included in <i style="mso-bidi-font-style: normal;">R</i> packages often fail to include any such graphical visualization, and even
for those that do, the displays are often too cryptic to be informative.<span style="mso-spacerun: yes;"> </span>While this point is obvious enough in retrospect, it is one that I – along with
a lot of other people, evidently – had not thought about previously.<span style="mso-spacerun: yes;"> </span>As a consequence, I am now giving careful thought to the design of informative
display examples for each of the datasets I will include in the <b style="mso-bidi-font-weight: normal;">ExploringData</b> package.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></
div><div class="MsoNormal" style="margin: 0in 0in 0pt;">As I mentioned above, there are (at least) four fundamental approaches to doing graphics in <i style="mso-bidi-font-style: normal;">R</i>.<span
style="mso-spacerun: yes;"> </span>The one that most of us first encounter – the one we use by default every time we issue a “plot” command – is called <i style="mso-bidi-font-style: normal;">
base graphics</i>, and it is included in base R to support a wide range of useful data visualization procedures, including scatter plots, boxplots, histograms, and a variety of other common displays.
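As a concrete illustration (with simulated data, not any dataset from the book), a base-graphics session producing several of these displays in one window might look like:

```r
# Four common base-graphics displays in a single two-by-two array
set.seed(33)
x <- rnorm(100)            # simulated continuous variable
y <- 2 * x + rnorm(100)    # a correlated companion

par(mfrow = c(2, 2))       # base-graphics two-by-two plot array
plot(x, y, main = "Scatter plot")
boxplot(y, main = "Boxplot")
hist(x, main = "Histogram")
qqnorm(x, main = "Normal Q-Q plot")
par(mfrow = c(1, 1))       # restore the default single-plot layout
```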
<span style="mso-spacerun: yes;"> </span>The other three approaches to graphics – grid graphics, lattice graphics, and <b style="mso-bidi-font-weight: normal;">ggplot2</b> – all offer more
advanced features than what is typically available in base graphics, but they are, most unfortunately, incompatible in a number of ways with base graphics.<span style="mso-spacerun: yes;"> </
span>I discovered this the hard way when I was preparing one of the procedures for the <b style="mso-bidi-font-weight: normal;">ExploringData</b> package (the <b style="mso-bidi-font-weight: normal;
">CountSummary</b> procedure, which I will describe and demonstrate in my next post).<span style="mso-spacerun: yes;"> </span>Specifically, the <b style="mso-bidi-font-weight: normal;">vcd</b>
package includes implementations of Poissonness plots, negative binomialness plots, and Ord plots, all discussed in <i style="mso-bidi-font-style: normal;">Exploring Data</i>, and I wanted to take
advantage of these implementations in building a simple graphical summary display for count data.<span style="mso-spacerun: yes;"> </span>In base graphics, to generate a two-by-two array of
plots, you simply specify “par(mfrow=c(2,2))” and then generate each individual plot using standard plot commands.<span style="mso-spacerun: yes;"> </span>When I tried this with the plots
generated by the <b style="mso-bidi-font-weight: normal;">vcd</b> package, I didn’t get what I wanted – for the most part, it appeared that the “par(mfrow=c(2,2))” command was simply being ignored,
and when it wasn’t, multiple plots were piled up on top of each other.<span style="mso-spacerun: yes;"> </span>It turns out that the <b style="mso-bidi-font-weight: normal;">vcd</b> package
uses grid graphics, which has a fundamentally different syntax: it’s more complicated, but in the end, it does provide a wider range of display options.<span style="mso-spacerun: yes;"> </span>
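To give a flavor of that syntax, here is a stripped-down sketch of the grid idiom involved (placeholder rectangles stand in for the vcd plots, which would be drawn with newpage = FALSE):

```r
library(grid)   # ships with base R; loading vcd attaches it automatically

grid.newpage()
# Declare a two-by-two layout for the whole page
pushViewport(viewport(layout = grid.layout(2, 2)))
for (i in 1:2) {
  for (j in 1:2) {
    # Move into cell (i, j), draw a placeholder, then pop back out
    pushViewport(viewport(layout.pos.row = i, layout.pos.col = j))
    grid.rect()
    grid.text(paste("panel", i, j))
    popViewport()
  }
}
popViewport()   # done with the two-by-two layout
```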
Ultimately, I was able to generate the display I wanted, although this required some digging, since grid graphics aren’t really discussed much in my standard <i style="mso-bidi-font-style: normal;">R
</i> reference books.<span style="mso-spacerun: yes;"> </span>For example, <a href="http://www.amazon.com/R-Book-Michael-J-Crawley/dp/0470510242/ref=sr_1_1?s=books&ie=UTF8&qid=
1341672357&sr=1-1&keywords=The+R+book">The R Book</a> by Michael J. Crawley covers an extremely wide range of useful topics, but the only mentions of “grid” in the index refer to the
generation of grid lines (e.g., the base graphics command “grid” generates grid lines on a base <i style="mso-bidi-font-style: normal;">R</i> plot, which is <em>not</em> based on grid graphics).<span
style="mso-spacerun: yes;"> </span></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">Often, grid graphics are mentioned
in passing in introductory descriptions of trellis (lattice) graphics, since the <b style="mso-bidi-font-weight: normal;">lattice</b> package is based on grid graphics.<span style="mso-spacerun: yes;
"> </span>This package is discussed in <i style="mso-bidi-font-style: normal;">The R Book</i>, and I have used it occasionally because it does support things like violin plots that are not part
of base graphics. <span style="mso-spacerun: yes;"> </span>To date, I haven’t used it much because I find the syntax much more complicated, but I plan to look further into it, since it does
appear to have a lot more capability than base graphics do.<span style="mso-spacerun: yes;"> </span>Also, Murrell’s <i style="mso-bidi-font-style: normal;">R Graphics</i> book devotes a chapter
to trellis graphics and the lattice package, which goes well beyond the treatments given in my other <i style="mso-bidi-font-style: normal;">R</i> references, and this provides me further motivation
to learn more.<span style="mso-spacerun: yes;"> </span>The fourth approach to <i style="mso-bidi-font-style: normal;">R</i> graphics – Hadley Wickham’s <b style="mso-bidi-font-weight: normal;">
ggplot2</b> package – was much discussed at the UseR! Meeting, appearing both in examples presented in various authors’ talks and as components for more complex and specialized graphics packages.
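From the examples I saw, a basic ggplot2 display appears to be a one-liner along the following lines (a sketch I have not verified in detail, with a made-up data frame standing in for real data):

```r
library(ggplot2)

# Hypothetical data frame with a count variable, simulated for illustration
countFrame <- data.frame(NPG = rpois(768, lambda = 3))

# Histogram of the counts, roughly comparable to hist() in base graphics
ggplot(countFrame, aes(x = NPG)) + geom_histogram(binwidth = 1)
```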
<span style="mso-spacerun: yes;"> </span>I have not yet used <b style="mso-bidi-font-weight: normal;">ggplot2</b>, but I intend to try it out, since it appears from some of the examples that
this package can generate an extremely wide range of data visualizations, with simple types comparable to what is found in base graphics often available as defaults.<span style="mso-spacerun: yes;">&
nbsp; </span>Like the lattice package, <b style="mso-bidi-font-weight: normal;">ggplot2</b> is also based on grid graphics, making it, too, incompatible with base graphics.<span style="mso-spacerun:
yes;"> </span>Again, the fact that Murrell’s book devotes a chapter to this package should also be quite helpful in learning when and how to make the best use of it.</div><div class="MsoNormal"
style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">This year’s UseR! Meeting was the second one I have attended – I also went to the 2010 meeting in Gaithersburg, MD, held at the National Institute of Standards and Technology (NIST).<span style="mso-spacerun: yes;"> </span>Both have been fabulous meetings, and I fully expect future meetings to be as good: next year’s UseR! meeting is scheduled to be held in Spain and I’m not sure I will be able to attend, but I would love to.<span style="mso-spacerun: yes;"> </span>In any case, if you can get there, I highly recommend it, based on my
experiences so far.</div>Ron Pearson (aka TheNoodleDoodler)http://www.blogger.com/profile/
15693640298594791682noreply@blogger.com2tag:blogger.com,1999:blog-9179325420174899779.post-8726239151325206052012-06-10T13:13:00.000-07:002012-06-10T13:13:12.654-07:00<div class="MsoNormal" style=
"margin: 0in 0in 0pt;">In my last post, I considered the shifts in two interestingness measures as possible tools for selecting variables in classification problems.<span style="mso-spacerun: yes;">&
nbsp; </span>Specifically, I considered the Gini and Shannon interestingness measures applied to the 22 categorical mushroom characteristics from the <a href="http://archive.ics.uci.edu/ml/datasets/
Mushroom">UCI mushroom dataset</a>.<span style="mso-spacerun: yes;"> </span>The proposed variable selection strategy was to compare these values when computed from only edible mushrooms or only
poisonous mushrooms.<span style="mso-spacerun: yes;"> </span>The rationale was that variables whose interestingness measures changed a lot between these two subsets might be predictive of
mushroom edibility.<span style="mso-spacerun: yes;"> </span>In this post, I examine this question a little more systematically, primarily to illustrate the mechanics of setting up
classification problems and evaluating their results.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">More specifically, the
classification problem I consider here is that of building and comparing models that predict mushroom edibility, each one based on a different mushroom characteristic.<span style=
"mso-spacerun: yes;"> </span>In practice, you would generally consider more than one characteristic as the basis for prediction, but here, I want to use standard classification tools to provide
a basis for comparing the predictabilities of each of the potentially promising mushroom characteristics identified in my last post.<span style="mso-spacerun: yes;"> </span>In doing this, I
also want to highlight three aspects of classification problems: first, the utility of randomly splitting the available data into subsets before undertaking the analysis, second, the fact that we
have many different options in building classifiers, and third, one approach to assessing classification results.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div
class="MsoNormal" style="margin: 0in 0in 0pt;">One of the extremely useful ideas emphasized in the machine learning literature is the utility of randomly partitioning our dataset into three parts:
one used to fit whatever prediction model we are interested in building, another used to perform intermediate fit comparisons (e.g., compare the performance of models based on different predictor
variables), and a third that is saved for a final performance assessment.<span style="mso-spacerun: yes;"> </span>The reasoning behind this partitioning is that if we allow our prediction model
to become too complex, we run the risk of <i style="mso-bidi-font-style: normal;">overfitting,</i> or predicting some of the random details in our dataset, resulting in a model that does not perform
well on other, similar datasets.<span style="mso-spacerun: yes;"> </span>This is an important practical problem that I illustrate with an extreme example in Chapter 1 of <a href="http://
www.amazon.com/Exploring-Data-Engineering-Sciences-Medicine/dp/0195089650">Exploring Data in Engineering, the Sciences, and Medicine</a>.<span style="mso-spacerun: yes;"> </span>There, a
sequence of seven monotonically-decaying observations is fit to a sixth-degree polynomial that exactly predicts the original seven observations, but which exhibits horrible interpolation and
extrapolation behavior.<span style="mso-spacerun: yes;"> </span>The point here is that we need a practical means of protecting ourselves against building models that are too specific to the
dataset at hand, and the partitioning strategy just described provides a simple way of doing this.<span style="mso-spacerun: yes;"> </span>That is, once we partition the data, we can fit our
prediction model to the first subset and then evaluate its performance with respect to the second subset: because these subsets were generated by randomly sampling the original dataset, their general
character is the same, so a “good” prediction model built from the first subset should give “reasonable” predictions for the second subset.<span style="mso-spacerun: yes;"> </span>The reason
for saving out a third data subset – not used at all until the final evaluation of our model – is that model-building is typically an iterative procedure, so we are likely to cycle repeatedly between
the first and second subsets.<span style="mso-spacerun: yes;"> </span>For the final model evaluation, it is desirable to have a dataset available that hasn’t been used at all.</div><div class=
"MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">Generating this three-way split in <em>R</em> is fairly easy.<span style="mso-spacerun: yes;">
</span>As with many tasks, this can be done in more than one way, but the following procedure is fairly straightforward and only makes use of procedures available in base <em>R</em>:</div><div
class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><blockquote class="tr_bq"><div class="MsoNormal" style="margin: 0in 0in 0pt;">RandomThreeWay.proc <- function(df, probs = c(35,35,30),
iseed = 101){</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>#</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style=
"mso-spacerun: yes;"> </span>set.seed(iseed)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>n = nrow(df)</div><div class="MsoNormal"
style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>#</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>u = runif(n)</
div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>#</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;">&
nbsp; </span>nprobs = probs/sum(probs)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>brks = c(0,cumsum(nprobs))</div><div class="MsoNormal"
style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>Subgroup = cut(u, breaks=brks, labels=c("A","B","C"), include.lowest=TRUE)</div><div class="MsoNormal" style="margin: 0in
0in 0pt;"><span style="mso-spacerun: yes;"> </span>#</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>Subgroup</div><div class="MsoNormal"
style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>#</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">}</div></blockquote><div class="MsoNormal" style="margin: 0in
0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">This function is called with three parameters: the data frame that we wish to partition for our analysis, a vector of the
relative sizes of our three partitions, and a seed for the random number generator.<span style="mso-spacerun: yes;"> </span>In the implementation shown here, the vector of relative sizes is
given the default values 35%/35%/30%, but any relative size partitioning can be specified.<span style="mso-spacerun: yes;"> </span>The result returned by this procedure is the character vector
Subgroup, which has the values “A”, “B”, or “C”, corresponding to the three desired partitions of the dataset.<span style="mso-spacerun: yes;"> </span>The procedure first sets the
seed for the uniform random number generator, and then determines how many random numbers to generate (i.e., one for each data record in the data frame).<span style=
"mso-spacerun: yes;"> </span>The basic idea here is to generate uniform random numbers on the interval [0,1] and then assign subgroups depending on whether this value falls into the interval
between 0 and 0.35, 0.35 to 0.70, or 0.70 to 1.00.<span style="mso-spacerun: yes;"> </span>The <strong>runif</strong> function generates the required random numbers, the <strong>cumsum</strong>
function is used to generate the cumulative breakpoints from the normalized probabilities, and the <strong>cut</strong> function is used to group the uniform random numbers using these break points.
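As a quick check of this mechanism, the following self-contained sketch (a toy example, not from the original analysis; the 10,000-row data frame toyFrame is invented for illustration) applies the same runif/cumsum/cut recipe and tabulates the resulting subgroup fractions:

```r
# Standalone sketch of the runif/cumsum/cut partitioning recipe
RandomThreeWay.proc <- function(df, probs = c(35, 35, 30), iseed = 101) {
  set.seed(iseed)                            # reproducible partition
  u <- runif(nrow(df))                       # one uniform draw per record
  brks <- c(0, cumsum(probs / sum(probs)))   # 0, 0.35, 0.70, 1.00
  cut(u, breaks = brks, labels = c("A", "B", "C"), include.lowest = TRUE)
}

# Invented example data frame, used only to exercise the function
toyFrame <- data.frame(x = rnorm(10000))
Subgroup <- RandomThreeWay.proc(toyFrame)
table(Subgroup) / nrow(toyFrame)   # fractions close to 0.35/0.35/0.30
```

With 10,000 records, the observed subgroup fractions should differ from the nominal 35%/35%/30% targets only by sampling fluctuations on the order of half a percent.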
</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">In the specific example considered here, I use logistic regression as my
classifier, although many, many other classification procedures are available in <em>R,</em> including a wide range of decision tree-based models, random forest models, boosted tree models, naïve
Bayes classifiers, and support vector machines, to name only a few.<span style="mso-spacerun: yes;"> </span>(For a more complete list, refer to the CRAN task view on <a href="http://
cran.r-project.org/web/views/MachineLearning.html">Machine Learning and Statistical Learning</a>).<span style="mso-spacerun: yes;"> </span>Here, I construct and compare six logistic regression
models, each constructed to predict the probability that a mushroom is poisonous from one of the six mushroom characteristics identified in my previous post: GillSize, StalkShape, CapSurf, Bruises,
GillSpace, and Pop.<span style="mso-spacerun: yes;"> </span>In each case, I extract the records for subset “A” of the UCI mushroom dataset, as described above, and use the base <em>R</em>
procedure <strong>glm</strong> to construct a logistic regression model.<span style="mso-spacerun: yes;"> </span>Because the model evaluation procedure (<strong>somers2</strong>, described
below) that I use here requires a binary response coded as 0 or 1, it is simplest to construct a data frame with this response explicitly, along with the prediction covariate of interest.<span style=
"mso-spacerun: yes;"> </span>The following code does this for the first predictor (GillSize):</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style=
"margin: 0in 0in 0pt;"><br /></div><blockquote class="tr_bq"><div class="MsoNormal" style="margin: 0in 0in 0pt;">EorP = UCImushroom.frame$EorP</div><div class="MsoNormal" style="margin: 0in 0in 0pt;
">PoisonBinary = rep(0,length(EorP))</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">PoisonIndx = which(EorP = = "p")</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">PoisonBinary
[PoisonIndx] = 1</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">FirstFrame = data.frame(PoisonBinary = PoisonBinary, Covar = UCImushroom.frame$GillSize)</div></blockquote><div class=
"MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">In particular, this code constructs a two-column data frame that contains the binary response
variable PoisonBinary that is equal to 1 whenever EorP is “p” and 0 whenever this variable is “e”, and the prediction covariate Covar, which is here “GillSize”.<span style="mso-spacerun: yes;">
</span>Given this data frame, I then apply the following code to randomly partition this data frame into subsets A, B, and C, and I invoke the built-in <strong>glm</strong> procedure to fit a
logistic regression model:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><blockquote class="tr_bq"><div class="MsoNormal" style="margin: 0in 0in 0pt;">Subset =
RandomThreeWay.proc(FirstFrame)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">IndxA = which(Subset == "A")</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">LogisticModel = glm
(PoisonBinary ~ Covar, data = FirstFrame, subset = IndxA, family=binomial())</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div></blockquote><div class="MsoNormal" style="margin:
0in 0in 0pt;">Note that here I have specified the model form using the <em>R</em> formula construction “PoisonBinary ~ Covar”, I have used the <strong>subset</strong> argument of the <strong>glm</
strong> procedure to specify that I only want to fit the model to subset A, and I have specified “family = binomial()” to request a logistic regression model.<span style="mso-spacerun: yes;">
</span>Once I have this model, I evaluate it using the concordance index C available from the <strong>somers2</strong> function in the <em>R</em> package <strong>Hmisc</strong>.<span style=
"mso-spacerun: yes;"> </span>This value corresponds to the area under the ROC curve and is a measure of agreement between the predictions of the logistic regression model and the actual binary
response.<span style="mso-spacerun: yes;"> </span>As discussed above, I want to do this evaluation for subset B to avoid an over-optimistic view of the model’s performance due to overfitting of
subset A.<span style="mso-spacerun: yes;"> </span>To do this, I need the model predictions from subset B, which I obtain with the built-in <strong>predict</strong> procedure:</div><div class=
"MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><blockquote class="tr_bq"><div class="MsoNormal" style="margin: 0in 0in 0pt;">
IndxB = which(Subset == "B")</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">PredPoisonProb = predict(LogisticModel, newdata = FirstFrame[IndxB,], type="response")</div><div class=
"MsoNormal" style="margin: 0in 0in 0pt;">ObsPoisonBinary = FirstFrame$PoisonBinary[IndxB]</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div></blockquote><div class="MsoNormal"
style="margin: 0in 0in 0pt;">In addition, I have created the variable ObsPoissonBinary, the sequence of binary responses from subset B, which I will use in calling the <strong>somers2</strong>
function:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><blockquote class="tr_bq"><div class="MsoNormal" style="margin: 0in 0in 0pt;">library(Hmisc)</div><div class="MsoNormal"
style="margin: 0in 0in 0pt;">somers2(PredPoisonProb, ObsPoisonBinary)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> &
nbsp; </span>C<span style="mso-spacerun: yes;"> </span>Dxy
<span style="mso-spacerun: yes;"> </span>n<span style="mso-spacerun: yes;"> &
nbsp; </span><span style="mso-spacerun: yes;"> </span>Missing </div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>0.7375031<span style="mso-spacerun: yes;"> </span>0.4750063 2858.0000000
<span style="mso-spacerun: yes;"> </span>0.0000000 </div></blockquote><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in
0pt;">The results shown here include the concordance index C, an alternative (and fully equivalent) measure called Somers’ D (from which the procedure gets its name), the number of records in the
dataset (here, in subset B), and the number of missing records (here, none).<span style="mso-spacerun: yes;"> </span>The concordance index C is a number that varies between 0 and 1, with values
between 0.5 and 1.0 meaning that the predictions are better than random guessing, and values less than 0.5 indicating performance so poor that it is actually worse than random guessing.<span style=
"mso-spacerun: yes;"> </span>Here, the value of approximately 0.738 suggests that GillSize is a reasonable predictor of mushroom edibility, at least for mushrooms like those characterized in
the UCI mushroom dataset.<span style="mso-spacerun: yes;"> </span></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">
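The per-covariate steps just described (build the two-column frame, partition it, fit on subset A, score on subset B) can be collected into a small helper function. The sketch below assumes the UCImushroom.frame data frame from my previous posts; EvaluateCovariate is a hypothetical name introduced here for illustration, not part of the original code:

```r
library(Hmisc)  # provides somers2

# Partitioning function, as defined earlier in this post
RandomThreeWay.proc <- function(df, probs = c(35, 35, 30), iseed = 101) {
  set.seed(iseed)
  u <- runif(nrow(df))
  brks <- c(0, cumsum(probs / sum(probs)))
  cut(u, breaks = brks, labels = c("A", "B", "C"), include.lowest = TRUE)
}

# Hypothetical helper: fit a one-covariate logistic regression model
# to subset A and return the concordance index C evaluated on subset B
EvaluateCovariate <- function(covarName, df) {
  WorkFrame <- data.frame(PoisonBinary = as.numeric(df$EorP == "p"),
                          Covar = df[[covarName]])
  Subset <- RandomThreeWay.proc(WorkFrame)
  Model <- glm(PoisonBinary ~ Covar, data = WorkFrame,
               subset = which(Subset == "A"), family = binomial())
  IndxB <- which(Subset == "B")
  Pred <- predict(Model, newdata = WorkFrame[IndxB, ], type = "response")
  somers2(Pred, WorkFrame$PoisonBinary[IndxB])["C"]
}

# e.g., sapply(c("GillSize", "StalkShape", "CapSurf", "Bruises",
#                "GillSpace", "Pop"), EvaluateCovariate, df = UCImushroom.frame)
```

One caveat worth noting: if a rare factor level happens to appear in subset B but never in subset A, the fitted model has no usable coefficient for it, so the predictions for those records are unreliable; with a dataset of this size and the variables considered here, this is unlikely to be a problem.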
Repeating this process for all six of the mushroom characteristics identified as potentially predictive by the interestingness change analysis I discussed last time leads to the following results:</
div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-tab-count: 1;"> &
nbsp; </span></div><blockquote class="tr_bq"><div class="MsoNormal" style="margin: 0in 0in 0pt;"> Pop:<span style=
"mso-tab-count: 2;"> </span>C = 0.753<span style="mso-tab-count: 1;"> &
nbsp; </span>(6 levels)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-tab-count: 1;"> </span>
Bruises:<span style="mso-tab-count: 1;"> </span>C = 0.740<span style="mso-tab-count: 1;">
</span>(2 levels)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-tab-count: 1;"> </span>GillSize:<span
style="mso-tab-count: 1;"> </span>C = 0.738<span style="mso-tab-count: 1;"> </span>(2
levels)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-tab-count: 1;"> </span>GillSpace:<span style=
"mso-tab-count: 1;"> </span>C = 0.635<span style="mso-tab-count: 1;"> </span>(2 levels)</div><div class=
"MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-tab-count: 1;"> </span>CapSurf:<span style="mso-tab-count: 1;">
</span>C = 0.595<span style="mso-tab-count: 1;"> </span>(4 levels)</div><div class="MsoNormal" style=
"margin: 0in 0in 0pt;"><span style="mso-tab-count: 1;"> </span>StalkShape:<span style="mso-tab-count: 1;"> &
nbsp; </span>C = 0.550<span style="mso-tab-count: 1;"> </span>(2 levels)</div></blockquote><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div>
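For intuition about the scale of these C values, a tiny sanity check with somers2 (toy response and prediction vectors, invented purely for illustration) confirms the endpoints of the scale discussed above:

```r
library(Hmisc)

# Toy illustration of the concordance index C from somers2:
# perfectly concordant predictions give C = 1, while perfectly
# anti-concordant predictions give C = 0
y <- c(0, 0, 1, 1)
cGood <- somers2(c(0.1, 0.2, 0.8, 0.9), y)["C"]  # all pairs concordant
cBad  <- somers2(c(0.9, 0.8, 0.2, 0.1), y)["C"]  # all pairs discordant
```

Values near 0.5 correspond to predictions no better than random guessing, so the C values in the table above, ranging from 0.550 to 0.753, represent modest-to-reasonable single-variable predictive power.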
<div class="MsoNormal" style="margin: 0in 0in 0pt;">These results leave open the questions of whether other mushroom characteristics, not identified on the basis of their interestingness shifts, are
in fact more predictive of edibility, or how much better the predictions can be if we use more than one prediction variable.<span style="mso-spacerun: yes;"> </span>I will examine those
questions in subsequent posts, using the ideas outlined here.<span style="mso-spacerun: yes;"> </span>For now, it is enough to note that one advantage of the approach described here,
relative to that using odds ratios for selected covariates discussed last time, is that this approach can be used to assess the potential prediction power of categorical variables with arbitrary
numbers of levels, while the odds ratio approach is limited to two-level predictors.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div>Ron Pearson (aka TheNoodleDoodler)<div class="MsoNormal" style="margin: 0in 0in 0pt;">In three previous posts (<a href="http://exploringdatablog.blogspot.com/2011/04/interestingness-measures.html">April 3, 2011</a>, <a
href="http://exploringdatablog.blogspot.com/2011/04/screening-for-predictive.html">April 12, 2011</a>,and <a href="http://exploringdatablog.blogspot.com/2011/05/distribution-of-interestingness.html">
May 21, 2011</a>), I have discussed <em>interestingness measures,</em> which characterize the distributional heterogeneity of categorical variables.<span style="mso-spacerun: yes;">
</span>Four specific measures are discussed in Chapter 3 of <a href="http://www.amazon.com/Exploring-Data-Engineering-Sciences-Medicine/dp/0195089650">Exploring Data in Engineering, the Sciences and
Medicine</a>: the Bray measure, the Gini measure, the Shannon measure, and the Simpson measure.<span style="mso-spacerun: yes;"> </span>All four of these measures vary
from 0 to 1 in value, exhibiting their minimum value when all levels of the variable are equally represented, and exhibiting their maximum value when the variable is completely concentrated on a
single one of its several possible levels.<span style="mso-spacerun: yes;"> </span>Intermediate values correspond to variables that are more or less homogeneously distributed: more homogeneous
for smaller values of the measure, and less homogeneous for larger values.<span style="mso-spacerun: yes;"> </span>One of the points I noted in my first post on this topic was that the
different measures exhibit different behavior for the intermediate cases, reflecting different inherent sensitivities to the various ways in which a variable can be “more homogeneous” or “less
homogeneous.”<span style="mso-spacerun: yes;"> </span>This post examines changes in interestingness measures as a potential exploratory analysis tool for selecting categorical predictors of
some binary response. In fact, I examined the same question from a different perspective in my April 12 post noted above: the primary difference is that there, the characterization I considered
generates a single graph for each variable, with the number of points on the graph corresponding to the number of levels of the variable. Here, I examine a characterization that represents each
variable as a single point on the graph, allowing us to consider all variables simultaneously.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="separator" style=
"clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-NBTjPHooKC0/T7fzl4ud0BI/AAAAAAAAAIY/7iD3q7T3LD4/s1600/GiniVsShannonPlot.png" imageanchor="1" style="margin-left: 1em;
margin-right: 1em;"><img border="0" height="319" kba="true" src="http://4.bp.blogspot.com/-NBTjPHooKC0/T7fzl4ud0BI/AAAAAAAAAIY/7iD3q7T3LD4/s320/GiniVsShannonPlot.png" width="320" /></a></div><div
class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">As a reminder of how
these measures behave, the figure above shows a plot of the normalized Gini measure versus the normalized Shannon measure for the 23 categorical variables included in the
mushroom dataset from the <a href="http://archive.ics.uci.edu/ml/datasets/Mushroom">UCI Machine Learning Repository</a>.<span style="mso-spacerun: yes;"> As I have noted in several
previous posts that have discussed</span> this dataset, it gives observable characteristics for 8,124 mushrooms and classifies each one as either edible or poisonous (the binary variable
EorP).<span style="mso-spacerun: yes;"> </span>The above plot illustrates the systematic difference between the normalized Shannon and Gini interestingness measures: there, each point
represents one of the 23 variables in the dataset, with the horizontal axis representing the Shannon measure computed for the variable and the vertical axis representing the corresponding Gini
measure. The plot shows that the Gini measure is consistently larger than the Shannon measure, since all points lie above the equality reference line in this
plot except for the single point at the origin.<span style="mso-spacerun: yes;"> </span>This point corresponds to the variable VeilType, which only exhibits a single value in this dataset,
meaning that both the Gini and Shannon measures are inherently ill-defined; consequently, they are given the default value of zero here, consistent with the general interpretation of these
measures: if a variable only assumes a single value, it seems reasonable to consider it “completely homogeneous.”</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class=
"MsoNormal" style="margin: 0in 0in 0pt;">Because edible and poisonous mushrooms are fairly evenly represented in this dataset (51.8% edible versus 48.2% poisonous), it has been widely used as one of
several benchmarks for evaluating classification algorithms.<span style="mso-spacerun: yes;"> </span>In particular, given the other mushroom characteristics, the fundamental classification
question is how well we can predict whether each mushroom is poisonous or edible.<span style="mso-spacerun: yes;"> </span>In this post and a subsequent follow-up post, I consider a closely
related question: can differences in a variable’s interestingness measure between the edible subset and the poisonous subset be used to help us select prediction covariates for these classification
algorithms?<span style="mso-spacerun: yes;"> </span>In this post, I present some preliminary evidence to suggest that this may be the case, while in a subsequent post, I will put the question
to the test by seeing how well the covariates suggested by this analysis actually predict edibility.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style=
"margin: 0in 0in 0pt;">The specific idea I examine here is the following: given an interestingness measure and a mushroom characteristic, compute this measure for the chosen characteristic, applied
the edible and poisonous mushrooms separately.<span style="mso-spacerun: yes;"> </span>If these numbers are very different, this suggests that the distribution of levels is different for edible
and poisonous mushrooms, further suggesting that this variable may be a useful predictor of edibility.<span style="mso-spacerun: yes;"> </span>To turn this idea into a data analysis tool, it is
necessary to define what we mean by “very different,” and this can be done in more than one way.<span style="mso-spacerun: yes;"> </span>Here, I consider two possibilities.<span style=
"mso-spacerun: yes;"> </span>The first is what I call the “normalized difference,” defined as the difference of the two interestingness measures divided by their sum.<span style="mso-spacerun:
yes;"> Since</span> both interestingness measures lie between 0 and 1, it is not difficult to show that this normalized difference lies between -1 and +1.<span style="mso-spacerun:
yes;"> </span>As a specific application of this idea, consider the plot below, which shows the normalized difference in the Gini measure between the poisonous mushrooms and the edible mushrooms
(the normalized Gini shift) plotted against the corresponding difference for the Shannon measure (the normalized Shannon shift).<span style="mso-spacerun: yes;"> </span>In addition, this plot
shows an equality reference line, and the fact that the points consistently lie between this line and the horizontal axis shows that the normalized Gini shift is consistently smaller in magnitude
than the normalized Shannon shift.<span style="mso-spacerun: yes;"> </span>This suggests that the normalized Shannon measure may be more
sensitive to distributional differences between edible and poisonous mushrooms.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="separator" style="clear: both;
text-align: center;"><a href="http://2.bp.blogspot.com/-taWQoCtLtdg/T7f10S9arAI/AAAAAAAAAIg/IISgWJr5lXw/s1600/NormalizedMeasurePlot.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;">
<img border="0" height="319" kba="true" src="http://2.bp.blogspot.com/-taWQoCtLtdg/T7f10S9arAI/AAAAAAAAAIg/IISgWJr5lXw/s320/NormalizedMeasurePlot.png" width="320" /></a></div><div class="MsoNormal"
style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">The next figure, below, shows a re-drawn
version of the above plot, with the equality reference line removed and replaced by four other reference lines.<span style="mso-spacerun: yes;"> </span>The vertical dashed lines correspond to
the outlier detection limits obtained by the Hampel identifier with threshold value t = 2 (see Chapter 7 of <em>Exploring Data</em> for a detailed discussion of this procedure), computed from
the normalized Shannon shift values, while the horizontal dashed lines represent the corresponding limits computed from the normalized Gini shift values.<span style="mso-spacerun: yes;"> </
span>Points falling outside these limits represent variables whose changes in both Gini measure and Shannon measure are “unusually large” according to the Hampel identifier criteria used here.<span
style="mso-spacerun: yes;"> </span>These points are represented as solid circles, while those not detected as “unusual” by the Hampel identifier are represented as open circles.<span style=
"mso-spacerun: yes;"> </span>The idea proposed here – to be investigated in a future post – is that these outlying variables <i style="mso-bidi-font-style: normal;">may</i> be useful in
predicting mushroom edibility.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/
-7M85IUD9EW8/T7f2mv0pRtI/AAAAAAAAAIo/ITEIoTvLcW4/s1600/NormalizedMeasurePlot3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="319" kba="true" src="http://
4.bp.blogspot.com/-7M85IUD9EW8/T7f2mv0pRtI/AAAAAAAAAIo/ITEIoTvLcW4/s320/NormalizedMeasurePlot3.png" width="320" /></a></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class=
"MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">More specifically, the five solid circles in the above plot correspond to the following
mushroom characteristics.<span style="mso-spacerun: yes;"> </span>The two points in the lower left corner of the plot – exhibiting almost the most negative normalized Shannon
shift possible – correspond to GillSize and StalkShape, two binary variables.<span style="mso-spacerun: yes;"> </span>As I discussed in a previous post (<a href="http://
exploringdatablog.blogspot.com/2011/05/computing-odds-ratios-in-r.html">May 7, 2011</a>) and I discuss further in Chapter 13 of <i style="mso-bidi-font-style: normal;">Exploring Data</i>, an
extremely useful measure of association between two binary variables (e.g., between GillSize and edibility) is the odds ratio.<span style="mso-spacerun: yes;"> </span>An examination of the odds
ratios for these two variables suggests that both should be at least somewhat predictive of edibility: the odds ratio between GillSize and edibility is 0.056, suggesting a very strong association
(specifically, a GillSize value of “n” for “narrow” is most commonly associated with poisonous mushrooms in the UCI mushroom dataset), while the odds ratio between StalkShape and edibility is less
extreme at 1.511, but still different enough from the neutral value of 1 to be suggestive of a clear association between these variables (a StalkShape value of “t” is more strongly associated
with edible mushrooms than the alternative value of “e”).<span style="mso-spacerun: yes;"> </span>The solid circle in the upper right of this plot corresponds to the variable CapSurf, which has
four levels and whose distributional homogeneity appears to change quite substantially, according to both the Gini and Shannon measures.<span style="mso-spacerun: yes;">&
nbsp; </span>Because this variable has more than two levels, it is not possible to characterize its association in terms of its odds ratio relative to edibility.<span style="mso-spacerun: yes;">&
nbsp; </span>Finally, the cluster of three points in the upper right, just barely above the upper horizontal dashed line, corresponds to the binary variables Bruises and GillSpace, and the six-level
variable Pop.<span style="mso-spacerun: yes;"> </span>Both of these binary variables exhibit very large odds ratios with respect to edibility (9.97 and 13.55 for Bruises and GillSpace,
respectively), again suggesting that these variables may be highly predictive of edibility.<span style="mso-spacerun: yes;"> </span></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br
/></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">The prevalence of binary variables in these results is noteworthy, and it reflects the fact that distributional shifts for binary variables
can only occur in one way (i.e., the relative frequency of either fixed level can either increase or decrease).<span style="mso-spacerun: yes;"> </span>Thus, large shifts in either
interestingness measure should correspond to significant odds ratios with respect to the binary response variable, and this is seen to be the case here.<span style="mso-spacerun: yes;"> </span>
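As a reminder of how the odds ratio quantifies this kind of association, here is a minimal sketch using an invented 2x2 table of counts (toy numbers, not the UCI mushroom data):

```r
# Invented 2x2 table: rows are the two levels of a binary characteristic,
# columns are the binary response (edible "e" vs. poisonous "p")
tbl <- matrix(c(30, 10, 10, 30), nrow = 2,
              dimnames = list(Char = c("x", "y"), EorP = c("e", "p")))

# Cross-product (odds) ratio: values far from the neutral value of 1
# indicate an association between the characteristic and the response
oddsRatio <- (tbl[1, 1] * tbl[2, 2]) / (tbl[1, 2] * tbl[2, 1])
oddsRatio   # (30 * 30) / (10 * 10) = 9
```

Note that this cross-product construction only makes sense for a 2x2 table, which is exactly the two-level limitation discussed here.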
The situation is more complicated when a variable exhibits more than two levels, since the distribution of these levels can change in many ways between the two binary response values.<span
style="mso-spacerun: yes;"> </span>An important advantage of techniques like the the interestingness shift analysis described here is that they are not restricted to binary characteristics, as
odds ratio characterizations are.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">The second approach I consider for measuring
the shift in interestingness between edible and poisonous mushrooms is what I call the “marginal measure,” corresponding to the difference in either the Gini or the Shannon
measure between poisonous and edible mushrooms, divided by the original measure for the complete dataset.<span style="mso-spacerun: yes;"> </span>An important difference between the marginal
measure and the normalized measure is that the marginal measure is not bounded to lie between -1 and +1, as is evident in the plot below.<span style="mso-spacerun: yes;"> </span>This plot shows
the marginal Gini shift against the marginal Shannon shift for the mushroom characteristics, in the same format as the plot above.<span style="mso-spacerun: yes;"> </
span>Here, only four points are flagged as outliers, corresponding to the four binary variables identified above from the normalized shift plot: Bruises (the point in the extreme upper right),
GillSpace (the point just barely in the upper right quadrant), and GillSize and StalkShape (the two points in the extreme lower left).<span style="mso-spacerun: yes;"> </span>However, if we
lower the Hampel identifier threshold from t = 2 to t = 1.5, we again identify CapSurf and Pop as potentially influential variables.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></
div><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-NDv1NRKdFVo/T7f3sYM_CgI/AAAAAAAAAIw/gahvcxEtAMQ/s1600/MarginalMeasurePlot3.png" imageanchor="1"
style="margin-left: 1em; margin-right: 1em;"><img border="0" height="319" kba="true" src="http://3.bp.blogspot.com/-NDv1NRKdFVo/T7f3sYM_CgI/AAAAAAAAAIw/gahvcxEtAMQ/s320/MarginalMeasurePlot3.png"
width="320" /></a></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in
0pt;">This last observation suggests an alternative interpretation approach that may be worth exploring.<span style="mso-spacerun: yes;"> </span>Specifically, both of the two previous plots
give clear visual evidence of “cluster structure,” and the Hampel identifier does extract some or all of this structure from the plot, but only if we apply a sufficiently judicious tuning to the
threshold parameter.<span style="mso-spacerun: yes;"> </span>A possible alternative would be to apply <em>cluster analysis</em> procedures, and this will be the subject of one or more
subsequent posts.<span style="mso-spacerun: yes;"> </span>In particular, there are many different clustering algorithms that could be applied to this problem, and the results are likely to be
quite different.<span style="mso-spacerun: yes;"> </span>The key practical question is which ones – if any – lead to useful ways of grouping these mushroom characteristics.<span style=
"mso-spacerun: yes;"> </span>Subsequent posts will examine this question further from several different perspectives.</div>Ron Pearson (aka TheNoodleDoodler)http://www.blogger.com/profile/
15693640298594791682noreply@blogger.com0tag:blogger.com,1999:blog-9179325420174899779.post-62875307895827496672012-04-21T11:38:00.000-07:002012-04-21T11:38:34.100-07:00<div class="MsoNormal" style=
"margin: 0in 0in 0pt;">As I have discussed in a number of previous posts, the median represents a well-known and widely-used estimate of the “center” of a data sequence.<span style="mso-spacerun:
yes;"> </span>Relative to the better-known mean, the primary advantage of the median is its much reduced outlier sensitivity.<span style="mso-spacerun: yes;"> </span>This post briefly
describes a simple confidence interval for the median that is discussed in a paper by David Olive, available on-line via the following link:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">
<br /></div><blockquote class="tr_bq"><div class="MsoNormal" style="margin: 0in 0in 0pt;"><a href="http://www.math.siu.edu/olive/ppmedci.pdf">http://www.math.siu.edu/olive/ppmedci.pdf</a></div></blockquote><div class="MsoNormal" style="margin: 0in 0in 0pt;"></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">As Olive notes in his
paper and I further demonstrate in this post, an advantage of his confidence interval for the median is that it provides a simple, numerical way of identifying situations where the data values
deserve a careful, graphical look.<span style="mso-spacerun: yes;"> </span>In particular, he advocates comparing the traditional confidence interval for the mean with his confidence interval
for the median: if these intervals are markedly different, it is worth investigating to understand why.<span style="mso-spacerun: yes;"> </span>This strategy may be viewed as a particular
instance of Collin Mallows’ “compute and compare” advice, discussed at the end of Chapter 7 of <a href="http://www.amazon.com/Exploring-Data-Engineering-Sciences-Medicine/dp/0195089650">Exploring
Data in Engineering, the Sciences, and Medicine</a>.<span style="mso-spacerun: yes;"> </span>The key idea here is that under “standard” working assumptions – i.e., distributional symmetry and
approximate normality – the mean and the median should be approximately the same: if they are not, it probably means these working assumptions have been violated, due to outliers in the data,
pronounced distributional asymmetry, or other less common phenomena like strongly multimodal data distributions or coarse quantization.<span style="mso-spacerun: yes;"> </span>In the
increasingly common case where we have a lot of numerical variables to consider, it may be undesirable or infeasible to examine them all graphically: numerical comparisons like the one described here
may be automated and used to point us to subsets of variables that we really need to look at further.<span style="mso-spacerun: yes;"> </span>In addition to describing this confidence interval
estimator and illustrating it for three examples, this post also provides the <em>R</em> code to compute it.<span style="mso-spacerun: yes;"> </span></div><div class="MsoNormal" style="margin:
0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/
-Yf8FfTapsaU/T5LcFqPTD_I/AAAAAAAAAH4/kB_y9O0d3fo/s1600/OliveFig01.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="319" src="http://
2.bp.blogspot.com/-Yf8FfTapsaU/T5LcFqPTD_I/AAAAAAAAAH4/kB_y9O0d3fo/s320/OliveFig01.png" width="320" /></a></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal"
style="margin: 0in 0in 0pt;">As a first example, the plot above shows the makeup flow rate dataset discussed in <em>Exploring Data</em> and available as the makeup dataset <span style="mso-spacerun:
yes;"> </span>(<strong>makeup.csv</strong>) from the book's <a href="http://www.oup.com/us/companion.websites/9780195089653/rprogram/?view=usa">companion website</a>.<span style="mso-spacerun:
yes;"> </span>This plot shows 2,589 successive observations of the measured flow rate of a solvent recycle stream in an industrial manufacturing process.<span style="mso-spacerun: yes;">
</span>In normal operation, this flow rate is just under 400 – in fact, the median flow rate is 393.36 – but this data record also includes measurements during time intervals when the process is
either being shut down, is not running, or is being started back up, and during these periods the measured flow rates decrease toward zero, are approximately equal to zero, and increase from zero
back to approximately 400, respectively.<span style="mso-spacerun: yes;"> </span>Because of the presence of these anomalous segments in the data, the mean value is much smaller than the median:
specifically, the mean is 315.46, actually serving as a practical dividing line between the normal operation segments (i.e., those data points that lie above the mean) and the shutdown segments
(i.e., those data points that lie below the mean).<span style="mso-spacerun: yes;"> </span>The dashed lines in this plot at 309.49 and 321.44 correspond to the classical 95% confidence interval
for the mean, computed as described below.<span style="mso-spacerun: yes;"> </span>In contrast, the dotted lines at 391.83 and 394.88 correspond to Olive’s 95% confidence interval for the
median, also described below.<span style="mso-spacerun: yes;"> </span>Before proceeding to a more detailed discussion of how these lines were determined, the three primary points to note from
this figure are, first, that the two confidence intervals are very different (e.g., they do not overlap at all), second, that the mean confidence intervals are much wider than those for the median in
this case, and third, that the median confidence interval lies well within the range of the normal operating data, while the mean confidence interval does not.<span style="mso-spacerun: yes;">
</span>It is also worth noting that, if we simply remove the shutdown episodes from this dataset, the mean of this edited dataset is 397.7, a value that lies slightly above the upper 95% confidence
interval for the median, but only slightly so (this and other data cleaning strategies for this dataset are discussed in some detail in Chapter 7 of <em>Exploring Data</em>).</div><div class=
"MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">Both the classical confidence interval for the mean and David Olive’s confidence interval for
the median are based on the fact that these estimators are asymptotically normal: for a sufficiently large data sample, both the estimated mean and the estimated median approach the correct limits
for the underlying data distribution, with a standard deviation that decreases inversely with the square root of the sample size.<span style="mso-spacerun: yes;"> </span>Using this
description directly would lead to confidence intervals based on the quantiles of the Gaussian distribution, but for small to moderate-sized samples, more accurate confidence intervals are obtained
by replacing these Gaussian quantiles with those for the Student’s t-distribution with the appropriate number of degrees of freedom.<span style="mso-spacerun: yes;"> </span>More specifically,
for the mean, the confidence interval at a given level p is of the form:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span
style="mso-tab-count: 1;"> </span>CI = (Mean – c<sub>p</sub> SE, Mean + c<sub>p</sub> SE),</div><div class="MsoNormal" style="margin:
0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">where c<sub>p</sub> is the constant derived from the Gaussian or Student’s t-distribution, and SE is the standard error
of the mean, equal to the usual standard deviation estimate divided by the square root of the number of data points.<span style="mso-spacerun: yes;"> </span>(For a more detailed discussion of
the math behind these results, refer to either Chapter 9 of <em>Exploring Data</em> or to David Olive’s paper, available through the link given above.)<span style="mso-spacerun: yes;"> </span>
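</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">To make the mean formula concrete, here is a minimal R sketch (using simulated data, not an example from Olive’s paper) that computes the classical 95% confidence interval for the mean directly from the definitions just given:</div>

```r
# Classical 95% confidence interval for the mean:
# CI = (Mean - cp*SE, Mean + cp*SE), with SE = sd/sqrt(n)
# and cp taken from Student's t-distribution with n-1 degrees of freedom
set.seed(33)
y <- rnorm(100, mean = 5, sd = 2)   # simulated data sequence
n <- length(y)
mu <- mean(y)
SE <- sd(y) / sqrt(n)               # standard error of the mean
cp <- qt(0.975, df = n - 1)         # about 1.98 for 99 degrees of freedom
CI <- c(mu - cp * SE, mu + cp * SE)
CI                                  # interval bracketing the estimated mean
```

<div class="MsoNormal" style="margin: 0in 0in 0pt;">For n = 100, cp is approximately 1.98, already quite close to the limiting Gaussian value of 1.96.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">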
For the median, Olive provides a simple estimator for the standard error, described further in the next paragraph.<span style="mso-spacerun: yes;"> </span>First, however, it is worth saying a
little about the difference between the Gaussian and Student’s t-distribution in these results.<span style="mso-spacerun: yes;"> </span>Probably the most commonly used confidence intervals are
the 95% intervals – these are the confidence intervals shown in the plot above for the makeup flow rate data – which represent the interval that should contain the true distribution mean with
probability at least 95%.<span style="mso-spacerun: yes;"> </span>In the Gaussian case, the constant c<sub>p</sub> for the 95% confidence interval is approximately 1.96, while for the Student’s
t-distribution, this number depends on the degrees of freedom parameter.<span style="mso-spacerun: yes;"> </span>In the case of the mean, the degrees of freedom is one less than the sample
size, while for the median confidence intervals described below, this number is typically much smaller.<span style="mso-spacerun: yes;"> </span>The difference between these distributions is
that the c<sub>p</sub> parameter decreases from a very large value for few degrees of freedom – e.g., the 95% parameter value is 12.71 for a single degree of freedom – to the Gaussian value (e.g.,
1.96 for the 95% case) in the limit of infinite degrees of freedom.<span style="mso-spacerun: yes;"> </span>Thus, using Student’s t-distribution instead of the Gaussian distribution results in
wider confidence intervals, wider by the ratio of the Student’s t value for c<sub>p</sub> to the Gaussian value.<span style="mso-spacerun: yes;"> </span>The plot below shows this ratio for the
95% parameter c<sub>p</sub> as the degree of freedom parameter varies between 5 and 200, with the dashed line corresponding to the Gaussian limit when this ratio is equal to 1.</div><div class=
"MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-6bAp74V8OyI/T5L3k1Uy1JI/AAAAAAAAAIA/7rBuaVGCj-k
/s1600/SizeEffectRatioPlot.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="319" src="http://3.bp.blogspot.com/-6bAp74V8OyI/T5L3k1Uy1JI/AAAAAAAAAIA
/7rBuaVGCj-k/s320/SizeEffectRatioPlot.png" width="320" /></a></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div
class="MsoNormal" style="margin: 0in 0in 0pt;">The general structure of Olive’s confidence interval for the median is exactly analogous to that for the mean given above:</div><div class="MsoNormal"
style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-tab-count: 1;"> </span>
CI = (Median – c<sub>p</sub> SE, Median + c<sub>p</sub> SE)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">The key result of
Olive’s paper is a simple estimator for the standard error SE, based on order statistics (i.e., rank-ordered data values like the minimum, median, and maximum).<span style="mso-spacerun: yes;">
</span>Instead of describing these results mathematically, I have included an <i style="mso-bidi-font-style: normal;">R</i> procedure that computes the median, Olive’s standard error, the
corresponding confidence intervals, and the classical results for the mean (again, for the mathematical details, refer to Olive’s paper; for a more detailed discussion of order statistics, refer to
Chapter 6 of <em>Exploring Data</em>).<span style="mso-spacerun: yes;"> </span>Specifically, the following <i style="mso-bidi-font-style: normal;">R</i> procedure is called with a vector y of
numerical data values, and the default level of the resulting confidence interval is 95%, although this level can be changed by specifying an alternative value of alpha (this is 1 minus the
confidence level, so alpha is 0.05 for the 95% case, 0.01 for 99%, etc.).</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br
/></div><blockquote class="tr_bq"><div class="MsoNormal" style="margin: 0in 0in 0pt;">DOliveCIproc <- function(y, alpha = 0.05){</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span
style="mso-spacerun: yes;"> </span>#</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>#<span style="mso-spacerun: yes;"> </span>This
procedure implements David Olive's simple</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>#<span style="mso-spacerun: yes;"> </span>
median confidence interval, along with the standard</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>#<span style="mso-spacerun: yes;"> </
span>confidence interval for the mean, for comparison</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>#</div><div class="MsoNormal" style=
"margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>#<span style="mso-spacerun: yes;"> </span>First, compute the median</div><div class="MsoNormal" style="margin: 0in 0in 0pt;
"><span style="mso-spacerun: yes;"> </span>#</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>n = length(y)</div><div class="MsoNormal"
style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>ysort = sort(y)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>
nhalf = floor(n/2)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>if (2*nhalf < n){</div><div class="MsoNormal" style="margin: 0in 0in 0pt;
"><span style="mso-spacerun: yes;"> </span>#<span style="mso-spacerun: yes;"> </span>n odd</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun:
yes;"> </span>med = ysort[nhalf + 1]</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>}</div><div class="MsoNormal" style=
"margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>else{</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span># n
even</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>med = (ysort[nhalf] + ysort[nhalf+1])/2</div><div class="MsoNormal" style=
"margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>}</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>#</div><div class=
"MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>#<span style="mso-spacerun: yes;"> </span>Next, compute Olive’s standard error for the median</div><div
class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>#</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </
span>Ln = nhalf - ceiling(sqrt(n/4))</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>Un = n - Ln</div><div class="MsoNormal" style="margin: 0in
0in 0pt;"><span style="mso-spacerun: yes;"> </span>SE = 0.5*(ysort[Un] - ysort[Ln+1])</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>#</
div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>#<span style="mso-spacerun: yes;"> </span>Compute the confidence interval based on
Student’s t-distribution</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>#<span style="mso-spacerun: yes;"> </span>The degrees of freedom
parameter p is discussed in Olive’s paper</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>#</div><div class="MsoNormal" style="margin: 0in 0in
0pt;"><span style="mso-spacerun: yes;"> </span>p = Un - Ln - 1</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>t = qt(p = 1 - alpha/2, df
= p)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>medLCI = med - t * SE</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style
="mso-spacerun: yes;"> </span>medUCI = med + t * SE</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>#</div><div class="MsoNormal" style=
"margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>#<span style="mso-spacerun: yes;"> </span>Next, compute the mean and its classical confidence interval</div><div class=
"MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>#</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>mu =
mean(y)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>SEmu = sd(y)/sqrt(n)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span
style="mso-spacerun: yes;"> </span>tmu = qt(p = 1 - alpha/2, df = n-1)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>muLCI = mu - tmu *
SEmu</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>muUCI = mu + tmu * SEmu</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span
style="mso-spacerun: yes;"> </span>#</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>#<span style="mso-spacerun: yes;"> </span>
Finally, return a data frame with all of the results computed here</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>#</div><div class=
"MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>OutFrame = data.frame(Median = med, LCI = medLCI, UCI = medUCI, </div><div class="MsoNormal" style="margin: 0in
0in 0pt;"><span style="mso-spacerun: yes;"> </span>Mean = mu,
MeanLCI = muLCI, MeanUCI = muUCI,</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> &
nbsp; </span>N = n, dof = p, tmedian = t, tmean = tmu,</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun:
yes;"> </span>SEmedian = SE, SEmean = SEmu)</div><div class=
"MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>OutFrame</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">}</div></blockquote><div class="MsoNormal"
style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">Briefly, this procedure performs the following computations.<span style="mso-spacerun: yes;"> </
span>The first portion of the code computes the median, defined as the middle element of the rank-ordered list of samples if the number of samples n is odd, and the average of the two middle samples
if n is even.<span style="mso-spacerun: yes;"> </span>Note that the even/odd character of n is determined by using the <b style="mso-bidi-font-weight: normal;">floor</b> function in <i style=
"mso-bidi-font-style: normal;">R:</i> floor(n/2) is the largest integer that does not exceed n/2.<span style="mso-spacerun: yes;"> </span>Thus, if n is odd, the <b style="mso-bidi-font-weight:
normal;">floor</b> function rounds n/2 down to its integer part, so the product 2 * floor(n/2) is less than n, while if n is even, floor(n/2) is exactly equal to n/2, so this product is equal to n.
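floor</b> function rounds">
</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">As a quick sanity check on this even/odd logic (a small illustration, not part of Olive’s paper), the floor-based median computation can be compared against R’s built-in <b style="mso-bidi-font-weight: normal;">median</b> function:</div>

```r
# Floor-based median, as in the procedure above,
# checked against R's built-in median() for odd and even n
floorMedian <- function(y){
  n <- length(y)
  ysort <- sort(y)
  nhalf <- floor(n/2)
  if (2*nhalf < n){
    ysort[nhalf + 1]                      # n odd: middle element
  } else {
    (ysort[nhalf] + ysort[nhalf + 1])/2   # n even: average of the middle two
  }
}
floorMedian(c(3, 1, 7))      # n odd: gives 3, agreeing with median()
floorMedian(c(3, 1, 7, 5))   # n even: gives 4, agreeing with median()
```

<div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">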
<span style="mso-spacerun: yes;"> </span>In addition, both the <b style="mso-bidi-font-weight: normal;">floor</b> function and its opposite function <b style="mso-bidi-font-weight: normal;">
ceiling</b> are needed to compute the value Ln used in computing Olive’s standard error for the median.<span style="mso-spacerun: yes;"> </span>The c<sub>p</sub> values correspond to the
parameters t and tmu that appear in this function, computed from the built-in R function <strong>qt</strong> (which returns quantiles of the t-distribution).<span style="mso-spacerun: yes;"> </
span>Note that for the median, the degrees of freedom supplied to this function is p, which tends to be much smaller than the degrees of freedom value n-1 for the mean confidence interval computed in
the latter part of this function.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">As a specific illustration of the results
generated by this procedure, applying it to the makeup flow rate data sequence yields:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in
0in 0pt;">> DOliveCIproc(makeupflow)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>Median<span style="mso-spacerun: yes;">&
nbsp; </span>LCI<span style="mso-spacerun: yes;"> </span>UCI<span style=
"mso-spacerun: yes;"> </span>Mean<span style="mso-spacerun: yes;"> </span>MeanLCI<span style
="mso-spacerun: yes;"> </span>MeanUCI<span style="mso-spacerun: yes;"> </span>N dof<span style="mso-spacerun: yes;"> &
nbsp; </span>tmedian</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">1 393.3586 391.8338 394.8834 315.4609 309.4857&
nbsp; 321.4361 2589<span style="mso-spacerun: yes;"> </span>52 2.006647</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;">&
nbsp; </span>tmean SEmedian<span style="mso-spacerun: yes;"> </span>SEmean</div><div class="MsoNormal" style="margin: 0in 0in
0pt;">1 1.960881<span style="mso-spacerun: yes;"> </span>0.75987 3.047188</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">></div><div class="MsoNormal"
style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">These results were used to construct the confidence interval lines in the makeup flow rate plot shown
above.<span style="mso-spacerun: yes;"> </span>In addition, note that these results also illustrate the point noted in the preceding discussion about the degrees of freedom used in constructing
the Student’s t-based confidence intervals.<span style="mso-spacerun: yes;"> </span>For the mean, the degrees of freedom is N-1, which is 2588 for this example, meaning that there is
essentially no difference in this case between these confidence intervals and those based on the Gaussian limiting distribution.<span style="mso-spacerun: yes;"> </span>In contrast, for the
median, the degrees of freedom is only 52, giving a c<sub>p</sub> value that is about 2.5% larger than the corresponding Gaussian case; for the next example, the degrees of freedom is only 16, making
this parameter about 8% larger than the Gaussian limit.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href=
"http://2.bp.blogspot.com/-ffH0dHWT1SU/T5L44DXg6OI/AAAAAAAAAII/kiTL9LpRSng/s1600/OliveFig02.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="319" qda="true"
src="http://2.bp.blogspot.com/-ffH0dHWT1SU/T5L44DXg6OI/AAAAAAAAAII/kiTL9LpRSng/s320/OliveFig02.png" width="320" /></a></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class=
"MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">One of the points I discussed in my last post was the instability of the median relative to
the mean, a point I illustrated with the plot shown above.<span style="mso-spacerun: yes;"> </span>This is a simulation-based dataset consisting of three parts: the first 100 points are
narrowly distributed around the value +1, the 101<sup>st</sup> point is exactly zero, and the last 100 points are narrowly distributed around the value -1.<span style="mso-spacerun: yes;"> </
span>As I noted last time, removing two points from either the first group or the last group can profoundly alter the median, while having very little effect on the mean.<span style="mso-spacerun:
yes;"> </span>The figure shown above includes, in addition to the data values, the 95% confidence intervals for both the mean (the dotted lines in the center of the plot) and the median (the
heavy dashed lines at the top and bottom of the plot).<span style="mso-spacerun: yes;"> </span>Here, the fact that the median confidence interval is enormously wider (by almost a factor of 13)
than the mean confidence interval gives an indication of the instability of the median.<span style="mso-spacerun: yes;"> </span>In fact, the data distribution in this example is strongly
bimodal, corresponding to a case where order statistic-based estimators like the median and Olive’s standard error for it perform poorly, a point discussed in Chapter 7 of <em>Exploring Data.</em></
div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-58FRLcbZTd0/T5L5MJ-D99I/
AAAAAAAAAIQ/-Kz26DHi3I4/s1600/OliveFig03.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="319" src="http://3.bp.blogspot.com/-58FRLcbZTd0/
T5L5MJ-D99I/AAAAAAAAAIQ/-Kz26DHi3I4/s320/OliveFig03.png" width="320" /></a></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">
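<br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">The following small R experiment, built from the same order-statistic standard error formulas as the procedure listed above (an illustration constructed for this post, not from Olive’s paper), shows how heavy ties in the data can drive this standard error estimator to zero, a failure mode relevant to the coarsely quantized example discussed next:</div>

```r
# Olive's order-statistic standard error for the median
# (same Ln/Un formulas as in the DOliveCIproc procedure above)
OliveSE <- function(y){
  n <- length(y)
  ysort <- sort(y)
  nhalf <- floor(n/2)
  Ln <- nhalf - ceiling(sqrt(n/4))
  Un <- n - Ln
  0.5 * (ysort[Un] - ysort[Ln + 1])
}
# Continuous data: no ties, so the standard error is strictly positive
yCont <- seq(-3, 3, length.out = 200)
OliveSE(yCont) > 0       # TRUE
# Heavily tied data: 60 of 200 values equal to the median value 0
yTied <- c(seq(-3, -1, length.out = 70), rep(0, 60), seq(1, 3, length.out = 70))
OliveSE(yTied)           # 0: the confidence interval implodes to the median
```

<div class="MsoNormal" style="margin: 0in 0in 0pt;">Here, with n = 200, only the 16 order statistics between positions Ln+1 = 93 and Un = 108 enter the estimate, so 60 tied values centered at the median are more than enough to drive it to zero.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">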
<br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">One of the other important cases where estimators based on order statistics can perform poorly is that of coarsely quantized data, such
as temperatures recorded only to the nearest tenth of a degree.<span style="mso-spacerun: yes;"> </span>The difficulty with these cases is that coarse quantization profoundly changes the nature
of the data distribution.<span style="mso-spacerun: yes;"> </span>Specifically, it is a standard result in statistics that the probability of any two samples drawn from a continuous
distribution having exactly the same value is zero, but this is no longer true for discrete distributions (e.g., count data), and coarse quantization introduces an element of discreteness into the
data distribution.<span style="mso-spacerun: yes;"> </span>The above figure illustrates this point for a simple simulation-based example.<span style="mso-spacerun: yes;"> </span>The upper
left plot shows a random sample of size 200 drawn from a zero-mean, unit-variance Gaussian distribution, and the upper right plot shows the effects of quantizing this sample, rounding it to the
nearest half-integer value.<span style="mso-spacerun: yes;"> </span>The lower two plots are normal quantile-quantile plots generated by the <i style="mso-bidi-font-style: normal;">R</i> command
<b style="mso-bidi-font-weight: normal;">qqPlot</b> from the <b style="mso-bidi-font-weight: normal;">car</b> package: in the lower left plot, almost all of the points fall within the 95% confidence
interval around the normal reference line for this plot, while many of the points fall somewhat outside these confidence limits in the plot shown in the lower right.<span style="mso-spacerun: yes;">&
nbsp; </span>The greatest difference, however, is in the “staircase” appearance of this lower right plot, reflecting the effects of the coarse quantization on this data sample: each “step”
corresponds to a group of samples that have exactly the same value.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">The
influence of this quantization on Olive’s confidence interval for the median is profound: for the original Gaussian data sequence, the 95% confidence interval for the median is approximately
(-0.222,0.124), compared with (-0.174,0.095) for the mean.<span style="mso-spacerun: yes;"> </span>These results are consistent with our expectations: since the mean is the best possible
location estimator for Gaussian data, it should give the narrower confidence interval, and it does.<span style="mso-spacerun: yes;"> </span>For the quantized case, the 95% confidence interval
for the mean is (-0.194, 0.079), fairly similar to that for the original data sequence, but the confidence interval for the median reduces to the single value zero.<span style="mso-spacerun: yes;">&
nbsp; </span>This result represents an <i style="mso-bidi-font-style: normal;">implosion</i> of Olive’s standard error estimator for the median, exactly analogous to the behavior of the MADM scale
estimate that I have discussed previously when a majority of the data values (i.e., more than 50% of them) are identical.<span style="mso-spacerun: yes;"> </span>Here, the situation is more
serious, since the MADM scale estimate does not implode for this example: the MADM scale for the original data sequence is 0.938, versus 0.741 for the quantized sequence.<span style="mso-spacerun:
yes;"> </span>The reason Olive’s standard error estimator is more prone to implosion in the face of coarse quantization is that it is based on a small subset of the original data sample.<span
style="mso-spacerun: yes;"> </span>In particular, the size of the subsample on which this estimator is based is p, the degrees of freedom for the t-distribution used in constructing the
corresponding confidence interval, and this number is approximately the square root of the sample size.<span style="mso-spacerun: yes;"> </span>Thus, for a sample of size 200 like the example
considered here, MADM scale implosion requires just over half the sample to have the same value – 101 data points in this case – whereas Olive’s standard error estimator for the median can implode if
16 or more samples have the same value, and this is exactly what happens here: the median value is zero, and this value occurs 39 times in the quantized data sequence.</div><div class="MsoNormal"
style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">David Olive’s confidence interval for the median is easily computed and represents a useful adjunct to the
median as a characterization of numerical variables.<span style="mso-spacerun: yes;"> </span>As Olive advises, there is considerable advantage in computing and comparing both his median
confidence interval and the corresponding standard confidence interval around the mean.<span style="mso-spacerun: yes;"> </span>Although in the summary of his paper, Olive only mentions
outliers as a potential cause of substantial differences between these two confidence intervals, this post has illustrated that disagreements can also arise from other causes, including light-tailed,
bimodal, or coarsely quantized data, much like the situation with the MADM scale estimate versus the standard deviation.<span style="mso-spacerun: yes;"> </span>In fact, as the last example
discussed here illustrates, Olive’s standard error estimator for the median and the confidence intervals based on it can implode – exactly like the MADM scale estimate – in the face of coarsely
quantized data.<span style="mso-spacerun: yes;"> </span>Moreover, the implosion problem for Olive’s median standard error estimator is potentially more severe, again as illustrated in the
previous example.<span style="mso-spacerun: yes;"> </span>Finally, it is worth noting that Olive’s paper also discusses confidence intervals for trimmed means.</div>Ron Pearson (aka TheNoodleDoodler)http://www.blogger.com/profile/
15693640298594791682noreply@blogger.com0tag:blogger.com,1999:blog-9179325420174899779.post-62894266654358476302012-03-03T15:25:00.000-08:002012-03-03T15:25:18.627-08:00<div class="MsoNormal" style=
"margin: 0in 0in 0pt;">The problem of outliers – data points that are substantially inconsistent with the majority of the other points in a dataset – arises frequently in the analysis of numerical
data.<span style="mso-spacerun: yes;"> </span>The practical importance of outliers lies in the fact that even a few of these points can badly distort the results of an otherwise reasonable data
analysis.<span style="mso-spacerun: yes;"> </span>This outlier-sensitivity problem is often particularly acute for classical data characterizations and analysis methods like means, standard
deviations, and linear regression analysis.<span style="mso-spacerun: yes;"> </span>As a consequence, a range of outlier-resistant methods has been developed for many different applications,
and new ones continue to appear.<span style="mso-spacerun: yes;"> </span>For example, the <em>R</em> package <strong>robustbase</strong> that I have discussed in previous posts
includes outlier-resistant methods for estimating location (i.e., outlier-resistant alternatives to the mean), estimating scale (outlier-resistant alternatives to the standard deviation), quantifying
asymmetry (outlier-resistant alternatives to the skewness), and fitting regression models.<span style="mso-spacerun: yes;"> </span>In <a href="http://www.amazon.com/
Exploring-Data-Engineering-Sciences-Medicine/dp/0195089650">Exploring Data in Engineering, the Sciences, and Medicine</a>, I discuss a number of outlier-resistant methods for addressing some of these
problems, including <em>Gastwirth’s location estimator</em>, an alternative to the mean that is the subject of this post.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div
class="MsoNormal" style="margin: 0in 0in 0pt;">The mean is the best-known location estimator, and it gives a useful assessment of the “typical” value of any numerical sequence that is reasonably
symmetrically distributed and free of outliers.<span style="mso-spacerun: yes;"> </span>The outlier-sensitivity of the mean is severe, however, which motivates the use of outlier-resistant
alternatives like the median.<span style="mso-spacerun: yes;"> </span>While the median is almost as well-known as the mean and extremely outlier-resistant, it can behave unexpectedly (i.e.,
“badly”) as a result of its non-smooth character.<span style="mso-spacerun: yes;"> </span>This point is illustrated in Fig. 7.23 in <em>Exploring Data</em>, the same in character as the figure
shown below (the two figures differ slightly because they use different seeds to generate the random numbers on which they are based).<span style="mso-spacerun: yes;"> </span>Specifically, this
plot shows a sequence of 201 data points, constructed as follows.<span style="mso-spacerun: yes;"> </span>The first 100 points are normally distributed with mean 1 and standard deviation 0.1,
the 101<sup>st</sup> point is equal to zero, and points 102 through 201 are normally distributed with mean -1 and standard deviation 0.1.<span style="mso-spacerun: yes;"> </span>Small changes
in this dataset in the specific form of deleting points can result in very large changes in the computed median.<span style="mso-spacerun: yes;"> </span>Specifically, in this example, the first
100 points lie between 0.768 and 1.185 and the last 100 points lie between -0.787 and -1.282; because the central data point lies between these two equal-sized groups, it defines the median, which is
0.<span style="mso-spacerun: yes;"> </span>The mean is quite close to this value, at -0.004, but the situation changes dramatically if we omit either the first two or the last two points from
this data sequence.<span style="mso-spacerun: yes;"> </span>Specifically, the median value computed from points 1 through 199 is 0.768, while that computed from points 3 through 201 is -0.787.
<span style="mso-spacerun: yes;"> </span>In contrast, the mean values for these two modified sequences are 0.006 and -0.014.<span style="mso-spacerun: yes;"> </span>Thus, although the
median is much less sensitive than the mean to contamination from outliers, it is extremely sensitive to the 1% change made in this example for this particular dataset.<span style="mso-spacerun: yes;
"> </span></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><div class="separator" style="clear: both; text-align:
center;"><a href="http://1.bp.blogspot.com/-nEk7MSvKhcg/T1KgWJGB5xI/AAAAAAAAAHQ/CnxWo2AIR9U/s1600/GastFig00.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height=
"319" src="http://1.bp.blogspot.com/-nEk7MSvKhcg/T1KgWJGB5xI/AAAAAAAAAHQ/CnxWo2AIR9U/s320/GastFig00.png" uda="true" width="320" /></a></div></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">
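The construction just described is easy to reproduce in R. The sketch below uses an arbitrary seed (not the one behind the figure), so the exact cluster endpoints will differ from the values quoted above, but the qualitative behavior is the same:

```r
# Reconstruct the 201-point dataset described above: 100 points from
# N(mean = 1, sd = 0.1), a single zero, then 100 points from N(mean = -1, sd = 0.1)
set.seed(42)  # arbitrary seed, not the one used for the figure
x <- c(rnorm(100, mean = 1, sd = 0.1), 0, rnorm(100, mean = -1, sd = 0.1))

median(x)         # exactly 0: the central point separates the two equal-sized clusters
mean(x)           # also close to 0

# Deleting only two points swings the median to one cluster or the other,
# while the mean barely moves
median(x[1:199])  # jumps to the smallest value of the upper cluster
median(x[3:201])  # jumps to the largest value of the lower cluster
```

Either deletion changes only 1% of the data, yet the median moves by nearly its full range, which is exactly the non-smooth behavior discussed in the text.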
<br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">The fact that the median is not “universally the best location estimator” provides a practical motivation for examining alternatives
that are intermediate in behavior between the very smooth but very outlier-sensitive mean and the very outlier-insensitive but very non-smooth median.<span style="mso-spacerun: yes;"> </span>
Some of these alternatives were examined in detail in the book <em>Robust Estimates of Location: Survey and Advances</em>, by D.F. Andrews, P.J. Bickel, F.R. Hampel, P.J.
Huber, W.H. Rogers, and J.W. Tukey, published by Princeton University Press in 1972 (according to the publisher's website, this book is out of print, but used copies are available through
distributors like Amazon or Barnes and Noble).<span style="mso-spacerun: yes;"> </span>The book summarizes the results of a year-long study of 68 different location estimators, including both
the mean and the median.<span style="mso-spacerun: yes;"> </span>The fundamental criteria for inclusion in this study were, first, that the estimators had to be computable from any given
sequence of real numbers, and second, that they had to be both location and scale-invariant.<span style="mso-spacerun: yes;"> </span>Specifically, if a given data sequence <i style=
"mso-bidi-font-style: normal;">{x<sub>k</sub>}</i> yielded a result <i style="mso-bidi-font-style: normal;">m</i>, the scaled and shifted data sequence <i style="mso-bidi-font-style: normal;">{Ax
<sub>k</sub> + b}</i> should yield the result <i style="mso-bidi-font-style: normal;">Am+b</i>, for any numbers <i style="mso-bidi-font-style: normal;">A</i> and <i style="mso-bidi-font-style:
normal;">b</i>.<span style="mso-spacerun: yes;"> </span>The study was co-authored by six statistical researchers with differing opinions and points of view, but two of the authors – D.F.
Andrews and F.R. Hampel – included the Gastwirth estimator (described in detail below) in their list of favorites.<span style="mso-spacerun: yes;"> </span>For example, Hampel characterized this
estimator as one of a small list of those that were “never bad at the distributions considered.”<span style="mso-spacerun: yes;"> </span>Also, in contrast to many of the location estimators
considered in the study, Gastwirth’s estimator does not require iterative computations, making it simpler to implement.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class
="MsoNormal" style="margin: 0in 0in 0pt;">Specifically, Gastwirth’s location estimator is a weighted sum of three order statistics.<span style="mso-spacerun: yes;"> </span>That is, to compute
this estimator, we first sort the data sequence in ascending order.<span style="mso-spacerun: yes;"> </span>Then, we take the values that are one-third of the way up this sequence (the 0.33
quantile), half way up the sequence (i.e., the median, or 0.50 quantile), and two-thirds of the way up the sequence (the 0.67 quantile).<span style="mso-spacerun: yes;"> </span>Given these
three values, we then form the weighted average, giving the central (median) value a weight of 40% and the two outer quantile values each a weight of 30%.<span style="mso-spacerun: yes;"> </span>This
is extremely easy to do in R, with the following code:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><blockquote class=
"tr_bq"><div class="MsoNormal" style="margin: 0in 0in 0pt;">Gastwirth <- function(x,...){</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>#<span style="mso-spacerun: yes;"> </span>Compute the 1/3, 1/2, and 2/3 order statistics of x</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>ordstats = quantile(x, probs=c(1/3,1/2,2/3),...)</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>#<span style="mso-spacerun: yes;"> </span>Weight the median 0.4 and the outer two order statistics 0.3 each</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>wts = c(0.3,0.4,0.3)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>sum(wts*ordstats)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">}</div></blockquote>
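As a quick sanity check on the function above (repeated here so the snippet is self-contained), applying it to a short symmetric sequence recovers the center, and extra arguments such as na.rm pass through to quantile:

```r
# Gastwirth's location estimator, as defined above
Gastwirth <- function(x, ...) {
  ordstats <- quantile(x, probs = c(1/3, 1/2, 2/3), ...)
  wts <- c(0.3, 0.4, 0.3)
  sum(wts * ordstats)
}

Gastwirth(1:9)                      # 5, the center of this symmetric sequence
Gastwirth(c(1:9, NA), na.rm = TRUE) # also 5: the NA is dropped by quantile
```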
</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">The key part of this code is the first line, which computes the required
order statistics (i.e., the quantiles 1/3, 1/2, and 2/3) using the built-in <b style="mso-bidi-font-weight: normal;">quantile</b> function.<span style="mso-spacerun: yes;"> </span>The first
argument passed to this function is <b style="mso-bidi-font-weight: normal;">x</b>, the vector of data values to be characterized, and the second argument (<b style="mso-bidi-font-weight: normal;">
probs</b>) defines the specific quantiles we wish to compute.<span style="mso-spacerun: yes;"> </span>The ellipsis (...) in the Gastwirth procedure’s argument list is passed to the <b style=
"mso-bidi-font-weight: normal;">quantile</b> function; several parameters are possible (type “help(quantile)” in your <em>R</em> session for details), but one of the most useful is <b style=
"mso-bidi-font-weight: normal;">na.rm</b>, a logical variable that specifies how missing data values are to be handled.<span style="mso-spacerun: yes;"> </span>The default is “FALSE” and this
causes the <b style="mso-bidi-font-weight: normal;">Gastwirth</b> procedure to return the missing data value “NA” if any values of <b style="mso-bidi-font-weight: normal;">x</b> are missing; the
alternative “TRUE” computes the Gastwirth estimator from the non-missing values, giving a numerical result.<span style="mso-spacerun: yes;"> </span>The three-element vector <b style=
"mso-bidi-font-weight: normal;">wts</b> defines the quantile weights that define the Gastwirth estimator, which the final <b style="mso-bidi-font-weight: normal;">sum</b> statement computes.</
div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">For the data example considered above, the Gastwirth estimator yields the
location estimate -0.001 for the complete dataset, 0.308 for points 1 to 199 (vs. 0.768 for the median), and -0.317 for points 3 to 201 (vs. -0.787 for the median).<span style="mso-spacerun: yes;">&
nbsp; </span>Thus, while it does not perform nearly as well as the mean for this example, it performs substantially better than the median.<span style="mso-spacerun: yes;"> </span></div><div
class="MsoNormal" style="margin: 0in 0in 0pt;"><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-lX85Av_ZRvI/T1KjMM0rX4I/AAAAAAAAAHY/dk5cvsDf_nM
/s1600/GastFig01.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="319" src="http://3.bp.blogspot.com/-lX85Av_ZRvI/T1KjMM0rX4I/AAAAAAAAAHY/dk5cvsDf_nM/s320/
GastFig01.png" uda="true" width="320" /></a></div></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">For the infinite-variance
Cauchy distribution that I have discussed in several previous posts, the Gastwirth estimator performs similarly to the median, yielding a useful estimate of the center of the data distribution, in
contrast to the mean, which doesn’t actually exist for this distribution (that is, the first moment does not exist for the Cauchy distribution).<span style="mso-spacerun: yes;"> </span>Still,
the distribution is symmetric about zero, so the median is well-defined, as is the Gastwirth estimator, and both should be zero for this distribution.<span style="mso-spacerun: yes;"> </span>
The above figure shows the results of applying these three estimators – the mean, the median, and Gastwirth’s estimator – to 1,000 independent random samples drawn from the Cauchy distribution.<span
style="mso-spacerun: yes;"> </span>Specifically, this figure gives a boxplot summary of these results, truncated to the range from -3 to 3 to show the range of variation of the median and
Gastwirth estimator (without this restriction, the boxplot comparison would be fairly non-informative, since the mean values range from approximately -161 to 27,793, reflecting the fact that the mean
is not a consistent location estimator for the Cauchy distribution).<span style="mso-spacerun: yes;"> </span>To generate these results, the <b style="mso-bidi-font-weight: normal;">
replicate</b> function in R was used, followed by the <b style="mso-bidi-font-weight: normal;">apply</b> function, as follows:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-tab-count: 1;"> </span>RandomSampleFrame = replicate(1000, rt(n=200,df=1))<br />
BoxPlotVector = apply(RandomSampleFrame, MARGIN=2, Gastwirth)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">The <b style=
"mso-bidi-font-weight: normal;">replicate</b> function creates a matrix with the number of columns specified by the first argument (here, 1000), and each column generated by the R statement that
appears as the second argument.<span style="mso-spacerun: yes;"> </span>In this case, this second argument is the command <b style="mso-bidi-font-weight: normal;">rt</b>, which generates a
sequence of <b style="mso-bidi-font-weight: normal;">n</b> statistically independent random numbers drawn from the Student’s <i style="mso-bidi-font-style: normal;">t</i>-distribution with the number
of degrees of freedom specified by the <b style="mso-bidi-font-weight: normal;">df </b>argument (here, this is 1, corresponding to the fact that the Cauchy distribution is the Student’s <i style=
"mso-bidi-font-style: normal;">t</i>-distribution with 1 degree of freedom). <span style="mso-spacerun: yes;"> </span>Thus, <b style="mso-bidi-font-weight: normal;">RandomSampleFrame</b>
is a matrix with 200 rows and 1,000 columns, each column of which may be regarded as a Cauchy-distributed random sample.<span style="mso-spacerun: yes;"> </span>The <b style="mso-bidi-font-weight:
normal;">apply</b> function applies the function specified in the third argument (here, the <b style="mso-bidi-font-weight: normal;">Gastwirth</b> procedure listed above) to the columns (<b style=
"mso-bidi-font-weight: normal;">MARGIN</b>=2 specifies columns; <b style="mso-bidi-font-weight: normal;">MARGIN</b>=1 would specify rows) of the matrix specified in the first argument.<span style
="mso-spacerun: yes;"> </span>The result is <b style="mso-bidi-font-weight: normal;">BoxPlotVector</b>, a vector of 1,000 Gastwirth estimates, one for each random sample generated by the
<strong>replicate</strong> function above.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://
3.bp.blogspot.com/-ybrBRTX9ypA/T1Kk9T-_9MI/AAAAAAAAAHg/wJHe00-2PkU/s1600/GastFig02a.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="319" src="http://
3.bp.blogspot.com/-ybrBRTX9ypA/T1Kk9T-_9MI/AAAAAAAAAHg/wJHe00-2PkU/s320/GastFig02a.png" uda="true" width="320" /></a></div></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div
class="MsoNormal" style="margin: 0in 0in 0pt;">At the other extreme, in the limit of infinite degrees of freedom, the Student’s <em>t</em>-distribution approaches a Gaussian limit.<span style=
"mso-spacerun: yes;"> </span>The figure above shows the same comparison as before, except for the Gaussian distribution instead of the Cauchy distribution.<span style="mso-spacerun: yes;">&
nbsp; </span>Here, the mean is the best possible location estimator and it clearly performs the best, but the point of this example is that Gastwirth’s location estimator performs better than the
median.<span style="mso-spacerun: yes;"> </span>In particular, the interquartile distance (i.e., the width of the “box” in each boxplot) for the mean is 0.094, it is 0.113 for the median, and
it is 0.106 for Gastwirth’s estimator.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><div class="separator" style="clear:
both; text-align: center;"><a href="http://4.bp.blogspot.com/-7SrgfuHe0K4/T1KlXJD1dfI/AAAAAAAAAHo/3_BtnzblqD0/s1600/ArcsinPlot.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img
border="0" height="319" src="http://4.bp.blogspot.com/-7SrgfuHe0K4/T1KlXJD1dfI/AAAAAAAAAHo/3_BtnzblqD0/s320/ArcsinPlot.png" uda="true" width="320" /></a></div></div><div class="MsoNormal" style=
"margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">Another application area where very robust estimators like the median often perform poorly is that of bimodal
distributions like the <i style="mso-bidi-font-style: normal;">arc-sine distribution</i> whose density is plotted above.<span style="mso-spacerun: yes;"> </span>This distribution is a symmetric
beta distribution, with both shape parameters equal to 0.5 (see <em>Exploring Data</em>, Sec. 4.5.1 for further discussion of this distribution).<span style="mso-spacerun: yes;"> </span>Because
it is symmetrically distributed on the interval from 0 to 1, the location parameter for this distribution is 0.5 and all three of the location estimators considered here yield values that are
accurate on average, but with different levels of precision.<span style="mso-spacerun: yes;"> </span>This point is shown in the figure below, which again provides boxplot comparisons for 1,000
random samples drawn from this distribution, each of length 200, for the mean, median, and Gastwirth location estimators.<span style="mso-spacerun: yes;"> </span>As in the Gaussian case
considered above, the mean performs best here, with an interquartile distance of 0.035, the median performs worst, with an interquartile distance of 0.077, and Gastwirth’s estimator is intermediate,
with an interquartile distance of 0.060.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://
2.bp.blogspot.com/-NiS3T5T2hEo/T1KlqrDbOZI/AAAAAAAAAHw/J_9cjl80_aM/s1600/GastFig03a.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="319" src="http://
2.bp.blogspot.com/-NiS3T5T2hEo/T1KlqrDbOZI/AAAAAAAAAHw/J_9cjl80_aM/s320/GastFig03a.png" uda="true" width="320" /></a></div></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div
class="MsoNormal" style="margin: 0in 0in 0pt;">The point of this post has been to illustrate a location estimator with properties that are intermediate between those of the much better-known mean and
median.<span style="mso-spacerun: yes;"> </span>In particular, the results presented here for the Cauchy distribution show that Gastwirth’s estimator is intermediate in outlier sensitivity
between the disastrously sensitive mean and the maximally insensitive median.<span style="mso-spacerun: yes;"> </span>Similarly, the first example demonstrated that Gastwirth’s estimator is
also intermediate in smoothness between the maximally smooth mean and the discontinuous median: the sensitivity of Gastwirth’s estimator to data editing in “swing-vote” examples like the one
presented here is still undesirably large, but much better than that of the median.<span style="mso-spacerun: yes;"> </span>Finally, the results presented here for the Gaussian and arc-sine
distributions show that Gastwirth’s estimator is better-behaved for these distributions than the median.<span style="mso-spacerun: yes;"> </span>Because it is extremely easy to implement in
<em>R</em>, Gastwirth’s estimator seems worth knowing about.</div>Ron Pearson (aka TheNoodleDoodler), 2012-02-04<div class="MsoNormal" style=
"margin: 0in 0in 0pt;">It is often useful to know how strongly or weakly two variables are associated: do they vary together or are they essentially unrelated?<span style="mso-spacerun: yes;">
</span>In the case of numerical variables, the best-known measure of association is the product-moment correlation coefficient introduced by Karl Pearson at the end of the nineteenth century.<span
style="mso-spacerun: yes;"> </span>For variables that are ordered but not necessarily numeric (e.g., Likert scale responses with levels like “strongly agree,” “agree,” “neither agree nor
disagree,” “disagree” and “strongly disagree”), association can be measured in terms of the Spearman rank correlation coefficient.<span style="mso-spacerun: yes;"> </span>Both of these measures
are discussed in detail in Chapter 10 of <a href="http://www.amazon.com/s?ie=UTF8&rh=n%3A283155%2Ck%3Aexploring%20data%20in%20engineering.%20the%20sciences.%20and%20medicine&page=1">Exploring
Data in Engineering, the Sciences, and Medicine</a>.<span style="mso-spacerun: yes;"> </span>For unordered categorical variables (e.g., country, state, county, tumor type, literary genre,
etc.), neither of these measures is applicable, but suitable alternatives do exist.<span style="mso-spacerun: yes;"> </span>One of these is Goodman and Kruskal’s tau measure, discussed very
briefly in <em>Exploring Data</em> (Chapter 10, page 492).<span style="mso-spacerun: yes;"> </span>The point of this post is to give a more detailed discussion of this association measure,
illustrating some of its advantages, disadvantages, and peculiarities.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">A more
complete discussion of Goodman and Kruskal’s tau measure is given in Agresti’s book <a href="http://www.amazon.com/s/ref=nb_sb_ss_i_1_8?url=search-alias%3Dstripbooks&field-keywords=
agresti+categorical+data+analysis&sprefix=agresti+%2Cstripbooks%2C428">Categorical Data Analysis</a>, on pages 68 and 69.<span style="mso-spacerun: yes;"> </span>It belongs to a family of
categorical association measures of the general form:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style=
"mso-tab-count: 1;"> </span>a(x,y) = [V(y) – E{V(y|x)}]/V(y)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">where V(y) is a measure of the overall (i.e., marginal) variability of y and E{V(y|x)} is the expected value of the conditional variability V(y|x)
of y given a fixed value of x, where the expectation is taken over all possible values of x.<span style="mso-spacerun: yes;"> </span>These variability measures can be defined in different ways,
leading to different association measures, including Goodman and Kruskal’s tau as a special case.<span style="mso-spacerun: yes;"> </span>Agresti’s book gives detailed expressions for several
of these variability measures, including the one on which Goodman and Kruskal’s tau is based, and an alternative expression for the overall association measure a(x,y) is given in Eq. (10.178) on page
492 of <em>Exploring Data</em>.<span style="mso-spacerun: yes;"> </span>This association measure does not appear to be available in any current <em>R</em> package, but it is easily implemented
as the following function:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><blockquote class="tr_bq"><div class=
"MsoNormal" style="margin: 0in 0in 0pt;">GKtau <- function(x,y){</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>#</div><div class=
"MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>#<span style="mso-spacerun: yes;"> </span>First, compute the IxJ contingency table between x and y</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>#</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;">
</span>Nij = table(x,y,useNA="ifany")</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>#</div><div class="MsoNormal" style="margin: 0in 0in 0pt;
"><span style="mso-spacerun: yes;"> </span>#<span style="mso-spacerun: yes;"> </span>Next, convert this table into a joint probability estimate</div><div class="MsoNormal" style="margin:
0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>#</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>PIij = Nij/sum(Nij)</div><div
class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>#</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </
span>#<span style="mso-spacerun: yes;"> </span>Compute the marginal probability estimates</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </
span>#</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>PIiPlus = apply(PIij,MARGIN=1,sum)</div><div class="MsoNormal" style="margin: 0in 0in
0pt;"><span style="mso-spacerun: yes;"> </span>PIPlusj = apply(PIij,MARGIN=2,sum)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>#</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>#<span style="mso-spacerun: yes;"> </span>Compute the marginal variation of y</div><div class=
"MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>#</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>Vy =
1 - sum(PIPlusj^2)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>#</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style=
"mso-spacerun: yes;"> </span>#<span style="mso-spacerun: yes;"> </span>Compute the expected conditional variation of y given x</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">
<span style="mso-spacerun: yes;"> </span>#</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>InnerSum = apply(PIij^2,MARGIN=1,sum)</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>VyBarx = 1 - sum(InnerSum/PIiPlus)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span
style="mso-spacerun: yes;"> </span>#</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>#<span style="mso-spacerun: yes;"> </span>
Compute and return Goodman and Kruskal's tau measure</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>#</div><div class="MsoNormal" style=
"margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>tau = (Vy - VyBarx)/Vy</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>
tau</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">}</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div></blockquote><div class="MsoNormal" style="margin: 0in 0in 0pt;">An
important feature of this procedure is that it allows missing values in either of the variables x or y, treating “missing” as an additional level.<span style="mso-spacerun: yes;"> </span>In
practice, this is sometimes very important since missing values in one variable may be strongly associated with either missing values in another variable or specific non-missing levels of that
variable.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">An important characteristic of Goodman and Kruskal’s tau measure is
its asymmetry: because the variables x and y enter this expression differently, the value of a(y,x) is <em>not</em> the same as the value of a(x,y), in general.<span style="mso-spacerun: yes;">
</span>This stands in marked contrast to either the product-moment correlation coefficient or the Spearman rank correlation coefficient, which are both symmetric, giving the same association between
x and y as that between y and x.<span style="mso-spacerun: yes;"> </span>The fundamental reason for the asymmetry of the general class of measures defined above is that they quantify the extent
to which the variable x is useful in predicting y, which may be very different than the extent to which the variable y is useful in predicting x.<span style="mso-spacerun: yes;"> </span>
Specifically, if x and y are statistically independent, then E{V(y|x)} = V(y) – i.e., knowing x does not help at all in predicting y – and this implies that a(x,y) = 0.<span style="mso-spacerun:
yes;"> </span>At the other extreme, if y is perfectly predictable from x, then E{V(y|x)} = 0, which implies that a(x,y) = 1.<span style="mso-spacerun: yes;"> </span>As the examples
presented next demonstrate, it is possible that y is extremely predictable from x, but x is only slightly predictable from y.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div
class="MsoNormal" style="margin: 0in 0in 0pt;">Specifically, consider a sequence of 400 random numbers, uniformly distributed between 0 and 1, generated by the following R code:</div><div class=
"MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-tab-count: 1;"> &
nbsp; </span>set.seed(123)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-tab-count: 1;"> </span>u = runif
(400)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">(Here, I have used the “set.seed” command to initialize the random
number generator so repeated runs of this example will give exactly the same results.)<span style="mso-spacerun: yes;"> </span>The second sequence is obtained by quantizing the first, rounding
the values of u to a single digit:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-tab-count: 1;"> &
nbsp; </span>x = round(u,digits=1)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in
0in 0pt;">The plot below shows the effects of this coarse quantization: values of u vary continuously from 0 to 1, but values of x are restricted to 0.0, 0.1, 0.2, … , 1.0.<span style="mso-spacerun:
yes;"> </span>Although this example is simulation-based, it is important to note that this type of grouping of variables is often encountered in practice (e.g., the use of age groups instead of
ages in demographic characterizations, blood pressure characterizations like “normal,” “borderline hypertensive,” etc. in clinical data analysis, or the recording of industrial process temperatures
to the nearest 0.1 degree, in part due to measurement accuracy considerations and in part due to memory limitations of early data collection systems).<span style="mso-spacerun: yes;"> </span></
div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-1yCneUgZLQE/Ty3C5dfv3II/
AAAAAAAAAG4/36tSbqEgXFQ/s1600/GKtauFig01.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="319" sda="true" src="http://4.bp.blogspot.com/-1yCneUgZLQE/
Ty3C5dfv3II/AAAAAAAAAG4/36tSbqEgXFQ/s320/GKtauFig01.png" width="320" /></a></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">
<br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">In this particular case, because the variables x and u are both numeric, we could compute either the product-moment correlation
coefficient or the Spearman rank correlation, obtaining the very large value of approximately 0.995 for either one, showing that these variables are strongly associated.<span style="mso-spacerun:
yes;"> </span>We can also apply Goodman and Kruskal’s tau measure here, and the result is much more informative.<span style="mso-spacerun: yes;"> </span>Specifically, the value of a(u,x)
is 1 in this case, correctly reflecting the fact that the grouped variable x is exactly computable from the original variable u.<span style="mso-spacerun: yes;"> </span>In contrast, the value
of a(x,u) is approximately 0.025, suggesting – again correctly – that the original variable u cannot be well predicted from the grouped variable x.<span style="mso-spacerun: yes;"> </span></
div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">To illustrate a case where the product-moment and rank correlation measures are
not applicable at all, consider the following alphabetic re-coding of the variable x into an unordered categorical variable c:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-tab-count: 1;"> </span>letters = c("A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K")</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-tab-count: 1;"> </span>c = letters[10*x + 1]</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">In this case, both of the Goodman and Kruskal tau measures, a(x,c)
and a(c,x), are equal to 1, reflecting the fact that these two variables are effectively identical, related via the non-numeric transformation given above.<span style="mso-spacerun: yes;"> </
span></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">Being able to detect relationships like these can be extremely useful in
exploratory data analysis where such relationships may be unexpected, particularly in the early stages of characterizing a dataset whose metadata – i.e., detailed descriptions of the variables
included in the dataset – is absent, incomplete, ambiguous, or suspect.<span style="mso-spacerun: yes;"> </span>As a real data illustration, consider the <strong>rent</strong> data frame from
the <em>R</em> package <strong>gamlss.data</strong>, which has 1,969 rows, each corresponding to a rental property in Munich, and 9 columns, each
giving a characteristic of that unit (e.g., the rent, floor space, year of construction, etc.).<span style="mso-spacerun: yes;"> </span>Three of these variables are <em>Sp</em>, a binary
variable indicating whether the location is considered above average (1) or not (0), <em>Sm</em>, another binary variable indicating whether the location is considered below average (1) or not (0),
and <em>loc</em>, a three-level variable combining the information in these other two, taking the values 1 (below average), 2 (average), or 3 (above average).<span style="mso-spacerun: yes;">
</span>The Goodman and Kruskal tau values between all possible pairs of these three variables are:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style=
"margin: 0in 0in 0pt;"><span style="mso-tab-count: 1;"> </span>a(Sm,Sp) = a(Sp,Sm) = 0.037</div><div class="MsoNormal" style="margin:
0in 0in 0pt;"><span style="mso-tab-count: 1;"> </span>a(Sm,loc) = 0.245 vs. a(loc,Sm) = 1</div><div class="MsoNormal" style="margin:
0in 0in 0pt;"><span style="mso-tab-count: 1;"> </span>a(Sp,loc) = 0.701 vs. a(loc,Sp) = 1</div><div class="MsoNormal" style="margin:
0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">The first of these results – the symmetry of Goodman and Kruskal’s tau for the variables <em>Sm</em> and <em>Sp</em> – is
a consequence of the fact that this measure is symmetric for any pair of <em>binary</em> variables.<span style="mso-spacerun: yes;"> </span>In fact, the odds ratio that I have discussed in
previous posts represents a much better way of characterizing the relationship between binary variables (here, the odds ratio between <em>Sm</em> and <em>Sp</em> is zero, reflecting the fact that a
location cannot be both “above average” and “below average” at the same time).<span style="mso-spacerun: yes;"> </span>The real utility of the tau measure here is that the second and third
lines above show that the variables <em>Sm</em> and <em>Sp</em> are both re-groupings of the finer-grained variable <em>loc</em>.<span style="mso-spacerun: yes;"> </span></div><div class=
"MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-oRSclb8fvPE/Ty3EgV0qJ9I/AAAAAAAAAHA/gsQgEujOFxs
/s1600/GKtauFig02.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="319" sda="true" src="http://4.bp.blogspot.com/-oRSclb8fvPE/Ty3EgV0qJ9I/AAAAAAAAAHA/
gsQgEujOFxs/s320/GKtauFig02.png" width="320" /></a></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class=
"MsoNormal" style="margin: 0in 0in 0pt;">Finally, a more interesting exploratory application to this dataset is the following one.<span style="mso-spacerun: yes;"> </span>Computing Goodman and
Kruskal’s tau measure between the location variable <em>loc</em> and all of the other variables in the dataset – beyond the cases of <em>Sm</em> and <em>Sp</em> just considered – generally yields
small values for the associations in either direction.<span style="mso-spacerun: yes;"> </span>As a specific example, the association a(loc,Fl) is 0.001, suggesting that location is not a good
predictor of the unit’s floor space in square meters, and although the reverse association a(Fl,loc) is larger (0.057), it is not large enough to suggest that the unit’s floor space is a particularly good
predictor of its location quality.<span style="mso-spacerun: yes;"> </span>The same is true of most of the other variables in the dataset: they are neither well predicted by nor good predictors
of location quality.<span style="mso-spacerun: yes;"> </span>The one glaring exception is the rent variable <em>R:</em> although the association a(loc,R) is only 0.001, the reverse association
a(R,loc) is 0.907, a very large value suggesting that location quality is quite well predicted by the rent.<span style="mso-spacerun: yes;"> </span>The beanplot above shows what is happening
here: because the variation in rents for all three location qualities is substantial, knowledge of the <em>loc</em> value is not sufficient to accurately predict the rent <em>R</em>, but these rent
values do generally increase in going from below-average locations (loc = 1) to average locations (loc = 2) to above-average locations (loc = 3).<span style="mso-spacerun: yes;"> </span>For
comparison, the beanplots below show why the association with floor space is so much weaker: both the mean floor space in each location quality group and the overall range of these values are quite
comparable across the three groups, implying that location quality cannot be well predicted from floor space, nor floor space from location quality.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="separator"
style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-96Hzx9KHTtk/Ty3FGgKun9I/AAAAAAAAAHI/mxprlhMDTYk/s1600/GKtauFig03.png" imageanchor="1" style="margin-left: 1em;
margin-right: 1em;"><img border="0" height="319" sda="true" src="http://3.bp.blogspot.com/-96Hzx9KHTtk/Ty3FGgKun9I/AAAAAAAAAHI/mxprlhMDTYk/s320/GKtauFig03.png" width="320" /></a></div><div class=
"MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">The asymmetry of Goodman and
Kruskal’s tau measure is disconcerting at first because it has no counterpart in better-known measures like the product-moment correlation coefficient between numerical variables, Spearman’s rank
correlation coefficient between ordinal variables, or the odds ratio between binary variables.<span style="mso-spacerun: yes;"> </span>One of the points of this post has been to demonstrate how
this unusual asymmetry can be useful in practice, distinguishing between the ability of one variable x to predict another variable y, and the reverse case.</div>Ron Pearson (aka TheNoodleDoodler)
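For readers who want to experiment with this asymmetry outside of <em>R</em>, the forward and backward tau values discussed above are easy to compute from their definition: tau(x, y) is the fraction of the Gini concentration of y that is explained by conditioning on x. The following is a minimal Python sketch (the function name gk_tau and the toy data are my own illustration, not code from the post):

```python
from collections import Counter, defaultdict

def gk_tau(x, y):
    """Goodman-Kruskal tau(x, y): the fraction of the variation in y
    (measured by the Gini concentration) explained by knowing x."""
    n = len(x)
    # unconditional Gini variation of y
    vy = 1.0 - sum((c / n) ** 2 for c in Counter(y).values())
    # expected conditional Gini variation of y within each level of x
    groups = defaultdict(list)
    for xi, yi in zip(x, y):
        groups[xi].append(yi)
    ev = 0.0
    for vals in groups.values():
        ni = len(vals)
        gini_i = 1.0 - sum((c / ni) ** 2 for c in Counter(vals).values())
        ev += (ni / n) * gini_i
    return (vy - ev) / vy

# Mirror of the quantization example: x is a coarse grouping of u,
# so u predicts x perfectly, but not conversely.
u = [1, 2, 3, 4, 5, 6]
x = [0, 0, 0, 1, 1, 1]
print(gk_tau(u, x))   # exactly 1: x is a function of u
print(gk_tau(x, u))   # much smaller: u is not recoverable from x
```

As in the grouped-variable example above, the forward value is exactly 1 while the backward value is small, which is precisely the asymmetry the post exploits.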
<div class="MsoNormal" style="margin: 0in 0in 0pt;">In my last post, I discussed the Hampel filter, a useful moving window nonlinear data cleaning filter that is available in the <em>R</em> package
<strong>pracma</strong>.<span style="mso-spacerun: yes;"> </span>In this post, I briefly discuss this moving window filter in a little more detail, focusing on two important practical points:
the choice of the filter’s local outlier detection threshold, and the question of how to initialize moving window filters.<span style="mso-spacerun: yes;"> </span>This second point is
particularly important here because the <strong>pracma</strong> package initializes the Hampel filter in a particularly appropriate way, but doesn’t do such a good job of initializing the
Savitzky-Golay filter, a linear smoothing filter that is popular in physics and chemistry.<span style="mso-spacerun: yes;"> </span>Fortunately, this second difficulty is easy to fix, as I
demonstrate here.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">Recall from my last post that the Hampel filter is a moving
window implementation of the Hampel identifier, discussed in Chapter 7 of <a href="http://www.amazon.com/s?ie=UTF8&rh=
n%3A283155%2Ck%3Aexploring%20data%20in%20engineering.%20the%20sciences.%20and%20medicine&page=1">Exploring Data in Engineering, the Sciences, and Medicine</a>.<span style="mso-spacerun: yes;">&
nbsp; </span>In particular, this procedure – implemented as <strong>outlierMAD</strong> in the <strong>pracma</strong> package – is a nonlinear data cleaning filter that looks for local outliers in a
time-series or other streaming data sequence, replacing them with a more reasonable alternative value when it finds them.<span style="mso-spacerun: yes;"> </span>Specifically, this filter may
be viewed as a more effective alternative to a “local three-sigma edit rule” that would replace any data point lying more than three standard deviations from the mean of its neighbors with that mean
value.<span style="mso-spacerun: yes;"> </span>The difficulty with this simple strategy is that both the mean and especially the standard deviation are badly distorted by the presence of
outliers in the data, causing this data cleaning procedure to often fail completely in practice.<span style="mso-spacerun: yes;"> </span>The Hampel filter instead uses the median of neighboring
observations as a reference value, and the MAD scale estimator as an alternative measure of distance: that is, a data point is declared an outlier and replaced if it lies more than some number <em>t
</em>of MAD scale estimates from the median of its neighbors; the replacement value used in this procedure is the median.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div
class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-zeJmhgqThZk/TxHJNN-4XlI/AAAAAAAAAGQ/OxoXHvRm-3U/s1600/HampelIIfig01.png" imageanchor="1" style=
"margin-left: 1em; margin-right: 1em;"><img border="0" height="319" kba="true" src="http://4.bp.blogspot.com/-zeJmhgqThZk/TxHJNN-4XlI/AAAAAAAAAGQ/OxoXHvRm-3U/s320/HampelIIfig01.png" width="320" /></
a></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">More
specifically, for each observation in the original data sequence, the Hampel filter constructs a moving window that includes the <em>K</em> prior points, the data point of primary interest, and the
<em>K</em> subsequent data points.<span style="mso-spacerun: yes;"> </span>The reference value used for the central data point is the median of these <em>2K+1</em> successive observations, and
the MAD scale estimate is computed from these same observations to serve as a measure of the “natural local spread” of the data sequence.<span style="mso-spacerun: yes;"> </span>If the central
data point lies more than <em>t </em>MAD scale estimate values from the median, it is replaced with the median; otherwise, it is left unchanged.<span style="mso-spacerun: yes;"> </span>To
illustrate the performance of this filter, the top plot above shows the sequence of 1024 successive physical property measurements from an industrial manufacturing process that I also discussed in my
last post.<span style="mso-spacerun: yes;"> </span>The bottom plot in this pair shows the results of applying the Hampel filter with a window half-width parameter K=5 and a threshold value of t
= 3 to this data sequence.<span style="mso-spacerun: yes;"> </span>Comparing these two plots, it is clear that the Hampel filter has removed the glaring outlier – the value zero – at
observation k = 291, yielding a cleaned data sequence that varies over a much narrower (and, at least in this case, much more reasonable) range of possible values.<span style="mso-spacerun: yes;">&
nbsp; </span>What is less obvious is that this filter has also replaced 18 other data points with their local median reference values.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br />
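The median/MAD decision rule described above is compact enough to sketch directly. Here is a hypothetical Python transcription of the same logic, for illustration only (the post's actual computations use the R function outlierMAD from pracma):

```python
import statistics

def hampel_filter(x, k, t0=3.0):
    """Moving-window Hampel filter: replace x[i] with its window median
    when it lies more than t0 MAD scale estimates away from that median.
    The first and last k points are passed through unchanged."""
    L = 1.4826  # factor making the MAD a consistent scale estimate for Gaussian data
    y = list(x)
    outlier_index = []
    for i in range(k, len(x) - k):
        window = x[i - k : i + k + 1]          # 2k+1 points centered on x[i]
        med = statistics.median(window)
        s0 = L * statistics.median([abs(v - med) for v in window])
        if abs(x[i] - med) > t0 * s0:
            y[i] = med
            outlier_index.append(i)
    return y, outlier_index
```

For example, a single spike in an otherwise flat sequence is replaced by the local median, while the neighboring points are left alone; note that when the window is locally constant the MAD is zero, so any deviation at all is flagged.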
</div><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-uQht8r4-pk8/TxHJuf8hHiI/AAAAAAAAAGY/Vq7Pu7BaeRA/s1600/HampelIIfig02.png" imageanchor="1" style
="margin-left: 1em; margin-right: 1em;"><img border="0" height="319" kba="true" src="http://4.bp.blogspot.com/-uQht8r4-pk8/TxHJuf8hHiI/AAAAAAAAAGY/Vq7Pu7BaeRA/s320/HampelIIfig02.png" width="320" /></
a></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">The above
plot shows the original data sequence, but on approximately the same range as the cleaned data sequence so that the glaring outlier at k = 291 no longer dominates the figure.<span style=
"mso-spacerun: yes;"> </span>The large solid circles represent the 18 additional points that the Hampel filter has declared to be outliers and replaced with their local median values.<span
style="mso-spacerun: yes;"> </span>This plot was generated using the Hampel filter implemented in the <strong>outlierMAD</strong> command in the <strong>pracma</strong> package, which has the
following syntax:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-tab-count: 2;"> &
nbsp; </span>outlierMAD(x,k)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br
/></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">where <em>x</em> is the data sequence to be cleaned and <em>k</em> is the half-width that defines the moving data window on which the
filter is based.<span style="mso-spacerun: yes;"> </span>Here, specifying k = 5 results in an 11-point moving data window.<span style="mso-spacerun: yes;"> </span>Unfortunately, the
threshold parameter <em>t</em> is hard-coded as 3 in this <strong>pracma</strong> procedure, which has the following code:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">outlierMAD <- function (x, k){</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">  n <- length(x)</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">  y <- x</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">  ind <- c()</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">  L <- 1.4826</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">  t0 <- 3</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">  for (i in (k + 1):(n - k)) {</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">    x0 <- median(x[(i - k):(i + k)])</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">    S0 <- L * median(abs(x[(i - k):(i + k)] - x0))</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">    if (abs(x[i] - x0) > t0 * S0) {</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">      y[i] <- x0</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">      ind <- c(ind, i)</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">    }</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">  }</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">  list(y = y, ind = ind)</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">}</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">Note that
it is a simple matter to create your own version of this filter, specifying the threshold (here, the variable <em>t0</em>) to have a default value of 3, but allowing the user to modify it in the
function call.<span style="mso-spacerun: yes;"> </span>Specifically, the code would be:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">HampelFilter <- function (x, k, t0 = 3){</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">  n <- length(x)</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">  y <- x</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">  ind <- c()</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">  L <- 1.4826</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">  for (i in (k + 1):(n - k)) {</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">    x0 <- median(x[(i - k):(i + k)])</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">    S0 <- L * median(abs(x[(i - k):(i + k)] - x0))</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">    if (abs(x[i] - x0) > t0 * S0) {</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">      y[i] <- x0</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">      ind <- c(ind, i)</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">    }</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">  }</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">  list(y = y, ind = ind)</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">}</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div
class="MsoNormal" style="margin: 0in 0in 0pt;">The advantage of this modification is that it allows you to explore the influence of varying the threshold parameter.<span style="mso-spacerun: yes;">&
nbsp; </span>Note that increasing t0 makes the filter more forgiving, allowing more extreme local fluctuations to pass through the filter unmodified, while decreasing t0 makes the filter more
aggressive, declaring more points to be local outliers and replacing them with the appropriate local median.<span style="mso-spacerun: yes;"> </span>In fact, this filter remains well-defined
even for t0 = 0, where it reduces to the median filter, popular in nonlinear digital signal processing.<span style="mso-spacerun: yes;"> </span>John Tukey – the developer or co-developer
of many useful things, including the fast Fourier transform (FFT) – introduced the median filter at a technical conference in 1974, and it has profoundly influenced subsequent developments in
nonlinear digital filtering.<span style="mso-spacerun: yes;"> </span>It may be viewed as the most aggressive limit of the Hampel filter and, although it is quite effective in removing local
outliers, it is often too aggressive in practice, introducing significant distortions into the original data sequence.<span style="mso-spacerun: yes;"> </span>This point may be seen in the plot
below, which shows the results of applying the median filter (i.e., the <strong>HampelFilter</strong> procedure defined above with t0=0) to the physical property dataset.<span style="mso-spacerun:
yes;"> </span>In particular, the heavy solid line in this plot shows the behavior of the first 250 points of the median filtered sequence, while the lighter dotted line shows the corresponding
results for the Hampel filter with t0=3.<span style="mso-spacerun: yes;"> </span>Note the “clipped” or “blocky” appearance of the median filtered results, compared with the more irregular local
variation seen in the Hampel filtered results.<span style="mso-spacerun: yes;"> </span>In many applications (e.g., fitting time-series models), the less aggressive Hampel filter gives much
better overall results.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/
-EjtEmc-sw8w/TxHK5E_gtSI/AAAAAAAAAGg/9SldVilqjoc/s1600/HampelIIfig03.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="319" kba="true" src="http://
2.bp.blogspot.com/-EjtEmc-sw8w/TxHK5E_gtSI/AAAAAAAAAGg/9SldVilqjoc/s320/HampelIIfig03.png" width="320" /></a></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class=
"MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">The other main issue I wanted to discuss in this post is that of initializing moving window
filters.<span style="mso-spacerun: yes;"> </span>The basic structure of these filters – whether they are nonlinear types like the Hampel and median filters discussed above, or linear types like
the Savitzky-Golay filter discussed briefly below – is built on a moving data window that includes a central point of interest, prior observations and subsequent observations.<span style=
"mso-spacerun: yes;"> </span>For a symmetric window that includes K prior and K subsequent observations, this window is not well defined for the first K or the last K observations in the data
sequence.<span style="mso-spacerun: yes;"> </span>These points must be given special treatment, and a very common approach in the digital signal processing community is to extend the original
sequence by appending K additional copies of the first element to the beginning of the sequence and K additional copies of the last element to the end of the sequence.<span style="mso-spacerun: yes;
"> </span>The <strong>pracma</strong> implementation of the Hampel filter procedure (<strong>outlierMAD</strong>) takes an alternative approach, one that is particularly appropriate for data
cleaning filters.<span style="mso-spacerun: yes;"> </span>Specifically, procedure <strong>outlierMAD</strong> simply passes the first and last K observations unmodified from the original data
sequence to the filter output.<span style="mso-spacerun: yes;"> </span>This would also seem to be a reasonable option for smoothing filters like the linear Savitzky-Golay filter discussed next.
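The two initialization strategies just described, endpoint extension and simple pass-through, each amount to only a few lines. A hypothetical Python sketch, with names of my own choosing and assuming a half-width k of at least 1:

```python
def extend_endpoints(x, k):
    """Common DSP convention: repeat the first and last elements k times
    so that every original point has a full (2k+1)-point moving window."""
    return [x[0]] * k + list(x) + [x[-1]] * k

def passthrough_endpoints(filtered, original, k):
    """Convention used by pracma's outlierMAD: copy the first and last k
    original observations into the filter output unchanged (k >= 1)."""
    y = list(filtered)
    y[:k] = original[:k]
    y[-k:] = original[-k:]
    return y
```

Either approach avoids the undefined windows at the sequence ends; pass-through has the added appeal, for data cleaning, of never inventing values that were not observed.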
</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-irmS_ED9KG0/TxHLXwqiMpI/
AAAAAAAAAGo/ccj53hOUrHM/s1600/HampelIIfig04.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="319" kba="true" src="http://1.bp.blogspot.com/-irmS_ED9KG0/
TxHLXwqiMpI/AAAAAAAAAGo/ccj53hOUrHM/s320/HampelIIfig04.png" width="320" /></a></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;
"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">As noted, this linear smoothing filter is popular in chemistry and physics, and it is implemented in the <strong>pracma</strong>
package as procedure <strong>savgol.</strong><span style="mso-spacerun: yes;"> </span>For a more detailed discussion of this filter, refer to the treatment in the book <a href="http://
www.amazon.com/Numerical-Recipes-3rd-Scientific-Computing/dp/0521880688/ref=sr_1_1?s=books&ie=UTF8&qid=1326566316&sr=1-1">Numerical Recipes</a>, which the authors of the <strong>pracma</
strong> package cite for further details (Section 14.8).<span style="mso-spacerun: yes;"> </span>Here, the key point is that this filter is a linear smoother, implemented as the convolution of
the input sequence with an impulse response function (i.e., a smoothing kernel) that is constructed by the <strong>savgol </strong>procedure.<span style="mso-spacerun: yes;"> </span>The above
two plots show the effects of applying this filter with a total window width of 11 points (i.e., the same half-width K = 5 used with the Hampel and median filters), first to the raw physical property
data sequence (upper plot), and then to the sequence after it has been cleaned by the Hampel filter (lower plot).<span style="mso-spacerun: yes;"> </span>The large downward spike at k = 291 in
the upper plot reflects the impact of the glaring outlier in the original data sequence, illustrating the practical importance of removing these artifacts from a data sequence before applying
smoothing procedures like the Savitzky-Golay filter.<span style="mso-spacerun: yes;"> </span>Both the upper and lower plots exhibit similarly large spikes at the beginning and end of the data
sequence, however, and these artifacts are due to the moving window problem noted above for the first K and the last K elements of the original data sequence.<span style="mso-spacerun: yes;">
</span>In particular, the filter implementation in the <strong>savgol</strong> procedure does not apply the sequence extension procedure discussed above, and this fact is responsible for these
artifacts appearing at the beginning and end of the smoothed data sequence.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">It
is extremely easy to correct this problem, adopting the same philosophy the package uses for the <strong>outlierMAD</strong> procedure: simply retain the first and last K elements of the original
sequence unmodified.<span style="mso-spacerun: yes;"> </span>The procedure <strong>SGwrapper</strong> listed below does this after the fact, calling the <strong>savgol</strong> procedure and
then replacing the first and last K elements of the filtered sequence with the original sequence values:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal"
style="margin: 0in 0in 0pt;">SGwrapper <- function(x, K, forder = 4, dorder = 0){</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">  # wrapper that calls savgol and then repairs the endpoint artifacts</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">  n = length(x)</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">  fl = 2*K + 1</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">  y = savgol(x, fl, forder, dorder)</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">  if (dorder == 0){</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">    # smoothing: retain the first and last K original data values</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">    y[1:K] = x[1:K]</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">    y[(n-K+1):n] = x[(n-K+1):n]</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">  }</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">  else{</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">    # derivative estimation: default the first and last K estimates to zero</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">    y[1:K] = 0</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">    y[(n-K+1):n] = 0</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">  }</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">  y</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">}</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style
="margin: 0in 0in 0pt;">Before showing the results obtained with this procedure, it is important to note two points.<span style="mso-spacerun: yes;"> </span>First, the moving window width
parameter fl required for the <strong>savgol </strong>procedure corresponds to fl = 2K+1 for a half-width parameter K.<span style="mso-spacerun: yes;"> </span>The procedure <strong>SGwrapper</
strong> instead takes the half-width K itself as an argument, constructing fl = 2K+1 internally.<span style="mso-spacerun: yes;"> </span>Second, note that in addition to serving as a smoother, the
Savitzky-Golay filter family can also be used to estimate derivatives (this is tricky since differentiation filters are incredible noise amplifiers, but I’ll talk more about that in another post).
<span style="mso-spacerun: yes;"> </span>In the <strong>savgol</strong> procedure, this is accomplished by specifying the parameter dorder, which has a default value of zero (implying
smoothing), but which can be set to 1 to estimate the first derivative of a sequence, 2 for the second derivative, etc.<span style="mso-spacerun: yes;"> </span>In these cases, replacing the
first and last K elements of the filtered sequence with the original data sequence elements is not reasonable: in the absence of any other knowledge, a better default derivative estimate is zero, and
the <strong>SGwrapper</strong> procedure listed above does this.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-zc74biMBwes/TxHMuCc4tZI/AAAAAAAAAGw/zYuzLZbdfMk/s1600/HampelIIfig05.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="319"
kba="true" src="http://3.bp.blogspot.com/-zc74biMBwes/TxHMuCc4tZI/AAAAAAAAAGw/zYuzLZbdfMk/s320/HampelIIfig05.png" width="320" /></a></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></
div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">The four plots shown above illustrate the differences between the original
<strong>savgol</strong> procedure (the left-hand plots) and those obtained with the <strong>SGwrapper</strong> procedure listed above (the right-hand plots).<span style="mso-spacerun: yes;"> </
span>In all cases, the data sequence used to generate these plots was the physical property data sequence cleaned using the Hampel filter with t0 = 3.<span style="mso-spacerun: yes;"> </span>
The upper left plot repeats the lower of the two previous plots, corresponding to the <strong>savgol</strong> smoother output, while the upper right plot applies the <strong>SGwrapper</strong>
function to remove the artifacts at the beginning and end of the smoothed data sequence.<span style="mso-spacerun: yes;"> </span>Similarly, the lower two plots give the corresponding
second-derivative estimates, obtained by applying the <strong>savgol</strong> procedure with fl = 11 and dorder = 2 (lower left plot) or the <strong>SGwrapper</strong> procedure with K = 5 and dorder
= 2 (lower right plot).<span style="mso-spacerun: yes;"> </span></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div>Ron Pearson (aka TheNoodleDoodler)
<div class="MsoNormal"
style="margin: 0in 0in 0pt;">The need to analyze time-series or other forms of streaming data arises frequently in many different application areas.<span style="mso-spacerun: yes;"> </span>
Examples include economic time-series like stock prices, exchange rates, or unemployment figures, biomedical data sequences like electrocardiograms or electroencephalograms, or industrial process
operating data sequences like temperatures, pressures or concentrations.<span style="mso-spacerun: yes;"> </span>As a specific example, the figure below shows four data sequences: the upper two
plots represent hourly physical property measurements, one made at the inlet of a product storage tank (the left-hand plot) and the other made at the same time at the outlet of the tank (the
right-hand plot).<span style="mso-spacerun: yes;"> </span>The lower two plots in this figure show the results of applying the data cleaning filter <strong>outlierMAD</strong> from the <em>R</
em> package <strong>pracma</strong> discussed further below.<span style="mso-spacerun: yes;"> </span>The two main points of this post are first, that isolated spikes like those seen in the
upper two plots at hour 291 can badly distort the results of an otherwise reasonable time-series characterization, and second, that the simple moving window data cleaning filter described here is
often very effective in removing these artifacts.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://
3.bp.blogspot.com/-xe3qt3qFIjc/TtJe9BAfGtI/AAAAAAAAAFw/GTVB2hnN3fU/s1600/hampelfig01.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" hda="true" height="319" src=
"http://3.bp.blogspot.com/-xe3qt3qFIjc/TtJe9BAfGtI/AAAAAAAAAFw/GTVB2hnN3fU/s320/hampelfig01.png" width="320" /></a></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style=
"mso-tab-count: 1;"> </span></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style=
"margin: 0in 0in 0pt;">This example is discussed in more detail in Section 8.1.2 of my book <a href="http://www.amazon.com/Discrete-time-Dynamic-Models-Chemical-Engineering/dp/0195121988">
Discrete-Time Dynamic Models</a>, but the key observations here are the following.<span style="mso-spacerun: yes;"> </span>First, the large spikes seen in both of the original data sequences
were caused by the simultaneous, temporary loss of both measurements and the subsequent coding of these missing values as zero by the data collection system.<span style="mso-spacerun: yes;"> </
span>The practical question of interest was to determine how long, on average, the viscous, polymeric material being fed into and out of the product storage tank was spending there.<span style=
"mso-spacerun: yes;"> </span>A standard method for addressing such questions is the use of cross-correlation analysis, where the expected result is a broad peak like the heavy dashed line in
the plot shown below.<span style="mso-spacerun: yes;"> </span>The location of this peak provides an estimate of the average time spent in the tank, which is approximately 21 hours in this case,
as indicated in the plot.<span style="mso-spacerun: yes;"> </span>This result was about what was expected, and it was obtained by applying standard cross-correlation analysis to the cleaned
data sequences shown in the bottom two plots above.<span style="mso-spacerun: yes;"> </span>The lighter solid curve in the plot below shows the results of applying exactly the same analysis,
but to the original data sequences instead of the cleaned data sequences.<span style="mso-spacerun: yes;"> </span>This dramatically different plot suggests that the material is spending very
little time in the storage tank: accepted uncritically, this result would imply severe fouling of the tank, suggesting a need to shut the process down and clean out the tank, an expensive and
labor-intensive proposition.<span style="mso-spacerun: yes;"> </span>The main point of this example is that the difference in these two plots is entirely due to the extreme data anomalies
present in the original time-series.<span style="mso-spacerun: yes;"> </span>Additional examples of problems caused by time-series outliers are discussed in Section 4.3 of my book <a href=
"http://www.amazon.com/Mining-Imperfect-Data-Contamination-Incomplete/dp/0898715822">Mining Imperfect Data</a>.<span style="mso-spacerun: yes;"> </span></div><div class="MsoNormal" style=
"margin: 0in 0in 0pt;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-LtDGNc0Pq3w/TtJgcfIkfwI/AAAAAAAAAF4/OP18CGkOpck/s1600/
hampelfig02.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" hda="true" height="319" src="http://1.bp.blogspot.com/-LtDGNc0Pq3w/TtJgcfIkfwI/AAAAAAAAAF4/OP18CGkOpck/
s320/hampelfig02.png" width="320" /></a></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-tab-count: 1;"> </
span></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">One of the primary features of the analysis of time-series and other
streaming data sequences is the need for <i style="mso-bidi-font-style: normal;">local</i> data characterizations.<span style="mso-spacerun: yes;"> </span>This point is illustrated in the plot
below, which shows the first 200 observations of the storage tank inlet data sequence discussed above.<span style="mso-spacerun: yes;"> </span>All of these observations but one are represented
as open circles in this plot, but the data point at <em>k = 110</em> is shown as a solid circle, to emphasize how far it lies from its immediate neighbors in the data sequence.<span style=
"mso-spacerun: yes;"> </span>It is important to note that this point is not anomalous with respect to the overall range of this data sequence – it is, for example, well within the normal range
of variation seen for the points from about <em>k = 150</em> to <em>k = 200</em> – but it is clearly anomalous with respect to those points that immediately precede and follow it.<span style=
"mso-spacerun: yes;"> </span>A general strategy for automatically detecting and removing such spikes from a data sequence like this one is to apply a <i style="mso-bidi-font-style: normal;
">moving window data cleaning filter</i> which characterizes each data point with respect to a local neighborhood of prior and subsequent samples.<span style="mso-spacerun: yes;"> </span>That
is, for each data point <i style="mso-bidi-font-style: normal;">k</i> in the original data sequence, this type of filter forms a cleaned data estimate based on some number <i style=
"mso-bidi-font-style: normal;">J</i> of prior data values (i.e., points <i style="mso-bidi-font-style: normal;">k-J</i> through <i style="mso-bidi-font-style: normal;">k-1</i> in the sequence) and,
in the simplest implementations, the same number of subsequent data values (i.e., points <i style="mso-bidi-font-style: normal;">k+1</i> through <i style="mso-bidi-font-style: normal;">k+J</i> in the
sequence).</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-bSZnrwrGhFg/
TtJg1JK1mLI/AAAAAAAAAGA/I95d4s7VILM/s1600/hampelfig03.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" hda="true" height="319" src="http://4.bp.blogspot.com/
-bSZnrwrGhFg/TtJg1JK1mLI/AAAAAAAAAGA/I95d4s7VILM/s320/hampelfig03.png" width="320" /></a></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-tab-count: 1;"> &
nbsp; </span></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">The specific data
cleaning filter considered here is the <em>Hampel filter</em>, which applies the Hampel identifier discussed in Chapter 7 of <a href="http://www.amazon.com/s?ie=UTF8&rh=
n%3A283155%2Ck%3Aexploring%20data%20in%20engineering.%20the%20sciences.%20and%20medicine&page=1">Exploring Data in Engineering, the Sciences and Medicine</a> to this moving data window.<span
style="mso-spacerun: yes;"> </span>If the <i style="mso-bidi-font-style: normal;">k<sup>th</sup></i> data point is declared to be an outlier, it is replaced by the median value computed from
this data window; otherwise, the data point is not modified.<span style="mso-spacerun: yes;"> </span>The results of applying the Hampel filter with a window width of <i style=
"mso-bidi-font-style: normal;">J = 5</i> to the above data sequence are shown in the plot below.<span style="mso-spacerun: yes;"> </span>The effect is to modify three of the original data
points – those at <i style="mso-bidi-font-style: normal;">k = 43, 110</i>, and <i style="mso-bidi-font-style: normal;">120</i> – and the original values of these modified points are shown as solid
circles at the appropriate locations in this plot.<span style="mso-spacerun: yes;"> </span>It is clear that the most pronounced effect of the Hampel filter is to remove the local outlier
indicated in the above figure and replace it with a value that is much more representative of the other data points in the immediate vicinity.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;
"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-sPj3GVpR9Uw/TtJhjPvDHnI/AAAAAAAAAGI/W6pb7RUWXdc/s1600/hampelfig04.png" imageanchor="1"
style="margin-left: 1em; margin-right: 1em;"><img border="0" hda="true" height="319" src="http://4.bp.blogspot.com/-sPj3GVpR9Uw/TtJhjPvDHnI/AAAAAAAAAGI/W6pb7RUWXdc/s320/hampelfig04.png" width="320"
/></a></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-tab-count: 1;"> </span></div><div class="MsoNormal"
style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">As I noted above, the Hampel filter implementation used here is that available in the <em>R</em> package
<strong>pracma</strong> as procedure <strong>outlierMAD</strong>.<span style="mso-spacerun: yes;"> </span>I will discuss this <em>R</em> package in more detail in my next post, but for those
seeking a more detailed discussion of the Hampel filter in the meantime, one is freely available on-line in the form of an EDN article I wrote in 2002, <a href="http://www.edn.com/article/
486039-Scrub_data_with_scale_invariant_nonlinear_digital_filters.php">Scrub data with scale-invariant nonlinear digital filters</a>.<span style="mso-spacerun: yes;"> </span>Also, comparisons
with alternatives like the standard median filter (generally too aggressive, introducing unwanted distortion into the “cleaned” data sequence) and the center-weighted median filter (sometimes quite
effective) are presented in Section 4.2 of the book <em>Mining Imperfect Data</em><span style="mso-spacerun: yes;"> </span>mentioned above.</div>Ron Pearson (aka TheNoodleDoodler)
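The moving window data cleaning logic described in the post above is simple enough to sketch directly in R. The function below is my own minimal illustration, not the <strong>pracma</strong> procedure: each point is compared with the median of a window of J prior and J subsequent points, and replaced by that median if it deviates by more than t0 MAD-scale units.

```r
# Illustrative moving-window Hampel filter (a sketch, not pracma's code):
# for each interior point k, compare x[k] with the median of the window
# x[k-J], ..., x[k+J]; if it deviates by more than t0 MAD units, replace
# it with that window median, as described in the post above.
hampelFilter <- function(x, J = 5, t0 = 3) {
  n <- length(x)
  y <- x
  for (k in (J + 1):(n - J)) {
    window <- x[(k - J):(k + J)]
    m <- median(window)
    # 1.4826 scales the MAD to agree with the standard deviation for
    # Gaussian data, making the t0 threshold comparable across scales
    s <- 1.4826 * median(abs(window - m))
    if (abs(x[k] - m) > t0 * s) y[k] <- m
  }
  y  # the first and last J points are returned unmodified
}

# A sequence with an isolated spike: the filter replaces the spike with
# the local median and leaves the regular points essentially alone
set.seed(42)
x <- c(rnorm(100), 50, rnorm(100))
y <- hampelFilter(x, J = 5)
```

Note that, as in the post, only the interior points are tested here; more careful edge handling is one of the refinements a production implementation would need.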
<div class="MsoNormal" style="margin: 0in 0in 0pt;">In my last few posts, I have considered “long-tailed” distributions whose probability density decays much more slowly than standard distributions
like the Gaussian.<span style="mso-spacerun: yes;"> </span>For these slowly-decaying distributions, the harmonic mean often turns out to be a much better (i.e., less variable) characterization
than the arithmetic mean, which is generally not even well-defined theoretically for these distributions.<span style="mso-spacerun: yes;"> </span>Since the harmonic mean is defined as the
reciprocal of the mean of the reciprocal values, it is intimately related to the reciprocal transformation.<span style="mso-spacerun: yes;"> </span>The main point of this post is to show how
profoundly the reciprocal transformation can alter the character of a distribution, for better or worse.<span style="mso-spacerun: yes;"> </span>One way that reciprocal transformations
sneak into analysis results is through attempts to characterize ratios of random numbers.<span style="mso-spacerun: yes;"> </span>The key issue underlying all of these ideas is the question of
when the denominator variable in either a reciprocal transformation or a ratio exhibits non-negligible probability in a finite neighborhood of zero.<span style="mso-spacerun: yes;"> </span>I
discuss transformations in Chapter 12 of <a href="http://www.amazon.com/Exploring-Data-Engineering-Sciences-Medicine/dp/0195089650/ref=sr_1_1?s=books&ie=UTF8&qid=1321042995&sr=1-1">
Exploring Data in Engineering, the Sciences and Medicine</a>, with a section (12.7) devoted to reciprocal transformations, showing what happens when we apply them to six different distributions:
Gaussian, Laplace, Cauchy, beta, Pareto, and lognormal.<span style="mso-spacerun: yes;"> </span></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div>
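Since the harmonic mean is just the reciprocal of the arithmetic mean of the reciprocal values, it takes only one line of R to compute (base R has no built-in harmonic mean; the function name below is my own):

```r
# Harmonic mean: the reciprocal of the mean of the reciprocal values
harmonicMean <- function(x) 1 / mean(1 / x)

harmonicMean(c(1, 2, 4))  # 12/7, approximately 1.714
```

A single sample value near zero dominates the sum of reciprocals in this computation, which is exactly the sensitivity explored in the rest of this post.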
<div class="MsoNormal" style="margin: 0in 0in 0pt;">In the general case, if a random variable <em>x</em> has the density <em>p(x),</em> the distribution <em>g(y)</em> of the reciprocal <em>y = 1/x</
em> has the density:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-tab-count: 1;"> &
nbsp; </span><em>g(y) = p(1/y)/y<sup>2</sup></em> </div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in
0in 0pt;">As I discuss in greater detail in <em>Exploring Data</em>, the consequence of this transformation is <i style="mso-bidi-font-style: normal;">typically</i> (though not always) to convert a
well-behaved distribution into a very poorly behaved one.<span style="mso-spacerun: yes;"> </span>As a specific example, the plot below shows the effect of the reciprocal transformation on a
Gaussian random variable with mean 1 and standard deviation 2.<span style="mso-spacerun: yes;"> </span>The most obvious characteristic of this transformed distribution is its strongly
asymmetric, bimodal character, but another non-obvious consequence of the reciprocal transformation is that it takes a distribution that is completely characterized by its first two moments into a
new distribution with Cauchy-like tails, for which none of the integer moments exist.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="separator" style="clear: both;
text-align: center;"><a href="http://4.bp.blogspot.com/-ihUpKC5yNpg/Tr1xtl2PFDI/AAAAAAAAAFQ/03fpQJy8IIc/s1600/recipfig01.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border=
"0" height="319" nda="true" src="http://4.bp.blogspot.com/-ihUpKC5yNpg/Tr1xtl2PFDI/AAAAAAAAAFQ/03fpQJy8IIc/s320/recipfig01.png" width="320" /></a></div><div class="MsoNormal" style="margin: 0in 0in
0pt;"><span style="mso-tab-count: 1;"> </span></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class=
"MsoNormal" style="margin: 0in 0in 0pt;">The implications of the reciprocal transformation for many other distributions are equally non-obvious.<span style="mso-spacerun: yes;"> </span>For
example, both the badly-behaved Cauchy distribution (no moments exist) and the well-behaved lognormal distribution (all moments exist, but interestingly, do not completely characterize the
distribution, as I have discussed in a previous post) are invariant under the reciprocal transformation.<span style="mso-spacerun: yes;"> </span>Also, applying the reciprocal transformation to
the long-tailed Pareto type I distribution (which exhibits few or no finite moments, depending on its tail decay rate) yields a beta distribution, all of whose moments are finite.<span style=
"mso-spacerun: yes;"> </span>Finally, it is worth noting that the invariance of the Cauchy distribution under the reciprocal transformation lies at the heart of the following result, presented
in the book <a href="http://www.amazon.com/Continuous-Univariate-Distributions-Probability-Statistics/dp/0471584959/ref=sr_1_2?s=books&ie=UTF8&qid=1321042772&sr=1-2">Continuous Univariate
Distributions</a> by Johnson, Kotz, and Balakrishnan (Volume 1, 2<sup>nd</sup> edition, Wiley, 1994, page 319).<span style="mso-spacerun: yes;"> </span>They note that if the density of
<em>x</em> is positive, continuous, and differentiable at <em>x = 0</em> – all true for the Gaussian case – the distribution of the harmonic mean of <em>N</em> samples approaches a Cauchy limit as
<em>N</em> becomes infinitely large.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">As noted above, the key issue responsible
for the pathological behavior of the reciprocal transformation is the question of whether the original data distribution exhibits nonzero probability of taking on values within a neighborhood around
zero.<span style="mso-spacerun: yes;"> </span>In particular, note that if <em>x</em> can only assume values larger than some positive lower limit <em>L</em>, it follows that <em>1/x</em>
necessarily lies between <em>0</em> and <em>1/L</em>, which is enough to guarantee that all moments of the transformed distribution exist.<span style="mso-spacerun: yes;"> </span>For the
Gaussian distribution, even if the mean is large enough and the standard deviation is small enough that the probability of observing values less than some limit <em>L > 0</em> is negligible, the
fact that this probability is not <i style="mso-bidi-font-style: normal;">zero</i> means that the moments of <i style="mso-bidi-font-style: normal;">any</i> reciprocally-transformed Gaussian
distribution are not finite.<span style="mso-spacerun: yes;"> </span>As a practical matter, however, reciprocal transformations and related characterizations – like harmonic means and ratios –
do become better-behaved as the probability of observing values near zero becomes negligibly small.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style=
"margin: 0in 0in 0pt;">To see this point, consider two reciprocally-transformed Gaussian examples.<span style="mso-spacerun: yes;"> </span>The first is the one considered above: the reciprocal
transformation of a Gaussian random variable with mean 1 and standard deviation 2.<span style="mso-spacerun: yes;"> </span>In this case, the probability that <em>x</em> assumes values smaller
than or equal to zero is non-negligible.<span style="mso-spacerun: yes;"> </span>Specifically, this probability is simply the cumulative distribution function for the distribution evaluated at
zero, easily computed in R as approximately 31%:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">> pnorm(0,mean=1,sd=2)</
div><div class="MsoNormal" style="margin: 0in 0in 0pt;">[1] 0.3085375</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">In
contrast, for a Gaussian random variable with mean 1 and standard deviation 0.1, the corresponding probability is negligibly small:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></
div><div class="MsoNormal" style="margin: 0in 0in 0pt;">> pnorm(0,mean=1,sd=0.1)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">[1] 7.619853e-24</div><div class="MsoNormal" style=
"margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">If we consider the harmonic means of these two examples, we see that the first one is horribly behaved, as all
of the results presented here would lead us to expect.<span style="mso-spacerun: yes;"> </span>In fact, the <strong>qqPlot</strong> command in the <strong>car</strong> package in <em>R </
em>allows us to compute quantile-quantile plots for the Student’s <em>t</em>-distribution with one degree of freedom, corresponding to the Cauchy distribution, yielding the plot shown below.<span
style="mso-spacerun: yes;"> </span>The Cauchy-like tail behavior expected from the results presented by Johnson, Kotz and Balakrishnan is seen clearly in this Cauchy Q-Q plot, constructed from
1000 harmonic means, each computed from statistically independent samples drawn from a Gaussian distribution with mean 1 and standard deviation 2.<span style="mso-spacerun: yes;"> </span>The
fact that almost all of the observations fall within the – very wide – 95% confidence interval around the reference line suggests that the Cauchy tail behavior is appropriate here.</div><div class=
"MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-tQbQfuhvKY4/Tr1y6ipHrTI/AAAAAAAAAFY/BWQUNWtTVbg
/s1600/recipfig02.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="319" nda="true" src="http://2.bp.blogspot.com/-tQbQfuhvKY4/Tr1y6ipHrTI/AAAAAAAAAFY/
BWQUNWtTVbg/s320/recipfig02.png" width="320" /></a></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-tab-count: 1;"> &
nbsp; </span></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">To further confirm this point, compare the corresponding
normal Q-Q plot for the same sequence of harmonic means, shown below.<span style="mso-spacerun: yes;"> </span>There, the extreme non-Gaussian character of these harmonic means is readily
apparent from the pronounced outliers evident in both the upper and lower tails.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="separator" style="clear: both;
text-align: center;"><a href="http://2.bp.blogspot.com/-II9KLHCeIYw/Tr1zH9K003I/AAAAAAAAAFg/14mIAISzn4U/s1600/recipfig03.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border=
"0" height="319" nda="true" src="http://2.bp.blogspot.com/-II9KLHCeIYw/Tr1zH9K003I/AAAAAAAAAFg/14mIAISzn4U/s320/recipfig03.png" width="320" /></a></div><div class="MsoNormal" style="margin: 0in 0in
0pt;"><span style="mso-tab-count: 1;"> </span></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class=
"MsoNormal" style="margin: 0in 0in 0pt;">In marked contrast, for the second example with the mean of 1 as before but the much smaller standard deviation of 0.1, the harmonic mean is much better
behaved, as the normal Q-Q plot below illustrates.<span style="mso-spacerun: yes;"> </span>Specifically, this plot is identical in construction to the one above, except it was computed from
samples drawn from the second data distribution.<span style="mso-spacerun: yes;"> </span>Here, most of the computed harmonic mean values fall within the 95% confidence limits around the
Gaussian reference line, suggesting that it is not unreasonable in practice to regard these values as approximately normally distributed, in spite of the pathologies of the reciprocal transformation.
</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-9kCbnML55mE/Tr1zVSWL8kI/
AAAAAAAAAFo/aGD2h8oow4c/s1600/recipfig04.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="319" nda="true" src="http://2.bp.blogspot.com/-9kCbnML55mE/
Tr1zVSWL8kI/AAAAAAAAAFo/aGD2h8oow4c/s320/recipfig04.png" width="320" /></a></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-tab-count: 1;"> &
nbsp; </span></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">One reason the reciprocal
transformation is important in practice – particularly in connection with the Gaussian distribution – is that the desire to characterize ratios of uncertain quantities does arise from time to time.
<span style="mso-spacerun: yes;"> </span>In particular, if we are interested in characterizing the ratio of two averages, the Central Limit Theorem would lead us to expect that, at least
approximately, this ratio should behave like the ratio of two Gaussian random variables.<span style="mso-spacerun: yes;"> </span>If these component averages are statistically independent, the
expected value of the ratio can be re-written as the product of the expected value of the numerator average and the expected value of the reciprocal of the denominator average, leading us directly to
the reciprocal Gaussian transformation discussed here.<span style="mso-spacerun: yes;"> </span>In fact, if these two averages are both zero mean, it is a standard result that the ratio has a
Cauchy distribution (this result is presented in the same discussion from Johnson, Kotz and Balakrishnan noted above).<span style="mso-spacerun: yes;"> </span>As in the second harmonic mean
example presented above, however, it turns out to be true that if the mean and standard deviation of the denominator variable are such that the probability of a zero or negative denominator is
negligible, the distribution of the ratio may be approximated reasonably well as Gaussian.<span style="mso-spacerun: yes;"> </span>A very readable and detailed discussion of this fact is given
in the paper by George Marsaglia in the May 2006 issue of <a href="http://www.jstatsoft.org/v16/i04">Journal of Statistical Software</a>.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br
/></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">Finally, it is important to note that the “reciprocally-transformed Gaussian distribution” I have been discussing here is <i style=
"mso-bidi-font-style: normal;">not</i> the same as the <em>inverse Gaussian distribution</em>, to which Johnson, Kotz and Balakrishnan devote a 39-page chapter (Chapter 15).<span style="mso-spacerun:
yes;"> </span>That distribution takes only positive values and exhibits moments of all orders, both positive and negative, and as a consequence, it has the interesting characteristic that it
remains well-behaved under reciprocal transformations, in marked contrast to the Gaussian case.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style=
"margin: 0in 0in 0pt;"><br /></div>Ron Pearson (aka TheNoodleDoodler)http://www.blogger.com/profile/
15693640298594791682noreply@blogger.com0tag:blogger.com,1999:blog-9179325420174899779.post-32075753837295733872011-10-23T13:31:00.000-07:002011-10-23T13:31:47.692-07:00<div class="MsoNormal" style=
"margin: 0in 0in 0pt;">In my last few posts, I have been discussing some of the consequences of the slow decay rate of the tail of the Pareto type I distribution, along with some other, closely
related notions, all in the context of continuously distributed data.<span style="mso-spacerun: yes;"> </span>Today’s post considers the Zipf distribution for discrete data, which has come to
be extremely popular as a model for phenomena like word frequencies, city sizes, or sales rank data, where the values of these quantities associated with randomly selected samples can vary by many
orders of magnitude.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">More specifically, the Zipf distribution is defined by a
probability p<sub>i</sub> of observing the i<sup>th</sup> element of an infinite sequence of objects in a single random draw from that sequence, where the probability is given by:</div><div class=
"MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-tab-count: 1;"> &
nbsp; </span></div><blockquote>p<sub>i</sub> = A/i<sup>a</sup></blockquote><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">Here, <i
style="mso-bidi-font-style: normal;">a</i> is a positive number greater than 1 that determines the rate of the distribution’s tail decay, and <i style="mso-bidi-font-style: normal;">A</i> is a
normalization constant, chosen so that these probabilities sum to 1.<span style="mso-spacerun: yes;"> </span>Like the continuous-valued Pareto type I distribution, the Zipf distribution
exhibits a “long tail,” meaning that its tail decays slowly enough that in a random sample of objects <i style="mso-bidi-font-style: normal;">O<sub>i</sub></i> drawn from a Zipf distribution, some
very large values of the index <i style="mso-bidi-font-style: normal;">i</i> will be observed, particularly for relatively small values of the exponent <i style="mso-bidi-font-style: normal;">a</i>.
<span style="mso-spacerun: yes;"> </span>In one of the earliest and most common applications of the Zipf distribution, the objects considered represent words in a document and <i style=
"mso-bidi-font-style: normal;">i</i> represents their rank, ranging from most frequent (for <i style="mso-bidi-font-style: normal;">i = 1</i>) to rare (for large <i style="mso-bidi-font-style:
normal;">i</i> ).<span style="mso-spacerun: yes;"> </span>In a more business-oriented application, the objects might be products for sale (e.g., books listed on Amazon), with the index <i
style="mso-bidi-font-style: normal;">i</i> corresponding to their sales rank.<span style="mso-spacerun: yes;"> </span>For a fairly extensive collection of references to many different
applications of the Zipf distribution, the website (originally) from <a href="http://www.nslij-genetics.org/wli/zipf/index.html">Rockefeller University</a> is an excellent source.</
div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">In <a href="http://www.amazon.com/s/ref=nb_sb_ss_i_1_15?url=
search-alias%3Dstripbooks&field-keywords=exploring+data+in+engineering.+the+sciences.+and+medicine&sprefix=Exploring+Data+">Exploring Data in Engineering, the Sciences, and Medicine</a>, I
give a brief discussion of both the Zipf distribution and the closely related Zipf-Mandelbrot distribution discussed by Benoit Mandelbrot in his book <a href="http://www.amazon.com/s/ref=
nb_sb_ss_i_0_12?url=search-alias%3Dstripbooks&field-keywords=the+fractal+geometry+of+nature&sprefix=the+fractal+">The Fractal Geometry of Nature</a>.<span style="mso-spacerun: yes;"> </
span>The probabilities defining this distribution may be parameterized in several ways, and the one given in <em>Exploring Data</em> is:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">
<br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-tab-count: 1;"> </span></div><blockquote>p<sub>i</
sub> = A/(1+Bi)<sup>a</sup></blockquote><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">where again <i style="mso-bidi-font-style:
normal;">a</i> is an exponent that determines the rate at which the tail of the distribution decays, and <i style="mso-bidi-font-style: normal;">B</i> is a second parameter with a value that is
strictly positive but no greater than 1.<span style="mso-spacerun: yes;"> </span>For both the Zipf distribution and the Zipf-Mandelbrot distribution, the exponent <i style="mso-bidi-font-style:
normal;">a</i> must be greater than 1 for the distribution to be well-defined; it must be greater than 2 for the mean to be finite, and greater than 3 for the variance to be finite.</div>
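<div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">As a small numerical aside (my own illustration, not from <em>Exploring Data</em>), the normalization constant A in the probabilities above can be approximated in <em>R</em> by truncating the infinite sum at a large maximum rank; the truncation point imax below is an arbitrary choice:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><blockquote><div class="MsoNormal" style="margin: 0in 0in 0pt;">> a = 2.5; B = 0.2; imax = 1e6   # Case 2 parameters; imax is arbitrary</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">> w = 1/(1 + B*(1:imax))^a   # unnormalized probabilities</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">> A = 1/sum(w)   # approximate normalization constant</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">> p = A*w   # p[i] approximates A/(1+B*i)^a</div></blockquote>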
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">So far, I have been unable to find an <em>R</em> package that supports the
generation of random samples drawn from the Zipf distribution, but the package <strong>zipfR</strong> includes the command <strong>rlnre</strong>, which generates random samples drawn from the
Zipf-Mandelbrot distribution.<span style="mso-spacerun: yes;"> </span>As I noted, this distribution can be parameterized in several different ways and, as Murphy’s law would have it, the
<strong>zipfR</strong> parameterization is not the same as the one presented above and discussed in <em>Exploring Data</em>.<span style="mso-spacerun: yes;"> </span>Fortunately, the conversion
between these parameters is simple.<span style="mso-spacerun: yes;"> </span>The <strong>zipfR</strong> package defines the distribution in terms of a parameter <strong>alpha</strong> that must
lie strictly between 0 and 1, and a second parameter <strong>B</strong> that I will call <em>B<sub>zipfR</sub></em> to avoid confusion with the parameter <em>B</em> in the above definition.<span
style="mso-spacerun: yes;"> </span>These parameters are related by:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">
<span style="mso-tab-count: 1;"> </span></div><blockquote>alpha = 1/a<span style="mso-spacerun: yes;"> </span>and
<span style="mso-spacerun: yes;"> </span>B<sub>zipfR</sub> = (a-1) B</blockquote><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style=
"margin: 0in 0in 0pt;">Since the <i style="mso-bidi-font-style: normal;">a</i> parameter (and thus the <strong>alpha</strong> parameter in the <strong>zipfR</strong> package) determines the tail
decay rate of the distribution, it is of the most interest here, and the rest of this post will focus on three examples: a = 1.5 (alpha = 2/3), for which both the distribution’s mean and variance are
infinite; a = 2.5 (alpha = 2/5), for which the mean is finite but the variance is not; and a = 3.5 (alpha = 2/7), for which both the mean and variance are finite.<span style="mso-spacerun: yes;">&
nbsp; </span>The value of the parameter <em>B</em> in the <em>Exploring Data</em> definition of the distribution will be fixed at 0.2 in all of these examples, corresponding to values of <em>B<sub>
zipfR</sub></em> = 0.1, 0.3, and 0.5 for the three examples considered here.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">
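Since the conversion is simple, it can be wrapped in a small helper function (the name <strong>toZipfR</strong> is my own choice); evaluating it for the three cases recovers the parameter values just listed:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><blockquote><div class="MsoNormal" style="margin: 0in 0in 0pt;">> toZipfR = function(a, B) list(alpha = 1/a, BzipfR = (a - 1)*B)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">> toZipfR(1.5, 0.2)   # alpha = 2/3, BzipfR = 0.1</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">> toZipfR(2.5, 0.2)   # alpha = 2/5, BzipfR = 0.3</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">> toZipfR(3.5, 0.2)   # alpha = 2/7, BzipfR = 0.5</div></blockquote><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">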
To generate Zipf-Mandelbrot random samples, the <strong>zipfR</strong> package uses the procedure <strong>rlnre</strong> in conjunction with the procedure <strong>lnre </strong>(the abbreviation&
nbsp;“lnre”<span style="mso-spacerun: yes;"> stands for “large number of rare events” and it represents a class of data models that includes the Zipf-Mandelbrot distribution). </span>
Specifically, to generate a random sample of size N = 100 for the first case considered here, the following <em>R</em> code is executed:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br
/></div><blockquote><div class="MsoNormal" style="margin: 0in 0in 0pt;">> library(zipfR)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">> ZM = lnre("zm", alpha = 2/3, B = 0.1)</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">> zmsample = rlnre(ZM, n=100)</div></blockquote><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style=
"margin: 0in 0in 0pt;">The first line loads the <strong>zipfR</strong> library (which must first be installed, of course, using the <strong>install.packages</strong> command), the second line invokes
the <strong>lnre</strong> command to set up the distribution with the desired parameters, and the last line invokes the <strong>rlnre</strong> command to generate 100 random samples from this
distribution.<span style="mso-spacerun: yes;"> </span>(As with all <em>R</em> random number generators, the <strong>set.seed</strong> command should be used first to initialize the random
number generator seed if you want to get repeatable results; for the results presented here, I used <strong>set.seed(101)</strong>.)<span style="mso-spacerun: yes;"> </span>The sample returned
by the <strong>rlnre</strong> command is a vector of 100 observations, which have the “factor” data type, although their designations are numeric (think of the factor value “1339” as meaning “1
sample of object number 1339”).<span style="mso-spacerun: yes;"> </span>In the results I present here, I have converted these factor responses to numerical ones so I can interpret them as
numerical ranks.<span style="mso-spacerun: yes;"> </span>This conversion is a little subtle: simply converting from factor to numeric values via something like “<strong>zmnumeric = as.numeric
(zmsample)</strong>” almost certainly doesn’t give you what you want: this will convert the first-occurring factor value (which has a numeric label, say “1339”) into the number 1, convert the
second-occurring value (since this is a random sequence, this might be “73”) into the number 2, etc.<span style="mso-spacerun: yes;"> </span>To get what you want (e.g., the labels “1339” and
“73” assigned to the numbers 1339 and 73, respectively), you need to first convert the factors in <strong>zmsample</strong> into characters and then convert these characters into numeric values:</
div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-tab-count: 1;"> &
nbsp; </span></div><blockquote>zmnumeric = as.numeric(as.character(zmsample))</blockquote><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal"
style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">The three plots below show random samples drawn from each of the three Zipf-Mandelbrot distributions
considered here.<span style="mso-spacerun: yes;"> </span>In all cases, the y-axis corresponds to the number of times the object labeled <em>i </em>was observed in a random sample of size N =
100 drawn from the distribution with the indicated exponent.<span style="mso-spacerun: yes;"> </span>Since the range of these indices can be quite large in the slowly-decaying members of the
Zipf-Mandelbrot distribution family, the plots are drawn with logarithmic x-axes, and to facilitate comparisons, the x-axes have the same range in all three plots, as do the y-axes.<span style=
"mso-spacerun: yes;"> </span>In all three plots, object i = 1 occurs most often – about a dozen times in the top plot, two dozen times in the middle plot, and three dozen times in the bottom
plot – and those objects with larger indices occur less frequently.<span style="mso-spacerun: yes;"> </span>The major difference between these three examples lies in the largest indices of the
objects seen in the samples: we never see an object with index greater than 50 in the bottom plot, we see only two such objects in the middle plot, while more than a third of the objects in the top
plot meet this condition, with the most extreme object having index i = 115,116.<span style="mso-spacerun: yes;"> </span></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div>
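<div class="MsoNormal" style="margin: 0in 0in 0pt;">These count-versus-index plots can be reproduced along the following lines (a sketch; plotting details such as axis ranges are my own choices):</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><blockquote><div class="MsoNormal" style="margin: 0in 0in 0pt;">> library(zipfR)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">> set.seed(101)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">> ZM = lnre("zm", alpha = 2/3, B = 0.1)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">> zmnumeric = as.numeric(as.character(rlnre(ZM, n = 100)))</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">> counts = table(zmnumeric)   # times each object index was observed</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">> plot(as.numeric(names(counts)), as.vector(counts), log = "x", xlab = "Object index i", ylab = "Count")</div></blockquote><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div>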
<div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-4IkBCpRbZdk/TqRtqcC60pI/AAAAAAAAAEg/0NJarxwlteo/s1600/zipfig00.png" imageanchor="1" style=
"margin-left: 1em; margin-right: 1em;"><img border="0" height="319" rda="true" src="http://1.bp.blogspot.com/-4IkBCpRbZdk/TqRtqcC60pI/AAAAAAAAAEg/0NJarxwlteo/s320/zipfig00.png" width="320" /></a></
div><div class="MsoNormal" style="margin: 0in 0in 0pt 1in; mso-list: l0 level1 lfo1; tab-stops: list 1.0in; text-indent: -0.5in;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br
/></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">As in the case of the Pareto type I distributions I discussed in several previous posts – which may be regarded as the continuous analog of
the Zipf distribution – the mean is generally not a useful characterization for the Zipf distribution.<span style="mso-spacerun: yes;"> </span>This point is illustrated in the boxplot
comparison presented below, which summarizes the means computed from 1000 statistically independent random samples drawn from each of the three distributions considered here, where the object labels
have been converted to numerical values as described above.<span style="mso-spacerun: yes;"> </span>Thus, the three boxplots on the left represent the means – note the logarithmic scale on the
y-axis – of these index values <i style="mso-bidi-font-style: normal;">i</i> generated for each random sample.<span style="mso-spacerun: yes;"> </span>The extreme variability seen for Case 1 (a
= 1.5) reflects the fact that neither the mean nor the variance is finite for this case, and the consistent reduction in the range of variability for Cases 2 (a = 2.5, finite mean but infinite
variance) and 3 (a = 3.5, finite mean and variance) reflects the “shortening tail” of this distribution with increasing exponent <i style="mso-bidi-font-style: normal;">a</i>.<span style=
"mso-spacerun: yes;"> </span>As I discussed in my last post, a better characterization than the mean for distributions like this is the “95% tail length,” corresponding to the 95% sample
quantile. Boxplots summarizing these values for the three distributions considered here are shown to the right of the dashed vertical line in the plot below.<span style="mso-spacerun: yes;"> </
span>In each case, the range of variation seen here is much less extreme for the 95% tail length than it is for the mean, supporting the idea that this is a better characterization for data described
by Zipf-like discrete distributions.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://
3.bp.blogspot.com/-1efDOBGWbGM/TqRuBRTgeMI/AAAAAAAAAEo/GtG7pBgZVIY/s1600/zipfig01a.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="319" rda="true" src="http:
//3.bp.blogspot.com/-1efDOBGWbGM/TqRuBRTgeMI/AAAAAAAAAEo/GtG7pBgZVIY/s320/zipfig01a.png" width="320" /></a></div><div class="MsoNormal" style="margin: 0in 0in 0pt 1in; mso-list: l0 level1 lfo1;
tab-stops: list 1.0in; text-indent: -0.5in;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">Other alternatives to the
(arithmetic) mean that I discussed in conjunction with the Pareto type I distribution were the sample median, the geometric mean, and the harmonic mean.<span style="mso-spacerun: yes;"> </span>
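</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">For a positive numeric sample x, these four characterizations can be computed directly in <em>R</em> (a minimal sketch):</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><blockquote><div class="MsoNormal" style="margin: 0in 0in 0pt;">> mean(x)   # arithmetic mean</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">> median(x)   # sample median</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">> exp(mean(log(x)))   # geometric mean</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">> 1/mean(1/x)   # harmonic mean</div></blockquote><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">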
The plot below compares these four characterizations for 1000 random samples, each of size N = 100, drawn from the Zipf-Mandelbrot distribution with a = 3.5 (the third case), for which the mean is
well-defined.<span style="mso-spacerun: yes;"> </span>Even here, it is clear that the mean is considerably more variable than these other three alternatives.</div><div class="MsoNormal" style=
"margin: 0in 0in 0pt;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-iFLdn8T5avk/TqRuODcIExI/AAAAAAAAAEw/sX0yQP6zmzM/s1600/
zipfig02.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="319" rda="true" src="http://3.bp.blogspot.com/-iFLdn8T5avk/TqRuODcIExI/AAAAAAAAAEw/sX0yQP6zmzM/s320/
zipfig02.png" width="320" /></a></div><div class="MsoNormal" style="margin: 0in 0in 0pt 1in; mso-list: l0 level1 lfo1; tab-stops: list 1.0in; text-indent: -0.5in;"><br /></div><div class="MsoNormal"
style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">Finally, the plot below shows boxplot comparisons of these alternative characterizations – the median, the
geometric mean, and the harmonic mean – for all three of the distributions considered here.<span style="mso-spacerun: yes;"> </span>Not surprisingly, Case 1 (a = 1.5) exhibits the largest
variability seen for all three characterizations, but the harmonic mean is much more consistent for this case than either the geometric mean or the median.<span style="mso-spacerun: yes;"> </
span>In fact, the same observation holds – although less dramatically – for Case 2 (a = 2.5), and the harmonic mean appears more consistent than the geometric mean for all three cases.<span style=
"mso-spacerun: yes;"> </span>This observation is particularly interesting in view of the connection between the harmonic mean and the reciprocal transformation, which I will discuss in more
detail next time.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-swrGBTcTcNs/
TqRubBMbQ5I/AAAAAAAAAE4/8tNLF8W1G4Y/s1600/zipfig03.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="319" rda="true" src="http://4.bp.blogspot.com/-swrGBTcTcNs
/TqRubBMbQ5I/AAAAAAAAAE4/8tNLF8W1G4Y/s320/zipfig03.png" width="320" /></a></div><div class="MsoNormal" style="margin: 0in 0in 0pt 1in; mso-list: l0 level1 lfo1; tab-stops: list 1.0in; text-indent:
-0.5in;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div>Ron Pearson (aka TheNoodleDoodler), 2011-09-28<div class="MsoNormal" style=
"margin: 0in 0in 0pt;">In response to my last post, “The Long Tail of the Pareto Distribution,” Neil Gunther had the following comment:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br />
</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-tab-count: 1;"> </span></div><blockquote>“<span style=
"color: #333333;">Unfortunately, you've fallen into the trap of using the ‘long tail’ misnomer. If you think about it, it can't possibly be the length of the tail that sets distributions like Pareto
and Zipf apart; even the negative exponential and Gaussian have <i>infinitely</i> long tails.”</span></blockquote><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class=
"MsoNormal" style="margin: 0in 0in 0pt;">He goes on to say that the relevant concept is the “width” or the “weight” of the tails that is important, and that a more appropriate characterization of
these “Long Tails” would be “heavy-tailed” or “power-law” distributions.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">
Neil’s comment raises an important point: while the term “long tail” appears a lot in both the on-line and hard-copy literature, it is often somewhat ambiguously defined.<span style="mso-spacerun:
yes;"> </span>For example, in his book, <a href="http://www.amazon.com/Long-Tail-Revised-Updated-Business/dp/1401309666/ref=sr_1_1?s=books&ie=UTF8&qid=1317246600&sr=1-1"><em>The
Long Tail</em></a>, Chris Anderson offers the following description (page 10):</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;
"><span style="mso-tab-count: 1;"> </span></div><blockquote>“In statistics, curves like that are called ‘long-tailed distributions’
because the tail of the curve is very long relative to the head.”</blockquote><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">The
difficulty with this description is that it is somewhat ambiguous since it says nothing about how to measure “tail length,” forcing us to adopt our own definitions.<span style="mso-spacerun: yes;">&
nbsp; </span>It is clear from Neil’s comments that the definition he adopts for “tail length” is the width of the distribution’s support set.<span style="mso-spacerun: yes;"> </span>Under this
definition, the notion of a “long-tailed distribution” is of extremely limited utility: the situation is exactly as Neil describes it, with “long-tailed distributions” corresponding to any
distribution with unbounded support, including both distributions like the Gaussian and gamma distribution where the mean is a reasonable characterization, and those like the Cauchy and Pareto
distribution where the mean doesn’t even exist.<span style="mso-spacerun: yes;"> </span></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style=
"margin: 0in 0in 0pt;">The situation is analogous to that of confidence intervals, which characterize the uncertainty inherited by any characterization computed from a collection of uncertain (i.e.,
random) data values.<span style="mso-spacerun: yes;"> </span>As a specific example consider the mean: the <em>sample mean</em> is the arithmetic average of <em>N</em> observed data samples, and
it is generally intended as an estimate of the <em>population mean</em>, defined as the first moment of the data distribution.<span style="mso-spacerun: yes;"> </span>A <em>q%
confidence interval</em> around the sample mean is an interval that contains the population mean with probability at least <em>q%</em>.<span style="mso-spacerun: yes;"> </span>These
intervals can be computed in various ways for different data characterizations, but the key point here is that they are widely used in practice, with the most popular choices being the 90%, 95%
and 99% confidence intervals, which necessarily become wider as this percentage <em>q</em> increases.<span style="mso-spacerun: yes;"> </span>(For a more detailed discussion of
confidence intervals, refer to Chapter 9 of <a href="http://www.amazon.com/Exploring-Data-Engineering-Sciences-Medicine/dp/0195089650/ref=sr_1_1?s=books&ie=UTF8&qid=1317246817&sr=1-1#_">
Exploring Data in Engineering, the Sciences, and Medicine</a>.)<span style="mso-spacerun: yes;"> </span>We can, in principle, construct 100% confidence intervals, but this leads us directly
back to Neil’s objection: the 100% confidence interval for the mean is the entire support set of the distribution (e.g., for the Gaussian distribution, this 100% confidence interval is the whole real
line, while for any gamma distribution, it is the set of all positive numbers).<span style="mso-spacerun: yes;"> </span>These observations suggest the following notion of “tail length”
that addresses Neil’s concern while retaining the essential idea of interest in the business literature: we can compare the “q% tail length” of different distributions for some <em>q</em> less than
100.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">In particular, consider the case of J-shaped distributions, defined as
those like the Pareto type I distribution whose density p(x) decays monotonically with increasing x, approaching zero as x goes to infinity.<span style="mso-spacerun: yes;"> </span>The
plot below shows two specific examples to illustrate the idea: the solid line corresponds to the (shifted) exponential distribution:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></
div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-tab-count: 1;">                                            </span>p(x) = e<sup>-(x-1)</sup></div><div
class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">for all x greater than or equal to 1 and zero otherwise, while the dotted line
represents the Pareto type I distribution with location parameter <em>k = 1</em> and shape parameter <em>a = 0.5</em> discussed in my last post.<span style="mso-spacerun: yes;"> </span>
Initially, as x increases from 1, the exponential density is greater than the Pareto density, but for x larger than about 3.5, the opposite is true: the exponential distribution rapidly becomes much
smaller, reflecting its much more rapid rate of tail decay.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a
href="http://1.bp.blogspot.com/-WtujfYFLLtw/ToOVGk0u85I/AAAAAAAAAEM/cjJl9R66-hk/s1600/LongUselessFig01.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="319"
kca="true" src="http://1.bp.blogspot.com/-WtujfYFLLtw/ToOVGk0u85I/AAAAAAAAAEM/cjJl9R66-hk/s320/LongUselessFig01.png" width="320" /></a></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br />
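</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">A plot like the one above can be sketched as follows (the grid of x values and the line types are my own choices; <strong>dparetoI</strong> is the density function from the <strong>VGAM</strong> package, with the same “scale” and “shape” arguments as <strong>qparetoI</strong>):</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><blockquote><div class="MsoNormal" style="margin: 0in 0in 0pt;">> library(VGAM)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">> x = seq(1, 10, length = 200)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">> plot(x, exp(-(x - 1)), type = "l", ylab = "p(x)")   # shifted exponential density</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">> lines(x, dparetoI(x, scale = 1, shape = 0.5), lty = 2)   # Pareto type I, k = 1, a = 0.5</div></blockquote><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br />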
</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">For these distributions, define the q% tail length to be the distance from
the minimum possible value of x (the “head” of the distribution; here, x = 1) to the point in the tail where the cumulative probability reaches q% (i.e., the value x<sub>q</sub> where x < x<sub>q
</sub> with probability q%). <span style="mso-spacerun: yes;"> </span>In practical terms, the q% tail length tells us how far out we have to go in the tail to account for q% of the possible
cases.<span style="mso-spacerun: yes;"> </span>In <em>R</em>, this value is easy to compute using the <em>quantile</em> function included in most families of available distribution functions.
<span style="mso-spacerun: yes;"> </span>As a specific example, for the Pareto type I distribution, the function <strong>qparetoI</strong> in the <strong>VGAM</strong> package gives us the
desired quantiles for the distribution with specified values of the parameters <em>k</em> (designated “scale” in the <strong>qparetoI</strong> call) and <em>a</em> (designated “shape” in the <strong>
qparetoI</strong> call).<span style="mso-spacerun: yes;"> </span>Thus, for the case <em>k = 1</em> and <em>a = 0.5</em> (i.e., the dotted curve in the above plot), the “90% tail length” is
given by:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">> qparetoI(p=0.9,scale=1,shape=0.5)</div><div class="MsoNormal"
style="margin: 0in 0in 0pt;">[1] 100</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style=
"margin: 0in 0in 0pt;">For comparison, the corresponding shifted exponential distribution has the 90% tail length given by:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div
class="MsoNormal" style="margin: 0in 0in 0pt;">> 1 + qexp(p = 0.9)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">[1] 3.302585</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">&
gt;</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">(Note that here, I added 1 to the exponential quantile to account for
the shift in its domain from “all positive numbers” – the domain for the standard exponential distribution – to the shifted domain “all numbers greater than 1”.)<span style="mso-spacerun: yes;">&
nbsp; </span>Since these 90% tail lengths differ by a factor of 30, they provide a sound basis for declaring the Pareto type I distribution to be “longer tailed” than the exponential distribution.</
div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">These results also provide a useful basis for assessing the influence of the
decay parameter a for the Pareto distribution.<span style="mso-spacerun: yes;"> </span>As I noted last time, two of the examples I considered did not have finite means (<em>a = 0.5</em> and
<em>1.0</em>), and none of the four had finite variances (i.e., also <em>a = 1.5</em> and <em>2.0</em>), rendering moment characterizations like the mean and standard deviation fundamentally
useless.<span style="mso-spacerun: yes;"> </span>Comparing the 90% tail lengths for these distributions, however, leads to the following results:</div><div class="MsoNormal" style="margin: 0in
0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-tab-count: 1;"> </span><em>a = 0.5:</em>
90% tail length = 100.000</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-tab-count: 1;"> </span><em>a =
1.0:</em> 90% tail length =<span style="mso-spacerun: yes;"> </span>10.000</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-tab-count: 1;"> &
nbsp; </span><em>a = 1.5:</em> 90% tail length =<span style="mso-spacerun: yes;"> </span>4.642</div><div class="MsoNormal" style=
"margin: 0in 0in 0pt;"><span style="mso-tab-count: 1;"> </span><em>a = 2.0:</em> 90% tail length =<span style="mso-spacerun: yes;">&
nbsp; </span>3.162</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">It is clear from these results that the
shape parameter <em>a</em> has a dramatic effect on the 90% tail length (in fact, on the q% tail length for any <em>q</em> less than 100).<span style="mso-spacerun: yes;"> </span>Further, note
that the 90% tail length for the Pareto type I distribution with <em>a = 2.0</em> is actually a little bit shorter than that for the exponential distribution.<span style="mso-spacerun: yes;">
</span>If we move further out into the tail, however, this situation changes.<span style="mso-spacerun: yes;"> </span>As a specific example, suppose we compare the 98% tail lengths. For the
exponential distribution, this yields the value 4.912, while for the four Pareto shape parameters we have the following results:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-tab-count: 1;"> </span><em>a = 0.5:</em> 98% tail length =
2,500.000</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-tab-count: 1;"> </span><em>a = 1.0:</em> 98% tail
length =<span style="mso-spacerun: yes;"> </span>50.000</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-tab-count: 1;">
</span><em>a = 1.5:</em> 98% tail length =<span style="mso-spacerun: yes;"> </span>13.572</div><div class="MsoNormal" style=
"margin: 0in 0in 0pt;"><span style="mso-tab-count: 1;"> </span><em>a = 2.0:</em> 98% tail length =<span style="mso-spacerun: yes;">&
nbsp; </span>7.071</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">This value (i.e., the
98% tail length) seems a particularly appropriate choice to include here since in his book, <em>The Long Tail</em>, Chris Anderson notes that his original presentations on the topic were entitled
“The 98% Rule,” reflecting the fact that he was explicitly considering how far out you had to go into the tail of a distribution of goods (e.g., the books for sale by Amazon) to account for 98% of
the sales.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">Since this discussion originally began with the question, “when are
averages useless?” it is appropriate to note that, in contrast to the much better-known average, the “q% tail length” considered here is well-defined for <em>any </em>proper distribution.<span style=
"mso-spacerun: yes;"> </span>As the examples discussed here demonstrate, this characterization also provides a useful basis for quantifying the “Long Tail” behavior that is of increasing
interest in business applications like Internet marketing.<span style="mso-spacerun: yes;"> </span>Thus, if we adopt this measure for any <em>q</em> value less than 100%, the answer to the
title question of this post is, “No: The Long Tail is a useful concept.”</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">The
downside of this minor change is that – as the results shown here illustrate – the results obtained using the q% tail length depend on the value of <em>q</em> we choose.<span style=
"mso-spacerun: yes;"> </span>In my next post, I will explore the computational issues associated with that choice.</div>Ron Pearson (aka TheNoodleDoodler), 2011-09-17<div class="MsoNormal" style=
"margin: 0in 0in 0pt;">In my last two posts, I have discussed cases where the mean is of little or no use as a data characterization.<span style="mso-spacerun: yes;"> </span>One of the specific
examples I discussed last time was the case of the Pareto type I distribution, for which the density is given by:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class=
"MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-tab-count: 2;">
</span>p(x) = ak<sup>a</sup>/x<sup>a+1</sup></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">defined for all <i
style="mso-bidi-font-style: normal;">x > k</i>, where <i style="mso-bidi-font-style: normal;">k</i> and <i style="mso-bidi-font-style: normal;">a</i> are numeric parameters that define the
distribution.<span style="mso-spacerun: yes;"> </span>In the example I discussed last time, I considered the case where a = 1.5, which exhibits a finite mean (specifically, the mean is 3 for
this case), but an infinite variance.<span style="mso-spacerun: yes;"> </span>As the results I presented last time demonstrated, the extreme data variability of this distribution renders the
computed mean too variable to be useful.<span style="mso-spacerun: yes;"> </span>Another reason this distribution is particularly interesting is that it exhibits essentially the same tail
behavior as the discrete Zipf distribution; there, the probability that a discrete random variable x takes its i<sup>th</sup> value is:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br />
</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-tab-count: 2;"> &
nbsp; </span>p<sub>i</sub> = A/i<sup>c</sup>,</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">where A
is a normalization constant and <i style="mso-bidi-font-style: normal;">c</i> is a parameter that determines how slowly the tail decays.<span style="mso-spacerun: yes;"> </span>This
distribution was originally proposed to characterize the frequency of words in long documents (the Zipf-Estoup law); it was investigated further by Zipf in the mid-twentieth century in a wide range
of applications (e.g., the distributions of city sizes), and it has become the subject of considerable recent attention as a model for “long-tailed” business phenomena (for a non-technical
introduction to some of these business phenomena, see the book by Chris Anderson, <a href="http://www.amazon.com/Long-Tail-Future-Business-Selling/dp/1401302378">The Long Tail</a>).<span style=
"mso-spacerun: yes;"> </span>I will discuss the Zipf distribution further in a later post, but one of the reasons for discussing the Pareto type I distribution first is that since it is a
continuous distribution, the math is easier, meaning that more characterization results are available for the Pareto distribution.<span style="mso-spacerun: yes;"> </span></div><div class=
"MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-Od4r-V4YWGU/TnTLcJsl0DI/AAAAAAAAAD0/fqILT5xoVxM
/s1600/ParetoIFig01.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="319" rba="true" src="http://4.bp.blogspot.com/-Od4r-V4YWGU/TnTLcJsl0DI/AAAAAAAAAD0/
fqILT5xoVxM/s320/ParetoIFig01.png" width="320" /></a></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class=
"MsoNormal" style="margin: 0in 0in 0pt;">The mean of the Pareto type I distribution is:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in
0in 0pt;"><span style="mso-tab-count: 2;"> </span>Mean = ak/
(a-1),</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">provided <i style="mso-bidi-font-style: normal;">a > 1</i>, and the
variance of the distribution is finite only if <i style="mso-bidi-font-style: normal;">a > 2</i>.<span style="mso-spacerun: yes;"> </span>Plots of the probability density defined above for
this distribution are shown above, for <i style="mso-bidi-font-style: normal;">k = 1</i> in all cases, and with <i style="mso-bidi-font-style: normal;">a</i> taking the values 0.5, 1.0, 1.5, and 2.0.
<span style="mso-spacerun: yes;"> </span>(This is essentially the same plot as Figure 4.17 in <a href="http://www.amazon.com/s/ref=nb_sb_ss_i_1_14?url=search-alias%3Dstripbooks&
field-keywords=exploring+data+in+engineering.+the+sciences.+and+medicine&sprefix=Exploring+Data">Exploring Data in Engineering, the Sciences, and Medicine</a>, where I give a brief description of
the Pareto type I distribution.)<span style="mso-spacerun: yes;"> </span>Note that all of the cases considered here are characterized by infinite variance, while the first two (a = 0.5 and 1.0)
are also characterized by infinite means.<span style="mso-spacerun: yes;"> </span>As the results presented below emphasize, the mean represents a very poor characterization in practice for data
drawn from any of these distributions, but there are alternatives, including the familiar median that I have discussed previously, along with two others that are more specific to the Pareto type I
distribution: the geometric mean and the harmonic mean.<span style="mso-spacerun: yes;"> </span></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal"
style="margin: 0in 0in 0pt;">The plot below emphasizes the point made above about the extremely limited utility of the mean as a characterization of Pareto type I data, even in cases where it is
theoretically well-defined.<span style="mso-spacerun: yes;"> </span>Specifically, this plot compares the four characterizations I discuss here – the mean (more precisely known as the
“arithmetic mean” to distinguish it from the other means considered here), the median, the geometric mean, and the harmonic mean – for 1000 statistically independent Pareto type I data sequences,
each of length N = 400, with parameters <i style="mso-bidi-font-style: normal;">k = 1</i> and <i style="mso-bidi-font-style: normal;">a = 2.0</i>.<span style="mso-spacerun: yes;"> </span>For
this example, the mean is well-defined (specifically, it is equal to 2), but compared with the other data characterizations, its variability is much greater, reflecting the more serious impact of
this distribution’s infinite variance on the mean than on these other data characterizations.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="separator" style="clear:
both; text-align: center;"><a href="http://4.bp.blogspot.com/-IPXsTMe5thU/TnTL7h31dMI/AAAAAAAAAD4/RMWinyP9vQU/s1600/ParetoIFig09.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img
border="0" height="319" rba="true" src="http://4.bp.blogspot.com/-IPXsTMe5thU/TnTL7h31dMI/AAAAAAAAAD4/RMWinyP9vQU/s320/ParetoIFig09.png" width="320" /></a></div><div class="MsoNormal" style="margin:
0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">To give a more complete view of the extreme variability
of the arithmetic mean, boxplots of 1000 statistically independent samples drawn from all four of the Pareto type I distribution examples plotted above are shown in the boxplots below.<span style=
"mso-spacerun: yes;"> </span>As before, each sample is of size N = 400 and the parameter <i style="mso-bidi-font-style: normal;">k</i> has the value 1, but here the computed arithmetic means
are shown for the parameter values a = 0.5, 1.0, 1.5, and 2.0; note the log scale used here because the range of computed means is so large.<span style="mso-spacerun: yes;"> </span>For the
first two of these examples, the population mean does not exist, so it is not surprising that the computed values span such an enormous range, but even when the mean is well-defined, the influence of
the infinite variance of these cases is clearly evident.<span style="mso-spacerun: yes;"> </span>It may be argued that infinite variance is an extreme phenomenon, but it is worth emphasizing
here that for the specific “long tail” distributions popular in many applications, the decay rate is sufficiently slow for the variance – and sometimes even the mean – to be infinite, as in these
examples.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="separator" style="clear: both; text-align:
center;"><a href="http://3.bp.blogspot.com/-uAabSu1vdws/TnTMME2enxI/AAAAAAAAAD8/PVl2_aeqXfk/s1600/ParetoIFig03.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height
="319" rba="true" src="http://3.bp.blogspot.com/-uAabSu1vdws/TnTMME2enxI/AAAAAAAAAD8/PVl2_aeqXfk/s320/ParetoIFig03.png" width="320" /></a></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br
/></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">As I have noted several times in previous posts, the median is much better behaved than the mean, so much so that it is well-defined for
any proper distribution.<span style="mso-spacerun: yes;"> </span>One of the advantages of the Pareto type I distribution is that the form of the density function is simple enough that the
median of the distribution can be computed explicitly from the distribution parameters.<span style="mso-spacerun: yes;"> </span>This result is given in the fabulous book by <a href="http://
www.amazon.com/Continuous-Univariate-Distributions-Probability-Statistics/dp/0471584959/ref=sr_1_1?s=books&ie=UTF8&qid=1316277338&sr=1-1">Johnson, Kotz and Balakrishnan</a> that I have
mentioned previously, which devotes an entire chapter (Chapter 20) to the Pareto family of distributions.<span style="mso-spacerun: yes;"> </span>Specifically, the median of the Pareto type I
distribution with parameters <i style="mso-bidi-font-style: normal;">k</i> and <i style="mso-bidi-font-style: normal;">a</i> is given by:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br
/></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-tab-count: 2;"> &
nbsp; </span>Median = 2<sup>1/a</sup>k</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">Thus, for
the four examples considered here, the median values are 4.0 (for a = 0.5), 2.0 (for a = 1.0), 1.587 (for a = 1.5), and 1.414 (for a = 2.0).<span style="mso-spacerun: yes;"> </span>Boxplot
summaries for the same 1000 random samples considered above are shown in the plot below, which also includes horizontal dotted lines at these theoretical median values for the four distributions.
<span style="mso-spacerun: yes;"> </span>The fact that these lines correspond closely with the median lines in the boxplots gives an indication that the computed median is, on average, in good
agreement with the correct values it is attempting to estimate.<span style="mso-spacerun: yes;"> </span>As in the case of the arithmetic means, the variability of these estimates decreases
monotonically as <em>a</em> increases, corresponding to the fact that the distribution becomes generally better-behaved as the <i style="mso-bidi-font-style: normal;">a</i> parameter increases.</div>
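The simulation behind these median comparisons is easy to reproduce. The following sketch (in Python rather than R, and not the author's original simulation code) draws Pareto type I samples by inverting the CDF, F(x) = 1 - (k/x)<sup>a</sup> for x &gt; k, and compares the sample median with the closed-form value 2<sup>1/a</sup>k quoted above, for the four <i>a</i> values considered here:

```python
# Sketch (not the author's original R code): draw Pareto type I samples by
# inverse-CDF sampling and compare the sample median with the closed-form
# median 2**(1/a) * k quoted in the text.
import random
import statistics

def rpareto1(n, k=1.0, a=1.5, rng=random):
    # F(x) = 1 - (k/x)**a for x > k, so F^{-1}(u) = k * (1 - u)**(-1/a)
    return [k * (1.0 - rng.random()) ** (-1.0 / a) for _ in range(n)]

random.seed(42)
for a in (0.5, 1.0, 1.5, 2.0):
    sample = rpareto1(4000, k=1.0, a=a)
    est = statistics.median(sample)
    theo = 2.0 ** (1.0 / a)  # theoretical median for k = 1
    print(f"a = {a}: sample median {est:.3f} vs. theoretical {theo:.3f}")
```

The sample medians land close to the theoretical values for every <i>a</i>, including the cases where the mean does not even exist, which is exactly the point made by the boxplots below.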
<div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-Q0nHedEppms/TnTM4jen5fI/AAAAAAAAAEA/
YWd432AzrBg/s1600/ParetoIFig04.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="319" rba="true" src="http://1.bp.blogspot.com/-Q0nHedEppms/TnTM4jen5fI/
AAAAAAAAAEA/YWd432AzrBg/s320/ParetoIFig04.png" width="320" /></a></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div>
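The four-way estimator comparison described earlier (arithmetic mean, median, geometric mean, and harmonic mean for k = 1, a = 2) can also be sketched directly; the snippet below is a Python stdlib illustration, not the code behind the original figures, which were presumably generated in R:

```python
# Sketch comparing the four location estimates discussed here on a single
# simulated Pareto type I sample with k = 1, a = 2, and N = 400 (a Python
# stdlib illustration, not the author's original R code).
import math
import random
import statistics

random.seed(1)
k, a, n = 1.0, 2.0, 400
x = [k * (1.0 - random.random()) ** (-1.0 / a) for _ in range(n)]  # inverse CDF

arith = statistics.fmean(x)                                # arithmetic mean
med = statistics.median(x)                                 # median
geo = math.exp(statistics.fmean(math.log(v) for v in x))   # geometric mean
harm = n / sum(1.0 / v for v in x)                         # harmonic mean

print(f"arithmetic {arith:.3f}, median {med:.3f}, "
      f"geometric {geo:.3f}, harmonic {harm:.3f}")
# Population values: mean = ak/(a-1) = 2, median = 2**(1/a) ~ 1.414,
# geometric mean = k*exp(1/a) ~ 1.649, harmonic mean = k*(1 + 1/a) = 1.5.
```

Note that the ordering harmonic &lt; geometric &lt; arithmetic is guaranteed by the classical AM-GM-HM inequality; the instability shows up only in how far the arithmetic mean wanders from its population value across repeated samples.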
<div class="MsoNormal" style="margin: 0in 0in 0pt;">The <i style="mso-bidi-font-style: normal;">geometric mean</i> is an alternative characterization to the more familiar arithmetic mean, one that is
well-defined for any sequence of positive numbers.<span style="mso-spacerun: yes;"> </span>Specifically, the geometric mean of <i style="mso-bidi-font-style: normal;">N</i> positive numbers is
defined as the <i style="mso-bidi-font-style: normal;">N<sup>th</sup></i> root of their product.<span style="mso-spacerun: yes;"> </span>Equivalently, the geometric mean may be computed by
exponentiating the arithmetic average of the log-transformed values.<span style="mso-spacerun: yes;"> </span>In the case of the Pareto type I distribution, the utility of the geometric mean is
closely related to the fact that the log transformation converts a Pareto-distributed random variable into an exponentially distributed one, a point that I will discuss further in a later post on
data transformations.<span style="mso-spacerun: yes;"> </span>(These transformations are the topic of Chapter 12 of <em>Exploring Data</em>, where I briefly discuss both the logarithmic
transformation on which the geometric mean is based and the reciprocal transformation on which the harmonic mean is based, described next.)<span style="mso-spacerun: yes;"> </span>The key
point here is that the following simple expression is available for the geometric mean of the Pareto type I distribution (Johnson, Kotz, and Balakrishnan, page 577):</div><div class="MsoNormal" style
="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-tab-count: 2;"> &
nbsp; </span>Geometric Mean = k exp(1/a)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style=
"margin: 0in 0in 0pt;">For the four specific examples considered here, these geometric mean values are approximately 7.389 (for a = 0.5), 2.718 (for a = 1.0), 1.948 (for a = 1.5), and 1.649 (for a =
2.0).<span style="mso-spacerun: yes;"> </span>The boxplots shown below summarize the range of variation seen in the computed geometric means for the same 1000 statistically independent samples
considered above.<span style="mso-spacerun: yes;"> </span>Again, the horizontal dotted lines indicate the correct values for each distribution, and it may be seen that the computed values are
in good agreement, on average.<span style="mso-spacerun: yes;"> </span>As before, the variability of these computed values decreases with increasing <i style="mso-bidi-font-style: normal;">a </
i>values as the distribution becomes better-behaved.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href=
"http://1.bp.blogspot.com/-oFTSSZfIUNc/TnTNzo_0TAI/AAAAAAAAAEE/ivn91x51gdI/s1600/ParetoIFig06.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="319" rba="true"
src="http://1.bp.blogspot.com/-oFTSSZfIUNc/TnTNzo_0TAI/AAAAAAAAAEE/ivn91x51gdI/s320/ParetoIFig06.png" width="320" /></a></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div
class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">The fourth characterization considered here is the <i style="mso-bidi-font-style:
normal;">harmonic mean</i>, again appropriate to positive values, and defined as the reciprocal of the average of the reciprocal data values.<span style="mso-spacerun: yes;"> </span>In the case
of the geometric mean just discussed, the log transformation on which it is based is often useful in improving the distributional character of data values that span a wide range.<span style=
"mso-spacerun: yes;"> </span>In the case of the Pareto type I distribution – and a number of others – the reciprocal transformation on which the harmonic mean is based also improves the
behavior of the data distribution, but for many other distributions it does not.<span style="mso-spacerun: yes;"> </span>In particular, reciprocal transformations often make the character of a data
distribution much worse: applied to the extremely well-behaved standard uniform distribution, it yields the Pareto type I distribution with a = 1, for which none of the integer moments exist;
similarly, applied to the Gaussian distribution, the reciprocal transformation yields a result that both has infinite variance and is bimodal.<span style="mso-spacerun: yes;"> </span>(A little
thought suggests that the reciprocal transformation is inappropriate for the Gaussian distribution because it is not strictly positive, but normality is a favorite working assumption, sometimes
applied to the denominators of ratios, leading to a number of theoretical difficulties.<span style="mso-spacerun: yes;"> </span>I will have more to say about that in a future post.)<span style=
"mso-spacerun: yes;"> </span>For the case of the Pareto type I distribution, the reciprocal transformation converts it into the extremely well-behaved beta distribution, and the harmonic mean
has the following simple expression:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-tab-count: 1;"> &
nbsp; </span>Harmonic mean = k(1 + a<sup>-1</sup>)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal"
style="margin: 0in 0in 0pt;">For the four examples considered here, this expression yields harmonic mean values of 3 (for a = 0.5), 2 (for a = 1.0), 1.667 (for a = 1.5), and 1.5 (for a = 2.0).<span
style="mso-spacerun: yes;"> </span>Boxplot summaries of the computed harmonic means for the 1000 simulations of each case considered previously are shown below, again with dotted horizontal
lines at the theoretical values for each case.<span style="mso-spacerun: yes;"> </span>As with both the median and the geometric mean, it is clear from these plots that the computed values are
correct on average, and their variability decreases with increasing values of the <i style="mso-bidi-font-style: normal;">a</i> parameter.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br
/></div><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-pDmiBtXFO0Y/TnTOLsBqh4I/AAAAAAAAAEI/zAuxxfiTnCw/s1600/ParetoIFig08.png" imageanchor="1"
style="margin-left: 1em; margin-right: 1em;"><img border="0" height="319" rba="true" src="http://1.bp.blogspot.com/-pDmiBtXFO0Y/TnTOLsBqh4I/AAAAAAAAAEI/zAuxxfiTnCw/s320/ParetoIFig08.png" width="320"
/></a></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">The key
point of this post has been to show that, while averages are not suitable characterizations for “long tailed” phenomena that are attracting increasing interest in many different fields,
useful alternatives do exist.<span style="mso-spacerun: yes;"> </span>For the case of the Pareto type I distribution considered here, these alternatives include the popular median, along with
the somewhat less well-known geometric and harmonic means.<span style="mso-spacerun: yes;"> </span>In an upcoming post, I will examine the utility of these characterizations for the Zipf
distribution.</div>Ron Pearson (aka TheNoodleDoodler)http://www.blogger.com/profile/
15693640298594791682noreply@blogger.com2tag:blogger.com,1999:blog-9179325420174899779.post-12618532900006353532011-08-27T13:46:00.000-07:002011-08-27T13:46:16.463-07:00<div class="MsoNormal" style=
"margin: 0in 0in 0pt;">In my last post, I described three situations where the average of a sequence of numbers is not representative enough to be useful: in the presence of severe outliers, in the
face of multimodal data distributions, and in the face of infinite-variance distributions.<span style="mso-spacerun: yes;"> </span>The post generated three interesting comments that I want to
respond to here.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">First and foremost, I want to say thanks to all of you for&
nbsp;giving me something to think about further, leading me in some interesting new directions.<span style="mso-spacerun: yes;"> </span>First, <strong>chrisbeeleyimh</strong> had the following
to say:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-tab-count: 1;"> &
nbsp; </span></div><blockquote>“I seem to have rather abandoned means and medians in favor of drawing the distribution all the time, which baffles my colleagues somewhat.”</
blockquote><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">Chris also maintains a collection of data examples where the mean is the
same but the shape is very different.<span style="mso-spacerun: yes;"> </span>In fact, one of the points I illustrate in Section 4.4.1 of <span><a href="http://www.amazon.com/
Exploring-Data-Engineering-Sciences-Medicine/dp/0195089650?ie=UTF8&tag=widgetsamazon-20&link_code=btl&camp=213689&creative=392969" target="_blank">Exploring Data in Engineering, the
Sciences, and Medicine</a><img alt="" border="0" height="1" src="http://www.assoc-amazon.com/e/ir?t=widgetsamazon-20&l=btl&camp=213689&creative=392969&o=1&a=0195089650" style=
"border-bottom: medium none; border-left: medium none; border-right: medium none; border-top: medium none; margin: 0px; padding-bottom: 0px !important; padding-left: 0px !important; padding-right:
0px !important; padding-top: 0px !important;" width="1" /></span> is that there are cases where not only the means but <em>all </em>of the moments (i.e., variance, skewness, kurtosis, etc.)
are identical but the distributions are profoundly different.<span style="mso-spacerun: yes;"> </span>A specific example is taken from the book <span><a href="http://www.amazon.com/
Counterexamples-Probability-2nd-Jordan-Stoyanov/dp/0471965383?ie=UTF8&tag=widgetsamazon-20&link_code=btl&camp=213689&creative=392969" target="_blank">Counterexamples in Probability,
2nd Edition</a><img alt="" border="0" height="1" src="http://www.assoc-amazon.com/e/ir?t=widgetsamazon-20&l=btl&camp=213689&creative=392969&o=1&a=0471965383" style="border-bottom:
medium none; border-left: medium none; border-right: medium none; border-top: medium none; margin: 0px; padding-bottom: 0px !important; padding-left: 0px !important; padding-right: 0px !important;
padding-top: 0px !important;" width="1" /></span> by J.M. Stoyanov, who shows that if the lognormal density is multiplied by the following function:</div><div class="MsoNormal" style="margin: 0in 0in
0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-tab-count: 2;"> &
nbsp; </span></div><blockquote>f(x) = 1 + A sin(2 pi ln x),</blockquote><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal"
style="margin: 0in 0in 0pt;">for any constant A between -1 and +1, the moments are unchanged.<span style="mso-spacerun: yes;"> </span>The character of the distribution is changed profoundly,
however, as the following plot illustrates (this plot is similar to Fig. 4.8 in <em>Exploring Data,</em> which shows the same two distributions, but for A = 0.5 instead of A = 0.9, as shown here).
<span style="mso-spacerun: yes;"> </span>To be sure, this behavior is pathological – distributions that have finite support, for example, are defined uniquely by their complete set of moments –
but it does make the point that moment characterizations are not always complete, even if an infinite number of them are available.<span style="mso-spacerun: yes;"> </span>Within well-behaved
families of distributions (such as the one proposed by Karl Pearson in 1895), a complete characterization is possible on the basis of the first few moments, which is one reason for the historical
popularity of the method of moments for fitting data to distributions.<span style="mso-spacerun: yes;"> </span>It is important to recognize, however, that moments do have their limitations and
that the first moment alone – i.e., the mean by itself – is almost never a complete characterization.<span style="mso-spacerun: yes;"> </span>(I am forced to say “almost” here because if we
impose certain very strong distributional assumptions – e.g., the Poisson or binomial distributions – the specific distribution considered may be fully characterized by its mean.<span style
="mso-spacerun: yes;"> </span>This begs the question, however, of whether this distributional assumption is adequate.<span style="mso-spacerun: yes;"> </span>My experience has been that,
no matter how firmly held the belief in a particular distribution is, exceptions do arise in practice … overdispersion, anyone?)<span style="mso-spacerun: yes;"> </span></div><div class=
"MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-9SsouQJ0FHo/TllFIR9ESOI/AAAAAAAAADk/mQLCGfSQdH8
/s1600/MoreUselessFig01.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="319" qaa="true" src="http://2.bp.blogspot.com/-9SsouQJ0FHo/TllFIR9ESOI/AAAAAAAAADk/
mQLCGfSQdH8/s320/MoreUselessFig01.png" width="320" /></a></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div
class="MsoNormal" style="margin: 0in 0in 0pt;">The plot below provides a further illustration of the inadequacy of the mean as a sole data characterization, comparing four different members of
the family of beta distributions.<span style="mso-spacerun: yes;"> </span>These distributions – in the standard form assumed here – describe variables whose values range from 0 to 1, and they
are defined by two parameters, p and q, that determine the shape of the density function and all moments of the distribution.<span style="mso-spacerun: yes;"> </span>The mean of the beta
distribution is equal to p/(p+q), so if p = q – corresponding to the class of symmetric beta distributions – the mean is ½, regardless of the common value of these parameters.<span style=
"mso-spacerun: yes;"> </span>The four plots below show the corresponding distributions when both parameters are equal to 0.5 (upper left, the arcsin distribution I discussed last time), 1.0
(upper right, the uniform distribution), 1.5 (lower left), and 8.0 (lower right).<span style="mso-spacerun: yes;"> </span></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div>
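A quick numerical check of this point, as a Python stdlib sketch rather than the author's R code: all four symmetric beta distributions share the mean p/(p+q) = 1/2, yet their spreads, and therefore their shapes, differ sharply.

```python
# Sketch (Python stdlib, not the author's R code): four symmetric beta
# distributions with p = q all share the mean p/(p+q) = 1/2, but their
# spreads -- and shapes -- differ sharply.
import random
import statistics

random.seed(7)
for p in (0.5, 1.0, 1.5, 8.0):  # the four panels described above
    sample = [random.betavariate(p, p) for _ in range(20000)]
    m = statistics.fmean(sample)
    s = statistics.stdev(sample)
    print(f"p = q = {p}: sample mean {m:.3f}, sample std dev {s:.3f}")
# The theoretical standard deviation, 1/(2*sqrt(2*p + 1)), shrinks as p
# grows, even though the mean is 0.5 in every case.
```

The sample means all sit near 0.5, so the mean alone cannot distinguish the U-shaped arcsin case from the sharply peaked p = q = 8 case.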
<div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/--uLGduKNCtY/TllFpqRxinI/AAAAAAAAADo/OVGP_ZITwL8/s1600/MoreUselessFig02.png" imageanchor="1" style=
"margin-left: 1em; margin-right: 1em;"><img border="0" height="319" qaa="true" src="http://3.bp.blogspot.com/--uLGduKNCtY/TllFpqRxinI/AAAAAAAAADo/OVGP_ZITwL8/s320/MoreUselessFig02.png" width="320" />
</a></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">The
second comment on my last post was from <strong>Efrique</strong>, who suggested the Student’s t-distribution with 2 degrees of freedom as a better infinite-variance example than the Cauchy example I
used (corresponding to Student’s t-distribution with one degree of freedom), because the first moment doesn’t even exist for the Cauchy distribution (“there’s nothing to converge to”).<span style=
"mso-spacerun: yes;"> </span>The figure below expands the boxplot comparison I presented last time, comparing the means, medians, and modes (from the <strong>modeest </strong>package), for both
of these infinite-variance examples: the Cauchy distribution I discussed last time and the Student’s t-distribution with two degrees of freedom that Efrique suggested.<span style="mso-spacerun: yes;
"> H</span>ere, the same characterization (mean, median, or mode) is summarized for both distributions in side-by-side boxplots to facilitate comparisons.<span style="mso-spacerun: yes;">&
nbsp; </span>It is clear from these boxplots that the results for the median and the mode are essentially identical for these distributions, but the results for the mean differ dramatically
(recall that these results are truncated for the Cauchy distribution: 13.6% of the 1000 computed means fell outside the +/- 5 range shown here, exhibiting values approaching +/- 1000).<span
style="mso-spacerun: yes;"> </span>This difference illustrates Efrique’s further point that the mean of the data values is a consistent estimator of the (well-defined) population mean of the
Student’s t-distribution with 2 degrees of freedom, while it is not a consistent estimator for the Cauchy distribution.<span style="mso-spacerun: yes;"> </span>Still, it is also clear from this
plot that the mean is substantially more variable for the Student’s t-distribution with 2 degrees of freedom than either the median or the <strong>modeest</strong> mode estimate.</div><div class=
"MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-mBKzlxuO3Lo/TllGgZGwsCI/AAAAAAAAADs/L2mdVfDwpo4
/s1600/MoreUselessFig03.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="319" qaa="true" src="http://3.bp.blogspot.com/-mBKzlxuO3Lo/TllGgZGwsCI/AAAAAAAAADs/
L2mdVfDwpo4/s320/MoreUselessFig03.png" width="320" /></a></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div
class="MsoNormal" style="margin: 0in 0in 0pt;">Another example of an infinite-variance distribution where the mean is well-defined but highly variable is the Pareto type I distribution, discussed in
Section 4.5.8 of <em>Exploring Data</em>.<span style="mso-spacerun: yes;"> </span>My favorite reference on distributions is the two volume set by Johnson, Kotz, and Balakrishnan (<span><a href=
"http://www.amazon.com/Continuous-Univariate-Distributions-Probability-Statistics/dp/0471584959?ie=UTF8&tag=widgetsamazon-20&link_code=btl&camp=213689&creative=392969" target=
"_blank">Continuous Univariate Distributions, Vol. 1 (Wiley Series in Probability and Statistics)</a><img alt="" border="0" height="1" src="http://www.assoc-amazon.com/e/ir?t=widgetsamazon-20&l=
btl&camp=213689&creative=392969&o=1&a=0471584959" style="border-bottom: medium none; border-left: medium none; border-right: medium none; border-top: medium none; margin: 0px;
padding-bottom: 0px !important; padding-left: 0px !important; padding-right: 0px !important; padding-top: 0px !important;" width="1" /> and <span><a href="http://www.amazon.com/
Continuous-Univariate-Distributions-Probability-Statistics/dp/0471584940?ie=UTF8&tag=widgetsamazon-20&link_code=btl&camp=213689&creative=392969" target="_blank">Continuous Univariate
Distributions, Vol. 2 (Wiley Series in Probability and Statistics)</a>)<img alt="" border="0" height="1" src="http://www.assoc-amazon.com/e/ir?t=widgetsamazon-20&l=btl&camp=213689&
creative=392969&o=1&a=0471584940" style="border-bottom: medium none; border-left: medium none; border-right: medium none; border-top: medium none; margin: 0px; padding-bottom: 0px !important;
padding-left: 0px !important; padding-right: 0px !important; padding-top: 0px !important;" width="1" /></span></span>, who devote an entire 55 page chapter (Chapter 20 in Volume 1) to the Pareto
distribution, noting that it is named after Vilfredo Pareto, an Italian economist of the late nineteenth and early twentieth centuries who taught at the University of Lausanne and proposed it as a description of the distribution of
income over a population.<span style="mso-spacerun: yes;"> </span>In fact, there are several different distributions named after Pareto, but the type I distribution considered here exhibits a
power-law decay like the Student’s t-distributions; unlike them, however, it is a J-shaped distribution whose mode is equal to its minimum value.<span style="mso-spacerun: yes;"> </span>More specifically, this
distribution is defined by a location parameter that determines this minimum value and a shape parameter that determines how rapidly the tail decays for values larger than this minimum.<span style=
"mso-spacerun: yes;"> </span>The example considered here takes this minimum value as 1 and the shape parameter as 1.5, giving a distribution with a finite mean but an infinite variance.<span
style="mso-spacerun: yes;"> </span>As in the above example, the boxplot summary shown below characterizes the mean, median, and mode for 1000 statistically independent random samples drawn from
this distribution, each of size N = 100.<span style="mso-spacerun: yes;"> </span>As before, it is clear from this plot that the mean is much more highly variable than either the median or the
mode.<span style="mso-spacerun: yes;"> </span></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href=
"http://3.bp.blogspot.com/-xaz314FpZmU/TllHOi1mdEI/AAAAAAAAADw/_iCdakolo68/s1600/MoreUselessFig04.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="319" qaa=
"true" src="http://3.bp.blogspot.com/-xaz314FpZmU/TllHOi1mdEI/AAAAAAAAADw/_iCdakolo68/s320/MoreUselessFig04.png" width="320" /></a></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></
div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">In this case, however, we have the added complication that since this
distribution is not symmetric, its mean, median and mode do not coincide.<span style="mso-spacerun: yes;"> </span>In fact, the population mode is the minimum value (which is 1 here),
corresponding to the solid line at the bottom of the plot.<span style="mso-spacerun: yes;"> </span>The narrow range of the boxplot values around this correct value suggests that the <strong>
modeest</strong> package is reliably estimating this mode value, but as I noted in my last post, this characterization is not useful here because it tells us nothing about the rate at which the
density decays.<span style="mso-spacerun: yes;"> </span>The theoretical median value can also be calculated easily for this distribution, and here it is approximately equal to 1.587,
corresponding to the dashed horizontal line in the plot.<span style="mso-spacerun: yes;"> </span>As with the mode, it is clear from the boxplot that the median estimated from the data is in
generally excellent agreement with this value.<span style="mso-spacerun: yes;"> </span>Finally, the mean value for this particular distribution is 3, corresponding to the dotted horizontal line
in the plot.<span style="mso-spacerun: yes;"> </span>Since this line lies fairly close to the upper quartile of the computed means (i.e., the top of the “box” in the boxplot), it follows that
the estimated mean falls below the correct value almost 75% of the time, but it is also clear that when the mean is overestimated, the extent of this overestimation can be very large.<span style=
"mso-spacerun: yes;"> </span>Motivated in part by the fact that the mean doesn’t always exist for the Pareto distribution, Johnson, Kotz and Balakrishnan note in their chapter on these
distributions that alternative location measures have been considered, including both the geometric and harmonic means.<span style="mso-spacerun: yes;"> </span>I will examine these ideas
further in a future post.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">Finally, <strong>klr</strong> mentioned my post on
useless averages in his blog <a href="http://timelyportfolio.blogspot.com/">TimelyPortfolio</a>, where he discusses alternatives to the moving average in characterizing financial time-series.<span
style="mso-spacerun: yes;"> </span>For the case he considers, klr compares a 10-month moving average, the corresponding moving median, and a number of the corresponding mode estimators from the
<strong>modeest</strong> package.<span style="mso-spacerun: yes;"> </span>This is a very interesting avenue of exploration for me since it is closely related to the median filter and other
nonlinear digital filters that can be very useful in cleaning noisy time-series data.<span style="mso-spacerun: yes;"> </span>I discuss a number of these ideas – including moving-window
extensions of other data characterizations like skewness and kurtosis – in my book <span><a href="http://www.amazon.com/Mining-Imperfect-Data-Contamination-Incomplete/dp/0898715822?ie=UTF8&tag=
widgetsamazon-20&link_code=btl&camp=213689&creative=392969" target="_blank">Mining Imperfect Data: Dealing with Contamination and Incomplete Records</a><img alt="" border="0" height="1"
src="http://www.assoc-amazon.com/e/ir?t=widgetsamazon-20&l=btl&camp=213689&creative=392969&o=1&a=0898715822" style="border-bottom: medium none; border-left: medium none;
border-right: medium none; border-top: medium none; margin: 0px; padding-bottom: 0px !important; padding-left: 0px !important; padding-right: 0px !important; padding-top: 0px !important;" width="1"
/>. </span></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">Again, thanks to all of you for your comments.<span style=
"mso-spacerun: yes;"> </span>You have given me much to think about and investigate further, which is one of the joys of doing this blog.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;
"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div>Ron Pearson (aka TheNoodleDoodler)http://www.blogger.com/
profile/15693640298594791682noreply@blogger.com1tag:blogger.com,1999:blog-9179325420174899779.post-41467424617327171532011-08-20T08:21:00.000-07:002011-08-20T08:21:05.423-07:00<div class="MsoNormal"
style="margin: 0in 0in 0pt;">Of all possible single-number characterizations of a data sequence, the average is probably the best known.<span style="mso-spacerun: yes;"> </span>It is also easy
to compute and in favorable cases, it provides a useful characterization of “the typical value” of a sequence of numbers.<span style="mso-spacerun: yes;"> </span>It is not the only such
“typical value,” however, nor is it always the most useful one: two other candidates – location estimators in statistical terminology – are the median and the mode, both of which are discussed in
detail in Section 4.1.2 of <span><a href="http://www.amazon.com/Exploring-Data-Engineering-Sciences-Medicine/dp/0195089650?ie=UTF8&tag=widgetsamazon-20&link_code=btl&camp=213689&
creative=392969" target="_blank">Exploring Data in Engineering, the Sciences, and Medicine</a><img alt="" border="0" height="1" src="http://www.assoc-amazon.com/e/ir?t=widgetsamazon-20&l=btl&
camp=213689&creative=392969&o=1&a=0195089650" style="border-bottom: medium none; border-left: medium none; border-right: medium none; border-top: medium none; margin: 0px; padding-bottom:
0px !important; padding-left: 0px !important; padding-right: 0px !important; padding-top: 0px !important;" width="1" /></span>.<span style="mso-spacerun: yes;"> </span>Like the average, these
alternative location estimators are not always “fully representative,” but they do represent viable alternatives – at least sometimes – in cases where the average is sufficiently non-representative
as to be effectively useless.<span style="mso-spacerun: yes;"> </span>As the title of this post suggests, the focus here is on those cases where the mean doesn’t really tell us what we
want to know about a data sequence, briefly examining why this happens and what we can do about it.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="separator" style=
"clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-bDidkJnPnX4/Tk_CdddskeI/AAAAAAAAADU/BFEJDtmip7U/s1600/UselessFig01.png" imageanchor="1" style="margin-left: 1em; margin-right:
1em;"><img border="0" height="319" qaa="true" src="http://2.bp.blogspot.com/-bDidkJnPnX4/Tk_CdddskeI/AAAAAAAAADU/BFEJDtmip7U/s320/UselessFig01.png" width="320" /></a></div><div class="MsoNormal"
style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">First, it is worth saying a few words
about the two alternatives just mentioned: the median and the mode.<span style="mso-spacerun: yes;"> </span>Of these, the mode is both the more difficult to estimate and the less broadly
useful.<span style="mso-spacerun: yes;"> </span>Essentially, “the mode” corresponds to “the location of the peak in the data distribution.”<span style="mso-spacerun: yes;"> </span>One
difficulty with this somewhat loose definition is that “the mode” is not always well-defined.<span style="mso-spacerun: yes;"> </span>The above collection of plots shows three examples where
the mode is not well-defined, and another where the mode is well-defined but not particularly useful.<span style="mso-spacerun: yes;"> </span>The upper left plot shows the density of the
uniform distribution on the range [1,2]: there, the density is constant over the entire range, so there is no single, well-defined “peak” or unique maximum to serve as a mode for this distribution.
<span style="mso-spacerun: yes;"> </span>The upper right plot shows a nonparametric density estimate for the <place w:st="on">Old Faithful</place> geyser waiting time data that I have discussed
in several of my recent posts (the <em>R</em> data object <strong>faithful</strong>).<span style="mso-spacerun: yes;"> </span>Here, the difficulty is that there are not one but two modes, so
“the mode” is not well-defined here, either: we must discuss “the modes.”<span style="mso-spacerun: yes;"> </span>The same behavior is observed for the <em>arcsin distribution</em>, whose
density is shown in the lower left plot in the above figure.<span style="mso-spacerun: yes;"> </span>This density corresponds to the beta distribution with shape parameters both equal to ½,
giving a bimodal distribution whose cumulative probability function can be written simply in terms of the arcsin function, motivating its name (see Section 4.5.1 of <em>Exploring Data
</em> for a more complete discussion of both the beta distribution family and the special case of the arcsin distribution).<span style="mso-spacerun: yes;"> </span>In this case, the two modes
of the distribution occur at the extremes of the data, at x = 1 and x = 2.<span style="mso-spacerun: yes;"> </span></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div
class="MsoNormal" style="margin: 0in 0in 0pt;">The second difficulty with the mode noted above is that it is sometimes well-defined but not particularly useful.<span style="mso-spacerun: yes;">
</span>The case of the J-shaped exponential density shown in the lower right plot above illustrates this point: this distribution exhibits a single, well-defined peak at the minimum value x = 0.<span
style="mso-spacerun: yes;"> </span>Here, you don’t even have to look at the data to arrive at this result, which therefore tells you nothing about the data distribution: this density is
described by a single parameter that determines how slowly or rapidly the distribution decays, and the mode is independent of this parameter. Despite these limitations, there are cases where the
mode represents an extremely useful data characterization, even though it is much harder to estimate than the mean or the median.<span style="mso-spacerun: yes;"> </span>Fortunately, there is a
nice package available in <em>R</em> to address this problem: the <strong>modeest </strong>package provides 11 different mode estimation procedures.<span style="mso-spacerun: yes;"> </span>I
will illustrate one of these in the examples that follow – the half range mode estimator of Bickel – and I will give a more complete discussion of this package in a later post.</div><div class=
"MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">The median is a far better-known data characterization than the mode, and it is both much
easier to estimate and much more broadly applicable.<span style="mso-spacerun: yes;"> </span>In particular, unlike either the mean or the mode, the median is well-defined for <em>any</em>
proper data distribution, a result demonstrated in Section 4.1.2 of <em>Exploring Data</em>.<span style="mso-spacerun: yes;"> </span>Conceptually, computing the median only requires
sorting the N data values from smallest to largest and then taking either the middle element from this sorted list (if N is odd), or averaging the middle two elements (if N is even).<span style=
"mso-spacerun: yes;"> </span></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">The mean is, of course, both the easiest
of these characterizations to compute – simply add the N data values and divide by N – and unquestionably the best known.<span style="mso-spacerun: yes;"> </span>There are, however, at least
three situations where the mean can be so highly non-representative as to be useless:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><span style="mso-list: Ignore;"><blockquote>
<div class="MsoNormal" style="margin: 0in 0in 0pt 1in; mso-list: l0 level1 lfo1; tab-stops: list 1.0in; text-indent: -0.25in;"><span style="mso-list: Ignore;">1.<span style="font: 7pt "Times New
Roman";"> </span></span>if severe outliers are present;</div><div class="MsoNormal" style="margin: 0in 0in 0pt 1in; mso-list: l0 level1 lfo1; tab-stops: list 1.0in;
text-indent: -0.25in;"><span style="mso-list: Ignore;">2.<span style="font: 7pt "Times New Roman";"> </span></span>if the distribution is multi-modal;</div>
<div class="MsoNormal" style="margin: 0in 0in 0pt 1in; mso-list: l0 level1 lfo1; tab-stops: list 1.0in; text-indent: -0.25in;"><span style="mso-list: Ignore;">3.<span style="font: 7pt "Times New
Roman";"> </span></span>if the distribution has infinite variance.</div></blockquote><div class="MsoNormal" style="margin: 0in 0in 0pt 1in; mso-list: l0 level1
lfo1; tab-stops: list 1.0in; text-indent: -0.25in;"></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">The rest of this post examines each of these cases in turn.</div><div class="MsoNormal"
style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">I have discussed the problem of outliers before, but they are an important enough problem in practice to
bear repeating.<span style="mso-spacerun: yes;"> </span>(I devote all of Chapter 7 to this topic in <em>Exploring Data</em>.)<span style="mso-spacerun: yes;"> </span>The plot below shows
the makeup flow rate dataset, available from the companion website for <em>Exploring Data</em> (the dataset is <strong>makeup.csv</strong>, available on the <a href="http://www.oup.com/us/
companion.websites/9780195089653/rprogram">R programs and datasets page</a>).<span style="mso-spacerun: yes;"> </span>This dataset consists of 2,589 successive measurements of the flow rate of
a fluid stream in an industrial manufacturing process.<span style="mso-spacerun: yes;"> </span>The points in this plot show two distinct forms of behavior: those with values on the order of 400
represent measurements made during normal process operation, while those with values less than about 300 correspond to measurements made when the process is shut down (these values are approximately
zero) or is in the process of being either shut down or started back up.<span style="mso-spacerun: yes;"> </span>The three lines in this plot correspond to the mean (the solid line at
approximately 315), the median (the dotted line at approximately 393), and the mode (the dashed line at approximately 403, estimated using the “hrm” method in the <strong>modeest</strong> package).
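The effect of these shutdown episodes on the mean is easy to reproduce with simulated data. The sketch below uses hypothetical values (not the makeup flow dataset itself): roughly 78% of the points are drawn near 400 and the rest near zero, mimicking the normal/shutdown split just described:

```r
set.seed(33)
# Hypothetical stand-in for the makeup flow data: ~78% "normal operation"
# readings near 400, ~22% "shutdown" readings near zero
x = c(rnorm(2000, mean = 400, sd = 10), rnorm(580, mean = 5, sd = 5))

mean(x)     # dragged far below 400 by the shutdown segments
median(x)   # close to 400, since more than half the points are "normal"
# The mode could be estimated the same way as in the plot, assuming the
# modeest package is installed:
# library(modeest); mlv(x, method = "hrm")
```

As with the real dataset, the mean lands between the two regimes, while the median stays with the majority behavior.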
<span style="mso-spacerun: yes;"> </span>As I have noted previously, the mean in this case represents a useful line of demarcation between the normal operation data (those points above the
mean, representing 77.6% of the data) and the shutdown segments (those points below the mean, representing 22.4% of the data).<span style="mso-spacerun: yes;"> </span>In contrast, both the
median and the specific mode estimator used here provide much better characterizations of the normal operating data.<span style="mso-spacerun: yes;"> </span></div><div class="MsoNormal" style=
"margin: 0in 0in 0pt;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-CnPhxb-jIgE/Tk_H08ceYNI/AAAAAAAAADY/IIByhlS_LcI/s1600/
UselessFig02.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="319" qaa="true" src="http://3.bp.blogspot.com/-CnPhxb-jIgE/Tk_H08ceYNI/AAAAAAAAADY/IIByhlS_LcI/
s320/UselessFig02.png" width="320" /></a></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal"
style="margin: 0in 0in 0pt;">The next plot below shows a nonparametric density estimate of the Old Faithful geyser waiting data I discussed in my last few posts.<span style=
"mso-spacerun: yes;"> </span>The solid vertical line at 70.90 corresponds to the mean value computed from the complete dataset.<span style="mso-spacerun: yes;"> </span>It has been said
that a true compromise is an agreement that makes all parties equally unhappy, and this seems a reasonable description of the mean here: the value lies about mid-way between the two peaks in this
distribution, centered at approximately 55 and 80; in fact, this value lies fairly close to the trough between the peaks in this density estimate.<span style="mso-spacerun: yes;"> </span>(The
situation is even worse for the arcsin density discussed above: there, the two modes occur at values of 1 and 2, while the mean falls equidistant from both at 1.5, arguably the “least representative”
value in the whole data range.)<span style="mso-spacerun: yes;"> </span>The median waiting time value is 76, corresponding to the dotted line just to the left of the main peak at about 80, and
the mode (again, computed using the package <strong>modeest</strong> with the “hrm” method) corresponds to the dashed line at 83, just to the right of the main peak.<span style="mso-spacerun: yes;">&
nbsp; </span>The basic difficulty here is that all of these location estimators are inherently inadequate since they are attempting to characterize “the representative value” of a data sequence that
has “two representative values”: one representing the smaller peak at around 55 and the other representing the larger peak at around 80.<span style="mso-spacerun: yes;"> In this case, both the
median and the mode do a better job of characterizing the larger of the two peaks in the distribution (but not a great job), although such a partial characterization is not always what we want.
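These three values are easy to verify directly, since the geyser data ship with <em>R</em>; a quick sketch (the mode line requires the <strong>modeest</strong> package, so it is left as a comment):

```r
# Old Faithful waiting times, built into R
wait = faithful$waiting
mean(wait)     # about 70.9, near the trough between the two peaks
median(wait)   # 76, just left of the main peak
# Half-range mode estimate, assuming modeest is installed:
# library(modeest); mlv(wait, method = "hrm")   # about 83
```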
</span>This type of behavior is exactly what the mixture models I discussed in my last few posts are intended to describe.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div
class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-aFToge7EWtc/Tk_IdMboJOI/AAAAAAAAADc/CRKirO7Nh0s/s1600/UselessFig03.png" imageanchor="1" style=
"margin-left: 1em; margin-right: 1em;"><img border="0" height="319" qaa="true" src="http://2.bp.blogspot.com/-aFToge7EWtc/Tk_IdMboJOI/AAAAAAAAADc/CRKirO7Nh0s/s320/UselessFig03.png" width="320" /></a>
</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">To illustrate
the third situation where the mean is essentially useless, consider the Cauchy distribution, corresponding to the Student’s t distribution with one degree of freedom.<span style="mso-spacerun: yes;">
</span>This is probably the best known infinite-variance distribution there is, and it is often used as an extreme example because it causes a lot of estimation procedures to fail.<span style=
"mso-spacerun: yes;"> </span>The plot below is a (truncated) boxplot comparison of the values of the mean, median, and mode computed from 1000 independently generated Cauchy random number
sequences, each of length N = 100.<span style="mso-spacerun: yes;"> </span>It is clear from these boxplots that the variability of the mean is much greater than that of either of the other two
estimators, the median and the mode; the latter is again estimated from the data using the half-range mode (hrm) method in the <strong>modeest</strong> package.<span style="mso-spacerun: yes;
"> </span>One of the consequences of working with infinite-variance distributions is that the mean is no longer a consistent location estimator, meaning that the variance of the estimated mean
does not approach zero in the limit of large sample sizes.<span style="mso-spacerun: yes;"> </span>In fact, the Cauchy distribution is one of the examples I discuss in Chapter 6 of <em>
Exploring Data</em> as a counterexample to the Central Limit Theorem: for most data distributions, the distribution of the mean approaches a Gaussian limit with a variance that decreases inversely
with the sample size N, but for the Cauchy distribution, the distribution of the mean is exactly the same as that of the data itself.<span style="mso-spacerun: yes;"> </span>In other words, for
the Cauchy distribution, averaging a collection of N numbers does not reduce the variability at all.<span style="mso-spacerun: yes;"> </span>This is exactly what we are seeing here, although
the plot below doesn’t show how bad the situation really is: the smallest value of the mean in this sequence of 1000 estimates is -798.97 and the largest value is 928.85.<span style="mso-spacerun:
yes;"> </span>In order to see any detail at all in the distribution of the median and mode values, it was necessary to restrict the range of the boxplots shown here to lie between -5 and +5,
which eliminated 13.6% of the computed mean values.<span style="mso-spacerun: yes;"> </span>In contrast, the median is known to be a reasonably good location estimator for the Cauchy
distribution (see Section 6.6.1 of <em>Exploring Data</em> for a further discussion of this point), and the results presented here suggest that Bickel’s half-range mode estimator is also a reasonable
candidate.<span style="mso-spacerun: yes;"> </span>The main point here is that the mean is a completely unreasonable estimator in situations like this one, an important point in view of the
growing interest in data models like the infinite-variance Zipf distribution to describe “long-tailed” phenomena in business.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div
class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-uwnQfCezko8/Tk_JXm5QSOI/AAAAAAAAADg/pTyXC8kq8iI/s1600/UselessFig04.png" imageanchor="1" style=
"margin-left: 1em; margin-right: 1em;"><img border="0" height="319" qaa="true" src="http://1.bp.blogspot.com/-uwnQfCezko8/Tk_JXm5QSOI/AAAAAAAAADg/pTyXC8kq8iI/s320/UselessFig04.png" width="320" /></a>
</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">I will have
more to say about both the <strong>modeest</strong> package and Zipf distributions in upcoming posts.</div></span>Ron Pearson (aka TheNoodleDoodler)http://www.blogger.com/profile/
15693640298594791682noreply@blogger.com3tag:blogger.com,1999:blog-9179325420174899779.post-26562012778515765402011-08-06T14:23:00.000-07:002011-08-06T14:23:22.895-07:00<div class="MsoNormal" style=
"margin: 0in 0in 0pt;">My last two posts have been about mixture models, with examples to illustrate what they are and how they can be useful.<span style="mso-spacerun: yes;"> </span>
Further discussion and more examples can be found in Chapter 10 of <span><a href="http://www.amazon.com/Exploring-Data-Engineering-Sciences-Medicine/dp/0195089650?ie=UTF8&tag=widgetsamazon-20&
amp;link_code=btl&camp=213689&creative=392969" target="_blank">Exploring Data in Engineering, the Sciences, and Medicine</a></span>.<span style="mso-spacerun: yes;"> </span>One
important topic I haven’t covered is how to fit mixture models to datasets like the Old Faithful geyser data that I have discussed previously: a nonparametric
density plot gives fairly compelling evidence for a bimodal distribution, but how do you estimate the parameters of a mixture model that describes these two modes?<span style="mso-spacerun: yes;">&
nbsp; </span>For a finite Gaussian mixture distribution, one way is by trial and error, first estimating the centers of the peaks by eye in the density plot (these become the component means), and
adjusting the standard deviations and mixing percentages to approximately match the peak widths and heights, respectively.<span style="mso-spacerun: yes;"> </span>This post considers the more
systematic alternative of estimating the mixture distribution parameters using the <strong>mixtools</strong> package in <em>R</em>.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></
div><div class="MsoNormal" style="margin: 0in 0in 0pt;">The <strong>mixtools</strong> package is one of several available in <em>R</em> to fit mixture distributions or to solve the closely related
problem of model-based clustering.<span style="mso-spacerun: yes;"> </span>Further, <strong>mixtools</strong> includes a variety of procedures for fitting mixture models of different
types.<span style="mso-spacerun: yes;"> </span>This post focuses on one of these – the <strong>normalmixEM</strong> procedure for fitting normal mixture densities – and applies it to two
simple examples, starting with the Old Faithful dataset mentioned above.<span style="mso-spacerun: yes;"> </span>A much more complete and thorough discussion of the
<strong>mixtools</strong> package – which also discusses its application to the Old Faithful dataset – is given in the <em>R</em> package vignette, <a href="http:/
/cran.r-project.org/web/packages/mixtools/vignettes/vignette.pdf">mixtools: An R Package for Analyzing Finite Mixture Models</a>.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div>
<div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-kbk_korLXMw/Tj2JMvEPiPI/AAAAAAAAADE/avAFubexWKk/s1600/mixtoolsFig01.png" imageanchor="1" style=
"margin-left: 1em; margin-right: 1em;"><img border="0" height="319" src="http://3.bp.blogspot.com/-kbk_korLXMw/Tj2JMvEPiPI/AAAAAAAAADE/avAFubexWKk/s320/mixtoolsFig01.png" width="320" /></a>
</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">The above
plot shows the results obtained using the <strong>normalmixEM</strong> procedure with its default parameter values, applied to the Old Faithful waiting time data.<span style=
"mso-spacerun: yes;"> </span>Specifically, this plot was generated by the following sequence of <em>R</em> commands:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div
class="MsoNormal" style="margin: 0in 0in 0pt;"></div><blockquote><div class="MsoNormal" style="margin: 0in 0in 0pt;">library(mixtools)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">wait = faithful$waiting</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">mixmdl = normalmixEM(wait)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">plot(mixmdl, which=2)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">lines(density(wait), lty=2, lwd=2)</div></blockquote><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin:
0in 0in 0pt;">Like many modeling tools in <em>R</em>, the <strong>normalmixEM</strong> procedure has associated plot and summary methods.<span style="mso-spacerun: yes;"> </span>In this case,
the plot method displays either the log likelihood associated with each iteration of the EM fitting algorithm (more about that below), or the component densities shown above, or both.<span style=
"mso-spacerun: yes;"> </span>Specifying “which = 1” displays only the log likelihood plot (this is the default), specifying “which = 2” displays only the density components/histogram plot shown
here, and specifying “density = TRUE” without specifying the “which” parameter gives both plots.<span style="mso-spacerun: yes;"> </span>Note that the two solid curves shown in the above plot
correspond to the individual Gaussian density components in the mixture distribution, each scaled by the estimated probability of an observation being drawn from that component distribution.
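This scaling is easy to reproduce by hand from the fitted model object; the following sketch, assuming the same fit as above, evaluates each scaled component with dnorm and sums them to recover the fitted mixture density:

```r
library(mixtools)
mixmdl = normalmixEM(faithful$waiting)   # same fit as above
xg = seq(40, 100, length.out = 200)
# Each Gaussian component, scaled by its estimated mixing probability:
comp1 = mixmdl$lambda[1] * dnorm(xg, mixmdl$mu[1], mixmdl$sigma[1])
comp2 = mixmdl$lambda[2] * dnorm(xg, mixmdl$mu[2], mixmdl$sigma[2])
# Their sum is the fitted two-component mixture density:
pxg = comp1 + comp2
```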
<span style="mso-spacerun: yes;"> </span>The final line of <em>R</em> code above overlays the nonparametric density estimate generated by the <strong>density</strong> function with its default
parameters, shown here as the heavy dashed line (obtained by specifying “lty = 2”).</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in
0pt;">Most of the procedures in the <strong>mixtools</strong> package are based on the iterative <em>expectation maximization (EM) algorithm,</em> discussed in Section 2 of the <strong>mixtools</
strong> vignette and also in Chapter 16 of <em>Exploring Data</em>.<span style="mso-spacerun: yes;"> </span>A detailed discussion of this algorithm is beyond the scope of this post – books have
been devoted to the topic (see, for example, the book by McLachlan and Krishnan, <span><a href="http://www.amazon.com/Algorithm-Extensions-Wiley-Probability-Statistics/dp/0471201707?ie=UTF8&tag=
widgetsamazon-20&link_code=btl&camp=213689&creative=392969" target="_blank">The EM Algorithm and Extensions (Wiley Series in Probability and Statistics)</a><img alt="" border="0" height=
"1" src="http://www.assoc-amazon.com/e/ir?t=widgetsamazon-20&l=btl&camp=213689&creative=392969&o=1&a=0471201707" style="border-bottom: medium none; border-left: medium none;
border-right: medium none; border-top: medium none; margin: 0px; padding-bottom: 0px !important; padding-left: 0px !important; padding-right: 0px !important; padding-top: 0px !important;" width="1"
/></span> ) – but the following two points are important to note here.<span style="mso-spacerun: yes;"> </span>First, the EM algorithm is an iterative procedure, and the time required for it to
reach convergence – if it converges at all – depends strongly on the problem to which it is applied.<span style="mso-spacerun: yes;"> </span>The second key point is that because it is an
iterative procedure, the EM algorithm requires starting values for the parameters, and algorithm performance can depend strongly on these initial values.<span style="mso-spacerun: yes;"> </
span>The <strong>normalmixEM</strong> procedure supports both user-supplied starting values and built-in estimation of starting values if none are supplied.<span style="mso-spacerun: yes;"> </
span>These built-in estimates are the default and, in favorable cases, they work quite well.<span style="mso-spacerun: yes;"> </span>The Old Faithful waiting time data
is a case in point – using the default starting values gives the following parameter estimates:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style=
"margin: 0in 0in 0pt;"></div><blockquote><div class="MsoNormal" style="margin: 0in 0in 0pt;">> mixmdl[c("lambda","mu","sigma")]</div><div class="MsoNormal" style="margin: 0in 0in 0pt; text-indent: 0.5in;">$lambda</div><div class="MsoNormal" style="margin: 0in 0in 0pt; text-indent: 0.5in;">[1] 0.3608868 0.6391132</div><div class="MsoNormal" style="margin: 0in 0in 0pt; text-indent: 0.5in;">$mu</div><div class="MsoNormal" style="margin: 0in 0in 0pt; text-indent: 0.5in;">[1] 54.61489 80.09109</div><div class="MsoNormal" style="margin: 0in 0in 0pt; text-indent: 0.5in;">$sigma</div><div class="MsoNormal" style="margin: 0in 0in 0pt; text-indent: 0.5in;">[1] 5.871241 5.867718</div></
blockquote><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">The mixture density described by these parameters is given by:</div><div
class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-tab-count: 1;"> &
nbsp; </span>p(x) = lambda[1] n(x; mu[1], sigma[1]) + lambda[2] n(x; mu[2], sigma[2])</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin:
0in 0in 0pt;">where <em>n(x; mu, sigma)</em> represents the Gaussian probability density function with mean <em>mu</em> and standard deviation <em>sigma.</em></div><div class="MsoNormal" style=
"margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">One reason the default starting values work well for the Old Faithful waiting time data is that if nothing is
specified, the number of components (the parameter k) is set equal to 2.<span style="mso-spacerun: yes;"> </span>Thus, if you are attempting to fit a mixture model with more than two
components, this number should be specified, either by setting k to some other value and not specifying any starting estimates for the parameters lambda, mu, and sigma, or by specifying a vector with
k components as starting values for at least one of these parameters.<span style="mso-spacerun: yes;"> </span>(There are a number of useful options in calling the <strong>normalmixEM</strong>
procedure: for example, specifying the initial sigma value as a scalar constant rather than a vector with k components forces the component variances to be equal.<span style="mso-spacerun: yes;">&
nbsp; </span>I won’t attempt to give a detailed discussion of these options here; for that, type “help(normalmixEM)”.)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class=
"MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">Another important point about the default starting
values is that, aside from the number of components k, any unspecified initial parameter estimates are selected randomly by the <strong>normalmixEM</strong> procedure.<span style="mso-spacerun: yes;
"> </span>This means that, even in cases where the default starting values consistently work well – again, the <place w:st="on">Old Faithful</place> waiting time dataset seems to be such a case
– the number of iterations required to obtain the final result can vary significantly from one run to the next.<span style="mso-spacerun: yes;"> </span>(Specifically, the <strong>normalmixEM</
strong> procedure does not fix the seed for the random number generators used to compute these starting values, so repeated runs of the procedure with the same data will start from different initial
parameter values and require different numbers of iterations to achieve convergence.<span style="mso-spacerun: yes;"> </span>In the case of the Old Faithful waiting time data, I have seen
anywhere between 16 and 59 iterations required, with the final results differing only very slightly, typically in the fifth or sixth decimal place.<span style="mso-spacerun: yes;"> </span>If
you want to use the same starting values on successive runs, this can be done by setting the random number seed via the <strong>set.seed</strong> command before you invoke the <strong>normalmixEM</
strong> procedure.)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-lOvyRUwwCyM
/Tj2Za7E91KI/AAAAAAAAADI/Ld4rDY-0gvM/s1600/mixtoolsFig02.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="319" src="http://4.bp.blogspot.com/-lOvyRUwwCyM/
Tj2Za7E91KI/AAAAAAAAADI/Ld4rDY-0gvM/s320/mixtoolsFig02.png" t$="true" width="320" /></a></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in
0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">It is important to note that the default starting values do not always work well, even if the correct number of components is
specified.<span style="mso-spacerun: yes;"> </span>This point is illustrated nicely by the following example.<span style="mso-spacerun: yes;"> </span>The plot above shows two curves: the
solid line is the exact density for the three-component Gaussian mixture distribution described by the following parameters:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><span
style="mso-tab-count: 1;"><blockquote><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-tab-count: 1;"> </span>mu
= (2.00, 5.00, 7.00)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-tab-count: 1;"> </span>sigma = (1.000,
1.000, 1.000)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-tab-count: 1;"> </span>lambda = (0.200,
0.600, 0.200)</div></blockquote><div class="MsoNormal" style="margin: 0in 0in 0pt;"></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">The dashed curve in the figure is the nonparametric
density estimate generated from n = 500 observations drawn from this mixture distribution.<span style="mso-spacerun: yes;"> </span>Note that the first two components of this mixture
distribution are evident in both of these plots, from the density peaks at approximately 2 and 5.<span style="mso-spacerun: yes;"> </span>The third component, however, is too close to
the second to yield a clear peak in either density, giving rise instead to slightly asymmetric “shoulders” on the right side of the upper peaks.<span style="mso-spacerun: yes;"> </span>The
key point is that the components in this mixture distribution are difficult to distinguish from either of these density estimates, and this hints at further difficulties to come.</div><div class=
"MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">Applying the <strong>normalmixEM</strong> procedure to the 500 sample sequence used to
generate the nonparametric density estimate shown above and specifying k = 3 gives results that are substantially more variable than the Old Faithful results discussed above.
<span style="mso-spacerun: yes;"> </span>In fact, to compare these results, it is necessary to be explicit about the values of the random seeds used to initialize the parameter estimation
procedure.<span style="mso-spacerun: yes;"> </span>Specifying this random seed as 101 and only specifying k=3 in the <strong>normalmixEM</strong> call yields the following parameter estimates
after 78 iterations:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><span style="mso-tab-count: 1;"><blockquote><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style=
"mso-tab-count: 1;"> </span>mu = (1.77, 4.87, 5.44)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style=
"mso-tab-count: 1;"> </span>sigma = (0.766, 0.115, 1.463)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style=
"mso-tab-count: 1;"> </span>lambda = (0.168, 0.028, 0.803)</div></blockquote><div class="MsoNormal" style="margin: 0in 0in 0pt;"></
div><div class="MsoNormal" style="margin: 0in 0in 0pt;">Comparing these results with the correct parameter values listed above, it is clear that some of these estimation errors are quite large.
<span style="mso-spacerun: yes;"> </span>The figure shown below compares the mixture density constructed from these parameters (the heavy dashed curve) with the nonparametric density estimate
computed from the data used to estimate them.<span style="mso-spacerun: yes;"> </span>The prominent “spike” in this mixture density plot corresponds to the very small standard deviation
estimated for the second component and it provides a dramatic illustration of the relatively poor results obtained for this particular example.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;
"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-_EAqyfF3h3k/Tj2ftS37PxI/AAAAAAAAADM/yV__jlaiSSc/s1600/mixtoolsFig03.png" imageanchor=
"1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="319" src="http://4.bp.blogspot.com/-_EAqyfF3h3k/Tj2ftS37PxI/AAAAAAAAADM/yV__jlaiSSc/s320/mixtoolsFig03.png" t$="true" width=
"320" /></a></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">
Repeating this numerical experiment with different random seeds to obtain different random starting estimates, the <strong>normalmixEM</strong> procedure failed to converge in 1000 iterations
for seed values of 102 and 103, but it converged after 393 iterations for the seed value 104, yielding the following parameter estimates:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br
/></div><span style="mso-tab-count: 1;"><blockquote><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-tab-count: 1;"> &
nbsp; </span>mu = (1.79, 5.03, 5.46)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-tab-count: 1;"> </
span>sigma = (0.775, 0.352, 1.493)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-tab-count: 1;"> </span>
lambda = (0.169, 0.063, 0.768)</div></blockquote><div class="MsoNormal" style="margin: 0in 0in 0pt;"></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style
="margin: 0in 0in 0pt;">Arguably, the general behavior of these parameter estimates is quite similar to those obtained with the random seed value 101, but note that the second variance component
differs by a factor of three, and the second component of lambda increases almost as much. </div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style=
"margin: 0in 0in 0pt;"><br />Increasing the sample size from n = 500 to n = 2000 and repeating these experiments, the <strong>normalmixEM</strong> procedure failed to converge after 1000 iterations
for all four of the random seed values 101 through 104.<span style="mso-spacerun: yes;"> </span>If, however, we specify the correct standard deviations (i.e., specify “sigma = c(1,1,1)” when we
invoke <strong>normalmixEM</strong>) and we increase the maximum number of iterations to 3000 (i.e., specify “maxit = 3000”), the procedure does converge after 2417 iterations for the seed value 101,
yielding the following parameter estimates:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><span style="mso-tab-count: 1;"><blockquote><div class="MsoNormal" style="margin: 0in
0in 0pt;"><span style="mso-tab-count: 1;"> </span>mu = (1.98, 4.98, 7.15)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">
<span style="mso-tab-count: 1;"> </span>sigma = (1.012, 1.055, 0.929)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span
style="mso-tab-count: 1;"> </span>lambda = (0.198, 0.641, 0.161)</div></blockquote><div class="MsoNormal" style="margin: 0in 0in 0pt;
"></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">While these parameters took a lot more effort to obtain, they are clearly much closer to the correct values, emphasizing the point that
when we are fitting a model to data, our results generally improve as the amount of available data increases and as our starting estimates become more accurate.<span style="mso-spacerun: yes;">
</span>This point is further illustrated by the plot shown below, analogous to the previous one, but constructed from the model fit to the longer data sequence and incorporating better initial
parameter estimates.<span style="mso-spacerun: yes;"> </span>Interestingly, re-running the same procedure but taking the correct means as starting parameter estimates instead of the correct
standard deviations, the procedure failed to converge in 3000 iterations.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="separator" style="clear: both;
text-align: center;"><a href="http://1.bp.blogspot.com/-RE8snFLFQDk/Tj2gaHs0hcI/AAAAAAAAADQ/cjNii4IW3w8/s1600/mixtoolsFig04.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img
border="0" height="319" src="http://1.bp.blogspot.com/-RE8snFLFQDk/Tj2gaHs0hcI/AAAAAAAAADQ/cjNii4IW3w8/s320/mixtoolsFig04.png" t$="true" width="320" /></a></div><div class="MsoNormal" style="margin:
0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">Overall, I like what I have seen so far of the <strong>
mixtools</strong> package, and I look forward to exploring its capabilities further.<span style="mso-spacerun: yes;"> </span>It’s great to have a built-in procedure – i.e., one I didn’t have to
write and debug myself – that does all of the things that this package does.<span style="mso-spacerun: yes;"> </span>However, the three-component mixture results presented here do illustrate an
important point: the behavior of iterative procedures like <strong>normalmixEM</strong> and others in the <strong>mixtools</strong> package can depend strongly on the starting values chosen to
initialize the iteration process, and the extent of this dependence can vary greatly from one application to another.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class=
"MsoNormal" style="margin: 0in 0in 0pt;"><br /></div></span></span><br /> </span></span>Ron Pearson (aka TheNoodleDoodler)http://www.blogger.com/profile/
15693640298594791682noreply@blogger.com2tag:blogger.com,1999:blog-9179325420174899779.post-36360260655971212432011-07-16T11:32:00.000-07:002011-07-16T11:32:52.103-07:00<span></span> <div class=
"MsoNormal" style="margin: 0in 0in 0pt;">In response to my last post, Chris had the following comment:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal"
style="margin: 0in 0in 0pt;"><span style="mso-tab-count: 1;"> </span></div><blockquote><span style="mso-tab-count: 1;"></span>I am
actually trying to better understand the distinction between mixture models and mixture distributions in my own work.<span style="mso-spacerun: yes;"> </span>You seem to say mixture models
apply to a small set of models – namely regression models.</blockquote><div class="MsoNormal" style="margin: 0in 0in 0pt;"></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div
class="MsoNormal" style="margin: 0in 0in 0pt;">This comment suggests that my caution about the difference between <em>mixed-effect models</em> and <em>mixture distributions</em> may have caused
as much confusion as clarification, and the purpose of this post is to try to clear up this confusion.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal"
style="margin: 0in 0in 0pt;">So first, let me offer the following general observations.<span style="mso-spacerun: yes;"> </span>The terms “mixture models” refers to a generalization of the
class of finite mixture distributions that I discussed in my previous post.<span style="mso-spacerun: yes;"> </span>I give a more detailed discussion of finite mixture distributions in Chapter
10 of <span><a href="http://www.amazon.com/Exploring-Data-Engineering-Sciences-Medicine/dp/0195089650?ie=UTF8&tag=widgetsamazon-20&link_code=btl&camp=213689&creative=392969"
target="_blank">Exploring Data in Engineering, the Sciences, and Medicine</a><img alt="" border="0" height="1" src="http://www.assoc-amazon.com/e/ir?t=widgetsamazon-20&l=btl&camp=213689&
creative=392969&o=1&a=0195089650" style="border-bottom: medium none; border-left: medium none; border-right: medium none; border-top: medium none; margin: 0px; padding-bottom: 0px !important;
padding-left: 0px !important; padding-right: 0px !important; padding-top: 0px !important;" width="1" /></span><span> </span>, and the more general class of mixture models is discussed in the book
<span><a href="http://www.amazon.com/Mixture-Models-Statistics-Textbooks-Monographs/dp/0824776917?ie=UTF8&tag=widgetsamazon-20&link_code=btl&camp=213689&creative=392969" target=
"_blank">Mixture Models (Statistics: A Series of Textbooks and Monographs)</a><img alt="" border="0" height="1" src="http://www.assoc-amazon.com/e/ir?t=widgetsamazon-20&l=btl&camp=213689&
creative=392969&o=1&a=0824776917" style="border-bottom: medium none; border-left: medium none; border-right: medium none; border-top: medium none; margin: 0px; padding-bottom: 0px !important;
padding-left: 0px !important; padding-right: 0px !important; padding-top: 0px !important;" width="1" /></span> by Geoffrey J. McLachlan and Kaye E. Basford.<span style="mso-spacerun: yes;"> </
span>The basic idea is that we are describing some observed phenomenon like the Old Faithful geyser data (the <strong>faithful</strong> data object in <em>R</em>) where a close look at the data
(e.g., with a nonparametric density estimate) suggests substantial heterogeneity.<span style="mso-spacerun: yes;"> </span>In particular, the density estimates I presented last time for both of
the variables in this dataset exhibit clear evidence of bimodality.<span style="mso-spacerun: yes;"> </span>Essentially, the idea behind a mixture model/mixture distribution is that we are
observing something that isn’t fully characterized by a single, simple distribution or model, but instead by several such distributions or models, with some random selection mechanism at work.
In the case of mixture distributions, some observations appear to be drawn from distribution 1, some from distribution 2, and so forth.<span style="mso-spacerun: yes;"> </span>The more
general class of mixture models is quite broad, including things like heterogeneous regression models, where the response may depend approximately linearly on some covariate with one slope and
intercept for observations drawn from one sub-population, but with another, very different slope and intercept for observations drawn from another sub-population. I present an example at the
end of this post that illustrates this idea.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">The probable source of confusion
for Chris – and very possibly other readers – is the comment I made about the difference between these mixture models and <i style="mso-bidi-font-style: normal;">mixed-effect models</i>.<span style=
"mso-spacerun: yes;"> </span>This other class of models – which I only mentioned in passing in my post – typically consists of a linear regression model with two types of prediction variables:
deterministic predictors, like those that appear in standard linear regression models, and random predictors that are typically assumed to obey a Gaussian distribution.<span style="mso-spacerun: yes;
"> </span>This framework has been extended to more general settings like generalized linear models (e.g., mixed-effect logistic regression models).<span style="mso-spacerun: yes;"> </
span>The <em>R</em> package <strong>lme4</strong> provides support for fitting both linear mixed-effect models and generalized linear mixed-effect models to data.<span style="mso-spacerun: yes;">&
nbsp; </span>As I noted last time, these model classes are distinct from the mixture distribution/mixture model classes I discuss here.<span style="mso-spacerun: yes;"> </span>The models that I
do discuss – mixture models – have strong connections with cluster analysis, where we are given a heterogeneous group of objects and typically wish to determine how many distinct groups of objects
are present and assign individuals to the appropriate groups.<span style="mso-spacerun: yes;"> </span>A very high-level view of the many <em>R</em> packages available for clustering – some
based on mixture model ideas and some not – is available from the <a href="http://cran.r-project.org/web/views/Cluster.html">CRAN clustering task view page</a>.<span style="mso-spacerun: yes;">
</span>Two packages from this task view that I plan to discuss in future posts are <strong>flexmix</strong> and <strong>mixtools</strong>, both of which support a variety of mixture model
applications.<span style="mso-spacerun: yes;"> </span>The following comments from the vignette <a href="http://cran.r-project.org/web/packages/flexmix/vignettes/flexmix-intro.pdf">FlexMix:
A General Framework for Finite Mixture Models and Latent Class Regression in R</a> give an indication of the range of areas where these ideas are useful:</div><div class="MsoNormal" style=
"margin: 0in 0in 0pt;"><br /></div><blockquote><div class="MsoNormal" style="margin: 0in 0in 0pt;">“Finite mixture models have been used for more than 100 years, but have seen a real boost in
popularity over the last decade due to the tremendous increase in available computing power.<span style="mso-spacerun: yes;"> </span>The areas of application of mixture models range from
biology and medicine to physics, economics, and marketing.<span style="mso-spacerun: yes;"> </span>On the one hand, these models can be applied to data where observations originate from various
groups and the group affiliations are not known, and on the other hand to provide approximations for multi-modal distributions.”</div></blockquote><div class="MsoNormal" style="margin: 0in 0in 0pt;">
<br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-Do8IMtBIjKY/TiHSCHhpw-I/AAAAAAAAAC4/lPxrVps2ZNs/s1600/OldFaithfulEx01.png" imageanchor=
"1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="319" m$="true" src="http://1.bp.blogspot.com/-Do8IMtBIjKY/TiHSCHhpw-I/AAAAAAAAAC4/lPxrVps2ZNs/s320/OldFaithfulEx01.png" width=
"320" /></a></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">
<span style="mso-tab-count: 1;"> </span>The following example illustrates the second of these ideas, motivated by the <place w:st=
"on">Old Faithful</place> geyser data that I discussed last time.<span style="mso-spacerun: yes;"> </span>As a reminder, the plot above shows the nonparametric density estimate generated from
the 272 observations of the Old Faithful waiting time data included in the <strong>faithful</strong> data object, using the <strong>density</strong> procedure in <em>R</em>
with the default parameter settings.<span style="mso-spacerun: yes;"> </span>As I noted last time, the plot shows two clear peaks, the lower one centered at approximately 55 minutes, and the
second at approximately 80 minutes.<span style="mso-spacerun: yes;"> </span>Also, note that the first peak is substantially smaller in amplitude and appears to be somewhat narrower than the
second peak.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-kTFAkWBQjpw/
TiHSPs00adI/AAAAAAAAAC8/3Nf0iIoj2Zw/s1600/MixDensFig01.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="319" m$="true" src="http://1.bp.blogspot.com/
-kTFAkWBQjpw/TiHSPs00adI/AAAAAAAAAC8/3Nf0iIoj2Zw/s320/MixDensFig01.png" width="320" /></a></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin:
0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">To illustrate the connection with finite mixture distributions, the <em>R</em> procedure described below generates a
two-component Gaussian mixture density whose random samples exhibit approximately the same behavior seen in the Old Faithful waiting time data.<span style="mso-spacerun: yes;
"> </span>The results generated by this procedure are shown in the above figure, which includes two overlaid plots: one corresponding to the exact density for the two-component Gaussian mixture
distribution (the solid line), and the other corresponding to the nonparametric density estimate computed from N = 272 random samples drawn from this mixture distribution (the dashed line).<span
style="mso-spacerun: yes;"> </span>As in the previous plot, the nonparametric density estimate was computed using the <strong>density</strong> command in <em>R</em> with its default parameter
values.<span style="mso-spacerun: yes;"> </span>The first component in this mixture has mean 54.5 and standard deviation 8.0, values chosen by trial and error to approximately match the lower
peak in the <place w:st="on">Old Faithful</place> waiting time distribution.<span style="mso-spacerun: yes;"> </span>The second component has mean 80.0 and standard deviation 5.0, chosen to
approximately match the second peak in the waiting time distribution.<span style="mso-spacerun: yes;"> </span>The probabilities associated with the first and second components are 0.45 and
0.55, respectively, selected to give approximately the same peak heights seen in the waiting time density estimate.<span style="mso-spacerun: yes;"> </span>Combining these results, the density
of this mixture distribution is:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-tab-count: 1;"> &
nbsp; </span>p(x) = 0.45 n(x; 54.5, 8.0) + 0.55 n(x; 80.0, 5.0),</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class=
"MsoNormal" style="margin: 0in 0in 0pt;">where n(x;m,s) denotes the Gaussian density function with mean m and standard deviation s.<span style="mso-spacerun: yes;"> </span>These density
functions can be generated using the <strong>dnorm</strong> function in <em>R</em>.</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in
0pt;"><span style="mso-tab-count: 1;"> </span>The <em>R</em> procedure listed below generates <strong>n</strong> independent,
identically distributed random samples from an <em>m</em>-component Gaussian mixture distribution.<span style="mso-spacerun: yes;"> </span>This procedure is called with the following
parameters:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-tab-count: 1;"> &
nbsp; </span><strong>n</strong> = the number of random samples to generate</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-tab-count: 1;">&
nbsp; </span><strong>muvec</strong> = vector of <em>m</em> mean values</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style=
"mso-tab-count: 1;"> </span><strong>sigvec</strong> = vector of <em>m</em> standard deviations</div><div class="MsoNormal" style=
"margin: 0in 0in 0pt;"><span style="mso-tab-count: 1;"> </span><strong>pvec</strong> = vector of probabilities for each of the <em>m
</em>components</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-tab-count: 1;"> </span><strong>iseed</
strong> = integer seed to initialize the random number generators</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">The <em>R</
em> code for the procedure looks like this:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">MixEx01GenProc <- function(n,
muvec, sigvec, pvec, iseed=101){</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>#</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">
<span style="mso-spacerun: yes;"> </span>set.seed(iseed)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>#</div><div class="MsoNormal"
style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>m <- length(pvec)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </
span>indx <- sample(seq(1,m,1), size=n, replace=T, prob=pvec)</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>#</div><div class="MsoNormal"
style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>yvec <- 0</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>for (i
in 1:m){</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>xvec <- rnorm(n, mean=muvec[i], sd=sigvec[i])</div><div class=
"MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>yvec <- yvec + xvec * as.numeric(indx == i)</div><div class="MsoNormal" style="margin: 0in 0in
0pt;"><span style="mso-spacerun: yes;"> </span>}</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>#</div><div class="MsoNormal" style=
"margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"> </span>yvec</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">}</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></
div><div class="MsoNormal" style="margin: 0in 0in 0pt;">The first statement initializes the random number generator using the <strong>iseed</strong> parameter, which is given a default value of 101.
<span style="mso-spacerun: yes;"> </span>The second line determines the number of components in the mixture density from the length of the <strong>pvec</strong> parameter vector, and the third
line generates a random sequence <strong>indx</strong> of component indices taking the values 1 through <em>m</em> with probabilities determined by the <strong>pvec</strong> parameter.<span style=
"mso-spacerun: yes;"> </span>The rest of the program is a short loop that generates each component in turn, using <strong>indx</strong> to randomly select observations from each of these
components with the appropriate probability. <span style="mso-spacerun: yes;"> </span>To see how this works, note that the first pass through the loop generates the random vector <strong>xvec</
strong> of length <strong>n</strong>, with mean given by the first element of the vector <strong>muvec</strong> and standard deviation given by the first element of the vector <strong>sigvec</
strong>.<span style="mso-spacerun: yes;"> </span>Then, for every one of the <strong>n</strong> elements of <strong>yvec</strong> for which the <strong>indx</strong> vector is equal to 1,
<strong>yvec</strong> is set equal to the corresponding element of this first random component <strong>xvec</strong>.<span style="mso-spacerun: yes;"> </span>On the second pass through the
loop, the second random component is generated as <strong>xvec</strong>, again with length <strong>n</strong> but now with mean specified by the second element of <strong>muvec</strong> and standard
deviation determined by the second element of <strong>sigvec</strong>.<span style="mso-spacerun: yes;"> </span>As before, this value is added to the initial value of <strong>yvec</strong>
whenever the selection index vector <strong>indx</strong> is equal to 2.<span style="mso-spacerun: yes;"> </span>Note that since each element of the <strong>indx</strong> vector takes exactly one component value, the sets of positions selected on different passes through the loop are disjoint: none of the elements of <strong>yvec</strong> filled in during the first iteration of the loop are modified; instead, the only elements of <strong>yvec</strong> that are modified in the second
pass through the loop have their initial value of zero, specified in the line above the start of the loop.<span style="mso-spacerun: yes;"> </span>More generally, each pass through the loop
generates the next component of the mixture distribution and fills in the corresponding elements of <strong>yvec</strong> as determined by the random selection index vector <strong>indx</strong>.</
div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-Zu-7I2Gm0Ew/TiHUTcXeLSI/
AAAAAAAAADA/8VTpCBWBFMI/s1600/MixExFig03.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="319" m$="true" src="http://3.bp.blogspot.com/-Zu-7I2Gm0Ew/
TiHUTcXeLSI/AAAAAAAAADA/8VTpCBWBFMI/s320/MixExFig03.png" width="320" /></a></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">
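Before moving on, here is a usage sketch of the <strong>MixEx01GenProc</strong> procedure defined above, drawing N = 272 samples from the two-component Old Faithful approximation described earlier and overlaying the exact mixture density built from <strong>dnorm</strong> (the parameter values are those given earlier):

```r
# Draw N = 272 samples from the two-component approximation to the
# Old Faithful waiting time distribution; assumes MixEx01GenProc
# has been defined as listed above
y <- MixEx01GenProc(n = 272, muvec = c(54.5, 80.0),
                    sigvec = c(8.0, 5.0), pvec = c(0.45, 0.55))

# Nonparametric density estimate computed from the simulated sample
plot(density(y))

# Exact mixture density over the same range, built from dnorm
xg <- seq(30, 110, length.out = 200)
px <- 0.45 * dnorm(xg, mean = 54.5, sd = 8.0) +
      0.55 * dnorm(xg, mean = 80.0, sd = 5.0)
lines(xg, px, lty = 2)
```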
<br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">As I noted at the beginning of this post, the notion of a mixture model is more general than that of the finite mixture distributions
just described, but closely related.<span style="mso-spacerun: yes;"> </span>I conclude this post with a simple example of a more general mixture model.<span style="mso-spacerun: yes;">
</span>The above scatter plot shows two variables, x and y, related by the following mixture model:</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class=
"MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-tab-count: 2;">
</span>y = x + e<sub>1</sub> with probability p<sub>1</sub> = 0.40,</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">and</div><div class="MsoNormal" style="margin: 0in 0in 0pt;">
<span style="mso-spacerun: yes;"> </span>y = -x + 2 + e<sub>2
</sub> with probability p<sub>2</sub> = 0.60,</div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">where e<sub>1</sub> is a
zero-mean Gaussian random variable with standard deviation 0.1, and e<sub>2</sub> is a zero-mean Gaussian random variable with standard deviation 0.3.<span style="mso-spacerun: yes;"> </span>To
emphasize the components in the mixture model, points corresponding to the first component are plotted as solid circles, while points corresponding to the second component are plotted as open
triangles.<span style="mso-spacerun: yes;"> </span>The two dashed lines in this plot represent the ordnary least squares regression lines fit to each component separately, and they both
correspond reasonably well to the underlying linear relationships that define the two components (e.g., the least squares line fit to the solid circles has a slope of approximately +1 and an
intercept of approximately 0). In contrast, the heavier dotted line represents the ordinary least squares regression line fit to the complete dataset without any knowledge of its underlying
component structure: this line is almost horizontal and represents a very poor approximation to the behavior of the dataset.<span style="mso-spacerun: yes;"> </span></div><div class="MsoNormal"
style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><span style="mso-spacerun: yes;"></span>The point of this example is to illustrate two things.<span style=
"mso-spacerun: yes;"> </span>First, it provides a relatively simple illustration of how the mixture density idea discussed above generalizes to the setting of regression models and beyond: we
can construct fairly general mixture models by requiring different randomly selected subsets of the data to conform to different modeling assumptions.<span style="mso-spacerun: yes;"> </span>
The second point – emphasized by the strong disagreement between the overall regression line and both of the component regression lines – is that if we are given only the dataset (i.e., the x and y
values themselves) without knowing which component they represent, standard analysis procedures are likely to perform very badly.<span style="mso-spacerun: yes;"> </span>This question – how do
we analyze a dataset like this one without detailed prior knowledge of its heterogeneous structure – is what <em>R</em> packages like <strong>flexmix</strong> and <strong>mixtools</strong> are
designed to address.<span style="mso-spacerun: yes;"> </span></div><div class="MsoNormal" style="margin: 0in 0in 0pt;"><br /></div><div class="MsoNormal" style="margin: 0in 0in 0pt;">More about
that in future posts. </div>Ron Pearson (aka TheNoodleDoodler)http://www.blogger.com/profile/15693640298594791682noreply@blogger.com3 | {"url":"http://exploringdatablog.blogspot.com/feeds/posts/default","timestamp":"2014-04-16T16:58:27Z","content_type":null,"content_length":"684890","record_id":"<urn:uuid:5f3ff545-c220-43d3-9331-6c1bc3538a8a>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00518-ip-10-147-4-33.ec2.internal.warc.gz"} |
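To make the example concrete, here is a small simulation sketch of the same two-component setup (written in Python rather than R purely for portability; the uniform distribution assumed for x and the sample size are my choices, not taken from the post). Fitting each component separately recovers slopes near +1 and -1, while the pooled fit flattens toward the mixture average of about -0.2:

```python
import random

def fit_line(xs, ys):
    """Ordinary least squares fit of y = a + b*x; returns (intercept, slope)."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    return ybar - b * xbar, b

random.seed(42)
comp1_x, comp1_y, comp2_x, comp2_y = [], [], [], []
for _ in range(4000):
    x = random.uniform(0.0, 1.0)          # assumed x distribution (not stated in the post)
    if random.random() < 0.40:            # component 1 with p1 = 0.40
        comp1_x.append(x)
        comp1_y.append(x + random.gauss(0.0, 0.1))       # y = x + e1
    else:                                 # component 2 with p2 = 0.60
        comp2_x.append(x)
        comp2_y.append(-x + 2 + random.gauss(0.0, 0.3))  # y = -x + 2 + e2

a1, b1 = fit_line(comp1_x, comp1_y)       # slope near +1, intercept near 0
a2, b2 = fit_line(comp2_x, comp2_y)       # slope near -1, intercept near 2
a, b = fit_line(comp1_x + comp2_x, comp1_y + comp2_y)
# pooled fit: E[y|x] = 0.4*x + 0.6*(-x + 2) = -0.2*x + 1.2, i.e. nearly flat
```

The pooled slope of roughly -0.2 is the "almost horizontal" line in the figure: averaging over unknown components destroys both linear relationships.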
Randomness Condensers for Efficiently Samplable, Seed-Dependent Sources
Yevgeniy Dodis
New York University
We initiate a study of randomness condensers for sources that are
efficiently samplable but may depend on the seed of the condenser.
That is, we seek functions Cond : {0,1}^n x {0,1}^d -> {0,1}^m such that
if we choose a random seed S in {0,1}^d and a source X=A(S) is
generated by a randomized circuit A of size t, where X has min-entropy
at least k given S, then Cond(X;S) should have min-entropy at least some
k' given S. The distinction from the standard notion of randomness condensers is
that the source X may be correlated with the seed S (but is restricted to be
efficiently samplable). Randomness extractors of this type (corresponding to
the special case where k'=m) have been implicitly studied in the past
(by Trevisan and Vadhan, FOCS '00).
We show that:
1) Unlike extractors, we can have randomness condensers for samplable,
seed-dependent sources whose computational complexity is smaller than
the size t of the adversarial sampling algorithm A. Indeed, we show that
sufficiently strong collision-resistant hash functions are seed-dependent
condensers that produce outputs with min-entropy k' = m - O(log t),
i.e. logarithmic *entropy deficiency*.
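As a purely illustrative sketch of the syntax (not the paper's construction, parameters, or security argument; SHA-256 and the output length are my own stand-in choices), a seed-dependent condenser Cond : {0,1}^n x {0,1}^d -> {0,1}^m built from a hash might look like:

```python
import hashlib

def cond(x: bytes, s: bytes, m_bits: int = 128) -> bytes:
    """Toy stand-in for Cond(X; S): hash the seed-prefixed source and
    truncate to m bits.  The paper's guarantee (output min-entropy
    m - O(log t) given S, against samplers of size t) is a property to be
    proved of a sufficiently strong collision-resistant family, not of
    this sketch."""
    return hashlib.sha256(s + x).digest()[: m_bits // 8]

seed = b"\x01" * 16                       # S in {0,1}^d
source = b"adversarially sampled input"   # X = A(S): may depend on the seed
out = cond(source, seed)                  # candidate high-min-entropy key material
```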
2) Randomness condensers suffice for key derivation in many cryptographic
applications: when an adversary has negligible success probability (or
negligible "squared advantage") for a uniformly random key, we can use instead
a key generated by a condenser whose output has logarithmic entropy deficiency.
3) Randomness condensers for seed-dependent samplable sources that are robust
to side information generated by the sampling algorithm imply soundness of the
Fiat-Shamir Heuristic when applied to any constant-round, public-coin
proof system (and thus imply that such proof systems cannot be zero knowledge).
In fact, this only requires condensers for "leaky sources" --- ones that are
uniform prior to conditioning on the adversary's side information --- and we
show that such condensers are also *necessary* for soundness of the
Fiat-Shamir Heuristic.
Joint work with Tom Ristenpart and Salil Vadhan.
THEA Test Skill Descriptions
The purpose of the test is to assess the reading, mathematics, and writing skills that entering freshman-level students should have if they are to perform effectively in undergraduate certificate or
degree programs in Texas public colleges or universities. The skills listed below are eligible to be assessed by the THEA Test. Each skill is accompanied by a brief description of how the skill may
be measured on the test.
THEA Reading Section
General Description
The Reading Section of the THEA Test consists of approximately 40 multiple-choice questions matched to about seven reading selections of 300 to 750 words each. The selections represent a variety of
subject areas and are similar to reading materials (e.g., textbooks, manuals) that students are likely to encounter during their first year of college. Students will be asked to answer several
multiple-choice questions about each reading selection.
Skill Descriptions
The Reading Section of the THEA Test is based on the skills listed below. Each skill is accompanied by a description of the content that may be included on the test.
Skill: Determine the meaning of words and phrases.
Includes using the context of a passage to determine the meaning of words with multiple meanings, unfamiliar and uncommon words and phrases, and figurative expressions.
Skill: Understand the main idea and supporting details in written material.
Includes identifying explicit and implicit main ideas and recognizing ideas that support, illustrate, or elaborate the main idea of a passage.
Skill: Identify a writer's purpose, point of view, and intended meaning.
Includes recognizing a writer's expressed or implied purpose for writing; evaluating the appropriateness of written material for a specific purpose or audience; recognizing the likely effect
on an audience of a writer's choice of words; and using the content, word choice, and phrasing of a passage to determine a writer's opinion or point of view.
Skill: Analyze the relationship among ideas in written material.
Includes identifying sequence of events or steps, identifying cause-effect relationships, analyzing relationships between ideas in opposition, identifying solutions to problems, and drawing
conclusions inductively and deductively from information stated or implied in a passage.
Skill: Use critical reasoning skills to evaluate written material.
Includes evaluating the stated or implied assumptions on which the validity of a writer's argument depends; judging the relevance or importance of facts, examples, or graphic data to a
writer's argument; evaluating the logic of a writer's argument; evaluating the validity of analogies; distinguishing between fact and opinion; and assessing the credibility or objectivity of
a writer or source of written material.
Skill: Apply study skills to reading assignments.
Includes organizing and summarizing information for study purposes; following written instructions or directions; and interpreting information presented in charts, graphs, or tables.
THEA Mathematics Section
General Description
The Mathematics Section of the THEA Test consists of approximately 50 multiple-choice questions covering four general areas: fundamental mathematics, algebra, geometry, and problem solving. The test
questions focus on a student's ability to perform mathematical operations and solve problems. Appropriate formulas will be provided to help students perform some of the calculations required by the
test questions.
You will have access to an on-screen calculator during the Mathematics Section of the THEA IBT. See "The Test Session" for more information.
Skill Descriptions
The Mathematics Section of the THEA Test is based on the skills listed below. Each skill is accompanied by a description of the content that may be included on the test.
FUNDAMENTAL MATHEMATICS
Skill: Solve word problems involving integers, fractions, decimals, and units of measurement.
Includes solving word problems involving integers, fractions, decimals (including percents), ratios and proportions, and units of measurement and conversions (including scientific notation).
Skill: Solve problems involving data interpretation and analysis.
Includes interpreting information from line graphs, bar graphs, pictographs, and pie charts; interpreting data from tables; recognizing appropriate graphic representations of various data;
analyzing and interpreting data using measures of central tendency (mean, median, and mode); and analyzing and interpreting data using the concept of variability.
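As a quick illustration of the central-tendency portion of this skill (the data here are made up, not an actual THEA item), Python's standard library computes all three measures directly:

```python
from statistics import mean, median, mode

scores = [70, 85, 85, 90, 100]    # hypothetical data set
central = (mean(scores), median(scores), mode(scores))
# mean = 430/5 = 86, median = 85 (middle value), mode = 85 (most frequent)
```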
Skill: Graph numbers or number relationships.
Includes identifying the graph of a given equation or a given inequality, finding the slope and/or intercepts of a given line, finding the equation of a line, and recognizing and interpreting
information from the graph of a function (including direct and inverse variation).
Skill: Solve one- and two-variable equations.
Includes finding the value of the unknown in a given one-variable equation, expressing one variable in terms of a second variable in two-variable equations, and solving systems of two
equations in two variables (including graphical solutions).
Skill: Solve word problems involving one and two variables.
Includes identifying the algebraic equivalent of a stated relationship and solving word problems involving one and two unknowns.
Skill: Understand operations with algebraic expressions and functional notation.
Includes factoring quadratics and polynomials; performing operations on and simplifying polynomial expressions, rational expressions, and radical expressions; and applying principles of
functions and functional notation.
Skill: Solve problems involving quadratic equations.
Includes graphing quadratic functions and quadratic inequalities; solving quadratic equations using factoring, completing the square, or the quadratic formula; and solving problems involving
quadratic models.
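For instance (an illustrative sketch, not an actual test item), the quadratic formula x = (-b ± sqrt(b² - 4ac)) / (2a) named in this skill can be applied directly:

```python
import math

def solve_quadratic(a, b, c):
    """Real roots of a*x**2 + b*x + c = 0 via the quadratic formula
    (sketch: assumes a != 0 and a nonnegative discriminant)."""
    disc = b * b - 4 * a * c
    root = math.sqrt(disc)
    return ((-b - root) / (2 * a), (-b + root) / (2 * a))

# x^2 - 5x + 6 = (x - 2)(x - 3), so the roots are 2 and 3
roots = solve_quadratic(1, -5, 6)
```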
Skill: Solve problems involving geometric figures.
Includes solving problems involving two-dimensional geometric figures (e.g., perimeter and area problems) and three-dimensional geometric figures (e.g., volume and surface area problems) and
solving problems using the Pythagorean theorem.
Skill: Solve problems involving geometric concepts.
Includes solving problems using principles of similarity, congruence, parallelism, and perpendicularity.
PROBLEM SOLVING
Skill: Apply reasoning skills.
Includes drawing conclusions using inductive and deductive reasoning.
Skill: Solve applied problems involving a combination of mathematical skills.
Includes applying combinations of mathematical skills to solve problems and to solve a series of related problems.
THEA Writing Section
General Description
The Writing Section of the THEA Test consists of two subsections: a writing sample subsection and a multiple-choice subsection. The writing sample subsection requires students to demonstrate their
ability to communicate effectively in writing on a given topic. The multiple-choice subsection includes approximately 40 questions assessing students' ability to recognize various elements of
effective writing. You are scored first on the writing sample subsection. If your score on the writing sample subsection is neither a clear pass nor a clear fail, the multiple-choice subsection
contributes to your passing status. See "How to Read Your Score Report" for a description of how the writing sample subsection is scored.
Description: Writing Sample Subsection
The writing sample subsection of the THEA Test consists of one writing assignment. Students are asked to prepare a MULTIPLE-PARAGRAPH writing sample of about 300–600 words on an assigned topic.
Students' writing samples are scored on the basis of how effectively they communicate a whole message to a specified audience for a stated purpose. Students will be assessed on their ability to
express, organize, and support opinions and ideas, rather than on the position they express. The following characteristics may be considered in scoring the writing samples:
• APPROPRIATENESS—the extent to which the student addresses the topic and uses language and style appropriate to the given audience, purpose, and occasion.
• UNITY AND FOCUS—the clarity with which the student states and maintains a main idea or point of view.
• DEVELOPMENT—the amount, depth, and specificity of supporting detail the student provides.
• ORGANIZATION—the clarity of the student's writing and the logical sequence of the student's ideas.
• SENTENCE STRUCTURE—the effectiveness of the student's sentence structure and the extent to which the student's writing is free of errors in sentence structure.
• USAGE—the extent to which the student's writing is free of errors in usage and shows care and precision in word choice.
• MECHANICAL CONVENTIONS—the student's ability to spell common words and to use the conventions of capitalization and punctuation.
Your written response should be your original work, written in your own words, and not copied or paraphrased from some other work.
Skill Descriptions: Multiple-Choice Subsection
The multiple-choice subsection of the Writing Section of the test is based on the skills listed below. Each skill is accompanied by a description of the content that may be included on the test.
Please note that the term standard as it appears below refers to language use that conforms to the conventions of edited American English.
ELEMENTS OF COMPOSITION
Skill: Recognize purpose and audience.
Includes recognizing writing that is appropriate for a given purpose and recognizing writing that is appropriate for a given audience and occasion.
Skill: Recognize unity, focus, and development in writing.
Includes recognizing unnecessary shifts in point of view or distracting details that impair the development of the main idea in a piece of writing, recognizing revisions that improve the
unity and focus of a piece of writing, and recognizing examples of well-developed writing.
Skill: Recognize effective organization in writing.
Includes recognizing methods of paragraph organization and the appropriate use of transitional words or phrases to convey text structure and reorganizing sentences to improve cohesion and the
effective sequence of ideas.
SENTENCE STRUCTURE, USAGE, AND MECHANICS
Skill: Recognize effective sentences.
Includes recognizing ineffective repetition and inefficiency in sentence construction; identifying sentence fragments and run-on sentences; identifying standard subject-verb agreement;
identifying standard placement of modifiers, parallel structure, and use of negatives in sentence formation; and recognizing imprecise and inappropriate word choice.
Skill: Recognize edited American English usage.
Includes recognizing the standard use of verb forms and pronouns; recognizing the standard formation and use of adverbs, adjectives, comparatives, superlatives, and plural and possessive
forms of nouns; and recognizing standard punctuation.
TRANSACTIONS, Philosophical, are a collection of the principal papers and matters read before certain philosophical societies, as the Royal Society of London, and the Royal Society of Edinburgh. These
Transactions contain the several discoveries and histories of nature and art, either made by the members of those societies, or communicated by them from their correspondents, with the various
experiments, observations, &c, made by them, or transmitted to them, &c.
The Philos. Trans. of the Royal Society of London were set on foot in 1665, by Mr. Oldenburg, the then secretary of that Society, and were continued by him till the year 1677. They were then
discontinued upon his death, till January 1678, when Dr. Grew resumed the publication of them, and continued it for the months of December 1678, and January and February 1679, after which they were
intermitted till January 1683. During this last interval their want was in some measure supplied by Dr. Hook's Philosophical Collections. They were also interrupted for 3 years, from December 1687 to
January 1691, beside other smaller interruptions amounting to near a year and a half more, before October 1695, since which time the Transactions have been carried on regularly to the present day,
with various degrees of credit and merit.
Till the year 1752 these Transactions were published in numbers quarterly, and the printing of them was always the single act of the respective secretaries till that time; but then the society
thought fit that a committee should be appointed to consider the papers read before them, and to select out of them such as they should judge most proper for publication in the future Transactions.
For this purpose the members of the couneil for the time being, constitute a standing committee: they meet on the first Thursday of every month, and no less than seven of the members of the committee
(of which number the president, or in his absence a vice president, is always to be one) are allowed to be a quorum, capable of acting in relation to such papers; and the question with regard to the
publication of any paper, is always decided by the majority of votes taken by ballot.
They are published annually in two parts, at the expence of the society; and each fellow, or member, is entitled to receive one copy gratis of every part published after his admission into the
society. For many years past, the collection, in two parts, has made one volume in each year; and in the year 1793 the number of the volumes was 83, being 10 less than the number of the year in the
century. They were formerly much respected for the great number of excellent papers and discoveries contained in them; but within the last dozen years there has been a great falling off, and the
volumes are now considered as of very inferior merit, as well as quantity.
There is also a very useful Abridgment, of those volumes of the Transactions that were published before the year 1752, when the society began to publish the Transactions on their own account. Those
to the end of the year 1700 were abridged, in 3 volumes, by Mr. John Lowthorp: those from the year 1700 to 1720 were abridged, in 2 volumes, by Mr. Henry Jones: and those from 1719 to 1733 were
abridged, in 2 volumes, by Mr. John Eames and Mr. John Martyn; Mr. Martyn also continued the abridgment of those from 1732 to 1744 in 2 volumes, and of those from 1744 to 1750 in 2 volumes; making in
all 11 volumes, of very curious and useful matters in all the arts and sciences.
The Royal Society of Edinburgh, instituted in 1783, have also published 3 volumes of their Philosophical Transactions; which are deservedly held in the highest respect for the importance of their contents.
TRANSCENDENTAL Quantities, among Geometricians, are indeterminate ones; or such as cannot be expressed or fixed to any constant equation: such is a transcendental curve, or the like.
M. Leibnitz has a dissertation in the Acta Erud. Lips. in which he endeavours to shew the origin of such quantities; viz, why some problems are neither plain, solid, nor sursolid, nor of any certain
degree, but do transcend all algebraic equations.
He also shews how it may be demonstrated without calculus, that an algebraic quadratrix for the circle or hyperbola is impossible: for if such a quadratrix could be found, it would follow, that by
means of it any angle, ratio, or logarithm, might be divided in a given proportion of one right line to another, and this by one universal construction: and consequently the problem of the section of
an angle, or the invention of any number of mean proportionals, would be of a certain finite degree. Whereas the different degrees of algebraic equations, and therefore the problem understood in
general of any number of parts of an angle, or mean proportionals, is of an indefinite degree, and transcends all algebraical equations.
Others define Transcendental equations, to be such fluxional equations as do not admit of fluents in common finite algebraical equations, but as expressed by means of some curve, or by logarithms, or
by infinite series; thus the expression ẏ = ẋ/x is a Transcendental equation, because the fluents cannot both be expressed in finite terms. And the equation which expresses the relation between an arc of a
circle and its sine is a Transcendental equation; for Newton has demonstrated that this relation cannot be expressed by any finite algebraic equation, and therefore it can only be by an infinite or a
Transcendental equation.
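In modern notation (a restatement added for the modern reader, not part of Hutton's entry), the relation between an arc and its sine on the unit circle is exactly such an infinite series:

```latex
a = \arcsin s
  = s + \frac{s^{3}}{6} + \frac{3s^{5}}{40} + \frac{15 s^{7}}{336} + \cdots
  = \sum_{n=0}^{\infty} \frac{(2n)!}{4^{n}(n!)^{2}(2n+1)}\, s^{2n+1},
```

where a is the arc and s its sine; no finite algebraic equation between a and s can replace this infinite series, which is the sense in which the relation is transcendental.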
It is also usual to rank exponential equations among Transcendental ones; because such equations, although expressed in finite terms, have variable exponents, which cannot be expunged but by putting
the equation into fluxions, or logarithms, &c. Thus, for example, x^x = a is an exponential, and therefore a Transcendental, equation.
Transcendental Curve, in the Higher Geometry, is such a one as cannot be defined by an algebraic equation; or of which, when it is expressed by an equation, one of the terms is a variable quantity,
or a curve line. And when such curve line is a geometrical one, or one of the first degree or kind, then the Transcendental curve is said to be of the second degree or kind, &c.
These curves are the same with what Des Cartes, and others after him, call mechanical curves, and which they would have excluded out of geometry; contrary however to the opinion of Newton and
Leibnitz; for as much as, in the construction of geometrical problems, one curve is not to be preferred to another as it is defined by a more simple equation, but as it is more easily described than
that other: besides, some of these Transcendental, or mechanical curves, are found of greater use than almost all the algebraical ones.
M. Leibnitz, in the Acta Erudit. Lips. has given a kind of Transcendental equations, by which these Transcendental curves are actually defined, and which are of an indefinite degree, or are not
always the same in every point of the curve. Now whereas algebraists use to assume some general letters or numbers for the quantities sought, in these Transcendental problems Leibnitz assumes general
or indefinite equations for the lines sought; thus, for example, putting x and y for the absciss and ordinate, the equation he uses for a line required, is 0 = a + bx + cy + dxx + exy + fyy, &c: by the help of which indefinite equation,
he seeks for the tangent; and comparing that which results with the given property of tangents, he finds the value of the assumed letters a, b, c, &c, and thus defines the equation of the line
If the comparison abovementioned do not succeed, he pronounces the line sought not to be an algebraical, but a Transcendental one.
This supposed, he proceeds to find the species of Transcendency: for some Transcendentals depend on the general division or section of a ratio, or upon logarithms, others upon circular arcs, &c.
Here then, beside the symbols x and y, he assumes a third, as v, to denote the Transcendental quantity; and of these three he forms a general equation of the line sought, from which he finds the
tangent according to the differential method, which succeeds even in Transcendental quantities. This found, he compares it with the given properties of the tangents, and so discovers not only the
values of a, b, c, &c, but also the particular nature of the Transcendental quantity.
Transcendental problems are very well managed by the method of fluxions. Thus, for the relation of a circular arc and right line, let a denote the arc, and x the versed sine, to the radius 1, then is
ȧ = ẋ/√(2x − xx); and if the ordinate of a cycloid be y, then is ẏ = ẋ√((2 − x)/x).
Thus is the analytical calculus extended to those lines which have hitherto been excluded, for no other cause but that they were thought incapable of it.
Bowman, J
Department of Mathematical and Statistical Sciences
CAB 521
Mailing Address:
Department of Mathematical and Statistical Sciences
University of Alberta
Edmonton, Alberta
Canada, T6G 2G1
+1 780 492 0532
+1 780 492 6826
GPG key: jcbowman-pubkey.asc
GPG Key fingerprint: 6237 D46E 270E 1C3B 7B12 F032 F9EB A966 4CD7 3FB3
Research & Teaching:
BS Eng (Alberta)
MA (Princeton)
PhD (Princeton)
Area of Specialization:
My past work on the analytical and numerical aspects of statistical closures in turbulence has led to the recent development of Spectral Reduction, a reduced statistical description of
turbulence. The agreement with full numerical simulations appears to be remarkably good, even in flows containing long-lived coherent structures. Among the practical applications, such a tool can
be used to assess the effect of various dissipation mechanisms in large-eddy simulations, as a subgrid model, or even as a substitute for full simulation of high-Reynolds number turbulence.
My other research interests include: implicit dealiasing of linear convolutions, 3D vector graphics, inertial-range scaling laws for two-dimensional fluid, plasma, and geophysical turbulence;
nonlinear symmetric stability criteria for constrained non-canonical Hamiltonian dynamics; turbulent transport and the role of anisotropy in plasma and geophysical turbulence; realizable
statistical closures; electro-osmotic flow; parcel advection algorithms; exactly conservative integration algorithms; anisotropic multigrid solvers.
Newport Beach Math Tutors
...For this reason it is important to have a good understanding of Precalculus to be successful in Calculus I, II and III. Your investment in improving your Precalculus grade pays off handsomely in
Calculus. Happy Precalculusing!
11 Subjects: including algebra 1, algebra 2, calculus, geometry
...I can provide you with testimonials from satisfied customers. I am also currently a primary educator and curriculum developer for home-schooled students. I have a Master's Degree as a Reading
Specialist. I have been certified by the states of both Pennsylvania and New Jersey as an elementary tea...
41 Subjects: including precalculus, trigonometry, GRE, algebra 1
...Typically a little bit of differential equations is taught in the third calculus class (usually the calculus course which covers multivariable calculus). The topics covered in the differential
equations classes that I completed were as follows: 1) Separation of variables 2) Homogeneous equations...
34 Subjects: including SAT math, algebra 1, algebra 2, calculus
...I grew up in Orange County, CA and attended Rosary High School, where I was a four year varsity softball player and involved with clubs such as CSF and NHS. During my time in high school I
worked very hard not only at my academics but also in athletics and was admitted to Princeton University an...
18 Subjects: including prealgebra, English, biology, reading
...When my students don't succeed, then I do not. I am an experienced tutor with several years of tutoring experience. I prefer to take an individualized approach whereby on our first meeting I try
to assess what each student is having difficulty with and how they learn.
46 Subjects: including geometry, MCAT, physics, ACT Math
Homework Help
Posted by LeAnn/Please help me on Wednesday, November 18, 2009 at 9:57pm.
Just so you guys know, I'm not trying to get the answers to all of my assignment. These problems I'm posting are the ones that I'm stuck on.
Use set-builder notation to describe solution.
{x/x_ _}
• College Algebra - LeAnn/I'm not trying to just get answers, Wednesday, November 18, 2009 at 10:08pm
I'm not trying to just get answers. I'm trying to better understand the areas where I am not understanding things. As you all help me and put everything out in details, I'm writing down
everything to better understand and reference back to if I get stuck again.
• College Algebra - ana, Wednesday, November 18, 2009 at 10:55pm
check my work
• College Algebra - LeAnn, Wednesday, November 18, 2009 at 10:57pm
K, thanks and your last problem you helped me with was right.
• College Algebra - ana, Wednesday, November 18, 2009 at 10:58pm
Nice to hear that
• College Algebra - LeAnn, Wednesday, November 18, 2009 at 11:04pm
I have a few problems that are on here that I was redirected to a website that I went to but still can't understand the problem. If you could try to help me with maybe some of those too, I'd
greatly appreciate it. I only have a couple more problems I need help with besides the ones already on here. Thanks again hun.
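For anyone finding this thread later, here is a generic worked example of set-builder notation (a hypothetical problem; the original assignment's inequality is incomplete above). Solving a simple linear inequality:

```latex
3x - 5 \ge 7 \;\Longrightarrow\; 3x \ge 12 \;\Longrightarrow\; x \ge 4,
\qquad \text{solution set } \{\, x \mid x \ge 4 \,\}.
```

The braces and the vertical bar read as "the set of all x such that x is at least 4."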
NADA for R
Water Resources of the United States
NADA for R: Nondetects and Data Analysis for the R statistical computing environment
NADA for R is a library package for the R environment for graphics and statistics. The package contains data and analysis methods described in the book Nondetects and Data Analysis: Statistics for
Censored Environmental Data, Helsel (2005); see references below.
NADA for R can be used to plot data, generate summary statistics, compute exceedance probabilities, and perform hypothesis tests on data containing multiple censoring thresholds (a.k.a. nondetects or less-thans).
All functions within NADA for R are presented within a clear, consistent, and easy to use framework.
• Statistical Methods
□ Robust Regression on Order Statistics (Robust ROS)
□ Empirical cumulative distribution functions (ECDFs) using Kaplan-Meier methods
□ Nonparametric hypothesis testing based on ECDFs and rank-sum methods.
□ Maximum likelihood estimation (MLE) summary statistics and regression methods.
□ Censored probability plots
□ Censored boxplots
• Miscellaneous
□ Generic summary, printing, and plotting functions.
□ All statistical methods have common query and prediction functions that allow the computation of summary statistics and prediction of modeled values.
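The Kaplan–Meier bullet above is the core algorithm behind censored ECDFs. As a quick illustration outside R, here is a minimal pure-Python sketch of the product-limit estimator for right-censored data (left-censored environmental data are typically handled by flipping the values first); the function name and interface are hypothetical, not part of NADA's API:

```python
def kaplan_meier(values, detected):
    """Product-limit survival estimates S(t) at each distinct detected value.

    values   -- measurements (after any flipping for left-censoring)
    detected -- True for an observed value, False for a censored one
    """
    data = sorted(zip(values, detected))
    n_at_risk = len(data)
    surv = 1.0
    estimate = {}
    i = 0
    while i < len(data):
        t = data[i][0]
        events = sum(1 for v, obs in data if v == t and obs)  # detections at t
        ties = sum(1 for v, _ in data if v == t)              # all records at t
        if events:
            surv *= 1.0 - events / n_at_risk  # product-limit step
            estimate[t] = surv
        n_at_risk -= ties
        i += ties
    return estimate

# Toy data: the value 2 is censored; 1, 3, and 4 are detected.
s = kaplan_meier([1, 2, 3, 4], [True, False, True, True])
```

For a left-censored column of concentrations, one would flip the values (subtract each from a constant larger than the maximum) before calling this, then flip the resulting ECDF back.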
Dennis R. Helsel, 2005. Nondetects and Data Analysis: Statistics for Censored Environmental Data. John Wiley and Sons, New York, 250p.
Lopaka Lee and Dennis Helsel, 2005. Statistical analysis of water-quality data containing multiple detection limits I: S-language software for regression on order statistics. Computers & Geosciences, 31, 1241-1248.
Lopaka Lee and Dennis Helsel, 2005. Baseline Models of Trace Elements in Major Aquifers of the United States. Applied Geochemistry, 20, 1560-1570.
Lopaka Lee and Dennis Helsel, (in press). Statistical analysis of water-quality data containing multiple detection limits II: S-language software for nonparametric distribution modeling and hypothesis testing. Computers & Geosciences.
Installing NADA for R:
The primary requirement for running the software is a working installation of the R environment. R can be obtained at http://www.r-project.org/.
Once R is installed and the machine has a functioning internet connection, the NADA package may be automatically installed using the following command:
> install.packages("NADA")
Both R and NADA for R are Free Software and can be obtained, used, modified and distributed under the terms of the GNU General Public License (version 2 or newer).
Basic Algebra/Polynomials/Zero and Negative Exponents
There are two very important things you need to know when working with Zero Power or Negative Exponents.
First, any nonzero number to the Zero Power equals one. For example, (-50)^0 = 1.
There is one exception: 0^0 is undefined (indeterminate), so it CANNOT be evaluated!
When dealing with Negative Exponents there is a simple trick. Whichever part of a fraction the factor with the negative exponent is in, move it to the other part and the exponent becomes positive.
a^-2 = 1/a^2
1/a^-3 = a^3
If we have something a little more complicated, we only move things with Negative Exponents. These processes only work with multiplication. If there is addition/subtraction involved, then we are in
something a little more complicated than Algebra 1...
(a^-2c^3)/b^-1 = (bc^3)/a^2
Something like this wouldn't follow the aforementioned rules
(a^-2 + b^5)/(c^6)
This problem would require a little more work: splitting up the fraction and working with both parts individually and having an answer with two fractions instead of one nice one. It's possible but it
doesn't flow like the other examples or the practice problems.
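These rules are easy to verify numerically. A quick sketch using Python's exact rational arithmetic (the particular numbers are just made-up examples):

```python
from fractions import Fraction

a, b, c = 2, 3, 5

# Any nonzero number to the zero power is 1.
assert (-50) ** 0 == 1

# A negative exponent flips the base to the other part of the fraction:
# a^-2 == 1 / a^2
assert Fraction(a) ** -2 == Fraction(1, a ** 2)

# (a^-2 * c^3) / b^-1  ==  (b * c^3) / a^2
lhs = Fraction(a) ** -2 * c ** 3 / Fraction(b) ** -1
rhs = Fraction(b * c ** 3, a ** 2)
assert lhs == rhs
```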
Example Problems
(5645848213489487561864756189465548914564751567)^0 = ?? [= 1]
(a^-3b^4c^-1)^-2 = ?? [= a^6b^-8c^2 = a^6c^2 / b^8]
a^-8b^-2c^-1 = ?? [= 1 / a^8b^2c]
a^2b^-3c^4 = ?? [= a^2c^4 / b^3]
(-2)^2 = 4, but be careful with notation: -2^2 means -(2^2) = -4 by order of operations.
examples of symplectic manifolds
Examples of symplectic manifolds: The most basic example of a symplectic manifold is $\mathbb{R}^{2n}$. If we choose coordinate functions $x_{1},\ldots,x_{n},y_{1},\ldots,y_{n}$, then
$\omega=\sum_{m=1}^{n}dx_{m}\wedge dy_{m}$
is a symplectic form, and one can easily check that it is closed.
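Nondegeneracy is also easy to check in coordinates: in the basis $x_{1},\ldots,x_{n},y_{1},\ldots,y_{n}$ the form $\omega$ has block matrix $\begin{pmatrix}0&I\\ -I&0\end{pmatrix}$, which squares to $-I$ and is therefore invertible. A small pure-Python sanity check of that matrix identity (an illustration, not needed for the proof):

```python
# Matrix of the standard symplectic form on R^{2n} in the basis
# (x_1,...,x_n, y_1,...,y_n): Omega = [[0, I], [-I, 0]].
def omega_matrix(n):
    Z = [[0] * n for _ in range(n)]
    I = [[int(i == j) for j in range(n)] for i in range(n)]
    negI = [[-v for v in row] for row in I]
    top = [zr + ir for zr, ir in zip(Z, I)]
    bottom = [nr + zr for nr, zr in zip(negI, Z)]
    return top + bottom

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

n = 3
O = omega_matrix(n)
minus_identity = [[-int(i == j) for j in range(2 * n)] for i in range(2 * n)]
# Omega^2 = -I, so Omega is invertible: the form is nondegenerate.
assert matmul(O, O) == minus_identity
```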
If $M$ is any manifold, then the cotangent bundle $T^{*}M$ is symplectic. If $x_{1},\ldots,x_{n}$ are coordinates on a coordinate patch $U$ on $M$, and $\xi_{1},\ldots,\xi_{n}$ are the functions on $T^{*}U$ defined by
$\xi_{i}(m,\eta)=\eta\left(\left.\frac{\partial}{\partial x_{i}}\right|_{m}\right)$
at any point $(m,\eta)\in T^{*}M$, then
$\omega=\sum_{i=1}^{n}dx_{i}\wedge d\xi_{i}.$
(Equivalently, using the notation $\alpha$ from the entry Poincaré 1-form, we can define $\omega=-d\alpha$.)
One can check that this behaves well under coordinate transformations, and thus defines a form on the whole manifold. One can easily check that this is closed and non-degenerate.
Examples of non-symplectic manifolds: Obviously, all odd-dimensional manifolds are non-symplectic.
More subtly, if $M$ is compact and $2n$-dimensional and $\omega$ is a closed 2-form, consider the form $\omega^{n}$. If this form is exact then, since an exact top-degree form on a compact manifold integrates to zero by Stokes' theorem, $\omega^{n}$ must be $0$ somewhere, and so $\omega$ is somewhere degenerate. Since the wedge of a closed and an exact form is exact, no power $\omega^{m}$ of $\omega$ can be exact. In particular, $H^{2m}(M)\neq 0$ for all $0\leq m\leq n$, for any compact symplectic manifold.
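The compactness step can be written out explicitly. If $\omega^{n}=d\eta$ were exact on a closed (compact, boundaryless) oriented $M$, Stokes' theorem would give

```latex
\int_{M}\omega^{n} \;=\; \int_{M}d\eta \;=\; \int_{\partial M}\eta \;=\; 0,
```

while a nowhere-degenerate $\omega$ would make $\omega^{n}$ a volume form, whose integral over $M$ is nonzero; the contradiction forces $\omega^{n}$ to vanish somewhere.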
Thus, for example, $S^{n}$ for $n>2$ is not symplectic. Also, this means that any symplectic manifold must be orientable.
Finally, it is not generally the case that connected sums of compact symplectic manifolds are again symplectic: Every symplectic manifold admits an almost complex structure (a symplectic form and a
Riemannian metric on a manifold are sufficient to define an almost complex structure which is compatible with the symplectic form in a nice way). In the case of a connected sum of two symplectic
manifolds, there does not necessarily exist such an almost complex structure, and hence connected sums cannot be (generically) symplectic.
Impurity Modulated Static Linear and First Nonlinear Polarizabilities of Doped Quantum Dots
ISRN Optics
Volume 2012 (2012), Article ID 847532, 8 pages
Research Article
Impurity Modulated Static Linear and First Nonlinear Polarizabilities of Doped Quantum Dots
^1Department of Physics, Suri Vidyasagar College, Suri, Birbhum 731101, West Bengal, India
^2Department of Chemistry, Physical Chemistry Section, Visva Bharati University, Santiniketan, Birbhum 731235, West Bengal, India
Received 7 January 2012; Accepted 26 February 2012
Academic Editors: G. Bellanca, D. Cojoc, E. Lidorikis, and G. Ma
Copyright © 2012 Nirmal Kr Datta and Manas Ghosh. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
We explore the pattern of linear and first nonlinear optical (NLO) response of repulsive impurity doped quantum dots harmonically confined in two dimensions. The dopant impurity potential chosen
assumes Gaussian form. The quantum dot is subject to a static electric field. For some fixed values of transverse magnetic field strength (), and harmonic confinement potential (), the influence of
impurity strength (), impurity stretch (), and impurity location () on the diagonal components of static linear ( and ), and the first NLO ( and ) responses of the dot are computed through linear
variational route. The investigation reveals the crucial roles played by the various impurity parameters in modulating these optical responses. Interestingly, maximization in the first NLO responses
has been observed for some particular dopant location and at some particular value of spatial stretch of the dopant.
1. Introduction
The study of impurity states in low-dimensional heterostructures has emerged as an important topic to which many theoretical and experimental works have been dedicated. As a result, research on doped semiconductor devices is now ubiquitous [1]. Doped systems have quantized properties that make them ideal objects for scientific study and robust applications. The dopants modify the chemical potential of a material. The control of the optoelectronic properties of a wide range of semiconductor devices [2–4] has turned out to be the most fascinating aspect of impurity doping in such materials.
Miniaturization of semiconductor devices reaches its limit with the advent of so-called low-dimensional structures such as quantum dots (QDs). With QDs, new perspectives and subtleties emerge in the field of impurity doping, owing to the mingling of new confinement sources with impurity-related potentials [5]. Such confinement, coupled with the dopant location, can dramatically alter the electronic and optical properties of the system [6, 7].
For quite some time, the search for molecules and materials with high linear and nonlinear susceptibilities and ultrafast response times has been pursued all over the world [8–14]. The targeted molecules are important in communication technology, data storage, optical switching, and so forth [9, 11, 15]. As a result of numerous studies, certain broad design clues have emerged. Based on these guidelines, one most often tries to design donor-acceptor molecules with large charge transfer from the donor to the acceptor moiety. NLO response is maximized for certain optimal combinations of charge transfer, hopping interactions, and the length over which the charge is transferred [16–19]. Modulation of the molecular electronic parameters, brought about through subtle structural changes or changes in substitution, alters the electronic wave function and therefore the electron density distribution, which ultimately shapes all the properties of atoms and molecules. It would be crucial to have at one's disposal systems whose electronic structures could be continuously tuned by adjusting suitable control parameters. In molecules, these variations are only discrete, being caused either by a change of substituent or by structural alteration or by both. Quantum dots are representatives of systems where the electronic structural dispositions can be varied practically continuously as functions of a few of the system parameters. From the above discussion, it appears inevitable that variation in the electronic structure of the dot induced by the impurity
could subtly shape all its properties. It is thus crucial to study the role of the dopant impurity on the optical properties of quantum dots. A thorough survey of the literature reveals some notable works on the optical properties of impurity-doped quantum dots [20, 21]. Doped quantum dots thus, beyond any doubt, possess the potential to exhibit high linear and nonlinear optical response properties and would principally be applied in the area of all-optical signal processing. A detailed investigation of the various dopant parameters relevant to optical signal processing will continue to be a topic of active research. Of late, we have also studied the frequency-dependent linear and nonlinear polarizabilities of doped quantum dots as functions of various dopant parameters [22, 23].
In the present manuscript, we inspect the diagonal components of linear ( and ) and first nonlinear polarizabilities ( and ) of a repulsive impurity doped quantum dot subject to a static electric
field. Following earlier works on the effects of a repulsive scatterer in multicarrier dots in the presence of magnetic field [24, 25], here we have considered that the QD is doped with a repulsive
Gaussian impurity. When the impurity is doped in an on-center location, it does not destroy the inversion symmetry of the dot and consequently the emergence of value is ruled out. However, nonzero
values are envisaged. At off-center dopant locations, however, emergence of both nonzero and values is observed owing to the destruction of inversion symmetry. We have found that, in conjunction with
the dopant location, the strength and the spatial stretch of the dopant also affect the polarizability values quite prominently.
2. Method
We consider the energy eigenstates of an electron subject to a harmonic confinement potential and a perpendicular magnetic field , where , , and Landau gauge have been used. , , and stand for
harmonic confinement potential, cyclotron frequency (a measure of magnetic confinement offered by ), and vector potential, respectively. The Hamiltonian in our problem reads Define as the effective
frequency in the -direction. In real QDs, the electrons are confined in three dimensions; that is, the carriers are dynamically confined to zero dimensions. The confinement length scales , , and can be different in the three spatial directions, but are typically of the order of nanometers. In models of such dots, is often taken to be strictly zero and the confinement in the other two directions is described by a potential with for , . A parabolic potential, , is often used as a realistic and at the same time computationally convenient approximation. Assuming that the -extension could be effectively considered zero, the
electronic properties in these nanostructures have been successfully described within the model of one-electron motion in a 2-d harmonic oscillator potential in the presence of a magnetic field [26–28]. Now, intrusion of the impurity perturbation transforms the Hamiltonian to where with and for a repulsive impurity, and denotes the position of the impurity center. In QDs, the electrons move over an almost equipotential surface on which they are essentially free; these are the carrier electrons. In reality, a regular dot rarely occurs, since deformation of the QD boundary distorts the effective potential seen by the carrier electrons into a bunch of uneven pockets. Sometimes these electrons themselves play the role of defects (repulsive impurities) [29]. is a measure of the strength of the impurity potential whereas determines the extent of influence of the impurity potential. A large value of indicates that the spatial extension of the impurity potential is highly restricted, whereas a small accounts for a spatially diffused one. Thus, a change in causes in turn a change in the extent of dot-impurity overlap, which affects the excitation pattern noticeably [22, 23]. The presence of a repulsive scatterer simulates a dopant with excess electrons. The choice of a Gaussian impurity potential is not arbitrary, as it has been exploited by several investigators [30–35].
We write the trial wave function as a superposition of the product of harmonic oscillator eigenfunctions and , respectively, as follows: where are the variational parameters and and . The matrix
elements of are given by where with and . With the transformations , , , and , one can write where, , , , , , and . With the help of the standard integral [36], it is now easy to write where with and
. Thus, finally, we obtain stands for the Hermite polynomials of order. The eigenstate of the system in this representation can be written as where , are the appropriate quantum numbers,
respectively, and are composite indices specifying the direct product basis.
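The linear variational scheme used here amounts to diagonalizing the Hamiltonian matrix in a truncated basis; by the variational principle, the expectation value in any normalized trial vector bounds the lowest eigenvalue from above. A toy pure-Python illustration on a hypothetical 2x2 symmetric matrix (not the paper's Hamiltonian):

```python
import math

# Hypothetical symmetric toy matrix; its eigenvalues are 1 and 3.
H = [[2.0, 1.0], [1.0, 2.0]]

def expectation(theta):
    psi = (math.cos(theta), math.sin(theta))        # normalized trial vector
    Hpsi = (H[0][0] * psi[0] + H[0][1] * psi[1],
            H[1][0] * psi[0] + H[1][1] * psi[1])
    return psi[0] * Hpsi[0] + psi[1] * Hpsi[1]      # <psi|H|psi>

# Scan over trial vectors; the minimum approaches the ground eigenvalue 1.
best = min(expectation(k * math.pi / 1000) for k in range(1000))
assert abs(best - 1.0) < 1e-4
```

Diagonalizing H directly gives the same optimum; scanning the mixing angle just makes the variational bound visible.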
In the presence of a static electric field of strength , the effective Hamiltonian of the system () becomes We have determined the values of , , , and , and we used the data to compute the direct components of the polarizabilities of the dot by the following relations obtained by numerical differentiation [37–41]: and a similar expression for : and a similar expression is used for computing the component. In these expressions, represents the average energy of the system when and . Analogously, we have also written . The above relations correspond to different combinations of electric field intensities and orientations.
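The finite-difference relations referred to above did not survive extraction. For reference, a common central-difference (finite-field) scheme for extracting a static linear polarizability and first hyperpolarizability from field-dependent energies looks like the sketch below; the stencils are the standard textbook ones and may differ from the authors' exact expressions, and the model coefficients are made up purely to check the formulas:

```python
import math

# Model energy E(F) = E0 - mu*F - (1/2)*alpha*F^2 - (1/6)*beta*F^3
# with made-up coefficients, used only to validate the stencils.
E0, mu, alpha, beta = -2.0, 0.7, 3.5, 1.25

def E(F):
    return E0 - mu * F - 0.5 * alpha * F**2 - beta * F**3 / 6.0

F = 0.01  # finite-field step (a.u.)

# alpha = -d^2E/dF^2 at F = 0 (3-point central difference)
alpha_fd = -(E(F) - 2 * E(0) + E(-F)) / F**2

# beta = -d^3E/dF^3 at F = 0 (5-point antisymmetric stencil)
beta_fd = -(E(2 * F) - 2 * E(F) + 2 * E(-F) - E(-2 * F)) / (2 * F**3)

assert math.isclose(alpha_fd, alpha, rel_tol=1e-6)
assert math.isclose(beta_fd, beta, rel_tol=1e-6)
```

In practice E(F) would come from diagonalizing the field-dependent Hamiltonian at a few field strengths rather than from a closed-form model.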
3. Results and Discussion
3.1. System Parameters
The model Hamiltonian (cf. (1)) can be made to represent a 2-d quantum dot with a single carrier electron [28, 42]. The form of the confinement potential indicates lateral electrostatic confinement
of the electrons in the - plane. Thus, it is disklike, as is normally fabricated by the molecular beam epitaxy (MBE) technique. Because of the unequal confinements in the and -directions, the symmetry is
nonhomogeneous. is the effective electronic mass appropriate for describing the motion of the electrons within the lattice of the material to be used. We have used and set . The radial position of
dopant () has been varied from 0.0 a.u. (on-center) to 70.71 a.u. (off-center) positions. In the linear variational calculation, we have used basis functions (cf. (3)) with for each of the
directions . The direct product basis spans a space of dimension. We have checked that the basis functions span the 2-d space effectively completely, at least with respect to representing the observables under investigation. We have made the convergence test with a still greater number of basis functions.
We have made some attempt to reasonably connect our theoretical parameters to real-life doped QDs. The parameter in the impurity potential can be correlated to , where is proportional to the width of the impurity potential [24, 25]. Table 1 gives the values in nm corresponding to different values, to give a feeling for the actual extension of the impurity domain. The value that we have used ( a.u.) closely resembles quantum dots ( a.u.). The maximum value of the dopant strength () was limited to ~1 a.u. or meV, and the applied magnetic field is of the order of millitesla (mT). Here, we want to mention that our method is not strictly a perturbation theory in the sense of a perturbation added to the original Hamiltonian through the agency of a perturbation parameter. In our work, we have exploited the linear variational principle to determine the approximate eigenstates of the system, since it was not possible to solve the time-independent Schrödinger equation with the impurity in the Hamiltonian and in the presence of a static electric field. Thus, we first evaluated the approximate eigenstates of the system taking harmonic oscillator eigenfunctions as the basis functions. This involves modulation of the linear combination coefficients. The modulation was not arbitrary, of course, but was governed by the diagonalization of the Hamiltonian matrix containing the contributions from the impurity potential and the static electric field. Thus, the method is actually a basis-function expansion technique with properly adjusted linear combination coefficients. The validity of the method lies in how far it can endure an increase in values without the normalization being hampered. We did not enforce normalization of the wave function and checked that the inherent normalization is well maintained even with a value as large as a.u. for a very large basis size (), indicating that our method is quite robust so far as stability is concerned. The static electric field has an intensity of V. We believe that these values might give some realization of real systems containing actual impurities.
3.2. Aspects of Polarizability Components
3.2.1. Influence of Impurity Location
Figure 1 depicts the variation of components of linear polarizability ( and ) in and directions as functions of radial position () of the dopant. Both the components are found to decrease
monotonically with increase in . An increase in takes the dopant away from the dot confinement and consequently there occurs a reduction in dot-impurity overlap. As a result, the extent of repulsive
interaction also diminishes which is reflected in the observed behaviors of linear polarizability components. From the plot, it has also been observed that the value is lower than component. This
could be due to varied extent of confinement in and directions. A close look at (1) reveals that the effective harmonic frequency along -axis is while that along -axis is . Thus, the -direction is
under stronger confinement in comparison to the -direction and naturally enjoys less dispersive character. Consequently, the linear polarizability components which are intimately connected with the
dispersive nature of the system assume different values in and directions. It needs to be mentioned that macroscopic polarizability should be isotropic for a symmetric QD in a perpendicular magnetic
field. In our investigation, we have found anisotropy in the values of and on a microscopic scale because of a nonzero . We feel that on a macroscopic scale, the anisotropies arising out of various
microscopic components may balance each other leading to no net anisotropy of the system. However, a more extensive and exhaustive computation is required for an in depth investigation.
We now investigate the diagonal components of first hyperpolarizability ( and ) of the doped dot. For the emergence of nonzero (), doping of impurity at off-center position is essential. An impurity
doping at on-center position does not destroy the inversion symmetry of the Hamiltonian and is therefore unable to generate non-zero value. Both the components exhibit maximization at a particular
dopant location as the impurity is shifted away from the dot center (Figure 2). A shift of the dopant away from the dot confinement center is endowed with opposite consequences. On one hand, the
intensity of dot confinement certainly decreases thereby favoring the high optical response. On the other hand, the said shift also decreases the extent of dot-impurity repulsive interaction and
discourages the emergence of a high optical response. So far as the generation of values is concerned, it appears that the interplay of these two opposing factors is responsible for the aforesaid maximization. For a high NLO response, therefore, designer quantum dots appear not hard to find; the requirement is the controlled incorporation of the dopant at a definite site to break the
symmetry of the confinement potential. In this regard, we should mention that of late there are some excellent experiments which show the mechanism of dopant incorporation [43] and how such
incorporations can be controlled [44].
3.2.2. Influence of Impurity Strength
We have now varied over a wide range to observe its influence on the polarizability components. Figure 3 exhibits the and profiles as functions of dopant strength for on-center and two off-center ( a.u. and 70.71 a.u.) dopant locations. The plots reveal how a variation in the dopant strength can influence the polarizability components. We notice that at all dopant locations both the linear polarizability components exhibit a saturation at high dopant strength. Within the extremely small dopant strength regime (), there occurs an initial rise in the polarizability values with increase in . It seems that in this regime the dot-impurity interaction is very feeble. A small increase in dopant strength in this domain primarily causes some development of the dot-impurity repulsive interaction, and we observe an initial surge in the polarizability values. At large , the plots show some kind of saturation, indicating a steady extent of dot-impurity interaction. As observed earlier, now also we find that values are somewhat lower in magnitude than values. Moreover, from the plots, we find that while the values of a particular component for on-center and near off-center dopant locations ( a.u.) are in close proximity, the value drops substantially at the far off-center dopant location ( a.u.) owing to a diminished dot-impurity interaction.
A similar saturation behavior is also envisaged in the and profiles as a function of (Figure 4). Since the components assume zero value at the on-center dopant location, the figure depicts their profiles at the near off-center ( a.u.) and far off-center ( a.u.) dopant locations. Here also we notice a lower magnitude of the component in comparison to its counterpart along the -direction. Interestingly, each component exhibits a greater magnitude at the far off-center dopant location than at the near off-center position. This behavior is quite natural, as we have already come across position-dependent maximization in the components.
3.2.3. Influence of Impurity Spread
We now turn our attention towards inspecting the influence of the spatial stretch of the impurity () on the polarizability components for on-center and two off-center ( a.u. and 70.71 a.u.) dopant locations (Figure 5). The and components initially show a sort of decrease with increase in (in the small domain) but ultimately culminate in some steady value with further increase in . A small value
implies that the impurity potential is diffused over a long spatial region, so also its influence. In the very low regime, this extreme diffusive nature of the impurity domain results in substantial
dot-impurity interaction and consequently a large dispersive nature of the system. The polarizability components thus register somewhat large value. The scenario changes completely in the high
regime. Now, a spatially quenched impurity potential undergoes very weak overlap with the dot confinement center resulting in small dispersive nature of the system. This causes a fall in the linear
polarizability values. At very high values of , the spatial stretch of the impurity becomes highly condensed and the decreasing dispersive nature of the system reaches its limit. The outcome is the steady values of the linear polarizability components. Also, as before, the components are smaller in magnitude in comparison to their -directional counterparts. Furthermore, for each component, the extent of the fall in its magnitude with increase in becomes progressively more pronounced as the dopant shifts from the on-center to an off-center location. This is simply because a high value already makes the dot-impurity overlap weak, so a shift of the dopant to a more off-center location further aggravates the situation and hastens the fall of the polarizability. Peculiarly, for the intermediate
off-center impurity positions, the curve is not monotonic. It seems difficult to understand why in this case we do not encounter an otherwise straightforward curve. It appears that at intermediate
dopant locations, along with dot-impurity overlap, the extent of dot confinement also plays a role in shaping the polarizability components. At these locations, a reduced dot-impurity overlap
decreases the polarizability values. On the other hand, because of the considerable dot-impurity separation, the dot confinement also becomes less stringent, making the system more flexible, which enhances the polarizability values. The competition between these two opposite influences could be the cause of the departure of the aforesaid profile from an otherwise monotonic plot.
The and components, on the other hand, exhibit prominent maximization as a function of (Figure 6). Here, we present the corresponding profiles at the near off-center ( a.u.) and far off-center ( a.u.) dopant locations; as usual, the components assume zero value at the on-center dopant location. The continual decrease in the dopant's spatial stretch with increase in might have contrasting consequences for the components. Although it reduces the grip of the dot potential on the dopant, it simultaneously diminishes the dot-impurity repulsive interaction. We feel that a changeover in the mutual dominance of these two contrasting factors could give rise to the maximization of the components at some particular value. As expected, here also we notice a lower magnitude of the component in comparison to its counterpart along the -direction. Interestingly, each component exhibits a greater magnitude at the far off-center dopant location than at the near off-center position. This behavior is quite natural, as we have already seen position-dependent maximization in the components.
4. Conclusions
The diagonal components of the linear and first nonlinear polarizabilities of repulsive impurity doped quantum dots subject to a static electric field reveal intriguing features. For an in-depth analysis, we meticulously examine the roles played by the impurity spread, the impurity strength, and, most importantly, the impurity location in shaping these components. We have found that the linear polarizability components decrease with increasing separation of the dopant location from the dot confinement center and also with increasing spatial shrinkage of the dopant potential. On the other hand, we envisage an increase in the said components with impurity strength. However, in all cases some steady behavior has finally been observed, and the direction has been found to be under more stringent confinement than the direction. The first nonlinear polarizability components evince maximization at some particular dopant location and for some particular value of the spatial spread of the dopant in
absolute sense. The maximization appears to be due to the conflict between two opposing factors that foment and hinder the effective confinement of the dot on the dopant. However, as a function of
dopant strength, we observe a persistent increase in values in absolute sense which culminates in a steady magnitude. We expect that the results obtained could have important implications in optical
applications of quantum dot nanodevices.
At the close of the discussion, it seems worthwhile to highlight the new findings of the present investigation in the light of the results of [23]. First of all, in the present study we have explored the role of the impurity location rigorously, which was absent from our earlier study on the frequency-dependent polarizability (FDP) [23]. Secondly, in [23] we observed that in the limitingly small dopant strength domain an increase in dopant strength causes an initial lull in the FDP values; as the dopant strength is increased a bit, the FDP values begin to increase smoothly, culminating in a steady value. In the present study, although we have found a similar trend in the high limit, the behavior is quite different in the small domain. The role of the dopant's spatial spread also comes out quite distinct in the two investigations. In the earlier study, we found that squeezing the spatial expansion of the dopant domain results in a persistent increase in the FDP values, whereas in the present enquiry a reverse behavior has been envisaged. We hope to carry out a rigorous comparative investigation of the frequency-dependent and static polarizability components in the near future.
In the present investigation, we did not consider the influence of size on the optical properties. Although in principle the dot wave function can stretch to infinity, in practice it effectively terminates at some finite distance. Thus, the size effect would be important at length scales within the actual termination of the wave function.
It is quite expected that donor and acceptor impurity would exhibit distinct impacts on the NLO properties. Recently, Hazra et al. have investigated the role of donor and acceptor impurities in a
slightly different context. It needs further study to precisely understand the distinct roles of acceptor and donor impurity pertinent to the present investigation.
The authors N. K. Datta and M. Ghosh thank DST-FIST (Government of India) and UGC-SAP (Government of India) for financial support.
1. P. M. Koenraad and M. E. Flatté, "Single dopants in semiconductors," Nature Materials, vol. 10, pp. 91–100, 2011.
2. H. J. Queisser and E. E. Haller, "Defects in semiconductors: some fatal, some vital," Science, vol. 281, no. 5379, pp. 945–950, 1998.
3. B. Çakir, Y. Yakar, A. Özmen, M. Ö. Sezer, and M. Şahin, "Linear and nonlinear optical absorption coefficients and binding energy of a spherical quantum dot," Superlattices and Microstructures, vol. 47, no. 4, pp. 556–566, 2010.
4. Y. Yakar, B. Çakir, and A. Özmen, "Calculation of linear and nonlinear optical absorption coefficients of a spherical quantum dot with parabolic potential," Optics Communications, vol. 283, no. 9, pp. 1795–1800, 2010.
5. J. L. Movilla and J. Planelles, "Off-centering of hydrogenic impurities in quantum dots," Physical Review B, vol. 71, no. 7, Article ID 075319, pp. 1–7, 2005.
6. M. J. Kelly, Low-Dimensional Semiconductors, Oxford University Press, Oxford, UK, 1995.
7. C. P. Poole Jr. and F. J. Owens, Introduction to Nanotechnology, Wiley, New York, NY, USA, 2003.
8. J. L. Brédas, C. Adant, P. Tackx, A. Persoons, and B. M. Pierce, "Third-order nonlinear optical response in organic materials: theoretical and experimental aspects," Chemical Reviews, vol. 94, no. 1, pp. 243–278, 1994.
9. P. N. Prasad and D. J. Williams, Introduction to Non-linear Optical Effects in Molecules and Polymers, Wiley, New York, NY, USA, 1991.
10. G. I. Stegemen, A. Miller, and J. E. Midwinter, Eds., Photonics in Switching, Volume 1: Background and Components, Academic Press, Boston, Mass, USA, 1993.
11. D. S. Chemla and J. Zyss, Eds., Non-linear Optical Properties of Molecules and Crystals, vol. 1, Academic Press, New York, NY, USA, 1983.
12. A. Leclercq, E. Zojer, S. H. Jang et al., "Quantum-chemical investigation of second-order nonlinear optical chromophores: comparison of strong nitrile-based acceptor end groups and role of auxiliary donors and acceptors," Journal of Chemical Physics, vol. 124, no. 4, Article ID 044510, pp. 1–7, 2006.
13. Z.-Y. Hu, A. Fort, M. Barzoukas, A. K. Y. Jen, S. Barlow, and S. R. Marder, "Trends in optical nonlinearity and thermal stability in electrooptic chromophores based upon the 3-(dicyanomethylene)-2,3-dihydrobenzothiophene-1,1-dioxide acceptor," Journal of Physical Chemistry B, vol. 108, no. 25, pp. 8626–8630, 2004.
14. G. Ramos-Ortiz, M. Cha, S. Thayumanavan, J. C. Mendez, S. R. Marder, and B. Kippelen, "Third-order optical autocorrelator for time-domain operation at telecommunication wavelengths," Applied Physics Letters, vol. 85, no. 2, pp. 179–181, 2004.
15. G. J. Ashwell, R. C. Hargreaves, C. E. Baldwin, G. S. Bahra, and C. R. Brown, "Improved second-harmonic generation from Langmuir-Blodgett films of hemicyanine dyes," Nature, vol. 357, no. 6377, pp. 393–395, 1992.
16. S. R. Marder, J. E. Sohn, and G. D. Stucky, Eds., Materials for Non-Linear Optics: Chemical Perspectives, ACS Symposium Series, vol. 455, American Chemical Society, Washington, DC, USA, 1991.
17. R. Sen, D. Majumdar, S. P. Bhattacharyya, and S. N. Bhattacharyya, "Modeling hyperpolarizabilities of some TICT molecules and their analogues," Journal of Physical Chemistry, vol. 97, no. 29, pp. 7491–7498, 1993.
18. A. D. Buckingham, E. P. Concannon, and I. D. Hands, "Hyperpolarizability of interacting atoms," Journal of Physical Chemistry, vol. 98, no. 41, pp. 10455–10459, 1994.
19. K. L. C. Hunt and A. D. Buckingham, "The polarizability of H2 in the triplet state," The Journal of Chemical Physics, vol. 72, no. 4, pp. 2832–2840, 1980.
20. F. Furtmayr, M. Vielemeyer, M. Stutzmann, A. Laufer, B. K. Meyer, and M. Eickhoff, "Optical properties of Si- and Mg-doped gallium nitride nanowires grown by plasma-assisted molecular beam epitaxy," Journal of Applied Physics, vol. 104, no. 7, Article ID 074309, 2008.
21. O. Weidemann, P. K. Kandaswamy, E. Monroy, G. Jegert, M. Stutzmann, and M. Eickhoff, "GaN quantum dots as optical transducers for chemical sensors," Applied Physics Letters, vol. 94, no. 11, Article ID 113108, 2009.
22. K. Sarkar, N. Kumar Datta, and M. Ghosh, "Frequency dependent linear and non-linear response properties of electron impurity doped quantum dots: influence of impurity location," Physica E, vol. 42, no. 5, pp. 1659–1666, 2010.
23. N. K. Datta and M. Ghosh, "Impurity strength and impurity domain modulated frequency-dependent linear and second non-linear response properties of doped quantum dots," Physica Status Solidi B, vol. 248, no. 8, pp. 1941–1948, 2011.
24. V. Halonen, P. Hyvönen, P. Pietiläinen, and T. Chakraborty, "Effects of scattering centers on the energy spectrum of a quantum dot," Physical Review B, vol. 53, no. 11, pp. 6971–6974, 1996.
25. V. Halonen, P. Pietiläinen, and T. Chakraborty, "Optical-absorption spectra of quantum dots and rings with a repulsive scattering centre," Europhysics Letters, vol. 33, no. 5, pp. 377–382, 1996.
26. R. Turton, The Quantum Dot: A Journey into Future Microelectronics, Oxford University Press, New York, NY, USA, 1995.
27. L. Jacak, P. Hawrylak, and A. Wojos, Quantum Dots, Springer-Verlag, Berlin, Germany, 1998.
28. M. A. Reed, Quantum Dots, Scientific American, Ocala, Fla, USA, 1993.
29. Y. Alhassid, "The statistical theory of quantum dots," Reviews of Modern Physics, vol. 72, no. 4, pp. 895–968, 2000.
30. J. Adamowski, A. Kwaśniowski, and B. Szafran, "LO-phonon-induced screening of electron-electron interaction in D-centres and quantum dots," Journal of Physics: Condensed Matter, vol. 17, no. 28, pp. 4489–4500, 2005.
31. S. Bednarek, B. Szafran, K. Lis, and J. Adamowski, "Modeling of electronic properties of electrostatic quantum dots," Physical Review B, vol. 68, no. 15, Article ID 155333, 9 pages, 2003.
32. P. D. Siverns, S. Malik, G. McPherson et al., "Scanning transmission-electron microscopy study of InAs/GaAs quantum dots," Physical Review B, vol. 58, no. 16, pp. R10127–R10130, 1998.
33. S. Bednarek, B. Szafran, and J. Adamowski, "Theoretical description of electronic properties of vertical gated quantum dots," Physical Review B, vol. 64, no. 19, Article ID 195303, 2001.
34. A. P. Alivisatos, "Perspectives on the physical chemistry of semiconductor nanocrystals," Journal of Physical Chemistry, vol. 100, no. 31, pp. 13226–13239, 1996.
35. A. A. Guzelian, U. Banin, A. V. Kadavanich, X. Peng, and A. P. Alivisatos, "Colloidal chemical synthesis and characterization of InAs nanocrystal quantum dots," Applied Physics Letters, vol. 69, no. 10, pp. 1432–1434, 1996.
36. I. S. Gradshteyn and I. M. Ryzhik, Tables of Integrals, Series, and Products, Corrected and Enlarged Edition, Academic Press, City, State, USA, 1980.
37. J. J. P. Stewart and K. M. Dieter, "Calculation of the nonlinear optical properties of molecules," Journal of Computational Chemistry, vol. 11, no. 1, pp. 82–87, 1990.
38. P. Dutta and S. P. Bhattacharyya, "On exact calculation of response properties of oscillators in static electric field: a Fourier grid Hamiltonian approach. I. One-dimensional systems," International Journal of Quantum Chemistry, vol. 51, no. 5, pp. 293–305, 1994.
39. G. Maroulis and A. J. Thakkar, "Multipole moments, polarizabilities, and hyperpolarizabilities for N2 from fourth-order many-body perturbation theory calculations," The Journal of Chemical Physics, vol. 88, no. 12, pp. 7623–7632, 1988.
40. G. Maroulis, "Hyperpolarizability of H2O," The Journal of Chemical Physics, vol. 94, no. 2, pp. 1182–1190, 1991.
41. G. Maroulis, "Hyperpolarizability of H2O," Journal of Chemical Physics, vol. 94, pp. 1182–1190, 1991.
42. T. Chakraborty, Quantum Dots: A Survey of the Properties of Artificial Atoms, Elsevier, Amsterdam, The Netherlands, 1999.
43. S. V. Nistor, M. Stefan, L. C. Nistor, E. Goovaerts, and G. Van Tendeloo, "Incorporation and localization of substitutional Mn2+ ions in cubic ZnS quantum dots," Physical Review B, vol. 81, no. 3, Article ID 035336, 2010.
44. P. A. Sundqvist, V. Narayan, S. Stafström, and M. Willander, "Self-consistent drift-diffusion model of nanoscale impurity profiles in semiconductor layers, quantum wires, and quantum dots," Physical Review B, vol. 67, no. 16, Article ID 165330, 2003.
Use the Rational Zeros Theorem to write a list of all potential rational zeros f(x) = x^3 - 10x^2 + 4x - 24
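For the question above, the candidate list can be generated mechanically: the Rational Zeros Theorem says every rational zero p/q (in lowest terms) has p dividing the constant term and q dividing the leading coefficient. A small illustrative sketch (the function name is made up for this example):

```python
from fractions import Fraction

def rational_zero_candidates(coeffs):
    """coeffs: polynomial coefficients, highest degree first.
    Returns the sorted candidates +/- p/q where p divides the constant
    term and q divides the leading coefficient (Rational Zeros Theorem)."""
    lead, const = coeffs[0], coeffs[-1]
    divisors = lambda n: [d for d in range(1, abs(n) + 1) if n % d == 0]
    candidates = {sign * Fraction(p, q)
                  for p in divisors(const)
                  for q in divisors(lead)
                  for sign in (1, -1)}
    return sorted(candidates)

# f(x) = x^3 - 10x^2 + 4x - 24: leading coefficient 1, constant -24,
# so the candidates are +/-1, +/-2, +/-3, +/-4, +/-6, +/-8, +/-12, +/-24.
print(rational_zero_candidates([1, -10, 4, -24]))
```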
The Lorentz force
The flow of an electric current down a conducting wire is ultimately due to the motion of electrically charged particles (in most cases, electrons) through the conducting medium. It seems reasonable,
therefore, that the force exerted on the wire when it is placed in a magnetic field is really the resultant of the forces exerted on these moving charges. Let us suppose that this is the case.
Let A be the cross-sectional area of the wire, and let n be the number density of mobile charges, each of charge q moving with velocity v, so that the vector current carried by the wire is
I = q n A v. According to Eq. (229), the force per unit length acting on the wire is F = I × B = q n A v × B. However, a unit length of the wire contains n A moving charges, so the magnetic
force acting on an individual charge is f = q v × B. We can combine this with Eq. (169) to give the total force acting on a charge q moving with velocity v in both an electric field E and a
magnetic field B: f = q (E + v × B).
This is called the Lorentz force law, after the Dutch physicist Hendrik Antoon Lorentz who first formulated it. The electric force on a charged particle is parallel to the local electric field. The
magnetic force, however, is perpendicular to both the local magnetic field and the particle's direction of motion. No magnetic force is exerted on a stationary charged particle.
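As a quick numerical illustration of the two properties just stated (the electric force is parallel to E; the magnetic force is perpendicular to both B and the velocity, and so does no work), here is a small sketch of f = q (E + v × B); the charge, field, and velocity values are arbitrary:

```python
def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

def lorentz_force(q, E, v, B):
    """f = q(E + v x B): the Lorentz force law."""
    vxB = cross(v, B)
    return tuple(q * (e + c) for e, c in zip(E, vxB))

# Arbitrary illustrative values.
q, E, B, v = 1.0, (0.0, 0.0, 2.0), (0.0, 0.0, 1.0), (3.0, 0.0, 0.0)
f = lorentz_force(q, E, v, B)
print(f)  # -> (0.0, -3.0, 2.0)

# The magnetic part alone is always perpendicular to v, so f_mag . v = 0:
f_mag = lorentz_force(q, (0.0, 0.0, 0.0), v, B)
print(dot(f_mag, v))  # -> 0.0
```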
The equation of motion of a free particle of charge q and mass m moving in electric and magnetic fields is therefore m dv/dt = q (E + v × B),
according to the Lorentz force law. This equation of motion was first verified in a famous experiment carried out by the Cambridge physicist J. J. Thomson in 1897. Thomson was investigating cathode
rays, a then mysterious form of radiation emitted by a heated metal element held at a large negative voltage (i.e., a cathode) with respect to another metal element (i.e., an anode) in an evacuated
tube. German physicists held that cathode rays were a form of electromagnetic radiation, whilst British and French physicists suspected that they were, in reality, a stream of charged particles.
Thomson was able to demonstrate that the latter view was correct. In Thomson's experiment, the cathode rays passed through a region of ``crossed'' electric and magnetic fields (still in vacuum). The
fields were perpendicular to the original trajectory of the rays, and were also mutually perpendicular.
Let us analyze Thomson's experiment. Suppose that the rays are originally traveling in the x-direction, and are subject to a uniform electric field E in the z-direction over a length d of
their path. The resulting deflection is δ = (1/2)(q/m)(E/v²) d², where the ``time of flight'' across the field region is t = d/v. It follows from Eq. (236) that with a properly adjusted
magnetic field strength the electric and magnetic forces cancel, so the rays are undeflected when v = E/B. Thus, Eqs. (237) and (238) can be combined and rearranged to give the charge to mass
ratio of the particles in terms of measured quantities: q/m = 2 δ E/(d² B²).
Using this method, Thomson inferred that cathode rays were made up of negatively charged particles (the sign of the charge is obvious from the direction of the deflection in the electric field) with
a charge to mass ratio of magnitude about 1.7 × 10^11 C/kg.
Consider, now, a particle of mass m and charge q moving in a uniform magnetic field B = B ẑ. According to Eq. (235), the particle's equation of motion can be written m dv/dt = q v × B. This
reduces to dv_x/dt = Ω v_y, dv_y/dt = −Ω v_x, dv_z/dt = 0.
Here, Ω = qB/m is called the cyclotron frequency. The above equations can be solved to give a velocity vector whose component along the field is constant and whose perpendicular component
rotates in the x-y plane at angular frequency Ω.
According to these equations, the particle trajectory is a spiral whose axis is parallel to the magnetic field. The radius of the spiral is ρ = v_⊥/Ω, where v_⊥ is the component of the
particle's velocity perpendicular to the field.
Finally, if a particle is subject to a force f and moves a distance δr in a time interval δt, then the work done on the particle by the force is δW = f · δr.
The power input to the particle from the force field is P = f · v,
where v is the particle's velocity. It follows from the Lorentz force law, Eq. (234), that the power input to a particle moving in electric and magnetic fields is P = q v · E.
Note that a charged particle can gain (or lose) energy from an electric field, but not from a magnetic field. This is because the magnetic force is always perpendicular to the particle's direction of
motion, and, therefore, does no work on the particle [see Eq. (250)]. Thus, in particle accelerators, magnetic fields are often used to guide particle motion (e.g., in a circle) but the actual
acceleration is performed by electric fields.
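The quantities discussed above can be illustrated numerically. The formulas used here are the standard ones (cyclotron frequency Ω = qB/m, Larmor radius ρ = v_⊥/Ω, crossed-field velocity filter v = E/B); the numerical values are arbitrary, electron-like illustrations:

```python
def cyclotron_frequency(q, m, B):
    """Omega = qB/m: angular frequency of gyration about the field."""
    return q * B / m

def larmor_radius(m, v_perp, q, B):
    """rho = v_perp / Omega = m v_perp / (qB): radius of the spiral."""
    return m * v_perp / (q * B)

def filter_velocity(E, B):
    """Crossed-field (Thomson) velocity filter: undeflected when qE = qvB,
    i.e. v = E/B, independent of the particle's charge and mass."""
    return E / B

# Electron-like illustrative values (SI units assumed).
q, m, B = 1.6e-19, 9.1e-31, 1.0e-3
print(cyclotron_frequency(q, m, B))   # angular frequency of gyration
print(larmor_radius(m, 1.0e6, q, B))  # spiral radius for v_perp = 1e6 m/s
print(filter_velocity(2.0e3, 1.0e-3)) # selected speed for these fields
```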
Richard Fitzpatrick 2006-02-02
Morton Grove Statistics Tutor
...My tutoring and teaching goal is not only to ensure students understand how to solve the problems at hand but understand why. I strive to make the material fun and enjoyable, relating to the
student where they are currently. I prefer to get to know my students to understand what they like and do outside of studying and school.
16 Subjects: including statistics, physics, calculus, geometry
...I am a CFA Charterholder. I passed every exam on the first attempt. MY WORK EXPERIENCE BACKGROUND: I have eight years' working experience at multiple investment partnerships (hedge funds and
venture capital fund). I was a generalist/special situations analyst and venture capitalist primarily focused on the technology sector.
6 Subjects: including statistics, accounting, finance, business
I will be teaching honors physics and chemistry this year. This summer, I worked for ComEd's "smart grid" education program. I also spent a year doing ACT tutoring at Huntington Learning Center. I
am available for tutoring chemistry, physics, earth science, math, and ACT on the weekends.
12 Subjects: including statistics, chemistry, physics, algebra 1
...I am an actuary and one of the topics covered in our syllabus was Finite Difference, which is basically the same as Discrete Math. Discrete math can cover many different topics and is a fairly
advanced topic. I have assisted students at a high school on Discrete Math as it is taught there.
26 Subjects: including statistics, calculus, physics, GRE
I have a PhD in microbial genetics and have worked in academic research as a university professor and for commercial companies in the biotechnology manufacturing sector. I have a broad background
in science and math, a love of written and oral communication and a strong desire to share the knowledg...
35 Subjects: including statistics, English, chemistry, reading
bang -- In left inlet: Evaluates the expression using the values currently stored.
int [int] -- The number received in each inlet will be stored in place of the $i or $f argument associated with it. (Example: The number in the second inlet from the left will be stored in place of the $i2 and $f2 arguments, wherever they appear.)
input (inlet1) [int] -- Performs the same function as the int message. See above.
input (inlet2) [int] -- Performs the same function as the int message. See above.
input (inlet3) [int] -- Performs the same function as the int message. See above.
input (inlet4) [int] -- Performs the same function as the int message. See above.
input (inlet5) [int] -- Performs the same function as the int message. See above.
input (inlet6) [int] -- Performs the same function as the int message. See above.
input (inlet7) [int] -- Performs the same function as the int message. See above.
input (inlet8) [int] -- Performs the same function as the int message. See above.
input (inlet9) [int] -- Performs the same function as the int message. See above.
float [float] -- The number in each inlet will be stored in place of the $f or $i argument associated with it. The number will be truncated by a $i argument.
input (inlet1) [float] -- Performs the same function as the float message. See above.
input (inlet2) [float] -- Performs the same function as the float message. See above.
input (inlet3) [float] -- Performs the same function as the float message. See above.
input (inlet4) [float] -- Performs the same function as the float message. See above.
input (inlet5) [float] -- Performs the same function as the float message. See above.
input (inlet6) [float] -- Performs the same function as the float message. See above.
input (inlet7) [float] -- Performs the same function as the float message. See above.
input (inlet8) [float] -- Performs the same function as the float message. See above.
input (inlet9) [float] -- Performs the same function as the float message. See above.
set [list] -- In left inlet: The word set, followed by one or more numbers, treats those numbers as if each had come in a different inlet, replacing the stored value with the new value, but the expression is not evaluated and nothing is sent out the outlet. If there are fewer numbers in the message than there are inlets, the stored value in each remaining inlet stays the same.
sm1 table-name -- Performs the same function as the symbol message. See above.
sm2 table-name -- Performs the same function as the symbol message. See above.
sm3 table-name -- Performs the same function as the symbol message. See above.
sm4 table-name -- Performs the same function as the symbol message. See above.
sm5 table-name -- Performs the same function as the symbol message. See above.
sm6 table-name -- Performs the same function as the symbol message. See above.
sm7 table-name -- Performs the same function as the symbol message. See above.
sm8 table-name -- Performs the same function as the symbol message. See above.
sm9 table-name -- Performs the same function as the symbol message. See above.
symbol table-name -- The word symbol, followed by the name of a table, will be stored in place of the $s argument associated with that inlet, for accessing values stored in the table.
list [list] -- In left inlet: The items of the list are treated as if each had come in a different inlet, and the expression is evaluated. If the list contains fewer items than there are inlets, the most recently received value in each remaining inlet is used.
Any of the above messages in the left inlet will evaluate the expression and send out the result. If a value has never been received for a changeable argument, that value is considered 0 when the expression is evaluated. The number of inlets is determined by how many changeable arguments are typed in. The maximum number of inlets is 9.
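The stored-value semantics described above (left inlet triggers evaluation; other inlets and "set" only store; unreceived values count as 0) can be summarized with a small emulation. This is an illustrative Python sketch of the documented behavior, not Max code, and the class name is invented:

```python
class ExprEmulator:
    """Toy model of Max's expr object: one stored value per inlet;
    input in the left inlet (or bang) evaluates; 'set' only stores."""

    def __init__(self, expression, n_inlets):
        self.expression = expression   # callable taking n_inlets values
        self.values = [0] * n_inlets   # values never received count as 0

    def receive(self, inlet, value):
        """Store a value; only the left inlet (0) triggers evaluation."""
        self.values[inlet] = value
        if inlet == 0:
            return self.bang()

    def set(self, *numbers):
        """Store without evaluating; unmentioned inlets keep their values."""
        for i, n in enumerate(numbers):
            self.values[i] = n

    def bang(self):
        """Evaluate the expression with the currently stored values."""
        return self.expression(*self.values)

e = ExprEmulator(lambda a, b: a + 2 * b, 2)
e.set(0, 5)             # stores, outputs nothing
print(e.receive(0, 1))  # left-inlet input evaluates: 1 + 2*5 = 11
```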
FOM: ReplyToBarwise
Jon Barwise barwise at phil.indiana.edu
Mon Nov 10 17:04:15 EST 1997
Harvey writes:
>Jon, I think many of us will want to hear more details and examples about
>the distinction you draw between "interpreted" versus "uninterpreted"
>languages, in order to judge how heretical your views really are.
>Thank you for your contribution to fom. Hope you find the time to give us
>Your conservative friend, who defends the status quo --- HMF.
The basic point is that, contrary to Russell, I think mathematical
statements wear their content pretty much on their sleeve. If they appear
to be talking about natural numbers, curves in R^2, or finite sets, say,
then their content is a claim about natural numbers, curves in R^2, or
finite sets, respectively. More generally, I think there are various
mathematical PROPERTIES and RELATIONS and the things that have these
properties and stand in these relations. The language of mathematics has
predicates to express things about these properties, relations, and the
things that have these properties and stand in these relations.
I don't think this is a heretical idea. Godel certainly had it, though he
preferred to talk of CONCEPTS rather than PROPERTIES and RELATIONS. He
spoke for example of the mathematical concepts of SET and MEMBERSHIP, which
I would call a property and a relation. But let's use Godel's term. Godel
points out that these concepts are not sets but are mathematical objects
about which we speak using the language of mathematics.
Now if mathematicians speak of these concepts, properties, relations,
etc., in the language of mathematics, they must be using a language where
the predicates are interpreted, that is, they have an intended significance
(I purposefully use the nontechnical term "significance" here to avoid any
of the technical terms like interpretation, reference, extension, etc.).
The words "real," "set." "member" etc. must signify the concepts of real,
set, and membership.
I also don't think this idea is at all deep. It is just common sense. But
the closest we can come to an interpreted language in first-order logic is
to speak of a language paired with an intended model. But that model is
still extensional, there are no properties (or concepts) in the picture.
And I think that keeping this idea in mind can prevent one from making
various errors, like confusing a mathematical concept with a particular
set-theoretic model of the extension of that concept.
For those who are not already bored, let me approach this in a different
manner. Here are two extreme views:
A) Mathematics takes place in an extended version of the mathematician's
native language, one where words and expressions mean whatever they mean.
The rules of inference that are valid are a consequence of the meanings of
sentences in the language.
C) Mathematics takes place in a totally formal language by using certain
purely syntactic rules of inference.
In between is a different idea which I think we often teach our students:
B) Mathematics takes place in a formal language where certain items have
fixed meanings and the rest are uninterpreted. Usually we would place the
boolean connectives and quantifiers among the meaningful and the rest among
the meaningless.
To me, it seems pretty clear that (A) fits the facts better than either (B)
or (C). I look at the formal languages we have developed as mathematical
models of an idealization of (A), one from which we can learn a lot about
mathematical activity. But they are only that: models of what is going on
in mathematics, not mathematics itself. They should be judged the way we
judge models, by how well they fit the data, make predictions, square with
phenomena, etc.
I suggest that the relationship between standard partially interpreted
first-order languages and the actual language of mathematics is analogous
to the relationship between the (mathematical) notion of Turing computable
and the (informal) notion of effectively computable. In each case we have
a modeling relationship. It is just that I don't see any reason to believe
the analogue of Turing's Thesis in this context.
Where does pi come from?
Date: 13 Jan 1995 13:17:34 -0500
From: Kavita George
Subject: (none)
I was wondering how exactly the math notation pi was
derived and whoever derived it, why did he make up that symbol for pi?
Please answer this small question of mine by Jan 17. and mail it to
Ms. Kavita George
Monta Vista High School - kgeorge@walrus.mvhs.edu
Date: 14 Jan 1995 00:37:37 -0500
From: Dr. Ken
Subject: Re: your mail
Hello there!
The the Greek letter {Pi} (when I write that, I mean the symbol that you're
used to seeing for Pi) was first used by a man named William Jones in 1706.
He lived in England at the same time as Isaac Newton (who also lived in
England), and he used the symbol {Pi} in a book called "A New Introduction
to the Mathematics." He said that the work was intended "for the Use of some
Friends who have neither Leisure, Convenience, nor, perhaps, Patience, to
search into so many different Authors, and turn over so many tedious
Volumes, as is unavoidably required to make but tolerable progress in the
The Greek letter {Pi} is pronounced like a p in English. So it is widely
believed, both in light of that fact, and in the way that Jones used it in
his work, that {Pi} was used as an abbreviation for the English word
"periphery." He used the word periphery the same way we now use
circumference and perimeter for the distance around a circle and a polygon.
However, Williams Jones was no heavyweight in the math world, and nobody
really paid much attention to his use of the letter {Pi}. It wasn't until
1737 that Leonhard Euler (a big math heavyweight - one of the most
revolutionary mathematicians who ever lived) used it, and then it gained
pretty wide acceptance.
I got most of my information from the book "A History of Pi" by Petr
Beckmann, and if you're interested in learning more about Pi, I recommend
you check it out. Thanks for asking!
-Ken "Dr." Math
Date: 14 Jan 1995 00:42:05 -0500
From: Dr. Ken
Subject: Re: where did Pi come from?
Hello there!
Here's some more information on Pi that we (actually, that Ausgezeichnet
math doctor Sydney wrote it by herself) gave as an answer to a previous
question. Enjoy!
Date: Mon, 5 Dec 1994 15:09:11 -0500 (EST)
From: Dr. Sydney
Subject: Re: question
Pi is defined to be the ratio of the circumference of a circle to the
diameter of a circle. Say you have a circle of radius 1. Then the
circumference of the circle is 2Pi(1) and the diameter is 2(1), so the
ratio of the circumference to the diameter is Pi. Anyway, Pi is an
infinite decimal that is approximately equal to 3.14.
People have worked on approximating Pi for thousands of years.
For instance, Archimedes approximated Pi by inscribing polygons in
the circle and taking the ratio of the perimeter of the polygon to the
diameter of the circle. The more
sides on the polygon, the more accurate the approximation. So, if you
inscribe a dodecagon (12-sided polygon) and compute the ratio of its
perimeter to the diameter, you will get a better approximation than if
This makes sense if you draw it out. The polygons with the greater
number of sides more closely resemble circles. It is important to
remember Pi does not equal 3.14; instead, 3.14 is an approximation for Pi.
Really Pi = 3.141592653... (it is an infinite decimal).
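Archimedes' polygon idea is easy to try numerically. A regular n-gon inscribed in a circle of radius 1 has perimeter 2n sin(pi/n), and dividing by the diameter (2) gives an approximation that approaches Pi. (This sketch uses the library value of pi just to place the polygon's vertices, so it only illustrates the convergence; it is not an independent derivation.)

```python
import math

def inscribed_polygon_ratio(n):
    """Perimeter of a regular n-gon inscribed in a unit circle, divided by
    the circle's diameter; approaches pi as n grows."""
    side = 2 * math.sin(math.pi / n)  # chord subtending angle 2*pi/n
    return n * side / 2

for n in (6, 12, 24, 96):
    print(n, inscribed_polygon_ratio(n))
# The hexagon gives exactly 3; n = 96 (used by Archimedes himself)
# already gives 3.1410..., and the values increase toward pi.
```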
Mathematicians began notating this ratio with the Greek letter Pi around
1706. Perhaps the letter Pi was chosen to represent "periphery" (Pi is the
ratio of the circumference (periphery) to the diameter).
I hope this helps. If not or if you have any other questions, please feel
free to write back. Thanks!
--Sydney, Dr. "Math Rocks"
-Ken "Dr." Math | {"url":"http://mathforum.org/library/drmath/view/52483.html","timestamp":"2014-04-20T00:54:47Z","content_type":null,"content_length":"8767","record_id":"<urn:uuid:d6539223-2170-4dd4-a7bb-87b9e805555f>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00277-ip-10-147-4-33.ec2.internal.warc.gz"} |
PLZ CHECK MY ANSWERS:
1. Classify the following polynomial by degree and number of terms: 2x^4 - 4x^2 + x. My Answer: 4th degree polynomial
2. What is the maximum times a third degree function's graph could cross the x-axis? My Answer: 3 times.
3. If a function is even and contains the point (2,21), what other point must it contain? My Answer: (-2,21)
4. What is the range of the polynomial function f(x) = x^3 - 6x + 5? My Answer: All real #s
5. What is the y-intercept of the function g(x) = -(x-2)(x-1)^4(x-3)^2? My Answer: 18
6. f(x) = -2x^7 + 7x^3 - 7 (even or odd?) My Answer: neither
all are correct
having doubts in any ?
1) 4th degree and a trinomial (3 terms) 2) your ans is correct
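Several of the answers above can be spot-checked numerically; a quick sketch in plain Python:

```python
def g(x):
    return -(x - 2) * (x - 1)**4 * (x - 3)**2

def f(x):
    return -2 * x**7 + 7 * x**3 - 7

# 5. The y-intercept of g is g(0) = -(-2)(1)(9) = 18.
print(g(0))  # 18

# 6. f is neither even nor odd: f(-x) matches neither f(x) nor -f(x).
print(f(-2) == f(2), f(-2) == -f(2))  # False False

# 3. An even function h satisfies h(-x) = h(x), so (2, 21) forces (-2, 21).
h = lambda x: x**4 + 5  # any even function, chosen just for illustration
print(h(2) == h(-2))  # True
```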
Limits and Graphs
4965 views, 9 ratings - 00:10:47
Copyright 2005, Department of Mathematics, University of Houston. Created by Selwyn Hollis. Find more information on videos, resources, and lessons at http://online.math.uh.edu/HoustonACT/
The concept of limit from an intuitive, graphical point of view. Left and right-sided limits. Infinite one-sided limits and vertical asymptotes.
• What is a limit?
• What is a hole?
• What does it mean for a graph to be continuous?
• What happens if f(1) does not equal the limit as f(x) approaches 1?
• When does a limit not exist?
• What is a one-sided limit?
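The one-sided behavior in the list above can also be probed numerically. A small sketch (f(x) = 1/x is my own example, not necessarily one from the video):

```python
# Approach x = 0 from the right and from the left: the two one-sided
# limits of f(x) = 1/x diverge in opposite directions, so the two-sided
# limit does not exist and x = 0 is a vertical asymptote.
f = lambda x: 1.0 / x
for h in (0.1, 0.01, 0.001):
    print(f(h), f(-h))  # grows without bound on the right, to -inf on the left
```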
This video explains limits and continuity completely and very well. If you are having any trouble understanding limits, be sure to check out this video. A truly great tutorial, in which several
examples are explained and every step is detailed. The resources are great companions to this video.
This is an excellent video. If you need help with limits, check this out.
Perfect choice of problems and additional resources. Helps a lot in my classroom, much better than just work from overhead projector. 'help on this topic' part is the best!!!
Dare to research!
From Motion Mountain Wiki
Theory and experiment
The value of a theory is decided by its correspondence with experiment. So far, no experiment has found a deviation from the standard model of particle physics. This is precisely what is predicted by the strand model, the approach presented in volume VI of the Motion Mountain Physics Text. All other approaches to the final theory predict deviations, as do many researchers in particle physics.
Stay tuned.
What researchers can learn from entrepreneurs
Businesses have success only if they value their customers. In other words, business must value reality. Entrepreneurs who follow their beliefs usually lead their companies into bankruptcy.
Entrepreneurs who follow reality lead their companies to success.
Not only teachers, also researchers can learn from business people. If you falsely believe that truth is defined by philosophers, or by ideologies, or by your wishes, take a break and stop. Truth is
correspondence with facts. You can learn more about truth from a good entrepreneur than from a bad scientist. Some telling examples follow.
On microscopic models of gravity
Electromagnetic fields obey indeterminacy relations - they are fuzzy. Fields are fuzzy in the same way that the positions of quantum particles are fuzzy. The fuzziness of electromagnetic fields
proves that electromagnetic fields are built of many microscopic degrees of freedom. Quantum theory thus implies that electrostatic fields result from a large number of elementary excitations, which
are called photons. Electrostatic fields are due to the exchange of virtual photons. As a result, the electromagnetic field has entropy. Indeed, quantum physicists, in particular experts on quantum
optics, have known for almost a century that electromagnetic fields have entropy.
Gravitational fields obey indeterminacy relations - they are fuzzy. Fields are fuzzy in the same way that the positions of quantum particles are fuzzy. The fuzziness of gravitational fields proves
that gravitational fields are built of many microscopic degrees of freedom. Quantum theory implies that gravitational fields result from a large number of elementary excitations, called gravitons.
Static gravitational fields are due to the exchange of virtual gravitons.
In other words, space and gravity are made of virtual gravitons buzzing around. And as such, like any system that is made of many components buzzing around, space and gravity have entropy. If you
falsely believe that gravity has no entropy, explore the issue and convince yourself - especially if you give lectures.
On the number of dimensions of space
The dimensionality of space is a measured quantity: it is found to be 3 in all experiments ever performed. What is the dimensionality at very small scales? Well, we know that there is a minimal measurable length in nature, the Planck length. At that scale, at the latest, there is no way to measure dimensionality. In other words, a shortest measurable length implies that dimensionality is not defined at the Planck scale.
If you falsely believe that space has 4, 9, 10 or even more dimensions at Planck scale, take a break and convince yourself that such a statement contradicts every possible experimental check.
On the limitations of the standard model of particle physics
The standard model does not explain many of its assumptions, including the gauge groups, the couplings and the particle masses. The standard model is incomplete. This point is undisputed and correct.
On top of that, one finds hundreds of papers claiming that the standard model is also wrong or self-contradictory. Look at these arguments in detail. Even though these arguments have been repeated
for over 30 years by thousands of people, every single one is unconvincing. In fact, every one is wrong. This might be the biggest lie of modern theoretical particle physics.
So, if you believe any argument that claims that the standard model is wrong (in contrast to the various correct arguments which claim that it is incomplete), then you are a victim of indoctrination and prejudice. And indoctrination prevents one from reaching the final theory.
On the Higgs boson
Many mechanisms can lead to symmetry breaking and to unitarity conservation at TeV energy. The existence of a Higgs boson is only one of various possibilities. But it has been repeated so often that
it is rarely questioned, even though not a single effect that can be unequivocally attributed to the Higgs boson has ever been observed.
The prediction of the strand model (and of several other models) is an unpopular one: the Higgs boson does not exist. So far, all experiments, including the latest Tevatron and LHC data, confirm this prediction.
On supersymmetry
A well-known researcher claims that supersymmetry is "predicted by experiment". Another, wiser researcher sighed: "Supersymmetry is the only game in town." One Nobel Prize winner repeats in every
interview that supersymmetry will be found soon, probably at the LHC. Another Nobel Prize winner consistently repeats that supersymmetry is a "figment of human imagination." Who is right?
Supersymmetry relates different particle statistics: fermions and bosons. At the Planck scale, due to the measurement uncertainties induced by quantum gravity effects, particle statistics is not
measurable; in short, fermions and bosons are undefined at the Planck scale. As a consequence, supersymmetry is not valid at the Planck scale.
Supersymmetry is a point symmetry. At the Planck scale, due to the measurement uncertainties induced by quantum gravity effects, points do not exist. Again, as a consequence, supersymmetry and
fermionic coordinates do not exist at the Planck scale.
If you falsely believe that supersymmetry and fermionic coordinates exist, take a break and convince yourself that such a statement contradicts every possible experimental check.
On being daring
Almost all researchers are state employees, or in similar contractual situations. As a result, they are discouraged from taking risks or being daring. The same is true for reviewers. How can reviewers who are encouraged to play it safe throughout their careers promote daring research?
However, finding the final theory requires taking risks and being daring. Let us see where this contradiction leads.
On being daring - II
"Deru kui wa utareru" - the stake that sticks out will be hammered - is a Japanese saying about what happens when someone sticks his neck out. Lots of people think that they are entitled to hammer.
Such impolite people are driven by a mixture of misguided ideology and attraction to violence. Every entrepreneur knows such stories.
Every entrepreneur knows that one condition for innovation is a climate without fear. The discussion of the merits and demerits of string theory has shown that such a climate does not exist in many
research institutes. As a result of this situation, searching for the final theory is avoided by many. Don't do the same!
Cultivate your curiosity and courage - they make you human.
On the rarity of courage
Bibliographic research, using the "web of science" or "google scholar", shows something astonishing. There are only a handful of papers - besides the superstring conjecture - that claim to propose a
"final theory" or a "theory of everything". And this during the last one hundred years! This shows how touchy the issue has become. There is a definite lack of courage in present researchers.
On the lack of courage of committees
There is an organization that only supports research towards the final theory. It has funded over a hundred research projects. How many of the projects it has funded are proposals for a final theory?
You will not believe it: just one. Over 99% of the money is wasted.
If you ever want to support the search for a final theory, think about what you are doing.
On the lack of courage and vision of committees - II
There are many cash prizes offered for the solution of various outstanding famous physics or math problems. Did you know that there is not a single cash prize in the whole world for finding the final theory?
Do a Google search to convince yourself of how much committees shy away from this topic.
On saying what nobody says - on the limitation of symmetries
The search for a final theory of physics is often said to follow from the search for the final symmetry of nature. In fact, past research makes the opposite point. All symmetries known in physics
fail to fix the coupling strengths and the particle masses. But explaining the coupling strengths, such as the famous fine structure constant 1/137.0359, and explaining the particle masses are the main open points in physics!
Knowing that a body has spherical symmetry does not determine its radius. In other words, anybody who looks for larger symmetries is blocking himself from understanding the fine structure constant
and all the other open points in fundamental physics.
On saying what nobody says - on the lack of larger symmetries
The search for a final theory of physics is often said to follow from the search for the final, all-encompassing symmetry of nature. In fact, there is not the slightest evidence that any unknown
symmetry exists. No experiment ever has provided an argument that symmetries larger than the known ones exist.
In other words, anybody who looks for larger symmetries is putting aside the connection to experiment.
On thinking what nobody thinks - on the requirements for a final theory
The search for a final theory of physics is almost a hundred years old. Despite the effort, there does not seem to be, anywhere in the research literature, a list of requirements that the final
theory has to fulfil. The lack of such a canonical list, and even the lack of proposed lists, is a sign of how much researchers forbid themselves from thinking clearly.
Research articles and even physics textbooks are full of another list: the list of issues that are unexplained by both quantum field theory and general relativity. But a list of requirements for the
final theory is found nowhere! This lack is a clear sign that many physics researchers are in a mental blockade. (Every researcher can test himself on this point.) The lack of a generally discussed requirement list is a bizarre lacuna of modern theoretical physics. The sixth volume of the Motion Mountain text proposes such a requirement list in chapter 7; see also the html version here.
If you are a researcher in fundamental physics and have never put together a list of requirements that the final theory has to fulfil, your research has most probably been driven by personal
preferences or prejudices, and not by the desire to really find out.
On thinking what nobody thinks - on the final theory
The first half of the sixth volume deduces the requirements for a final theory. They all appear when quantum physics and general relativity are combined. No requirement follows from one theory alone.
In fact, as a result of unification, each requirement for the final theory contradicts both quantum physics and general relativity!
In other words, researchers searching for a final theory are in a tough situation. It is hard to break loose, and if they do, they are treated with scorn by their peers. The easy way out is to search
for unification by remaining in your own research field (either particle physics or general relativity). This approach ensures that at least half the researchers are not against you. But the easy
approach is also the wrong one. The correct approach is not the easy one: the correct approach requires contradicting all researchers.
In other words, anybody who searches for unification but at the same time wants to appease some present group of researchers is doomed.
On simple mathematics and the final theory
Since the final theory is not based on points and manifolds, the evolution of observables is not described by differential equations.
This implies, among other things, that the final theory is not described by complicated mathematics. This conclusion is one of the hardest to swallow for most modern physicists. Physicists are used to thinking that progress in physics has always been tied to progress in mathematics. This is an old prejudice, but it is wrong. Progress has never been tied to math in this way.
In fact, the idea that the final theory is simple, i.e., algebraic, is at least 50 years old.
In other words, if you think that the final theory requires the most complex mathematical concepts available, reconsider the reasons for your prejudice.
On being trapped by one's own prejudices
A well-known researcher on the final theory stresses in every talk that the final theory must get rid of the concepts of point and manifold. But his own proposal is based on these two concepts!
Update: there are at least three internationally known researchers doing this.
If you are working on the theory of everything, be aware of such traps.
An Error Correcting Code From The Book
August 8, 2011
But can we find the pages it is on?
Atri Rudra is a coding theorist who helped organize the recent workshop in Ann Arbor on Coding, Complexity, and Sparsity. He has strong coding theory ancestors:
• Venkatesan Guruswami
• Madhu Sudan
• ${\vdots}$
• Leonhard Euler
• Jacob Bernoulli
Jacob was a member of the great Bernoulli family. They together did remarkable work in many aspects of mathematics, but Jacob is the one who invented the Bernoulli distribution that is used
throughout coding theory. Recall this is the distribution that takes on ${0}$ with probability ${p}$ and ${1}$ with probability ${q=1-p}$, in each of multiple independent trials. It is such a simple
idea, but someone had to invent it, and Jacob did.
Atri himself, with his advisor Venkat Guruswami, began making some breakthrough contributions to coding theory several years ago, including this STOC 2006 paper, which was also referenced in a
popular article by Bernard Chazelle for the magazine Science. Yet the following simple problem remains. Atri told me about it a few months ago, and it seems too basic to be open. But it is. The
problem concerns possibly the simplest good linear code.
Good Codes
Jørn Justesen in his seminal 1972 paper “A class of constructive asymptotically good algebraic codes” gave the first example of strongly explicit binary codes with constant rate and linear
distance. This had been a Holy Grail for years in coding theory, open since its beginning. Before 1972 it was known via randomized constructions that such codes existed, but there were no explicit
constructions. His codes, now known as Justesen Codes, were constructive, and so solved the open problem, but they were not very simple.
A linear code is a ${k}$-dimensional vector subspace given by a ${k \times n}$ generator matrix. Its rate is ${k/n}$, and its minimum distance ${d}$ equals the minimum number of non-zero entries in a
non-zero codeword, which is called the codeword’s weight. A good code has ${k}$ and ${d}$ both ${\Theta(n)}$. This means the code can detect many errors without excessively extending the length-${k}$
plaintext words that are given to it.
There is a simple linear code that is attributed to John Wozencraft by Justesen, and this code is known to be good. The problem is that the code, which we will denote by ${W(\alpha)}$, depends on a
parameter ${\alpha}$. If this parameter is selected randomly, then the code is terrific: its rate in fact lies on the famous Gilbert-Varshamov Bound. This bound is due to Edgar Gilbert and Rom
Varshamov who independently discovered it—again that phenomenon of dual discovery.
The main open question with this code is how to de-randomize it: Is there a way to explicitly construct for each length ${n}$ an ${\alpha}$ that works? This has been open for going on fifty years.
The ${W(\alpha)}$ Codes
The code ${W(\alpha)}$ operates over the finite field ${\mathbb{F}_{2^{n}}}$. The code consists of the words of the form:
$\displaystyle (x,\alpha \cdot x),$
where ${x}$ is viewed as being in the finite field and the product is the finite field multiplication. Clearly this is a simple code: to encode ${x}$ one only needs to do one field operation. Pretty simple.
I will not give the full proof of why this code has such a good distance for ${\alpha}$ selected randomly, but will give the intuition behind it. The question reduces to what is the least weight of a
non-zero code word.
Let ${(x,\alpha \cdot x)}$ be a non-zero code word with the lowest possible weight. Clearly, ${x}$ must be non-zero and of low weight. If its weight is large, then it does not matter what the rest of
the code word is. Therefore we can assume that ${x}$ has low weight. But ${\alpha}$ was selected randomly, and since ${x}$ is non-zero, the word ${y = \alpha \cdot x}$ is a random code word. Thus we
need only note that such a word is exponentially unlikely to be of low weight. A union-bound argument then applies because there are many fewer low-weight words ${x}$ than possible ${\alpha}$.
The full proof is mainly a careful analysis of the probability of a word having a low weight. It and more including links to Atri’s own notes can be found here*—the “*” is a footnote. A pretty neat
result, a pretty cool code, and a very hard open problem. How can we select ${\alpha}$ deterministically? The proof shows that almost all ${\alpha}$ work, but finding one explicitly seems to be
beyond our current abilities.
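For intuition, the ensemble can be enumerated exhaustively when ${n}$ is tiny. The sketch below is my own illustration, not from the post: the modulus $x^4 + x + 1$ for $\mathbb{F}_{2^4}$ is an arbitrary choice of irreducible polynomial, and the code brute-forces the minimum distance of ${W(\alpha)}$ for every $\alpha$:

```python
# Exhaustive toy version of the Wozencraft ensemble W(alpha) over GF(2^4).
# The irreducible modulus x^4 + x + 1 is an assumption for illustration;
# the post works over GF(2^n) abstractly.
N = 4
MOD = 0b10011  # x^4 + x + 1

def gf_mul(a, b):
    """Carry-less multiplication in GF(2^N), reduced modulo MOD."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << N):
            a ^= MOD
    return r

def min_weight(alpha):
    """Minimum weight of a nonzero codeword (x, alpha * x)."""
    return min(bin(x).count("1") + bin(gf_mul(alpha, x)).count("1")
               for x in range(1, 1 << N))

weights = {a: min_weight(a) for a in range(1, 1 << N)}
best = max(weights.values())
good = sorted(a for a, w in weights.items() if w == best)
print("best minimum distance:", best, "achieved by", len(good), "of 15 alphas")
```

At this toy scale one can see directly that some $\alpha$ are bad (for instance $\alpha = 1$ always gives distance 2, via the codeword $(1, \alpha)$), while the maximum over $\alpha$ shows how much better a good draw can be — the qualitative content of the random-$\alpha$ argument.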
Complexity Theory Connections
The simplicity of this code ${x \rightarrow (x,\alpha \cdot x)}$ suggest some connection to de-randomization problems of complexity theory. The rough idea is this:
• Show that ${\alpha \cdot x}$ is computed by some low level complexity class.
• Then show that there is a method to de-randomize this class.
The reason I think that this is a possible approach is the very nature of the code. The code only does one finite field operation, which suggests to me that there could be some hope in solving this problem.
A more general question notes that the only property used in building this code is that ${\alpha \cdot x}$ is an invertible operation. This means that for any non-zero ${x}$, the values of ${\alpha \cdot x}$ are uniformly distributed, if ${\alpha}$ is. Perhaps there is another operation that has this property that will still make the code work, but is easier to de-randomize?
Open Problems
Can you see how to de-randomize this code? Can you relate this problem to other open de-randomization problems from complexity theory?
$\displaystyle \S$
The footnote*: Ken fixed typos “Wonzen—” to “Wozen—” in the Wikipedia article on ${W(\alpha)}$, but the typo persists in the article’s title and hence its URL. This is what the typo-fixed link will
be. Is there a good way to automate correcting this kind of link error on the Net? Can error-correcting codes applied to URLs be relevant here?
1. August 8, 2011 1:18 pm
I would guess that good choices of $\alpha$ are complicated in the sense of having minimal polynomials of large degree and/or having large order in the multiplicative group. If that’s true,
derandomizing this algorithm might be related to hard number-theoretic problems. (At least, I would guess that finding primitive roots in finite fields is hard.)
2. August 8, 2011 1:39 pm
Ken, to fix a problem in the title of a Wikipedia article, you have to move the page.
3. August 8, 2011 1:41 pm
I moved the Wikipedia article to the correct title for you: http://en.wikipedia.org/wiki/Wozencraft_ensemble (note lower case in the second title word).
□ August 8, 2011 3:25 pm
Thanks!—to you and Tyson. Meanwhile I fixed the link in the post by making that ‘e’ lowercase. I’m pretty sure the original link had the uppercase ‘E’, and I know Wikipedia’s style asks
lowercase there, so it was a second unit of edit-distance in the URL.
4. August 8, 2011 2:31 pm
It’s a very nice open question.
A more general question notes that the only property used in building this code is that $\alpha \cdot x$ is an invertible operation. This means that for any non-zero $x$, the values of $\alpha\cdot x$ are uniformly distributed, if $\alpha$ is. Perhaps there is another operation that has this property that will still make the code work, but is easier to de-randomize?
This should be qualified (probably the qualification is already clear to you, but for readers’ benefit).
Technically, the argument uses that the operation $x \rightarrow \alpha \cdot x$ (over $\mathbb{F}_{2^n}$) is linear, viewed as an operation over $\mathbb{F}^n_{2}$. This ensures that our code is
linear over $\mathbb{F}_{2}^n$, which allows us to equate the code’s minimum distance with its minimal nonzero-codeword weight.
So to make the same argument work, we’d need to find another linear operation, which is fairly restrictive.
We could use a nonlinear operation and settle for a non-linear code, but then we would have to bound the minimum distance directly. This involves a significantly-larger union bound: rather than
ranging over $x$ with $|x|$ small, one would have to range over $x, y$ with $|x – y|$ small.
5. August 8, 2011 4:16 pm
You say:
> The rough idea is this:
> Show that (alpha * x) is computed by some low level complexity class.
> Then show that there is a method to de-randomize this class.
Well, alpha*x is computable in AC^0[mod 2], but it's not clear to me that this is enough (even if it were in AC^0 or in some class that can be fully derandomized). In particular, it's
not obvious to me that it’s sufficient to compute alpha*x. Don’t you need a low level complexity class that can *recognize* if you have a good alpha, i.e., one with good minimum distance? On the
surface, this sounds like a much trickier problem than just doing the arithmetic. (Maybe it’s easier because you can restrict to the case where x has a low weight?)
□ August 9, 2011 7:38 am
Alex: You’re right. The main problem seems to be that the only way we can compute the distance of such codes is to go through all possible codewords. The simplicity of the encoding might be
useful but we do not know of a way to exploit it.
There is a derandomization result known. Cheraghchi, Shokrollahi, and Wigderson observed that under standard derandomization complexity assumptions (for much higher complexity classes then
what Dick proposed above) the Wozencraft ensemble can be derandomized.
rational number
A number that can be written as an ordinary fraction – a ratio, a/b, of two integers, a and b, where b isn't zero – or as a decimal expansion that either stops (like 4.58) or is periodic (like
1.315315...). Other examples include 1, 1.2, 385.66, and 1/3.
Rational numbers are countable, which means that, although there are infinitely many of them, they can always be put in a definite order, from smallest to largest, and can thus be counted. They also
form what's called a densely ordered set; in other words, between any two rationals there always sits another one – in fact infinitely many other ones.
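This density is easy to demonstrate with exact rational arithmetic; a minimal sketch using Python's `fractions` module (the endpoints 1/3 and 1/2 are arbitrary):

```python
from fractions import Fraction

a, b = Fraction(1, 3), Fraction(1, 2)
mid = (a + b) / 2  # the average of two rationals is again a rational
print(mid)         # 5/12, strictly between 1/3 and 1/2
```

Repeating the step on (a, mid) or (mid, b) produces as many intermediate rationals as desired.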
The rational numbers are a subset of the real numbers; real numbers that aren't rational are called, rationally enough, irrational numbers. Although rationals are dense on the real number line, in
the sense that any open set contains a rational, they're pretty sparse in comparison with the irrationals. One way to think of this is that the infinity of rationals (which, strangely enough, is
exactly the same size as the infinity of whole numbers) is smaller than the infinity of irrational numbers. Another way to grasp the scarcity versus density issue, is to realize that the rationals
can be covered by a set whose "length" is arbitrarily small. In other words, given a string of any positive length, no matter how short, it will still be long enough to cover all the rationals. In
mathematical parlance, the rationals are a measure zero set. The irrationals, by contrast, are a measure one set. This difference in measure means that the rationals and irrationals are quite
different even though a rational can always be found between any two irrationals, and an irrational exists between any two rationals.
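The covering argument behind measure zero can be made concrete: if the k-th rational in some enumeration is covered by an interval of length ε/2^k, the total length never exceeds ε, no matter how many rationals are covered. A small numeric sketch (the enumeration itself is left abstract):

```python
# Interval lengths eps/2 + eps/4 + eps/8 + ... sum (mathematically) to
# just under eps, yet every rational in the enumeration gets covered.
eps = 0.01
total = sum(eps / 2**k for k in range(1, 1001))
print(total)  # approximately eps = 0.01
```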
Retsil Prealgebra Tutor
Find a Retsil Prealgebra Tutor
...I can help students understand the concepts behind specific problems and how those concepts fit into the big picture. And perhaps most importantly of all, I love mathematics, and I have always
enjoyed helping others learn to love it, too! I have always loved to write, and I enjoy using that passion to help others work with language as well.
35 Subjects: including prealgebra, English, reading, writing
My goal as a tutor is to see the student excel. I have over four years of experience as a tutor, working with students from the elementary through the college level. When I work with students, I
am aiming for more than just good test scores - I will build confidence so that my students know that they know the material.
8 Subjects: including prealgebra, calculus, geometry, algebra 1
...I feel communication is the strongest skill required for good tutoring. I have been helping kids for the past 3 years and have developed wonderful communication to help children in a variety of
settings. I recently was a volunteer tutor at the Kent and Covington libraries where I tutored children K-12th grade in many subjects.
25 Subjects: including prealgebra, chemistry, algebra 1, physics
...At the first meeting, I work really hard to understand all my students as people, and not as a label. I personalize my lesson plans to meet your needs. Not only that, I design my lesson plans
to best match your style of learning and comprehension.
38 Subjects: including prealgebra, reading, chemistry, writing
...For the past 3 years, I have been a First Years Program Leader on campus, essentially guiding freshmen through the various challenges and concerns they have upon entering college. I have taught
at a "Read 'n' Lead" program at my local library where I read to and helped elementary school students...
42 Subjects: including prealgebra, reading, English, calculus
Figure 11.
Effects induced by bootstrapping. Distribution of the p values of χ^2 tests of each categorical variable X[2],..., X[5 ]and the binary response for the null case simulation study, where none of the
predictor variables is informative. The left plots correspond to the distribution of the p values computed from the original sample before bootstrapping. The right plots correspond to the
distribution of the p values computed for each variable from the bootstrap sample drawn with replacement.
Strobl et al. BMC Bioinformatics 2007 8:25 doi:10.1186/1471-2105-8-25
Answer to Tutorials on Quadratic Functions
Interactive Tutorial (1)
Question: Set a to zero and explain the graph obtained. Which term in ax^2 + bx + c gives the parabolic shape?
If a = 0, then f(x) = bx + c, and the graph of f(x) in this case is a line: a linear function (if b = 0, f is a constant function).
The term ax^2 gives the parabolic shape.
Interactive Tutorial (3)
a) f(x) = x^2 + x - 2
b) g(x) = 4x^2 + x + 1
c) h(x) = x^2 - 4x + 4
Use the analytical method described in the example to find the x intercepts and compare the results.
a) The graph of f(x) has x intercepts at (-2 , 0) and (1 , 0).
b) The graph of g(x) has no x intercepts.
c) The graph of h(x) has an x intercept (graph touches the x axis) at (2 , 0), since h(x) = (x - 2)^2.
a) Use the applet window and set a,b and c to values such that b^2 - 4ac < 0. How many x-intercepts the graph of f(x) has ?
b) Use the applet window and set a,b and c to values such that b^2 - 4ac = 0. How many x-intercepts the graph of f(x) has?
c) Use the applet window and set a, b and c to values such that b^2 - 4ac > 0. How many x-intercepts the graph of f(x) has ?
a) If b^2 - 4ac < 0 there are no x intercepts.
b) If b^2 - 4ac = 0 there is one x intercept (the graph touches the x axis).
c) If b^2 - 4ac > 0 there are two x intercepts.
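The three example functions can be checked directly from the discriminant. A small sketch (the helper name is my own):

```python
import math

def x_intercepts(a, b, c):
    """Real roots of ax^2 + bx + c, classified by the discriminant b^2 - 4ac."""
    d = b * b - 4 * a * c
    if d < 0:
        return []                       # no x intercepts
    if d == 0:
        return [-b / (2 * a)]           # one intercept: graph touches the x axis
    r = math.sqrt(d)
    return sorted([(-b - r) / (2 * a), (-b + r) / (2 * a)])

print(x_intercepts(1, 1, -2))   # f: [-2.0, 1.0]
print(x_intercepts(4, 1, 1))    # g: []
print(x_intercepts(1, -4, 4))   # h: [2.0]
```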
More references on quadratic functions
Englewood, NJ Geometry Tutor
Find an Englewood, NJ Geometry Tutor
...I have a degree in physics and a minor in mathematics. I'm also currently working on a masters in applied mathematics & statistics. In several courses, such as Ordinary Differential Equations
(ODE) and Partial Differential Equations (PDE), we make heavy use of programs such as Mathematica and Maple.
83 Subjects: including geometry, chemistry, physics, statistics
...I've worked on all subjects and have excellent recommendations. I started with a major test prep company, and have experience in the following tests: SAT I (math, reading, and writing), ACT,
GRE, GMAT, MCAT Verbal, LSAT, SSAT, SHSAT, ISEE, SSAT, and SAT Subject Tests and AP Tests (Math level 1 a...
42 Subjects: including geometry, reading, English, biology
...Before that I student taught at Scarsdale High School and the Byram Hills middle school (H. C. Crittenden). I hold NYS certification in math education, grades 7 to 12, and students with
7 Subjects: including geometry, algebra 1, algebra 2, trigonometry
...I will try my best to help you achieve your purpose of this subject! I choose to teach Chinese, because it is my first language. Also I speak English well.
2 Subjects: including geometry, Chinese
Hi, I am a senior student in Electrical Engineering at City College of New York (C.U.N.Y). I've worked as a tutor for many years now and have acquired a lot of experience working with high school
and college students. Whether you need help with basic math or college level math or physics, I will be...
23 Subjects: including geometry, chemistry, physics, statistics
Log expansion for infinite solenoid
Wow, never mind. Clearly I am being silly here, for [itex] \Lambda \rightarrow \infty [/itex].
[tex] log\bigg( \frac{\Lambda}{\rho} + \sqrt{1 + \frac{\Lambda^2}{\rho^2}} \bigg) \rightarrow log \bigg( \frac{ 2 \Lambda}{\rho} \bigg) \rightarrow log(2 \Lambda) - log(\rho). [/tex]
As for the [itex] \rho_o [/itex] I have no idea why that enters the equation.
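For what it's worth, the limiting step checks out numerically (the values of Λ and ρ below are arbitrary test choices):

```python
import math

rho = 1.3                          # arbitrary test radius
for Lam in (1e3, 1e6):             # increasingly "large" Lambda
    exact = math.log(Lam / rho + math.sqrt(1 + (Lam / rho) ** 2))
    approx = math.log(2 * Lam) - math.log(rho)
    print(Lam, exact - approx)     # difference shrinks as Lambda grows
```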
Grading System
13-Nov-2007, 20:57 #1
I am a former educator and would like the opinion of another teacher. My son is in 7th grade. He came home with 6 A's and B's on his report card, plus one D. The D was a red flag to me. When I
questioned the teacher, she told me it was because there were a lot of boys in that class and they are goofing off. I accepted that and had a talk with my son.
When I looked closely at his current grades online, I noticed that he had 2 F's on Vocab. units. So I looked in his book to see what he did wrong. I discovered that in one lesson he actually had
a 91/98 which is a 93%. Yet her score was an 8/15, or a 53%. This made no sense to me. He only missed 7 on this lesson out of 98 problems, but he was given an F.
Her system of scoring is this: if you miss 3 on any of the 5 sections of the unit, you get no points at all. Now, the child may have gotten 17 correct out of 20, but she still gives a zero for
that section. An F!! If you miss 2, you get one point. If you miss 1, you get 2 points, and if you miss zero, you get 3 points. Each section is worth 3 points and that's how she came up with her
total of 15 in the paragraph above. There were 5 sections within that unit.
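For concreteness, the scoring rule as described can be written out; with one plausible distribution of the 7 misses across the 5 sections (our assumption — the post doesn't give the actual breakdown), it reproduces the 8/15:

```python
def section_points(misses):
    """Teacher's rule: 0 misses -> 3 pts, 1 -> 2, 2 -> 1, 3 or more -> 0."""
    return {0: 3, 1: 2, 2: 1}.get(misses, 0)

# One distribution of the 7 total misses that yields the reported 8/15:
misses_per_section = [0, 1, 1, 2, 3]
score = sum(section_points(m) for m in misses_per_section)
print(f"{score}/15")    # 8/15, i.e. 53%, despite 91/98 correct answers
```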
I understand if she wants Vocab to be worth less than other things such as tests, but you cannot give these kids zero credit when they are doing work. She can easily use the total points for the
whole unit and then weigh these units within her computer system.
I am totally appalled by this system and have run this past several parents plus a former university professor. They are in agreement with me that this is outrageous. I checked with another
mother and her son had all A's and B's, except for an F in this class.
I have discussed this with the teacher and she says she does this because of cheating between the classes. I don't know about the other kids, but my child is not cheating on this because he does
these at home. At any rate, cheating has nothing to do with giving my child an "F" when he clearly did "A" work. She also says that she has been using this system for a long time.
Well, just because it's been done forever, doesn't make it right. I don't think parents know what is going on here. It took me a while to discover it myself.
If I were to look at her bell curve in her class, the heaviest would be at the bottom. She did admit to me that all of the kids are doing poorly. That is an obvious indication to me that the
teaching and/or grading methods need to be re-evaluated.
Thanks for any input you can offer. I'm not afraid to take this matter as far as it needs to go to get this changed....
With percentages like that, I'd hate to see how this person teaches math! This is definitely an issue you should take up with the school principal. I would even suggest writing the facts in a
letter, and carbon copy it to the superintendent of schools. (A written letter via snail mail will get more attention than an email or phone call.) Don't be accusatory or hostile in your letter;
phrase it much as you did here - you simply don't understand how 91 correct answers out of 98 could be a failing grade, and that you haven't gotten any satisfactory answers from the teacher. (To
be honest, this teacher sounds like she has some sort of personal grudge or vendetta against males.)
Good luck!
You definitely have an issue to pursue. If her "problem" is cross-class cheating, there are ways to combat this (multiple tests could be one way).
Normally as a teacher you want your mean mark to lie somewhere between 70% and 75%. Radically lower results usually mean the test was too difficult; radically higher results usually mean the test was
too easy. From what you describe, the results of her grading are severely skewed to the low end. This should be a red flag to the principal of the school that something is wrong.
I would discuss this with other parents and present a joint petition to the school principal. A group of ticked-off parents is more forceful than just one.
I agree with Ouisch!
Additionally, I suggest that this matter should be discussed with the principal and the teacher at the same conference. Make the teacher justify her grading system in front of her supervisor
while you are present. I would not speak with the principal privately; the principal's response, most assuredly, will be: "I will speak with the teacher and get back to you!" Besides, speaking
with the principal privately will only serve to infuriate the teacher and cause her to be more inclined to "take it out" on your son.
Keep us posted as to the outcome!
Maintaining order in a generalized linked list
Results 1 - 10 of 23
, 1989
"... This paper is a study of persistence in data structures. Ordinary data structures are ephemeral in the sense that a change to the structure destroys the old version, leaving only the new version
available for use. In contrast, a persistent structure allows access to any version, old or new, at any t ..."
Cited by 250 (6 self)
Add to MetaCart
This paper is a study of persistence in data structures. Ordinary data structures are ephemeral in the sense that a change to the structure destroys the old version, leaving only the new version
available for use. In contrast, a persistent structure allows access to any version, old or new, at any time. We develop simple, systematic, and efficient techniques for making linked data structures
persistent. We use our techniques to devise persistent forms of binary search trees with logarithmic access, insertion, and deletion times and O(1) space bounds for insertion and deletion.
- PROCEEDINGS OF THE 10TH ANNUAL EUROPEAN SYMPOSIUM ON ALGORITHMS (ESA , 2002
"... In the Order-Maintenance Problem, the objective is to maintain a total order subject to insertions, deletions, and precedence queries. Known optimal solutions, due to Dietz and Sleator, are
complicated. We present new algorithms that match the bounds of Dietz and Sleator. Our solutions are simple, ..."
Cited by 62 (9 self)
Add to MetaCart
In the Order-Maintenance Problem, the objective is to maintain a total order subject to insertions, deletions, and precedence queries. Known optimal solutions, due to Dietz and Sleator, are
complicated. We present new algorithms that match the bounds of Dietz and Sleator. Our solutions are simple, and we present experimental evidence that suggests that they are superior in practice.
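As a rough illustration of the problem statement (a toy label-based sketch, not the Dietz-Sleator structure, and nowhere near its bounds — the class and constant names here are invented for illustration):

```python
class OrderMaintenance:
    """Toy order-maintenance structure: each element carries an integer
    label, so a precedence query is a single comparison.  When an insert
    finds no free label between its neighbors, the whole list is
    relabeled -- the expensive step that the optimal solutions amortize."""

    GAP = 1 << 16

    def __init__(self):
        self.label = {}          # element -> integer label
        self.order = []          # elements in list order

    def insert_after(self, x, y):
        """Insert y immediately after x (x = None inserts at the front)."""
        i = 0 if x is None else self.order.index(x) + 1
        self.order.insert(i, y)
        lo = self.label[self.order[i - 1]] if i > 0 else 0
        hi = (self.label[self.order[i + 1]]
              if i + 1 < len(self.order) else lo + 2 * self.GAP)
        if hi - lo < 2:
            self._relabel()      # no room between neighbors
        else:
            self.label[y] = (lo + hi) // 2

    def _relabel(self):
        for j, e in enumerate(self.order):
            self.label[e] = (j + 1) * self.GAP

    def precedes(self, a, b):
        """Does a come before b in the total order?"""
        return self.label[a] < self.label[b]
```

Repeated inserts at one spot exhaust a gap after roughly log2(GAP) splits and trigger a full relabel; controlling how often that happens is exactly what the Dietz-Sleator amortization (and the simpler algorithms above) achieve.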
, 2001
"... We present a labeling scheme for rooted trees that supports ancestor queries. Given a tree, the scheme assigns to each node a label which is a binary string. Given the labels of any two nodes u
and v, it can in constant time be determined whether u is ancestor to v alone from these labels. For tr ..."
Cited by 48 (7 self)
Add to MetaCart
We present a labeling scheme for rooted trees that supports ancestor queries. Given a tree, the scheme assigns to each node a label which is a binary string. Given the labels of any two nodes u and
v, it can in constant time be determined whether u is ancestor to v alone from these labels. For trees of size n our scheme assigns labels of size bounded by log n + O( p log n) bits to each node.
This improves a recent result of Abiteboul, Kaplan and Milo at SODA'01, where a labeling scheme with labels of size 3=2 log n+ O(log log n) was presented. The problem is among other things motivated
in connection with ecient representation of information for XML-based search engines for the internet.
- SIAM Journal on Computing , 1999
"... Abstract. We show how to maintain a data structure on trees which allows for the following operations, all in worst-case constant time: 1. insertion of leaves and internal nodes, 2. deletion of
leaves, 3. deletion of internal nodes with only one child, 4. determining the least common ancestor of any ..."
Cited by 43 (0 self)
Add to MetaCart
Abstract. We show how to maintain a data structure on trees which allows for the following operations, all in worst-case constant time: 1. insertion of leaves and internal nodes, 2. deletion of
leaves, 3. deletion of internal nodes with only one child, 4. determining the least common ancestor of any two nodes. We also generalize the Dietz–Sleator “cup-filling ” scheduling methodology, which
may be of independent interest.
, 2003
"... This paper focuses on the optimization of the navigation through voluminous subsumption hierarchies of topics employed by Portal Catalogs like Netscape Open Directory (ODP). We advocate for the
use of labeling schemes for modeling these hierarchies in order to efficiently answer queries such as subs ..."
Cited by 37 (6 self)
Add to MetaCart
This paper focuses on the optimization of the navigation through voluminous subsumption hierarchies of topics employed by Portal Catalogs like Netscape Open Directory (ODP). We advocate for the use
of labeling schemes for modeling these hierarchies in order to efficiently answer queries such as subsumption check, descendants, ancestors or nearest common ancestor, which usually require costly
transitive closure computations. We first give a qualitative comparison of three main families of schemes, namely bit vector, prefix and interval based schemes. We then show that two labeling schemes
are good candidates for an efficient implementation of label querying using standard relational DBMS, namely, the Dewey Prefix scheme [6] and an Interval scheme by Agrawal, Borgida and Jagadish [1].
We compare their storage and query evaluation performance for the 16 ODP hierarchies using the PostgreSQL engine.
- In Proceedings of the 10th Annual European Symposium on Algorithms , 2002
"... Abstract. We study the problem of maintaining a dynamic ordered set subject to insertions, deletions, and traversals of k consecutive elements. This problem is trivially solved on a RAM and on a
simple two-level memory hierarchy. We explore this traversal problem on more realistic memory models: the ..."
Cited by 32 (11 self)
Add to MetaCart
Abstract. We study the problem of maintaining a dynamic ordered set subject to insertions, deletions, and traversals of k consecutive elements. This problem is trivially solved on a RAM and on a
simple two-level memory hierarchy. We explore this traversal problem on more realistic memory models: the cache-oblivious model, which applies to unknown and multi-level memory hierarchies, and
sequential-access models, where sequential block transfers are less expensive than random block transfers. 1
- J. of Algorithms , 1993
"... We introduce data-structural bootstrapping, a technique to design data structures recursively, and use it to design confluently persistent deques. Our data structure requires O(log 3 k)
worstcase time and space per deletion, where k is the total number of deque operations, and constant worst-case t ..."
Cited by 15 (4 self)
Add to MetaCart
We introduce data-structural bootstrapping, a technique to design data structures recursively, and use it to design confluently persistent deques. Our data structure requires O(log 3 k) worstcase
time and space per deletion, where k is the total number of deque operations, and constant worst-case time and space for other operations. Further, the data structure allows a purely functional
implementation, with no side effects. This improves a previous result of Driscoll, Sleator, and Tarjan. 1 An extended abstract of this paper was presented at the 4th ACM-SIAM Symposium on Discrete
Algorithms, 1993. 2 Supported by a Fannie and John Hertz Foundation fellowship, National Science Foundation Grant No. CCR-8920505, and the Center for Discrete Mathematics and Theoretical Computer
Science (DIMACS) under NSF-STC88-09648. 3 Also affiliated with NEC Research Institute, 4 Independence Way, Princeton, NJ 08540. Research at Princeton University partially supported by the National
Science Foundatio...
- Journal of Functional Programming , 1997
"... Arrays are probably the most widely used data structure in imperative programming languages, yet functional languages typically only support arrays in a limited manner, or prohibit them
entirely. This is not too surprising, since most other mutable data structures, such as trees, have elegant immuta ..."
Cited by 13 (0 self)
Add to MetaCart
Arrays are probably the most widely used data structure in imperative programming languages, yet functional languages typically only support arrays in a limited manner, or prohibit them entirely.
This is not too surprising, since most other mutable data structures, such as trees, have elegant immutable analogues in the functional world, whereas arrays do not. Previous attempts at addressing
the problem have suffered from one of three weaknesses, either that they don't support arrays as a persistent data structure (unlike the functional analogues of other imperative data structures), or
that the range of operations is too restrictive to support some common array algorithms efficiently, or that they have performance problems. Our technique provides arrays as a true functional
analogue of imperative arrays with the properties that functional programmers have come to expect from their data structures. To efficiently support array algorithms from the imperative world, we
provide O(1) operations for single-threaded array use. Fully persistent array use can also be provided at O(1) amortized cost, provided that the algorithm satisfies a simple requirement as to
uniformity of access. For those algorithms which do not access the array uniformly or single-threadedly, array reads or updates take at most O(log n) amortized time, where n is the size of the array.
Experimental results indicate that the overheads of our technique are acceptable in practice for many applications.
- Journal of Algorithms , 1998
"... Let the position of a list element in a list be the number of elements preceding it plus one. An indexed list supports the following operations on a list: Insert; delete; return the position of
an element; and return the element at a certain position. The order in which the elements appear in the li ..."
Cited by 12 (0 self)
Add to MetaCart
Let the position of a list element in a list be the number of elements preceding it plus one. An indexed list supports the following operations on a list: Insert; delete; return the position of an
element; and return the element at a certain position. The order in which the elements appear in the list is completely determined by where the insertions take place; we do not require the presence
of any keys that induce the ordering. We consider approximate indexed lists, and show that a tiny relaxation in precision of the query operations allows a considerable improvement in time complexity.
The new data structure has applications in two other problems; namely, list labeling and subset rank. 1 Introduction An indexed list [5] is a list abstract data type that supports the following
operations: Insert(x; y): Insert list element y immediately after list element x, which may be a list header; Delete(x): Delete list element x; Pos(x): Return the position of list element x, that is,
one plu...
- Journal of Web Sematics 1, Issue , 2004
"... This paper focuses on the optimization of the navigation through voluminous subsumption hierarchies of topics employed byPortal Catalogs like Netscape Open Directory (ODP). We advocate for the
use of labeling schemes for modeling these hierarchies in order to efficiently answer queries such as su ..."
Cited by 11 (1 self)
Add to MetaCart
This paper focuses on the optimization of the navigation through voluminous subsumption hierarchies of topics employed by Portal Catalogs like Netscape Open Directory (ODP). We advocate for the use of
labeling schemes for modeling these hierarchies in order to efficiently answer queries such as subsumption check, descendants, ancestors or nearest common ancestor, which usually require costly
transitive closure computations. We first give a qualitative comparison of three main families of schemes, namely bit vector, prefix and interval based schemes. We then show that two labeling schemes
are good candidates for an efficient implementation of label querying using standard relational DBMS, namely the Dewey Prefix scheme and an Interval scheme by Agrawal, Borgida and Jagadish. We compare
their storage and query evaluation performance for the 16 ODP hierarchies using the PostgreSQL engine.
Final exam tomorrow, need help!
May 4th 2010, 05:18 PM #1
May 2010
Final exam tomorrow, need help!
I don't need any work shown I just need the answers so I can check my answer. I have my final exam tomorrow so I need to be sure I'm doing these right.
Any answer would help me out a lot, but the more I can get answers to the better.
Thanks so much!
a) Initial value problem
dy/dx=y^2 + 1
b)Find the area of the surface obtained by rotating the curve about the y-axis?
y=(1/4)*x^2 - (1/2)*ln(x); 1≤x≤2
c)Find the length of the curve
x=((y^4)/8) + (1/(4*y^2)); 1≤y≤2
d)Solve the differential equation.
7yy' = 5x
e)Set up, but do not evaluate, an integral for the area of the surface obtained by rotating the curve about the given axis.
y=e^x; 1≤y≤3; y-axis
f) Find the area of the surface obtained by rotating the curve about the x-axis.
x= (1/3)*(y^2 +2)^(3/2)
g)Find the area of the surface obtained by rotating the curve about the x-axis.
Have you made any attempts at these or are we working with a blank canvas? Showing some working will help us help you.
This first one is separable.
$\frac{dy}{dx}= y^2 + 1$
$\frac{dy}{ y^2 + 1}= dx$
Now integrate both sides, after you show me that, I will show you more.
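Carrying the integration through gives $\arctan(y) = x + C$, i.e. $y = \tan(x + C)$; a quick numeric check confirms it satisfies the ODE (the value of C below is chosen arbitrarily):

```python
import math

# candidate solution y(x) = tan(x + C) from arctan(y) = x + C
C = 0.3
h = 1e-6
for x in (0.0, 0.5, 1.0):
    y = math.tan(x + C)
    # central-difference approximation of dy/dx
    dydx = (math.tan(x + h + C) - math.tan(x - h + C)) / (2 * h)
    assert abs(dydx - (y**2 + 1)) < 1e-4
print("y = tan(x + C) satisfies dy/dx = y^2 + 1 (numerically)")
```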
Damn! That's too many questions... it is hard to read. It would be great if you posted the questions separately.
For (c), to find the length of the curve:
first find the derivative of the given function... and then the length of the curve is given by
$L =\int_1^2 \sqrt {1+ \mbox{(derivative)}^2} \mbox{dy}$
you can check your answers for differentiation, integration etc. at this website
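For (c) concretely: the derivative is $y^3/2 - 1/(2y^3)$, and $1 + (\mbox{derivative})^2$ is the perfect square $(y^3/2 + 1/(2y^3))^2$, so the integral evaluates to $33/16$. A numeric midpoint-rule check (the step count is an arbitrary choice):

```python
import math

def dx_dy(y):                       # derivative of x = y**4/8 + 1/(4*y**2)
    return y**3 / 2 - 1 / (2 * y**3)

n = 100_000                         # midpoint rule on [1, 2]
h = 1.0 / n
L = sum(math.sqrt(1 + dx_dy(1 + (i + 0.5) * h) ** 2) * h for i in range(n))
print(L)                            # ~2.0625 = 33/16
```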
Canonical Tensor Decompositions
Pierre Comon
In: ARCC Workshop on Tensor Decomposition, July 18 - 24, 2004, Palo Alto, California.
The Singular Value Decomposition (SVD) may be extended to tensors at least in two very different ways. One is the High-Order SVD (HOSVD), and the other is the Canonical Decomposition (CanD). Only the
latter is closely related to the Tensor Rank. Important basic questions are raised in this short paper, such as the maximal achievable rank of a tensor of given dimensions, or the computation of a
CanD. Some questions are answered, and it turns out that the answers depend on the choice of the underlying field, and on tensor symmetry structure, which outlines a major difference compared to the matrix case.
Math Help
September 26th 2009, 04:51 AM #1
Junior Member
Jun 2009
$n\geq 0$
I need to write $a_{n+1}$ in terms of $\frac{1}{2}a_n$
I've written out the first few terms... but I'm just not seeing it... apparently this is a geometric sequence if that helps...
Also, while I'm here, what's the TeX code to put a space? Cheers
$n\geq 0$
I need to write $a_{n+1}$ in terms of $\frac{1}{2}a_n$
I've written out the first few terms... but I'm just not seeing it... apparently this is a geometric sequence if that helps...
Also, while I'm here, what's the TeX code to put a space? Cheers
Question - Is it
$a_{n+1}=a_n+\frac{1}{2^{n+1}},\;\;n\geq 0,\;\;a_0=1$
$a_{n+1}=a_n+\frac{1}{2^{n}},\;\;n\geq 0,\;\;a_0=1$
As for spaces, the easiest is using \, (small) or \; (regular)
It says $a_{n+1}=a_n+\frac{1}{2^{n+1}},\;\;n\geq 0,\;\;a_0=1$... but maybe it's a typo... can you answer the question if it's
$a_{n+1}=a_n+\frac{1}{2^{n}},\;\;n\geq 0,\;\;a_0=1$? I still can't :S
Thanks for your help
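If it helps, unrolling $a_{n+1}=a_n+\frac{1}{2^{n+1}},\;a_0=1$ numerically points at the closed form $a_n = 2 - \frac{1}{2^n}$ (the partial sums of the geometric series $1 + \frac{1}{2} + \cdots + \frac{1}{2^n}$), which a few lines confirm:

```python
a = 1.0                                       # a_0 = 1
for n in range(20):
    assert abs(a - (2 - 1 / 2**n)) < 1e-12    # conjectured closed form
    a += 1 / 2**(n + 1)                       # a_{n+1} = a_n + 1/2^(n+1)
print("a_n = 2 - 1/2**n holds for n = 0..19")
```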
Primality Testing Question
Mark Andrews on Wed, 22 Mar 2000 15:58:24 -0800
[Date Prev] [Date Next] [Thread Prev] [Thread Next] [Date Index] [Thread Index]
Primality Testing Question
I'm a developer (not a mathematician) recently tasked with finding 1) a primality test algorithm for numbers at least up to about 10^20 and preferably 10^40. It must be a test for true primality, not
pseudo primality 2) a factoring algorithm. Poking around a bit, I came across Pari. The manual leads me to believe a good composite pseudo primality test is incorporated, but I have no idea from docs
what range of numbers IsPrime() is valid for as a true primality test. In addition, I would appreciate a coded example of how to do factorization using Pari. The ideal would be to incorporate, as a
library, those parts of Pari applicable to the task into a Windows 32 bit executable. I read Bill Daly's instructions and got the calculator to compile fine. Any help at this point would be
Mark Andrews
Director of Product Design and Development marka@origindata.com
Origin Data, Inc. (805) 965-8115 x125
104 West Anapamu, Suite C (805) 965-7880 fax
Santa Barbara, CA 93101 http://www.origindata.com
Hyram is helping his little sister write her integers from 1 to 50. He suggests that they take a break after she has written her 33rd digit. What was the last two-digit number that Hyram’s little
sister wrote before they took the break? *
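The problem above can be brute-forced in a few lines (a sketch; the variable names are ours, and it confirms the count-by-digits reasoning: 9 one-digit numbers, then 24 more digits = 12 two-digit numbers):

```python
# Write the integers 1, 2, 3, ... and stop once the 33rd digit is written.
digits_written = 0
last_two_digit = None
for n in range(1, 51):
    digits_written += len(str(n))
    if n >= 10:
        last_two_digit = n
    if digits_written >= 33:
        break
print(last_two_digit)   # 21
```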
Ten items have the following weights, in pounds: 2, 3, 5, 5, 6, 6, 8, 8, 8, 9. The items are then divided into three sets of 20 pounds each. How many sets contain an 8 pound weight? *
The population of Expotown is currently 54,000 people. If the population has increased by 200% in the past 30 years, what was the population 30 years ago? *
An arithmetic series of positive integers has 8 terms and a sum of 2008. What is the smallest possible value of any member of the series? *
The 660 students at Mandelbrot Middle School voted on their choice for favorite among six mathematicians. The table shows the results of the vote. Finn made a circle graph to represent the data in
the table. In degrees, what is the measure of the central angle of the sector that represents votes for Gauss? *
In a round-robin chess tournament every player plays one game with every other player. Five participants withdrew after playing two games each. None of these players played a game against each other.
A total of 220 games were played in the tournament. Including those who withdrew, how many players participated? *
The sum of the reciprocals of three consecutive positive integersis equal to 47 divided by the product of the integers. What is the smallest of the three integers? *
A bag contains ten identical blue marbles and ten identical green marbles. In how many distinguishable ways can five of these marbles be put in a row if there are at least two blue marbles in the row
and every blue marble is next to at least one other blue marble? *
The sum of five different positive integers is 80. What is the largest possible value of the second largest number? *
From the time a shop opens at 10 a.m., one customer enters the shop every 15 minutes, until the shop closes at 7 p.m. There is a 1/3 chance that the salesman will convince the customer to buy a
widget. The shop owner makes a profit of $6 on each widget sold. What is the most the shop owner can pay the salesman (in dollars per hour) to exactly break even? That is, what hourly rate will make
the amount paid to the salesman equal the total amount of income from widget sales? *
Create your own free online surveys now! Powered by Polldaddy | {"url":"http://wbur.polldaddy.com/s/mathcounts?iframe=1","timestamp":"2014-04-21T12:19:56Z","content_type":null,"content_length":"16401","record_id":"<urn:uuid:4eff5adc-b8b9-4c14-8268-bf42406fa314>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00115-ip-10-147-4-33.ec2.internal.warc.gz"} |
Retsil Prealgebra Tutor
Find a Retsil Prealgebra Tutor
...I can help students understand the concepts behind specific problems and how those concepts fit into the big picture. And perhaps most importantly of all, I love mathematics, and I have always
enjoyed helping others learn to love it, too! I have always loved to write, and I enjoy using that passion to help others work with language as well.
35 Subjects: including prealgebra, English, reading, writing
My goal as a tutor is to see the student excel. I have over four years of experience as a tutor, working with students from the elementary through the college level. When I work with students, I
am aiming for more than just good test scores - I will build confidence so that my students know that they know the material.
8 Subjects: including prealgebra, calculus, geometry, algebra 1
...I feel communication is the strongest skill required for good tutoring. I have been helping kids for the past 3 years and have developed wonderful communication to help children in a variety of
settings. I recently was a volunteer tutor at the Kent and Covington libraries where I tutored children K-12th grade in many subjects.
25 Subjects: including prealgebra, chemistry, algebra 1, physics
...At the first meeting, I work really hard to understand all my students as people, and not as a label. I personalize my lesson plans to meet your needs. Not only that, I design my lesson plans
to best match your style of learning and comprehension.
38 Subjects: including prealgebra, reading, chemistry, writing
...For the past 3 years, I have been a First Years Program Leader on campus, essentially guiding freshmen through the various challenges and concerns they have upon entering college. I have taught
at a "Read 'n' Lead" program at my local library where I read to and helped elementary school students...
42 Subjects: including prealgebra, reading, English, calculus | {"url":"http://www.purplemath.com/retsil_wa_prealgebra_tutors.php","timestamp":"2014-04-21T14:53:59Z","content_type":null,"content_length":"24009","record_id":"<urn:uuid:4b4304db-9954-4851-9ad3-eeb4d87339b9>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00382-ip-10-147-4-33.ec2.internal.warc.gz"} |
Some heuristics about spline smoothing
October 8, 2013
By arthur charpentier
Let us continue our discussion on smoothing techniques in regression. Assume that $\mathbb{E}(Y\vert X=x)=h(x)$ where $h(\cdot)$ is some unknown function, but assumed to be sufficiently smooth. For
instance, assume that $h(\cdot)$ is continuous, that $h'(\cdot)$ exists, and is continuous, that $h''(\cdot)$ exists and is also continuous, etc. If $h(\cdot)$ is smooth enough, Taylor's expansion
can be used. Hence, for $x\in(\alpha,\beta)$,
$h(x)=h(\alpha)+\sum_{k=1}^d \frac{(x-\alpha)^k}{k!}h^{(k)}(\alpha)+\frac{1}{d!}\int_{\alpha}^x [x-t]^d h^{(d+1)}(t)dt$
which can also be written as
$h(x)=\sum_{k=0}^d a_k (x-\alpha)^k +\frac{1}{d!}\int_{\alpha}^x [x-t]^d h^{(d+1)}(t)dt$
for some $a_k$'s. The first part is simply a polynomial.
The second part is some integral. Using a Riemann sum approximation, observe that
$\frac{1}{d!}\int_{\alpha}^x [x-t]^d h^{(d+1)}(t)dt\sim \sum_{i=1}^j b_i (x-x_i)_+^d$
for some $b_i$'s, and some knots
$\alpha < x_1< x_2< \cdots < x_{j-1} < x_j < \beta$, so that
$h(x) \sim \sum_{k=0}^d a_k (x-\alpha)^k +\sum_{i=1}^j b_i (x-x_i)_+^d$
Nice! We have our linear regression model. A natural idea is then to consider a regression of $Y$ on $\boldsymbol{X}$ where
$\boldsymbol{X} = (1,X,X^2,\cdots,X^d,(X-x_1)_+^d,\cdots,(X-x_k)_+^d )$
given some knots $\{x_1,\cdots,x_k\}$. To make things easier to understand, let us work with our previous dataset,
If we consider one knot, and an expansion of order 1,
The prediction obtained with this spline can be compared with regressions on subsets (the dotted lines)
It is different, since we have here three parameters (and not four, as for the regressions on the two subsets). One degree of freedom is lost when asking for a continuous model. Observe that it is
possible to write, equivalently
So, what happened here?
Here, the functions that appear in the regression are the following
Now, if we run the regression on those two components, we get
If we add one knot, we get
the prediction is
Of course, we can choose much more knots,
We can even get a confidence interval
polygon(c(xr,rev(xr)),c(P[,2],rev(P[,3])),col="light blue",border=NA)
And if we keep the two knots we chose previously, but consider Taylor’s expansion of order 2, we get
So, what’s going on? If we consider the constant, and the first component of the spline based matrix, we get
If we add the constant term, the first term and the second term, we get the part on the left, before the first knot,
and with three terms from the spline based matrix, we can get the part between the two knots,
and finally, when we sum all the terms, we get this time the part on the right, after the last knot,
This is what we get using a spline regression, quadratic, with two (fixed) knots. And we can even get confidence intervals, as before
polygon(c(xr,rev(xr)),c(P[,2],rev(P[,3])),col="light blue",border=NA)
The great idea here is to use functions $(x-x_i)_+$, that will ensure continuity at point $x_i$.
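To make the construction concrete, here is a minimal sketch of the truncated-power design matrix described above (in Python rather than the post's R, on toy inputs chosen for illustration):

```python
def spline_design_matrix(xs, degree, knots):
    # each row: 1, x, ..., x^degree, then the truncated
    # powers (x - k)_+^degree, one column per knot
    rows = []
    for x in xs:
        row = [x**p for p in range(degree + 1)]
        row += [max(x - k, 0.0)**degree for k in knots]
        rows.append(row)
    return rows

# degree 1 with a single knot at 5: three columns, hence the
# three free parameters of the one-knot linear spline
X = spline_design_matrix([0.0, 2.0, 7.0], degree=1, knots=[5.0])
print(X)  # → [[1.0, 0.0, 0.0], [1.0, 2.0, 0.0], [1.0, 7.0, 2.0]]
```

Regressing $Y$ on these columns yields a continuous piecewise-linear fit: the hinge column is zero before the knot, so continuity at the knot comes for free.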
Of course, we can use those splines on our Dexter application,
Here again, using linear spline function, it is possible to impose a continuity constraint,
But we can also consider some quadratic splines,
| {"url":"http://www.r-bloggers.com/some-heuristics-about-spline-smoothing/","timestamp":"2014-04-20T08:40:12Z","content_type":null,"content_length":"46159","record_id":"<urn:uuid:d42c70f1-d3af-498f-a7db-b18d8420b96a>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00084-ip-10-147-4-33.ec2.internal.warc.gz"} |
To find a factor of n, find some m such that (m,n) is not 1
Quick description
If you are trying to design an algorithm for factorizing large numbers, then an easy but useful observation is that if you can find some $m$ such that the highest common factor of $m$ and $n$ is not $1$, then you
can find a factor of $n$ by applying Euclid's algorithm (which is very efficient) to $m$ and $n$.
Example 1
Suppose that you want to factorize $n$, and somebody gave you the information that $m^2 \equiv 1$ (mod $n$) for some $m$ with $m \not\equiv \pm 1$ (mod $n$). You could reason as follows. If $m^2 \equiv 1$ (mod $n$), then $m^2 - 1 \equiv 0$ (mod $n$), which is the same as saying that $n$ is a
factor of $m^2 - 1 = (m-1)(m+1)$. Therefore, $n$ cannot be coprime to both $m-1$ and $m+1$. So $n$ has a factor in common with either $m-1$ or $m+1$, and applying Euclid's algorithm to $(m-1,n)$ and to $(m+1,n)$ reveals that common factor.
General discussion
The simple trick shown here has the obvious drawback that it depended on our being told a highly nontrivial piece of information. But that is not surprising, given that there is no known quick
algorithm for factorizing large numbers, and it is widely believed that no such algorithm exists. From a theoretical point of view, the trick is important, and is in fact a vital component of Shor's
famous proof that efficient factorizing can be done with a quantum computer.
Login or register to post comments | {"url":"http://www.tricki.org/article/To_find_a_factor_of_n_find_some_m_such_that_mn_is_not_1","timestamp":"2014-04-20T18:23:42Z","content_type":null,"content_length":"22868","record_id":"<urn:uuid:b31ff0c0-c923-4162-a1b7-36995fe10a50>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00562-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fractions Worksheets For Pre-schoolers
Kids in preschool can learn fractions using basic fractions worksheets
Can preschool kids use fractions worksheets to learn fractions?
Yes, of course, kids going to preschool can learn fractions too, using the basic fractions worksheets. The condition is that they should know the counting numbers to ten, at least. If kids in
preschool don’t know counting to ten, get counting numbers worksheets and lessons by clicking this link.
Once kids know how to count to ten and they can count the number of objects too, then they can be introduced with basic fractions.
Apple activity to introduce basic fractions to preschoolers:
Get an apple and ask kids to count; of course, this is one apple and kids can recognize it as number “one”. Look at the picture of the apple below; you can use this picture if you don’t want to use a real apple.
After showing the apple (a real apple or the above image) to kids, now cut this apple into two equal pieces. You can use the following picture again if you don’t have a real apple to cut.
One half is the most basic fraction. Keep giving more examples of half to kids until they get comfortable with this idea. Ask your kids lots of question about half.
Once kids get the idea of one half, the next basic fraction is one quarter, and we choose a pizza example to introduce it. The task of explaining three quarters also gets very simple
with the example of a pizza. Consider a family of four members named Harry, Poonam and their Mom and Dad, and the Mom made a pizza as shown in the picture below:
Now mom cuts the above pizza into four equal slices and each member has a slice to eat. But only Harry wants to eat his share of the pizza right now. Below is a little attempt to introduce one quarter
and three quarters to kids.
These are the basic fractions parents can teach to their kids even before sending them to pre-school. I used this approach to teach fractions to my own daughter and it worked perfectly. You too can pick more
examples from daily life to teach fractions to your kids yourself. Later on you can use our fractions worksheets to teach higher fractions topics to your kids.
For more fractions worksheets and ideas visit our site by clicking any of the links in the post.
Great Regards
| {"url":"http://fractions-worksheets.com/fractions-worksheets-for-pre-schoolers/","timestamp":"2014-04-18T08:52:07Z","content_type":null,"content_length":"54994","record_id":"<urn:uuid:ce6c836e-0fa2-458e-bfe5-97df019c5c2f>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00173-ip-10-147-4-33.ec2.internal.warc.gz"} |
Homework Help
Posted by Lydia on Saturday, March 10, 2007 at 3:47pm.
Can someone please help me with this one?
Solve (5x+4)^1/2-3x=0
Just so you know, it is to the 1/2 power.
Add 3x to both sides of the equation, then square both sides.
That should enable you to gather terms and factor.
Yeah, that's the part I understood. Could you help me with the factoring?
This is what I have as of now:
Go to the quadratic equation
x= (-b +- sqrt (b^2-4ac)) /2a
x= 5 +- sqrt (25+48) /6
x= 5 +- sqrt (73) /6
x= -0.590667291 ; x= 2.25733396
check my work.
Oh, really? We didn't learn about doing it like that, but I think it does work. Thanks!
wait, I got the same answers as you, but then I checked my work and neither worked
dang: error
5x+4= 9x^2
That changes the quadratic to
9x^2 -5x -4=0
x= ( 5 +- sqrt (25 + 36*4))/18
check that.
yes it works! thanks a ton!
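The corrected quadratic can be checked numerically. Note that only one of the two roots actually satisfies the original radical equation, since squaring both sides can introduce an extraneous root (this check is a sketch, not part of the original thread):

```python
import math

# roots of 9x^2 - 5x - 4 = 0 by the quadratic formula
a, b, c = 9, -5, -4
disc = b * b - 4 * a * c                       # 25 + 144 = 169
roots = [(-b + s * math.sqrt(disc)) / (2 * a) for s in (1, -1)]
print(roots)                                   # → [1.0, -0.444...]

# substitute back into sqrt(5x + 4) - 3x = 0: only x = 1 survives
for x in roots:
    print(x, math.isclose(math.sqrt(5 * x + 4), 3 * x))
```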
| {"url":"http://www.jiskha.com/display.cgi?id=1173559657","timestamp":"2014-04-25T06:23:24Z","content_type":null,"content_length":"9035","record_id":"<urn:uuid:2076bf77-d244-4542-933c-401859822782>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00523-ip-10-147-4-33.ec2.internal.warc.gz"} |
Homework help!!!
November 26th 2007, 09:15 PM #1
Homework help!!!
I have a midterm tomorrow and can't spend all night trying to figure out what is going on with these problems. PLEASE help!!!! I understand congruence classes but am just having a problem putting
together exactly what is being asked here. Please clarify.
Fix a positive integer m. In this exercise, you will study which elements of Z(m) have inverses. Recall that we write Z(m) for the set of congruence classes modulo m, and that there are m of
these: [0],[1],[2],.....,[m-1]. Here [a] = {b element of Z such that b is congruent to a mod m}. Now given [a] element of Z(m), a congruence class [b] is called the multiplicative inverse of [a] if [a]
[b]=[1]. If such a class [b] exists, then [a] is called invertible.
a) Show that if gcd (a,m) =1, then [a] is invertible. (Hint: begin by using the hypothesis to find x,y element of Z for which ax + my=1)
b) Show that if [a] is invertible, then gcd(a,m)=1. (Hint: If [a] is invertible, show first that there exists b element of Z such that ab is congruent to 1 (mod m). Then prove that if gcd (a,m) is not
equal to 1, then a contradiction is reached.)
c) Find all of the invertible elements of Z(15), using parts (a) and (b). Find inverses for each of the elements you wrote down, by inspection.
I NEED HELP ON a) and b) the most!!!! THANKS IN ADVANCE
No help out there I guess
Let $m\geq 2$ be a positive integer. Define $\mathbb{Z}_m^{\text{x}}$ to be the congruences classes $[x]_m$ such that $\gcd(x,m)=1$. Prove that $\mathbb{Z}_m^{\text{x}}$ is a group under
multiplication (not addition). Thus if you have the equation $[a][x]=[1]$ and $\gcd(a,m)=1$ then by the property of groups this equation is solvable for $[x]$.
November 26th 2007, 11:16 PM #2
November 27th 2007, 08:27 AM #3
| {"url":"http://mathhelpforum.com/number-theory/23580-homework-help.html","timestamp":"2014-04-16T21:11:31Z","content_type":null,"content_length":"37908","record_id":"<urn:uuid:53615eac-e539-4380-85b0-e5412b867bd4>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00207-ip-10-147-4-33.ec2.internal.warc.gz"} |
no one is able to explain!
05-12-2010 #1
qsort -> please explain-it
I started my discussion in the above link, while copy and pasting the Quicksort Algorithm from "The C Programming Language, Second Edition" by Brian W. Kernighan and Dennis M. Ritchie (page 87).
And I raised a simple question, e.g. if we have these inputs: e.g. {11, 3, 9, 7, 4}
How will this algorithm work (just one iteration)? Believe me or not, no one was able to explain it
TERRIFYING! And I still have a problem understanding it... well I am novice, dummy etc. But what about others?
/* qsort: sort v[left]...v[right] into increasing order */
void qsort(int v[], int left, int right)
int i, last;
void swap(int v[], int i, int j);
if (left >= right) /* do nothing if array contains */
return; /* fewer than two elements */
swap(v, left, (left + right)/2); /* move partition elem */
last = left; /* to v[0] */
for (i = left + 1; i <= right; i++) /* partition */
if (v[i] < v[left])
swap(v, ++last, i);
swap(v, left, last); /* restore partition elem */
qsort(v, left, last-1);
qsort(v, last+1, right);
/* swap: interchange v[i] and v[j] */
void swap(int v[], int i, int j)
int temp;
temp = v[i];
v[i] = v[j];
v[j] = temp;
The idea is somewhat close to that of binary search, but here no part of an array should be ignored, that's why two statements of recursion are used. Try to trace this algorithm in debugger or
even on plain paper.
The only good is knowledge and the only evil is ignorance.
Are you even still paying attention to that thread?
Quick sort sorts through something called partitioning.
Explaining partitioning: You pick an element to partition around, called the pivot (whose final index is 'last' in your code). In your code, the algorithm is going to pick the middle one every time. Then you swap
the middle and first elements. Then you walk the array comparing the ith element with the left element, and swapping to arrange it such that everything between 'left' and 'last' is less than v[left],
and everything greater is to the right. This sorts the pivot into its final position, 'last', which explains the swap at the end.
( 9, 3, 11, 7, 4, ) pivot=0...
( 9, 3, 11, 7, 4, ) pivot=1...
( 9, 3, 7, 11, 4, ) pivot=2...
( 9, 3, 7, 4, 11, ) pivot=3...
( 4, 3, 7, 9, 11, ) pivot=3...
( 3, 4, 7, ) pivot=0...
( 3, 4, 7, ) pivot=0...
( 4, 7, ) pivot=1...
( 4, 7, ) pivot=1...
You could have done this yourself, you know.
Part of that was my fault for not interpreting the initial inputs correctly, but you need to take some responsibility too, because the reason I mis-interpreted them is that YOU did not provide
the information.
If you want help with stuff like this, you have to use your brain and make an effort to provide the necessary information, and not expect others to deduce -- which might be why no one else even
void qsort(int v[], int left, int right)
What should the initial values for "left" and "right" be here for array (11, 3, 9, 7, 4)? YOU have the book this code came from, it MUST explain that part. Anyway, I am sure if you go thru the
process I did in the last thread, but use 0 and 4 instead of 0 and 3, it will work out. Try it on paper.* The -1 indicated a premature end to one side of the sort.
Also, it is kind of amazing that almost 24 hours later it has not occurred to you to TEST the code and include some simple printf() statements to help you follow the course of execution and how
various values change (you could also do that with a debugger). This is a MANDATORY skill for programming, start today!!
* if it doesn't work out, post the attempt you made and I will go thru it again, promise. Oh but also you must post the information I mentioned. Perhaps maybe the exercise # too, a lot of K&R
stuff is online. I think except for the random pivot value that is actually a pretty nice in-place qsort.
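For anyone who wants to check the whole run on the original input {11, 3, 9, 7, 4}, here is a direct transliteration of the K&R routine (written in Python for brevity; the logic is line-for-line the same as the C version):

```python
def kr_qsort(v, left, right):
    # same scheme as the C version: pick the middle element,
    # move it to the front, partition, then recurse on both halves
    if left >= right:
        return
    mid = (left + right) // 2
    v[left], v[mid] = v[mid], v[left]
    last = left
    for i in range(left + 1, right + 1):
        if v[i] < v[left]:
            last += 1
            v[last], v[i] = v[i], v[last]
    v[left], v[last] = v[last], v[left]    # restore partition element
    kr_qsort(v, left, last - 1)
    kr_qsort(v, last + 1, right)

v = [11, 3, 9, 7, 4]
kr_qsort(v, 0, len(v) - 1)
print(v)  # → [3, 4, 7, 9, 11]
```

The first partition step matches the trace above: swapping v[0] with the middle element gives (9, 3, 11, 7, 4), and the pass over the array leaves 9 in its final position with smaller values to its left.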
Last edited by MK27; 05-12-2010 at 09:52 AM.
C programming resources:
GNU C Function and Macro Index -- glibc reference manual
The C Book -- nice online learner guide
Current ISO draft standard
CCAN -- new CPAN like open source library repository
3 (different) GNU debugger tutorials: #1 -- #2 -- #3
cpwiki -- our wiki on sourceforge
05-12-2010 #2
05-12-2010 #3
05-12-2010 #4 | {"url":"http://cboard.cprogramming.com/c-programming/126810-no-one-able-explain.html","timestamp":"2014-04-16T07:24:20Z","content_type":null,"content_length":"56521","record_id":"<urn:uuid:c74e73c4-65e0-4691-9746-f756eae45d70>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00609-ip-10-147-4-33.ec2.internal.warc.gz"} |
Steel Beam Sizing
Steel Beam Sizing Formula
The formula for steel beam sizing or steel beam design is section modulus (S) = moment (M)/allowable bending stress (Fb), or in short S=M/Fb, where Fb here is taken as 0.66 times the yield stress Fy. The tables for structural steel sizes such as steel i beam
sizes show the steel beam dimensions for a steel i beam, from which S can be selected to satisfy the design.
The moment is calculated using the formula M=wL^2/8 for uniformly loaded beam, where L is the unsupported length of the beam and w is the load applied. As this beam moment formula suggests, the
longer the span and heavier the load, the more the moment becomes.
If you want to minimize the depth of a beam, two of the possible solutions are reducing the unsupported length L or reduce the weight applied to the beam.
For a beam loaded with a point load, for instance a steel beam needed to support an air conditioning unit that weighs 2,000 pounds, the moment formula is M = PL/4.
To illustrate this process, let’s verify the beam used for the framing plan above. Please refer to concrete footing design for detail information on how the load was calculated. The load used to
design the footing is the same load used for the beam design.
The total load is 48.2 psf with tributary width 12'-7". Multiplying the two yields 606.5 plf. The clear span is 19 ft. Substituting these to the moment formula: M=wL^2/8 = 606.5*19^2/8 = 27,369 lb-ft
x 12/1000 (multiply by 12 to convert to inches / divide by 1000 to convert to kips) = 328.43. The allowable stress is .66*50 = 33 ksi.
The section modulus required is S = 328.43/33 = 9.95 in^3 (cubic inches). S for W12x26 is 33.4 which is much bigger than 9.95, therefore the selected beam is ok. A W12x14 with S = 14.9 would be ok
too but was not available at the time of construction. W12x26 was selected because it was the smallest beam available at the time of construction.
Back to steel beam sizing
From steel beam sizing to all-concrete-cement.com | {"url":"http://www.all-concrete-cement.com/steel-beam-sizing.html","timestamp":"2014-04-19T15:14:59Z","content_type":null,"content_length":"13368","record_id":"<urn:uuid:21799792-d29c-4feb-bb32-642a4e312339>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00615-ip-10-147-4-33.ec2.internal.warc.gz"} |
Georg Ruß' PhD Blog — R, clustering, regression, all on spatial data, hence it's:
Since I’ve been asked quite often recently to publish the HACC-spatial scripts for the respective algorithm I developed in my PhD thesis, this post will list them and give a few explanations.
For now, here’s the zip file with R scripts: hacc-spatial.zip.
From the thesis abstract:
The second task is concerned with management zone delineation. Based on a literature
review of existing approaches, a lack of exploratory algorithms for this task is concluded, in
both the precision agriculture and the computer science domains. Hence, a novel algorithm
(HACC-spatial) is developed, fulfilling the requirements posed in the literature. It is based
on hierarchical agglomerative clustering incorporating a spatial constraint. The spatial
contiguity of the management zones is the key parameter in this approach. Furthermore,
hierarchical clustering offers a simple and appealing way to explore the data sets under
study, which is one of the main goals of data mining.
The thesis itself can be found here: PhD thesis (32MB pdf), the algorithm is described on pdf page 124 (print page 114): hacc-spatial-algorithm.pdf.
Further explanations and shorter descriptions are to be found in two publications, available in fulltext: Exploratory Hierarchical Clustering for Management Zone Delineation in Precision Agriculture
and Machine Learning Methods for Spatial Clustering on Precision Agriculture Data.
Let me know if there are questions, comments or even successful results when applying the algorithm to your data sets.
There are also two youtube videos of the clustering (with an additional pre-clustering step, the “inital phase”): F440-REIP32-movie.avi and F611-REIP32-movie.avi. It’s probably the end of both videos
where it gets interesting. Compare the plots for the REIP32 variable of the F440 and F611 data sets (F440: PhD pdf page 185 (clustering on page 138) and F611: PhD pdf page 195).
Important points
• The algorithm was designed to work with spatial data sets: each data record/point in the data set represents a vector of values which also has a location in space (2D/3D).
• The data points should be spatially roughly uniformly distributed (probably with high density, although that doesn’t really matter). That is, it does not and cannot rely on density differences in
the geospatial data distribution.
• The input structure for the R scripts is a spatialPointsDataFrame with variables. The algorithm (the function) allows to select particular variable(s) for clustering. I.e. you may use multiple
variables for clustering.
• The algorithm is definitely not optimized for speed. It served my purposes well, but may take a while to run on your data.
• The contiguity factor cf is subject to experimentation.
Apart from that, there’s not much to comment (yet). Let me know about questions or issues and I may be able to fix them or list further requirements here.
mail: russ@dma-workshop.de
It just occurred to me that I should probably further develop my research profile and find an appropriate umbrella term that best covers my research interests. A quick suggestion including a
definition would be Environmental Data Mining to describe the task of finding interesting, novel and potentially useful knowledge (=data mining) in georeferenced (spatial) and temporal multi-layered
data sets (=environmental data). I haven’t done any research on this umbrella term yet (search engines provided but a few hits), but if I stay in research, this is probably where I’d try to be headed.
Computer science is (to me) an ancillary science that needs specific applications and builds/provides solutions to specific tasks based on actual data sets collected in practice. And R is the best
tool for this :-)
(this merits a new category at the top level)
Tomorrow’s going to be my second talk in German at MLU Halle. Here are the slides: russ2011mlu-slides.
And there’s also a video of the clustering here: http://www.youtube.com/watch?v=Xk7eT4-F2Fg In short, the video compares a spatial clustering on the precision agriculture data I have, using four
variables (P, pH, Mg, K) and low spatial contiguity (left) as well as high spatial contiguity (right). The clustering is hierarchical agglomerative with an initial tessellation of the field into 250
clusters which are subsequently merged. The clustering has been implemented in R (generating .png files of each plot) with subsequent video encoding with ImageMagick (convert) and Mplayer (mencoder).
Nice demo, I guess.
Looking back on the work I’ve done so far (finding a thesis topic, finding data, finding tools) I can definitely recommend the two books below. They’re R-related and they contain a lot of examples
which still help in implementing the ideas I have. The first is Modern Applied Statistics with S, and the second is Applied Spatial Data Analysis with R. You can also use the R mailing lists to ask your
questions, and the authors of the above books are typically present at those lists.
If you prefer a bookstore, look out for these on the shelves:
I’ve recently been active on the R-help mailing list because I had some issues
with the default implementation of neural networks (nnet). Seems as if the
mailing list solved my problem or at least hinted me towards a solution. The
nnet function seems somewhat strict about its arguments. | {"url":"http://blog.georgruss.de/?cat=29","timestamp":"2014-04-17T07:13:46Z","content_type":null,"content_length":"36587","record_id":"<urn:uuid:1297454f-d133-4f71-ba72-e5b4b2d11462>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00103-ip-10-147-4-33.ec2.internal.warc.gz"} |
Calculate Concentration - How to Calculate the Concentration of Solutions
Calculate Concentration
This is a collection of definitions and examples of different methods of calculating the concentration of a chemical solution. Molarity, molality, normality, dilutions, mole fraction, and mass percent
are among the concentration calculations included.
How to Calculate Concentration
This is an overview of the most common methods of calculating concentration, including sample calculations. Molarity, molality, normality, mole fraction, and mass percent are covered.
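For example, a molarity calculation of the kind covered in these articles looks like this (the figures are invented for illustration):

```python
mass = 20.0          # grams of NaOH dissolved (assumed figure)
molar_mass = 40.0    # g/mol for NaOH (Na 23 + O 16 + H 1)
volume = 0.500       # liters of final solution
molarity = (mass / molar_mass) / volume   # moles of solute per liter of solution
print(molarity)      # → 1.0 M
```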
Prepare a Solution
Here's a quick overview of how to prepare a solution when the final concentration is expressed as M or molarity.
Dilutions from Stock Solutions
If you're working in a chemistry lab, it's essential to know how to calculate a dilution. Here's a review of how to prepare a dilution from a stock solution.
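The underlying relation for a dilution is C1·V1 = C2·V2. A quick sketch with assumed numbers:

```python
c_stock = 6.0        # M, stock concentration (assumed)
c_final = 0.5        # M, target concentration
v_final = 0.250      # L, target volume
v_stock = c_final * v_final / c_stock     # volume of stock to measure out
print(round(v_stock * 1000, 1))           # → 20.8 mL, then dilute to 250 mL
```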
Mass Percent Composition
These are examples of mass percent composition calculations, with links to other worked chemistry problems and homework help.
Molality Example Problem
Here is an example of calculating concentration or molality of a solution.
Molarity Example #1
This is a worked example showing the steps necessary to create a stock solution of predetermined molarity.
Molarity Example #2
This is a worked example of how to determine the concentration of individual ions in an aqueous solution from the total concentration.
Molarity Example #3
This is a worked example showing the steps necessary to find the concentration of a solution when given the amount of solute.
Molarity Example #4
These are examples of concentration and molarity calculations, with links to other worked chemistry problems and homework help.
Molarity Example #5
Here is an example of calculating concentration or molarity of a solution.
Normality Calculation
The normality of a solution is the gram equivalent weight of a solute per liter of solution. Here's an example of how to calculate the normality of a solution.
Volume Percent Concentration
Learn what volume percent or volume-volume percent concentration means and how to calculate volume percent when preparing a solution. | {"url":"http://chemistry.about.com/od/calculateconcentration/","timestamp":"2014-04-16T05:06:21Z","content_type":null,"content_length":"40218","record_id":"<urn:uuid:2eae6b97-a77d-4529-b650-fbdc8ba62915>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00448-ip-10-147-4-33.ec2.internal.warc.gz"} |
An Analysis of the Dodwell Hypothesis
I examine the Dodwell hypothesis, that the earth underwent a catastrophic impact in 2345 BC that altered its axial tilt and then gradually recovered by about 1850. I identify problems with the
selection and handling of certain ancient and medieval data. With the elimination of questionable data, a discrepancy may remain between ancient measurements of the earth’s tilt and our modern
understanding of how the tilt has varied over time. This discrepancy, if real, does not demand the sort of catastrophe suggested by Dodwell, so there is doubt that this event occurred. If there were
some abrupt change in the earth’s tilt in the past, the available data are not sufficient to fix the date of that event with any precision.
Keywords: catastrophism, obliquity of the ecliptic
Nearly everyone is familiar with the earth’s axial tilt and knows that it is responsible for our seasons. A less well-known fact is that the direction and magnitude of the earth’s tilt slowly are
changing due to gravitational forces of the sun, moon, and planets. These changes are well understood, but the late Australian astronomer George F. Dodwell (1879–1963) determined that ancient
measurements of the earth’s tilt were at variance with that understanding. Fitting a curve to his data, Dodwell (Dodwell 1) concluded that the earth underwent a catastrophic change in its tilt in the
year 2345 BC, and that the tilt had only recently recovered to the relatively stable situation now governed by the conventional theory. I understand that Dodwell was a Seventh Day Adventist, so he
likely saw in this proposal a connection to biblical catastrophism. For instance, the 2345 BC date for the dramatic change in the earth’s axial tilt is very close to the Ussher chronology date of the
Flood (2348 BC). Many recent creationists today think in terms of huge upheaval at the time of the Flood, including an impact (or impacts) related to the beginning of the Flood. Dodwell obviously was
thinking in terms of an impact that altered the earth’s tilt. Some recent creationists today favor pushing the date of the Flood further back (the Septuagint chronology is nearly a millennium longer
than the Masoretic text), and so in their thinking a 2345 BC impact would coincide with a post-Flood catastrophe. In this paper I will examine Dodwell’s hypothesis, but first we must define a few
terms.
To better understand the terminology that I will use, I ought to start with the celestial sphere (fig. 1). Astronomers use the celestial sphere as a mental construct to describe the locations of
objects and concepts of astronomical interest as seen from the earth. We imagine the earth to be a sphere at the center of the much larger celestial sphere (radius >> the earth’s radius) on which
astronomical bodies and concepts are located. For instance, we can extend the earth’s rotation axis to the celestial sphere. The intersections of this axis and the celestial sphere are the north and
south celestial poles. As viewed from either of the earth’s poles, the corresponding celestial pole would be directly overhead (the zenith). As the earth spins each day, astronomical bodies appear to
revolve around the celestial poles. Since Polaris, or the North Star, is very close to the north celestial pole, it appears to stay relatively motionless as other stars, the sun, the moon, and the
planets appear to revolve around it. As we can draw on the earth an equator half way between the poles, we likewise can construct the celestial equator half way between the celestial poles. The
celestial equator will pass through the zenith at locations on the earth’s equator. Mathematically, the celestial equator is the great circle along which a plane through the earth’s center,
perpendicular to the rotation axis, intersects the celestial sphere.
The earth’s revolution around the sun defines a plane as well (see fig. 2). The intersection of that plane with the celestial sphere is the ecliptic. Due to the earth’s orbit, the sun appears to move
through the background stars on the celestial sphere along the ecliptic, taking one year to complete one circuit. The ecliptic is a circle, so perpendicular to the ecliptic is the axis around which
the earth revolves each year. Where this revolution axis intersects the celestial sphere is the ecliptic pole. By definition, the angle between the earth’s rotation axis and revolution axis is the
earth’s tilt. The angular separation of the north celestial pole and the ecliptic pole is the same angle, and the planes of the ecliptic and celestial equator have the same angular relationship.
Since the earth’s tilt is a measure of how obliquely the ecliptic is inclined to the celestial equator, astronomers since ancient times have called this tilt the obliquity of the ecliptic, a
convention that we shall follow here. We normally use ε to indicate the obliquity of the ecliptic.
The first measurements of the obliquity of the ecliptic are very ancient. For instance, Hipparchus, a second century BC Greek astronomer, determined that the obliquity of the ecliptic was about 23°50´.
Hipparchus also is credited with the discovery of the precession of the equinoxes, though this effect was not explained until Newton developed his laws of motion and gravitation. Precession is the gradual circular motion
of the axis of a spinning object due to external torques (see fig. 3). The gravity of the sun and moon pulls on the equatorial bulge of the earth, attempting to reduce the earth’s tilt. If force were
the only consideration, the pull of the sun and moon on the earth’s equatorial bulge would cause the obliquity of the ecliptic to go to zero degrees. However, the earth spins, so it possesses angular
momentum. When a force acts on a spinning body in this way, the force produces a torque. The torque causes the direction of the rotation axis slowly to spin, or precess. In this case, the earth’s
rotation axis precesses around the revolution axis, or we can say that the north celestial pole precesses around the ecliptic pole. It takes 25,900 years to complete one circuit. This causes the
equinoxes, the intersections of the celestial equator and ecliptic, to slide gradually along the ecliptic, hence the name, precession of the equinoxes. During the precessional cycle the north
celestial pole would move along a circle having angular radius equal to the obliquity of the ecliptic and centered on the ecliptic pole. Superimposed upon precession is nutation, a much smaller,
similar effect of the moon with an 18.61 year period. Nutation is caused by the moon’s orbit being tilted to the ecliptic by about 5°—if the moon orbited in the ecliptic plane, then there would be no
nutation. The magnitude of nutation is only about 9.2” of arc, far smaller than the nearly 23½° amplitude of precession. Both precession and nutation change the direction of the earth’s axis, but by
themselves they don’t appreciably change the obliquity of the ecliptic, particularly on a timescale of only a few thousand years.
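As a quick sanity check on the period quoted above, the drift rate of the equinoxes along the ecliptic works out to about 50 arc seconds per year. This is a back-of-the-envelope sketch using only the 25,900-year figure from the text:

```python
# The north celestial pole traces a circle around the ecliptic pole
# once every ~25,900 years (the period quoted in the text).
PERIOD_YEARS = 25_900
FULL_CIRCLE_ARCSEC = 360 * 3600           # 1,296,000 arc seconds

rate = FULL_CIRCLE_ARCSEC / PERIOD_YEARS  # arc seconds of precession per year
print(f"precession rate ≈ {rate:.1f} arcsec/yr")
```

This rate of roughly 50” per year is what allowed Hipparchus to detect precession at all by comparing star positions across a century or more.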
More complex interactions, particularly those involving the planets, will gradually change the obliquity of the ecliptic. If the moon were not present, the obliquity of the ecliptic would change over a very
wide range, resulting in tilts from nearly 0° to 90°. Instead, the stabilizing effect of the moon limits the change in the obliquity of the ecliptic to about 2°. Wild swings in the obliquity of the
ecliptic would have very devastating effects upon living organisms, so there is a design implication here. The current value of ε is 23.4°, and it has been decreasing for some time. In the secular
view, a near maximum of 24.2° was achieved about 8500 BC. The physics affecting changes in the obliquity of the ecliptic is well known, and the theoretical value of ε is known with great precision
far into the past and future. The value of the obliquity of the ecliptic is described by a polynomial function of time. For nearly a century the standard description of the obliquity of the ecliptic
as a function of time was that of Simon Newcomb (1906, p. 237), determined about 1895. Newcomb’s formula is a third degree polynomial,^1 but more recent treatments are fifth or even tenth degree.
Dodwell used the Newcomb formula, because that was all that was available when Dodwell did his work. However, the high precision of much higher degree expressions is necessary only over very large
time intervals. For the epochs of concern for the Dodwell hypothesis, there is no real difference between the Newcomb expression and others, so use of the Newcomb values is quite adequate.
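For concreteness, Newcomb's third-degree polynomial can be sketched in code. The coefficients below are the commonly quoted ones (ε = 23°27´8.26” − 46.845”T − 0.0059”T² + 0.00181”T³, with T in Julian centuries from 1900.0); they are my transcription of the standard formula, not numbers listed in the text:

```python
def newcomb_obliquity(year):
    """Mean obliquity of the ecliptic (degrees) from Newcomb's
    third-degree polynomial; T is in Julian centuries from 1900.0.
    Coefficients as commonly quoted from Newcomb (1906)."""
    T = (year - 1900.0) / 100.0
    arcsec = 8.26 - 46.845 * T - 0.0059 * T**2 + 0.00181 * T**3
    return 23 + 27 / 60 + arcsec / 3600

def dms(deg):
    """Format decimal degrees as degrees, arc minutes, arc seconds."""
    d = int(deg); rem = (deg - d) * 60
    m = int(rem); s = (rem - m) * 60
    return f"{d}°{m}′{s:.0f}″"

# The paper quotes 23°40′41″ for the epoch of Ptolemy (c. AD 139):
print(dms(newcomb_obliquity(139)))   # → 23°40′41″
```

Evaluated at AD 139 this reproduces the 23°40´41” Newcomb value cited later in the paper, which confirms the transcription.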
Dodwell saw a noticeable difference between the Newcomb formula and the values of obliquity of the ecliptic that he derived from historical measurements. Dodwell fitted a curve to his data, and in
the curve he saw two trends superimposed upon the Newcomb curve. Fig. 4 shows Dodwell’s data and his curve fitted to the data (taken directly from the Dodwell manuscript). First, Dodwell’s curve
primarily is a logarithmic sine curve that, going backward in time, increases without bound at the year 2345 BC. Dodwell thought that this represented a catastrophic event, perhaps an impact, at that
date that drastically altered the obliquity of the ecliptic. Second, he saw superimposed upon the logarithmic sine curve a harmonic sine curve of diminishing amplitude that vanished about AD 1850.
Dodwell thought that this was a curve of recovery from the catastrophic event. The possibility of such a catastrophic event obviously is of keen interest to recent creationists. This event, if real,
could be identified with the Flood or, as some recent creationists believe, a possible post-Flood event.
I will analyze how credible this alleged event is. To do this, I will divide the problem into several parts. First, I will examine how well founded the data are. Second, where discrepancies between
the data and the Newcomb formula exist, I will attempt to assess the likely errors in the data. Third, I will discuss whether the data with the appropriate error limits support either of the two
trends that Dodwell noted.
Examination of the Data
The easiest and most direct way to measure the obliquity of the ecliptic is through the use of a vertical gnomon. A gnomon is a device used in casting the sun’s shadow for measurement purposes. The
most common gnomon is on a sundial to cast a shadow on the scribed surface where the hour is read, but this gnomon normally is not mounted vertically. A vertical gnomon is a post of known height
mounted perpendicular to a flat, level surface upon which the shadow of the sun is cast. The height of the post divided by the length of the shadow is the tangent of the altitude of the sun (Fig. 5).
Altitude is the angle that an object makes with the horizon, which must be between 0° and 90°. Fig. 6 shows the situation of measuring the sun’s altitude at noon at the summer solstice and again at
noon six months later at the winter solstice. At the summer solstice the sun will make an angle ε above the celestial equator, and at the winter solstice the sun will make an angle ε below the
celestial equator, so that the difference of the altitude of the sun measured at noon on these two dates will be double the obliquity of the ecliptic. This apparently was the method used by
Hipparchus, because Ptolemy (1952, p. 26) reported Hipparchus’ result as “more than 47°40´ but less than 47°45´.” One might expect to find the obliquity of the ecliptic by dividing this result by
two, yielding 23°51.25´. However, there are three corrections to the observations that one must make. Those corrections are, in order of decreasing magnitude:
1. Semi-diameter of the sun
2. Refraction
3. Solar parallax.
I shall now discuss each of these corrections.
The semi-diameter correction is necessary, because the sun is not a point source. See Fig. 7. Let point P be the bottom of the gnomon and point G be the top of the gnomon. The ray coming from the top
of the sun will pass point G and fall at point A, while the ray from the bottom of the sun will fall at point B. Between points A and B there will be some sunlight, so only the penumbral shadow will
be present there, but the full (umbral) shadow will extend from the bottom of the gnomon to point A. To do proper comparison of the sun’s altitude at different times, we need to know the altitude of
the sun’s center, so it is important to know how to properly correct the observed shadow edge for the shadow that would be cast by rays coming from the center of the sun’s disk. The ray from the
sun’s center will fall at point C, and so the altitude of the sun’s center will be angle GCP. The angle GAP is the observed altitude of the sun as determined by the length of the actual shadow. Let μ
represent the half angular diameter of the sun. From geometry we see that the difference between the observed altitude and the altitude of the sun’s center is μ. Therefore we must correct the
observed altitude of the sun by subtracting the half angular diameter of the sun. Because the earth has an elliptical orbit, the sun’s angular diameter is not constant, but varies between 31.6´ and
32.7´. Thus, the solar semi-diameter correction, μ, can be between 15.8´ and 16.35´. Since the range in μ is only 0.55´ and the likely error in measuring the altitude is at least 1´, in most cases it
is acceptable to use the average, 16.08´. In this discussion I have assumed that a person observing the sun’s shadow would see it end at point A. However, the edge of the shadow will be a bit
indistinct—will a person judge the shadow to end at point A, or at some point past A toward point C? Newton (1973, p. 367) previously has pointed out this problem, and decided that this
ambiguity easily could account for 7–8 arc minutes of error.
Dodwell described an experiment that he conducted in Adelaide, Australia, where several people measured the shadow of a gnomon that he constructed and compared the results to the accurately computed
altitude of the sun’s center. He found that the average correction determined empirically was only 13.2´, a value that Dodwell apparently used in most of his data reductions. Dodwell neither
acknowledged nor commented as to why this correction was nearly 3´ less than expected. Much of this probably is due to the indefinite edge of the sun’s shadow mentioned above. Dodwell reported that
his results came from a total of 172 measurements made by 9 individuals, and he further reported the range of the highest (15.3´) and lowest (10.4´) measurements from the average, and he compared the
mean of those two to the overall mean. However, without more information it is not possible to compute the standard deviation or probable error. From this limited information the likely error of
measurement was at least 1´ and probably more. It might be profitable to repeat this experiment to properly ascertain the error of observation.
If one determines the obliquity of the ecliptic by the above described method in the temperate zone, the correction for the sun’s semi-diameter is made in the same sense for both measurements, so the
effect cancels out. Therefore the exact value of the correction is not important. On the other hand, the earth currently comes to perihelion less than two weeks after the winter solstice and to
aphelion less than two weeks after summer solstice, so if a variable correction is applied, the correction at winter solstice is greater by about a half minute of arc. Dodwell did not discuss whether
he considered this correction in his computations.
The correction for refraction is necessary, because the earth’s atmosphere bends, or refracts, light as it passes through. This phenomenon is well understood, and it is easy to compute using the
plane-parallel approximation, if the altitude is not too low. All the data considered by Dodwell met this criterion. Refraction causes light to bend downward, making the altitude appear greater than
it actually is, so we must subtract this correction to get the true altitude. Let ζ be the zenith distance, the angle that the sun makes with the zenith. Since the zenith is directly overhead, ζ is
the complement of the altitude. The correction is given by ρ = 58.2” tan(ζ) (Smart 1977, p. 26). The correction is much greater at the winter solstice, so that the corrections for summer and winter
can differ by more than an arc minute. This is the correction with the greatest effect for measurements made in temperate latitudes.
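To illustrate how the summer and winter refraction corrections can differ by more than an arc minute, here is the plane-parallel formula evaluated at approximate noon solstice altitudes for Alexandria. The latitude and rough obliquity used here are my illustrative assumptions, not numbers from the text:

```python
import math

def refraction_arcsec(altitude_deg):
    """Plane-parallel refraction: rho = 58.2" * tan(zeta), where the
    zenith distance zeta is the complement of the altitude (Smart 1977)."""
    zeta = math.radians(90.0 - altitude_deg)
    return 58.2 * math.tan(zeta)

phi, eps = 31.2, 23.7                            # Alexandria latitude, rough obliquity (assumed)
rho_summer = refraction_arcsec(90 - phi + eps)   # noon altitude ≈ 82.5°
rho_winter = refraction_arcsec(90 - phi - eps)   # noon altitude ≈ 35.1°
print(f"summer ≈ {rho_summer:.1f}\", winter ≈ {rho_winter:.1f}\"")
```

The winter correction exceeds the summer one by about 75”, comfortably more than an arc minute, so neglecting refraction biases a two-solstice determination of the obliquity.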
The correction for solar parallax is necessary, because the sun’s distance is not infinite, and so people observing the sun at different altitudes are not looking in parallel directions. For proper
comparison, we adjust altitude measurements to what they would be if the sun’s rays traveled along paths parallel to the line connecting the center of the sun to the center of the earth. Consider two
observers located at points A and B on the earth looking at the sun (fig. 8).^2 Point A is along the subsolar line and so requires no correction, but point B is as far off the subsolar line as
possible, requiring the maximum correction. Using the small angle approximation, the maximum angular displacement for these two observers is θ = R/d, where R is the earth’s radius and d is the
distance to the sun. This angle is about 8.8”. Point B corresponds to viewing the sun on the horizon. Since the altitude measurements considered here were taken at noon not in the arctic, the
correction for solar parallax always will be less than the maximum of 8.8”.
Consider an observer at point C located at a distance x above the subsolar line (fig. 9). Let δ be the angle that the line between point C and the earth’s center makes with the subsolar line.
solar parallax correction will be ψ. Now, x = R sin δ, and by the small angle approximation,
ψ = x/d = (R/d) sin δ = 8.8” sin δ.
Since all observations of interest here are made at noon on the solstices, δ is a simple function of ε and the latitude of the observations, so the solar parallax correction is easy to compute. This
correction will be less than the maximum of 8.8”, and so the correction is at least an order of magnitude less than the error of observation. Given that this correction is dwarfed by the other two,
one may question the necessity of applying it. The only possible gains in applying it are to be as thorough as possible and to avoid round-off errors that could propagate through. In checking the
work of Dodwell I made all three corrections, and in many cases I was able to accurately reproduce his results. A few I was not able to replicate exactly, but the differences between my computations
and those of Dodwell were less than the likely errors in the original measurements.
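The 8.8” maximum quoted above follows from the small-angle approximation with standard values for the earth's equatorial radius and the mean earth-sun distance. The numerical inputs are my assumptions; the text quotes only the resulting angle:

```python
import math

R_EARTH_KM = 6378.0      # equatorial radius of the earth (assumed standard value)
SUN_DIST_KM = 1.496e8    # mean earth-sun distance (assumed standard value)

theta_rad = R_EARTH_KM / SUN_DIST_KM              # theta = R/d (small angle)
theta_arcsec = math.degrees(theta_rad) * 3600.0
print(f"maximum solar parallax ≈ {theta_arcsec:.2f}\"")   # ≈ 8.79"

def solar_parallax_arcsec(delta_deg):
    """psi = 8.8" sin(delta), where delta is the angle off the subsolar line."""
    return theta_arcsec * math.sin(math.radians(delta_deg))
```

Since δ at noon in temperate latitudes is well under 90°, the applied correction is always a few arc seconds at most, an order of magnitude below the likely errors of observation, as noted above.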
Let us now consider some specific measurements that Dodwell discussed. Pytheas, a contemporary of Alexander the Great, was famous for an extensive voyage. He measured the altitude of the noon sun on
the summer solstice where he lived in Massalia, a Greek colony at the site of modern Marseilles, France. We assume that the date was about 325 BC, and we know the location of the city, but there is a
discrepancy in the reporting of his measurement. Dodwell (Dodwell 1) wrote that Strabo said that the height of the gnomon to the length of its shadow was 120:41⅘,^3 while Ptolemy said that the ratio
was 60: 20⅚ = 120: 41⅔. The corresponding values for the observed solar altitude are θ = 70°47´42” and θ = 70°51´7”. Note that these values differ by only 3´25”. The situation is diagrammed in Fig.
10. The altitude of the north celestial pole is equal to the latitude, φ. Since the celestial equator is at right angles to the north celestial pole, the altitude of the celestial equator is the
complement of the latitude, φ´. At the summer solstice the sun is at an angle ε above the celestial equator, so the altitude of the sun is
θ = ε + φ´ = ε + 90°−φ.
Solving for ε,
ε = θ + φ−90°.
Dodwell (Dodwell 7) took the latitude of Massalia to be 43°17´52”, the latitude of the old Marseilles observatory near the port. This appears to be very close to the latitude where Pytheas made his
measurement. We can apply the three corrections to get two values for the obliquity of the ecliptic, one each for the measurements reported by Strabo and Ptolemy. Dodwell (Dodwell table) reported
values of ε to be 23°53´46” and 23°54´53”, but I have not been able to reproduce these values, for I got 23°52´5” and 23°55´29”. There is something strange here, for this amounts to a single
observation made at one place and time, so the sun’s semi-diameter and solar parallax corrections are the same. The two slightly different altitudes reported result in a difference in the refraction
correction of far less than an arc second. Therefore from the above equation it is obvious that two computations of ε turn out to differ by 3´25”. My two values differ by this amount (with a one
second round-off error), but Dodwell’s values differ by only a third of that amount.
This probably is a good time to point out that, while we can compute the obliquity of the ecliptic to the nearest arc second, the error of observation likely is at least a minute of arc, so reporting
measurements of the obliquity of the ecliptic to the nearest arc second (as Dodwell did) is meaningless. For comparison and to avoid round-off error, it is good practice to compute ε to full
accuracy but then settle upon final values to the nearest arc minute at best. Following that procedure, Dodwell’s values round to 23°54´ and 23°55´ and mine round to 23°52´ and 23°55´. Furthermore,
following the conventional rule of rounding half values to the nearest even digit, either pair of values averages to the same 23°54´. If we recognize that the errors of observation may result in
an error of plus or minus three arc minutes, then all four of these values are within that range. That is, while I cannot exactly replicate Dodwell’s results here, his values are well within the
accuracy probably allowed.
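My attempt at this reduction can be sketched in code. I use Dodwell's empirical 13.2´ semi-diameter correction (rather than the theoretical ~16´ value), the plane-parallel refraction formula, and the solar parallax term; the 23.9° appearing in the parallax step is merely a rough obliquity for evaluating that tiny correction. All of these are my reconstructions of the procedure, not Dodwell's own computation:

```python
import math

def dms(deg):
    """Format decimal degrees as degrees, arc minutes, arc seconds."""
    d = int(deg); rem = (deg - d) * 60
    m = int(rem); s = (rem - m) * 60
    return f"{d}°{m}′{s:.0f}″"

def obliquity_from_summer_gnomon(height, shadow, latitude_deg,
                                 semi_diam_arcmin=13.2):
    """Obliquity from a noon summer-solstice gnomon observation.
    semi_diam_arcmin defaults to Dodwell's empirical Adelaide value."""
    alt = math.degrees(math.atan2(height, shadow))        # observed altitude
    rho = 58.2 * math.tan(math.radians(90 - alt)) / 3600  # refraction (deg)
    # 23.9° is a rough obliquity, good enough for this arcsecond-level term:
    psi = 8.8 * math.sin(math.radians(latitude_deg - 23.9)) / 3600  # parallax (deg)
    alt_center = alt - semi_diam_arcmin / 60 - rho + psi  # corrected altitude
    return alt_center + latitude_deg - 90.0               # eps = theta + phi - 90

phi = 43 + 17/60 + 52/3600      # Dodwell's latitude for Massalia
print(dms(obliquity_from_summer_gnomon(120, 41.8, phi)))    # Strabo's 120 : 41 4/5
print(dms(obliquity_from_summer_gnomon(120, 125/3, phi)))   # Ptolemy's 120 : 41 2/3
```

With these assumptions the two reported ratios reproduce my values of 23°52´5” and 23°55´29”, which differ by the full 3´25” between the two reported altitudes, as they must.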
Dodwell applied similar methodology to Ptolemy’s aforementioned statement, based upon observing the altitude of the sun at the two solstices, that the arc between the tropics (twice the obliquity of the ecliptic) was “always more than
47°40´ but less than 47°45´”.^4 By knowing the latitude of Alexandria, Egypt where Ptolemy did his work, Dodwell was able to determine what Ptolemy’s measured altitudes were. Note that Ptolemy did
not report these altitudes, but that Dodwell inferred them from the result. Let α be the observed altitude of the sun at the winter solstice and β be the observed altitude of the sun at the summer
solstice. Let μ be the correction for the sun’s semi-diameter, ρ be the correction for refraction, and ψ be the correction for solar parallax. If θ[1] is the corrected altitude of the sun’s center on
the winter solstice and θ[2] is the corrected altitude of the sun’s center on the summer solstice, then those values are determined by
θ[1] = α[1]−μ[1]−ρ[1] + ψ[1]
θ[2] = α[2]−μ[2]−ρ[2] + ψ[2],
where the subscripts 1 and 2 refer to the corrections made at the winter and summer solstices, respectively. Note that the corrections for the sun’s semi-diameter and refraction are subtracted from the observed altitude,
but that the correction for solar parallax is added. From Fig. 6 you can see that
θ[1] = φ´−ε = 90º−φ−ε
θ[2] = φ´+ε = 90º−φ+ε.
Combining these four equations, we find
ε = ½ [(β−α)−(μ[2]−μ[1])−(ρ[2]−ρ[1]) + (ψ[2]−ψ[1])].
In this expression (β − α) is the observed difference in the altitude of the sun measured at noon during the summer and at the winter solstice, the amount fixed by Ptolemy to be between 47º40´ and
47º45´. This observational uncertainty of ±2.5´ in (β − α) would appear to dominate over the errors of the other terms in the expression. When Dodwell applied these corrections, he determined ε to be
23º52´4”, a value that I replicated within two arc seconds. Rounding to the nearest minute of arc, the value of ε is 23º52´, but with a likely range of 23º50´ − 23º54´. With full consideration of
error in the other terms and rounding, one could argue that the range ought to be 23º49´ − 23º55´.
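This reduction can be sketched numerically using the midpoint of Ptolemy's range. The latitude of Alexandria, the rough obliquity used inside the corrections, and the solstice semi-diameters are my assumed values; the text does not list the exact inputs Dodwell used:

```python
import math

ARCSEC = 1 / 3600

def refraction(alt_deg):          # plane-parallel formula, in degrees
    return 58.2 * math.tan(math.radians(90 - alt_deg)) * ARCSEC

def parallax(delta_deg):          # psi = 8.8" sin(delta), in degrees
    return 8.8 * math.sin(math.radians(delta_deg)) * ARCSEC

phi  = 31.2          # latitude of Alexandria (assumed)
eps0 = 23.7          # rough obliquity, used only to evaluate the corrections
beta_minus_alpha = 47 + 42.5 / 60        # midpoint of 47°40'-47°45'

mu_summer, mu_winter = 15.8 / 60, 16.35 / 60    # solstice semi-diameters (deg, assumed)
alt_summer, alt_winter = 90 - phi + eps0, 90 - phi - eps0

# eps = 1/2 [(beta - alpha) - (mu2 - mu1) - (rho2 - rho1) + (psi2 - psi1)]
eps = 0.5 * (beta_minus_alpha
             - (mu_summer - mu_winter)
             - (refraction(alt_summer) - refraction(alt_winter))
             + (parallax(phi - eps0) - parallax(phi + eps0)))
print(f"eps ≈ {eps:.4f}°")       # ≈ 23.8683°, i.e. about 23°52′06″
```

This lands within a couple of arc seconds of Dodwell's 23º52´4”, with the residual difference attributable to the exact latitude and semi-diameter values chosen.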
While I agree with Dodwell’s computation of the obliquity of the ecliptic based upon this ancient measurement, Dodwell assigned this measurement to the wrong epoch, at the time of Eratosthenes, more
than 350 years before Ptolemy. This is based upon a misunderstanding of The Almagest. Dodwell wrote:
Ptolemy tells us that the double obliquity angle observed by Eratosthenes and Hipparchus was less than 47º45´ (maximum value) and greater than 47º40´ (minimum value). (Dodwell 5)
Here is the relevant passage from The Almagest:
… we found the arc from the northernmost to the southernmost limit, which is the arc between the tropic points, to be always more than 47º40´ but less than 47º45´. And with this there results
nearly the same ratio as that of Eratosthenes and as that which Hipparchus used. For the arc between the tropics turns out to be very nearly 11 out of the meridian’s 83 parts. (Ptolemy 1952, p.
Ptolemy clearly stated that “we found” this value, apparently referring to himself and his colleagues in Alexandria. He then goes on to note that this value of twice the obliquity of the ecliptic
agrees with the earlier measurements of Eratosthenes and Hipparchus.
Newcomb’s value for the obliquity of the ecliptic at the epoch of Ptolemy is 23º40´41”. This is only two seconds off from the value of 23º40´39” from Laskar (1986, p. 59), a tenth degree polynomial
expression, showing that at the epochs of concern it doesn’t matter which standard formula of the obliquity of the ecliptic that we use. The measurement of Ptolemy is about ten arc minutes greater
than that expected from Newcomb, and the Newcomb value lies well outside of the range that Ptolemy reported.
Dodwell (Dodwell 5) computed a measurement of the obliquity of the ecliptic supposedly using data from Ptolemy. For this Dodwell relied upon the work of a 17th century Flemish astronomer, Godefroy
Wendelin,^5 but since Dodwell referenced neither Wendelin’s statements nor where in Ptolemy’s work the data supposedly came from, this is difficult to verify. It appears that Wendelin noted that Ptolemy
had observed the moon just 2⅛° from the zenith when the moon was at the summer solstice at its maximum distance north of the ecliptic. There is something garbled here, because the sentence as
constructed indicates that Ptolemy recorded “numerous observations” of this, but this isn’t possible, since this circumstance happens, at best, once every 19 years. Dodwell converted 2⅛° to 2º7´30”,
corrected for refraction and lunar parallax, and, knowing the ecliptic latitude of the moon at that point and the latitude of Alexandria, determined that the obliquity of the ecliptic was 23º48´24”.
Dodwell also computed that this (rare) event must have happened in AD 126. However, in his tabulation of all data used in his study, Ptolemy’s single point is listed as 8” less and in the year AD
139. This discrepancy is insignificantly small, but unexplained. And it is outside of the range for the obliquity of the ecliptic previously determined from a more direct measurement of the obliquity
of the ecliptic derived from Ptolemy’s work.
This datum is fraught with problems. It is a very indirect method, relying upon data not collected for the purpose of determining the obliquity of the ecliptic. It is not well documented, making it
impossible to verify, and it is not consistently reported in Dodwell’s report. Furthermore, the error involved may be larger than most. The zenith distance of the moon was reported as 2⅛°. What does
this mean? In the modern manner of reporting data, it would seem that the error of measurement is ±1⁄16° = 3´45”. Whatever the error, it would propagate through to the final result, so the final
value of ε could be between 23º45´ and 23º52´, rounding off to the nearest minute of arc. The range of this datum overlaps the range of the earlier determined Ptolemaic obliquity of the ecliptic.
Given the problems with this one point, and given that a more reliable determination of ε already exists that is consistent with this datum under reasonable error analysis, it is best to
omit this datum from further consideration.
Dodwell again relied upon Wendelin to determine the value of the obliquity ostensibly at the time of Hipparchus, a very important second century BC Greek astronomer credited with the discovery of the
precession of the equinoxes. Dodwell quoted Wendelin,
… from his own observations stated the distance between the tropics was in proportion to the whole circle as 11 is to 83, exactly the same as Eratosthenes, and found the maximum obliquity
23º51´20”. (Dodwell 5)
The source of this information obviously is from The Almagest (quoted above), where Ptolemy stated that his determination of twice the obliquity of the ecliptic was the same as that of Hipparchus and
Eratosthenes. In fact, Ptolemy’s statement appears to attribute the 11 to 83 ratio to Eratosthenes, not Hipparchus as Wendelin seemed to think. Nor is the method of the determination mentioned,
though Dodwell assumed that it was done with a vertical gnomon. Dodwell applied corrections assuming that this was the method used and at the location of Rhodes where Hipparchus lived, though use of
the correct location of Eratosthenes at Alexandria is unlikely to change the result much. With his correction Dodwell computed the obliquity of the ecliptic to be 23º52´16”, about a minute of arc
greater than determined by Wendelin. Wendelin almost certainly didn’t correct for refraction, which is on the order of the difference. Rounding to the nearest minute, we get 23º52´, the same result
discussed above from Ptolemy, but this is no surprise since Ptolemy stated that his value agreed with that of Eratosthenes and Hipparchus.
What is the meaning of Ptolemy’s statement that the arc between the tropics was “very nearly 11 out of the meridian’s 83 parts”? Does this mean that the first number was 11 plus or minus a small
amount or that the number was a little less than 11? Or does it mean that the ratio was 11 to the number 83 more or less? The latter interpretation is the most conservative, and it allows us to
estimate some error. Interpreting this to mean that the larger number in the ratio is closer to 83 than it is to 82 or 84, I find a plus or minus error of 8´ in the 23º52´ measurement of the obliquity of the
ecliptic. This error perhaps is too great, but applying this error gives the minimum value of the obliquity of the ecliptic of 23º44´, a minute of arc greater than the Newcomb value of 23º43´13” at
the epoch of Eratosthenes. Of course, with one of the alternate interpretations mentioned above, the error is greater, and the result then is consistent with Newcomb.
Dodwell computed four measurements of the obliquity of the ecliptic from Eratosthenes’s data, making several assumptions and conjectures about what Eratosthenes did at both Alexandria and Syene
(modern day Aswan, Egypt). For instance, Dodwell seemed to think that the legendary well at Syene (the well with no shadow on its bottom at noon on the summer solstice, which supposedly inspired
Eratosthenes to measure the size of the earth) was exactly on the Tropic of Cancer at the time of Eratosthenes. However, this is not necessarily true, and there are several reasons to doubt this. First, the story
may be apocryphal. Second, one must assume that the walls of the well were vertical on all sides. Third, the semi-diameter correction produces a “gray” region in latitude where one might see no
shadow, but Dodwell assumed that this location was exactly on the edge of this region. Dodwell’s four computations round to 23º52´, and none of the four differ from this round number by more than
13”. Since this agrees with the aforementioned measurement of the obliquity of the ecliptic from the 11:83 ratio, there is no reason to treat these as additional data.
Dodwell again relied upon Wendelin to determine the obliquity of the ecliptic at the time of Thales, a sixth century BC Greek philosopher from Miletus (on the western coast of modern day Turkey).
Dodwell quoted Wendelin as writing that Thales
… defined the interval between the two tropics as 8 parts out of 60 of the whole circle. From this we find the interval 48º, as we divide the circle into 360º, so that the maximum obliquity of
the sun was 24 whole degrees. (Dodwell 5)
Dodwell took this measurement of the obliquity of the ecliptic to be exactly 24º, assumed that it came from vertical gnomon observations, and corrected for the location of Miletus to yield a final
result of 24º0´56” that easily rounds to 24º1´. But was the ratio exactly 8:60? Not likely. Again, taking a conservative approach and treating the measures as we would today, there is a plus or minus
error of 12´. That is, this measure of the obliquity of the ecliptic could be as low as 23º49´ and as high as 24º13´. The Newcomb value for the obliquity of the ecliptic at the epoch of Thales is
23º45´50”, three minutes less than the minimum value considered here.
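One way to reproduce the quoted ±12´ is to treat the 60 in the 8:60 ratio as a rounded figure, exactly as was done above for Eratosthenes’s 83. Varying the denominator between 59.5 and 60.5 (an assumption on my part about how the error was derived) gives:

```python
# Interval between the tropics = (8/den) * 360 deg; the obliquity is half of that.
eps_high = 8 / 59.5 * 180.0                             # ~24.202 deg
eps_low = 8 / 60.5 * 180.0                              # ~23.802 deg
half_range_arcmin = (eps_high - eps_low) / 2.0 * 60.0   # ~12.0 arcmin
```

Applied to the corrected value of 24º1´, this spans 23º49´ to 24º13´, matching the range stated above.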
This idea that the obliquity of the ecliptic stood in a ratio of 1:15 to the full circle was prevalent in many ancient cultures. This is a nice round ratio, but unfortunately Dodwell often treated it as a precise
statement, erroneously concluding that the value was 24º0´0”. For instance, Dodwell presented a measurement of the obliquity of the ecliptic from India (Dodwell 4) contemporary to Thales and
expressed similarly to the one attributed to Thales. He referenced Brennand (1896) in saying that the ancient Indians thought that the obliquity of the ecliptic at that time was 24º0´. Assuming the
location of observation and the use of a vertical gnomon, Dodwell corrected this to 24º0´44”. Dodwell assumed a very precise measurement of 24º0´, but Brennand did not claim this precision. The two
pages Dodwell referenced (Brennand 1896, pp. 80, 236) say that the obliquity of the ecliptic was “24º.” And elsewhere Brennand (1896, p. 47) said that the obliquity of the ecliptic was “nearly 24º.”
Brennand never stated that ε was 24°0´; Dodwell claimed far more precision here than is warranted, so this datum is deleted from further discussion. Dodwell presented an Indian determination of
the obliquity of the ecliptic from an even earlier epoch, but it was based upon what appears to be a cosmological model. Dodwell computed from the specifics of the model precisely what the obliquity
of the ecliptic would be, made corrections assuming the latitude of observation, and found 24º11´4” for the obliquity of the ecliptic. However, there are many questions here, such as whether the
description of the cosmology was intended to accurately convey what the Indians of the time thought that the obliquity of the ecliptic was. Given the uncertainties, it is best to view this
measurement with caution.
Dodwell included a chapter on ancient Chinese measurements of the obliquity of the ecliptic (Dodwell 3), but these are impossible to evaluate, because he offered none of the original data. And by his
own account, the data were transmitted several times, passing from an early 18th century French missionary in China to a French astronomer at that time, and later to the famous Pierre-Simon Laplace.
As we saw with Wendelin’s handling of quotes of Ptolemy, such transmission can alter meanings. With these reservations, I am skeptical of the ancient Chinese measurements of the obliquity tabulated
by Dodwell, and so I will not consider them further.
Dodwell tabulated many medieval measurements of the obliquity of the ecliptic. In the medieval period the difference between Dodwell’s curve and Newcomb’s curve is smaller than during ancient times.
Dodwell acknowledged that most of the medieval measurements of the obliquity did not include discussions of which corrections, if any, were made. He assumed that many of the observers made the correction for
the sun’s semi-diameter, but that they used the much too high Ptolemaic solar semidiameter, so Dodwell re-computed the obliquity of the ecliptic by first removing the incorrect semi-diameter and then
adding the correct one. What was the reason for this? Dodwell found that many of the medieval measurements of the obliquity of the ecliptic agreed with the Newcomb formula, but not with his curve. He
even re-computed some measurements on the assumption that some of the gnomons used may have had a conical top, requiring an additional correction. Why? In his own words at the conclusion of his
chapter 6, Dodwell wrote,
If we admit that some of the Arab observations were corrected for Ptolemaic parallax, and some were not, and also that, probably in the earliest part of the period, a gnomon with a conical top
may sometimes have been used, then the observed mean value of the Obliquity would agree more closely with the new Curve than with Newcomb’s Formula. (Dodwell 6)
That is, Dodwell altered some of the medieval data to better fit his thesis. Which points did Dodwell not correct for the incorrect Ptolemaic solar semi-diameter? The ones that fit his thesis without
this correction. At the end of his sixth chapter Dodwell plotted raw and corrected measurements of the obliquity of the ecliptic as a function of time, along with curves representing his thesis (with
and without the oscillation) and Newcomb’s formula. The corrected data scatter around Dodwell’s curves, but the raw data match the Newcomb formula pretty well. One could easily argue that the
medieval measurements do not support the Dodwell hypothesis; they support it only after manipulation. This is begging the question. Given this, and the fact that
the supposed discrepancies are so small during this period, it is best to eliminate the medieval data from discussion.
Dodwell included some more recent measurements of the obliquity of the ecliptic. For instance, at the end of his seventh chapter there is a table containing 42 measurements from 1660 to 1868, along
with the discrepancies from the Newcomb curve. The largest discrepancy is −16”, and the discrepancies sum to −1”. The standard deviation is 5.5”. These modern values are of no help in discriminating
between the Newcomb curve and the Dodwell hypothesis.
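The summary figures for this table (largest discrepancy, sum, and standard deviation of the residuals) are ordinary residual statistics. A sketch of how one would compute them, with the caveat that the 42 actual residuals are in Dodwell’s table (not reproduced here), and that the convention (population vs. sample standard deviation) behind the 5.5” figure is not stated:

```python
import math

def residual_stats(residuals_arcsec):
    """Largest (signed) residual by magnitude, sum, and population
    standard deviation of a list of residuals in arcseconds."""
    n = len(residuals_arcsec)
    mean = sum(residuals_arcsec) / n
    std = math.sqrt(sum((r - mean) ** 2 for r in residuals_arcsec) / n)
    largest = max(residuals_arcsec, key=abs)
    return largest, sum(residuals_arcsec), std
```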
Probably the most important datum in support of the Dodwell hypothesis is the alignment of the Temple of Amun Re in the Karnak Temple Complex in Egypt. Its importance stems from its antiquity, with
Dodwell’s adopted date of construction of 2045 BC, when the difference between the curves of Newcomb and Dodwell is much greater than at later epochs. Sir Norman Lockyer (1894) was one of the first
to suggest that ancient Egyptian temples had alignments with the rising and setting of various astronomical bodies. Drawing from Lockyer, Dodwell discussed alleged alignments of the solar temples at
Heliopolis and Abu Simbel. The former would have had alignment with the setting sun on two specific dates, and the latter with the direction of the rising sun on two other dates. None of these dates
are the solstices or equinoxes. In 1891 Lockyer took note that the alignment at Karnak was close to the azimuth of the setting sun on the summer solstice. Supposing that this was the purpose of the
alignment, Lockyer asked that the site be surveyed and even checked empirically on the summer solstice. When this eventually was done, it proved not to be viable, even when corrected for Newcomb’s
obliquity of the ecliptic. Of course, if the Newcomb curve is in error, as Dodwell argued, then the alignment may have occurred at the time of construction. Conversely, because of the antiquity of
this structure, this alleged alignment became an important datum in establishing the nature of the Dodwell curve. This is demonstrated by the fact that the obliquity of the ecliptic derived from this
assumption lies precisely on the Dodwell curve, as well as a later (1570 BC) point also from Karnak. If these two points are removed, any number of very different curves could be fitted to the
remaining Dodwell data.
Dodwell made his case for various solar alignments by quoting sources on ancient Egyptian rituals and construction. One must be careful in evaluating these, because while some appear to be
translations of inscriptions, many are conjectures of the authors. The translations of the inscriptions quoted refer to the king looking to the stars while laying the foundation of a temple, but no
solar alignment is mentioned. Apparently, no such inscriptions exist at Karnak, because these translations come from elsewhere. But read what Dodwell concluded about Karnak:
From what has now been said about the orientation ceremonies, so carefully carried out by the Egyptian temple-builders, we have good reason for believing that the Temple of Amen Ra at Karnak, the
most important solar temple in Egypt, was truly oriented to the setting sun at the summer solstice in the year of its foundation, about 2045 BC. (Dodwell 8)
The quotes about the ceremonies that Dodwell offered preceding his statement here said nothing about solar alignment, so this is conjecture about Karnak. A bit later Dodwell quoted from a translation
of an inscription about the worship ceremony at Heliopolis, though that narrative contains no mention of sunlight flooding down a passage at a particular moment. Dodwell followed the quote with this:
This inscription relates to a ceremony which took place at Heliopolis, but it is obviously the typical service of the Egyptian solar temple; a similar procedure would be followed at the Karnak
temple, and the Egyptians at Thebes doubtless took advantage of this impressive spectacle in the ritual for the Temple of Amen Ra. (Dodwell 8)
Dodwell has embellished what we actually know of the temple ceremony at Heliopolis, and then transferred it to Karnak. In short, the only evidence that the alignment was to view sunset on the summer
solstice is that the azimuth of the passage is approximately correct for doing so, but it is conjecture to say that this is of necessity the case.
Egyptologists have been very unreceptive to most of the alleged astronomical alignments of ancient Egyptian temples. They likely would be convinced if there were inscriptions that actually showed
this to be the case, but apparently no such inscriptions exist. A recent survey of the orientation of ancient temples in Upper Egypt and Lower Nubia (Shaltout and Belmonte, 2005, p. 273) is most
interesting. This survey listed the azimuths of axes of symmetry in nearly every temple in the region, including all of those at Karnak. There are more than 100 entries. They also listed the
declinations of astronomical bodies that would be visible at rising or setting along the axes of symmetry. There is a strong cluster of these at declination = −24º, which was the position of the sun
at the winter solstice at the time. Furthermore, there is a preponderance of axes oriented toward the southeast (azimuth 115º−120º, depending upon latitude), indicating some interest in aligning with
sunrise in ancient Egypt, but no evidence of interest in sunset at any solstice. The authors noted that “curiously enough, the other solstice, the summer one at 24º is basically absent from our
data.” Indeed, the only one that I saw in the table that was close to this was the 25.4º azimuth of the Amun Re. If this truly was to align with the setting sun on the summer solstice as Dodwell (and
Lockyer before him) concluded, then it makes this temple unique, at least in Upper Egypt and Lower Nubia. Furthermore, if this axis aligned with the sun when the obliquity of the ecliptic was much larger than that given by the Newcomb curve, then one must explain why all the other alignments with the rising sun at the winter solstice agree with Newcomb but would not have aligned if Dodwell is right. The preponderance of the data argues against the alignment of Karnak being solstitial.
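The connection between a temple axis and a rising or setting body is standard spherical astronomy: for an object on the horizon (altitude zero, ignoring refraction and horizon dip), sin δ = cos φ cos A, where φ is the latitude and A the azimuth measured from north. A sketch checking the southeast cluster; the latitude and azimuth below are representative values I chose, not entries from the survey’s table:

```python
import math

def horizon_declination(azimuth_deg, latitude_deg):
    """Declination of a body rising or setting at the given azimuth
    (measured from north), for altitude 0 with no refraction correction."""
    sin_dec = math.cos(math.radians(latitude_deg)) * math.cos(math.radians(azimuth_deg))
    return math.degrees(math.asin(sin_dec))

# A southeast-facing axis at a Karnak-like latitude (~25.7 deg N):
dec = horizon_declination(117.0, 25.7)   # roughly -24 deg: winter-solstice sunrise
```

An azimuth in the 115º–120º band at these latitudes lands near δ = −24º, which is why the survey reads the southeast cluster as winter-solstice sunrise alignments.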
If Dodwell is wrong, then what is the significance of the azimuth of the axis at Amun Re? The authors of the temple study have an excellent suggestion. They also tabulated the angle that the axes of
symmetry made to the direction of the flow of the Nile River at each location. Most axes, including the one in question here, are aligned at right angles to the river. This suggests that once a site
for a temple was selected, the axis was laid out so that one viewed the axis of symmetry as one approached the temple from a boat on the river. This makes sense, because most sites probably had boat
landings at their entrances, and so this arrangement would have provided grand entrances for nearly everyone who visited the sites.
Dodwell discussed Stonehenge in his chapter 9 (Dodwell 9). Because the architects of Stonehenge left no records, we don’t know its purpose. There are a number of possible astronomical alignments, so
theories abound. Of particular interest is the Avenue, which aligns well with sunrise on the summer solstice. If this and other alignments truly have astronomical significance, then it is possible to
determine the obliquity of the ecliptic at the time of construction. Indeed, determining the date of construction was one of the purposes in measuring the azimuths of such alignments. In the 19th and
well into the 20th century many people thought that Stonehenge was constructed by the Druids, which would date it to the first millennium BC. This was the thinking during Dodwell’s lifetime, for he wrote,
We see, from the results, that the astronomical date, found by using either Stockwell’s or Newcomb’s formula, is greatly out of agreement with the modern archaeological investigation previously
described. When the formula is corrected, however, by means of the New Curve of Obliquity, in the same way as for the oriented Solar Temple of Karnak, the astronomical date agrees with archaeology
and history. (Dodwell 9)
John N. Stockwell was an astronomer who had written on the time dependence of the obliquity of the ecliptic prior to Simon Newcomb. Newcomb’s formula improved upon Stockwell’s treatment. Using
archaeological conclusions then available, Dodwell rightly noted that the date of construction of Stonehenge did not conform to the obliquity of the ecliptic from Newcomb’s formula but agreed well
with his determination of the obliquity of the ecliptic at the epoch of Stonehenge’s construction. However, in the past half century much archaeological work has been done at Stonehenge. According to
his preface, Dodwell did much of his work in the 1930s. Since that time archaeologists have revised the time line of Stonehenge, placing its construction over several stages, but all much earlier
than the Druids in England.^6 The date today for the Avenue is at the same epoch derived from the Newcomb value of the obliquity of the ecliptic. Curiously, while Dodwell had derived measurements of the obliquity of the ecliptic at various dates from many historical observations and several other archaeological sites, he did not do so for Stonehenge, for he neither tabulated nor plotted a datum from
Stonehenge. This may be because of uncertainty in precisely dating the construction of Stonehenge in his time. Rather, he used what was then thought about the age of Stonehenge as a sort of test for
his hypothesis. That is, what was then believed about Stonehenge contradicted the Newcomb theory but matched Dodwell’s prediction. However, since then the understanding of Stonehenge has radically
changed, and the modern dating of Stonehenge matches Newcomb’s curve, but not Dodwell’s. In this sense Dodwell’s theory fails the very test that he proposed.
Dodwell presented a lengthy table of historical measurements of the obliquity of the ecliptic that he had determined (Dodwell table), and he plotted those as a function of time in his Fig. 6, along
with a plot of Newcomb’s formula. There is an obvious departure between the data and Newcomb’s curve. Dodwell also plotted in the figure a log sine curve that he fitted to the data. The agreement
between the data and his curve is good, though this is not surprising, since he fitted the curve to the data. The best fit is at the earliest epochs and at the latest epochs. The fit at the latest
epochs is not surprising, because those data are the most numerous and the most accurate and thus have little scatter. The fit at the earliest epochs isn’t surprising either, because the only two
points there show the most radical departure from what is expected from Newcomb, and they represent nearly one quarter of the entire time interval concerned. By mathematical necessity the curve
fitted to the data must pass through or very closely to those two points, so the excellent fit there is not surprising. The greatest scatter in the data is in between, in the first millennium BC and
very late second millennium BC. Thus the scatter here probably gives us an idea of the likely errors of the ancient measurements of the obliquity of the ecliptic. Judging by the curve, those errors
appear to be a few arc minutes.
In his Fig. 4, Dodwell plotted his data fitted to his log sine curve. Dodwell saw in that plot an oscillation of diminishing amplitude with a period of 1,198 years. He judged that the oscillation had
gone through 3½ periods before subsiding about 1850. Dodwell included the plot of this oscillation in his Fig. 4. From that plot the maximum amplitude of the alleged oscillation is less than 3´. By
200 BC, where the amplitude is less than 2´, there is much scatter in the data, with the residuals of some data points exceeding the amplitude. Dodwell did not compute statistics of this harmonic term,
but because of the large scatter in the data compared to the alleged amplitude of the oscillation, it appears to be a poor fit to the data. If error bars of a few arc minutes were displayed on the
figure, the need for a harmonic term would vanish.
If we were to apply those same errors to the first two data points (those of Karnak, 2045 BC and 1570 BC), any number of curves could pass through the data. If we were to stick with a log sine curve,
the point of verticality at 2345 BC would shift. Thus, with inclusion of likely errors the precision of Dodwell’s date of 2345 BC of the catastrophic event is not supportable. Note that this does not
preclude that such an event took place, but only that we could not establish the date of the event with such certainty.
But what if the axis of symmetry at Karnak was not aligned with sunset on the summer solstice, which is by far the majority (unanimous?) opinion of Egyptologists? Those two points drive Dodwell’s
thesis to the extent that their elimination would seriously undermine Dodwell’s work. A linear expression, though with a different slope than Newcomb’s, probably could fit the remaining data well.
Non-linear expressions would fit too, though nothing nearly as drastic as Dodwell’s catastrophic change in the obliquity would be required. Therefore, the elimination of the earliest, very
questionable data strongly argues against the Dodwell hypothesis.
Does this mean that the Newcomb formula must be correct? Not necessarily. In the previous section I criticized Dodwell’s unwarranted precision of many of the measurements of the obliquity of the
ecliptic. I also argued against inclusion of questionable data. What data would I exclude and what would I include? Modern measurements (last four centuries or more) differ so little from Newcomb’s
curve as to be irrelevant in this discussion. The medieval measurements listed by Dodwell deviate from Newcomb’s curve more than modern ones, but many of them would be consistent with Newcomb if
appropriate errors were considered. Furthermore, as I previously described, Dodwell massaged some of the medieval data by removing alleged incorrect solar semi-diameter corrections of data that
agreed better with the Newcomb curve than with his curve. These considerations warrant removal of the medieval measurements. I reject the two data from Karnak, because Egyptologists reject the
conclusion upon which they are based. I also reject the ancient Chinese and Indian measurements, because I lack information to further check them. This leaves the measurements gleaned from the work
of Thales, Pytheas, Eratosthenes, and Ptolemy, and I would change the epoch of at least one datum from that of Dodwell. Furthermore, I would eliminate some of the duplication that Dodwell had, such
as the four measurements taken from Eratosthenes. This amounted to over-mining of the information. Besides, all four are well within the likely errors of the observation. This pares the data from Dodwell’s total number down to four points. These four measurements of the obliquity of the ecliptic are in Table 1, along with the epochs and errors that I assessed in the previous section. Supporters of
Dodwell may cry foul over my paring of the data, but a plot of these points proves most interesting.
Name Epoch ε Error
Thales 558 BC 24º 01´ 12´
Pytheas 326 BC 23º 54´ 3´
Eratosthenes 230 BC 23º 52´ 8´
Ptolemy 139 AD 23º 52´ 3´
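For comparison, Newcomb’s formula can be evaluated at these epochs. The expression below is the standard Newcomb (1906) series with T in Julian centuries from 1900.0; BC dates are entered as negative years (the one-year offset of astronomical year numbering is negligible at this precision):

```python
def newcomb_obliquity_deg(year):
    """Mean obliquity of the ecliptic (decimal degrees) from Newcomb:
    23º27'8.26" - 46.845" T - 0.0059" T^2 + 0.00181" T^3,
    with T in Julian centuries from 1900.0."""
    T = (year - 1900.0) / 100.0
    arcsec = (23 * 3600 + 27 * 60 + 8.26) - 46.845 * T - 0.0059 * T ** 2 + 0.00181 * T ** 3
    return arcsec / 3600.0

# Epoch of Thales: reproduces the 23º45'50" quoted earlier to within a few arcseconds.
eps_thales = newcomb_obliquity_deg(-558)

# Departure of Ptolemy's measured 23º52' (139 AD) from Newcomb, in arcminutes:
departure_arcmin = (23 + 52 / 60.0 - newcomb_obliquity_deg(139)) * 60.0   # ~11'
```

All four tabulated values sit on the high-obliquity side of this curve, which is the departure discussed below.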
The data are plotted in Fig. 11, along with a plot of Newcomb’s formula for the obliquity of the ecliptic. With the points I have included error bars reflecting my assessed errors. Note that the
direction of increasing obliquity of the ecliptic is downward, following the convention of Dodwell’s plots. Not only do all data points, sparse as they are, fall below the Newcomb curve, so do all
the error bars of the points. If these data are to be believed, they strongly suggest a noticeable departure from the Newcomb formula approximately 2,000 years ago. If true, then there is a major
factor affecting the obliquity of the ecliptic (at least in the past, if not effective today) that the Newcomb and other similar definitive treatments fail to account for. However, I may have
underestimated the errors. If Newton’s analysis is correct, then the Newcomb curve falls within the error bars of these data and there is no discrepancy. If Newton overestimated the errors, then
there is a modest discrepancy between Newcomb and the observations, but this does not necessarily lead us to the Dodwell hypothesis for a single catastrophic event or for a decaying harmonic term.
What is the likely response of astronomers to these ancient data? It’s not as if these data haven’t been available. Likely they have been ignored because they don’t fit what we know today, with the
rationale that the errors involved were so great. However, the errors would have to be on the order of ten arc minutes or more. This is a sixth of a degree. While this is small, the eye can discern
angles on the order of a minute or two of arc. Tycho Brahe, the famous 16th century Danish astronomer, was able to make measurements of this accuracy with instruments that were only marginally
improved over those available to the ancient Greeks (Tycho died a few years before the invention of the telescope). We don’t know how ancient Greek instruments compared to those of Tycho, but, in my opinion, the errors of the ancient astronomers are not great enough to explain this discrepancy.
I have examined the methodology that Dodwell employed in developing his hypothesis that the earth was subjected to a catastrophic change in its tilt in 2345 BC, an alleged catastrophe that the earth
has recovered from as recently as 1850. In a few instances I have had difficulty in replicating Dodwell’s results. In other cases Dodwell was a bit overzealous in extracting data and uncritically
relied upon secondary sources. With no discussion of errors in the observations, it appears that he treated his data with near infinite precision. Dodwell’s hypothesis is highly dependent upon early
measurements of the obliquity of the ecliptic that are not supported by Egyptologists. From these considerations, I consider the Dodwell hypothesis untenable. Despite these defects, a skeptical
analysis that I have conducted here has left a few data points that are difficult to square with the conventional understanding of the obliquity of the ecliptic over time. While I cannot rule out
that in the past the earth’s tilt was altered by some yet unknown mechanism, neither can I confirm it. The most reliable ancient data do not demand the sort of catastrophic change in the earth’s tilt
with a gradual recovery that Dodwell maintained, so there is great doubt that this alleged event happened. If such an event actually happened, we cannot fix the date of that event with any certainty.
Creationists are discouraged from embracing the Dodwell hypothesis.
Brennand, W. 1896. Hindu astronomy. London, United Kingdom: Charles Straker and Sons.
Dodwell 1. Retrieved from http://www.setterfield.org/Dodwell_Manuscript_1.html.
Dodwell 3. Retrieved from http://www.setterfield.org/Dodwell_Manuscript_3.html.
Dodwell 4. Retrieved from http://www.setterfield.org/Dodwell_Manuscript_4.html.
Dodwell 5. Retrieved from http://www.setterfield.org/Dodwell_Manuscript_5.html.
Dodwell 6. Retrieved from http://www.setterfield.org/Dodwell_Manuscript_6.html.
Dodwell 7. Retrieved from http://www.setterfield.org/Dodwell_Manuscript_7.html.
Dodwell 8. Retrieved from http://www.setterfield.org/Dodwell_Manuscript_8.html.
Dodwell 9. Retrieved from http://www.setterfield.org/Dodwell_Manuscript_9.html.
Dodwell table. Retrieved from http://www.setterfield.org/Dodwell%20main%20chart.html.
Hamilton, H. C. and W. Falconer. 1854–1857. Strabo’s geography in three volumes. London: Henry G. Bohn. Retrieved from http://books.google.ca/books?id=K_1EAQAAIAAJ, http://books.google.ca/books?id=KcdfAAAAMAAJ, and http://books.google.ca/books?id=0cZfAAAAMAAJ
Laskar, J. 1986. Secular terms of classical planetary theories using the results of general relativity. Astronomy and Astrophysics 157:59–70.
Lockyer, J. N. 1894. The dawn of astronomy: A study of the temple-worship and mythology of the ancient Egyptians. London, United Kingdom: Cassell.
Newcomb, S. 1906. A compendium of spherical astronomy with its applications to the determination and reduction of positions of the fixed stars. New York, New York: MacMillan.
Newton, R. R. 1973. The authenticity of Ptolemy’s parallax Data – Part I. Quarterly Journal of the Royal Astronomical Society 14:367–388.
Ptolemy. 1952. The almagest (Great Books of the Western World). Trans. R. C. Taliaferro. Chicago, Illinois: Encyclopedia Britannica.
Shaltout, M. and J. A. Belmonte. 2005. On the orientation of ancient Egyptian temples: (1) Upper Egypt and Lower Nubia. Journal for the History of Astronomy 36, no. 3: 273–298.
Smart, W. M. 1977. Textbook on spherical astronomy, 6th ed. Cambridge, United Kingdom: Cambridge University. | {"url":"http://www.answersingenesis.org/articles/arj/v6/n1/analysis-of-dodwell-hypothesis","timestamp":"2014-04-21T12:47:41Z","content_type":null,"content_length":"83672","record_id":"<urn:uuid:a6064df3-180b-4614-a444-8ad94252f9ff>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00177-ip-10-147-4-33.ec2.internal.warc.gz"} |
Montclair, NJ Trigonometry Tutor
Find a Montclair, NJ Trigonometry Tutor
...This is truly the best job I have ever had! I specialize in tutoring math and English for success in school and on the SAT, GED, GRE, GMAT, and the NYS Regents exams. Whether we are working on
high school geometry proofs or GRE vocabulary, one of my goals for each session is to keep the student challenged, but not overwhelmed.
34 Subjects: including trigonometry, English, reading, writing
I am a highly motivated, passionate math teacher from Montclair State University. I previously taught all grades from 9th to 12th and am extremely comfortable teaching all types of math to learners at all levels. I am a motivated teacher who can teach to your level of understanding.
6 Subjects: including trigonometry, calculus, algebra 1, geometry
...In preparation for Linear Algebra, systems of linear equations are analyzed using matrices and determinants, and three dimensional coordinate geometry is introduced, including vectors. In
preparation for the further study of Algebra, the representation of complex numbers using polar coordinate i...
6 Subjects: including trigonometry, calculus, algebra 2, geometry
...I feel very confident tutoring this subject. I have been tutoring students grades K-5 for the last 5 years, in addition to middle and high school students. I have worked with younger children
outside of tutoring as well, as a babysitter.
19 Subjects: including trigonometry, calculus, geometry, biology
...I can also assist with writing and editing your resume and helping you prepare for job interviews. I taught art and art history at the undergraduate and graduate level for eleven years and
served on college entrance committees during that time. With years of experience in number of academic fields, I would be able to help you with regards to College Counseling.
39 Subjects: including trigonometry, English, reading, ESL/ESOL
Related Montclair, NJ Tutors
Montclair, NJ Accounting Tutors
Montclair, NJ ACT Tutors
Montclair, NJ Algebra Tutors
Montclair, NJ Algebra 2 Tutors
Montclair, NJ Calculus Tutors
Montclair, NJ Geometry Tutors
Montclair, NJ Math Tutors
Montclair, NJ Prealgebra Tutors
Montclair, NJ Precalculus Tutors
Montclair, NJ SAT Tutors
Montclair, NJ SAT Math Tutors
Montclair, NJ Science Tutors
Montclair, NJ Statistics Tutors
Montclair, NJ Trigonometry Tutors
Nearby Cities With trigonometry Tutor
Belleville, NJ trigonometry Tutors
Bloomfield, NJ trigonometry Tutors
Cedar Grove, NJ trigonometry Tutors
Clifton, NJ trigonometry Tutors
East Orange trigonometry Tutors
Garfield, NJ trigonometry Tutors
Glen Ridge trigonometry Tutors
Kearny, NJ trigonometry Tutors
Livingston, NJ trigonometry Tutors
Nutley trigonometry Tutors
Orange, NJ trigonometry Tutors
Passaic trigonometry Tutors
South Kearny, NJ trigonometry Tutors
Verona, NJ trigonometry Tutors
West Orange trigonometry Tutors | {"url":"http://www.purplemath.com/montclair_nj_trigonometry_tutors.php","timestamp":"2014-04-19T09:46:20Z","content_type":null,"content_length":"24370","record_id":"<urn:uuid:29d0fe69-17c2-45f2-ae77-cf9bbf355275>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00154-ip-10-147-4-33.ec2.internal.warc.gz"} |
How to Calculate the Volume of a Sphere
A sphere is a perfectly round geometrical object that is three dimensional, with every point on its surface equidistant from its center. Many commonly-used objects such as balls or globes are
spheres. If you want to calculate the volume of a sphere, you just have to find its radius and plug it into a simple formula, V = ⁴⁄₃πr³.
2. Find the radius. If you're given the radius, then you can move on to the next step. If you're given the diameter, then you can just divide it by two to get the radius. Once you know what it is, write it down. Let's say the radius we're working with is 1 inch.
□ If you're only given the surface area of the sphere, then you can find the radius by taking the square root of the surface area divided by 4π. In that case, r = √(surface area/(4π))
3. Cube the radius. To cube the radius, raise it to the third power, multiplying it by itself twice. For example, 1 inch³ is really just 1 inch x 1 inch x 1 inch. The result of 1³ is really just 1, since 1 multiplied by itself any number of times will be 1. You'll reintroduce the unit of measurement, inches, when you state your final answer. After you've done this, you can plug the cubed radius into the original equation for calculating the volume of a sphere, V = ⁴⁄₃πr³. Therefore, V = ⁴⁄₃π x 1³.
□ If the radius was 2 inches, for example, then to cube it, you would find 2^3, which is 2 x 2 x 2, or 8.
5. Multiply the equation by π. This is the last step to finding the volume of a sphere. You can leave π as it is, stating the final answer as V = ⁴⁄₃π. Or, you can plug π into your calculator and
multiply its value by 4/3. The value of π (approximately 3.14159) x 4/3 = 4.1887, which can be rounded to 4.19. Don't forget to state your units of measurement and to state the result in cubic
units. The volume of a sphere with a radius of 1 inch is 4.19 in³.
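The whole procedure collapses into a couple of lines of code; a quick sketch (the function names are mine, not from the article):

```python
import math

def sphere_volume(radius):
    """Volume of a sphere: V = (4/3) * pi * r**3."""
    return (4.0 / 3.0) * math.pi * radius ** 3

# The article's example: radius 1 inch -> about 4.19 cubic inches.
print(round(sphere_volume(1.0), 2))   # 4.19

def radius_from_surface_area(area):
    """Radius recovered from the surface area: r = sqrt(A / (4*pi))."""
    return math.sqrt(area / (4.0 * math.pi))
```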
• Make sure your measurements are all in the same unit. If they aren't, you will need to convert them.
• Don't forget to use cubed units (e.g. 31 ft³).
• If you only need part of a sphere, like half or a quarter, find the full volume first, then multiply by the fraction you want to find. For instance, to find the volume of half a sphere with a
volume 8, you would multiply 8 by one half or divide 8 by 2 to get 4.
• Note that the "*" symbol is used as a multiplication sign to avoid confusion with the variable "x".
• Calculator (reason: to calculate problems that would be annoying to do without it)
• Pencil and paper (not needed if you have an advanced calculator)
Edited by Nicole Willson, Versageek, Andy Zhang, Zack and 75 others | {"url":"http://m.wikihow.com/Calculate-the-Volume-of-a-Sphere","timestamp":"2014-04-20T03:12:15Z","content_type":null,"content_length":"55308","record_id":"<urn:uuid:cc6f0f91-39f7-4632-96e1-3f32a610f5a2>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00005-ip-10-147-4-33.ec2.internal.warc.gz"} |
Voltage (also called electrical potential difference, electric tension, or electric pressure, denoted ∆V) is measured in units of electric potential: volts, or joules per coulomb. It is the electric potential difference between two points, or equivalently the difference in electric potential energy of a unit charge moved between those points.
| {"url":"http://www.askives.com/what-is-electrical-voltage-formula.html","timestamp":"2014-04-19T10:28:05Z","content_type":null,"content_length":"37185","record_id":"<urn:uuid:48687d1e-26cc-4366-8d4d-9d7fabbe4696>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00437-ip-10-147-4-33.ec2.internal.warc.gz"} |
| {"url":"http://openstudy.com/users/jkristia/answered","timestamp":"2014-04-18T00:25:51Z","content_type":null,"content_length":"120531","record_id":"<urn:uuid:aa098fea-57f2-4054-8d39-6af512fcb7a2>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00102-ip-10-147-4-33.ec2.internal.warc.gz"} |
Genus of curves in projective space
October 12th 2008, 05:20 PM #1
Jul 2008
Genus of curves in projective space
Hello everyone,
I am working on understanding the genus of curves in projective spaces, and I have convinced myself that an elliptic curve
$y^2=x(x-\lambda _1)(x-\lambda _2)$
is topologically equivalent to a torus (genus 1) by making cuts between the zeros (0, $\lambda _1, \lambda _2$, and $\infty$) and gluing up regions so that the function is single-valued.
However, now I need to know about a curve C of degree d in the complex projective space $\mathbb{C}P$. I know the answer:
$g=\binom {d-1}{2}=\frac{1}{2}(d-1)(d-2).$
So it seems obvious what is happening is for d+1 zeros (degree d + infinity) there are d-1 possible connections between two adjacent points, and we want to identify them in pairs to make our
manifold. But I'm not clear on how we know that this coefficient gives us the genus (ie number of holes). Could someone elaborate on that, or correct me if I am wrong about something?
Thanks very much.
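As a quick sanity check on the arithmetic in the formula (the function name is mine):

```python
from math import comb

def plane_curve_genus(d):
    """Genus of a smooth plane curve of degree d: C(d-1, 2) = (d-1)(d-2)/2."""
    return comb(d - 1, 2)

# Lines and conics have genus 0; a smooth cubic (elliptic curve) is a torus, genus 1.
print([plane_curve_genus(d) for d in range(1, 6)])  # [0, 0, 1, 3, 6]
```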
| {"url":"http://mathhelpforum.com/advanced-algebra/53330-genus-curves-projective-space.html","timestamp":"2014-04-19T11:11:00Z","content_type":null,"content_length":"30940","record_id":"<urn:uuid:8b9bc6c6-4fd0-4160-84dc-97d08cba8541>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00327-ip-10-147-4-33.ec2.internal.warc.gz"} |
Linear Fractional Transformations
a.k.a. Möbius Transformations, are a type of function. I’ll talk about them as functions from the complex plane to itself. Such functions are given by a formula
where $a,b,c,d$ are complex values. If $c$ and $d$ are both 0, this isn’t much of a function, so we’ll assume at least one isn’t 0.
I’d like to talk about what these functions do, how to have some hope of picturing them as transformations $\mathbb{C}\to\mathbb{C}$. To do this, let’s consider some easy cases first.
If $c=0$ (and so by assumption $d \neq 0$), then we may write the function $\frac{a}{d}z+\frac{b}{d}$, or simply $a'z+b'$ for some complex values $a',b'$. This is now a linear (some might say affine) transformation of the complex plane. Think about it as the composite $z\mapsto a'z\mapsto a'z+b'$, where the first map multiplies by $a'$, and the second adds $b'$. Multiplying by a complex value $a'$
is the same as scaling by the real value $|a'|$ (the “norm” of $a'$, distance from $a'$ to the origin) and then rotating by the “argument” of $a'$. If you think about $a'$ as a point $(r,\theta)$ in
polar coordinates, then the argument of $a'$ is $\theta$ (or so), and so multiplication by $a'$ is multiplication by the real value $r$ (which is just a stretching (or shrinking) of the complex plane
away from (toward) the origin if $r>1$ (resp. $0\leq r<1$)) and then rotation by the angle $\theta$. The second transformation in the composite, “add $b'$“, just shifts the entire plane (as a “rigid
transformation”) in the direction of $b'$.
So the case when $c=0$ is just a linear transformation, which aren’t too difficult to picture. Another important case is $1/z$, so the coefficients are $a=0,b=1,c=1,d=0$. To talk about what this
does, let’s first talk about “inversion” with respect to a fixed circle.
Let $C$ be a circle with radius $r$, in the plane, and $z$ any point in the plane. Let $O$ denote the center of the circle and $d$ the distance from $O$ to $z$. The inversion of $z$, with respect to
$C$, is the point on the line through $O$ and $z$ (in the direction of $z$ from $O$) whose distance from $O$ is $d'=r^2/d$. This means that points near the center of $C$ are sent far away, and vice
versa. Points on $C$ are unchanged. Technically I guess we should say that this function isn’t defined at $O$, but people like to say it is taken to “the point at infinity” and, conversely, that
inversion takes $\infty$ to $O$. These things can be made precise.
You might notice that doing inversion twice in a row gets you right back where you started. It also turns out that If $C'$ is another circle in the plane, not passing through $O$, then the inversion
of all of its points is another circle. If $C'$ passes through $O$, then the inversion works out to be a line. Since doing inversion twice is the identity, inversion takes lines to circles through
$O$. If you’re thinking about the comments about $\infty$ above, this makes sense because every line “goes to $\infty$“, and so the inversion of a line will go through the inversion of $\infty$,
which I said should be $O$.
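A small numeric sketch of inversion in a circle (the helper name is mine); it checks the defining relation d·d' = r² and that inverting twice is the identity:

```python
def invert(point, center, r):
    """Invert a point in the circle of radius r about center, so that d * d' = r**2."""
    px, py = point
    ox, oy = center
    dx, dy = px - ox, py - oy
    d2 = dx * dx + dy * dy           # squared distance d**2 (point must not be the center)
    scale = (r * r) / d2             # d'/d = r**2 / d**2
    return (ox + scale * dx, oy + scale * dy)

p = (3.0, 4.0)                       # distance 5 from the origin
q = invert(p, (0.0, 0.0), 2.0)       # lands at distance 2**2 / 5 = 0.8, i.e. near (0.48, 0.64)
back = invert(q, (0.0, 0.0), 2.0)    # inverting twice returns the original point
assert abs(back[0] - p[0]) < 1e-9 and abs(back[1] - p[1]) < 1e-9
```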
All of this talk about inversion was to describe the function $1/z$. This function is the composite of inversion with respect to the unit circle centered at the origin followed by a reflection across
the horizontal axis (real line). Don’t believe me? The equation $d'=r^2/d$ defining the relationship between distances when doing the inversion can be re-written as $dd'=r^2$. If we’re doing
inversion with respect to a unit circle, then $dd'=1$. This means that when we multiply $z$ with its inversion with respect to the unit circle, call it $z'$, the result will be a point with norm 1
(i.e., a point on the unit circle). Next up, multiplying $z$ by $z'$ produces a point whose angle from the positive real axis (which I called the argument before, the $\theta$ from polar coordinates)
is the sum of the angles for $z$ and $z'$. Since we did the reflection across the horizontal axis, the argument for $z'$ is precisely the negative of the argument for $z$, meaning their sum (the
argument of their product) is 0. So $zz'$ is a point on the unit circle making an angle of 0 with the positive real line, i.e., $zz'=1$. That makes $z'=1/z$, as promised.
Let’s get back to the general setup, with the function
and let’s assume $c \neq 0$ (since we already handled the case $c=0$, it’s just a linear transformation). For some notational convenience, let me let $\alpha=-(ad-bc)/c^2$. Consider the following composite:
$\begin{array}{rcl} z & \xrightarrow{w\mapsto w+\frac{d}{c}} & z+\dfrac{d}{c} \\ {} & \xrightarrow{w\mapsto \frac{1}{w}} & \dfrac{c}{cz+d} \\ {} & \xrightarrow{w\mapsto \alpha w+\frac{a}{c}} & \dfrac{\alpha c}{cz+d}+\dfrac{a}{c} \end{array}$
If you check all of these steps, and then play around simplifying the final expression, then you obtain the original formula above. So we can think of any linear fractional transformation as a
composite of some linear functions and an inversion, and we know how to picture all of those steps.
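The decomposition is easy to check numerically using Python's complex arithmetic; a sketch (function names mine), comparing the direct formula with the shift/inversion/affine composite for c not equal to 0:

```python
def mobius(a, b, c, d):
    """The linear fractional transformation z -> (a*z + b) / (c*z + d)."""
    return lambda z: (a * z + b) / (c * z + d)

def mobius_as_composite(a, b, c, d):
    """The same map built in three steps (assumes c != 0)."""
    alpha = -(a * d - b * c) / c ** 2
    def f(z):
        w = z + d / c              # shift by d/c
        w = 1 / w                  # the 1/w step (inversion in the unit circle plus reflection)
        return alpha * w + a / c   # scale/rotate by alpha, then shift by a/c
    return f

a, b, c, d = 2 + 1j, -1, 1j, 3     # arbitrary sample coefficients with c != 0
direct, composite = mobius(a, b, c, d), mobius_as_composite(a, b, c, d)
for z in (0.5 + 0.5j, -2j, 4):
    assert abs(direct(z) - composite(z)) < 1e-12
```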
That’s maybe enough for today. It’s certainly enough for me for today. Before I go, I’ll leave you with a video that might be helpful, and is pretty either way.
| {"url":"http://sumidiot.wordpress.com/2009/11/06/linear-fractional-transformations/","timestamp":"2014-04-21T12:33:46Z","content_type":null,"content_length":"61175","record_id":"<urn:uuid:711c4779-1286-441d-944f-a9168ca48dd9>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00127-ip-10-147-4-33.ec2.internal.warc.gz"} |
EVOLINO - EVolution of recurrent systems
with Optimal LINear Output - Supervised Recurrent Neural Networks -
Recurrent SVMs - Recurrent support vector machines
5. J. Schmidhuber, D. Wierstra, M. Gagliolo, F. Gomez. Training Recurrent Networks by Evolino. Neural Computation, 19(3): 757-779, 2007. PDF.
4. H. Mayer, F. Gomez, D. Wierstra, I. Nagy, A. Knoll, and J. Schmidhuber (2006). A System for Robotic Heart Surgery that Learns to Tie Knots Using Recurrent Neural Networks. Proc. IROS-06, Beijing.
3. J. Schmidhuber, D. Wierstra, F. J. Gomez. Evolino: Hybrid Neuroevolution / Optimal Linear Search for Sequence Learning. Proceedings of the 19th International Joint Conference on Artificial
Intelligence (IJCAI), Edinburgh, p. 853-858, 2005. PDF.
2. D. Wierstra, F. Gomez, J. Schmidhuber. Modeling systems with internal state using Evolino. In Proc. of the 2005 conference on genetic and evolutionary computation (GECCO), Washington, D. C., pp.
1795-1802, ACM Press, New York, NY, USA, 2005. PDF. Got a GECCO best paper award.
1. J. Schmidhuber, M. Gagliolo, D. Wierstra, F. Gomez. Evolino for Recurrent Support Vector Machines. TR IDSIA-19-05, v2, 15 Dec 2005. PDF. Short version at ESANN 2006.
Basic principle: Evolve an RNN population; to obtain some RNN's fitness DO: Feed the training sequences into the RNN. This yields sequences of hidden unit activations. Compute an optimal linear
mapping from hidden to target trajectories. The fitness of the recurrent hidden units is the RNN performance on a validation set, given this mapping.
If the goal is to minimize mean squared error, then use the pseudoinverse for computing the optimal mapping.
If the goal is to maximize the margin, then use quadratic programming. This yields Recurrent Support Vector Machines.
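The optimal-linear-mapping step can be sketched in a few lines of NumPy; `H` and `Y` below are stand-ins for the hidden-unit activation trajectories and the target trajectories (my names, not from the papers), and the data is synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((200, 10))   # hidden activations: time steps x hidden units
W_true = rng.standard_normal((10, 3))
Y = H @ W_true                       # target trajectories: time steps x outputs

# Least-squares-optimal readout weights: W = pinv(H) @ Y
W = np.linalg.pinv(H) @ Y
print(np.allclose(W, W_true))        # True (targets are exactly linear in H here)
```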
A recent journal publication on an EVOLINO application to Robotics:
H. Mayer, F. Gomez, D. Wierstra, I. Nagy, A. Knoll, and J. Schmidhuber. A System for Robotic Heart Surgery that Learns to Tie Knots Using Recurrent Neural Networks. Advanced Robotics, 22/13-14, p.
1521-1537, 2008, in press. | {"url":"http://www.idsia.ch/~juergen/evolino.html","timestamp":"2014-04-21T07:16:56Z","content_type":null,"content_length":"7668","record_id":"<urn:uuid:44c325b9-b14e-45b1-80d7-3bc93b9f7c4d>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00206-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions - calculating seperate means in a large matrix
Date: Dec 6, 2012 12:10 PM
Author: Jacob Moses
Subject: calculating seperate means in a large matrix
I am working on a project that, without going into unnecessary detail, requires me to calculate the mean of each row in a 64,500x17 matrix. The eventual aim is to populate a 17x1 vector for use in kmeans. My instructor does not want me to use a for statement for efficiency reasons. I am thinking of splitting the matrix into 17 64,500x1 vectors, but I am not sure of how to do this, or if there's another, more efficient way. Any help would be greatly appreciated. | {"url":"http://mathforum.org/kb/plaintext.jspa?messageID=7933239","timestamp":"2014-04-16T13:48:07Z","content_type":null,"content_length":"1457","record_id":"<urn:uuid:96caf3a5-aa1c-4906-b2e3-0c40f8b04f5e>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00236-ip-10-147-4-33.ec2.internal.warc.gz"} |
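No answer is attached here, but for what it's worth: in MATLAB, `mean(A,2)` returns one mean per row (a 64,500x1 column) and `mean(A,1)` one mean per column, with no `for` loop needed. The same idea in NumPy, as a small stand-in for the MATLAB context:

```python
import numpy as np

A = np.arange(12, dtype=float).reshape(4, 3)  # small stand-in for the 64,500 x 17 matrix

row_means = A.mean(axis=1)   # one mean per row    (MATLAB: mean(A, 2))
col_means = A.mean(axis=0)   # one mean per column (MATLAB: mean(A, 1))

assert row_means.tolist() == [1.0, 4.0, 7.0, 10.0]
assert col_means.tolist() == [4.5, 5.5, 6.5]
```

Note that a 17x1 result corresponds to per-column means of a 64,500x17 matrix, so it is worth double-checking which dimension is intended before feeding the result to kmeans.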
EDN Access--12.05.96 A minus B=A+NOT(B)+1 (Part 1)
Design Features
December 5, 1996
A minus B=A+NOT(B)+1(Part 1)
Clive "Max" Maxfield, Intergraph Computer Systems
Upon examining an ALU, it is surprising to find a 1's complementor glaring balefully at you,
instead of the 2's complementor you expected to be there.
Following the joys of the "Microprocessor 101'' courses you endured in college, most of you are reasonably confident that you understand the way in which computers add and subtract binary numbers.
Thereafter, you bask in the rosy glow that assembly-level instructions, such as add without carry (ADD), add with carry (ADDC), subtract without carry/borrow (SUB), and subtract with carry/borrow
(SUBC), don't scare you anymore.
The years race by as if they are in a desperate hurry to get somewhere, until you arrive at that grim and fateful day when someone asks you to replicate an adder/subtractor function, either as a
chunk of logic for a design or possibly using a hardware description language to create a model for simulation and synthesis. The first port of call may well be to blow the cobwebs off your
microprocessor course notes, only to find that there's more to this than you seem to recall. The description of the ADD instruction, which doesn't require a carry-in, looks simple enough and says
something such as: a[7:0] plus b[7:0]=a[7:0]+b[7:0] (assuming an 8-bit datapath). It's when you turn your attention to the SUB, whose definition may be along the lines of a[7:0] minus b[7:0]=a[7:0]
+NOT(b[7:0])+1, that you realize that perhaps you should have paid just a tad more attention in Professor Gonzo Dribbler's Monday morning lectures—possibly to the extent of actually staying awake.
Light begins to dawn after a few moments, as you recall that you can perform these calculations using 2's complement arithmetic. So, it comes as something of a shock when you peer at the block
diagram of the ALU looking for a 2's complementor but instead find a humble 1's complementor glaring back at you as though it has every right to be there. "Holy socks, Batman, how can this be?"
Obviously, you need to go back to first principles in order to figure this out…
Complement techniques
There are two complement forms associated with every number system: the "radix complement" and the "diminished radix complement," where the term "radix" refers to the base of the number system.
Under the decimal (base-10) system, the radix complement is also known as the "10's complement," and the diminished radix complement is known as the "9's complement." First, consider a decimal
subtraction you can perform using the 9's complement technique—a process also known as "casting out the nines" (Figure 1).
The standard way to perform the operation is to subtract the subtrahend (283) from the minuend (647), which, as in Figure 1, may require the use of one or more borrow operations. To perform the
equivalent operation using a 9's complement technique, you first subtract each of the digits of the subtrahend from 9. You add the resulting 9's complement value to the minuend and then perform an
"end-around-carry" operation. The advantage of the 9's complement technique is that it is never necessary to perform a borrow operation (hence, its attraction to those of limited numerical ability in
the days of yore).
Now, consider the same subtraction performed using the 10's complement technique (Figure 2). The advantage of the 10's complement is that it is not necessary to perform an end-around carry, because you simply drop any carry-out resulting from the addition of the most-significant digits
from the final result. The disadvantage is that, during creation of the 10's complement, it is necessary to perform a borrow operation for every nonzero digit in the subtrahend. (You can overcome
this problem by first taking the 9's complement of the subtrahend, adding one to the result, and then performing the remaining operations, as in the example for the 10's complement.) You can employ
similar techniques with any number system including binary (base-2), in which the radix complement is known as the 2's complement and the diminished radix complement is known as the 1's complement.
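The two decimal recipes are mechanical enough to script; a sketch for fixed-width decimal operands (helper names mine), reproducing the 647 minus 283 example:

```python
def nines_complement(x, digits):
    """Subtract each digit from 9, i.e. (10**digits - 1) - x."""
    return 10 ** digits - 1 - x

def subtract_via_nines(minuend, subtrahend, digits=3):
    """Minuend + 9's complement of subtrahend, then an end-around carry."""
    total = minuend + nines_complement(subtrahend, digits)
    carry, rest = divmod(total, 10 ** digits)
    return rest + carry

def subtract_via_tens(minuend, subtrahend, digits=3):
    """Minuend + 10's complement (9's complement + 1), dropping the carry-out."""
    total = minuend + nines_complement(subtrahend, digits) + 1
    return total % 10 ** digits

print(subtract_via_nines(647, 283))  # 364
print(subtract_via_tens(647, 283))   # 364
```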
First, consider a binary subtraction that you perform using the 1's complement technique on 8-bit unsigned binary values, where you can use such values to represent positive integers in the range 0₁₀ to 255₁₀ (Figure 3). The traditional way to perform the operation is to subtract the subtrahend (00011110₂) from the minuend (00111001₂), which may require the use of one or more borrow operations. (Don't beat your
head against a wall trying to understand the standard binary subtraction—simply take my word as to the result.) To perform the equivalent operation in 1's complement, first subtract each of the
digits of the subtrahend from a 1. Add the resulting 1's complement value to the minuend and then perform an end-around-carry operation. As for the 9's complement process, the advantage of the 1's
complement technique is that it is never necessary to perform a borrow operation. In fact, it isn't even necessary to perform a subtraction operation because you can generate the 1's complement of a
binary number simply by inverting all of its bits, that is, by exchanging all the 0s with 1s and vice versa. This means that, even if you stopped here, you already know how to perform a simple binary
subtraction using only inversion and addition without any actual subtraction. Now, you can perform the same binary subtraction using the 2's complement technique (
Figure 4
As with the 10's complement technique, the advantage of the 2's complement is that it is not necessary to perform an end-around carry, because you simply drop any carry-out resulting from the
addition of the two MSBs from the final result. The disadvantage is that, during the creation of the 2's complement, it is necessary to perform a borrow operation for every nonzero digit in the
subtrahend. You can overcome this problem by first taking the 1's complement of the subtrahend, adding one to the result, and then performing the remaining operations, as in the example for the 2's
As fate would have it, there is also a short-cut available to generate the 2's complement of a binary number. Starting with the LSB of the value to be complemented, directly copy each bit up to and
including the first 1, then invert the remaining bits (Figure 5).
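The same two recipes in binary, on 8-bit unsigned values (helper names mine), reproducing the 8-bit subtraction worked above (57 - 30 = 27 in decimal):

```python
BITS = 8
MASK = (1 << BITS) - 1           # 0b11111111

def ones_complement_sub(a, b):
    """a - b via bit inversion plus an end-around carry (unsigned, a >= b)."""
    total = a + (~b & MASK)      # inverting the bits IS the 1's complement
    return (total & MASK) + (total >> BITS)

def twos_complement_sub(a, b):
    """a - b via a + NOT(b) + 1, simply dropping the carry-out."""
    return (a + (~b & MASK) + 1) & MASK

a, b = 0b00111001, 0b00011110    # 57 and 30
print(ones_complement_sub(a, b))  # 27
print(twos_complement_sub(a, b))  # 27
```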
Unfortunately, both the 1's and 2's complement techniques return incorrect results if you use unsigned binary representations and if you subtract a larger from a smaller value; that is, for these
techniques to work, the final result must be greater than or equal to zero. The reason for this is that subtracting a larger number from a smaller number results in a negative value, but you are
using unsigned binary numbers, which, by definition, can only be used to represent positive values. It is impractical to only ever perform calculations that have positive results, so you must have
some way of representing negative values. One solution is to use signed binary numbers.
Signed binary numbers
In standard decimal arithmetic, negative numbers are typically represented in "sign-magnitude form" (using a minus sign as a prefix). For example, you represent a value of plus twenty-seven as +27
(or just 27 for short), and a value of minus twenty-seven is indicated as –27 (where the "+" or "–" is the sign and the "27'' is the magnitude, hence, the "sign-magnitude" designation).
You can replicate the sign-magnitude form in binary by simply using the MSB to represent the sign of the number (0=positive, 1=negative). However, computers rarely employ the sign-magnitude form
but instead use a format known as signed binary. You can use signed binary numbers to represent both positive and negative values, and they do this in a rather cunning way. To illustrate the
differences between the sign-magnitude and signed binary formats, consider the decimal sign-magnitude representations of plus and minus twenty-seven, along with the same values represented as signed
binary numbers (Figure 6).
Unlike the decimal values, the bit patterns of the two binary numbers are very different because you form the signed binary equivalent of –27₁₀ by combining –128₁₀ with +101₁₀. That is, the LSBs continue to represent the same positive quantities as for unsigned binary numbers, and you use the MSB to represent an actual negative quantity rather than a simple plus or minus. In the case of a signed 8-bit number, a "1" in the sign bit represents –2⁷ (= –128₁₀), and you use the remaining bits to represent positive values in the range 0₁₀ through +127₁₀. Thus, you can use an 8-bit signed binary number to represent values in the range –128₁₀ through +127₁₀.
At first glance, signed binary numbers appear to be an outrageously complex solution to a fairly simple problem. In addition to representing an asymmetrical range of negative and positive numbers
(–128₁₀ through +127₁₀, in the case of an 8-bit value), the way in which you form these values is, to put it mildly, alien to the way you usually think of numbers. Why then, you may ask, don't you
use the MSB bit to represent the sign of the number and leave it at that?
Well, as you may suspect, there's reason behind the madness. First, if you use the MSB to represent only the sign of the number, then such numbers accommodate both +0 and –0 values. Although this
may not seem like a particularly hairy stumbling block, computers are essentially dumb, and having positive and negative representations of zero introduces complications in recognizing whether a
given value is less than zero or equal to zero (or whether +0 is greater than or equal to –0). But, there's a lot more to signed binary numbers than this. (Now, pay attention, because this is the
clever part.) Closer investigation of the two binary values in Figure 6 reveals that each bit pattern is, in fact, the 2's complement of the other. To put it another way, taking the 2's complement of
a positive-signed binary value returns its negative equivalent and vice versa (the only problem being that, because of the asymmetrical range, you cannot negate the largest negative number; for
example, in an 8-bit number, you cannot negate –128₁₀ to get +128₁₀ because the maximum positive value supported is +127₁₀).
The result of this rigmarole is that using signed binary numbers (which are also commonly referred to as "2's-complement numbers") greatly reduces the complexity of the operations within a
computer. To illustrate why this is so, consider one of the simplest operations—addition. You can compare the additions of positive and negative decimal values in sign-magnitude form with their
signed binary counterparts (Figure 7).
First, examine the standard decimal calculations. The calculation at the top of Figure 7 is easy to understand because it's a straightforward addition of two positive values. However, even though
we are familiar with decimal addition, the other three problems aren't quite as simple because you must decide exactly what to do with the negative values. In comparison, the signed binary
calculations on the right side of Figure 7 are all simple additions, irrespective of whether the individual values are positive or negative.
If you force a computer to use a binary version of the sign-magnitude form to perform additions, then, instead of performing its calculations effortlessly and quickly, it has to perform a painful
sequence of operations. First, the computer has to compare the signs of the two numbers. If the signs are the same, then the computer simply adds the two values (excluding the sign bits themselves)
because, in this case, the result always has the same sign as the original numbers. However, if the signs are different, the computer has to subtract the smaller value from the larger value and then
ensure that the correct sign was appended to the result.
As well as being time-consuming, performing these operations requires a substantial number of logic gates. Thus, the advantage of the signed binary format for addition operations is apparent: You
can always directly add together signed binary operations to provide the correct result in a single operation, irrespective of whether they represent positive or negative values. That is, you perform
the operations a+b, a+(–b), (–a)+b, and (–a)+(–b) in exactly the same way, by simply adding the two values together. This results in adders that are fast because you can construct them using a
minimum number of logic gates.
Now consider subtraction. You know that 10–3=7 in decimal arithmetic and that you can obtain the same result by negating the right-hand value and inverting the operation; that is, 10+(–3)=7. This
technique is also true for signed binary arithmetic, although you perform the negation of the right-hand value by taking its 2's complement rather than by changing its sign. For example, consider a
generic signed binary subtraction represented by a–b. Generating the 2's complement of b results in –b, allowing you to perform the required operation: a+(–b). Similarly, you perform equivalent
operations a–(–b), (–a) –b, and (–a)– (–b) in exactly the same way, by simply taking the 2's complement of b and adding the result to a, irrespective of whether a or b represent positive or negative
values. This means that computers do not require two different blocks of logic (one to add numbers and another to subtract them); instead, they only require an adder and some logic to generate the
2's complement of a number, which tends to make life a lot easier.
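The claim that a single adder handles a+b, a+(–b), (–a)+b, and (–a)+(–b), with subtraction done as a + NOT(b) + 1, can be sketched for 8-bit signed values (helper names mine):

```python
BITS, MASK = 8, 0xFF

def to_signed(x):
    """Read an 8-bit pattern as a signed (2's-complement) value."""
    return x - (1 << BITS) if x & 0x80 else x

def add8(a, b):
    """One adder covers every sign combination; the carry-out is dropped."""
    return to_signed((a + b) & MASK)

def sub8(a, b):
    """a - b computed as a + NOT(b) + 1, using the very same adder."""
    return add8(a, ((~b & MASK) + 1) & MASK)

plus27, minus27 = 0b00011011, 0b11100101   # +27 and -27 in signed binary
print(add8(plus27, minus27))               # 0
print(sub8(0, plus27))                     # -27
```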
So, where's the 2's complementor?
Early digital computers were often based on 1's complement arithmetic for a variety of reasons, including the fact that 2's complement techniques were not well understood. However, designers
quickly migrated to the 2's complement approach because of the inherent advantages it provides.
Unfortunately, the problem mentioned at the beginning of this article remains: When you examine a computer's ALU, there isn't a 2's complementor in sight; instead, a humble 1's complementor glares
balefully at you from its nest of logic gates. So, where is the 2's complementor? Is this part of some nefarious scheme to deprive you of the benefits of 2's complement arithmetic? Fear not my
braves, because all will be revealed in the next exciting installment…
Author's biography
Clive "Max" Maxfield is a member of the technical staff at Intergraph Computer Systems (Huntsville, AL), (800) 763-0242, where he gets to play with the company's high-performance graphics
workstations. In addition to numerous technical articles and papers, Maxfield is also the author of Bebop to the Boolean Boogie: An Unconventional Guide to Electronics (ISBN 1-878707-22-1). To
order, phone (800) 247-6553. You can reach Maxfield via e-mail at crmaxfie@ingr.com.
| EDN Access | feedback | subscribe to EDN! |
Copyright © 1996
EDN Magazine
. EDN is a registered trademark of Reed Properties Inc, used under license.
| {"url":"http://www.edn.com/design/systems-design/4337962/EDN-Access-12-05-96-A-minus-B-A-NOT-B-1-Part-1","timestamp":"2014-04-19T17:14:48Z","content_type":null,"content_length":"76688","record_id":"<urn:uuid:d3cdc6a3-5c77-41f0-b6b6-9c5ffc26563a>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00426-ip-10-147-4-33.ec2.internal.warc.gz"} |
Conjugate Heat Transfer
In this blog post we will explain the concept of conjugate heat transfer and show you some of its applications. Conjugate heat transfer corresponds with the combination of heat transfer in solids and
heat transfer in fluids. In solids, conduction often dominates whereas in fluids, convection usually dominates. Conjugate heat transfer is observed in many situations. For example, heat sinks are
optimized to combine heat transfer by conduction in the heat sink with the convection in the surrounding fluid.
Heat Transfer by Solids and Fluids
Heat Transfer in a Solid
In most cases, heat transfer in solids, if only due to conduction, is described by Fourier’s law defining the conductive heat flux, q, proportional to the temperature gradient: q=-k\nabla T.
For a time-dependent problem, the temperature field in an immobile solid satisfies the following form of the heat equation:
\rho C_{p} \frac{\partial T}{\partial t}=\nabla \cdot (k\nabla T) +Q
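As a small illustration of this equation, here is a minimal explicit finite-difference sketch in 1D. The constant material properties, boundary conditions, and discretization below are my own illustrative choices, not COMSOL's solver:

```python
import numpy as np

# rho*Cp*dT/dt = k*d2T/dx2 + Q on a 1D rod with fixed-temperature ends
k, rho, cp, Q = 400.0, 8900.0, 385.0, 0.0   # copper-like properties, SI units
n, L = 51, 0.1                              # grid points, rod length (m)
dx = L / (n - 1)
alpha = k / (rho * cp)                      # thermal diffusivity (m^2/s)
dt = 0.4 * dx * dx / alpha                  # explicit-scheme stability: dt <= dx^2/(2*alpha)

T = np.full(n, 293.15)                      # start uniformly at 20 degC
T[0], T[-1] = 373.15, 293.15                # hot left end, cold right end

for _ in range(2000):
    T[1:-1] += dt * (alpha * (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
                     + Q / (rho * cp))

# After enough steps the profile approaches the linear steady state.
print(T[n // 2])   # roughly midway between 373.15 K and 293.15 K
```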
Heat Transfer in a Fluid
Due to the fluid motion, three contributions to the heat equation are included:
1. The transport of fluid implies energy transport too, which appears in the heat equation as the convective contribution. Depending on the thermal properties on the fluid and on the flow regime,
either the convective or the conductive heat transfer can dominate.
2. The viscous effects of the fluid flow produce fluid heating. This term is often neglected, nevertheless, its contribution is noticeable for fast flow in viscous fluids.
3. As soon as a fluid density is temperature-dependent, a pressure work term contributes to the heat equation. This accounts for the well-known effect that, for example, compressing air produces heat.
Accounting for these contributions, in addition to conduction, results in the following transient heat equation for the temperature field in a fluid:
\rho C_{p} \frac{\partial T}{\partial t}+\rho C_p\bold{u}\cdot\nabla T= \alpha_p T\left( \frac{\partial p_\mathrm{A}}{\partial t}+\bold{u}\cdot\nabla p_\mathrm{A}\right)+\tau : S+\nabla \cdot (k\nabla T) +Q
Conjugate Heat Transfer Applications
Effective Heat Transfer
Efficiently combining heat transfer in fluids and solids is the key to designing effective coolers, heaters, or heat exchangers.
The fluid usually plays the role of energy carrier over large distances. Forced convection is the most common way to achieve a high heat transfer rate. In some applications, performance is further improved by combining convection with phase change (for example, the liquid-to-vapor phase change of water).
Even so, solids are also needed, in particular to separate fluids in a heat exchanger so that fluids exchange energy without being mixed.
Flow and temperature field in a shell-and-tube heat exchanger illustrating heat transfer between two fluids separated by the thin metallic wall.
Heat sinks are usually made of metal with high thermal conductivity (e.g. copper or aluminum). They dissipate heat by increasing the exchange area between the solid part and the surrounding fluid.
Temperature field in a power supply unit cooled by the air flow generated by an extracting fan and a perforated grille. Two aluminum fins are used to increase the exchange area between the flow and the electronic components.
Energy Savings
Heat transfer in fluids and solids can also be combined to minimize heat losses in various devices. Because most gases (especially at low pressure) have small thermal conductivities, they can be used
as thermal insulators… provided they are not in motion. In many situations, gas is preferred to other materials due to its low weight. In any case, it is important to limit the heat transfer by
convection, in particular by reducing the natural convection effects. Judicious positioning of walls and use of small cavities helps to control the natural convection. Applied at the micro scale, the
principle leads to the insulation foam concept, where tiny cavities of air (bubbles) are trapped in the foam material (e.g. polyurethane), which combines high insulation performance with light weight.
Window cross section (left) and zoom-in on the window frame (right).
Temperature profile in a window frame and glazing cross section from ISO 10077-2:2012 (thermal performance of windows).
Fluid and Solid Interactions
Fluid/Solid Interface
The temperature field and the heat flux are continuous at the fluid/solid interface. However, the temperature field can rapidly vary in a fluid in motion: close to the solid, the fluid temperature is
close to the solid temperature, and far from the interface, the fluid temperature is close to the inlet or ambient fluid temperature. The distance where the fluid temperature varies from the solid
temperature to the fluid bulk temperature is called the thermal boundary layer. The relative sizes of the thermal and momentum boundary layers are reflected by the Prandtl number (Pr= C_p \mu/k): for the Prandtl number to equal 1, the thermal and momentum boundary layer thicknesses need to be the same. A thicker momentum layer results in a Prandtl number larger than 1; conversely, a Prandtl number smaller than 1 indicates that the momentum boundary layer is thinner than the thermal boundary layer. The Prandtl number for air at atmospheric pressure and at 20°C is 0.7, which means that for air, the momentum and thermal boundary layers have similar size, with the momentum boundary layer slightly thinner than the thermal one. For water at 20°C, the Prandtl number is about 7, so in water, the temperature changes close to a wall are sharper than the velocity change.
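The two Prandtl numbers quoted above can be reproduced from tabulated material properties; the property values below are approximate handbook figures at 20°C, not taken from the article:

```python
# Pr = Cp * mu / k; property values are approximate handbook figures at 20 degC.
def prandtl(cp, mu, k):
    return cp * mu / k

pr_air   = prandtl(cp=1005.0, mu=1.81e-5, k=0.0257)   # dry air, roughly 0.71
pr_water = prandtl(cp=4182.0, mu=1.00e-3, k=0.598)    # liquid water, roughly 7.0
```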
Normalized temperature (red) and velocity (blue) profile for natural convection of air close to a cold solid wall.
Natural Convection
The natural convection regime corresponds to configurations where the flow is driven by buoyancy effects. Depending on the expected thermal performance, natural convection can be beneficial (e.g. in a cooling application) or detrimental (e.g. natural convection in an insulation layer).
The Rayleigh number, denoted Ra, is used to characterize the flow regime induced by natural convection and the resulting heat transfer. The Rayleigh number is defined from fluid material properties, a typical cavity size, L, and the temperature difference, \Delta T, usually set by the solids surrounding the fluid:
Ra=\frac{\rho^2g\alpha_p C_p}{\mu k}\Delta T L^3
The Grashof number is another flow regime indicator giving the ratio of buoyant to viscous forces:
Gr=\frac{\rho^2g\alpha_p}{\mu^2}\Delta T L^3
The Rayleigh number can be expressed in terms of the Prandtl and the Grashof numbers through the relation Ra=Pr Gr.
When the Rayleigh number is small (typically <10^3), the convection is negligible and most of the heat transfer occurs by conduction in the fluid.
For a larger Rayleigh number, heat transfer by convection has to be considered. When buoyancy forces are large compared to viscous forces, the regime is turbulent, otherwise it is laminar. The
transition between these two regimes is indicated by the critical order of the Grashof number, which is 10^9. The thermal boundary layer, giving the typical distance for temperature transition
between the solid wall and the fluid bulk, can be approximated by \delta_\mathrm{T} \approx \frac{L}{\sqrt[4\,]{Ra}} when Pr is of order 1 or greater.
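For a concrete feel for these numbers, here is a sketch computing Ra, Gr, and the boundary-layer estimate for an assumed air-filled cavity (L = 1 cm, ΔT = 10 K; the air properties are approximate, and α_p = 1/T for an ideal gas):

```python
# Ra and Gr for an assumed air-filled cavity; also checks the identity Ra = Pr*Gr.
rho, g = 1.204, 9.81                  # air density (kg/m^3), gravity (m/s^2)
alpha_p = 1.0 / 293.0                 # ideal-gas expansion coefficient, 1/K
cp, mu, k = 1005.0, 1.81e-5, 0.0257   # approximate air properties at 20 degC
L, dT = 0.01, 10.0                    # cavity size (m), temperature difference (K)

Ra = rho**2 * g * alpha_p * cp / (mu * k) * dT * L**3
Gr = rho**2 * g * alpha_p / mu**2 * dT * L**3
Pr = cp * mu / k

delta_T = L / Ra**0.25                # thermal boundary layer estimate (Pr ~ 1)
```

With these values Ra comes out near 10^3, i.e. at the edge of the conduction-dominated regime, and Ra = Pr·Gr holds by construction of the two definitions.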
Temperature profile induced by natural convection in a glass of cold water in contact with a hot surface.
Forced Convection
The forced convection regime corresponds to configurations where the flow is driven by external phenomena (e.g. wind) or devices (e.g. fans, pumps) that dominate buoyancy effects.
In this case the flow regime can be characterized, as for isothermal flow, using the Reynolds number as an indicator, Re= \frac{\rho U L}{\mu}. The Reynolds number represents the ratio of inertial to viscous forces. At low Reynolds numbers, viscous forces dominate and laminar flow is observed. At high Reynolds numbers, the damping in the system is very low, so small disturbances can grow; if the Reynolds number is high enough, the flow field eventually ends up in a turbulent regime.
The momentum boundary layer thickness can be evaluated, using the Reynolds number, by \delta_\mathrm{M} \approx \frac{L}{\sqrt{Re}}.
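By analogy with the natural-convection estimate above, a quick sketch with assumed values (air at 2 m/s past a 5 cm feature; neither value is from the article):

```python
# Reynolds number and momentum boundary layer estimate; U and L are assumed.
rho, mu = 1.204, 1.81e-5      # air at ~20 degC
U, L = 2.0, 0.05              # flow speed (m/s), characteristic size (m)

Re = rho * U * L / mu         # inertial-to-viscous force ratio
delta_M = L / Re**0.5         # momentum boundary layer thickness estimate, m
```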
Streamlines and temperature profile around a heat sink cooling by forced convection.
Radiative Heat Transfer
Radiative heat transfer can be combined with conductive and convective heat transfer described above.
In a majority of applications, the fluid is transparent to heat radiation and the solid is opaque. As a consequence, heat transfer by radiation can be represented as surface-to-surface radiation, transferring energy between the solid walls through transparent cavities. The radiative heat flux emitted by a diffuse gray surface is equal to \varepsilon n^2 \sigma T^4. When a surface is surrounded by bodies at a homogeneous T_\mathrm{amb}, the net radiative flux is q_\mathrm{r} = \varepsilon n^2 \sigma (T_\mathrm{amb}^4-T^4). When the surrounding surfaces are at different temperatures, each surface-to-surface exchange is weighted by the surfaces' view factors.
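The net flux formula is easy to evaluate directly; the surface temperature, the emissivity, and n ≈ 1 below are assumptions chosen for illustration:

```python
# q_r = eps * n^2 * sigma * (T_amb^4 - T^4); a negative q_r means a net loss.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/m^2/K^4

def net_radiative_flux(eps, T, T_amb, n=1.0):
    return eps * n**2 * SIGMA * (T_amb**4 - T**4)

# Assumed case: a 350 K gray surface (eps = 0.9) in 20 degC surroundings.
q = net_radiative_flux(eps=0.9, T=350.0, T_amb=293.15)   # net loss, W/m^2
```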
Nevertheless, both fluids and solids may be transparent or semitransparent, so radiation can also occur inside fluids and solids. In such participating (or semitransparent) media, the radiation interacts with the medium itself (solid or fluid), which absorbs, emits, and scatters the radiation.
Whereas radiative heat transfer can be neglected in applications with small temperature differences and low emissivities, it plays a major role in applications with large temperature differences and large emissivities.
Comparison of temperature profiles for a heat sink with a surface emissivity \varepsilon = 0 (left) and \varepsilon = 0.9 (right).
Heat transfer in solids and heat transfer in fluids are combined in the majority of applications. This is because fluids flow around solids or between solid walls, and because solids are usually
immersed in a fluid. An accurate description of heat transfer modes, material properties, flow regimes, and geometrical configurations enables the analysis of temperature fields and heat transfer.
Such a description is also the starting point for a numerical simulation that can be used to predict conjugate heat transfer effects or to test different configurations in order, for example, to
improve thermal performances of a given application.
C_{p}: heat capacity at constant pressure (SI unit: J/kg/K)
g: gravity acceleration (SI unit: m/s^2)
Gr: Grashof number (dimensionless number)
k: thermal conductivity (SI unit: W/m/K)
L: characteristic dimension (SI unit: m)
n: refractive index (dimensionless number)
p_\mathrm{A}: absolute pressure (SI unit: Pa)
Pr: Prandtl number (dimensionless number)
q: heat flux (SI unit: W/m^2)
Q: heat source (SI unit: W/m^3)
Ra: Rayleigh number (dimensionless number)
S: strain rate tensor (SI unit: 1/s)
T: temperature field (SI unit:K)
T_\mathrm{amb}: ambient temperature (SI unit: K)
\bold{u}: velocity field (SI unit: m/s)
U: typical velocity magnitude (SI unit: m/s)
\alpha_{p}: thermal expansion coefficient (SI unit: 1/K)
\delta_\mathrm{M}: momentum boundary layer thickness (SI unit: m)
\delta_\mathrm{T}: thermal layer thickness (SI unit: m)
\Delta T: characteristic temperature difference (SI unit: K)
\varepsilon: surface emissivity (dimensionless number)
\rho: density (SI unit: kg/m^3)
\sigma: Stefan-Boltzmann constant (SI unit: W/m^2/K^4)
\tau: viscous stress tensor (SI unit: N/m^2)
Article Tags
1. ali hassan January 15, 2014 at 12:30 pm
How can the conjugate heat transfer model be used in the case of boiling water flowing through a porous medium?
2. Nicolas Huc January 20, 2014 at 5:00 am
The conjugate heat transfer interface in COMSOL Multiphysics is dedicated to heat transfer in solids and non-isothermal free flows. If the model scale is small enough (so that the porous cavities are explicitly represented), this interface can be used, following an approach similar to the one shown in http://www.comsol.com/model/boiling-water-3972 .
Otherwise a porous media flow model should be used instead of the free flow model.
| {"url":"http://www.comsol.com/blogs/conjugate-heat-transfer/","timestamp":"2014-04-17T03:56:10Z","content_type":null,"content_length":"67547","record_id":"<urn:uuid:0d18a49b-8e7f-4a31-aa12-c9878c3b221f>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00615-ip-10-147-4-33.ec2.internal.warc.gz"}
abtract algebra
If [tex]f\inS_3[\math], show that [tex]f^6=\iota[\math]. Bear with me, I am still learning Latex.
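For what it's worth, the claim is small enough to verify by brute force; a quick sketch (not part of the original thread) checking every permutation of three elements:

```python
# Brute-force check that f^6 is the identity for every f in S_3: the element
# orders in S_3 are 1, 2, and 3, and 6 is a common multiple of all of them.
from itertools import permutations

def compose(f, g):
    """(f o g)(i) = f(g(i)); permutations are tuples over {0, 1, 2}."""
    return tuple(f[g[i]] for i in range(len(f)))

identity = (0, 1, 2)
for p in permutations(range(3)):
    power = identity
    for _ in range(6):
        power = compose(p, power)
    assert power == identity        # holds for all six elements of S_3
```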
Use the $\Sigma$ button to input the [tex] tags correctly; you typed the wrong slashes. I am pretty sure there is a theorem to this effect, but note that 6 is the LCM of the orders of all the elements of $S_3$. It follows that any element raised to the 6th power gives the identity. You can look up the theorem to prove this elegantly, or, since $S_3$ is small, just prove it by cases: $S_3$ has elements of orders 1, 2, and 3; show that for each of these kinds of elements, the 6th power yields the identity. | {"url":"http://mathhelpforum.com/advanced-algebra/51063-abtract-algebra-print.html","timestamp":"2014-04-19T02:25:05Z","content_type":null,"content_length":"5336","record_id":"<urn:uuid:cd8561dc-c890-47e4-93d3-f18551fe3fc8>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00004-ip-10-147-4-33.ec2.internal.warc.gz"}
Relational Algebra The Natural Join
A selection of articles related to relational algebra the natural join.
Original articles from our library related to Relational Algebra The Natural Join. See Table of Contents for further available material (downloadable resources) on Relational Algebra The Natural Join.
Relational Algebra The Natural Join is described in multiple online sources; in addition to our editors' articles, see the section below for printable documents, Relational Algebra The Natural Join books, and related discussion.
Suggested Pdf Resources
Relational Algebra (Section 6.1) Algebra. Implementation.
Features of SQL beyond relational algebra and relational calculus. 3 . Recall that, in relational algebra, the natural join R ⋈ S is given by π.
operations can express interesting and complex queries. ∎.
1) This operation is equivalent to the regular Relational Algebra natural join over B. 2) If B = 0, it reduces to the regular Cartesian Product.
in relational algebra is the natural join operation, and its variants. In a typical relational algebra expression there will be a number of joins.
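As a concrete illustration of the operation these resources discuss, here is a minimal natural join over relations represented as lists of dicts (the relation and attribute names below are made up for illustration):

```python
# Natural join: combine tuples of R and S that agree on all shared attributes.
def natural_join(R, S):
    shared = set(R[0]) & set(S[0]) if R and S else set()
    return [{**r, **s} for r in R for s in S
            if all(r[a] == s[a] for a in shared)]

emp  = [{"dept": 1, "name": "Ann"}, {"dept": 2, "name": "Bob"}]
dept = [{"dept": 1, "loc": "NY"}, {"dept": 3, "loc": "LA"}]
joined = natural_join(emp, dept)     # only Ann matches on the shared "dept"
```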
Suggested Web Resources
Great care has been taken to prepare the information on this page. Elements of the content come from factual and lexical knowledge databases, realmagick.com library and third-party sources. We
appreciate your suggestions and comments on further improvements of the site. | {"url":"http://www.realmagick.com/relational-algebra-the-natural-join/","timestamp":"2014-04-18T10:36:41Z","content_type":null,"content_length":"27316","record_id":"<urn:uuid:dc656ecf-1bd7-4865-8964-467f4adb2986>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00354-ip-10-147-4-33.ec2.internal.warc.gz"} |
One of the most profound ideas in modern mathematics. A manifold is a space on which you can do differential geometry. Roughly, it is anything which is locally like flat Euclidean space. For
instance, the surface of a sphere is a two-dimensional manifold, because to a bug crawling on the sphere (or a person on the Earth) the surface nearby looks flat. Likewise spacetime is a manifold, of
four (or maybe ten or more) dimensions. | {"url":"http://everything2.com/title/manifold","timestamp":"2014-04-21T02:48:52Z","content_type":null,"content_length":"62242","record_id":"<urn:uuid:36860e27-5377-4feb-9923-bf46bdef066e>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00437-ip-10-147-4-33.ec2.internal.warc.gz"} |
Reader responses to "Is Mathematics a Science?"
Science is Just Another Religion
All the ardent validations and engrossing intellectual forays notwithstanding, the actuality is that - at its foundation - science is also very largely a belief system similar to religion. As a matter of fact it can be shown that science is simply the most popular religion of this era. Only for people who don't understand science. Among scientists and educated people, science has its position because it works, and the fact that it works is an empirically testable and falsifiable proposition.
Religion is founded entirely on untested claims. Science tests its own claims as surely and as regularly as it tests scientific theories.
In the 1950s, at the height of the polio epidemic, Sister Kenny opened a now-famous clinic to treat polio by massaging the limbs of its sufferers. At the same time, using the methods of science,
Jonas Salk created a vaccine. There is no more apt comparison of religion and science. To put this in contemporary terms, if you have cancer, would you prefer a massage or a vaccine?
Religion is based on perfect confidence and no evidence. Science is based on perpetual skepticism of everything including science itself, and the only reason science prevails is because it meets the
requirements of the most skeptical observer.
Obviously for a disenchanted religious believer who wants to jump ship, science looks like another belief system. In the same way, to a hammer, everything looks like a nail. But to a scientist,
constitutionally inclined to doubt everything, science delivers something religion cannot provide — results. | {"url":"http://www.arachnoid.com/is_math_a_science/feedback.html","timestamp":"2014-04-16T13:38:47Z","content_type":null,"content_length":"32899","record_id":"<urn:uuid:0f8e756c-720c-4162-8462-d4b12ecab1d2>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00518-ip-10-147-4-33.ec2.internal.warc.gz"} |
Range of a Function
September 30th 2008, 08:26 PM #1
MHF Contributor
Jul 2008
Range of a Function
I pretty much can find the domain of certain basic functions. I understand the domain and range in the form of the point (x,y) = (domain, range). However, I just don't comprehend the idea of
finding the range of any function.
Here are two questions:
(1) Find the range of f(x) = -(sqrt{-2x + 3})
(2) Find the range of y = -x^4 + 4
Last edited by magentarita; September 30th 2008 at 08:29 PM. Reason: Needed to add a letter
I pretty much can find the domain of certain basic functions. I understand the domain and range in the form of the point (x,y) = (domain, range). However, I just don't comprehend the idea of
finding the range of any function.
Here are two questions:
(1) Find the range of f(x) = -(sqrt{-2x + 3})
(2) Find the range of y = -x^4 + 4
The best way of finding the range of a function is to draw the graph.
(1) (-oo, 0]
The graph is a sideways parabola. Note that:
* when x = 3/2, y = 0.
* As x --> -oo, y --> -oo.
(2) (-oo, 4]
The graph looks similar to an upside down parabola ......
yes but....
Yes, but what is the basic idea in terms of the range of a function?
What do you understand by "all possible values for $y$"? For example, consider the function $f(x)=\frac1{x+1}.$ Note that $\not\exists\,x\in\mathbb R\mid f(x)=0$, hence $y$, in this case, can't take the value zero.
Following up on mr fantastic's words, graphing, it's one of the best ways to find the range quickly.
The range is all $y$ such that the equation $y=-\sqrt{-2x+3}$ has a solution in the domain (i.e. $x\leq 1.5$). This gives $-y = \sqrt{-2x+3}$. In order for there to be a solution we need $-y\geq
0\implies y\leq 0$. Thus, $y^2 = -2x+3$. Which gives $x = \tfrac{1}{2}(3 - y^2)$ and this is of course in the domain. Thus, $y\leq 0$ is the range.
(2) Find the range of y = -x^4 + 4
The domain here is any real number. Thus, we are asking for what $y$ the equation $y = -x^4 + 4$ is solvable. This implies $x^4 = 4 - y$, and in order to have a solution we need $4 - y \geq 0 \implies y\leq 4$.
Dr. Math put it this way:
"Domain and range are just two different words for "how far something extends"; specifically, a king's domain is the territory he controls, and an animal's range is the region it wanders through.
So it makes some sense that the set of numbers a function "controls" would be called its domain, and the set through which its value can wander is called its range."
Merriam-Webster puts it this way:
Domain: a territory over which dominion is exercised; the set of elements to which a mathematical or logical variable is limited; specifically: the set on which a function is defined. (The word
comes from the Latin word for "lordship".)
Range: a place that may be ranged over; an open region over which animals (as livestock) may roam and feed; the region throughout which a kind of organism or ecological community naturally lives
or occurs; the set of values a function may take on; the class of admissible values of a variable.
In mathematics:
The domain of a function f(x) is usually fairly easy to find. It is the set of all the numbers x that can be put into f(x) and have the result make sense. That is, the x values that don't make
some expression inside a square root sign negative, or that don't make a denominator zero, and so on.
The range is the set of all values f(x) can take, as x takes every value in the domain.
Example: $y=x^2-2$
As for the domain, there are no restrictions. You can assign any real number to x. So the domain is "all real numbers"
The range is the set of all values y can take, as x takes every value in the domain. In this problem, you know that the square of a number is greater than or equal to 0. Could y take the value -3?
If we try to solve
$-3 = x^2 - 2$
we get
$-1 = x^2$
which is impossible to solve, so -3 is not in the range. One way to find the values of y that are possible is to try solving the equation for x:
$y = x^2 - 2$
$y + 2 = x^2$
$x = \pm \sqrt{y+2}$
Now you can use the logic you used for the domain: what values of y will let this formula make sense? The radicand must be greater than or equal to 0.
$y+2 \ge 0$
$y \ge -2$
Range = $\{y|y \ge -2\} \ \ or \ \ [-2, +\infty)$
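A quick numerical spot-check of the two ranges worked out in this thread (just sampling x over part of each domain, not a proof):

```python
# Sample f(x) = -sqrt(-2x + 3) on part of its domain x <= 3/2, and the
# worked example y = x**2 - 2 on a symmetric interval, then inspect the values.
from math import sqrt

xs_f = [i / 10 - 50 for i in range(515)]        # x from -50.0 up to about 1.4
f_vals = [-sqrt(-2 * x + 3) for x in xs_f]

xs_g = [i / 10 - 50 for i in range(1001)]       # x from -50.0 to 50.0
g_vals = [x**2 - 2 for x in xs_g]
```

Every sampled f value is at most 0, consistent with the range (-oo, 0], and the smallest sampled value of x^2 - 2 is -2, consistent with the range [-2, +oo).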
much better............
What do you understand by "all possible values for $y$"? For example, consider the function $f(x)=\frac1{x+1}.$ Note that $\not\exists\,x\in\mathbb R\mid f(x)=0$, hence $y$, in this case, can't take the value zero.
Following up on mr fantastic's words, graphing, it's one of the best ways to find the range quickly.
Thanks for breaking this up for me.
| {"url":"http://mathhelpforum.com/pre-calculus/51454-range-function.html","timestamp":"2014-04-19T10:27:52Z","content_type":null,"content_length":"68926","record_id":"<urn:uuid:24e3c277-9d7b-4555-bcb4-de7e4f1cf74c>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00397-ip-10-147-4-33.ec2.internal.warc.gz"}
Arcola, TX ACT Tutor
Find an Arcola, TX ACT Tutor
...My teaching is fun and creative. Algebra and Geometry are my strong points. All my kids are gone for UT Austin so I can devote all my time to help you.
12 Subjects: including ACT Math, reading, geometry, algebra 1
...I started as a Kindergarten teacher's aid, taught architectural design studios at the university (Texas Tech and UCA) and elementary school at HISD and Alief ISD. I have also tutored high
school students in Houston and Sugar Land in various subjects: math, physics, literature, Spanish, studying ...
41 Subjects: including ACT Math, Spanish, reading, English
...I've also taught SAT and ACT prep. I've tutored students in English, reading, and writing. I enjoy teaching a great deal and am well versed in teaching in different styles to fit the student.
34 Subjects: including ACT Math, reading, chemistry, English
...It's all going to be okay, and I am here to help. I've been a professional test prep tutor for five years. I've taught hundreds of students and logged over 1000 hours with a major test prep
22 Subjects: including ACT Math, English, college counseling, ADD/ADHD
...I was taught the art of drumming by using math principles when counting drum rolls, etc. Luckily, I have "an ear for music." I am a creative and original thinker because of all the teachers
who encouraged me to pursue my natural talents once I had mastered basic class subjects. I welcome the ...
44 Subjects: including ACT Math, reading, Spanish, GED
Related Arcola, TX Tutors
Arcola, TX Accounting Tutors
Arcola, TX ACT Tutors
Arcola, TX Algebra Tutors
Arcola, TX Algebra 2 Tutors
Arcola, TX Calculus Tutors
Arcola, TX Geometry Tutors
Arcola, TX Math Tutors
Arcola, TX Prealgebra Tutors
Arcola, TX Precalculus Tutors
Arcola, TX SAT Tutors
Arcola, TX SAT Math Tutors
Arcola, TX Science Tutors
Arcola, TX Statistics Tutors
Arcola, TX Trigonometry Tutors | {"url":"http://www.purplemath.com/Arcola_TX_ACT_tutors.php","timestamp":"2014-04-19T07:39:10Z","content_type":null,"content_length":"23274","record_id":"<urn:uuid:4a56bd3b-2414-4b8f-8da5-442be8fdcf65>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00074-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Simulation of Large-Scale Integrated Circuits
A. Richard Newton
EECS Department
University of California, Berkeley
Technical Report No. UCB/ERL M78/52
Electronic circuit simulation programs can accurately predict voltage and current waveforms for small integrated circuits but as the size of the circuit increases, e.g. for Large-Scale Integrated
(LSI) Circuits involving more than 10000 devices, the cost and memory requirements of such analyses become prohibitive. Logic simulators can be used for LSI digital circuit evaluation and design if
only first-order timing information based on user-specified logic gate delays is required. If voltage waveforms and calculated delays are important, a timing simulator may be used. In many circuits,
however, there are critical paths or analog circuit blocks where more accurate circuit analysis is necessary. This dissertation describes the hybrid simulation program SPLICE, developed for the
analysis and design of LSI Metal-Oxide-Semiconductor (MOS) circuits. SPLICE allows the designer to choose the form of analysis best suited to each part of the circuit and logic, timing and circuit
analyses are performed concurrently. The use of an event scheduling algorithm and selective-trace analysis allows the program to take advantage of the relatively low activity of LSI circuits to
reduce the cost of the simulation. SPLICE is between one and three orders of magnitude faster than a circuit simulation program, for comparable analysis accuracy, and requires less than ten percent
of the data storage used in a circuit analysis. SPLICE is written in FORTRAN and is approximately 8000 statements long. The algorithms and data structures used in SPLICE are described and a number of
example simulations are included.
BibTeX citation:
Author = {Newton, A. Richard},
Title = {The Simulation of Large-Scale Integrated Circuits},
School = {EECS Department, University of California, Berkeley},
Year = {1978},
URL = {http://www.eecs.berkeley.edu/Pubs/TechRpts/1978/9605.html},
Number = {UCB/ERL M78/52},
Abstract = {Electronic circuit simulation programs can accurately predict voltage
and current waveforms for small integrated circuits but as the size
of the circuit increases, e.g. for Large-Scale Integrated (LSI)
Circuits involving more than 10000 devices, the cost and memory
requirements of such analyses become prohibitive.
Logic simulators can be used for LSI digital circuit evaluation and
design if only first-order timing information based on user-specified
logic gate delays is required. If voltage waveforms and calculated
delays are important, a timing simulator may be used. In many
circuits, however, there are critical paths or analog circuit blocks
where more accurate circuit analysis is necessary.
This dissertation describes the hybrid simulation program
SPLICE, developed for the analysis and design of LSI
Metal-Oxide-Semiconductor (MOS) circuits. SPLICE allows the
designer to choose the form of analysis best suited to each
part of the circuit and logic, timing and circuit analyses are
performed concurrently. The use of an event scheduling algorithm
and selective-trace analysis allows the program to take advantage
of the relatively low activity of LSI circuits to reduce the cost
of the simulation.
SPLICE is between one and three orders of magnitude faster than
a circuit simulation program, for comparable analysis accuracy,
and requires less than ten percent of the data storage used in a
circuit analysis. SPLICE is written in FORTRAN and is approximately
8000 statements long.
The algorithms and data structures used in SPLICE are described
and a number of example simulations are included.}
EndNote citation:
%0 Thesis
%A Newton, A. Richard
%T The Simulation of Large-Scale Integrated Circuits
%I EECS Department, University of California, Berkeley
%D 1978
%@ UCB/ERL M78/52
%U http://www.eecs.berkeley.edu/Pubs/TechRpts/1978/9605.html
%F Newton:M78/52 | {"url":"http://www.eecs.berkeley.edu/Pubs/TechRpts/1978/9605.html","timestamp":"2014-04-20T08:18:31Z","content_type":null,"content_length":"7737","record_id":"<urn:uuid:c628c03a-a7a6-4d80-82b3-b80028699e92>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00518-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts about likelihood-free methods on Xi'an's Og
Almost immediately after I published my comments on his paper with David Dunson, Xiangyu Wang sent a long comment that I think worth a post on its own (especially, given that I am now busy skiing
and enjoying Chamonix!). So here it is:
Thanks for the thoughtful comments. I did not realize that Neiswanger et al. also proposed the similar trick to avoid combinatoric problem as we did for the rejection sampler. Thank you for pointing
that out.
For the criticism 3 on the tail degeneration, we did not mean to fire on the non-parametric estimation issues, but rather the problem caused by using the product equation. When two densities are
multiplied together, the accuracy of the product mainly depends on the tail of the two densities (the overlapping area), if there are more than two densities, the impact will be more significant. As
a result, it may be unwise to directly use the product equation, as the most distant sub-posteriors could be potentially very far away from each other, and most of the sub posterior draws are outside
the overlapping area. (The full Gibbs sampler formulated in our paper does not have this issue, as shown in equation 5, there is a common part multiplied on each sub-posterior, which brought them
Point 4 stated the problem caused by averaging. The approximated density follows Neiswanger et al. (2013) will be a mixture of Gaussian, whose component means are the average of the sub-posterior
draws. Therefore, if sub-posteriors stick to different modes (assuming the true posterior is multi-modal), then the approximated density is likely to mess up the modes, and produce some faked modes
(eg. average of the modes. We provide an example in the simulation 3.)
Sorry for the vague description of the refining method (4.2). The idea is kinda dull: we start from an initial approximation to θ and then do a one-step Gibbs update to obtain a new θ. We call this procedure 'refining', as we believe such a process brings the original approximation closer to the true posterior distribution.
The first (4.1) and the second (4.2) algorithms do seem weird to call 'parallel', since they are both modified from the Gibbs sampler described in (4) and (5). The reason we propose these two algorithms is to overcome two problems. The first is the dimensionality curse, and the second is the issue when the subset inferences are not extremely accurate (subset effective sample size small), which might be a common scenario for logistic regression (with many parameters) even with a huge data set. First, algorithms (4.1) and (4.2) both start from some initial approximation and attempt to improve it to obtain a better one, thus avoiding the dimensionality issue. Second, in our simulation 1, we pull down the performance of simple averaging by worsening the sub-posterior performance (we allocate a smaller amount of data to each subset), and the non-parametric method fails to approximate the combined density as well. However, algorithms 4.1 and 4.2 still work in this case.
I have some problems with the logistic regression example provided in Neiswanger et al. (2013). As shown in the paper, under the authors' setting (not fully specified in the paper), though the non-parametric method is better than simple averaging, the approximation error of simple averaging is small enough for practical use (I also have some problems with their error evaluation method); so why should we still bother to use a much more complicated method?
Actually I’m adding a new algorithm into the Weierstrass rejection sampling, which will render it thoroughly free from the dimensionality curse of p. The new scheme is applicable to the nonparametric
method in Neiswanger et al. (2013) as well. It should appear soon in the second version of the draft. | {"url":"http://xianblog.wordpress.com/tag/likelihood-free-methods/","timestamp":"2014-04-16T22:51:34Z","content_type":null,"content_length":"82867","record_id":"<urn:uuid:8afd0d36-f6de-427c-83c7-e38d63b09243>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00278-ip-10-147-4-33.ec2.internal.warc.gz"} |
finding the area
Find the area of the shaded region the radius of each circle is 1
1. Three of the four midpoints form a right triangle whose hypotenuse has length 2. Thus the distance between two midpoints placed on a vertical (or horizontal) line is $d = \sqrt{2}$.
2. The triangle formed by a midpoint and two points of intersection (marked blue) is a right triangle whose area is $a = \frac12 \cdot r^2$.
3. The lens-shaped area is the difference of a quarter circle and the blue right triangle: $l = \frac14 \cdot \pi r^2-\frac12 \cdot r^2 = \frac14 r^2(\pi - 2)$.
4. The shaded region's area is the difference of 4 circles and 8 lens-shaped areas. | {"url":"http://mathhelpforum.com/geometry/198979-finding-area.html","timestamp":"2014-04-21T02:47:08Z","content_type":null,"content_length":"34256","record_id":"<urn:uuid:82503726-1a14-48c3-b0c0-af90cb1e07fa>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00022-ip-10-147-4-33.ec2.internal.warc.gz"}
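Following the poster's steps literally (with r = 1), the numbers can be checked quickly; the final expression in step 4 simplifies to $(2\pi + 4)r^2$. Note this is only a check of the stated arithmetic, since the original figure is not shown here:

```python
import math

r = 1.0
# Step 2: blue right triangle with two legs of length r.
triangle = 0.5 * r**2
# Step 3: lens = quarter circle minus the blue triangle.
lens = 0.25 * math.pi * r**2 - triangle   # = (r**2 / 4) * (pi - 2)
# Step 4 (as stated): four full circles minus eight lens-shaped areas.
shaded = 4 * math.pi * r**2 - 8 * lens    # simplifies to (2*pi + 4) * r**2
```

With $r = 1$ this gives $2\pi + 4 \approx 10.28$.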
Specs needed on a Carver KLW AUDIO K MOS 2200
Can anyone tell me the output power specs on a Carver KLW AUDIO K MOS 2200 amp? Is it 2 X 200?
Also, if anyone is selling an M series amp (M-240 or M-4120 for example), I may be interested.
in stereo each channel can drive 2 ohms; 4 ohms mono bridged (halve its stereo rating); and mono power is just a bit more than 300, close to 340 watts. it gives out 150 watts/chn into 2 ohms. i have
fixed and built over 200 of them, as i purchased carver's remaining surplus of all their parts when they closed down. i was also lucky to pick up many M4120 boards and M4060 boards that i have fixed
and made into working amps. i love the M series amps for Mid/twts and the Kmos for Bass.
hope this helps...
RynoThePirate wrote:Is the amp 2 ohm stable?
Dan Williams
carver nut,
who lives only
a few miles from
their old office....
Seeking Carver Car Amplifier Specifications
If the Carver K MOS 2200 is only rated 2x100w RMS @ 4 ohms, can anyone explain why it has two 25-amp fuses, when the M-240 and M-2120, which are 2x120w RMS @ 4 ohms, only come with a single 20-amp
fuse? When I A/B my K MOS 2200 against all 4 of my M-240 and my M-2120, it sure seems to be a lot more powerful, effortlessly pushing my Realistic model APM-200 Audio Power Meter well past 100 RMS continuous
even into 8 ohms.
I sure would welcome a scan of the specification page from a Carver owners manual for the M-240, M-2120, M-4120, PMA-2150, K MOS 2150, K MOS 4200, or especially the K MOS 2200.
Last edited by Corvette6769 on Thu May 24, 2007 5:19 am, edited 1 time in total.
My Gear: http://www.SS427.com/stereo
My Websites: http://www.SS427.com/#business
Re: Seeking Carver Car Amplifier Specifications
common mistake, you have to remember that the M series amps are a magnetic power supply (carver exclusive), and are much more efficient than a standard MOSFET A/B amp, kind of between a Digital and an
A/B amp in draw/power output, thus the difference in fuse inputs. the m-240/m2120 are 120 x 2 into 4 ohms, 240 mono into 8 ohms; the kmos 2200 is 100 x 2 into 4 ohms, or 150 x 2 into 2 ohms, or 340
mono into 4 ohms. kmos 2150 is 75 x 2 into 4 ohms, 200 mono into 4 ohms; the kmos 4200 is 50 x 4 into 4 ohms, 75 x 4 into 2 ohms, and 150 x 2 into 4 ohms.
Corvette6769 wrote:If the Carver K MOS 2200 is only rated 2x100w RMS @ 4 ohms, can anyone explain why it has two 25-amp fuses, when the M-240 and M-2120 which are 2x120w RMS @ 4 ohms only come with a
single 20-amp fuse?
When I A/B my K MOS 2200 against all 4 of my M-240 and my M-2120, it sure seems to be lot more powerful, effortlessly pushing my Realistic model APM-200 Audio Power Meter well past 100 RMS
continuous even into 8 ohms.
I sure would welcome a scan of the specification page from a Carver owners manual for the M-240, M-2120, M-4120, PMA-2150, K MOS 2150, K MOS 4200, or especially the K MOS 2200.
Dan Williams
carver nut,
who lives only
a few miles from
their old office....
Thank you for the information Dan.
Happen to have specifications for the M-4120 or PMA-2150 models?
Also, at what THD are the Carver car amplifiers rated?
Does Carver rate them THD or THD+N (Total Harmonic Distortion + Noise)?
My Gear: http://www.SS427.com/stereo
My Websites: http://www.SS427.com/#business
the M-4120 was two of the M-2120 together, more or less. the PMA was the same as the Kmos 2150. not sure on the thd or thd+n, but they are all very underrated amps; still to this day i run my M amps
and will not get rid of them... sweet for mid/highs.
Corvette6769 wrote:Thank you for the information Dan.
Happen to have specifications for the M-4120 or PMA-2150 models?
Also, at what THD are the Carver car amplifiers rated?
Does Carver rate them THD or THD+N (Total Harmonic Distortion + Noise)?
Dan Williams
carver nut,
who lives only
a few miles from
their old office....
VB, not sure on this one, would have to dig, but i would say about 10 years right off the top of my head, maybe a bit more.
vernonbishop wrote:I am just curious, just how long did Carver make car stereo components? I am just curious.
Dan Williams
carver nut,
who lives only
a few miles from
their old office....
danw2002 wrote:VB, not sure on this one, would have to dig, but i would say about 10 years right off the top of my head, maybe a bit more.
vernonbishop wrote:I am just curious, just how long did Carver make car stereo components? I am just curious.
Thanks anyway. I just did not know that Carver manufactured car stereo equipment. | {"url":"http://carveraudio.com/phpBB3/viewtopic.php?p=17942","timestamp":"2014-04-16T07:31:22Z","content_type":null,"content_length":"62397","record_id":"<urn:uuid:9703ba2f-af79-4009-93bc-5affd2b8adce>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00547-ip-10-147-4-33.ec2.internal.warc.gz"}
Huntington Park Precalculus Tutor
...I spend approximately 5-7 hours a day coding in MATLAB. I have taken a class on numerical methods at Caltech that was done half in mathematica, half in Matlab. I am currently working on a
physics research project studying the structure of a new type of material, a quasicrystal, and the code I am writing for the project is also in mathematica.
26 Subjects: including precalculus, calculus, physics, algebra 2
...I tutored the students in one-on-one sessions, group sessions, and conducted review sessions before exams. In addition, I was a teaching assistant for undergraduate and graduate students in the
Biomedical Engineering and Kinesiology departments. It is my goal to not only teach my students the material, but to give them the tools needed to succeed in all their classes.
30 Subjects: including precalculus, chemistry, calculus, physics
...Students I worked with have scored higher on their finals and other placement tests. I am very flexible and available weekdays and weekends. I will be a great help for students who require
science classes in their majors or for those who are looking to score high on their entry exams.
11 Subjects: including precalculus, chemistry, geometry, algebra 1
Are you looking for an SAT Expert? A Math Mentor at any level? I am a caring, intelligent and entertaining tutor with over 7 years of experience working with high schoolers in SAT prep and in all
levels of Math from Algebra I and Geometry through Calculus.
26 Subjects: including precalculus, English, Spanish, reading
...I am a firm believer that everyone learns differently. When I work with students I constantly change my tactics to adjust to what works best with each student's style of learning. My background in
theatre also lends itself to a strong ability to communicate the complicated material of these tough sub...
10 Subjects: including precalculus, chemistry, physics, calculus | {"url":"http://www.purplemath.com/huntington_park_precalculus_tutors.php","timestamp":"2014-04-21T13:03:21Z","content_type":null,"content_length":"24453","record_id":"<urn:uuid:9e0ca5df-354a-43b7-bf12-7affef707f8d>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00219-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts by
Total # Posts: 20
that is what was asked; it never said total distance, it said calculate distance travelled in km
how did you get 12 min from 8:55 to 9:07?? and is the 1st answer that i have correct... because the link says otherwise
an aircraft's readings show: time 8:55 at distance travelled 957 km; time 9:07 at distance travelled 1083 km. Calculate the distance travelled in km. Calculate the average speed of the aircraft in km/h. (For the 1st
question, 1083 - 957 = 126 km.) (The second one: should i add 1083 + 957 km then div...
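For what it's worth, the subtraction the poster already did is right, and the 12 minutes comes straight from the clock readings (9:07 minus 8:55); a quick check:

```python
distance_km = 1083 - 957                              # 126 km between the two readings
elapsed_h = ((9 * 60 + 7) - (8 * 60 + 55)) / 60.0     # 12 minutes expressed in hours
average_speed = distance_km / elapsed_h               # km/h over the interval
```

So there is no need to add the two readings; the average speed works out to 630 km/h.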
write the following simultaneous equations in the form of AX= B where A,X and B are matrices 11x+6y=6 9x+5y=7 hence write the solution for x and y as a product of two matrices.
i really need to know how to do this; i have posted it over again and no one is able to help me...
write the following simultaneous equations in the form of AX= B where A,X and B are matrices 11x+6y=6 9x+5y=7 hence write the solution for x and y as a product of two matrices.
write the following simultaneous equations in the form of AX= B where A,X and B are matrices 11x+6y=6 9x+5y=7 hence write the solution for x and y as a product of two matrices.
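One way to sketch the requested computation: with $A = \begin{pmatrix}11 & 6\\ 9 & 5\end{pmatrix}$, $X = \begin{pmatrix}x\\ y\end{pmatrix}$ and $B = \begin{pmatrix}6\\ 7\end{pmatrix}$, the determinant is $55 - 54 = 1$, so $X = A^{-1}B$ is easy to evaluate:

```python
# A X = B with A = [[11, 6], [9, 5]] and B = (6, 7).
a, b, c, d = 11, 6, 9, 5
B = (6, 7)
det = a * d - b * c                 # 55 - 54 = 1
# 2x2 inverse: (1/det) * [[d, -b], [-c, a]], applied to B.
x = (d * B[0] - b * B[1]) / det
y = (-c * B[0] + a * B[1]) / det
```

This gives $x = -12$, $y = 23$, which indeed satisfy both equations.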
a credit union pays 8 percent per annum compound interest on all fixed deposits. a consumer deposited $24000 in an account. calculate the total amount of money in the account at the end of two years.
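The compound-interest arithmetic here is one line (8% per annum, compounded annually for two years):

```python
principal = 24000.0
rate = 0.08
amount = principal * (1 + rate) ** 2   # value at the end of two years
```

That works out to $27,993.60.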
A drinking straw of length 21 cm is cut into 3 pieces: the length of the first piece is x cm; the second piece is 3 cm shorter than the 1st piece; the third piece is twice as long as the first piece. (a)
state in terms of x the length of the pieces (i think i figured that one out...1st...
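Reading the pieces as $x$, $x - 3$ and $2x$ settles part (a), and the lengths then follow from $x + (x - 3) + 2x = 21$:

```python
# x + (x - 3) + 2x = 21  =>  4x - 3 = 21  =>  x = 6
x = (21 + 3) / 4
pieces = (x, x - 3, 2 * x)   # 6, 3 and 12 cm, summing to 21
```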
if the exchange rate is $1.35 BDS$ to EC$ and Karen exchanged EC$ 432 for BDS$ calculate the amount of BDS$ which karen would receive
karen exchanged BDS$ two thousand dollars and received EC$ two thousand seven hundred calculate the value of one BD$ in EC$ I would just like the formula so i can work it out please
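From the second question's figures, one BD$ is worth EC$2700 / 2000 = EC$1.35, and the first question is the same conversion run in the other direction:

```python
rate = 2700 / 2000.0          # EC$ per BD$, from the 2000 -> 2700 exchange
bds_received = 432 / rate     # EC$432 converted to BDS$
```

So Karen would receive BDS$320 (assuming the rate is quoted as EC$ per BD$, which is what the second question implies).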
How did you get 122cm of wire??
(in this question they did not give me any diagrams) use pi = 22/7 (a) a piece of wire is bent to form a square of area 121 cm squared. Calculate the length of each side of the square. Calculate the
perimeter of the square. (b) the same piece of wire was bent to form a circle. Ca...
(iii) 13,680 - 7,800 = 5,880 still outstanding in the second year. I am not sure how to work out the last one.
(a)A loan of 12 000 was borrowed from a bank at 14% per annum Calculate (i) The interest on the loan at the end of the first year (ii) The total amount owing at the end of the first year A repayment
of $ 7 800 was made at the start of the second year. Calculate (iii) the amoun...
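For what it's worth, the first two parts and the poster's own 5,880 figure line up; the natural reading of the cut-off part (iii) (an assumption on my side) is the amount owing at the end of the second year:

```python
loan, rate = 12000.0, 0.14
interest_year1 = loan * rate                 # (i)  1 680
owed_end_year1 = loan + interest_year1       # (ii) 13 680
outstanding = owed_end_year1 - 7800          # 5 880 after the repayment
owed_end_year2 = outstanding * (1 + rate)    # 6 703.20 if interest accrues again
```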
What is the meaning?
That makes much more sense. Ok thank you.
What is the meaning?
Can it also imply on people?
What is the meaning?
Can it also mean that there are years when nothing occurs and then within a few weeks, something massive can occur?
What is the meaning?
I don't understand this quote: "There are decades where nothing happens; and there are weeks where decades happen."
Physical Science
If a vehicle circled Earth at a distance equal to the Earth-Moon distance, how long would it take to make a complete orbit? | {"url":"http://www.jiskha.com/members/profile/posts.cgi?name=INDIANA","timestamp":"2014-04-16T13:51:27Z","content_type":null,"content_length":"10249","record_id":"<urn:uuid:060956be-8b4a-4ec2-8a68-b419c619c059>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00565-ip-10-147-4-33.ec2.internal.warc.gz"} |
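For the last question, Kepler's third law gives the answer directly: a circular orbit at the Earth-Moon distance has essentially the Moon's period, about 27.4 days. A rough check with standard values ($GM_\oplus \approx 3.986\times10^{14}\ \mathrm{m^3/s^2}$, $r \approx 3.844\times10^8$ m):

```python
import math

GM_earth = 3.986e14   # gravitational parameter of Earth, m^3/s^2
r = 3.844e8           # mean Earth-Moon distance, m
period_s = 2 * math.pi * math.sqrt(r**3 / GM_earth)
period_days = period_s / 86400.0   # roughly 27.4 days, the sidereal month
```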
Help with Lagrange
October 22nd 2009, 10:30 AM #1
Junior Member
Sep 2009
Help with Lagrange
Find max and/or min values of function f given the contraints:
f(x,y,z) = x^2 +y^2 +z^2
x + y + z = 1
x + 2y + 3z = 6
I know how to use Lagrange multipliers generally. When I solve using both constraints, I get values for x, y, z that when plugged into the function f give 25/3.
The answer in the book gives: "No Maximum, minimum: 25/3"
My question is how do I know the value i got (25/3) is the minimum and how do I know there is no maximum for the function?
It sounds like you are doing the math work in the problem right, which is good. One way to check your critical points is using the second partials of the function, which can be a pain, but in
this case isn't too bad. Just remember to substitute a value for $z$ in terms of $x$ and $y$ in the equation before you try to take a derivative of it.
Equation: $D = F_{xx}F_{yy} - (F_{xy})^2$
- If $F_{xx}$ and $D(x,y)$ are positive then it's a min.
- If $F_{xx}$ is negative and $D(x,y)$ is positive then the point is a max.
- If $D(x,y)$ is less than 0 then the point is a saddle point.
*note: $F_{xx}$ means the partial derivative of $F$ with respect to $x$, taken twice.
Yes but these equations have 3 variables so wouldnt I have to do something with the partial derivative with respect to z?
Correct, you would need to solve for z in terms of x and y.
Then, you would substitute that into F, then start the second partials.
After that your equation will be in terms of only x and y.
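Since both constraints are linear, the full Lagrange system $\nabla f = \lambda \nabla g_1 + \mu \nabla g_2$ is itself linear: $2x = \lambda + \mu$, $2y = \lambda + 2\mu$, $2z = \lambda + 3\mu$, plus the two constraints, so it can be solved exactly. As for the original question: the feasible set is a line in $\mathbb{R}^3$, along which $f$ grows without bound, so there is no maximum, and the single critical point must be the minimum. A sketch of the exact solve (plain Gaussian elimination, no libraries):

```python
def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))   # pivot row
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c] / M[c][c]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

# Unknowns ordered (x, y, z, lambda, mu).
A = [[2, 0, 0, -1, -1],   # 2x = lambda + mu
     [0, 2, 0, -1, -2],   # 2y = lambda + 2 mu
     [0, 0, 2, -1, -3],   # 2z = lambda + 3 mu
     [1, 1, 1,  0,  0],   # x + y + z = 1
     [1, 2, 3,  0,  0]]   # x + 2y + 3z = 6
b = [0, 0, 0, 1, 6]
x, y, z, lam, mu = solve(A, b)
f_min = x**2 + y**2 + z**2    # 25/3, attained at (x, y, z) = (-5/3, 1/3, 7/3)
```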
| {"url":"http://mathhelpforum.com/calculus/109691-help-legrange.html","timestamp":"2014-04-18T04:03:16Z","content_type":null,"content_length":"33673","record_id":"<urn:uuid:ccacefb6-bfb1-455a-b069-8505ed04f021>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00615-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: lincom command
Re: st: lincom command
From Ari Samaranayaka <ari.samaranayaka@ipru.otago.ac.nz>
To <statalist@hsphsun2.harvard.edu>
Subject Re: st: lincom command
Date Wed, 13 Mar 2013 10:54:57 +1300
Thank you Maarten for the comment. Your idea was my very first try; it was given up later because of the difficulty of interpreting the results. What you are saying is to treat A, B, C, D as 4 separate binary
variables. The problem with that is that the overlap between those binary variables is high, and therefore so is the collinearity. For this reason the estimated IRRs for ethnicities A and B can be quite different even though the overlap of
people between ethnicities A and B is quite large. In that situation I am unable to interpret (understand) the estimates.
On 12/03/2013 10:23 PM, Maarten Buis wrote:
The much easier solution is to just create separate indicator
variables for whether or not a person feels (s)he belongs to ethnicity
A. The full set of indicator variables will not be mutually exclusive,
but that is exactly what you want, and you will get directly the IRRs
you are looking for without using any post-estimation commands.
-- Maarten
On Tue, Mar 12, 2013 at 5:36 AM, Ari Samaranayaka
<ari.samaranayaka@ipru.otago.ac.nz> wrote:
Hi folks
I need to use a poisson multivatiate model using a human health dataset,
outcome is a specific health outcome, one of the explanatory variables is
ethnicity. One of the results I need to estimate is IRR for various
ethnicities relative to a reference ethnicity. Let us say ethnicities are A,
B, C, D. Some people belong to multiple ethnicities, therefore ethnicity is
not a variable with mutually exclusive categories. For this reason I cannot
represent it using dummy variables. Does any one know how to represent a
categorical variable when categories are not mutually exclusive?
What I have done is, have created a set of mutually exclusive ethnic
categories so that i can use them in the model. Say those categories are P,
Q, R, S, T (here I have more groups than above, I have no research interest
on them). People from single ethnicity in original ethnicity classification
now belongs to multiple categories in new classification, but new
classification can be represented by dummy variables. Then I can have
estimates (regression coefficients and IRR) for each of these new ethnic
categories, but what I really need is the estimates for my original ethnic
categories. Does any one know how to convert estimates for P, Q, R, S, T
into estimates for A, B, C, D?
I thought I can use stata lincom command for that as stata documentation
says. For example, required linear combination for ethnicity A is determined
by the distribution of ethnic A people across ethnic groups P to T. I know
those distributions for all interested ethnicities. Do you think this is a
correct approach?
Thank you in advance for any help.
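On the last question: what -lincom- computes here is just a weighted sum of coefficients on the log-rate scale, exponentiated at the end, so the mechanics are easy to sanity-check outside Stata. A toy sketch (all numbers are hypothetical; the weights would be the observed shares of ethnicity-A people across the exclusive groups P-T):

```python
import math

# Hypothetical Poisson regression coefficients (log rate ratios) for P..T.
beta = {"P": 0.10, "Q": -0.25, "R": 0.40, "S": 0.05, "T": -0.10}
# Hypothetical shares of ethnicity-A members across the exclusive groups.
w = {"P": 0.50, "Q": 0.20, "R": 0.15, "S": 0.10, "T": 0.05}

log_irr_A = sum(w[g] * beta[g] for g in beta)   # the lincom-style combination
irr_A = math.exp(log_irr_A)
```

Whether this weighted average is a *valid* estimate for ethnicity A is exactly the statistical question raised in the thread; the sketch only shows the arithmetic.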
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/faqs/resources/statalist-faq/
* http://www.ats.ucla.edu/stat/stata/
| {"url":"http://www.stata.com/statalist/archive/2013-03/msg00578.html","timestamp":"2014-04-16T10:25:05Z","content_type":null,"content_length":"11212","record_id":"<urn:uuid:1cd61937-ffa9-45a5-b40e-39bfa138ade3>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00229-ip-10-147-4-33.ec2.internal.warc.gz"}
by Mithrandir
Suppose we have to make a quiz. The questions have different degrees of difficulty and come from several different chapters. We have the following problem: how can we ensure that we select the
questions uniformly from both domains?
First, we see that the problem is similar to finding how uniformly scattered some points are on a rectangle. Yet even this problem has no simple solution. For example, taking the centre of all the
points and seeing if it is close to the centre of the rectangle fails. In the same manner, any method using statistical moments will fail.
What is simpler than one 2D problem? A 1D one. We arrive at the point where we must determine whether a function's graph is horizontal. But this can be solved by using a derivative. However, this
solution doesn't properly extend to higher dimensions because we are forced to use finite differences (our problem is a discrete one).
Let’s look at the problem from another point of view. Plotting one instance we have:
Without reducing the generality or changing the shape of the data, we will divide each point value by the total sum, thus normalizing the distribution. Also, we'll take into account a uniform distribution over
the same interval:
We can now easily see a solution: the shapes of the distributions are similar only if their common area is 1. Thus, the degree of uniformity is given by the common area of the distributions (normalized,
of course):
We can easily extend this idea to more dimensions. I don't know about fractal dimensions, but this solution seems to work for all integer ones.
Here is a Python script used for one instance of the problem, illustrating the idea:
def sum_dist(d):
    # Total count over the whole 2-D grid.
    s = 0
    for line in d:
        s += sum(line)
    return s

def get_pdf(md):
    # Normalise the grid so its entries sum to 1 (the +0.0 forces float division).
    s = sum_dist(md)
    return [[col/(s+0.0) for col in line] for line in md]

def orig_dist(ids):
    # Each cell holds the list of question ids assigned to it;
    # the cell's mass is simply how many questions landed there.
    return [[len(col) for col in line] for line in ids]

def overlap(d1, d2):
    # Common area of two normalised distributions: sum of cell-wise minima.
    l = [zip(a, b) for (a, b) in zip(d1, d2)]
    return sum_dist([[min(x) for x in line] for line in l])

def main():
    ids = ...
    # get_uniform() (definition not shown) is assumed to return a grid of
    # the same shape whose cells all carry equal counts.
    U = get_pdf(get_uniform())
    orig = orig_dist(ids)
    print "Degree of uniformity: ", overlap(get_pdf(orig), U)

if __name__ == "__main__":
    main()
That’s all. | {"url":"http://pgraycode.wordpress.com/2009/09/21/uniformity/?like=1&_wpnonce=d1fb2cdf27","timestamp":"2014-04-18T18:11:44Z","content_type":null,"content_length":"52431","record_id":"<urn:uuid:4a65302d-080b-4a61-bd40-e655675e10c0>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00358-ip-10-147-4-33.ec2.internal.warc.gz"} |
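A self-contained miniature of the same computation (the 2×2 grid of counts below is made up; a perfectly uniform grid scores 1.0):

```python
def overlap_with_uniform(counts):
    """Shared area between the normalised distribution of `counts`
    and the uniform distribution over the same grid."""
    total = float(sum(sum(row) for row in counts))
    cells = sum(len(row) for row in counts)
    u = 1.0 / cells                      # uniform mass per cell
    return sum(min(c / total, u) for row in counts for c in row)

score = overlap_with_uniform([[3, 1], [2, 2]])   # 0.25 + 0.125 + 0.25 + 0.25
```

Here the score is 0.875, i.e. this quiz would be 87.5% "uniform" by the measure above.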
XXXIV edition of the Brazilian Workshop on Nuclear Physics
Default session
Thermalization time and specific heat of neutron stars crust
PoS(XXXIV BWNP)004 pdf
Superfluid Neutrons in the Core of the Cassiopeia-A Neutron Star
PoS(XXXIV BWNP)005 pdf
PoS(XXXIV BWNP)006 pdf attachments
Theoretical models in nuclear astrophysics
PoS(XXXIV BWNP)008 pdf
Recent results in reactions using radioactive ion beams
PoS(XXXIV BWNP)009 pdf
Strongly Interacting Matter at Very High Energy Density
PoS(XXXIV BWNP)014 pdf
Symposia II
Quantum decoherence in low-energy nuclear reaction dynamics?
PoS(XXXIV BWNP)020 pdf
Nuclear physics in the cosmos
PoS(XXXIV BWNP)022 pdf
Discovering the Atomic Nucleus
PoS(XXXIV BWNP)024 pdf
Oral Contributions
Nuclear structure and neutrino-nucleus interaction
PoS(XXXIV BWNP)032 pdf
Faddeev-Yakubovsky technique for Weakly Bound Systems
PoS(XXXIV BWNP)034 pdf
Manufacturing of thin films of boron for the measurement of the ${}^{10}B(n,\alpha)$${}^{7}Li$ reaction used in BNCT
PoS(XXXIV BWNP)035 pdf
Active filtering applied to radiographic images unfolded by the Richardson-Lucy algorithm
PoS(XXXIV BWNP)036 pdf
Deconfinement transition at neutron star cores
PoS(XXXIV BWNP)040 pdf
Hybrid stars with the Nambu-Jona-Lasineo model
PoS(XXXIV BWNP)046 pdf
Renormalization of effective field theories for the nuclear force
PoS(XXXIV BWNP)047 pdf
Neutron Generation and Diffusion Process in Accelerator-Driven Systems Reactors
PoS(XXXIV BWNP)050 pdf
Poster Contributions
A Procedure to Fit Nuclear Decay Data With Proper Treatment of Outliers
PoS(XXXIV BWNP)053 pdf
External gamma dose rate and radon concentration in indoor environments covered with Brazilian granites
PoS(XXXIV BWNP)059 pdf
A Geometry and Collimation Study of a Compton Backscatter Device for Inclusions Detection in Materials
PoS(XXXIV BWNP)063 pdf
7Be content in rainfall and soil deposition in South American coastal ecosystems (2011 September 12)
PoS(XXXIV BWNP)064 pdf
Analysis of the Cell Walls of Ceramic Foams by X-ray Microtomography
PoS(XXXIV BWNP)069 pdf
PoS(XXXIV BWNP)070 pdf
Dose-rate dependence of Epitaxial diodes response for gamma dosimetry
PoS(XXXIV BWNP)077 pdf
Studies of the sensitivity dependence of float zone silicon diodes on gamma absorbed dose
PoS(XXXIV BWNP)078 pdf
Monte Carlo transport simulation for a Long Counter neutron detector employed as a cosmic rays induced neutron monitor at ground level
PoS(XXXIV BWNP)082 pdf
The comparison of GEANT 4.8.2 and 4.9.2 results for the 25MeV protons in thick polyethylene
PoS(XXXIV BWNP)083 pdf
Use of the X-ray Computed Microtomography Technique for the Comparative Morphological Characterization of Proceratophrys bigibbosa Species from Southern Brazil
PoS(XXXIV BWNP)085 pdf
Natural nuclear radioactivity and crystallography composition of Camburi sand beach (Vitória - ES)
PoS(XXXIV BWNP)091 pdf
Study of Pulsars and Magnetars
PoS(XXXIV BWNP)098 pdf
Gravitational Wave Radiated by a Collapsing Ellipsoid
PoS(XXXIV BWNP)099 pdf
Proto-Neutron Stars with Delta-Resonances using the Zimanyi-Moszkowski Model
PoS(XXXIV BWNP)101 pdf
Hadronic thermal model with distinct freeze-out temperature for baryons and mesons
PoS(XXXIV BWNP)106 pdf
Domain of parameters in a model with density-dependent quark mass
PoS(XXXIV BWNP)110 pdf
Alpha cluster states in light nuclei populated through the (6Li,d) reaction
PoS(XXXIV BWNP)118 pdf
Shell model formalism for the vector proton asymmetry in the nonmesonic weak hypernuclear decay
PoS(XXXIV BWNP)119 pdf
A systematic calculation of muon capture rates in the number projected QRPA
PoS(XXXIV BWNP)120 pdf
Isospin Mixing Within Relativistic Mean-Field Models Including the Delta Meson
PoS(XXXIV BWNP)122 pdf
Refitting density dependent relativistic model parameters including Center-of-Mass corrections
PoS(XXXIV BWNP)123 pdf
Interacting neutrino gas in a dense nuclear matter
PoS(XXXIV BWNP)124 pdf
Relativistic pn-QRPA to the Double Beta Decay
PoS(XXXIV BWNP)126 pdf
Hadrons in a Dynamical AdS/QCD model
PoS(XXXIV BWNP)127 pdf
Statistical properties of hot nuclei
PoS(XXXIV BWNP)128 pdf
Monte Carlo sampling of pair creation transitions in a model of pre-equilibrium reactions
PoS(XXXIV BWNP)134 pdf
Quasi-Elastic Barrier Distribution for the $^7$Li weakly bound projectile
PoS(XXXIV BWNP)138 pdf
Study of fragmentation reactions of light nucleus
PoS(XXXIV BWNP)143 pdf
Simultaneous mutiparticle emission from compound nuclei in evaporation process
PoS(XXXIV BWNP)144 pdf
One-Proton Radioactivity from Spherical Nuclei
PoS(XXXIV BWNP)116 pdf | {"url":"http://pos.sissa.it/cgi-bin/reader/conf.cgi?confid=142","timestamp":"2014-04-21T07:50:00Z","content_type":null,"content_length":"23460","record_id":"<urn:uuid:d5851551-a262-41de-ab98-237ec3cdb07b>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00359-ip-10-147-4-33.ec2.internal.warc.gz"} |
Examples of categorification
What is your favorite example of categorification?
Anybody who wants to make a more enduring contribution to the mathematics community might try to construct a concise article with the title What is a .... Categorification? for the widely
distributed AMS Notices. Many of the articles published in that column are far too technical for the general reader, I think, due to the natural self-consciousness people feel about what the
experts will think. But categorification has the great advantage of being readily illustrated by user-friendly examples. (I'd do this myself if I had any expertise at all.)
Jim Humphreys Oct 27 '10 at 18:46
There are a bunch; I don't know that I have a favorite. Here's one for now:
The free commutative monoid functor is a categorification of the exponential function.
Edit: I have been asked to explain this, so I will. We'll interpret "commutative monoid" in any cocomplete symmetric monoidal category $C$ where $\otimes$ distributes over colimits (each $X
\otimes -$ preserves colimits); the simplest way of ensuring that is to assume the category is symmetric monoidal closed.
Then, at the level of formulas, the free commutative monoid is
$$\exp(X) = \sum_{n \geq 0} X^{\otimes n}/\mathbf{n!}$$
where $\mathbf{n!}$ is the categorifier's notation for the symmetric group $S_n$, and we divide out by the canonical action of the $S_n$ on $X^{\otimes n}$.
There is an awful lot more to say about the categorified analogy, but I'll just say one. Using the hypotheses on the symmetric monoidal category $C$, the object $\exp(X)$ carries a
commutative monoid structure, and in fact it is the free commutative monoid on the object $X$ (think of the symmetric algebra for the category $C = Vect$, for instance). Like any free
functor, the left adjoint $\exp$ preserves colimits, for example coproducts. What is the coproduct of two commutative monoids (in the category of commutative monoid objects)? Their tensor
product in $C$! Thus, we arrive at the exponential law
$$\exp(X + Y) \cong \exp(X) \otimes \exp(Y)$$
and this has many applications.
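[A decategorified shadow of this exponential law, added here as an illustrative aside: taking graded dimensions of symmetric algebras in $\mathrm{Vect}$, the isomorphism $\mathrm{Sym}^n(V \oplus W) \cong \bigoplus_{i+j=n} \mathrm{Sym}^i V \otimes \mathrm{Sym}^j W$ becomes the Vandermonde-style identity $\binom{n+a+b-1}{n} = \sum_{i+j=n} \binom{i+a-1}{i}\binom{j+b-1}{j}$, which is easy to machine-check:]

```python
from math import comb

def dim_sym(n, d):
    """Dimension of the n-th symmetric power of a d-dimensional space."""
    return comb(n + d - 1, n)

a, b = 2, 3   # dim V, dim W (arbitrary small values)
for n in range(8):
    lhs = dim_sym(n, a + b)
    rhs = sum(dim_sym(i, a) * dim_sym(n - i, b) for i in range(n + 1))
    assert lhs == rhs   # graded pieces of Sym(V + W) vs Sym(V) (x) Sym(W)
```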
Can you explain that a bit?
Jan Weidner Oct 26 '10 at 6:19
And if you do your symmetrizing in a derived way, you get the free infinite loopspace, or $E_\infty$ monoid -- stable homotopy instead of homology -- spectrum objects instead of abelian
group objects -- ...
Tom Goodwillie Oct 26 '10 at 16:25
Thanks Todd for the additional explanation, this example is really nice! Thanks Tom for the additional information!
Jan Weidner Oct 26 '10 at 21:34
@Todd: What a fantastic example!! Really, this is one of my favorite MO answers so far.
Martin Brandenburg Jan 12 '11 at 23:20
@Martin: thank you very much! The mathematics is indeed beautiful to contemplate; one of my favorite applications is to the structure of the Lie algebra operad as a categorified logarithm
valued in superspaces, via the PBW theorem. I tried to say something about this a long time ago at math.ucr.edu/home/baez/trimble/trimble_lie_operad.pdf , but I'm afraid I didn't do
justice to it...
Todd Trimble♦ Jan 13 '11 at 0:14
The move from Betti numbers to homology groups.
Although this might not fit super tightly with the usual modern examples of "categorification" (in the way that, say, a monoidal category is a categorification of a monoid), it is probably
the first and most important example of a concept being categorified, allowing for notions such as functoriality, naturality, etc. to flourish. [No way for a continuous map between spaces
to induce a map between Betti numbers! The old days before functoriality!].
Maybe one should consider the categorification of Euler characteristic to homology groups. This is generalized in the theories of Floer homologies of 3-manifolds and Khovanov homology,
whose Euler characteristics give the Alexander polynomial and Jones polynomial, respectively.
Ian Agol Oct 26 '10 at 3:36
This is just about the first example given in the paper "Categorification" by Baez and Dolan, so I'd say it fits in quite well with the modern spirit. An excellent example of structural
Todd Trimble♦ Oct 26 '10 at 13:45
The classical BGG resolution as a categorification of the Weyl character formula.
That's pretty cool! How do you deduce the Weyl character formula from the BGG resolution?
David MJC Oct 26 '10 at 8:11
The BGG resolution gives you a resolution of the simple module in terms of direct sums of Verma modules. The Euler-Poincaré principle applied to the resolution, together with the
formula for the character of the Verma module, gives you the Weyl character formula. (This is all from memory: I have some notes in my office and I can check tomorrow.)
José Figueroa-O'Farrill Oct 27 '10 at 0:10
I like one of the simplest and most well known examples: the category of finite sets and ~~bijections~~ functions (see below for comments) categorifies the natural numbers. Or rather it
un-de-categorifies the de-categorification that led to much of mathematics in the first place. That makes it pretty special, even if it is rather basic compared with other examples.
Isn't it more natural to take all functions as morphisms? The resulting category has initial objects, terminal objects, finite products, finite coproducts, and an exponential, which
categorify 0, 1, multiplication, addition, and exponentiation respectively.
Qiaochu Yuan Oct 26 '10 at 16:11
Quite right - that was a thinko on my part: I have most recently been interested in groupoidification, although the broader (and older) categorification was the first appeal to me. However
this is a fortuitous illustration of an important point, which is that categorification, like quantization, is an art, not a functor, and what is "most natural" may depend on the context!
If a set is viewed as a vector space with a canonical basis indexed by that set (or the dual of a vector space with such a canonical basis) then its groupoidification is a powerful idea in
David MJC Oct 27 '10 at 21:39
Grassmannian varieties as categorifying (q-)binomial coefficients.
Here's another example: the functor which maps a group to its classifying space is a categorification of taking the reciprocal.
Edit: The idea is that the total space $EG$ of the classifying bundle of $G$ is contractible and a cofibrant replacement of the point $1$ on which $G$ acts freely. Thus, the construction $BG
= EG/G$ is taking a stack-y quotient $BG = 1//G$.
There is a bit more to this idea than may first appear; let me take a related example (which may appear to have some Eulerian "wishful thinking" in it, but have a little faith here!). One
way of taking the reciprocal is to pass to a geometric series, so that one suggestive notation for the free monoid construction
$$\sum_{n \geq 0} X^{\otimes n}$$
(in a suitable monoidal category; see my other comment on categorifying exponentiation) is a categorified reciprocal $1/(1 - X)$. We can apply this idea in group cohomology for a group $G$
as follows: think of $\mathbb{Z}$ as being an abelianized point, and consider a standard $G$-free resolution of $\mathbb{Z}$ such as the normalized homogeneous bar resolution, which we can
think of as an abelianized $EG$. In one way of constructing this bar resolution (see e.g. Hilton-Stammbach p. 217), the degree $n$ component of $EG$ is
$$\mathbb{Z}G \otimes IG^{\otimes n}$$
where $IG$ is the augmentation ideal, i.e., the kernel of the augmentation map $\varepsilon: \mathbb{Z}G \to \mathbb{Z}$. As a bare module (or seen in degree 0), $IG$ can be seen as an
abelianized "$G - 1$". However, in the differential-graded world, it is better to think of it as in degree 1, and this degree 1 shift $\Sigma IG$ can be seen as a categorified "$-IG = 1 - G$"
(this may make more sense in the "super-world"; see for example my old notes on the Lie operad when I was doing some work with Saunders Mac Lane, or consider for example the occurrence
of signs in the Euler characteristic). So now the total space of the bar resolution $EG$ is the sum of the degree $n$ components
$$\mathbb{Z}G \otimes \sum_{n \geq 0} (\Sigma IG)^{\otimes n}$$
which is an abelianized categorified form of $g \cdot \sum_{n \geq 0} (1 - g)^n$ which is formally $1$ by the geometric series. Very similar types of categorified geometric series
constructions occur in Joyal's theory of species (see especially his article on virtual species in Springer LNM 1234), which constructs the Lie operad by categorified constructions [if you
read between the lines!], and in the bar resolution for operads as discussed by Ginzburg-Kapranov; I tried to amplify this in my notes on the Lie operad.
Just to put one final gloss on this: consider the Schubert cell decomposition of projective space as a finite geometric series. For a field $k$ we have
$$\mathbb{P}^{n-1}(k) = \frac{k^n - 1}{k - 1} = 1 + k + k^2 + \ldots + k^{n-1}$$
(the '$1$' in the numerator is a zero vector, and the denominator is nonzero scalars $k^\ast$). We can pass to a limit and get infinite-dimensional projective space. Keeping in mind that
degree shifts introduce some sign changes in the geometric series, the infinite-dimensional projective space $\mathbb{RP}^\infty$ would be a model of the homotopy quotient $1//\mathbb{R}^* \simeq 1//\mathbb{Z}_2$.
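The finite geometric series count can be checked directly over a small finite field: the sketch below (illustrative Python, not from the thread) counts lines through the origin in $k^n$ for $k = \mathbb{F}_3$ and compares with $1 + q + \dots + q^{n-1}$.

```python
from itertools import product

# Count points of P^{n-1}(F_p) directly: nonzero vectors up to scalar.
p, n = 3, 3
vectors = [v for v in product(range(p), repeat=n) if any(v)]

def normalize(v):
    # Scale so the first nonzero coordinate is 1 (p prime, so inverses exist).
    lead = next(x for x in v if x)
    inv = pow(lead, -1, p)
    return tuple((x * inv) % p for x in v)

lines = {normalize(v) for v in vectors}
# The Schubert-cell count: 1 + p + ... + p^{n-1} = (p^n - 1)/(p - 1).
assert len(lines) == sum(p ** i for i in range(n)) == (p ** n - 1) // (p - 1)
print(len(lines))  # 13 points in P^2(F_3)
```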
Which is most easily seen for finite groups...
David Roberts Oct 26 '10 at 2:44
@Todd: Can you explain this?
Martin Brandenburg Oct 26 '10 at 9:06
I will at my next available opportunity...
Todd Trimble♦ Oct 26 '10 at 11:46
Thank you, this is very interesting! :-)
Martin Brandenburg Jan 12 '11 at 23:24
The Monster Vertex Algebra (aka the Moonshine Module) categorifies Klein's $j$-invariant, in the sense that it is a graded vector space whose graded dimension is the $q$-expansion of $j-744$. More generally, vertex operator algebras often categorify modular functions and (quasi-)modular forms. This has something to do with invariance properties of torus partition functions.

The Monster Lie Algebra categorifies the Koike-Norton-Zagier $j$-function product identity, in the sense that the Weyl-Kac-Borcherds denominator formula of the Lie algebra is precisely this identity. More generally, physicists seem to use constructions with words like "BPS states" and "D-branes" in a way that categorifies automorphic forms on higher rank orthogonal groups (but I don't know how it works).
That's a great example, even for those who act like they don't like categories and "categorification".
Todd Trimble♦ Oct 26 '10 at 11:48
A small example, but I think it's nice. The generating function $C(t) = \sum_{n \ge 0} \frac{1}{n+1} {2n \choose n} t^{2n}$ of the Catalan numbers is defined by the identity $C(t) = 1 + t^2 C(t)^2$. So one might try to find a "Catalan object" in some category satisfying an isomorphism generalizing this identity. One can take the corresponding combinatorial species in the sense of Joyal, but another choice is to take the invariant subalgebra of the tensor algebra of the defining representation of $\text{SU}(2)$!
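Expanding $C(t) = 1 + t^2 C(t)^2$ coefficient-wise gives the convolution recurrence $c_{m+1} = \sum_{i=0}^{m} c_i c_{m-i}$, which can be checked against the closed form; a Python sketch (illustrative, not from the thread):

```python
from math import comb

# Catalan numbers from the convolution recurrence that mirrors
# C(t) = 1 + t^2 C(t)^2 coefficient by coefficient.
def catalan(n):
    c = [1]
    for m in range(n):
        c.append(sum(c[i] * c[m - i] for i in range(m + 1)))
    return c

# Cross-check against the closed form c_k = comb(2k, k) / (k + 1).
assert all(c == comb(2 * k, k) // (k + 1) for k, c in enumerate(catalan(6)))
print(catalan(6))  # [1, 1, 2, 5, 14, 42, 132]
```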
The plethystic monoidal product, or the substitution product of Joyal species, as a categorification of functional composition.
The empty category is a categorification of the empty set :-))
Here's an example I learned from Todd Trimble. Recall that the degree $k$ part of the exterior algebra on a vector space $V$ of dimension $n$ has dimension ${n \choose k}$, and similarly the degree $k$ part of the symmetric algebra has dimension ${n+k-1 \choose k} = \left( {n \choose k} \right)$. So one can think of these constructions as categorifying binomial coefficients. More precisely, the exterior algebra categorifies its Hilbert series $(1 + t)^n$, and the symmetric algebra categorifies its Hilbert series $\frac{1}{(1 - t)^n}$.

But there's more! The duality between the Hilbert series above manifests itself in the identity $\left( {-n \choose k} \right) = (-1)^k {n \choose k}$, which categorifies to the following statement: "the exterior algebra is the symmetric algebra of a purely odd supervector space." So isomorphisms in the category of supervector spaces categorify identities involving negative binomial coefficients.
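As a quick sanity check, the falling-factorial definition $\binom{a}{k} = a(a-1)\cdots(a-k+1)/k!$ makes it easy to verify the equivalent form $\binom{-n}{k} = (-1)^k \left(\!{n \choose k}\!\right)$ for small values; a Python sketch (illustrative, not from the thread):

```python
from fractions import Fraction
from math import comb, factorial

def gbinom(a, k):
    # Generalized binomial coefficient a(a-1)...(a-k+1) / k!
    num = 1
    for i in range(k):
        num *= a - i
    return Fraction(num, factorial(k))

# Check binom(-n, k) == (-1)^k * multiset(n, k) for small n, k, where
# multiset(n, k) = comb(n + k - 1, k) is dim of Sym^k of an n-dim space.
for n in range(1, 6):
    for k in range(6):
        assert gbinom(-n, k) == (-1) ** k * comb(n + k - 1, k)
print("identity verified for n < 6, k < 6")
```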
Yes, the degree 1 shift in the super-world can be seen as taking a negative, as I remark in my recent edit on the classifying space as reciprocal.
Todd Trimble♦ Oct 26 '10 at 15:55
Qiaochu, could you add this over on my question here: mathoverflow.net/questions/22750/… ? It sounds quite relevant. What are the power series (if there are any natural ones) that have the
other pieces of the twelvefold way as their coefficients?
Zev Chonoles Jan 5 '11 at 19:31
@Zev: I don't think there's much more to say beyond what's in Stanley's answer. I also don't really think the twelvefold way is the right way to look at the vector space side of things;
quantum mechanics seems to me a much better source of motivation, and it is also relevant to this supervector space stuff. I discuss this briefly at the end of my most recent blog post:
Qiaochu Yuan Jan 5 '11 at 20:16
The sphere spectrum as categorification of the integers, as remarked in a comment of Thomas Kragh below his answer here, and which I believe is due to Joyal.
Hm, that's a lot of answers from me. Should I stop now, or keep going?
No, keep going! Your answers are great, especially 1/g !
Jan Weidner Oct 26 '10 at 21:55
Okay then, here's another. 2-Hilbert spaces as a categorification of Hilbert spaces, and the categorified Gram-Schmidt process (which I first learned from James Dolan).

This may be used to derive a $\mathbb{Z}$-linear basis for the representation ring of $S_n$ that consists of permutation representations, hence a combinatorial alternative to the basis consisting of irreducible representations. The reference above sketches how this works in the case $Rep(S_4)$.
The canonical example in my mind is:
Sets ~> vector spaces ~> linear categories
This is not so trivial -- it is relevant to the topic of extended TQFTs.
I don't know if I have a favourite either, but here's another one:
crossed modules in $Grp$ are strict 2-groups aka group objects in $Cat$ aka category objects in $Grp$.
Of course, my favorite example is that the $2$-category of $2$-tangles (defined below) is a categorification of the category of tangles. The category of tangles is a monoidal category with objects that correspond to the non-negative integers; morphisms are generated by $|$, $\cup$, $\cap$, $X$ and $\bar{X}$. In this $1$-category, the Reidemeister moves (and zig-zag and $\psi$-move) are identities.

In the $2$-category of $2$-tangles, the $2$-morphisms are generated by $\{ \} \leftrightarrow O$ (birth or death), $| \ | \leftrightarrow \stackrel{\cup}{\cap}$ (saddle), and the aforementioned five Reidemeister moves (I, II, III, zig-zag, and $\psi$). These are subject to the full set of (35 or so) movie moves. The $2$-category of $2$-tangles is a braided monoidal $2$-category with duals. In fact, it is the free braided monoidal $2$-category with duals on one self-dual object generator (Baez and Langford).
The category of groupoids as a categorification of the ring of rational numbers. See this MO question and this n-category cafe post.
Is it perverse to just quote the original inception by Crane? An obvious nice collection would be the paper with Yetter on examples of categorification.

However, I actually like another paper of Yetter's better in this direction: categorical linear algebra.

Also, Rosenberg's noncommutative spectrum is a categorification: pdf-link. Not in the strict sense, but "morally". That would be undoubtedly my favorite.
But what is Rosenberg's reconstruction theorem (morally) a categorification of?
Todd Trimble♦ Oct 27 '10 at 0:30
I am saying the spectrum is the categorification of the spectrum in the classical case. By considering the main object of study, R-mod (corresponding to quasi-coherent sheaves on a scheme), we make the transition from sets to abelian categories. We replace our spaces with 'spaces' which are simply categories. In this sense, it is a categorification.
B. Bischof Oct 27 '10 at 2:50
The proof in this old paper is incomplete (which was first noticed by Gabber).
Martin Brandenburg Jan 12 '11 at 23:27
In my limited experience of categories, I liked Quillen's notion of homotopy fibre (his paper Higher Algebraic K-Theory I) for a functor between categories modelling the homotopy fibre of any map.
Hmmm... I don't think I know quite what you mean here. Quillen's Theorem B says that when all the categorical fibers are homotopy equivalent to one another under base change, then they do
indeed all model the homotopy fiber. But if they're not all equivalent, then Theorem B tells you nothing. In general I think it's a very hard (and unsolved) problem to find a category
whose classifying space is the homotopy fiber of a functor $C\to D$. On the other hand, the dual problem of modeling the homotopy colimit of a diagram of categories was solved by Thomason
in his thesis.
Dan Ramras Oct 26 '10 at 0:54
Re: Re: st: RE: problem with factor variable and margins.
From Philip Ender <ender97@gmail.com>
To statalist@hsphsun2.harvard.edu
Subject Re: Re: st: RE: problem with factor variable and margins.
Date Wed, 17 Mar 2010 11:01:06 -0700
Rich wrote:
> I think I have tracked down the problem. In addition to the factor variable nainc, I have a continuous variable nainc_ainc as a regressor (which is the indicator times a continuous variable). The margins command then got confused and assumed that nainc was an abbreviation for nainc_ainc, which did appear in the results and subsequent tests. When I rename nainc_ainc as n2ainc_ainc, my problem goes away -- a marginal effect for the discrete change in the factor variable nainc does indeed show up in the results.
>
> If I am right in diagnosing why the rename solves the problem, this means there is a bug in Stata's margins. It is not reporting an ambiguous abbreviation, it is simply picking one. Should I report it to Stata's tech staff, or is that something you do, Martin?
I don't think the problem is with -margins- but with the fact that you
created the interaction outside of your model. If you create the
interaction using the factor variables in the model then -margins- not
only identifies all of the terms but produces marginal effects that
take into account the interactions. Without reproducing your entire
model, it would look something like this:
. tobit dv i.naic c.ainc i.naic#c.ainc ...
Phil Ender
UCLA Statistical Consulting Group
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
*   http://www.ats.ucla.edu/stat/stata/
Object Pascal - Numeric Representation
The single bit is used only to represent a tiny piece of information. To get effective numbers, the computer combines the bits. The first combination of bits consists of grouping four consecutive bits, called a nibble.
To count the bits, we number them starting at 0, followed by 1, 2, and 3. The count starts with the most right bit. The first bit, on the right side of the nibble, is called the Low Order bit or LO
bit. This is also called the least significant bit. The last bit, on the left side of the nibble, is called the High Order bit or HI bit; it is also called the most significant bit. The bit on the
right side is counted as bit 0. The bit on the left side is counted as bit 3. The other bits are called by their positions: bit 1 and bit 2.
Once again, each bit can have one of two states. Continuing with our illustration, when a box is empty, it receives a value of 0. Otherwise, it has a value of 1. On a group of four consecutive bits,
we can have the following combinations:
This produces the following binary combinations: 0000, 0001, 0010, 0011, 0100, 0101, 0110, 0111, 1000, 1001, 1010, 1011, 1100, 1101, 1110, 1111 = 16 combinations. When using the decimal system, these
combinations can be represented as 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, and 15.
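The sixteen nibble values and their three representations can be generated mechanically; the short sketch below is illustrative Python (the tutorial itself is about Object Pascal) and prints each value in decimal, hexadecimal, and binary.

```python
# Print every nibble value in decimal, hexadecimal, and binary.
for value in range(16):
    print(f"{value:2d}   {value:X}   {value:04b}")

# The all-ones nibble is the maximum value a nibble can hold.
assert 0b1111 == 2 ** 4 - 1 == 15
```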
As you can see, a nibble is represented by a group of 4 (consecutive) bits. If you have a number that is less than four bits, such as 10 or 01 or 101, to complete and fill out the nibble consists of
displaying 0 for each non-represented bit. Consequently, the binary number 10 is the same as 0010. The number 01 is the same as 0001. The number 101 is the same as 0101. This technique is valuable
and allows you to always identify a binary number as a divider of 4.
When all bits of a nibble are 0, you have the lowest value you can get, which is 0000. When all bits are 1, this provides the highest value possible for a nibble; every other combination has at least one 0 bit. The lowest value, also considered the minimum value, can be represented in the decimal system as 0. The highest value, also considered the maximum, is 1111, which is 2^4 - 1 = 15 in decimal (2 represents the fact that there are two possible states, 0 and 1; 4 represents the fact that there are four bits). The total number of combinations is 2^4 = 16.
As you can see, the binary system is very difficult (simply because we are not familiar with it) to read when a value combines various bit representations. To make it a little easier, the computer
recognizes the hexadecimal representation of bits. Following the box combinations above, we can represent each 4-bit of the sixteen combinations using the decimal, hexadecimal, and binary systems as follows:
Undergraduate Presentations
Session I - Stratton Hall 202
4:00-4:10 Fibonacci and Nature
Cherline Beaubrun, Framingham State College
Many people go through life without realizing that mathematics is all around them in nature. Leonardo Fibonacci was not one of these people. Fibonacci saw the connection between mathematics and
nature in the simple 'multiplication' of rabbits. His insight helped him to develop the Fibonacci numbers. Even after hundreds of years mathematicians still find ways to apply the Fibonacci numbers
to their own work. In this presentation, I will discuss the Fibonacci numbers, Fibonacci’s golden numbers, the manifestation of these numbers in nature, and some of the ways other mathematicians have
applied these numbers.
4:15-4:25 Rene Descartes and the Scientific Revolution
Christopher Jackson and Thierry Nkouga, Framingham State College
Rene Descartes’ trust in human senses superseded his trust in European theology and spurred his contributions to the scientific revolution of the Renaissance, the historical transition from the
Medieval Era to the Modern Age. In this presentation, Descartes’ philosophy and some of his accomplishments in mathematics and physics will be presented in order to illustrate the progress of science
in Europe during the 17th Century. The basis of Descartes’ analytical geometry of a cone to produce graphs of an ellipse, circle, parabola, and hyperbola will be used to examine applications to
optics and light diffraction with lenses.
4:30-4:40 Maria Gaetana Agnesi: A Look at Her Life and Her Contributions to Mathematics
Cynthia George, Framingham State College
The early 1700’s saw the dawn of the Catholic Enlightenment and with it the birth of an intelligent and caring woman in Maria Gaetana Agnesi. The eldest of twenty-one children, Agnesi’s genius was
evidenced by her early command of modern languages, her gifted public speaking, and her appreciation for the study of mathematics. In this presentation, I will examine Agnesi’s dilemma: to apply her
knowledge to help educate others or to pursue her other passion, caring for the infirm and elderly by joining a convent.
4:45-4:55 Benoit Mandelbrot: A Life of Chaos to the Chaos of Life
Kevin Sylvia, Framingham State College
"Clouds are not spheres, mountains are not cones, coastlines are not circles and bark is not smooth, nor does lightning travel in a straight line." Benoit Mandelbrot. This is obvious, but often ignored, and some still continue to try to squeeze life into the over-simplistic models of Euclidean geometry. Fortunately, fractals have revolutionized many diverse fields. From physics, chemistry, and biology to art, economics, and topology, fractals have changed our view of the world around us. This new way of thinking is due to the 'father of fractal geometry', Benoit Mandelbrot, whose meandering and self-labeled chaotic life led to his search for order in chaos and a new description of so-called chaotic structures in nature. In this presentation, I will discuss Mandelbrot's progression toward an understanding of chaos and explore the fruition of his work: the Mandelbrot set.
Session II - Stratton Hall 308
4-4:10 Approximating 2D Diffusion: What every Student Doesn’t (Necessarily) Know
Dave Voutila, Worcester Polytechnic Institute
Undergraduate applied mathematics students usually spend the majority of their studies in a one-dimensional world when learning about diffusion and its approximation. The typical reasons for not
delving into two-dimensional diffusion studies are complexity and time constraints. But the approximation of two-dimensional diffusion using finite differences presents an excellent example of the
concepts of numerical stability and dissipation. As a result, they provide a good introduction to Von Neumann analysis. This presentation will provide a brief explanation of stability and dissipation
for the two-dimensional diffusion equation using Von Neumann analysis.
4:15-4:25 Making Spirals
Mary Servatius, Worcester Polytechnic Institute
We will give several interesting examples of spirals, explain how they can be described, and present a discrete method for creating them.
4:30-4:40 Compatible 0-1 Sequences
Jason Gronlund and John Hajeski, Worcester Polytechnic Institute
We define two 0-1 sequences to be compatible if it is possible to delete certain 0's from both sequences in order to make the two sequences complementary. The general conjecture is that the sequences will be compatible with positive probability when 1's occur within the sequences with some probability less than 0.5. We will consider small finite examples of compatible sequences.
4:45-4:55 Modeling the Dynamic of Tumor Growth in the Brain
Kim Ware, Worcester Polytechnic Institute
Currently, a person diagnosed with glioblastoma (GBM), a highly invasive cancer of the structural cells in the brain, has an extremely low chance of long-term survival. One of the obstacles in
treating GBM is the inability of current medical imaging technology to observe growth at an extremely small scale. Our task, in cooperation with IBM Corporation and researchers at Harvard University,
is to develop a continuum model that accounts for both the proliferation and migration of tumor cells. In formulating this model, we will use a system of partial differential equations to describe
the dynamics of the tumor and its effects on the surrounding brain tissue. In addition, we will employ finite difference methods to approximate the solution to the system. Our goal is to utilize the
model in understanding patterns in the initial stages of tumor growth.
Session III - Higgins Labs 116
4-4:10 Vertex Magic Total Labelings of Bipartite Graphs
Karthik Raman, Norwich University
We will describe two new computer programs. The first can generate all the possible vertex magic total labelings (VMTL) for K3, 3. We will show how this program is not computationally feasible for
looking for vertex magic labelings of larger complete bipartite graphs. The second program searches for a special VMTL. This program is also useful for large graphs. We will describe the labelings
that this program will find, and why they play a central role in the theory.
4:15-4:25 Predicting Academic Success Using CART
Charity Combs and Timothy Phelps, Norwich University
A general classification problem may be described as follows: Given a multivariate observation, which is known to belong to one of several populations, determine which population is most likely.
Traditional methods for dealing with this problem often lack flexibility. Observations are often assumed to be normally distributed, for instance. Traditional methods cannot work with categorical
data or incomplete data in a natural way. CART (Classification and Regression Trees) techniques are being applied to data from a small private undergraduate institution. Current processing and
results will be discussed.
4:30-4:40 Convergence Time for Different Selection Schemes in Genetic Algorithms
Jamie Kingsbery, Williams College
Genetic algorithms (GA's) are an incredibly useful way of finding good solutions to hard optimization problems in a time-and-space-efficient manner, but they are poorly understood from a
theoretical standpoint. One effort to better understand GA’s has emerged through the notion of convergence time. We will define and examine this idea, and compare the convergence times in GA’s that
use both ranking and proportional selection. We will see why the idea of convergence time is useful and at the same time limited in analyzing GA performance.
4:45-4:55 Using Linear Programming and Function Approximation to Optimize Arrival Routing
Mike Frechette, Gordon College
Minimizing long-run average cost per stage in stochastic network problems requires solving the countably infinite system of nonlinear equations generated by Bellman’s equation. One approximation
method transforms Bellman’s equation into a linear program, the system of linear constraints that yields the exact solution. We will explore a method of further simplifying this linear program by
fitting functions to Bellman’s equation. Our result is a small linear program that is computationally simple to solve for a reasonably accurate average cost.
Session IV - Higgins Labs 154
4:00-4:10 Fractals and the Magnetic Pendulum
Benjamin Morin, University of Maine
Placing a magnet on a pendulum and an array of magnets beneath it, the understood behavior of the pendulum gives way to unpredictable orbits and end states. Without knowledge of the exact initial
state, the end state cannot be predicted with any degree of accuracy. This is because the pendulum creates fractals along the boundaries of its basins of attraction. I have written software that has
allowed me to show that the boundary is a fractal.
4:15-4:25 Vectors, Scalars, and Motion
Tim Coburn, Framingham State College
For those looking to shave a few strokes off their golf game or master the intricacies of billiards, a good understanding of vectors, scalars, and motion could go a long way. One of the most famous
equations, Newton’s Second Law, simultaneously demonstrates the importance of vectors, scalars, and motion. In this presentation we will provide an overview of these notions, discuss the history of
the development and application of vectors, and emphasize the relationship to three-dimensional motion.
4:30-4:40 The Tribonacci Spiral
Caroline Mallary, Worcester State College
The ratio of successive entries of the Fibonacci sequence of integers converges to a number which can be used to construct a two-dimensional 'golden spiral'. The ratio of successive entries of the 'Tribonacci' sequence converges to another number which can be used to construct a similar spiral in three dimensions. I will discuss some of the properties of this spiral and the number on which it is based, which is approximately 1.839286755.
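The convergence mentioned in the abstract is easy to observe numerically; the Python sketch below (illustrative, not part of the talk) iterates the Tribonacci recurrence and reports the limiting ratio.

```python
# Ratio of successive Tribonacci terms: t(n) = t(n-1) + t(n-2) + t(n-3),
# with t(n)/t(n-1) converging to the "Tribonacci constant" ~1.839286755.
def tribonacci_ratio(steps):
    a, b, c = 0, 1, 1
    for _ in range(steps):
        a, b, c = b, c, a + b + c
    return c / b

print(round(tribonacci_ratio(40), 9))  # 1.839286755
```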
4:45-4:55 Putting the Odds in Your Favor
James Piette III
Due to the popularity of Texas Hold’em poker, many mathematical techniques have been implemented to help determine an optimal winning strategy. One important statistic is a hand’s probability of
winning (the hand’s winning percentage). This presentation illustrates a new technique that calculates the exact winning percentage of a hand. These values were tested via Monte Carlo simulations.
While not equal, the calculated and simulated results are similar enough to validate the approach.
Session V - Higgins Labs 218
4:00-4:10 Calculus at Work: Elastic Deflection of Support Beams
Michael Coleman and Frank Grimmer, Western Connecticut State University
Using a classic differential equation for the deflection curve of an elastic beam, we derive a general mathematical model for a beam subject to a non-symmetric system of loads. The governing
differential equation is then solved by a standard calculus method. Four arbitrary constants are determined from natural constraints on the deflection function and its derivative. Examples with specific material and geometric parameters for the beam are given and the absolute maximum of the deflection function is found. If time permits, possible applications in design problems will be discussed.
4:15-4:25 Strategy in the Design of Voting Systems
Warren Schudy, Worcester Polytechnic Institute
In most US elections every voter may vote for one candidate and the candidate receiving the most votes wins. This voting system has a number of flaws, most notably the 'spoiler effect'; two similar
candidates can lose to a third even though their common policies are supported by a majority. A number of voting systems solve this problem, including approval voting, Condorcet methods, and single
transferable vote. Various mathematical criteria are used to analyze these voting systems. The importance of strategy in the design of voting systems will be discussed.
4:30-4:40 Elusive Zeros Under Newton's Method
Trevor O'Brien, College of the Holy Cross
Given its iterative nature, it is natural to study Newton's method as a discrete dynamical system. Of particular interest are the various open sets of initial seeds that fail to converge to a root.
We shall examine a certain family of 'bad' polynomials that contain extraneous, attracting periodic cycles. The family relies on only one parameter, and we have developed and implemented computer
programs to locate values of this parameter for which Newton's method fails on a relatively large set of initial conditions. In doing so, we have discovered some rather surprising dynamical figures
including Mandelbrot-like sets, tricorns, and swallowtails. We have also uncovered analytic and numerical evidence that aids in explaining the existence of such figures.
4:45-4:55 Two Towers Are More Fun Than One
Karen Shively, Wheelock College
The Tower of Hanoi problem, in which a pile of different sized disks must move from one pole to another, is solved using a recursive process. In this variation of the Tower of Hanoi problem, two
piles of disks must trade places. How can you do this? Does the recursion still work? What is the least number of moves possible? Learn all this and more!
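For reference, the classical one-tower recursion that this variant builds on can be sketched as follows (illustrative Python; the two-tower variant discussed in the talk is left open here):

```python
# Classic Tower of Hanoi: move n disks from src to dst via aux.
def hanoi(n, src, dst, aux, moves):
    if n == 0:
        return
    hanoi(n - 1, src, aux, dst, moves)   # clear the top n-1 disks out of the way
    moves.append((src, dst))             # move the largest disk
    hanoi(n - 1, aux, dst, src, moves)   # stack the n-1 disks back on top

moves = []
hanoi(4, "A", "C", "B", moves)
assert len(moves) == 2 ** 4 - 1          # minimal move count is 2^n - 1
```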
mp_arc 96-317
96-317 D. Noja, A. Posilicano
THE WAVE EQUATION WITH ONE POINT INTERACTION AND THE ( LINEARIZED ) CLASSICAL ELECTRODYNAMICS OF A POINT PARTICLE (258K, postscript) Jun 26, 96
Abstract. We study the point limit of the linearized Maxwell--Lorentz equations describing the interaction, in the dipole approximation, of an extended charged particle with the electromagnetic
field. We find that this problem perfectly fits into the framework of singular perturbations of the Laplacian; indeed we prove that the solutions of the Maxwell--Lorentz equations converge --
after an infinite mass renormalization which is necessary in order to obtain a non trivial limit dynamics -- to the solutions of the abstract wave equation defined by the self--adjoint operator
describing the Laplacian with a singular perturbation at one point. The elements in the corresponding form domain have a natural decomposition into a regular part and a singular one, the singular
subspace being three--dimensional. We obtain that this three--dimensional subspace is nothing but the velocity particle space, the particle dynamics being therefore completely determined -- in an
explicit way -- by the behaviour of the singular component of the field. Moreover we show that the vector coefficient giving the singular part of the field evolves according to the
Abraham--Lorentz--Dirac equation.
Files: 96-317.ps
Squirrel-cage induction motor and generator
SQUIRREL-CAGE INDUCTION MACHINE
Motor and generator principle of operations
The rotating magnetizing field represented by the space vector B[m] (or, equivalently, by the magnetizing current I[m]) moves at the synchronous speed ω[s] with respect to a stator (or stationary) observer and at the slip speed ω[sl] = ω[s] - ω[m] with respect to a rotor observer. In the motor mode of operation, where ω[m] < ω[s], the rotor effectively moves backwards (clockwise) with respect to the field, inducing in each bar a voltage having the polarity indicated and a magnitude proportional to the slip velocity u and to the field strength acting on the bar (in accordance with the flux-cutting rule v = Blu). Since the magnetic field is sinusoidally distributed in space, so will be the induced voltages in the rotor bars. Ignoring the effects of rotor leakage (i.e. assuming that the rotor is purely resistive), the resulting rotor currents are in phase with the induced voltages and are thus sinusoidally distributed in space, varying sinusoidally in time at slip frequency; they may then be represented by the space vector I[r], which rotates at the slip speed ω[sl] with respect to the rotor and at synchronous speed ω[s] with respect to the stator. Because B[m] cannot change with a fixed stator input voltage (in accordance with Faraday's law), a stator space vector I[R] is created in order to compensate for the rotor effects, so that the resultant stator current becomes I[s] = I[R] + I[m]. The electromagnetic force exerted on a rotor bar can be derived from the f = Bli rule; in the present case of a motor it acts in the positive, or anticlockwise, direction (the same as the rotor speed). The resultant torque developed on the rotor also acts in the same direction. Follow the path taken by one rotor bar as it travels around, observing the polarity and magnitude (described by the size) of the bar current.
In the case of a generator where ω[m ]> ω[s], all polarities and directions are reversed as can be observed in the right figure (except for the magnetizing component).
© M. Riaz
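The slip relations quoted above (ω[sl] = ω[s] - ω[m], with rotor-bar quantities varying at slip frequency) can be sketched numerically. The 4-pole, 60 Hz, 1750 rpm figures below are illustrative assumptions, not values from the page:

```python
# Slip relations: per-unit slip s = (n_sync - n_rotor) / n_sync, and the
# rotor-bar currents vary at slip frequency f_slip = s * f_supply.

def slip(n_sync, n_rotor):
    """Per-unit slip s = (n_sync - n_rotor) / n_sync."""
    return (n_sync - n_rotor) / n_sync

f_supply = 60.0                    # supply frequency, Hz (assumed)
poles = 4                          # assumed pole count
n_sync = 120 * f_supply / poles    # synchronous speed, rpm (1800)
n_rotor = 1750.0                   # rotor speed, rpm (motor mode: n_rotor < n_sync)

s = slip(n_sync, n_rotor)
f_slip = s * f_supply              # frequency of the rotor-bar currents, Hz

print(round(s, 4), round(f_slip, 2))   # 0.0278 1.67

# Generator mode (rotor faster than the field): slip goes negative,
# consistent with the reversed polarities described above.
assert slip(n_sync, 1850.0) < 0
```

Positive slip in motor mode and negative slip in generator mode mirrors the reversal of voltage polarities and force directions described in the text.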
Re: bug in addrr
Ilya Zakharevich on Sat, 12 Oct 2002 12:54:51 -0700
On Tue, Oct 01, 2002 at 05:05:27PM +0200, Karim BELABAS wrote:
> 3) switching to base 2^BITS_IN_LONG exponents would enable us to reuse the
> integer multiplication code for floating point operations [ currently, they
> are disjoint sets of routines ]. In particular fast multiplication would be
> enabled at last for t_REALs.
Note one shortcoming of this scheme: currently, to convert a
float/double etc. to a t_REAL without a loss of precision, one can
calculate in advance how many words one needs. With this change one
needs to inspect the exponent first. This may introduce many subtle bugs...
Hope this helps,
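Ilya's point, that with word-based exponents the storage size of a converted double depends on the exponent, can be sketched as follows. This is an illustrative model only, not PARI's actual t_REAL code; the 64-bit word size and the function names are assumptions made for the sketch:

```python
# Illustrative model of the word-count subtlety, not PARI internals.
import math

BITS_IN_LONG = 64
MANTISSA_BITS = 53   # IEEE-754 double significand (including implicit bit)

def words_bit_exponent():
    # Base-2 exponents: the mantissa can always be stored left-aligned,
    # so the word count is known before inspecting the value.
    return math.ceil(MANTISSA_BITS / BITS_IN_LONG)

def words_word_exponent(exp2):
    # Base-2^BITS_IN_LONG exponents: the mantissa must be shifted so the
    # binary point lands on a word boundary; the shift, and hence the
    # word count, depends on the exponent. That is the point above.
    shift = exp2 % BITS_IN_LONG
    return math.ceil((shift + MANTISSA_BITS) / BITS_IN_LONG)

print(words_bit_exponent())        # 1 word always suffices for a double
print(words_word_exponent(0))      # aligned exponent: still 1 word
print(words_word_exponent(20))     # unaligned: mantissa straddles 2 words
```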
I'm confused
I am not familiar with these equations, but I try to give some opinions for discussion.
The problem is that there will be an unexpected term, i.e. exp(i*q), if the B(m)B(m+1) are transformed in K-space.
As I learned, B(m)B(m+1) denotes the transition process between state m and state m+1.
The complete formula is usually written in the sum of V(m,m+1)B(m)B(m+1), where V(m,m+1) is the transition matrix element.
When the formula is transformed in K-space, V(m,m+1) are also transformed as V(m,k,m+1,q), or written in V(k,q) for the shortness.
And what I am thinking is that the unexpected term exp(i*q) will be absorbed in V(k,q).
That means you can do the transformation in the case of (m,m+1) just like what you did in the case of (m,m). The difference for the m and m+1 only appears in the transition matrix elements.
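The extra phase factor can be checked numerically on a small ring. The sign convention and the 1/N normalisation below are my own choices for the sketch: with B(m) = (1/N) Σ_q b_q e^{iqm} and q = 2πj/N, one finds Σ_m B(m)B(m+1) = (1/N) Σ_q b_q b_{-q} e^{-iq}, i.e. the nearest-neighbour product picks up exactly an exp(i*q)-type phase.

```python
# Pure-Python numerical check of the exp(i*q)-type phase factor.
import cmath
import random

random.seed(1)
N = 8
q = [2 * cmath.pi * j / N for j in range(N)]
b = [complex(random.random(), random.random()) for _ in range(N)]

# real-space field on a ring of N sites
B = [sum(b[j] * cmath.exp(1j * q[j] * m) for j in range(N)) / N
     for m in range(N)]

# nearest-neighbour product in real space ...
lhs = sum(B[m] * B[(m + 1) % N] for m in range(N))
# ... equals the K-space sum carrying the extra phase e^{-iq}
rhs = sum(b[j] * b[-j % N] * cmath.exp(-1j * q[j]) for j in range(N)) / N

assert abs(lhs - rhs) < 1e-12
```

The (m,m) case would give the same identity without the exponential factor, which is the difference the posts above are discussing.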
Can U solve this?
Re: Can U solve this?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Can U solve this?
T[4] means that you need to get the 5th triangular number because T[0] is 1, the first triangular number.
Next question:
Numbers 3 4 5
Operations T +
Answer 46
Re: Can U solve this?
The first triangular number is 0, so T0 = 0.
But using your definition we can say
T3 + T4 + T5 = 46
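The two indexing conventions in this exchange can be sketched as follows; with the thread's T[0] = 1 convention, T3 + T4 + T5 is indeed 46:

```python
# Standard convention: T(n) = n*(n+1)/2, so T(0) = 0, T(1) = 1, T(2) = 3, ...
# The thread's convention starts one later: T[0] = 1, i.e. T'(n) = T(n+1).

def tri(n):
    """n-th triangular number with T(0) = 0."""
    return n * (n + 1) // 2

def tri_from_one(n):
    """Thread's convention, where T[0] = 1 is the first triangular number."""
    return tri(n + 1)

# T3 + T4 + T5 = 10 + 15 + 21 = 46, matching the stated answer.
total = tri_from_one(3) + tri_from_one(4) + tri_from_one(5)
print(total)  # 46
```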
Re: Can U solve this?
Oh. My classmate taught me that T thing.
Number 14 28 42
Operation + - ^2
Answer 1764
Re: Can U solve this?
42^2 = 1764
Re: Can U solve this?
New edition :
P * C.P = 5
P = Prime
C = Composite
Re: Can U solve this?
it can't be.
5 = 5*1 and there isn't any other factorisation
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda
Re: Can U solve this?
Hi julianthemath;
5 only has the factorization of 5 x 1. 1 is not composite or prime. Please post another problem this one has no solution.
Re: Can U solve this?
C.P means decimal number. Try again.
Re: Can U solve this?
julianthemath wrote:
New edition :
P * C.P = 5
P = Prime
C = Composite
Still not possible.
All P are such that P > 1. The smallest P is 2. And, the smallest C such that C > 0 is 4
Thus P * C.P >= 2 * 4.d = 8.0 + 2d (where 2d is 2 times the decimal part)
If you were trying to go for 2 * 2.5, then the 2 in 2.5 is not Composite, it is Prime.
Last edited by mathdad (2013-04-23 07:04:17)
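mathdad's bound can be brute-forced, under the assumption that "C.P" means a single-digit composite integer part C followed by a single prime decimal digit P:

```python
# Brute-force check: every P * C.P is at least 2 * 4.2 = 8.4, so none
# can equal 5. (Single-digit C and decimal-digit P are assumptions.)
primes = [2, 3, 5, 7]
composites = [4, 6, 8, 9]   # single-digit composites

products = [p * (c + d / 10) for p in primes for c in composites for d in primes]

assert min(products) > 5            # no combination reaches down to 5
assert abs(min(products) - 8.4) < 1e-9   # the minimum is 2 * 4.2
```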
Re: Can U solve this?
Oh, well.
No answers, no points.
Wanna ask, mathdad? bobbym? Or just keep me?
Re: Can U solve this?
Okay, let me see the answer. Please hide the answer cause someone else may still want to work on it.
Re: Can U solve this?
81 + X - 4 = 85
Solve for X.
Re: Can U solve this?
Thanks for the answer.
X = 0
Re: Can U solve this?
81 + X - 4 = 85
You might have an error.
Re: Can U solve this?
81 + X - 4 = 85
Add 4 to both sides:
81 + X = 89
Subtract 81 from both sides.
X = 8
Re: Can U solve this?
1777 - 777 + 777 * 777 = ?
Re: Can U solve this?
604729 is the answer.
Re: Can U solve this?
This time, there are points.
X + X = X
What makes this impossible to answer? 30 points
Re: Can U solve this?
Nothing, X can be 0.
Re: Can U solve this?
Oh. How do you like my signature? Funny, right?
Bobbym - 30
3 6 9 15 ? What is the fifth number?
Last edited by julianthemath (2013-04-26 19:01:50)
Re: Can U solve this?
I like 21 as the next number.
I worked a long time on bobbym - 30 but could not come up with anything.
Re: Can U solve this?
Bobbym - 60
If China has a population of 1,234,567,890,
India 123,456,789
USA 12,345,678
Indonesia 1,234,567
What is the population of Brazil?
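The digit-drop pattern behind the puzzle can be sketched as follows (the populations are of course fictitious):

```python
# Each "population" is the previous one with its last digit dropped,
# i.e. integer division by 10.
pops = [1234567890]           # "China"
for _ in range(4):            # India, USA, Indonesia, Brazil
    pops.append(pops[-1] // 10)

print(pops[-1])  # 123456 -> the puzzle's "Brazil"
```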
Re: Can U solve this?
Is it 123 456 ?
How are the chickenpox?
Re: Can U solve this?
No. I didn't have it, my cousins do. The start of the infection was during the afternoon I came home after my last exams. And now, some of my cousins are having the chickenpox. So, I just need to
stay inside.
Yes. It is 123,456.
Bobbym - 90
If Gangnam style were a country (by no. of views), what rank is it?
Solve the differential equation? - WyzAnt Answers
Solve the differential equation (2xy+y^3)dx+(x^2+3xy^2-2y)dy=0.
I got x^2*y+xy^3+yx^2+xy^3-y^2=C, but that's not the answer. Also, is Differential Equations taught after Linear Algebra? If so, is differential equations the same as partial differential equations? Is partial differential equations a different course from differential equations? Which of these two comes first?
Do you guys know how to input this equation on Ti-89 calculator to find the answer? I know how to do this problem but I want to know this technique to check other problems. Like you press F3, then C,
deSolve...How do I put this equation?
I don't have a TI-89 in my hands now, but as I remember, you have to go to the MATH menu. It opens a list of options. Scroll down the screen to get to the line "differential equations". As far as I know, it opens a screen in which you can type your equation.
2 Answers
You have to study linear algebra first, then a course in differential equations, because some methods for differential equations are based on methods of linear algebra (the use of determinants such as the Wronskian, finding characteristic numbers of DEs, etc.). Equations with partial derivatives are equations for functions of several variables, and these equations are usually part of mathematical physics courses.
If you notice that
y^3dx + 3xy^2dy = d(xy^3), 2xydx + x^2dy = d(x^2y) , 2ydy = d(y^2)
then your differential equation can be written in the form
d(xy^3 + x^2y) = d(y^2) or
xy^3 + x^2y = y^2 + C
You are close!
The form of the differential equation suggests a check for exactness.
(Df/Dx)dx + (Df/Dy)dy = 0 where 'D' is used as the symbol for partial differentiation.
Check if D(Df/Dx)/Dy = D(Df/Dy)/Dx. In this case 2x + 3y^2 = 2x + 3y^2, so the differential equation is exact.
Df/Dx = 2xy + y^3 is to be integrated with respect to x to get f(x,y) = yx^2 + xy^3 + f(y)
Df/Dy = x^2 + 3xy^2-2y integrates to f(x,y) = yx^2 +xy^3 - y^2 + f(x)
Comparing the two solutions, f(y) = -y^2 and f(x) = 0.
The solution is C = yx^2 + xy^3 - y^2. Although we work with partial derivatives, exact differential equations are usually taught in the ordinary differential equations course.
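The exactness argument above can be verified numerically with centred finite differences. This is a plain-Python sketch with an arbitrarily chosen test point:

```python
# With F(x, y) = x^2*y + x*y^3 - y^2, the partials of F should reproduce
# M = 2xy + y^3 and N = x^2 + 3xy^2 - 2y, and dM/dy should equal dN/dx.

def F(x, y):
    return x**2 * y + x * y**3 - y**2

def M(x, y):          # coefficient of dx
    return 2 * x * y + y**3

def N(x, y):          # coefficient of dy
    return x**2 + 3 * x * y**2 - 2 * y

h = 1e-6
x0, y0 = 1.3, 0.7     # arbitrary test point

dFdx = (F(x0 + h, y0) - F(x0 - h, y0)) / (2 * h)
dFdy = (F(x0, y0 + h) - F(x0, y0 - h)) / (2 * h)
assert abs(dFdx - M(x0, y0)) < 1e-6
assert abs(dFdy - N(x0, y0)) < 1e-6

# The exactness test itself: dM/dy == dN/dx (= 2x + 3y^2)
dMdy = (M(x0, y0 + h) - M(x0, y0 - h)) / (2 * h)
dNdx = (N(x0 + h, y0) - N(x0 - h, y0)) / (2 * h)
assert abs(dMdy - dNdx) < 1e-6
```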
In some college math departments, linear algebra is taught along with differential equations in the same class. Some colleges require linear algebra or differential equations or both depending on a
student's major. The differential equations class usually follows a 2- or 3-semester sequence in calculus. A basic linear algebra course could be taught that does not require prerequisites in calculus. The differential equations class usually covers DEs with one independent variable (ordinary differential equations). Maybe at the end of the course some partial differential equations are introduced, but they are usually covered in more advanced courses.
Richland Hills, TX Prealgebra Tutor
Find a Richland Hills, TX Prealgebra Tutor
...I've taught TAKS test preparation for the past four years for elementary age students in all subjects. This also includes study skills, focus development exercises, goal setting and test
anxiety strategies. Initially I studied the basics of nutrition and its effect on cognitive functionality and health out of necessity.
27 Subjects: including prealgebra, English, reading, writing
I have had a career in astronomy which included Hubble Space Telescope operations, where I became an expert in Excel and SQL, and teaching college-level astronomy and physics. This also involved
teaching and using geometry, algebra, trigonometry, and calculus. Recently I have developed considerable skill in chemistry tutoring.
15 Subjects: including prealgebra, chemistry, calculus, ASVAB
...I may recommend students who have advanced beyond my capabilities to my own teacher. In my undergrad, my major was philosophy, which required that I learn extensively about logical operations,
validity, and symbolic representation of arguments. I was taught classical and modern logics, sentential and symbolic logic.
14 Subjects: including prealgebra, reading, English, writing
...This knowledge of mathematics is tied in with a life time of industrial and research activity in lubricants and polymers (polyethylene). Through me the student can learn that proficiency in
mathematics leads to the ability to navigate the life stream of the world of work. In other words, academ...
7 Subjects: including prealgebra, statistics, algebra 1, algebra 2
...During my teaching tenure I tutored math and reading to third grade students. My tutoring method is to focus on concepts the student is currently studying in their classroom or concepts in
which the student is experiencing challenges. I have a Texas Teacher Certification to teach elementary grades 1-8.
19 Subjects: including prealgebra, reading, writing, algebra 1
Speed of Sound and Mach 1
What is the exact speed of sound? How fast is Mach 1 exactly?
- question from name withheld
What is the speed of sound at sea level in mph and in knots? Also what is the speed of sound at the Tropopause in mph and knots?
- question from Brittney
We get this question or variations thereof constantly. The subject was first addressed in a question we answered on how fast is Mach 2 in miles per hour. The conclusion of that explanation was that
the speed of sound is not a constant value. Instead, it changes depending on how high up in the atmosphere you are and on the temperature.
To account for this behavior, aerospace engineers make use of what is called the standard atmosphere. This standard atmosphere is based on scientific atmospheric data collected at different locations
within the atmosphere. This data was then used to create a series of equations that mathematically model the values of key atmospheric properties, such as temperature, density, and speed of sound.
The results of this model provide engineers with "average" atmospheric properties on a so-called standard day.
The standard atmospheric model tells us that the speed of sound, or Mach 1, at sea level is:
• 1,116.4 ft/s
• 340.3 m/s
• 761.2 mph
• 1,225.1 km/h
• 661.5 knots
However, this model assumes a "standard day" in which the air temperature is 59°F (15°C). If the actual temperature is higher, then the speed of sound will be higher as well. But the difference is
small enough that we can neglect it for most engineering purposes, and the above values are accepted around the world as the speed of sound at sea level.
But now we face another problem, because aircraft do not typically spend much time flying at sea level. They instead cruise tens of thousands of feet above the Earth's surface where the speed of
sound changes. This change in speed of sound is directly related to the change in temperature as altitude increases. This temperature change can be observed below.
Variation of temperature through the layers of the atmosphere
Furthermore, the temperature (T) and speed of sound (a) are directly related by the equation a = sqrt(γRT), where γ is the ratio of specific heats (about 1.4 for air) and R is the specific gas constant for air (about 287 J/kg·K).
Having established that temperature changes with altitude and speed of sound is directly proportional to temperature, it is now clear that the speed of sound changes as altitude increases. As
illustrated above, the temperature decreases at a linear rate up to about 11 km (6.8 mi) where the Tropopause begins. This region of the atmosphere is marked by constant temperature, and therefore
constant speed of sound. The Tropopause extends up to about 20 km (12.4 mi), so the speed of sound does not change throughout this entire 9 km (5.6 mi) thick region. Scientific measurements and the
standard atmospheric model have established that the speed of sound, or Mach 1, within this realm is:
• 968.1 ft/s
• 295.1 m/s
• 660.1 mph
• 1,062.3 km/h
• 573.6 knots
Finally, we'd like to point out once again that the speed of sound at any altitude up to 280,000 ft (86,000 m) can be easily calculated using our Atmospheric Properties Calculator.
- answer by Jeff Scott, 10 November 2002
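The tabulated values follow from the standard relation a = sqrt(γRT); the short sketch below reproduces the sea-level and Tropopause figures, taking γ = 1.4 and R = 287.05 J/(kg·K) as the standard values for dry air:

```python
import math

GAMMA = 1.4        # ratio of specific heats for air
R_AIR = 287.05     # specific gas constant for air, J/(kg K)

def speed_of_sound(T_kelvin):
    """Speed of sound in m/s for dry air at absolute temperature T."""
    return math.sqrt(GAMMA * R_AIR * T_kelvin)

a_sl = speed_of_sound(288.15)     # standard sea level, 15 deg C
a_trop = speed_of_sound(216.65)   # Tropopause, about -56.5 deg C

MPH_PER_MS = 2.23694
KNOTS_PER_MS = 1.94384

print(round(a_sl, 1))                  # 340.3 m/s at sea level
print(round(a_sl * MPH_PER_MS, 1))     # ~761.2 mph
print(round(a_sl * KNOTS_PER_MS, 1))   # ~661.5 knots
print(round(a_trop, 1))                # 295.1 m/s in the Tropopause
```

Both outputs match the article's tables, confirming that the quoted speeds are just the temperature profile fed through one formula.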