Unirational implies rationally connected
It is evidently a well-known fact that a unirational variety $X$ over an algebraically closed field (i.e. one admitting a dominant rational map from $\mathbb P^n$ to $X$) is rationally connected (by which I mean that any two points can be joined by a chain of rational curves). Numerous authors on birational geometry seem to state this as a remark, but don't indicate how one might prove it. The only proofs I have found of this fact (i.e. Fulton's Intersection Theory book, Example 10.1.6, and the paper of Samuel he quotes there) use the completion of local rings and power series. I was wondering if there is a purely algebraic (i.e. without completions) proof of this result.
In particular, by blowing up $\mathbb P^n$ at the indeterminacy locus of the rational map to $X$, we get a commutative diagram involving a birational, projective, surjective morphism from $\tilde{\mathbb P^n}$ to $\mathbb P^n$, our original rational map from $\mathbb P^n$ to $X$, and a projective, surjective morphism $\tilde{\mathbb P^n} \rightarrow X$. So if we can show that the blowup is rationally connected, then mapping to $X$ will give us our chain of rational curves connecting any two points of $X$. This reduces to the following affine case: we are left with showing that if $\pi: T\rightarrow \mathbb A^n$ is the blow-up of $\mathbb A^n$ along a subscheme $Z$, with exceptional divisor $E$, and $t\in E$, then there is a morphism $h: \mathbb A^1\rightarrow T$ with $h(0)=t$ but $h(\mathbb A^1)$ not contained in $E$. It is here that I was wondering if people knew of a way to proceed without using power series as Fulton and Samuel do.
I would also be interested in other proofs of this result.
ag.algebraic-geometry birational-geometry
You should emphasize your hypotheses more. The reason this would be merely a remark in many birational geometry papers is that it is completely trivial in case X is smooth and the characteristic is zero: there, being rationally connected is equivalent to being able to join a general pair of points by a rational curve, which is obvious for unirational varieties. – Jack Huizenga Dec 6 '11 at 21:47
@Jack Why is it completely obvious as you say? For example, if my points are outside of the image of the rational map from projective space, I don't see why it should be clear that I can get a rational curve connecting these points, or even a chain of curves. – HNuer Dec 6 '11 at 21:50
You need only join a general pair of points, not every pair. The general pair of points lies in the image of the map from projective space. It's not obvious why the various definitions of rationally connected are equivalent in characteristic zero, however; this typically requires deformation theory. See Debarre, "Higher Dimensional Algebraic Geometry," chapter 4, for this. – Jack Huizenga Dec 6 '11 at 21:54
I have edited my question to ask why any two points can be joined by a chain of rational curves. I agree that if we're using your definition then it's obvious. – HNuer Dec 6 '11 at 22:00
2 Answers
In case you are still interested in this question, here is a proof of the explicit statement at the end of the post.
Claim. Let $\pi: T\rightarrow \mathbb A^n$ be the blow-up of $\mathbb A^n$ along a subscheme $Z$ with exceptional divisor $E$. Then for any $t\in E$, there exists a morphism $h: \mathbb A^1\rightarrow T$ with $h(0)=t$ but $h(\mathbb A^1)$ not contained in $E$.
Proof: Assume that $\pi(t)=0\in \mathbb A^n$. Then the point $t\in E$ corresponds to a normal direction of $Z$ at $0$. Let $L\subseteq \mathbb A^n$ be a line pointing in that direction and let $\widetilde L=\pi^{-1}_*L\subseteq T$ be the strict transform of $L$ on $T$. Observe that by choice $L\not\subseteq Z$ and hence $\widetilde L\not\subseteq E$. Also note that $\pi|_{\widetilde L}: \widetilde L\to L$ is the blow-up of $L$ along $L\cap Z$, and hence it is an isomorphism. Therefore there exists a morphism $h: \mathbb A^1\rightarrow \widetilde L\subseteq T$ with $h(0)=t$ but $h(\mathbb A^1)=\widetilde L$ not contained in $E$. $\square$
Assume that $X$ is unirational. Then we have a dominant rational map $$\phi:\mathbb{P}^{n}\dashrightarrow X.$$ Let $x,y \in X$ be two general points and let us consider two points $p,q$ in $\phi^{-1}(x),\phi^{-1}(y)$ respectively. Take the line $L$ generated by $p$ and $q$. Then $\phi_{|L}:L\rightarrow X$ is a finite morphism and its image $C = \phi(L)$ is a rational curve through $x$ and $y$. So $X$ is rationally connected.
Why is $\phi|_L$ a morphism instead of a rational map? – S. Carnahan♦ Oct 16 '13 at 15:46
@Scott: $L$ is a smooth projective curve. There is no room for indeterminacy. (I suppose one may assume that $X$ is projective as well). – Sándor Kovács Oct 16 '13 at 16:54
Yes, $\phi_{|L}$ is a morphism just because $L$ is a smooth curve. I am assuming $X$ projective. – CamSar Oct 16 '13 at 17:03
I acknowledged, after Jack's comments above, that your proof works for the usual definition of rationally connected, namely that two general points can be connected by a rational curve.
However, I edited the question to more clearly reflect what I was asking which was why ANY two points can be connected by a chain of rational curves. – HNuer Oct 16 '13 at 21:10
If X is projective and two general points can be connected by a rational curve then any two points can be connected by a rational curve. – CamSar Oct 23 '13 at 17:04
jocuri de descarcat pe calculator grepolis
Search results: "jocuri de descarcat pe calculator grepolis" (Romanian for "Grepolis games to download to your computer")
#1: Software » Multimedia : VAT Sales Tax Calculator 2.1 Multilingual
Author: tttoooommm | 9-04-2014, 07:41 |
VAT Sales Tax Calculator 2.1 Multilingual | MacOSX | 1.4 MB
Simple and easy to use VAT / Sales Tax calculator. Just enter one value and all other values are calculated instantly. Two programmable keys allow you to store tax rates.
#2: Software » Multimedia : BlueMATH Calculator 1.2
Author: tttoooommm | 24-03-2014, 09:13 |
BlueMATH Calculator 1.2 | 0.4 MB
BlueMath Calculator is a flexible desktop calculator and math expression evaluator. It can calculate multiple expressions simultaneously, showing the results as you type. Type the mathematical expressions in a similar way as you would write them on paper.
#3: Software » MAC OS : Calculator Handy v1.3 MacOSX Retail-CORE
Author: zerocoolvn | 15-02-2013, 23:44 |
Release name: Calculator.Handy.v1.3.MacOSX.Retail-CORE
Size: 1.3 MB
Date: 12.02.2013
#4: Software » MAC OS : Magic Calculator v2.7 (MacOSX)
Author: tttoooommm | 29-01-2014, 05:35 |
Magic Calculator v2.7 | MacOSX | 2.2 MB
Magic Calculator is a scientific calculator that allows you to write your expressions in infix notation. A Magic Calculator document looks like a text editor. Each line of the text is a calculation. Just type expressions and get the results immediately. If you make a change, everything is automatically updated. It also allows you to define variables and functions of one or more variables. With the function plotter, you can visualise the graph of the functions you have defined.
"GEM" -- Analytical Poisson-Boltzmann
Electrostatic interactions are a key factor determining the properties of biomolecules. The ability to compute the electrostatic potential generated by a molecule is often essential for understanding the mechanism behind its biological function, such as catalytic activity or ligand binding. To obtain the electrostatic potential everywhere in space, the (linearized) Poisson-Boltzmann (PB) equation, a second-order PDE, is usually solved, traditionally by numerical approaches. We are working on a new theory, ALPB, which allows one to compute the electrostatic potential around biomolecules orders of magnitude faster (and with much less memory) than the traditional approach based on numerical solution of the PB equation. A software package, GEM, has been developed, mainly by John Gordon, to visualize and manipulate electrostatic potentials. Remarkably, it was the ability to visualize and compare approximate analytical potentials with the exact and numerical references that allowed us (this part is mostly due to Andrew Fenley), after much trial and error, to understand and capture the key physics of the problem in the form of a very simple analytical formula.
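For reference, the linearized Poisson-Boltzmann equation referred to above can be written in the standard form (a sketch; unit conventions and boundary conditions vary between implementations):

$$\nabla \cdot \left[\,\epsilon(\mathbf{r})\,\nabla\phi(\mathbf{r})\,\right] \;-\; \bar{\kappa}^{2}(\mathbf{r})\,\phi(\mathbf{r}) \;=\; -4\pi\,\rho(\mathbf{r}),$$

where $\phi$ is the electrostatic potential, $\epsilon$ the position-dependent dielectric constant, $\rho$ the fixed charge density of the molecule, and $\bar{\kappa}^{2}=\epsilon\kappa^{2}$ the screening term due to mobile ions.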
A textbook revolution: J. Deighton and Sons and the reform of mathematics in early nineteenth-century Cambridge
Jon Topham
University of Cambridge
Historians have long considered the transformation which took place in the mathematical studies of the University of Cambridge in the early nineteenth century to be a critical event in the
development of British mathematics and mathematical physics, with far-reaching consequences for subsequent generations of students which included William Thomson and James Clerk Maxwell. As a result,
there exists an extensive body of literature which examines the causes of this transformation, focusing particularly on the activities of the members of the Analytical Society - a short-lived student
society formed in 1812 by, among others, Charles Babbage, William Herschel, and George Peacock. This generation of Cambridge students arrived in a University where the dominant pedagogical framework
prescribed by the Senate House examination embodied a geometrical, intuitive approach to mathematical studies which was presented as standing in a Newtonian tradition, and which repudiated in
partisan terms the analytical approach by which the great innovations of Continental, and particularly French, mathematicians of the later eighteenth century had been achieved.
However, while the formation of the Analytical Society was undoubtedly a significant moment in the history of the mathematical studies of the University, it is generally agreed that it was more so
for its impact on the small coterie of its members than for any direct impact on the studies of the University as a whole. The introduction of a more analytical approach to mathematics in this wider
context, it is considered, owed more to changes in pedagogy effected in the later years of the decade--in particular, the introduction by George Peacock of the continental differential notation into
the Senate House examination in 1817 and 1819, and the introduction by William Whewell and others of new textbooks incorporating analytical approaches.
While textbooks are always likely to be prominent in any account of curricular reform, such accounts tend to be written chiefly from the perspective of the texts ultimately produced, with little
attention accorded to the processes of book production and distribution. Thus, for all the references to textbooks in the literature on the analytical revolution in Cambridge mathematics, only one
author mentions the leading Cambridge publishing firm, the house of Deighton, which was responsible for publishing most of the textbooks circulating in the University, and even then, it is only to
observe that it would be worth a study. My purpose in this paper was to examine the changes which occurred in Cambridge mathematical education in the second decade of the nineteenth century in the
context of the local book trade. Whilst not demanding a major revision of the received understanding of these changes, I argued that such an approach provides important additional insights into the
reform movement.
I began the paper by outlining the context of the Cambridge book trade, concentrating on the special place of textbooks in the local market, and on John Deighton's rise to pre-eminence in that market.
I next re-examined the activities of the Analytical Society in the context of the local culture of print, pointing out the extent to which the young students were divorced from the machinery of
pedagogic publishing. Finally, I briefly contrasted their experience with that of William Whewell, who was able to manipulate the machinery of pedagogic publishing to make an effectual contribution
to the reform of the curriculum.
Table of Contents - Joint Meeting between the Textbook Colloquium and The British Society for the History of Science held on January 10th 1998 at Leeds University.
proof by existence??
September 8th 2012, 04:26 AM #1
proof by existence??
Can anybody help me with this problem?
I can show, by disproof by counterexample, that the statement "n^2 - 7n + 49 is prime for all integers n >= 1" is false:
try n = 1: the value is 43, which is prime;
try n = 2: the value is 39, which is composite.
So n^2 - 7n + 49 is not prime for some integer n >= 1.
But I cannot seem to express how this should be done as a proof by existence.
Thanks a lot.
September 8th 2012, 06:04 AM #2, by HallsofIvy (MHF Contributor, joined Apr 2005)
Re: proof by existence??
Seriously, are you even trying to do these problems yourself? Do you not see that $-7n + 49 = 7(-n + 7)$ is divisible by 7 for all n? What does that mean for $n^2$?
No, that is not a "disproof by counter example" because the problem does NOT assert that " $n^2- 7n+ 49$ is divisible by 7 for all n". The problem is to show that there exists at least one such
n. The fact that there exist n for which it is NOT true is irrelevant.
You have posted a large number of widely diverse problems in which you seem to have no idea what they are about or even what they are asking. What is going on here?
Last edited by HallsofIvy; September 8th 2012 at 06:07 AM.
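A concrete instance of the step this hint points at: since $-7n + 49 = 7(7 - n)$, we have $n^2 - 7n + 49 \equiv n^2 \pmod 7$, so the expression is divisible by 7 exactly when $n$ is; for example, $n = 7$ gives $49 - 49 + 49 = 49 = 7^2$, which is divisible by 7 and not prime.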
September 8th 2012, 09:58 PM #3, by rcs
Re: proof by existence??
HallsofIvy, sorry, I was not expecting your reply... I never posted it for you, actually... you might not know the answer either.
If you have nothing good to say and your reply is just an insult, well, that is not the reason I'm here. I'm here for an answer, a guide, or help, not that. If you have nothing good to tell me, better not to say it. You are not the only helper who could answer or help me. I would never have said all these things if it had not been done to me too.
Thanks and God bless.
Be good at all times,
according to Confucius: "Do not do to others what you would not like yourself. Then there will be no resentment against you, either in the family or in the state."
Thanks, and sorry too.
Last edited by rcs; September 8th 2012 at 10:09 PM.
Let f"(x)=4x^3-2x and let f(x) have critical values -1, 0, and 1. Determine which critical values give a relative maximum.
Re: Columnar Theory of Foot Function
Demp PH: A mathematical model for the study of metatarsal length patterns. JAPA 54:2, 1964, pp. 107-110.
Demp PH: Mathematical medicine. JAPA 60:9, 1970, pp. 352-353.
Demp PH: The metatarsal hyperbola and the pathomechanical forefoot. Current Podiatry 20:3, 1971, pp. 15-17.
Demp PH: A numerical taxonomy for evaluating the angular biomechanics of the human metatarsus. Current Podiatry 24:5, 1975, pp. 9-11.
Demp PH: Biomechanical optimality and the mathematical measurement of diagnostic patterns in the human foot. Arch Pod Med Foot Surg 3:1, 1976, pp. 11-21.
Demp PH: Biomechanical foot roentgenometry. In: Yearbook of Podiatry 1978-1979, ed. TH Clarke. Futura Publ. Co., New York, 1978, pp. 64-70.
Demp PH: An anthropometric index for screening foot dysfunction. Current Podiatry 28:6, 1979a, pp. 11-13.
Demp PH: A mathematical taxonomy to evaluate the biomechanical quality of the human foot. M.S. thesis (unpublished), Polytechnic Institute of New York, USA, June 1979b.
Demp PH: A correlation of length, width, height and pathomechanical quality in the human foot. Current Podiatry 31:8, 1982, p. 23.
Demp PH: Biomechanical profile analysis of the foot radiograph based on mathematical modelling. Current Podiatry 32:10, 1983a, pp. 15-17.
Demp PH: Mathematical modelling in podiatric surgery: a new approach to biomechanical evaluation. J Acad Amb Foot Surg 1:1, 1983b, pp. 72-73.
Demp PH: A mathematical taxonomy to evaluate the biomechanical quality of the human foot. Mathl Comput Modelling 11, 1988, pp. 341-345.
Demp PH: A mathematical taxonomy to evaluate the biomechanical quality of the human foot. Mathl Comput Modelling 12, 1989, pp. 777-790.
Demp PH: Using conic curves to classify pathomechanical biostructure of the metatarsus. Mathl Comput Modelling 14, 1990a, pp. 668-673.
Demp PH: Pathomechanical metatarsal arc: radiographic evaluation of its geometric configuration. Clin Pod Med Surg 7:4, 1990b, pp. 765-776.
Demp PH: Numerical diagnosis of pathoanatomy in the human forefoot: a pilot study. The Lower Extremity 1:2, 1994, pp. 133-138.
Demp PH: Geometric models that classify structural variation of the foot. JAPMA 88:9, 1998, pp. 437-441.
I'll come back to your radiographic interpretation of subluxation when I have more time.
Vignette 18
Continued Fractions
A continued fraction is a fractional expression of the form

a[0] + 1/(a[1] + 1/(a[2] + 1/(a[3] + ...)))

where a[0], a[1], a[2], a[3], ... are integers, all positive, with the possible exception of a[0]. As a convenient shorthand notation, we will denote the continued fraction above by

[a[0]; a[1], a[2], a[3], ...].
Finite Continued Fractions
A finite continued fraction -- an expression like that above that actually ends -- represents a rational number. As an example,

[2; 3, 1, 4] = 2 + 1/(3 + 1/(1 + 1/4)) = 2 + 5/19 = 43/19.
What is perhaps more surprising, every rational number can be represented as a finite continued fraction! To express a given rational number as a continued fraction, we need only perform ordinary
division. For example, to express 221/41 as a continued fraction, we divide 221 by 41 to obtain the quotient 5 and remainder 16. That is,

221/41 = 5 + 16/41 = 5 + 1/(41/16).

Dividing in the same way at each stage -- 41 by 16 (quotient 2, remainder 9), then 16 by 9 (quotient 1, remainder 7), then 9 by 7 (quotient 1, remainder 2), then 7 by 2 (quotient 3, remainder 1), and finally 2 by 1 (quotient 2, remainder 0) -- leads to the continued fraction

221/41 = [5; 2, 1, 1, 3, 2].

Notice that the numbers 2, 1, 1, 3 and 2 are precisely the quotients in the successive divisions above.
Try this process yourself, with the fraction 23/16.
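The repeated-division procedure is exactly the Euclidean algorithm, so it is easy to automate. A minimal sketch in Python (the function name is just illustrative):

    def continued_fraction(p, q):
        """Return the continued fraction terms [a0, a1, a2, ...] of p/q."""
        terms = []
        while q != 0:
            a, r = divmod(p, q)   # one division step: quotient and remainder
            terms.append(a)
            p, q = q, r           # next step divides the old divisor by the remainder
        return terms

    print(continued_fraction(221, 41))   # [5, 2, 1, 1, 3, 2]
    print(continued_fraction(23, 16))    # [1, 2, 3, 2]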
Infinite Continued Fractions
Infinite continued fractions involve more than ordinary arithmetic, because the chain of fractions never ends. A formal meaning can be given to an infinite continued fraction in the following way.
The value of the infinite continued fraction [a[0]; a[1], a[2], a[3], ...] is defined to be the limit of the sequence of finite continued fractions

[a[0]], [a[0]; a[1]], [a[0]; a[1], a[2]], [a[0]; a[1], a[2], a[3]], ...

each of which makes sense because it is finite. A somewhat technical argument shows that every infinite continued fraction represents an irrational number. Moreover, every irrational number can be
expressed as an infinite continued fraction! This fact was proven by the prolific Swiss mathematician Leonhard Euler (1707-1783). Euler also derived many interesting continued fraction formulas, some
of which we will see below.
The Golden Ratio
The most basic of all continued fractions is the one using all 1's:

[1; 1, 1, 1, ...] = 1 + 1/(1 + 1/(1 + 1/(1 + ...)))

and so a natural question concerns what the value of this continued fraction is. If we let x denote this value, then

x = 1 + 1/(1 + 1/(1 + 1/(1 + ...))) = 1 + 1/x.

(In the equation above, notice that the denominator of the fraction is, in fact, identical to x, giving the result that x is actually embedded within the expression for x!) Multiplying the equation by x gives us x^2 = x + 1 or, equivalently,

x^2 - x - 1 = 0.

Since x must be a solution of this equation, and clearly x > 1, the quadratic formula gives x = (1 + sqrt(5))/2 = 1.6180339887..., the famous golden ratio or golden mean. This number pops up over and over in unexpected places in mathematics. The ancient Greeks discovered it in certain geometric
constructions, and used it extensively in their architecture. It occurs repeatedly in nature, as a limiting value of sequences of ratios in certain measurements of flowers and other plant life.
Psychological tests have indicated that the most aesthetically pleasing size for a rectangle is one for which the ratio of length to width is the golden ratio. The golden ratio appears in art
throughout the ages, from Leonardo da Vinci to Piet Mondrian. So it is certainly fitting that this same number should arise when considering the most basic of continued fractions!
Some Other Continued Fractions
Euler found that

1 + sqrt(2) = [2; 2, 2, 2, ...].

This is actually easy to see, for if we let x denote the value of this continued fraction, then we have

x = 2 + 1/(2 + 1/(2 + 1/(2 + ...))) = 2 + 1/x.

Noticing that the embedded part is identical to x, we see that x^2 = 2x + 1, whose positive solution is x = 1 + sqrt(2).
Euler also discovered how to write an infinite series as a continued fraction, and vice versa. Using this technique, he found a number of interesting continued fractions involving the number e -- the number used as the base of the natural logarithm function. Among the formulas that he discovered are:

e = [2; 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, ...]

(e - 1)/(e + 1) = [0; 2, 6, 10, 14, 18, ...]
Further Exploration
• Eli Maor, e: The Story of a Number. In JCU Library, call number QA247.5 .M33
• H. E. Huntley, The Divine Proportion: A Study in Mathematical Beauty. In JCU Library, call number QA466 .H85
Copyright © 2000 by Carl R. Spitznagel
Important Mission: Please Find The Pattern
Re: Important Mission: Please Find The Pattern
I agree; if that is what it is to be used for, then I think he should do it himself. You should not try to enlist other people's unwitting help in unethical activities.
International Scientific Conference
III Hurwicz Workshop on Mechanism Design Theory
Mathematical Center for Science and Technology PAS
and Department of Applied Mathematics, SGGW
Warsaw, July 1 – 2, 2011
The 2011 Hurwicz Workshop on Mechanism Design Theory is a continuation of the initiative started in 2009 and continued in 2010 of holding an annual conference to honor the 2007 Nobel Prize Laureate
in Economics, professor Leonid Hurwicz. Leonid Hurwicz lived in Warsaw until 1938 and studied at the University of Warsaw. He frequently visited Poland in the 1990s. Hurwicz is often credited with introducing a rigorous mathematical approach to economic analysis. He received the Nobel Prize in Economic Sciences in 2007 for his fundamental contributions to the theory of the design of economic mechanisms. The theory of mechanism design relies heavily on mathematical methods of functional analysis, differential equations, differential topology, dynamical systems, etc. He has made important
contributions to mathematics as well as economics, in particular, to non-linear programming. The previous Hurwicz Workshops had an interdisciplinary focus with presentations on topics ranging from
macroeconomic issues to mathematical game theory and stochastic finance.
The aim of the 2011 Hurwicz Workshop is to bring together scholars from Poland and abroad who specialize in mathematical approach to economic theory, including, but not necessarily confined to the
mechanism design theory. The Workshop will provide them with a forum for presenting their research to an audience with expertise in mathematics and economics as well as for informal discussions.
Continuing the tradition of the previous 2009 and 2010 Hurwicz Workshops the program includes a Hurwicz Memorial Lecture, which will be delivered this year by professor Roger B. Myerson of the
Department of Economics, University of Chicago, who shared the 2007 Nobel Prize with Leonid Hurwicz and Eric Maskin. This will give participants of the Workshop a unique opportunity to learn a
first-hand account of the mechanism design theory. We hope that the workshop will contribute to popularization of the mathematical approach to economic analysis in Poland.
Professor Roger B. Myerson's visit to Warsaw will be sponsored by the Polish Financial Supervision Authority (KNF).
See also the page of the Banach Center and Second Announcement.
Calculate max force that can be applied to a beam flange?
em07189: Maximum moment on the I cross section occurs underneath the wheels closest to the beam midspan when one axle is 140 mm from the beam midspan; and this moment is My = 1215(2*F1), where My =
moment about the I-beam cross section lateral (strong) axis (N*mm), and F1 = downward load applied by one wheel (N). Therefore, the beam longitudinal bending stress, at the bottom face of the bottom
flange, is sigma_x = My*(0.5*h)/I, where h = I-beam cross section height. The flange bending moment about the x axis, per Q_Goest, is Mx = -0.5*b*F1; therefore, the flange lateral bending stress, at
the bottom face of the bottom flange, is sigma_y = -6*F1/t^2, where t = I-beam flange thickness. Notice sigma_x is positive, and sigma_y is negative. Combine sigma_x and sigma_y using von Mises,
which will give sigma_vm. Using yield factor of safety FSy = 2.0, suggested by Q_Goest, ensure sigma_vm < Sty/FSy, where Sty = beam material tensile yield strength.
Transverse shear stress on a free face is zero; therefore, you can check peak shear stress at the flange midplane separately. Using the model suggested by Q_Goest, peak shear stress is tau = 1.50*F1/
A = 3*F1/(b*t), where b = I-beam total flange width. Ensure tau < 0.577*Sty/FSy.
Use consistent units for all quantities; e.g., N, mm, MPa. Feel free to increase FSy if you think the design code requires a higher FSy. Follow a design code if it varies from the above.
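A minimal numerical sketch of the check described above; all input values here are placeholders, not numbers from the thread, so substitute the real beam data:

    import math

    # Placeholder inputs (consistent units: N, mm, MPa)
    F1 = 2000.0    # downward load applied by one wheel, N
    h = 200.0      # I-beam cross-section height, mm
    I = 2.0e7      # second moment of area about the strong axis, mm^4
    b = 100.0      # total flange width, mm
    t = 15.0       # flange thickness, mm
    Sty = 250.0    # tensile yield strength, MPa
    FSy = 2.0      # yield factor of safety

    My = 1215.0 * (2.0 * F1)          # strong-axis moment under the wheels, N*mm
    sigma_x = My * (0.5 * h) / I      # longitudinal bending stress, MPa (positive)
    sigma_y = -6.0 * F1 / t**2        # lateral flange bending stress, MPa (negative)

    # von Mises combination of the two normal stresses (shear is checked separately)
    sigma_vm = math.sqrt(sigma_x**2 - sigma_x * sigma_y + sigma_y**2)
    tau = 3.0 * F1 / (b * t)          # peak transverse shear at the flange midplane

    print(f"sigma_vm = {sigma_vm:.1f} MPa  (limit {Sty / FSy:.1f} MPa)")
    print(f"tau      = {tau:.1f} MPa  (limit {0.577 * Sty / FSy:.1f} MPa)")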
Homework Help
Posted by y912f on Thursday, January 29, 2009 at 5:11pm.
1. What is the shortest possible time in which a bacterium could travel a distance of 8.4 cm across a Petri dish at a constant speed of 3.5 mm/s?
the answer is 24s. how?
3. An athlete swims from the north end to the south end of a 50.0 m pool in 20.0 s and makes the return trip to the starting position in 22.0 s.
a. What is the average velocity for the first half of the swim?
the answer is 2.50m/s to South
b. What is the average velocity for the second half of the swim?
the answer is 2.27m/s to North
c. What is the average velocity for the roundtrip?
the answer is 0.0 m/s
I have the answers; I need to know how to GET to the answers... step by step please.
• Physics - y912f, Thursday, January 29, 2009 at 7:50pm
can anyone answer these two questions??
• Physics - tee, Thursday, January 29, 2009 at 8:49pm
convert cm to mm
8.4 cm times 10mm per cm equals 84mm
84mm divided by 3.5mm per s equals 24s
calculate average velocity for the first half of trip
50.0m divided by 20.0 equals 2.5m per s
calculate average velocity for 2nd half of trip
50.0 m divided by 22.0 s equals 2.27 m per s
average velocity for the round trip is
net displacement divided by total time
50 m south minus 50 m north equals 0 m of net displacement, and 20.0 s plus 22.0 s equals 42.0 s of total time, so 0 m divided by 42.0 s equals 0 m per s
I'm not certain about this one. But, I hope it helps.
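A quick numeric check of all three parts (a sketch; south is taken as the positive direction):

    pool = 50.0                      # pool length, m
    t_south, t_north = 20.0, 22.0    # times for each half, s

    v_first = pool / t_south                         # 2.5 m/s, to the south
    v_second = pool / t_north                        # ~2.27 m/s, to the north
    v_round = (pool - pool) / (t_south + t_north)    # net displacement / total time = 0.0 m/s
    print(v_first, v_second, v_round)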
Polygons are named according to the number of sides they have. The first part of the word polygon ('poly') means 'many' and the second part ('gon') means 'angle' or 'corner', so a polygon is a two-dimensional shape with straight sides. (Forget all you may have heard about opening the cage door and Polly's gone!)
Most of the polygons shown in the diagram are regular polygons with the exception of the quadrilateral. That is, they have all their sides the same length and all their angles the same size. You need
to know the names of the polygons from the triangle up to the decagon (10 sides) and the dodecagon (12 sides). Many of the others do have names, but you won't need them for your GCSE maths.
The reason we have shown mainly regular polygons is that this is the way they are normally used in exam questions, for example 'Find the volume of this prism, when the cross section of the prism is a regular hexagon'. But it is important to remember that any shape with six straight sides is a hexagon, even if it is a very irregular one.
Name Number of sides
Triangle 3
Quadrilateral 4
Pentagon 5
Hexagon 6
Heptagon 7
Octagon 8
Nonagon 9
Decagon 10
Dodecagon 12
9.5. fractions — Rational numbers
The fractions module provides support for rational number arithmetic.
A Fraction instance can be constructed from a pair of integers, from another rational number, or from a string.
class fractions.Fraction(numerator=0, denominator=1)
class fractions.Fraction(other_fraction)
class fractions.Fraction(string)
The first version requires that numerator and denominator are instances of numbers.Rational and returns a new Fraction instance with value numerator/denominator. If denominator is 0, it raises a
ZeroDivisionError. The second version requires that other_fraction is an instance of numbers.Rational and returns an Fraction instance with the same value. The last version of the constructor
expects a string instance. The usual form for this string is:
[sign] numerator ['/' denominator]
where the optional sign may be either ‘+’ or ‘-‘ and numerator and denominator (if present) are strings of decimal digits. In addition, any string that represents a finite value and is accepted
by the float constructor is also accepted by the Fraction constructor. In either form the input string may also have leading and/or trailing whitespace. Here are some examples:
>>> from fractions import Fraction
>>> Fraction(16, -10)
Fraction(-8, 5)
>>> Fraction(123)
Fraction(123, 1)
>>> Fraction()
Fraction(0, 1)
>>> Fraction('3/7')
Fraction(3, 7)
>>> Fraction(' -3/7 ')
Fraction(-3, 7)
>>> Fraction('1.414213 \t\n')
Fraction(1414213, 1000000)
>>> Fraction('-.125')
Fraction(-1, 8)
>>> Fraction('7e-6')
Fraction(7, 1000000)
The Fraction class inherits from the abstract base class numbers.Rational, and implements all of the methods and operations from that class. Fraction instances are hashable, and should be treated
as immutable. In addition, Fraction has the following methods:

classmethod Fraction.from_float(flt)

   This class method constructs a Fraction representing the exact value of flt, which must be a float.

classmethod Fraction.from_decimal(dec)

   This class method constructs a Fraction representing the exact value of dec, which must be a decimal.Decimal instance.

Fraction.limit_denominator(max_denominator=1000000)

   Finds and returns the closest Fraction to self that has denominator at most max_denominator. This method is useful for finding rational approximations to a given floating-point number.
fractions.gcd(a, b)
Return the greatest common divisor of the integers a and b. If either a or b is nonzero, then the absolute value of gcd(a, b) is the largest integer that divides both a and b. gcd(a,b) has the
same sign as b if b is nonzero; otherwise it takes the sign of a. gcd(0, 0) returns 0.
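For example (a quick illustration of gcd() together with Fraction arithmetic):

>>> from fractions import Fraction, gcd
>>> gcd(12, 18)
6
>>> Fraction(1, 3) + Fraction(1, 6)
Fraction(1, 2)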
See also
Module numbers
The abstract base classes making up the numeric tower.
MathGroup Archive: July 1998 [00345]
RE: Re: Re: Non-comm
• To: mathgroup at smc.vnet.net
• Subject: [mg13414] RE: [mg13344] Re: [mg13280] Re: Non-comm
• From: Ersek_Ted%PAX1A at mr.nawcad.navy.mil
• Date: Thu, 23 Jul 1998 03:33:00 -0400
• Sender: owner-wri-mathgroup at wolfram.com
Isn't this built-in as a different type of multiplication?
"a ** b ** c is a general associative, but non-commutative, form of \
If you like you can use the Notation package to define a convention for
Input and/or Output that is more readable.
Ted Ersek
| It's OK to say that we should not remove attributes from fundamental
| ops like Plus and Times. I agree, though for experimental purposes, I
| like the flexibility. Thanks for giving us that.
|
| However there is one obvious and common case where the Orderless
| attribute doesn't apply, namely matrix math. The operation of
| multiplying matrices must not have the Orderless attribute.
|
| Try writing out some matrix equations in Mathematica and asking the
| program to simplify or otherwise rearrange them. Unless you have
| actually constructed the matrices symbolically, it won't work. In
| other words, Mathematica does not allow any kind of shorthand for
| matrix math. You have to write out {{a[1,1],a[1,2]},{a[2,1],a[2,2]}}
| and can't just put A.
|
| There is one, and only one definition for a new symbol like "A": it is,
| by decree of Wolfram Research (:-/), a complex number. It can't be a
| real number, an integer, or -- a matrix.
|
| In the field of control theory, the whole point of using matrix math is
| the shorthand notation, thus:
|
|   x-dot = A.x + B.u   (system state equation)
|   y     = C.x + D.u   (system output equation)
|
|   x = the n x 1 state vector for an nth-order system
|   u = the m x 1 input vector for a system with m inputs
|   y = the p x 1 output vector for a system with p outputs
|   A = the n x n system (or plant) matrix for an nth-order system
|   B = the n x m input matrix for an nth-order system with m inputs
|   C = the p x n output matrix for an nth-order system with p outputs
|   D = the p x m feed forward matrix for a system with p outputs and m inputs
|
| In the past I tried to tinker with some Kalman filtering equations in
| Mathematica but got nowhere fast. The Kalman theory is derived from
| pure symbolic matrix mathematics. The dimensions of the matrices are
| irrelevant to the theory, but the fact that they are matrices is
| essential.
|
| Sometimes you know the values of m,n,p but even in that case, in
| Mathematica you still have to write out the full matrices by hand. At
| other times, you want to leave m,n,p undefined.
|
| I would say that WRI should take the task of writing the rule base for
| symbolic matrix math with the Orderless attribute removed. That's not
| a job for the users. I think that this example alone answers David
| Withoff's proposition:
|
| > A separate question is whether or not functions such as Expand could or
| > should somehow be modified, probably by invoking separate algorithms,
| > to handle operations in other algebras. That is an interesting
| > question, and one that many people have considered.
|
| Yes, they should invoke separate algorithms for symbols declared as
| matrices, once we can make such declarations.
|
| Best regards to the developers,
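As an aside, the kind of declaration the quoted poster asks for, symbols that multiply without being reordered, can be sketched with noncommutative symbols in Python's SymPy (an illustration of the idea only; SymPy is not part of this 1998 thread):

    from sympy import Symbol, expand

    # Matrix-like symbols that must not be reordered under multiplication
    A = Symbol('A', commutative=False)
    B = Symbol('B', commutative=False)
    x = Symbol('x', commutative=False)

    expr = (A + B) * x
    print(expand(expr))   # prints A*x + B*x; the order in each product is preserved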
Complexity theory in the New York Times
"Arts Beat" on March 15, 2012: Jennifer Schuessler uses the Broadway production of "Death of a Salesman" opening that day as an excuse to riff on one of the Clay Mathematics Institute's millenium
problems. "Willy Loman ... may think he has problems. But it turns out he's got nothing on the hypothetical road warriors ... who are pushed to visit hundreds, thousands, even millions of cities,
mostly in pursuit of someone else's mathematical glory." Basing herself on the recent book "In Pursuit of the Traveling Salesman: Mathematics at the Limits of Computation" by William J. Cook (Georgia
Tech), she sketches the history of the Traveling Salesman problem: it dates back to the early 19th century; it "was born as a major mathematical preoccupation in the late 1940s, around the same time
as Loman himself, but has gone on to have a far more satisfying career;" the name was in fact coined by Julia Robinson in 1949. The problem is stated ("involves finding the most efficient route among
geographical points") and we get an idea of its complexity: "Mapping out an optimized route for a 33-city version of the problem used in a 1962 contest sponsored by Procter & Gamble, Dr. Cook notes,
involves measuring [$1.3\times 10^{35}$] possible routes, which would take the Energy Department's Roadrunner supercomputer roughly 28 trillion years."
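For scale, that figure is just the number of distinct closed tours through $n = 33$ cities, $(n-1)!/2 = 32!/2 \approx 1.3\times 10^{35}$.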
A Traveling Salesman problem with 33 cities from a 1962 Procter & Gamble contest.
"Procter & Gamble's 33-city contest went unsolved in 1962." Here is Prof. Cook's more recent solution. Images courtesy of William Cook.
Schuessler is a bit vague on the exact nature of the Clay Institute's challenge but she reports Cook's prediction: "The salesman may defeat us in the end ... but not without a good fight."
Computational complexity turns up again in the Times, on March 20, this time in a story about detecting cheating in chess. Dylan Loeb McClain tells us about Kenneth W. Regan, "an associate professor
of computer science at the University of Buffalo who is also an international master at chess." Regan has worked on "constructing a mathematical proof to see if someone has cheated," based on "a
model of how often the moves of players of varying ability match those of chess programs" constructed from the records of some 200,000 games from the last two centuries. But high-tech fraud detection
is only a sideline for Prof. Regan: "his principal focus is the holy-grail math problem P versus NP. (P versus NP is about whether problems that have solutions that can be verified by a computer can
also be solved quickly by a computer.)"
Spermatozoon calculus
In an article in the February 27, 2012 Journal of Cell Biology a team led by Luis Alvarez (CAESAR, Bonn) describes how sperm (they studied those of the sea urchin Arbacia punctulata) use the gradient
in $[\mbox{Ca}^{2+}]_i$, the intracellular calcium ion concentration, to direct their swimming. Their main result is that the time derivative of $[\mbox{Ca}^{2+}]_i$ controls the path curvature.
The path traced by an Arbacia spermatozoon, colored by the time derivative of the intracellular calcium ion concentration. The magnitude of the derivative correlates strongly with the curvature of
the path. Image © Alvarez et al., from "The rate of change of $\mbox{Ca}^{2+}$ concentration controls sperm chemotaxis," JCB 196 653-663, published by Rockefeller University Press.
As the authors explain, the flagellum, the single oar with which the spermatozoon sculls about, "serves both as a propeller and antenna that detects chemical cues." They describe a "chemical
differentiator model" which would couple changes in $[\mbox{Ca}^{2+}]_i$ to "flagellar assymetry," producing more sculling on one side than on the other and hence curvature in the path. This work was
picked up by Pierre Barthélémy in Le Monde online's "Passeur de Sciences" blog, under the heading "Les spermatozoïdes savent calculer" ("Sperm know how to calculate"), March 22, 2012.
Pi Day in Greenburgh
According to the Daily Greenburgh (Westchester County, New York; March 13, 2012) Greenburgh Central 7 School District Superintendent Ronald Ross "has declared Wednesday, March 14 a day off for all
district students in honor of 'Pi Day.' Pi, the ratio of the circumference of a circle to its diameter, is approximately 3.14, thus 'Pi Day.'" During their day off, students are to contemplate
mathematics. This is especially encouraged for those taking part in an enrichment program scheduled this spring and run by the mathematician Jonathan Farley, who is quoted: "It's open to any kid who
has a love of math and wants to explore something different." Article contributed by Rick Pezzullo.
Tony Phillips
Stony Brook University
tony at math.sunysb.edu
Geometric Progression #2
June 22nd 2011, 08:48 PM #1, by Blizzardy (Junior Member, joined May 2011)
Geometric Progression #2
Thanks a lot, mr fantastic!!! =)
But I got a similar question which I can't solve:
The sum of the first 20 terms of a geometric series is 10 and the sum of the first 30 terms is 91; find the sum of the first 10 terms.
I divided S30 by S20 to eliminate 'a' so as to find 'r':
(r^30 - 1) / (r^20 - 1) = 91/10
I got to here but I am not sure how to continue. If I expand I will end up with this equation:
10r^30 - 91r^20 + 81 = 0
and the graph of this equation looks kind of weird.
But if I express it in this form: 10(r^10)^3 - 91(r^10)^2 + 81 = 0
I get r^10 = 9, 1, -9/10.
Is there a better way to simplify so I can end up with a simple equation, like in the previous question here: http://www.mathhelpforum.com/math-he...tml#post661852?
Last edited by mr fantastic; June 22nd 2011 at 10:14 PM.
June 22nd 2011, 10:50 PM #2 (Super Member, joined Dec 2009)
Re: Geometric Progression #2
Quote: Originally Posted by Blizzardy (the question above)
Dear Blizzardy,
You have obtained values for $r^{10}$. Hence there are three possible values that $S_{10}$ could take depending on $r^{10}$. Hope you can continue.
June 23rd 2011, 04:48 AM #3, by Archie Meade (MHF Contributor, joined Dec 2009)
Re: Geometric Progression #2
Quote: Originally Posted by Blizzardy (the question above)
I think that the way the problem was presented,
you could try to find S10 from S30 and S20,
since the difference between these sums is 10 terms.
We can write 2 equations from the above
$S_{30}-S_{20}=ar^{20}+ar^{21}+....+ar^{29}=r^{20}\left(a+ar+ar^2+...+ar^9\right)=r^{20}S_{10}$
$S_{20}=S_{10}+ar^{10}+ar^{11}+....+ar^{19}=S_{10}+ r^{10}S_{10}$
$\Rightarrow\ S_{10}\left(1+r^{10}\right)=10$
Therefore, we have the equation

$\frac{S_{30}-S_{20}}{S_{20}}=\frac{r^{20}S_{10}}{S_{10}\left(1+r^{10}\right)}=\frac{91-10}{10}=\frac{81}{10}\ \Rightarrow\ \frac{x^2}{1+x}=\frac{81}{10}\quad\text{where } x=r^{10}$

This leads to a quadratic equation
$10x^2-81x-81=0\Rightarrow\ (10x+9)(x-9)=0$
Since $r^{10}$ is an even power and hence positive, the negative solution for x is ruled out.
June 23rd 2011, 06:50 AM #4 (Super Member, joined Dec 2009)
Re: Geometric Progression #2
Quote: Originally Posted by Blizzardy (the question above)
After seeing Archie Meade's post I found a mistake in my previous post. You cannot use all the three values you have obtained as I have mistakenly stated.
Case 1: Let, $r^{10}=1\Rightarrow{r=\pm{1}}$
When r=1;
$S_{20}=a+ar+ar^2+ar^3+....+ar^{19}=20a=10 \Rightarrow{a=0.5}$
$S_{30}=a+ar+ar^2+ar^3+.......+ar^{29}=30a=91 \Rightarrow{a=\frac{91}{30}}$
But 'a' must have a unique value, so we cannot take r=1.
When r=-1;

$S_{20}=a-a+a-\cdots+a-a=0\ \mbox{ and }\ S_{30}=a-a+a-\cdots+a-a=0$

But we know that $S_{30}\mbox{ and }S_{20}$ are not equal to zero. Hence we cannot take r=-1 either.
So $r^{10}=1$ cannot be taken.
Case 2: Let, $r^{10}=-\frac{9}{10}$
In this case r would be a complex value and obviously the sums $S_{30}\mbox{ and }S_{20}$ would also be complex values which is again a contradiction.
Therefore we cannot take $r^{10}=-\frac{9}{10}$
The only solution that could be used is, $r^{10}=9$
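Putting the final step together: with $r^{10}=9$, the relation $S_{10}\left(1+r^{10}\right)=10$ from the post above gives $S_{10}=\frac{10}{1+9}=1$. As a check, $S_{20}=S_{10}\left(1+r^{10}\right)=10$ and $S_{30}=S_{10}\left(1+r^{10}+r^{20}\right)=1+9+81=91$, as required.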
[Scipy-svn] r5281 - trunk/doc/source/tutorial
scipy-svn@scip...
Sat Dec 20 05:05:18 CST 2008
Author: david.warde-farley
Date: 2008-12-20 05:05:15 -0600 (Sat, 20 Dec 2008)
New Revision: 5281
Huge fixer-upper. All module functions prefixed with :func: (is this correct?). All parameters enclosed in \*param\* (seemed like the closest thing in the Sphinx documentation that I could find. Tables and sectional cross-references fixed.
Modified: trunk/doc/source/tutorial/ndimage.rst
--- trunk/doc/source/tutorial/ndimage.rst 2008-12-20 06:53:22 UTC (rev 5280)
+++ trunk/doc/source/tutorial/ndimage.rst 2008-12-20 11:05:15 UTC (rev 5281)
@@ -1,42 +1,44 @@
-Multi-dimensional image processing
+Multi-dimensional image processing (:mod:`ndimage`)
-{Peter Verveer} {verveer@users.sourceforge.net}
-{Multidimensional image analysis functions}
+.. moduleauthor:: Peter Verveer <verveer@users.sourceforge.net>
-.. _ndimage_introduction:
+.. currentmodule:: scipy.ndimage
+.. _ndimage-introduction:
Image processing and analysis are generally seen as operations on
two-dimensional arrays of values. There are however a number of
fields where images of higher dimensionality must be analyzed. Good
examples of these are medical imaging and biological imaging.
-{numarray} is suited very well for this type of applications due
-its inherent multi-dimensional nature. The {numarray.nd_image}
+:mod:`numarray` is suited very well for this type of applications due
+its inherent multi-dimensional nature. The :mod:`scipy.ndimage`
packages provides a number of general image processing and analysis
functions that are designed to operate with arrays of arbitrary
dimensionality. The packages currently includes functions for
linear and non-linear filtering, binary morphology, B-spline
interpolation, and object measurements.
-.. _ndimage_properties_shared_by_all_functions:
+.. _ndimage-properties-shared-by-all-functions:
Properties shared by all functions
All functions share some common properties. Notably, all functions
-allow the specification of an output array with the {output}
+allow the specification of an output array with the *output*
argument. With this argument you can specify an array that will be
changed in-place with the result with the operation. In this case
-the result is not returned. Usually, using the {output} argument is
+the result is not returned. Usually, using the *output* argument is
more efficient, since an existing array is used to store the
The type of arrays returned is dependent on the type of operation,
but it is in most cases equal to the type of the input. If,
-however, the {output} argument is used, the type of the result is
+however, the *output* argument is used, the type of the result is
equal to the type of the specified output argument. If no output
argument is given, it is still possible to specify what the result
of the output should be. This is done by simply assigning the
@@ -51,15 +53,15 @@
{In previous versions of :mod:`scipy.ndimage`, some functions accepted the *output_type* argument to achieve the same effect. This argument is still supported, but its use will generate an deprecation warning. In a future version all instances of this argument will be removed. The preferred way to specify an output type, is by using the *output* argument, either by specifying an output array of the desired type, or by specifying the type of the output that is to be returned.}
+.. _ndimage-filter-functions:
Filter functions
-.. _ndimage_filter_functions:
The functions described in this section all perform some type of spatial filtering of the the input array: the elements in the output are some function of the values in the neighborhood of the corresponding input element. We refer to this neighborhood of elements as the filter kernel, which is often
rectangular in shape but may also have an arbitrary footprint. Many
of the functions described below allow you to define the footprint
-of the kernel, by passing a mask through the {footprint} parameter.
+of the kernel, by passing a mask through the *footprint* parameter.
For example a cross shaped kernel can be defined as follows:
@@ -84,7 +86,7 @@
[0 0 1 1 1 0 0]
Sometimes it is convenient to choose a different origin for the
-kernel. For this reason most functions support the {origin}
+kernel. For this reason most functions support the *origin*
parameter which gives the origin of the filter relative to its
center. For example:
@@ -115,7 +117,7 @@
[ 0 1 0 0 -1 0 0]
however, using the origin parameter instead of a larger kernel is
-more efficient. For multi-dimensional kernels {origin} can be a
+more efficient. For multi-dimensional kernels *origin* can be a
number, in which case the origin is assumed to be equal along all
axes, or a sequence giving the origin along each axis.
@@ -125,18 +127,18 @@
borders. This is done by assuming that the arrays are extended
beyond their boundaries according to certain boundary conditions. In
the functions described below, the boundary conditions can be
-selected using the {mode} parameter which must be a string with the
+selected using the *mode* parameter which must be a string with the
name of the boundary condition. The following boundary conditions are
currently supported:
- {"nearest"} {Use the value at the boundary} {[1 2 3]->[1 1 2 3 3]}
- {"wrap"} {Periodically replicate the array} {[1 2 3]->[3 1 2 3 1]}
- {"reflect"} {Reflect the array at the boundary}
- {[1 2 3]->[1 1 2 3 3]}
- {"constant"} {Use a constant value, default value is 0.0}
- {[1 2 3]->[0 1 2 3 0]}
+ =========== ==================================== ====================
+ mode        description                          example
+ =========== ==================================== ====================
+ "nearest"   Use the value at the boundary        [1 2 3]->[1 1 2 3 3]
+ "wrap"      Periodically replicate the array     [1 2 3]->[3 1 2 3 1]
+ "reflect"   Reflect the array at the boundary    [1 2 3]->[1 1 2 3 3]
+ "constant"  Use a constant value, default is 0.0 [1 2 3]->[0 1 2 3 0]
+ =========== ==================================== ====================
The {"constant"} mode is special since it needs an additional
parameter to specify the constant value that should be used.
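For example, a minimal sketch of the *mode* and *cval* parameters in
action with :func:`correlate1d` (described below; the exact output
values assume the placement conventions just given):
>>> from scipy.ndimage import correlate1d
>>> correlate1d([1, 2, 3], [1, 1], mode='constant', cval=0.0)
array([1, 3, 5])
>>> correlate1d([1, 2, 3], [1, 1], mode='wrap')
array([4, 3, 5])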
@@ -150,19 +152,19 @@
Correlation and convolution
- The {correlate1d} function calculates a one-dimensional correlation
+ The :func:`correlate1d` function calculates a one-dimensional correlation
along the given axis. The lines of the array along the given axis
- are correlated with the given {weights}. The {weights} parameter
+ are correlated with the given *weights*. The *weights* parameter
must be a one-dimensional sequence of numbers.
- The function {correlate} implements multi-dimensional correlation
+ The function :func:`correlate` implements multi-dimensional correlation
of the input array with a given kernel.
- The {convolve1d} function calculates a one-dimensional convolution
+ The :func:`convolve1d` function calculates a one-dimensional convolution
along the given axis. The lines of the array along the given axis
- are convoluted with the given {weights}. The {weights} parameter
+ are convolved with the given *weights*. The *weights* parameter
must be a one-dimensional sequence of numbers.
{A convolution is essentially a correlation after mirroring the
@@ -170,7 +172,7 @@
in the case of a correlation: the result is shifted in the opposite direction.}
- The function {convolve} implements multi-dimensional convolution of
+ The function :func:`convolve` implements multi-dimensional convolution of
the input array with a given kernel.
{A convolution is essentially a correlation after mirroring the
@@ -178,31 +180,31 @@
in the case of a correlation: the result is shifted in the opposite direction.}
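As a quick sketch of this mirroring relationship (default boundary
mode assumed):
>>> from scipy.ndimage import correlate1d, convolve1d
>>> a = [0, 0, 1, 0, 0]
>>> correlate1d(a, [1, 2, 3])
array([0, 3, 2, 1, 0])
>>> convolve1d(a, [1, 2, 3])
array([0, 1, 2, 3, 0])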
-.. _ndimage_filter_functions_smoothing:
+.. _ndimage-filter-functions-smoothing:
Smoothing filters
- The {gaussian_filter1d} function implements a one-dimensional
+ The :func:`gaussian_filter1d` function implements a one-dimensional
Gaussian filter. The standard-deviation of the Gaussian filter is
- passed through the parameter {sigma}. Setting {order}=0 corresponds
+ passed through the parameter *sigma*. Setting *order* = 0 corresponds
to convolution with a Gaussian kernel. An order of 1, 2, or 3
corresponds to convolution with the first, second or third
derivatives of a Gaussian. Higher order derivatives are not implemented.
- The {gaussian_filter} function implements a multi-dimensional
+ The :func:`gaussian_filter` function implements a multi-dimensional
Gaussian filter. The standard-deviations of the Gaussian filter
- along each axis are passed through the parameter {sigma} as a
- sequence or numbers. If {sigma} is not a sequence but a single
+ along each axis are passed through the parameter *sigma* as a
+ sequence of numbers. If *sigma* is not a sequence but a single
number, the standard deviation of the filter is equal along all
directions. The order of the filter can be specified separately for
each axis. An order of 0 corresponds to convolution with a Gaussian
kernel. An order of 1, 2, or 3 corresponds to convolution with the
first, second or third derivatives of a Gaussian. Higher order
- derivatives are not implemented. The {order} parameter must be a
+ derivatives are not implemented. The *order* parameter must be a
number, to specify the same order for all axes, or a sequence of
numbers to specify a different order for each axis.
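A short sketch of both the *sigma* and *order* parameters (the
values here are arbitrary):
>>> import numpy as np
>>> from scipy.ndimage import gaussian_filter
>>> a = np.zeros((7, 7))
>>> a[3, 3] = 1.0
>>> smooth = gaussian_filter(a, sigma=1.0)
>>> deriv = gaussian_filter(a, sigma=1.0, order=(0, 1))  # derivative along axis 1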
@@ -214,13 +216,13 @@
prevented by specifying a more precise output type.}
- The {uniform_filter1d} function calculates a one-dimensional
- uniform filter of the given {size} along the given axis.
+ The :func:`uniform_filter1d` function calculates a one-dimensional
+ uniform filter of the given *size* along the given axis.
- The {uniform_filter} implements a multi-dimensional uniform
+ The :func:`uniform_filter` implements a multi-dimensional uniform
filter. The sizes of the uniform filter are given for each axis as
- a sequence of integers by the {size} parameter. If {size} is not a
+ a sequence of integers by the *size* parameter. If *size* is not a
sequence, but a single number, the sizes along all axes are assumed
to be equal.
@@ -236,59 +238,59 @@
Filters based on order statistics
- The {minimum_filter1d} function calculates a one-dimensional
- minimum filter of given {size} along the given axis.
+ The :func:`minimum_filter1d` function calculates a one-dimensional
+ minimum filter of given *size* along the given axis.
- The {maximum_filter1d} function calculates a one-dimensional
- maximum filter of given {size} along the given axis.
+ The :func:`maximum_filter1d` function calculates a one-dimensional
+ maximum filter of given *size* along the given axis.
- The {minimum_filter} function calculates a multi-dimensional
+ The :func:`minimum_filter` function calculates a multi-dimensional
minimum filter. Either the sizes of a rectangular kernel or the
- footprint of the kernel must be provided. The {size} parameter, if
+ footprint of the kernel must be provided. The *size* parameter, if
provided, must be a sequence of sizes or a single number in which
case the size of the filter is assumed to be equal along each axis.
- The {footprint}, if provided, must be an array that defines the
+ The *footprint*, if provided, must be an array that defines the
shape of the kernel by its non-zero elements.
- The {maximum_filter} function calculates a multi-dimensional
+ The :func:`maximum_filter` function calculates a multi-dimensional
maximum filter. Either the sizes of a rectangular kernel or the
- footprint of the kernel must be provided. The {size} parameter, if
+ footprint of the kernel must be provided. The *size* parameter, if
provided, must be a sequence of sizes or a single number in which
case the size of the filter is assumed to be equal along each axis.
- The {footprint}, if provided, must be an array that defines the
+ The *footprint*, if provided, must be an array that defines the
shape of the kernel by its non-zero elements.
- The {rank_filter} function calculates a multi-dimensional rank
- filter. The {rank} may be less then zero, i.e., {rank}=-1 indicates
+ The :func:`rank_filter` function calculates a multi-dimensional rank
+ filter. The *rank* may be less than zero, i.e., *rank* = -1 indicates
the largest element. Either the sizes of a rectangular kernel or
- the footprint of the kernel must be provided. The {size} parameter,
+ the footprint of the kernel must be provided. The *size* parameter,
if provided, must be a sequence of sizes or a single number in
which case the size of the filter is assumed to be equal along each
- axis. The {footprint}, if provided, must be an array that defines
+ axis. The *footprint*, if provided, must be an array that defines
the shape of the kernel by its non-zero elements.
- The {percentile_filter} function calculates a multi-dimensional
- percentile filter. The {percentile} may be less then zero, i.e.,
- {percentile}=-20 equals {percentile}=80. Either the sizes of a
+ The :func:`percentile_filter` function calculates a multi-dimensional
+ percentile filter. The *percentile* may be less than zero, i.e.,
+ *percentile* = -20 equals *percentile* = 80. Either the sizes of a
rectangular kernel or the footprint of the kernel must be provided.
- The {size} parameter, if provided, must be a sequence of sizes or a
+ The *size* parameter, if provided, must be a sequence of sizes or a
single number in which case the size of the filter is assumed to be
- equal along each axis. The {footprint}, if provided, must be an
+ equal along each axis. The *footprint*, if provided, must be an
array that defines the shape of the kernel by its non-zero
elements.
- The {median_filter} function calculates a multi-dimensional median
+ The :func:`median_filter` function calculates a multi-dimensional median
filter. Either the sizes of a rectangular kernel or the footprint
- of the kernel must be provided. The {size} parameter, if provided,
+ of the kernel must be provided. The *size* parameter, if provided,
must be a sequence of sizes or a single number in which case the
size of the filter is assumed to be equal along each axis. The
- {footprint} if provided, must be an array that defines the shape of
+ *footprint*, if provided, must be an array that defines the shape of
the kernel by its non-zero elements.
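A small sketch of two of these filters on a one-dimensional array
(default boundary mode assumed; rank 1 of a size 3 window is the
median):
>>> import numpy as np
>>> from scipy.ndimage import median_filter, rank_filter
>>> a = np.array([2, 0, 0, 9, 0, 0, 3])
>>> median_filter(a, size=3)
array([2, 0, 0, 0, 0, 0, 3])
>>> rank_filter(a, rank=1, size=3)
array([2, 0, 0, 0, 0, 0, 3])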
@@ -297,15 +299,15 @@
Derivative filters can be constructed in several ways. The function
:func:`gaussian_filter1d` described in section
-:ref:`_ndimage_filter_functions_smoothing` can be used to calculate
-derivatives along a given axis using the {order} parameter. Other
+:ref:`ndimage-filter-functions-smoothing` can be used to calculate
+derivatives along a given axis using the *order* parameter. Other
derivative filters are the Prewitt and Sobel filters:
- The {prewitt} function calculates a derivative along the given
+ The :func:`prewitt` function calculates a derivative along the given
axis.
- The {sobel} function calculates a derivative along the given
+ The :func:`sobel` function calculates a derivative along the given
axis.
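For instance, a sketch of a Sobel derivative across a vertical step
edge (array values arbitrary):
>>> import numpy as np
>>> from scipy.ndimage import sobel
>>> a = np.zeros((5, 5))
>>> a[:, 2:] = 1.0
>>> deriv = sobel(a, axis=1)  # large responses where the step is crossed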
@@ -316,21 +318,21 @@
calculate the second derivative along a given direction and to
construct the Laplace filter:
- The function {generic_laplace} calculates a laplace filter using
- the function passed through {derivative2} to calculate second
- derivatives. The function {derivative2} should have the following
+ The function :func:`generic_laplace` calculates a laplace filter using
+ the function passed through :func:`derivative2` to calculate second
+ derivatives. The function :func:`derivative2` should have the following
``derivative2(input, axis, output, mode, cval, *extra_arguments, **extra_keywords)``
It should calculate the second derivative along the dimension
- {axis}. If {output} is not {None} it should use that for the output
- and return {None}, otherwise it should return the result. {mode},
- {cval} have the usual meaning.
+ *axis*. If *output* is not ``None`` it should use that for the output
+ and return ``None``, otherwise it should return the result. *mode* and
+ *cval* have the usual meaning.
- The {extra_arguments} and {extra_keywords} arguments can be used
+ The *extra_arguments* and *extra_keywords* arguments can be used
to pass a tuple of extra arguments and a dictionary of named
- arguments that are passed to {derivative2} at each call.
+ arguments that are passed to :func:`derivative2` at each call.
For example:
@@ -348,7 +350,7 @@
[ 0 0 1 0 0]
[ 0 0 0 0 0]]
- To demonstrate the use of the {extra_arguments} argument we could
+ To demonstrate the use of the *extra_arguments* argument we could
@@ -378,44 +380,44 @@
The following two functions are implemented using
-{generic_laplace} by providing appropriate functions for the
+:func:`generic_laplace` by providing appropriate functions for the
second derivative function:
- The function {laplace} calculates the Laplace using discrete
+ The function :func:`laplace` calculates the Laplace using discrete
differentiation for the second derivative (i.e. convolution with
``[1, -2, 1]``).
- The function {gaussian_laplace} calculates the Laplace using
- {gaussian_filter} to calculate the second derivatives. The
+ The function :func:`gaussian_laplace` calculates the Laplace using
+ :func:`gaussian_filter` to calculate the second derivatives. The
standard-deviations of the Gaussian filter along each axis are
- passed through the parameter {sigma} as a sequence or numbers. If
- {sigma} is not a sequence but a single number, the standard
+ passed through the parameter *sigma* as a sequence or numbers. If
+ *sigma* is not a sequence but a single number, the standard
deviation of the filter is equal along all directions.
The gradient magnitude is defined as the square root of the sum of
the squares of the gradients in all directions. Similar to the
-generic Laplace function there is a {generic_gradient_magnitude}
+generic Laplace function there is a :func:`generic_gradient_magnitude`
function that calculates the gradient magnitude of an array:
- The function {generic_gradient_magnitude} calculates a gradient
- magnitude using the function passed through {derivative} to
- calculate first derivatives. The function {derivative} should have
+ The function :func:`generic_gradient_magnitude` calculates a gradient
+ magnitude using the function passed through :func:`derivative` to
+ calculate first derivatives. The function :func:`derivative` should have
the following signature:
``derivative(input, axis, output, mode, cval, *extra_arguments, **extra_keywords)``
- It should calculate the derivative along the dimension {axis}. If
- {output} is not {None} it should use that for the output and return
- {None}, otherwise it should return the result. {mode}, {cval} have
+ It should calculate the derivative along the dimension *axis*. If
+ *output* is not ``None`` it should use that for the output and return
+ ``None``, otherwise it should return the result. *mode* and *cval* have
the usual meaning.
- The {extra_arguments} and {extra_keywords} arguments can be used
+ The *extra_arguments* and *extra_keywords* arguments can be used
to pass a tuple of extra arguments and a dictionary of named
- arguments that are passed to {derivative} at each call.
+ arguments that are passed to :func:`derivative` at each call.
- For example, the {sobel} function fits the required signature:
+ For example, the :func:`sobel` function fits the required signature:
@@ -428,28 +430,28 @@
[0 1 2 1 0]
[0 0 0 0 0]]
- See the documentation of {generic_laplace} for examples of using
- the {extra_arguments} and {extra_keywords} arguments.
+ See the documentation of :func:`generic_laplace` for examples of using
+ the *extra_arguments* and *extra_keywords* arguments.
-The {sobel} and {prewitt} functions fit the required signature and
-can therefore directly be used with {generic_gradient_magnitude}.
+The :func:`sobel` and :func:`prewitt` functions fit the required signature and
+can therefore directly be used with :func:`generic_gradient_magnitude`.
The following function implements the gradient magnitude using
Gaussian derivatives:
- The function {gaussian_gradient_magnitude} calculates the
- gradient magnitude using {gaussian_filter} to calculate the first
+ The function :func:`gaussian_gradient_magnitude` calculates the
+ gradient magnitude using :func:`gaussian_filter` to calculate the first
derivatives. The standard-deviations of the Gaussian filter along
- each axis are passed through the parameter {sigma} as a sequence or
- numbers. If {sigma} is not a sequence but a single number, the
+ each axis are passed through the parameter *sigma* as a sequence or
+ numbers. If *sigma* is not a sequence but a single number, the
standard deviation of the filter is equal along all directions.
+.. _ndimage-genericfilters:
Generic filter functions
-.. _ndimage_genericfilters:
To implement filter functions, generic functions can be used that accept a
callable object that implements the filtering operation. The iteration over the
input and output arrays is handled by these generic functions, along with such
@@ -457,17 +459,17 @@
callable object implementing a callback function that does the
actual filtering work must be provided. The callback function can
also be written in C and passed using a CObject (see
-:ref:`_ndimage_ccallbacks` for more information).
+:ref:`ndimage-ccallbacks` for more information).
- The {generic_filter1d} function implements a generic
+ The :func:`generic_filter1d` function implements a generic
one-dimensional filter function, where the actual filtering
operation must be supplied as a python function (or other callable
- object). The {generic_filter1d} function iterates over the lines
- of an array and calls {function} at each line. The arguments that
- are passed to {function} are one-dimensional arrays of the
+ object). The :func:`generic_filter1d` function iterates over the lines
+ of an array and calls :func:`function` at each line. The arguments that
+ are passed to :func:`function` are one-dimensional arrays of the
{tFloat64} type. The first contains the values of the current line.
It is extended at the beginning and the end, according to the
- {filter_size} and {origin} arguments. The second array should be
+ *filter_size* and *origin* arguments. The second array should be
modified in-place to provide the output values of the line. For
example consider a correlation along one dimension:
@@ -479,7 +481,7 @@
[27 32 38 41]
[51 56 62 65]]
- The same operation can be implemented using {generic_filter1d} as
+ The same operation can be implemented using :func:`generic_filter1d` as
@@ -498,7 +500,7 @@
function was called.
Optionally extra arguments can be defined and passed to the filter
- function. The {extra_arguments} and {extra_keywords} arguments
+ function. The *extra_arguments* and *extra_keywords* arguments
can be used to pass a tuple of extra arguments and/or a dictionary
of named arguments that are passed to the filter function at each call. For
example, we can pass the parameters of our filter as an argument:
@@ -523,11 +525,11 @@
[51 56 62 65]]
- The {generic_filter} function implements a generic filter
+ The :func:`generic_filter` function implements a generic filter
function, where the actual filtering operation must be supplied as
- a python function (or other callable object). The {generic_filter}
- function iterates over the array and calls {function} at each
- element. The argument of {function} is a one-dimensional array of
+ a python function (or other callable object). The :func:`generic_filter`
+ function iterates over the array and calls :func:`function` at each
+ element. The argument of :func:`function` is a one-dimensional array of
the {tFloat64} type, that contains the values around the current
element that are within the footprint of the filter. The function
should return a single value that can be converted to a double
precision number.
@@ -541,7 +543,7 @@
[12 15 19 23]
[28 31 35 39]]
- The same operation can be implemented using {generic_filter} as
+ The same operation can be implemented using :func:`generic_filter` as
@@ -559,15 +561,15 @@
equal to two, which was multiplied with the proper weights and the
result summed.
- When calling {generic_filter}, either the sizes of a rectangular
- kernel or the footprint of the kernel must be provided. The {size}
+ When calling :func:`generic_filter`, either the sizes of a rectangular
+ kernel or the footprint of the kernel must be provided. The *size*
parameter, if provided, must be a sequence of sizes or a single
number in which case the size of the filter is assumed to be equal
- along each axis. The {footprint}, if provided, must be an array
+ along each axis. The *footprint*, if provided, must be an array
that defines the shape of the kernel by its non-zero elements.
Optionally extra arguments can be defined and passed to the filter
- function. The {extra_arguments} and {extra_keywords} arguments
+ function. The *extra_arguments* and *extra_keywords* arguments
can be used to pass a tuple of extra arguments and/or a dictionary
of named arguments that are passed to the filter function at each call. For
example, we can pass the parameters of our filter as an argument:
@@ -599,7 +601,7 @@
the filter depending on spatial location. Here is an example of
using a class that implements the filter and keeps track of the
current coordinates while iterating. It performs the same filter
-operation as described above for {generic_filter}, but
+operation as described above for :func:`generic_filter`, but
additionally prints the current coordinates:
@@ -645,9 +647,9 @@
[12 15 19 23]
[28 31 35 39]]
-For the {generic_filter1d} function the same approach works,
+For the :func:`generic_filter1d` function the same approach works,
except that this function does not iterate over the axis that is
-being filtered. The example for {generic_filte1d} then becomes
+being filtered. The example for :func:`generic_filter1d` then becomes
@@ -688,53 +690,53 @@
[51 56 62 65]]
Fourier domain filters
The functions described in this section perform filtering
operations in the Fourier domain. Thus, the input array of such a
function should be compatible with an inverse Fourier transform
-function, such as the functions from the {numarray.fft} module. We
+function, such as the functions from the :mod:`scipy.fft` module. We
therefore have to deal with arrays that may be the result of a real
or a complex Fourier transform. In the case of a real Fourier
transform only half of the symmetric complex transform is
stored. Additionally, it needs to be known what the length of the
axis was that was transformed by the real fft. The functions
-described here provide a parameter {n} that in the case of a real
+described here provide a parameter *n* that in the case of a real
transform must be equal to the length of the real transform axis
before transformation. If this parameter is less than zero, it is
assumed that the input array was the result of a complex Fourier
-transform. The parameter {axis} can be used to indicate along which
+transform. The parameter *axis* can be used to indicate along which
axis the real transform was executed.
- The {fourier_shift} function multiplies the input array with the
+ The :func:`fourier_shift` function multiplies the input array with the
multi-dimensional Fourier transform of a shift operation for the
- given shift. The {shift} parameter is a sequences of shifts for
+ given shift. The *shift* parameter is a sequence of shifts for
each dimension, or a single value for all dimensions.
- The {fourier_gaussian} function multiplies the input array with
+ The :func:`fourier_gaussian` function multiplies the input array with
the multi-dimensional Fourier transform of a Gaussian filter with
- given standard-deviations {sigma}. The {sigma} parameter is a
+ given standard-deviations *sigma*. The *sigma* parameter is a
sequence of values for each dimension, or a single value for all
dimensions.
- The {fourier_uniform} function multiplies the input array with the
+ The :func:`fourier_uniform` function multiplies the input array with the
multi-dimensional Fourier transform of a uniform filter with given
- sizes {size}. The {size} parameter is a sequences of values for
+ sizes *size*. The *size* parameter is a sequence of values for
each dimension, or a single value for all dimensions.
- The {fourier_ellipsoid} function multiplies the input array with
+ The :func:`fourier_ellipsoid` function multiplies the input array with
the multi-dimensional Fourier transform of an elliptically shaped
- filter with given sizes {size}. The {size} parameter is a sequences
+ filter with given sizes *size*. The *size* parameter is a sequence
of values for each dimension, or a single value for all dimensions.
{This function is
only implemented for dimensions 1, 2, and 3.}
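As a sketch, a Fourier-domain Gaussian applied through a complex
transform using numpy's fft functions; the result should
approximate a direct :func:`gaussian_filter` call:
>>> import numpy as np
>>> from scipy.ndimage import fourier_gaussian
>>> a = np.zeros((32, 32))
>>> a[16, 16] = 1.0
>>> fa = np.fft.fft2(a)  # complex transform, so the default n=-1 applies
>>> out = np.fft.ifft2(fourier_gaussian(fa, sigma=2.0)).real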
Interpolation functions
This section describes various interpolation functions that are
based on B-spline theory. A good introduction to B-splines can be
@@ -743,22 +745,22 @@
22-38, November 1999.
Spline pre-filters
Interpolation using splines of an order larger than 1 requires a
pre-filtering step.
The interpolation functions described in section
-:ref:`_ndimage_interpolation` apply pre-filtering by calling
-{spline_filter}, but they can be instructed not to do this by
-setting the {prefilter} keyword equal to {False}. This is useful if
+:ref:`ndimage-interpolation` apply pre-filtering by calling
+:func:`spline_filter`, but they can be instructed not to do this by
+setting the *prefilter* keyword equal to ``False``. This is useful if
more than one interpolation operation is done on the same array. In
this case it is more efficient to do the pre-filtering only once
and use a prefiltered array as the input of the interpolation
functions. The following two functions implement the pre-filtering:
- The {spline_filter1d} function calculates a one-dimensional spline
+ The :func:`spline_filter1d` function calculates a one-dimensional spline
filter along the given axis. An output array can optionally be
provided. The order of the spline must be larger than 1 and less
than 6.
- The {spline_filter} function calculates a multi-dimensional spline
+ The :func:`spline_filter` function calculates a multi-dimensional spline
filter.
{The multi-dimensional filter is implemented as a sequence of
@@ -769,25 +771,25 @@
This can be prevented by specifying an output type of high precision.}
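For instance, a sketch of pre-filtering once and reusing the result
for several interpolations (argument values arbitrary):
>>> import numpy as np
>>> from scipy.ndimage import spline_filter, shift
>>> image = np.arange(25.0).reshape(5, 5)
>>> filtered = spline_filter(image, order=3)  # pre-filter once
>>> a = shift(filtered, (0.0, 0.5), order=3, prefilter=False)
>>> b = shift(filtered, (0.5, 0.0), order=3, prefilter=False)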
+.. _ndimage-interpolation:
Interpolation functions
-.. _ndimage_interpolation:
Following functions all employ spline interpolation to effect some type of
geometric transformation of the input array. This requires a mapping of the
output coordinates to the input coordinates, and therefore the possibility
arises that input values outside the boundaries are needed. This problem is
-solved in the same way as described in section :ref:`_ndimage_filter_functions`
+solved in the same way as described in section :ref:`ndimage-filter-functions`
for the multi-dimensional filter functions. Therefore these functions all
-support a {mode} parameter that determines how the boundaries are handled, and
-a {cval} parameter that gives a constant value in case that the {'constant'}
+support a *mode* parameter that determines how the boundaries are handled, and
+a *cval* parameter that gives a constant value in case the ``'constant'``
mode is used.
- The {geometric_transform} function applies an arbitrary geometric
- transform to the input. The given {mapping} function is called at
+ The :func:`geometric_transform` function applies an arbitrary geometric
+ transform to the input. The given *mapping* function is called at
each point in the output to find the corresponding coordinates in
- the input. {mapping} must be a callable object that accepts a tuple
+ the input. *mapping* must be a callable object that accepts a tuple
of length equal to the output array rank and returns the
corresponding input coordinates as a tuple of length equal to the
input array rank. The output shape and output type can optionally
@@ -809,7 +811,7 @@
[ 0. 8.2625 9.6375]]
Optionally extra arguments can be defined and passed to the filter
- function. The {extra_arguments} and {extra_keywords} arguments
+ function. The *extra_arguments* and *extra_keywords* arguments
can be used to pass a tuple of extra arguments and/or a dictionary
of named arguments that are passed to the mapping function at each call. For
example, we can pass the shifts in our example as arguments:
@@ -835,17 +837,17 @@
[ 0. 4.8125 6.1875]
[ 0. 8.2625 9.6375]]
- {The mapping function can also be written in C and passed using a CObject. See :ref:`_ndimage_ccallbacks` for more information.}
+ {The mapping function can also be written in C and passed using a CObject. See :ref:`ndimage-ccallbacks` for more information.}
- The function {map_coordinates} applies an arbitrary coordinate
+ The function :func:`map_coordinates` applies an arbitrary coordinate
transformation using the given array of coordinates. The shape of
the output is derived from that of the coordinate array by dropping
- the first axis. The parameter {coordinates} is used to find for
+ the first axis. The parameter *coordinates* is used to find for
each point in the output the corresponding coordinates in the
- input. The values of {coordinates} along the first axis are the
+ input. The values of *coordinates* along the first axis are the
coordinates in the input array at which the output value is found.
- (See also the numarray {coordinates} function.) Since the
+ (See also the numarray *coordinates* function.) Since the
coordinates may be non-integer coordinates, the value of the input
at these coordinates is determined by spline interpolation of the
requested order. Here is an example that interpolates a 2D array at
@@ -863,12 +865,12 @@
[ 1.3625 7. ]
- The {affine_transform} function applies an affine transformation
- to the input array. The given transformation {matrix} and {offset}
+ The :func:`affine_transform` function applies an affine transformation
+ to the input array. The given transformation *matrix* and *offset*
are used to find for each point in the output the corresponding
coordinates in the input. The value of the input at the calculated
coordinates is determined by spline interpolation of the requested
- order. The transformation {matrix} must be two-dimensional or can
+ order. The transformation *matrix* must be two-dimensional or can
also be given as a one-dimensional sequence or array. In the latter
case, it is assumed that the matrix is diagonal. A more efficient
interpolation algorithm is then applied that exploits the
@@ -877,33 +879,33 @@
shape and type.
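A sketch of the diagonal case, where a one-dimensional *matrix*
selects the more efficient separable algorithm (values arbitrary):
>>> import numpy as np
>>> from scipy.ndimage import affine_transform
>>> a = np.arange(25.0).reshape(5, 5)
>>> zoomed = affine_transform(a, [0.5, 0.5], output_shape=(5, 5))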
- The {shift} function returns a shifted version of the input, using
- spline interpolation of the requested {order}.
+ The :func:`shift` function returns a shifted version of the input, using
+ spline interpolation of the requested *order*.
- The {zoom} function returns a rescaled version of the input, using
- spline interpolation of the requested {order}.
+ The :func:`zoom` function returns a rescaled version of the input, using
+ spline interpolation of the requested *order*.
- The {rotate} function returns the input array rotated in the plane
- defined by the two axes given by the parameter {axes}, using spline
- interpolation of the requested {order}. The angle must be given in
- degrees. If {reshape} is true, then the size of the output array is
+ The :func:`rotate` function returns the input array rotated in the plane
+ defined by the two axes given by the parameter *axes*, using spline
+ interpolation of the requested *order*. The angle must be given in
+ degrees. If *reshape* is true, then the size of the output array is
adapted to contain the rotated input.
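A quick sketch of these three convenience transforms (parameter
values arbitrary):
>>> import numpy as np
>>> from scipy.ndimage import shift, zoom, rotate
>>> a = np.arange(25.0).reshape(5, 5)
>>> shifted = shift(a, (1.0, 0.5))
>>> rescaled = zoom(a, 2.0)
>>> rotated = rotate(a, angle=30.0, axes=(1, 0), reshape=True)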
+.. _ndimage-binary-morphology:
Binary morphology
-.. _ndimage_binary_morphology:
- The {generate_binary_structure} functions generates a binary
+ The :func:`generate_binary_structure` function generates a binary
structuring element for use in binary morphology operations. The
- {rank} of the structure must be provided. The size of the structure
+ *rank* of the structure must be provided. The size of the structure
that is returned is equal to three in each direction. The value of
each element is equal to one if the square of the Euclidean
distance from the element to the center is less than or equal to
- {connectivity}. For instance, two dimensional 4-connected and
+ *connectivity*. For instance, two dimensional 4-connected and
8-connected structures are generated as follows:
@@ -921,34 +923,34 @@
Most binary morphology functions can be expressed in terms of the
basic operations erosion and dilation:
- The {binary_erosion} function implements binary erosion of arrays
+ The :func:`binary_erosion` function implements binary erosion of arrays
of arbitrary rank with the given structuring element. The origin
parameter controls the placement of the structuring element as
- described in section :ref:`_ndimage_filter_functions`. If no
+ described in section :ref:`ndimage-filter-functions`. If no
structuring element is provided, an element with connectivity equal
- to one is generated using {generate_binary_structure}. The
- {border_value} parameter gives the value of the array outside
- boundaries. The erosion is repeated {iterations} times. If
- {iterations} is less than one, the erosion is repeated until the
- result does not change anymore. If a {mask} array is given, only
+ to one is generated using :func:`generate_binary_structure`. The
+ *border_value* parameter gives the value of the array outside
+ boundaries. The erosion is repeated *iterations* times. If
+ *iterations* is less than one, the erosion is repeated until the
+ result does not change anymore. If a *mask* array is given, only
those elements with a true value at the corresponding mask element
are modified at each iteration.
- The {binary_dilation} function implements binary dilation of
+ The :func:`binary_dilation` function implements binary dilation of
arrays of arbitrary rank with the given structuring element. The
origin parameter controls the placement of the structuring element
- as described in section :ref:`_ndimage_filter_functions`. If no
+ as described in section :ref:`ndimage-filter-functions`. If no
structuring element is provided, an element with connectivity equal
- to one is generated using {generate_binary_structure}. The
- {border_value} parameter gives the value of the array outside
- boundaries. The dilation is repeated {iterations} times. If
- {iterations} is less than one, the dilation is repeated until the
- result does not change anymore. If a {mask} array is given, only
+ to one is generated using :func:`generate_binary_structure`. The
+ *border_value* parameter gives the value of the array outside
+ boundaries. The dilation is repeated *iterations* times. If
+ *iterations* is less than one, the dilation is repeated until the
+ result does not change anymore. If a *mask* array is given, only
those elements with a true value at the corresponding mask element
are modified at each iteration.
- Here is an example of using {binary_dilation} to find all elements
+ Here is an example of using :func:`binary_dilation` to find all elements
that touch the border, by repeatedly dilating an empty array from
the border using the data array as the mask:
@@ -968,16 +970,16 @@
[0 0 0 0 0]]
-The {binary_erosion} and {binary_dilation} functions both have an
-{iterations} parameter which allows the erosion or dilation to be
+The :func:`binary_erosion` and :func:`binary_dilation` functions both have an
+*iterations* parameter which allows the erosion or dilation to be
repeated a number of times. Repeating an erosion or a dilation with
-a given structure {n} times is equivalent to an erosion or a
+a given structure *n* times is equivalent to an erosion or a
dilation with a structure that is *n*-1 times dilated with itself.
A function is provided that allows the calculation of a structure
that is dilated a number of times with itself:
- The {iterate_structure} function returns a structure by dilation
- of the input structure {iteration} - 1 times with itself. For
+ The :func:`iterate_structure` function returns a structure by dilation
+ of the input structure *iterations* - 1 times with itself. For
@@ -996,11 +998,11 @@
If the origin of the original structure is equal to 0, then it is
also equal to 0 for the iterated structure. If not, the origin must
- also be adapted if the equivalent of the {iterations} erosions or
+ also be adapted if the equivalent of the *iterations* erosions or
dilations must be achieved with the iterated structure. The adapted
origin is simply obtained by multiplying with the number of
- iterations. For convenience the {iterate_structure} also returns
- the adapted origin if the {origin} parameter is not {None}:
+ iterations. For convenience the :func:`iterate_structure` function also
+ returns the adapted origin if the *origin* parameter is not ``None``:
@@ -1016,151 +1018,149 @@
and dilation. The following functions provide a few of these operations
for convenience:
- The {binary_opening} function implements binary opening of arrays
+ The :func:`binary_opening` function implements binary opening of arrays
of arbitrary rank with the given structuring element. Binary
opening is equivalent to a binary erosion followed by a binary
dilation with the same structuring element. The origin parameter
controls the placement of the structuring element as described in
- section :ref:`_ndimage_filter_functions`. If no structuring element is
+ section :ref:`ndimage-filter-functions`. If no structuring element is
provided, an element with connectivity equal to one is generated
- using {generate_binary_structure}. The {iterations} parameter
+ using :func:`generate_binary_structure`. The *iterations* parameter
gives the number of erosions that are performed, followed by the same
number of dilations.
- The {binary_closing} function implements binary closing of arrays
+ The :func:`binary_closing` function implements binary closing of arrays
of arbitrary rank with the given structuring element. Binary
closing is equivalent to a binary dilation followed by a binary
erosion with the same structuring element. The origin parameter
controls the placement of the structuring element as described in
- section :ref:`_ndimage_filter_functions`. If no structuring element is
+ section :ref:`ndimage-filter-functions`. If no structuring element is
provided, an element with connectivity equal to one is generated
- using {generate_binary_structure}. The {iterations} parameter
+ using :func:`generate_binary_structure`. The *iterations* parameter
gives the number of dilations that are performed, followed by the
same number of erosions.
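As a sketch, closing a one-pixel hole in a square object with the
default structuring element:
>>> import numpy as np
>>> from scipy.ndimage import binary_closing
>>> a = np.zeros((7, 7), dtype=int)
>>> a[1:6, 1:6] = 1
>>> a[3, 3] = 0  # a square with a one-pixel hole
>>> closed = binary_closing(a)
>>> int(closed[3, 3])  # the hole has been filled
1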
- The {binary_fill_holes} function is used to close holes in
+ The :func:`binary_fill_holes` function is used to close holes in
objects in a binary image, where the structure defines the
connectivity of the holes. The origin parameter controls the
placement of the structuring element as described in section
- :ref:`_ndimage_filter_functions`. If no structuring element is
+ :ref:`ndimage-filter-functions`. If no structuring element is
provided, an element with connectivity equal to one is generated
- using {generate_binary_structure}.
+ using :func:`generate_binary_structure`.
- The {binary_hit_or_miss} function implements a binary
+ The :func:`binary_hit_or_miss` function implements a binary
hit-or-miss transform of arrays of arbitrary rank with the given
structuring elements. The hit-or-miss transform is calculated by
erosion of the input with the first structure, erosion of the
logical *not* of the input with the second structure, followed by
the logical *and* of these two erosions. The origin parameters
control the placement of the structuring elements as described in
- section :ref:`_ndimage_filter_functions`. If {origin2} equals {None} it
+ section :ref:`ndimage-filter-functions`. If *origin2* equals ``None`` it
is set equal to the *origin1* parameter. If the first structuring
element is not provided, a structuring element with connectivity
- equal to one is generated using {generate_binary_structure}, if
+ equal to one is generated using :func:`generate_binary_structure`. If
*structure2* is not provided, it is set equal to the logical *not*
of *structure1*.
+.. _ndimage-grey-morphology:
Grey-scale morphology
-.. _ndimage_grey_morphology:
Grey-scale morphology operations are the equivalents of binary
morphology operations that operate on arrays with arbitrary values.
Below we describe the grey-scale equivalents of erosion, dilation,
opening and closing. These operations are implemented in a similar
fashion as the filters described in section
-:ref:`_ndimage_filter_functions`, and we refer to this section for the
+:ref:`ndimage-filter-functions`, and we refer to this section for the
description of filter kernels and footprints, and the handling of
array borders. The grey-scale morphology operations optionally take
-a {structure} parameter that gives the values of the structuring
+a *structure* parameter that gives the values of the structuring
element. If this parameter is not given, the structuring element is
assumed to be flat with a value equal to zero. The shape of the
-structure can optionally be defined by the {footprint} parameter.
+structure can optionally be defined by the *footprint* parameter.
If this parameter is not given, the structure is assumed to be
-rectangular, with sizes equal to the dimensions of the {structure}
-array, or by the {size} parameter if {structure} is not given. The
-{size} parameter is only used if both {structure} and {footprint}
+rectangular, with sizes equal to the dimensions of the *structure*
+array, or given by the *size* parameter if *structure* is not given. The
+*size* parameter is only used if both *structure* and *footprint*
are not given, in which case the structuring element is assumed to
-be rectangular and flat with the dimensions given by {size}. The
-{size} parameter, if provided, must be a sequence of sizes or a
+be rectangular and flat with the dimensions given by *size*. The
+*size* parameter, if provided, must be a sequence of sizes or a
single number in which case the size of the filter is assumed to be
-equal along each axis. The {footprint} parameter, if provided, must
+equal along each axis. The *footprint* parameter, if provided, must
be an array that defines the shape of the kernel by its non-zero
elements.
Similar to binary erosion and dilation there are operations for
grey-scale erosion and dilation:
- The {grey_erosion} function calculates a multi-dimensional grey-
+ The :func:`grey_erosion` function calculates a multi-dimensional grey-
scale erosion.
- The {grey_dilation} function calculates a multi-dimensional grey-
+ The :func:`grey_dilation` function calculates a multi-dimensional grey-
scale dilation.
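A minimal sketch with a flat 3x3 structuring element (values
arbitrary):
>>> import numpy as np
>>> from scipy.ndimage import grey_dilation, grey_erosion
>>> a = np.zeros((5, 5))
>>> a[2, 2] = 3.0
>>> dilated = grey_dilation(a, size=(3, 3))  # spreads the peak over 3x3
>>> eroded = grey_erosion(dilated, size=(3, 3))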
Grey-scale opening and closing operations can be defined similar to
their binary counterparts:
- The {grey_opening} function implements grey-scale opening of
+ The :func:`grey_opening` function implements grey-scale opening of
arrays of arbitrary rank. Grey-scale opening is equivalent to a
grey-scale erosion followed by a grey-scale dilation.
- The {grey_closing} function implements grey-scale closing of
+ The :func:`grey_closing` function implements grey-scale closing of
arrays of arbitrary rank. Grey-scale closing is equivalent to a
grey-scale dilation followed by a grey-scale erosion.
- The {morphological_gradient} function implements a grey-scale
+ The :func:`morphological_gradient` function implements a grey-scale
morphological gradient of arrays of arbitrary rank. The grey-scale
morphological gradient is equal to the difference of a grey-scale
dilation and a grey-scale erosion.
- The {morphological_laplace} function implements a grey-scale
+ The :func:`morphological_laplace` function implements a grey-scale
+ morphological Laplace of arrays of arbitrary rank. The grey-scale
morphological Laplace is equal to the sum of a grey-scale dilation
and a grey-scale erosion minus twice the input.
- The {white_tophat} function implements a white top-hat filter of
+ The :func:`white_tophat` function implements a white top-hat filter of
arrays of arbitrary rank. The white top-hat is equal to the
difference of the input and a grey-scale opening.
- The {black_tophat} function implements a black top-hat filter of
+ The :func:`black_tophat` function implements a black top-hat filter of
arrays of arbitrary rank. The black top-hat is equal to the
difference of a grey-scale closing and the input.
+.. _ndimage-distance-transforms:
Distance transforms
-.. _ndimage_distance_transforms:
Distance transforms are used to
calculate the minimum distance from each element of an object to
the background. The following functions implement distance
transforms for three different distance metrics: Euclidean, City
Block, and Chessboard distances.
- The function {distance_transform_cdt} uses a chamfer type
+ The function :func:`distance_transform_cdt` uses a chamfer type
algorithm to calculate the distance transform of the input, by
replacing each object element (defined by values larger than zero)
with the shortest distance to the background (all non-object
elements). The structure determines the type of chamfering that is
done. If the structure is equal to 'cityblock', a structure is
- generated using {generate_binary_structure} with a squared
+ generated using :func:`generate_binary_structure` with a squared
distance equal to 1. If the structure is equal to 'chessboard', a
- structure is generated using {generate_binary_structure} with a
+ structure is generated using :func:`generate_binary_structure` with a
squared distance equal to the rank of the array. These choices
correspond to the common interpretations of the cityblock and the
chessboard distance metrics in two dimensions.
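For instance, a sketch of the city block variant; the center of a
3x3 object is two city block steps from the nearest background
element:
>>> import numpy as np
>>> from scipy.ndimage import distance_transform_cdt
>>> a = np.zeros((7, 7), dtype=int)
>>> a[2:5, 2:5] = 1
>>> int(distance_transform_cdt(a, metric='cityblock')[3, 3])
2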
@@ -1168,11 +1168,11 @@
In addition to the distance transform, the feature transform can be
calculated. In this case the index of the closest background
element is returned along the first axis of the result. The
- {return_distances}, and {return_indices} flags can be used to
+ *return_distances* and *return_indices* flags can be used to
indicate if the distance transform, the feature transform, or both
must be returned.
- The {distances} and {indices} arguments can be used to give
+ The *distances* and *indices* arguments can be used to give
optional output arrays that must be of the correct size and type
(both ``Int32``).
@@ -1182,7 +1182,7 @@
27:321-345, 1984.
- The function {distance_transform_edt} calculates the exact
+ The function :func:`distance_transform_edt` calculates the exact
euclidean distance transform of the input, by replacing each object
element (defined by values larger than zero) with the shortest
euclidean distance to the background (all non-object elements).
@@ -1190,16 +1190,16 @@
In addition to the distance transform, the feature transform can be
calculated. In this case the index of the closest background
element is returned along the first axis of the result. The
- {return_distances}, and {return_indices} flags can be used to
+ *return_distances* and *return_indices* flags can be used to
indicate if the distance transform, the feature transform, or both
must be returned.
Optionally the sampling along each axis can be given by the
- {sampling} parameter which should be a sequence of length equal to
+ *sampling* parameter which should be a sequence of length equal to
the input rank, or a single number in which case the sampling is assumed
to be equal along all axes.
- The {distances} and {indices} arguments can be used to give
+ The *distances* and *indices* arguments can be used to give
optional output arrays that must be of the correct size and type
(``Float64`` and ``Int32``).
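A sketch combining the distance and feature transforms with
anisotropic sampling (sampling values arbitrary):
>>> import numpy as np
>>> from scipy.ndimage import distance_transform_edt
>>> a = np.zeros((7, 7), dtype=int)
>>> a[2:5, 2:5] = 1
>>> dist, ind = distance_transform_edt(a, sampling=[2, 1], return_indices=True)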
@@ -1209,7 +1209,7 @@
in arbitrary dimensions. IEEE Trans. PAMI 25, 265-270, 2003.
- The function {distance_transform_bf} uses a brute-force algorithm
+ The function :func:`distance_transform_bf` uses a brute-force algorithm
to calculate the distance transform of the input, by replacing each
object element (defined by values larger than zero) with the
shortest distance to the background (all non-object elements). The
@@ -1219,17 +1219,17 @@
In addition to the distance transform, the feature transform can be
calculated. In this case the index of the closest background
element is returned along the first axis of the result. The
- {return_distances}, and {return_indices} flags can be used to
+ *return_distances* and *return_indices* flags can be used to
indicate if the distance transform, the feature transform, or both
must be returned.
Optionally the sampling along each axis can be given by the
- {sampling} parameter which should be a sequence of length equal to
+ *sampling* parameter which should be a sequence of length equal to
the input rank, or a single number in which case the sampling is assumed
to be equal along all axes. This parameter is only used in the case
of the euclidean distance transform.
- The {distances} and {indices} arguments can be used to give
+ The *distances* and *indices* arguments can be used to give
optional output arrays that must be of the correct size and type
(``Float64`` and ``Int32``).
@@ -1241,11 +1241,11 @@
Segmentation and labeling
Segmentation is the process of separating objects of interest from
the background. The simplest approach is probably intensity
-thresholding, which is easily done with {numarray} functions:
+thresholding, which is easily done with :mod:`numpy` functions:
@@ -1260,13 +1260,13 @@
[0 0 0 0 1 0]]
The result is a binary image, in which the individual objects still
-need to be identified and labeled. The function {label} generates
+need to be identified and labeled. The function :func:`label` generates
an array where each object is assigned a unique number:
- The {label} function generates an array where the objects in the
+ The :func:`label` function generates an array where the objects in the
input are labeled with an integer index. It returns a tuple
consisting of the array of object labels and the number of objects
- found, unless the {output} parameter is given, in which case only
+ found, unless the *output* parameter is given, in which case only
the number of objects is returned. The connectivity of the objects
is defined by a structuring element. For instance, in two
dimensions using a four-connected structuring element gives:
@@ -1297,7 +1297,7 @@
[0 0 0 0 1 0]]
If no structuring element is provided, one is generated by calling
- {generate_binary_structure} (see section :ref:`_ndimage_morphology`)
+ :func:`generate_binary_structure` (see section :ref:`ndimage-binary-morphology`)
using a connectivity of one (which in 2D is the 4-connected
structure of the first example). The input can be of any type; any
value not equal to zero is taken to be part of an object. This is
@@ -1323,13 +1323,13 @@
There is a large number of other approaches for segmentation, for
instance from an estimate of the borders of the objects, which can
be obtained with derivative filters. One such
-approach is watershed segmentation. The function {watershed_ift}
+approach is watershed segmentation. The function :func:`watershed_ift`
generates an array where each object is assigned a unique label,
from an array that localizes the object borders, generated for
instance by a gradient magnitude filter. It uses an array
containing initial markers for the objects:
- The {watershed_ift} function applies a watershed from markers
+ The :func:`watershed_ift` function applies a watershed from markers
algorithm, using an Iterative Forest Transform, as described in: P.
Felkel, R. Wegenkittl, and M. Bruckschwaiger, "Implementation and
Complexity of the Watershed-from-Markers Algorithm Computed as a
@@ -1390,7 +1390,7 @@
The result is that the object (marker=2) is smaller because the
second marker was processed earlier. This may not be the desired
effect if the first marker was supposed to designate a background
- object. Therefore {watershed_ift} treats markers with a negative
+ object. Therefore :func:`watershed_ift` treats markers with a negative
value explicitly as background markers and processes them after the
normal markers. For instance, replacing the first marker by a
negative marker gives a result similar to the first example:
@@ -1415,10 +1415,10 @@
The connectivity of the objects is defined by a structuring
element. If no structuring element is provided, one is generated by
- calling {generate_binary_structure} (see section
- :ref:`_ndimage_morphology`) using a connectivity of one (which in 2D is
- a 4-connected structure.) For example, using an 8-connected
- structure with the last example yields a different object:
+ calling :func:`generate_binary_structure` (see section
+ :ref:`ndimage-binary-morphology`) using a connectivity of one
+ (which in 2D is a 4-connected structure.) For example, using
+ an 8-connected structure with the last example yields a different object:
@@ -1437,14 +1437,14 @@
Object measurements
Given an array of labeled objects, the properties of the individual
-objects can be measured. The {find_objects} function can be used
+objects can be measured. The :func:`find_objects` function can be used
to generate a list of slices that, for each object, give the
smallest sub-array that fully contains the object:
- The {find_objects} finds all objects in a labeled array and
+ The :func:`find_objects` function finds all objects in a labeled array and
returns a list of slices that correspond to the smallest regions in
the array that contain the object. For instance:
@@ -1461,10 +1461,10 @@
[1 1 1]
[0 1 0]]
- {find_objects} returns slices for all objects, unless the
- {max_label} parameter is larger then zero, in which case only the
- first {max_label} objects are returned. If an index is missing in
- the {label} array, {None} is return instead of a slice. For
+ :func:`find_objects` returns slices for all objects, unless the
+ *max_label* parameter is larger than zero, in which case only the
+ first *max_label* objects are returned. If an index is missing in
+ the *label* array, ``None`` is returned instead of a slice. For
@@ -1473,7 +1473,7 @@
[(slice(0, 1, None),), None, (slice(2, 3, None),)]
-The list of slices generated by {find_objects} is useful to find
+The list of slices generated by :func:`find_objects` is useful to find
the position and dimensions of the objects in the array, but can
also be used to perform measurements on the individual objects. Say
we want to find the sum of the intensities of an object in an image:
@@ -1522,118 +1522,118 @@
>>> print sum(image, labels, [0, 2])
[178.0, 80.0]
-The measurement functions described below all support the {index}
+The measurement functions described below all support the *index*
parameter to indicate which object(s) should be measured. The
-default value of {index} is {None}. This indicates that all
+default value of *index* is ``None``. This indicates that all
elements where the label is larger than zero should be treated as a
-single object and measured. Thus, in this case the {labels} array
+single object and measured. Thus, in this case the *labels* array
is treated as a mask defined by the elements that are larger than
-zero. If {index} is a number or a sequence of numbers it gives the
-labels of the objects that are measured. If {index} is a sequence,
+zero. If *index* is a number or a sequence of numbers it gives the
+labels of the objects that are measured. If *index* is a sequence,
a list of the results is returned. Functions that return more than
-one result, return their result as a tuple if {index} is a single
-number, or as a tuple of lists, if {index} is a sequence.
+one result, return their result as a tuple if *index* is a single
+number, or as a tuple of lists, if *index* is a sequence.
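For instance, a small sketch of both behaviors of *index* (the list
output form assumed here matches the print example above):
>>> import numpy as np
>>> from scipy import ndimage
>>> image = np.array([[1, 2], [3, 4]])
>>> labels = np.array([[1, 1], [0, 2]])
>>> ndimage.sum(image, labels, index=[1, 2])
[3.0, 4.0]
>>> ndimage.sum(image, labels)  # index=None: all labeled elements together
7.0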
- The {sum} function calculates the sum of the elements of the object
- with label(s) given by {index}, using the {labels} array for the
- object labels. If {index} is {None}, all elements with a non-zero
- label value are treated as a single object. If {label} is {None},
- all elements of {input} are used in the calculation.
+ The :func:`sum` function calculates the sum of the elements of the object
+ with label(s) given by *index*, using the *labels* array for the
+ object labels. If *index* is ``None``, all elements with a non-zero
+ label value are treated as a single object. If *labels* is ``None``,
+ all elements of *input* are used in the calculation.
- The {mean} function calculates the mean of the elements of the
- object with label(s) given by {index}, using the {labels} array for
- the object labels. If {index} is {None}, all elements with a
- non-zero label value are treated as a single object. If {label} is
- {None}, all elements of {input} are used in the calculation.
+ The :func:`mean` function calculates the mean of the elements of the
+ object with label(s) given by *index*, using the *labels* array for
+ the object labels. If *index* is ``None``, all elements with a
+ non-zero label value are treated as a single object. If *labels* is
+ ``None``, all elements of *input* are used in the calculation.
- The {variance} function calculates the variance of the elements of
- the object with label(s) given by {index}, using the {labels} array
- for the object labels. If {index} is {None}, all elements with a
- non-zero label value are treated as a single object. If {label} is
- {None}, all elements of {input} are used in the calculation.
+ The :func:`variance` function calculates the variance of the elements of
+ the object with label(s) given by *index*, using the *labels* array
+ for the object labels. If *index* is {None}, all elements with a
+ non-zero label value are treated as a single object. If *label* is
+ {None}, all elements of *input* are used in the calculation.
- The {standard_deviation} function calculates the standard
+ The :func:`standard_deviation` function calculates the standard
deviation of the elements of the object with label(s) given by
- {index}, using the {labels} array for the object labels. If {index}
+ *index*, using the *labels* array for the object labels. If *index*
is {None}, all elements with a non-zero label value are treated as
- a single object. If {label} is {None}, all elements of {input} are
+ a single object. If *label* is {None}, all elements of *input* are
used in the calculation.
- The {minimum} function calculates the minimum of the elements of
- the object with label(s) given by {index}, using the {labels} array
- for the object labels. If {index} is {None}, all elements with a
- non-zero label value are treated as a single object. If {label} is
- {None}, all elements of {input} are used in the calculation.
+ The :func:`minimum` function calculates the minimum of the elements of
+ the object with label(s) given by *index*, using the *labels* array
+ for the object labels. If *index* is {None}, all elements with a
+ non-zero label value are treated as a single object. If *label* is
+ {None}, all elements of *input* are used in the calculation.
- The {maximum} function calculates the maximum of the elements of
- the object with label(s) given by {index}, using the {labels} array
- for the object labels. If {index} is {None}, all elements with a
- non-zero label value are treated as a single object. If {label} is
- {None}, all elements of {input} are used in the calculation.
+ The :func:`maximum` function calculates the maximum of the elements of
+ the object with label(s) given by *index*, using the *labels* array
+ for the object labels. If *index* is {None}, all elements with a
+ non-zero label value are treated as a single object. If *label* is
+ {None}, all elements of *input* are used in the calculation.
- The {minimum_position} function calculates the position of the
+ The :func:`minimum_position` function calculates the position of the
minimum of the elements of the object with label(s) given by
- {index}, using the {labels} array for the object labels. If {index}
+ *index*, using the *labels* array for the object labels. If *index*
is {None}, all elements with a non-zero label value are treated as
- a single object. If {label} is {None}, all elements of {input} are
+ a single object. If *label* is {None}, all elements of *input* are
used in the calculation.
- The {maximum_position} function calculates the position of the
+ The :func:`maximum_position` function calculates the position of the
maximum of the elements of the object with label(s) given by
- {index}, using the {labels} array for the object labels. If {index}
+ *index*, using the *labels* array for the object labels. If *index*
is {None}, all elements with a non-zero label value are treated as
- a single object. If {label} is {None}, all elements of {input} are
+ a single object. If *label* is {None}, all elements of *input* are
used in the calculation.
- The {extrema} function calculates the minimum, the maximum, and
+ The :func:`extrema` function calculates the minimum, the maximum, and
their positions, of the elements of the object with label(s) given
- by {index}, using the {labels} array for the object labels. If
- {index} is {None}, all elements with a non-zero label value are
- treated as a single object. If {label} is {None}, all elements of
- {input} are used in the calculation. The result is a tuple giving
+ by *index*, using the *labels* array for the object labels. If
+ *index* is {None}, all elements with a non-zero label value are
+ treated as a single object. If *label* is {None}, all elements of
+ *input* are used in the calculation. The result is a tuple giving
the minimum, the maximum, the position of the minimum and the
position of the maximum. The result is the same as a tuple formed
- by the results of the functions {minimum}, {maximum},
- {minimum_position}, and {maximum_position} that are described
+ by the results of the functions *minimum*, *maximum*,
+ *minimum_position*, and *maximum_position* that are described
- The {center_of_mass} function calculates the center of mass of
- the of the object with label(s) given by {index}, using the
- {labels} array for the object labels. If {index} is {None}, all
+ The :func:`center_of_mass` function calculates the center of mass of
+ the object with label(s) given by *index*, using the
+ *labels* array for the object labels. If *index* is {None}, all
elements with a non-zero label value are treated as a single
- object. If {label} is {None}, all elements of {input} are used in
+ object. If *label* is {None}, all elements of *input* are used in
the calculation.
- The {histogram} function calculates a histogram of the of the
- object with label(s) given by {index}, using the {labels} array for
- the object labels. If {index} is {None}, all elements with a
- non-zero label value are treated as a single object. If {label} is
- {None}, all elements of {input} are used in the calculation.
- Histograms are defined by their minimum ({min}), maximum ({max})
- and the number of bins ({bins}). They are returned as
+ The :func:`histogram` function calculates a histogram of the
+ object with label(s) given by *index*, using the *labels* array for
+ the object labels. If *index* is {None}, all elements with a
+ non-zero label value are treated as a single object. If *label* is
+ {None}, all elements of *input* are used in the calculation.
+ Histograms are defined by their minimum (*min*), maximum (*max*)
+ and the number of bins (*bins*). They are returned as
one-dimensional arrays of type Int32.
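For instance (an illustrative array, with two bins on the interval
from 0.0 to 1.0):

>>> a = array([0.2, 0.4, 0.6, 0.8])
>>> print histogram(a, 0.0, 1.0, 2)
[2 2]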
-Extending {nd_image} in C
+.. _ndimage-ccallbacks:
-.. _ndimage_ccallbacks:
+Extending *ndimage* in C
-{C callback functions} A few functions in the {numarray.nd_image} take a call-back argument. This can be a python function, but also a CObject containing a pointer to a C function. To use this feature, you must write your own C extension that defines the function, and define a python function that
+{C callback functions} A few functions in {numarray.ndimage} take a call-back argument. This can be a Python function, but also a CObject containing a pointer to a C function. To use this feature, you must write your own C extension that defines the function, and define a Python function that
returns a CObject containing a pointer to this function.
An example of a function that supports this is
-{geometric_transform} (see section :ref:`_ndimage_interpolation`).
+:func:`geometric_transform` (see section :ref:`ndimage-interpolation`).
You can pass it a python callable object that defines a mapping
from all output coordinates to corresponding coordinates in the
input array. This mapping function can also be a C function, which
@@ -1660,17 +1660,17 @@
This function is called at every element of the output array,
-passing the current coordinates in the {output_coordinates} array.
-On return, the {input_coordinates} array must contain the
+passing the current coordinates in the *output_coordinates* array.
+On return, the *input_coordinates* array must contain the
coordinates at which the input is interpolated. The ranks of the
-input and output array are passed through {output_rank} and
-{input_rank}. The value of the shift is passed through the
-{callback_data} argument, which is a pointer to void. The function
+input and output array are passed through *output_rank* and
+*input_rank*. The value of the shift is passed through the
+*callback_data* argument, which is a pointer to void. The function
returns an error status, in this case always 1, since no error can
A pointer to this function and a pointer to the shift value must be
-passed to {geometric_transform}. Both are passed by a single
+passed to :func:`geometric_transform`. Both are passed by a single
CObject which is created by the following python extension
@@ -1737,23 +1737,23 @@
[ 0. 4.8125 6.1875]
[ 0. 8.2625 9.6375]]
-C Callback functions for use with {nd_image} functions must all
+C Callback functions for use with :mod:`ndimage` functions must all
be written according to this scheme. The next section lists the
-{nd_image} functions that acccept a C callback function and
+:mod:`ndimage` functions that accept a C callback function and
gives the prototype of the callback function.
Functions that support C callback functions
-The {nd_image} functions that support C callback functions are
+The :mod:`ndimage` functions that support C callback functions are
described here. Obviously, the prototype of the function that is
provided to these functions must match exactly what they
expect. Therefore we give here the prototypes of the callback
functions. All these callback functions accept a void
-{callback_data} pointer that must be wrapped in a CObject using
+*callback_data* pointer that must be wrapped in a CObject using
the Python {PyCObject_FromVoidPtrAndDesc} function, which can also
accept a pointer to a destructor function to free any memory
-allocated for {callback_data}. If {callback_data} is not needed,
+allocated for *callback_data*. If *callback_data* is not needed,
{PyCObject_FromVoidPtr} may be used instead. The callback
functions must return an integer error status that is equal to zero
if something went wrong, or 1 otherwise. If an error occurs, you
@@ -1761,45 +1761,45 @@
message before returning, otherwise, a default error message is set
by the calling function.
-The function {generic_filter} (see section
-:ref:`_ndimage_genericfilters`) accepts a callback function with the
+The function :func:`generic_filter` (see section
+:ref:`ndimage-genericfilters`) accepts a callback function with the
following prototype:
The calling function iterates over the elements of the input and
output arrays, calling the callback function at each element. The
elements within the footprint of the filter at the current element
- are passed through the {buffer} parameter, and the number of
- elements within the footprint through {filter_size}. The
- calculated valued should be returned in the {return_value}
+ are passed through the *buffer* parameter, and the number of
+ elements within the footprint through *filter_size*. The
+ calculated value should be returned in the *return_value*
-The function {generic_filter1d} (see section
-:ref:`_ndimage_genericfilters`) accepts a callback function with the
+The function :func:`generic_filter1d` (see section
+:ref:`ndimage-genericfilters`) accepts a callback function with the
following prototype:
The calling function iterates over the lines of the input and
output arrays, calling the callback function at each line. The
current line is extended according to the border conditions set by
the calling function, and the result is copied into the array that
- is passed through the {input_line} array. The length of the input
- line (after extension) is passed through {input_length}. The
+ is passed through the *input_line* array. The length of the input
+ line (after extension) is passed through *input_length*. The
callback function should apply the 1D filter and store the result
- in the array passed through {output_line}. The length of the
- output line is passed through {output_length}.
+ in the array passed through *output_line*. The length of the
+ output line is passed through *output_length*.
-The function {geometric_transform} (see section
-:ref:`_ndimage_interpolation`) expects a function with the following
+The function :func:`geometric_transform` (see section
+:ref:`ndimage-interpolation`) expects a function with the following
The calling function iterates over the elements of the output
array, calling the callback function at each element. The
coordinates of the current output element are passed through
- {output_coordinates}. The callback function must return the
+ *output_coordinates*. The callback function must return the
coordinates at which the input must be interpolated in
- {input_coordinates}. The rank of the input and output arrays are
- given by {input_rank} and {output_rank} respectively.
+ *input_coordinates*. The rank of the input and output arrays are
+ given by *input_rank* and *output_rank* respectively.
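As a sketch of the Python-callable counterpart of this C callback
(the array and the 0.5 shift mirror the example used earlier in
this document):

>>> a = arange(12.).reshape((4, 3))
>>> def shift_func(output_coordinates):
...     return (output_coordinates[0] - 0.5,
...             output_coordinates[1] - 0.5)
...
>>> print geometric_transform(a, shift_func)
[[ 0.      0.      0.    ]
 [ 0.      1.3625  2.7375]
 [ 0.      4.8125  6.1875]
 [ 0.      8.2625  9.6375]]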
More information about the Scipy-svn mailing list | {"url":"http://mail.scipy.org/pipermail/scipy-svn/2008-December/003225.html","timestamp":"2014-04-16T19:25:36Z","content_type":null,"content_length":"81820","record_id":"<urn:uuid:59f5045c-018e-4c3e-b9cc-d6e0d93915bb>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00004-ip-10-147-4-33.ec2.internal.warc.gz"} |
Back to Alexey Onufriev's home page
"GEM" -- Analytical Poisson-Boltzmann
Electrostatic interactions are a key factor determining the properties of biomolecules. The ability to compute the electrostatic potential generated by a molecule is often essential to understanding the mechanism
behind its biological function such as catalytic activity or ligand binding. To obtain the electrostatic potential everywhere in space, the (linearized) Poisson-Boltzmann equation -- 2nd order PDE --
is usually solved, traditionally by numerical approaches. We are working on a new theory -- ALPB -- which allows one to compute electrostatic potential around biomolecules orders of magnitude faster
(and with much less memory needed) than the traditional approach based on numerical solution of the PB equation. A software package GEM has been developed, mainly by John Gordon, to visualize and
manipulate electrostatic potentials. Remarkably, it was the ability to visualize and compare approximate analytical potentials with the exact and numerical references that allowed us (this part is
mostly due to Andrew Fenley), after much trial and error, to understand and capture the key physics of the problem in the form of a very simple analytical formula. | {"url":"http://people.cs.vt.edu/~onufriev/RESEARCH/GEM.html","timestamp":"2014-04-19T14:43:21Z","content_type":null,"content_length":"3805","record_id":"<urn:uuid:464938e7-34b5-490f-ada6-1c7c91e5fdbe>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00180-ip-10-147-4-33.ec2.internal.warc.gz"}
From Math Images
Abram 7/9
I replaced "The differences between catenaries arises from the scaling factor a from the first equation above" with "The differences between catenaries arises from the scaling factor a in the first
equation above," which I'm hoping doesn't feel like stepping on your toes.
I would still say that it is more accurate to say that one equation defines the shape of the catenary, while the second equation just defines one of the symbols in that equation, than it is to say that
two equations define the catenary (the same way that if you have an equation for hours of daylight that involves using the tan function, a second equation defining tan(x) = sin(x)/cos(x) wouldn't be a
second equation describing hours of daylight, but would simply define the tan function).
However, this change isn't critical and the page as a whole is excellent. Really interesting, well laid out, good choice of details, well explained, etc. I vote ready.
Abram 7/8
If you want help figuring out how to say whatever you're getting at with that sentence about the family, we could discuss it in person and you could tell me what you mean, though you could certainly
drop the line as well. Whichever you prefer.
My comment from before about how the hidden image in the basic description shouldn't be hidden still holds, especially because the text that explains the image is not hidden, so it's weird to have
visible text describing a hidden image.
Anna 7/7
I still think that what you're trying to say is valuable. I also suggest making the change to the math that I proposed and Abram agreed with. I think it clarifies the equation, though you may want to
break it up into two lines (I've seen you align equations elsewhere, so I know you know how to do it).
Tanya 7/7
I understand the issue with saying family. I believe then that it would just be easier for me to take out that sentence and leave it at that...?
Abram 7/7
Great job on the catenary/parabola section. The wording you chose is much better than my suggested wording. The only thing there I would change would be to unhide the hidden image. It's a useful
image, and I think having it shown from the beginning does not clutter the page.
You are absolutely right about your definition of cosh(x), that cosh(x) is defined as (e^x + e^-x) / 2 and nothing else. For this reason, it would be totally correct to say that "cosh" represents the
same thing in both equations. However, the entire equation y = a cosh(x/a) is not identical or equivalent to the equation cosh(x) = (e^x + e^-x) / 2. For instance, one equation has a's in it and one
doesn't. Anna's recommended fix (below) is a good one.
Thanks for explaining what you mean about the family of curves business. Anna is right that the each possible value of a gives a different curve, and all these curves together make up a family.
Also, I'm not quite sure what you mean by "there are no other equations derived from the hyperbolic cosine." After all, you can make up any old equation you want to using cosh. How about
y = cosh(x)^3 + 2*cosh(x/4), or anything else? Are you trying to say that hyperbolic cosine only has one definition? If so, I think you already convey that with the equation that defines cosh as
(e^x + e^-x)/2. Or maybe there's something else you are getting at?
Tanya 7/7
I had the three part equality before, but Steve told me to change it to how it is now... I don't mind changing it back but just letting you know why it is that way.
Anna 7/7
Now that I see Abram's comment, I agree that you're using the term "family" in an inappropriate way. I'd say that $2 \cosh \left( \frac{x}{2} \right)$ and $4 \cosh \left( \frac{x}{4} \right)$ are
part of the same family of curves, where the family is defined as the set $\{ a \cosh \left( \frac{x}{a} \right) \}$ such that a is an element of the real numbers. $4 \cosh \left( \frac{x}{4} \right)$
is a catenary, which is an element of the family of curves that I just defined.
I'd also rearrange that section now that I'm looking at it... I'd start by saying "We use the hyperbolic cosine function, $\cosh(x)=\frac{e^x+e^{-x}}{2}$, to write the equation of the catenary:
$y=a \cosh \left( \frac{x}{a} \right) = a \left(\frac{e^{\frac{x}{a}}+e^{-\frac{x}{a}}}{2} \right)$"
or something like that.
Tanya 7/7
I changed the catenary/parabola section and added a mouseover (so now it is both a mouseover and a link) to the word parabola. I added a sentence explaining what I meant by not being a family of
curves. All my research shows that cosh(x) is equal to the e^x statement, so I am in no position to change it. You can check Wolfram MathWorld, and I also believe Steve was the one that told me to
put the equal sign. Let me know what you think.
Abram 7/6
This page is organized and presented really well, and I really like your choice of material to include. I agree with Anna that "The Catenary and the Parabola Conceptually" can be moved to the basic
description. It's a really nice description. I also agree that the word "parabola" has the potential to scare some people, but I bet that could be addressed with a mouseover that says something like,
"Another type of u-shaped curve. A parabola is not the shape formed by a rope hanging from two ends..." or something similar. This kind of mouseover ought to make this section totally appropriate for
the basic description. By the way, it's totally possible to have both a mouseover and a link on the word parabola.
A couple of details on "The Catenary Mathematically":
• The second equation should read "cosh(x)" on the left-hand-side of the equation, instead of just "cosh".
• You write that the two equations in this section are identical, which they aren't. What if you said something like, "the second equation defines the cosh symbol that is used in the first equation"?
• Two notes on this sentence: It is important to note that catenaries are not a family of curves; the differences between catenaries comes from the scaling factor a from the equations and the value
of x.
□ I would think that catenaries are a family of curves, because each value of a gives a different curve.
□ Any catenary takes every possible value of x as input, so the value of x does not actually give rise to different curves, and it's not even clear what "the value of x" refers to. You may be
trying to say something that is right on; I'm just not clear what that is.
Anna 7/6
Just going by the sub heading "The Catenary and the Parabola Conceptually" I really do think that should be in the basic description. I think it will add a lot to the page to have that unhidden at
the beginning. It's a really great explanation that doesn't need any math beyond algebra 1, which I would consider pretty basic.
The cosh totally belongs in the next section (let's face it, hyperbolic functions scare people!), but I'm afraid that people won't look at that simple explanation because they're too afraid it will
require more than they know.
I'd suggest you get another opinion on this (probably from Abram), too, because I see you haven't changed it from before. If you could at least give me a solid explanation of why you feel it fits in
the mathematical description, I'd really appreciate it.
Anna 7/5
First, check out the email I sent you about this page. But also, I'd move your discussion of the bridges up into "a basic description" and leave just the math in "a more mathematical description".
Would you like me to go over how one can derive the catenary equation from the physics of the hanging string? You could also find it in a lot of physics books... try Classical Mechanics by John
Taylor (this is the phys 111 text book).
-Anna 6/9 | {"url":"http://mathforum.org/mathimages/index.php?title=Talk:Catenary&oldid=7903","timestamp":"2014-04-17T16:48:44Z","content_type":null,"content_length":"26087","record_id":"<urn:uuid:5d979816-c233-43c0-af7e-437172e24d47>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00589-ip-10-147-4-33.ec2.internal.warc.gz"} |
Topic: Re: Polynomial problems - Solid Harmonics
Replies: 0
Re: Polynomial problems - Solid Harmonics
Posted: Jun 18, 1996 3:09 AM
Tommy Nordgren wrote:
> I have a set of orthogonal polynomials in x,y,z, which is
> Gram-Scmidt orthogonalized with respect to integration over the unit
> sphere.
Aren't you working with the Solid Harmonics then?
The solid harmonics are closely related to the Spherical Harmonics.
After defining a suitable transformation between spherical polar
coordinates and cartesian coordinates:
rtpToxyz = {Exp[Complex[0,n_] p] ->
((x+Sign[n] I y)/(r Sin[t]))^Abs[n],
Cos[t] -> z/r};
(* The Cos[t] -> z/r rule is a reconstruction: the archived post is
truncated after the first rule, and a rule of this form is needed
to eliminate the remaining polar-angle factors. *)
the (complex) solid harmonics are:
SolidHarmonics[l_,m_,x_,y_,z_] :=
((r^l SphericalHarmonicY[l,m,t,p] /.
rtpToxyz) /. r->(x^2+y^2+z^2)^(1/2)) // Simplify
For example,
SolidHarmonics[2,1,x,y,z] // ComplexExpand
(-3 Sqrt[5/(6 Pi)] x z)/2 - (3 I Sqrt[5/(6 Pi)] y z)/2
The solid harmonics are orthonormal and very easily computed.
Paul Abbott
Department of Physics Phone: +61-9-380-2734
The University of Western Australia Fax: +61-9-380-1014
Nedlands WA 6907 paul@physics.uwa.edu.au
AUSTRALIA http://www.pd.uwa.edu.au/Paul | {"url":"http://mathforum.org/kb/thread.jspa?threadID=223606","timestamp":"2014-04-16T11:34:02Z","content_type":null,"content_length":"15086","record_id":"<urn:uuid:274da8a2-e9b1-4a34-85f0-4be1546fd9bf>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00297-ip-10-147-4-33.ec2.internal.warc.gz"} |
Palos Heights Geometry Tutor
...Proper time management and attention to detail are the keys to a high score. This requires effortful engagement with the material and some open-mindedness on the part of the student. The tutor's
job is to provide the student with the strategies that will help them overcome any obstacles.
24 Subjects: including geometry, calculus, physics, GRE
...The level is not important---I'm just here to help you learn! I'm also available to tutor music theory. My grades in my first two semesters of the IU theory sequence were both A+; I earned A's
in subsequent honors theory courses.
13 Subjects: including geometry, calculus, statistics, algebra 1
My name is Marty, and I would love to tutor your children in whatever they need help in. I majored in History at the University of Illinois at Urbana-Champaign, and also achieved an Associates in
Science from Parkland Community College. For my Associates degree, I studied mostly chemistry and math, but also physics and biology.
28 Subjects: including geometry, chemistry, calculus, algebra 1
...Our program consisted of a "Math Lab" where beginning college students taking these classes could come in and work on their homework with free resources, including on-duty math tutors. As one
of these tutors it was my responsibility to assist students with questions and to periodically check in ...
7 Subjects: including geometry, calculus, physics, algebra 1
...I am their friend; however, I am very firm and stick to certain 'rules'. For example, I stand firm on restricting the use of a calculator for what I consider basic math calculations. If a
fourth grader can do the math in their head, then I expect someone in grades 5 through 12 to be able to do the same!
12 Subjects: including geometry, algebra 1, algebra 2, trigonometry | {"url":"http://www.purplemath.com/Palos_Heights_geometry_tutors.php","timestamp":"2014-04-21T12:33:37Z","content_type":null,"content_length":"24096","record_id":"<urn:uuid:2ace511b-c43b-426a-ba8f-4dc074627925>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00561-ip-10-147-4-33.ec2.internal.warc.gz"} |
Record Types and Subtyping in Type Theory, with Applications to the Theory of Programming
, 1996
"... Several proof-assistants rely on the very formal basis of Pure Type Systems. However, some practical issues raised by the development of large proofs lead to add other features to actual
implementations for handling namespace management, for developing reusable proof libraries and for separate verif ..."
Cited by 24 (3 self)
Several proof-assistants rely on the very formal basis of Pure Type Systems. However, some practical issues raised by the development of large proofs have led to the addition of other features to
actual implementations for handling namespace management, for developing reusable proof libraries and for separate verification of distinct parts of large proofs. Unfortunately, little theoretical
basis is given for these features. In this paper we propose an extension of Pure Type Systems with a module calculus adapted from SML-like module systems for programming languages. Our module calculus gives a
theoretical framework addressing the need for these features. We show that our module extension is conservative, and that type inference in the module extension of a given PTS is decidable under some
hypotheses over the considered PTS.
- Theorem Proving in Higher Order Logics, TPHOLs 2000 , 2000
"... this paper appears in Theorem Proving in Higher Order Logics, TPHOLs 2000, c ..."
- PROCEEDINGS OF THE 1996 WORKSHOP ON TYPES FOR PROOFS AND PROGRAMS , 1997
"... This thesis is about exploring the possibilities of a limited version of Martin-Löf's type theory. This exploration consists both of metatheoretical considerations and of the actual use of that
version of type theory to prove Higman's lemma. The thesis is organized in two papers, one in which type t ..."
Cited by 5 (0 self)
This thesis is about exploring the possibilities of a limited version of Martin-Löf's type theory. This exploration consists both of metatheoretical considerations and of the actual use of that
version of type theory to prove Higman's lemma. The thesis is organized in two papers, one in which type theory itself is studied and one in which it is used to prove Higman's lemma. In the first
paper, A Lambda Calculus Model of Martin-Löf's Theory of Types with Explicit Substitution, we present the formal calculus in complete detail. It consists of Martin-Löf's logical framework with
explicit substitution extended with some inductively defined sets, also given in complete detail. These inductively defined sets are precisely those we need in the second paper of this thesis for the
formal proof of Higman's lemma. The limitations of the formalism come from the fact that we do not introduce universes. It is known that for other versions of type theory, the absence of universes
implies the impossib...
"... . We present an example of formalization of systems of algebras using an extension of Martin-Lof's theory of types with record types and subtyping. This extension has been presented in [5]. In
this paper we intend to illustrate all the features of the extended theory that we consider relevant for th ..."
Cited by 4 (1 self)
We present an example of formalization of systems of algebras using an extension of Martin-Löf's theory of types with record types and subtyping. This extension has been presented in [5]. In this
paper we intend to illustrate all the features of the extended theory that we consider relevant for the task of formalizing algebraic constructions. We also provide code of the formalization as
accepted by a type checker that has been implemented. 1. Introduction We shall use an extension of Martin-Löf's theory of logical types [14] with dependent record types and subtyping as the formal
language in which constructions concerning systems of algebras are going to be represented. The original formulation of Martin-Löf's theory of types, from now on referred to as the logical framework,
has been presented in [15, 7]. The system of types that this calculus embodies comprises the type Set (the type of inductively defined sets), dependent function types and, for each set A, the type of the
elements of A...
- in International Workshop on Types for Proofs and Programs 1998, TYPES '98 Selected Papers, LNCS , 1998
"... This article presents a formulation of the fan theorem in Martin-Löf's type theory. Starting from one of the standard versions of the fan theorem we gradually introduce reformulations leading to
a final version which is easy to interpret in type theory. Finally we describe a formal proof of that fin ..."
Cited by 2 (0 self)
This article presents a formulation of the fan theorem in Martin-Löf's type theory. Starting from one of the standard versions of the fan theorem we gradually introduce reformulations leading to a
final version which is easy to interpret in type theory. Finally we describe a formal proof of that final version of the fan theorem.
, 2000
"... The extension of Martin-Löf's theory of types with record types and subtyping has elsewhere been presented. We give a concise description of that theory and motivate its use for the
formalization of systems of algebras. We also give a short account of a proof checker that has been implemented on mac ..."
Cited by 1 (0 self)
The extension of Martin-Löf's theory of types with record types and subtyping has elsewhere been presented. We give a concise description of that theory and motivate its use for the formalization of
systems of algebras. We also give a short account of a proof checker that has been implemented on machine. The logical heart of the checker is constituted by the procedures for the mechanical
verification of the forms of judgement of a particular formulation of the extension. The case study that we put forward in this work has been developed and mechanically verified using the implemented
system. We illustrate all the features of the extended theory that we consider relevant for the task of formalizing algebraic constructions.
- In this thesis , 1997
"... This paper presents a proof-irrelevant model of Martin-Lof's theory of types with explicit substitution; that is, a model in the style of [Smi88], in which types are interpreted as truth values
and objects (or proofs) are irrelevant. The fundamental difference here is the need to cope with a formal ..."
Cited by 1 (1 self)
This paper presents a proof-irrelevant model of Martin-Löf's theory of types with explicit substitution; that is, a model in the style of [Smi88], in which types are interpreted as truth values and
objects (or proofs) are irrelevant. The fundamental difference here is the need to cope with a formal system which in addition to types has sets and substitutions. This difference leads us to a whole
reformulation of the model which consists in defining an interpretation in terms of the untyped lambda calculus. From this interpretation the proof-irrelevant model is obtained as a particular
instance. Finally, the paper outlines the definition of a realizability model which is also obtained as a particular instance. Keywords: type theory, explicit substitution, models of type theory,
proof-irrelevant model, realizability model.
"... Records and record types in semantic theory ..." | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=271040","timestamp":"2014-04-18T22:29:05Z","content_type":null,"content_length":"31326","record_id":"<urn:uuid:1f61b60e-466d-4ab3-937b-0ae5b8f8e5cd>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00599-ip-10-147-4-33.ec2.internal.warc.gz"} |
Advances in Variational Bayesian Nonlinear Blind Source Separation
Antti Honkela
(2005) PhD thesis, Helsinki University of Technology.
Linear data analysis methods such as factor analysis (FA), independent component analysis (ICA) and blind source separation (BSS) as well as state-space models such as the Kalman filter model are
used in a wide range of applications. In many of these, linearity is just a convenient approximation while the underlying effect is nonlinear. It would therefore be more appropriate to use nonlinear
methods. In this work, nonlinear generalisations of FA and ICA/BSS are presented. The methods are based on a generative model, with a multilayer perceptron (MLP) network to model the nonlinearity
from the latent variables to the observations. The model is estimated using variational Bayesian learning. The variational Bayesian method is well-suited for the nonlinear data analysis problems. The
approach is also theoretically interesting, as essentially the same method is used in several different fields and can be derived from several different starting points, including statistical
physics, information theory, Bayesian statistics, and information geometry. These complementary views can provide benefits for interpretation of the operation of the learning method and its results.
Much of the work presented in this thesis consists of improvements that make the nonlinear factor analysis and blind source separation methods faster and more stable, while being applicable to other
learning problems as well. The improvements include methods to accelerate convergence of alternating optimisation algorithms such as the EM algorithm and an improved approximation of the moments of a
nonlinear transform of a multivariate probability distribution. These improvements can be easily applied to other models besides FA and ICA/BSS, such as nonlinear state-space models. A specialised
version of the nonlinear factor analysis method for post-nonlinear mixtures is presented as well.
[Numpy-discussion] Manipulate neighboring points in 2D array
deb otrov@hush...
Thu Dec 27 14:20:39 CST 2012
I have a 2D array, let's say `np.random.random((100,100))`, and I
want to do a simple manipulation on each point's neighbors, like
dividing their values by 3.
So for each array value, x, and its neighbors, n:
n n n n/3 n/3 n/3
n x n -> n/3 x n/3
n n n n/3 n/3 n/3
I searched a bit and found out about scipy's ndimage filters, but if
I'm not wrong, there is no such function. Of course me being wrong is
quite possible, as I did not comprehend the whole ndimage module, but
I tried generic_filter, for example, and browsed the other functions.
Is there a better way to do the above manipulation, instead of using
a for loop over every array element?
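In case it helps, here is a sketch of the two standard loop-free
patterns for neighborhood operations (the exact semantics of "divide
each point's neighbors by 3" are ambiguous as stated, so this only
shows the general gather-from-neighbors direction; the kernel and
mode are illustrative choices, not from the original post):

import numpy as np
from scipy import ndimage

a = np.random.random((100, 100))

# Pattern 1: a 3x3 stencil via convolution, e.g. the sum of the
# 8 neighbors of every cell.
kernel = np.array([[1, 1, 1],
                   [1, 0, 1],
                   [1, 1, 1]])
neighbor_sum = ndimage.convolve(a, kernel, mode='constant')

# Pattern 2: shifted slice views; each view holds one neighbor of
# every interior cell, so per-neighbor arithmetic needs no Python
# loop over elements.
center = a[1:-1, 1:-1]
neighbors = [a[i:i + a.shape[0] - 2, j:j + a.shape[1] - 2]
             for i in range(3) for j in range(3) if (i, j) != (1, 1)]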
Mar Vista, CA Trigonometry Tutor
Find a Mar Vista, CA Trigonometry Tutor
I'm a former tenured Community College Professor with an M.A. degree in Math from UCLA. I have also taught university level mathematics at UCLA, the University of Maryland, and the U.S. Air Force
9 Subjects: including trigonometry, calculus, geometry, algebra 1
...I was required to take a differential equations course as part of the core curriculum at Caltech. Solving differential equations was also an integral part of several of the physics classes
that I took (QM, waves, etc.) Further, I have been tutoring the core differential equations class at Caltec...
28 Subjects: including trigonometry, Spanish, chemistry, French
...Ever since I was young, Math has been one of my favorite subjects and it is evident in my experiences. I have always helped my friends in Math when growing up, whether it be Algebra or
Calculus. I started getting paid for it during college when I tutored students in Pre-Calculus to upper level Calculus courses.
18 Subjects: including trigonometry, chemistry, calculus, geometry
...I have a doctoral degree in Child and Adolescent Clinical Psychology (Psy.D.) from a program fully accredited by the American Psychological Association. I have ample experience tutoring
students with a variety of learning differences. I have previously tutored students diagnosed with Dyslexia.
44 Subjects: including trigonometry, English, writing, reading
...I want to be a math professor one day and help out many students the way my teachers have helped me throughout the years. I have been tutoring for this website for almost one year and had the
pleasure of meeting all types of people. I've tutored subjects as low as third grade math, and as high as trignometry.
10 Subjects: including trigonometry, calculus, geometry, algebra 1
| {"url":"http://www.purplemath.com/Mar_Vista_CA_trigonometry_tutors.php","timestamp":"2014-04-18T00:35:22Z","content_type":null,"content_length":"24402","record_id":"<urn:uuid:31912be7-5b28-4297-8eb5-1a9e5231d952>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00289-ip-10-147-4-33.ec2.internal.warc.gz"}
BUTLER R-5 SCHOOL DISTRICT
CONSUMER MATH CURRICULUM
Approved by the Board of Education, November 1999
Consumer Math is a mathematics course designed not only to develop the basic skills needed to function in everyday consumer situations but also to develop an understanding of the concepts behind
them. Topics to be covered will include, but are not restricted to, the following: bank accounts, loans, wages, salary, budgeting, taxes, insurance, and home ownership.
As adults in today's world, individuals will need to apply basic mathematics skills in solving real-life situations that arise in their personal lives and careers. This course provides that preparation.
BUTLER R-5 SCHOOL DISTRICT
CONSUMER MATH 1
LEVEL 9-12
A. Demonstrate an understanding of numbers.
1. Rename a fraction as a decimal and a decimal as a fraction.
Strands addressed: Communication, Connections, Number Sense, Patterns and Relationships
(Show-Me Standards 1.6, 1.8)
2. Write a percent as a decimal and a decimal as a percent.
Strands addressed: Communication, Connections, Number Sense, Patterns and Relationship
(Show-Me Standards 1.6, 1.8)
B. Apply the basic operations in computational situations.
1. Compute the sum, difference, product, and quotient of rational numbers.
Strands addressed: Number Sense
(Show-Me Standard 1.10)
2. Demonstrate the use of a calculator.
Strands addressed: Communication
(Show-Me Standard 1.4)
3. Find the percent of a given number.
Strands addressed: Number Sense
(Show-Me Standard 1.10)
4. Determine the amount of elapsed time when given two times, and estimate to check answer.
Strands addressed: Number Sense
(Show-Me Standard 1.10)
C. Estimate results and judge reasonableness of solutions.
1. Round a number to a given place.
Strands addressed: Reasoning
(Show-Me Standards 1.6, 1.10, 3.4)
2. Judge reasonableness of solutions in consumer and physical situations.
Strands addressed: Reasoning
(Show-Me Standards 3.4, 3.7)
D. Apply the concept of measurement to the physical world.
1. Use unit multipliers to convert units of measure.
Strands addressed: Geometric and Spatial Sense
(Show-Me Standard 1.10)
2. Compute the area/perimeter of regular and irregularly shaped figures, with the main focus on regular figures.
Strands addressed: Geometric and Spatial Sense
(Show-Me Standard 1.10)
E. Recognize geometric relationships.
1. Be able to recognize simple geometric figures so as to apply appropriate formulas for area and surface area problems.
Strands addressed: Patterns and Relationships, Geometric and Spatial Sense
(Show-Me Standards 1.6, 1.10)
F. Use statistical techniques and interpret statistical information.
1. Read and interpret tables, graphs, and charts.
Strands addressed: Data Analysis, Probability and Statistics, Communication
(Show-Me Standard 1.5)
2. Find the mean, median, mode, and range of a given set of data.
Strands addressed: Data Analysis, Probability and Statistics
(Show-Me Standards 1.8, 1.10)
3. Recognize misuse of statistical data by advertisements, etc.
Strands addressed: Data Analysis, Probability and Statistics
(Show-Me Standards 1.5, 1.7)
4. Formulate and solve problems that involve collecting and analyzing statistical data.
Strands addressed: Data Analysis, Probability and Statistics
(Show-Me Standards 1.2, 1.8, 3.1, 3.2)
CONCEPT ANALYSIS: Finding a range of data, constructing graphs, computing the average, and reading data points as answers to specific questions are important skills, but they comprise
a very narrow view of data analysis. Being able to compute an average and knowing when to compute that average and what it means are two different skills. Students need to be actively involved in
each of the necessary steps to solve problems in statistics, not just compute a specific number from given data. These steps include: identifying the question, collecting and organizing the data,
constructing and interpreting tables and graphs, drawing inferences to describe trends, developing convincing arguments, and evaluating the arguments of others. The ability to evaluate conclusions is
vitally important to all students since statistical data is so often used in diverse areas such as advertising and forecasting, as well as in the development of public policy.
SAMPLE TEST ITEM:
Component Skills
A. Describe trends by drawing inferences.
B. Evaluate conclusions from a set of data.
C. Solve problems involving mean, median, mode, and range.
Students are given statistical information in graphic or tabular form. They are asked to describe trends by drawing inferences. They are also asked to evaluate given conclusions based
on statistical data. Additionally, they are asked to solve computational and non-computational problems involving mean, median, mode, or range.
Sample Item:
1. The median price of a house in a small midwestern city is $48,000. Which conclusion is correct?
A. Most houses cost $48,000.
B. The price of the most expensive house is $48,000.
C. The best price for a house is $48,000.
*D. There are as many houses costing less than $48,000 as there are costing more.
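Solution note: the median is, by definition, the middle value of the ordered data, so as many houses cost less than $48,000 as cost more, which is choice D.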
G. Apply problem-solving strategies.
1. Use the calculator to solve consumer problems.
Strands addressed: Problem Solving, Connections
(Show-Me Standards 1.4, 1.10, 3.2)
2. Solve problems involving surface area and area in consumer situations (paint, carpet, etc.)
Strands addressed: Problem Solving, Connections, Geometric and Spatial Sense
(Show-Me Standards 1.10, 3.1, 3.8)
CONCEPT ANALYSIS: Consumer area and perimeter problems are sometimes very difficult for students. Lack of viable experience hampers understanding, and the multiple steps sometimes
involved in obtaining an answer may cause difficulties. Adults usually do not have as much trouble with these contexts, because they grow accustomed to working with the real problems--the room to be
painted or the floor to be carpeted. Students need practice in defining situations, in recognizing the various components of a problem, and in working through to a satisfactory conclusion. Using
models or actual rooms, floors, etc., can help. Practice in diagramming given specifications will help those who have trouble visualizing. Problems which require the students to measure various
dimensions will improve measurement skills, and this experience may make the problems easier by giving them a real context.
TEST SAMPLE ITEM
Component Skills:
A. Solve problems involving area in consumer situations.
B. Solve problems involving perimeter in consumer situations.
Students are given a diagram or dimensions that involve a consumer situation and are asked to compute the area or perimeter. For example, they may be asked to find how much paint,
wallpaper, sod, or shingles are required for a given situation.
Sample Item:
1. A wall is 40 feet long and 5 feet high. One can of paint will cover 60 square feet. How many cans of paint will be needed to paint the entire wall with two coats of paint?
A. 3
B. 4
C. 5
*D. 7
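Solution note: the wall is 40 × 5 = 200 square feet, two coats cover 2 × 200 = 400 square feet, and 400 ÷ 60 is about 6.7 cans, which must be rounded up to 7 cans.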
3. Solve problems involving use of ratio and proportion in consumer situations (better buy, scale drawing, recipe conversion, etc.)
Strands addressed: Problem Solving, Connections, Number Sense
(Show-Me Standards 1.10, 3.1, 3.8)
CONCEPT ANALYSIS: Proportional thinking is a very powerful and useful tool. Comparing numbers and identifying consistent relationships often offer immediate solutions to seemingly
complex problems. When faced with consumer decisions involving how much items cost, which item is cheaper, or how far one place is from another, a proportion is usually the most direct solution. The
mathematics of ratio and proportion is arithmetically simple; it is the variety of applications that produces confusion. Most students do not readily see the concept of proportion as it applies
simultaneously to unit pricing in a better-buy situation, or the amount of flour needed if a cookie recipe is tripled, or computation of the distance between Kansas City and Chicago based on map
information. The applications must be taught both individually for context and together as examples of the same problem-solving technique: ratio and proportion.
SAMPLE TEST ITEM
Component Skills:
A. Solve problems involving pricing.
B. Solve problems involving food, recipes, and nutrition.
C. Solve problems involving medication dosage.
D. Solve problems involving relative distance between locations.
Students are given a consumer situation and are asked to solve a problem using proportions. The problem will involve pricing, food recipes and nutrition, medication dosage, or
relative distances between locations.
Sample Item:
1. Peanut butter is on sale at the grocery store. The price is $2.55 for a 28-ounce jar. What is the unit price in cents per ounce?
*A. 9.1
B. 10.1
C. 11.1
D. 12.1
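Solution note: $2.55 is 255 cents, and 255 ÷ 28 ≈ 9.1 cents per ounce.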
4. Collect data from real-life situations to apply to a given formula.
Strands addressed: Problem Solving, Connections
(Show-Me Standards 1.3, 1.10)
H. Solve problems in consumer situations.
1. Compute the total hours on a weekly time card.
Strands addressed: Problem Solving, Connections
(Show-Me Standard 1.10)
2. Compute Gross Pay from straight-time, overtime tips, commission, piece work.
Strands addressed: Problem Solving, Connections
(Show-Me Standards 1.8, 1.10, 3.1, 3.2)
CONCEPT ANALYSIS: Receiving a paycheck is an important event for most Americans. Being able to compute gross pay is therefore essential, as more and more students join the work force, at
least as part-time workers, before graduation from high school. The mathematics of getting paid includes the vocabulary of the various methods by which one may be paid as well as the arithmetic.
Local variations may exist in vocabulary, but basically employee wages are calculated according to hourly, monthly, commission, and piecework rates. Overtime, holiday, or Sunday pay varies from job
to job. Figuring pay often motivates students to practice application of concepts considered otherwise boring, such as ratio, proportion, and percent.
SAMPLE TEST ITEM
Component Skills:
A. Compute gross pay using hourly rates.
B. Compute gross pay using monthly rates.
C. Compute gross pay using commission rates.
D. Compute gross pay using piecework rates.
E. Compute gross pay using overtime rates.
Students are given an employment situation and are asked to compute the employee's gross pay.
Sample Item:
1. Jennifer earns $4.50 per hour at her part-time job. She is paid time-and-a-half for working holidays. During the week of Thanksgiving, she worked 20 hours plus 8 hours on
Thanksgiving Day. What is her gross pay for the week?
A. $108.00
B. $126.00
*C. $144.00
D. $162.00
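Solution note: regular pay is 20 × $4.50 = $90.00, holiday pay is 8 × (1.5 × $4.50) = $54.00, and the total is $90.00 + $54.00 = $144.00.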
3. Compute the salary for a pay period.
Strands addressed: Problem Solving, Connections
(Show-Me Standards 1.10, 3.1, 3.2)
4. Compute net pay using appropriate information for Social Security deductions, state and federal taxes, etc.
Strands addressed: Problem Solving, Connections
(Show-Me Standards 1.10, 3.1, 3.2)
CONCEPT ANALYSIS: Our society is politically and economically very complex. For students to function well, they must have some understanding of the controlling principles and the
interconnections between political and economic realities. When high school students join the work force, they must learn to calculate wages and deductions. The vocabulary, as well as the
mathematics, must be taught. For example, students need to know the history and intent of Social Security laws and how tax deductions are computed. They also need to know the basic premises behind
federal and state income taxes and how to read tax tables. The curriculum for college-bound students may need to be adjusted to assure them an opportunity to learn about these political and economic
SAMPLE TEST ITEM
Component Skills:
A. Compute net pay.
B. Compute Social Security deduction.
C. Compute state and federal taxes.
Students are given data about a person's pay check and other variables; these may appear in a table. They are asked to compute net pay considering one or more of the following: Social
Security deductions, state and federal taxes, and other similar deductions.
5. Compute the total purchase price considering discounts, sale price, and taxes.
Strands addressed: Problem Solving, Connections
(Show-Me Standard 1.10)
6. Compute the unit price and determine the better buy.
Strands addressed: Problem Solving, Connections
(Show-Me Standards 1.10, 4.1)
7. Solve banking problems related to maintaining a checking account.
Strands addressed: Problem Solving, Connections
(Show-Me Standards 1.10, 3.1, 3.2)
8. Compute simple and compound interest.
Strands addressed: Problem Solving, Connections, Discrete Mathematics
(Show-Me Standards 1.10, 3.2, 4.8)
9. Compute the new balance in a charge account based on the finance charge, previous balance, and unpaid balances.
Strands addressed: Problem Solving, Connections
(Show-Me Standards 1.10, 3.2, 4.8)
10. Compute finance charges on various types of loans.
Strands addressed: Problem Solving, Connections
(Show-Me Standards 1.10, 3.2, 4.8)
11. Compute sticker price, dealer's cost, and retail price of new/used cars.
Strands addressed: Problem Solving, Connections
(Show-Me Standards 1.10, 3.2, 4.8)
12. Use tables to determine driver rating factor and compute automobile insurance premiums.
Strands addressed: Problem Solving, Connections
(Show-Me Standards 1.10, 3.2, 4.8)
13. Compute cost of operating and maintaining an automobile.
Strands addressed: Problem Solving, Connections
(Show-Me Standard 1.10)
14. Compute the cost of buying, maintaining a home (assessed value, taxes, homeowners insurance, escrow)
Strands addressed: Problem Solving, Connections
(Show-Me Standard 1.10)
15. Read a meter and compute cost of electricity, gas, and water.
Strands addressed: Problem Solving, Connections
(Show-Me Standards 1.10, 4.8)
16. Use records of past expenditures to prepare a monthly budget.
Strands addressed: Problem Solving, Connections
(Show-Me Standards 1.1, 1.2, 1.8, 1.10, 3.1, 3.2, 3.6, 3.8, 4.1)
17. Compute and compare total housing cost with suggested guidelines.
Strands addressed: Problem Solving, Connections
(Show-Me Standards 1.1, 1.2, 1.10, 3.1)
18. Prepare a state and federal income tax form.
Strands addressed: Problem Solving, Connections
(Show-Me Standards 1.8, 1.10, 2.2, 4.1, 4.8)
19. Use tables to compute premiums for term and whole life insurance.
Strands addressed: Problem Solving, Connections
(Show-Me Standards 1.10, 3.1, 3.2, 4.8)
20. Compute health insurance premiums and deductions.
Strands addressed: Problem Solving, Connections
(Show-Me Standards 1.10, 4.8)
Consumer Mathematics textbook (Glencoe Macmillan/McGraw-Hill 1992, orig. 1982)
Using Checking Account
Your Pay Check
Getting a Job (Media Materials, Inc.)
Software, Cross Country USA | {"url":"http://www.butler.k12.mo.us/curric/ma-consumer.htm","timestamp":"2014-04-19T20:02:38Z","content_type":null,"content_length":"60642","record_id":"<urn:uuid:1a888c66-0464-403d-a487-23d4d7cceaf2>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00404-ip-10-147-4-33.ec2.internal.warc.gz"} |
sketch the graph of f(x)
April 18th 2010, 02:28 AM #1
Determine the horizontal and vertical asymptotes of the function f(x) = 1/(4-x^2) and sketch the graph of f(x)...
My attempt at the question:
I rearranged f(x) = 1/-(x^2-4) ==> 1/-((x-2)(x+2)),
so it has two vertical asymptotes, x = -2 and x = 2. Is there any horizontal asymptote?
By the conventional method, the graph of f(x) can be obtained by:
1. reflecting the graph of 1/x^2 in the y-axis and shifting it 4 units to the right...
but when I sketch the graph of f(x) using a graphing calculator it looks like the figure...
Since we are not allowed to bring a graphing calculator into the exam, is there any other way to sketch the graph? Or is my way of treating this question wrong? Please help me...
As x approaches infinity on the left or the right, the denominator goes to $-\infty$,
since for large x there is no real difference between $4-x^2$ and $-x^2$,
so the expression goes to zero.
Hence the graph approaches the x-axis, because f(x) approaches zero.
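Written as a limit, the same argument reads

$\displaystyle \lim_{x \to \pm\infty} \frac{1}{4-x^2} = 0,$

so $y = 0$ is the horizontal asymptote.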
Why does the graph suddenly have sharp turning points on the right and the left? I really cannot understand this. Can anyone explain it to me with proof, rather than just explaining how to sketch the graph of f(x) from the graph of 1/x^2 without proving it?
Here is more detail on the graph, using more values of x.
Your machine is not using very much of the domain to draw the graph.
Can you explain how to get the graph of f(x) by transformation of the graph of 1/x^2, since we are not allowed to bring a graphing calculator into the exam hall? The graph of 1/x^2 has two branches, on the left and on the right, but the graph above has three branches: on the left, on the right, and in the centre. How did this happen?
If you sketch a few solutions of these types of problems,
or just examine the graphs until you understand clearly what's happening on the graph,
you will master it.
When the denominator approaches zero, the graph is approaching a vertical asymptote.
Vertical asymptotes are located at the values of x that cause the denominator to be zero.
To find these, you factorise the denominator.
Each factor then shows the x that makes it zero.
Hence, there are vertical asymptotes at $x=2,\ x=-2$
To understand the shape of the graph, pick x "very near" x=-2 and 2.
If x is slightly less than -2, f(x) is negative and goes to negative infinity in this case.
If x is slightly greater than -2, f(x) is positive and the graph goes to infinity.
Do the same analysis at x=2.
The part in the middle comes down from infinity and reaches $\frac{1}{4}$ when x=0.
The part in the middle is always positive.
Finally, you must understand how to calculate the limit of f(x) as x approaches plus and minus infinity.
Take the function given and work with that rather than trying to compare with another function.
In this case, because of the 2 vertical asymptotes,
the graph is split into 3 regions,
one to the left of x=-2,
another to the right of x=2,
the other between x=-2 and x=2.
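Putting the sign analysis above into one-sided limits (for example, at $x = -2.1$ the denominator is $4 - 4.41 = -0.41 < 0$):

$\displaystyle \lim_{x\to -2^-} f(x) = -\infty, \qquad \lim_{x\to -2^+} f(x) = +\infty, \qquad \lim_{x\to 2^-} f(x) = +\infty, \qquad \lim_{x\to 2^+} f(x) = -\infty.$

Together with $f(0) = \frac{1}{4}$ and the horizontal asymptote $y = 0$, these four limits fix the shape of all three branches.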
Iteration in Scala - effectful yet functional
One of the papers that influenced me a lot in 2010 was
The Essence of the Iterator Pattern
by Jeremy Gibbons and Bruno C. d. S. Oliveira. It builds upon where McBride and Paterson left off in their treatise on
Applicative Functors.
Gibbons' paper discusses the various aspects of building traversal structures in the presence of effects.
In this post I look at some of the traversal patterns' functional implementations using scalaz. In the paper on applicative functors, McBride and Paterson define traverse as an applicative mapping operation:
traverse :: Applicative f => (a -> f b) -> [a] -> f [b]
Gibbons et al. use this abstraction to study various traversal structures in the presence of effects. The paper starts with a C# code snippet that uses the syntax sugar of foreach to traverse over a collection of elements:
public static int loop<MyObj> (IEnumerable<MyObj> coll){
    int n = 0;
    foreach (MyObj obj in coll){
        n = n + 1;
        obj.touch();
    }
    return n;
}
In the above loop method, we do two things simultaneously:
1. mapping - doing some operation touch() on the elements of coll with the expectation that we get the modified collection at the end of the loop
2. accumulating - counting the elements, which is a stateful operation for each iteration and which is independent of the operation which we do on the elements
And in the presence of mutation, the two concerns are quite conflated. Gibbons et al. use McBride and Paterson's applicative functors, and the traverse operator which they discuss in the same paper, to come up with some of the special cases of effectful traversals where the mapping aspect is independent of accumulation and vice versa.
Over the last weekend I was exploring how much of these effectful functional traversals can be done using scalaz, the closest to Haskell you can get with Scala. Section 4.2 of the original paper talks about two definite patterns of effectful traversal. Both of these patterns combine mapping and accumulation (like the C# code above) but separate the concerns skillfully using functional techniques. Let's see how much of that we can manage with scalaz functors.
Pattern #1
The first pattern of traversal accumulates elements effectfully, but modifies the elements of the collection purely and independently of this accumulation. Here's the scalaz implementation of collect (see the original paper for the Haskell implementation):
def collect[T[_]:Traverse, A, B, S](f: A => B, t: T[A], g: S => S) =
t.traverse[({type λ[x] = State[S,x]})#λ, B](a => state((s: S) => (g(s), f(a))))
To the uninitiated, the type annotation in collect looks ugly - it's there because scalac cannot infer partial application of type constructors, a problem which will be rectified once Adriaan fixes the relevant issue on the Scala Trac.
Traverse is one of the typeclasses in scalaz, similar to the Traversable model in Haskell:
trait Traverse[T[_]] extends Functor[T] {
  def traverse[F[_] : Applicative, A, B](f: A => F[B], t: T[A]): F[T[B]]

  import Scalaz._
  override def fmap[A, B](k: T[A], f: A => B) = traverse[Identity, A, B](f(_), k)
}
and scalaz defines implementations of the Traverse typeclass for a host of classes on which you can invoke traverse.
The above implementation uses the State monad to handle effectful computations. For an introduction to the State monad in scalaz, have a look at this post from Tony Morris.
Here f is the pure function that maps over the elements of the collection, while g is the function that does the effectful accumulation through the State monad. Using collect, here's a version of the C# loop method that we did at the beginning:
val loop = collect((a: Int) => 2 * a, List(10, 20, 30, 40), (i: Int) => i + 1)
loop(0) should equal((4, List(20, 40, 60, 80)))
Now we have the effectful iteration without using any mutable variables.
Pattern #2
The second pattern of traversal modifies elements purely but dependent on some state that evolves independently of the elements. Gibbons et al. call this abstraction disperse, whose scalaz implementation can be as follows:
def disperse[T[_]: Traverse, A, S, B](t: T[A], s: A => State[S, B]) =
t.traverse[({type λ[x] = State[S,x]})#λ, B](s)
Note how the elements of the collection are being modified through the State monad. Using disperse, we can write a labeling function that labels every element with its position in order of traversal:
def label[T[_]: Traverse, A](t: T[A]) =
disperse(t, ((a: A) => state((i: Int) => (i+1, i)))) ! 0
label(List(10, 20, 30, 40)) should equal(List(0, 1, 2, 3))
disperse can also be used to implement the word count example that ships with the scalaz distribution. Actually, it counts the number of characters and lines in a stream:
def charLineCount[T[_]:Traverse](t: T[Char]) =
disperse(t, ((a: Char) => state((counts: (Int, Int)) =>
((counts._1 + 1, counts._2 + (if (a == '\n') 1 else 0)), (counts._1, counts._2))))) ! (1,1)
charLineCount("the cat in the hat\n sat on the mat\n".toList).last should equal((35, 2))
3 comments:
retronym said...
s/Traverse is one of the functors/Traverse is one of the type classes/
Debasish said...
Thanks for pointing out .. Fixed!
Marc-Daniel Ortega said...
Thank you my friend for this blog. I found it enlightening and it helped me in finishing my code for the upcoming post.
Shady Shores, TX Algebra 1 Tutor
Find a Shady Shores, TX Algebra 1 Tutor
...I am currently tutoring organic on campus in the science learning resource center and am also tutoring my sister in organic this semester. I find organic chemistry, fun and exciting,
interesting, and generally enjoy tutoring organic. Most people get through organic by memorization... I am not one of those people!
17 Subjects: including algebra 1, chemistry, biology, algebra 2
...Geology is very instrumental to my career as a Civil Engineer. I have drilled, sampled, and tested rock formations ranging from Mexico to northern Canada. I have been an avid costumer since
2003 and sew all my costumes.
28 Subjects: including algebra 1, reading, English, chemistry
...I have taught 6th, 7th, and 8th grade math in both classroom settings and individualized tutoring sessions throughout my career. I am currently certified to teach mathematics in the state of
Texas for grades 4-12. I have taught or tutored 5th, 7th, and 8th grade science in my career.
15 Subjects: including algebra 1, chemistry, algebra 2, geometry
...I am flexible, encouraging, and patient, with a reputation for providing support to students who are struggling with mathematical concepts; I quickly diagnose gaps and develop strategies to fill them with appropriate materials. I am also a very creative and talented instructor skilled at developing...
20 Subjects: including algebra 1, calculus, physics, geometry
...I have a flexible schedule, with an 8-hour cancellation window. Tutoring can be done at any location to your convenience in the North Dallas metroplex. Libraries or coffee shops are often
conducive to a learning environment, though I leave that up to your discretion.
10 Subjects: including algebra 1, chemistry, geometry, biology
Date: 04/12/97 at 08:33:01
From: Chris Riley
Subject: Circumference
My class was trying to figure out the answer to this question. What
is the circumference of a circle with a radius of 5.5?
Date: 04/12/97 at 15:48:25
From: Doctor Mason
Subject: Re: Circumference
Hi, Chris,
Most of the time we use a formula to calculate the circumference of a
circle once either the radius or diameter is known. To find the
circumference, you only need to multiply the diameter by a constant
known as pi.
Pi is approximately equal to the decimal 3.14 or the fraction 22/7.
Neither one of these values is exactly equal to pi, since pi is a
special kind of number (an irrational number), but even though 3.14
and 22/7 aren't exactly equal to pi, they are close enough to do most problems.
In the formulas below
C = circumference
d = diameter
r = radius and
* means multiply:
C = pi * d or, since 2 radii equal a diameter,
C = pi * 2 * r or often written
C = 2*pi*r
If you know that the radius = 5.5, then according to the last formula:
C = 2 * 3.14 * 5.5 = 34.54.
Glad to help! For more information about pi, see our Dr. Math FAQ at
-Doctor Mason, The Math Forum
Check out our web site! http://mathforum.org/dr.math/
Geometric Mean
By Kardi Teknomo, PhD.
The geometric mean of n data values is the n-th root of the product of the values: GM = (x_1 . x_2 . ... . x_n)^(1/n).
Property of Geometric Mean
• All data must be positive. A zero data value will produce a geometric mean of zero. A negative data value leads to a complex number, which is commonly reported as NaN (not a number) in the interactive program.
• The geometric mean is scale invariant with respect to a change of units: multiplying one attribute by a constant factor multiplies every object's geometric mean by the same factor, so the ranking of the objects is preserved.
For example, we have two objects named A and B, whose properties are given below.
│Object│Cost ($)│Time (Hour)│
│A     │2       │8          │
│B     │5       │6          │
Now suppose we would like to make an index based on the two properties, cost and time. We consider the geometric mean (GM) and the arithmetic mean (AM) as indices, as shown in the table below.
Notice that we are using the same properties; if we change the scale of the time from hours to minutes, the arithmetic mean produces a rank reversal while the geometric mean preserves the ranks. This is one advantage of the geometric mean compared to the arithmetic mean.
│Object│GM Index (sqrt($.hour))│AM Index (($+hour)/2)│GM Index (sqrt($.min))│AM Index (($+min)/2)│
│A     │4 (rank 1)             │5 (rank 1)           │30.98 (rank 1)        │241 (rank 2)        │
│B     │5.48 (rank 2)          │5.5 (rank 2)         │42.43 (rank 2)        │182.5 (rank 1)      │
The interactive program below computes the geometric mean of a list of numbers separated by commas. Feel free to experiment with your own input values. What happens when you have zero or negative data input? Do you think the geometric mean is robust against outliers or susceptible to error?
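A rough Python sketch of that computation (the function name and the use of logarithms are choices of this sketch, not part of the original program; the zero and negative cases follow the property stated above):

import math

# Geometric mean as the n-th root of the product of the values,
# computed via logarithms to avoid overflow on long lists.
def geometric_mean(values):
    if any(v < 0 for v in values):
        raise ValueError("negative values have no real geometric mean")
    if any(v == 0 for v in values):
        return 0.0  # a single zero forces the product, and hence the mean, to zero
    return math.exp(sum(math.log(v) for v in values) / len(values))

print(geometric_mean([2, 8]))  # 4.0   (object A: cost 2, time 8 h)
print(geometric_mean([5, 6]))  # ~5.48 (object B)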
See also: Lehmer mean
This tutorial is copyrighted. The preferred reference for this tutorial is:
Teknomo, Kardi. Mean and Average. http://people.revoledu.com/kardi/tutorial/BasicMath/Average/
Math Forum Discussions - Re: Problem using plots and loops
Date: Feb 3, 2013 11:04 AM
Author: dpb
Subject: Re: Problem using plots and loops
On 2/3/2013 1:06 AM, Ankita wrote:
> 1) In the first 2 if loops, I want to make sure that the ratio fits in a
> given limit. Is there any better way of doing that?
You could write things like Nmax=min(InputRatio,Limit), etc., but if it
works as is, that's a nicety, and since it's just a single set of values
in a user-input section the time isn't an issue...I'd let it go
until you've got other things worked out and developed some more familiarity
with Matlab and programming in general...
> 2) When I execute
> the program, only the fprintf in 2nd if loop is executed. Why is that?
That would seem to be owing to the values you've selected -- although it's
possible you've written a set of conditions such that one branch can
never be reached because the tests can't be satisfied -- I didn't have time
at the moment to read it that carefully, sorry...
> 3) For the last for loop, I should be get 4 different curves for 4
> values of N. Instead I get one curve (the curve is actually a set of
> points. Not a continuous line. That too is my problem.)
Compute the points in an array and then plot each line when you have a
set of values for the curve instead of each point as it is calculated.
> %NOSE AND MAIN WHEEL LOAD CALCULATION
> B=M_f+N_f;
> if M_f/B<0.20
> if M_f/B>0.1
> Nmax=W*(M_f/B);
> else
> Nmax=W*0.15; %max static load per nose wheel
> fprintf('The M_f/B ratio ...
> end
> end
> if M_a/B>0.05
> if M_a/B<0.1
> Nmin=W*(M_f/B);
> else
> Nmin=W*0.08; %min static load per nose wheel
> fprintf('The M_a/B ratio has ...
> end
> end
> %hold on;
> K=L/W;
for N=1.5:0.5:3
hold on
Above computes the S vector for each point in the V vector w/o a loop
then plots it.
You'll probably want to add a color value to the plot since the colors
won't automagically cycle unless S is a column vector. Or, you could
create a 2D array and save the whole thing then plot since the sizes are
quite small...
S=zeros(length(V),4); % preallocate
i = 0;                % column index, initialized before the loop
for N=1.5:0.5:3
i = i+1; % array column index
And, of course, one could look at
doc meshgrid
to do the computation w/o any explicit loops at all...
Phylogenetic inference under varying proportions of indel-induced alignment gaps
BMC Evol Biol. 2009; 9: 211.
Phylogenetic inference under varying proportions of indel-induced alignment gaps
The effect of alignment gaps on phylogenetic accuracy has been the subject of numerous studies. In this study, we investigated the relationship between the total number of gapped sites and
phylogenetic accuracy, when the gaps were introduced (by means of computer simulation) to reflect indel (insertion/deletion) events during the evolution of DNA sequences. The resulting (true)
alignments were subjected to commonly used gap treatment and phylogenetic inference methods.
(1) In general, there was a strong – almost deterministic – relationship between the amount of gap in the data and the level of phylogenetic accuracy when the alignments were very "gappy", (2) gaps
resulting from deletions (as opposed to insertions) contributed more to the inaccuracy of phylogenetic inference, (3) the probabilistic methods (Bayesian, PhyML & "MLε," a method implemented in
DNAML in PHYLIP) performed better at most levels of gap percentage when compared to parsimony (MP) and distance (NJ) methods, with Bayesian analysis being clearly the best, (4) methods that treat
gapped sites as missing data yielded less accurate trees when compared to those that attribute phylogenetic signal to the gapped sites (by coding them as binary character data – presence/absence, or
as in the MLε method), and (5) in general, the accuracy of phylogenetic inference depended upon the amount of available data when the gaps resulted from mainly deletion events, and the amount of
missing data when insertion events were equally likely to have caused the alignment gaps.
When gaps in an alignment are a consequence of indel events in the evolution of the sequences, the accuracy of phylogenetic analysis is likely to improve if: (1) alignment gaps are categorized as
arising from insertion events or deletion events and then treated separately in the analysis, (2) the evolutionary signal provided by indels is harnessed in the phylogenetic analysis, and (3) methods
that utilize the phylogenetic signal in indels are developed for distance methods too. When the true homology is known and the amount of gaps is 20 percent of the alignment length or less, the
methods used in this study are likely to yield trees with 90–100 percent accuracy.
DNA sequences are used routinely to infer phylogenies [1-3]. The sequences within lineages (branches of the phylogenetic tree) evolve independently over time by means of several evolutionary
processes, including point replacements of nucleotides (base substitutions), and insertion and deletion (indel) events. While base substitutions change the nucleotide composition of a given sequence,
indels are likely to change the total length of the sequence. If indel events have occurred during the course of evolution of the molecular sequences being studied, it becomes necessary to align the
corresponding homologous regions among the sequences for a proper site-by-site comparison among them, before phylogenetic analysis. In the process of alignment, gaps are introduced in the sequences
to account for the indels. Different methods have been devised for dealing with gapped sites during phylogenetic analysis, ranging from ignoring the gapped sites from the alignment to inferring or
differentially coding the state at each gapped site, using a number of different methods (for a list of methods, see [4-6]). Most of these treatment methods work reasonably well when the proportion
of gapped sites in an alignment is small [5,6].
There are many examples in the literature of studies that have used molecular sequences (DNA and protein) with rather large gaps to infer phylogenies [7-9]. It appears logical to expect an inverse
relationship between the proportion of gapped sites in an alignment and the accuracy of the inferred phylogeny, particularly if the gaps are not treated as reflective of distinct evolutionary events,
and thus, containing distinct phylogenetic signal. However, the relationship between the extent of "gappiness" in the data resulting from indel events in the evolutionary history of the sequences on
the one hand, and phylogenetic accuracy on the other, has not been studied by introducing and systematically varying the number of gaps in the alignments in a biologically realistic manner, even as
the literature on alignment gaps in the phylogenetic context has increased of late [6,10-15]. For example, several studies investigating the relationship between the amount of alignment gap and
phylogenetic accuracy have done so in the context of aligning sequence fragments such as ESTs (e.g., [12,13]), using computer simulation to first generate the alignments and then introduce gaps, such
that the gaps do not contain any phylogenetic signal (e.g., [10,11]); are in the context of only empirical data (e.g., [8]); or where the emphasis was more on levels of divergence among the taxa
(e.g., [16]). Furthermore, the relative performance of the gap treatment methods that are common among inference methods has also not been compared in this context. For example, all inference methods
allow gaps to be treated as missing data or "MD" (although the treatment of the missing data differs among the methods, with the state at the gapped sites inferred in parsimony and distance-based
methods of phylogenetic analysis, based on criteria that are specific to each method, while in likelihood and Bayesian analyses, the likelihoods are summed over all four possible assignments of a
nucleotide to a given gapped site). It is not known how the data inferred under these criteria work in conjunction with each of the respective inference methods to influence the accuracy of
phylogenetic inference, when the gaps reflect indel events in the alignment.
We obtained sequence alignments for this study by means of simulating non-coding DNA sequence evolution, introducing nucleotide point substitutions (replacements) and insertion/deletion (indel)
events along a balanced (symmetrical) 16-taxon model tree (Figure 1). (Simulations were done along random and pectinate 16-taxa trees as well, but we report the results only from the balanced model tree shown in Figure 1 for reasons explained below.) The simulations were done while systematically varying the values of different sequence and indel parameters. All the
simulation parameters were varied to include biologically realistic values. For example, the rate of introduction of indels included the range seen in non-coding sequences [17-20]. Similarly, the
ratio of insertion to deletion events was also varied based on published results [18,20-22]. It was important to vary the ratio of insertions to deletions in order to determine if there was a
differential effect on phylogenetic accuracy, since most of the commonly used gap treatment methods do not differentiate between gaps resulting from the two types of evolutionary events.
The model tree. The 16-taxon balanced tree, obtained from Ogden and Rosenberg (2006), was used as a model tree for the simulations of DNA evolution, the results of which we have presented in this
paper. Two more sets of simulations were also done – ...
We assessed the accuracy of phylogenetic inference as the topological correctness of the inferred tree when compared to the model tree. Our results show that overall, when the percentage of gapped
sites (cells in the alignment matrix) in the alignment is low (≤ 20 percent), all the inference methods (using any gap-treatment method) perform well (with 90–100% accuracy). On the other hand, when
the number of gapped sites increases in the alignment, the probabilistic methods (particularly Bayesian analysis) are clearly more accurate, although at the highest gap levels, NJ and MP are
sometimes better. Our results also show that gaps resulting from deletion events in the evolutionary history of the sequences appear to be harder to reconcile (when compared to those resulting from
insertion events), leading to greater inaccuracies in phylogenetic inference, evidently because of the loss of the phylogenetic signal present in the sites deleted. When compared to the MD method of
treating gaps, a much higher accuracy was seen in our study when the gaps were coded separately as in the BC (Binary Character state) treatment in conjunction with the Bayesian and MP methods, or as
in the DNAMLε package [23].
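As a point of reference, the "percentage of branches reconstructed accurately" used throughout can be scored in a simple set-based way. The sketch below is an illustrative formulation under our own assumptions (each tree summarized as a set of bipartitions of the taxon set, one per internal branch, with the side of each split chosen consistently), not necessarily the exact procedure used in the study:

# Score topological accuracy as the percentage of the model tree's internal
# branches (bipartitions) that are recovered in the inferred tree.
# A bipartition is represented as a frozenset of the taxon labels on one
# side of a branch; in practice the side must be chosen consistently,
# e.g., the side not containing a fixed reference taxon.
def branch_accuracy(model_bipartitions, inferred_bipartitions):
    shared = model_bipartitions & inferred_bipartitions
    return 100.0 * len(shared) / len(model_bipartitions)

# Toy example with 4 taxa: both trees contain the single internal branch
# separating {a, b} from {c, d}, so the accuracy is 100 percent.
model = {frozenset({"a", "b"})}
inferred = {frozenset({"a", "b"})}
print(branch_accuracy(model, inferred))  # 100.0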
We first describe the manner in which the alignment gaps were quantified in this study, and the effect of different simulation parameters on the number of gaps in the alignment.
Quantification of alignment gaps
The amount of gap in an alignment was determined as a percentage, in the following manner. First, the number of gapped sites was determined for each sequence and then obtained as an average among all
the sequences in the alignment. (Note that our definition of a gapped site is common to all methods: a single cell in the alignment matrix. Thus, a gap that covers three cells (the space of three
bases) in a given sequence, even if contiguous, is counted as three gapped sites in that sequence.) This was expressed as a percentage of the altered length (as a result of indel introduction) of the
alignment. This percentage was then averaged across the replicates, for a given treatment, as a simple arithmetic mean (since the change in length due to the introduction of the indels varied
minimally among them). We refer to the gap percentage by the term G/S (for Gap percentage per Sequence) throughout the paper. The gap percentages thus obtained (G/S) were used to compare the relative
performances of the phylogenetic methods (PhyML, MP, NJ, and Bayesian analysis) under the different gap treatment methods (MD, BC, and MLε).
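A minimal sketch of the G/S computation just described (ours, not the authors' code), assuming the alignment is given as a list of equal-length gapped sequences with '-' marking gapped sites:

# G/S: gapped sites per sequence, as a percentage of the (post-indel)
# alignment length, averaged over all sequences in the alignment.
def gap_percentage(alignment):
    length = len(alignment[0])               # alignment length after indels
    per_seq = [100.0 * seq.count('-') / length for seq in alignment]
    return sum(per_seq) / len(per_seq)       # simple average over sequences

# Two sequences of length 10 with 3 and 1 gapped sites: G/S = (30 + 10)/2 = 20.0
print(gap_percentage(["ACGT--A-CG", "ACGTTTA-CG"]))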
Figure 2 shows the G/S distribution of the total number of gaps in an alignment, expressed as a percentage of the total length of the alignment. Panels A and B refer to simulations where the
rate ratio of insertions to deletions was 1:1 and 1:3, respectively. In each panel, the distribution of gaps has been plotted separately for each substitution rate (r), as a function of the rate of
indel introduction (λ) that, in turn, was varied as a function of the substitution rate. Both panels of Figure 2 show that the average gap percentage increases nonlinearly with increase in λ
for all r. The gap percentage was minimum (~2%) when λ = 0.03, and r = 0.025, and maximum (~90%) when λ ≥ 0.19 and r ≥ 1.0. No noticeable differences were seen in the percentage of gaps in the
alignments when different values of sequence length (l) and transition-transversion rate ratio (κ) were used in the simulations (not shown). However, as expected, the gap percentage varied
considerably with the relative proportion of insertions and deletions, with more gaps seen when the ratio of insertions to deletions was 1:1 and fewer when the ratio was 1:3, especially at low to
medium substitution rates. This difference in the number of gaps is because an insertion event in a single sequence adds a gap of the size of the insertion to all the other sequences during
alignment, whereas a deletion in a sequence results in a gap in only that sequence and no other, especially if the insertion/deletion event is recent in the evolutionary history of the affected
sequence. The distribution of gap percentages for the random and pectinate trees was largely similar to that shown for the balanced tree in Figure 2.
Distribution of alignment gap percentages. The percentage of alignment gaps, G/S, given as the number of gapped sites per sequence length averaged over all the sequences in the alignment and
expressed as a percentage of the alignment length altered after ...
Finding the gap threshold
Using the above measures of phylogenetic accuracy, it is possible to determine thresholds of gap percentages for given levels of phylogenetic accuracy. These thresholds are shown in Figure 3, which is arranged such that there are two panels for each inference method, one for the insertion-deletion rate ratio of 1:1 and the other for the ratio 1:3. The horizontal and vertical axes in
each panel reflect the rate of nucleotide substitution and rate of indel introduction, respectively. However, in order to relate to empirical phylogenetic analyses (where these rates are not
routinely determined), the background in this figure has been color-coded based on the percentage of alignment gaps (which can be easily determined), and the contour lines of phylogenetic accuracy
have been drawn against this background. Thus, one can trace the level of accuracy of phylogenetic reconstruction based on the percentage of gaps in the alignment rather than on the rates of
substitution or indel introduction. Such a representation also makes it easier to determine gap thresholds for phylogenetic accuracy in empirical studies, to determine the expected level of accuracy
given a certain percentage of gaps in an alignment.
Gap thresholds for different levels of phylogenetic accuracy. The gap thresholds are shown for NJ (Panels A, B), PhyML (C, D), MLε (E, F), Bayesian (G, H), and MP (I, J) methods, for various levels
of phylogenetic accuracy. The background in each ...
In Figure 3, Panels A and B show the level of phylogenetic accuracy for gap treatments in the NJ analysis. These results are remarkable for several reasons. First, the contour lines of
accuracy typically follow specific gap percentage ranges, as indicated by the color of the background. In other words, there appears to be a somewhat deterministic relationship between the number of
gaps in an alignment and the level of phylogenetic accuracy one can expect in an NJ analysis. This appears to be true in the case of PhyML also (Panels C and D). Furthermore, both methods can be seen
to be doing better in the 1:1 than in the 1:3 panels, showing that the relative proportions of insertions and deletions matter in determining the accuracy.
Panels E and F show the results for MLε analysis. Here, we see that the minimum accuracy is approximately 90% and 70% for the 1:1 and 1:3 cases, respectively. Clearly, the MLε analysis has higher
accuracy when compared to the MD analysis in conjunction with any inference method. The integrated method incorporating both substitutions and indels, MLε appears to be equivalent in accuracy to the
BC method in Bayesian analysis, in the case of the 1:1 ratio of insertions and deletions. However, the accuracy of MLε is lower for datasets with larger deletion biases (as in the 1:3 ratio) and is
in keeping with the other indel-coding methods (Panels G-J).
The Bayesian and MP analyses are shown in the panels, G, H, and I, J, respectively, with the dark red and white dotted contour lines within each panel representing the accuracy when the gaps are
treated as Missing Data (MD) and Binary Characters (BC), respectively. As in the case of the other methods, the relationship between accuracy and G/S is clearly strong here too. Furthermore, this
apparent cause-and-effect relationship appears to hold, whether the treatment method is MD or BC, especially at larger G/S values (towards the red end of the background color). It must, however, be
noted that the actual relationship between the percentage of gaps in an alignment and the level of phylogenetic accuracy that can be expected is vastly different between the two gap treatment
methods, MD and BC. Thus, even when the G/S value exceeds 80 percent of the length of the alignment (orange color background), as much as 70 percent of the branches are reconstructed accurately by MP
(Panel I), and 90% by the Bayesian method (Panel G), when the gaps are treated as binary characters (BC), and the insertion-deletion ratio is 1:1. In contrast, only approximately 40 percent of the
branches are accurately inferred in either analysis under the MD treatment (same panels). The only case where the relationship between the percentage of gaps and phylogenetic accuracy is not as
straightforward is at very high accuracy levels; the contour lines for 95 percent accuracy cross color (gap percentage) boundaries or are confined to small portions of the gap percentage range of
0–20 percent. The reconstruction accuracy for the Bayesian and MP methods in alignments where the gaps are largely due to deletion events (1:3) is worse than when compared to alignments where there
is equal contribution from insertions and deletions to the gaps (1:1). Panels G and H (Bayesian analysis) and I and J (MP) show that there is a 10–20 percent difference in accuracy for a given level
of gaps in an alignment between the two insertion-deletion ratios, whether for MD or BC treatments. Thus, when the gaps exceed 90 percent of the alignment length, the reconstruction accuracy is seen
to be around 90 percent (BC treatment) and approximately 40 percent (MD treatment) when the insertion-deletion ratio is 1:1, whereas it is less than 80 percent (but more than 60 percent; BC
treatment) and 20 percent (MD treatment) when the ratio is 1:3 (Panels G and H).
Increasing the sequence length does not appear to change the pattern of these results very much, except that there is greater accuracy when the sequence length is 2500 nts. (not shown). This
improvement in accuracy, however, is not uniform across the breadth of the gap percentage landscape, being higher at the low gap percentage levels. For example, when the sequence length is 500 nts.,
and the gaps are equal to or greater than 80 percent, the accuracy is approximately 80 percent (Figure 3, Panel I; BC treatment). The corresponding accuracy when the sequence length is 2500
nts., is approximately 99 percent – a 9–10% difference between the two lengths. Sequence length is known to be an important determinant of phylogenetic accuracy [24-26] and its influence is not being
investigated in this study.
Figure 3 also shows that there are some differences among the inference methods with respect to the gap threshold. First, the level of accuracy at a given gap percentage is higher in the
Bayesian, PhyML and MP analyses when compared to the other analyses, especially at higher gap percentages, when the comparison is made for the MD treatment (which is common among the inference
methods, except of course, MLε, which is an integrated gap-coding/phylogenetic analysis method). Thus, when the gaps amount to more than 80 percent in the alignment (for insertion-deletion rate ratio
of 1:1), the Bayesian, MP, and PhyML-analyzed trees are inferred with approximately 60%, 40%, and 40% accuracy, respectively, whereas the accuracy in the NJ analyses is less than 20 percent. The
comparison here also reflects the differences among the criteria used in treating the gaps as missing data in the three inference methods. From these results, it appears that the MD treatment in NJ
infers the states at the gapped sites less accurately than does the corresponding treatment in MP and the method used in assigning the likelihood in Bayesian and PhyML analyses. Furthermore, the
tightness of the relationship between the contour lines of phylogenetic accuracy and the gap percentages on the one hand, and the lower accuracy at high gap percentages in the PhyML and NJ analyses
on the other, imply that while these two methods appear to be more capable of overcoming other sources of error in phylogenetic inference (such as homoplasy in the case of MP), they fall victim to
poorer treatment of gaps as missing data.
Phylogenetic accuracy of different inference methods under varying gap percentages
We first compare the phylogenetic accuracy of all the inference methods, taken two at a time, for the MD gap treatment since this is available for all the methods. The results are shown in Figure 4; in each panel in Figure 4, the average phylogenetic accuracy of one inference method is plotted against that of the other.
Pairwise comparison of inference methods under MD gap treatment. ...
For each comparison between inference methods, the left panel shows the results for the insertion-deletion ratio 1:1, and the right panel for the ratio 1:3. In each panel, the graph is also color-coded to reflect the gap percentage (G/S). As in Figure 3, all methods yield more accurate trees for a given G/S value when the gaps are caused by insertions and deletions in equal proportions (left panels), when compared to alignments where the gaps result from largely deletion events (right panels). This is evident by noting that the dots of a given color (G/S value) are higher in the charts in the left panel and lower in the right, for any given pair of inference methods being compared. However, there are distinct differences among the methods too, and they are
brought out in these pairwise comparisons. For instance, it is clear that, irrespective of whether the insertion-deletion ratio is 1:1 or 1:3, in general, the Bayesian, MP and PhyML methods are
somewhat comparable, while NJ does the poorest in the presence of gaps, especially when G/S is large. However, comparing the relative performance of the methods from such graphs becomes subjective.
Therefore, we conducted the paired t-test (at 5% level of significance) for each of the 110 data points (parameter combinations or "genes") in each of the graphs. The results of the t-test are given
by means of a letter that signifies if a particular method is statistically better than the other in a given comparison (B – Bayesian analysis, P – PhyML, L – MLε, M – Maximum Parsimony, and J –
Neighbor-Joining). We also determined which method was better, overall, in each of the panels, using the Z test, and this is shown by the corresponding letter with an asterisk in the upper triangle.
Thus, in the comparison between NJ and PhyML, we see that PhyML shows a significantly greater overall accuracy (Z test; p < 0.001), and that this difference is almost always statistically significant
for the individual comparisons. The superiority of PhyML over NJ in the presence of gaps is very clear when the insertion-deletion ratio is 1:1 (all 110 comparisons statistically significant; t test;
p < 0.05). When the ratio is 1:3, again, PhyML is better than NJ almost all the time; NJ is found to be significantly better only in two instances out of 110; two comparisons were not significant.
The comparison between MP and NJ also yields similar results, with MP being clearly superior most of the time (for 78 "genes", with 32 comparisons turning out non-significant; NJ is never better than MP) in the left panel. The result in favor of MP is more pronounced at higher G/S values, with the graph deviating away from the diagonal. In the comparison between MP and PhyML, PhyML is superior
to the other in a majority of cases, irrespective of the insertion-deletion ratio. Interestingly, MP superiority is seen only at very high G/S values, while PhyML is better almost everywhere else.
This is particularly evident in the right panel (insertion-deletion rate ratio of 1:3). In both panels, PhyML is significantly better, overall (Z test; p < 0.001).
In Figure 5, we show the results of the Bayesian method under the MD treatment compared to PhyML, MP, and NJ methods. The figure shows that, irrespective of the insertion-deletion rate ratio
and the G/S value, the Bayesian method is more accurate than MP, NJ or PhyML, overall (p < 0.001). When compared individually, it is seen to be better than NJ in all the genes in the left panel and
all but one of the genes in the right panel. Next, the Bayesian method is seen to be better than PhyML, overall, but it is statistically better (p < 0.05) 48 times (out of 110 comparisons across the
entire spectrum of G/S values), with PhyML outperforming it (t test; p < 0.05) in 22 cases, with neither being better than the other in the remaining 40 genes, in the right panel. In the left panel
(1:1) similar results were obtained, where PhyML and Bayesian were each statistically better (t test; p < 0.05) than the other a roughly equal number of times (around 30 cases each), with the two methods occupying different "niches" (Bayesian doing better at high gap percentages and PhyML at the low to intermediate levels of gap percentage). When the Bayesian and MP methods are compared, the Bayesian
analysis is statistically better (t test; p < 0.05) in almost 90 cases, whereas MP is better almost never, irrespective of the insertion-deletion rate ratio.
Pairwise comparison of inference methods under MD gap treatment. ...
It is important to note that the MD treatment is different among the inference methods. In the case of the probabilistic methods (PhyML and Bayesian analysis), the likelihood is summed over all four nucleotides at the gapped sites [27-29]; for a distance method like NJ, the nucleotide state at each gap is inferred by distributing the missing changes to unambiguous changes; and finally, for the MP method, a given state is assigned to each gapped site in a sequence if it is the most parsimonious, given the placement of the taxon in the tree (see the FAQ on the PAUP* website http://paup.csit.fsu.edu/paupfaq/paupfaq.html). Hence, it appears that the level of accuracy seen in this study for the different inference methods is also attributable to the accuracy with which the state is inferred, or the likelihood computed, at the gaps by the corresponding MD methods.
Next, we compare the accuracy of the different inference methods, again, taken two at a time, when gaps are treated not merely as missing data but as information that is included in the phylogenetic
analysis. Under the MP and Bayesian methods, gaps can be treated as binary characters (BC), with sites in a given column being scored as a 1 if gapped and 0 if not. Among the recent advances in the
modeling of molecular sequence evolution is the integration of insertion and deletion events along with base substitution processes in a probabilistic framework for phylogenetic inference [23]. We
have used this method for maximum likelihood analysis of our data, to compare this treatment (which we refer to as MLε) with the BC treatment in MP and Bayesian analyses. These comparisons (again,
pairwise as in Figures 4 and 5) are shown in Figure 6.
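To make the BC scoring concrete, here is a minimal per-site sketch (our illustration; published indel-coding schemes, such as simple indel coding, typically treat a contiguous gap as a single character rather than scoring site by site):

# Presence/absence (BC-style) coding of gapped sites: every cell of the
# alignment becomes 1 if it is a gap and 0 if it is a nucleotide.
def binary_gap_coding(alignment):
    return ["".join('1' if site == '-' else '0' for site in seq)
            for seq in alignment]

print(binary_gap_coding(["AC-GT", "ACCGT"]))  # ['00100', '00000']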
Pairwise comparison of inference methods when gaps are coded as distinct evolutionary events. MLε analysis, when the gaps were treated as binary ...
As in Figures 4 and 5, here too the average phylogenetic accuracy of one method is plotted against that of the other. Figure 6 shows that the differences among the inference methods in accuracy in the presence of indel-induced gaps are much more evident when gaps are included in the phylogenetic analysis and not treated as missing data (that is, when compared to the results in Figures 4 and 5). Furthermore, the accuracy is generally much higher, even in the method with the lower accuracy. We also see here, as in Figures 4 and 5 but in a more pronounced manner, that the accuracy of both methods is higher for a given G/S level when the gaps result from an equal proportion of insertions and deletions (panels in left column), as opposed to when they are largely from deletion events (right column), as evidenced by a comparison of the heights of the dots of a given color between the two panels, particularly at the mid to higher G/S values (light green, yellow and orange colored dots).
The different indel-coding methods are compared for their performance, in conjunction with the corresponding inference methods, in Figure 6. It is immediately obvious that accuracy is much
higher (and in a tight range of values) in all the left panels (1:1 ratio) for the probabilistic methods (Bayesian and MLε) when compared to the right side panels (1:3 ratio), providing a compelling
case for the association between phylogenetic accuracy and evolutionary origin of alignment gaps (insertion or deletion) – at least for the probabilistic methods. This fact is even more obvious in
the middle left panel where the two probabilistic methods are compared. When MP and MLε are compared (Figure 6, top panels), it is clear that MLε has a much higher accuracy in the medium to
high range of G/S values when the insertion-deletion ratio is 1:1, with MP doing better at low to medium G/S values. This difference between the two methods is much more pronounced in the right panel
(insertion-deletion ratio, 1:3), where MLε is better only at the highest G/S values and MP clearly the better of the two elsewhere. The middle panels show that Bayesian analysis produced more
accurate trees when compared to MLε in 77 (right panel) and 96 (right panel) out of 110 comparisons. Just as in the MP, MLε comparison (top panels), MLε again outperforms Bayesian at the highest G/S
values. The difference between the two panels is quite evident, with both methods varying in accuracy in a very tight range in the left panel (when compared to the 1:1 ratio). Note that the
distribution of ε (top and middle panels for the 1:3 ratio), suggesting a similar pattern between MP and Bayesian, under BC treatment of gaps, although the Bayesian method appears to be doing better
than MP against MLε. These two methods are compared in the bottom panels. As mentioned above, it is immediately apparent that the accuracy for the Bayesian method in the left panel is much higher
(minimum 90%) when compared to the right panel. These panels also show that whenever the difference in accuracy between MP and the Bayesian method is statistically significant (t test; p < 0.05), the
latter is always better (with 30 percent of the cases being non significant). Furthermore, the "genes" where the difference in accuracy is statistically significant are mostly spread across medium to
high G/S values. In summary, the Bayesian method is superior to the MP and MLε methods under the gap coding approach, irrespective of the relative proportions of insertions and deletions in the
The results shown in Figures 4, 5, and 6 have been obtained from our analyses of the alignments obtained from simulations done on the balanced (symmetric) model tree (Figure 1). We also obtained sequence alignments from simulations done with 16-taxon random-branching and pectinate trees for a subset of parameter values that, however, spanned the range of parameter values used in this study (see Additional File 1). All the analyses shown in Figures 4, 5, and 6 were done on these alignments as well (not shown), including the pairwise
comparisons among the inference methods and the paired t-tests. The results in those analyses showed that while the inference methods compare among themselves for the random-branching tree just as
they did for the balanced tree, there are some differences in the case of the pectinate tree. In the case of the MD analysis, while Bayesian was the better method overall, the performance of PhyML in
the case of the pectinate tree was glaringly different. In the case of the balanced tree, PhyML showed greater accuracy than NJ in essentially all the cases, irrespective of the insertion-deletion
ratio. However, for the pectinate tree, the roles are exactly reversed, with NJ better than PhyML in essentially all the cases – again, irrespective of the insertion-deletion ratio. Similarly, while
PhyML and MP were each better than the other a roughly equal number of times in the case of the balanced tree, MP accuracy was superior for the pectinate tree in essentially all the genes studied,
irrespective of the insertion-deletion rate ratio.
When indel coding was used, again, the random tree results are quite similar to those from the balanced tree. Interestingly, the results from the pectinate tree were not glaringly different from
those from the balanced tree, but rather, the two were largely similar, except that the overall accuracy was lower by about 20 percent.
We undertook this study to investigate the relationship between the number of gapped sites in a sequence alignment and the accuracy of phylogenetic inference, and furthermore, to understand the
impact of different gap treatment methods, phylogenetic inference methods, the ratio of insertions to deletion events in the evolutionary history of the sequences, and other sequence parameters such
as sequence length and the transition-transversion rate ratio, on this relationship. Using the computer program, Dawg version 1.2 [30] we simulated DNA evolution along a 16-taxon model tree (Figure
(Figure1),1), incorporating both nucleotide substitution events and insertion and deletion (indel) events (the latter as a function of the substitution rate.). The resulting DNA alignments were then
subjected to three gap treatment methods, namely, MD, BC, and MLε, and the phylogenetic analysis was done using popular phylogenetic inference methods – distance (NJ), parsimony (MP), likelihood
(PhyML) and Bayesian analysis.
A remarkable result in this study is the strong, almost deterministic, dependence of the accuracy of phylogenetic inference on the percentage of gapped sites in the alignment, irrespective of the
inference method, gap treatments, or insertion-deletion rate ratio, when the percentage of gapped sites was high (Figure 3). This made the assignment of gap thresholds for specific levels
of phylogenetic accuracy fairly straightforward, without being necessarily concerned with other determinants of phylogenetic accuracy. It was only at lower gap levels that the relationship was not as
straightforward, and other factors (e.g., substitution rate) began to play a part in directly influencing the accuracy of the inferred trees (as evidenced by the contour lines of accuracy crossing
gap percentage thresholds in Figure 3).
Earlier studies that have compared gap treatment methods have been confined to comparing their relative performances within a given inference method, particularly MP [5,6]. Therefore, this study was
undertaken to provide users with a comparison of other commonly used inference methods as well. We find that the probabilistic methods are clearly superior to MP and NJ, irrespective of whether gaps
are treated as missing data or binary characters. Treating gaps as binary characters implies the assignment of unambiguous phylogenetic signal to them in the evolutionary history of the sequences.
Therefore, the number of gaps has little bearing on the distortion of the phylogenetic signal under the BC method. On the other hand, the MD method requires the inference of the missing state at each
gapped site (or the summation of the likelihood for all four nucleotides at the gapped sites), a process that is bound to be strained with increasing number of gaps in the alignment. Therefore, it is
easy to understand the relative superiority of the BC gap treatment method. It must be noted, of course, that this method can only contribute to phylogenetic accuracy as long as the alignment gaps
are known without error (as in this study). Thus, the importance of the accuracy of sequence alignment cannot be underestimated.
The MLε method performed well in our study, although the Bayesian method was better, especially when the insertion-deletion ratio was 1:3 (Figure 6). When compared to MP analysis (the
other inference method that incorporated the BC), MLε was much better when the number of gaps was high, irrespective of the insertion-deletion ratio. Such methods hold the potential for more accurate
reconstruction of phylogenies in the presence of large alignment gaps (also see [15,31]).
In addition to the MD treatment and the gap-coding treatments such as BC, other treatment methods exist, although not widely used anymore. One of these is pairwise deletion, a gap treatment method
that is meaningful only when sequences are compared in a pairwise fashion, as in distance methods of inference, such as NJ. Moreover, it is an extremely rapid method that is suited to the speed of
NJ. The other is complete deletion of entire columns of gapped sites from the alignment, which is a gap-treatment method that is applicable to any phylogenetic inference method. We did these analyses
as well, because there is sometimes an uncertainty about which of these two methods is better [2,32]. The complete deletion of gaps posed a problem in our study as the number of sites that needed to
be removed from the alignment, especially at higher substitution rates, caused the remaining sequence length to become so small that often at least one of the four nucleotides failed to be
represented in the alignment. Therefore, we used this method only when the substitution rate was very low (r ≤ 0.2), and when the alignment length (remaining after complete deletion) for each
replicate of a given sequence combination was at least 100 nts.
Since the complete deletion treatment could be used only for low substitution rates, the comparison between the two treatments is also made only across this range. Furthermore, since the pairwise
deletion method can only be used in conjunction with the NJ method in this study, we compared the two methods only for NJ. Both methods are comparable at low to moderate gap percentages, but diverge
thereafter in the accuracy of phylogenetic inference (not shown). It must also be noted that the gap percentage does not reach very high levels in the pairwise deletion as it does in the complete
deletion method. Thus, while for a given gap percentage, the two treatment methods may be comparable in terms of phylogenetic accuracy, the pairwise removal of gaps appears to be better since the gap
percentage is much lower with this method.
A comprehensive list and analysis of gap treatment methods may be found in Ogden and Rosenberg [6] and Simmons Muller and Norton [5]. However, they did not compare among phylogenetic inference
methods, even for those gap-treatment methods that were common to multiple inference methods. In this study, while we do compare among gap-treatment methods, our emphasis is also on comparing among
inference methods, insertion-deletion ratios, and the effect of the amount of gap on phylogenetic accuracy under varying parameters.
In order to better understand the influence of the alignment gaps on phylogenetic accuracy, we performed the same simulations, but with only base substitutions and no indels. As there were no gaps in
the alignments, the data were subjected to phylogenetic analysis without any processing by means of gap treatment methods. The results of this analysis showed that, as expected, Maximum Likelihood
and Bayesian analysis produced the most accurate trees, particularly at the highest substitution rates (not shown).
Another notable finding in this study is the differential influence of insertions and deletions on phylogenetic accuracy. Most of the commonly used gap treatment methods do not distinguish between
insertions and deletions. Our results show that phylogenetic accuracy was lower when the insertion-deletion ratio was 1:3. Even the probabilistic methods (PhyML, MLε and Bayesian), which produced the
most accurate trees when insertions and deletions were introduced in equal numbers, performed somewhat poorly when the ratio was 1:3 (Figures 3, 4, 5, and 6). It therefore appears
important to develop methods that first distinguish between insertion and deletion events in the evolutionary history of the sequences in an alignment, and then treat them separately to add distinct
signals to the phylogenetic analysis.
In this study, the metric we have used to measure the accuracy against is the percentage of gaps in the alignment, and this in turn has been measured mainly as G/S. Some studies have found that it is
not the amount of data missing but rather the amount of data remaining that matters in determining the accuracy of the phylogeny being inferred [10,12,33]. In order to compare our results with the
results from these studies, we show the accuracy together with the number of characters remaining in the alignment and the total alignment length (Figure 7). The layout of Figure 7 is the same as that of Figure 3, with the left and right columns referring to
insertion-deletion ratios of 1:1 and 1:3, respectively, and the inference methods arranged one below the other, in the same order, namely, NJ, PhyML, MLε, Bayesian analysis and MP.
Effect of the alignment gap percentage on phylogenetic accuracy, number of characters remaining in the alignment, and total alignment length after gap-introduction. The average accuracy, ...
One of the first things that stands out in Figure 7 is the generally good accuracy of the MD method when G/S is low and its poor accuracy when G/S is high, irrespective of the inference method.
Interestingly, when the accuracy curve in each graph is compared to the curve of the remaining number of nucleotides, there seems to be little relationship between the two in the left panels (1:1),
again irrespective of the inference method. Thus, even as the number of remaining nucleotides (red triangles) continues to be high for large G/S values, accuracy declines. While the amount of
remaining data may indeed determine accuracy (Figure 7, and as mentioned in [10,12,33]), this is true only when homology among the sequences in the alignment can be established in the remaining
character data. If, however, the remaining character data is largely a result of insertion events, the relationship is unlikely to hold, as seen in the left panels.
On the other hand, if the gaps are coded separately (e.g., as BC), then the phylogenetic signal present in the gaps (if the alignment is accurate) increasingly becomes the only information for the
inference method to rely on, as G/S increases. The loss of signal from the character data is reflected in the decreased phylogenetic accuracy at high G/S values (left column). The greater loss of
phylogenetic accuracy at medium G/S values in the right-column panels of Figure 7 can be attributed to deletion events being less often distinct and non-overlapping than insertion events, since the
increase in the total length of the alignment with indel introduction is much higher when the insertion-deletion rate ratio is 1:1.
In this study, we also found that the alignments from the random-branching tree yielded essentially the same results as those from the balanced tree, while those from the pectinate tree were
different (not shown). The analyses from the pectinate tree data in general showed lower accuracy than the corresponding analyses from the balanced tree datasets. Furthermore, the relative
performances of the different inference methods were not the same between the two model topologies. In particular, the relative performance of the PhyML method was worse when the topology contained
pectinate branching.
This is a simulation-based study and is confined to certain specific simulation parameters and methods of gap treatment and phylogenetic inference used in this study. However, the choices of the
parameter values have been made based on empirical studies in the literature. This included the size distribution of indels [18], which may not be a critical feature as far as the BC
treatment is concerned, but may be important when the state at the gapped sites is inferred or coded. Therefore, we believe that the results obtained in this study are sufficiently general to be useful to
the community of molecular phylogeneticists. However, we must add a note of caution that while it is likely that the general results of this study will hold, the particulars may be dependent on the
specific choices of simulation and other parameter values. Finally, the relationships between phylogenetic accuracy and gap percentage in this study were derived based on two unlikely events in
empirical studies – knowledge of the true tree and a perfect alignment. These certainly are sources of uncertainty and/or error in real data analysis, and must be accounted for, in empirical studies.
However, the utility of simulation-based studies such as this is that they serve to provide an assessment and quantification of relationships in the absence of confounding factors.
The presence of gaps in molecular sequence alignments is commonplace in the literature. Our simulation-based results show that, when the alignment gaps reflect indel events without error, and the
number of gapped sites per sequence is ≤20 percent of the sequence length, all the inference methods used (NJ, PhyML, MLε, Bayesian analysis and MP) perform well in accurately inferring the
phylogeny. However, when the number of gaps is large (≥80 percent), the Bayesian method clearly outperforms the other inference methods when the gaps are treated as Missing Data (MD), although it
must be noted that since each inference method uses a different criterion in treating gaps as missing data, the higher accuracy for the Bayesian and PhyML method can perhaps also be attributed to a
more accurate integration of the state at each of the gapped sites. Within the MP and Bayesian methods, the inference of the phylogeny was significantly more accurate when each gapped site was
treated as a Binary Character state (BC) than when the gaps were treated as MD. When the sequences in an alignment contain a large number of gaps, as in the case of highly diverged sequences,
incorporating the gaps in a likelihood analysis (MLε) may be more efficient than using Bayesian or MP inference in combination with the BC method. Finally, our results also show that it is more difficult to accurately infer the
phylogeny from an alignment where a greater proportion of gaps reflect deletion events rather than insertion events in the evolutionary history of the sequences in the alignment.
Computer simulations
True DNA sequence alignments were generated by simulating evolution along a 16-taxon model tree using the computer program Dawg, version 1.2 [30]. The model tree topology used for the simulations
was a balanced, non-ultrametric tree with random branch lengths (Figure 1), borrowed from [34]. The nucleotide substitution model used was HKY [35], with rate heterogeneity among sites.
Simulations were done to mimic nucleotide sequences with different properties by systematically varying the sequence and indel parameters (in a fully factorial manner, see below). Simulations were
also done with 16-taxon random-branching and pectinate trees, and these alignments were subjected to the same analyses as the alignments from the balanced tree of Figure 1. The
results of these analyses showed that while the alignments from the random-branching tree gave essentially the same results as those from the balanced tree, those from the pectinate tree did not. These
differences have been pointed out at appropriate places in the text, while presenting the results only from the balanced tree of Figure 1.
The values of the sequence and indel parameters used in the simulations are given in Additional file 1. These values were varied based on several studies (e.g., [18,20,36,37]) to ensure the
generality of the conclusions from this study. The sequence parameters varied were: initial sequence length (l), transition to transversion rate ratio (κ), and the rate of nucleotide substitution (r)
as the number of substitutions per site. The shape parameter (α) of the gamma distribution was set to 0.5 to specify the extent of rate heterogeneity among sites. The nucleotide base frequencies were
kept constant (A% = T% = 30%; G% = C% = 20%) throughout the simulations, based on the literature [18,19]. All other options in the program pertaining to the sequences were set to default
during simulation. Note also that the results compared between l = 500 and l = 2500 showed similar patterns (except for an increase in accuracy), as did those compared between κ = 2.0 and κ
= 5.0. Therefore, results are presented in this paper only for l = 500 and κ = 2.0.
The rate at which indels were introduced during simulation, λ, was varied as a function of the substitution rate (see Additional file 1). For example, a λ of 0.03 refers to an average of 3 indels per
100 substitutions. λ was also varied to include the range typically observed in empirical sequence data [17-20]. In addition, in order to mimic the very large number of gaps that can potentially be
seen in introns and other non-coding sequences, a few higher indel rates were also added (with corresponding increased substitution rates). Although phylogenetic analysis is typically done without
differentiating between insertions and deletions, the insertion to deletion ratio was set to either 1:3 [18,20] or 1:1 [21,22,38], at a given indel rate, in order to accommodate differing opinions
about the ratio of deletions and insertions, and to determine if the two have different impacts on phylogenetic accuracy. The size distribution of insertions/deletions was as per mammalian pseudogene
data [18] and ranged from 1 to 60 bp in length. This distribution of indel lengths can be observed in other non-coding sequences, such as chloroplast intergenic regions [19] and nuclear DNA
sequences of primates [22]. Each combination of the sequence and indel parameter sets (44 and 20 sets, respectively, i.e., 880 combinations) was replicated 100 times, thus producing 88,000 16-taxon non-coding sequence alignments.
Phylogenetic analysis
Phylogenetic analysis was done on the alignments obtained, using the Neighbor-Joining (NJ) and Maximum Parsimony (MP) methods as implemented in PAUP* version 4.0b10 [4]. Maximum Likelihood analyses
(PhyML) were done using the program PhyML version 2.4.4 [27], because of its speed [27]. Finally, Bayesian analysis was done using MrBayes version 3.1.2 [28,29,39] with default settings.
Maximum Likelihood HKY pair-wise distances were used for the NJ analyses. In PhyML analysis, the initial tree was built using BIONJ [27]. The parameters of the HKY substitution model (the four base
frequencies and the transition/transversion rate ratio) along with the proportion of invariable sites and the gamma distribution shape parameter were estimated from the simulated data for both NJ and
PhyML analysis. For the MP analysis, a heuristic search was done using the stepwise addition algorithm for the provisional tree and subsequent branch swapping using the Nearest-Neighbor Interchange
(NNI) method. (NNI results for MP are known to be as good as those from the more thorough – and time-consuming – Tree Bisection Reconnection (TBR) searches [40,41]. In addition, our TBR and NNI
results for a representative subset of the simulations yielded essentially the same results). All other settings were set to default. Similarly, results analyzed from the PhyML version 2.4.4 [27]
using NNI were not different from the recent PhyML version 3.0 [27] with SPR (Subtree Pruning and Regrafting) tree search.
For the Bayesian analysis, the nucleotide substitution model used was HKY with invariant sites and rate heterogeneity across sites. The number of generations was set to 50,000 with a
sampling frequency of 50. In cases when convergence was not obtained (typically for high substitution and indel rates), the number of generations was increased to 100,000 with a sampling frequency of
100. Burn-in was set to 25 percent of the generations and the inferred tree was estimated as the consensus of all compatible groups of the post burn-in trees. The inferred tree was then compared to
the model tree and topological distances were measured using PAUP* version 4.0b10 [4].
Treatment of gaps
Gapped sites in our (true) alignments were subjected to the following gap treatment methods during phylogenetic analysis: (a) "MD" (Missing Data) – in this treatment the nucleotide state at each
gapped site is treated as a missing character, handled according to the optimization criterion of the inference method, i.e., depending on whether it is distance-based, parsimony, likelihood, or Bayesian [4,27,28]. Treating gaps
as unknown or missing data is the default option in PAUP*. The FAQ page for PAUP* at http://paup.csit.fsu.edu/paupfaq/paupfaq.html explains the working of this treatment under each inference method,
and for PhyML and Bayesian, it is explained in [27-29]. Briefly, PAUP* deals with missing characters in the following manner: under the parsimony criterion, a missing character in a sequence is
assigned the most parsimonious state given its placement in the tree. Under the likelihood criterion, a gapped site is assigned a state based on the likelihood which is computed by summing the
likelihoods over all possible states – a strategy that is used by PhyML as well. For distance methods, PAUP* deals with the missing data by distributing the missing or ambiguous changes
proportionally to each unambiguous change. Bayesian analysis was done using the program MrBayes [28,29,39], which deals with gaps just as other Maximum Likelihood programs [4,42-44]. (b) "BC" (Binary
Character state) – the gapped sites in each column are coded as binary characters (1 if gap present, 0 if absent), available for the MP and Bayesian methods [4,28,29]. This treatment can be invoked
in PAUP* for MP analysis with the commands "GapMode = Missing", and "Symbol = 01" under "Format" and "Options", and providing a matrix of symbols reflecting gapped sites. In Bayesian analyses (using
MrBayes), binary characters are included as a separate binary restriction data partition, using the command "coding = variable" under "lset". (c) "MLε", a probabilistic model implemented in the
DNAML package [23] of the PHYLIP program [44], which incorporates insertion and deletion events, in addition to substitution events, in the evolutionary model.
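To make the BC idea concrete, here is a minimal sketch in Python (illustrative only; the study itself used PAUP* and MrBayes as described above, and the function name and toy alignment below are our own):

def binary_gap_coding(alignment):
    # Code each column that contains at least one gap as a binary
    # presence/absence character: 1 where a sequence has a gap ('-'),
    # 0 where it has a nucleotide.
    taxa = list(alignment)
    length = len(alignment[taxa[0]])
    gapped_cols = [j for j in range(length)
                   if any(alignment[t][j] == '-' for t in taxa)]
    return {t: ''.join('1' if alignment[t][j] == '-' else '0'
                       for j in gapped_cols)
            for t in taxa}

aln = {"tax1": "ACG-TA", "tax2": "AC--TA", "tax3": "ACGCTA"}
print(binary_gap_coding(aln))  # {'tax1': '01', 'tax2': '11', 'tax3': '00'}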
Assessing phylogenetic accuracy
The accuracy of the inferred trees was measured as the percentage of internal branches reconstructed correctly in the inferred tree, obtained as P_C = [1 − d_T/(2(m − 3))] × 100, where d_T is the topological distance between the inferred
and model trees [45,46] and m is the number of sequences in the alignment (16). P_C values were averaged over all the (100) replicates for each parameter combination, to give the average accuracy reported here.
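A minimal sketch of this accuracy computation in Python, assuming the normalization above (the Robinson–Foulds topological distance between two unrooted binary trees on m taxa is at most 2(m − 3)):

def percent_correct(d_T, m=16):
    # An unrooted binary tree with m taxa has m - 3 internal branches,
    # so the maximum topological distance between two such trees is
    # 2 * (m - 3); P_C rescales d_T to a percentage of branches recovered.
    return (1 - d_T / (2 * (m - 3))) * 100

print(percent_correct(0))   # 100.0: identical topologies
print(percent_correct(26))  # 0.0: all 13 internal branches wrong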
Authors' contributions
SRG conceived and designed the study. BD did the simulations, conducted the analyses, and wrote the first draft of the manuscript. Both authors read and approved the final manuscript.
Supplementary Material
Additional file 1:
Sequence and indel parameter values used in the generation of gapped sequence alignments by computer simulation. The sequence length, l, is measured as the number of nucleotides. The indel rate, λ,
refers to the number of indel events per nucleotide substitution, and is expressed as a proportion (for example, a λ value of 0.03 indicates that there were three indel events for every 100
substitutions), r is a multiplier, that, when multiplied by a given branch length in the model tree and the sequence length, yields the number of substitutions to be introduced in that branch during
Acknowledgements
We thank the Ohio Supercomputer Center (OSC) for help with computational analyses. We would also like to thank Reed Cartwright for help with his simulation program, and four anonymous reviewers for
insightful suggestions. This research was supported by start-up funds from the University of Dayton (to SRG) and summer research funding from the Graduate School, University of Dayton (to BD).
• Felsenstein J. Inferring Phylogenies. Sunderland: Sinauer Associates; 2004.
• Nei M, Kumar S. Molecular Evolution and Phylogenetics. Oxford: Oxford University Press; 2000.
• Hall BG. Phylogenetic trees made easy. A how-to manual, 2nd edition. Sunderland: Sinauer Associates; 2004.
• Swofford DL. PAUP*. Phylogenetic Analysis Using Parsimony (* and Other Methods). Version 4.0b10. Sunderland: Sinauer Associates; 2003.
• Simmons MP, Muller K, Norton AP. The relative performance of indel-coding methods in simulations. Mol Phylogenet Evol. 2007;44:724–740. doi: 10.1016/j.ympev.2007.04.001.
• Ogden TH, Rosenberg MS. How should gaps be treated in parsimony? A comparison of approaches using simulation. Mol Phylogenet Evol. 2007;42:817–826.
• Raymond J, Zhaxybayeva O, Gogarten JP, Blankenship RE. Whole genome analysis of photosynthetic prokaryotes. Science. 2002;298:1616–1620. doi: 10.1126/science.1075558.
• Egan AN, Crandall KA. Incorporating gaps as phylogenetic characters across eight DNA regions: Ramifications for North American Psoraleeae (Leguminosae). Mol Phylogenet Evol. 2008;46:532–546. doi: 10.1016/j.ympev.2007.10.006.
• Lee C, Wen J. Phylogeny of Panax using chloroplast trnC–trnD intergenic region and the utility of trnC–trnD in interspecific studies of plants. Mol Phylogenet Evol. 2004;31:894–903. doi: 10.1016/j.ympev.2003.10.009.
• Wiens JJ. Missing data, incomplete taxa, and phylogenetic accuracy. Syst Biol. 2003;52:528–538. doi: 10.1080/10635150390218330.
• Wiens JJ, Moen DS. Missing data and the accuracy of Bayesian phylogenetics. J Syst Evol. 2008;46:307–314.
• Philippe H, Snell EA, Bapteste E, Lopez P, Holland PWH, Casane D. Phylogenomics of eukaryotes: Impact of missing data on large alignments. Mol Biol Evol. 2004;21:1740–1752. doi: 10.1093/molbev/msh182.
• Hartmann S, Vision TJ. Using ESTs for phylogenomics: Can one accurately infer a phylogenetic tree from a gappy alignment? BMC Evol Biol. 2008;8:95. doi: 10.1186/1471-2148-8-95.
• Driskell AC, Christidis L. Phylogeny and evolution of the Australo-Papuan honeyeaters (Passeriformes, Meliphagidae). Mol Phylogenet Evol. 2004;31:943–960. doi: 10.1016/j.ympev.2003.10.017.
• Rivas E. Evolutionary models for insertions and deletions in a probabilistic modeling framework. BMC Bioinformatics. 2005;6:63. doi: 10.1186/1471-2105-6-63.
• Cantarel BL, Morrison HG, Pearson W. Exploring the relationship between sequence similarity and accurate phylogenetic trees. Mol Biol Evol. 2006;23:2090–2100. doi: 10.1093/molbev/msl080.
• Parsch J. Selective constraints on intron evolution in Drosophila. Genetics. 2003;165:1843–1851.
• Zhang ZL, Gerstein M. Patterns of nucleotide substitution, insertion and deletion in the human genome inferred from pseudogenes. Nucleic Acids Res. 2003;31:5338–5348. doi: 10.1093/nar/gkg745.
• Yamane K, Yano K, Kawahara T. Pattern and rate of indel evolution inferred from chloroplast intergenic regions in Poaceae. Genes Genet Syst. 2006;81:418–418.
• Matthee CA, Eick G, Willows-Munro S, Montgelard C, Pardini AT, Robinson TJ. Indel evolution of mammalian introns and the utility of non-coding nuclear markers in eutherian phylogenetics. Mol Phylogenet Evol. 2007;42:827–837. doi: 10.1016/j.ympev.2006.10.002.
• Chen FC, Chen CJ, Li WH, Chuang TJ. Human-specific insertions and deletions inferred from mammalian genome sequences. Genome Res. 2007;17:16–22. doi: 10.1101/gr.5429606.
• Saitou N, Ueda S. Evolutionary rates of insertion and deletion in noncoding nucleotide sequences of primates. Mol Biol Evol. 1994;11:504–512.
• Rivas E, Eddy SR. Probabilistic phylogenetic inference with insertions and deletions. PLoS Comput Biol. 2008;4:e1000172. doi: 10.1371/journal.pcbi.1000172.
• Graybeal A. Is it better to add taxa or characters to a difficult phylogenetic problem? Syst Biol. 1998;47:9–17. doi: 10.1080/106351598260996.
• Rosenberg MS, Kumar S. Traditional phylogenetic reconstruction methods reconstruct shallow and deep evolutionary relationships equally well. Mol Biol Evol. 2001;18:1823–1827.
• Gadagkar SR, Rosenberg MS, Kumar S. Inferring species phylogenies from multiple genes: Concatenated sequence tree versus consensus gene tree. J Exp Zoolog B Mol Dev Evol. 2005;304:64–74. doi: 10.1002/jez.b.21026.
• Guindon S, Gascuel O. A simple, fast, and accurate algorithm to estimate large phylogenies by maximum likelihood. Syst Biol. 2003;52:696–704. doi: 10.1080/10635150390235520.
• Huelsenbeck JP, Ronquist F. MRBAYES: Bayesian inference of phylogenetic trees. Bioinformatics. 2001;17:754–755. doi: 10.1093/bioinformatics/17.8.754.
• Ronquist F, Huelsenbeck JP. MrBayes 3: Bayesian phylogenetic inference under mixed models. Bioinformatics. 2003;19:1572–1574. doi: 10.1093/bioinformatics/btg180.
• Cartwright RA. DNA assembly with gaps (Dawg): simulating sequence evolution. Bioinformatics. 2005;21:31–38. doi: 10.1093/bioinformatics/bti1200.
• Loytynoja A, Goldman N. Phylogeny-aware gap placement prevents errors in sequence alignment and evolutionary analysis. Science. 2008;320:1632–1635. doi: 10.1126/science.1158395.
• Chang GS, Hong YJ, Ko KD, Bhardwaj G, Holmes EC, Patterson RL, van Rossum DB. Phylogenetic profiles reveal evolutionary relationships within the "twilight zone" of sequence similarity. Proc Natl Acad Sci USA. 2008;105:13474–13479. doi: 10.1073/pnas.0803860105.
• Wiens JJ. Missing data and the design of phylogenetic analyses. J Biomed Inform. 2006;39:34–42. doi: 10.1016/j.jbi.2005.04.001.
• Ogden TH, Rosenberg MS. Multiple sequence alignment accuracy and phylogenetic inference. Syst Biol. 2006;55:314–328. doi: 10.1080/10635150500541730.
• Hasegawa M, Kishino H, Yano TA. Dating of the human-ape splitting by a molecular clock of mitochondrial DNA. J Mol Evol. 1985;22:160–174. doi: 10.1007/BF02101694.
• Yang ZH. On the best evolutionary rate for phylogenetic analysis. Syst Biol. 1998;47:125–133. doi: 10.1080/106351598261067.
• Yang ZH. Among-site rate variation and its impact on phylogenetic analyses. Trends Ecol Evol. 1996;11:367–372. doi: 10.1016/0169-5347(96)10041-0.
• Schaeffer SW. Molecular population genetics of sequence length diversity in the Adh region of Drosophila pseudoobscura. Genet Res. 2002;80:163–175. doi: 10.1017/S0016672302005955.
• Altekar G, Dwarkadas S, Huelsenbeck JP, Ronquist F. Parallel metropolis coupled Markov chain Monte Carlo for Bayesian phylogenetic inference. Bioinformatics. 2004;20:407–415. doi: 10.1093/bioinformatics/btg427.
• Takahashi K, Nei M. Efficiencies of fast algorithms of phylogenetic inference under the criteria of maximum parsimony, minimum evolution, and maximum likelihood when a large number of sequences are used. Mol Biol Evol. 2000;17:1251–1258.
• Piontkivska H. Efficiencies of maximum likelihood methods of phylogenetic inferences when different substitution models are used. Mol Phylogenet Evol. 2004;31:865–873. doi: 10.1016/j.ympev.2003.10.011.
• Yang ZH. PAML 4: Phylogenetic analysis by maximum likelihood. Mol Biol Evol. 2007;24:1586–1591. doi: 10.1093/molbev/msm088.
• Yang ZH. PAML: a program package for phylogenetic analysis by maximum likelihood. Comput Appl Biosci. 1997;13:555–556.
• Felsenstein J. PHYLIP – Phylogeny Inference Package (Version 3.2). Cladistics. 1989;5:164–166.
• Penny D, Hendy MD. The use of tree comparison metrics. Syst Zool. 1985;34:75–82. doi: 10.2307/2413347.
• Robinson DF, Foulds LR. Comparison of phylogenetic trees. Math Biosci. 1981;53:131–147. doi: 10.1016/0025-5564(81)90043-2.
| {"url":"http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2746219/?tool=pubmed","timestamp":"2014-04-20T13:52:43Z","content_type":null,"content_length":"168221","record_id":"<urn:uuid:97a66277-b59c-4c75-96fa-027b60d1eb30>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00033-ip-10-147-4-33.ec2.internal.warc.gz"} |
Verga, NJ Trigonometry Tutor
Find a Verga, NJ Trigonometry Tutor
...Instead, I'm evaluating your particular strengths and weaknesses and developing *your* strategy to get you the highest score possible.We'll cover tricks and tips that will save you time, and
we'll only use methods that will work for you. I'll help you master vocabulary and learn how to get throu...
47 Subjects: including trigonometry, chemistry, English, reading
...In most math and science subjects, if you cannot understand Chapter 1, you will never be able to understand Chapter 2, and so on, because everything builds on the previous information. I will
help struggling students by working through the problems with them, and providing them with additional q...
30 Subjects: including trigonometry, reading, biology, English
...In addition to the usual subjects, I am qualified to tutor actuarial math, statistics and probability, theoretical computer science, combinatorics and introductory graduate topics in discrete
mathematics. I am willing to tutor individuals or small groups. I am most helpful to students when the tutoring occurs over a longer period of time.
18 Subjects: including trigonometry, calculus, statistics, geometry
...I taught ESL for two years at a language school in Taipei, Taiwan, working primarily with college-level students who needed to refine their speaking and writing skills. Since then, I have
worked with students employed by the Du Pont Co. and AstraZeneca, as well as with graduate students in sever...
32 Subjects: including trigonometry, chemistry, English, biology
...With only five students allowed in the class, I was able to get the full attention and concentration on what I was doing, strengthening my anatomy experience. At the college level I studied
elementary physiology that mainly dealt with the way different body systems coordinate with each other. B...
28 Subjects: including trigonometry, chemistry, geometry, biology
Related Verga, NJ Tutors
Verga, NJ Accounting Tutors
Verga, NJ ACT Tutors
Verga, NJ Algebra Tutors
Verga, NJ Algebra 2 Tutors
Verga, NJ Calculus Tutors
Verga, NJ Geometry Tutors
Verga, NJ Math Tutors
Verga, NJ Prealgebra Tutors
Verga, NJ Precalculus Tutors
Verga, NJ SAT Tutors
Verga, NJ SAT Math Tutors
Verga, NJ Science Tutors
Verga, NJ Statistics Tutors
Verga, NJ Trigonometry Tutors
Nearby Cities With trigonometry Tutor
Almonesson trigonometry Tutors
Blackwood Terrace, NJ trigonometry Tutors
Blenheim, NJ trigonometry Tutors
Center City, PA trigonometry Tutors
Chews Landing, NJ trigonometry Tutors
Grenloch trigonometry Tutors
Hilltop, NJ trigonometry Tutors
Jericho, NJ trigonometry Tutors
Lakeland, NJ trigonometry Tutors
Lester, PA trigonometry Tutors
Passyunk, PA trigonometry Tutors
Penn Ctr, PA trigonometry Tutors
West Collingswood Heights, NJ trigonometry Tutors
West Collingswood, NJ trigonometry Tutors
Westville Grove, NJ trigonometry Tutors | {"url":"http://www.purplemath.com/Verga_NJ_trigonometry_tutors.php","timestamp":"2014-04-18T23:52:51Z","content_type":null,"content_length":"24372","record_id":"<urn:uuid:2c99b849-5f33-4f75-985a-b7ba36bbecf0>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00646-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Help
October 21st 2009, 02:55 PM #1
Junior Member
Sep 2009
please help
let p and q be two distinct odd prime numbers, n = pq, and let d = gcd(p-1, q-1). Then show that x^(ϕ(n)/d) ≡ 1 (mod n) for every x with gcd(x, n) = 1.
I know that ϕ(n) = (p-1)(q-1).
How can I show that?
my answer is:
x ≡ a1 (mod p)
x ≡ a2 (mod q)
Since d divides q-1, we can write x^((p-1)(q-1)/d) = (x^(p-1))^((q-1)/d) ≡ 1 (mod p),
and the same argument works mod q,
so x^(ϕ(n)/d) ≡ 1 (mod pq), that is, ≡ 1 (mod n).
I hope this is correct.
Last edited by koko2009; October 21st 2009 at 03:49 PM.
October 22nd 2009, 06:06 AM #2
Junior Member
Oct 2009
We have
$x^{p-1}\equiv1\pmod p$
$x^{q-1}\equiv1\pmod q$
Hence $x^{\mathrm{lcm}(p-1,q-1)}\equiv1\pmod n$ (this is because the multiplicative group $\mathbb Z_n^\times$ is isomorphic to $\mathbb Z_p^\times\times\mathbb Z_q^\times$). The result follows from the fact that
$\mathrm{lcm}(p-1,q-1)=\frac{(p-1)(q-1)}{\gcd(p-1,q-1)}=\frac{\varphi(n)}{d}.$
NB: The result also holds if one of $p$ and $q$ is equal to $2.$
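A quick brute-force check of the claim for small primes (an illustration in Python, not part of the proof):

from math import gcd

def check(p, q):
    # Verify x**(phi(n)//d) % n == 1 for every x coprime to n = p*q.
    n, phi, d = p * q, (p - 1) * (q - 1), gcd(p - 1, q - 1)
    e = phi // d
    return all(pow(x, e, n) == 1 for x in range(1, n) if gcd(x, n) == 1)

print(check(3, 5), check(5, 7), check(11, 13))  # True True True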
Leibniz From Riemann's Standpoint^1
by Lyndon H. LaRouche, Jr.
July 14, 1996
One who has not merely learned, but knows relevant features of the work of Johannes Kepler, Gottfried Leibniz, Carl Gauss, and Bernhard Riemann, must be appalled by the unbridgeable gulf between the
actual work of those exemplary, leading figures of modern European science, and what most of today's relevant academic specialists misrepresent crucial elements of that work to have been. Such has
been the present writer's cumulative experience, over those sixty-odd years, since he began systematic studies of the putatively leading European philosophers from the Seventeenth and Eighteenth centuries.
During most of those decades, the writer has wrestled with relevant, published scholarly and other misrepresentations, in his verbal and oral exchanges with relevant professors and students of
philosophy, with ordinary laymen, and with practitioners of mathematical science. With rare exceptions, whenever any among these crucial issues of principle is addressed, nearly all among the
professional opinions encountered, are not merely mistaken, but are uttered with shameless unconcern for truthfulness.
If one applies the method of Socratic dialogue, seeking to smoke out the underlying, axiomatic roots of these differences, two causes for the widespread academic, and popular misrepresentation of
Kepler, Leibniz, and Riemann, are brought to the surface. First, that the standpoint of most of those commentators, is that of Aristotle, or the empiricists. Second, when the core of the difference
is chased back to its relevant epistemological rabbit-hole, any reference to the fact, that the issue is rooted in opposition to the principles underlying the scientific method of Kepler, Leibniz,
and Riemann, evokes their modern opponents' implicitly hysterical effort to deny the fact, that their own, contrary, judgments are derived from such differences in axiomatic assumptions.
Typically, the hysteria expressed on the second count, is of the same form as Isaac Newton's absurd literary outburst: ... et hypotheses non fingo!. The Newtonian system rests upon a very precisely
defined hypothesis, which Newton denies to exist.^2 On the subject of Kepler, Leibniz, or Riemann,^3 the argument of most putative scholarly authorities, is analogous to Newton's denial of the
existence of his own hypothesis. Rather than acknowledging the difference between their own and their subject's axiomatic assumptions, Newton et al. have insisted, that they themselves have no such
assumptions to be contested. That hysterical behavior by Newton, et al., might remind us, of the startled, wild-eyed boy (probably the local schoolyard bully) caught by his mother at the moment he
has his hand in the cookie-jar, with inculpatory crumbs all around his mouth, shrieking at his mother: "What cookie-jar!"
As we shall show in the course of this paper, those writers against which we complain thus, have not relived the Socratic experience of the fundamental discoveries achieved by any among these three
crucial figures of modern science. We shall show, that, for that reason, however much they might claim to have learned, they have no direct mental experience of the relevant acts of discovery of
principle involved. Thus, however much they have merely learned, they know relatively nothing of crucial importance about those types of subject-matters of science, in which the principal variables
to be considered, are differences in underlying (e.g., axiomatic) assumptions.
Thus, one might recognize, as in the manner indicated above, that the seemingly characteristic trait among today's roster of putatively authoritative commentaries, is that each and all are governed
much less by a passion for truth, than by blind zeal. We observe that that zeal is commonly mustered in defense of some philosophical standpoint contrary to that of any and all among of such targets
of their muddled commentaries, as those four whom we have listed at the outset of this paper. In general, it may be said, that most such commentators are fairly classed, either as Aristoteleans, or
philosophical empiricists. All seek to deny, that any influential principle of mathematics or physics (for example) might have been achieved by a scientific method contrary to their own.^4 Above all,
they reject that fundamental principle of Socratic method, Plato's method of hypothesis, by means of which all of the crucial discoveries of Kepler, Leibniz, and Riemann (for example) were generated.
For that, and related reasons, no competent representation of the central conceptions underlying Leibniz's work can be presented in the terms of scholarship which have, unfortunately, become
conventional in qualifying doctoral candidates, or, more generally, in the production of related, putatively "scholarly" theses. In the case, such as this topic, in which most among the putative
authorities are distinguished almost as much by their incompetence (or intellectual dishonesty), as their scholarship, one must emulate that most estimable Franciscan, François Rabelais, to reject,
as ridiculous, the suggestion, that consensus among a representative body of putative scholarly authorities, such as our modern Suckfists and Kissbreeches of science, might be the relevant approach
to the issues at hand. One must reconstruct the relevant principles, as if from the ground up. To this end, as we have said above, one must follow the map of Plato's method of negation of
axiomatically misguided, but official, or other generally held opinion; we must employ the Socratic method of hypothesis.
Today, the most efficient standpoint from which to present, to a modern, literate audience, the axiomatic basis for Leibniz's scientific work, is the case of the fundamental discovery, respecting the
principle of hypothesis, which Bernhard Riemann applied to mathematical physics, in his 1854 habilitation dissertation.^5 This present writer's discoveries within the domain of Leibniz's science of
physical economy, provides the best vantage-point from which to demonstrate this specific connection of Leibniz to Riemann. We summarize that approach to the conceptions; we, thus, avoid the wide,
textbook-paved road to Hell, and follow the Classical humanist method, instead. The latter, is the method of re-experiencing, at least in outline of the crucial points, the mental processes of one or
more among the relevant original discoverers. The relevant case here, is the present writer's re-enactment of Riemann's discovery, but from a fresh standpoint. This serves, in turn, as our
vantage-point for pointing out some characteristic features of Leibniz's method.
Three points are considered below. First, what the present writer came to recognize as the deeper significance of Riemann's habilitation dissertation. Second, how the writer's own discovery in
physical economy imparts to Riemann's discovery, an otherwise overlooked authority. Finally, how we are forced, by considering Riemann's and the writer's own discoveries, to adopt a deeper
appreciation of some among the more celebrated writings of Leibniz.
The Principle of 'Universal Characteristics'
During the interval from his own fourteenth through eighteenth birthdays, this writer became a follower of Gottfried Wilhelm Leibniz. His acquaintance with Leibniz came through English editions of
some of Leibniz's noted books, obtained, chiefly, either from the family household's library, or the Lynn, Massachusetts Public Library. This came as part of a project begun the summer preceding the
writer's thirteenth birthday, and continued through his eighteenth year: a comparative study of the relatively most popular titles from leading English, French, and German philosophers of the
Seventeenth and Eighteenth centuries, taking each in chronological order.
The writer began with writings of Francis Bacon, turned next to Thomas Hobbes, René Descartes, John Locke, Leibniz, Hume, Berkeley, Rousseau, taking up English translations of Immanuel Kant's
Critique of Pure Reason and Prolegomena to Any Future Metaphysics about two and a half years later. The Leibniz writings featured in this series (and read, over and over again), were the Monadology,
Theodicee, and Clarke-Leibniz Correspondence.^6 At that time, the writer then found the empiricists trivial in content, relative to Leibniz, although foes of some importance respecting their obvious
influence on the world as viewed from 1930's Massachusetts. It was the defense of Leibniz against the central argument of Kant's Critique of Pure Reason, which proved itself a more worthy and
profitable challenge, back then. Although this writer did not turn to a systematic study of Plato's writings until the mid-1950's, he had already been steeped in Plato's method of hypothesis, through
studying and defending certain among the leading published writings of Leibniz.
Obviously, as for any person, many childhood and youthful experiences converged to shape the present writer's character. However, in retrospect, the importance of working through a pro-Leibniz
counter-attack upon Kant, was, without doubt, the most crucial of these formative experiences. This influence was hewn into a practical form by his most significant post-war experience, the
encounters with, first, Norbert Wiener's Cybernetics,^7 and, also, those notions of "operations research" and "systems analysis" converging upon the work of Bertrand Russell's devotee, John Von
Neumann. The earlier wrestling against Kant, provided the standpoint from which to identify the kernel of evil implicit in Wiener's statistical definition of "information theory."
As reported in various locations, by the beginning of the 1950's, the writer's original discoveries, effected in the course of refuting "information theory," impelled him to undertake a careful
rereading of Riemann's habilitation dissertation. The crucial importance of that rereading, lay in Riemann's addressing the subject of the determining function of Plato's method of hypothesis, in
defining any competent form of mathematical physics.^8 Once we have considered the implications of Riemann's work, we are able to see his most famous predecessors within modern science in a fresh
way: Gauss, Leibniz, and Leibniz's crucial predecessors, Kepler, Leonardo da Vinci, and da Vinci's crucial predecessor, Nicolaus of Cusa. Consider the relevant, central implications of Riemann's
habilitation dissertation, and then the significance of Riemann's discovery, when it, in turn, is situated within the context provided by this writer's own original discoveries in physical economy.
Briefly, the significance of Riemann's discovery, is this. Consider the form of algebra introduced to the Seventeenth century by the founder of the "Enlightenment," the atheistic Servite monk, and
follower of William of Ockham, Paolo Sarpi. Consider the expression of this in the work of such Sarpi lackeys and followers as Galileo Galilei, Thomas Hobbes, and René Descartes. The proximate source
of the Enlightenment forms of algebra, employed by René Descartes, Isaac Newton, and their devotees, is derived from an "Ockhamite" reading of what is most widely recognizable as that modern
classroom parody of Euclid's geometry embedded in the mathematics curricula generally, as presented, still, in secondary and higher education during the time of this writer's youth, and earlier.
The fallacies of this algebra, are the starting point of Riemann's dissertation. His point of departure there, is that in the form of algebra derived hereditarily from the work of Galileo, Descartes,
Newton, et al.: Discrete events, and their associated movements, are situated within a Cartesian form of idealized space-time. This point has been presented by the present author in numerous earlier
locations, but, on pedagogical grounds, it must be stated again here, this time in a choice of setting appropriate to the connection we are exposing, between the ideas of Riemann and his predecessor
Riemann opens his dissertation, with two prefatory observations. First, that, until that time (1854), "from Euclid through Legendre," it was generally presumed that geometry, as well as the
principles for constructions in space, was premised upon a priori axiomatic assumptions, whose origins, mutual relations, and justification remained obscure. The second general point of his plan of
investigation, which he restates in the conclusion of the dissertation, is that no rational construction of the principles of geometry could be derived from purely mathematical considerations, but
only from experience.^9 He concludes his dissertation: "We enter the realm of another science, the domain of physics, which the subject of today's occasion [mathematics] does not permit us to enter."
Riemann, thus, refutes the presumption on which a Newton devotee, of Prussia's Frederick II, Leonhard Euler, depended absolutely, for the entirety of his attack on Leibniz's Monadology.^10
On grounds of the principles of Classical humanist, or cognitive pedagogy,^11 the prudent course of action, now, is to reconstruct the conceptions at issue from the initial standpoint of simple,
deductive theorem-lattices. This pedagogical approach leads us by the most direct route, to the central issue of Riemann's discovery: the validation of an axiomatic-revolutionary quality of discovery
of universal principle, by reason of which we are obliged to construct a new mathematical physics, to supersede that erroneous one previously in vogue. Later, continuing that process of construction,
to the point of examining the writer's own original discovery in physical-economy, we identify the cognizable feature of the individual person's mental life, in which we may then locate the
significance of Riemann's revolution in mathematical physics.
Riemann's Principle of Hypothesis
The pedagogical reference-point throughout this paper, is the contrast between that Platonic principle of change,^12 on which both Riemann's and the writer's own discoveries were premised, and the
sterile formalism of the Aristotelean or quasi-Aristotelean models of an ordinary, deductive form of theorem-lattice. In all cases considered here, the notion of theorem-lattice is defined, and
examined from the standpoint of Plato's Socratic method, by the so-called method of hypothesis.
A simple, deductive form of theorem-lattice, is defined by a process of successive approximations, as follows. Given, any set of theorems which are assumed to be not-inconsistent with one another.
This presumes that the Socratic method of Plato would be able to adduce certain minimal, but sufficient, underlying assumptions, the which these theorems share in common. If so, these assumptions
then constitute a set of interdependent terms, in the form of axioms, postulates, and definitions, none of which are deductively inconsistent with any among the previously given, mutually
not-inconsistent theorems. Implicitly, therefore, there might exist an indefinite number of other theorems, none of which is inconsistent, deductively, with the same set of axioms, postulates, and
definitions. The combined set of all such theorems, both known and possible, constitutes a simple theorem-lattice.
For the purpose of defining essential terms: The set of underlying, interdependent axioms, postulates, and definitions, underlying any such theorem-lattice, is the elementary, deductive form of an
hypothesis. That is the definition of "hypothesis" employed by Plato, Leibniz, Riemann, and the present author.
If, then, there exists some stubbornly real condition or event, which were not consistent with that hypothesis, then there is no proposition based upon that condition or event, the which could be the
basis for a theorem of any theorem-lattice corresponding to that hypothesis. However, if, nonetheless, all of the theorems of the first theorem-lattice correspond to actually existing conditions or
events, then, there exists a new hypothesis, which defines a new theorem-lattice, for which a proposition corresponding to the newly discovered condition or event, is a valid theorem. However, no
theorem of the new theorem-lattice is consistent with any theorem of the first theorem-lattice.
The discovery of the change in hypothesis, which enables the leap from the old, failed theorem-lattice, to the new, is, thus, conveniently described as the discovery of a valid,
axiomatic-revolutionary principle.
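Purely as a modern illustration of this skeleton (nothing of the kind appears in Plato, Leibniz, or Riemann; the rule-sets below are toy inventions of ours), an "hypothesis" can be caricatured as a fixed rule-set, its "theorem-lattice" as the deductive closure of that rule-set, and an anomalous event as a fact derivable under no theorem of the old lattice:

def closure(axioms, rules):
    # Deductive closure of a set of axioms under Horn rules
    # (premises -> conclusion): a stand-in for the theorem-lattice
    # generated by one fixed hypothesis.
    derived, changed = set(axioms), True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# Two rival hypotheses about the same experimental situation.
hypothesis_A = [(("triangle",), "angle_sum_equals_pi")]
hypothesis_B = [(("triangle",), "angle_sum_exceeds_pi")]

observed = "angle_sum_exceeds_pi"  # the anomalous measurement
print(observed in closure({"triangle"}, hypothesis_A))  # False: no theorem of A covers it
print(observed in closure({"triangle"}, hypothesis_B))  # True under the new hypothesis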
There is a crucial, corollary point to be taken into account, in reading, and rereading the highly significant, immediately preceding paragraphs. The proposition which we might construct, as our
conscious representation of a condition, or event, is not the condition, or event, which may, in our opinion, have prompted the relevant proposition. This is a scientific matter, but one which is
also brought to our attention by some relatively common, non-scientific, experiences of the layman's daily life.
For example. On this account, we must become uneasy in our seats, when some typical, philosophically illiterate person insists, that he, or she, is, in the words of Hollywood's "Sergeant Friday,"
insisting upon "Just the facts, Ma'am." For example, what the attorneys and judges, in a legal proceeding, insist are "facts," are not reality per se, but merely a special kind of subjective
assessment, which might, or might not, have relevant correspondence to the reality to which the proceeding is putatively addressed.
To this point: Even if we might be persuaded, that we have overcome the hurdles of sincerity, in assessing a witness's report, the fact that the witness might be presumed to be speaking sincerely,
and in his or her best judgment, does not rise to the standard for presuming, that the witness is also speaking competently of what that witness imagines himself, or herself to have experienced.
Usually, the most favorable assumption which might be suggested, in the case of virtually any witness, is that the significance of a truthful effort to state a fact, or facts of a matter, is, that it
represents the present limits of the subject's competence to interpret what the subject believes to have been the experience of his, or her senses.
"Truthful," when employed, carelessly, as a synonym for "sincerity," does not mean "real." What may qualify as a "fact," or "evidence," by extant legal or other professionals' standards, does not
necessarily signify "true," "truthful," or "real," even if the relevant utterance is the most sincere which the subject might utter on the matter of the event being considered.^13
In the language of simple theorem-lattices: In the case, that some evidence forces us to abandon one hypothesis, for another, only the valid evidence prompting the theorems of the first
theorem-lattice, but not the theorems themselves, are carried forward as evidence addressed by theorems of the second lattice. Virtually none of the theorems of the old lattice are incorporated in
the new; virtually all of the theorems which, in the first lattice, were associated with the carried-forward experimental evidence, are abandoned by the second lattice, as inconsistent with truth.
Truthfulness, in science, or in ordinary testimony, lies not in what the witness believes he, or she has seen, heard, touched, felt, tasted, or smelled; truthfulness lies in the choice of hypothesis,
which underlies those subjective things, called propositions, which the witness has constructed as much, or more, from his, or her prejudices, as from the relevant experience. This is to be said in
the same sense, as to argue, that where a member of an illiterate culture recognizes no more than "rock," a representative of a literate culture recognizes "ore." Or, to say, that the representative
of the illiterate culture sees the stars moving about us; whereas, the representative of the literate culture, such as that of Plato's Academy of Athens, sees the moon orbiting the Earth, and the
Earth rotating, while orbiting the sun.
Riemann makes clear, in his referenced dissertation, that his emphasis upon experience, does not signify the popular delusion of the illiterate persons: The delusion that what we know as factual, is
what we believe that we have experienced through our senses. Rather, the point of his argument there, is that the truthfulness of our opinions respecting actual experiences, depends, absolutely, upon
the validity of the axiomatic assumptions which govern the way in which we form propositions and theorems in response to promptings of experience. It is on this point that Riemann focuses his
devastating refutation of both Aristoteleanism and empiricism.
Riemann's exposure of the fraud embedded in the taught geometry and physics of both the Aristoteleans and empiricists, renders transparent the issues listed above.
The simple space-time employed by Galileo, Descartes, Hobbes, Hooke, Newton, et al., was based on certain, a priori, axiomatic assumptions respecting extension in four, mutually independent senses of
direction, three of extension in space, and one in time: a "quadruply-extended space-time manifold." It was assumed, a priori, that space is extended without limit, and in perfectly uninterrupted
continuity: backward-forward, up-down, side-to-side. It was assumed, a priori, that time is extended, similarly, backward and forward. It was assumed, a priori, that place, size, and movements of
events can be situated mathematically, as though these were something plopped into what were otherwise an empty, continuous, space-time void.^15
To these arbitrary, a priori assumptions, other assumptions of a physical nature were similarly attached. Those persons who might be classed as "materialists," presumed, not only that these
assumptions about space-time were products of the senses, but that the relevant features of sense-perceptions were mirror-images of the real world external to our senses. Others, such as the
empiricist followers of Sarpi, Galileo, Hobbes, et al., did not presume that sense-perceptions were necessarily mirror-images of the world outside our skins; however, from the standpoint of the
pervasive fallacy intrinsic to popular misconceptions of physical space-time, still today, Riemann's dissertation applies equally to all among the Aristoteleans, materialists, and empiricists.
Riemann's argument against that view of physical space-time, is predominantly twofold. First, that the referenced assumptions of Galileo, Descartes, Newton, et al., were merely arbitrary assumptions.
Second, that these assumptions were demonstrably false. The proof of these two arguments lay in the principle set forth by the founder of modern science, Nicolaus of Cusa, in his De Docta Ignorantia:
the principle of measurement.
Given the topic under which this paper is subsumed, which is the retrospective view of Leibniz from the standpoint of Riemann's discoveries: The most convenient illustration of the way the principle
of measurement applies, is the instance of the use which Jean Bernoulli and Leibniz made of the intersecting subjects of isochronicity (a phenomenon of gravitation) and the brachystochrone problem
(refraction of light at a measurable, "constant speed"). Both of these were treated by Bernoulli and Leibniz, as arising out of the work of Christiaan Huyghens.^16 In this connection, lay the
physical basis for Leibniz's insistence upon replacing the "algebraic" methods of Galileo, Descartes, and Newton, by a "non-algebraic" (transcendental) form of mathematical physics.^17
Riemann's dissertation introduces explicitly, a conception already implicit in the work of Leibniz and others, earlier: he establishes there the replacement of Newtonian physics in space-time, by the
notion of physical space-time.^18 He excludes the recklessly gratuitous, a priori assumptions of limitless extension, and perfectly continuous extension. He then attributes the principle of extension
to every physical principle whose validity has been demonstrated by experimental measurement, as Ole Rømer, in 1676, had reported his astrophysical measurement of the estimated "speed of light,"
and as Jean Bernoulli, twenty years later, reported the coincidence of refraction of that light and Huyghens' representation of isochronicity within the gravitational field. Thus, every validated
physical principle is to be added to dimensions of space and time, as an independent dimension of a physical space-time manifold of "n dimensions." This arrangement excludes, axiomatically, any
toleration of the Euler-Cauchy-Clausius-Helmholtz, et al. notion of "linearization of physical space-time in the very small."
At the outset of his dissertation, Riemann already defends what is to appear as his construction of a multiply extended physical space-time manifold. This defense rests chiefly on two general
premises. First, each discovered principle validated by experimental measurement, has, consequently, the manifest quality of extension. Second, each such principle has the quality of a dimension, in
the respect of the same rule of mutual independence among dimensions, which any Euclidean form of geometry attributes to mutually independent senses of direction of dimensions of space and time.
Yet, this construction poses problems which can not be resolved within either the confines of a formal mathematics, or any extant formal mathematical physics. To resolve these further problems, one
must depart the domain of mathematics, to enter the domain of experimental physics. One must enter Nicolaus of Cusa's domain of measurement.
There must be some experimental proof, which demonstrates, in a measurable way, that a certain crucial-experimental occurrence requires us to construct one kind of mathematical physics, rather than
some other. This demonstration must have such unique significance. Riemann points to three hints, on which he has relied for elaborating the general quality of "yardstick" we require for that kind of
measurement. Two hints are taken from the work of Riemann's patron, Professor Carl F. Gauss: Gauss's work on bi-quadratic residues,^19 and general theory of curved surfaces.^20 The third is borrowed
from Riemann's own work, the concept of Geistesmassen which he outlined in his posthumously published Zur Psychologie und Metaphysik.^21
To be considered validated, the new physical principle must correspond to some measurable difference in the characteristic action "connecting any two points" within the reality corresponding to the
choice of mathematical-physics manifold being tested. The notion of this measurable difference, is suggested by the attempt to determine whether the very large surface on which one is travelling is a
plane, or a curved surface.^22 In terms of a physical space-time manifold of "n dimensions," it is the relative curvature of the "surface," which the crucial experiment must measure. Hence, the
importance, for Riemann, of the hints supplied by Gauss's work on biquadratic residues and general theory of curved surfaces.
For Riemann's physics, one such yardstick is required. The present writer's discoveries demonstrate that two yardsticks, rather than one, are required. We shall come to that in due course, below.
First, we must locate the place where Riemann's notion of Geistesmassen fits in; this touches the most crucial distinction of Riemann's physics, and also the unique feature from which the unique,
crucial superiority of the present writer's work in economics has been derived. To that purpose, we now restate what we have just described, this time, explicitly referencing, as Riemann does,
Plato's and Leibniz's method of hypothesis.
In place of the words "dimension," substitute such words as "axiom, postulate, definition." That is to say, recognize the equivalence of a Riemann multiply-extended, physical space-time manifold, to
Plato's, Leibniz's, Riemann's, and the present author's notion of "hypothesis." The connection is highlighted by reference to Leibniz's notion of necessary and sufficient reason, a notion which is
Leibniz's refined treatment of the notion of reason as this appeared in the work of that Johannes Kepler, whose specified requirements for the development of a calculus were satisfied by Leibniz's discovery of the calculus.
Proceed to that end, thus. As we proceed, now, bear in mind the following: Think of "dimension, axiom, postulate, definition," and "hypothesis," as representative of a common quality termed,
alternately, either "formal discontinuity," or "singularity." Physically, each, as in the case of adding a new degree of independent dimension, signifies some break in the continuum extant prior to
the introduction of such a singularity.
Consider the proposition: What is a sufficiency of properly selected, axiomatic assumptions, respecting the task of assessing the significance of a particular event, when that event is considered
primarily as a change in the state of the universe in which it occurs? Select, as such an event, the equivalence which Jean Bernoulli demonstrated, between Huyghens' notion of the cycloid path as one
of isochronicity (tautochrone) in Kepler's "gravitational field,"^23 and the fact that the variable feature of refraction describes the same tautochronic pathway.^24 What are the necessary and
sufficient features of an hypothesis, which hypothesis defines a physical space-time in which these phenomena and their coincidence must occur? That hypothesis, whatever it may prove to be,
constitutes "necessary and sufficient reason."
That reflects Leibniz's refinement of Kepler's use of the notion of Reason. This function of Reason(Kepler), or necessary and sufficient reason(Leibniz), is the alternative to the use of the
percussive notion of "causality," as a geometrically degenerate parody of the notion of Reason, in the work of materialists, or empiricists such as Galileo, Newton, et al.
This leads to Riemann's notion of unique events, as those experimental events which force us to reconsider whatever has passed, until now, for a notion of necessary and sufficient reason, that
hypothesis heretofore considered as established. The general use of "crucial experiment," as ostensibly a substitute for "unique," does not rise to the functional significance of our use of "unique."
Implicitly, every event is, potentially, a unique experimental event. In some circumstance, any event must implicitly overthrow the presumptions of someone's hypothesis. Obviously, we, like Riemann,
Leibniz before him, and so on, are situating these and related matters within an historically specific, task-oriented setting, the interdependency between mankind's progressive mastery of the
universe, and the internal development of Classical forms of art and science. Therefore, we employ "unique" to designate those events which have pivotal, historic significance for the discovery of
valid, axiomatic-revolutionary principles of our universe. E.g., the critical experimental, or analogous events, which correspond to the singularities of a never-perfectly continuous extension of
scientific and artistic progress.
In Riemann, this overview of scientific progress is typified by progress from a relatively valid physical space-time of "n dimensions," to a more powerful conception, a superior, relatively valid
physical space-time of "n+1 dimensions." In other words, from one, relatively valid hypothesis, to a superior valid hypothesis.
This central implication of the habilitation dissertation, leads us, implicitly, to reconsider the so-called "ontological paradox" of Plato's Parmenides.^25 Resituate the notion of a Riemann series
(e.g., of surfaces of differing Gaussian curvature), of the topological type (n+1)/n, as implicitly defined by the habilitation dissertation. This presents us with a series of hypotheses, n = 4, ... , i,
i+1, i+2, ... . What is the ordering principle of such a series? The answer is, first: some principle of valid successive discovery of hypotheses: a higher type of hypothesis, which underlies a
series of hypotheses, as an ordinary, relatively valid hypothesis underlies the series of theorems represented by a theorem-lattice. Plato identifies this higher type of hypothesis, simply, as an
"higher hypothesis." Hence, the title of Riemann's Platonist dissertation: "The Hypotheses Which Underlie Geometry."
As we depart one hypothesis of that series, to approach its proper supersessor, we must depart the domain of mathematical formalism, for the domain of either experimental physics, or something
functionally equivalent to such a physics. These domains are to be found, relative to formalism, within transinfinitesimally small, mathematical discontinuities, the existence of which the followers
of Newton, Euler, Bertrand Russell, et al., each and all, fraudulently deny.^26 Each valid, axiomatic-revolutionary discovery of principle (e.g., a formal axiom, a dimension, an hypothesis), is a
singularity, which, discovered, fills the place defined by a transinfinitesimally small formal discontinuity in the fabric of the mathematical-physics being superseded.
The process by which that valid singularity is generated, can never be detailed at the proverbial "blackboard." Nonetheless, that process exists; its existence is provable, not by mathematics, but
according to the principle of measurement.^27 The form in which that existence impinges upon knowledge, is the same quality of true metaphor, which is the distinguishing activity of all successful
Classical forms of artistic compositions. The activity is known, otherwise, as "creative reason," or, "cognition," when either term is employed to signify the quality of non-deductive mental activity
typified by an original valid, axiomatic-revolutionary discovery of a principle of nature. In physical science, this activity is typified by the successful generation of a valid new hypothesis.
Riemann approaches the conceptualization of this activity of creative reason, with his use of the term Geistesmassen. This implication of the same principle of hypothesis, which underlies Riemann's
dissertation, is the focus of Leibniz's Monadology.
'Psychology & Metaphysics'
That mental activity, through which principles of nature are discovered (and, recognized), and, through which artistic metaphor is generated (and, recognized), is not a subject for deductive methods.
In that sense, the validation of an axiomatic-revolutionary principle can not be represented mathematically, either at the blackboard, or in kindred modes.^28 Nonetheless, like those discovered, and
empirically validated principles of science themselves, the non-deductive mental activity of creative reason (cognition) can be known as clearly as any object presented to our minds by
sense-perception. If education is based, not on the stultifying, textbook drill-and-grill mode, of indoctrination in a secularist catechism, but, rather, upon the student's reenacting the original
discoverer's act of discovery within the student's own, sovereign cognitive processes, the repeated experience of coming to know these discoveries in this way, enables the pupil to come to recognize
the common form of that mental action of change, which is the common feature of the progress of the pupil's mind, from one hypothesis to the next.^29
This brings us to the matter of agape: the emotional quality, contrasted to erotic impulses, which is characteristic of what we term here, alternately, "creative mentation," or "cognition."
In Plato, the term agape arises as "love for justice," "love for truth." The Latin translation of Plato's notion of agape, where the Greek term appears in the Christian New Testament, is the caritas
which is translated as "charity" in the King James Version's English translation of the Latin edition of Paul's Epistles.^30 There are some well-known, if absurd, but clinically foreseeable,
capriolically pornographic renderings of the term, from among devotees of the Oxbridge glosses on Plato; despite such sick minds, the intention, "love for justice and truth," is the only accurate
rendering of "Platonic love." This quality of emotion, agape, is associated only with a category of objects of thought which belong strictly to the category of "Platonic ideas."
The antonym for agape is eros, the latter the quality of emotion peculiar to either objects of sense-perception, or to those words, methods, and procedures, the which are induced in individual
behavior through the anti-cognitive, "sing for your supper," modes of "drill and grill."^31
To make clear the significance of the term "Platonic ideas," the present author prefers the example of Eratosthenes' fair estimate for the length of the Earth's meridian. By aid of an ingenious, but
mathematically simple experimental procedure, Eratosthenes estimated the polar diameter of the Earth within a margin of error of about fifty miles, and did this more than two thousand years before
any person had seen the curvature of our planet. The several Classical Greek estimates of the distance from the Earth to the moon, including that of Eratosthenes, have the same relevance. We can not
see, as objects, the actual astrophysical distances from Earth to the moon, sun, or neighboring planets; virtually all of astrophysics, and the entire domain of microphysics address objects which are
not defined directly by our senses. Those matters of knowledge which lie outside simple sense-perception, fall within the category of "Platonic ideas."^32
The distinction between living and non-living processes, and the distinction between the cognitive processes of the human individual, and the behavior of all lower forms of life, are also
subject-matters which are not defined directly by our sense-perceptions. Similarly, neither "justice" and "truth," nor any validated discovery of a principle of nature, are objects defined as
sense-perceptions. All of these distinctions of physical processes, which we can not define as matters of direct, simple sense-perception, but which we are able to know to be true in other ways,
belong to the category of "Platonic ideas."^33
We summarize here, once again, the way in which the case of Eratosthenes' estimate of the length of the Earth's meridian presents the central role of Platonic ideas in science [see Figure 1].
A series of measurements is taken, by sun-dials placed at intervals along a measured (paced-off) distance, along a South-North line, between Aswan and Alexandria, in Egypt. Each set of these
successive series of measurements is taken at noon (as indicated by the sun-dials) on the same day. The angles of the shadow cast are compared. This comparison shows that the Earth's surface is not
flat. However, by use of similar figures, it appears that the data fits the case in which the Earth's surface is approximately that of a sphere, with the South-North direction, from Aswan to
Alexandria, corresponding to an arc of a meridian. Since the length of that arc had been measured, the method of similar figures gave an estimate for the size and diameter of the relevant complete sphere.^34
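To make the arithmetic of that similar-figures argument concrete, one may use the conventionally reported figures for Eratosthenes' measurement (a noon shadow-angle difference of about 7.2 degrees, one-fiftieth of a full circle, over an Aswan-Alexandria arc of about 5,000 stadia); neither number is supplied in the text above, and both are cited here only as an illustration:

$$\frac{\Delta\theta}{360^{\circ}} \;=\; \frac{s}{C} \quad\Longrightarrow\quad C \;=\; s \cdot \frac{360^{\circ}}{\Delta\theta} \;=\; 5{,}000 \text{ stadia} \times \frac{360^{\circ}}{7.2^{\circ}} \;=\; 250{,}000 \text{ stadia},$$

where $s$ is the measured arc between the sun-dials and $C$ the circumference of the sphere; the diameter then follows as $C/\pi$.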
The crucial point of describing that, in the present location, is, as stressed earlier, that Eratosthenes defined and measured the curvature of the planet more than two thousand years before man
first saw the curvature of the planet. For related reasons, Columbus did not merely suspect that the Earth was a spheroid; almost five centuries before anyone saw the curvature of the planet,
Columbus knew it with scientific certainty, through work done by Toscanelli, based upon ancient Greek science, decades prior to Columbus' acquisition of the map of the planet produced by Toscanelli.
The size of the planet, estimated by Toscanelli, was accurate to at least the degree of precision of Eratosthenes' estimates, about 1,700 years earlier.^35 The estimates of the distance to the moon,
by Eratosthenes, and Aristarchus' derivation of the demonstration that the Earth orbited the sun, are examples of the same principle of Platonic ideas.
The archetypical expression of Platonic ideas, is the quality of mental act, by means of which a valid, axiomatic-revolutionary discovery of a principle of nature is generated. The overriding mission
of a competent policy in education, is to prompt the pupil to reenact the series of relatively more truthful, valid, axiomatic-revolutionary discoveries of principle underlying the development of
both scientific knowledge, and also of forms of plastic and non-plastic art which are consistent with what we shall identify, below, as the Classical principle of composition and performance. The
primary mission of a competent educational policy, is the use of teaching of such crucial principles as a "pretext" for fostering the development of the individual person's potential for deploying
and recognizing that distinct quality of mental act (cognition) which is the only means by which such discoveries may be either effected as original discoveries, or by one to whom the principle is
presented as a challenge for reenacting the mental experience of the original discovery.
This potential for development of the creative powers of cognition, is that distinction between man and beast underlying Genesis 1:26-30: mankind, male and female, made in the image of God: as
Nicolaus of Cusa emphasizes, the principles of imago viva dei and capax dei. In its paradigmatic expression, as knowable to the successful student in such a Classical-humanist program of education,
this act of cognition is located in the person's experience, as the quality of mental activity through which the validation of an axiomatic-revolutionary discovery of principle, is effected. In other
words, the generation of a valid "leap" from a given hypothesis (theorem-lattice) to a relatively superior hypothesis. This paradigmatic act, is, therefore, the experience of higher hypothesis.
That paradigmatic experience has two distinguishable, but inseparably interdependent qualities: the occurrence of the formally validatable discovery itself, and the distinctive quality of emotion
associated with that act of discovery. That latter quality of emotion, is agape as Plato defines it, and as I Corinthians 13 also defines it.^36 It is through the summoning of the developed quality
of agapic emotion, that the thinker is able, willfully, to summon the creative cognitive powers needed to address a challenge.
The kind of deductive reductionism typical of Aristotelean formalism, is erotic, and hatefully anti-agapic, in type, as the psychopathological case of Kant and his philosophical writings, typifies
the pathology of personal character inhering in the true follower of Aristotle's philosophy and method. Thus, Friedrich Schiller and his follower Wilhelm von Humboldt, set forth as the primary
objective of a Classical-humanist form of education, the fostering of the development of the personal character of the future adult citizen; the efficient principle referenced by Schiller and
Humboldt on this account, is rooted in the argument of I Corinthians 13, and it is also the underlying character of Plato's dialogues taken as a whole.
Hypothesis, and higher hypothesis, are each a special kind of object, an object of the form which Plato associates with the good. To introduce this conception, consider, first, the example offered by
a very ordinary sort of theorem-lattice, as we defined this earlier, here.
In the simple theorem-lattice, the derivation of theorems has a certain ordering, in the sense that some theorems, once proven, serve as the basis for deriving later theorems. This sense of ordering
implies ordering in time. Nonetheless, the hypothesis underlying that lattice undergoes no modification during the time a sequence of theorems unfolds: from beginning, through to the end, the
hypothesis remains unchanged; it is the veritable "alpha and omega" of that theorem-lattice. In Plato's method, every hypothesis, including every higher hypothesis, has this same property: it is the
unchanging "alpha and omega" of whatever process of lattice-generation it underlies. In all, higher hypothesis is subsumed by God, the unsurpassable "hypothesis," the ultimate Good. Yet, every
relatively valid hypothesis also imitates that form, as a lesser good.^37
Agape is the motivating state of mind which corresponds to the experience of any valid, or relatively valid such good.
Every person engaged in cognitive concentration, has lived through a relevant experiment: One's mind is working on the problem, up to the point the concentration collapses, as it were a man who
suddenly toppled over, and fell asleep during a brisk walk. This might occur when one were exhausted, but we are considering only the type of case in which exhaustion was not determining. The
motivation for the cognitive concentration has collapsed, as if the current had suddenly been cut off from an electronic device, as if the "batteries had died." Consider the instance, in which taking
a break to participate in working through, or hearing a good performance of J.S. Bach, Haydn, Mozart, Beethoven, Schubert, or Brahms, returns one to one's cognitive undertaking with full powers of
concentration restored, "batteries fully recharged." From this vantage-point, we turn our attention to certain identical features of Classical art-forms and valid axiomatic-revolutionary discoveries
of physical principle. We are considering a topic which might be entitled: cognitive energy.
In Classical art-forms, the place of a mathematical discontinuity is taken by the ultimate expression of ambiguity, metaphor. During his 1948-1952 project, to refute Wiener's absurd claim, that human
communication could be represented by statistical "information theory," the present author adopted the policy, that, although the case against Wiener could be made best from the standpoint of
technological progress's increasing the productive powers of labor, it would be necessary to show that what was true for physical science, was also true for the generation and transmission of
knowledge in Classical art-forms.
Thus, the study of "information" from the standpoint of technological progress, was parallelled by focus upon three closely related forms of non-plastic Classical media: poetry, drama, and the
Classical art-song, the latter centered upon the Classical German lied, of Mozart, Beethoven, Schubert, Schumann, and Brahms, all compared with the Romantic lied of Hugo Wolf and Richard Strauss.
The standpoint in music, from which Classical forms of drama, poetry, and song were examined during that time, was the principle of motivic thorough-composition, as typified by Wolfgang Mozart's
K.475, a product of his study of the Bach Musical Offering, and the influence of that, and closely related Mozart compositions in later Classical composition. Today, the present author would have
written of that approach, that keys and modes are hypotheses underlying the theorem-lattices of Classical forms of musical compositions, and that motivic thorough-composition, as typified by the
Mozart K.475, is a prototype for higher hypothesis as the subject of musical composition.^38
Thus, effective Classical musical composition, especially since those aspects of the work of J.S. Bach so deeply admired and emulated by Mozart, Beethoven, et al., is an exercise in agape. Similarly,
Classical tragedy, and great Classical poetry, which rely upon the implicit bel-canto well-tempering of the well-spoken language, as the medium for speech, embody the developmental principle of the
Greek Classical tragedy and Socratic dialogue. This is that cognitive medium of artistic development, which such poetry and drama employ, to instruct musical composition in the principles of musical
dialogue, called polyphony, the which is the principle of Classical artistic development.
It is those artistic resolutions of ambiguity which carry the mind from one hypothesis to another, whether in poetry, drama, music, or plastic art-forms, which are the principle of change underlying
Classical forms of artistic composition. This is that principle of Reason in art, which the psychosexually impotent Immanuel Kant could not recognize.^39 Those ambiguities which can not be resolved
(e.g., "explained") deductively, as mere simile, symbolism, or hyperbole, are metaphors. These metaphors, which exist implicitly in the subjunctive mood, are the Geistesmassen of art.^40 Hence,
during the course of the 1948-1952 study, the present author employed this sense of "metaphor" to embrace the expression of Platonic hypothesis in both physical science and Classical art-forms.
All successful art meeting those standards, evokes the same sense of uplifting agapic beauty we experience otherwise in those activities of the individual mind, through which original, or reenacted,
valid, axiomatic-revolutionary discoveries of principle are generated. Such art is an integral part of science, in the broader sense of science. Such art increases the potential productive powers of
labor, in the same sense that technological progress does. Such art also "recharges the batteries" of the individual's, and society's exercise of its creative powers of reason.
All too often, in observing discussions of mathematical, or of scientific work, we may be startled to recognize that the discussion we are witnessing, is painted in fresh coats of gray upon gray,
proceeding with the implied assumption, that there is no emotional motivation in scientific thought as such, but only in arguments about its conclusions. Poor actor Leonard Nimoy, trapped for
eternity in endless sequels of "Star Trek," babbling forever the idiot-savant's: true scientific "logic" is a quality free from emotions!
John Keats' Ode on a Grecian Urn spoke elegantly for Plato: truth is beauty, and beauty is truth. It is the passion of a mind gripped by a prescience of great beauty, which impels the creative
thinker to ascend the impossible alp of scientific risks. Well-meaning laymen speak, foolishly, of financial rewards as motives for scientific (or, artistic) work. Feed a scientist, nourish his
family, and offer him the opportunity to meet the kind of challenge which inspires him; freed of such distracting matters, his incentive is his passion never to lose that sense of a (Leibnizian)
pursuit of happiness, the which is for him, or her, the lure of the scientific (like the Classical artistic) profession. The sense of truth is the source of the sense of overwhelming beauty; the
recall of the emotion one associates with that sense of beauty, is the passion which drives one to push forward, one more step, and another, in pursuit of truth. Like Edmund Hillary, the scientist
climbs the Everest of science and Classical art, "because it is there." Keats' Ode is dedicated, passionately, to the triumph of agape over eros.^41
Such is "cognitive energy." The composition and performance of the Classical art-form are the mirror-image of valid scientific discovery, on this account. Thus, does art command the power to recharge
the batteries of the cognitive process for the scientist. That is a subject which, however curious that might seem, at first hearing, belongs to the department of economics: to the Leibnizian science
of physical economy.
It is relevant here, to consider what might be described as a "structured" feature to agape, a feature presented in the clearest way by considerations of technological attrition.
We have already indicated, that the Riemann topological series of hypotheses, typified, symbolically, by (n+1)/n, corresponds to a series of formal-mathematical discontinuities. Each such
discontinuity corresponds to a singularity, an added "dimension" of the series of manifolds. All of the singularities functionally extant at the time each of the manifolds is in
operation (subjectively and in corresponding practice), are efficiently present in every interval of thought-action of the person whose judgment and practice are being directed in accord with that
manifold. Thus, we may apply the notion of implicitly enumerable densities of discontinuities, for any arbitrarily selected interval of thought-action, for that manifold's influence, under those
general conditions.
The increase of the density of discontinuities, in such modes, has the twofold quality of "tension" and "potential." The "potential" corresponds to the relative increase of power over nature, per
capita and per square kilometer of the planet's surface. The "tension" corresponds to a higher development of the internal (subjective) mental state of the relevant person. The increase in potential,
corresponds to capacity for effectiveness of action; the increase of "tension," corresponds to an increase in the psychological motivation for action, to an increased sense of agapic, subjective tension.
The notion of hypothesis, and higher hypothesis, as of the timeless form of a good, defines these notions as what Kepler defined as Reason, and Leibniz as necessary and sufficient reason. A related
term, to the same general effect, is universal characteristics. The significance of the latter term is shown more clearly from the standpoint of the present author's original discoveries in the
domain of physical economy.
1. Unless otherwise noted, the references to Leibniz's writings cited here, are limited to the following: [Loemker] Gottfried Wilhelm Leibniz, Philosophical Papers and Letters, ed. by Leroy Loemker
(Boston: Kluwer Academic Publishers, 1989); [Monadology] G.W. Leibniz, Monadology and Other Philosophical Essays, trans. by Paul and Anne Martin Schrecker (London: Macmillan, 1965); [Theodicy] G.W.
Leibniz, Theodicy, trans. by E.M. Huggard, ed. by Austin Farrer, 5th printing (Peru, Ill.: Open Court Publishing Co., 1996). The principal reference to the work of Bernhard Riemann, is to Riemann's
1854 habilitation dissertation, Über die Hypothesen, welche der Geometrie zu Grunde liegen ("On The Hypotheses Which Underlie Geometry"), in Bernhard Riemanns Gesammelte Mathematische Werke, ed. by
H. Weber, reprint of (Stuttgart: B. G. Teubner Verlag, 1902) [(New York: Dover Publications, 1953) and (Vaduz, Liechtenstein: Saendig Reprint Verlag)], pp. 272-287. Various English translations of
this habilitation dissertation are extant, but, for purposes of precision, reference is made to the German. Other references to Riemann's writings are always to the reprint of the Weber edition:
Riemann Werke. As a general, recurring reference, see Ralf Schauerhammer and Lyndon H. LaRouche, on Kepler and Riemann [Schauerhammer and LaRouche], respectively, in the "Riemann Refutes Euler"
feature, in 21st Century Science & Technology, Vol. 8, No. 4, Winter 1995-1996, passim.
2. See Riemann Werke, p. 525: Die Unterscheidung, welche Newton zwischen Bewegungsgesetzen oder Axiomen macht, scheint mir nicht haltbar. Das Trägheitsgesetz ist die Hypothese: Wenn ein materieller
Punkt allein in der Welt vorhanden wäre und sich im Raum mit einer bestimmten Geschwindigkeit bewegte, so würde er diese Geschwindigkeit beständig behalten. An English translation of this is found in
the translation of the "Philosophical Fragments" from the Riemann Werke, published in 21st Century Science & Technology, Vol. 8, No. 4, Winter 1995-1996, p. 57. More on the hypothetical basis for
Newtonian physics, below.
3. Hereinafter, we focus upon these three figures of the four listed. Our primary focus here, is the retrospective connection of Riemann to Leibniz. Kepler is kept in focus, for reasons to become
clear later in the paper. Gauss, the most prolific mind in modern science after Leibniz, represents, together with his collaborator Wilhelm Weber, and protégé, Riemann, a topic deserving of special
attention in a location devoted to that connection.
4. As James C. Maxwell purported to justify his refusal to acknowledge the work of Gauss, Weber, and Riemann which Maxwell had parodied. He explained, that it was his policy to refuse to
recognize the existence of any geometries but "our own."
5. See footnote 1.
6. See footnote 1.
7. Norbert Wiener, Cybernetics (New York: John Wiley & Sons, 1948). The writer's first encounter with Wiener's book occurred during Winter 1948, prior to the Wiley release of the hardbound U.S.
edition, in the form of a loan to him of an earlier, Paris, paperbound printing.
8. Lyndon H. LaRouche, Jr., "On LaRouche's Discovery," Fidelio, Vol. III, No. 1, Spring 1994. The use of the argument supplied in Riemann's habilitation dissertation, enabled the writer to solve the
problem of mathematical representation incurred by his own original discovery in the science of physical economy. Hence, because of this relationship of Riemann's discovery to his own, the result
came to be identified as "The LaRouche-Riemann Method." On Riemann's habilitation dissertation, see footnote 1.
9. Loc. cit., footnote 1. On the second point, Riemann writes: ...dass die Sätze der Geometrie sich nicht aus allgemeinen Grössenbegriffen ableiten lassen, sondern dass diejenigen Eigenschaften,
durch welche sich der Raum von anderen dreifach ausgedehnten Grössen unterscheidet, nur aus der Erfahrung entnommen werden können. (pp. 272-273.) The concluding sentence of the dissertation restates
this point: Es führt dies hinüber in das Gebiet einer andern Wissenschaft, in das Gebiet der Physik, welches wohl die Natur der heutigen Veranlassung [the subject of mathematics] nicht zu betreten
erlaubt. (p. 286).
10. On Euler's attack on Leibniz, see, Lyndon H. LaRouche, Jr., The Science of Christian Economy (Washington, D.C.: Schiller Institute, 1987), "Appendix XI: Euler's Fallacies," pp. 407-425. Note a
typographical error on p. 407; the passage should read "He [Euler] was a proponent of the Newtonian reductionist method in mathematical physics." Euler was a member of an anti-Leibniz salon within
the Berlin Academy of Prussia's "Frederick the Great," closely associated with such followers of Newton's patron, Abbé Antonio Conti, and members of Conti's network of salons, as Pierre-Louis
Maupertuis, Johann Lambert, Giammaria Ortes (the founder of "Malthusianism"), Voltaire, and Joseph Lagrange. On this attack on Leibniz by Euler, the following history is most notable. A purely
geometrical proof for the fact that π is of a higher cardinality than the Plato-Eudoxus-Eratosthenes-Archimedes notion of "irrationals," was discovered by Nicolaus of Cusa (cf., De Docta Ignorantia,
1440). The physical proof, that non-algebraic (i.e., transcendental) functions must supersede the algebraic notions of Descartes and Newton, was demonstrated by Leibniz, Jean Bernoulli, et al.,
during the 1690's, in respect to the interconnected facts of isochronicity in the gravitational field (Huyghens) and the relativity of a constant "speed of light" with respect to refraction (Roemer,
Huyghens, J. Bernoulli). Using the same false premises which he adopted for the attack on the Monadology, Euler presumed that the distinction between algebraic and non-algebraic ("transcendental")
functions could be degraded to its relatively degenerate expression, as a subject of infinite series (see Leibniz-Clarke Correspondence on the subject of differential calculus and infinite series).
Around this, the Newtonian devotees, following Euler and Lambert, built the myth that the proof of π's transcendental quality, is the proof derived, "hereditarily," from the tautologically fallacious
assumptions of Euler's 1761 attack on the Monadology. Hence, the popularization of the myth, that it was Ferdinand Lindemann, in 1882, who first "proved" the transcendental quality of π! (See Lyndon
H. LaRouche, Jr., "Kenneth Arrow Runs Out of Ideas, But Not Words," 21st century Science & Technology, Vol. 8, No. 3, Fall 1995; see reference to the π controversy, under the subhead "Axiomatic
Method," pp. 43-44. See also, LaRouche reply to a critic of this section of that paper, in Letters, 21st Science & Technology, Vol. 9, No. 2, Summer 1996.
11. The "Classical humanist" method in education has two leading features which might be treated as the definitional distinctions of that method. "Classical" should be understood, in first
impression, as implying a foundation in what are identified as the "Classical," as distinct from "Archaic" (for example) plastic and non-plastic art-forms of Classical Greece. In literature, this
implies the Homeric epics, and the tragedies of Athens' Golden Age. In science, it implies Plato's Socratic method of hypothesis, as typified by Plato, Eudoxus, Theaetetus, Eratosthenes, and,
implicitly, also, Archimedes. Overall, it signifies the struggle of the Ionian city-states and the tradition of Solon of Athens, in combatting both the Babylonian tradition, expressed as the Persian
Empire, and, also, the usurious cult of Gaia-Python/Dionysos-Apollo at Delphi (and, later, pagan Rome). In art, science, and history, it implies the principle of agape, as defined by Plato and the
Christian apostles, as in the Gospel of John and the Epistles of Paul. The use of these Classical Greek referents, including the Christian New Testament, is the significance of a Classical-humanist
secondary education for the relevant medieval European teaching orders, such as the Brothers of the Common Life, the continuation of that standard of literacy among the proponents of the original
(anti-Justice Antonin Scalia) intent of the U.S. Federal Constitution, and the reforms of education in Germany designed by Friedrich Schiller and his followers Wilhelm and Alexander von Humboldt.
This exemplary significance of that use of the term, "Classical," extends to the principle, that all of those discoveries of principle which have been proven to be valid, as such discoveries, from
all currents of humanity, non-European as European, ought to be replicated mental experiences of discovery within the minds of all prospective secondary graduates, as a precondition for citizenship,
in a durable form of society. The Classical currents of philology, as those with which the Humboldt brothers were associated in their time, illustrate the manner in which the notion of "Classical" is
to be extended in choice of referents, from Classical Greece, to mankind as a whole. It is the emphasis on recreating the experience of the original discovery of principle, within the mind of each
pupil, which distinguishes a cognitive education, from the evil of John Dewey and the "New Math," in particular, and from today's more popular textbook, or even worse standards, in general.
12. Once one has worked one's way through the sets of later dialogues of Plato, it becomes clear, that his Parmenides serves implicitly as a prologue to all of those dialogues; it poses the crucial,
ontological paradox, which the other dialogues address, each in its own respect. For this purpose, the Parmenides should be read as if it were the prefatory chorus of a tragedy, modelled upon the
tragic principle characteristic of Aeschylos' work. One might apply Friedrich Schiller's explication of the principles for design of a tragedy: from opening germ, through punctum saliens, to
conclusion. In the dialogue taken as a whole, the character Parmenides fails as pitiably as Shakespeare's Hamlet. The character Parmenides, like his real-life image, can not comprehend the notion of
change as an efficient principle, just as Hamlet identifies the same cause for his own, oncoming doom, in the famous Act III, Scene 1 soliloquy. This is change as Heraclitus references its
definition; so, for Plato, and for Riemann, the elementary form of efficient existence, is not objects akin to the notion of objects of sense-perception, but, rather, the principle of change, which
brings such secondary phenomena as mere, apparently fixed objects, into being. Change, so referenced, has the connotation of generate or create. That is key to any competent reading of Plato, of
Cusa, of Kepler, of Leibniz, of Riemann, or this writer's own original discoveries of the same efficient principle in physical economy.
13. In the line of discussion being developed here, we have already put to one side the substitution of non-existent conditions or events, for real ones. Three distinct classes of such substitutions
are notable among those excluded from consideration in this portion of the text. (A) Simple lies. (B) Sophistries derived, as conclusions, from wishfully altered hypotheses. For a simple example: "I
do not like him, therefore, I choose to find plausible anything bad said of him, and profess to consider as incredible, anything which might work to his credit." (C) Fallacies of composition
superimposed, like a Procrustean Bed, upon perceived reality, to the purpose of protecting either an hypothesis, or some specific, isolated belief. Illustration: the principal origin of the spread of
gnosticism within western European Christianity, is the legalization of Christianity, as part of the Roman pagan Pantheon, by the Emperor Constantine. The most important action to this effect, was
the later Byzantine emperors' virtual, or actual banning of the Plato who had been the correlative of Christian theology, and the introduction of Plato's adversary and bellwether of oligarchical
social order, Aristotle, as authorized replacement. The efforts of the powerful oligarchical families, to defend their feudal and financier-aristocratic privileges, despite Christianity, has been the
continuing source of renewal of the corrupting influence, within the clergy and churches, of the gnosticism inherent in Aristotle's philosophy and method. To avoid the embarrassing truth about the
origins of gnosticism, the myth was created, that it was the Jews who are chiefly responsible for introducing gnosticism to western Europe, as via "Averroesism." This apology for oligarchism of both
the landed and financier oligarchies and, Aristotle, has been, thus, the most common source of religious anti-semitism. On the other hand, Friedrich Nietzsche, like his follower Adolf Hitler,
premised his argument for ridding Europe of Jews, on the charge that it was the Jews whose collective crime had been the establishment of Christianity. Similarly, another illustration of category (C)
taken from real life: To defend the Venice-created cult of Isaac Newton, Leonhard Euler, and many other devotees of the Newton cult, were willing to go to any lengths, as did J.C. Maxwell and Hermann
Helmholtz, to defend the hypothesis of their cult's demi-god. Or, for a concluding example of this most relevant problem: The babbling fool who insists, that, since Karl Marx approved the idea of a
progressively graduated income-tax, in the Communist Manifesto, that a man as fascistic as that "Miniver Cheevy" of the Confederacy's "Lost Cause," Ku Klux Klan fanatic and U.S. President Woodrow
Wilson, was a Communist. Under "Lost Cause" devotee J. Edgar Hoover, the FBI was riddled with precisely such fanatical fools of the Roy M. Cohn breed.
14. These elementary considerations respecting solar phenomena, underscore the fact, that any university which tolerates a policy of eliminating, or minimizing the student's requirement for mastery
of the work of "dead European males," is clearly guilty of perpetrating a fraud upon both the students, and those institutions of society, including government, to which that university presents its
graduates as competently educated. Exemplary is the fairy-tale, repeated by many illiterates with university bachelor and even terminal credentials, who believe in the myth of the "Copernican
Revolution," that Mesopotamian lunatic calendars preceded solar calendars, and that the best astronomy, prior to Copernicus, was that of the fraud concocted, for ideological purposes, by Claudius
Ptolemy. India's Bal Gangadhar Tilak was only citing already extant astrophysical and scholarly evidence, when he reported, in his Orion, that the Vedic solar astronomical calendars of Central Asia,
circa 6,000-4,000 b.c., were already vastly more advanced scientifically, than any of the lunar calendars later presented in Mesopotamia. A similar case is demonstrated for ancient Egypt's solar
astronomy. Aristarchus, long prior to Claudius Ptolemy's concoction of his hoax, had already defined the elementary hypothesis upon which rested the modern solar astronomy of such as the
pre-Copernicus (1473-1543) Nicolaus of Cusa (1401-1464). Every competent program of combined secondary and higher education, requires a student's mastery of the work in mathematics, astronomy, and
philosophy, by Thales, Plato, Theaetetus, Eudoxus, Euclid, Aristarchus, Eratosthenes, and Archimedes, through the construction, by Cusa's collaborator, Paolo Toscanelli (1397-1482), of the world map,
which Christopher Columbus acquired through the Portugal-based executor of Nicolaus of Cusa's estate, and upon which Columbus largely relied, for his planning his first, 1492, voyage to the Americas.
Most of the ideas underlying modern science, in every country, are derived chiefly from the original discoveries in geometry and scientific method, which we have inherited, chiefly, from such
representatives of the Classical Greece tradition as these. As in astronomy, so, in general, the truthfulness of any report of a condition or event, lies in the hypothesis which has governed the
manner the relevant experience has been comprehended by the mind of the witness. "Truth in education" cannot exist, without prompting the student to reenact, in his, or her mind, the act of original
discovery by those ancient Greek and other individual minds, to which our civilization is largely indebted for the development of those hypotheses upon which the truthfulness of contemporary judgment
depends, without exception.
15. Cf. Riemann, Plan der Untersuchung, Werke, pp. 272-273.
16. See Christiaan Huyghens, The Pendulum Clock, trans. by Richard Blackwell (Ames, Iowa: Iowa State University Press, 1986); and A Treatise on Light (1678), reprint of the English translation (New York:
Dover Publications). On Huyghens' relationship to the discovery of the "speed of light," see Poul Rasmussen, "Ole Rømer and the Discovery of the Speed of Light," 21st Century Science & Technology,
Vol. 6, No. 1, Spring 1993. On the relationship to Jean Bernoulli's solution to the brachystochrone problem, see D.J. Struik, A Source Book in Mathematics, 1200-1800 (Princeton, N.J.: Princeton
University Press, 1986), pp. 391-399.
17. This latter transformation became a central issue of the Leibniz-Clarke correspondence: Leibniz's insistence that a competent calculus could not be represented by the relatively degenerate
geometry of infinite series.
18. For the purposes of this paper, it should be sufficient merely to note, as we do here, that Riemannian physical space-time does not permit "linearization in the very small." On this, note the
conflict between Riemann and Rudolf Clausius. In a related example, also contrast Riemann's notion of physical space-time with that presented by Princeton's Hermann Weyl. For example, in editor H.
Weber's appended note to Riemann's Ein Beitrag zur Electrodynamik [Werke, p. 293], Weber reports Rudolf Clausius' attack upon Riemann's function, as follows.
$$ P \;=\; -\int_{0}^{t} \sum\sum \varepsilon\varepsilon'\, F\!\left(\tau - \frac{r}{\alpha},\; \tau\right) d\tau\,. $$
Of which, Weber reports Clausius to argue: Die Operation, vermöge deren später dafür ein nicht verschwindend kleiner Werth gefunden wird, muss daher einen Irrthum enthalten, den Clausius in der
Ausführung einer unberechtigten Umkehrung der Integrationsfolge findet. Thus, Clausius demands linearization in the very small. An English translation, by James Cleary, of H. Weber's note, is found
in the textbook by Carol White, Energy Potential (New York: Campaigner Publications, 1977), pp. 299-300.
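The mathematical point in Clausius' charge of an unjustified interchange of the order of integration may be illustrated by a standard textbook example, drawn from neither Riemann's nor Clausius' texts: when an iterated integrand is not absolutely integrable, the two orders of integration can yield different values, as in

$$\int_{0}^{1}\!\!\int_{0}^{1} \frac{x^{2}-y^{2}}{(x^{2}+y^{2})^{2}}\, dy\, dx \;=\; \frac{\pi}{4}, \qquad \int_{0}^{1}\!\!\int_{0}^{1} \frac{x^{2}-y^{2}}{(x^{2}+y^{2})^{2}}\, dx\, dy \;=\; -\frac{\pi}{4}.$$

Whether Riemann's interchange was in fact of this illegitimate kind, is precisely what Clausius asserted, and what is disputed here.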
The formal-mathematical aspect of Clausius' argument is to be recognized at once as an "hereditary" influence of the same tautological fallacy on which Euler premised his 1761 attack upon Leibniz's
Monadology. Similarly, it reflects the failure of Euler, Lagrange, Laplace's Augustin Cauchy, Hermann Grassmann, Clausius, Hermann Helmholtz, et al., to recognize Leibniz's argument against Venetian Abbot
Antonio Conti's agent, Dr. Samuel Clarke, respecting the implications underlying the incompetency of the mere numerical approximations supplied by use of an infinite series as a substitute for an
actual calculus. In the Beitrag, Riemann is referencing work-product of his own collaboration with Wilhelm Weber, of which more is to be learned in a forthcoming issue of 21st Century Science &
Technology. In short, Clausius' invocation of the notorious "sliding rule," is not only flatly wrong, but, reveals much more about his own, and Grassmann's mathematics, than it does respecting the
work of Weber and Riemann.
19. Riemann, op. cit., p. 273: ... Gauss, in der zweiten Abhandlung über die biquadratischen Reste. [Theoria Residuorum Biquadraticorum: Commentatio Secunda (1831), Carl Friedrich Gauss Werke, II
(Hildesheim: Georg Olms Verlag, 1981), pp. 93-178. See, also, Zur Theorie der Biquadratischen Reste, Werke, II, pp. 315-385.]
20. Ibid., p. 276: ... Zu beidem sind die Grundlagen enthalten in der berühmten Abhandlung des Herrn ... Gauss über die krummen Flächen. See, Disquisitiones Generales Circa Superficies Curvas (1828)
Gauss Werke, IV, pp. 217-258. See, Gauss' notice of this paper: pp. 341-347; the crucial issue of mapping is presented on pp. 344-345. See, also, Allgemeine Auflösung der Aufgabe die Theile einer
gegebenen Fläche so abzubilden (the famous "Copenhagen Prize Essay") (1822), pp. 189-216. Notable is the issue of mapping of an ellipsoid onto a sphere; the referenced work of Gauss' on this subject
was, most immediately, a reflection of his discoveries in geodesy, in the setting of his 1818-1832 triangulation-survey of the territory of the Kingdom of Hanover. However, Gauss' work in
"non-Euclidean geometry" dates not only from his earlier discoveries in astronomy, but, according to a Nov. 28, 1846 letter to H.C. Schumacher, to 1792. Notably, it was from this starting-point in
the work of Gauss, not the quasi-Kantian Newton devotee and plagiarist of Abel, Augustin Cauchy, that Riemann derived what some wags amuse themselves to describe as the "Cauchy-Riemann" function; the
debt to A.M. Legendre is significant, not to Monge's and Legendre's hateful adversary, and Laplace protégé, Cauchy.
21. Ibid., p. 273: ... und einigen philosophischen Untersuchungen Herbart's, durchaus keine Vorarbeiten benutzen konnte. For the relevant text of Riemann's earlier commentary on this, see Werke, pp.
509-520. For an English translation of the latter, see "Riemann's Philosophical Fragments," 21st Century Science & Technology, op. cit., pp. 51-55.
22. As is suggested by Eratosthenes' experimental measurement of the estimated curvature of the Earth's meridian, more than two thousand years before any person had yet seen the Earth's curvature.
23. On this item, no scientifically literate person would introduce, as objection, the somewhat popularized nonsense, of asserting that the original discovery of gravitation was the work of Galileo,
Newton, et al. Newton's algebraic representation of gravitation was explicitly derived, as a relatively degenerate representation, from Kepler's formulation for gravitation. For a summary of the way
in which Newton's plagiarism of Kepler was constructed, see Lyndon H. LaRouche, Jr., The Science of Christian Economy, op. cit., Chapter VII, Note 8 (see pp. 471-473).
24. D.J. Struik, loc. cit.
25. See Proclus' Commentary on Plato's Parmenides, trans. by Glenn R. Morrow and John M. Dillon (Princeton, N.J.: Princeton University Press, 1987), passim.
26. In every case examined, the argument against the existence of mathematical discontinuities is a parody of the tautological fallacy which Euler deployed in his attempted sodomy of 1761, against
Leibniz's Monadology.
27. Cf. B. Riemann, Über die Fortpflanzung ebener Luftwellen von endlicher Schwingungsweite, Werke, pp. 156-175. In this paper, Riemann addressed the implications of the mistaken assumption, that the
speed of sound represented an insuperable barrier to movement of a propelled projectile at higher speeds through the air medium. Out of his understanding of the physical significance of
discontinuities arising in such functions, not only was the possibility of accelerated transsonic flight indicated, but, more generally, the principle of isentropic compression. The crucial
point illustrated, for our purposes, here, is that Riemann recognized that the appearance of a formal discontinuity, in the mathematical form of the design of his experiment, represented the presence
of a singularity, a new principle, isentropic compression, to be entered into the validated physical principles of physical space-time. The problem which Riemann had successfully attacked, was that on
which Britain's Lord Rayleigh so recklessly discredited himself. Rayleigh's commentary on Riemann's Fortpflanzung shrieked, to the effect, that, if Riemann were right, then all of the
physics of Rayleigh and the pro-Newton faction, were thoroughly bankrupt intellectually. The root of Rayleigh's consternation: the argument against Riemann's method, by such as Clausius, Grassmann,
Helmholtz, Maxwell, and Rayleigh, rests upon a wrong view of gas theory, embedded axiomatically in those notions of percussive causality which Sarpi and his followers had embedded in the Cartesians
and British empiricists. Riemann's representation of isentropic compression has important implications within applications of the LaRouche-Riemann method in physical economy. On the latter account,
the present writer commissioned a translation of this paper of Riemann's, by Uwe Henke and Steven Bardwell, which appeared in the SUPPLY DATE edition of The International Journal of Fusion Energy
(Vol. X, No. X).
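For the reader who wishes the mechanism of the Fortpflanzung result in compressed modern notation, the following is a restatement in today's symbolism, not Riemann's own. For one-dimensional isentropic flow of density $\rho$, velocity $u$, and sound speed $c(\rho)$,

$$\partial_{t}\rho + \partial_{x}(\rho u) = 0, \qquad \partial_{t}u + u\,\partial_{x}u + \frac{c^{2}(\rho)}{\rho}\,\partial_{x}\rho = 0,$$

the quantities now called Riemann invariants,

$$r_{\pm} \;=\; u \;\pm\; \int^{\rho} \frac{c(\rho')}{\rho'}\, d\rho',$$

remain constant along the characteristics $dx/dt = u \pm c$. In a compression wave of finite amplitude, characteristics of one family overtake one another in finite time, so that the smooth solution necessarily develops a discontinuity: the shock front whose physical reality Riemann affirmed, and which Rayleigh's school resisted.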
28. This is the key to understanding the convoluted argument which underlies such later publications of Immanuel Kant as: Critique of Pure Reason (1781), Prolegomena to Any Future Metaphysics (1783),
Fundamental Principles of a Metaphysics of Ethics (1785), Critique of Practical Reason (1788), Critique of Judgment (1790), and Perpetual Peace (1795). Kant's argument is the basis for the mysticism
of such Nineteenth-century neo-Kantian mystics as (implicit Volksgeist doctrinaire) Johann Fichte, (Weltgeist doctrinaire) G.W. Hegel, (Zeitgeist/Volksgeist doctrinaire, and Hegel ally) F.K. Savigny,
and the pathological Franz Liszt. The central feature of Kant's Critiques, and related writings on science, psychology, morals, and aesthetics, centers around the mystical irrationalism of his
discussion of synthetic judgment a priori. Unlike his more radical, logical-positivist followers, such as Norbert Wiener of "information theory" notoriety, agnostic Kant is prepared to allow both God
and creative reason to exist somewhere, but not to permit them to be known. Although there is a foretaste of Kant's argument in the mystical side of the gnostic René Descartes, in the notion of deus
ex machina, the empiricists deny the existence of creative reason altogether. (See relevant writings of the neo-Kantians W. Windelband and E. Cassirer, for insight into the continuing distinctions
between neo-Kantianism, on the one side, and empiricism and positivism, on the other.) Similarly, as a reflection of their pro-atheistic, empiricist "mind set," the pseudo-Christian gnostics of
Britain deny the existence of a "divine spark of reason" within the individual person, i.e., deny both Genesis 1:26-30, and the Christian principles of imago dei and capax dei. It is for these same
"Brutish" varieties of religious motives, that Galileo student Thomas Hobbes decreed the policy, for banning both metaphor and the subjunctive mood (e.g., Leviathan), which is the continuing
policy-trend among empiricist and positivist species of modern-language stylists, to the present day. This streak, expressed variously as the atheism axiomatically inherent in empiricism and
positivism, and as "agnosticism" among the followers of Kant, is a strictly correct reading of the import of Aristotle's method and writings. In modern Europe, this atheistic current is to be traced
chiefly to Cardinal Gasparo Contarini's extremely influential teacher, the Pietro Pomponazzi of Padua, who taught, that, among the followers of Aristotle (and, of Pomponazzi), the human soul could
not exist.
29. Cf. Lawrence S. Kubie, "The Fostering of Scientific Creativity," Daedalus, Vol. XX, No. XX, Spring 1962; also, The Neurotic Distortion of the Creative Process (Lawrence: 1958).
Although Kubie, a rather celebrated Yale psychoanalyst, was a participant in the Josiah Macy, Jr. Foundation's notorious "Cybernetics" project, he proved himself insightful in his investigation of
the reasons why some of those persons nominally among the most highly qualified, and formerly most promising academics, had proven sterile in the field of scientific creativity. Kubie's referenced
works were published after the writer's structured, quality-control study of indicated patterns of behavior in formally well-qualified management consultants who tended to fail, consistently; hence,
the referenced titles attracted this writer's attention. From the standpoint of the writer's own investigations, Kubie's observations in the 1962 Daedalus piece were on target. In the typical case of
the failure-prone management consultant, in this writer's study, and in related cases, it was the case's educational successes which were, arguably, the source of his performance failures as a
consultant. In his education, usually, that subject had been the kind of "nerd" who hit the books, learned the subject, passed the examination, whose opinions won the approval of his teachers, all
the way to his pre-doctoral orals and written examination. The subject's mind was trapped inside that mere learning as a virtual reality. Clearly, during his education, the subject had employed his
cognitive powers sometimes, but had never recognized the distinction between learning and the role cognitive processes contributed to assisting the learning process. Only rarely, would that subject
rely upon thinking cognitively "in a pinch." Though the subject must have been somewhat creative during the earlier phases of his education, his willingness to continue the learning process in that way
would begin to wither away at a point proximate to his completing higher education. As he grew older, the growing maturity of his professional experience was accompanied by an apparent
"calcification" of his cognitive potential. Under the pressure of desire for approval from actually present, or possible professional peers, he would fall back into the virtual reality of
academically, and bureaucratically induced habits of Pavlovian "academic correctness." In a related type of case, the gifted experimental scientist might go stale, during the moments he is confronted
with the prospect of defending mathematically, at the blackboard, or in a paper submitted to referees, what he knows, otherwise, to be his valid experimental discovery. As indicated in later
paragraphs of this text, this is not merely a formal problem, but also a psychiatric problem, arising to this form through the victim's substituting the inappropriate, erotic form of intellectual
motivation, where the non-erotic, agapic form of behavior is required.
30. The paradigmatic New Testament text is I Corinthians 13. Paul's meaning for the term, is fully consistent with that of Plato.
31. The student, and professional, who approaches his subject-matters like one who "sings no better than he believes necessary to gain his supper," is referenced by Friedrich Schiller as of the
category of Brotgelehrten. That has been increasingly the characteristic of the education and standard of adult practice of professionals in general.
32. The empiricist and positivist would argue, that such ideas are "constructs," derived, thus, from sense-perceptions. That empiricist argument, is traced to Padua's Pietro Pomponazzi through
Pomponazzi's student, the Venetian Francesco Zorzi (a.k.a. "Giorgi"), who took up residence in England to serve as marriage counsellor to King Henry VIII, and served as the intellectual resource upon
which the King relied, together with Venice's agent Thomas Cromwell, et al., in that celebrated Anne Boleyn affair upon which the Church of England was established. Zorzi is otherwise notable in the
history of England during that same period, for his direct attack on the influence of Cardinal Nicolaus of Cusa, the crucial organizer in the process leading into the 1439-1440 Council of Florence, and,
later, mid-Fifteenth-century canon of the Papacy. Zorzi's attack was directed against the influence of the Erasmians, the principal conveyers of the Renaissance heritage into England at that time.
Zorzi demanded extirpation of the method of "docta ignorantia," and its replacement by a kind of proto-empiricism. The influence of Pomponazzi and his leading students, apart from the key role they
played in orchestrating, as did Gasparo Contarini, the great schism of the early Sixteenth century, was the current of Venice's influence leading into Paolo Sarpi's founding of what we know today as
the British empiricism of Bacon, Hobbes, Locke, Bentham, et al. Echoing Zorzi, the Sixteenth through Nineteenth centuries witnessed an hysterical effort by the followers of Hobbes, Locke, and Newton,
to eliminate the notion of ideas from science and philosophy, through the establishment of the notion that those ideas were merely "constructs." The issue of infinite series, posed by Leibniz in the
Leibniz-Clarke-Newton correspondence, and Euler's lunatic use of a tautological fallacy, to attack Leibniz's Monadology, are bellwether cases of this effort to promote the hoax of the "construct."
33. It is also stressed, in sundry other locations, that scientific knowledge requires uncovering the necessary and sufficient reason underlying the existence of the division of experience among
three distinct qualities of scale, and three mutually exclusive categories of characteristic functional distinction. Of scale, we have astrophysical and microphysical, which are beyond the scope of
objects perceivable to the senses, and, thus, by elimination, the macrophysical scale. Of characteristic functional distinctions, we have putatively non-living, putatively non-cognitive living, and
cognitive processes. The combinations of the two types of distinctions define a simple matrix; a functionally comprehensive definition of all of the relations implicit in that matrix, is science.
Thus, science as a whole does not exist outside the domain of Platonic ideas.
34. See Selections Illustrating the History of Greek Mathematics, trans. by Ivor Thomas, Vol. II (Cambridge, Mass.: Harvard University Press, 1980), Loeb Classical Library, pp. 266-273. Note, that
Eratosthenes also supplied an estimate for the arc of a great circle passing through Alexandria and Rome. Eratosthenes' estimates are typical of the application of Classical Greek science (from
Thales through Eratosthenes' time) to the methods of observation of ancient through early Ptolemaic Egypt. (The fact that Claudius Ptolemy's hoax could be tolerated by his contemporaries, illustrates
the significant degeneration in scientific practice which had occurred since the deaths of Aristarchus, Eratosthenes, and Eratosthenes' correspondent Archimedes.) To gauge this, one might wisely take
into account, Indo-European culture's knowledge of the long equinoctial solar-sidereal astronomical cycle, shown (by progression of positions of observed stellar constellations) to date from some
time between 6,000 and 4,000 b.c. (within Orion), in Central Asia.
35. The conspicuous error in Toscanelli's map, is neither his estimated size of the planet, nor the indicated distance to be spanned in crossing the Atlantic. The problem is Venetian lies respecting
the distance across Asia to China and Japan, placing the latter in the middle of the United States.
36. The connection stated here is key to understanding Lawrence Kubie's thesis set forth in his 1962 Daedalus piece, which we have referenced in a note, above. As matured and reflective sports
fanatics will concede, "erotic" refers not only to explicitly sexual behavior, but to notions of power to dominate, and submission to power, and, more generally, to ideas associated with
sense-perception, as opposed to ideas associated with cognition. This underlies certain more readily recognized connections which come to the surface in forms of sexual abuse, such as rape, sodomy,
intra-family violence, or simply the forms of psychosexual impotence in which the sex-act is performed with little more than a "sex-as-power," animalist pleasure-seeking impulse, for domination or
submission. In the instance of the "Don Juan," or "Macho" type, this may be expressed as a person who is either emotionally confused by, or even virtually incapable of, a human quality of enduring
attachment to merely one woman. "Macho" Don Juan protests, with all the feigned sincerity of indignation such an inveterate confidence man might muster, "Me psycho-sexually impotent?: you have to be
kidding!" In healthy states, the "erotic" impulse (eros) is associated with ideas within the domain of sense-perception; whereas, all ideas associated with cognition are associated with the emotional
impulse of agape. The neurotically pathological characteristic of philosophical empiricism, neo-Kantian romanticism, and positivism, is typified in the extreme by the sexual history of such
empiricists as Francis Bacon, Thomas Hobbes, and Jeremy Bentham. These three typify the neurotically confused state of mind essential to such philosophical currents. All of the ideas which are
distinctively characteristic of Plato and of Christianity are within the domain of agape, as I Corinthians 13 denies the quality of "Christian" to any ostensibly worthy act, which is not generated
and controlled by agape. Thus, the "Macho" type of neurotic responds to that challenge to his beliefs which is beyond what he senses he might be able to refute, not with reason, but with outbursts of
an erotic quality of screaming, shouting, fist-waving rage. The "neurotic distortion of the creative process" which occupied Kubie's attention, is the result of the inappropriateness of the summoning
of the erotic quality of emotional impulse, to address a challenge which requires the kind of ideas summonable only by the agapic impulse peculiar to Platonic ideas.
37. This definition of the good, is congruent with Leibniz's definitions for the monad. See, notably, Monadology, 9-18, pp. 149-150 [footnote 1].
38. A few points of clarification must be supplied here, respecting the stages of the development, and related indebtednesses, of the author's progress to his present views on the subject of music.
First, although the author's knowledge of lattice principles dates from his study of the work of Harvard's Birkhoff, during the late 1940's, he did not employ the theorem-lattice as a pedagogical
approach to the principle of hypothesis until a middle 1950's manuscript examining problems of Operations Research from the standpoint of economic principles. In a sense, the author's views on
motivic thorough-composition had perhaps a greater role in prompting the author to employ the pedagogy of theorem-lattices, than the other way around. By 1952, the author's views on motivic
thorough-composition, were centered upon the traceable influence of Mozart's K.475 on Beethoven, Brahms, et al. This is typified by such matters, as the recognition of Brahms' direct quotation from
this Bach-Mozart source in the C-minor (First) Symphony, and the direct quotation from the Adagio Sostenuto (measures 70-85) of Beethoven's Opus 106, as the motivic germ opening Brahms' Fourth
(E-minor) Symphony (measures 2-19). During the same interval, 1948-1952, the author had chosen the characteristics of the composition of the German Classical lied, from Mozart through Brahms, as the
key to all music, including all Classical instrumental compositions, and had emphasized the origins of music in the singing of ancient Classical poetry, and related principles of irony in Classical
drama, especially Classical tragedy. The next qualitative advance, as contrasted to gradual ones, came through collaboration with immediate associates and others, the others including, most
emphatically, his dear friend, Professor Norbert Brainin, former Primarius of the Amadeus Quartet. In the first phase, 1979-1985, the emphasis was upon the implications of tuning from the standpoint
of Florentine bel canto modes of voice-training. During that period, beginning 1981, the author projected the compilation of a text on the scientific principles underlying Classical musical
composition, which became Book I (On the Human Singing Voice) of A Manual on the Rudiments of Tuning and Registration, ed. by John Sigerson and Kathy Wolfe (Washington, D.C.: Schiller Institute,
1992). In the preparation of the forthcoming Book II (On the motivic thorough-composition and the ensemble), Professor Brainin outlined his own discovery of approximately two decades, respecting the
relationship between Joseph Haydn's launching of Motivführung with his own Opus 33 quartets, and the revolution in motivic thorough-composition which Mozart launched, from approximately 1782-1783
onward, in response to Haydn's program (e.g., Mozart's six quartets dedicated to Haydn). See, Lyndon H. LaRouche, Jr.,"Musical memory and thorough-composition," Executive Intelligence Review, Vol.
22, No. 35, Sept. 1, 1995, and the relevant addendum, "Norbert Brainin on Motivführung," Executive Intelligence Review, Vol. 22, No. 38, Sept. 22, 1995.
39. I.e., Critique of Judgment.
40. It is important to stress, that the subjunctive mood is not the grammatical forms with which its employment may, or may not be associated. The subjunctive mood is the mood of hypothesis, the mood
of thought taking thought-processes as an object. Its Classical expression is the relevant literature of Greece, such as the Homeric epics, the great tragedies of Athens' Golden Age, and the
dialogues of Plato. The type of Classical Greek literature which presents the actuality of the subjunctive mood (as distinct from a mere accident of conventions in grammatical forms) is a trio, of
persons from two cities of different cultural heritage, interacting in a common setting, with one or more representatives of the pagan gods of Olympus. The actual events are shared in common, but
those propositions, generated in response to the events, lead to theorems which are, respectively, mutually inconsistent. One character's, or the audience's, comparison of the differing mental
processes leading to the different reactions, and related ultimate outcomes, is the actuality of the subjunctive mood. Hence, the dialogues of Plato are all written in the subjunctive mood.
41. In music, for example, the difference between a Classical and Romantic style of performance of a Classical composition (e.g., Mozart, Beethoven, Schubert, Schumann, Brahms) is implicit in
conductor Wilhelm Furtwängler's instruction, to perform "between the notes." In the simplest degree, this requires that the performer express the counterpoint, rather than present a sensuous array of
individual notes. To this end, the emphasis must be upon the motivic implications of the interval as an element of change, avoiding resort to erotic obsession with the utterance of the individual
chord or note as such. Ultimately, it requires that each interval be performed with an eye to the hypothesis established by the concluding resolution of that developmental process which is the
composition taken in its entirety. This applies not only to recognizing the proper relative tempi among movements, etc., as motivic considerations of the composition as a whole demand this; it
prohibits decadently erotic emphasis upon uttering individual tones, in movements performed with exaggerated slowness for this purpose, and, on the contrary, excessive velocity, used to bury the
meaninglessness of the performance under a sensuous heap of haste. It means a hatred of misrepresenting compositions through resort to readings of portions of a Classical score, such as Schumann, as
"passage work" imported to make the composer appeal more erotically to the taste of a decadent Manhattan audience. The same applies to Classical drama and poetry. In good art, there is no symbolism,
but, rather, the expression of interdependent empyreal ideas and agapic passions, expressed by metaphor.
42. This is not to be confused with erotic qualities of manic elation. The subjective effect is "calming," directly opposed to manic. The increased capacity for action, is associated, metaphorically,
with the notion of serenity and a source of "energy" for action. It suggests the quality of serenity in that great military commander who has achieved the appropriate capacity for what Clausewitz
references in use of the term Entschlossenheit.
{"url":"http://www.schillerinstitute.org/fid_91-96/963A_lieb_rieman.html","timestamp":"2014-04-20T03:33:32Z","content_type":null,"content_length":"130410","record_id":"<urn:uuid:2da829d4-7966-4d13-b907-f940359c8bc9>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00487-ip-10-147-4-33.ec2.internal.warc.gz"}
Double integral in polar coordinates
January 3rd 2010, 04:13 AM
Double integral in polar coordinates
Evaluate the integral by changing to polar coordinates. ∫∫[D] x dA. Where D is the region in the first quadrant that lies between the circles x^2+y^2=4 and x^2+y^2=2x.
Please, explain how can you find the limit for "r" ..
first circle has center (0,0), radius = 2
second circle has center (1,0), radius = 1
I know that
but where is D !!
"BETWEEN TWO CIRCLES" << Omg, i cant figure it out =(
And this is not for my homework
Actually, we don't have homework at our university =D
January 3rd 2010, 04:21 AM
mr fantastic
Evaluate the integral by changing to polar coordinates. ∫∫[D] x dA. Where D is the region in the first quadrant that lies between the circles x^2+y^2=4 and x^2+y^2=2x.
Please, explain how can you find the limit for "r" ..
first circle has center (0,0), radius = 2
second circle has center (1,0), radius = 1
I know that
but where is D !!
"BETWEEN TWO CIRCLES" << Omg, i cant figure it out =(
And this is not for my homework
Actually, we don't have homework at our university =D
Cartesian equation: $x^2+y^2=4$. Polar equation: $r = 2$.
Cartesian equation: $x^2+y^2=2x$. Polar equation: $r^2 = 2 r \cos \theta \Rightarrow r = 2 \cos \theta$.
Note: $x^2+y^2=2x \Rightarrow x^2 - 2x + y^2 = 0 \Rightarrow (x - 1)^2 - 1 + y^2 = 0 \Rightarrow (x - 1)^2 + y^2 = 1$.
I suggest you draw the two circles (that ought to be a trivial thing to do) and shade the required region. Then set up the polar integration.
January 3rd 2010, 04:34 AM
I know theta is between 0 and pi/2
I know how to draw the two circles
and I know the circle with radius 1 will lie inside the other circle
BUT the problem is I can't figure out "LIES BETWEEN THE TWO CIRCLES"
where is the region which lies between the two circles!!
can you show it to me?
and another question please:
r from 2cosθ to 2
but why is it not from 2 to 2cosθ??!
sorry, but I have a little trouble with this section =(
January 3rd 2010, 04:59 AM
Lies between the two circles = inside both circles
January 3rd 2010, 05:11 AM
So the region will be the half of the small circle above the x-axis
then theta lies between 0 and pi/2
Still good !
But the problem is the limits of r!
this region is bounded by two polar curves
r = 2
and r = 2cosθ
Fantastic! I got it... but still I have a small problem
why is it from 2cosθ to 2
isn't it from 2 to 2cosθ??
January 3rd 2010, 09:44 PM
mr fantastic
So the region will be the half of the small circle above the x-axis
then theta lies between 0 and pi/2
Still good !
But the problem is the limits of r!
this region is bounded by two polar curves
r = 2
and r = 2cosθ
Fantastic! I got it... but still I have a small problem
why is it from 2cosθ to 2
isn't it from 2 to 2cosθ??
Draw the diagram like I said to do. It should be crystal clear that you are integrating from the inner circle to the outer circle and I have given you the polar equation of those two curves. I
suggest you go back and review the formula for the area between two polar curves.
January 4th 2010, 04:40 AM
I doubt that. What you have is a circle with center at the origin and radius 2 and a circle with center at (1, 0) and radius 1. The second circle lies completely inside the first so the area
"inside both circles" would be just the area of the smaller circle, $\pi$. "Lies between the two circles" means just that- inside the larger circle but outside the smaller.
January 4th 2010, 04:50 AM
graph ...
January 4th 2010, 08:34 AM
Draw the diagram like I said to do. It should be crystal clear that you are integrating from the inner circle to the outer circle and I have given you the polar equation of those two curves. I
suggest you go back and review the formula for the area between two polar curves.
lol am drawing in the xy-plane ;s
Ok thanks
But what is the relationship between the area in polar coordinates and this integral!!!
January 4th 2010, 08:35 AM
I doubt that. What you have is a circle with center at the origin and radius 2 and a circle with center at (1, 0) and radius 1. The second circle lies completely inside the first so the area
"inside both circles" would be just the area of the smaller circle, $\pi$. "Lies between the two circles" means just that- inside the larger circle but outside the smaller.
it's a stupid sentence (Headbang)
i hate it (Headbang)
thanks (Hi)
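For the record, carrying out the integration with these limits (using $x = r\cos\theta$ and $dA = r\,dr\,d\theta$) gives

$\iint_D x\,dA = \int_0^{\pi/2}\int_{2\cos\theta}^{2} (r\cos\theta)\,r\,dr\,d\theta = \int_0^{\pi/2} \frac{\cos\theta}{3}\left(8 - 8\cos^3\theta\right)d\theta = \frac{8}{3}\left(1 - \frac{3\pi}{16}\right) = \frac{8}{3} - \frac{\pi}{2}$

using $\int_0^{\pi/2}\cos^4\theta\,d\theta = \frac{3\pi}{16}$.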
January 4th 2010, 08:37 AM
January 4th 2010, 10:31 AM | {"url":"http://mathhelpforum.com/calculus/122263-double-integral-polar-coordinates-print.html","timestamp":"2014-04-18T07:04:29Z","content_type":null,"content_length":"14883","record_id":"<urn:uuid:f799af41-13a8-4103-9712-a9196e471e65>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00513-ip-10-147-4-33.ec2.internal.warc.gz"} |
FOM: Proper Names and the Diagonal Proof
Insall montez at rollanet.org
Wed Jun 26 16:00:49 EDT 2002
Allen Hazen has done a nice job of trying to explain, in an informal manner,
a reasonable interpretation of Cantor's diagonal argument for fairly strict
anti-Platonists. A more formal explanation follows from a simple
cataloguing (below) of various well-known results, or various minor
modifications thereof. To read this, observe that we use the following abbreviations:
CT ~ ``Class-Set Theory (without Fraenkel's Axiom of Substitution)''
(as described in Goedel's proof of
the relative consistency of the GCH)
CTF ~ ``Class Theory with (the class version of) Fraenkel's Axiom of Substitution''
CTFC ~ ``CTF with the Axiom of Choice''
Z ~ ``Zermelo Set Theory''
ZF ~ ``Zermelo-Fraenkel Set Theory''
ZFC ~ ``Zermelo-Fraenkel Set Theory with the Axiom of Choice''
Z+(V=L) ~ ``Zermelo Set Theory with the Axiom of Constructibility''
ZF+(V=L) ~ ``Zermelo-Fraenkel Set Theory with the Axiom of Constructibility''
Results proved long ago:
1. CT is a conservative extension of Z
2. CT is finitely axiomatizable.
3. Z is not finitely axiomatizable.
4. CTF is a conservative extension of ZF.
5. CTF is finitely axiomatizable.
6. CTFC is a conservative extension of ZFC.
7. ZF is not finitely axiomatizable.
8. ZFC is not finitely axiomatizable.
9. It is provable in Z that if X is a set and P is its
power set, then there is an injection from X into P, but
there is no injection from P into X.
10. It is provable in CT that the set of denumerably
long binary sequences is in one-to-one correspondence
with the power set of the natural numbers.
11. The Lowenheim-Skolem Theorem is provable in CTFC, ZFC, etc.
A consequence of this is that whether you are a Platonist or not, if you use
classical logic, then your ``describable'' denumerably long binary
sequences - i.e. those which are in the constructible universe - cannot be
constructibly (i.e. ``describably'') placed in a one-one correspondence with
the set of all natural numbers. This holds even if there is a bijection
between the describable denumerably long binary sequences and the natural
numbers, for in this case, the bijection in question is merely
``indescribable'' (not constructible).
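To make the diagonal mechanism behind these observations concrete, here is a small illustrative sketch in Python; the particular enumeration is an arbitrary stand-in for any purported enumeration of denumerably long binary sequences:

# Cantor's diagonal construction, illustrated on a finite window.
# enumerate_seq(i, j) stands in for ANY claimed enumeration of
# denumerably long binary sequences: it returns bit j of sequence i.

def enumerate_seq(i: int, j: int) -> int:
    return (i >> j) & 1        # arbitrary example: bit j of the binary expansion of i

def diagonal_bit(j: int) -> int:
    return 1 - enumerate_seq(j, j)   # flip the j-th bit of the j-th sequence

# The diagonal sequence differs from sequence i at position i for every i,
# so it appears nowhere in the enumeration.
for i in range(8):
    assert diagonal_bit(i) != enumerate_seq(i, i)
print([diagonal_bit(j) for j in range(8)])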
Of course, Goedel's argument for the undecidability of Peano Arithmetic
basically does the same thing.
One effect of the Lowenheim-Skolem Theorem (LST) is to formalize reasons for
``paradoxes'', such as the so-called ``Skolem's Paradox'', in which a
countable model of set theory exists, but in that countable model are sets
whose denumerability is not ``witnessed'' by a function that resides inside
the model at hand. (The failure of any such model to include such a
``witnessing'' function can in fact be formalized, to show that a certain
string of symbols in some specific language is recursively generated from
some other specific string of symbols according to a specific set of
instructions. Then one may ignore the existence of the human being with
intuition about functions, and plod along mechanically, with no meaning
behind one's formalisms whatsoever. This makes the anti-Platonists happy, I
guess, while for the Platonists, I expect it is somewhat boring, though I
could be wrong.)
What I do not understand is why anti-Platonistic Formalists are asking for
``real meanings'' or ``real objects'' to go with these mathematical
concepts. Once the definition is written, it seems to me, the only
``object'' that the Formalists I have come to know and love will accept is
that definition itself - the string of marks on the page that they can sense
by some presumably physical means. Now, I do realize that there are
pseudo-Formalists also - computer scientists, for example - who are using
formalism as a way to determine how to ``communicate'' with an inanimate
computation device. But Formalists, as I understand it, derive some sort of
meta-physical conclusions from the lack of understanding a machine or other
inanimate object, like a brick, has, along with some sort of universally
democratic principle that suggests that ``All are bricks.''. But to have
the principle requires that one be not as inanimate as a brick, or computing
device, so there is, inherent in the statement of the philosophical
principle that ``no meaning exists'', the contradictory notion that ``there
is meaning to the statement that ``no meaning exists''.''.
If I have misunderstood the stance of the anti-Platonic Formalists, please
help me to clear up this confusion, by telling me a more correct meaning of
the terms ``formalist'', ``formalism'', etc. I mean this not merely for
formalism in the philosophy of mathematics, but also, in more general terms.
If one cannot accept the existence of an infinite set, how can one accept
the existence of an electron, or, even better, in concert with some of Allen
Hazen's remarks, how can one accept the existence of that electron over there?
Matt Insall
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/2002-June/005650.html","timestamp":"2014-04-19T09:27:38Z","content_type":null,"content_length":"7350","record_id":"<urn:uuid:1e6c05a7-b216-43d4-b123-397e78061d31>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00267-ip-10-147-4-33.ec2.internal.warc.gz"} |
A virtual particle is a particle that appears spontaneously and exists only for the amount of time allowed by the Heisenberg uncertainty principle. According to the uncertainty principle, the product
of the uncertainty of a measured energy and the uncertainty in the measurement time must be greater than Planck's constant divided by 4π | {"url":"http://www.learner.org/courses/physics/glossary/definition.html?invariant=virtual_particles","timestamp":"2014-04-17T12:36:12Z","content_type":null,"content_length":"3092","record_id":"<urn:uuid:ea407b88-a33d-4ec7-b52f-046ed22eff30>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00025-ip-10-147-4-33.ec2.internal.warc.gz"}
Illinois Learning Standards
The Illinois Learning Standards for Mathematics were developed by Illinois teachers for Illinois schools. These goals, standards and benchmarks are an outgrowth of the 1985 Illinois State Goals for
Learning influenced by the latest thinking in school mathematics. This includes the National Council of Teachers of Mathematics' Curriculum and Evaluation Standards for School Mathematics; ideas
underlying recent local and national curriculum projects; results of state, national, and international assessment findings; and the work and experiences of Illinois school districts and teachers.
Mathematics is a language we use to identify, describe and investigate the patterns and challenges of everyday living. It helps us to understand the events that have occurred and to predict and
prepare for events to come so that we can more fully understand our world and more successfully live in it.
Mathematics encompasses arithmetic, measurement, algebra, geometry, trigonometry, statistics, probability and other fields. It deals with numbers, quantities, shapes and data, as well as numerical
relationships and operations. Confronting, understanding and solving problems is at the heart of mathematics. Mathematics is much more than a collection of concepts and skills; it is a way of
approaching new challenges through investigating, reasoning, visualizing and problem solving with the goal of communicating the relationships observed and problems solved to others.
All students in Illinois schools need to have the opportunity to engage in learning experiences that foster mastery of these goals and standards. Knowledge of mathematics and the ability to apply
math skills to solve problems can be an empowering force for all students both while in school and later in their lives.
Students reaching these goals and standards will have an understanding of how numbers are used and represented. They will be able to use basic operations (addition, subtraction, multiplication,
division) to both solve everyday problems and confront more involved calculations in algebraic and statistical settings. They will be able to read, write, visualize and talk about ways in which
mathematical problems can be solved in both theoretical and practical situations. They will be able to communicate relationships in geometric and statistical settings through drawings and graphs.
These skills will provide all Illinois students with a solid foundation for success in the workplace, a basis for continued learning about mathematics, and a foundation for confronting problem
situations arising throughout their lives.
Applications of Learning
Through Applications of Learning, students demonstrate and deepen their understanding of basic knowledge and skills. These applied learning skills cross academic disciplines and reinforce the
important learning of the disciplines. The ability to use these skills will greatly influence students' success in school, in the workplace and in the community.
Solving Problems
Recognize and investigate problems; formulate and propose solutions supported by reason and evidence.
The solving of problems is at the heart of "doing mathematics." When people are called on to apply their knowledge of numbers, symbols, operations, measurement, algebraic approaches, geometric
concepts and relationships, and data analysis, mathematics' power emerges. Sometimes problems appear well structured, almost like textbook exercises, and simply require the application of an
algorithm or the interpretation of a relationship. Other times, particularly in occupational settings, the problems are non-routine and require some imagination and careful reasoning to solve.
Students must have experience with a wide variety of problem-solving methods and opportunities for solving a wide range of problems. The ability to link the problem-solving methods learned in
mathematics with a knowledge of objects and concepts from other academic areas is a fundamental survival skill for life.
Express and interpret information and ideas.
Everyone must be able to read and write technical material to be competitive in the modern workplace. Mathematics provides students with opportunities to grow in the ability to read, write and talk
about situations involving numbers, variables, equations, figures and graphs. The ability to shift between verbal, graphical, numerical and symbolic modes of representing a problem helps people
formulate, understand, solve and communicate technical information.
Students must have opportunities in mathematics classes to confront problems requiring them to translate between representations, both within mathematics and between mathematics and other areas; to
communicate findings both orally and in writing; and to develop displays illustrating the relationships they have observed or constructed.
Using Technology
Use appropriate instruments, electronic equipment, computers and networks to access information, process ideas and communicate results.
Technology provides a means to carry out operations with speed and accuracy; to display, store and retrieve information and results; and to explore and extend knowledge. The technology of paper and
pencil is appropriate in many mathematical situations. In many other situations, calculators or computers are required to find answers or create images. Specialized technology may be required to make
measurements, determine results or create images.
Students must be able to use the technology of calculators and computers including spreadsheets, dynamic geometry systems, computer algebra systems, and data analysis and graphing software to
represent information, form conjectures, solve problems and communicate results.
Working on Teams
Learn and contribute productively as individuals and as members of groups.
The use of mathematics outside the classroom requires sharing expertise as well as applying individual knowledge and skills. Working in teams allows students to share ideas, to develop and coordinate
group approaches to problems, and to share and learn from each other in communicating findings. Students must have opportunities to develop the skills and processes provided by team problem-solving
experiences to be prepared to function as members of society and productive participants in the workforce.
Making Connections
Recognize and apply connections of important information and ideas within and among learning areas.
Mathematics is used extensively in business; the life, natural and physical sciences; the social sciences; and in the fine arts. Medicine, architecture, engineering, the industrial arts and a
multitude of occupations are also dependent on mathematics. Mathematics offers necessary tools and ways of thinking to unite the concepts, relationships and procedures common to these areas.
Mathematics provides a language for expressing ideas across disciplines, while, at the same time, providing connections linking number and operation, measurement, geometry, data and algebra within
mathematics itself.
Students must have experiences which require them to make such connections among mathematics and other disciplines. They will then see the power and utility that mathematics brings to expressing,
understanding and solving problems in diverse settings beyond the classroom. | {"url":"http://www.isbe.net/ils/math/standards.htm","timestamp":"2014-04-18T13:07:06Z","content_type":null,"content_length":"25969","record_id":"<urn:uuid:bbebfa9f-2cc7-4223-a993-ed997ee63bef>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00404-ip-10-147-4-33.ec2.internal.warc.gz"} |
Risk Forecast
To predict how well endowments can weather economic volatility, investment managers can conduct modeling of the risk spectrum. What they learn can allow them to consider inherent interconnections and
trade-offs that affect results.
By Lucie Lapovsky
Achieving endowment goals requires careful assessment of available data to understand the risks associated with various asset allocation and spending policies. One must look no further than the
recent market downturn to underscore the importance of acknowledging the full spectrum of potential risk in any given strategy. The equity market collapse that began in late 2008 put many
institutions in a quandary with regard to their policies.
On investment policy, responses of institutional endowment committees have ranged from doing nothing (i.e., maintaining the precollapse asset allocation), to putting everything in equities and
equity-like investments—since in theory, the only way to go is up—to putting everything in cash to preserve the endowment corpus and not risk further losses. On spending, institutional responses have
included sticking with current policy (resulting in a reduced endowment draw because of a smaller corpus), to increasing the spending from the endowment to compensate for other revenue reductions in
the budget, to eliminating spending from the endowment altogether for a year or two to allow the corpus to recover more quickly. While few endowment management committees responded at these extremes,
many have adjusted their asset allocations and spending policies to move in one of these directions.
How can institutions know if they are making the best decisions? Institutions may unknowingly assume more risk than necessary if they fail to consider the data
available to them. By using model simulations, institutions can forecast the possible impact of their asset allocation and spending policies. To illustrate the use of simulation modeling, this
article models two asset allocation and two spending policies to provide a backdrop for discussing the risks involved in different policy decisions.
Three Goals
Thanks to widely available computer software, modeling a variety of scenarios is now fairly straightforward. Use of these techniques requires investment committees to consider all relevant variables
and to engage in valuable discussions about risk and the trade-offs involved, thereby affording a framework for disciplined decision making. For other stakeholders, this approach culminates in rich
documentation that can clarify the rationale behind specific endowment policies and enhance the transparency of the investment committee's decision making.
One limitation of modeling is that it is based on a set of economic assumptions. In addition to making certain assumptions about inflation, future gift flows, and administrative expenses, rates of
return and the probability of achieving them must be estimated for each asset class. For this reason, it is advisable for most institutions to use investment consultants to help make appropriate
inputs to such models.
Regardless of the models used, three primary goals must be considered in assessing endowment risk:
• Supporting the students of today and tomorrow (intergenerational equity).
• Maintaining the predictability of support from the endowment for the operating budget.
• Maintaining liquidity of the endowment so that money is available to meet spending goals.
Each goal has one or more associated risks that give rise to the chance of not achieving the goal, and each adverse result diminishes the chance of attaining the goal. For example, for the goal of
intergenerational equity, an institution can underspend on the current generation of students to ensure that the endowment corpus is preserved long term. This could actually weaken the institution in
such a way that although the endowment will theoretically be preserved, the institution may essentially wither in the short term.
It is also important to acknowledge the interrelated nature of these risks. Higher spending today can favor the current generation over future generations. Reaching for higher returns can call for
investing in illiquid asset classes, as well as the transfer of too much market risk to the spending regime. More aggressive asset allocations provide the best chance of maintaining the value of the
endowment corpus in perpetuity, while an allocation that favors cash and fixed income may have little volatility but will not maintain its value in real (inflation-adjusted) terms. Because these
risks are not only interrelated, but to some degree are also competing, they cannot adequately be considered in isolation.
Baseline Scenario
For a baseline example, let's consider an institution with a $350 million endowment, an asset allocation of 70 percent in equities, and an expected return of 8.25 percent. The baseline spending
formula assumes 4.75 percent of rolling, 24-month average asset values. In addition, let's assume that inflation will average 3.5 percent over the long term, administrative expenses will be 0.5
percent of endowment assets, and new gifts of $1 million each year will increase annually by inflation.
By using Monte Carlo simulation (a computational technique used to measure uncertainty), we can forecast the impact of these investment and spending policies, considering both the expected (median)
outcome as well as the range of possible outcomes. Figure 1 is a Monte Carlo simulation of what will happen to the endowment over each of the next 20 years in nominal dollars. Each bar summarizes the
results of the thousands of trials for which their values have been ranked. The white line shows the median outcome, the value at which half the trials fall below and half are above. The dark blue
section spans the 25th to 75th percentile, thus comprising 50 percent of the trials. The top of the green bar represents the 95th percentile, so 5 percent of the trials had higher results. Similarly,
5 percent of the trials fell below the bottom of the tan bar (the 5th percentile). Using this model, overall we can anticipate having our result land within the boundaries of the bar with about 90
percent confidence, leaving a 1 in 10 chance of experiencing an outlier.
In this example the graph shows that, with the set of assumptions defined above, this endowment is expected to grow over time (approximately double over 20 years for the median case) and carries
substantial volatility. For instance, in 2029 the endowment range from 5th to 95th percentile is $269 million to $1.545 billion with a median value of $639 million. In terms of risk, there is a
chance, albeit very low, that in 20 years the endowment value will be less than the original $350 million. On the other hand, there is also a small chance that the endowment, in nominal dollars, will
have increased more than 400 percent.
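For readers who want to experiment, the mechanics of such a forecast can be sketched in a few lines of code. The sketch below is illustrative only: the 12 percent return volatility is an assumed figure (the article does not state one), the 24-month trailing average is approximated by a two-year average of year-end values, and many real-world details are omitted.

import numpy as np

rng = np.random.default_rng(42)

N_TRIALS, YEARS = 10_000, 20
MEAN_RET, STD_RET = 0.0825, 0.12      # return volatility is an assumed figure
SPEND_RATE, INFLATION = 0.0475, 0.035
ADMIN_RATE, GIFT0 = 0.005, 1.0        # administrative drag; gifts in $ millions

endow = np.full(N_TRIALS, 350.0)      # current corpus, $ millions
prev = endow.copy()                   # prior year-end value, for the trailing average

for year in range(1, YEARS + 1):
    spend = SPEND_RATE * (endow + prev) / 2.0     # crude 24-month average
    ret = rng.normal(MEAN_RET, STD_RET, N_TRIALS)
    gifts = GIFT0 * (1.0 + INFLATION) ** year     # $1M gift flow growing with inflation
    prev, endow = endow, endow * (1.0 + ret) - spend - ADMIN_RATE * endow + gifts

real = endow / (1.0 + INFLATION) ** YEARS         # inflation-adjusted values
print("nominal percentiles ($M):",
      np.round(np.percentile(endow, [5, 25, 50, 75, 95])))
print("chance real value >= $350M:", np.mean(real >= 350.0))

Even a sketch this crude reproduces the qualitative picture: a wide fan of outcomes around a growing median, with meaningful probability mass below the purchasing-power threshold.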
Impact on Endowment Goals
To evaluate this set of baseline policies on each of the three primary goals, we can review appropriate metrics from the simulation looking at real (inflation-adjusted) returns.
Intergenerational equity. While Figure 2 has the same data as Figure 1, the data are adjusted for inflation. This graph shows the probability of maintaining the purchasing power of the endowment over
this period of time. The tan and blue bars meet at the current fund value of $350 million, the level required to keep the purchasing power constant. The percent on top of each bar indicates the
chance of maintaining the purchasing power of the endowment after 20 years. In this example, 46 percent of the thousands of trials ended up with a real market value equal to or greater than $350
million at the 20-year mark, while 54 percent ended up with a real market value of less than $350 million.
This asset allocation and spending policy poses the risk of not maintaining the endowment's purchasing power more than half the time (54 percent in this simulation), something that may prompt a committee to discuss how to
increase the certainty of maintaining intergenerational equity. For example, a committee might choose to lower the spending rate and/or change the asset allocation to one that has a higher expected
rate of return.
Operating budget support. With regard to spending goals, we can review projected spending levels to see how the endowment can support the operating budget. To more fully reveal the trend, Figure 3
shows a deterministic forecast, a single path representing expected/median outcomes. The baseline spending policy (4.75 percent of the endowment corpus of the previous 24 months) produces a
precipitous decline in spending for the next year and produces lower spending for the next several years.
This picture represents a situation currently being experienced by many institutions, where lower spending has triggered actual program cuts, layoffs, and tuition hikes as institutions deal with the
reality of significant operating budget reductions. In this case, the spending formula inherits volatility from the investment policy. Due to the severe market losses of the fourth quarter in 2008,
nominal spending is slated to decrease by 15 percent to 20 percent from 2009 to 2010 and then slowly rebound. Given the assumptions used in the model, the endowment spend will not return to the 2009
level until 2015.
The concern of some college and university leaders is that if their institutions do not increase the funds available from their endowments to support their operating budgets during these financially
stressful times, they will be severely weakened, perhaps to the point that their future survival may be at stake. Other institution leaders may make a different assessment and decide to reduce their
spending rate because of the risk that there will be insufficient endowment corpus to support future generations.
Liquidity. An institution can calculate a liquidity ratio comparing cash available to cash needed. Figure 4 shows that even the least likely probability comfortably exceeds the threshold of 1.0,
which indicates that the institution will have the cash it needs to support its spending policy. Although interpretations may vary, the odds of a liquidity crisis seem quite low in this example.
Several large endowments that were heavily invested in illiquid investment classes did face liquidity issues in 2009, but this is a risk that had rarely if ever been of concern before the recent
market drop.
Quantitative Policy Setting
To this point, we've considered the baseline case and current investment and spending policies only to gain insight about the trajectory of key metrics and the volatility of those metrics. Next we
will explore how this quantitative approach is invaluable in the context of policy setting. For example, let's imagine that the same investment committee considers new spending and investment
policies to mitigate the current results. To make it tangible, we will introduce three discrete scenarios into our model and evaluate their effectiveness compared to the baseline (current) policies:
• New investment policy (New IP): 100 percent equity allocation resulting in an expected return of 9.0 percent.
• New spending policy (New SP): use of hybrid policy with 70 percent weight on prior-year spending and 30 percent weight on current formula, as well as a 94 percent floor (relative to prior year's
nominal spending amount).
• New investment policy and new spending policy (New IP + New SP), as indicated above.
Figure 5 shows a comparison of the probability of maintaining the real value of assets in the endowment over 10 years and 20 years by using different combinations of the asset allocation and the
spending policies described above. When looking ahead to 2029, the blue scenario (new investment policy and current spending policy) has the highest chance of maintaining intergenerational equity, at
53 percent. The dark green scenario (new investment policy and new spending policy) has about the same chance as the baseline policy (47 percent versus 46 percent, respectively), but with much
greater volatility. And the tan scenario (baseline investment policy and the new hybrid spending policy) has only a 39 percent chance of maintaining the real value of the endowment's assets. For a
clear example of why committee members must consider the range of outcomes possible for any set of policies, note that the worst-case forecast for the dark green scenario (new investment policy and
new spending policy) approaches the zero line. This represents some meaningful chance (about 5 percent) of losing the entire fund. To the extent that the endowment committee had not explicitly
considered this possibility, this process reveals a lurking risk that could prompt a revision in stated investment goals and objectives. Because many investment committees view the risk of loss as
much more serious than the potential of greater gains, they are often willing to forgo the upside of gains for protection against downside loss.
As Figure 6 shows, scenarios using the hybrid spending approach (tan and dark green) have higher spending levels during the first five-year period than the traditional spending policy, which takes a
fixed percentage of the endowment averaged over the previous 24 months. The higher spending in the hybrid policy (though still less in real dollars than the spending in 2009 in the baseline spending
policy) provides greater support for the operating budget from the endowment and affords an institution time to adjust more gradually to the real decline in the endowment. While this mitigates some
of the institutional risk that accompanies the need to make budget reductions swiftly, it does result in a lowered probability of maintaining the real value of the endowment for the long term.
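Read as a formula, one plausible implementation of this hybrid rule is sketched below; the variable names, and the choice to apply the 94 percent floor to the blended figure, are our assumptions.

def hybrid_spend(prev_spend: float, avg_assets: float,
                 rate: float = 0.0475, w_prev: float = 0.7,
                 floor: float = 0.94) -> float:
    """Blend last year's spending with the formula amount,
    then floor the result at 94% of last year's nominal spending."""
    blended = w_prev * prev_spend + (1.0 - w_prev) * rate * avg_assets
    return max(blended, floor * prev_spend)

# Example: after a sharp market drop, the pure formula would cut spending
# from $16.6M to about $13.8M; the hybrid rule limits the decline.
print(hybrid_spend(prev_spend=16.6, avg_assets=290.0))   # about 15.75

The design trade-off is visible in the code: the smoothing that protects the operating budget in bad years is exactly what lets spending drift above the formula rate, which is why the hybrid scenarios show a lower chance of preserving purchasing power.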
Risk Comfort Zone
In summary, the higher spending provided by the hybrid spending policy in the short term results in a slightly lower endowment corpus in the long term. The new investment policy provides a higher
probability of growth in the corpus over time, yet also has greater volatility and more downside risk. The combination of the new spending and investment policies may appear to be the best choice (as
they suggest higher short-term spending and carry a similar chance of intergenerational equity), until the committee considers the trade-off that these new policies carry a 5 percent probability of
losing virtually the entire fund.
While this analysis provides no clear answer for the best course of action for any institution, these scenarios do provide leaders with a range of choices to consider based on the best available data
and assumptions rather than relying on the average outcome only, which is where many investment committees tend to focus. Risk is inevitable, but by undergoing these kinds of simulations, an
endowment management committee can better determine which risks it is more comfortable accepting—a good first step in policy setting and decision making.
LUCIE LAPOVSKY, New York City, is a consultant in higher education finance and a former college president. | {"url":"http://www.nacubo.org/Business_Officer_Magazine/Magazine_Archives/March_2010/Risk_Forecast.html","timestamp":"2014-04-19T11:58:53Z","content_type":null,"content_length":"43689","record_id":"<urn:uuid:08b46b2f-4b71-43f0-a0bf-23df1549af98>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00391-ip-10-147-4-33.ec2.internal.warc.gz"} |
LATCH AND 2-SEQUENCE PROBLEMS
The task is to observe and classify input sequences. There are two classes. There is only one input unit or input line. See, e.g., Bengio et al. (1994); Bengio and Frasconi (1994); Lin et al. (1995).
The latch problem was designed to show how gradient descent fails. We tested two variants, both with 3 free parameters.
Latch variant I. The recurrent net itself has a single free parameter: the recurrent weight of a single tanh unit. The remaining 2 free parameters are the initial input values, one for each sequence
class. The output targets at sequence end are +0.8 and -0.8. Gaussian noise with mean zero and variance 0.2 is added to each sequence element except the first. Hence a large positive recurrent weight
is necessary to accomplish long-term storage of the bit of information identifying the class determined by the initial input. The latter's absolute value must be large to allow for latching the
recurrent unit.
Results. RG solves the task within only 6 trials on average (mean of 40 simulations). This is better than the 1600 trials reported in Bengio et al. (1994) with several methods. RG's success is due to
the few parameters and the fact that in search space
Latch variant II. Sequences from class 1 start with 1.0; others with -1.0. The targets at sequence end are 1.0 and -1.0. The recurrent net has a single unit with tanh activation function. There are 3
incoming connections: one from itself, one from the input, and one bias connection (the inputs are not free parameters). Gaussian noise is added like with variant I.
Results. RG solves variant II within 22 trials on average. This is better than the 1600 trials reported by Bengio et al. (1994) with several methods on the simpler variant I.
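A rough sketch of RG on latch variant II might look as follows. It is illustrative only: the search interval, batch size, and acceptance tolerance are guesses, since the text does not specify them, so the expected trial count will not match the reported 22 exactly.

import numpy as np

rng = np.random.default_rng(0)

def run_net(w_rec, w_in, bias, seq):
    h = 0.0
    for x in seq:                        # single tanh unit with 3 weights
        h = np.tanh(w_rec * h + w_in * x + bias)
    return h

def make_seq(label, length=50, var=0.2):
    seq = rng.normal(0.0, np.sqrt(var), length)   # Gaussian noise, variance 0.2
    seq[0] = 1.0 if label else -1.0               # class carried by the first element only
    return seq

def guess_solves(n_test=20, tol=0.3):
    w = rng.uniform(-10.0, 10.0, 3)      # guess all three weights at random
    for _ in range(n_test):
        label = bool(rng.integers(2))
        out = run_net(*w, make_seq(label))
        if abs(out - (1.0 if label else -1.0)) > tol:
            return False
    return True

trials = 1
while not guess_solves():
    trials += 1
print("solved after", trials, "random guesses")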
2-sequence problem. Only the first
Previous Results. The best method among the six tested by Bengio et al. (1994) solved the 2-sequence problem (
Results with RG. RG with architecture A2 and
Juergen Schmidhuber 2003-02-19 | {"url":"http://www.idsia.ch/~juergen/guessing2001/node3.html","timestamp":"2014-04-19T15:00:26Z","content_type":null,"content_length":"8201","record_id":"<urn:uuid:991157b1-e7cb-41e1-a5d7-7788115fba40>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00364-ip-10-147-4-33.ec2.internal.warc.gz"}
A complexity-performance-balanced multiuser detector based on artificial fish swarm algorithm for DS-UWB systems in the AWGN and multipath environments
In this article, an efficient multiuser detector based on the artificial fish swarm algorithm (AFSA-MUD) is proposed and investigated for direct-sequence ultrawideband systems under different
channels: the additive white Gaussian noise channel and the IEEE 802.15.3a multipath channel. From the literature review, two issues still need to be solved: the computational complexity of classical optimum multiuser detection (OMD) rises exponentially with the number of users, and the bit error rate (BER) performance of other sub-optimal multiuser detectors is unsatisfactory. The proposed method makes a good tradeoff between complexity and performance through the various behaviors of artificial fishes in a simplified Euclidean solution space, which is constructed from the
solutions of some sub-optimal multiuser detectors. Here, these sub-optimal detectors are minimum mean square error detector, decorrelating detector, and successive interference cancellation detector.
As a result of this novel scheme, the convergence speed of AFSA-MUD is greatly accelerated and the number of iterations is also significantly reduced. The experimental results demonstrate that the
BER performance and the near–far effect resistance of this proposed algorithm are quite close to those of OMD, while its computational complexity is much lower than the traditional OMD. Moreover, as
the number of active users increases, the BER performance of AFSA-MUD is almost the same as that of OMD.
DS-UWB; Multiuser detection (MUD); Artificial fish swarm algorithm (AFSA); Euclidean solution space
1. Introduction
Ultrawideband (UWB) technology is attractive for its multiple-access (MA) applications in wireless communication systems owing to its high ratio of the transmitted signal bandwidth to information
signal bandwidth (or pulse repetition frequency) [1]. Similarly, power can spread, because of its information symbols transmitted by short pulses, over the wide frequency band [2]. There are mainly
two standard schemes formulated by IEEE 802.15 Task Group 3a, i.e., the multi-band-based orthogonal frequency division multiplexing (MB-OFDM) and single-band-based direct-sequence UWB (DS-UWB) [3].
The former is a carrier-based system that divides the wide bandwidth of UWB into many sub-bands, while the latter is a baseband system modulating its input information symbols with nanosecond pulses,
which is different from conventional code division multiple access (CDMA) systems [1,4,5]. Compared with MB-OFDM, DS-UWB scheme has many advantages, which stem from its UWB nature, such as low
peak-to-average power ratio, wide bandwidth, good information hidden ability, and less sensitivity to multipath fading [6,7]. Our focus is thus on investigating the detection algorithms in multiuser
DS-UWB communication systems.
Actually, the idea of UWB MA systems dates back to the original proposal put forward by Scholtz [8], and with subsequent analyses in [9-12]. However, as in conventional CDMA systems, these proposed
UWB MA systems also suffer from the multiple-access interference (MAI) problem, which severely restricts their performance and system capacity. This is due to the crude assumption that the MAI can be
modeled as a zero-mean Gaussian random variable (called “Gaussian approximation”) [13] for the conventional single-user matched receiver. Moreover, MAI even causes the near–far effect (NFE) [14], the
case that the user with lower received signal power will be swamped by users with higher power. In order to solve these problems, multiuser detection (MUD) technology that can eliminate or weaken the
negative effects of MAI was studied in [15-27]. Among them, the optimum multiuser detection (OMD), proposed for CDMA systems by Verdu [15], has the optimal BER performance [16] and the perfect NFE
resistant ability [17]. But its computational complexity growing exponentially with the number of active users makes it impractical to use [18]. Yoon and Kohno [19] introduced this OMD algorithm to
the UWB MA system; its high computational complexity problem is yet to be solved.
In recent years, many different sub-optimal MUD algorithms have been studied in the literature. In [20], a multiuser frequency-domain (FD) turbo detector was proposed that combines FD turbo equalization schemes with soft interference cancellation, but its BER performance is unsatisfactory. A blind multiuser detector using a support vector machine for chaos-based code CDMA systems was presented in [21]; it does not require knowledge of the other users' spreading codes, at the cost of a training procedure. In [22], a low-complexity approximate SISO multiuser detector based on soft interference cancellation and linear minimum mean square error (MMSE) filtering was developed, but the performance of this detector is unfavorable at low SNR. As swarm intelligence is one of the latest
methods in the field of signal processing [23] (especially for combinatorial optimization problems [24]), several swarm-intelligence-based MUD algorithms have been considered in [25-27]. However, the
tradeoff problem between computational complexity and BER performance still exists.
To solve these issues, we investigate a complexity-performance-balanced multiuser detector based on the artificial fish swarm algorithm (AFSA-MUD) for DS-UWB systems. As a kind of swarm intelligence
method, AFSA is selected here for its significant ability to search for the global optimal value and to adapt its search space automatically [28,29]. Its basic motivation is to find the
global optimum by simulating the fish’s behaviors, such as preying, swarming, and searching.
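To make these behaviors concrete, the sketch below implements a generic AFSA loop on a toy objective. It is illustrative only: the visual range, crowding factor, step size, and the toy objective are assumed defaults, not the settings used in this article.

import numpy as np

rng = np.random.default_rng(1)

def objective(x):                          # toy objective: maximize -||x||^2
    return -np.sum(np.asarray(x) ** 2, axis=-1)

DIM, N_FISH, VISUAL, STEP, CROWD, TRY_NUM, ITERS = 2, 20, 2.0, 0.5, 0.6, 5, 100
fish = rng.uniform(-5.0, 5.0, (N_FISH, DIM))

def move_toward(x, target):                # bounded random step toward a target
    d = target - x
    n = np.linalg.norm(d)
    return x if n < 1e-12 else x + STEP * rng.random() * d / n

def prey(x):                               # preying: sample the visual range
    for _ in range(TRY_NUM):
        cand = x + VISUAL * rng.uniform(-1.0, 1.0, DIM)
        if objective(cand) > objective(x):
            return move_toward(x, cand)
    return x + STEP * rng.uniform(-1.0, 1.0, DIM)   # random move as fallback

for _ in range(ITERS):
    for i in range(N_FISH):
        x = fish[i]
        dists = np.linalg.norm(fish - x, axis=1)
        mates = fish[(dists > 0) & (dists < VISUAL)]
        nxt = None
        if len(mates):
            center = mates.mean(axis=0)    # swarming: move to an uncrowded, better center
            if objective(center) > objective(x) and len(mates) / N_FISH < CROWD:
                nxt = move_toward(x, center)
            if nxt is None:                # following: chase the best visible mate
                best = mates[np.argmax(objective(mates))]
                if objective(best) > objective(x):
                    nxt = move_toward(x, best)
        fish[i] = prey(x) if nxt is None else nxt
print("best position found:", fish[np.argmax(objective(fish))])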
In the proposed AFSA-MUD algorithm, a simplified Euclidean solution space is constructed from the solutions of sub-optimal multiuser detectors, namely the MMSE detector, the decorrelating (DEC) detector, and the successive interference cancellation (SIC) detector. Specifically, the center of this space is the decision obtained by majority vote over all these sub-optimal solutions (equivalently, the sign of their average), while its radius is defined as the maximum distance between this center and the sub-optimal solutions. AFSA is then applied within this simplified solution space, with the sub-optimal solutions serving as the initial states of the artificial fishes (AFs). Simulation results show that the BER performance and the NFE resistance of the proposed algorithm are comparable to those of OMD, and significantly better than those of the matched filter (MF), SIC, DEC, and MMSE detectors. Moreover, its computational complexity is much lower than that of OMD, indicating better efficiency.
The remainder of this article is organized as follows. In Section 2, the general multiuser DS-UWB system and several classical MUD algorithms (OMD, MMSE, DEC, and SIC) are described. Sections 3 and 4 discuss the basic principles of AFSA and the proposed AFSA-MUD algorithm, respectively. Section 5 presents simulation experiments comparing the different MUD algorithms, and conclusions are given in Section 6.
2. Multiuser DS-UWB system model and some classical MUD algorithms
2.1. Multiuser DS-UWB system model in additive white Gaussian noise and IEEE 802.15.3a channels
First, we consider a K-user synchronous DS-UWB system over the additive white Gaussian noise (AWGN) channel, where each user employs BPSK direct-sequence spread-spectrum modulation [30]. The kth user's transmitted signal can then be expressed as in Equation (1) [31].
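A standard DS-UWB form consistent with the symbol definitions below (the exact normalization of Equation (1) is assumed here) is

$$s_{tr}^{(k)}(t) \;=\; \sum_{j}\;\sum_{n=0}^{N_c-1} b_j^{(k)}\, p_n^{(k)}\, w_{tr}\!\left(t - jT_f - nT_c\right),$$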
where w[tr](t) represents the transmitted pulse waveform, generally characterized as the second derivative of a Gaussian pulse [6,19] as in Equation (2); {b[j]^(k)} are the information symbols of the kth user; {p[n]^(k)} denotes the spreading sequence assigned to this user; T[c] is the pulse repetition period (namely the chip period); T[f] is the duration of one information symbol, satisfying T[f] = N[c]T[c]; and N[c] is the length of the spreading codes.
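The second derivative of the Gaussian pulse commonly used in the UWB literature (presumably the form intended in Equation (2)) is

$$w_{tr}(t) \;=\; \left[\,1 - 4\pi\Big(\frac{t}{\tau_m}\Big)^{2}\right]\exp\!\left[-2\pi\Big(\frac{t}{\tau_m}\Big)^{2}\right],$$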
where τ[m] is the parameter that determines the width of the pulse.
If all K users are active, the total received signal is the superposition of all users' signals; in its standard form (assumed here for Equation (3)),
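$$r(t) \;=\; \sum_{k=1}^{K} A_k\, s_{tr}^{(k)}(t) + n(t),$$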
where A[k] is the amplitude of the kth received signal and n(t) is the received noise, modeled as zero-mean Gaussian with distribution N(0, σ[n]^2) [4].
The AWGN channel, in which the performance of different MUD detectors can be analyzed easily, is too idealized for practical use; multipath channels better reflect reality, especially in indoor environments. In this article, the IEEE 802.15.3a channel model discussed in [30,32,33] is adopted. This channel model is a slight modification of the Saleh–Valenzuela model [34]: a lognormal distribution for the multipath gain magnitude replaces the Rayleigh hypothesis. The multipath channel can be defined as follows.
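In the usual Saleh–Valenzuela notation (assumed here for Equation (4)):

$$h(t) \;=\; X \sum_{l=0}^{L-1}\sum_{m=0}^{M-1} \alpha_{m,l}\,\delta\!\left(t - T_l - \tau_{m,l}\right),$$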
where X is the lognormal shadowing factor, {α[m,l]} are the multipath gain coefficients, T[l] is the delay of the lth cluster, and τ[m,l] is the delay of the mth multipath component (called a "ray") relative to the lth cluster arrival time T[l], so that τ[0,l] = 0. L and M denote the number of clusters and the number of rays per cluster, respectively. In addition, the amplitude |α[m,l]| is lognormally distributed, while the phase ∠α[m,l] takes the values {0, π} with equal probability [30].
According to the conclusions in [32], there are four typical multipath channel models of different channel characteristics, namely CM1–CM4. CM1 represents a line-of-sight (LOS) propagation case with
0–4-m propagation distance, while CM2–CM4 denote three different non-LOS propagation cases with different propagation distance or delay spread. The detailed characteristics of these models are
summarized in [32].
Therefore, the transmitted signal, after passing through this multipath channel, can be expressed as Equation (5), which differs from Equation (3). A form consistent with the convolution description below (an assumption about the exact expression) is
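$$r(t) \;=\; \sum_{k=1}^{K} A_k\, s_{tr}^{(k)}(t) \otimes h(t) + n(t),$$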
where the symbol ⊗ denotes convolution. Furthermore, in this case the pulse repetition period T[c] is chosen large enough to preclude intersymbol and intrasymbol interference [10]. With the help of Rake receivers, the MUD algorithms discussed for the AWGN case can be applied to the multipath case easily.
2.2. Classical multiuser detectors
2.2.1. Single-user MF
Since the MA DS-UWB system is assumed to be synchronous, the output of a bank of single-user MFs is a K-dimensional vector y, whose kth component is the output of the filter matched to S[tr]^(k)(t) over the jth symbol duration. A standard correlator form (the limits and normalization of Equation (6) are assumed) is
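$$y_k[j] \;=\; \int_{jT_f}^{(j+1)T_f} r(t)\, S_{tr}^{(k)}(t - jT_f)\, dt.$$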
Without loss of generality, we consider the case j = 0 and drop the index j. Thus, Equation (6) becomes
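(with the terms identified below)

$$y_k \;=\; A_k b_k \;+\; \sum_{i \neq k} A_i\,\rho_{ik}\, b_i \;+\; n_k,$$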
where the first term A[k]b[k] is the ideal detection result for the kth user; the second term is the MAI to this user, with ρ[ik] = ∫[0]^Tf S[tr]^(i)(t) S[tr]^(k)(t) dt the normalized correlation coefficient; and the last term n[k] = ∫[0]^Tf n(t) S[tr]^(k)(t) dt is the noise contribution. Consequently, the K-dimensional detection vector y can be written in matrix-vector form as

y = R A b + n,

where R is the normalized cross-correlation matrix with entries {ρ[ik]} (i,k = 1,2,…,K), b = (b[1], b[2], …, b[K])^T collects the information bits, and

A = diag{A[1], A[2], …, A[K]}

is a diagonal matrix with diagonal elements A[1], A[2], …, A[K]. Furthermore, n is a zero-mean Gaussian random vector with covariance matrix σ[n]^2 R.
2.2.2. OMD
According to [35], the OMD problem is equivalent to maximum a posteriori estimation. The OMD criterion is written as follows (the standard log-likelihood form is assumed here):
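$$\hat{b}_{OMD} \;=\; \arg\max_{b \in \{-1,+1\}^{K}} \left(\, 2\, b^{T} A\, y \;-\; b^{T} A R A\, b \,\right).$$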
It is known that selecting this optimal solution in the K-dimensional solution space is in general an NP-hard problem [18]; for this reason, the computational complexity of OMD grows exponentially with the number of active users.
2.2.3. MMSE detector
The purpose of the MMSE detector is to minimize the mean square error between the transmitted bits and a linear transformation of the matched-filter output by a K × K matrix M; this linear transformation also maximizes the signal-to-interference ratio [21,36]. Thus, the MMSE algorithm amounts to choosing the matrix M that achieves (in the standard formulation, assumed here)

min over M of E[ ||b − M y||^2 ].

From [21,36], the optimal matrix M for Equation (11) is

M = (R + σ[n]^2 A^{−2})^{−1},

and the solution of this MMSE detector can be expressed as

b̂[MMSE] = sgn(M y).
2.2.4. DEC detector
Assume the cross-correlation matrix R is invertible; then the transformation matrix M of the DEC detector is R^{−1}, and the decision is

b̂[DEC] = sgn(R^{−1} y),

where the interference caused by other users is eliminated completely, but the background noise is amplified.
2.2.5. SIC detector
This method is motivated by a natural and simple idea: once a decision has been made for an interfering user's information bit, that user's interfering signal can be recreated at the receiver and subtracted from the received signal [37]. Thus, the decision for the kth user is [36]
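(in a standard SIC form; the exact expression of Equation (15) is assumed)

$$\hat{b}_k \;=\; \mathrm{sgn}\!\left( y_k \;-\; \sum_{j=k+1}^{K} A_j\,\rho_{jk}\,\hat{b}_j \right),$$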
where the decisions of users k + 1, k + 2, …, K are assumed to be correct. Since the reliability of this assumption strongly affects performance, the order in which users are demodulated matters. Here, users are ordered through Equation (16), which can be estimated easily from the MF outputs [36].
Note that all the MUD algorithms introduced above can be applied to the multipath situation easily via Rake receivers with channel estimators [33] (channel estimation is outside the scope of this article).
3. The basic principles of AFSA
AFSA is a random-search optimization algorithm inspired by fish behaviors such as searching for food, swarming, and following others. It is good at avoiding local optima and finding the global optimum owing to its adaptive, parallel search of the solution space, achieved by simulating these natural behaviors [27-29]. In this section, the general AFSA is discussed.
3.1. Some definitions for AFSA
In the AFSA, let the solution space be K-dimensional and let there be N AFs in this space. Like other swarm-intelligence methods, AFSA searches this solution space through cooperation and competition among its AFs [28]. As shown in Figure 1, there are some important definitions for AFSA.
Figure 1. The local visual range of AF X[i] (the two-dimensional case).
The state of each AF can be modeled as a K-dimensional vector X = (x[1], x[2], …, x[K])^T, where x[i] (i = 1, 2, …, K) is the ith component of X. Moreover, Y = f(X) denotes the food concentration of this state, where f(·) is also called the fitness function or objective function of the specific problem.
The distance between the states X[i] and X[j] is the Euclidean distance d[i,j] = ||X[i] − X[j]||.
In addition, Visual denotes the local visual (search) distance of an AF, δ is the crowdedness factor that limits the number of AFs in a local region, step is the movement step size of the AFs, and try_number is the number of random search attempts in the searching behavior described below.
3.2. The behavior descriptions of AFSA
3.2.1. Searching behavior
Suppose that X[i] is the current state of a certain AF. The AF selects a new state X[j] within its visual distance at random. If Y[j] = f(X[j]) > Y[i] = f(X[i]), the AF moves from X[i] toward X[j], as given by Equation (19).
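In its standard AFSA form (assumed here; Equations (22) and (23) below are analogous, with X[j] replaced by X[c] and X[max], respectively), this update is

$$X_i^{next} \;=\; X_i \;+\; \mathrm{rand}(0,1)\cdot step \cdot \frac{X_j - X_i}{\|X_j - X_i\|},$$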
where (X[j] − X[i])/||X[j] − X[i]|| gives the direction of movement. Otherwise, a new X[j] is selected at random and tested against the movement condition (Y[j] > Y[i]) again. If no candidate satisfies this condition after try_number attempts, the AF finally moves one step randomly, as in Equation (20).
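A common form of this random move (again, the exact expression is an assumption) is

$$X_i^{next} \;=\; X_i \;+\; \mathrm{rand}(-1,1)\cdot step.$$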
3.2.2. Swarming behavior
Let X[i] be the current state of a certain AF, and let n[f] be the number of companions within its visual range, i.e., the number of elements of the set B = {X[j] | d[i,j] < Visual}. Then the central state of these companions, X[c] = (1/n[f]) Σ_{X[j] ∈ B} X[j], is calculated by Equation (21).
Meanwhile, Y[c] = f(X[c]) is the food concentration of this central state. If Y[c]/n[f] > δY[i] and Y[c] > Y[i], which means the food concentration at X[c] is high while the area is not crowded, the AF moves one step toward the central state as in Equation (22); otherwise, the searching behavior is executed.
3.2.3. Following behavior
Assume that X[i] is the current state of a certain AF. Within the visual scope of X[i], find the state X[max] whose food concentration Y[max] is maximal. If the conditions Y[max]/n[f] > δY[i] and Y[max] > Y[i] are satisfied, the AF moves one step toward X[max], as in Equation (23); otherwise, the searching behavior is executed.
3.3. Bulletin board
The bulletin board is designed to prevent the optimization results from degrading; that is, it records and updates the best food concentration and its corresponding state during the iterations of AFSA. After the maximum number of iterations has been reached, the final record on the bulletin board is output as the result of AFSA.
3.4. The selection of different behaviors for AFs
For optimization problems such as maximization, the selection among these behaviors is based on the trial method [38]: both the swarming and the following behavior are simulated for each AF, and the one that yields the larger increase in food concentration is actually executed. If neither improves the AF's current state, the searching behavior is selected. The whole flowchart of AFSA is summarized in Figure 2 (the sections in the dashed border are optional).
4. The proposed AFSA-MUD algorithm
4.1. The AFSA for MUD problem
Clearly, OMD is a combinatorial optimization problem, and AFSA has a strong global search capability for such problems. Therefore, AFSA is applied here to the MUD problem, with some additional specifications for the discrete Euclidean solution space E^K, where K is the number of active users:
(1) The expression of an AF's state. In this solution space, each component of an AF's state is encoded as −1 or +1. If there are K active users in the DS-UWB MA system, then the state is a K-dimensional vector, e.g., X[0] = (x[1],x[2],…,x[K])^T, where x[i] ∈ {−1, +1} and i = 1,2,…,K.
(2) Initialization. The initial state of each AF is selected randomly in the discrete space of 2^K possible solutions.
(3) The distance between different states. In this case, the XOR operator is used to compute the distance, i.e., the number of differing components. For example, if X[i] = (1,1,–1,1,1) and X[j] = (1,–1,1,–1,1), then the distance is d[i,j] = X[i] XOR X[j] = 3.
(4) The food concentration or the fitness function for AFs is the criterion of OMD in Equation (10).
(5) The operations in Equations (19), (22), and (23) are modified accordingly for this discrete space.
4.2. The improved scheme for the selection of initial states and the simplification of solution space
Since AFSA is a random-search swarm-intelligence algorithm, its convergence speed and computational complexity depend mainly on its initial states and its search space. This suggests that, to speed up convergence and reduce the computational complexity of AFSA-MUD, the initial states should be selected using a priori knowledge rather than at random, and the K-dimensional solution space should be simplified.
Hence, a novel AFSA-MUD method is proposed here, whose a priori knowledge consists of the detection results of several sub-optimal detectors, namely the MMSE, DEC, and SIC detectors. Its Euclidean solution space, defined by a center and a radius, is constructed from these sub-optimal results and is far more compact than the whole space. As a result, this mechanism can not only improve the convergence speed and the search accuracy for the global optimum, but also reduce the time and complexity required. The details are described as follows.
(1) Initialization. Let the detection results of the MMSE, DEC, and SIC detectors be the K-dimensional vectors X[1], X[2], and X[3]. Thus, the number of AFs can be set to 3 and their initial states are assigned as X[1], X[2], and X[3], respectively. Note that this initialization extends effortlessly to more than three sub-optimal detectors.
(2) The center of the simplified space. Here, the majority voting method, which has been widely used to resolve conflicts in both engineering and social fields, is applied to fix the center point:
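A component-wise majority vote over the three sub-optimal decisions can be written (in a form assumed here for Equation (25)) as

$$X_0 \;=\; \mathrm{sgn}\!\left( X_1 + X_2 + X_3 \right),$$

where the sign is taken in each component; with three ±1 voters, no component can tie.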
(3) The radius of the simplified space. In this algorithm, the radius is determined by the maximum distance between the center and these initial states (or sub-optimal solutions): d[radius] = max{ d(X[0], X[1]), d(X[0], X[2]), d(X[0], X[3]) },
where d[radius] denotes the search radius of AFSA. In fact, however, these sub-optimal detectors are not absolutely independent of each other, and their degree of correlation can be estimated [39] as
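follows (a correlation measure of this kind from the classifier-combination literature; the exact expression of Equation (27) is assumed here):

$$\rho_n \;=\; \frac{n\, N^{f}}{N - N^{r} - N^{f} + n\, N^{f}}.$$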
In [39], n is the total number of classifiers, N is the total number of test samples, N^f is the number of samples misclassified by all classifiers, and N^r is the number of samples classified correctly by all classifiers. Here, n is instead the total number of sub-optimal detectors, N is the total number of test information bits, N^f denotes the number of bits detected wrongly by all detectors, and N^r the number detected correctly by all. Figure 3 depicts the correlation ρ[3] of the SIC, DEC, and MMSE detectors versus E[b]/N[0]. As E[b]/N[0] increases, the correlation degree rises markedly up to E[b]/N[0] = 16 dB (from 0.52 to 0.98) and stays near 1 thereafter. In general, the lower E[b]/N[0] is, the more diverse these sub-optimal detectors are, and the larger the search radius becomes. This correlation degree is particularly significant when many sub-optimal detectors are available.
Figure 3. The correlation degree of SIC, DEC, and MMSE detectors.
Considering the analysis above, when X[1] = X[2] = X[3], the radius calculated by Equation (26) is zero, which would make the solution space empty. To avoid this problem, the radius is set to 1 whenever X[1] = X[2] = X[3].
To summarize how the center and the radius of the simplified space are determined, three situations are considered:
i. none of these sub-optimal solutions equals another (X[1] ≠ X[2] ≠ X[3]);
ii. two of these solutions are equal, but not all three (X[1] = X[2] ≠ X[3]);
iii. all of these solutions are equal (X[1] = X[2] = X[3]).
Figure 4 shows these three situations in a two-dimensional solution space, which can be generalized easily to a K-dimensional solution space (K > 2).
Figure 4. Three situations in a two-dimensional solution space.
4.3. The proposed AFSA-MUD algorithm
Taking the statements above into consideration, the overall structure of the proposed AFSA-MUD detector is shown in Figure 5, and its implementation is summarized as follows (a minimal code sketch is given after the list).
(1) The output of a bank of single-user MF receivers is fed to sub-optimal detectors, such as SIC, DEC, and MMSE.
(2) The detection results of these sub-optimal detectors are used to construct a simplified solution space and initialize the states of AFs.
(3) The AFSA is executed in this space.
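To make these three steps concrete, the following is a minimal Python sketch of the pipeline. It is illustrative only: the helper names, the parameter values, and the restriction of the AFSA loop to the searching behavior (swarming and following are omitted for brevity) are assumptions, not the authors' implementation.

```python
import numpy as np

def fitness(b, y, A, R):
    # OMD log-likelihood metric 2 b^T A y - b^T A R A b (cf. Equation (10)).
    Ab = A @ b
    return 2.0 * b @ (A @ y) - Ab @ (R @ Ab)

def hamming(x, z):
    # Distance between two +/-1 state vectors (the XOR count of Section 4.1).
    return int(np.sum(x != z))

def hard(v):
    # Map a real-valued vector to +/-1 decisions.
    return np.where(v >= 0.0, 1.0, -1.0)

def sic_detect(y, A, R):
    # Successive interference cancellation: decide stronger users first and
    # subtract each decided user's estimated contribution (cf. Equation (15)).
    K = len(y)
    b = np.zeros(K)
    for k in np.argsort(-np.diag(A)):
        mai = sum(A[j, j] * R[j, k] * b[j] for j in range(K))
        b[k] = 1.0 if y[k] - mai >= 0.0 else -1.0
    return b

def afsa_mud(y, A, R, sigma2, visual=3, try_number=4, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    K = len(y)
    # Step (1): sub-optimal detectors supply the initial fish states.
    fish = [hard(np.linalg.solve(R + sigma2 * np.linalg.inv(A @ A), y)),  # MMSE
            hard(np.linalg.solve(R, y)),                                  # DEC
            sic_detect(y, A, R)]                                          # SIC
    # Step (2): simplified space = Hamming ball around the majority vote.
    center = hard(sum(fish))                # three voters never tie
    radius = max(1, max(hamming(center, x) for x in fish))
    best = max(fish, key=lambda b: fitness(b, y, A, R))   # bulletin board
    # Step (3): AFSA searching behavior inside the simplified space.
    for _ in range(n_iter):
        for i, x in enumerate(fish):
            for _ in range(try_number):
                cand = x.copy()
                flips = rng.choice(K, size=min(visual, K), replace=False)
                cand[flips] *= -1.0
                if (hamming(center, cand) <= radius and
                        fitness(cand, y, A, R) > fitness(x, y, A, R)):
                    fish[i] = cand
                    break
            if fitness(fish[i], y, A, R) > fitness(best, y, A, R):
                best = fish[i].copy()       # update the bulletin board
    return best
```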
Figure 5. General schematic diagram of the AFSA-MUD detector.
5. Numerical results
To verify the proposed AFSA-MUD algorithm, Monte Carlo simulations are used; the main simulation parameters are summarized in Table 1. The performance of the MF, SIC, DEC, MMSE, AFSA-MUD, and OMD detectors is compared in both AWGN and multipath channels (only the energy of the first path is received, i.e., without Rake diversity combining), including the BER performance versus E[b]/N[0], the BER performance versus the number of active users K, and the NFE resistance. Finally, the computational complexity of AFSA-MUD is compared with those of the other detectors to demonstrate its efficiency.
5.1. The BER performance versus E[b]/N[0] comparison
Figures 6 and 7 depict the BER versus E[b]/N[0] curves with perfect power control in the AWGN and multipath IEEE 802.15.3a CM2 channels, respectively, for ten users. In addition, the BER versus E[b]/N[0] curves of AFSA-MUD in the different multipath channels CM1–CM4 are displayed in Figure 8.
Figure 6. BER performance comparison when K = 10 in the AWGN channel.
Figure 7. BER performance comparison when K = 10 in the IEEE 802.15.3a CM2 channel.
Figure 8. BER performance comparison for AFSA-MUD when K = 10 in CM1–CM4 channels.
Figure 6 shows that the BER performance of AFSA-MUD is superior to those of the other sub-optimal detectors (MF, SIC, DEC, and MMSE) and even coincides with that of OMD. The reason is that the proposed AFSA-MUD algorithm searches within a simplified solution space constructed from the solutions of these sub-optimal detectors, rather than searching at random; all the sub-optimal solutions are therefore contained in the search space. Although the performance of every algorithm degrades in the multipath environment (Figure 7), the BER performance of AFSA-MUD remains close to that of OMD, and both are the best.
The simulation results in Figure 8 show that, as the channel condition deteriorates from CM1 to CM4, the BER performance of AFSA-MUD also deteriorates. In particular, CM1, unlike CM2–CM4, is LOS with the shortest transmission distance, so the power of its received signal is larger than in the other cases.
5.2. The BER performance versus K comparison
The BER performance of the detectors for different numbers of active users K is analyzed in this experiment, considering two cases: (i) the AWGN channel with E[b]/N[0] set to 5 dB for all detectors; (ii) the multipath CM2 channel with E[b]/N[0] set to 10 dB (to separate the curves clearly) for all detectors.
Figures 9 and 10 show the results for Cases one and two, respectively. Overall, the BER rises as the number of users increases, and the performance of OMD is the best. The performance gap between AFSA-MUD and OMD arises because, as the number of users increases, the simplified solution space also expands, so the parameters (Visual, try_number, and the number of iterations) should be increased; in this experiment, however, they remain fixed as in Table 1.
Figure 9. Case one: BER performance versus K.
In addition, there are some notable differences between the two figures. The performance of SIC is better than that of MF in Figure 9 but worse in Figure 10, because the interfering users' bits are estimated much more accurately in the AWGN environment than in the multipath environment; that is, SIC cannot improve on MF in a low-E[b]/N[0] environment. Moreover, because it amplifies the background noise, DEC cannot achieve the optimal performance, especially in Case two, where its performance is the worst for K = 5, 10.
5.3. The NFE resistant capability comparison
This simulation examines the BER performance of the detectors under imperfect power control, i.e., the NFE. Again two cases are considered: (i) the AWGN channel with ten users, where the transmitted energy per information bit of the first user is fixed at E[b1]/N[0] = 5 dB while that of the other users, E[b2–10]/N[0], varies from 0 to 15 dB simultaneously; (ii) the multipath CM2 channel with ten users, where E[b1]/N[0] = 10 dB (to separate the curves clearly) while E[b2–10]/N[0] again varies from 0 to 15 dB. Note that only the BER of the first user is analyzed and plotted.
Figure 11 (Case one) shows that DEC, MMSE, AFSA-MUD, and OMD have stronger NFE resistance (their BER is insensitive to E[b2–10]/N[0]) than the MF and SIC detectors; among them, AFSA-MUD and OMD achieve the best BER. Furthermore, the BER curve of SIC has an inflection at E[b2–10]/N[0] = 5 dB, due to its detection method in Equations (15) and (16). On the one hand, when the energy of users 2–10 estimated by Equation (16) is smaller than that of user 1 (E[b2–10]/N[0] < 5 dB), the information bits of user 1 are detected first, exactly as MF would do; this is why the BER of SIC is identical to that of MF up to E[b2–10]/N[0] = 5 dB. On the other hand, when E[b2–10]/N[0] > 5 dB, the bits of users 2–10 are detected before those of user 1, and with higher reliability. Consequently, after the interfering signals are subtracted from the received signal via Equation (15), the BER of SIC improves, agreeing with those of AFSA-MUD and OMD.
Figure 11. Case one: BER performance versus E[b2–10]/N[0].
Figure 12 shows almost the same conclusions for NFE resistance in the multipath case, with one difference. Owing to multipath, especially when E[b2–10]/N[0] exceeds 8 dB, the interfering users' bits are not estimated accurately enough (here, BER > 10^–1). From Equation (15), an inaccurate estimate of the interfering bits means that the interference can actually be amplified, making the BER of SIC even worse than that of MF, unlike the AWGN case in Figure 11.
Figure 12. Case two: BER performance versus E[b2–10]/N[0].
5.4. The computational complexity comparison
Table 2 lists, for each detector, the total number of K-dimensional vector inner products computed (after the MF outputs in Equation (8)) per symbol duration, where K is the number of active users in this multiuser system and L[i] is the upper bound on the radius of the solution space in the current information symbol duration ((i – 1)T[f] < t < iT[f]). The detailed derivation of the computational complexity of AFSA-MUD is given in the Appendix. Note that in the problem considered here, the communication system is static (the number of active users is fixed, e.g., K = 5, 10, 15), so the matrix inversions in Equations (13) and (14) need not be performed at each symbol period; in other words, the computational cost of the inversion operations is negligible.
Table 2. The computational complexity comparison
As Table 2 shows, the computational complexity of AFSA-MUD is evidently much lower than that of OMD: the two coincide only when L[i] = K and K is large, and the case L[i] ≥ K/2 does not arise in a practical communication system. To make this concrete, Figure 13 compares the computational complexity of all the detectors for L[i]/K = 0.1, 0.3, and 0.5.
Figure 13. The computational complexity of all detectors.
In addition, Figure 14 depicts the average L[i]/K versus E[b]/N[0] (K = 10) for the AWGN case and the multipath CM1–CM4 cases. L[i]/K decreases as E[b]/N[0] increases, which means the upper bound on the radius of the solution space adapts itself to E[b]/N[0]. Moreover, the average ratio L[i]/K is about 0.2 for CM1–CM4, implying that AFSA-MUD saves at least 94.4% of the computational complexity of OMD (Table 2 with K = 10); in the AWGN case, the saving exceeds 98.8%.
Figure 14. The average L[i]/K versus E[b]/N[0] curves when K = 10.
6. Conclusion
This article has focused on MUD technology for DS-UWB systems. In view of the high computational complexity of OMD and the unsatisfactory BER performance of sub-optimal multiuser detectors, a complexity-performance-balanced MUD algorithm based on AFSA, named AFSA-MUD, has been proposed. The method executes the different AF behaviors in a simplified Euclidean solution space built from the detection results of sub-optimal detectors. Simulation results indicate that the BER performance and the NFE resistance of the new algorithm are very close to those of OMD and superior to those of the MF, SIC, DEC, and MMSE detectors; moreover, much lower computational complexity is required to achieve this performance.
Appendix: The computational complexity of AFSA-MUD
Let the detection results of the SIC, DEC, and MMSE detectors be three K-dimensional vectors X[1], X[2], and X[3]. Since these detectors are executed in parallel (see Figure 5), the number of K-vector inner products attributed to this parallel stage is taken to be K.
According to Equations (25) and (26), the center of the simplified solution space is X[0], as given by Equation (25), while the radius is d[radius]. An arbitrary solution in this space is X = (x[1], x[2], …, x[K])^T, which satisfies the condition d(X[0], X) ≤ L[i],
where L[i] is the upper bound on the radius of this solution space in the current information symbol duration ((i – 1)T[f] < t < iT[f]); it can be determined from the number of discordant components among the three K-dimensional vectors.
Then, the total number of K-vector inner products for AFSA-MUD is obtained by counting one inner-product evaluation for each discrete solution in this space (of radius L[i]), plus the K products for the parallel execution of SIC, DEC, and MMSE, that is
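(one consistent reading of this count; the exact formula is assumed here)

$$N_{total} \;=\; K \;+\; \sum_{i=0}^{L_i} \binom{K}{i},$$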
where the binomial coefficient (K choose i), for i = 0, 1, …, L[i], counts the number of solutions that satisfy d(X[0],X) = i.
This study was supported by the National Natural Science Foundation of China (Grant no. 61102084), the Fundamental Research Funds for the Central Universities in China (Grant no. HIT.NSRIF.2010092), and the China Postdoctoral Science Foundation (Grant no. 2011M500665). The authors would also like to thank the anonymous reviewers for their invaluable comments.
1. B Hu, NC Beaulieu, Precise performance analysis of DS-UWB systems on additive white Gaussian noise channels in the presence of multiuser interference. IET Commun. 1(5), 977–981 (2007)
2. M Herceg, T Svedek, T Matic, Pulse interval modulation for ultra-high speed IR-UWB communications systems. EURASIP J. Adv. Signal Process. 2010, Article ID 658451 (2010)
3. J Maunu, T Koivisto, M Laiho, A Paasio, An analog Viterbi decoder array for DS-UWB receiver, in Proceedings of IEEE International Conference on Ultra-Wideband (Waltham, MA, USA, 24–27 Sept 2006), pp. 31–36
4. Y Zhang, AK Brown, Data rate for DS-UWB communication systems in wireless personal area networks, in Proceedings of IEEE International Conference on Ultra-Wideband, vol. 1 (Hannover, Germany, 10–12 Sept 2008), pp. 187–190
5. BR Vojcic, RL Pickholtz, Direct-sequence code division multiple access for ultra-wide bandwidth impulse radio, in Proceedings of IEEE Military Communications Conference (MILCOM 03), vol. 2 (13–16 Oct 2003), pp. 898–902
6. S Tan, A Nallanathan, B Kannan, Performance of DS-UWB multiple-access systems with diversity reception in dense multipath environments. IEEE Trans. Veh. Technol. 55(4), 1269–1280 (2006)
7. H Sato, T Ohtsuki, Frequency domain channel estimation and equalisation for direct sequence ultra wideband (DS-UWB) system. IEE Proc. Commun. 153(1), 93–98 (2006)
8. RA Scholtz, Multiple access with time-hopping impulse modulation, in Proceedings of IEEE Military Communications Conference (MILCOM 93), vol. 2 (Boston, MA, USA, 11–14 Oct 1993), pp. 447–450
9. MZ Win, RA Scholtz, Ultra-wide bandwidth time-hopping spread-spectrum impulse radio for wireless multiple-access communications. IEEE Trans. Commun. 48(4), 679–689 (2000)
10. JD Choi, WE Stark, Performance of ultra-wideband communications with suboptimal receivers in multipath channels. IEEE J. Sel. Areas Commun. 20(9), 1754–1766 (2002)
11. AR Forouzan, M Nasiri-Kenari, JA Salehi, Performance analysis of time-hopping spread-spectrum multiple-access systems: uncoded and coded schemes. IEEE Trans. Wirel. Commun. 1(4), 671–681 (2002)
12. VS Somayazulu, Multiple access performance in UWB systems using time hopping vs. direct sequence spreading, in Proceedings of IEEE Wireless Communications and Networking Conference (WCNC 02), vol. 2 (Mar 2002), pp. 522–525
13. G Durisi, S Benedetto, Performance evaluation of TH-PPM UWB systems in the presence of multiuser interference. IEEE Commun. Lett. 7(5), 224–226 (2003)
14. FC Zheng, SK Barton, On the performance of near-far resistant CDMA detectors in the presence of synchronization errors. IEEE Trans. Commun. 43(12), 3037–3045 (1995)
15. S Verdu, Minimum probability of error for asynchronous Gaussian multiple-access channels. IEEE Trans. Inf. Theory 32(1), 85–96 (1986)
16. R Lupas, S Verdu, Near-far resistance of multiuser detectors in asynchronous channels. IEEE Trans. Commun. 38(4), 496–508 (1990)
17. S Verdu, Computational complexity of optimum multiuser detection. Algorithmica 4(1–4), 303–312 (1989)
18. YC Yoon, R Kohno, Optimum multi-user detection in ultra-wideband (UWB) multiple-access communication systems, in Proceedings of IEEE International Conference on Communications (ICC 02), vol. 2 (New York, NY, USA, 2002), pp. 812–816
19. P Kaligineedi, VK Bhargava, Frequency-domain turbo equalization and multiuser detection for DS-UWB systems. IEEE Trans. Wirel. Commun. 7(9), 3280–3284 (2008)
20. JW Kao, SM Berber, Blind multiuser detector for chaos-based CDMA using support vector machine. IEEE Trans. Neural Netw. 21(8), 1221–1231 (2010)
21. X Wang, HV Poor, Iterative (turbo) soft interference cancellation and decoding for coded CDMA. IEEE Trans. Commun. 47(7), 1046–1061 (1999)
22. D Merkle, M Middendorf, Swarm intelligence and signal processing. IEEE Signal Process. Mag. 25(6), 152–158 (2008)
23. N Zhao, ZL Wu, YQ Zhao, TF Quan, A population declining mutated ant colony optimization multiuser detector for MC-CDMA. IEEE Commun. Lett. 14(6), 497–499 (2010)
24. H Liu, J Li, A particle swarm optimization-based multiuser detection for receive-diversity-aided STBC systems. IEEE Signal Process. Lett. 15, 29–32 (2008)
25. M Jiang, C Li, D Yuan, MA Lagunas, Multiuser detection based on wavelet packet modulation and artificial fish swarm algorithm, in Proceedings of IET Conference on Wireless, Mobile and Sensor Networks (Shanghai, China, 12–14 Dec 2007), pp. 117–120
26. XZ Gao, Y Wu, K Zenger, X Huang, A knowledge-based artificial fish-swarm algorithm, in Proceedings of IEEE 13th International Conference on Computational Science and Engineering (Hong Kong, China, 11–13 Dec 2010), pp. 327–332
27. YM Cheng, L Liang, SC Chi, Determination of the critical slip surface using artificial fish swarms algorithm. J. Geotech. Geoenviron. Eng. 134(2), 244–251 (2008)
28. J Ren, MS Lim, A novel equalizer structure for direct sequence ultra wideband (DS-UWB) system, in Proceedings of IEEE International Conference on Portable Information Devices (Orlando, FL, USA, 25–29 May 2007), pp. 1–5
29. N Boubaker, KB Letaief, Performance analysis of DS-UWB multiple access under imperfect power control. IEEE Trans. Commun. 52(9), 1459–1463 (2004)
30. AF Molisch, JR Foerster, M Pendergrass, Channel models for ultrawideband personal area networks. IEEE Wirel. Commun. 10(6), 14–21 (2003)
31. B Mielczarek, MO Wessman, A Svensson, Performance of coherent UWB Rake receivers with channel estimators, in Proceedings of IEEE Vehicular Technology Conference, vol. 3 (Orlando, FL, USA, 6–9 Oct 2003), pp. 1880–1884
32. AAM Saleh, R Valenzuela, A statistical model for indoor multipath propagation. IEEE J. Sel. Areas Commun. 5(2), 128–137 (1987)
33. J Luo, KR Pattipati, PK Willett, F Hasegawa, Near-optimal multiuser detection in synchronous CDMA using probabilistic data association. IEEE Commun. Lett. 5(9), 361–363 (2001)
34. Y Wang, MZ Bocus, JP Coon, Iterative successive interference cancellation for quasi-synchronous block spread CDMA based on the orders of the times of arrival. EURASIP J. Adv. Signal Process. 2011, Article ID 918046 (2011)
35. H Ma, Y Wang, An artificial fish swarm algorithm based on chaos search, in Proceedings of International Conference on Natural Computation, vol. 4 (Tianjin, China, 14–16 Aug 2009), pp. 118–121
Wolfram Demonstrations Project
Orbit Diagram of the Logistic Map
Plot the points (λ, x[n]), where x[0] = 0 and the x[n] are the iterates of the quadratic map x[n+1] = x[n]^2 + λ. Each λ leads to a distinct orbit; for example, for λ = -1.5, the orbit is 0, -1.5, 0.75, -0.9375, …. For constant λ, the set of iterates is infinite, but in this picture only a finite sample is shown. The whole plot is called the orbit diagram for the map. The set of points on the right that looks like a curve consists of stable points. In the middle the points orbit between two values. Both of these cases are examples of attractors.
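A minimal Python sketch of how such an orbit diagram can be generated (the parameter range, transient length, and sample size are illustrative choices, not those of the Demonstration):

```python
import numpy as np
import matplotlib.pyplot as plt

# Orbit diagram of the quadratic map x -> x^2 + lam, starting from x = 0:
# for each lam, discard a transient, then record a finite sample of iterates.
lams, xs = [], []
for lam in np.linspace(-2.0, 0.25, 800):
    x = 0.0
    for _ in range(200):        # let the orbit settle onto its attractor
        x = x * x + lam
    for _ in range(100):        # keep a finite sample of the orbit
        x = x * x + lam
        lams.append(lam)
        xs.append(x)

plt.plot(lams, xs, ",k")        # pixel markers
plt.xlabel("lambda")
plt.ylabel("x")
plt.show()
```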
John Leaves School To Go Home. He Walks 6 Blocks ...
1. John leaves school to go home. He walks 6 blocks north and then 8 blocks west. How far is John from the school?
2. Oscar's dog house is shaped like a tent. The slanted sides are both 5 feet long and the bottom of the house is 6 feet across. What is the height of his dog house at its tallest point, in feet and in meters? What are the internal angles of Oscar's dog house?
3.What are the Cartesian components of a vector force whose magnitude is 47.0 N pointing to the east of south?
4.What are the Cartesian components of a vector force whose magnitude is 47.0 N pointing to the north of west?
5. Three charged particles with charges q1 = 10 microcoulombs, q2 = -20 microcoulombs, and q3 = 30 microcoulombs are located at points P1 (-4, 1), P2 (3, 4), and P3 (12, -6), respectively. Find:
a) the net force experienced by the charge q2;
b) the electric field due to this distribution at the point P4 (3, -6).
6. A rod 16.0 cm long is uniformly charged and has a total charge of -24.0 mC. Determine the magnitude and direction of the electric field along the axis of the rod at a point 34.0 cm from its center.
7. A small plastic ball 2.00 g in mass is suspended by a 20.0 cm long string in a uniform electric field as shown in Figure P23.52.
If the ball is in equilibrium when the string makes a 15.0° angle with the vertical, what is the net charge on the ball?
8. The electric field everywhere on the surface of a thin spherical shell of radius 0.550 m is measured to be equal to 590 N/C and points radially toward the center of the sphere. (a) What is the net
charge within the sphere's surface?
9. A solid sphere of radius 40.0 cm has a total positive charge of 26.0 C uniformly distributed throughout its volume.
1. Calculate the magnitude of the electric field at 10.0 cm from the center of the sphere.
2. Calculate the magnitude of the electric field at 60.0 cm from the center of the sphere.
10. Consider a thin spherical shell of radius 19.0 cm with a total charge of 36.0 C distributed uniformly on its surface. Find the magnitude of the electric field (a) 12.0 cm and (b) 24.0 cm from the
center of the charge distribution.
11. A square plate of copper with 55.0 cm sides has no net charge and is placed in a region of uniform electric field of 70.0 kN/C directed perpendicular to the plate. Find (a) the charge density of
each face of the plate and (b) the total charge on each face.
12. A solid conducting sphere of radius 2.00 cm has a charge of 8.00 C. A conducting spherical shell of inner radius 4.00 cm and outer radius 5.00 cm is concentric with the solid sphere and has a
charge of -4.00 C. Find the electric field at r = 4.50 cm from the center of this charge configuration.
13. A solid conducting sphere of radius 2.00 cm has a charge of 8.00 C. A conducting spherical shell of inner radius 4.00 cm and outer radius 5.00 cm is concentric with the solid sphere and has a
charge of -4.00 C. Find the electric field at r = 3.00 cm from the center of this charge configuration.
Design Principles
The MathScape curriculum reflects the following principles about teaching and learning mathematics:
All students can be successful at learning mathematics
Humans naturally order, quantify, sort, and describe, and mathematics is a language that serves to organize and make sense out of our human experience. All children possess the ability to think and
learn mathematically.
Mathematics is meaningful to students when it is embedded within their own experience
When students are able to connect their learning to something that has meaning to them, they both understand it and retain it better than when the learning is disconnected. By relating mathematical
investigations to middle school students’ experiences, MathScape promotes meaningful learning for students.
Deep understanding of mathematics content is achieved best through drawing out students’ mathematical thinking and using their thinking as a basis for planning instruction
Students learn by constructing knowledge from an experience and relating that new knowledge to the existing body of knowledge they have. Within mathematics, this means that students must grapple with
concepts by formulating their own ideas and by examining, discussing and testing them. Students’ mathematical thinking is a vital component to MathScape lessons.
There is often more than one way to solve a problem, and it is important to examine and explore these alternative approaches
By analyzing different solution methods for a problem, teachers validate students’ different ways of thinking. This affirmation can motivate students and encourage further learning. In addition,
students gain flexibility in solving problems and develop a repertoire of mathematical methods from which to draw when faced with a new problem.
Studying mathematics topics in depth is preferable to addressing a breadth of coverage; mastery is achieved after sustained and varied work with a concept and the opportunity to apply a concept in
different contexts
A study of topics in depth involves:
a. allotting adequate time for students to build meaning;
b. presenting a topic in a variety of ways so that different learning styles are accommodated and, hence, more children gain understanding;
c. making connections to other areas of mathematics and to other disciplines. These circumstances allow for deep understanding to develop in students in ways that are not possible when many topics
are addressed at a cursory level.
Assessment both informs instruction and evaluates student learning
In order to base instruction on students’ understanding of the material, a teacher needs frequent formal and informal ways to assess that understanding. Curriculum materials can provide a variety of
methods for teachers to use in gathering such information. These methods include whole-class discussion, small-group work observed by the teacher, students’ narrative written work and mathematical
problem-solving. Curriculum materials must also provide evaluative tools to measure what students have learned.
A curriculum can serve as a learning vehicle for teachers as well as for students
When a curriculum presents interesting contexts and views for working with mathematical concepts and skills, there is the opportunity for both students and teachers to make new connections between
what may be familiar pieces of content.
local isomorphism
A local isomorphism in a presheaf category $PSh(S)$ is a morphism that becomes an isomorphism after passing to sheaves with respect to a given Grothendieck topology on $S$.
The collection of all local isomorphisms not only determines the Grothendieck topology but is precisely the collection of morphisms that are inverted when passing to sheaves. Hence local isomorphisms
serve to understand sheaves and sheafification in terms of the passage to a homotopy category of $PSh(S)$.
This is a particular case of the notion of reflective factorization system, applied to the sheafification reflector. It is discussed in more detail at category of sheaves:
In terms of the discussion at geometric embedding, local isomorphisms in $PSh(S)$ are precisely the multiplicative system $W$ that is sent to isomorphisms by the sheafification functor
$\bar{(-)} : PSh(S) \to Sh(S)$
which is left exact left adjoint to the full and faithful inclusion
$Sh(S) \hookrightarrow PSh(S) \,.$
A system of local isomorphisms on $PSh(S)$ is any collection of morphisms satisfying
1. local isomorphisms are a system of weak equivalences (i.e. every isomorphism is a local isomorphism and they satisfy 2-out-of-3);
2. a morphism $Y\to X$ is a local isomorphism if and only if its pullback
$\array{ U \times_X Y &\to& Y \\ {}^{\mathllap{loc iso}}\downarrow && \downarrow^{\mathrlap{\Leftrightarrow loc iso}} \\ U &\to& X }$
along any morphism $U \to X$, where $U$ is representable, is a local isomorphism.
Relation to Grothendieck topologies
Systems of local isomorphisms on $PSh(S)$ are equivalent to Grothendieck topologies on $S$.
The following indicates how choices of systems of local isomorphisms are equivalent to choices of systems of local epimorphisms. The claim follows by the discussion at local epimorphism.
Local epimorphisms from local isomorphisms
A system of local epimorphisms is defined from a system of local isomorphisms by declaring that $f : Y \to X$ is a local epimorphism precisely if $im(f) \to X$ is a local isomorphism.
Local isomorphisms from local epimorphisms
Given a Grothendieck topology in terms of a system of local epimorphisms, a system of local isomorphisms is constructed as follows.
A local monomorphism with respect to this topology is a morphism $f : A \to B$ in $[S^{op}, Set]$ such that the canonical morphism $A \to A \times_B A$ is a local epimorphism.
A local isomorphism with respect to a Grothendieck topology is a morphism in $[S^{op}, Set]$ that is both a local epimorphism as well as a local monomorphism in the above sense.
Relation to Lawvere-Tierney topologies
Recall that Grothendieck topologies on a small category $S$ are in bijection with Lawvere-Tierney topologies on $PSh(S)$, and that sheafification with respect to a Lawvere-Tierney topology is encoded in terms of monomorphisms in $PSh(S)$ which are dense with respect to the Lawvere-Tierney topology.
We have:
the dense monomorphisms are precisely the local isomorphisms which are also ordinary monomorphisms.
The sheafification functor which sends a presheaf $F$ to its weakly equivalent sheaf $\bar F$ can be realized using a colimit over local isomorphisms. See there.
Characterization and relation to sieves
Often one concentrates on the local isomorphisms whose codomain is a representable presheaf, i.e. those of the form
$A \to Y(U) \,,$
where $U$ is an object in $S$ and $Y$ is the Yoneda embedding. These come from covering sieves of a Grothendieck topology on $S$: for $U \in S$ and $\{V_i \to U\}_i$ a covering sieve on $U$, the
corresponding local isomorphism is the inclusion into $Y(U)$ of the presheaf which is the image of the joint injection map
$\sqcup_i Y(V_i) \to Y(U) \,.$
Using the fact that morphisms in a presheaf category are strict morphisms, so that image and coimage coincide, it is useful, with an eye towards generalizations from sheaves to stacks and ∞-stacks
(see in particular descent for simplicial presheaves), to say this equivalently in terms of the coimage: the local isomorphism corresponding to the covering sieve $\{V_i \to U\}$ is
$colim ( (\sqcup_i Y(V_i))\times_{Y(U)} (\sqcup_i Y(V_i)) \stackrel{\to}{\to} (\sqcup_i Y(V_i)) ) \to Y(U)$
Notice that in general these are not all the local isomorphisms with representable codomain (more generally there are hypercovers, where $(\sqcup_i Y(V_i))\times_{Y(U)} (\sqcup_i Y(V_i))$ is replaced in turn by one of its covers).
Notice also that local isomorphisms with representable codomain already determine the general local isomorphisms, using the fact that every presheaf is a colimit of representables (the co-Yoneda lemma) and that local isomorphisms/sieves are stable under pullback:
If $A \in PSh(S)$ is a local object with respect to local isomorphisms whose codomain is representable, then for every morphism $X \to Y$ of presheaves such that for every representable $U$ and every morphism $U \to Y$ the pullback $X \times_Y U \to U$ is a local isomorphism, the canonical morphism
$Hom(Y,A) \to Hom(X,A)$
is an isomorphism.
We may first rewrite trivially
$X \simeq X \times_Y Y$
and then use the co-Yoneda lemma to write (suppressing notationally the Yoneda embedding)
$Y \simeq colim_{U \to Y} U$
and hence rewrite $(X \to Y)$ as
$X \times_Y (colim_{U \to Y} U) \to colim_{U \to Y} U \,.$
Then using that colimits of presheaves are stable under base change this is
$(\colim_{U \to Y}(X \times_Y U)) \to colim_{U \to Y} U \,.$
Recall that by assumption the components $X \times_Y U \to U$ of this are local isomorphisms. Hence
$(Hom(Y,A) \to Hom(X,A)) = \left( \lim_{U \to Y} Hom(U, A) \to \lim_{U \to Y} Hom(X \times_Y U, A) \right)$
is a limit over isomorphisms, hence an isomorphism.
This is in section 16.2 of M. Kashiwara, P. Schapira, Categories and Sheaves (Springer, 2006).
See in particular exercise 16.5 there for the characterization of Grothendieck topologies in terms of local isomorphisms.
Simplify: (a1/2)1/2(ab1/2)
(a · 1/2) · (1/2) · (ab · 1/2)
= (a/2) × (1/2) × (ab/2)
Note how you are effectively multiplying a/1 × 1/2 and ab/1 × 1/2.
Now multiply everything: (a/2) × (1/2) × (ab/2) becomes (a × 1 × ab)/(2 × 2 × 2).
Therefore the result is (a²b)/8 (remember that 2³ is 8).
(a · 1/2) · (1/2) · (ab · 1/2)
Multiply across:
(a/2) × (1/2) × (ab/2)
a × 1 × ab = a²b
2 × 2 × 2 = 8
= (a²b)/8
Statistics - Urgent
Number of results: 20,733
Statistics (urgent)
do it
Sunday, November 18, 2012 at 4:24pm by dave
Urgent Art questions
That's why I put urgent, but I can take Urgent out of my next ones tomorrow ^.^
Tuesday, September 10, 2013 at 6:28pm by Gabby
urgent urgent urgent
nor do I understand the question. Chemical makeup: proteins, containing C, H, O, N, a few S, and some P.
Friday, February 8, 2013 at 9:26pm by bobpursley
Statistics (urgent)
Sunday, November 18, 2012 at 4:24pm by dave
Statistics (urgent)
do that thang
Sunday, November 18, 2012 at 4:24pm by dave
Monday, December 17, 2012 at 9:00pm by Gabby
urgent urgent urgent
what is the chemical make up of actin and myocin
Friday, February 8, 2013 at 9:26pm by zachary
statistics.. URGENT!! PLZ!!
Tuesday, November 6, 2012 at 5:33pm by jorge
math urgent urgent urgent
perpendicular to line -2x+y=7 contains the point (-4,-2) the equation of line is
Friday, November 30, 2012 at 11:56pm by zach w
oops--urgent urgent urgent
know, not now
Friday, February 8, 2013 at 9:26pm by DrBob222
You're very welcome. I suggest you Google racially segregated schools to find facts and statistics.
Wednesday, February 6, 2008 at 8:29pm by Ms. Sue
math urgent urgent urgent
(12,0) and (0,6) center at (-2,2) radius 3 domain all reals, range all reals >= -9 (-2,-25)
Saturday, December 1, 2012 at 12:37am by Steve
urgent urgent urgent
A freshman studying medicine at an Ivy League College is a part of his class crew team and exercises regularly. After a particularly strenuous exercise session, he experiences severe cramps in his
thighs and pain in his biceps. Explain the process the muscles go through.
Wednesday, February 6, 2013 at 9:11pm by zachary
math urgent urgent urgent
1) Let f(x) be a polynomial function. Explain how to use the factor theorem to check whether (x-c) is a factor of f(x). 2) Use synthetic division to factor x^3-2x^2-9x+18 completely.
Friday, November 30, 2012 at 9:30pm by zach w
Which of the following refers to when a researcher gathers data from a sample and uses the statistics generated to reach conclusions about the population from which the sample was taken? inferential
statistics holistic statistics conclusive statistics descriptive statistics is...
Thursday, May 20, 2010 at 8:55pm by Thara!
statistics Urgent please!!!!!!!!!!!!!!!!!!!!!
A sample of n = 5 scores has a mean of M = 12. What is Σx for this sample?
Saturday, May 28, 2011 at 2:13pm by Dottie
math urgent urgent urgent
-2x+y=7 can be written as y=2x+7, so it has slope 2; the perpendicular therefore has slope -1/2. Now you have a point and a slope: y+2 = -1/2 (x+4), and you can massage that as you will.
Friday, November 30, 2012 at 11:56pm by Steve
urgent urgent urgent
A freshman studying medicine at an Ivy League College is a part of his class crew team and exercises regularly. After a particularly strenuous exercise session, he experiences severe cramps in his
thighs and pain in his biceps. •Explain the chemical process that occurred in ...
Thursday, February 7, 2013 at 11:42am by zachary
statistics Urgent please!!!!!!!!!!!!!!!!!!!!!
(sum of all five values of x)/5 = 12, so the sum is 60
Saturday, May 28, 2011 at 2:13pm by Damon
Statistics PLZ HELP Urgent!!!!
PLZ Help me with the following questions! very urgent! The marketing department of a company would like to introduce 12 monthly special products in the coming 12 months. If these monthly special
products are selected randomly from 24 products, find the probability that two ...
Tuesday, March 19, 2013 at 11:56am by Alberta
Which of the following usually gets a source note on the same page it appears? a. Quote b. Photo c. Paraphrased Passage d. Statistics please help me it urgent!! thank you!
Monday, October 12, 2009 at 2:54pm by shakib
math urgent urgent urgent
6x+2y=12: find the intercepts. Circle equation x^2+y^2+4x-4y-1=0: graph it. 2) Find the domain and range of f(x)=x^2-4x-5. Find the vertex of f(x)=x^2+4x-21.
Friday, November 30, 2012 at 11:56pm by zach w
math urgent urgent urgent
x+2y=12: find the intercepts. Circle equation x^2+y^2+4x-4y-1=0: graph it. 2) Find the domain and range of f(x)=x^2-4x-5. Find the vertex of f(x)=x^2+4x-21.
Saturday, December 1, 2012 at 12:37am by zach w
English language
what could be an example of something that is urgent but is not important? and something that is important but not urgent? these questions refer to daily actions or well any actions, i have
absolutely no idea how to answer that. Don't urgent and important mean almost ...
Tuesday, November 29, 2011 at 2:32pm by Caroline
math URGENT
please factor 2.75x^2-7x-120
Wednesday, December 19, 2012 at 7:51pm by TyLeR c URGENT
3) List 3 strategies to help escape riding in a vehicle with an impaired driver.
Monday, October 7, 2013 at 10:45pm by Urgent
Sorry, I only do the non-URGENT ones
Thursday, March 11, 2010 at 4:38pm by drwls
algebra 1...urgent
If your question is so urgent, what have you done to try to solve these problems?
Wednesday, March 5, 2008 at 9:24pm by Ms. Sue
chemistry urgent urgent
Why are you having trouble with this? What is it you don't understand?
Monday, December 10, 2012 at 9:00pm by DrBob222
Urgent language arts
It's urgent i got 5 min.
Monday, December 17, 2012 at 11:48pm by Gabby
Venn Diagram (URGENT)
What are 6 similarities between Canada and Haiti? 1. French Influence 2. 3. 4. 5. 6. Please someone help, this is urgent. thank you.
Thursday, April 30, 2009 at 12:56am by John
Math *URGENT
Wow, classic case of urgent "homework dumping" I will be very happy to check your answers.
Thursday, December 20, 2012 at 9:37pm by Reiny
chemistry urgent urgent
do compare the differentes
Monday, December 10, 2012 at 9:00pm by zachary
chemistry urgent urgent
Answered in another post.
Monday, January 28, 2013 at 10:51am by Devron
for y(t) do you use the value in the x or y direction?
Sunday, October 6, 2013 at 12:54pm by URGENT
or could you show me your full working?
Sunday, October 6, 2013 at 12:54pm by URGENT
Math urgent
explain how to compute the xy term of the product (3x-4y)^2? if possible with solution please.... its urgent
Sunday, September 7, 2008 at 5:01pm by Ayushma
urgent!!!! urgent!!!!!! have to go to bruins game at 5:45. what's the magical figures in between the wrapping on a mummy?
Thursday, January 29, 2009 at 5:06pm by lilgeeny245
The website also says the hyphen is unnecessary when it follows the modified word.
Friday, April 12, 2013 at 10:09pm by Ms. Sue
GEOGRAPHY (URGENT!!!)
I NEED HELP!!! this is urgent!!! Human-Environmental Interactions: how did the people in Khartoum modify the environment? how are the people in Khartoum being influenced by the physical landscape/environment? sorry my grammar sucks i have other questions but i really extremely ...
Thursday, November 18, 2010 at 7:50pm by NEED HELP URGENT!!!
VERY URGENT MATH~!!!!!!!!
ik i shouldn't put a time limit and im srry but this is really urgent i didn't know how late it was DX PLZZ HELP!
Friday, January 11, 2013 at 9:15pm by Gabby
Physics urgent help have a test!!!
And your work is where? You are posting "urgent" problems under different names. Very suspicious, we really frown on cheating.
Saturday, April 9, 2011 at 9:01pm by bobpursley
"on the same line"?? What in the world does that mean? Make sure you are clear on what type of word each one is. Then re-think your answer.
Friday, April 12, 2013 at 10:09pm by Writeacher
History reflections URGENT
HELP! Trying to find information on Michael Kaznelson author of Remembering the Soviet State: Kulak Children and Dekulakisation. This is urgent so thanks for all your help.
Tuesday, June 14, 2011 at 8:22am by Em
Statistics - Urgent
This doesn't seem to fit a matched pairs design with the information given. Usually pairs are matched on some factor (like age, gender, height) within a group of subjects.
Wednesday, May 26, 2010 at 1:02pm by MathGuru
GEOGRAPHY (URGENT!!!)
if you dont understand the questions im sorry how did the people in Khartoum modify the environment? eg. modifying like heating and cooling buildings for comfort how is the people in khartoum being
influence by the physical landscape/environment? eg. people depend on the ...
Thursday, November 18, 2010 at 7:50pm by NEED HELP URGENT!!!
Legal Studies
Hi beautiful people! I need help finding statistics on the rates of incarciration/fines or other punishments for person convicted of victimless crime in Australia. Preferably Queensland. This is
urgent, I simply cannot find the information and I would REALLY appreciate your ...
Monday, September 5, 2011 at 6:42am by Em
What are differences between Continental Drifts and Plate Tectonics? I need at last 4 for each. Please help its urgent!!!!! Thanks a bunch! Ellie
Monday, November 24, 2008 at 8:58am by Ellie
spanish!!...urgent!...please help
how did the geography influence the migration and exploration of colombia? venezuela? i need websitres for this urgent...please help asap... :(
Thursday, December 6, 2007 at 6:58pm by maggie
chemistry urgent urgent
thank you
Tuesday, January 15, 2013 at 10:37am by zachary
Chem Urgent Help
can u plz figure out the answer for me .. its very urgent.. thnks .. i still didnt get it thnks a lot Dr bob
Wednesday, November 12, 2008 at 10:18pm by Sarah
It doesn't need a hyphen because "well" is an adverb which modifies the adjective (participle) "behaved" -- a hyphen here would be superfluous!
Friday, April 12, 2013 at 10:09pm by Writeacher
ENGLISH URGENT !!!!!!!
What are good things to buy for a 17 year old boy that are not that expensive???? this is really urgent for a part of my english project..... pleeeeeeeeeeeeeeeeeeeese help!!!!!!!
Thursday, December 20, 2012 at 7:29pm by Jules
urgent urgent urgent
http://en.wikipedia.org/wiki/Actin You can type your other word into google. Google tried to correct me to myosin and i now there are several mycins.
Friday, February 8, 2013 at 9:26pm by DrBob222
John said that he had been working for the highway department for about 9.75 years. Which of the following could be the actual amount of time, in years, that John had been working for the highway
department. f) 9.25 g) 9.35 h) 9.65 i) 9.95
Wednesday, June 1, 2011 at 2:23am by URGENT URGENT URGENT
Statistics play s a major role in the health care field in today's environment. Please provide how statistics can be important and influential in the nursing profession. Provide at least two examples
where you ahve observed statistics to contribute positively to the needs of ...
Tuesday, May 18, 2010 at 3:44pm by cortez
Grade 12 Math-Urgent!
yes, sure... what are these variables? d C and then N, I have never heard of this before? Can you please clarify just a tad? Thanks and much appreciation to you!
Wednesday, January 9, 2008 at 9:34pm by Math genius! Urgent!
GR.10 CIVICS!!!!!URGENT
CAN SOMOENE PLEASE GIVE ME A LINK TO A PICTURE THAT HAS TO DO WITH OR IS RELATED TO ABORIGINALS GAINING THE RIGHT TO VOTE IN CANADA(1960).PLEASE! ITS URGENT.THANKS
Saturday, May 30, 2009 at 6:56pm by LALA
Thx Ms. Sue I was confused with the question to, mainly because well...many different have different opinions but also a hyphin can be used in many ways.
Friday, April 12, 2013 at 10:09pm by Gabby
What is an example of a research problem at your organization that would benefit from the use of either descriptive statistics or probability distribution statistics?
Monday, October 11, 2010 at 3:10pm by Kathy
urgent urgent algebra / precalculas
e^(0.03k) = 1.4; 0.03k·ln e = ln 1.4; 0.03k = 0.33647; k ≈ 11.22
Friday, March 22, 2013 at 6:24pm by Henry
Earth science..... help urgent
hi can anyone help me? my problem is that i need to write a autobiography about the "LIFE CYCLE OF ROCKS". andf i have no idea how to start it.....plz help me....its urgent....give me some clues how
to start !!!!
Wednesday, December 17, 2008 at 6:37pm by veronica
physics urgent
Urgent? Is the flare going down? height= vi*time-1/2 g t^2, here the downard velocity is zero initially, so figure time. WEll again, horizontal distance= vihorizontal*timeinair
Friday, April 1, 2011 at 2:32pm by bobpursley
statistics Urgent please!!!!!!!!!!!!!!!!!!!!!
Please only post your questions once. Repeating posts will not get a quicker response. In addition, it wastes our time looking over reposts that have already been answered in another post. Thank you.
See you previous post.
Saturday, May 28, 2011 at 2:13pm by PsyDAG
statistics page on jiskha
Super!!!!! That poor child looks as befuzzled as I feel when confronted with statistics. Besides, statistics are illogical <G>
Tuesday, September 9, 2008 at 11:48am by GuruBlue
Find the number of solutions to the equation 1/a+1/b+1/c+1/d=1 where a, b, c, d are positive integers and a≤ b≤ c≤ d.
Thursday, April 18, 2013 at 4:09am by HELP ME...uRGENT!
URGENT chemistry
My chemistry is rusty, but since this is urgent, try this... pV=nRT 45.4 L (p)= 0.625 (R) (-24 +273) R= gas constant T has to be changed into degrees Kelvin then solve for p I think it's right
Sunday, September 7, 2008 at 12:36pm by anonymous
Trigonometry Urgent!
I am writing a paper on the 8 trigonometric identites but can't find any information on them. Please does anyone know of any websites that would have things like their history, development,
applications in ancient times, origins, etc. Please help. Urgent!!!!
Thursday, March 8, 2007 at 10:40pm by kate 316
statistics anova
a professor is interested in whether there is a differnce in counseling student's statistics competency scores among 1) those who have never taken any statistics ourse, 2) those who have only taken
an undergraduate statistics course, 3)
Monday, February 27, 2012 at 3:04pm by caryn
Can someone help me on this? its really urgent. what is Foil Character and Background Character? please don't give me the stage meaning just fiction. thanks. its really urgent.
Thursday, January 8, 2009 at 9:52am by Akitsuke
chemistry urgent urgent
Discuss, in detail, the role played by DNA and RNA in genetic diseases. What are the breakthroughs, if any, in the treatment/management of genetic diseases?
Tuesday, January 15, 2013 at 10:37am by zachary
chemistry urgent urgent
... the hiv virus that causes aids destroys the immune system in the body. classify the hiv virus and explain the statement.
Monday, January 28, 2013 at 10:51am by zachary
Plz urgent
I don't see any chemistry tutors online right now. Did you label it properly, so a chemistry tutor will find it? As far as I know "Plz urgent" is not a school subject!
Saturday, February 23, 2013 at 7:42am by Writeacher
urgent urgent algebra / precalculas
1. Given a function,H(x)explain how to find the components f(X)AND g(x) such that f(g(x)) = H(x) 2. let H(x)=(2x+5)^4 find the function f and , so that f(g(X))=H(x)
Sunday, March 24, 2013 at 10:41am by zachary
math urgent urgent urgent
if f(c) = 0, then x-c is a factor. For f(x) = x^2 - 2x^2 - 9x + 18: why do you have two x^2 terms? I will assume the first is x^3. If so, we don't need the factor theorem for this one; grouping is obvious: x^3 - 2x^2 - 9x + 18 = x^2(x-2) - 9(x-2) = (x-2)(x^2 - 9) = (x-2)(x+3)(x-3)
Friday, November 30, 2012 at 9:30pm by Reiny
Maths *have answer, want method* Urgent!
Don't know about theorem, but here is method. Z = (score-mean)/SD Find table in the back of your statistics text labeled something like "areas under normal distribution" to find the proportion/
probability related to your Z scores. Multiply by 100.
Saturday, April 13, 2013 at 12:40pm by PsyDAG
urgent urgent algebra / precalculas
I would let g(x) = 2x+5 , and f(x) = x^4 check: f(x) = x^4 f(g(x)) = f(2x+5) = (2x+5)^4
Sunday, March 24, 2013 at 10:41am by Reiny
urgent urgent algebra
find the composite function fog f(X) = 1/X-5,g(x)-6/x question 2 find inverse, domain range and asymatotes of each function f(x)=3+e^4-x can some one help me please been stuck on these for 2 days
need help solving thank you
Tuesday, March 19, 2013 at 11:44pm by zachary
URGENT Statistics
In performing two-tailed hypothesis test with v=20, you obtain a t-statistic of 2.41. The p-value is: Between 2% and 5%. I need help understanding how this range was gotten. I know that the
t-statistic is between t.025 and t.01 and since this is a 2-tailed test these t-values ...
Friday, December 10, 2010 at 1:19pm by Rhea
Trigonometry Paper Urgent!!!
I am writing a paper on the 8 trigonometric identites but can't find any information on them. Please does anyone know of any websites that would have things like their history, development,
applications in ancient times, origins, etc. Please help. Urgent!!!!
Friday, March 9, 2007 at 12:03pm by kate 316
urgent alg 1
add or subtract 2 2 (2t -8t)+ (8t + 9t) second question 3 2 3 (t +8t ) + (-3t ) please helpp its urgent i have the answer butt im not suree pleasee helpp
Wednesday, February 9, 2011 at 11:46pm by kimberly
In the following question, identify which choices would be considered inferential statistics and which would be considered inferential statistics. 1. of 500 randomly selected people in new york city,
210 people had O+ blood. a) "42 percent of the people in NYC have O+ blood" ...
Saturday, November 16, 2013 at 8:55pm by kitty
MATHS! URGENT!
describe how each sequence is generated and write down the next two terms: a) 40 41 43 46 50 55 b)90 89 87 84 80 75 c)1 3 7 13 21 31 d) 2 6 12 20 30 42 PLEASE HELP its urgent!
Wednesday, September 29, 2010 at 10:20am by Ami
Which of the following statements is not a general rule for using statistics in a classroom speech? A The more statistics the better. Use statistics sparingly Make sure information is up to date
Round off long numbers.
Thursday, May 10, 2012 at 12:28pm by Anonymous
An essential part of scientific thinking is not only how to use statistics correctly, but also how to identify the misuse of statistics. Our textbook authors suggest that students should: Answer
distrust all statistics because they convey a false impression of certainty and ...
Tuesday, April 23, 2013 at 4:11pm by J.J
English (urgent)
can you give me 4 examples of informative texts..or websites where can I look for informative texts--4 paragraph informative texts...(i need it now)really urgent...please...
Sunday, February 10, 2013 at 9:06pm by daphnie
5. The table below shows Psychology exam scores, Statistics Exam scores, and IQ scores for a random sample of students. What can you observe in the relationship between IQ and psychology, psychology
and statistics, and IQ and statistics? Using a web-calculator, obtain the ...
Tuesday, November 29, 2011 at 12:12am by shirley
urgent urgent algebra / precalculas
rewrite each in exponential expression as a logarithmic expression log, x=4 log 5=2 log b=x
Friday, March 22, 2013 at 6:12pm by zachary
urgent urgent algebra
assuming a typo, I see f(x) = 1/(x-5), g(x) = 6/x. So, f◦g = f(g) = 1/(g-5) = 1/((6/x)-5) = 1/((6-5x)/x) = x/(6-5x). Check for some value, say, x=2: g(2) = 3, f(3) = -1/2, and (f◦g)(2) = 2/(6-5*2) = 2/-4 = -1/2. For the second one: y = 3+e^(4-x), y-3 = e^(4-x), ln(y-3) = 4-x, x = 4 - ln(y-3), so f^-1(x) = 4 - ln(x-3)
Tuesday, March 19, 2013 at 11:44pm by Steve
statistics.urgent please help..
The sample mean is a point estimate of the population mean μ. You have two sample means, which are 7.23 and 6.49. Find the difference between the two means for your point estimate of µ1 - µ2.
Thursday, May 23, 2013 at 10:32pm by MathGuru
statistics help
For complete post, please see Statistics help with Statistics help Part 2 and Part 3
Monday, July 28, 2008 at 11:24am by Student
The call letter of a radio station must have 4 letters.The first letter must be a K or a W. How many different station call setters can be made if repetitions are not allowed? If repetitions are
Sunday, June 3, 2012 at 7:21pm by Ellen ""Urgent""
urgent urgent algebra
rewrite each in exponential expression as a logarithmic expression a)log,x=4 b)log 5=2 c)log ,8=x
Thursday, March 21, 2013 at 5:24pm by zachary
Ancient History URGENT Akhenaten
HI Are there any good journal articles on Akhenatens personal aims? What he hoped to achieve. So far all I have is one sentance in a book saying he wanted to do away with the power of the priests of
Amun. This is really urgent. My state library is offline and not responding so...
Wednesday, May 25, 2011 at 8:06am by HELP plz
oops typo, In the following question, identify which choices would be considered descriptive statistics and which would be considered inferential statistics.
Saturday, November 16, 2013 at 8:55pm by kitty
1. of 500 randomly selected people in new york city, 210 people had O+ blood. a) "42 percent of the people in NYC have O+ blood" Is the statement descriptive statistics or inferential statistics? b)
"58 percent of the people in NYC do not have type O+ blood" Is the statement ...
Saturday, November 16, 2013 at 10:42pm by kitty
statistics page on jiskha
Does anyone find the picture as funny as I do? http://www.jiskha.com/math/statistics/ It's a baby holding a statistics textbook.
Tuesday, September 9, 2008 at 11:48am by Leo
Right, this is descriptive statistics, not inferential. http://sociology.about.com/od/Statistics/a/Descriptive-inferential-statistics.htm
Tuesday, January 7, 2014 at 10:49am by PsyDAG
Physics, URGENT!!!
I don't have your textbook. You should look for the diffusion coefficient yourself, since your problem is URGENT. Call it D. It should have dimensions of something like cm^2/s. The transport rate of
the virus will be D*(density2 - density1)*Area/L = 2.17*10^-13 g/sec Solve for...
Monday, November 29, 2010 at 9:44pm by drwls
Statistics (urgent)
A beverage comopany uses a machine to automatically fill 1-liter bottles. Assume that the population of volumes is normally distributed. The company wants to estimate the mean volume of water to
within 1 ML. Determine the minimum sample size required to construct a 95% ...
Sunday, November 18, 2012 at 4:24pm by Erica
Mathematical statistics question. Under what conditions does the function \(f\left(x_1,x_2,\ldots,x_n\right)\) have the error \[\sigma_f=\sqrt{\left(\frac{\partial f}{\partial x_1}\sigma_{x_1}\right)^2+\left(\frac{\partial f}{\partial x_2}\sigma_{x_2}\right)^2+\ldots+\left(\frac{\partial f}{\partial x_n}\sigma_{x_n}\right)^2}\;?\]
Notably the delta method http://en.wikipedia.org/wiki/Delta_method#Note mentions this, alongside what looks like gradient functions. \(\sigma_f\) resembles the magnitude of a gradient, but that's
not inclusive of variance. So, in other words, I have no idea what's going on.
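For reference, the condition being asked about is exactly the first-order (delta method) regime. If \(f\) is differentiable near the means and the \(x_i\) are independent with small uncertainties \(\sigma_{x_i}\), a first-order Taylor expansion gives \(f \approx f(\mu) + \sum_i \frac{\partial f}{\partial x_i}(x_i - \mu_i)\); taking the variance of both sides then yields \[\sigma_f^2 \approx \sum_i \left(\frac{\partial f}{\partial x_i}\right)^2 \sigma_{x_i}^2,\] which is the formula above. When the inputs are correlated, cross terms \(2\,\frac{\partial f}{\partial x_i}\frac{\partial f}{\partial x_j}\operatorname{Cov}(x_i, x_j)\) must be added, and when the \(\sigma_{x_i}\) are not small relative to the curvature of \(f\), higher-order terms matter and the formula is only a rough approximation.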
Me too. I have connections with people who do statistics, and I've done a little data management myself, but I don't know about this. Similar format to what I know, but...
At this rate I'll never be in analysis research. :( That's okay, though. I'm fine with being a student forever.
None of the really good people are online I think. I'll ask around.
I don't go around apologizing for everything I can't do; I don't see why you are.
I usually come, answer, and then the person goes away understanding the question. But that hasn't happened here.
I usually read the textbook and learn something new, but that hasn't happened here either. XD These things are out of my control. The best I can do is correct for it, but I'm definitely not
apologizing for it.
Hey, sorry about not be able to help. Post a link in the chats occasionally. Maybe a good person will see it.
Is it against etiquette to just repost?
@myininaya @across @satellite73 can any of you help with this? Or know who can?
They're HERE??
Yes, noradetzky, it is. However, hopefully someone will see this tomorrow. The weekend is usually kinda dead around here.
No, but they are at least pinged now. I don't ping them very often, so this will probably catch their attention.
@LagrangeSon678 @mertsj @zarkon @jim_thompson5910 @Chlorophyll Please help? Man. Feels like I'm calling up powerful unholy forces. Nice.
I hope the users here aren't accurately described by "powerful" and "unholy". ;D
err...I was being figuratively. Please that statement be figurative and not Literal.
that being literal....ugh.
Indirect Estimation of the Comparative Treatment Effect in Pharmacogenomic Subgroups
Evidence of clinical utility is a key issue in translating pharmacogenomics into clinical practice. Appropriately designed randomized controlled trials generally provide the most robust evidence of
the clinical utility, but often only data from a pharmacogenomic association study are available. This paper details a method for reframing the results of pharmacogenomic association studies in terms
of the comparative treatment effect for a pharmacogenomic subgroup to provide greater insight into the likely clinical utility of a pharmacogenomic marker, its likely cost-effectiveness, and the
value of undertaking the further (often expensive) research required for translation into clinical practice. The method is based on the law of total probability, which relates marginal and
conditional probability. It takes as inputs: the prevalence of the pharmacogenomic marker in the patient group of interest, prognostic effect of the pharmacogenomic marker based on observational
association studies, and the unstratified comparative treatment effect based on one or more conventional randomized controlled trials. The critical assumption is that of exchangeability across the
included studies. The method is demonstrated using a case study of cytochrome P450 (CYP) 2C19 genotype and the anti-platelet agent clopidogrel. Indirect subgroup analysis provided insight into
relationship between the clinical utility of genotyping CYP2C19 and the risk ratio of cardiovascular outcomes between CYP2C19 genotypes for individuals using clopidogrel. In this case study the
indirect and direct estimates of the treatment effect for the cytochrome P450 2C19 subgroups were similar. In general, however, indirect estimates are likely to have substantially greater risk of
bias than an equivalent direct estimate.
Citation: Sorich MJ, Coory M, Pekarsky BAK (2013) Indirect Estimation of the Comparative Treatment Effect in Pharmacogenomic Subgroups. PLoS ONE 8(8): e72256. doi:10.1371/journal.pone.0072256
Editor: Marie-Pierre Dubé, Universite de Montreal, Canada
Received: April 16, 2013; Accepted: July 12, 2013; Published: August 27, 2013
Copyright: © 2013 Sorich et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction
in any medium, provided the original author and source are credited.
Funding: Financial support for this study was provided by grants from the National Heart Foundation of Australia [grant number G11A5902] and the National Health and Medical Research Council of
Australia [grant number 1028492]. This research was also supported by the Victorian Government’s Operational Infrastructure Support Program. The funders had no role in study design, data collection
and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
An important element of pharmacogenomics is the use of genomic information (genetic variation and gene expression) to enable stratified or personalised medicine. In particular, there is great
interest in use of pharmacogenomic markers to guide medical decisions regarding the best choice of therapy. Evidence of clinical utility for a given marker is a key issue in translating
pharmacogenomics into clinical practice [1] and the extent to which comparative treatment effect differs between subgroups defined by the marker is an important component of assessing clinical
utility. We define clinical utility here as the improvement in clinical outcomes (i.e., evidence of health gain) resulting from use of a pharmacogenomic test [2]. We exclude from the concept of
clinical utility the dimension of cost effectiveness (value for money) of the pharmacogenomic marker in producing the health gain, although we discuss the application of the method to
pharmacoeconomic modelling.
Appropriately designed randomised controlled trials (RCTs) can provide robust evidence of the relationship between treatment effect and pharmacogenomic marker status [3]. However, RCT evidence is not
always available. Association studies of pharmacogenomic markers are much more common but the results of such studies are less useful for providing insight of the clinical utility. Pharmacogenomic
association studies are typically observational cohort or case-control studies which assess the association between a pharmacogenomic marker and clinical/surrogate outcomes for a specific patient
population on a specific treatment. Typically the results of a pharmacogenomic association study will highlight that individuals with one value for the marker are at higher risk of an event when
using a specific drug, compared to individuals who have a different value for the marker. However, this is generally insufficient to inform whether the pharmacogenomic marker identifies subgroups
with clinically important and statistically significant differences in comparative treatment effects.
This paper describes the mathematical basis and assumptions of a method for indirectly estimating comparative treatment effect for subgroups defined by a pharmacogenomic marker based on data commonly
available for the patient population of interest: pharmacogenomic association studies, the prevalence of the marker, and treatment effect in the unstratified population. A case study for the use of
this method is presented, based on the cytochrome P450 (CYP2C19) genotype subgroup analysis of the RCT comparing ticagrelor and clopidogrel for the prevention of cardiovascular (CV) events for
individuals with acute coronary syndrome (ACS). Evidence generated using this approach is not a substitute for direct evidence from an RCT; however, combined with a sensitivity analysis, this
indirect method can provide insight into whether the pharmacogenomic marker is likely to have clinical utility and/or be cost-effective, and hence the value of undertaking further research.
The general approach developed below is to construct a hypothetical trial that embodies the known characteristics of the treatment and pharmacogenomic marker – the overall treatment effect
unstratified by the marker, the marker effect in each study arm, and the distribution of the marker. The comparative treatment effect for the marker subgroups is estimated by demonstrating that only
specific values of the treatment effect for the subgroups will be consistent with the set of treatment and marker characteristics specified.
If an appropriately designed RCT, comparing treatments α and β, were available in which the pharmacogenomic marker status for participants is known, a subgroup analysis may be undertaken on the basis of the marker. For simplicity it is assumed here that the marker only has two values (A and A′; e.g. corresponding to positive/negative, high/low, mutated/wildtype, carriage of allele/no carriage of allele) and that the outcome of interest is a binary event (e) that has a probability (P) of occurring over a specified time period. For each marker subgroup the risk ratio for the comparative treatment effect may be directly estimated from such an RCT:

\[RR^{\alpha\beta}_{A} = \frac{P[e \mid A, \alpha]}{P[e \mid A, \beta]}, \qquad RR^{\alpha\beta}_{A'} = \frac{P[e \mid A', \alpha]}{P[e \mid A', \beta]} \qquad (1)\]

As indicated by equation 1, the information derived from such a trial would be sufficient to determine the choice of therapy (α or β) for each subgroup that will minimize the risk of the event. However, such trials are not always available. Therefore, the specific goal of the analysis presented in this paper is to indirectly estimate \(RR^{\alpha\beta}_{A}\) and \(RR^{\alpha\beta}_{A'}\).
A common form of evidence for a pharmacogenomic marker is an association study. Data from an association study (or meta-analysis of association studies) provides an estimate of the risk ratio of an outcome between individuals with different values of the marker, for individuals using treatment α:

\[RR^{A/A'}_{\alpha} = \frac{P[e \mid A, \alpha]}{P[e \mid A', \alpha]} \qquad (2)\]

A similar estimate (\(RR^{A/A'}_{\beta}\)) may be available for individuals using an alternative treatment β. With this information, a prescriber can advise a patient of his or her prognosis given the use of either drug. However, this information is insufficient to advise the patient as to the optimum choice of therapy, that which minimizes P[e]. Specifically, if \(RR^{A/A'}_{\alpha} > 1\) it does not follow that patients with the marker value A should not be treated with therapy α, which could still be more effective compared to alternative treatment options (e.g. β).
In addition to estimates of \(RR^{A/A'}_{\alpha}\) and \(RR^{A/A'}_{\beta}\) from association studies, it is assumed that an estimate of the treatment effect (\(RR^{\alpha\beta}\)) is available from a conventional RCT (or meta-analysis of RCTs) in which the cohort is not stratified for the marker of interest. Alternatively, \(RR^{\alpha\beta}\) may be based on an indirect treatment comparison of RCTs with a common comparator, although this may lead to an increased risk of bias [4], [5]. Third, it is assumed that data are available on the prevalence of the marker, P[A], in patients who have the condition that will be treated with α or β. This information is generally available from the association studies but may also be sourced elsewhere. It is assumed that the prevalence of the marker is balanced between arms of the hypothetical trial.
The probability of the clinical outcome in the unstratified cohort is estimated as the weighted average of the probabilities of the clinical outcome in the pharmacogenomic subgroups, using the law of total probability, which relates marginal probability and conditional probability:

\[P[e \mid \alpha] = P[e \mid A, \alpha]\,P[A] + P[e \mid A', \alpha]\,P[A'] \qquad (3)\]
Combining equations 2 and 3 leads to the following formulas for indirectly estimating the risk of the event in the pharmacogenomic subgroups (A and A′) for treatment α:

\[P[e \mid A', \alpha] = \frac{P[e \mid \alpha]}{RR^{A/A'}_{\alpha}\,P[A] + P[A']}, \qquad P[e \mid A, \alpha] = \frac{RR^{A/A'}_{\alpha}\;P[e \mid \alpha]}{RR^{A/A'}_{\alpha}\,P[A] + P[A']} \qquad (4)\]

Calculation of the risk of the event in the pharmacogenomic subgroups for treatment β may be similarly undertaken. Subsequently, using the relationship described in equation 1, the comparative treatment effect for the subgroups defined by the pharmacogenomic marker may be indirectly estimated.
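Dividing the equation 4 expressions for treatment α by the corresponding expressions for treatment β shows that the baseline risks cancel, giving a convenient closed form (our rearrangement of the above, shown for orientation):

\[RR^{\alpha\beta}_{A'} = RR^{\alpha\beta} \cdot \frac{RR^{A/A'}_{\beta}\,P[A] + P[A']}{RR^{A/A'}_{\alpha}\,P[A] + P[A']}, \qquad RR^{\alpha\beta}_{A} = \frac{RR^{A/A'}_{\alpha}}{RR^{A/A'}_{\beta}}\, RR^{\alpha\beta}_{A'}.\]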
Credible intervals (analogous to confidence intervals) for the pharmacogenomic subgroup treatment effects, and the statistical inference on the difference between subgroup treatment effects, may be estimated using Monte Carlo simulation. This approach essentially estimates the uncertainty of the outputs (\(RR^{\alpha\beta}_{A}\) and \(RR^{\alpha\beta}_{A'}\)) based on the collective uncertainty of the inputs (\(RR^{A/A'}_{\alpha}\), \(RR^{A/A'}_{\beta}\), \(RR^{\alpha\beta}\), and P[A]). Thus, information on the distribution of these parameters (e.g. based on the 95% confidence interval) needs to be available. Typically risk ratio estimates are represented by a lognormal distribution and probabilities by a beta distribution [6]. Monte Carlo simulation involves randomly drawing values from the distributions of the input variables and calculating the output variable. This process is repeated a large number of times (e.g. 10,000), producing the distribution of the output variable. Assessment of whether the difference between subgroups is statistically significant (a statistical test of interaction) may also be made [7]. However, care must be taken in interpreting the statistical significance due to the risk of bias inherent in the indirect estimation.
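As a rough illustration of this Monte Carlo step, here is a minimal sketch in Haskell (our own code, not the paper's File S1 spreadsheet; all function names are ours, and it assumes the clopidogrel case-study inputs described below, log-normal risk-ratio sampling, and no marker effect on treatment α):

import Data.List (sort)
import System.Random (mkStdGen, randomRs)

-- Standard normal draws via the Box-Muller transform.
normals :: Int -> [Double]
normals seed = go (randomRs (1.0e-12, 1.0) (mkStdGen seed))
  where
    go (u1:u2:us) = sqrt (-2 * log u1) * cos (2 * pi * u2) : go us
    go _          = []

-- Log-normal draws for a risk ratio reported as a point estimate with a 95% CI.
rrDraws :: Double -> (Double, Double) -> [Double] -> [Double]
rrDraws point (lo, hi) zs = [exp (log point + sigma * z) | z <- zs]
  where sigma = (log hi - log lo) / (2 * 1.96)

-- Indirect treatment effect in the marker-negative subgroup A'.
-- The baseline risks cancel, so only the risk ratios and P[A] are needed.
rrSubgroupA' :: Double -> Double -> Double -> Double -> Double
rrSubgroupA' rrAB rrAlpha rrBeta pA = rrAB * (rrBeta * pA + q) / (rrAlpha * pA + q)
  where q = 1 - pA

main :: IO ()
main = do
  let n    = 10000
      rrs  = take n (rrDraws 0.84 (0.77, 0.92) (normals 1)) -- unstratified effect
      rrBs = take n (rrDraws 1.18 (1.09, 1.28) (normals 2)) -- marker effect on clopidogrel
      out  = sort [rrSubgroupA' r 1.0 rb 0.28 | (r, rb) <- zip rrs rrBs]
  -- median and approximate 95% credible interval
  print (out !! 5000, out !! 250, out !! 9750)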
The key assumption of the method is exchangeability of the studies (association studies, RCT). Specifically, the study populations should not differ on any modifiers of the prognostic effect of the
marker or for any modifiers of the predictive effect of the marker. We introduce the label “marker-modifiers” to encompass both prognostic and predictive modifiers. Candidate marker-modifiers include
patient factors (age, sex, severity of index condition, co-existing disease, ethnicity), study factors (length of follow-up, intensity of surveillance) and treatment factors (concomitant medications,
surgery, or dose and duration of the index treatment).
Note that these factors could have different distributions in the included studies without invalidating the assumption of exchangeability. It is only when differences in these factors affects outcome
in groups defined by the marker (i.e., only when a factor is a marker-modifier) that the assumption of exchangeability does not hold. In general, the greater the degree to which the assumption of
exchangeability does not hold, the greater the expected risk of bias for comparative treatment effect estimates of the pharmacogenomic subgroups. The assumption of exchangeability in this context is
analogous to the assumption of exchangeability (sometimes called “similarity”) of RCTs in an indirect treatment comparison; or more broadly of exchangeability for RCTs, non-randomised studies and
direct head-to-head studies in a network meta-analysis. The variables (if any) that can modify the pharmacogenomic association study effect size and the direction of the modification will tend to be
specific to the marker and drug in question and hence it is not possible to make a generic statement of how factors will affect exchangeability. The marker prevalence is unlikely to be an issue with
respect to exchangeability unless there are substantial differences in marker prevalence between studies and marker prevalence is believed to modify the marker effect.
It is also assumed that the contributing studies are methodologically sound and their results are not subject to bias. In general, the greater the risk of bias in the contributing studies, the
greater the expected risk of bias for comparative treatment effect estimates of the pharmacogenomic subgroups. The inputs and assumptions of the approach are summarized in Table 1.
Table 1. Required inputs and assumptions of the indirect estimation approach.
Case Study
A contemporary example of a pharmacogenomic marker is the use of CYP2C19 genotype to guide use of the anti-platelet agent clopidogrel. CYP2C19 loss-of-function (LoF) alleles are associated with
decreased effect of clopidogrel, leading to increased risk of adverse CV events [8]–[10]. An example of a treatment decision that may be influenced by CYP2C19 genotype is the choice of clopidogrel or
ticagrelor following ACS. This example is particularly pertinent as a direct pharmacogenomic subgroup analysis has been published which enables a simple comparison of direct and indirect approaches
In the PLATO RCT the hazard ratio for CV events was reported to be 0.84 (95% CI; 0.77 to 0.92) for ticagrelor compared to clopidogrel [12]. Due to the relatively low CV event rate in this scenario the hazard ratio is a good approximation of \(RR^{\alpha\beta}\) (a risk ratio), here with α = ticagrelor and β = clopidogrel. Meta-analyses of association studies have indicated significant statistical heterogeneity and report summary estimates of the risk ratio of CV outcomes for individuals using clopidogrel who carry a CYP2C19 LoF allele (\(RR^{A/A'}_{\text{clop}}\)) ranging from approximately 1.10 to 1.60 [8], [9], [13]. It was assumed that there was no association between CYP2C19 genotype and CV outcomes for individuals who are not taking clopidogrel (\(RR^{A/A'}_{\text{tic}} = 1\)), for three reasons: there is no known biological/pharmacological basis for CYP2C19 genotype to influence CV outcomes in the absence of clopidogrel therapy, the evidence from pharmacokinetic and pharmacodynamic studies indicates no effect, and association studies have not indicated any significant difference in CV risk in the absence of clopidogrel [11], [14]–[17]. The probability of carriage of a CYP2C19 LoF allele (P[A]) in a predominantly Caucasian population was estimated to be 28.0% [11], [14], [18], [19].
The relationship between the multiple sources of information used in the indirect subgroup analysis is summarised in Figure 1. A spreadsheet implementing the indirect subgroup analysis for the case
study provides an example of how the calculations may be undertaken (see File S1). As an example of using the formulas derived here, the treatment effect of ticagrelor compared to clopidogrel in the
subgroup that does not have a CYP2C19 LoF allele (i.e. good responders to clopidogrel) is estimated below for a relatively high value of the association between CYP2C19 genotype and CV outcomes with
use of clopidogrel.
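To make the arithmetic concrete, take, purely for illustration, a value at the top of the reported 1.10 to 1.60 range, say \(RR^{A/A'}_{\text{clop}} = 1.60\) (this particular number is our assumption, not the paper's), together with \(RR^{A/A'}_{\text{tic}} = 1\), \(P[A] = 0.28\), \(P[A'] = 0.72\) and \(RR^{\text{tic/clop}} = 0.84\). The closed form above then gives

\[RR^{\text{tic/clop}}_{\text{LoF}'} = 0.84 \times \frac{1.60 \times 0.28 + 0.72}{1 \times 0.28 + 0.72} = 0.84 \times 1.168 \approx 0.98, \qquad RR^{\text{tic/clop}}_{\text{LoF}} = \frac{1}{1.60} \times 0.98 \approx 0.61.\]

That is, at a high association value essentially all of ticagrelor's benefit over clopidogrel would be concentrated in the LoF carriers, which is precisely the situation in which genotype-guided prescribing would have clinical utility.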
Figure 1. Relationships between subgroup treatment effects, association study results and unstratified RCT study results.
CYP2C19 genotype and clopidogrel is used here as an example to illustrate the groups of individuals (based on treatment and pharmacogenomics marker status) involved in the indirect subgroup analysis
and the relationships between the groups (both known and unknown). Values in the brackets represent the 95% confidence intervals for the estimate. CYP2C19: cytochrome P450 2C19; LoF: loss-of-function.
Figure 2 displays a deterministic sensitivity analysis of the indirect estimates of treatment effect (ticagrelor compared to clopidogrel) for CYP2C19 genotypes as a function of the association study result (\(RR^{A/A'}_{\text{clop}}\)). This figure helps translate an association study result into a comparative treatment effect for each pharmacogenomic subgroup and hence provides insight into whether screening for the pharmacogenomic marker is likely to result in improved patient outcomes (i.e. clinical utility). The subgroup comparative treatment effect estimates may also form the basis of formal cost-effectiveness modeling. In addition, a probabilistic sensitivity analysis was undertaken utilising Monte Carlo simulation. Using \(RR^{A/A'}_{\text{clop}} = 1.18\) (95% CI; 1.09 to 1.28) from a recent meta-analysis of association studies [9], \(RR^{\text{tic/clop}}_{\text{LoF}}\) and \(RR^{\text{tic/clop}}_{\text{LoF}'}\) were estimated to be 0.75 (95% CI; 0.67 to 0.83) and 0.88 (95% CI; 0.81 to 0.97), respectively. This compares reasonably well to the direct estimates based on the genetic substudy of the PLATO RCT: \(RR^{\text{tic/clop}}_{\text{LoF}} = 0.77\) (95% CI; 0.60 to 0.99) and \(RR^{\text{tic/clop}}_{\text{LoF}'} = 0.86\) (95% CI; 0.74 to 1.01) [11].
Figure 2. One way deterministic sensitivity analysis for indirect estimates of treatment effect.
The indirect estimates of the treatment effect (relative risk for comparison of ticagrelor and clopidogrel) for subgroups based on cytochrome P450 2C19 (CYP2C19) genotype are displayed as a function
of the size of the association study estimate. LoF = subgroup with a CYP2C19 loss-of-function allele, LoF′ = subgroup without a CYP2C19 loss-of-function allele.
This paper describes the mathematical basis and key assumption (i.e., exchangeability) underlying a method for indirect estimation of the comparative treatment effect in a pharmacogenomic subgroup.
The method is useful for estimating the potential clinical utility of a pharmacogenomic marker, given the available data (e.g. [20], [21]); especially when sensitivity analyses are conducted around
the inputs. It would be straightforward to incorporate the method into a network meta-analysis that includes both direct and indirect evidence for the unstratified treatment effect [22]. Also, the
method is a useful addition to the toolbox of methods available to assist in assessing the possible cost-effectiveness of a pharmacogenomic marker (e.g. [23]–[25]). In that context it provides a
clear mathematical structure for synthesising the available evidence and transparency about the underlying assumption (i.e., exchangeability). It lends itself naturally to either deterministic or
probabilistic sensitivity analysis [6].
The major caveats of the method relate to the assumption of exchangeability. Specifically, study populations must be similar with regard to any marker-effect modifiers (moderators of either the
treatment-independent [prognostic] effect of the marker or the treatment-marker interaction effect). This is analogous to the assumption for indirect treatment comparisons where exchangeability is
with respect to moderators of treatment effect. As with indirect comparisons of treatment effects it is also prudent that indirect pharmacogenomics subgroup analyses should include a detailed
narrative comparison of differences in patient, study or treatment factors across the included studies. However, such differences do not necessarily mean that the assumption of exchangeability is
invalidated. Evidence that factors with different distributions across included studies are also marker-effect modifiers would be required. This could be evidence from studies external to the
indirect comparison or knowledge of the pathophysiology of the disease [26].
One example of a violation of the assumption of exchangeability could be length of follow-up, if the proportional hazards assumption does not hold [27]. If the RCT has median follow-up of 3 months and the association study has follow-up of 1 year, this may bias the subgroup treatment effect estimates if the relative risk of the association study attenuates with longer follow-up (e.g. the RR would have been 0.6 rather than 0.8 if the length of follow-up had been 3 months instead of 1 year). The dose of the drug may modify the effect of the pharmacogenomic marker (e.g. irinotecan dose modifies the
effect of UDP glucuronosyltransferase 1A1 genotype on irinotecan toxicity but not tumor response [28], [29]) and thus if the RCT and association studies have different irinotecan doses this will bias
the subgroup treatment toxicity estimates. Pharmacogenomic marker effect may also vary between patient populations (e.g. between different subtypes or stages of the disease). Unexplained
heterogeneity for pharmacogenomic marker effect is also problematic. The clopidogrel case study is a good example in which the effect of CYP2C19 genotype varies significantly between studies, but the
reason for the variation is not well understood [9], [10]. Consequently, it is very difficult to be certain that the studies are sufficiently similar in terms of the (unknown) important
characteristics that can modify the pharmacogenomic marker effect.
Formulas for indirect estimation of subgroup effects for a pharmacogenomic marker are based on the total law of probability and are therefore presented in terms of risk ratios (RR). There are other
commonly used relative measures of treatment effect: odds ratio and hazard ratio. If the baseline event risk is small (say <10%) then these measures will be approximately equal to the risk ratio and
could be substituted for them in the formulas (as was done in the case study above, where the hazard ratio was substituted for the risk ratio). Further research is required to assess the best
approach to indirectly estimate the subgroup treatment effects when event rates are significantly higher (e.g. advanced cancer). Additionally, the formulas presented are applicable to the common
situation in which a marker with two levels (e.g. high/low, mutant/wildtype) is used to predict a dichotomous outcome (e.g. event or no event). The general principles used to derive the formulas
should be generalizable to other situations (e.g. continuous/multi-level markers/outcomes) although the formulas are likely to be more complex. A simple option is to convert such data (e.g.
dichotomize a continuous marker or outcome) to enable the application of the formulas presented here, although it is important to be cognizant that in some cases this may result in significant loss of information.
The relationships presented here highlight the importance of understanding the association between pharmacogenomic groups and events in the presence and absence of the drug in question (i.e. both \(RR^{A/A'}_{\alpha}\) and \(RR^{A/A'}_{\beta}\)).
Such information is required to estimate whether the marker is prognostic and/or a predictive modifier. In the absence of both of these values, it is still possible to undertake a sensitivity and
scenario analysis based on plausible assumptions to better understand the value of undertaking further research. Plausible scenarios may include that the marker is not associated with the outcome in
the absence of a specific drug (e.g. CYP2C19 genotype is not associated with CV events when clopidogrel is not being used), or that the association is of similar size to that estimated in the
presence of the drug (indicating a marker that is prognostic rather than a modifier of a specific treatment effect).
In the case study presented here, a deterministic sensitivity analysis facilitated insight into clinical utility by reframing the association study results in terms of plausible subgroup treatment
effects. Given that there is still substantial uncertainty and risk-of-bias with respect to the association study results for clopidogrel and CYP2C19 genotype, the sensitivity analysis (Figure 2)
enables the reader to readily appreciate how the indirect estimate would be affected if the association effect size differs from the value used. In addition, Monte Carlo simulation was used to
estimate the distribution of the subgroup treatment effects. The direct and indirect estimates of the subgroup treatment effects agreed reasonably well in the case study. However, an important
direction of future research will be to undertake a more comprehensive assessment of inconsistency between direct and indirect approaches, as has been recently undertaken for indirect treatment
comparisons [4].
It is valuable to have insight into the expected clinical utility of a proposed pharmacogenomic marker as early as possible in order to assess the likely value of undertaking an RCT designed to
produce higher quality evidence of the clinical utility [30], [31]. Techniques such as value of information analysis may be utilised to explicitly and quantitatively estimate the value of undertaking
further research [23], [32]. In the absence of RCT data on the value of utilising a marker the indirect approach described here allows reframing of association study results in terms of a treatment
effect in subgroups defined by a pharmacogenomic marker. This reframing can allow greater insight into clinical utility, in particular whether testing for the marker is likely to result in improved
clinical decisions regarding treatment selection.
Supporting Information
File S1. MS Excel spreadsheet with example calculations based on the clopidogrel pharmacogenetics case study.
Author Contributions
Conceived and designed the experiments: MJS. Analyzed the data: MJS MC BAKP. Wrote the paper: MJS MC BAKP.
Total mechanical work, solving for x
1. The problem statement, all variables and given/known data
I understand the problem conceptually. I just need help solving for x, which is the hardest part of the problem for me.
1/2mv^2 = 1/2kx^2 + umgx
3. The attempt at a solution
Not really sure how to solve this. I tried and tried to separate out x but nothing worked. Here is one of the attempts, I suppose. I feel pretty stumped here.
[tex]1/2mv^2 = 1/2kx^2 + umgx[/tex]
[tex]1/2mv^2 - umgx = 1/2kx^2[/tex]
[tex]\frac{mv^2}{k} - \frac{2umgx}{k} = x^2[/tex]
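For anyone stuck at the same step: the equation is quadratic in x, so rather than isolating x, move everything to one side and apply the quadratic formula (here u is the friction coefficient, usually written \mu):

[tex]\frac{1}{2}kx^2 + umgx - \frac{1}{2}mv^2 = 0 \quad\Rightarrow\quad kx^2 + 2umgx - mv^2 = 0[/tex]

[tex]x = \frac{-2umg \pm \sqrt{4u^2m^2g^2 + 4kmv^2}}{2k} = \frac{-umg + \sqrt{u^2m^2g^2 + kmv^2}}{k}[/tex]

taking the + root, since the spring compression x must be positive.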
Reverse Mathematics and Pi^1_2 Comprehension
Stephen G. Simpson
Pennsylvania State University
Conference on Methods of Proof Theory
Max Planck Institute for Mathematics
Bonn, Germany
June 8, 2007
This is joint work with Carl Mummert. We initiate the reverse mathematics of general topology. We show that a certain metrization theorem is equivalent to Pi^1_2 comprehension. An MF space is defined
to be a topological space of the form MF(P) with topology generated by {N_p | p in P}. Here P is a poset, MF(P) is the set of maximal filters on P, and N_p = {F in MF(P) | p in F}. If the poset P is
countable, the space MF(P) is said to be countably based. The class of countably based MF spaces can be defined and discussed within the subsystem ACA_0 of second-order arithmetic. One can prove
within ACA_0 that every complete separable metric space is homeomorphic to a countably based MF space which is regular. We show that the converse statement, "every countably based MF space which is
regular is homeomorphic to a complete separable metric space," is equivalent to Pi^1_2-CA_0. The equivalence is proved in the weaker system Pi^1_1-CA_0. This is the first example of a theorem of core
mathematics which is provable in second-order arithmetic and implies Pi^1_2 comprehension.
Took Cutie For a Ride In My Death Cab
”..the Son of Man came not to be served, but to serve..” -Matthew 20:28
Imagine if your follower count turned into money Id still be poor
… Y’see, now, y’see, I’m looking at this, thinking, squares fit together better than circles, so, say, if you wanted a box of donuts, a full box, you could probably fit more square donuts in than circle donuts if the circumference of the circle touched each of the corners of the square donut. So you might end up with more donuts.

But then I also think… Does the square or round donut have a greater donut volume? Is the number of donuts better than the entire donut mass as a whole? Hrm. HRM.

A round donut with radius R₁ occupies the same space as a square donut with side 2R₁. If the center hole of the round donut has radius R₂ and the hole of the square donut has side 2R₂, then the area of a round donut is πR₁² - πR₂², while the area of a square donut is 4R₁² - 4R₂². This doesn't say much on its own, but plugging in numbers, a full box of square donuts has more donut per donut than a full box of round donuts. The interesting thing is knowing exactly how much more donut per donut we have. Assuming first a small center hole (R₂ = R₁/4) and substituting into the expressions above, we get about 27.3% more donut in the square one (round: 15πR₁²/16 ≈ 2.95R₁², square: 15R₁²/4 = 3.75R₁²). Now, assuming a large center hole (R₂ = 3R₁/4), we again get about 27.3% more donut in the square one (round: 7πR₁²/16 ≈ 1.37R₁², square: 7R₁²/4 = 1.75R₁²). This tells us that, approximately, we'll have a 27% bigger donut if it's square than if it's round.

tl;dr: Square donuts have about 27% more donut per donut in the same space as a round one.

Thank you donut side of Tumblr.
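Worth adding, since it isn't obvious from the two cases above: the advantage doesn't depend on the hole size at all, because the hole terms cancel in the ratio,

(4R₁² - 4R₂²) / (πR₁² - πR₂²) = 4/π ≈ 1.273,

so a square donut is always exactly 4/π times, i.e. about 27.3%, more donut than the round donut it circumscribes.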
Does anyone else see what’s wrong with this picture? oh shit Ohhhhhhh
Such polite barks he gets up all excited the last time like YEAH I’M GONNA SPEAK YEAH WATCH THIS "…….wuf"
There are so many types of vegan, cruelty-free milk! Which is your favorite?
Apparently this is "The clearest photo of Mercury ever taken."
why isnt everyone getting so excited about this, it is literally another planet look at how beautiful it is stop what your doing and look at how alien like this planet is what is living there oh my
god mercury
Fast, packed, strict bit streams (i.e. list of Bools) with semi-automatic stream fusion.
This module is intended to be imported qualified, to avoid name clashes with Prelude functions. e.g.
import qualified Data.BitStream as BS
Strict Bitstreams are made of a strict Vector of Packets, and each Packet holds at least 1 bit.
Data types
data Bitstream d
A space-efficient representation of a Bool vector, supporting many efficient operations. Bitstreams have an idea of directions controlling how octets are interpreted as bits. There are two types of
concrete Bitstreams: Bitstream Left and Bitstream Right.
Bitstream (Bitstream d) => Eq (Bitstream d)
Bitstream (Bitstream d) => Ord (Bitstream d)
Bitstreams are lexicographically ordered.
let x = pack [True , False, False]
    y = pack [False, True , False]
    z = pack [False]
[ compare x y -- GT
, compare z y -- LT
]
Show (Packet d) => Show (Bitstream d)
Bitstream forms Monoid in the same way as ordinary lists:
Bitstream (Bitstream d) => Monoid (Bitstream d) mempty = empty
mappend = append
mconcat = concat
Bitstream (Bitstream Right)
Bitstream (Bitstream Left)
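A quick sketch of these instances in action (assuming unpack, the inverse of pack from the Basic interface below):

ghci> import qualified Data.BitStream as BS
ghci> BS.unpack (BS.pack [True] `BS.append` BS.pack [False, True] :: BS.Bitstream BS.Left)
[True,False,True]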
data Left
Left bitstreams interpret an octet as a vector of bits whose LSB comes first and MSB comes last e.g.
• 11110000 => [False, False, False, False, True, True , True , True]
• 10010100 => [False, False, True , False, True, False, False, True]
Bits operations (like toBits) treat a Left bitstream as a little-endian integer.
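For instance, assuming toBits can target Int (a sketch):

ghci> BS.toBits (BS.pack [True, False, False, True] :: BS.Bitstream BS.Left) :: Int
9  -- little-endian: 1*1 + 0*2 + 0*4 + 1*8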
Ord (Packet Left)
Show (Packet Left)
Bitstream (Packet Left)
Bitstream (Bitstream Left)
data Right
Right bitstreams interpret an octet as a vector of bits whose MSB comes first and LSB comes last e.g.
• 11110000 => [True, True , True , True, False, False, False, False]
• 10010100 => [True, False, False, True, False, True , False, False]
Bits operations (like toBits) treat a Right bitstream as a big-endian integer.
Ord (Packet Right)
Show (Packet Right)
Bitstream (Packet Right)
Bitstream (Bitstream Right)
Introducing and eliminating Bitstreams
unsafeFromPackets :: Bitstream (Packet d) => Int -> Vector (Packet d) -> Bitstream d
O(1) Convert a Vector of Packets into a Bitstream, with provided overall bit length. The correctness of the bit length isn't checked, so you MUST be sure your bit length is absolutely correct.
Converting from/to strict ByteStrings
Converting from/to Bits'
fromNBits :: (Integral n, Integral β, Bits β, Bitstream α) => n -> β -> αSource
O(n) Convert the lower n bits of the given Bits. In the case that more bits are requested than the Bits provides, this acts as if the Bits has an infinite number of leading 0 bits.
Converting from/to Streams
stream :: Bitstream α => α -> Stream BoolSource
O(n) Explicitly convert a Bitstream into a Stream of Bool.
Bitstream operations are automatically fused whenever it's possible, safe, and effective to do so, but sometimes you may find the rules are too conservative. These two functions stream and unstream
provide a means for coercive stream fusion.
You should be careful when you use stream. Most functions in this package are optimised to minimise the frequency of memory allocations and copying, but getting Bitstreams back from Stream Bool requires the whole Bitstream to be constructed from scratch. Moreover, for lazy Bitstreams this leads to incorrect strictness behaviour, because lazy Bitstreams are represented as lists of strict Bitstream chunks but stream can't preserve the original chunk structure. Let's say you have a lazy Bitstream with the following chunks:
bs = [chunk1, chunk2, chunk3, ...]
and you want to drop the first bit of such a stream. Our tail is strict only in chunk1 and will produce the following chunks:
tail bs = [chunk0, chunk1', chunk2, chunk3, ...]
where chunk0 is a singleton vector of the first packet of chunk1 whose first bit is dropped, and chunk1' is a vector of the remaining packets of chunk1. Neither chunk2 nor chunk3 has to be evaluated here, just as you would expect.
But think about the following expression:
import qualified Data.Vector.Fusion.Stream as Stream
unstream $ Stream.tail $ stream bs
the resulting chunk structure will be:
[chunk1', chunk2', chunk3', ...]
where each and every chunk is slightly different from the original chunks; this time chunk1' has the same length as chunk1, but the last bit of chunk1' comes from the first bit of chunk2. This means the next time you apply some function strict in the first chunk, you end up fully evaluating chunk2 as well as chunk1, and this can be a serious misbehaviour for lazy Bitstreams.
The automatic fusion rules are carefully designed to fire only when there isn't any reason to preserve the original packet / chunk structure.
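For instance, a hand-fused pipeline in the style of the example above might look like the following sketch (whether forcing fusion like this is a win depends on exactly the chunking concerns just described):

import qualified Data.Bitstream as BS
import qualified Data.Vector.Fusion.Stream as Stream

-- Coercive fusion: one pass over the bits, at the cost of rebuilding
-- the Bitstream from scratch (and, for lazy streams, losing chunks).
dropFirstBit :: BS.Bitstream BS.Left -> BS.Bitstream BS.Left
dropFirstBit = BS.unstream . Stream.tail . BS.stream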
Changing bit order in octets
Basic interface
foldl :: Bitstream α => (β -> Bool -> β) -> β -> α -> βSource
O(n) foldl, applied to a binary operator, a starting value (typically the left-identity of the operator), and a Bitstream, reduces the Bitstream using the binary operator, from left to right:
foldl f z [x1, x2, ..., xn] == (...((z `f` x1) `f` x2) `f` ...) `f` xn
The Bitstream must be finite.
foldr :: Bitstream α => (Bool -> β -> β) -> β -> α -> βSource
O(n) foldr, applied to a binary operator, a starting value (typically the right-identity of the operator), and a Bitstream, reduces the Bitstream using the binary operator, from right to left:
foldr f z [x1, x2, ..., xn] == x1 `f` (x2 `f` ... (xn `f` z) ...)
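As a small illustration of these folds, here is a sketch of a population count written with foldl (any Bitstream instance would do; Left is picked arbitrarily):

import qualified Data.Bitstream as BS

-- Count the set bits; the Bitstream must be finite.
popCount' :: BS.Bitstream BS.Left -> Int
popCount' = BS.foldl (\n b -> if b then n + 1 else n) 0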
Special folds
and :: Bitstream α => α -> BoolSource
O(n) and returns the conjunction of a Bool list. For the result to be True, the Bitstream must be finite; False, however, results from a False value at a finite index of a finite or infinite
Bitstream. Note that strict Bitstreams are always finite.
or :: Bitstream α => α -> BoolSource
O(n) or returns the disjunction of a Bool list. For the result to be False, the Bitstream must be finite; True, however, results from a True value at a finite index of a finite or infinite Bitstream.
Note that strict Bitstreams are always finite.
any :: Bitstream α => (Bool -> Bool) -> α -> BoolSource
O(n) Applied to a predicate and a Bitstream, any determines if any bit of the Bitstream satisfies the predicate. For the result to be False, the Bitstream must be finite; True, however, results from
a True value for the predicate applied to a bit at a finite index of a finite or infinite Bitstream.
all :: Bitstream α => (Bool -> Bool) -> α -> BoolSource
O(n) Applied to a predicate and a Bitstream, all determines if all bits of the Bitstream satisfy the predicate. For the result to be True, the Bitstream must be finite; False, however, results from a
False value for the predicate applied to a bit at a finite index of a finite or infinite Bitstream.
scanl :: Bitstream α => (Bool -> Bool -> Bool) -> Bool -> α -> αSource
O(n) scanl is similar to foldl, but returns a Bitstream of successive reduced bits from the left:
scanl f z [x1, x2, ...] == [z, z `f` x1, (z `f` x1) `f` x2, ...]
Note that
last (scanl f z xs) == foldl f z xs
scanl1 :: Bitstream α => (Bool -> Bool -> Bool) -> α -> αSource
O(n) scanl1 is a variant of scanl that has no starting value argument:
scanl1 f [x1, x2, ...] == [x1, x1 `f` x2, ...]
unfoldr :: Bitstream α => (β -> Maybe (Bool, β)) -> β -> αSource
O(n) The unfoldr function is a `dual' to foldr: while foldr reduces a Bitstream to a summary value, unfoldr builds a Bitstream from a seed value. The function takes the current seed and returns Nothing if it is done producing the Bitstream, or returns Just (a, b), in which case a is prepended to the Bitstream and b is used as the seed in the next recursive call.
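For example, a sketch that unfolds the binary digits of a non-negative integer, LSB first (so the result reads naturally as a little-endian Left bitstream):

import qualified Data.Bitstream as BS

intBits :: Int -> BS.Bitstream BS.Left
intBits = BS.unfoldr step
  where
    step 0 = Nothing                     -- done producing
    step k = Just (odd k, k `div` 2)     -- emit the LSB, then shift right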
take :: (Integral n, Bitstream α) => n -> α -> αSource
O(n) take n, applied to a Bitstream xs, returns the prefix of xs of length n, or xs itself if n > length xs.
drop :: (Integral n, Bitstream α) => n -> α -> αSource
O(n) drop n xs returns the suffix of xs after the first n bits, or empty if n > length xs.
takeWhile :: Bitstream α => (Bool -> Bool) -> α -> αSource
O(n) takeWhile, applied to a predicate p and a Bitstream xs, returns the longest prefix (possibly empty) of xs of bits that satisfy p.
span :: Bitstream α => (Bool -> Bool) -> α -> (α, α)Source
O(n) span, applied to a predicate p and a Bitstream xs, returns a tuple where the first element is the longest prefix (possibly empty) of xs of bits that satisfy p and the second element is the remainder of the Bitstream.
span p xs is equivalent to (takeWhile p xs, dropWhile p xs)
break :: Bitstream α => (Bool -> Bool) -> α -> (α, α)Source
O(n) break, applied to a predicate p and a Bitstream xs, returns a tuple where the first element is the longest prefix (possibly empty) of xs of bits that do not satisfy p and the second element is the remainder of the Bitstream.
break p is equivalent to span (not . p).
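A small usage sketch of the two splitters (Left is again an arbitrary choice of direction):

import qualified Data.Bitstream as BS

-- The leading run of True bits, and everything after it.
leadingOnes :: BS.Bitstream BS.Left -> (BS.Bitstream BS.Left, BS.Bitstream BS.Left)
leadingOnes = BS.span id

-- Everything before the first True bit, and the rest.
upToFirstOne :: BS.Bitstream BS.Left -> (BS.Bitstream BS.Left, BS.Bitstream BS.Left)
upToFirstOne = BS.break id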
Searching streams
Searching by equality
elem :: Bitstream α => Bool -> α -> BoolSource
O(n) elem is the Bitstream membership predicate, usually written in infix form, e.g., x `elem` xs. For the result to be False, the Bitstream must be finite; True, however, results from a bit equal to x found at a finite index of a finite or infinite Bitstream.
Searching with a predicate
partition :: Bitstream α => (Bool -> Bool) -> α -> (α, α)Source
O(n) The partition function takes a predicate and a Bitstream and returns the pair of Bitstreams of bits which do and do not satisfy the predicate, respectively.
Indexing streams
elemIndex :: (Bitstream α, Integral n) => Bool -> α -> Maybe nSource
O(n) The elemIndex function returns the index of the first bit in the given Bitstream which is equal to the query bit, or Nothing if there is no such bit.
elemIndices :: (Bitstream α, Integral n) => Bool -> α -> [n]Source
O(n) The elemIndices function extends elemIndex, by returning the indices of all bits equal to the query bit, in ascending order.
findIndices :: (Bitstream α, Integral n) => (Bool -> Bool) -> α -> [n]Source
O(n) The findIndices function extends findIndex, by returning the indices of all bits satisfying the predicate, in ascending order.
Zipping and unzipping streams
zip :: Bitstream α => α -> α -> [(Bool, Bool)]Source
O(min(m, n)) zip takes two Bitstreams and returns a list of corresponding bit pairs. If one input Bitstream is short, excess bits of the longer Bitstream are discarded.
zip4 :: Bitstream α => α -> α -> α -> α -> [(Bool, Bool, Bool, Bool)]Source
The zip4 function takes four Bitstreams and returns a list of quadruples, analogous to zip.
zipWith :: Bitstream α => (Bool -> Bool -> β) -> α -> α -> [β]Source
O(min(m, n)) zipWith generalises zip by zipping with the function given as the first argument, instead of a tupling function.
zipWith3 :: Bitstream α => (Bool -> Bool -> Bool -> β) -> α -> α -> α -> [β]Source
The zipWith3 function takes a function which combines three bits, as well as three Bitstreams and returns a list of their point-wise combination, analogous to zipWith.
zipWith4 :: Bitstream α => (Bool -> Bool -> Bool -> Bool -> β) -> α -> α -> α -> α -> [β]Source
The zipWith4 function takes a function which combines four bits, as well as four Bitstreams and returns a list of their point-wise combination, analogous to zipWith.
zipWith5 :: Bitstream α => (Bool -> Bool -> Bool -> Bool -> Bool -> β) -> α -> α -> α -> α -> α -> [β]Source
The zipWith5 function takes a function which combines five bits, as well as five Bitstreams and returns a list of their point-wise combination, analogous to zipWith.
zipWith6 :: Bitstream α => (Bool -> Bool -> Bool -> Bool -> Bool -> Bool -> β) -> α -> α -> α -> α -> α -> α -> [β]Source
The zipWith6 function takes a function which combines six bits, as well as six Bitstreams and returns a list of their point-wise combination, analogous to zipWith.
unzip :: Bitstream α => [(Bool, Bool)] -> (α, α)Source
O(min(m, n)) unzip transforms a list of bit pairs into a Bitstream of first components and a Bitstream of second components.
unzip6 :: Bitstream α => [(Bool, Bool, Bool, Bool, Bool, Bool)] -> (α, α, α, α, α, α)Source
Standard input and output
hGetContents :: Bitstream (Packet d) => Handle -> IO (Bitstream d)Source
O(n) Read entire handle contents strictly into a Bitstream.
This function reads chunks at a time, doubling the chunk size on each read. The final buffer is then reallocated to the appropriate size. For files > half of available memory, this may lead to memory
exhaustion. Consider using readFile in this case.
The Handle is closed once the contents have been read, or if an exception is thrown.
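A minimal IO sketch (assuming the usual System.IO handle functions and a length function on strict Bitstreams, which is an assumption here):

import qualified Data.Bitstream as BS
import System.IO (withFile, IOMode (ReadMode))

main :: IO ()
main = withFile "input.bin" ReadMode $ \h -> do
  bs <- BS.hGetContents h :: IO (BS.Bitstream BS.Right)
  print (BS.length bs)  -- total number of bits read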
hGet :: Bitstream (Packet d) => Handle -> Int -> IO (Bitstream d)Source
O(n) hGet h n reads a Bitstream directly from the specified Handle h. First argument h is the Handle to read from, and the second n is the number of octets to read, not bits. It returns the octets
read, up to n, or null if EOF has been reached.
If the handle is a pipe or socket, and the writing end is closed, hGet will behave as if EOF was reached.
hGetSome :: Bitstream (Packet d) => Handle -> Int -> IO (Bitstream d)Source
O(n) Like hGet, except that a shorter Bitstream may be returned if there are not enough octets immediately available to satisfy the whole request. hGetSome only blocks if there is no data available,
and EOF has not yet been reached. | {"url":"http://hackage.haskell.org/package/bitstream-0.2.0.3/docs/Data-Bitstream.html","timestamp":"2014-04-16T22:29:08Z","content_type":null,"content_length":"129231","record_id":"<urn:uuid:289916e2-00cd-4464-adeb-affcdcbdd064>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00430-ip-10-147-4-33.ec2.internal.warc.gz"} |
Binary & Continuous variables in LCA
mike zyphur posted on Friday, February 03, 2006 - 5:31 pm
Hi Bengt/Linda,
I tried to find this issue addressed elsewhere on the message board, but couldn't find it. Perhaps you would be kind enough to address it here:
In an LCA with multiple indicators with variances that are VERY different (e.g., age in months and 5-point likert-type scale responses), to what extent will first standardizing the variables allow
each variable to be treated equally in the process of class formation? While the point of the LCA is to make the variables independent within each class, how strongly influenced is class formation by
variables which have much larger variances than the other variables?
If it is greatly influenced, what does this mean for LCAs which mix continuous and binary indicators? While the variances of continuous variables can be easily changed by, for example,
standardization, clearly this is not true for binary indicators. Does this pose problems for LCAs which have both continuous and binary indicators? Is what I'm bringing up even an issue?
Thank you for your time!
bmuthen posted on Friday, February 03, 2006 - 5:39 pm
Unlike k-means clustering, LCA is not hurt by variances being different across variables. The model allows them to be different. The key to success in LCA is when variances - and most importantly
means - vary across classes for a given variable.
If the variances are very different across variables, however, you may want to scale by 10 or 100 say, to make it numerically easier to do the analysis. But don't standardize.
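In symbols (a standard latent profile formulation, added here for illustration rather than quoted from the thread): with $K$ classes, class probabilities $\pi_k$, and conditional independence within class, the model density is $f(y_1,\ldots,y_J)=\sum_{k=1}^{K}\pi_k\prod_{j=1}^{J}\phi(y_j;\mu_{jk},\sigma^2_{jk})$. Each indicator carries its own class-specific mean $\mu_{jk}$ and variance $\sigma^2_{jk}$, so rescaling a variable merely rescales those parameters without changing the class structure — which is why, unlike k-means, LCA does not require the indicators to be on a common scale.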
| {"url":"http://www.statmodel.com/cgi-bin/discus/discus.cgi?pg=prev&topic=23&page=1063","timestamp":"2014-04-19T06:58:22Z","content_type":null,"content_length":"18172","record_id":"<urn:uuid:12d0269b-2b3d-4a11-a0f8-d60db4e07d73>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00320-ip-10-147-4-33.ec2.internal.warc.gz"}
Creation and annihilation operators
Hi all
1. The problem statement, all variables and given/known data
[tex] (a^\dagger a)^2=a^\dagger a^\dagger a a +a^\dagger a[/tex]
[tex] a= \lambda x +i \gamma p [/tex]
[tex] a^\dagger= \lambda x -i \gamma p [/tex]
2. Relevant equations
3. The attempt at a solution
Well, I haven't got much.
I just tried to use the stuff given, put it into my equation and solve it, but I don't get to the right side.
I calculated a†a first
[tex] a^\dagger a ={\lambda}^2x^2 + \frac {1}{2} I + \gamma^2 p^2[/tex]
But when I now try to calculate the square of that term I get lost. If I square it I get to:
[tex] (a^\dagger a)^2= \lambda^4x^4+\gamma^4p^4 +\lambda^2 \gamma^2 (x^2p^2+p^2x^2)-\lambda^2 x^2 -\gamma^2 p^2 +\frac 1 4 I[/tex]
Can anyone help me with this? I don't know what to do now/ If I'm on the right way.
Thanks for your help
edit: I is the identity matrix | {"url":"http://www.physicsforums.com/showthread.php?p=4152096","timestamp":"2014-04-18T23:18:51Z","content_type":null,"content_length":"43548","record_id":"<urn:uuid:7588b26e-987b-4e36-9fa9-680dbb229e86>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00355-ip-10-147-4-33.ec2.internal.warc.gz"} |
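A hedged worked note on the identity asked about in the thread above (assuming the normalization implied by the poster's own result for a†a, i.e. [a, a†] = I): the square reduces without expanding in x and p at all,
[tex](a^\dagger a)^2 = a^\dagger (a a^\dagger) a = a^\dagger (a^\dagger a + I) a = a^\dagger a^\dagger a a + a^\dagger a[/tex]
If one expands instead, note that the cross terms of the 1/2 I piece with the x and p squares enter with a plus sign, which appears to be where the attempt above picks up a stray minus.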
division problem
August 20th 2010, 02:56 PM #1
Junior Member
Jul 2010
division problem
Hi can anyone give a detailed solution to this?
If a and b are natural numbers, then $(a+b)!/a!b!$ is an integer
Trying to teach myself number theory but hitting a few walls =p
To select "a" items from "a+b" items, there are
$\displaystyle\frac{(a+b)!}{a!(a+b-a)!}$ ways to do it.
That's an integer.
Hi Archie Meade, thanks for your reply, but is there a way to show/prove it's an integer? This question is from a problem set on divisibility. Sorry, I should probably have included that in the original post.
One way to do it is to note that $\begin{pmatrix}n \\ k \end{pmatrix}= \frac{n!}{k!(n-k)!}$ is the coefficient of $x^ky^{n-k}$ in the binomial $(x+y)^n$. Since the coefficients of x and y are 1,
all such coefficients are positive integers.
And, of course, $\frac{(a+ b)!}{a!b!}= \frac{(a+ b)!}{(a+b- b)!b!}= \frac{n!}{(n-k)!k!}= \begin{pmatrix}n \\ k\end{pmatrix}$ with n= a+ b and k= b.
If you don't like that, try proving it by induction on b with a fixed.
If b= 1, $\frac{(a+ b)!}{a!b!}= \frac{(a+ 1)!}{a!1!}= a+ 1$, an integer.
Assume true for given b and look at $\frac{(a+ b+1)!}{a!(b+1)!}$.
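A hedged side note on completing that induction (standard textbook material, not from the thread): simple induction on $b$ alone stalls, because $\frac{(a+b+1)!}{a!\,(b+1)!}=\frac{(a+b)!}{a!\,b!}\cdot\frac{a+b+1}{b+1}$ and the extra factor need not be an integer on its own. Inducting on the total $n=a+b$ instead, Pascal's rule

$\displaystyle\binom{a+b+1}{b+1}=\binom{a+b}{b}+\binom{a+b}{b+1}$

(checked directly by putting the right-hand side over a common denominator) writes the quotient with total $n+1$ as a sum of two quotients of the same shape with total $n$ (when $a=0$ the left side is $1$ outright), so strong induction on the total closes the argument.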
Thanks for your reply, I may see what I can find with the induction path, cheers =)
Hi James,
I'll try a "Proof By Induction" on this,
though it has not been as straightforward as the PBI proofs I've done up to now.
We know that $\displaystyle\frac{(a+b)!}{a!\,b!}$ is a positive integer from our "counting" formula.
Aside from that, the following is an attempted proof using PBI.
We can take the general multinomial coefficient $\displaystyle\frac{(a_1+a_2+\cdots+a_n)!}{a_1!\,a_2!\,\cdots\,a_n!}$
and prove it must be a positive integer for all of the terms being positive integers,
then deduce the binomial from that by setting all but 2 terms to zero,
since $0!=1$
Or, we can simply work with the binomial exclusively.
So, taking the multinomial coefficient...
$\displaystyle\frac{(a_1+a_2+\cdots+a_n)!}{a_1!\,a_2!\,\cdots\,a_n!}=x\in\Bbb{N}$ whenever $a_1+a_2+\cdots+a_n=k$
This means that the integers summing to k may be different sets of integers, but they do sum to k.
It also means that $x$ is an integer variable, that is, it can vary but takes on only positive integer values.
To illustrate....
$\displaystyle\frac{5!}{1!\,4!}=5$, while $\displaystyle\frac{5!}{2!\,3!}=10$
Therefore, the base case involves establishing this framework for initial values.
For example, suppose k=5 (though we would start with k=smallest practical value of interest)...
$\displaystyle\frac{5!}{1!\,4!}=5,\quad\frac{5!}{2!\,3!}=10,\quad\frac{5!}{1!\,2!\,2!}=30$
for positive integers, and n=2 or 3.
For the inductive step, suppose the set of positive integers sums to k+1.
However, we utilise the fact that
$\displaystyle\frac{(a_1+a_2+\cdots+a_n)!}{a_1!\,a_2!\,\cdots\,a_n!}=y=\frac{(k+1)!}{a_1!\,a_2!\,\cdots\,a_n!}$
We want to know if $y$ is an integer.
Divide both sides by $(a_1+a_2+\cdots+a_n)$ and multiply both sides by $a_1$:
$\Rightarrow\displaystyle\frac{a_1\,y}{a_1+a_2+\cdots+a_n}=\frac{\left[(a_1-1)+a_2+\cdots+a_n\right]!}{(a_1-1)!\,a_2!\,\cdots\,a_n!}=x_i$, one of the values of $x$.
Similarly for $\displaystyle\frac{a_2\,y}{a_1+a_2+\cdots+a_n},\ \ldots,\ \frac{a_n\,y}{a_1+a_2+\cdots+a_n}$.
Then, if we sum all of these, the result is $y$; hence $y$ is a sum of integers, so it is an integer also.
$\displaystyle y=\sum_{j=1}^n\frac{a_j\,y}{a_1+a_2+\cdots+a_n}$
If $x$ is an integer when $a_1+a_2+\cdots+a_n=k$, then $y$ is certainly an integer when $a_1+a_2+\cdots+a_n=k+1$.
Hence the inductive step is complete.
| {"url":"http://mathhelpforum.com/number-theory/154082-division-problem.html","timestamp":"2014-04-19T03:21:17Z","content_type":null,"content_length":"57303","record_id":"<urn:uuid:a933efb9-8323-46d0-9431-a89e1ef6c9fc>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00290-ip-10-147-4-33.ec2.internal.warc.gz"}
This article is about the UCD Department. For statistics about Davis people, see Demographics.
Statistics at UC Davis is a small/intimate major and is considered great preparation for careers in several fields ranging from business to science. Because of the relatively small number of course
requirements, students often choose to double major in statistics and their chosen field of application.
At UC Davis statistics majors have a few different degree options. The department offers a B.S. in Statistics and a B.S. in Statistics with a computational emphasis. The new B.S. in Applied
Statistics is similar to the former B.A. in Statistics, which offers a more flexible set of course work, ideal for double majors in the social sciences. Here students may even switch stats 131AB for
the less rigorous stats 130AB.
As for graduate degrees one can either earn a M.S. or a Ph.D. The masters degree in statistics actually requires only a few core graduate courses. Since many graduate students were mathematics or
other science majors as undergraduates, the first year or so of this degree may include several undergraduate statistics classes. For a Ph.D. in statistics or a Ph.D. in Biostatistics one must take a
more rigorous set of courses and conduct extensive research.
The department also offers a minor in statistics. Requirements are the core stats courses such as 106 (Analysis of Variance), 108 (Linear Regression), 130A/131A (Probability Theory), 130B/131B
(Mathematical Statistics) and one upper division course with 130B or 131B as a prerequisite. If you are a math major intending on going to grad school in statistics or a bio major intending on going
to grad school in biostatistics, these courses are considered to be the key preparation you need, so that you don't enter having to take a bunch of undergrad classes. Specifically, many people who
plan on going to grad school in biostats are either bio majors who minor in statistics or math/stats majors who minor in Quantitative biology and bioinformatics.
Undergraduate Advising
Dr. Jie Peng, Undergraduate Adviser (For Fall 2010, while Dr. Chris Drake is on sabbatical)
Elizabeth Dudley, Undergraduate Program Coordinator
Alejandra Garibay, Peer Advisor (2010-2011)
Check the UC Davis Statistics department's website.
Lower Division
13 - Elementary Statistics. Learn the basics of statistics including but not limited to: probability distributions, hypothesis testing, confidence intervals, combinatorics, simple linear regression,
one-way/two-way ANOVA. Possibly the easiest class in mankind. Most people can learn this material on the job, but it might be good to take it anyways since employers look for it for any discipline.
32 - Statistical Analysis through Computers. Nearly equivalent to STA 13 in terms of statistical concepts covered; yet, there is more emphasis in the usage of computer packages (R). For stat majors,
this is a lot more useful course than STA 13.
Upper Division
100 - Applied Statistics for Biological Sciences. Nearly equivalent to STA 13 in terms of statistical concepts covered. The emphasis is on biological applications. Basically the same course as STA 13
and 32, but it is considered an upper division course.
103 - Applied Statistics for Business and Economics. Goes a little deeper than STA 13, applied problems in business and economics.
104 - Nonparametric Statistics. Many statistical analyses are based on common properties of known statistical models. Nonparametric statistics focus on parameterization via the data. These parameters
are flexible and thus distribution free. This class teaches you how to apply the most common nonparametric statistical tests. Fit for more unusual problems. This potentially can be a difficult
course, but usually the students are non-majors and that dilutes its rigor. There used to be an upper division nonparametrics class that was much more rigorous, but this is no longer the case.
106 - Analysis of Variance. Teaches the mathematics of basic ANOVA. Considered one of the easiest classes that one can take in the major. Stats 106 and 108 have a reputation of being more or less
plug and chug classes. Topics include 1-way and 2-way ANOVA, complete randomized block designs, Analysis of Covariance, and nested ANOVA.
108 - Linear Regression. Teaches the mathematics (and data analysis depending on Prof.) of simple linear regression. Unfortunately, it doesn't teach you much more than that. The statistics department
desperately needs an undergraduate class for nonlinear regression. Topics include simple linear regression, multiple linear regression, ANOVA approach to regression, model selection criteria (AIC,
Adjusted R^2, Mallows' Cp), backwards elimination, and forward selection model building.
120 - Probability and Random Variables for Engineers
An easier version of STA 131A. This is the old requirement for EE/CE majors, with the new requirement being EEC 161 starting Fall 08. CSE majors must take the more challenging STA 131A class.
130AB - Brief Mathematical Stats and Probability Theory. Supposedly easier than STA 131ABC, but depending on the Prof., that is not always the case.
131A - Probability Theory. Intro to probability theory. Learn about continuous and discrete probability distributions, CLM, moments, expected values, etc. Possibly the most important course in the
stats major. Everything else (like hypo. testing) follows from the base knowledge of probabilities.
• This class is often considered to be better than its sister class math 135A. The stats class focuses more on applications/problem solving, where the math class does deep into the theory. Most
people tend to agree that the stats version is also easier. -MattHh
• The amount of theory in this class largely depends upon the teacher. Some professors stick more to applications of probability, while others go deep into the mathematics behind probability, such
as Roussas.
131BC - Mathematical Statistics. You get taught the mathematics behind estimation, hypo. testing, simple linear regression, ANOVA, convergence and nonparametric statistics. The mathematical rigor not
withstanding, the subjects covered here are quite boring (IMO).
135 - Multivariate Data Analysis. Most of the material in undergraduate statistics courses is taught in a univariate setting. Multiple variables arise commonly in real-world situations. Hence, this
class tends to deal with more realistic data sets. This class does not give as much rigor as the similar STA232C course, but for an undergraduate course, you take what you can get.
137 - Applied Time Series Analysis. You get to learn the basics of time series analysis (starting with AR and MA models). Get to use Shumway's time series software for Windows, ASTSA. No other
undergraduate course deals with time series, widely used in economics and biostatistics (longitudinal analysis).
138 - Categorical Data Analysis. Learn the analysis of categorical data. Most of the times, you use partitioned count data. The class has been taught by Rahman Azari for the past several years, and
is a required course for those pursuing the B.S. Statistics option, unless you can get signed off on taking a different 130-level class instead (135, 137).
141 - Statistical Computing. Traditionally taught by Temple Lang. Class consists of multiple computing assignments (in R) with a final project at the end. There are no exams, but the assignments are
time consuming enough to the point that you rather take an exam than finish an assignment. You get preached that math is really not that useful on the broad scale. Assignments are incredibly
open-ended and allow for a total exploration of the data, with the focus being on how to succinctly express large volumes of data and deal with human error in your data sets.
145 - Bayesian Statistics. Bayesian statistics is a completely different way of doing statistics. Applications in the real world has increased in recent days thanks to the increase in computing
power. Used to be taught by Wes Johnson who emphasized its application using WinBUGS or JAGS. Johnson is at UCI, so Samaniego takes over and he emphasizes theory, which is cryptic at best.
Undergraduate Research
• CLIMB - is a program that focuses on mathematical and statistical modeling in biology. If you are interested in going to grad school in biostatistics, applied statistics or mathematics, or
biology, this research program is for you.
Wolfgang Polonik (Nonparametric Statistics, Probability Theory, Mathematical Statistics)
• One of the most helpful professors, somewhat easy too
• Thick German accent
• Since becoming department chair, he no longer teaches at the undergraduate level, but he still teaches STA231C, Graduate-level mathematical statistics.
George Roussas (Probability Theory, Mathematical Statistics)
• One of the oldest professors on campus whose courses are quite rigorous. Likes to prove everything.
• Thick Greek accent
• Makes you buy his textbook, which is quite expensive
• Very different teaching style in lecture versus office hours. If you find yourself having trouble in his lectures, give his office hours a chance. He sits down and really makes sure you
understand every step.
Prabir Burman (Biostatistics, Analysis of Variance, Regression Analysis, Multivariate Statistics)
• One of the best professors
• Has a bowl of chocolates in his office for students to take whenever he's around
I took Regression Analysis with him. He's a genius, explains concepts clearly, and exams are straightforward. Wish he taught more undergrad stats courses.
Fushing Hsieh (Biostatistics, Analysis of Variance, Regression Analysis)
• One of the easiest professors
Duncan Temple-Lang (Computational Statistics)
• He is one of the developers for R
• Talks fast, but really easy to get along with.
• Responds very quickly to emails and class mailing list questions
Chris Drake (Biostatistics, Sampling Theory)
• Undergraduate Adviser
• Biostatistics specialist
Francisco Samaniego (Bayesian Analysis, 130A/B)
• His STA 145 is hard (which is fine), but not useful in terms of applying the knowledge in the future.
• His STA 130A/B class is VERY theoretical
Jane-Ling Wang (Longitudinal Data Analysis, Survival Analysis)
• Somewhat methodical, but good teacher and person nonetheless.
• Good to take STA 131A from her.
Useful Statistical Packages
• R - The somewhat hard to understand command-line statistical package. It does not have the neat GUI features of the commercial version (S-PLUS); yet the ever-growing community of R developers provides add-ons to facilitate unique routines, which makes this a cutting-edge program for research. R is for people who have decent knowledge of programming and a constant supply of novel problems in data analysis.
• SAS - Arguably the most used (and coveted by employers) program for people in the field of business and economics. The programming language is even less intuitive than R, but there are many resources and professionals that can help you in learning the language. It is also only used for data analysis, so one can't get as creative in its analysis as in R, but it's very fast. Good to learn for people who are looking to work in companies which require data analysts. It also has the ugliest graphics engine ever... (but again, it's fast).
If you are a statistics major, you should edit this page!
One might also be interested in some Wiki related statistics: User Statistics and | {"url":"http://daviswiki.org/Statistics","timestamp":"2014-04-16T07:31:17Z","content_type":null,"content_length":"25048","record_id":"<urn:uuid:aac4e490-4d32-4fab-89ea-a96b13a6863f>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00026-ip-10-147-4-33.ec2.internal.warc.gz"} |
&& , || usage?
April 22nd, 2013, 07:50 PM
&& , || usage?
Hello, i'd like to confirm the following:
suppose the question is: we have three int's (int a, int b, int c) and the value is true if and only if the sum of a and c is greater than b, then b is divisible by 3 without remainder.
Do both the answers below mean the same?
Code Java:
if (!((a + c) > b) || b % 3 == 0) {
    return true;
}
Code Java:
if (a + c > b && b % 3 == 0) {
    return true;
}
April 22nd, 2013, 08:04 PM
Re: && , || usage?
The way to test whether the two expressions are equivalent (rather than just think about them) is to print their values for lots of different input values of a, b and c.
Do you mean "the value is true if and only if: if the sum of a and c is greater than b, then b is divisible by 3 without remainder"? If so, the first expression captures this meaning. The second means something like "the value is true if and only if both the sum of a and c is greater than b, and b is divisible by 3 without remainder". (A concrete case where they differ: a = 1, b = 3, c = 1 — the sum a + c = 2 is not greater than b, so the implication holds vacuously and the first expression is true, while the second is false.)
April 23rd, 2013, 10:53 AM
Re: && , || usage?
This if-statement will return true if a + c > b is false OR b is divisible by 3 without a remainder. The only time it will not return true is if a + c > b is true and b is not divisible by 3.
This if-statement returns true if a + c > b is true AND b is divisible by 3 without a remainder. This will return false if either a + c > b is false or b % 3 == 0 is false.
In Java, as well as other OOP languages, server-side languages (e.g. PHP) and client-side languages (e.g. JavaScript), && is the logical operator AND, while || is the logical operator OR.
Ranking Functions Not Windowing As Expected
Ranking Functions Not Windowing As Expected
• Hi all,
I'm new to ranking functions and windowing. Please excuse the simplicity of this post - but I'm failing to grasp why I'm receiving the output I am from SQL 2008 SP2 Dev x64, SQL 2008 SP1 Std x64,
and SQL 2008R2 Std x64. (This tells me I'm interpreting the function wrong!)
I'm running this set of statements:
CREATE TABLE RankTest
OrderBy1 INT NOT NULL,
Value1 CHAR(1) NOT NULL
INSERT INTO RankTest (OrderBy1, Value1) VALUES (1, 'A')
INSERT INTO RankTest (OrderBy1, Value1) VALUES (2, 'A')
INSERT INTO RankTest (OrderBy1, Value1) VALUES (3, 'B')
INSERT INTO RankTest (OrderBy1, Value1) VALUES (4, 'B')
INSERT INTO RankTest (OrderBy1, Value1) VALUES (5, 'A')
INSERT INTO RankTest (OrderBy1, Value1) VALUES (6, 'A')
SELECT *,
ROW_NUMBER() OVER (PARTITION BY Value1 ORDER BY OrderBy1) AS rn,
RANK() OVER (PARTITION BY Value1 ORDER BY OrderBy1) AS r,
DENSE_RANK() OVER (PARTITION BY Value1 ORDER BY OrderBy1) AS dr
FROM RankTest
ORDER BY OrderBy1
DROP TABLE RankTest
From the SELECT, I'm expecting to see (from all of the ranking functions) is a nice series of "1, 2, 1, 2, 1, 2", like this:
│OrderBy1 │Value1 │rn│r│dr│
│1 │A │1 │1│1 │
│2 │A │2 │2│2 │
│3 │B │1 │1│1 │
│4 │B │2 │2│2 │
│5 │A │1 │1│1 │
│6 │A │2 │2│2 │
But instead, I'm getting this:
│OrderBy1 │Value1 │rn│r│dr│
│1 │A │1 │1│1 │
│2 │A │2 │2│2 │
│3 │B │1 │1│1 │
│4 │B │2 │2│2 │
│5 │A │3 │3│3 │
│6 │A │4 │4│4 │
Why is the partition not working like my brain says it should? Doesn't the ORDER BY in the OVER clause mean that the ranking functions should "start over" in their counting when they see the "A"
value in row 5? Or does the PARTITION BY operate "first" (to group the rows) and then the ORDER BY merely orders the contents of the partitions?
How can I modify my query to return the results I'm looking for?
Tuesday, April 26, 2011 6:03 PM
• The windowing functions are working correctly, one way to get what you want is:
With cte As
(SELECT *,
ROW_NUMBER() OVER (ORDER BY OrderBy1) - ROW_NUMBER() OVER (PARTITION BY Value1 ORDER BY OrderBy1) AS Island
--RANK() OVER (PARTITION BY Value1 ORDER BY OrderBy1) AS r,
--DENSE_RANK() OVER (PARTITION BY Value1 ORDER BY OrderBy1) AS dr
FROM RankTest)
Select *,
ROW_NUMBER() OVER (PARTITION BY Value1, Island ORDER BY OrderBy1) As rn,
RANK() OVER (PARTITION BY Value1, Island ORDER BY OrderBy1) AS r,
DENSE_RANK() OVER (PARTITION BY Value1, Island ORDER BY OrderBy1) AS dr
From cte
ORDER BY OrderBy1;
All replies
• The ranking functions are working as expected. I am afraid that I am not awake on this one. To me it looks like the best solution might be to use a cursor. Additional help please?
Here is a method that is more-or-less a triangular join method; however, I think this will be slower than a cursor-based version:
declare @RankTest table
( OrderBy1 INT NOT NULL,
Value1 CHAR(1) NOT NULL)
INSERT INTO @RankTest
select 1, 'A' union all select 2, 'A' union all
select 3, 'B' union all select 4, 'B' union all
select 5, 'A' union all select 6, 'A'
select a.orderBy1, a.value1, count(*)
from @rankTest a
join @rankTest b
on b.value1 = a.value1
and b.orderBy1 <= a.orderBy1
and not exists
( select 0 from @rankTest c
where c.value1 <> a.value1
and c.orderBy1 between b.orderBy1 and a.orderBy1
group by a.orderBy1, a.value1
/* -------- Output: --------
orderBy1 value1
----------- ------ -----------
1 A 1
2 A 2
5 A 1
6 A 2
3 B 1
4 B 2
(6 row(s) affected)
( This is a case where the ORACLE "Lag" or "Last" analytic functions might be helpful. )
Tuesday, April 26, 2011 6:16 PM
• To get 1,2,1,2,1,2 like that the values for rows 5 & 6 would need to be 'C'.
This is going to get you pretty close to what you are looking for but if you really need it to reset back to 1 on row 5 you'll have to have the Value1 column be different than either of the
previous sets. ROW_NUMBER() OVER (PARTITION BY Value1 ORDER BY OrderBy1) AS rn
I went through a similare problem last week and went as far as putting one ROW_NUMBER in a sub-select because you can't for example do DENSE_RANK(ROW_NUMBER() OVER (PARTITION BY Value1 ORDER BY
OrderBy1)) AS rn. It's not allowed.
-- Aaron Nelson.
Tuesday, April 26, 2011 6:36 PM
• Think of it this way... The ranking is taking place before the SQL statement's ORDER BY clause. So the last line has no bearing on any of these functions; it's only affecting the final display.
The ranking criteria is 100% in the OVER clause of the given function, with the "PARTITION BY" columns being sorted first, followed by the "ORDER BY" columns.
So based on your example, in all 3 cases, the ranking order is...
OrderBy1 Value1 r
1 A 1
2 A 2
5 A 3
6 A 4
3 B 1
4 B 2
With [Value1] being sorted 1st followed by [OrderBy1].
Jason Long
Tuesday, April 26, 2011 7:06 PM
• Todd,
> Or does the PARTITION BY operate "first" (to group the rows) and then the ORDER BY merely orders the contents of the partitions?
> How can I modify my query to return the results I'm looking for?
Tom already answered your question, so I am going to add only a little. You are looking to rank the rows not only by [Value1], but also by the island (run of consecutive rows) based on the values of:
Island OrderBy1 Value1
1 1 A
1 2 A
1 3 B
1 4 B
2 5 A
2 6 A
If there is no gap in the [OrderBy1] sequence, then you can use:
[OrderBy1] - ROW_NUMBER() OVER(PARTITION BY Value1 ORDER BY [OrderBy1]) AS Island
otherwise you use:
ROW_NUMBER() OVER(ORDER BY [OrderBy1]) - ROW_NUMBER() OVER(PARTITION BY [Value1] ORDER BY [OrderBy1]) AS Island
Then you rank the result set by:
ranking_function() OVER(PARTITION BY [Value1], Island ORDER BY [OrderBy1])
I learned this trick from Itzik Ben-Gan, and you can read more about ranking functions in his last book about T-SQL Querying.
Inside Microsoft® SQL Server® 2008: T-SQL Querying
Tuesday, April 26, 2011 7:08 PM
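To make the island computation concrete, here it is worked by hand for the six sample rows (rna = ROW_NUMBER() over all rows, rnp = ROW_NUMBER() partitioned by Value1, Island = rna - rnp):

OrderBy1 Value1 rna rnp Island
1        A      1   1   0
2        A      2   2   0
3        B      3   1   2
4        B      4   2   2
5        A      5   3   2
6        A      6   4   2

Each consecutive run gets a constant (Value1, Island) pair — (A,0), (B,2), (A,2) — so ranking partitioned by both columns restarts at 1 exactly where the original poster wanted.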
• Alternatively, Plamen Ratchev has a good blog on this topic as well which I always re-check then faced islands problem
For every expert, there is an equal and opposite expert. - Becker's Law
My blog
Thursday, April 28, 2011 1:45 AM | {"url":"http://social.msdn.microsoft.com/Forums/sqlserver/en-US/f5ca1754-38f8-4a1f-9440-ab140c95c57a/ranking-functions-not-windowing-as-expected?forum=transactsql","timestamp":"2014-04-17T13:15:44Z","content_type":null,"content_length":"91697","record_id":"<urn:uuid:e8eb8f21-4a31-4eb5-9191-449a7d2121d0>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00504-ip-10-147-4-33.ec2.internal.warc.gz"} |
-en" Joan is a male.
Born in 1965 in the Belgian village of Achel. He received a degree in Electro-Mechanical Civil Engineering from Katholieke Universiteit Leuven in 1988, and won his Ph.D. in Cryptography in 1995. He
does a lot of work with block ciphers, stream ciphers and hash functions.
Daemen has worked for Johnson and Johnson, Bacob (a Belgian bank), and Banksys (operator of most Belgian ATMs). As of this writing, he works for Proton World International, a technology provider
dealing with smart cards and security issues.
Daemen is probably most famous for his work with Vincent Rijmen, a man he met while doing his doctoral research with a group called COSIC. Together, they created Rijndael ("rain-doll"), a
cryptographic algorithm that has been chosen by the National Institute of Standards and Technology as the Advanced Encryption System, set to replace DES as the form of encryption used by the US
government (and probably a host of other groups as well) in the summer of 2001.
Daemen and Rijmen chose the name "Rijndael" as an amalgamation of their last names, which they were tired of hearing mispronounced by careless English speakers. As long as you don't call it "Region
Dale," they're happy. Other names they considered (from Dutch): "Herfstvrucht," "Angstschreeuw" and "Koeieuier." Let it never be said that math people don't have a sense of humor.
Publications by Joan Daemen:
• J. Daemen, R. Govaerts, J. Vandewalle, "Cryptanalysis of MUX-LFSR based scramblers," Proceedings of the 3rd symposium on State and Progress of Research in Cryptography, W. Wolfowicz, Ed.,
Fondazione Ugo Bordoni, 1993, pp. 55-61.
• J. Daemen, R. Govaerts, J. Vandewalle, "Block ciphers based on modular arithmetic," Proceedings of the 3rd symposium on State and Progress of Research in Cryptography, W. Wolfowicz, Ed.,
Fondazione Ugo Bordoni, 1993, pp. 80-89.
• J. Daemen, R. Govaerts, J. Vandewalle, "Resynchronization weaknesses in synchronous stream ciphers," Advances in Cryptology, Proceedings Eurocrypt'93, LNCS 765, T. Helleseth, Ed.,
Springer-Verlag, 1994, pp. 159-169.
• L. Claesen, J. Daemen, M. Genoe, G. Peeters, "Subterranean: a 600 Mbit/sec cryptographic VLSI chip," Proceedings of ICCD '93: VLSI in Computers and Processors, R. Camposano, A. Domic, Eds., IEEE
Computer Society Press, 1993, pp. 610-613.
• J. Daemen, R. Govaerts, J. Vandewalle, "Cryptanalysis of 2,5 rounds of IDEA," ESAT-COSIC Technical Report 93/1, 1993.
• J. Daemen, R. Govaerts, J. Vandewalle, "Weak keys of IDEA," Advances in Cryptology, Proceedings Crypto'93, LNCS 773, D. Stinson, Ed., Springer-Verlag, 1994, pp. 224-231.
• J. Daemen, R. Govaerts, J. Vandewalle, "A new approach towards block cipher design," Fast Software Encryption, LNCS 809, R. Anderson, Ed., Springer-Verlag, 1994, pp. 18-32.
• J. Daemen, R. Govaerts, J. Vandewalle, "An efficient nonlinear shift-invariant transformation," Proceedings of the Fifteenth Symposium on Information Theory in the Benelux, Louvain-la-Neuve (B),
May 30-31, 1994, pp. 82-89.
• J. Daemen, R. Govaerts, J. Vandewalle, "Correlation matrices," Fast Software Encryption, LNCS 1008, B. Preneel, Ed., Springer-Verlag, 1995, pp. 275-285.
• J. Daemen, "Cipher and hash function design. Strategies based on linear and differential cryptanalysis," Doctoral Dissertation , March 1995.
• V. Rijmen, J. Daemen, B. Preneel, A. Bosselaers, E. De Win, "The cipher SHARK," Fast Software Encryption, LNCS 1039, D. Gollmann, Ed., Springer-Verlag, 1996, pp. 99-112.
• J. Daemen, L.R. Knudsen, V. Rijmen, "The block cipher Square," Fast Software Encryption, LNCS 1267, E. Biham, Ed., Springer-Verlag, 1997, pp. 149-165.
• J. Daemen and C.S.K. Clapp, "Fast hashing and stream encryption with PANAMA," Fast Software Encryption, LNCS 1372 , S. Vaudenay, Ed., Springer-Verlag, 1998, pp.60-74. Reference implementation
• J. Daemen, M. Peeters, G. Van Assche, "Bitslice ciphers and power analysis attacks," Fast Software Encryption , to appear.
List Source: the official Rijndael homepage. | {"url":"http://www.everything2.com/index.pl?node=Joan%20Daemen","timestamp":"2014-04-19T14:31:20Z","content_type":null,"content_length":"24235","record_id":"<urn:uuid:413170ed-dc91-4394-a408-f6cde02e3330>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00550-ip-10-147-4-33.ec2.internal.warc.gz"} |
factoring 16
August 29th 2012, 10:23 PM
factoring 16
when factoring 16 compared to 2 thirds you divide 16 into 2x2x2x2 and then times 3
why don't 2 of the twos in the 16 cancel out like in many other cases of factorization
you are told to take only one number from each tree and in this case you take all 4?
August 29th 2012, 10:52 PM
Re: factoring 16
why is it that sometimes when factoring you don't cancel out any factors when multiplying
for example 16 in the denominator
2x2x2x2 and if the other factor from a different fraction was 3 then
it would be 2x2x2x2x3 and not 2x2x3 with two of these 2 canceled?
August 30th 2012, 01:16 AM
Re: factoring 16
Your question does not make much sense in either of your posts. Please provide an example (the full question/problem) and then explain with relevance to that question.
In the meantime, maybe this will help:
When factoring 2 numbers we take the highest common factor that they both share, which we can find easily by writing those numbers down as products of their primes (as you did: 2x2x2x2, or 2x2x3, etc.). We then find the prime numbers that feature in both lists; in this example we would factor with 2x2, as 2 features twice in both lists. Therefore 2x2 = 4, and 4 would be the highest
common factor that we use. | {"url":"http://mathhelpforum.com/pre-calculus/202697-factoring-16-a-print.html","timestamp":"2014-04-16T08:04:44Z","content_type":null,"content_length":"4666","record_id":"<urn:uuid:d7b3c53c-42a8-4d15-8c73-9e4146b20603>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00367-ip-10-147-4-33.ec2.internal.warc.gz"} |
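A worked note on what seems to be the underlying question in the thread above (an assumption: the poster appears to be building a common denominator for fractions with denominators 16 and 3). Cancelling factors uses the greatest common divisor, but a common denominator uses the least common multiple, which keeps every prime at its highest power:

$16 = 2^4$ and $3 = 3$, so $\operatorname{lcm}(16, 3) = 2^4 \cdot 3 = 48$.

None of the four 2's cancel because 3 contributes no factor of 2 to share; factors only cancel when the same prime appears in both numbers (as in $\gcd(16, 12) = 2^2 = 4$).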
Reply to comment
January 1998
Imagine an infinite sequence of coin tosses, and suppose you win a penny every time there is a sequence of k heads in a row followed by a tail (in other words, at every occurrence of the sequence
HHH...HT of k heads followed by a tail). If k is large then you will not win pennies very often, but you will certainly win some sooner or later.
How many pennies are you likely to win in the first n tosses of the coin? | {"url":"http://plus.maths.org/content/comment/reply/2916","timestamp":"2014-04-20T08:18:45Z","content_type":null,"content_length":"21030","record_id":"<urn:uuid:e4b6c777-2772-4088-b595-174fc076868a>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00640-ip-10-147-4-33.ec2.internal.warc.gz"} |
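A quick back-of-envelope note on the puzzle above (a linearity-of-expectation sketch, not the magazine's own solution): for each starting position $i = 1, \dots, n-k$, the probability that tosses $i, \dots, i+k$ spell out $HH\cdots HT$ is $(1/2)^{k+1}$, so the expected number of pennies won in the first $n$ tosses is $(n-k)/2^{k+1} \approx n/2^{k+1}$.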
Computational Adequacy for Recursive Types in Models of Intuitionistic Set Theory
- Manuscript
"... We present a simple and workable axiomatization of domain theory within intuitionistic set theory, in which predomains are (special) sets, and domains are algebras for a simple equational
theory. We use the axioms to construct a relationally parametric set-theoretic model for a compact but powerful ..."
Cited by 10 (3 self)
Add to MetaCart
We present a simple and workable axiomatization of domain theory within intuitionistic set theory, in which predomains are (special) sets, and domains are algebras for a simple equational theory. We
use the axioms to construct a relationally parametric set-theoretic model for a compact but powerful polymorphic programming language, given by a novel extension of intuitionistic linear type theory
based on strictness. By applying the model, we establish the fundamental operational properties of the language. 1.
, 2005
"... Plotkin suggested using a polymorphic dual intuitionistic / linear type theory (PILLY) as a metalanguage for parametric polymorphism and recursion. In recent work the first two authors and R.L.
Petersen have defined a notion of parametric LAPL-structure, which are models of PILLY, in which one can r ..."
Cited by 5 (4 self)
Add to MetaCart
Plotkin suggested using a polymorphic dual intuitionistic / linear type theory (PILLY) as a metalanguage for parametric polymorphism and recursion. In recent work the first two authors and R.L.
Petersen have defined a notion of parametric LAPL-structure, which are models of PILLY, in which one can reason using parametricity and, for example, solve a large class of domain equations, as
suggested by Plotkin. In this paper we show how an interpretation of a strict version of Bierman, Pitts and Russo’s language Lily into synthetic domain theory presented by Simpson and Rosolini gives
rise to a parametric LAPL-structure. This adds to the evidence that the notion of LAPL-structure is a general notion suitable for treating many different parametric models, and it provides formal
proofs of consequences of parametricity expected to hold for the interpretation. Finally, we show how these results in combination with Rosolini and Simpson’s computational adequacy result can be
used to prove consequences of parametricity for Lily. In particular we show that one can solve domain equations in Lily up to ground contextual equivalence. 1
- In preparation , 2006
"... This paper introduces Basic Intuitionistic Set Theory BIST, and investigates it as a first-order set-theory extending the internal logic of elementary toposes. Given an elementary topos,
together with the extra structure of a directed structural system of inclusions (dssi) on the topos, a forcing-st ..."
Cited by 3 (3 self)
Add to MetaCart
This paper introduces Basic Intuitionistic Set Theory BIST, and investigates it as a first-order set-theory extending the internal logic of elementary toposes. Given an elementary topos, together
with the extra structure of a directed structural system of inclusions (dssi) on the topos, a forcing-style interpretation of the language of first-order set theory in the topos is given, which
conservatively extends the internal logic of the topos. Since every topos is equivalent to one carrying a dssi, the language of first-order set theory has a forcing interpretation in every elementary topos. We
prove that the set theory BIST+ Coll (where Coll is the strong Collection axiom) is sound and complete relative to forcing interpretations in toposes with natural numbers object (nno). Furthermore,
in the case that the structural system of inclusions is superdirected, the full Separation schema is modelled. We show that every cocomplete topos and every realizability topos can be endowed (up to
equivalence) with such a superdirected structural system of inclusions. This provides a uniform explanation for why such “real-world ” toposes model Separation. A large part of the paper is devoted
to an alternative notion of category-theoretic model for BIST, which, following the general approach of Joyal and Moerdijk’s Algebraic Set Theory, axiomatizes the structure possessed by categories of
classes compatible with
, 2004
"... We discuss a notion of universe in toposes which from a logical point of view gives rise to an extension of Higher Order Intuitionistic Arithmetic (HAH) that allows one to construct families of
types in such a universe by structural recursion and to quantify over such families. Further, we show ..."
Add to MetaCart
We discuss a notion of universe in toposes which from a logical point of view gives rise to an extension of Higher Order Intuitionistic Arithmetic (HAH) that allows one to construct families of types
in such a universe by structural recursion and to quantify over such families. Further, we show that (hierarchies of) such universes do exist in all sheaf and realizability toposes but neither in the
free topos nor in the V!+! model of Zermelo set theory. Though universes
"... This survey article is intended to introduce the reader to the field of Algebraic Set Theory, in which models of set theory of a new and fascinating kind are determined algebraically. The method
is quite robust, admitting adjustment in several respects to model different theories including classical ..."
Add to MetaCart
This survey article is intended to introduce the reader to the field of Algebraic Set Theory, in which models of set theory of a new and fascinating kind are determined algebraically. The method is
quite robust, admitting adjustment in several respects to model different theories including classical, intuitionistic, bounded, and predicative ones. Under this scheme some familiar set theoretic
properties are related to algebraic ones, like freeness, while others result from logical constraints, like definability. The overall theory is complete in two important respects: conventional
elementary set theory axiomatizes algebraic framework itself are also complete with respect to a range of natural models consisting of “ideals ” of sets, suitably defined. Some previous results
involving realizability, forcing, and sheaf models are
"... brief introduction to algebraic set theory ∗ ..." | {"url":"http://citeseerx.ist.psu.edu/showciting?doi=10.1.1.153.6157","timestamp":"2014-04-17T14:45:42Z","content_type":null,"content_length":"28348","record_id":"<urn:uuid:3cdb6593-3f37-46ae-bde7-50323dcb787a>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00550-ip-10-147-4-33.ec2.internal.warc.gz"} |
RING is a coordinate transformation program written to provide the quantitative data that aids the interpretation of ring conformations and planar ring forms. Starting from the normal mode analysis
of an N-membered ring (N-ring), one can show that always N-3 vibrations describe the out-of-plane puckering of the N-ring and 2N-3 vibrations in-plane deformations. Based on this analysis RING
provides the following information once the Cartesian coordinates of the ring molecule have been read in:
This information is needed for conformational and electronic structure analysis of ring molecules:
For a set of N-3 out-of-plane vibrations with finite vibrational amplitude a puckered ring is exactly specified. The vibrational amplitudes correspond to the N out-of-plane coordinates zj, which in
turn can be exactly specified by the puckering equations (1) and (2): [A1,A6,C1]
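In the standard Cremer–Pople form (reproduced here from the literature; the identification with the numbering (1) and (2) is an assumption), the displacements read

$$z_j = \sqrt{\tfrac{2}{N}}\, \sum_{n} q_n \cos\!\left[\phi_n + \frac{2\pi n (j-1)}{N}\right], \qquad n = 2, 3, \ldots \tag{1}$$

with, for even N, the additional crown-puckering term $q_{N/2}\,(-1)^{\,j-1}/\sqrt{N}$ added to each $z_j$. (2)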
In these equations, the parameters qn correspond to puckering amplitudes and the angles φn to pseudorotation phase angles. The index n denotes the pseudorotation modes of the N-ring. In the
derivation of the puckering coordinates, n = 0 and n = 1 correspond to translation and overall rotation of the planar reference ring in the mean plane. [C1] The 3-membered ring is always planar and
does not possess any out-of-plane vibrations. Hence, it has only index n = 0 and n = 1, defining parameter q1 and rotational angle φ1 for translation and
4-membered ring (N-3 = 1: puckering amplitude q2, one out-of-plane vibration leading to “crown puckering”) and pseudorotation with the 5-membered ring (N-3 = 2: puckering amplitude q2 and
pseudorotation phase angle φ2). For a complete pseudorotation cycle, the phase angle φ2 increases from 0 to 180 and 360°, which is identical to the 0° situation. Such a pseudorotation cycle is
schematically shown in Figure 1 for the 5-ring [A1,A3,B2,B3]
Figure 1. Pseudorotational cycle of tetrahydrofurane (O is atom #1; clockwise numbering around the ring). According to the rules for ring conformations (see below) the Cs-symmetrical envelope form
with the O atom at the apex of the ring is located at φ2 = 0°. The C2-symmetrical twist form of tetrahydrofurane is located at φ2 = 90°. 10 envelope (E) and 10 twist (T) conformations are shown along
a pseudorotation cycle (0° ≤ φ2 ≤ 360°) with fixed puckering amplitude q2. At the center of the pseudorotation cycle the planar form with q2 = 0 Å is located. Along the pseudorotation cycle an
infinite number of forms are positioned where only the 20 E and T forms with the O atom at different ring positions are shown. A superscript before the symbol denotes for an E form that the apex atom
is above the mean plane, a subscript behind E that the apex atom is below the mean plane. For the T forms, there is always a superscript before and a subscript behind T indicating the atoms of the
bond with the largest dihedral angle where the first atom is above, the second atom below the mean plane. Note that φ2 values differing by 180° correspond to original and inverted form. The
conformational space is N-3 = 2-dimensional. Substituents R and S are shown to indicate the different substituent orientations during pseudorotation.
Figure 2. Conformational globe of a puckered six-membered pyran ring. Puckering coordinates {q2, φ2, q3} that span the globe as well as some distinct longitudes and latitudes are shown. The positions
of distinct ring forms are shown in steps of 30° along the equator, the tropic of Cancer, and the tropic of Capricorn (compare with Figure 3). For reasons of clarity only the front side of the globe
is shown. Conformations on the backside are shown in Figure 3. Atom O1 is always indicated.
A six-membered ring such as cyclohexane has 6-3 = 3 puckering coordinates, which split up into the pseudorotational coordinate pair {q2, φ2} describing the pseudorotation of boat and twistboat forms
and a single “crown” puckering amplitude q3, which describes the chair form (positive q3) and the inverted chair form (negative q3). For pseudorotational cycles, the puckering amplitude is always kept constant while the phase angle varies.
Figure 3. Conformations of puckered cyclohexane. Three pseudorotational cycles at the hyperspherical angles Θ2 = 66.5, 90, and 113.5° (compare with Figure 2) are shown. Distinct ring forms are shown in steps of 30° along the equator, the tropic of Cancer, and the tropic of Capricorn (compare with Figure 2).
A non-planar seven-membered ring has 7-3 = 4 puckering parameters spanning two different pseudorotational subspaces: {q2, φ2} and {q3, φ3}. The conformational energy space is 4-dimensional and its
unit vectors, giving the locations of the 7-ring boat, twistboat, chair, and twistchair forms, are orthogonal to each other. The conformational space can be symbolized by a torus, as done in Figure 4.
Figure 4. The pseudorotational cycles of a 7-membered ring symbolized by a torus.
In the 8-membered ring, there are these two pseudorotational spaces and an additional crown puckering space spanned by puckering amplitude q4.
For even-membered rings, there are in general (N-4)/2 pseudorotational modes, each described by a puckering amplitude – phase angle pair {qn, φn} where n = 2, 3, ... (N-2)/2. In addition, even-membered rings possess an additional puckering amplitude qN/2, which describes crown puckering (N = 4: folding of the ring; N = 6: chair puckering; N = 8, 10, etc.: crown puckering).
For odd-membered rings, there are in general (N-3)/2 pseudorotational modes, again each described by a puckering amplitude – phase angle pair {qn,φn} where n = 2, 3, ... (N-1)/2. All ring
conformations correspond to points in the (N-3)-dimensional conformational energy space.
For purposes of analysis, it is useful to use spherical (N = 6) or hyperspherical (N > 6) ring puckering coordinates {Q, Θn, φn} rather than the hypercylindrical coordinates {qn, φn} described
above. For this purpose a total puckering amplitude Q is defined and a polar angle Θn (see Figures 2 and 3 for the six-membered ring). [A1,A3] These are also calculated by the program RING.
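For N = 6, for example, the conversion takes the familiar form

$$Q^2 = q_2^2 + q_3^2, \qquad q_2 = Q\sin\Theta, \qquad q_3 = Q\cos\Theta,$$

so that Θ = 0° and Θ = 180° correspond to the chair and inverted chair at the poles of the conformational globe, while Θ = 90° gives the pseudorotating boat/twistboat forms on the equator (Figures 2 and 3).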
In Table 1 (taken from [A3]) the puckering coordinates are listed according to ring size.
The planar ring is the appropriate reference for the puckered ring. It is positioned in a plane, which is called the mean plane (zj = 0). There is a unique mathematical algorithm to determine the
mean plane, which has been described elsewhere. [A1, A6] Each puckered ring is associated with a planar ring, which is obtained by projecting the puckered ring into the mean plane. The projected
planar ring is described with a set of 2N-3 deformation parameters, which quantify the deviation of the planar ring from a regular N-membered polygon of unit length edges (see below).
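The mean-plane determination, projection, and conversion to the pairs {qn, φn} just described can be sketched compactly. The following is a minimal Python/numpy sketch of the standard Cremer-Pople recipe; it is not the RING program itself, and the function name and conventions are illustrative assumptions:

    # Minimal sketch of the Cremer-Pople puckering analysis described above.
    # NOT the RING program itself; names and conventions are illustrative.
    import numpy as np

    def puckering_coordinates(coords):
        """coords: (N, 3) array of Cartesian ring-atom positions, in ring order."""
        N = len(coords)
        R = coords - coords.mean(axis=0)                 # center the ring
        j = np.arange(N)                                 # 0-based atom index (j-1 above)
        # The two in-plane vectors whose cross product defines the mean plane
        Ra = (R * np.sin(2 * np.pi * j / N)[:, None]).sum(axis=0)
        Rb = (R * np.cos(2 * np.pi * j / N)[:, None]).sum(axis=0)
        n = np.cross(Ra, Rb)
        n /= np.linalg.norm(n)                           # unit normal of the mean plane
        z = R @ n                                        # out-of-plane displacements z_j

        q, phi = {}, {}
        for m in range(2, (N - 1) // 2 + 1):             # pseudorotation modes
            qc = np.sqrt(2.0 / N) * (z * np.cos(2 * np.pi * m * j / N)).sum()
            qs = -np.sqrt(2.0 / N) * (z * np.sin(2 * np.pi * m * j / N)).sum()
            q[m] = np.hypot(qc, qs)                      # puckering amplitude q_m
            phi[m] = np.degrees(np.arctan2(qs, qc)) % 360.0  # phase angle in degrees
        if N % 2 == 0:                                   # extra crown amplitude q_{N/2}
            q[N // 2] = (z * (-1.0) ** j).sum() / np.sqrt(N)
        Q = np.sqrt((z ** 2).sum())                      # total puckering amplitude
        return z, q, phi, Q

For a planar ring all zj vanish, and every amplitude (and Q) is zero, consistent with the planar reference described below.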
Definition of Basis Conformations: The way the puckering coordinates span the conformational space of an N-membered ring defines basis conformations, which can be considered the basis vectors of the conformational space. For this purpose, the puckering amplitudes are frozen at a constant value (larger than zero) and the phase angles take the values defined in Table 2. In conformational analysis, certain puckered ring forms with high symmetry have been given common names reflecting the form of the puckered ring. Some of these forms turn out to be basis conformations (Table 2), whereas others are found on semi-inversion paths between pseudorotational cycles and on inversion paths. In this way, conformational analysis is simplified because puckered ring forms can be related to each other
via connecting paths through conformational space.
Once the mean plane of a puckered ring has been determined, the orientations of the substituent bonds with regard to this plane can be determined. Bonds that are close to being perpendicular to the mean plane are called “g-axial” (g for geometrical), and those that are almost parallel to the mean plane “g-equatorial.” [A3] Substituent bonds whose orientation is neither g-axial nor g-equatorial but intermediate are termed “g-inclinal” (inclined with regard to the direction of the mean plane). These definitions generalize the Barton-Hassel-Pitzer-Prelog rules for the C-H bonds
in cyclohexane [3] to all rings in a well-defined manner.
Program RING calculates the parameters α and β that give the orientation of a substituent bond as illustrated in Figure 5. Of course, the same description can be used for the ring bonds.
It is useful to distinguish between a top (t) and a bottom (b) side of the ring. Applying the rules for ring numbering, each atom gets a number. If, looking at the ring, the atom numbers increase clockwise (counter-clockwise), one is viewing the top side (bottom side).
Figure 5. Definition of substituent orientations relative to the mean plane of the ring.
Artificial Light in a Quantum Spin Ice
Real magnetic monopoles remain a theoretical curiosity, but researchers have found that excitations in materials called spin ices behave like analog monopoles, obeying an effective Coulomb’s law for
magnetic charges. In Physical Review B, Owen Benton and his colleagues at the University of Bristol, UK, ask what the “photons” that emerge in the quantum version of a spin ice would look like in an experiment.
The most common spin ices have a structure consisting of corner sharing tetrahedra, with a spin at each corner. The spins are “frustrated,” meaning they can’t align in such a way that satisfies the
interactions with all their neighbors, so they compromise and form a structure with two spins pointing into each tetrahedron and two pointing out (a configuration analogous to the ordering of hydrogen bonds in water ice). In a spin ice, flipping an inward-pointing spin to an outward one is analogous to exciting a monopole-antimonopole pair in adjacent tetrahedra.
Theorists have posited that strong quantum fluctuations in ice structures would allow one configuration to transition into another, creating a superposition of classical states called a quantum spin
ice that can be described (mathematically) by a kind of quantum electrodynamics with magnetic monopoles.
Benton et al. have now translated this highly mathematical theory into a prediction for how neutrons, the probe of choice for studying magnets, scatter from a quantum spin ice. They show that the
signature for classical spin ices, called “pinch points,” disappears in the quantum spin ice. Similarly, excitations around these points mimic “photons” in the artificial electromagnetic field. – Jessica Thomas
find the derivative of f(x)=x^2 + x -3 using the definition of the derivative
\[f'(x)\]\[=\lim_{h \rightarrow 0}\frac{f(x+h) - f(x)}{h}\]\[=\lim_{h \rightarrow 0}\frac{[(x+h)^2 + (x+h) -3] -(x^2 + x -3)}{h}\]Can you simplify the fraction first?
I used x^2 + 2(x)(h) + h^2 + x +h -3 -x^2 -x +3 = 2x + x ??
x^2 + 2(x)(h) + h^2 + x + h - 3 - x^2 - x + 3 <- correct for the numerator. But it's not equal to 2x + x... \[x^2 + 2(x)(h) + h^2 + x +h -3 -x^2 -x +3\]Group the like terms together: \[x^2 -x^2 + x -x -3 +3+ h^2+ 2(x)(h)+h\] Now, can you simplify the above expression?
basically factor out the h to get h(h+2x+1), divide by the h in the denominator, and get h + 2x + 1?
And you got: \[\lim_{h \rightarrow 0} (h+2x+1)\]Can you evaluate the limit?
Would be 2x + 1 ??? Because h is going to 0
Indeed it is!
Holy Cow! It came together finally!! Thank you very much for your help! I would have been stuck!!!
You're welcome :)
Have a great day!!
You too! :)
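For anyone who wants to double-check the algebra above, here is a minimal sketch using sympy (assumed available) to evaluate the difference quotient directly:

    # Check the limit worked out above: the difference quotient of
    # f(x) = x^2 + x - 3 simplifies to h + 2x + 1, whose limit as
    # h -> 0 is 2x + 1.
    import sympy as sp

    x, h = sp.symbols('x h')
    f = x**2 + x - 3
    quotient = sp.simplify((f.subs(x, x + h) - f) / h)
    print(quotient)                    # h + 2*x + 1
    print(sp.limit(quotient, h, 0))    # 2*x + 1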
Accelerating in Place
With respect to a local inertial coordinate system at any point on the Earth's surface, the surface is accelerating outward at 9.8 m/sec^2, and yet the radius of the Earth is (fairly) constant. It
might seem as if these two facts conflict with each other, but they can be reconciled if we allow local variations in the metric of spacetime, which effectively permits variations in the rate at
which time advances.
Imagine two ideal clocks near the Earth's surface, positioned so that one is directly above the other, separated by a distance of 1 meter. At any arbitrary time we can "synchronize" the two clocks,
so they both read t = 0, using some standard and repeatable method, such as the receipt of light pulses emitted from the mid-point between them. Then, after one second has elapsed on the upper clock,
the same procedure can be used to compare the readings on the clocks, and we find that the lower clock shows an elapsed time of slightly less than one second. This has been confirmed by observations
of the red-shifted frequencies of light emitted from characteristic processes in strong gravitational fields near stars, as well as in terrestrial experiments. The lapse of proper time along a worldline at a fixed radial position near a gravitating body of mass m is lower than for worldlines farther from the body. Quantitatively, the derivative of proper time with respect to Schwarzschild coordinate time at a fixed radius r in a spherically symmetrical static field is
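$$\frac{d\tau}{dt} \;=\; \sqrt{1 - \frac{2Gm}{c^2 r}} \qquad (1)$$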
where G is Newton's gravitational constant and c is the speed of light with respect to local inertial coordinates. Roughly speaking, we can imagine lines of constant proper time (for worldlines of
fixed radial position) drawn on a graph of coordinate time versus radial distance, as shown below.
What effect does this variation in the lapse of proper time versus radial distance have on the behavior of test particles in this region? It might be tempting to think that a free particle, being a
geodesic in spacetime, would accelerate outward, so that its worldline maintains a constant angle relative to the lines of constant proper time. Indeed, this would be the case if the signature of the Minkowski metric were positive. However, the signature is actually negative, which explains why the timelike extremal (i.e., geodesic) paths of free particles in spacetime follow curves that maximize
(rather than minimize) the lapse of proper time. The line element on one radial ray is of the form
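$$(d\tau)^2 \;=\; g_{tt}\,(dt)^2 \,+\, g_{rr}\,(dr)^2$$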
where g_tt is independent of t. This relationship between the three differentials dτ, dt, and dr can be re-arranged into the form
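$$(dt)^2 \;=\; \frac{(d\tau)^2 \,-\, g_{rr}\,(dr)^2}{g_{tt}}$$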
Since the geodesic paths on the manifold are strictly a function of the relation between the differentials, and how this relation changes with variations in r, it follows that we get the same timelike geodesics by minimizing (dt)^2 as we do by maximizing (dτ)^2. (This works only because this particular system of coordinates gives metric coefficients that are independent of the time coordinate.) This suggests plotting a small region of the r,t plane with vertical lines of constant r, but with the lines of constant t (relative to some arbitrary reference time) slanted so that the vertical height corresponds to the proper time τ along a stationary worldline, as illustrated below.
Notice that a stationary worldline is perpendicular to the t = 0 locus, but for subsequent values of t the lines of constant t are sloped progressively inward. In order to minimize the lapse of the
time coordinate t, a free-falling particle would remain always perpendicular to the lines of constant t. This implies that the local inertial reference frame is accelerating inward, and as a result,
a stationary worldline is constantly accelerating outward relative to the local inertial reference frame. (We're using the term "local inertial reference frame" here to signify, at any given point,
the class of local inertial coordinate systems moving uniformly in a straight line with respect to each other. In general these coordinate systems cannot be extended globally, due to the curvature of
the manifold.)
The effective acceleration of a stationary worldline with respect to the local free-falling inertial reference frame can be deduced based on how rapidly the lines of constant t are slanting inwards.
If we consider two neighboring stationary worldlines, separated by an incremental radial distance dr, the lapse of proper time dτ for a given incremental lapse of coordinate time dt is less for the
inner worldline than for the outer worldline, as illustrated below.
The amount of this difference can be found by differentiating equation (1) with respect to r, which gives
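$$\frac{d}{dr}\!\left(\frac{d\tau}{dt}\right) \;=\; \frac{Gm/(c^2 r^2)}{\sqrt{1 - 2Gm/(c^2 r)}}$$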
The denominator of the right side is virtually 1 for typical distances, so we can take just the numerator, and multiply through by dr to give the change in dτ for a given dt and dr:
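$$\delta(d\tau) \;=\; \frac{Gm}{c^2 r^2}\,dr\,dt$$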
The slope of the locus t = dt is given by dividing this difference by dr, which gives a quantity with units of sec/meter. In order for a worldline at t = dt to be perpendicular to this locus, it must
have the perpendicular slope, which is the same quantity converted from units of sec/meter to units of meter/sec. This is accomplished by multiplying through by c^2, and the result is the incremental
change in velocity for an incremental time dt. Dividing by dt gives the acceleration of the stationary worldline at r_1 relative to the local inertial reference frame
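$$a \;=\; \frac{Gm}{r_1^{\,2}}$$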
With the gravitational constant G = (6.67)10^-11 Nm^2/kg^2 and the Earth's mass m = (5.98)10^24 kg and the Earth's mean radius r = (6.37)10^6 m, we find the outward acceleration of stationary
worldlines at the Earth's surface is approximately 9.8 m/sec^2.
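As a quick check of that arithmetic (a minimal sketch, plain Python):

    # Verify the quoted value: a = G*m/r^2 at the Earth's surface.
    G = 6.67e-11   # N m^2 / kg^2
    m = 5.98e24    # kg
    r = 6.37e6     # m
    print(G * m / r**2)   # -> 9.83 m/sec^2, i.e. approximately 9.8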
The fact that the geodesics of spacetime in a spherically symmetrical gravitational field can be found by minimizing the Schwarzschild coordinate time t can be regarded as a confirmation (and even an
extension) of Fermat's Principle of Least Time, but of course it doesn't apply for arbitrary time coordinates. It works only because the coefficients of the Schwarzschild line element are independent
of this particular time coordinate. For more on this, see Path Lengths and Coordinates.
Incidentally, to construct a simple illustration of an approximate low-speed ballistic trajectory in a uniform gravitational field, let t denote a stationary time coordinate (such as the Schwarzschild coordinate time) in terms of which the rates dτ/dt of proper time with respect to coordinate time at the fixed heights h_0 and h_1 are g_0 and g_1 respectively, and assume the rate varies linearly between these two heights. On a plot of height versus proper time (for a particular trajectory) the lines of constant coordinate time are radial rays through a single point, so the
trajectory is a segment of the circle
A plot of this trajectory for a typical case is shown below.
Explore printable math games, including number puzzles, board games, memory games and more.
For intereactive math games that you can use on your computer or interactive whiteboard, visit our SMART Board Math section and our Promethean Flipchart Math section.
A game for 2-4 players. Lay cards on a mat in ones, tens, hundreds and thousands places. Compare rows of cards and award a point to the player with the highest number in each row. Includes play
mat, cards, score cards, instructions, center sign, and a worksheet for an alternate 2-player activity. CC: Math: 1.NBT.B.3, 2.NBT.A.4, 3.NBT.A.1, 4.NBT.A.2
Set of 25 colorful bingo cards with a fraction theme. Great for the classroom. Includes call list.
This game is similar to memory. Try to match the clue and the answer. CC: Math: 1.NBT.C.6
• Interactive Flipchart game. Enhances basic addition and counting skills. Roll the dice and move along the number line, first player to reach the end wins!
Fun bingo game with 25 cards and call list. Great for all ages.
Common Core: Base Ten (all levels)
Turned Shapes Game: A game to help students learn shapes, regardless of position or orientation in space.
Thirty questions for a game illustrating place value to ten thousands. Children read the number they have and ask who has another number. May use with abcteach's Place Value Ten Thousand Tally
Chart and Game Pieces.
A place value to ten-thousands chart with game pieces. May be used with abcteach's game questions for place value, "I have...Who has..."
Match the math problems (with sums to 15) to the answers to win the game.
"I have three tens... who has $45 dollars in three bills?" Practice combinations of bills (ones, tens, twenties, and fifties) with this all-class math game.
Practice addition and subtraction to (or from) 100 with this simple board game.
"Place a TRIANGLE at 2,4." Place the colorful pictures (included) in this simple (5x5) grid by following the the correct co-ordinates. This grid game is great for developing early map skills and
shape recognition, and for practicing following directions.
"Place a TRIANGLE at B,4." Place the colorful pictures (included) in this simple (5x5) grid by following the the correct co-ordinates. This grid game is great for developing early map skills and
shape recognition, and for practicing following directions.
"I have three quarters... who has 85 cents?" Practice combinations of single coins (quarters, dimes, and nickels) with this all-class math game.
"I have... who has...?" Place value math game. Includes thousands place and numbers up to 9999.
Students write multiplication problems (with sums up to 40). When the answer to the problem is called, they cover the square. A fun variation on a popular favorite.
Who has 2 tens and 5 ones? Develop listening skills, practice numbers 1-99, and practice tens and ones place values with this all-class game.
"I have - Who has?" Practice the numbers 1-99 with this place value game.
"I have - Who has?" Practice the numbers 1-99 with this place value game.
"I have - Who has?" Practice the numbers 1-999 with this place value game. Includes hundreds place and numbers up to 999.
"I have - Who has?" Practice the numbers 1-999 with this place value game. (Includes hundreds place and numbers up to 999).
[member-created with abctools] Practice multiplying by 2 by matching equations with answers with these fun flash cards.
Turkey shapes hold numbers from 1-30, as well as plus, minus, multiplication and equals signs. Great for making a November calendar or for playing math games. Four to a page.
Using teacher-created dice, students use this simple game to practice multiplication with factors between four and nine.
Students determine the pattern, then follow the pattern to complete a grid and play a game.
Make fractions fun with these two math games. Includes game board and scoring sheets.
Students use turkey-shaped cards (numbered 1-20) to make addition and subtraction equations. Instructions included.
Students use star-shaped cards (numbered 1-20) to make addition and subtraction equations.
A fall theme and addition/subtraction review are creatively combined.
High Heels in High School
I've been meaning to share this foldable for a couple weeks now, but I finally decided to do it now because I was just on Pinterest and noticed that someone had pinned a picture that I had posted of
my students using it.
Quadratics are always such a difficult concept for my students. I think part of it is that they "hate graphing" before we even get started and then the fact that there are three different forms of a
quadratic function helps to confuse them even more. I wanted a foldable that would be helpful in organizing all of this information.
We were working on graphing quadratics with standard form, vertex form, and intercept form (which I've also heard called factored form) before Thanksgiving. We worked on this foldable the day we came
back from Thanksgiving which was a nice review, but also a nice lead in for writing quadratic functions.
Here is the closed foldable. It is meant to go in their notebooks vertically like this. Then they can pull out each section as they need it, like so...
For each form, I included how to find the axis of symmetry, the vertex, any other miscellaneous information (x- and y- intercepts) and a sketch of an arbitrary graph with the important information
from that form.
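In case it helps anyone making their own version, here's the quick summary of what goes under the flaps (the wording on my actual foldable may differ a bit):

$$\begin{aligned}
\text{Standard form: } & y = ax^2 + bx + c, && \text{axis of symmetry } x = -\tfrac{b}{2a},\ \text{y-intercept } (0, c)\\
\text{Vertex form: } & y = a(x-h)^2 + k, && \text{vertex } (h, k),\ \text{axis } x = h\\
\text{Intercept form: } & y = a(x-p)(x-q), && \text{x-intercepts } p \text{ and } q,\ \text{axis } x = \tfrac{p+q}{2}
\end{aligned}$$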
And here is the file for the foldable:
The fonts in the pictures are a little bit different because I used some dafont.com fonts which wouldn't work on a computer that doesn't have those fonts downloaded. If you want fun fonts just head
over to dafont.com and be prepared to spend hours downloading fun fonts...or maybe that's just me?
Basically for all of my foldables the "thick" lines are for cutting, the dotted lines are for folding and the "regular" lines are just used to separate sections.
I called it a "springy" foldable when I was naming the file, but then while we were working on it in class I told them to fold the strips like an accordion and I liked that better...so accordion
quadratic foldable it is! Enjoy!!
Soooooo I've been MIA for a couple months. I didn't think it would be so hard to work and continue blogging at the same time....buuuuuut I was WRONG! Even though I haven't been blogging I have been
continuing to read others' blog posts and I've been trying to keep up on twitter most days as well. I have also been creating lots of new resources and activities and foldables all influenced by the
awesome mathtwitterblogosphere. I'm going to try to update within the next week or so with all of the new stuff I've been working on.
I just finished creating a quadratics scavenger hunt and I wanted to post it before I got sidetracked or something.
Basically, I have posted problems all around the room that the students have to walk around and work on. In the bottom right corner of each problem is the answer to another problem. Once they find
the solution to the problem they are working on, they go look for that solution in the little box on the other sheets of paper.
I also created a "work page" to go along with the scavenger hunt so that it would be easy to see if students had the correct answers. I adopted this idea of a work page from Sarah over at Everybody
is a Genius. (She has some REALLY great ideas!!)
I did this same activity with my geometry class a couple weeks ago and it went really really well. I love hearing them discuss math with each other! Hopefully the activity goes well today too!!
UPDATE: Students not only really enjoyed the activity and got a good review from it, but they were also using the foldable we had made earlier in the week! It made me so happy to see them utilizing
their foldables and also happy that they found them useful. One student had been questioning why we were making foldables because he "doesn't like arts and crafts" but today he asked if they were
going to be allowed to use them on the test!!! Finally buy in!
I HATE making lunch in the morning. I also hate making it the night before. Actually I always hate making lunch, but I have to have something to eat. I also feel like there aren't very many options
for lunch if you do make it the day of (or the night before). I mean what are my choices when I have 20 minutes to eat...a sandwich? a salad? BOR-ING!!
One of my favorite time (and effort) savers is to make a giant batch of something, anything, on Sunday and then just bring that everyday for lunch for the week. I absolutely love just being able to
grab a "to-go" container and be on my way. I've also very recently started "prepackaging" portions for my snacks (things like grapes, carrots, etc..) so that I can just grab those too.
Two of our faaaaaaaaavorite recipes are this slow cooker chicken chili recipe and this quinoa and beans recipe. Both found courtesy of Pinterest (btw how did I ever live without Pinterest??) I also
make things like baked ziti or lasagna or something, but we're trying to eat a little more healthy (and pasta everyday for lunch just isn't what my body loves).
This year in my classroom one of my goals is to do less work and to not talk as much. I've mentioned before how I end up exhausted at the end of the day because I'm doing so much all day long. One
way I am trying to do this is to make class more interactive and hands on. So far this year I have already used groups/partners I think more than I did in all of last year! I've also implemented some
whiteboarding activites and my newest adventure is into foldables.
After reading some great advice on foldables from Julie (@jreulbach) from I Speak Math and Sarah (@msrubinmath) from Everybody is a Genius I tried my hand at my very first foldable yesterday
afternoon. I have to say I am pretty pleased with the outcome and I'm hoping it goes over well with my students and that it helps them to remember and understand better.
I started off with brainstorming what I wanted them to know from the foldable. The section we will be starting is on the slope of a line and should be a short review since it is an Algebra 2 class.
Since there are 4 categories slope can be classified into I decided on a 4 section foldable. Also, there were 2 things I wanted them to know about each category (beyond the "name" obviously) so I
made it a 4 section, trifold foldable.
On the outside I put the 4 types of slope...
On the flap underneath that one I am going to have them draw an example of a line with that kind slope.
And finallyyyyyyy the inside inside
The REALLY important part of the inside to me is how to write the equation of a line with zero or undefined slope. I feel like that is one of those things that kids almost always forget so I wanted
to make it a key part of my foldable.
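For reference, here's the gist of what ended up inside (again, my wording on the actual foldable may differ):

$$\begin{aligned}
\text{Positive slope: } & m > 0 \ \text{(line rises left to right)}\\
\text{Negative slope: } & m < 0 \ \text{(line falls left to right)}\\
\text{Zero slope: } & m = 0,\ \text{horizontal line } y = b\\
\text{Undefined slope: } & \text{vertical line } x = a
\end{aligned}$$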
So I'm happy at my first foldable attempt. I feel like the more I create the easier it will be aaaaaaand my mind will just start thinking "in foldables" haha. We'll see :)
So it's week 3 of the new blogger initiation, it feels good to be more than halfway done! This week the "prompt" the really jumped out to me was:
A student comes up to you and says "why do we have to learn this?" (where "this" really means mathematics that goes beyond counting change or calculating a tip). How do you respond? (This prompt was
inspired by Steve Grossman's week one post.) (Alterna-question: You are having a parent-teacher conference and the father says "Well I was never really good at math either..." when talking about his
child. How do you respond?)
I can't even count how many times a week I hear "why do we have to learn this?" or "where do you use this in real life?" I had a real moment of clarity one day last year when a student asked me one
of these two questions and we actually had a REALLY GREAT class discussion about the value of math and how they will be using, if not mathematical concepts then mathematical/logical/rational thinking
in their lives. This was the moment when I actually realized that I could see how math is all around us, and how it is used everywhere, but that my students didn't see that and needed to be shown.
Now, I don't mean needed to be shown like the corny math problems in text books that force "real-life" application on so many mathematical concepts (I can't be the only one that feels a lot of
"real-world" problems in textbooks feel very contrived...right?). I mean really shown in a way they are going to believe and going to accept. I mean I'm a math teacher and I enjoy math, but even I
think problems like this one below just aren't cutting it for the kids. I mean seriously...who cares what the angles in a stone are?
My new response to the question "why do we need to learn this?" is that people won't necessarily use every (or any) mathematical concept later on in their lives or careers, but that EVERYONE needs
to know and be skilled in problem solving. I explain to my students that by learning different parts of math they are
teaching their brains how to think in a certain way and practicing that skill. It doesn't matter what type of job you have or what is going on in your life...problems arise and you have to be adept
at finding solutions to those problems.
Also, new this year (because of all of the wonderful ideas I have gotten through twitter and other math bloggers) I am moving toward more group work and more collaboration between students. When I
was talking to my new students about why we would be doing a lot of group work and why I thought it was important, I explained to them that in the real world when you have a problem usually you
aren't isolated trying to solve that problem and I want to recreate that environment in my classroom.
Now the second half of this prompt is something I feel VERY passionately about. When I hear parents (grandparents, aunts, uncles, brothers, sisters, OTHER TEACHERS!!!) tell a student that it is ok
they aren't good at math because they weren't either I want to scream! This is so detrimental to a child's thinking and does nothing but allow the student to believe they will never be successful and
therefore they give up. I mean seriously, if there was something you were told you would never be good at and would never understand how much effort would you put in? Also, this comment usually comes
from a trusted adult, so the child is likely to accept this as fact.
I VERY much believe that ANYONE is capable of understanding math, so much so that I did an entire "action research" project on it for my Masters. Obviously math is going to come more easily to some
people than to others, but I still believe that it is possible for everyone to understand.
Another thing that I just do not understand is why we have this cultural acceptance of some people not understanding math. It isn't ok or acceptable to be illiterate, but for some reason people have
no problem telling all sorts of people how they aren't literate in math. Like...what?? It seriously just boggles my mind.
As for my response to Mr. "Well I was never really good at math either," I would (and have) explain my thinking about success in math and ask for his support in helping his child be more successful in math. I would also share a personal story about how both my mother and father were not strong math students, but that my stepmother was a math major in college, and how just having that one positive influence regarding math I was able to be successful and enjoy it. I think I would also point out that there isn't some special gene that makes you good at math or not, so it isn't hereditary :)
On a random side note, my brother left for college a couple weeks ago (awww he's all grown up...I remember when he was born!) and my dad, stepmom and sister brought him up to school. This is a
picture of them on the front page of the school website...my poor brother is the one hidden behind the text that reads "An Inspiring Welcome to the Class of 2016" and he is the one that is PART of
the class of 2016. Everyone kept thinking my sister was starting school, but she's only a junior in high school. This was just what I needed to give me a chuckle during a stressful first week back!
I use a clipboard when I walk around and check homework, but who wants to use an ugly brown clipboard?? Last summer I covered two of my clipboards with different scrapbooking paper, but I wanted to
re-cover one of them because I wasn't happy with the way it turned out.
I started by ripping off as much of the old paper as I could and then using some sandpaper to try to make it an even surface again.
Then I went through all of my scrapbooking supplies looking for cute paper for both the front and back of the clip board. My clipboards are a little longer than 12 inches so I have to use two pieces
of paper if I want to cover the entire surface. While going through my scrapbook stuff I found this SUPER cute paper that I NEED to find something to do with!
And I must really REALLY like it because apparently I bought another sheet at some point!
So these are the two color combo options I was debating between. I ended up choosing the pieces on the left.
Once I had my paper selected I traced around the clipboard and cut off the excess.
I decided to do the front and back of the clipboard in reverse pattern.
Now it's time for some Mod Podge!!
Ok so now here is the sad part of my story...my paper kept puckering as I was trying to put it on. I was trying to flatten it out by using the paint brush, but I ended up brushing so hard the color
on the paper started to rub away :(
But I didn't give up! I ripped off the paper and tried again....and again...so then I thought to myself "maybe the paper is too thin and that's why it is puckering" so I chose a thicker piece of
scrapbook paper and tried AGAIN. Lesson learned...thicker worked!!!! I also decided to just paint the front of the clipboard instead of trying to cut paper out to go around the "clip" part of the
clip board.
Aaaaaaaaaaand here is my finished clipboard.
So cute!! And I actually like this paper better than the blue and the green polka dots :) SUCCESS!!!
Ok, so I know it might sound kind of cheesy but I LOVE my department. I couldn't ask for a better group of people to work with or to spend most of my days with.
It makes it really nice to go into work when you have a group of people that you can count on to make your day enjoyable. We aren't just colleagues, we're friends too and how much better can work get
than being able to work with your FRIENDS?!
I also know that I can go to any one of the members of my department for anything. I might be struggling with a student or maybe having trouble helping the kids understand a topic and I can always
count on my department members to be there to help, offer suggestions and guidance.
So for #myfavfriday I am shouting out my department members!!! YOU GUYS ROCK :)
Gabor T. Herman
Position: Distinguished Professor
Campus Affiliation: Graduate Center
Degrees/Diplomas: Ph.D., University of London
Research Interests: Tomography, digital topology, 3-d renderings in medicine
Books in Print
Geometry of Digital Spaces, Birkhäuser Boston, 1998
Discrete Tomography: Foundations, Algorithms, and Applications, Birkhäuser Boston, 1999
3D Imaging in Medicine, 2nd Edition, CRC Press, 2000
Advances in Discrete Tomography and Its Applications, Birkhäuser Boston, 2007
Fundamentals of Computerized Tomography: Image Reconstruction from Projections, 2nd Edition, Springer, 2009
Recent and Current Courses
The Geometry of Digital Spaces, Department of Computer Science, The Graduate Center, City University of New York, Fall 2002
Image Processing, Computer Vision and Visualization, Department of Computer Science, The Graduate Center, City University of New York, Spring 2003
Discrete Tomography, Department of Computer Science, The Graduate Center, City University of New York, Fall 2003
Readings in Computer Science: Algorithms for Image Reconstruction from Projections, Department of Computer Science, The Graduate Center, City University of New York, Spring 2004
Seminar in Image Processing and Computer Vision, Department of Computer Science, The Graduate Center, City University of New York, Spring 2004
Seminar in Image Processing and Computer Vision, Department of Computer Science, The Graduate Center, City University of New York, Spring 2005
Reconstruction from Projections, Department of Computer Science, The Graduate Center, City University of New York, Fall 2005
Seminar in Image Processing and Computer Vision, Department of Computer Science, The Graduate Center, City University of New York, Spring 2006
Discrete Tomography and Its Applications, Department of Computer Science, The Graduate Center, City University of New York, Fall 2006
Seminar in Image Processing and Computation, Department of Computer Science, The Graduate Center, City University of New York, Spring 2007
Statistical Models in Computer Science, Department of Computer Science, The Graduate Center, City University of New York, Fall 2007
Seminar in Image Processing and Computer Vision, Department of Computer Science, The Graduate Center, City University of New York, Spring 2008
Reconstruction from Projections, Department of Computer Science, The Graduate Center, City University of New York, Fall 2008
Seminar in Image Processing and Computer Vision, Department of Computer Science, The Graduate Center, City University of New York, Fall 2009
Inverse Problems in Imaging, Department of Computer Science, The Graduate Center, City University of New York, Spring 2010
Seminar in Image Processing and Computer Vision, Department of Computer Science, The Graduate Center, City University of New York, Fall 2010
Multidimensional Data Structures, Department of Computer Science, The Graduate Center, City University of New York, Spring 2011
Practical Considerations for the Design of Efficient Algorithms, Department of Computer Science, The Graduate Center, City University of New York, Spring 2012
Seminar in Image Processing and Computer Vision, Department of Computer Science, The Graduate Center, City University of New York, Fall 2012
Conferences (since 2010)
Optimization Theory and Related Topics, January 11-14, 2010, Haifa, Israel (Invited Speaker)
Mathematical Problems, Models and Methods in Biomedical Imaging , February 8-12, 2010, Los Angeles, USA (Invited Speaker)
Minisymposium on Computational Methods in Three-Dimensional Microscopy Reconstruction, November 8, 2010, New York, New York, USA (Organizer)
Minisymposium on Computational Methods in Three-Dimensional Microscopy Reconstruction, June 15, 2012, New York, New York, USA (Organizer)
Available Software
SNARK09: A Programming System for the Reconstruction of 2D Images from 1D Projections
Principal Investigator on Current Grant
Computational Methods for Inverting the Soft X-Ray Transform, National Science Foundation, DMS-1114901, 2011-2014
Current Editorial Positions
Editorial Board, Computerized Medical Imaging and Graphics
Editorial Board, Journal of Visual Communication and Image Representation
Scientific Affiliations
Head, Discrete Imaging and Graphics Group
Faculty Member, Institute for Macromolecular Assemblies
Married to Marilyn Kirsch, an artist
The drowned man returns to the shore guided by a little fish
(Copyright 2004, Marilyn Kirsch)
Brief Curriculum Vitae
University of London, B.Sc.(Mathematics), 1963
University of London, M.Sc. (Mathematics), 1964
University of California at Berkeley, M.S. (Electrical Engineering), 1966
University of London, Ph.D. (Mathematics), 1968
Faculty Appointments:
1966-67: Lecturer in Computer Science, Brighton College of Technology, England
1967-69: Instructor, IBM (UK) Ltd.
1969-70: Assistant Professor, Department of Computer Science, SUNY at Buffalo
1970-74: Associate Professor, Department of Computer Science, SUNY at Buffalo
1974-81: Professor, Department of Computer Science, SUNY at Buffalo
1981-00: Professor, Department of Radiology, University of Pennsylvania, Philadelphia
2000-01: Professor, Department of Computer and Information Sciences, Temple University, Philadelphia
2001- : Distinguished Professor, Department of Computer Science, The Graduate Center, CUNY, New York
1988: Erskine Fellow - University of Canterbury, New Zealand
1989: Honorary doctorate - Linköping University, Sweden
1989: Honorary member - American Society of Neuro-Imaging
1991: Fellow - Institute of Electrical and Electronics Engineers
1992: Editor-in-Chief, IEEE Transactions on Medical Imaging
1993: Fellow - British Computer Society
1996: Fellow - American Institute for Medical and Biological Engineering
1998: Honorary doctorate - József Attila University, Szeged, Hungary
2000: Honorary doctorate - University of Haifa, Israel
2001: Hewlett Packard Visiting Research Professor - Mathematical Sciences Research Institute, Berkeley, California
Agar, U., Bhachech, M., Herman, G.: Determining inverse functions of output devices having limited bit depth, U.S. Patent Number 7483171, 2009
Articles that have been cited in the literature 100 or more times (as of June 9, 2012):
Gordon, R., Bender, R., Herman, G.T.: Algebraic reconstruction techniques (ART) for three-dimensional electron microscopy and x-ray photography, Journal of Theoretical Biology 29:471-482, 1970
Herman, G.T., Lent, A., Rowland, S.: ART: Mathematics and applications (a report on the mathematical foundations and on the applicability to real data of the algebraic reconstruction techniques),
Journal of Theoretical Biology 42:1-32, 1973
Greenleaf, J.F., Johnson, S.A., Lee, S.L., Wood, E.H., Herman, G.T.: Algebraic reconstruction of spatial distributions of acoustic absorption within tissue from their two-dimensional acoustic
projections, International Symposium on Acoustical Holography and Imaging 5:591-603, 1974
Gordon, R., Herman, G.T.: 3-dimensional reconstruction from projections: A review of algorithms, International Review of Cytology 38, Bourne, G.H. (ed.), Academic Press, 111-151, 1974
Gordon, R., Herman, G.T., Johnson, S.A.: Image reconstruction from projections, Scientific American 233(4):56-68, 1975
Herman, G.T., Lent, A.: Iterative reconstruction algorithms, Computers in Biology and Medicine 6:273-294, 1976
Herman, G.T., Liu, H.K.: Display of three-dimensional information in computed tomography, Journal of Computer Assisted Tomography 1:155-160, 1977
Herman, G.T., Naparstek, A.: Fast image reconstruction based on a Radon inversion formula appropriate for rapidly collected data, SIAM Journal on Applied Mathematics 33:511-533, 1977
Herman, G.T.: Correction for beam hardening in computed tomography, Physics in Medicine and Biology 24:81-106, 1979
Herman, G.T., Liu, H.K.: Three-dimensional display of human organs from computed tomograms, Computer Graphics and Image Processing 9:1-21, 1979
Artzy, E., Frieder, G., Herman, G.T.: The theory, design, implementation and evaluation of a three-dimensional surface detection algorithm, Computer Graphics and Image Processing 15:1-24, 1981
Eggermont, P.P.B., Herman, G.T., Lent, A.: Iterative algorithms for large partitioned linear systems, with applications to image reconstruction, Linear Algebra and Its Applications 40:37-67, 1981
Udupa, J.K., Srihari, S.N., Herman, G.T.: Boundary detection in multidimensions, IEEE Transactions on Pattern Analysis and Machine Intelligence 4:41-50, 1982
Hemmy, D.C., David, D.J., Herman, G.T.: Three-dimensional reconstruction of craniofacial deformity using computed tomography, Neurosurgery 13:534-541, 1983
Chen, L.S., Herman, G.T., Reynolds, R.A., Udupa, J.K.: Surface shading in the cuberille environment, IEEE Computer Graphics and Applications 5(12):33-43, 1985 (Erratum appeared in 6(2):67-69, 1986)
Raichlen, J.S., Trivedi, S.S., Herman, G.T., St. John Sutton, M.G., Reichek, N.: Dynamic three-dimensional reconstruction of the left ventricle from two-dimensional echocardiograms, Journal of the
American College of Cardiology 8:364-370, 1986
Levitan, E., Herman, G.T.: A maximum a posteriori probability expectation maximization algorithm for image reconstruction in emission tomography, IEEE Transactions on Medical Imaging MI-6:185-192, 1987
Gur, R.C., Mozley, P.D., Resnick, S.M., Gottlieb, G.E., Kohn, M., Zimmerman, R., Herman, G., Atlas, S., Grossman, R., Berretta, D., Erwin, R., Gur, R.E.: Gender differences in age effect on brain
atrophy measured by magnetic resonance imaging, Proceedings of the National Academy of Sciences 88:2845-2849, 1991
Kohn, M.I., Tanna, N.K., Herman, G.T., Resnick, S.M., Mozley, P.D., Gur, R.E., Alavi, A., Zimmerman, R.A., Gur, R.C.: Analysis of brain and cerebrospinal fluid volumes with MR imaging. Part 1:
Methods, reliability, and validation, Radiology 178:115-122, 1991
Gur, R.E., Mozley, P.D., Resnick, S.M., Shtasel, D., Kohn, M., Zimmerman, R., Herman, G., Atlas, S., Grossman, R., Erwin, R., Gur, R.C.: Magnetic resonance imaging in schizophrenia: I. Volumetric
analysis of brain and cerebrospinal fluid, Archives of General Psychiatry 48:407-412, 1991
Herman, G.T., Zheng, J., Bucholtz, C.A.: Shape-based interpolation, IEEE Computer Graphics and Applications 12(3):69-79, 1992
Levkowitz, H., Herman, G.T.: Color scales for image data, IEEE Computer Graphics and Applications 12(1):72-80, 1992
Levkowitz, H., Herman, G.T.: GLHS: A generalized lightness, hue, and saturation color model, CVPIG: Graphical Models and Image Processing 55:271-285, 1993
Herman, G.T., Meyer, L.B.: Algebraic reconstruction techniques can be made computationally efficient, IEEE Transactions on Medical Imaging 12:600-609, 1993
Marabini, R., Herman, G.T., Carazo, J.-M.: 3D reconstruction in electron microscopy using ART with smooth spherically symmetric volume elements (blobs), Ultramicroscopy 72:53-65, 1998
Scheres, S.H.W., Gao, H., Valle, M., Herman, G.T., Eggermont, P.P.B., Frank, J., Carazo, J.-M.: Disentangling conformational states of macromolecules in 3D-EM through likelihood optimization, Nature
Methods 4:27-29, 2007
Articles that have been cited in the literature 20 or more times but less than 100 times (as of June 9, 2012):
Herman, G.T.: Computing ability of a developmental model for filamentous organisms, Journal of Theoretical Biology 25:421-435, 1969
Herman, G.T.: The uniform halting problem for generalized one-state Turing machines, Information and Control 15:353-367, 1969
Baker, R.W., Herman, G.T.: CELIA - a cellular linear iterative array simulator, Proceedings of the Fourth Annual Conference on Applications of Simulation, Winter Simulation Conference, 64-73, 1970
Herman, G.T.: Role of environment in developmental models, Journal of Theoretical Biology 29:329-341, 1970
Frieder, G., Herman, G.T.: Resolution in reconstructing objects from electron micrographs, Journal of Theoretical Biology 33:189-211, 1971
Herman, G.T., Rowland, S.: Resolution in ART: An experimental investigation of the resolving power of an algebraic picture reconstruction technique, Journal of Theoretical Biology 33:213-223, 1971
Gordon, R., Herman, G.T.: Reconstruction of pictures from their projections, Communications of the Association for Computing Machinery 14:759-768, 1971
Herman, G.T.: Models for cellular interactions in development without polarity of individual cells I. General description and the problem of universal computing ability, International Journal of
Systems Science 2:271-289, 1971
Gaarder, N.T. Herman, G.T.: Algorithms for reproducing objects from their X-rays, Computer Graphics and Image Processing 1:97-106, 1972
Herman, G.T.: Two direct methods for reconstructing pictures from their projections: A comparative study, Computer Graphics and Image Processing 1:123-144, 1972
Baker, R.W., Herman, G.T.: Simulation of organisms using a developmental model: 1. Basic description, International Journal of Biomedical Computing 3:201-215, 1972
Herman, G.T.: Models for cellular interactions in development without polarity of individual cells II. Problems of synchronization and regulation, International Journal of Systems Science 3:149-175, 1972
Herman, G.T., Rowland, S.: Three methods for reconstructing objects from x-rays: A comparative study, Computer Graphics and Image Processing 2:151-178, 1973
Herman, G.T.: A biologically motivated extension of ALGOL-like languages, Information and Control 22:487-502, 1973
Herman, G.T., Lee, K.P., van Leeuwen, J., Rozenberg, G.: Characterization of unary developmental languages, Discrete Mathematics 6:235-247, 1973
Herman, G.T., Liu, W.H.: The daughter of Celia, the French flag, and the firing squad (progress report on a cellular linear iterative-array simulator), Simulation 21:33-41, 1973
Robb, R.A., Greenleaf, J.F., Ritman, E.L., Johnson, S.A., Sjostrand, J.D., Herman, G.T., Wood, E.H.: Three-dimensional visualization of the intact thorax and contents: A technique for cross-sectional
reconstruction from multiplanar x-ray views, Computers and Biomedical Research 7:395-419, 1974
Johnson, S.A., Robb, R.A., Greenleaf, J.F., Ritman, E.L., Lee, S.L., Herman, G.T., Wood, E.H.: The problem of accurate measurement of left ventricular shape and dimensions from multiplane
roentgenographic data, European Journal of Cardiology 1:241-258, 1974
Herman, G.T.: Closure properties of some families of languages associated with biological systems, Information and Control 24:101-121, 1974
Herman, G.T., Lindenmayer, A., Rozenberg, G.: Description of developmental languages using recurrence systems, Mathematical Systems Theory 8:316-341, 1974
Herman, G.T., Liu, W.H., Rowland, S., Walker, A.: Synchronization of growing cellular arrays, Information and Control 25:103-122, 1974
Herman, G.T.: Reconstruction of binary patterns from a few projections, International Computing Symposium 1973, Günther, A., Levrat, B., Lipps, H. (eds.), North-Holland Publ. Co., 371-378, 1974
Herman, G.T.: A relaxation method for reconstructing objects from noisy x-rays, Mathematical Programming 8:1-19, 1975
Herman, G.T., Walker, A.: Context-free languages in biological systems, International Journal of Computer Mathematics 4:369-391, 1975
Herman, G.T., Lakshminarayanan, A.V., Rowland, S.W.: The reconstruction of objects from shadowgraphs with high contrasts, Pattern Recognition 7:157-165, 1975
Herman, G.T., Lakshminarayanan, A.V., Naparstek, A.: Convolution reconstruction techniques for divergent beams, Computers in Biology and Medicine 6:259-271, 1976
Herman, G.T., Lent, A.: A computer implementation of a Bayesian analysis of image reconstruction, Information and Control 31:364-384, 1976
Herman, G.T., Lent, A.: Quadratic optimization for image reconstruction, I, Computer Graphics and Image Processing 5:319-332, 1976
Herman, G.T., Lakshminarayanan, A.V., Naparstek, A., Ritman, E.L., Robb, R.A., Wood, E.H.: Rapid computerized tomography, Medical Data Processing, Laudet, M., Anderson, J., Begon, F. (eds.), Taylor
and Francis Ltd., 581-598, 1976
Herman, G.T., Lent, A., Lutz, P.H.: Relaxation methods for image reconstruction, Communications of the Association for Computing Machinery 21:152-158, 1978
Herman, G.T., Liu, H.K.: Dynamic boundary surface detection, Computer Graphics and Image Processing 7:130-138, 1978
Herman, G.T., Lent, A.: A family of iterative quadratic optimization algorithms for pairs of inequalities, with applications in diagnostic radiology, Mathematical Programming Study 9:15-29, 1978
Herman, G.T., Hurwitz, H., Lent, A., Lung, H.P.: On the Bayesian approach to image reconstruction, Information and Control 42:60-71, 1979
Artzy, E., Elfving, T., Herman, G.T.: Quadratic optimization for image reconstruction, II, Computer Graphics and Image Processing 11:242-261, 1979
Herman, G.T., Rowland, S., Yau, M.: A comparative study of the use of linear and modified cubic spline interpolation for image reconstruction, IEEE Transactions on Nuclear Science 26:2879-2894, 1979
Herman, G.T.: Demonstration of beam hardening correction in computed tomography of the head, Journal of Computer Assisted Tomography 3:373-378, 1979
Herman, G.T., Coin, C.G.: The use of three-dimensional computer display in the study of disk disease, Journal of Computer Assisted Tomography 4:564-567, 1980
Herman, G.T., Lent, A., Hurwitz, H.: A storage-efficient algorithm for finding the regularized solution of a large, inconsistent system of equations, IMA Journal of Applied Mathematics 25:361-366, 1980
Altschuler, M.D., Censor, Y., Eggermont, P.P.B., Herman, G.T., Kuo, Y.H., Lewitt, R.M., McKay, M., Tuy, H.K., Udupa, J.K., Yau, M.M.: Demonstration of a software package for the reconstruction of the
dynamically changing structure of the human heart from cone-beam x-ray projections, Journal of Medical Systems 4:289-304, 1980
Chang, T., Herman, G.T.: A scientific study of filter selectionfor a fan-beam convolution reconstruction algorithm, SIAM Journal on Applied Mathematics 39:83-105, 1980
Herman, G.T., Lewitt, R.M.: Evaluation of a preprocessing algorithm for truncated CT projections, Journal of Computer Assisted Tomography 5:127-135, 1981
Herman, G.T., Webster, D.: Surfaces of organs in discrete three-dimensional space, Mathematical Aspects of Computerized Tomography, Herman, G.T., Natterer, F. (eds.), Springer-Verlag, Berlin,
Germany, 204-224, 1981
Herman, G.T., Udupa, J.K.: Display of three-dimensional discrete surfaces, Proceedings of the Society of Photo-Optical Instrumentation Engineers 283:90-97, 1981
Herman, G.T., Reynolds, R.A., Udupa, J.K.: Computer techniques for the representation of three-dimensional data on a two-dimensional display, Proceedings of the Society of Photo-Optical
Instrumentation Engineers 367:3-14, 1982
Herman, G.T., Udupa, J.K., Kramer, D.M., Lauterbur, P.C., Rudin, A.M., Schneider, J.M.: The three-dimensional display of nuclear magnetic resonance images, Optical Engineering 21:923-926, 1982
Axel, L., Herman, G.T., Udupa, J.K., Bottomley, P.A., Edelstein, W.A.: Three-dimensional display of nuclear magnetic resonance (NMR) cardiovascular images, Journal of Computer Assisted Tomography
7:172-174, 1983
Herman, G.T., Webster, D.: A topological proof of a surface tracking algorithm, Computer Vision, Graphics, and Image Processing 23:162-177, 1983
Herman, G.T., Trivedi, S.S.: A comparative study of two postreconstruction beam hardening correction methods, IEEE Transactions on Medical Imaging 2:128-135, 1983
Herman, G.T., Udupa, J.K.: Display of 3-D digital images: Computational foundations and medical applications, IEEE Computer Graphics and Applications 3(5):39-46, 1983
Chen, L.S., Herman, G.T., Meyer, C.R., Reynolds, R.A., Udupa, J.K.: 3D83: An easy-to-use software package for three-dimensional display from computed tomograms, Proceedings of the IEEE Computer
Society Joint International Symposium on Medical Images and Icons, IEEE Computer Society Press, 309-316, 1984
Frieder, G., Herman, G.T., Meyer, C., Udupa, J.: Large software problems for small computers: An example from medical imaging, IEEE Software 2(5):37-47, 1985
Herman, G.T., Vose, W.F., Gomori, J.M., Gefter, W.B.: Stereoscopic computed three-dimensional surface displays, RadioGraphics 5:825-852, 1985
Censor, Y., Elfving, T., Herman, G.T.: A method of iterative data refinement and its applications, Mathematical Methods in the Applied Sciences 7:108-123, 1985
Burk, Jr. D.L., Mears, D.C., Cooperstein, L.A., Herman, G.T., Udupa, J.K.: Acetabular fractures: Three-dimensional computed tomographic imaging and interactive surgical planning, Journal of Computed
Tomography 10:1-10, 1986
DeMarino, D.P., Steiner, E., Poster, R. Katzberg, R.W., Hengerer, A.S., Herman, G.T., Wayne, W.S., Prosser, D.C.: Three-dimensional computed tomography in maxillofacial trauma, Archives of
Otolaryngology - Head and Neck Surgery 112:146-150, 1986
Censor, Y., Herman, G.T.: On some optimization techniques in image reconstruction from projections, Applied Numerical Mathematics 3:365-391, 1987
Edholm, P.R., Herman, G.T.: Linograms in image reconstruction from projections, IEEE Transactions on Medical Imaging 6:301-307, 1987
Herman, G.T.: Three-dimensional imaging on a CT or MR scanner, Journal of Computer Assisted Tomography 12:450-458, 1988
Edholm, P.R., Herman, G.T., Roberts, D.A.: Image reconstruction from linograms - Implementation and evaluation, IEEE Transactions on Medical Imaging 7:239-246, 1988
Herman, G.T., Yeung, K.T.D.: Evaluators of image reconstruction algorithms, International Journal of Imaging Systems and Technology 1:187-195, 1989
Toennies, K.D., Udupa, J.K., Herman, G.T., Wornom, I.L., Buchman, S.R.: Registration of three dimensional objects and surfaces, IEEE Computer Graphics and Applications 10(3):52-62, 1990
Herman, G.T.: On topology as applied to image analysis, Computer Vision, Graphics, and Image Processing 52: 409-415, 1990
Censor, Y., De Pierro, A.N., Elfving, T., Herman, G.T., Iusem, A.N.: On iterative methods for linearly constrained entropy maximization, Numerical Analysis and Mathematical Modelling, Waculicz, A.
(ed.), Banach Center, Warsaw, 145–163, 1990
Herman, G.T., Odhner, D., Toennies, K.D., Zenios, S.A.: A parallelized algorithm for image reconstruction from noisy projections, Large Scale Numerical Optimization, Coleman, T.F., Li Y.F. (eds.),
Society of Industrial and Applied Mathematics, Philadelphia, 3-21, 1990
Herman, G.T., Odhner, D.: Performance evaluation of an iterative image reconstruction algorithm for positron emission tomography, IEEE Transactions on Medical Imaging 10:336-346, 1991
Herman, G.T.: Discrete multidimensional Jordan surfaces, CVGIP: Graphical Models and Image Processing 54:507-515, 1992
Herman, G.T., De Pierro, A.R., Gai, N.: On methods for maximum a posteriori image reconstruction with a normal prior, Journal of Visual Communication and Image Representation 3:316-324, 1992
Herman, G.T., Yeung, K.T.D.: On piecewise-linear classification, IEEE Transactions on Pattern Analysis and Machine Intelligence 14:782-786, 1992
Herman, G.T.: Oriented surfaces in digital spaces, CVGIP: Graphical Models and Image Processing 55:381-396, 1993
Furuie, S.S., Herman, G.T., Narayan, T.K., Kinahan, P., Karp, J.S., Lewitt, R.M., Matej, S.: A methodology for testing statistically significant differences between fully 3-D PET reconstruction
algorithms, Physics in Medicine and Biology 39:341-354, 1994
Matej, S., Herman, G.T., Narayan, T.K., Furuie, S.S., Lewitt, R.M., Kinahan, P.: Evaluation of task-oriented performance of several fully 3-D PET reconstruction algorithms, Physics in Medicine and
Biology 39:355-367, 1994
Mozley, P.D., Gur, R.E., Resnick, S.M., Stasel, D.L., Richards, J., Kohn, M., Grossman, R., Herman, G., Gur, R.C.: Magnetic resonance imaging in schizophrenia - Relationship with clinical measures,
Schizophrenia Research 12:195-203, 1994
Kinahan, P.E., Matej, S., Karp, J.S., Herman, G.T., Lewitt, R.M.: A comparison of transform and iterative reconstruction techniques for a volume-imaging PET scanner with a large axial acceptance
angle, IEEE Transactions on Nuclear Science 42:2281-2287, 1995
Levitan, E., Chan, M., Herman, G.T.: Image-modeling Gibbs priors, Graphical Models and Image Processing, 57:117-130, 1995
Matej, S., Furuie, S.S., Herman, G.T.: Relevance of statistically significant differences between reconstruction algorithms, IEEE Transactions on Image Processing 5:554-556, 1996
Browne, J.A., Herman, G.T.: Computerized evaluation of image reconstruction algorithms, International Journal of Imaging Systems and Technology, 7:256-267, 1996
Marabini, R., Rietzel, E., Schroeder, R., Herman, G.T., Carazo, J.M.: Three-dimensional reconstruction from reduced sets of very noisy images acquired following a single-axis tilt schema: Application
of a new three-dimensional reconstruction algorithm and objective comparison with weighted backprojection, Journal of Structural Biology 120:363-371, 1997
Aharoni, R., Herman, G.T., Kuba, A.: Binary vectors partially determined by linear equation systems. Discrete Mathematics 171:1-16, 1997
Chan, M.T., Herman, G.T., Levitan, E.: Bayesian image reconstruction using image-modeling Gibbs priors, International Journal of Imaging Systems and Technology 9:85-98, 1998
Herman, G.T.: Algebraic reconstruction techniques in medical imaging, Medical Imaging Systems Techniques and Applications, Computational Techniques, Leondes, C.T. (ed.), CRC Press, 1-42, 1998
Matej, S., Herman, G.T., Vardi, A.: Binary tomography on the hexagonal grid using Gibbs priors, International Journal of Imaging Systems and Technology 9:126-131, 1998
Maki, D.D., Birnbaum, B.A., Chakraborty, D.P., Jacobs, J.E., Carvalho, B.M., Herman, G.T.: Renal cyst pseudo-enhancement: Beam hardening effects on CT numbers, Radiology 213:468-472, 1999
Kuba, A., Herman, G.T.: Discrete tomography: A historical overview, Discrete Tomography: Foundations, Algorithms, and Applications, Herman, G.T., Kuba, A. (eds.), Birkhäuser Boston, 3-34, 1999
Matej, S., Vardi, A., Herman, G.T., Vardi, E.: Binary tomography using Gibbs priors, Discrete Tomography: Foundations, Algorithms, and Applications, Herman, G.T., Kuba, A. (eds.), Birkhäuser Boston,
191-212, 1999
Carvalho, B.M, Gau, C.J., Herman, G.T., Kong, T.Y.: Algorithms for fuzzy segmentation, Pattern Analysis and Applications 2:73-81, 1999
Obi, T., Matej, S., Lewitt, R.M., Herman, G.T.: 2.5-D simultaneous multislice reconstruction by series expansion methods from Fourier-rebinned PET data, IEEE Transactions on Medical Imaging
19:474-484, 2000
Herman, G.T., Carvalho, B.M.: Multiseeded segmentation using fuzzy connectedness, IEEE Transactions on Pattern Analysis and Machine Intelligence 23:460-474, 2001
Sorzano, C.O.S., Marabini, R., Boisset, N., Rietzel, E., Schroeder, R., Herman, G.T., Carazo, J.M.: The effect of overabundant projection directions on 3D reconstruction algorithms, Journal of
Structural Biology 133:108-118, 2001
Vardi, E., Herman, G.T., Kong, T.Y.: Speeding up stochastic reconstructions of binary images from limited projection directions, Linear Algebra and Its Applications 339:75-89, 2001
Censor, Y., Elfving, T., Herman, G.T.: Averaging strings of sequential iterations for convex feasibility problems, Inherently Parallel Algorithms in Feasibility and Optimization and Their
Applications, Butnariu, D., Censor, Y., S. Reich, S. (Eds.), Elsevier Science Publishers, 101-113, 2001
Herman, G.T., Kuba, A.: Discrete tomography in medical imaging, Proceedings of the IEEE 91:1612-1626, 2003
Marabini, R., Sorzano, C.O.S., Matej, S., Fernández, J.J., Carazo, J.M., Herman, G.T.: 3D reconstruction of 2D crystals in real space, IEEE Transactions on Image Processing 13:549-561, 2004
Scheres, S.H.W., Valle, M., Nuñez, R., Sorzano, C.O.S., Marabini, R., Herman, G.T., Carazo, J.-M.: Maximum-likelihood multi-reference refinement for electron microscopy images, Journal of Molecular
Biology 348:139-149, 2005
Carvalho, B.M., Herman, G.T., Kong, T.Y.: Simultaneous fuzzy segmentation of multiple objects, Discrete Applied Mathematics 151:55-77, 2005
Alpers, A., Poulsen, H.F., Knudsen, E., Herman, G.T.: A discrete tomography algorithm for improving the quality of three-dimensional X-ray diffraction grain maps, Journal of Applied Crystallography
39:582-588, 2006
Butnariu, D., Davidi, R., Herman, G.T., Kazantsev, I.G.: Stable convergence behavior under summable perturbations of a class of projection methods for convex feasibility and optimization problems,
IEEE Journal of Selected Topics in Signal Processing 1:540-547, 2007
Herman, G.T., Davidi, R.: Image reconstruction from a small number of projections, Inverse Problems 24:045011, 2008
Censor, Y., Elfving, T., Herman, G.T., Nikazad, T.: On diagonally-relaxed orthogonal projection methods, SIAM Journal on Scientific Computing 30:473-504, 2008
Other archival publications (since 2002):
Udupa, J.K., Herman, G.T.: Medical image reconstruction, processing, visualization and analysis: The MIPG perspective, IEEE Transactions on Medical Imaging 21:281-295, 2002
Censor, Y., Herman, G.T.: Block-iterative algorithms with underrelaxed Bregman projections, SIAM Journal on Optimization 13:283-297, 2002
Zubelli, J.P., Marabini, R., Sorzano, C.O.S., Herman, G.T.: Three-Dimensional reconstruction by Chahine's method from electron microscopic projections corrupted by instrumental aberration, Inverse
Problems 19:933-949, 2003
Sorzano, C.O.S., Marabini, R., Herman, G.T., Censor, Y., Carazo, J.M.: Transfer function restoration in 3D electron microscopy via iterative data refinement, Physics in Medicine and Biology
49:509-522, 2004
Fourey, S., Kong, T.Y., Herman, G.T.: Generic axiomatized digital surface structures, Discrete Applied Mathematics 139:65-93, 2004
Garduño, E., Herman, G.T.: Optimization of basis functions for both reconstruction and visualization, Discrete Applied Mathematics 139:95-111, 2004
Liao, H.Y., Herman, G.T.: Automated estimation of Gibbs priors to be used in binary tomography, Discrete Applied Mathematics 139:149-170, 2004
Liao, H.Y., Herman, G.T.: Discrete tomography with a very few views, using Gibbs priors and a Marginal Posterior Mode approach, Electronic Notes in Discrete Mathematics 20:399-418, 2005
Alpers, A., Knudsen, E., Poulsen, H.F., Herman, G.T.: Resolving ambiguities in reconstructed grain maps using discrete tomography, Electronic Notes in Discrete Mathematics 20:419-437, 2005
Rodek, L., Knudsen, E., Poulsen, H.F., Herman, G.T.: Discrete tomographic reconstruction of 2D polycrystal orientation maps from X-ray diffraction projections using Gibbs priors, Electronic Notes in
Discrete Mathematics 20:439-453, 2005
Liao, H.Y., Herman, G.T.: A coordinate ascent approach to tomographic reconstruction of label images from a few projections, Discrete Applied Mathematics 151:184-197, 2005
Sorzano, C.O.S., Marabini, R., Herman, G.T., Carazo, J.-M.: Multiobjective algorithm parameter optimization using multivariate statistics in three-dimensional electron microscopy reconstruction,
Pattern Recognition 38:2587-2601, 2005
Garduño, E., Herman, G.T.: Implicit surface visualization of reconstructed biological molecules, Theoretical Computer Science 346:281-299, 2005
Liao, H.Y., Herman, G.T.: A method for reconstructing label images from a few projections, as motivated by electron microscopy, Annals of Operations Research 148:117-132, 2006
Fu, X., Knudsen, E., Poulsen, H.F., Herman, G.T., Carvalho, B.M., Liao, H.Y.: Optimized algebraic reconstruction technique for generation of grain maps based on three-dimensional X-ray diffraction
(3DXRD), Optical Engineering 45:116501, 2006
Carvalho, B.M., Herman, G.T.: Low-dose, large-angled cone-beam helical CT data reconstruction using algebraic reconstruction techniques, Image and Vision Computing 25:78-94, 2007
Rodek, L., Poulsen, H.F., Knudsen, E., Herman, G.T.: A stochastic algorithm for reconstruction of grain maps of moderately deformed specimens based on X-ray diffraction, Journal of Applied
Crystallography 40:313-321, 2007
Herman, G.T.: Boundaries in digital spaces, Applied General Topology 8:93-149, 2007
Sorzano, C.O.S., Velázquez-Muriel, J.A., Marabini, R., Herman, G.T., Carazo, J.M.: Volumetric restrictions in single particle 3DEM reconstruction, Pattern Recognition 41:616-626, 2008
Herman, G.T., Chen, W.: A fast algorithm for solving a linear feasibility problem with application to intensity-modulated radiation therapy, Linear Algebra and Its Applications 428:1207-1217, 2008
Herman, G.T., Kalinowski, M.: Classification of heterogeneous electron microscopic projections into homogeneous subsets, Ultramicroscopy 108:327-338, 2008
Garduño, E., Herman, G.T.: Parallel fuzzy segmentation of multiple objects, International Journal of Imaging Systems and Technology 18:336-344, 2008
Herman, G.T.: A note on exact image reconstruction from a limited number of projections, Journal of Visual Communication and Image Representation 20:65-67, 2009
Kulshreshth, A.K., Alpers, A., Herman, G.T., Knudsen, E., Rodek, L., Poulsen, H.F.: A greedy method for reconstructing polycrystals from three-dimensional x-ray diffraction data, Inverse Problems and
Imaging 3:69-85, 2009
Davidi, R., Herman, G.T., Censor, Y.: Perturbation-resilient block-iterative projection methods with applications to image reconstruction from projections, International Transactions in Operational
Research 16:505-524, 2009
Censor, Y., Herman, G.T., Jiang, M.: A note on the behavior of the randomized Kaczmarz algorithm of Strohmer and Vershynin, Journal of Fourier Analysis and Applications 15:431-436, 2009
Chen, W., Herman, G.T.: Efficient controls for finitely convergent sequential algorithms, ACM Transactions on Mathematical Software 37:14, 2010
Kazantsev, I.G., Klukowska, J., Herman, G.T, Cernetic, L.: Fully three-dimensional defocus-gradient corrected backprojection in cryoelectron microscopy, Ultramicroscopy 110:1128-1142, 2010
Censor, Y., Davidi, R., Herman, G.T.: Perturbation resilience and superiorization of iterative algorithms, Inverse Problems 26:065008, 2010
Alpers, A., Herman, G.T., Poulsen, H.F., Schmidt, S.: Phase retrieval for superposed signals from multiple binary objects, Journal of the Optical Society of America A 27:1927-1937, 2010
Chen, W., Craft, D., Madden, T.M., Zhang, K., Kooy, H.M., Herman, G.T.: A fast optimization algorithm for multi-criteria intensity modulated proton therapy planning, Medical Physics 37:4938-4945, 2010
Garduño, E., Davidi, R., Herman, G.T.: Reconstruction from a few projections by ℓ[1]-minimization of the Haar transform, Inverse Problems 27:055006, 2011
Nikazad, T., Davidi, R., Herman, G.T.: Accelerated perturbation-resilient block-iterative projection methods with application to image reconstruction, Inverse Problems 28:035005, 2012
Censor, Y., Chen, W., Combettes, P.L., Davidi, R., Herman, G.T.: On the effectiveness of projection methods for convex feasibility problems with linear inequality constraints, Computational
Optimization and Applications 51:1065-1088, 2012
Herman, G.T., Kong, T.Y., Oliviera, L.: Tree representation of digital picture embeddings, Journal of Visual Communication and Image Representation 23:883-891, 2012
Surprise, AZ Prealgebra Tutor
Find a Surprise, AZ Prealgebra Tutor
...I have been very successful at accommodating diverse student needs by facilitating all styles of learners offering individualized and extracurricular support and integrating effective
materials. I speak both Romanian and English. I look forward to hearing from you.With 28 years of hands-on expe...
3 Subjects: including prealgebra, geometry, algebra 1
...This has been my mastery of this method training. As a teacher of 15 years in the classroom, I have worked with many "At Risk" students. The primary problem for students such as these is that they are often disorganized and lack the study skills and discipline needed to learn independently.
107 Subjects: including prealgebra, Spanish, English, reading
...The ACT English section is a fast-paced evaluation of student's knowledge of specific grammar rules and usage. I tutor with a content-based approach, empowering students by teaching them the
specific grammar rules they need to know in order to do well on this section of the ACT. I provide my st...
33 Subjects: including prealgebra, reading, Spanish, English
...I am well versed in a variety of methods as everyone learns differently. I do have some patience but I don't feel I have enough patience to help "special needs" students or English as a Second
Language students. I am also not bilingual enough to help ESL students.
9 Subjects: including prealgebra, reading, English, writing
...I believe that all children are capable of succeeding, but I feel that not all students receive what they need to achieve that success in the classroom. I look forward to working with learners
in a one to one relationship to help those learners achieve a successful learning experience. The sayi...
12 Subjects: including prealgebra, reading, English, writing
Face Recognition using Eigenfaces.
Haripriya Ganta and Pinky Tejwani.
ECE 847 Project Report.
Fall 2004, Clemson University.
We have implemented an efficient system to recognize faces from images with some near real-time variations. Our approach essentially was to implement and verify the algorithm Eigenfaces for Recognition [1], which solves the recognition problem for 2-D images of faces using principal component analysis. The face images are projected onto a face space (feature space) which best captures the variation among the known face images. The face space is defined by the ‘eigenfaces’, which are the eigenvectors of the set of faces. These eigenfaces do not necessarily correspond to distinct perceived features such as ears, eyes, and noses. The projection of a new image onto this feature space is then compared with the stored projections of the training set to identify the person. Further, the algorithm is extended to recognize the identity and gender of a person under different orientations and certain variations such as scaling.
Face recognition can be applied to a wide variety of problems such as image and film processing, human-computer interaction, and criminal identification. This has motivated researchers to develop computational models to identify faces that are relatively simple and easy to implement. The model developed in [1] is simple, fast, and accurate in constrained environments. Our goal is to implement the model for a particular face and distinguish it from a large number of stored faces, with some real-time variations as well.
The scheme is based on an information theory approach that decomposes face
images into a small set of characteristic feature images called ‘eigenfaces’, which are
actually the principal components of the initial training set of face images. Recognition is
performed by projecting a new image into the subspace spanned by the eigenfaces (‘face
space’) and then classifying the face by comparing its position in the face space with the
positions of the known individuals.
Recognition under widely varying conditions (frontal view, a 45° view, scaled frontal view, subjects with spectacles, etc.) is attempted, while the training data set covers only a limited set of views. Further, this algorithm can be extended to recognize the gender of a person or to interpret a person's facial expression. The algorithm can also model real-time varying lighting conditions, but this is outside the scope of the current implementation.
Eigenface Approach
The information theory approach of encoding and decoding face images extracts the relevant information in a face image, encodes it as efficiently as possible, and compares it with a database of similarly encoded faces. The encoding is done using features which may be different from, or independent of, the distinctly perceived features such as eyes, ears, nose, lips, and hair.
Mathematically, the principal component analysis approach treats every image of the training set as a vector in a very high dimensional space. The eigenvectors of the covariance matrix of these vectors capture the variation among the face images. Each image in the training set contributes to the eigenvectors (variations), and this contribution can be displayed as an ‘eigenface’ representing the variation between the images. These eigenfaces look like ghostly images, and some of them are shown in figure 2. In each eigenface some sort of facial variation can be seen which deviates from the original image.
The high dimensional space spanned by all the eigenfaces is called the image space (feature space), and each image is a linear combination of the eigenfaces. The amount of overall variation that one eigenface accounts for is given by the eigenvalue associated with the corresponding eigenvector. If the eigenfaces with small eigenvalues are neglected, then an image can be approximated as a linear combination of a reduced number of eigenfaces. For example, if there are M images in the training set, we get M eigenfaces. Out of these, only the M’ eigenfaces associated with the largest eigenvalues are selected. These span the M’-dimensional subspace, the ‘face space’, of all possible images (the image space).
When the face image to be recognized (known or unknown) is projected onto this face space (figure 1), we get the weights associated with the eigenfaces that linearly approximate the face, or that can be used to reconstruct the face. These weights are then compared with the weights of the known face images so that the face can be recognized as a known face from the training set. In simpler words, the Euclidean distance between the image projection and the known projections is calculated; the face image is then classified as the face with the minimum Euclidean distance.
Figure 1.
(a) The face space and three projected images on it. Here u1 and u2 are the eigenfaces.
(b) The projected face from the training database.
Mathematical calculations
Let a face image I(x,y) be a two dimensional N by N array of (8-bit) intensity
values. An image may also be considered as a vector of dimension N2, so that a typical
image of size 256 by 256 becomes a vector of dimension 65,536 or equivalently a point
in a 65,536-dimensional space. An ensemble of images, then, maps to a collection of
points in this huge space. Principal component analysis would find the vectors that best
account for the distribution of the face images within this entire space.
Let the training set of face images be T_1, T_2, T_3, ..., T_M. This training data set has to be mean-adjusted before calculating the covariance matrix or eigenvectors. The average face is calculated as Ψ = (1/M) Σ_{i=1}^{M} T_i. Each image in the data set differs from the average face by the vector Φ_i = T_i - Ψ; this is the mean-adjusted data. The covariance matrix is
C = (1/M) Σ_{i=1}^{M} Φ_i Φ_i^T = A A^T (1)
where A = [Φ_1, Φ_2, ..., Φ_M]. The matrix C is an N^2 by N^2 matrix and would generate N^2 eigenvectors and eigenvalues. With image sizes like 256 by 256, or even lower than that, such a calculation would be impractical to implement.
A computationally feasible method was suggested to find the eigenvectors. If the number of images in the training set is less than the number of pixels in an image (i.e., M < N^2), then we can solve an M by M matrix instead of an N^2 by N^2 matrix. Consider the matrix A^T A instead of A A^T. Its eigenvectors v_i can be calculated as follows:
A^T A v_i = μ_i v_i (2)
where μ_i is the eigenvalue. Here the size of the matrix is only M by M, so we get M eigenvectors instead of N^2. Premultiplying equation 2 by A, we have
A A^T (A v_i) = μ_i (A v_i) (3)
so the vectors A v_i are eigenvectors of A A^T. These are the M eigenfaces, each of size N^2 by 1, and together they span an image space of dimensionality M.
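For concreteness, here is a minimal NumPy sketch of this trick. This is my own illustration, not code from the report; the function name, the array layout, and the choice to keep M’ components are assumptions.

```python
import numpy as np

def eigenfaces(T, M_prime):
    # T: M x N^2 array, one flattened training image per row.
    psi = T.mean(axis=0)                     # average face Psi
    A = (T - psi).T                          # N^2 x M matrix with columns Phi_i
    mu, v = np.linalg.eigh(A.T @ A)          # eigenpairs of the small M x M matrix (eq. 2)
    order = np.argsort(mu)[::-1][:M_prime]   # indices of the M' largest eigenvalues
    U = A @ v[:, order]                      # columns A v_i are the eigenfaces (eq. 3)
    U /= np.linalg.norm(U, axis=0)           # normalize each eigenface to unit length
    return psi, U
```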
Face Space
As accurate reconstruction of the face is not required, we can now reduce the dimensionality to M’ instead of M. This is done by selecting the M’ eigenfaces which have the largest associated eigenvalues. These eigenfaces now span an M’-dimensional subspace instead of the full N^2-dimensional image space.
A new image T is transformed into its eigenface components (projected into ‘face space’) by the simple operation
w_k = u_k^T (T - Ψ) (4)
for k = 1, 2, ..., M’. The weights obtained above form a vector Ω = [w_1, w_2, w_3, ..., w_M’] that describes the contribution of each eigenface in representing the input face image. This vector may then be used in a standard pattern recognition algorithm to find which of a number of predefined face classes, if any, best describes the face. A face class can be calculated by averaging the weight vectors for the images of one individual. The face classes to be made depend on the classification to be performed; for example, a face class can be made of all the images in which the subject wears spectacles, and with this face class we can classify whether or not a subject wears spectacles. The Euclidean distance of the weight vector of the new image from the kth face class weight vector is
ε_k = ||Ω - Ω_k|| (5)
where Ω_k is the vector describing the kth face class (the Euclidean distance formula can be found in [2]). The face is classified as belonging to class k when the distance ε_k is below some threshold value θ_ε; otherwise the face is classified as unknown. We can also determine whether an image is a face image at all by finding the squared distance between the mean-adjusted input image and its projection onto the face space:
ε^2 = ||Φ - Φ_f||^2 (6)
where Φ = T - Ψ is the mean-adjusted input and Φ_f is its projection onto the face space.
With this we can classify an image as a known face image, an unknown face image, or not a face image.
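Putting the pieces together, here is a sketch of the resulting classifier (again my own illustration with assumed variable names; the thresholds theta_eps and theta_face would be chosen empirically):

```python
import numpy as np

def classify(image, psi, U, class_weights, theta_eps, theta_face):
    # image: flattened N^2 vector; U: N^2 x M' matrix of eigenfaces;
    # class_weights: dict mapping class label k to its mean weight vector Omega_k.
    phi = image - psi
    omega = U.T @ phi                # weights w_k = u_k^T (T - Psi), eq. 4
    phi_f = U @ omega                # projection of phi onto the face space
    if np.sum((phi - phi_f) ** 2) > theta_face:
        return "not a face"          # too far from the face space, eq. 6
    k, eps = min(((k, np.linalg.norm(omega - w))   # distances eps_k, eq. 5
                  for k, w in class_weights.items()), key=lambda t: t[1])
    return k if eps < theta_eps else "unknown face"
```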
Recognition Experiments
Twenty-four images of four individuals, six images per individual, were used for training. The six images had different lighting conditions, orientations, and scalings. These images were recognized successfully with an accuracy of 100% for lighting variations, 90% for orientation variations, and 65% for size variations. Lighting conditions have little effect on recognition because the correlation over the image doesn't change. Orientation affects recognition more because the image may contain more hair than the training images did. Scaling affects recognition significantly because the overall amount of face data in the image changes; this is because the background is not subtracted during training. The effect can be minimized by background subtraction.
Figure 2
The first row shows some of the images used for training, while the second row shows the eigenfaces with the most significant eigenvalues.
Figure 3
The first image is the frontal image, while the second and third are images of the same subject with different scaling and orientation.
The approach is robust, simple, fast, and easy to implement compared to other algorithms. It provides a practical solution to the recognition problem. We are currently investigating in more detail the issues of robustness to changes in head size and orientation. We are also trying to recognize the gender of a person using the same algorithm.
[1] Matthew A. Turk and Alex P. Pentland. “Eigenfaces for recognition”. Journal of Cognitive Neuroscience, Volume 3, Number 1, 1991.
[2] Dimitri Pissarenko. “Eigenface-based facial recognition”. Dec 1, 2002.
[3] Matthew A. Turk and Alex P. Pentland. “Face recognition using eigenfaces”. Proc. CVPR, pp. 586-591. IEEE, June 1991.
[4] L.I. Smith. “A tutorial on principal component analysis”. Feb 2002.
[5] M. Kirby and L. Sirovich. “Application of the Karhunen-Loeve procedure for the characterization of human faces”. IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 12, No. 1, Jan 1990.
[6] URL http://www.cs.dartmouth.edu/~farid/teaching/cs88/kimo.pdf
[7] URL http://www.cs.virginia.edu/~jones/cs851sig/slides
Inverse tangent: Transformations (subsection 16/01)
Transformations and argument simplifications
Argument involving basic arithmetic operations
Involving tan^-1(- z)
Involving tan^-1(-z) and tan^-1(z)
Involving tan^-1(1/z)
Involving tan^-1(1/z) and tan^-1(z)
Involving tan^-1(z^1/2)
Involving tan^-1(z^1/2) and tan^-1(1/z^1/2)
Involving tan^-1(z^1/2) and tan^-1((1/z)^1/2)
Involving tan^-1(z^1/2) and tan^-1(1/(1/z)^1/2)
Involving tan^-1(1/z^1/2)
Involving tan^-1(1/z^1/2) and tan^-1(z^1/2)
Involving tan^-1((z^2)^1/2)
Involving tan^-1((z^2)^1/2) and tan^-1(z)
Involving tan^-1((z^2)^1/2) and tan^-1(1/z)
Involving tan^-1(a (b z^c)^m)
Involving tan^-1(a (b z^c)^m) and tan^-1(a b^m z^m c)
Involving tan^-1(1-z/1+z)
Involving tan^-1(1-z/1+z) and tan^-1(z)
Involving tan^-1(1-z/1+z) and tan^-1(1/z)
Involving tan^-1(z-1/z+1)
Involving tan^-1(z-1/z+1) and tan^-1(z)
Involving tan^-1(z-1/z+1) and tan^-1(1/z)
Involving tan^-1(1+z/1-z)
Involving tan^-1(1+z/1-z) and tan^-1(z)
Involving tan^-1(1+z/1-z) and tan^-1(1/z)
Involving tan^-1(z+1/z-1)
Involving tan^-1(z+1/z-1) and tan^-1(z)
Involving tan^-1(z+1/z-1) and tan^-1(1/z)
Involving tan^-1(2z/1-z^2)
Involving tan^-1(2 z/1-z^2) and tan^-1(z)
Involving tan^-1(2 z/1-z^2) and tan^-1(1/z)
Involving tan^-1(2z/z^2-1)
Involving tan^-1(2z/z^2-1) and tan^-1(z)
Involving tan^-1(2 z/z^2-1) and tan^-1(1/z)
Involving tan^-1(1-z^2/2 z)
Involving tan^-1(1-z^2/2 z) and tan^-1(z)
Involving tan^-1(1-z^2/2 z) and tan^-1(1/z)
Involving tan^-1(z^2-1/2 z)
Involving tan^-1(z^2-1/2 z) and tan^-1(z)
Involving tan^-1(z^2-1/2 z) and tan^-1(1/z)
Involving tan^-1(2z^1/2/1-z)
Involving tan^-1(2z^1/2/1-z) and tan^-1(z^1/2)
Involving tan^-1(2z^1/2/1-z) and tan^-1(1/z^1/2)
Involving tan^-1(2z^1/2/1-z) and tan^-1((1/z)^1/2)
Involving tan^-1(2z^1/2/z-1)
Involving tan^-1(2z^1/2/z-1) and tan^-1(z^1/2)
Involving tan^-1(2z^1/2/z-1) and tan^-1(1/z^1/2)
Involving tan^-1(2z^1/2/z-1) and tan^-1((1/z)^1/2)
Involving tan^-1(1-z/2 z^1/2)
Involving tan^-1(1-z/2 z^1/2) and tan^-1(z^1/2)
Involving tan^-1(1-z/2 z^1/2) and tan^-1(1/z^1/2)
Involving tan^-1(1-z/2 z^1/2) and tan^-1((1/z)^1/2)
Involving tan^-1(z-1/2 z^1/2)
Involving tan^-1(z-1/2 z^1/2) and tan^-1(z^1/2)
Involving tan^-1(z-1/2 z^1/2) and tan^-1(1/z^1/2)
Involving tan^-1(z-1/2 z^1/2) and tan^-1((1/z)^1/2)
Involving tan^-1((1+z^2)^1/2+c z)
Involving tan^-1((1+z^2)^1/2+z) and tan^-1(z)
Involving tan^-1((1+z^2)^1/2+z) and tan^-1(1/z)
Involving tan^-1((1+z^2)^1/2-z) and tan^-1(z)
Involving tan^-1((1+z^2)^1/2-z) and tan^-1(1/z)
Involving tan^-1(1/(1+z^2)^1/2+c z)
Involving tan^-1(1/(1+z^2)^1/2+z) and tan^-1(z)
Involving tan^-1(1/(1+z^2)^1/2+z) and tan^-1(1/z)
Involving tan^-1(1/(1+z^2)^1/2-z) and tan^-1(z)
Involving tan^-1(1/(1+z^2)^1/2-z) and tan^-1(1/z)
Involving tan^-1((1+z^2)^1/2+a/z)
Involving tan^-1((1+z^2)^1/2+1/z) and tan^-1(z)
Involving tan^-1((1+z^2)^1/2+1/z) and tan^-1(1/z)
Involving tan^-1((1+z^2)^1/2-1/z) and tan^-1(z)
Involving tan^-1((1+z^2)^1/2-1/z) and tan^-1(1/z)
Involving tan^-1(z/(1+z^2)^1/2+a)
Involving tan^-1(z/(1+z^2)^1/2+1) and tan^-1(z)
Involving tan^-1(z/(1+z^2)^1/2+1) and tan^-1(1/z)
Involving tan^-1(z/(1+z^2)^1/2-1) and tan^-1(z)
Involving tan^-1(z/(1+z^2)^1/2-1) and tan^-1(1/z)
Prime Implicants discussion with help of Karnaugh map (K-map) - Contd.
Example: Consider a function F(x, y, z, w) of 11 minterms, shown in the truth table.
The four essential prime implicants are z'w, xz', xw, and x'zw' (the terms common to both solutions below).
The next step is to cover the remaining two minterms (7 and 10) in all possible combinations.
The final simplified solutions of the function are:
Solution 1. z’w + xz’+ xw + x’zw’ + yw + xy
Solution 2. z’w + xz’+ xw + x’zw’ + yz
K-map plot of 11 minterms
There are four essential terms for the function in discussion. The essential terms are shown within squares in the truth table.
Calendar Dates
Fred Espenak
The Julian calendar is used for all dates up to 1582 Oct 04. After that date, the Gregorian calendar is used. Due to the Gregorian Calendar reform, the day after 1582 Oct 04 (Julian calendar) is 1582
Oct 15 (Gregorian calendar). Note that Great Britain did not adopt the Gregorian calendar until 1752. For more information, see Calendars.
The Julian calendar does not include the year 0, so the year 1 BCE[1] is followed by the year 1 CE. This is awkward for arithmetic calculations. All pages in this web site employ the astronomical
numbering system for dates (they use the year 0). Years prior to the year 0 are represented by a negative sign. Historians should note that there is a difference of one year between astronomical
dates and BCE dates. Thus, the astronomical year 0 corresponds to 1 BCE, and year -100 corresponds to 101 BCE, etc.. (See: Year Dating Conventions )
There is some historical uncertainty as to which years from 43 BCE to 8 CE were counted as leap years. For the purposes of this web site, we assume that all Julian years divisible by 4 are counted as leap years.
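These conventions are easy to mechanize. Here is a small Python sketch (my own illustration of the rules above, using astronomical year numbering):

```python
def is_julian_leap(year):
    # Per the assumption above, every Julian year divisible by 4
    # is a leap year (including year 0, i.e., 1 BCE).
    return year % 4 == 0

def bce_to_astronomical(bce_year):
    # 1 BCE -> 0, 2 BCE -> -1, 101 BCE -> -100
    return 1 - bce_year

assert bce_to_astronomical(1) == 0
assert bce_to_astronomical(101) == -100
```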
[1] The terms BCE and CE are abbreviations for "Before Common Era" and "Common Era," respectively. They are the secular equivalents to the BC and AD dating conventions. (See: Year Dating Conventions)
2007 Jan 24
MT: November 2000, Volume 93, Issue 8
Mathematics in Search of History
Donald Barry
The author argues that the history of mathematics is a fluid field within which lively debate occurs. He shares a math problem that requires a community of scholars to simulate the process by which
the history of mathematics is actually developed.
The Mathematics of Levi ben Gershon
Shai Simonson
A history of the 14th-century mathematician, some samples of his work with word problems, and an introduction to using historical original sources as a way to interest students and teach mathematics.
Word Histories: Melding Mathematics and Meanings
Rheta Rubenstein, Randy Schwartz
Etymologies of mathematics words as a rich resource for deepening students' understanding and appreciation of mathematics, history, and language. Detailed examples are presented concerning the
branches of mathematics, conic sections, and words of Arabic origin.
Mathematics in the Age of Jane Austen: Essential Skills of 1800
S. Gray
Textbooks from the 1800s for young ladies vs. young men, for the youngest students, more advanced students, as well as university students. These publications furnish a record of the skills thought
to be essential at the turn of the previous century.
Kepler and Wiles: Models of Perseverance
Paul Shotsberger
A recounting of the work of Kepler and Wiles, exemplars of great minds who made a tangible impact on the field of mathematics but who had to overcome seemingly insurmountable roadblocks.
The Evolutionary Character of Mathematics
Richard Davitt
This article advocates Grabiner's UDED paradigm [use-discover-explore-define] as a tool for teachers' own acquisition of authentic historical accounts of the evolution of mathematical topics and as a pedagogical stratagem for their students as well.
Mathematicians Are Human Too
James Lightner
Fascinating stories about mathematicians and their interesting lives. It shows that mathematicians are human beings with peculiar foibles and personality quirks just like the rest of us.
From the Top of the Mountain
Donald Smith
Demonstration that mathematics is a changing science through an in-depth look at the history of the development of logarithms. It also serves as a reminder that we must continue to remember and
appreciate the efforts and contributions of past mathematicians.
The Role of History in a Mathematics Class
Gerald Marshall, Beverly Rich
This article argues that history has a vital role to play in the math classroom: it prompts teachers and students to think and talk about mathematics in meaningful ways; it enriches the curriculum; it demythologizes mathematics; and it promotes communicating, connecting, and valuing mathematics.
Benoit Mandelbrot: The Euclid of Fractal Geometry
Dane Camp
This article cites Mandelbrot as an exemplar of one who learned the language of the universe -- mathematics -- biding his time until he could employ his knowledge both as a means of creative
expression and as a tool for comprehending the intricacies of the world around us.
Felix Klein and the NCTM's Standards: A Mathematician Considers Mathematics Education
Kim McComas
A discussion of the parallels between Klein's position at the forefront of a movement to reform mathematics education and that of the NCTM's Standards. A picture of Klein as an important historical
figure who saw equal importance in studying pure mathematics, applying mathematics, and teaching mathematics.
The constant, K, in Coulomb’s equation is much larger than the constant, G, in the universal gravitation equation. Of what significance is this?
Gravitational and electrostatic forces are part of a group of forces in physics that are called fundamental forces. Coulomb's law and the law of universal gravitation are both examples of inverse
square laws, meaning that the force between two objects is inversely proportional to the square of the distance between the two objects.
Coulomb's law is shown below:
F = K*(q1*q2)/r^2
where F is the electrostatic force, K is Coulomb's constant, q1 and q2 are the scalar charges of the two objects, and r is the distance between the two objects. The value of K is 8.987 x 10^9 N*m^2/C^2.
The law of universal gravitation is shown below:
F = G*(m1*m2)/r^2
where F is the gravitational force, G is the gravitational constant, m1 and m2 are the masses of the two objects, and r is the distance between the two objects. The value of G is 6.67 x 10^-11 N*m^2/kg^2.
Basically, the reason that Coulomb's constant is so much larger than the gravitational constant is that the gravitational force is much weaker than the other fundamental forces, including the electrostatic force. Here is a real life example to demonstrate the point. If you take a balloon, rub it on your shirt, and then place it on a wall, the electrostatic force will cause the balloon to stick to the wall. The electrostatic force is plainly evident. But if you did not rub the balloon and placed it near the wall, there would be no obvious force or attraction between the wall and the balloon. In other words, the gravitational force of attraction between the balloon and the wall is present, but it is so small that it essentially rounds down to zero. In fact, the force of the static electricity overcomes the force of gravity between the balloon and the Earth (the balloon sticks to the wall instead of falling to the floor). You have to get objects of planetary scale to get appreciable gravitational forces.
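To put numbers on this, here is a short Python sketch (my own example, using the constants quoted above plus standard values for the electron's charge and mass, which are my additions) comparing the two forces between two electrons:

```python
K = 8.987e9       # Coulomb's constant, N*m^2/C^2
G = 6.67e-11      # gravitational constant, N*m^2/kg^2
q = 1.602e-19     # electron charge magnitude, C (assumed for the example)
m = 9.109e-31     # electron mass, kg (assumed for the example)

r = 1.0                         # any separation; it cancels in the ratio
F_electric = K * q * q / r**2
F_gravity = G * m * m / r**2
print(F_electric / F_gravity)   # ~4e42: gravity is utterly negligible here
```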
Game Stats: Oilers/Senators Oct 19
Here are today’s shot differential stats:
Here’s a new chart that has info on today’s shot distances:
Corsi broken down by player:
By Period:
Vs each opponent:
Corsi% by line faced:
With each teammate:
And here’s the season-to-date Corsi stats through 9 games:
4 Comments
1. Great picture, you have a real knack for that. In this one, isn’t that Devan Dubnyk in the extreme foreground. Appears to be getting seriously outvoted, luckily goalies have veto power, eh.
2. Thanks for the work Mike.
Has anyone put together a probability table of distance of shot vs sh%?
If that’s available, then wouldn’t it be possible to judge a game on the probability of scoring using the shots x probability of scoring?
It might “lessen” the outshooting effect in judging a game, season, etc.
3. Hmmm, found this post by Gabe which suggests luck is too much a factor to rely on shooting distances.
4. Mike,
Have you read this: http://www.hockeyprospectus.com/article.php?articleid=540
Cyclic Groups
$\mathbb{Z}_n^*$ is an example of a group. We won’t formally introduce group theory, but we do point out that a group only deals with one operation, so the reason for the $*$ in $\mathbb{Z}_n^*$ is
to stress that we are only considering multiplication and forgetting about addition.
Notice we rarely add or subtract elements of $\mathbb{Z}_n^*$. For one thing, the sum of two units might not be a unit. We performed addition in our proof of Fermat’s Theorem, but this can be avoided
by using our proof of Euler’s Theorem instead. We did need addition to prove that $\mathbb{Z}_n^*$ has a certain structure, but once this is done, we can focus on multiplication.
In this section, we shall see what we can be said from studying multiplication alone.
When $\mathbb{Z}_n^*$ has a generator, we call $\mathbb{Z}_n^*$ a cyclic group. If $g$ is a generator we write $\mathbb{Z}_n^* = \langle g\rangle$.
A subgroup of $\mathbb{Z}_n^*$ is a subset $H$ of $\mathbb{Z}_n^*$ such that if $a, b \in H$, then $a b \in H$. Since $H$ is finite, the powers of any $a \in H$ eventually cycle, so $a^d = 1$ for some $d$; thus any subgroup contains $1$, and also the inverse $a^{d-1}$ of every element $a$ in the subgroup.
Examples: Any $a\in\mathbb{Z}_n^*$ can be used to generate a cyclic subgroup $\langle a \rangle = \{a, a^2,...,a^d = 1\}$ (for some $d$). For example, $\langle 2 \rangle = \{2,4,1\}$ is a subgroup of $\mathbb{Z}_7^*$. Any group is always a subgroup of itself. {1} is always a subgroup of any group. These last two examples are the improper subgroups of a group.
Lagrange’s Theorem
We prove Lagrange’s Theorem for $\mathbb{Z}_n^*$. The proof can easily be modified to work for a general finite group.
Our proof of Euler’s Theorem has ideas in common with this proof.
Theorem: Let $H$ be a subgroup of $\mathbb{Z}_n^*$ of size $m$. Then $m | \phi(n)$.
Proof: If $H = \mathbb{Z}_n^*$ then $m = \phi(n)$. Otherwise, let $H = \{h_1,...,h_m\}$. Let $a$ be some element of $\mathbb{Z}_n^*$ not in $H$, and consider the set $\{h_1 a ,..., h_m a\}$ which we
denote by $H a$. Every element in this set is distinct (since multiplying $h_i a = h_j a$ by $a^{-1}$ implies $h_i = h_j$), and furthermore no element of $H a$ lies in $H$ (since $h_i = h_j a$
implies $a = h_j^{-1} h_i$ thus $a \in H$, a contradiction).
Thus if every element of $\mathbb{Z}_n^*$ lies in $H$ or $H a$ then $2 m = \phi(n)$ and we are done. Otherwise take some element $b$ in $\mathbb{Z}_n^*$ not in $H$ or $H a$. By a similar argument, we
see that $H b = \{h_1 b ,..., h_m b \}$ contains exactly $m$ elements and has no elements in common with either $H$ or $H a$.
Thus iterating this procedure if necessary, we eventually have $\mathbb{Z}_n^*$ as the disjoint union of the sets $H, H a, H b ,... $ where each set contains $m$ elements. Thus $m | \phi(n)$.∎
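A quick computational illustration of this coset partition (a Python sketch of mine, reusing the earlier example $\langle 2 \rangle = \{1, 2, 4\}$ in $\mathbb{Z}_7^*$):

```python
from math import gcd

n = 7
units = [a for a in range(1, n) if gcd(a, n) == 1]
H = {1, 2, 4}                        # the subgroup <2> of Z_7^*
cosets, seen = [], set()
for a in units:
    if a not in seen:
        Ha = {h * a % n for h in H}  # the coset H a
        cosets.append(Ha)
        seen |= Ha
print(cosets)  # [{1, 2, 4}, {3, 5, 6}]: Z_7^* splits into two cosets of size 3
```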
Corollary: Euler’s Theorem (and Fermat’s Theorem). Any $a \in \mathbb{Z}_n^*$ generates a cyclic subgroup $\{a, a^2,...,a^d = 1\}$ thus $d | \phi(n)$, and hence $a^{\phi(n)} = 1$.
Subgroups of Cyclic Groups
Theorem: All subgroups of a cyclic group are cyclic. If $G = \langle g\rangle$ is a cyclic group of order $n$ then for each divisor $d$ of $n$ there exists exactly one subgroup of order $d$ and it can be generated by $g^{n/d}$.
Proof: Given a divisor $d$, let $e = n/d$. Let $g$ be a generator of $G$. Then $\langle g^e \rangle = \{g^e, g^{2e},...,g^{d e} = 1\}$ is a cyclic subgroup of $G$ of size $d$.
Now let $H = \{a_1,...,a_{d-1},a_d = 1\}$ be some subgroup of $G$ with $d$ elements. Then for each $a_i$, we have $a_i = g^k$ for some $k$. By Lagrange's Theorem the order of $a_i$ must divide $d$, hence $g^{k d} = 1$.
Since the order of $g$ is $n$, we have $k d = m n = m d e$ for some $m$. Thus $k = e m$ and $a_i = (g^e)^m$, that is, each $a_i$ is some power of $g^e$; hence $H$ is one of the subgroups we previously described. ∎
Counting Generators
Theorem: Let $G$ be cyclic group of order $n$. Then $G$ contains exactly $\phi(n)$ generators.
Proof: Let $g$ be a generator of $G$, so $G = \{g,...,g^n = 1\}$. Then $g^k$ generates $G$ if and only if $g^{k m} = g$ for some $m$, which happens when $k m = 1 \pmod {n}$, that is $k$ must be a
unit in $\mathbb{Z}_n$, thus there are $\phi(n)$ values of $k$ for which $g^k$ is a generator.∎
Example: When $\mathbb{Z}_n^*$ is cyclic (i.e. when $n = 2,4,p^k , 2p^k$ for odd primes $p$), $\mathbb{Z}_n^*$ contains $\phi(\phi(n))$ generators.
Theorem: For any positive integer $n$
\[ n = \sum_{d|n} \phi(d) . \]
Proof: Consider a cyclic group $G$ of order $n$, hence $G = \{g, ..., g^n = 1\}$. Each element $a \in G$ is contained in some cyclic subgroup. The theorem follows since there is exactly one subgroup
$H$ of order $d$ for each divisor $d$ of $n$ and $H$ has $\phi(d)$ generators.∎
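Both of the last two results are easy to check computationally; here is a Python sketch of mine with a deliberately naive $\phi$:

```python
from math import gcd

def phi(n):
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

# n equals the sum of phi(d) over the divisors d of n
for n in range(1, 100):
    assert sum(phi(d) for d in range(1, n + 1) if n % d == 0) == n

# Z_7^* is cyclic of order 6, so it has phi(6) = 2 generators
p = 7
def order(a):
    x, k = a, 1
    while x != 1:
        x, k = x * a % p, k + 1
    return k

gens = [a for a in range(1, p) if order(a) == p - 1]
assert gens == [3, 5] and len(gens) == phi(p - 1)
```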
Group Structure
In an abstract sense, for every positive integer $n$, there is only one cyclic group of order $n$ which we denote by $C_n$. This is because once we have a generator $g$, we know $C_n = \{g, g^2,...,g
^n = 1\}$ and the behaviour of $C_n$ is completely determined by this.
Example: Both $\mathbb{Z}_3^*$ and $\mathbb{Z}_4^*$ are cyclic of order 2, so they both behave exactly like $C_2$ (when considering multiplication only). This is an example of a group isomorphism.
Example: For $n = 2^k p_1^{k_1} ... p_m^{k_m}$ for odd primes $p_i$, by the Chinese Remainder Theorem we have
\[ \mathbb{Z}_n^* = \mathbb{Z}_{2^k}^* \times \mathbb{Z}_{p_1^{k_1}}^* \times ... \times \mathbb{Z}_{p_m^{k_m}}^* \]
Recall each $\mathbb{Z}_{p_i^{k_i}}^*$ is cyclic, and so are $\mathbb{Z}_2^*$ and $\mathbb{Z}_4^*$. Also recall for $k \gt 2$ we have that $3 \in \mathbb{Z}_{2^k}^*$ has order $2^{k-2}$ and no
element has a higher order. Using some group theory, this means the group structure of $\mathbb{Z}_n^*$ is
\[ C_{2^{k-1}} \times C_{p_1^{k_1} - p_1^{k_1 - 1}} \times ... \times C_{p_m^{k_m} - p_m^{k_m-1}} \]
for $k \le 2$, and
\[ C_2 \times C_{2^{k-2}} \times C_{p_1^{k_1} - p_1^{k_1 - 1}} \times ... \times C_{p_m^{k_m} - p_m^{k_m-1}} \]
for $k \gt 2$.
Benicia Prealgebra Tutor
Find a Benicia Prealgebra Tutor
...I have a growing experience tutoring elementary, middle, and high school students in writing, history, and math. Additionally, I have volunteered as a teacher's aid in both history and English
classrooms in San Francisco. Working with children: In addition to focusing on academics, I have spent the past three years working with children of all ages.
24 Subjects: including prealgebra, English, reading, writing
...I have current experience working with students with emotional disturbance and a variety of concomitant learning disabilities, including but not limited to autism, Asperger's syndrome, ADD, and ADHD. My sessions are structured but enjoyable. They are designed to meet the specific needs of my students.
10 Subjects: including prealgebra, reading, writing, grammar
...I have extensive knowledge of pretty much all of math through the end of college, and of statistics well beyond that. I'm good at zeroing in on precisely what's giving you trouble, and will
break each problem into pieces you can understand and learn. While a grad student at UC Berkeley, I recei...
14 Subjects: including prealgebra, statistics, geometry, GRE
...I also have taken graduate courses in electromagnetism and optics. I worked on applications research and system software (e.g. Open Source) to support applications including the Java (and J2ME)
and Bluetooth communication stacks. I have recent Hon.
15 Subjects: including prealgebra, calculus, GRE, algebra 1
...As a high school student, I was a private math tutor for a 5th grader, and I also regularly helped students in our school's after school tutoring program. I have a lot of experience helping
students of all levels! In addition to teaching the material, I also like to emphasize study strategies and skills.
27 Subjects: including prealgebra, chemistry, calculus, physics
dailysudoku.com :: View topic - 13 Dec 2005 very hard sudoku: solution
Discussion of Daily Sudoku puzzles
zaks Posted: Tue Dec 13, 2005 7:14 pm Post subject: 13 Dec 2005 very hard sudoku: solution
13 Dec (vh) sdk
I use these notations:
boxes (b1...b9):
box1 box2 box3
box4 box5 box6
box7 box8 box9
columns (ca...ci) :
c c c c c c c c c
o o o o o o o o o
L L L L L L L L L
a b c d e f g h i
rows (r1...r9):
row 9
row 8
row 7
row 6
row 5
row 4
row 3
row 2
row 1
number-column-row (exactly as in chess)
e.g., 4b6 means "put number 4 in cell b6", etc.
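A trivial Python sketch of mine, just to make the move notation concrete:

```python
def parse_move(move):                 # e.g. "4b6"
    digit, col, row = move[0], move[1], move[2:]
    return int(digit), col, int(row)

print(parse_move("4b6"))              # (4, 'b', 6): put 4 in column b, row 6
```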
here's my solution (sorry, without move numbers)
1b8 1i6 1g1 1a3 1f2 ( finished with 1's )
4c5 4h7 6i3
7i9 ( see row 9, col i, and box 2 = see r9+ci+b2 )
8g5 ( see r5 )
7g4 ( see cg ) => 9g7 6g9 6d7
3i7 (see r7 ) => 8h8
5c3 ( in box 7, 5 can't be in cells a1, c1 because
in box 8, 5 must be in row 1)
8c9 ( see box 1 and col c ) => now we fill box 1:
5a7 7b7 => now fill row 7:
8f7 8d4 8b6 ( finished with 8's )
4e4 ( box 5 )
6a6 ( box 4 )
6b2 ( box 7 ) 6e1 ( finished with 6's )
7a5 ( box 4 )
2i5 ( row 5 )
9e5 ( row 5 )
9i4 ( col 9 )
5h4 ( row 4 )
2b4 ( row 4 )
3a4 ( row 4 )
2a1 ( col a )
9c6 ( box 4 )
3c1 ( col 3 )
9b3 ( box 7 )
9h2 ( box 9 )
2h3 ( box 9 )
3h6 ( box 6 )
the remaining moves are more or less easy:
9d8 x!
enjoy, zaks
k-bizzle Posted: Wed Dec 14, 2005 7:17 am Post subject:
For the 5c3: why must the 5 in box 7 be in r1 and NOT in r3cd or r3cf?
newbie Posted: Thu Dec 15, 2005 5:14 am Post subject: 13 Dec 2005 very hard sudoku: solution
5c3 is because of the 4's and 7's in rows 1 and 2 (i.e., box 7 and box 9 both contain 4's and 7's in rows 1 and 2 --> therefore box 8 must contain either a 4 or 7 in both r3cd and r3cf, and there's already an 8 in r3ce --> hence there must be a 5 in row 1 of box 8).
alanr555 Posted: Sat Dec 17, 2005 2:19 am Post subject:
It would appear that SamGJ is now compiling "Very Hard" puzzles that are not amenable to full solution by "Mandatory Pairs" methods - unlike a few weeks ago when the 'v.hard' were often easier than the plain 'hard'.
This one was solved after "normalising" column 8 and row 7.
"Normalising" is the splitting of the candidate string for a row
or column into two or (rarely) three subgroups where each
subgroup portrays all possible candidates for a number of
cells equal to the number of candidates in the subgroup.
An example would be 26,378,237,26,38.
The candidate string for the line is 23678.
This MUST have five digits in it as five cells are involved.
Normalisation would spot the pair of 26 items
This gives (26)(2378) as subgroups BUT the second group has
a repeat of one of the digits in the first group - a real trigger
to paying attention!
Removing the '2' from the 237 cell gives the normalised set
of (26)(378). The second group now has three members (with
values 378,37,38) and covers three cells.
The original string 23678 is "congruent" for the line (as it has
five distinct digits and covers five cells) but it is not "normalised"
as it is capable of division into two congruent substrings (albeit
by - beneficial - modification of one of the cell profiles).
NB: The example values are NOT the same ones as in the puzzle!
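Alan's normalisation step is mechanical enough to automate. Here is a Python sketch of mine, applied to his example cells 26, 378, 237, 26, 38:

```python
from itertools import combinations

cells = [{2, 6}, {3, 7, 8}, {2, 3, 7}, {2, 6}, {3, 8}]

# Find a congruent subgroup: k cells whose candidate union has size k,
# then strip those candidates from the remaining cells.
for k in range(2, len(cells)):
    for idxs in combinations(range(len(cells)), k):
        union = set().union(*(cells[i] for i in idxs))
        if len(union) == k:
            for i in range(len(cells)):
                if i not in idxs:
                    cells[i] -= union

print(cells)  # the 237 cell loses its 2: [{2,6}, {3,7,8}, {3,7}, {2,6}, {3,8}]
```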
Alan Rayner BS23 2QT
WEEK 1: 'Words into Math' Block Game | #made4math
In keeping with my Week 1 emphasis in Algebra 1 on activating prior knowledge of how to translate words into mathematical expressions, equations, or inequalities (or at least gelling some of it back
into place), I've also created a "Block" game for practicing 'Words into Math' in my Algebra 1 classes. There are two levels of game cards that correspond to Lessons 1.3 and 1.4 in McDougall Littell
Algebra 1 California edition (for those of you playing along at home).
This is a variation on Maria Anderson's wonderful, tic-tac-toe-style "blocking games" (Antiderivative Block, Factor Pair Block, and Exponent Block) — using her generic game board, rules, and my own game cards (for each of these first three games of hers on her web site).
The game can be played in any number of ways — either competitive or collaborative. Students can compete against each other — tic-tac-toe style — to get four of their counters in a row. Or they can
simply take turns choosing the problem and working on solving each problem on the whole board.
I've created two levels of "Words into Math Block": Level 1 (purple problem cards) and Level 2 (green problem cards).
I use Maria's generic PDF gameboard and print or copy them on colored cardstock or paper. I have learned the hard way to give each level its own color ID as soon as I create the game cards so I can easily recreate the card sets later whenever I need to.
I allow students to use whatever resources they need to during practice activities, so I expect to see those nifty Troublesome Phrase Translator slider sleeves flying during these two days. :-)
All of my materials, plus the photo above (in case you need a model), are on the Math Teacher Wiki.
Students really love these block games! I have a bunch of different "counters" that they can use as their game board markers: little stars (Woodsies from Michael's), circles, and hearts, colorful
foam planet/star clusters, and various kinds of beans.
I'm hoping to get my students to be less flummoxed by mathematical language by giving them practice in using it early and often. Enjoy!
15 comments:
1. That looks great! I'm definitely stealing this!
2. I am psyched to find your website!! I'm a MS math teacher teaching Algebra I so I'm tickled with the totally awesome ideas you are sharing. :-) I use Holt so I'll only need to tweak a bit
(sweet!). p.s. Thanks for thoughtfully sharing your ideas freely...we appreciate it!
3. I use the Antiderivative Block game with Leibniz and Newton counters <3
1. Bowman-
You are so funny. :-D
- Elizabeth (aka @cheesemonkeysf on Twitter)
4. I have never heard of this block game. How do you play or am I skipping over something?
5. Thanks for sharing
I can't find it on the math teacher wiki
6. There are links built in above to Maria Anderson's "Block" games pages (start with http://busynessgirl.com/exponent-block-and-factor-pair-block/ ). While she freely shares her work, I did not
wish to overstep by claiming her instructions as my own.
If you roll your cursor over the text in the paragraph where I give the names of my various pieces and components, you'll find the links as you go. Just in case, here they are again:
Maria Anderson Block Games
Maria's generic game board:
My own Level 1 problems:
My own Level 2 problems:
Let me know if you need more help finding these!
- Elizabeth (aka @cheesemonkeysf on Twitter)
7. this sounds great and i'd like to try it in class on monday. i've looked at all the links but i am having trouble figuring out the rules for the game. can you describe them? thanks!
1. It's basically just Tic-Tac-Toe. Each player chooses a question on a space that they'd like to occupy. Both players work the problem on that question card. If the chooser, Player 1, gets it
correct, s/he wins that round and puts his/her counter on that space.
Then Player 2 chooses a problem, trying to "block" Player 1's progress and make his/her own progress. Both players work the problem. If Player 2 gets it correct, s/he wins that round and puts
his/her counter on that space.
Play continues until one player gets four spaces in a row.
8. Thanks for freely sharing your ideas. I used to teach elementary and have so much colored card stock! I'll be putting it to good use.
9. How do students know if they got the correct or incorrect answer?
1. The cards are double-sided. Students flip over the card to view the answer.
10. Hey this blog post is amazing, but i am not cleared with the rules.
1. Maria's own original instructions are the clearest:
Partners take turns declaring the answer to a problem. This determines who "takes" the square on the board. The objective is to rack up four squares in a row (kind of like Tic-Tac-Toe).
11. This activity looks like it has potential with my fifth grade group. I'm glad I came across your blog via a #MTBoS tweet. Thanks for freely sharing your ideas and resources.
class scipy.interpolate.BarycentricInterpolator(xi, yi=None, axis=0)
The interpolating polynomial for a set of points
Constructs a polynomial that passes through a given set of points. Allows evaluation of the polynomial, efficient changing of the y values to be interpolated, and updating by adding more x
values. For reasons of numerical stability, this function does not compute the coefficients of the polynomial.
The values yi need to be provided before the function is evaluated, but none of the preprocessing depends on them, so rapid updates are possible.
Parameters:
    xi : array-like
        1-d array of x coordinates of the points the polynomial should pass through
    yi : array-like, optional
        The y coordinates of the points the polynomial should pass through. If None, the y values will be supplied later via the set_yi method.
    axis : int, optional
        Axis in the yi array corresponding to the x-coordinate values.
This class uses a “barycentric interpolation” method that treats the problem as a special case of rational function interpolation. This algorithm is quite stable, numerically, but even in a world
of exact computation, unless the x coordinates are chosen very carefully - Chebyshev zeros (e.g. cos(i*pi/n)) are a good choice - polynomial interpolation itself is a very ill-conditioned process
due to the Runge phenomenon.
Based on Berrut and Trefethen 2004, “Barycentric Lagrange Interpolation”.
Methods:
    __call__(x)         Evaluate the interpolating polynomial at the points x
    add_xi(xi[, yi])    Add more x values to the set to be interpolated
    set_yi(yi[, axis])  Update the y values to be interpolated
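For orientation, a minimal usage sketch (mine, not part of the original documentation), using Chebyshev-spaced nodes as the notes above recommend:

import numpy as np
from scipy.interpolate import BarycentricInterpolator

n = 10
xi = np.cos(np.arange(n + 1) * np.pi / n)    # Chebyshev points on [-1, 1]
interp = BarycentricInterpolator(xi, np.sin(xi))

x = np.linspace(-1, 1, 5)
print(interp(x))                # evaluate the interpolating polynomial at x

interp.set_yi(np.cos(xi))       # rapid update: same nodes, new y values
print(interp(x))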
Lafayette Hill Algebra Tutor
Find a Lafayette Hill Algebra Tutor
...That has given me a lot of experience in teaching writing to children all over the United States and Canada. There are problems with writing that are common to most children. Those problems can
take some time and effort to work on, but they can be corrected and children can become fine writers.
16 Subjects: including algebra 1, algebra 2, reading, English
...Prior to becoming a tutor with WyzAnt, I have been engaged in training professionals in technical writing skills and computers, including Microsoft Word, Publisher and PowerPoint. I also tutor
phonics, grammar, public speaking, reading comprehension, written expression, study skills, the HESI, ...
51 Subjects: including algebra 1, Spanish, English, reading
...Also, I have tutored students in ODE's for over ten years. I worked for close to three years as a pension actuary and have passed the first three exams given by the Society of Actuaries, which
rigorously cover such topics as calculus, probability, interest theory, modeling, and financial derivat...
19 Subjects: including algebra 2, algebra 1, calculus, geometry
...I am currently employed with a company as design engineer but want to fill my free time with something productive and at the same time earn a second income to pay off my heavy student debt. I
have never officially tutored as a job but have done so for my peers who struggled in school. I am most...
8 Subjects: including algebra 1, algebra 2, precalculus, trigonometry
...She too praised Jonathan's teaching methods. We highly recommend Jonathan! — J.C. (Mother). I taught high school math in Delaware, and have taught middle school math in Philadelphia.
22 Subjects: including algebra 2, algebra 1, calculus, writing
Breadth first search 3-SAT algorithms for DNA computers
Results 1 - 10 of 13
- UNCONVENTIONAL MODELS OF COMPUTATION , 1998
"... Biomolecular Computation (BMC) is computation done at the molecular scale, using Biotechnological techniques. This paper discusses the underlying biotechnology that BMC may utilize, and surveys
a number of distinct paradigms for doing BMC. We also identify a number of key future experimental mile ..."
Cited by 15 (6 self)
Biomolecular Computation (BMC) is computation done at the molecular scale, using Biotechnological techniques. This paper discusses the underlying biotechnology that BMC may utilize, and surveys a
number of distinct paradigms for doing BMC. We also identify a number of key future experimental milestones for the field of BMC.
- In Proceedings of the 24th International Colloquium on Automata, Languages, and Programming , 1998
"... The maximum number of strands used is an important measure of a molecular algorithm's complexity. This measure is also called the volume used by the algorithm. Every problem that can be solved
by an NP Turing machine with b(n) binary nondeterministic choices can be solved by molecular computation in ..."
Cited by 14 (5 self)
The maximum number of strands used is an important measure of a molecular algorithm's complexity. This measure is also called the volume used by the algorithm. Every problem that can be solved by an
NP Turing machine with b(n) binary nondeterministic choices can be solved by molecular computation in a polynomial number of steps, with four test tubes, in volume 2^b(n). We identify a large class
of recursive algorithms that can be implemented using bounded nondeterminism. This yields improved molecular algorithms for important problems like 3-SAT, independent set, and 3-colorability. 1. A
model of molecular computing Molecular computation was first studied in [1, 20]. The models we define were inspired as well by the work of [3, 28]. A molecular sequence is a string over an alphabet Σ (we can use any alphabet we like, encoding characters of Σ by finite sequences of base pairs). A test tube is a multiset of molecular sequences. We describe the allowable operations below.
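As a purely in-silico illustration of this test-tube model, the standard extract-and-merge filtering for SAT can be simulated in a few lines of Python (a toy stand-in for the wet-lab Separate/Merge operations; all names are illustrative):

from itertools import product

def extract(tube, var, value):
    # Simulate a Separate step: pull out strands whose bit `var` equals `value`.
    keep = [s for s in tube if s[var] == value]
    rest = [s for s in tube if s[var] != value]
    return keep, rest

def solve_sat(num_vars, clauses):
    # clauses: lists of (variable_index, wanted_value) literals
    tube = list(product([False, True], repeat=num_vars))   # 2^n initial strands
    for clause in clauses:
        satisfied = []
        for var, val in clause:
            hit, tube = extract(tube, var, val)
            satisfied.extend(hit)        # Merge the satisfying strands back
        tube = satisfied                 # strands violating the clause are discarded
    return tube                          # survivors encode satisfying assignments

# (x0 or x1) and (not x0 or x2)
print(solve_sat(3, [[(0, True), (1, True)], [(0, False), (2, True)]]))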
, 1997
"... The potential of DNA as a truly parallel computing device is enormous. Solution-phase DNA chemistry, though not unlimited, provides the only currently-available experimental system. Its
practical feasibility, however, is controversial. We have sought to extend the feasibility and generality of DNA c ..."
Cited by 8 (4 self)
Add to MetaCart
The potential of DNA as a truly parallel computing device is enormous. Solution-phase DNA chemistry, though not unlimited, provides the only currently-available experimental system. Its practical
feasibility, however, is controversial. We have sought to extend the feasibility and generality of DNA computing by a novel application of the theory of counting . The biochemically equivalent
operation for DNA counting is well known. We propose a DNA algorithm that employs this new operation. We also present an implementation of this algorithm by a novel DNA-chemical method. Preliminary
computer simulations suggest that the algorithm can significantly reduce the DNA space complexity (i.e., the maximum number of DNA molecules that must be present in the test tube during computation)
for solving 3SAT to O(2 0:4n ). If the observation is correct, our algorithm can solve 3SAT instances of size up to or exceeding 120 variables. 1 Introduction 1.1 Two major issues in DNA computing
Adleman [Ad...
- In Proceedings of the 5th Israel Symposium on Theory of Computing and Systems , 1997
"... The number of molecular strands used by a molecular algorithm is an important measure of the algorithm's complexity. This measure is also called the volume used by the algorithm. We prove that
three important polynomial-time models of molecular computation with bounded volume are equivalent to model ..."
Cited by 8 (3 self)
The number of molecular strands used by a molecular algorithm is an important measure of the algorithm's complexity. This measure is also called the volume used by the algorithm. We prove that three
important polynomial-time models of molecular computation with bounded volume are equivalent to models of polynomial-time Turing machine computation with bounded nondeterminism. Without any
assumption, we show that the Split operation does not increase the power of polynomial-time molecular computation. Assuming a plausible separation between Turing machine complexity classes, the
Amplify operation does increase the power of polynomial-time molecular computation. 1. Introduction Molecular computation was first studied in [1, 15], which identified the number of molecular
strands used as an important resource. This measure is also called the volume. (Research performed at Yale University and at the University of Maryland. Supported in part by the National Science Foundation under grants CCR-8958528, CCR-94154...)
- In Proceedings of the IEEE Congress on Evolutionary Computation , 1999
"... DNA computation investigates the potential of DNA as a massively parallel computing device. Research is focused on designing parallel computation models executable by DNA-based chemical
processes and on developing algorithms in the models. In 1994 Leonard Adleman initiated this area of research by p ..."
Cited by 5 (0 self)
DNA computation investigates the potential of DNA as a massively parallel computing device. Research is focused on designing parallel computation models executable by DNA-based chemical processes and
on developing algorithms in the models. In 1994 Leonard Adleman initiated this area of research by presenting a DNA-based method for solving the Hamilton Path Problem. That contribution raised the
hope that parallel computation by DNA could be used to tackle NP-complete problems which are thought of as intractable. The current realization, however, is that NP-complete problems may not be best
suited for DNA-based (more generally, molecule-based) computing. A better subject for DNA computing could be large-scale evaluation of parallel computation models. Several proposals have been made in
this direction. We overview those methods, discuss technical and theoretical issues involved, and present some possible applications of those methods. 1 Introduction Biomolecular computing is the
- Problems, 3rd DIMACS Meeting on DNA Based Computers, Univ. of Penns , 1997
"... We develop a general technique for constructing molecular-based approximation algorithms for NP optimization problems. Our algorithms exhibit a useful volume--accuracy tradeoff. In particular we
solve the Covering problem of Hochbaum and Maass using polynomial time and O(ℓ²(log ℓ)·n^(2−... ..."
Cited by 4 (2 self)
We develop a general technique for constructing molecular-based approximation algorithms for NP optimization problems. Our algorithms exhibit a useful volume--accuracy tradeoff. In particular we
solve the Covering problem of Hochbaum and Maass using polynomial time and O(ℓ²(log ℓ)·n^(2−n·(n−1)ℓ²/2)) volume with error ratio (1 + 1/ℓ)². We also present the first
candidate for a problem that can be solved more efficiently with the Amplify operation than without. 1. Introduction Molecular computers were introduced by Adleman [1, 8], but so far the field lacks
a "killer application." It is well known that a DNA computer can solve SAT in linear time [8], but using an exponential number of DNA strands. The number of strands used by an algorithm is called the
"volume." Although recent papers [5, 4, 9] solve NP problems using smaller exponential volume, we believe that it is essential to find applications of DNA computers that use subexponential vol...
, 1998
"... Length of DNA strands is an important resource in DNA computing. We show how to decrease strand lengths in known molecular algorithms for some NP-complete problems, such as like 3-SAT and
Independent Set, without substantially increasing their running time or volume. 1. Introduction Since Adleman's ..."
Cited by 3 (0 self)
Length of DNA strands is an important resource in DNA computing. We show how to decrease strand lengths in known molecular algorithms for some NP-complete problems, such as like 3-SAT and Independent
Set, without substantially increasing their running time or volume. 1. Introduction Since Adleman's pioneering experiment [1], many researchers have explored efficient molecular algorithms for
NP-complete problems. The running time for a molecular algorithm is to the number of operations on test tubes. The volume is the maximum number of strings in all test tubes at any time, counting
multiplicities. The strand-length complexity of a molecular algorithm is the length of the longest DNA strand used in the computation. Although time and volume complexity have been well studied [13,
6, 2, 14, 5, 9, 10, 8], strand length has received less attention. Yet Roweis et al. [16] state that 2500-base sequences decay at a rate of 10% per hour, and Sambrook [17] states that DNA strands
longer than 1000...
- In Proceedings of the 13th Annual IEEE Conference on Computational Complexity , 1998
"... We survey the theoretical use of DNA computing to solve intractable problems. We also discuss the relationship between problems in DNA computing and questions in complexity theory. 1.
Introduction Adleman's pioneering experiment [1] opened the possibility that moderately large instances of NP-comp ..."
Cited by 2 (0 self)
We survey the theoretical use of DNA computing to solve intractable problems. We also discuss the relationship between problems in DNA computing and questions in complexity theory. 1. Introduction
Adleman's pioneering experiment [1] opened the possibility that moderately large instances of NP-complete problems might be solved via techniques from molecular biology. Since then numerous papers
have explored more efficient molecular algorithms for particular problems in NP [27, 10, 3, 30, 8, 20, 21, 18], molecular solutions to PSPACE-complete problems [7, 37], and fault tolerant molecular
algorithms [12, 25]. Other papers have examined the relationships between molecular complexity classes and classical complexity classes [38, 19]. We will survey some of these advances in this paper.
For previous surveys in DNA computing, see [24, 36, 34, 32]. 2. Biological Background DNA is the storage medium for genetic information. It is composed of units called nucleotides, distinguished by
the che...
- Sixth International Workshop on DNA-based Computers, volume 2054 of LNCS , 2001
"... Abstract. We present a randomized DNA algorithm for k-SAT based on the classical algorithm of Paturi et al. [8]. For an n-variable, m-clause instance of k-SAT (m>n), our algorithm finds a
satisfying assignment, assuming one exists, with probability 1 − e^(−α), in worst-case time O(k²mn) and space O( ..."
Cited by 2 (0 self)
Abstract. We present a randomized DNA algorithm for k-SAT based on the classical algorithm of Paturi et al. [8]. For an n-variable, m-clause instance of k-SAT (m>n), our algorithm finds a satisfying
assignment, assuming one exists, with probability 1 − e^(−α), in worst-case time O(k²mn) and space O(2^((1 − 1/k)n + log α)). This makes it the most space-efficient DNA k-SAT algorithm for k > 3 and k < n/log α (i.e. the clause size is small compared to the number of variables). In addition, our algorithm is the first DNA algorithm to adapt techniques from the field of randomized classical algorithms.
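For orientation, the classical procedure of Paturi, Pudlák and Zane that this DNA algorithm builds on can be sketched in a few lines (written from memory, so treat it as illustrative rather than the paper's exact algorithm): pick a random variable order, force a variable's value whenever some clause has become unit, and guess otherwise.

import random

def ppz_round(num_vars, clauses):
    # clauses: lists of (variable_index, wanted_value) literals
    order = random.sample(range(num_vars), num_vars)
    assign = {}
    for v in order:
        forced = None
        for clause in clauses:
            if any(assign.get(u) == w for u, w in clause):
                continue                          # clause already satisfied
            open_lits = [(u, w) for u, w in clause if u not in assign]
            if len(open_lits) == 1 and open_lits[0][0] == v:
                forced = open_lits[0][1]          # unit clause forces v
        assign[v] = forced if forced is not None else (random.random() < 0.5)
    ok = all(any(assign[u] == w for u, w in c) for c in clauses)
    return assign if ok else None

def ppz(num_vars, clauses, rounds=10000):
    # Repeating over many rounds amplifies the per-round success probability.
    for _ in range(rounds):
        result = ppz_round(num_vars, clauses)
        if result is not None:
            return result
    return None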
, 1998
"... We design volume-efficient molecular algorithms for all problems in #P, using only reasonable biological operations. In particular, we give a polynomial-time O(2 n n 2 log 2 n)-volume algorithm
to compute the number of Hamiltonian paths in an n-node graph. This improves Adleman's celebrated ..."
We design volume-efficient molecular algorithms for all problems in #P, using only reasonable biological operations. In particular, we give a polynomial-time O(2^n·n²·log²n)-volume algorithm to
compute the number of Hamiltonian paths in an n-node graph. This improves Adleman's celebrated n!-volume algorithm for finding a single Hamiltonian path. 1. Introduction Molecular computation was
first proposed by Feynman [10], but his idea was not implemented by experiment for a few decades. In 1994 Adleman [1] succeeded to practically solve an instance of the Hamiltonian path problem in a
test tube, just by handling DNA strings. DNA is the storage medium for genetic information. It is composed of units called nucleotides, distinguished by the chemical group (base) attached to them.
The four bases are adenine, guanine, cytosine, and thymine, abbreviated as A, G, C, and T. Single nucleotides are linked end-to-end to form DNA strands. Each DNA strand has two chemically
Clifford algebra
From Wikipedia, the free encyclopedia
In mathematics, Clifford algebras are a type of associative algebra. They can be thought of as one of the possible generalizations of the complex numbers and quaternions. The theory of Clifford
algebras is intimately connected with the theory of quadratic forms and orthogonal transformations. Clifford algebras have important applications in a variety of fields including geometry and
theoretical physics. They are named for the English geometer William Kingdon Clifford.
Introduction and basic properties
Specifically, a Clifford algebra is a unital associative algebra which contains and is generated by a vector space V equipped with a quadratic form Q. The Clifford algebra Cℓ(V,Q) is the "freest"
algebra generated by V subject to the condition^[1]
$v^2 = Q(v)1\ \mbox{ for all } v\in V.$
If the characteristic of the ground field K is not 2, then one can rewrite this fundamental identity in the form
$uv + vu = 2\lang u, v\rang \mbox{ for all }u,v \in V,$
where <u, v> = ½(Q(u + v) − Q( u) − Q(v)) is the symmetric bilinear form associated to Q. This idea of "freest" or "most general" algebra subject to this identity can be formally expressed through
the notion of a universal property (see below).
Quadratic forms and Clifford algebras in characteristic 2 form an exceptional case. In particular, if char K = 2 it is not true that a quadratic form is determined by its symmetric bilinear form, or
that every quadratic form admits an orthogonal basis. Many of the statements in this article include the condition that the characteristic is not 2, and are false if this condition is removed.
As quantization of exterior algebra
Clifford algebras are closely related to exterior algebras. In fact, if Q = 0 then the Clifford algebra Cℓ(V,Q) is just the exterior algebra Λ(V). For nonzero Q there exists a canonical linear
isomorphism between Λ(V) and Cℓ(V,Q) whenever the ground field K does not have characteristic two. That is, they are naturally isomorphic as vector spaces, but with different multiplications (in the
case of characteristic two, they are still isomorphic as vector spaces, just not naturally). Clifford multiplication is strictly richer than the exterior product since it makes use of the extra
information provided by Q.
More precisely, Clifford algebras may be thought of as quantizations (cf. quantization (physics), Quantum group) of the exterior algebra, in the same way that the Weyl algebra is a quantization of
the symmetric algebra.
Weyl algebras and Clifford algebras admit a further structure of a *-algebra, and can be unified as even and odd terms of a superalgebra, as discussed in CCR and CAR algebras.
Universal property and construction
Let V be a vector space over a field K, and let Q : V → K be a quadratic form on V. In most cases of interest the field K is either R, C or a finite field.
A Clifford algebra Cℓ(V,Q) is a unital associative algebra over K together with a linear map i : V → Cℓ(V,Q) satisfying i(v)^2 = Q(v)1 for all v ∈ V, defined by the following universal property:
Given any associative algebra A over K and any linear map j : V → A such that
j(v)^2 = Q(v)1 for all v ∈ V
(where 1 denotes the multiplicative identity of A), there is a unique algebra homomorphism f : Cℓ(V,Q) → A such that f ∘ i = j (i.e. such that the corresponding diagram commutes).
Working with a symmetric bilinear form <·,·> instead of Q (in characteristic not 2), the requirement on j is
j(v)j(w) + j(w)j(v) = 2<v, w> for all v, w ∈ V.
A Clifford algebra as described above always exists and can be constructed as follows: start with the most general algebra that contains V, namely the tensor algebra T(V), and then enforce the
fundamental identity by taking a suitable quotient. In our case we want to take the two-sided ideal I[Q] in T(V) generated by all elements of the form
$v\otimes v - Q(v)1$ for all $v\in V$
and define Cℓ(V,Q) as the quotient
Cℓ(V,Q) = T(V)/I[Q.]
It is then straightforward to show that Cℓ(V,Q) contains V and satisfies the above universal property, so that Cℓ is unique up to a unique isomorphism; thus one speaks of "the" Clifford algebra Cℓ(V,
Q). It also follows from this construction that i is injective. One usually drops the i and considers V as a linear subspace of Cℓ(V,Q).
The universal characterization of the Clifford algebra shows that the construction of Cℓ(V,Q) is functorial in nature. Namely, Cℓ can be considered as a functor from the category of vector spaces
with quadratic forms (whose morphisms are linear maps preserving the quadratic form) to the category of associative algebras. The universal property guarantees that linear maps between vector spaces
(preserving the quadratic form) extend uniquely to algebra homomorphisms between the associated Clifford algebras.
Basis and dimension
If the dimension of V is n and {e[1],…,e[n]} is a basis of V, then the set
$\{e_{i_1}e_{i_2}\cdots e_{i_k} \mid 1\le i_1 < i_2 < \cdots < i_k \le n\mbox{ and } 0\le k\le n\}$
is a basis for Cℓ(V,Q). The empty product (k = 0) is defined as the multiplicative identity element. For each value of k there are n choose k basis elements, so the total dimension of the Clifford
algebra is
$\dim C\ell(V,Q) = \sum_{k=0}^n\begin{pmatrix}n\\ k\end{pmatrix} = 2^n.$
Since V comes equipped with a quadratic form, there is a set of privileged bases for V: the orthogonal ones. An orthogonal basis is one such that
$\langle e_i, e_j \rangle = 0 \qquad i \neq j. \,$
where <·,·> is the symmetric bilinear form associated to Q. The fundamental Clifford identity implies that for an orthogonal basis
$e_ie_j = -e_je_i \qquad i \neq j. \,$
This makes manipulation of orthogonal basis vectors quite simple. Given a product $e_{i_1}e_{i_2}\cdots e_{i_k}$ of distinct orthogonal basis vectors, one can put them into standard order by
including an overall sign corresponding to the number of flips needed to correctly order them (i.e. the signature of the ordering permutation).
If the characteristic is not 2 then an orthogonal basis for V exists, and one can easily extend the quadratic form on V to a quadratic form on all of Cℓ(V,Q) by requiring that distinct elements $e_
{i_1}e_{i_2}\cdots e_{i_k}$ are orthogonal to one another whenever the {e[i]}'s are orthogonal. Additionally, one sets
$Q(e_{i_1}e_{i_2}\cdots e_{i_k}) = Q(e_{i_1})Q(e_{i_2})\cdots Q(e_{i_k})$.
The quadratic form on a scalar is just Q(λ) = λ^2. Thus, orthogonal bases for V extend to orthogonal bases for Cℓ(V,Q). The quadratic form defined in this way is actually independent of the
orthogonal basis chosen (a basis-independent formulation will be given later).
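To make the sign bookkeeping concrete, here is a small self-contained sketch (not part of the article) that multiplies basis blades of Cℓ(p,q) by sorting the generators while applying e[i]e[j] = −e[j]e[i] and e[i]^2 = Q(e[i]); the tuple/dict encoding is just one illustrative choice:

def blade_product(a, b, sig):
    # a, b: sorted tuples of generator indices; sig[i] is e_i^2 (+1 or -1).
    factors = list(a) + list(b)
    coeff = 1
    i = 0
    while i < len(factors) - 1:
        if factors[i] > factors[i + 1]:
            # swap distinct generators: e_i e_j = -e_j e_i
            factors[i], factors[i + 1] = factors[i + 1], factors[i]
            coeff = -coeff
            i = max(i - 1, 0)
        elif factors[i] == factors[i + 1]:
            coeff *= sig[factors[i]]      # e_i^2 = Q(e_i)
            del factors[i:i + 2]
            i = max(i - 1, 0)
        else:
            i += 1
    return coeff, tuple(factors)

# In Cl(0,2), e1 e2 behaves like a quaternion unit: (e1 e2)^2 = -1
sig = {1: -1, 2: -1}
print(blade_product((1, 2), (1, 2), sig))   # -> (-1, ())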
Examples: real and complex Clifford algebras
The most important Clifford algebras are those over real and complex vector spaces equipped with nondegenerate quadratic forms.
Every nondegenerate quadratic form on a finite-dimensional real vector space is equivalent to the standard diagonal form:
$Q(v) = v_1^2 + \cdots + v_p^2 - v_{p+1}^2 - \cdots - v_{p+q}^2$
where n = p + q is the dimension of the vector space. The pair of integers (p, q) is called the signature of the quadratic form. The real vector space with this quadratic form is often denoted R^p,q.
The Clifford algebra on R^p,q is denoted Cℓ[p,q](R). The symbol Cℓ[n](R) means either Cℓ[n,0](R) or Cℓ[0,n](R) depending on whether the author prefers positive definite or negative definite spaces.
A standard orthonormal basis {e[i]} for R^p,q consists of n = p + q mutually orthogonal vectors, p of which have norm +1 and q of which have norm −1. The algebra Cℓ[p,q](R) will therefore have p
vectors which square to +1 and q vectors which square to −1.
Note that Cℓ[0,0](R) is naturally isomorphic to R since there are no nonzero vectors. Cℓ[0,1](R) is a two-dimensional algebra generated by a single vector e[1] which squares to −1, and therefore is
isomorphic to C, the field of complex numbers. The algebra Cℓ[0,2](R) is a four-dimensional algebra spanned by {1, e[1], e[2], e[1]e[2]}. The latter three elements square to −1 and all anticommute,
and so the algebra is isomorphic to the quaternions H. The next algebra in the sequence, Cℓ[0,3](R), is an 8-dimensional algebra isomorphic to the direct sum H ⊕ H, called the split-biquaternions.
One can also study Clifford algebras on complex vector spaces. Every nondegenerate quadratic form on a complex vector space is equivalent to the standard diagonal form
$Q(z) = z_1^2 + z_2^2 + \cdots + z_n^2$
where n = dim V, so there is essentially only one Clifford algebra in each dimension. We will denote the Clifford algebra on C^n with the standard quadratic form by Cℓ[n](C). One can show that the
algebra Cℓ[n](C) may be obtained as the complexification of the algebra Cℓ[p,q](R) where n = p + q:
$C\ell_n(\mathbb{C}) \cong C\ell_{p,q}(\mathbb{R})\otimes\mathbb{C} \cong C\ell(\mathbb{C}^{p+q},Q\otimes\mathbb{C})$.
Here Q is the real quadratic form of signature (p,q). Note that the complexification does not depend on the signature. The first few cases are not hard to compute. One finds that
Cℓ[0](C) = C
Cℓ[1](C) = C ⊕ C
Cℓ[2](C) = M[2](C)
where M[2](C) denotes the algebra of 2×2 matrices over C.
It turns out that every one of the algebras Cℓ[p,q](R) and Cℓ[n](C) is isomorphic to a matrix algebra over R, C, or H or to a direct sum of two such algebras. For a complete classification of these
algebras see classification of Clifford algebras.
Relation to the exterior algebra
Given a vector space V one can construct the exterior algebra Λ(V), whose definition is independent of any quadratic form on V. It turns out that if F does not have characteristic 2 then there is a
natural isomorphism between Λ(V) and Cℓ(V,Q) considered as vector spaces (and there exists an isomorphism in characteristic two, which may not be natural). This is an algebra isomorphism if and only
if Q = 0. One can thus consider the Clifford algebra Cℓ(V,Q) as an enrichment (or more precisely, a quantization, cf. the Introduction) of the exterior algebra on V with a multiplication that depends
on Q (one can still define the exterior product independent of Q).
The easiest way to establish the isomorphism is to choose an orthogonal basis {e[i]} for V and extend it to an orthogonal basis for Cℓ(V,Q) as described above. The map Cℓ(V,Q) → Λ(V) is determined by
$e_{i_1}e_{i_2}\cdots e_{i_k} \mapsto e_{i_1}\wedge e_{i_2}\wedge \cdots \wedge e_{i_k}.$
Note that this only works if the basis {e[i]} is orthogonal. One can show that this map is independent of the choice of orthogonal basis and so gives a natural isomorphism.
If the characteristic of K is 0, one can also establish the isomorphism by antisymmetrizing. Define functions f[k] : V × … × V → Cℓ(V,Q) by
$f_k(v_1, \cdots, v_k) = \frac{1}{k!}\sum_{\sigma\in S_k}{\rm sgn}(\sigma)\, v_{\sigma(1)}\cdots v_{\sigma(k)}$
where the sum is taken over the symmetric group on k elements. Since f[k] is alternating it induces a unique linear map Λ^k(V) → Cℓ(V,Q). The direct sum of these maps gives a linear map between Λ(V)
and Cℓ(V,Q). This map can be shown to be a linear isomorphism, and it is natural.
A more sophisticated way to view the relationship is to construct a filtration on Cℓ(V,Q). Recall that the tensor algebra T(V) has a natural filtration: F^0 ⊂ F^1 ⊂ F^2 ⊂ … where F^k contains sums of
tensors with rank ≤ k. Projecting this down to the Clifford algebra gives a filtration on Cℓ(V,Q). The associated graded algebra
$Gr_F C\ell(V,Q) = \bigoplus_k F^k/F^{k-1}$
is naturally isomorphic to the exterior algebra Λ(V). Since the associated graded algebra of a filtered algebra is always isomorphic to the filtered algebra as filtered vector spaces (by choosing
complements of F^k in F^k+1 for all k), this provides an isomorphism (although not a natural one) in any characteristic, even two.
In the following, assume that the characteristic is not 2.^[2]
Clifford algebras are Z[2]-graded algebra (also known as superalgebras). Indeed, the linear map on V defined by $v \mapsto -v$ (reflection through the origin) preserves the quadratic form Q and so by
the universal property of Clifford algebras extends to an algebra automorphism
α : Cℓ(V,Q) → Cℓ(V,Q).
Since α is an involution (i.e. it squares to the identity) one can decompose Cℓ(V,Q) into positive and negative eigenspaces
$C\ell(V,Q) = C\ell^0(V,Q) \oplus C\ell^1(V,Q)$
where Cℓ^i(V,Q) = {x ∈ Cℓ(V,Q) | α(x) = (−1)^ix}. Since α is an automorphism it follows that
$C\ell^{\,i}(V,Q)C\ell^{\,j}(V,Q) = C\ell^{\,i+j}(V,Q)$
where the superscripts are read modulo 2. This gives Cℓ(V,Q) the structure of a Z[2]-graded algebra. The subspace Cℓ^0(V,Q) forms a subalgebra of Cℓ(V,Q), called the even subalgebra. The subspace Cℓ^
1(V,Q) is called the odd part of Cℓ(V,Q) (it is not a subalgebra). The Z[2]-grading plays an important role in the analysis and application of Clifford algebras. The automorphism α is called the main
involution or grade involution.
Remark. In characteristic not 2 the underlying vector space of Cℓ(V,Q) inherits a Z-grading from the canonical isomorphism with the underlying vector space of the exterior algebra Λ(V). It is
important to note, however, that this is a vector space grading only. That is, Clifford multiplication does not respect the Z-grading, only the Z[2]-grading: for instance if $Q(v) \neq 0$, then $v\in C\ell^1(V,Q)$, but $v^2\in C\ell^0(V,Q)$, not in $C\ell^2(V,Q)$. Happily, the gradings are related in the natural way: Z[2] = Z/2Z. Further, the Clifford algebra is Z-filtered: $C\ell^{\leq i}(V,Q) \cdot C\ell^{\leq j}(V,Q) \subset C\ell^{\leq i+j}(V,Q)$. The degree of a Clifford number usually refers to the degree in the Z-grading. Elements which are pure in the Z[2]-grading are simply said to be even or odd.
The even subalgebra Cℓ^0(V,Q) of a Clifford algebra is itself a Clifford algebra^[3]. If V is the orthogonal direct sum of a vector a of norm Q(a) and a subspace U, then Cℓ^0(V,Q) is isomorphic to Cℓ
(U,−Q(a)Q), where −Q(a)Q is the form Q restricted to U and multiplied by −Q(a). In particular over the reals this implies that
$C\ell_{p,q}^0(\mathbb{R}) \cong C\ell_{p,q-1}(\mathbb{R})$ for q > 0, and
$C\ell_{p,q}^0(\mathbb{R}) \cong C\ell_{q,p-1}(\mathbb{R})$for p > 0.
In the negative-definite case this gives an inclusion Cℓ[0,n−1](R) ⊂ Cℓ[0, n](R) which extends the sequence
R ⊂ C ⊂ H ⊂ H⊕H ⊂ …
Likewise, in the complex case, one can show that the even subalgebra of Cℓ[n](C) is isomorphic to Cℓ[n−1](C).
In addition to the automorphism α, there are two antiautomorphisms which play an important role in the analysis of Clifford algebras. Recall that the tensor algebra T(V) comes with an
antiautomorphism that reverses the order in all products:
$v_1\otimes v_2\otimes \cdots \otimes v_k \mapsto v_k\otimes \cdots \otimes v_2\otimes v_1.$
Since the ideal I[Q] is invariant under this reversal, this operation descends to an antiautomorphism of Cℓ(V,Q) called the transpose or reversal operation, denoted by x^t. The transpose is an
antiautomorphism: (xy)^t = y^tx^t. The transpose operation makes no use of the Z[2]-grading so we define a second antiautomorphism by composing α and the transpose. We call this operation Clifford
conjugation denoted $\bar x$
$\bar x = \alpha(x^t) = \alpha(x)^t.$
Of the two antiautomorphisms, the transpose is the more fundamental.^[4]
Note that all of these operations are involutions. One can show that they act as ±1 on elements which are pure in the Z-grading. In fact, all three operations depend only on the degree modulo 4. That
is, if x is pure with degree k then
$\alpha(x) = \pm x \qquad x^t = \pm x \qquad \bar x = \pm x$
where the signs are given by the following table:
│ k mod 4 │ 0 │ 1 │ 2 │ 3 │ │
│ $\alpha(x)\,$ │ + │ − │ + │ − │ (−1)^k │
│ $x^t\,$ │ + │ + │ − │ − │ (−1)^k(k−1)/2 │
│ $\bar x$ │ + │ − │ − │ + │ (−1)^k(k+1)/2 │
The Clifford scalar product
When the characteristic is not 2 the quadratic form Q on V can be extended to a quadratic form on all of Cℓ(V,Q) as explained earlier (which we also denoted by Q). A basis independent definition is
$Q(x) = \lang x^t x\rang$
where <a> denotes the scalar part of a (the grade 0 part in the Z-grading). One can show that
$Q(v_1v_2\cdots v_k) = Q(v_1)Q(v_2)\cdots Q(v_k)$
where the v[i] are elements of V — this identity is not true for arbitrary elements of Cℓ(V,Q).
The associated symmetric bilinear form on Cℓ(V,Q) is given by
$\lang x, y\rang = \lang x^t y\rang.$
One can check that this reduces to the original bilinear form when restricted to V. The bilinear form on all of Cℓ(V,Q) is nondegenerate if and only if it is nondegenerate on V.
It is not hard to verify that the transpose is the adjoint of left/right Clifford multiplication with respect to this inner product. That is,
$\lang ax, y\rang = \lang x, a^t y\rang,$ and
$\lang xa, y\rang = \lang x, y a^t\rang.$
Structure of Clifford algebras
In this section we assume that the vector space V is finite dimensional and that the bilinear form of Q is non-singular. A central simple algebra over K is a matrix algebra over a (finite
dimensional) division algebra with center K. For example, the central simple algebras over the reals are matrix algebras over either the reals or the quaternions.
• If V has even dimension then Cℓ(V,Q) is a central simple algebra over K.
• If V has even dimension then Cℓ^0(V,Q) is a central simple algebra over a quadratic extension of K or a sum of two isomorphic central simple algebras over K.
• If V has odd dimension then Cℓ(V,Q) is a central simple algebra over a quadratic extension of K or a sum of two isomorphic central simple algebras over K.
• If V has odd dimension then Cℓ^0(V,Q) is a central simple algebra over K.
The structure of Clifford algebras can be worked out explicitly using the following result. Suppose that U has even dimension and a non-singular bilinear form with discriminant d, and suppose that V
is another vector space with a quadratic form. The Clifford algebra of U + V is isomorphic to the tensor product of the Clifford algebras of U and (−1)^(dim(U)/2)d·V, which is the space V with its quadratic form multiplied by (−1)^(dim(U)/2)d. Over the reals, this implies in particular that
$Cl_{p+2,q}(\mathbb{R}) = M_2(\mathbb{R})\otimes Cl_{q,p}(\mathbb{R})$
$Cl_{p+1,q+1}(\mathbb{R}) = M_2(\mathbb{R})\otimes Cl_{p,q}(\mathbb{R})$
$Cl_{p,q+2}(\mathbb{R}) = \mathbb{H}\otimes Cl_{q,p}(\mathbb{R})$
These formulas can be used to find the structure of all real Clifford algebras; see the classification of Clifford algebras.
Notably, the Morita equivalence class of a Clifford algebra (its representation theory: the equivalence class of the category of modules over it) depends only on the signature p − q mod 8. This is an
algebraic form of Bott periodicity.
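As a sanity check on these recursions, the mod-8 periodicity can be tabulated directly. The sketch below (my own, using the standard classification table rather than the recursion itself) names Cℓ[p,q](R) up to isomorphism, with this article's convention that p generators square to +1:

import math

# (division algebra, its real dimension, number of matrix blocks), keyed by (p - q) mod 8
TYPE = {0: ("R", 1, 1), 1: ("R", 1, 2), 2: ("R", 1, 1), 3: ("C", 2, 1),
        4: ("H", 4, 1), 5: ("H", 4, 2), 6: ("H", 4, 1), 7: ("C", 2, 1)}

def clifford_algebra(p, q):
    div, dim_div, blocks = TYPE[(p - q) % 8]
    # total real dimension 2^(p+q) = blocks * k^2 * dim_R(division algebra)
    k = math.isqrt(2 ** (p + q) // (blocks * dim_div))
    return " + ".join([f"M_{k}({div})"] * blocks)

print(clifford_algebra(0, 2))   # M_1(H): the quaternions
print(clifford_algebra(0, 3))   # M_1(H) + M_1(H): the split-biquaternions
print(clifford_algebra(1, 3))   # M_2(H)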
The Clifford group Γ
In this section we assume that V is finite dimensional and the quadratic form Q is nondegenerate.
The invertible elements of the Clifford algebra act on it by twisted conjugation: conjugation by x maps $y \mapsto x y \alpha(x)^{-1}$.
The Clifford group Γ is defined to be the set of invertible elements x that stabilize vectors, meaning that
$x v \alpha(x)^{-1}\in V$
for all v in V.
This formula also defines an action of the Clifford group on the vector space V that preserves the norm Q, and so gives a homomorphism from the Clifford group to the orthogonal group. The Clifford
group contains all elements r of V of nonzero norm, and these act on V by the corresponding reflections that take v to v − <v,r>r/Q(r) (In characteristic 2 these are called orthogonal transvections
rather than reflections.)
The Clifford group Γ is the disjoint union of two subsets Γ^0 and Γ^1, where Γ^i is the subset of elements of degree i. The subset Γ^0 is a subgroup of index 2 in Γ.
If V is a finite dimensional real vector space with positive definite (or negative definite) quadratic form then the Clifford group maps onto the orthogonal group of V with respect to the form (by
the Cartan-Dieudonné theorem) and the kernel consists of the nonzero elements of the field K. This leads to exact sequences
$1 \rightarrow K^* \rightarrow \Gamma \rightarrow O_V(K) \rightarrow 1,\,$
$1 \rightarrow K^* \rightarrow \Gamma^0 \rightarrow SO_V(K) \rightarrow 1.\,$
Over other fields or with indefinite forms, the map is not in general onto, and the failure is captured by the spinor norm.
Spinor norm
In arbitrary characteristic, the spinor norm Q is defined on the Clifford group by
$Q(x) = x^tx\,$
It is a homomorphism from the Clifford group to the group K^* of non-zero elements of K. It coincides with the quadratic form Q of V when V is identified with a subspace of the Clifford algebra.
Several authors define the spinor norm slightly differently, so that it differs from the one here by a factor of −1, 2, or −2 on Γ^1. The difference is not very important in characteristic other than 2.
The nonzero elements of K have spinor norm in the group K^*2 of squares of nonzero elements of the field K. So when V is finite dimensional and non-singular we get an induced map from the orthogonal
group of V to the group K^*/K^*2, also called the spinor norm. The spinor norm of the reflection of a vector r has image Q(r) in K^*/K^*2, and this property uniquely defines it on the orthogonal
group. This gives exact sequences:
$1 \to \{\pm 1\} \to \mbox{Pin}_V(K) \to O_V(K) \to K^*/K^{*2},\,$
$1 \to \{\pm 1\} \to \mbox{Spin}_V(K) \to SO_V(K) \to K^*/K^{*2}.\,$
Note that in characteristic 2 the group {±1} has just one element.
From the point of view of Galois cohomology of algebraic groups, the spinor norm is a connecting homomorphism on cohomology. Writing μ[2] for the algebraic group of square roots of 1 (over a field of
characteristic not 2 it is roughly the same as a two-element group with trivial Galois action), the short exact sequence
$1 \to \mu_2 \rightarrow \mbox{Pin}_V \rightarrow O_V \rightarrow 1\,$
yields a long exact sequence on cohomology, which begins
$1 \to H^0(\mu_2;K) \to H^0(\mbox{Pin}_V;K) \to H^0(O_V;K) \to H^1(\mu_2;K)\,$
The 0th Galois cohomology group of an algebraic group with coefficients in K is just the group of K-valued points: H^0(G;K) = G(K), and $H^1(\mu_2;K) \cong K^*/K^{*2}$, which recovers the previous sequence:
$1 \to \{\pm 1\} \to \mbox{Pin}_V(K) \to O_V(K) \to K^*/K^{*2},\,$
where the spinor norm is the connecting homomorphism $H^0(O_V;K) \to H^1(\mu_2;K).$
Spin and Pin groups
In this section we assume that V is finite dimensional and its bilinear form is non-singular. (If K has characteristic 2 this implies that the dimension of V is even.)
The Pin group Pin[V](K) is the subgroup of the Clifford group Γ of elements of spinor norm 1, and similarly the Spin group Spin[V](K) is the subgroup of elements of Dickson invariant 0 in Pin[V](K).
When the characteristic is not 2, these are the elements of determinant 1. The Spin group usually has index 2 in the Pin group.
Recall from the previous section that there is a homomorphism from the Clifford group onto the orthogonal group. We define the special orthogonal group to be the image of Γ^0. If K does not have
characteristic 2 this is just the group of elements of the orthogonal group of determinant 1. If K does have characteristic 2, then all elements of the orthogonal group have determinant 1, and the
special orthogonal group is the set of elements of Dickson invariant 0.
There is a homomorphism from the Pin group to the orthogonal group. The image consists of the elements of spinor norm 1 ∈ K^*/K^*2. The kernel consists of the elements +1 and −1, and has order 2
unless K has characteristic 2. Similarly there is a homomorphism from the Spin group to the special orthogonal group of V.
In the common case when V is a positive or negative definite space over the reals, the spin group maps onto the special orthogonal group, and is simply connected when V has dimension at least 3.
Further, the kernel of this homomorphism consists of 1 and −1. So in this case the spin group, Spin(n), is a double cover of SO(n). Please note, however, that the simple connectedness of the spin group
is not true in general: if V is R^p,q for p and q both at least 2 then the spin group is not simply connected. In this case the algebraic group Spin[p,q] is simply connected as an algebraic group,
even though its group of real valued points Spin[p,q](R) is not simply connected. This is a rather subtle point, which completely confused the authors of at least one standard book about spin groups.
Clifford algebras Cℓ[p,q](C), with p+q=2n even, are matrix algebras which have a complex representation of dimension 2^n. By restricting to the group Pin[p,q](R) we get a complex representation of
the Pin group of the same dimension, called the spin representation. If we restrict this to the spin group Spin[p,q](R) then it splits as the sum of two half spin representations (or Weyl
representations) of dimension 2^(n−1).
If p+q=2n+1 is odd then the Clifford algebra Cℓ[p,q](C) is a sum of two matrix algebras, each of which has a representation of dimension 2^n, and these are also both representations of the Pin group
Pin[p,q](R). On restriction to the spin group Spin[p,q](R) these become isomorphic, so the spin group has a complex spinor representation of dimension 2^n.
More generally, spinor groups and pin groups over any field have similar representations whose exact structure depends on the structure of the corresponding Clifford algebras: whenever a Clifford
algebra has a factor that is a matrix algebra over some division algebra, we get a corresponding representation of the pin and spin groups over that division algebra. For examples over the reals see
the article on spinors.
Real spinors
To describe the real spin representations, one must know how the spin group sits inside its Clifford algebra. The Pin group, Pin[p,q] is the set of invertible elements in Cl[p,q] which can be written
as a product of unit vectors:
${\mathit{Pin}}_{p,q}=\{v_1v_2\dots v_r |\,\, \forall i, \|v_i\|=\pm 1\}.$
Comparing with the above concrete realizations of the Clifford algebras, the Pin group corresponds to the products of arbitrarily many reflections: it is a cover of the full orthogonal group O(p,q).
The Spin group consists of those elements of Pin[p,q] which are products of an even number of unit vectors. Thus by the Cartan-Dieudonné theorem Spin is a cover of the group of proper rotations SO(p,q).
Let α : Cℓ → Cℓ be the automorphism which is given by -Id acting on pure vectors. Then in particular, Spin[p,q] is the subgroup of Pin[p,q] whose elements are fixed by α. Let
$Cl_{p,q}^0 = \{ x\in Cl_{p,q} |\, \alpha(x)=x\}.$
(These are precisely the elements of even degree in Cℓ[p,q].) Then the spin group lies within Cℓ^0[p,q].
The irreducible representations of Cℓ[p,q] restrict to give representations of the pin group. Conversely, since the pin group is generated by unit vectors, all of its irreducible representation are
induced in this manner. Thus the two representations coincide. For the same reasons, the irreducible representations of the spin coincide with the irreducible representations of Cℓ^0[p,q]
To classify the pin representations, one need only appeal to the classification of Clifford algebras. To find the spin representations (which are representations of the even subalgebra), one can
first make use of either of the isomorphisms (see above)
Cℓ^0[p,q] ≈ Cℓ[p,q-1], for q > 0
Cℓ^0[p,q] ≈ Cℓ[q,p-1], for p > 0
and realize a spin representation in signature (p,q) as a pin representation in either signature (p,q-1) or (q,p-1).
Differential geometry
One of the principal applications of the exterior algebra is in differential geometry where it is used to define the bundle of differential forms on a smooth manifold. In the case of a (pseudo-)
Riemannian manifold, the tangent spaces come equipped with a natural quadratic form induced by the metric. Thus, one can define a Clifford bundle in analogy with the exterior bundle. This has a
number of important applications in Riemannian geometry. Perhaps more importantly is the link to a spin manifold, its associated spinor bundle and spin^c manifolds.
Clifford algebras have numerous important applications in physics. Physicists usually consider a Clifford algebra to be an algebra spanned by matrices γ[1],…,γ[n] called Dirac matrices which have the
property that
$\gamma_i\gamma_j + \gamma_j\gamma_i = 2\eta_{ij}\,$
where η is the matrix of a quadratic form of signature (p,q) — typically (1,3) when working in Minkowski space. These are exactly the defining relations for the Clifford algebra Cl[1,3](C) (up to an
unimportant factor of 2), which by the classification of Clifford algebras is isomorphic to the algebra of 4 by 4 complex matrices.
The Dirac matrices were first written down by Paul Dirac when he was trying to write a relativistic first-order wave equation for the electron, and give an explicit isomorphism from the Clifford
algebra to the algebra of complex matrices. The result was used to define the Dirac equation and introduce the Dirac operator. The entire Clifford algebra shows up in quantum field theory in the form
of Dirac field bilinears.
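The anticommutation relation is easy to verify numerically in the standard Dirac representation (a quick check, not from the article; metric signature (+,−,−,−)):

import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)

gamma0 = np.block([[I2, Z2], [Z2, -I2]])
gammas = [gamma0] + [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]
eta = np.diag([1.0, -1.0, -1.0, -1.0])

for i in range(4):
    for j in range(4):
        anti = gammas[i] @ gammas[j] + gammas[j] @ gammas[i]
        assert np.allclose(anti, 2 * eta[i, j] * np.eye(4))
print("gamma_i gamma_j + gamma_j gamma_i = 2 eta_ij I verified")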
Computer Vision
Recently, Clifford algebras have been applied in the problem of action recognition and classification in computer vision. Rodriguez et al. ^[5] propose a Clifford embedding to generalize traditional
MACH filters to video (3D spatiotemporal volume), and vector-valued data such as optical flow. Vector-valued data is analyzed using the Clifford Fourier transform. Based on these vectors action
filters are synthesized in the Clifford Fourier domain and recognition of actions is performed using Clifford Correlation. The authors demonstrate the effectiveness of the Clifford embedding by
recognizing actions typically performed in classic feature films and sports broadcast television.
1. ^ Mathematicians who work with real Clifford algebras and prefer positive definite quadratic forms (especially those working in index theory) sometimes use a different choice of sign in the
fundamental Clifford identity. That is, they take v^2 = −Q(v). One must replace Q with −Q in going from one convention to the other.
2. ^ Thus the group algebra K[Z/2] is semisimple and the Clifford algebra splits into eigenspaces of the main involution.
3. ^ We are still assuming that the characteristic is not 2.
4. ^ The opposite is true when using the alternate (−) sign convention for Clifford algebras: it is the conjugate which is more important. In general, the meanings of conjugation and transpose are
interchanged when passing from one sign convention to the other. For example, in the convention used here the inverse of a vector is given by $v^{-1} = v^t/Q(v)$ while in the (−) convention it is
given by $v^{-1} = \bar{v}/Q(v)$.
5. ^ Rodriguez, Mikel; Shah, M (2008). "Action MACH: A Spatio-Temporal Maximum Average Correlation Height Filter for Action Classification". Computer Vision and Pattern Recognition (CVPR).
• Bourbaki, Nicolas (1988), Algebra, Berlin, New York: Springer-Verlag, ISBN 978-3-540-19373-9 , section XI.9.
• Carnahan, S. Borcherds Seminar Notes, Uncut. Week 5, "Spinors and Clifford Algebras".
• Lawson, H. Blaine; Michelsohn, Marie-Louise (1989), Spin Geometry, Princeton, NJ: Princeton University Press, ISBN 978-0-691-08542-5 . An advanced textbook on Clifford algebras and their
applications to differential geometry.
• Lounesto, Pertti (2001), Clifford algebras and spinors, Cambridge: Cambridge University Press, ISBN 978-0-521-00551-7
• Porteous, Ian R. (1995), Clifford algebras and the classical groups, Cambridge: Cambridge University Press, ISBN 978-0-521-55177-9
Summary: Cubic Time Recognition of Cocircuit Graphs of
Uniform Oriented Matroids
Stefan Felsner, Ricardo G´omez, Kolja Knauer,
Juan Jos´e Montellano-Ballesteros, Ricardo Strausz
July 20, 2010
We present an algorithm which takes a graph as input and decides
in cubic time if the graph is the cocircuit graph of a uniform oriented
matroid. In the affirmative case the algorithm returns the set of signed
cocircuits of the oriented matroid. This improves an algorithm proposed
by Babson, Finschi and Fukuda.
Moreover we strengthen a result of Montellano-Ballesteros and Strausz
characterizing cocircuit graphs of uniform oriented matroids in terms of
crabbed connectivity.
1 Introduction
The cocircuit graph is a natural combinatorial object associated with oriented
matroids. In the case of spherical pseudoline-arrangements, i.e., rank 3 oriented
matroids, its vertices are the intersection points of the lines and two points
share an edge if they are adjacent on a line. More generally, the Topological
Representation Theorem of Folkman and Lawrence [5] says that every oriented
the encyclopedic entry of Low-pass_filter
A low-pass filter is a filter that passes low-frequency signals but attenuates (reduces the amplitude of) signals with frequencies higher than the cutoff frequency. The actual amount of attenuation for each frequency varies from filter to filter. It is sometimes called a high-cut filter, or treble cut filter when used in audio applications.
The concept of a low-pass filter exists in many different forms, including electronic circuits (like a hiss filter used in audio), digital algorithms for smoothing sets of data, acoustic barriers,
blurring of images, and so on. Low-pass filters play the same role in signal processing that moving averages do in some other fields, such as finance; both tools provide a smoother form of a signal
which removes the short-term oscillations, leaving only the long-term trend.
Examples of low pass filters
Figure 1 shows a low pass RC filter for voltage signals, discussed in more detail below. Signal V[out] contains frequencies from the input signal, with high frequencies attenuated, but with little
attentuation below the corner frequency of the filter determined by its RC time constant. For current signals, a similar circuit using a resistor and capacitor in parallel works the same way. See
current divider.
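A minimal discrete-time sketch of the RC behavior just described — the standard one-pole update y[i] = y[i−1] + α·(x[i] − y[i−1]) with α = dt/(RC + dt); the component values here are illustrative:

import math

def rc_lowpass(x, dt, rc):
    alpha = dt / (rc + dt)
    y = [x[0]]
    for sample in x[1:]:
        y.append(y[-1] + alpha * (sample - y[-1]))
    return y

# 10 Hz tone plus 1 kHz "hiss"; cutoff f_c = 1/(2*pi*RC) chosen near 50 Hz
fs = 10_000.0
rc = 1 / (2 * math.pi * 50)
t = [i / fs for i in range(2000)]
x = [math.sin(2 * math.pi * 10 * ti) + 0.3 * math.sin(2 * math.pi * 1000 * ti) for ti in t]
y = rc_lowpass(x, 1 / fs, rc)   # the 1 kHz component comes out strongly attenuated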
• A stiff physical barrier tends to reflect higher sound frequencies, and so acts as a low-pass filter for transmitting sound. When music is playing in another room, the low notes are easily heard, while the high notes are attenuated.
• Electronic low-pass filters are used to drive subwoofers and other types of loudspeakers, to block high pitches that they can't efficiently broadcast.
• Radio transmitters use low-pass filters to block harmonic emissions which might cause interference with other communications.
• An integrator is another example of a low-pass filter.
• DSL splitters use low-pass and high-pass filters to separate DSL and POTS signals sharing the same pair of wires.
• Low-pass filters also play a significant role in the sculpting of sound for electronic music as created by analogue synthesisers. See subtractive synthesis.
Ideal and real filters
An ideal low-pass filter completely eliminates all frequencies above the cutoff frequency while passing those below unchanged. The transition region present in practical filters does not exist in an ideal filter. An ideal low-pass filter can be realized mathematically (theoretically) by multiplying a signal by the rectangular function in the frequency domain or, equivalently, by convolution with a sinc function in the time domain.
However, the ideal filter is impossible to realize without also having signals of infinite extent, and so generally needs to be approximated for real ongoing signals, because the sinc function's
support region extends to all past and future times. The filter would therefore need to have infinite delay, or knowledge of the infinite future and past, in order to perform the convolution. It is
effectively realizable for pre-recorded digital signals by assuming extensions of zero into the past and future, but even that is not typically practical.
Real filters for real-time applications approximate the ideal filter by truncating and windowing the infinite impulse response to make a finite impulse response; applying that filter requires
delaying the signal for a moderate period of time, allowing the computation to "see" a little bit into the future. This delay is manifested as phase shift. Greater accuracy in approximation requires
a longer delay.
The Whittaker–Shannon interpolation formula describes how to use a perfect low-pass filter to reconstruct a continuous signal from a sampled digital signal. Real digital-to-analog converters use real
filter approximations.
Electronic low-pass filters
There are a great many different types of filter circuits, with different responses to changing frequency. The frequency response of a filter is generally represented using a Bode plot.
• A first-order filter, for example, will reduce the signal amplitude by half (about –6 dB) every time the frequency doubles (goes up one octave); more precisely, the rolloff approaches 20 dB per
decade in the limit of high frequency. The magnitude Bode plot for a first-order filter looks like a horizontal line below the cutoff frequency, and a diagonal line above the cutoff frequency.
There is also a "knee curve" at the boundary between the two, which smoothly transitions between the two straight line regions. If the transfer function of a first-order lowpass filter has a zero
as well as a pole, the Bode plot will flatten out again, at some maximum attenuation of high frequencies; such an effect is caused for example by a little bit of the input leaking around the
one-pole filter; this one-pole–one-zero filter is still a first-order lowpass. See Pole–zero plot and RC circuit.
• A second-order filter attenuates higher frequencies more steeply. The Bode plot for this type of filter resembles that of a first-order filter, except that it falls off more quickly. For example,
a second-order Butterworth filter will reduce the signal amplitude to one fourth its original level every time the frequency doubles (–12 dB per octave, or –40 dB per decade). Other all-pole
second-order filters may roll off at different rates initially depending on their Q factor, but approach the same final rate of –12 dB per octave; as with the first-order filters, zeroes in the
transfer function can change the high-frequency asymptote. See RLC circuit.
• Third- and higher-order filters are defined similarly. In general, the final rate of rolloff for an order-n all-pole filter is 6n dB per octave; a quick numeric check of these decibel figures follows below.
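The decibel figures above follow directly from the definition dB = 20·log10(amplitude ratio); a two-line check in Python:

from math import log10

print(20 * log10(0.5))    # halving the amplitude: about -6.02 dB (first order, per octave)
print(20 * log10(0.25))   # one quarter the amplitude: about -12.04 dB (second order, per octave)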
On any Butterworth filter, if one extends the horizontal line to the right and the diagonal line to the upper-left (the asymptotes of the function), they will intersect at exactly the "cutoff
frequency". The frequency response at the cutoff frequency in a first-order filter is –3 dB below the horizontal line. The various types of filters — Butterworth filter, Chebyshev filter, Bessel
filter, etc. — all have different-looking "knee curves". Many second-order filters are designed to have "peaking" or resonance, causing their frequency response at the cutoff frequency to be above
the horizontal line. See electronic filter for other types.
The meanings of 'low' and 'high' — that is, the cutoff frequency — depend on the characteristics of the filter. The term "low-pass filter" merely refers to the shape of the filter's response; a
high-pass filter could be built that cuts off at a lower frequency than any low-pass filter – it is their responses that set them apart. Electronic circuits can be devised for any desired frequency
range, right up through microwave frequencies (above 1000 MHz) and higher.
Passive electronic realization
One simple electrical circuit that will serve as a low-pass filter consists of a resistor in series with a load, and a capacitor in parallel with the load. The capacitor exhibits reactance, and
blocks low-frequency signals, causing them to go through the load instead. At higher frequencies the reactance drops, and the capacitor effectively functions as a short circuit. The combination of
resistance and capacitance gives the time constant of the filter $\tau = RC$ (represented by the Greek letter tau). The break frequency, also called the turnover frequency or cutoff frequency (in hertz), is determined by the time constant:
$f_\mathrm{c} = \frac{1}{2\pi\tau} = \frac{1}{2\pi RC}$
or equivalently (in radians per second):
$\omega_\mathrm{c} = \frac{1}{\tau} = \frac{1}{RC}.$
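For a concrete feel for these formulas, here is a quick computation in Python using hypothetical component values (a 1 kOhm resistor and a 1 uF capacitor; any values would do):

from math import pi

R, C = 1_000.0, 1e-6        # assumed values: 1 kOhm, 1 uF
tau = R * C                 # time constant: 0.001 s
fc = 1 / (2 * pi * tau)     # cutoff frequency: about 159.2 Hz
wc = 1 / tau                # cutoff in radians per second: 1000.0
print(tau, fc, wc)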
One way to understand this circuit is to focus on the time the capacitor takes to charge. It takes time to charge or discharge the capacitor through that resistor:
• At low frequencies, there is plenty of time for the capacitor to charge up to practically the same voltage as the input voltage.
• At high frequencies, the capacitor only has time to charge up a small amount before the input switches direction. The output goes up and down only a small fraction of the amount the input goes up
and down. At double the frequency, there's only time for it to charge up half the amount.
Another way to understand this circuit is with the idea of reactance at a particular frequency:
• Since DC cannot flow through the capacitor, DC input must "flow out" the path marked $V_\mathrm{out}$ (analogous to removing the capacitor).
• Since AC flows very well through the capacitor (almost as well as it flows through solid wire), AC input "flows out" through the capacitor, effectively short circuiting to ground (analogous to replacing the capacitor with just a wire).
It should be noted that the capacitor is not an "on/off" object (like the block or pass fluidic explanation above). The capacitor will variably act between these two extremes. It is the Bode plot and
frequency response that show this variability.
Active electronic realization
Another type of electrical circuit is an active low-pass filter.
In the operational amplifier circuit shown in the figure, the cutoff frequency (in hertz) is defined as:
$f_\mathrm{c} = \frac{1}{2\pi R_2 C}$
or equivalently (in radians per second):
$\omega_\mathrm{c} = \frac{1}{R_2 C}$
The gain in the passband is $-\frac{R_2}{R_1}$, and the stopband drops off at −6 dB per octave, as it is a first-order filter.
Sometimes, a simple gain amplifier (as opposed to the very-high-gain operational amplifier) is turned into a low-pass filter by simply adding a feedback capacitor C. This feedback decreases the
frequency response at high frequencies via the Miller effect, and helps to avoid oscillation in the amplifier. For example, an audio amplifier can be made into a low-pass filter with cutoff frequency
100 kHz to reduce gain at frequencies which would otherwise oscillate. Since the audio band (what we can hear) only goes up to 20 kHz or so, the frequencies of interest fall entirely in the passband,
and the amplifier behaves the same way as far as audio is concerned.
Laplace notation
Continuous-time filters can also be described in terms of the Laplace transform of their impulse response, in a way that allows all of the characteristics of the filter to be easily analyzed by considering the pattern of poles and zeros of the Laplace transform in the complex plane (in discrete time, one can similarly consider the Z-transform of the impulse response).
A first-order low-pass filter can be described in Laplace notation as
$\frac{\mathrm{Output}}{\mathrm{Input}} = \frac{1}{1 + s\tau}$
where $s$ is the Laplace transform variable and $\tau$ is the filter time constant.
Digital simulation
The effect of a low-pass filter can be simulated on a computer by analyzing its behavior in the time domain, and then discretizing the model.
From the circuit diagram to the right, according to Kirchhoff's laws and the definition of capacitance:
$V_{\mathrm{in}}(t) - V_{\mathrm{out}}(t) = I(t)\,R$
$Q_c(t) = C\,V_{\mathrm{out}}(t)$
$I(t) = \frac{dQ_c}{dt}$
Taking the time derivative of the second equation gives $I(t) = C\,\frac{dV_{\mathrm{out}}}{dt}$. Combining this with the first equation:
$V_{\mathrm{in}}(t) - V_{\mathrm{out}}(t) = RC\,\frac{dV_{\mathrm{out}}}{dt}$
Now we may discretize the equation. Let us represent $V_{\mathrm{in}}$ by a series of samples $x_{1\dots n}$. We will likewise represent $V_{\mathrm{out}}$ by a series of samples $y_{1\dots n}$ at the same points in time. For simplicity we assume that the samples are taken at evenly spaced points in time separated by $\Delta t$. Making these substitutions:
$x_i - y_i = RC\,\frac{y_i - y_{i-1}}{\Delta t}$
And rearranging terms:
$y_i = x_i \left(\frac{\Delta t}{RC + \Delta t}\right) + y_{i-1} \left(\frac{RC}{RC + \Delta t}\right)$
or more succinctly,
$y_n = \alpha x_n + (1 - \alpha)\,y_{n-1},$
where $\alpha = \frac{\Delta t}{RC + \Delta t}$.
This gives us a way to determine the output samples in terms of the input samples and the preceding output. The following routine (written here in Python) will simulate the effect of a low-pass filter on a series of digital samples:

def lowpass(x, dt, RC):
    # Return RC low-pass filter output samples, given input samples x,
    # a fixed time interval dt between samples, and time constant RC.
    alpha = dt / (RC + dt)
    y = [x[0]]                      # seed the output with the first sample
    for x_i in x[1:]:
        y.append(alpha * x_i + (1 - alpha) * y[-1])
    return y
Equivalently, more efficiently, and somewhat more intuitively (the change in filter output is proportional to the difference between the last output and the current input, which is the essence of
exponential decay):
for i in range(1, len(x)):
    y[i] = y[i - 1] + alpha * (x[i] - y[i - 1])
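For instance (with hypothetical parameters), smoothing a noisy 5 Hz sine wave sampled at 1 kHz with a 50 Hz cutoff using the lowpass function above:

import math, random

dt = 0.001                       # 1 kHz sampling interval
RC = 1 / (2 * math.pi * 50)      # time constant corresponding to a 50 Hz cutoff
x = [math.sin(2 * math.pi * 5 * i * dt) + random.gauss(0, 0.2)
     for i in range(1000)]
y = lowpass(x, dt, RC)           # y is a noticeably smoother copy of x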
| {"url":"http://www.reference.com/browse/wiki/Low-pass_filter","timestamp":"2014-04-17T16:55:12Z","content_type":null,"content_length":"94923","record_id":"<urn:uuid:613f9982-58e2-4565-89d5-3093dd8a02d9>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00130-ip-10-147-4-33.ec2.internal.warc.gz"} |
NAG Library
NAG Library Routine Document
1 Purpose
D03EEF discretizes a second-order elliptic partial differential equation (PDE) on a rectangular region.
2 Specification
SUBROUTINE D03EEF ( XMIN, XMAX, YMIN, YMAX, PDEF, BNDY, NGX, NGY, LDA, A, RHS, SCHEME, IFAIL)
INTEGER NGX, NGY, LDA, IFAIL
REAL (KIND=nag_wp) XMIN, XMAX, YMIN, YMAX, A(LDA,7), RHS(LDA)
CHARACTER(1) SCHEME
EXTERNAL PDEF, BNDY
3 Description
D03EEF discretizes a second-order linear elliptic partial differential equation of the form
$\alpha(x,y)\frac{\partial^2 U}{\partial x^2} + \beta(x,y)\frac{\partial^2 U}{\partial x\,\partial y} + \gamma(x,y)\frac{\partial^2 U}{\partial y^2} + \delta(x,y)\frac{\partial U}{\partial x} + \epsilon(x,y)\frac{\partial U}{\partial y} + \phi(x,y)\,U = \psi(x,y)$ (1)
on a rectangular region $x_A \le x \le x_B$, $y_A \le y \le y_B$, subject to boundary conditions of the form
$a(x,y)\,U + b(x,y)\,\frac{\partial U}{\partial n} = c(x,y)$
where $\frac{\partial U}{\partial n}$ denotes the outward pointing normal derivative on the boundary. Equation (1) is said to be elliptic if $4\alpha\gamma > \beta^2$ for all points in the rectangular region. The linear equations produced are in a form suitable for passing directly to the multigrid routine D03EDF.
The equation is discretized on a rectangular grid, with $n_x$ grid points in the $x$-direction and $n_y$ grid points in the $y$-direction. The grid spacing used is therefore
$h_x = (x_B - x_A)/(n_x - 1), \qquad h_y = (y_B - y_A)/(n_y - 1)$
and the coordinates of the grid points are
$x_i = x_A + (i-1)h_x,\ i = 1,2,\dots,n_x, \qquad y_j = y_A + (j-1)h_y,\ j = 1,2,\dots,n_y.$
At each grid point $(x_i, y_j)$, six neighbouring grid points are used to approximate the partial differential equation, so that the equation is discretized on the seven-point stencil shown in Figure 1.
For convenience the approximation $u_{ij}$ to the exact solution $U(x_i, y_j)$ is denoted by $u_{\mathrm{O}}$, and the neighbouring approximations are labelled according to points of the compass as shown. Where numerical labels for the seven points are required, these are also shown.
The following approximations are used for the second derivatives:
$\frac{\partial^2 U}{\partial x^2} \simeq \frac{1}{h_x^2}\left(u_E - 2u_O + u_W\right), \qquad \frac{\partial^2 U}{\partial y^2} \simeq \frac{1}{h_y^2}\left(u_N - 2u_O + u_S\right),$
$\frac{\partial^2 U}{\partial x\,\partial y} \simeq \frac{1}{2h_xh_y}\left(u_N - u_{NW} + u_E - 2u_O + u_W - u_{SE} + u_S\right).$
Two possible schemes may be used to approximate the first derivatives:
Central Differences
$\frac{\partial U}{\partial x} \simeq \frac{1}{2h_x}\left(u_E - u_W\right), \qquad \frac{\partial U}{\partial y} \simeq \frac{1}{2h_y}\left(u_N - u_S\right)$
Upwind Differences
$\frac{\partial U}{\partial x} \simeq \frac{1}{h_x}\left(u_O - u_W\right) \text{ if } \delta(x,y) > 0, \qquad \frac{\partial U}{\partial x} \simeq \frac{1}{h_x}\left(u_E - u_O\right) \text{ if } \delta(x,y) < 0,$
$\frac{\partial U}{\partial y} \simeq \frac{1}{h_y}\left(u_N - u_O\right) \text{ if } \epsilon(x,y) > 0, \qquad \frac{\partial U}{\partial y} \simeq \frac{1}{h_y}\left(u_O - u_S\right) \text{ if } \epsilon(x,y) < 0.$
Central differences are more accurate than upwind differences, but upwind differences may lead to a more diagonally dominant matrix for those problems where the coefficients of the first derivatives are significantly larger than the coefficients of the second derivatives.
The approximations used for the first derivatives may be written in a more compact form as follows:
$\frac{\partial U}{\partial x} \simeq \frac{1}{2h_x}\left[(k_x - 1)u_W - 2k_x u_O + (k_x + 1)u_E\right], \qquad \frac{\partial U}{\partial y} \simeq \frac{1}{2h_y}\left[(k_y - 1)u_S - 2k_y u_O + (k_y + 1)u_N\right]$
where $k_x, k_y \in \{-1, +1\}$ are chosen according to the signs of $\delta(x,y)$ and $\epsilon(x,y)$ for upwind differences, and $k_x = k_y = 0$ for central differences.
At all points in the rectangular domain, including the boundary, the coefficients in the partial differential equation are evaluated by calling PDEF, and the approximations above are applied. This leads to a seven-diagonal system of linear equations of the form:
$A_{ij}^{(6)} u_{i-1,j+1} + A_{ij}^{(7)} u_{i,j+1} + A_{ij}^{(3)} u_{i-1,j} + A_{ij}^{(4)} u_{ij} + A_{ij}^{(5)} u_{i+1,j} + A_{ij}^{(1)} u_{i,j-1} + A_{ij}^{(2)} u_{i+1,j-1} = f_{ij}, \quad i = 1,2,\dots,n_x \text{ and } j = 1,2,\dots,n_y,$
where the coefficients are given by
$A_{ij}^{(1)} = \beta(x_i,y_j)\,\frac{1}{2h_xh_y} + \gamma(x_i,y_j)\,\frac{1}{h_y^2} + \epsilon(x_i,y_j)\,\frac{k_y - 1}{2h_y}$
$A_{ij}^{(2)} = -\beta(x_i,y_j)\,\frac{1}{2h_xh_y}$
$A_{ij}^{(3)} = \alpha(x_i,y_j)\,\frac{1}{h_x^2} + \beta(x_i,y_j)\,\frac{1}{2h_xh_y} + \delta(x_i,y_j)\,\frac{k_x - 1}{2h_x}$
$A_{ij}^{(4)} = -\alpha(x_i,y_j)\,\frac{2}{h_x^2} - \beta(x_i,y_j)\,\frac{1}{h_xh_y} - \gamma(x_i,y_j)\,\frac{2}{h_y^2} - \delta(x_i,y_j)\,\frac{k_x}{h_x} - \epsilon(x_i,y_j)\,\frac{k_y}{h_y} + \phi(x_i,y_j)$
$A_{ij}^{(5)} = \alpha(x_i,y_j)\,\frac{1}{h_x^2} + \beta(x_i,y_j)\,\frac{1}{2h_xh_y} + \delta(x_i,y_j)\,\frac{k_x + 1}{2h_x}$
$A_{ij}^{(6)} = -\beta(x_i,y_j)\,\frac{1}{2h_xh_y}$
$A_{ij}^{(7)} = \beta(x_i,y_j)\,\frac{1}{2h_xh_y} + \gamma(x_i,y_j)\,\frac{1}{h_y^2} + \epsilon(x_i,y_j)\,\frac{k_y + 1}{2h_y}$
$f_{ij} = \psi(x_i,y_j)$
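As a concrete illustration of how these coefficient formulas fit together, here is a minimal Python sketch (this is not the NAG routine, and it does not use the NAG storage layout) that evaluates the seven stencil weights at one point for central differences ($k_x = k_y = 0$), ignoring all boundary modifications:

import numpy as np

def stencil(alpha, beta, gamma, delta, eps, phi, hx, hy):
    # Seven-diagonal weights A(1)..A(7) from the formulas above, kx = ky = 0.
    a = np.empty(7)
    a[0] = beta/(2*hx*hy) + gamma/hy**2 - eps/(2*hy)            # A(1): u(i, j-1)
    a[1] = -beta/(2*hx*hy)                                      # A(2): u(i+1, j-1)
    a[2] = alpha/hx**2 + beta/(2*hx*hy) - delta/(2*hx)          # A(3): u(i-1, j)
    a[3] = -2*alpha/hx**2 - beta/(hx*hy) - 2*gamma/hy**2 + phi  # A(4): u(i, j)
    a[4] = alpha/hx**2 + beta/(2*hx*hy) + delta/(2*hx)          # A(5): u(i+1, j)
    a[5] = -beta/(2*hx*hy)                                      # A(6): u(i-1, j+1)
    a[6] = beta/(2*hx*hy) + gamma/hy**2 + eps/(2*hy)            # A(7): u(i, j+1)
    return a

# For the Laplacian (alpha = gamma = 1, everything else 0) on a unit grid this
# reduces to the familiar five-point stencil [1, 0, 1, -4, 1, 0, 1]:
print(stencil(1, 0, 1, 0, 0, 0, 1.0, 1.0))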
These equations then have to be modified to take account of the boundary conditions. These may be Dirichlet (where the solution is given), Neumann (where the derivative of the solution is given), or mixed (where a linear combination of solution and derivative is given).
If the boundary conditions are Dirichlet, there is an infinity of possible equations which may be applied:
$\mu u_{ij} = \mu f_{ij}, \quad \mu \ne 0.$ (2)
If D03EDF is used to solve the discretized equations, it turns out that the choice of $\mu$ can have a dramatic effect on the rate of convergence, and the obvious choice $\mu = 1$ is not the best. Some choices may even cause the multigrid method to fail altogether. In practice it has been found that a value of the same order as the other diagonal elements of the matrix is best, and the following value has been found to work well in practice:
$\mu = \min_{i,j}\left\{-\left(\frac{2}{h_x^2} + \frac{2}{h_y^2}\right),\; A_{ij}^{(4)}\right\}.$
If the boundary conditions are either mixed or Neumann (i.e., ${\mathbf{B}} \ne 0$ on return from BNDY), then one of the points in the seven-point stencil lies outside the domain. In this case the normal derivative in the boundary conditions is used to eliminate the ‘fictitious’ point, using
$\frac{\partial U}{\partial n} \simeq \frac{1}{2h}\left(u_{\mathrm{outside}} - u_{\mathrm{inside}}\right).$ (3)
It should be noted that if the boundary conditions are Neumann and $\varphi \left(x,y\right)\equiv 0$, then there is no unique solution. The routine returns with ${\mathbf{IFAIL}}={\mathbf{5}}$ in
this case, and the seven-diagonal matrix is singular.
The four corners are treated separately. BNDY is called twice, once along each of the edges meeting at the corner. If both boundary conditions at this point are Dirichlet and the prescribed solution values agree, then this value is used in an equation of the form (2). If the prescribed solution is discontinuous at the corner, then the average of the two values is used. If one boundary condition is Dirichlet and the other is mixed, then the value prescribed by the Dirichlet condition is used in an equation of the form given above. Finally, if both conditions are mixed or Neumann, then two ‘fictitious’ points are eliminated using two equations of the form (3).
It is possible that equations for which the solution is known at all points on the boundary have coefficients which are not defined on the boundary. Since this routine calls PDEF at all points in the domain, including boundary points, arithmetic errors may occur in PDEF which this routine cannot trap. If you have an equation with Dirichlet boundary conditions (i.e., $b = 0$ at all points on the boundary), but with PDE coefficients which are singular on the boundary, then D03EDF could be called directly, using only interior grid points in your discretization.
After the equations have been set up as described above, they are checked for diagonal dominance. That is to say,
$\left|A_{ij}^{(4)}\right| > \sum_{k \ne 4} \left|A_{ij}^{(k)}\right|, \quad i = 1,2,\dots,n_x \text{ and } j = 1,2,\dots,n_y.$
If this condition is not satisfied then the routine returns with a warning (see Section 6). The multigrid routine D03EDF may still converge in this case, but if the coefficients of the first derivatives in the partial differential equation are large compared with the coefficients of the second derivatives, you should consider using upwind differences (${\mathbf{SCHEME}} = \text{'U'}$).
Since this routine is designed primarily for use with D03EDF, this document should be read in conjunction with the document for that routine.
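The dominance test itself is mechanical; here is a small sketch (illustrative only, assuming the coefficients are held in an array of shape (npts, 7) with the diagonal $A^{(4)}$ in column 4 as above):

import numpy as np

def diagonally_dominant(a):
    # a[:, 3] holds A(4), the diagonal entry of each row of the system.
    diag = np.abs(a[:, 3])
    off_diag = np.abs(a).sum(axis=1) - diag
    return bool(np.all(diag > off_diag))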
4 References
Wesseling P (1982) MGD1 – a robust and efficient multigrid method Multigrid Methods. Lecture Notes in Mathematics 960 614–630 Springer–Verlag
5 Parameters
1: XMIN – REAL (KIND=nag_wp)Input
2: XMAX – REAL (KIND=nag_wp)Input
On entry: the lower and upper $x$ coordinates of the rectangular region respectively, ${x}_{A}$ and ${x}_{B}$.
Constraint: ${\mathbf{XMIN}}<{\mathbf{XMAX}}$.
3: YMIN – REAL (KIND=nag_wp)Input
4: YMAX – REAL (KIND=nag_wp)Input
On entry: the lower and upper $y$ coordinates of the rectangular region respectively, ${y}_{A}$ and ${y}_{B}$.
Constraint: ${\mathbf{YMIN}}<{\mathbf{YMAX}}$.
5: PDEF – SUBROUTINE, supplied by the user. External Procedure
PDEF must evaluate the functions $\alpha(x,y)$, $\beta(x,y)$, $\gamma(x,y)$, $\delta(x,y)$, $\epsilon(x,y)$, $\varphi(x,y)$ and $\psi(x,y)$ which define the equation at a general point $(x,y)$.
The specification of PDEF is:
SUBROUTINE PDEF ( X, Y, ALPHA, BETA, GAMMA, DELTA, EPSLON, PHI, PSI)
REAL (KIND=nag_wp) X, Y, ALPHA, BETA, GAMMA, DELTA, EPSLON, PHI, PSI
1: X – REAL (KIND=nag_wp)Input
2: Y – REAL (KIND=nag_wp)Input
On entry: the $x$ and $y$ coordinates of the point at which the coefficients of the partial differential equation are to be evaluated.
3: ALPHA – REAL (KIND=nag_wp)Output
4: BETA – REAL (KIND=nag_wp)Output
5: GAMMA – REAL (KIND=nag_wp)Output
6: DELTA – REAL (KIND=nag_wp)Output
7: EPSLON – REAL (KIND=nag_wp)Output
8: PHI – REAL (KIND=nag_wp)Output
9: PSI – REAL (KIND=nag_wp)Output
On exit: ALPHA, BETA, GAMMA, DELTA, EPSLON, PHI and PSI must be set to the values of $\alpha(x,y)$, $\beta(x,y)$, $\gamma(x,y)$, $\delta(x,y)$, $\epsilon(x,y)$, $\varphi(x,y)$ and $\psi(x,y)$ respectively at the point specified by X and Y.
PDEF must either be a module subprogram USEd by, or declared as EXTERNAL in, the (sub)program from which D03EEF is called. Parameters denoted as Input must not be changed by this procedure.
6: BNDY – SUBROUTINE, supplied by the user. External Procedure
BNDY must evaluate the functions $a(x,y)$, $b(x,y)$ and $c(x,y)$ involved in the boundary conditions.
The specification of BNDY is:
SUBROUTINE BNDY ( X, Y, A, B, C, IBND)
INTEGER IBND
REAL (KIND=nag_wp) X, Y, A, B, C
1: X – REAL (KIND=nag_wp)Input
2: Y – REAL (KIND=nag_wp)Input
On entry: the $x$ and $y$ coordinates of the point at which the boundary conditions are to be evaluated.
3: A – REAL (KIND=nag_wp)Output
4: B – REAL (KIND=nag_wp)Output
5: C – REAL (KIND=nag_wp)Output
On exit: A, B and C must be set to the values of the functions $a(x,y)$, $b(x,y)$ and $c(x,y)$ appearing in the boundary conditions.
6: IBND – INTEGER Input
On entry: IBND specifies on which boundary the point (X, Y) lies, i.e., whether it lies on the bottom, right, top or left boundary of the rectangle.
BNDY must either be a module subprogram USEd by, or declared as EXTERNAL in, the (sub)program from which D03EEF is called. Parameters denoted as Input must not be changed by this procedure.
7: NGX – INTEGERInput
8: NGY – INTEGERInput
On entry: the number of interior grid points in the $x$- and $y$-directions respectively, $n_x$ and $n_y$. If the seven-diagonal equations are to be solved by D03EDF, then ${\mathbf{NGX}}+1$ and ${\mathbf{NGY}}+1$ should preferably be divisible by as high a power of 2 as possible.
Constraints:
□ ${\mathbf{NGX}}\ge 3$;
□ ${\mathbf{NGY}}\ge 3$.
9: LDA – INTEGERInput
On entry: the first dimension of the array A and the dimension of the array RHS as declared in the (sub)program from which D03EEF is called.
Constraints: if only the seven-diagonal equations are required, then ${\mathbf{LDA}} \ge {\mathbf{NGX}} \times {\mathbf{NGY}}$. If a call to this routine is to be followed by a call to D03EDF to solve the seven-diagonal linear equations, ${\mathbf{LDA}} \ge \left(4 \times ({\mathbf{NGX}}+1) \times ({\mathbf{NGY}}+1)\right)/3$.
Note: this routine only checks the former condition. D03EDF, if called, will check the latter condition.
10: A(LDA,$7$) – REAL (KIND=nag_wp) arrayOutput
On exit: ${\mathbf{A}}(i,j)$, for $i = 1,2,\dots,{\mathbf{NGX}} \times {\mathbf{NGY}}$ and $j = 1,2,\dots,7$, contains the seven-diagonal linear equations produced by the discretization described above. If ${\mathbf{LDA}} > {\mathbf{NGX}} \times {\mathbf{NGY}}$, the remaining elements are not referenced by the routine, but if ${\mathbf{LDA}} \ge \left(4 \times ({\mathbf{NGX}}+1) \times ({\mathbf{NGY}}+1)\right)/3$ then the array A can be passed directly to D03EDF, where these elements are used as workspace.
11: RHS(LDA) – REAL (KIND=nag_wp) arrayOutput
On exit: the first ${\mathbf{NGX}} \times {\mathbf{NGY}}$ elements contain the right-hand sides of the seven-diagonal linear equations produced by the discretization described above. If ${\mathbf{LDA}} > {\mathbf{NGX}} \times {\mathbf{NGY}}$, the remaining elements are not referenced by the routine, but if ${\mathbf{LDA}} \ge \left(4 \times ({\mathbf{NGX}}+1) \times ({\mathbf{NGY}}+1)\right)/3$ then the array RHS can be passed directly to D03EDF, where these elements are used as workspace.
12: SCHEME – CHARACTER(1)Input
On entry: the type of approximation to be used for the first derivatives which occur in the partial differential equation.
${\mathbf{SCHEME}} = \text{'C'}$: central differences are used.
${\mathbf{SCHEME}} = \text{'U'}$: upwind differences are used.
Suggestion: generally speaking, if at least one of the coefficients multiplying the first derivatives (DELTA or EPSLON as returned by PDEF) is large compared with the coefficients multiplying the second derivatives, then upwind differences may be more appropriate. Upwind differences are less accurate than central differences, but may result in more rapid convergence for strongly convective equations. The easiest test is to try both schemes.
13: IFAIL – INTEGERInput/Output
On entry: IFAIL must be set to $0$, $-1$ or $1$. If you are unfamiliar with this parameter you should refer to Section 3.3 in the Essential Introduction for details.
For environments where it might be inappropriate to halt program execution when an error is detected, the value $-1$ or $1$ is recommended. If the output of error messages is undesirable, then the value $1$ is recommended. Otherwise, because for this routine the values of the output parameters may be useful even if ${\mathbf{IFAIL}} \ne {\mathbf{0}}$ on exit, the recommended value is $-1$.
When the value $-1$ or $1$ is used it is essential to test the value of IFAIL on exit.
On exit: ${\mathbf{IFAIL}} = {\mathbf{0}}$ unless the routine detects an error or a warning has been flagged (see Section 6).
6 Error Indicators and Warnings
If on entry ${\mathbf{IFAIL}} = 0$ or $-1$, explanatory error messages are output on the current error message unit (as defined by X04AAF).
Note: D03EEF may return useful information for one or more of the following detected errors or warnings.
Errors or warnings detected by the routine:
${\mathbf{IFAIL}} = 1$
On entry, ${\mathbf{XMIN}} \ge {\mathbf{XMAX}}$,
or ${\mathbf{YMIN}} \ge {\mathbf{YMAX}}$,
or ${\mathbf{NGX}} < 3$,
or ${\mathbf{NGY}} < 3$,
or ${\mathbf{LDA}} < {\mathbf{NGX}} \times {\mathbf{NGY}}$,
or SCHEME is not one of 'C' or 'U'.
${\mathbf{IFAIL}} = 2$
At some point on the boundary there is a derivative in the boundary conditions (${\mathbf{B}} \ne 0$ on return from BNDY) and there is a nonzero coefficient of the mixed derivative $\frac{\partial^2 U}{\partial x\,\partial y}$ (${\mathbf{BETA}} \ne 0$ on return from PDEF).
${\mathbf{IFAIL}} = 3$
A null boundary has been specified, i.e., at some point both A and B are zero on return from a call to BNDY.
${\mathbf{IFAIL}} = 4$
The equation is not elliptic, i.e., $4\alpha\gamma \le \beta^2$ at some point, after a call to PDEF. The discretization has been completed, but the convergence of D03EDF cannot be guaranteed.
${\mathbf{IFAIL}} = 5$
The boundary conditions are purely Neumann (only the derivative is specified) and there is, in general, no unique solution.
${\mathbf{IFAIL}} = 6$
The equations were not diagonally dominant. (See Section 3.)
7 Accuracy
Not applicable.
8 Further Comments
If this routine is used as a preprocessor to the multigrid routine D03EDF, it should be noted that the rate of convergence of that routine is strongly dependent upon the number of levels in the multigrid scheme, and thus the choice of NGX and NGY is very important.
9 Example
The program solves the elliptic partial differential equation
$\frac{\partial^2 U}{\partial x^2} + \frac{\partial^2 U}{\partial y^2} + 50\left(\frac{\partial U}{\partial x} + \frac{\partial U}{\partial y}\right) = f(x,y)$
on the unit square $0 \le x, y \le 1$, with boundary conditions
• $\frac{\partial U}{\partial n}$ given on $x = 0$ and $y = 0$,
• $U$ given on $x = 1$ and $y = 1$.
The function $f(x,y)$ and the exact form of the boundary conditions are derived from the exact solution $U(x,y) = \sin x \sin y$.
The equation is first solved using central differences. Since the coefficients of the first derivatives are large, the linear equations are not diagonally dominant, and convergence is slow. The equation is solved a second time with upwind differences, showing that convergence is more rapid, but the solution is less accurate.
9.1 Program Text
9.2 Program Data
9.3 Program Results | {"url":"http://www.nag.com/numeric/FL/nagdoc_fl24/html/D03/d03eef.html","timestamp":"2014-04-19T02:48:01Z","content_type":null,"content_length":"86515","record_id":"<urn:uuid:5598ea6a-8eb0-4d31-9f6a-1cf50d24f98a>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00529-ip-10-147-4-33.ec2.internal.warc.gz"} |
RootsWeb: GENEALOGY-DNA-L Re: [DNA] P value (was chances are, it's wrong)
From: James Heald <>
Subject: Re: [DNA] P value (was chances are, it's wrong)
Date: Sat, 20 Nov 2010 22:54:39 +0000
References: <F9C440A2-FC59-4A9E-AAAC-85DEE9D2FAB0@GMAIL.COM>, ,<COL115-W50D879F102DC3996D9D454A03A0@phx.gbl>, ,<4CE7A3C0.7050702@ucl.ac.uk>, ,<COL115-W1464B78AF0292D6AEFA183A03B0@phx.gbl>,
In-Reply-To: <COL115-W424C7732D1583F8960685CA03B0@phx.gbl>
On 20/11/2010 18:37, Steven Bird wrote:
> James wrote:
>> P-value has a very precise meaning in Frequentist statistics.
>> It is "the probability of obtaining a test statistic at least as extreme
>> as the one that was actually observed, assuming that the null hypothesis
>> is true".
>> http://en.wikipedia.org/wiki/P-value
> I reply:
> It is also defined as the probability of committing a Type I error (rejecting the null when it is in fact true, or a false positive) when using a statistic such as Student's t-test. When p=0.05, it means that the statistician has a 1 in 20 chance of being wrong (falsely rejecting the null) when the null is in fact true. To me, that is identical in meaning with the statement that he or
she also has a 19 out of 20 chance of being right. How is it different?
One of the important things when dealing with probabilities is always to
be aware what the probabilities in question are conditioned on.
The P-value gives the probability, *given* that the null hypothesis is
true, and without taking into account the specific data that has come
in, that the null hypothesis will be falsely rejected.
So for instance in the dog barking example, *if* the dog is not hungry
*then* 95% of the time it will not bark, so 95% of the time we will not
conclude the dog is hungry if it isn't.
It is worth emphasising that this is all predicated on what we can say
*before* we know whether the dog has barked or not.
It does *not* give any guarantees as to what proportion of times we will
make a Type I error out of those cases where the dog has barked.
There is no reason, when we look at the proportion of Type I errors in
those particular cases, for it to be limited to 5%. In fact, in the
scenario I gave earlier, we can imagine getting 100% Type I errors,
whenever the dog barks.
This is the shortcoming of the P-value approach, that no attempt is
being made to try to calculate the probability of the dog actually being
hungry, given the data; so there is no reason to expect the test, in
cases of those particular circumstances, to be right 95% or any other
particular percentage of the time.
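To make the distinction concrete, here is a small Monte Carlo sketch in
Python (the numbers are illustrative assumptions only: a hungry dog always
barks, a non-hungry dog barks 5% of the time, and the dog is hungry in
0.1% of trials):

import random

def false_alarm_fraction(p_hungry=0.001, p_bark_not_hungry=0.05, trials=1_000_000):
    # Null hypothesis: "the dog is not hungry". We reject it whenever it barks.
    barks = wrong = 0
    for _ in range(trials):
        hungry = random.random() < p_hungry
        bark = hungry or random.random() < p_bark_not_hungry
        if bark:
            barks += 1
            wrong += not hungry        # Type I error among the rejections
    return wrong / barks

print(false_alarm_fraction())   # about 0.98, despite the 5% significance level

The unconditional Type I rate is still 5% of the not-hungry trials, but
among the trials where the dog actually barked, almost every rejection is
wrong.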
* * *
Turning to TMRCAs, the Bayesian distributions are typically very
long-tailed, for which the P-value/confidence approach tends to produce
values which under-report the full Bayesian range.
Suppose the upper confidence limit is 50 generations. That means that
if the TMRCA actually was 50 generations, it would produce n or fewer
mutations 5% of the time.
But it tells us nothing about how often if the TMRCA was actually 60
generations, or 70 generations, how often that would produce n or fewer
mutations (other than below 5% of the time) -- it tells us nothing about
how quickly this percentage falls off as the number of generations increases.
For a particularly large number of generations N, it might be quite rare that
we see only n mutations. But on the other hand, there are an awful lot
more numbers greater than 50 than there are less than 50. This tends to
mean that, when you calculate the weight of probability, using for
example Bruce Walsh's TMRCA calculator,
*given* that n mutations have been observed, rather more than 5% of the
probability weight will be located beyond 50 generations, even though it
is 50 generations that is the frequentist 0.95 confidence limit.
| {"url":"http://newsarch.rootsweb.com/th/read/GENEALOGY-DNA/2010-11/1290293679","timestamp":"2014-04-17T06:58:13Z","content_type":null,"content_length":"10852","record_id":"<urn:uuid:aa2b9438-f237-448a-b2e5-3553f2d3c6c4>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00490-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Topic: The handshake lemma question?
Replies: 3 Last Post: Feb 12, 2012 1:56 PM
joe The handshake lemma question?
Posted: Feb 12, 2012 12:15 PM
Posts: 3
Registered: 2/12/12
Hi everyone, I'm having a bit of trouble with this problem. I don't understand it at all. Any help will be appreciated.
Consider a planar graph with v vertices, e edges, and f faces.
A) The degree of a vertex is defined as the number of edges touching it. Let's define in an analogous way the degree of a face to be the number of edges encountered when we complete a walk around its boundary. What happens if we add up the degrees of all faces (note: this includes the outer face too)?
B) From part (a), conclude a relation between the sum of degrees of vertices and the sum of degrees of faces.
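A concrete case can help check whatever relation you conjecture; for example,
a quick sanity check in Python on a cube drawn in the plane (so one of its 6
faces plays the role of the outer face):

v, e, f = 8, 12, 6                 # cube: vertices, edges, faces (outer included)
vertex_degrees = [3] * v           # every cube vertex touches 3 edges
face_degrees = [4] * f             # every face boundary walk meets 4 edges
print(sum(vertex_degrees), 2 * e, sum(face_degrees))   # 24 24 24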
Date Subject Author
2/12/12 The handshake lemma question? joe
2/12/12 RE: The handshake lemma question? Ben Brink
2/12/12 Re: RE: The handshake lemma question? joe
2/12/12 Re: The handshake lemma question? Walter Wallis | {"url":"http://mathforum.org/kb/thread.jspa?threadID=2343466","timestamp":"2014-04-17T01:10:30Z","content_type":null,"content_length":"19845","record_id":"<urn:uuid:b5e442c3-c24b-4bba-a759-14dec07bcb70>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00612-ip-10-147-4-33.ec2.internal.warc.gz"} |
Notice of public hearing for contiguous annexation – Town of Wake Forest
The public will take notice that the Board of Commissioners of the Town of Wake Forest will hold a public hearing at 7:00 p.m. on the 17th day of December 2013 in the Board Room at Town Hall on the
question of a contiguous annexation request petitioned by Forestville Road Investments, LLC for the properties located in the 1200 block of Forestville Road (east side) comprising 23.04 acres being
Wake County Tax Pin No. 1749-66-3478, 1749-66-9299 & 1749-76-5204.
Land Descriptions:
Being a 23.04 acre tract of land located on the east side of Forestville Road in the Wake Forest Township of Wake County North Carolina, and more particularly described as follows:
COMMENCING at the NCGS Monument “CHAPPEL” having coordinates of N:799626.92, E:2136571.19; thence with a bearing of S 76°37’07” E a distance of 11066.66 feet to a EIP; thence with a bearing of S
18°49’34” E a distance of 276.62 feet to a Point; thence with a bearing of S 18°26’13” E a distance of 2.93 feet to a point; which is the point OF BEGINNING.
Beginning at a point having coordinates of N:796801.15, E:2147427.59; thence with a bearing of S 19°07’53” E a distance of 461.97 feet to a EIS; thence with a bearing of S 17°53’33” E a distance of
304.64 feet to a EIS; thence with a bearing of N 84°49’07” W a distance of 215.22 feet to a EIP; thence with a bearing of N 84°47’59” W a distance of 427.33 feet to a point; thence with a bearing of
N 84°48’25” W a distance of 243.44 feet to a map; thence with a bearing of S 03°03’39” E a distance of 15.09 feet to a EIP; thence with a bearing of N 82°33’26” W a distance of 104.75 feet to a EIP;
thence with a bearing of N 85°16’53” W a distance of 30.27 feet to a map; thence with a bearing of N 86°51’50” W a distance of 285.07 feet to a EIP; thence with a bearing of N 04°54’24” W a distance
of 40.22 feet to a MAPLE TREE; thence with a bearing of N 89°39’19” W a distance of 431.78 feet to a EIP; thence with a bearing of N 09°18’09” W a distance of 415.72 feet to a point; thence with a
bearing of N 13°59’48” E a distance of 8.43 feet to a point; thence with a bearing of N 45°05’18” E a distance of 5.42 feet to a point; thence with a bearing of N 78°48’34” E a distance of 13.20 feet
to a point; thence with a bearing of N 69°13’49” E a distance of 19.81 feet to a point; thence with a bearing of S 75°09’59” E a distance of 28.82 feet to a point; thence with a bearing of S
69°48’25” E a distance of 27.49 feet to a point; thence with a bearing of S 88°15’57” E a distance of 8.54 feet to a point; thence with a bearing of N 60°39’31” E a distance of 39.61 feet to a point;
thence with a bearing of N 65°07’54” E a distance of 26.09 feet to a point; thence with a bearing of N 35°50’46” E a distance of 29.73 feet to a point; thence with a bearing of N 69°48’09” E a
distance of 22.32 feet to a point; thence with a bearing of N 83°00’42” E a distance of 75.08 feet to a point; thence with a bearing of N 77°28’40” E a distance of 28.00 feet to a point; thence with
a bearing of N 88°03’41” E a distance of 21.43 feet to a point; thence with a bearing of N 77°55’59” E a distance of 19.71 feet to a point; thence with a bearing of N 56°53’59” E a distance of 39.21
feet to a point; thence with a bearing of N 71°36’27” E a distance of 65.48 feet to a point; thence with a bearing of N 64°59’56” E a distance of 24.20 feet to a point; thence with a bearing of N
64°54’16” E a distance of 62.96 feet to a point; thence with a bearing of N 57°50’50” E a distance of 19.12 feet to a point; thence with a bearing of N 69°26’59” E a distance of 22.03 feet to a
point; thence with a bearing of S 86°00’07” E a distance of 29.74 feet to a point; thence with a bearing of N 75°49’15” E a distance of 18.10 feet to a point; thence with a bearing of N 69°11’08” E a
distance of 19.08 feet to a point; thence with a bearing of N 76°47’03” E a distance of 16.44 feet to a point; thence with a bearing of N 32°15’55” E a distance of 16.38 feet to a point; thence with
a bearing of S 86°21’39” E a distance of 19.00 feet to a point; thence with a bearing of S 42°48’29” E a distance of 14.54 feet to a point; thence with a bearing of S 10°53’46” W a distance of 11.76
feet to a point; thence with a bearing of S 20°17’27” W a distance of 48.17 feet to a point; thence with a bearing of S 02°54’42” W a distance of 9.90 feet to a point; thence with a bearing of S
60°28’59” E a distance of 10.62 feet to a point; thence with a bearing of N 87°40’11” E a distance of 40.22 feet to a point; thence with a bearing of N 61°41’42” E a distance of 18.15 feet to a
point; thence with a bearing of N 26°57’28” E a distance of 22.33 feet to a point; thence with a bearing of N 07°04’32” E a distance of 14.55 feet to a point; thence with a bearing of N 04°36’26” E a
distance of 11.93 feet to a point; thence with a bearing of N 43°06’59” E a distance of 12.81 feet to a point; thence with a bearing of N 66°06’04” E a distance of 16.10 feet to a point; thence with
a bearing of S 74°09’39” E a distance of 13.51 feet to a point; thence with a bearing of S 66°37’26” E a distance of 21.24 feet to a point; thence with a bearing of S 55°03’09” E a distance of 18.85
feet to a point; thence with a bearing of S 80°54’53” E a distance of 20.99 feet to a point; thence with a bearing of N 78°19’49” E a distance of 22.49 feet to a point; thence with a bearing of N
74°45’41” E a distance of 37.13 feet to a point; thence with a bearing of S 79°52’12” E a distance of 28.23 feet to a point; thence with a bearing of S 31°11’04” E a distance of 5.29 feet to a point;
thence with a bearing of S 25°29’57” E a distance of 13.74 feet to a point; thence with a bearing of S 77°22’16” E a distance of 24.33 feet to a point; thence with a bearing of N 89°43’24” E a
distance of 14.19 feet to a point; thence with a bearing of N 31°37’17” E a distance of 6.23 feet to a point; thence with a bearing of N 03°05’33” W a distance of 20.74 feet to a point; thence with a
bearing of N 11°00’09” E a distance of 24.65 feet to a point; thence with a bearing of N 36°54’14” E a distance of 8.05 feet to a point; thence with a bearing of N 85°29’45” E a distance of 24.29
feet to a point; thence with a bearing of S 70°16’36” E a distance of 54.22 feet to a point; thence with a bearing of S 51°59’30” E a distance of 25.86 feet to a point; thence with a bearing of S
71°41’14” E a distance of 44.66 feet to a point; thence with a bearing of N 83°44’54” E a distance of 43.74 feet to a point; thence with a bearing of S 80°27’53” E a distance of 69.65 feet to a
point; thence with a bearing of S 86°54’21” E a distance of 20.64 feet to a point; thence with a bearing of S 56°01’03” E a distance of 45.71 feet to a point; thence with a bearing of S 78°59’30” E a
distance of 29.49 feet to a point; thence with a bearing of N 81°14’55” E a distance of 28.26 feet to a point; thence with a bearing of N 66°59’08” E a distance of 35.45 feet to a point; thence with
a bearing of N 82°01’34” E a distance of 19.75 feet to a point; thence with a bearing of S 55°17’10” E a distance of 10.17 feet to a point; thence with a bearing of S 20°59’43” E a distance of 19.23
feet to a point; thence with a bearing of S 36°30’59” E a distance of 26.19 feet to a point; thence with a bearing of S 33°54’30” E a distance of 13.44 feet to a point; thence with a bearing of S
80°07’02” E a distance of 9.28 feet to a point; thence with a bearing of N 72°29’04” E a distance of 19.71 feet to a point; thence with a bearing of N 50°20’41” E a distance of 22.42 feet to a point;
thence with a bearing of N 61°12’42” E a distance of 31.40 feet to a point; thence with a bearing of N 50°12’28” E a distance of 19.74 feet to a point; thence with a bearing of N 45°01’46” E a
distance of 37.10 feet to a point; thence with a bearing of N 72°03’26” E a distance of 25.66 feet to a point; containing 1003731.62 square feet or 23.043 acres, and as further shown on an exhibit
prepared by Priest, Craven & Associates entitled, “Contiguous Annexation Exhibit for the Property of Forestville Road Investments, LLC” dated October 2, 2013.
Deeda Harris
Town Clerk
The Wake Forest Weekly
Dec. 5, 12, 2013
| {"url":"http://www.wakeweekly.com/notice-of-public-hearing-for-contiguous-annexation-2/","timestamp":"2014-04-19T06:55:04Z","content_type":null,"content_length":"59020","record_id":"<urn:uuid:fc76eda4-0c5e-49f0-89a8-b1e4c00394cc>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00517-ip-10-147-4-33.ec2.internal.warc.gz"} |
Diminishing Returns - Math!
Introduction

Here we deal with mitigation due to armour, using some calculus to show its properties. We begin with the basic formula for physical damage mitigation from armour for opponents level 60 and higher:

Code:
                 Armour
M = ---------------------------------
    Armour + (467.5*level - 22167.5)

For the purposes here, (467.5*level - 22167.5) is a constant (C). The variable x will be used to represent armour (simply because I'm too lazy to re-do all the graphic calculus bits with an a). Using these definitions, the basic formula for mitigation simply looks like:

Code:
      x
M = -----
    x + C

It's important to realize that there are two significant effects of damage mitigation:
1) The effect on the rate of health loss.
2) The effect on the amount of time it takes to go from full health to dead.

Both are significant to the healer in different ways. The former affects how much health/sec must be restored to keep you alive, and the latter affects the amount of mana that can be regenerated between burst healing.

Effect of Armour on Rate of Health Loss

If h is the health at any instant in time, H is the maximum health, D is the pre-mitigation damage per second incoming, t is time, and P(t) represents healing as a function of time, then the formula for health at a given time looks like:

Code:
h(t) = H - D*(1 - x/(x + C))*t + P(t)

Then, this term represents post-mitigation damage per second:

Code:
D*(1 - x/(x + C)) = D*C/(x + C)

The rate of change of health would be the derivative of health with respect to t:

Code:
dh/dt = -D*C/(x + C) + dP/dt

This makes sense: dh/dt increases as x increases, and goes to (0 + dP/dt) as x goes to infinity.

Now we can examine the effect a change in armour has on the rate of health loss. The change in the rate of health loss with the change of armour (that is, x) can be found by taking the derivative of the above equation with respect to x. Note that as healing is not a function of armour, the healing term P(t) drops out as zero. (Remember back to the quotient rule if you don't see how to get here: (d/dx)(u/v) = [v(du/dx) - u(dv/dx)]/v^2.)

Code:
(d/dx)(dh/dt) = D*C/(x + C)^2

This equation represents the change in health/second of damage taken with change in armour. We see the slope of the armour v. mitigation curve is on the order of 1/(x+C)^2. The extent of diminishing returns (i.e. the change in the slope of the relationship between armour and damage mitigation) can be found by taking the second derivative with respect to x. (If you've forgotten, the rule to get here is (d/dx)(u^n) = n*u^(n-1)*(du/dx).)

Code:
(d^2/dx^2)(dh/dt) = -2*D*C/(x + C)^3

The negative second derivative signifies a flattening of the slope as armour increases, meaning a lessening of returns for each quantity of armour added. We see that there are diminishing returns on armour with respect to health/sec mitigated. It diminishes as -2DC/((x+C)^3). We can also take the second derivative with respect to armour of the mitigation function above and see that it is also negative, and thus there are diminishing returns on mitigation in the absence of time. As expected, mitigation alone diminishes as -2C/((x+C)^3) - the same as the health/sec mitigation, less the damage per second component.

Effect of Armour on Lifespan

It may seem contradictory to say the effect of armour on the time it takes to go from full health to dead is different than the effect on health/sec mitigated, but it is. For the purposes of this section, we refer to the amount of time it takes to go from full health to dead as "lifespan", which we'll call T going forward. The difference between the analysis of lifespan and the analysis of damage mitigated stems from the nature of mitigation. As mitigation approaches 100%, each 1% increase has a greater effect on lifespan than the last (e.g. going from 50% mitigation to 51% mitigation will extend your lifespan approximately 2%, whereas going from 98% mitigation to 99% mitigation will extend your lifespan by 100%).

Using the basic formula for health (h) from the previous section (with healing set aside), we see that t = T when h = 0. Therefore, solving for T with h = 0 yields:

Code:
T = H*(x + C)/(D*C)

Now with this equation, we can examine the effect of armour on lifespan by taking the derivative of T with respect to x:

Code:
dT/dx = H/(D*C)

It is interesting (and significant) that dT/dx is not a function of x. This means that the slope of armour vs. lifespan is a constant (i.e. d^2T/dx^2 = 0), so there are no diminishing returns on lifespan as armour increases. As the saying goes, mitigation is subject to diminishing returns, but armour is not. More properly, mitigation is subject to diminishing returns, but lifespan is not.
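To see both effects numerically, here is a quick Python check (the numbers are hypothetical: a level-63 attacker, 10,000 maximum health, 2,000 pre-mitigation DPS, and no healing):

Code:
C = 467.5 * 63 - 22167.5            # armour constant for a level-63 attacker
H, D = 10_000.0, 2_000.0            # assumed max health and pre-mitigation DPS

def mitigation(x):
    return x / (x + C)

def lifespan(x):
    # Time from full health to dead: H divided by post-mitigation DPS.
    return H / (D * (1 - mitigation(x)))

for x in (4000, 6000, 12000, 14000):
    print(x, round(mitigation(x), 3), round(lifespan(x), 2))

# Equal armour steps buy equal lifespan steps (no diminishing returns on T):
print(lifespan(6000) - lifespan(4000), lifespan(14000) - lifespan(12000))

The mitigation column flattens as armour grows, while the two printed lifespan differences come out identical.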
Summary

1) As armour increases, the damage mitigated per second is subject to diminishing returns. The effect diminishes as:

Code:
-2*D*C/(x + C)^3

where D is the unmitigated damage per second, x is armour, and C = 467.5*level - 22167.5.

2) As armour increases, the effect of increasing the amount of time required to go from full health to dead is not subject to diminishing returns. Each increase of some quantity of armour will increase a tank's lifespan equally (that is, if going from 4000 to 6000 armour increases your lifespan by 1 minute, then going from 12000 to 14000 armour will also increase your lifespan by 1 minute). | {"url":"http://www.tankspot.com/showthread.php?33097-Shield-Block-Crushing-Blows&goto=nextnewest","timestamp":"2014-04-19T05:18:26Z","content_type":null,"content_length":"63893","record_id":"<urn:uuid:569e80d5-47c6-4983-a4dc-8888ed8742c3>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00535-ip-10-147-4-33.ec2.internal.warc.gz"} |
8 Times Table Multiplication Song - Learn Times Table.
So I had to take a plane, which didn't seem fair,
But I knew that I would get there even faster by air,
So when I fastened my seatbelt and took to the sky
I thought I would give my times tables a try.
I started out quietly and then started rappin',
Looked down my row and saw their feet were tappin',
By the time we were flyin' at ten thousand feet,
They were dancin' in the aisles, they were dancin' on the seats.
I charmed 'em with my three's, I rocked 'em with my four's.
But when I got to eight, I heard a scream "no more!"
I saw it was the captain, so I tried to explain,
He just strapped me to a parachute, and threw me out of the plane.
8 times 5 is 40
8 times 6 is 48
8 times 7 is 56
8 times 8 is 64.
I knew right then I would have to take the bus,
'Cause on planes and trains they make such a fuss,
It's just Multiply Rap, the "Times" made to rhyme,
The way they reacted, you swear it's some kind of crime.
I told this to the people I was sitting next to,
They asked me if I wouldn't mind rappin' a few,
And before we even drove a mile down the street
They were dancin' in the aisles, they were dancin' on the seats.
I charmed 'em with my three's, I rocked 'em with my four's.
But when I got to eight, I heard a scream "no more!",
It was the bus driver, so I tried to explain,
He just threw me off the bus and left me standin' in the rain.
8 times 9 is 72
8 times 10 is 80
8 times 11 is 88
8 times 12 is 96
So there I was, in front of the committee,
I was tellin' 'em why I had come to the city,
I had walked forty miles, got thrown off a train,
A plane and a bus without time to explain
This is TIMES TABLES RAP and you just can't deny
How important it is to learn how to multiply,
I charmed 'em with my three's, I rocked 'em with my four's,
But when I got to eight I heard 'em scream "no more!"
They said "Son, you're the greatest, and that's a fact,
We're gonna send you home with a recording contract,
And we want you to know that we appreciate your pain,
So here's a billion dollars and a ticket for the train."
8 times 1 is 8
8 times 2 is 16,
8 times 3 is 24,
8 times 4 is 32,
8 times 5 is 40,
8 times 6 is 48,
8 times 7 is 56,
8 times 8 is 64,
8 times 9 is 72,
8 times 10 is 80,
8 times 11 is 88,
8 times 12 is 96.
8 times 12 is 96...
8 times 12 is 96...
The End
The 8 Times Table Multiplication Song
© 1991 YORK 10 Industries
© 2004-13 Abbey World Media, Inc. | {"url":"http://www.math-help-multiplication-tables.com/8-times-table-multiplication-song.html","timestamp":"2014-04-19T12:43:43Z","content_type":null,"content_length":"20365","record_id":"<urn:uuid:ddbfee34-1e24-4390-9acb-af65ebe16a22>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00407-ip-10-147-4-33.ec2.internal.warc.gz"} |
Computational and Mathematical Methods in Medicine
Volume 2013 (2013), Article ID 547954, 8 pages
Research Article
Analytical Solutions for the Mathematical Model Describing the Formation of Liver Zones via Adomian’s Method
Department of Mathematics, Faculty of Science, University of Tabuk, P.O. Box 741, Tabuk 71491, Saudi Arabia
Received 27 May 2013; Revised 30 June 2013; Accepted 15 July 2013
Academic Editor: Eddie Ng
Copyright © 2013 Abdelhalim Ebaid. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
The formation of liver zones is modeled by a system of two integropartial differential equations. In this research, we introduce the mathematical formulation of these integro-partial differential
equations obtained by Bass et al. in 1987. For better understanding of this mathematical formulation, we present a medical introduction for the liver in order to make the formulation as clear as
possible. In applied mathematics, the Adomian decomposition method is an effective procedure to obtain analytic and approximate solutions for different types of operator equations. This Adomian
decomposition method is used in this work to solve the proposed model analytically. The stationary solutions (as time tends to infinity) are also obtained through it, which are in full agreement with
those obtained by Bass et al. in 1987.
1. Introduction
The dark-red liver is the body’s largest single gland (1 to 1.5kg). It is a metabolic middleman because it takes up and secretes more than 500 different kinds of molecules. The liver stores and
releases glucose, keeping blood glucose levels relatively constant. The location of the liver reflects its middleman’s role (Figure 1). The gland lies between the diaphragm above and the stomach and
intestines below (Figure 2). Glucose and many other molecules enter the liver through the hepatic portal vein, and their products go directly through the inferior vena cava to the heart and lungs and
then into the systemic circulation. The liver takes its dome-like shape from the diaphragm, which covers its superior surface, called diaphragmatic surface. The sagittal fossa divides the liver into
two great lobes, right and left. The right lobe is larger and displays two smaller quadrate and caudate lobes on its visceral surface, defined by gallbladder and inferior vena cava, respectively [1].
The hepatic veins drain into the inferior vena cava arising from the posterior part of diaphragmatic surface of the liver. Visceral peritoneum binds the liver to the diaphragm and to the posterior
wall of the abdomen.
Although there is an extensive bare area on the diaphragmatic surface of the liver where the peritoneum does not reach, the connective tissue attaches this area directly to the diaphragm. Most of the
blood to the liver (70–80%) comes from the portal vein, and the smaller percentage is supplied by the hepatic artery (Figure 3). All the materials absorbed via the intestines reach the liver through
the portal vein, except the complex lipids which are transported mainly by lymph vessels. The position of the liver in the circulatory system is optimal for gathering, transforming, and accumulating
metabolites and for neutralizing and eliminating toxic substances. This elimination occurs in the form of bile, an exocrine secretion of the liver which is important in lipid digestion. The basic
structural component of the liver is the liver cell or hepatocyte. In light microscope, structural units called classic liver lobules can be seen. The liver lobule is formed of a polygonal mass of
tissue about 0.7 × 2 mm in size (Figure 4).
In certain animals (e.g., the pig), lobules are separated from each other by a layer of connective tissue. In humans, it is difficult to establish the exact limits between different lobules since
they are in close contact in most of their extent (Figure 5). In some regions, the lobules are demarcated by connective tissue containing bile ducts, lymphatic vessels, nerves, and blood vessels.
These regions, located at the corners of the lobules and occupied by portal triads, are called portal spaces. The human liver contains 3–6 portal triads per lobule, each with a venule (a branch of
the portal vein); an arteriole (a branch of the hepatic artery); a duct (part of the bile duct system); and lymphatic vessels. The venule is usually the largest of these structures, containing blood
from the superior and inferior mesenteric and splenic veins. An arteriole contains blood from the celiac trunk of the abdominal aorta.
The hepatocytes in the liver lobule are radially disposed and arranged like the bricks of a wall. These cellular plates are directed from the periphery of the lobule to its center and anastomose
freely, forming a labyrinthine and sponge-like structure. The space between these plates contains capillaries, the liver sinusoids [2]. Portal and arterial blood mixes in the sinusoids and flows past
hepatocytes, draining through a central vein from each lobule that leads ultimately to the hepatic veins. Bile from the lobules drains into the interlobular branches of the bile duct by way of bile
canaliculi. The hepatic lobules act as endocrine and exocrine glands. In endocrine secretion, hepatocytes take up and secrete molecules into the sinusoids [1].
The liver has an extraordinary capacity for regeneration; hence, hepatic tissue lost to surgical removal or to the action of toxic substances can be restored. The liver performs its metabolic functions with the aid of various enzymes fixed inside the liver cells. These cells line many capillaries (the hepatic sinusoids) through which the total hepatic blood flow is distributed, facilitating the exchange of substances between the blood and the cells. The interplay of the unidirectional blood flow with local metabolism generates concentration gradients of blood-borne substances (such as oxygen) between the inlet and the outlet of the liver. This unidirectionality, that is, the fact that blood flows from the portal triads to the central vein (Figure 5), has a major influence on the mathematical structure of the model, which turns out to be capable of describing the formation of zones with a jump discontinuity at a certain distance along a capillary [3].
Several metabolic functions of the liver have been found to be organized in spatial zones arranged in relation to the direction of hepatic blood flow, in such a way that some enzymes act almost wholly upstream of others [3]. Bass et al. [3] attributed such distributions of enzyme activity to distributions of cell types. For the simplest case of two enzymes, there are two corresponding cell types, each containing only one of the enzymes; separate metabolic zones occur when all cells of one type lie upstream of all cells of the other type. Furthermore, it was reported in [3] that each cell type reproduces itself by division. The mathematical model was derived in [3], but for convenience the main steps of the derivation are repeated in the next section. The model describing the formation of liver zones is a system of nonlinear integro-partial differential equations. The objective of this paper is to apply Adomian's decomposition method to this system in order to find its stationary solutions (as time tends to infinity) and its general solutions (at any position $x$ and any time $t$) for arbitrary initial conditions.
2. Mathematical Modelling and Solutions
2.1. Mathematical Formulation
About 1100 milliliters of blood flows from the portal vein into the liver sinusoids (Figure 6) each minute, and approximately an additional 350 milliliters flows into the sinusoids from the hepatic artery, for a total averaging about 1450 mL/min. This amounts to about 29% of a resting cardiac output of roughly 5 L/min. As the many capillaries comprising the liver are similar and act essentially in parallel, Bass et al. [3] modelled a single representative capillary lined with cells of two kinds, taking the $x$-axis along the blood flow, with the inlet at $x = 0$ and the outlet at $x = L$ [3]. The density of cells of the first kind is denoted by $n_1(x,t)$, a continuous representation of the number of cells of the first kind per unit length of capillary at time $t$ and position $x$; the density $n_2(x,t)$ of cells of the second kind is defined analogously. The total cell density $n_1 + n_2$ cannot exceed some fixed maximum density $n_0$ of cell sites, as cell division is limited by the familiar phenomenon of contact inhibition. The local rate of change of the density of cells of the first kind is assumed to consist of a growth rate term proportional to $n_1$ (self-generation) and to the density of available sites, $n_0 - n_1 - n_2$, and of a death rate term proportional to $n_1$, with a coefficient $d_1$ dependent on the local concentration $c$ of a controlling blood-borne substance. In what follows, for definiteness, oxygen is taken as that substance. Then

$$\frac{\partial n_1}{\partial t} = \alpha_1 n_1 (n_0 - n_1 - n_2) - d_1(c)\, n_1, \qquad (1)$$

with a constant coefficient $\alpha_1 > 0$. A similar equation for $n_2$ is obtained from (1) by interchanging the suffices 1, 2:

$$\frac{\partial n_2}{\partial t} = \alpha_2 n_2 (n_0 - n_1 - n_2) - d_2(c)\, n_2. \qquad (2)$$

Let $q$ be the steady rate of blood flow through the capillary. If oxygen is transported in the $x$-direction predominantly by convection with the blood and used up by the two cell types at the rates $\beta_1 n_1 c$ and $\beta_2 n_2 c$ (with positive constants $\beta_1$, $\beta_2$), then changes in $c$ caused by changes in $n_1$ and $n_2$ are quasi-steady. Therefore, $c$ satisfies

$$q\,\frac{\partial c}{\partial x} = -\big(\beta_1 n_1 + \beta_2 n_2\big)\, c. \qquad (3)$$

If (3) is integrated, then

$$c(x,t) = c_0 \exp\!\Big(-\frac{1}{q}\int_0^x \big(\beta_1 n_1 + \beta_2 n_2\big)\, dx'\Big), \qquad (4)$$

where $c_0$ is the steady oxygen concentration in the blood entering the liver. It is assumed that, as the oxygen concentration falls, the death rate of cells increases, though not necessarily equally for both cell types; accordingly, $d_1$ (and similarly $d_2$) is taken to be a prescribed decreasing function of $c$, with the explicit form given in (5). Introducing (4) and (5) into (1) and (2), we arrive at the pair of integro-partial differential equations (7). If $n_1$ vanishes initially, then it remains zero for all later times and all $x$, and similarly for $n_2$; therefore, it is assumed that $n_1(x,0) > 0$ and $n_2(x,0) > 0$. For similar reasons, it is assumed that at least one of $\beta_1$ and $\beta_2$ is positive (say $\beta_1 > 0$). It is noted at once that, unless the first cell type is inevitably to die out, its greatest possible specific growth rate $\alpha_1 n_0$ must exceed its least possible specific death rate $d_1(c_0)$. Similar remarks apply for the second cell type, and accordingly it is assumed in [3] that $\alpha_i n_0 > d_i(c_0)$ for $i = 1, 2$. To obtain some preliminary heuristic ideas about the formation of zones in their model, Bass et al. [3] supposed that (7) admits solutions which at all finite times are everywhere positive and satisfy $n_1 + n_2 < n_0$. For such solutions, (7) can be written in the form (9). Multiplying the first equation in (9) by $\alpha_2$ and the second by $\alpha_1$, we obtain (10).
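To make the structure of the reconstructed model concrete, the following minimal numerical sketch integrates (1)-(4) by explicit Euler stepping, recomputing the quasi-steady oxygen profile at each step. All parameter values, and the linear death-rate form d_i(c) = lam_i - mu_i * c used here, are illustrative assumptions for this sketch, not values taken from [3].

import numpy as np

# Sketch of the reconstructed two-cell-type model (assumed notation):
#   dn_i/dt = alpha_i * n_i * (n0 - n1 - n2) - d_i(c) * n_i,
#   c(x,t)  = c0 * exp(-(1/q) * integral_0^x (beta1*n1 + beta2*n2) dx').
L, N, dt, steps = 1.0, 200, 1e-3, 20000
x = np.linspace(0.0, L, N)
n0, c0, q = 1.0, 1.0, 1.0                 # illustrative values
alpha = np.array([1.5, 1.5])              # growth coefficients
beta  = np.array([1.0, 1.0])              # oxygen-uptake coefficients
lam   = np.array([1.2, 0.4])              # assumed death-rate intercepts
mu    = np.array([1.1, 0.2])              # type 1 benefits more from oxygen
n = np.full((2, N), 0.1)                  # equal positive initial densities

for _ in range(steps):
    uptake = beta[0] * n[0] + beta[1] * n[1]
    # trapezoidal cumulative integral of the uptake along the capillary
    integral = np.concatenate(([0.0], np.cumsum(0.5 * (uptake[1:] + uptake[:-1]) * np.diff(x))))
    c = c0 * np.exp(-integral / q)        # quasi-steady oxygen profile, cf. (4)
    sites = n0 - n[0] - n[1]              # available cell sites
    d = lam[:, None] - mu[:, None] * c    # death rates for both cell types
    n += dt * (alpha[:, None] * n * sites - d * n)

print(n[0][:3], n[0][-3:])   # type 1 tends to dominate upstream (high oxygen)
print(n[1][:3], n[1][-3:])   # type 2 tends to dominate downstream (low oxygen)

Run long enough, the two densities segregate into an upstream zone of the first cell type and a downstream zone of the second, the competitive-exclusion behaviour analysed next.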
2.2. The Stationary Solutions
For such solutions, we can combine (10) into the single equation (11), whose constant coefficients are built from the model parameters. Suppose that the parameters are such that both of these constants are positive. Since the integral appearing in (11) is bounded above, the right-hand side of (11) is positive at all times for all sufficiently small $x$, namely for $x$ below the threshold defined in (14). Volterra's argument [3] then applies: since $n_2$ is bounded above by $n_0$ while the combination in (11) keeps growing, $n_2$ must tend to zero as $t \to \infty$. It is then plausible that, for these values of $x$ in (14), $n_1$ will approach a stationary form determined from the first equation of (7) with $n_2 = 0$ and $\partial n_1/\partial t = 0$ [3, 4]. In order to solve this equation by Adomian's decomposition method [5–13], we put the equation in the operator form (16). According to Adomian's method, $n_1$ is assumed to be given by a decomposition series (18). Substituting (18) into (16), we obtain (19). With the zeroth component fixed by the source term, the remaining components can be elegantly computed by a recurrence relation, and summing the resulting series according to (18) yields $n_1$ in closed form. Setting $n_2$ equal to zero and $n_1$ equal to this stationary form, (11) becomes (24). We note that the right-hand side of (24) decreases with increasing $x$ and reaches zero at a value $x^*$ determined by (25), provided the parameters satisfy a corresponding inequality; the point determined by (25) lies in the interval of interest under a further condition on the parameters. Under these conditions, it is then reasonable to suppose that, for $x > x^*$, the right-hand side of (11) will in fact be negative for sufficiently large values of $t$ [3]. Volterra's argument then indicates that we can expect to find $n_1 \to 0$ as $t \to \infty$ for $x > x^*$. Furthermore, we may also expect that, for $x > x^*$, $n_2$ will approach a stationary form determined from the second equation of (7) with $n_1 = 0$. This equation can likewise be solved by Adomian's method: we rewrite it in operator form, assume a decomposition series for $n_2$, and compute the components from the corresponding recurrence relation, which again sums to a closed form. So, as $t \to \infty$, the formation of liver zones can be described as follows: the first cell type settles to its stationary density on the upstream interval $0 \le x < x^*$ and dies out beyond it, while the second cell type dies out upstream and settles to its stationary density on $x^* < x \le L$.
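As a concrete illustration of this kind of recursive scheme, here is a small sketch of Adomian's decomposition applied to a toy nonlinear Volterra equation, u(x) = 1 + integral_0^x u(t)^2 dt, whose exact solution is 1/(1 - x). It is a stand-in of the same type as the stationary equations above, not the paper's exact equation (16); all names are illustrative.

import sympy as sp

x, t = sp.symbols('x t')
N_terms = 5
u = [sp.Integer(1)]                           # u_0 = the forcing term

for k in range(N_terms - 1):
    # Adomian polynomial for the nonlinearity N(u) = u^2:
    # A_k = sum over i of u_i * u_{k-i}
    A_k = sum(u[i] * u[k - i] for i in range(k + 1))
    u.append(sp.integrate(A_k.subs(x, t), (t, 0, x)))

approx = sp.expand(sum(u))
print(approx)                                 # 1 + x + x**2 + x**3 + x**4
print(sp.series(1 / (1 - x), x, 0, N_terms))  # matches the exact expansion

Each component u_k comes from integrating the previous Adomian polynomial, which is the pattern of the recurrence used above; for the paper's equations, it is the closed-form sum of these components that yields the stationary densities.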
3. Analytical Solutions
In applied mathematics, Adomian's decomposition method is an effective procedure for obtaining analytic and approximate solutions of many types of equations. The method is used here to obtain a general solution of the system (7). Following Bass et al. [3], we define new dimensionless variables and new parameters; then (7) becomes, on dropping at once the primes from the new independent variables, the system (38) with constant parameters, posed on the rescaled spatial interval of interest, and the parameter conditions of Section 2.1 carry over [3]. The stationary solutions take the corresponding rescaled form. In searching for analytical solutions, we first rewrite the system to be solved as two separate integro-partial differential equations. According to the decomposition method, we assume decomposition series (45) for the two densities. Equation (43) can be put in operator form, and applying the inverse operator $L_t^{-1}$ (integration with respect to time from $0$ to $t$) to both sides yields (47). Substituting (45) into (47), the first solution can be evaluated through the recursive scheme (49); by a similar analysis, the second solution follows from the recursive scheme (50). For simplicity, we assume that the two types of liver cells have the same distribution along the hepatic capillary at $t = 0$. With this assumption, the first few terms of Adomian's series follow from the recurrence relations (49) and (50).
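To show what time recurrences of the type (49)-(50) produce in a case simple enough to check by hand, the sketch below applies the same scheme, u_{k+1} = L_t^{-1}[linear terms - Adomian polynomials], to the scalar logistic prototype u_t = u(1 - u) with uniform initial data. This is an assumed stand-in with the same nonlinearity class as (38), not the paper's exact system.

import sympy as sp

t, s = sp.symbols('t s')
N_terms = 4
u = [sp.Rational(1, 2)]                       # u(x,0) = 1/2, uniform in x

for k in range(N_terms - 1):
    A_k = sum(u[i] * u[k - i] for i in range(k + 1))    # polynomial for u^2
    integrand = (u[k] - A_k).subs(t, s)
    u.append(sp.integrate(integrand, (s, 0, t)))        # u_{k+1} = L_t^{-1}[u_k - A_k]

approx = sp.expand(sum(u))
exact = 1 / (1 + sp.exp(-t))
print(approx)                                 # 1/2 + t/4 - t**3/48
print(sp.series(exact, t, 0, N_terms))        # same expansion

The partial sums reproduce the Taylor expansion of the exact logistic solution, which is the sense in which the series solutions of this section represent the densities at any position and time.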
4. Remarks
Here, we indicate that at particular values of the parameters the solutions for the two cell densities coincide. To show this, it is convenient to put the two series solutions in a common form with explicitly displayed coefficients, as in (55). First, substituting this form into the original equations (38), we obtain a pair of relations that can be combined into a single equation; integrating both sides with respect to $x$ from $0$ to $x$ and using the integrated oxygen relation, we find that the corresponding coefficients coincide, so the two densities are equal. Alternatively, substituting the particular parameter values directly into (55), we can easily observe the same coincidence of coefficients, which leads to the same conclusion.
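The equivalence claimed in this remark can also be checked term by term in the decomposition series. The sketch below runs a two-component version of the recurrence with equal parameters and equal initial data (a toy system u_i,t = u_i (1 - u_1 - u_2), an assumed stand-in with the competitive structure of (38); all names illustrative) and confirms that every Adomian component of the two solutions coincides.

import sympy as sp

t, s = sp.symbols('t s')
f = sp.Rational(1, 3)                         # common initial value
u1, u2 = [f], [f]

for k in range(3):
    # Adomian polynomials for the nonlinearities u1*(u1+u2) and u2*(u1+u2)
    A1 = sum(u1[i] * (u1[k - i] + u2[k - i]) for i in range(k + 1))
    A2 = sum(u2[i] * (u1[k - i] + u2[k - i]) for i in range(k + 1))
    u1.append(sp.integrate((u1[k] - A1).subs(t, s), (s, 0, t)))
    u2.append(sp.integrate((u2[k] - A2).subs(t, s), (s, 0, t)))

print([sp.simplify(a - b) for a, b in zip(u1, u2)])   # all zeros: u1 = u2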
5. Conclusion
In this paper, the Adomian decomposition method has been applied successfully to a system of nonlinear integro-partial differential equations describing the formation of liver zones. As time tends to infinity, the stationary solutions are obtained in exact form by Adomian's method, in full agreement with those reported in the literature. In addition, at any time during the liver regeneration process, the analytical solutions are obtained explicitly in series form. Finally, the present solutions may shed some light on the mathematical aspects of the formation of liver zones and on the distribution of the two types of liver cells.
The author wishes to express his thanks to Professor Mostafa El-Shahed for his kind help and very valuable suggestions.
1. D. T. Lindsay, Functional Human Anatomy, Saunders Company, Philadelphia, Pa, USA, 1996.
2. L. C. Junqueira, J. Carneiro, and R. O. Kelley, Basic Histology, Prentice Hall, London, UK, 1992.
3. L. A. Bass, J. Bracken, K. Holmaker, and F. Jefferies, “Integro-differential equations for the self-organisation of liver zones by competitive exclusion of cell-types,” Journal of the Australian
Mathematical Society, vol. 29, pp. 156–194, 1987.
4. K. Holmaker, “Global asymptotic stability for a stationary solution of a system of integro-differential equations describing the formation of liver zones,” SIAM Journal on Mathematical Analysis,
vol. 24, pp. 116–128, 1993.
5. G. Adomian, Solving Frontier Problems of Physics: The Decomposition Method, Kluwer Academic Publishers, Boston, Mass, USA, 1994.
6. A.-M. Wazwaz, “Adomian decomposition method for a reliable treatment of the Emden-Fowler equation,” Applied Mathematics and Computation, vol. 161, no. 2, pp. 543–560, 2005.
7. A. M. Wazwaz, Partial Differential Equations and Solitary Waves Theory, Springer, New York, NY, USA, 2009.
8. M. Kumar and N. Singh, “Modified Adomian decomposition method and computer implementation for solving singular boundary value problems arising in various physical problems,” Computers and Chemical Engineering, vol. 34, no. 11, pp. 1750–1760, 2010.
9. A. Ebaid, “A new analytical and numerical treatment for singular two-point boundary value problems via the Adomian decomposition method,” Journal of Computational and Applied Mathematics, vol. 235, no. 8, pp. 1914–1924, 2011.
10. A.-M. Wazwaz, “A reliable study for extensions of the Bratu problem with boundary conditions,” Mathematical Methods in the Applied Sciences, vol. 35, no. 7, pp. 845–856, 2012.
11. F. A. Hendi and H. O. Bakodah, “Solution of nonlinear integro-differential equation using discrete Adomian decomposition method,” Far East Journal of Mathematical Sciences, vol. 66, no. 2, pp.
213–221, 2012.
12. J.-S. Duan, R. Rach, A.-M. Wazwaz, T. Chaolu, and Z. Wang, “A new modified Adomian decomposition method and its multistage form for solving nonlinear boundary value problems with Robin boundary conditions,” Applied Mathematical Modelling, 2013.
13. A.-M. Wazwaz, R. Rach, and J.-S. Duan, “A study on the systems of the Volterra integral forms of the Lane-Emden equations by the Adomian decomposition method,” Mathematical Methods in the Applied
Sciences, 2013. | {"url":"http://www.hindawi.com/journals/cmmm/2013/547954/","timestamp":"2014-04-18T22:45:29Z","content_type":null,"content_length":"472952","record_id":"<urn:uuid:b198d4f9-32ae-4633-acc2-1c6565be5dab>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00042-ip-10-147-4-33.ec2.internal.warc.gz"}
An Electronic Product Contains 61 Integrated Circuits. ... | Chegg.com
An electronic product contains 61 integrated circuits. The probability that any integrated circuit is defective is 0.01720, and the integrated circuits are independent. The product operates only if
there are no defective integrated circuits. What is the probability that the product operates? Round your answer to three decimal places (e.g. 98.765).
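One way to set this up: since the circuits are independent and the product operates only if every one of the 61 circuits is non-defective, the answer is (1 - 0.01720)^61. A quick check:

p_defect = 0.01720
n = 61
p_operates = (1 - p_defect) ** n     # all n circuits must be good
print(round(p_operates, 3))          # 0.347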
Statistics and Probability | {"url":"http://www.chegg.com/homework-help/questions-and-answers/electronic-product-contains-61-integrated-circuits-probability-integrated-circuit-defectiv-q939139","timestamp":"2014-04-16T19:53:23Z","content_type":null,"content_length":"18408","record_id":"<urn:uuid:22427547-b2ee-45d4-a9e6-929f3475413f>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00433-ip-10-147-4-33.ec2.internal.warc.gz"} |
Bayside, NY Algebra 2 Tutor
Find a Bayside, NY Algebra 2 Tutor
...I focus on teaching the core concepts tested, as well as proven test-taking strategies and time management skills that help students finish quickly and accurately. I have tutored discrete math
for over 4 years, from basic concepts introduced at the elementary school level, to undergraduate cours...
34 Subjects: including algebra 2, English, GRE, reading
...In preparation for any of the above math-based tests, I focus on imparting deep-level mathematical understanding, and that can only be taught with individual attention. One of my great
pleasures is helping students reach moments of epiphany when the light dawns and perplexity gives way to clarity. Throughout our sessions, I focus on making you think for yourself and develop
flexible mastery.
55 Subjects: including algebra 2, English, reading, writing
...I have worked with a few students mostly college level in this subject. SAT time can be the most stressful time for a child. Will I do well?
17 Subjects: including algebra 2, reading, chemistry, biology
...I can help improve your child’s reading and spelling. I draw from a variety of programs, but I particularly like the Wilson Reading Method. Their Fundations Program works well with young
children, and the Just Words Program is excellent for older children through adults.
39 Subjects: including algebra 2, reading, geometry, English
Please be advised that I received a bachelor's degree in Math from Guangdong College of Education and an associate degree in Accounting from Queensborough Community College. I was a math
teacher for 25 years in China. I taught Algebra, Geometry, Trigonometry and Calculus from 1st through 12th grade.
14 Subjects: including algebra 2, calculus, physics, geometry
Related Bayside, NY Tutors
Bayside, NY Accounting Tutors
Bayside, NY ACT Tutors
Bayside, NY Algebra Tutors
Bayside, NY Algebra 2 Tutors
Bayside, NY Calculus Tutors
Bayside, NY Geometry Tutors
Bayside, NY Math Tutors
Bayside, NY Prealgebra Tutors
Bayside, NY Precalculus Tutors
Bayside, NY SAT Tutors
Bayside, NY SAT Math Tutors
Bayside, NY Science Tutors
Bayside, NY Statistics Tutors
Bayside, NY Trigonometry Tutors | {"url":"http://www.purplemath.com/Bayside_NY_Algebra_2_tutors.php","timestamp":"2014-04-20T13:58:50Z","content_type":null,"content_length":"24092","record_id":"<urn:uuid:2cd46e58-9683-4f49-9684-02000174bcf4>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00241-ip-10-147-4-33.ec2.internal.warc.gz"} |