convert 225 pounds to kg
You asked:
convert 225 pounds to kg
the mass 102.05828325 kilograms
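As a quick check of the figure above (the constant below is the exact international definition of the avoirdupois pound; the script itself is ours, not Evi's):

```python
# Check of the conversion above, using the exact international
# definition 1 lb = 0.45359237 kg.
LB_TO_KG = 0.45359237
kg = 225 * LB_TO_KG
assert abs(kg - 102.05828325) < 1e-9
print(round(kg, 8))  # 102.05828325
```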
Say hello to Evi
Evi is our best-selling mobile app that can answer questions about local knowledge, weather, books, music, films, people and places, recipe ideas, shopping and much more. Over the next few months we will be adding all of Evi's power to this site.
Until then, to experience all of the power of Evi you can download Evi for free on iOS, Android and Kindle Fire. | {"url":"http://www.evi.com/q/convert_225_pounds_to_kg","timestamp":"2014-04-20T16:03:33Z","content_type":null,"content_length":"53386","record_id":"<urn:uuid:925143bc-b0fd-41ff-b72b-cf9f90f212c7>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00504-ip-10-147-4-33.ec2.internal.warc.gz"} |
Croatian Black Hole School
The Croatian Black Hole School is a school inspired by the traditional Modave school in Belgium; it took place on 21–25 June 2010 at Trpanj, on the peninsula Pelješac in southern Croatia. The lectures and housing for most people were organized at Pansion Loviška, whose personnel were very kind and helpful. The organizer was Jarah Evslin and the main topic was the physics of black holes, both theoretical and observational.
The official web page is http://antimodave.jimdo.com.
Lecture series:
• Chethan Krishnan (SISSA): Quantum field theory and black holes
• Gaston Giribet (Buenos Aires): Black hole physics and AdS3/CFT2 correspondence. (This includes Black hole physics in three-dimensional massive gravity, applications to four-dimensions (i.e. Kerr/
CFT correspondence) and other topics (e.g. Lifshitz black holes).)
• John Wang (Niagara Univ and Buffalo): Introduction to the physics of black holes
• Dieter Van den Bleeken (Rutgers): BPS black holes and the attractor mechanism
• Mario Pasquato (Pisa): Intermediate mass black holes - observational challenges
• Malcolm Fairbairn (King’s College): Cosmology, astrophysics and astrophysical black holes
• Holger Nielsen: discussion sessions – Black hole information paradox
Key words and related entries: AdS3/CFT2, Penrose diagram, Hawking radiation, asymptotic isometry, intermediate mass black hole, cosmology, string theory, horizon, information paradox, supersymmetry, supergravity, observing black holes, quantum gravity
Revised on September 7, 2010 17:32:18 by
Zoran Škoda | {"url":"http://ncatlab.org/nlab/show/Croatian+Black+Hole+School","timestamp":"2014-04-18T18:14:12Z","content_type":null,"content_length":"14127","record_id":"<urn:uuid:e788a050-3c77-46a0-b0c9-f81bfaad0575>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00138-ip-10-147-4-33.ec2.internal.warc.gz"} |
Rate Word Problems - Problem 3
This is a tricky problem because we have a rate that's going to change from one situation to the next. Let's check it out. To keep in shape, Alison walks a 12 mile course at a constant speed. If she doubles her usual speed, she can complete the course in one and a half hours less than at her usual speed. What is Alison's usual speed?
This problem is more difficult than the ones we've already seen because we have her original rate, and then her time is going to change by one and a half hours. Let's try to set up some equations.
Well we know usually distance is equal to rate times time, so Alison's distance usually is 12, her rate usually we are just going to call R and her time usually we'll call T. We don't know either of
those. That's what we know about what Alison usually does, but then it tells us if she doubles her usual speed, okay, doubles her speed. So I'm going to call that 2R now. If she doubles her usual speed she can complete the course in one and a half hours less than at her usual speed.
Okay, so her time, instead of being just T, is going to be T take away 1½, because now she finishes one and a half hours sooner. She's still going on a 12 mile course, so that's my system of equations that I can go ahead and solve.
I've got these two equations. This one represents her original situation: 12 miles, some rate, some time; we don't know what they are. And I'm going to be solving that together with the same distance but twice her usual speed, and her time take away an hour and a half.
So from here you guys have a choice: substitution, elimination, maybe matrices if you've learned about those, or you can even graph these nasty guys if you want to; it's up to you. I'm going to go ahead and solve this using substitution; that's my own personal favorite.
So if I was to, for example, solve this first equation for R (divide both sides by T), I would get R is equal to 12/T. Then I'm going to substitute that expression right here so I only have one equation with one variable. I'm going to be solving 12 equals 2 times 12/T times (T take away 1.5). Once I have this all set up, this is just your standard, straightforward solving and simplifying that I know you guys know how to do.
12 is equal to 24/T, when I distribute that 2 in there, times (T take away 1.5). Draw a little arrow, move over here. So if I go through and distribute this into both of these quantities, I have 12 is equal to 24/T times T/1; those Ts are going to be eliminated, so I'll just have 24 there, take away: 24 times 1.5 is 36, and I need to put that on top of T, so minus 36/T. What I'm going to do is subtract 24 from both sides: -12 equals -36/T. If you want to, you could multiply both sides by T in this case to get T out of the denominator, or you could turn this into a fraction and cross multiply; it's the same process written a little bit differently. I'm going to multiply both sides by T so that I have -12T equals -36, and when I divide both sides by -12, I'll get that T equals 3 hours.
Now let me just think about what that means before I decide that I'm done; I want to figure out whether I did this all correctly. I chose R to represent her original speed, and now I know that T is 3 hours; that's how long the course usually took her. But that doesn't tell me how fast she goes; we need to find her speed. It told me how long it took her, so in order to find her speed I need to go back to that original first equation. I know that 12 is equal to the rate, which I'm trying to find, times 3, because 3 is my T value, her original time, and that tells me she used to walk 4 miles per hour. That's what her rate used to be, R.
So this one was kind of tricky because we had some Ts in the denominator and some fractions to deal with, but as long as you guys can set up this first step, if you can get this system of equations ready to go, I'm confident you guys can do these problems.
These rate problems are tricky, and they are difficult, but you guys can do them if you just put in a little extra time and make sure you are focusing when you're doing these problems.
rate ratio d=rt word problem system of equations | {"url":"https://www.brightstorm.com/math/algebra/word-problems-using-systems-of-equations/rate-word-problems-problem-3/","timestamp":"2014-04-17T15:58:46Z","content_type":null,"content_length":"61949","record_id":"<urn:uuid:bca6ff1a-3e7d-46d6-9cb0-4c639182515e>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00089-ip-10-147-4-33.ec2.internal.warc.gz"} |
the encyclopedic entry of Multiplier
In economics, a numerical coefficient showing the effect of a change in one economic variable on another. One macroeconomic multiplier, the autonomous expenditures multiplier, relates the impact of a
change in total national investment on the nation's total income; it equals the ratio of the change in total income to the change in investment. If, for example, the total investment in an economy is
increased by $1 million, a chain reaction of increases in consumption is set off. Producers of raw materials used in the investment projects and workers employed in the projects gain $1 million in
income. If they spend on average three-fifths of that income, $600,000 will be added to the incomes of others. The makers of the goods they buy will in turn spend three-fifths of their new income on
consumption. The process continues such that the amount by which total income increases may be computed by an algebraic formula. In this case, the multiplier equals 1/(1 − 3/5), or 2.5. This means
that a $1 million increase in investment creates a $2.5 million increase in total income. Other multipliers include the money multiplier, which measures money creation resulting from a change in
monetary policy; the government spending multiplier, which measures the change in national income resulting from changes in fiscal policy; and the tax multiplier, which measures the changes in
national income resulting from a change in taxes. The concept of the multiplier process was popularized in the 1930s by John Maynard Keynes as a means of measuring the effect of government spending.
Encyclopedia Britannica, 2008. Encyclopedia Britannica Online. | {"url":"http://www.reference.com/browse/Multiplier","timestamp":"2014-04-20T21:17:59Z","content_type":null,"content_length":"82772","record_id":"<urn:uuid:5495dd5c-2be0-4977-a93a-838dbe06bb02>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00477-ip-10-147-4-33.ec2.internal.warc.gz"} |
History of Logic Development
Where can I find a book which explains the development of modern logic, e.g. Tarski, Frege, Peano, up until Wittgenstein, Russell?
Tags: lo.logic, math-philosophy
The scope of the figures you mention (Tarski, Frege, Peano, Wittgenstein, Russell) makes it a little unclear exactly what you're after. For instance, From Frege to Goedel (as mentioned by Mahmud) is an excellent compilation of early texts in mathematical logic -- you get e.g. Frege, Peano, Hilbert, Zermelo, Skolem, Herbrand, Goedel -- with helpful introductions included, but the focus is on the primary texts, rather than giving a single, unified account of the development of logic. And its relative lack of a philosophical focus means there's nothing like Russell or Wittgenstein to be found. [N.B. Along similar lines to this work, the two volumes of From Kant to Hilbert offer a more wide-ranging (in terms of subject and chronology) cross-section of works in the foundations of mathematics; note, though, that mathematical logic per se is not the focus there.]

Not knowing your background, or your exact goal, I would tentatively recommend Benacerraf and Putnam's Philosophy of Mathematics: Selected Readings. It has a great selection of works by the likes of Frege, Russell, Hilbert, Brouwer, Goedel, Von Neumann, Quine, and so on (and Wittgenstein is mentioned aplenty). In total, you get a lot about the interplay between technical matters in mathematical logic, foundations of math, and also related issues of a more straight-up philosophical nature (if you're into that sort of thing). It too doesn't give a single chronological narrative, but just skipping around the articles in that collection will give you a lot to chew on, and ultimately give you a better account of the development of modern logic than will primary sources (IMHO).
Although it's not exactly what you asked for, you might take a look at the book "Foundations of Mathematics" by William S. Hatcher. It's primarily about the various foundational systems themselves, but my recollection is that Hatcher includes a good deal of historical information.
I would suggest starting with "The Search for Mathematical Roots, 1870-1940" by I. Grattan-Guinness, which has some chapters on Cantor that you can skip if they really don't interest you, so that you begin with chapter 4, which is on Peirce and Frege. It has less on Tarski than you might want, in which case you can read the Feferman biography called "Alfred Tarski: Life and Logic". It may also have less on Wittgenstein than you want, but there are myriad supplemental materials there.
There was an FOM thread a while back mentioning a number of titles:
• http://www.cs.nyu.edu/pipermail/fom/2011-February/thread.html
Look for messages with the subject "Book on the history of logic?". There are a couple more in the following month of the FOM archive.
Keasbey Algebra 2 Tutor
Find a Keasbey Algebra 2 Tutor
...I am an experienced bass guitar player, having played in a rock band for 4 years, and a jazz big band for 2 years, in addition to playing occasionally in a jazz small group setting. I
participated in a chess team in school that became finalists for NYC in a competition against other schools. Pr...
33 Subjects: including algebra 2, physics, calculus, GRE
...I was a math major at Washington University in St. Louis, and minored in German, economics, and writing. While there, I tutored students in everything from counting to calculus, and beyond.
26 Subjects: including algebra 2, calculus, physics, geometry
...Additionally, I have volunteered as a tutor for various urban schools. As an undergraduate I created a GEPA 8 (now NJ Ask 7/8) tutoring program where high risk students were tutored weekly for
6 intensive hours. There was a substantial increase - as over 80% of those students passed.
37 Subjects: including algebra 2, reading, writing, geometry
...I have five children of my own, ages 12 to 21 years old, who I have tutored through many of their subjects through the years. I've taught college chemistry for 20 years. I have taught
integrated science courses and science and society courses.
10 Subjects: including algebra 2, chemistry, algebra 1, GED
...My degree has taught me to view human beings in a person-in-environment model, which helps me to identify areas that need to be addressed in order to provide the client with some relief. I also
operate under a strengths-based perspective, and view human beings as highly capable, and adaptable en...
20 Subjects: including algebra 2, English, reading, writing | {"url":"http://www.purplemath.com/Keasbey_Algebra_2_tutors.php","timestamp":"2014-04-19T23:56:31Z","content_type":null,"content_length":"23745","record_id":"<urn:uuid:2f0defd8-e21a-449b-b17d-d9c5bf5fdc8f>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00016-ip-10-147-4-33.ec2.internal.warc.gz"} |
Who's That Mathematician? Paul R. Halmos Collection - Page 47
For more information about Paul R. Halmos (1916-2006) and about the Paul R. Halmos Photograph Collection, please see the introduction to this article on page 1. A new page featuring six photographs
will be posted at the start of each week during 2012.
Halmos photographed his former Ph.D. student Donald Sarason (center, with Dan Halperin at left), in February of 1982 at the University of California, Davis (according to Halmos' notation on the back
of the photograph; please let us know of any corrections or further identifications and information). Sarason earned his Ph.D. in 1963 from the University of Michigan with the dissertation “The H^p
Spaces of Annuli.” He has spent his career at the University of California, Berkeley, where he has advised 40 Ph.D. students and became Professor Emeritus in January of 2012. His first book was the monograph The H^p Spaces of an Annulus (AMS, 1965) and his most recent Complex Function Theory (2nd ed., AMS, 2007), with numerous papers in between and since; at his UC Berkeley website, he lists
his current research interests as complex function theory and operator theory.
In his I Want to Be a Mathematician: An Automathography (Springer 1985), Halmos wrote of Sarason:
I still haven’t decided whether Don was my best student or Errett Bishop; fortunately I don’t have to. ... Don is intelligent and quick, and although he doesn’t waste words, he never fails to say
what must be said. ... [He] is a quiet man; he never uses eight words when seven will do. He is one of the smoothest and clearest lecturers I know and he has an extraordinary sense of time. He
knows almost to the minute how long it will take him to explain something. (pp. 276-277)
For a photograph of Errett Bishop, see page 6 of this collection, where you can read more about him. (Sources: Mathematics Genealogy Project, UC Berkeley Mathematics)
Isaac J. Schoenberg (1903-1990) was photographed by Halmos in 1978. Born in Galati, Romania, Iso Schoenberg earned his Ph.D. in analytic number theory in 1926 from the Alexandru Ioan University of
Iasi (or Jassy), Moldavia, under advisors Simeon Sanielevici and Issai Schur. Schur was at the University of Berlin at the time, but Schoenberg had studied in Berlin with Schur and in Göttingen with
Edmund Landau from 1922 to 1925. From 1928 to 1930, Schoenberg visited Hebrew University of Jerusalem and, from 1930 onward, he held various positions in the U.S., including professorships at the
University of Pennsylvania from 1941 to 1966 and at the University of Wisconsin at Madison from 1966 onward. While doing ballistics research at the Aberdeen (Maryland) Proving Ground from 1943 to
1945, he introduced the theory of splines, publishing his first two papers on the subject in 1946. Splines really took off during the 1960s when computers became available for scientific research and
engineering design, and Schoenberg not only became well known for their invention but continued to make advances in both theory and practice. He continued to work on approximation theory for the rest
of his life. Do you recognize anyone in the crowd? (Sources: MacTutor Archive, Mathematics Genealogy Project)
Alan H. Schoenfeld, left, and David Blackwell (1919-2010) were photographed by Halmos in December of 1985 in Berkeley, California. Schoenfeld wrote recently:
I remember that meeting - Paul, David Blackwell, Leon Henkin and I had gotten together to discuss the mathematics content (specifically problem solving) that ought to be highlighted in AAAS's
Project 2061 mathematics report.
Two more photographs of Blackwell appear on page 1 and page 7 of this collection, where you can read more about him. Schoenfeld earned his Ph.D. in 1973 from Stanford University with the dissertation
“Topological and Measure-Theoretic Studies on Cantor Sets and Peano Spaces.” He has spent his career at the University of California, Berkeley, where he holds the Conner Chair in Education, is an
Affiliated Professor of Mathematics, and continues to advise Ph.D. students in mathematics education. He is a national leader in the field of mathematics education, specializing in cognition and
development or, as his UC Berkeley biography puts it, “Schoenfeld’s research deals with thinking, teaching, and learning. His book Mathematical Problem Solving characterized what it means to think
mathematically ....” Among his many accomplishments, he was lead author for grades 9-12 of the groundbreaking Principles and Standards for School Mathematics, produced by the National Council of
Teachers of Mathematics (NCTM) in 2000. In 2011 Schoenfeld was awarded the Felix Klein Medal for lifetime achievement by the International Commission on Mathematical Instruction. (Sources:
Mathematics Genealogy Project, UC Berkeley Graduate School of Education)
Halmos photographed his former Ph.D. student Morris Schreiber (d. 1988) on May 29, 1966, in Irvine, California. Schreiber earned his Ph.D. in 1955 from the University of Chicago with the dissertation
“Unitary Dilations of Operators.” In I Want to Be a Mathematician: An Automathography (Springer 1985), Halmos wrote of Schreiber’s thesis work:
He gave me one of the rare deep surprises that thesis advisors get. I suggested to him that he study the unitary power dilations of strict contractions and look at them as spectral invariants;
what information about a contraction can we infer from knowledge of its unitary power dilation? I didn’t believe Moe the first time when he, hesitantly, told me the answer. None, he said. All
those unitary power dilations are the same. ... Schreiber’s theorem (as it quickly became called) is that the spectral information conveyed by the unitary power dilation ... says something about
everybody but it doesn’t say anything about anybody. (page 222)
Schreiber was on the faculty at Cornell University in Ithaca, New York, during the 1960s, and he was a professor at Rockefeller University in New York City at the time of his death. In 1990,
Schreiber’s sister Hilda Schreiber established a fund at Princeton University to support graduate fellowships in mathematics. (Sources: Mathematics Genealogy Project, New York Times obituary,
Princeton University)
Halmos photographed Jacob T. Schwartz (1930-2009) on May 8, 1978, at the University of California, Santa Barbara. Born in New York City, Jack Schwartz earned his Ph.D. in 1952 from Yale University
with the dissertation “Linear Elliptic Differential Operators.” He remained on the faculty at Yale until 1957, when he joined the soon-to-be Courant Institute at New York University, where he
remained for the rest of his career and life. The first of his at least 28 Ph.D. students was Gian-Carlo Rota (Yale, 1956), who is pictured on page 45 of this collection, with his remaining Ph.D.
students at NYU. O’Connor and Robertson of the MacTutor Archive quote Schwartz’s NYU colleague Martin Davis (see page 11 of this collection) on Schwartz’s research modus operandi:
Jack's style has been to enter a new field, master quickly the existing research literature, add the stamp of his own forceful vision in a series of research contributions, and finally, leave
behind an active research group that continues fruitful research for many years along the lines he has laid down.
At first, these research fields were in mathematical analysis, but in 1964 Schwartz founded the computer science department at NYU and became its first chair. His interests in computer science
included parallel computing, compiler optimization, and robot design. By the time of his death, he was doing research in molecular biology. (Sources: MacTutor Archive, Mathematics Genealogy Project)
Halmos photographed Laurent Schwartz (1915-2002) in 1978. Born in Paris and educated there through the Agrégation de Mathématiques from the École Normale Supérieure in 1937, Schwartz earned his Ph.D.
in 1943 from the Université Louis Pasteur – Strasbourg with the dissertation “Sommes de Fonctions Exponentielles Reelles.” He spent his career at various universities in France, including Grenoble
(1944-45), Nancy (1945-53), and Paris (1953-59, 1980-83). He was on the faculty at the École Polytechnique in Paris from 1959 to 1980. Schwartz’s best known contribution to mathematics, the one for
which he won the Fields Medal in 1950, was the theory of distributions. In his I Want to Be a Mathematician (Springer 1985), Halmos described Schwartz’s difficulties in obtaining a visa to enter the
U.S. to attend the 1950 International Congress of Mathematicians (ICM) to be held at Harvard University in Cambridge, Massachusetts, describing Schwartz only partly tongue-in-cheek as a “known
Trotskyite activist” (page 163). Schwartz was eventually allowed to enter the U.S. and to speak at the ICM and receive his Fields Medal there. Halmos also wrote, “The last time I saw Schwartz was in
Berkeley in the 1970’s” (ibid), so this photograph was quite possibly taken in Berkeley, California. (Sources: MacTutor Archive, Mathematics Genealogy Project)
For an introduction to this article and to the Paul R. Halmos Photograph Collection, please see page 1. Watch for a new page featuring six new photographs each week during 2012.
Regarding sources for this page: Information for which a source is not given either appeared on the reverse side of the photograph or was obtained from various sources during 2011-12 by archivist
Carol Mead of the Archives of American Mathematics, Dolph Briscoe Center for American History, University of Texas, Austin. | {"url":"http://www.maa.org/publications/periodicals/convergence/whos-that-mathematician-paul-r-halmos-collection-page-47?device=mobile","timestamp":"2014-04-20T17:14:53Z","content_type":null,"content_length":"35180","record_id":"<urn:uuid:b1e5cac5-f378-47ec-881e-f4dcf3d7778a>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00369-ip-10-147-4-33.ec2.internal.warc.gz"} |
Grade 7 » Statistics & Probability » Use random sampling to draw inferences about a population. » 2
Use data from a random sample to draw inferences about a population with an unknown characteristic of interest. Generate multiple samples (or simulated samples) of the same size to gauge the
variation in estimates or predictions. For example, estimate the mean word length in a book by randomly sampling words from the book; predict the winner of a school election based on randomly sampled
survey data. Gauge how far off the estimate or prediction might be. | {"url":"http://www.corestandards.org/Math/Content/7/SP/A/2/","timestamp":"2014-04-21T07:18:02Z","content_type":null,"content_length":"38438","record_id":"<urn:uuid:d460f947-02ae-4dc1-8500-5e2d36a3ce59>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00498-ip-10-147-4-33.ec2.internal.warc.gz"} |
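Purely as an illustration (the code, the made-up "book", and the sample sizes below are ours, not part of the standard), the word-length example can be simulated by drawing repeated random samples and gauging how much the estimates vary:

```python
import random

random.seed(0)
# A made-up "book" of 4500 words; the true mean word length is known here,
# which lets us gauge how far off the sample estimates are.
book = ("the quick brown fox jumps over the lazy dog " * 500).split()
true_mean = sum(len(w) for w in book) / len(book)

# Generate multiple samples of the same size to gauge variation in estimates.
estimates = []
for _ in range(100):
    sample = random.sample(book, 50)
    estimates.append(sum(len(w) for w in sample) / len(sample))

spread = max(estimates) - min(estimates)
assert all(abs(e - true_mean) < 2 for e in estimates)
print(round(true_mean, 2), round(spread, 2))
```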
Testing for independence in a two-way table: New interpretations of the chi-square statistic
Results 1 - 10 of 43
- Annals of Statistics, 1995
"... We construct Markov chain algorithms for sampling from discrete exponential families conditional on a sufficient statistic. Examples include generating tables with fixed row and column sums and
higher dimensional analogs. The algorithms involve finding bases for associated polynomial ideals and so a ..."
Cited by 192 (16 self)
We construct Markov chain algorithms for sampling from discrete exponential families conditional on a sufficient statistic. Examples include generating tables with fixed row and column sums and
higher dimensional analogs. The algorithms involve finding bases for associated polynomial ideals and so an excursion into computational algebraic geometry.
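To make the abstract above concrete: the simplest Markov chains of this kind walk on two-way tables with fixed margins using ±1 moves on 2×2 minors, which preserve every row and column sum. A hedged sketch (the table and step count are made up; this shows only the basic move, not the polynomial-ideal bases the paper computes):

```python
import random

random.seed(1)
# A made-up 3x3 nonnegative integer table; the chain must preserve its margins.
table = [[2, 1, 0],
         [0, 3, 1],
         [1, 0, 2]]
rows = [sum(r) for r in table]
cols = [sum(c) for c in zip(*table)]

def step(t):
    i, j = random.sample(range(3), 2)    # two distinct rows
    k, l = random.sample(range(3), 2)    # two distinct columns
    s = random.choice([1, -1])
    # Add s at (i,k) and (j,l), subtract s at (i,l) and (j,k): every row
    # and column sum is unchanged. Reject moves creating a negative entry.
    if min(t[i][k] + s, t[j][l] + s, t[i][l] - s, t[j][k] - s) >= 0:
        t[i][k] += s; t[j][l] += s
        t[i][l] -= s; t[j][k] -= s

for _ in range(1000):
    step(table)
assert [sum(r) for r in table] == rows
assert [sum(c) for c in zip(*table)] == cols
```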
- Journal of the ACM , 1991
"... We consider the problem of counting the number of contingency tables with given row and column sums. This problem is known to be #P-complete, even when there are only two rows [7]. In this paper
we present the first fully-polynomial randomized approximation scheme for counting contingency tables whe ..."
Cited by 115 (9 self)
We consider the problem of counting the number of contingency tables with given row and column sums. This problem is known to be #P-complete, even when there are only two rows [7]. In this paper we
present the first fully-polynomial randomized approximation scheme for counting contingency tables when the number of rows is constant. A novel feature of our algorithm is that it is a hybrid of an
exact counting technique with an approximation algorithm, giving two distinct phases. In the first, the columns are partitioned into “small ” and “large”. We show that the number of contingency
tables can be expressed as the weighted sum of a polynomial number of new instances of the problem, where each instance consists of some new row sums and the original large column sums. In the second
phase, we show how to approximately count contingency tables when all the column sums are large. In this case, we show that the solution lies in approximating the volume of a single convex body, a
problem which is known to be solvable in polynomial time [5].
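For intuition about the quantity being counted (this toy brute force is ours, not the paper's approximation scheme, and is feasible only at tiny sizes): with two rows, choosing the first row forces the second via the column sums.

```python
from itertools import product

def count_tables(row_sums, col_sums):
    """Count nonnegative integer 2-row tables with the given margins."""
    count = 0
    # Enumerate candidate first rows, entrywise bounded by the column sums;
    # the second row is then forced to be col_sums - first_row, which is
    # automatically nonnegative, so only its row sum needs checking.
    for first in product(*(range(c + 1) for c in col_sums)):
        if sum(first) == row_sums[0]:
            if sum(col_sums) - sum(first) == row_sums[1]:
                count += 1
    return count

# 2x2 tables with all margins 2: the one free entry ranges over {0, 1, 2}.
assert count_tables([2, 2], [2, 2]) == 3
print(count_tables([3, 3], [2, 2, 2]))  # 7
```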
- J. Amer. Statist. Assoc
"... We describe a sequential importance sampling (SIS) procedure for analyzing two-way zero–one or contingency tables with fixed marginal sums. An essential feature of the new method is that it
samples the columns of the table progressively according to certain special distributions. Our method produces ..."
Cited by 51 (10 self)
We describe a sequential importance sampling (SIS) procedure for analyzing two-way zero–one or contingency tables with fixed marginal sums. An essential feature of the new method is that it samples
the columns of the table progressively according to certain special distributions. Our method produces Monte Carlo samples that are remarkably close to the uniform distribution, enabling one to
approximate closely the null distributions of various test statistics about these tables. Our method compares favorably with other existing Monte Carlo-based algorithms, and sometimes is a few orders
of magnitude more efficient. In particular, compared with Markov chain Monte Carlo (MCMC)-based approaches, our importance sampling method not only is more efficient in terms of absolute running time
and frees one from pondering over the mixing issue, but also provides an easy and accurate estimate of the total number of tables with fixed marginal sums, which is far more difficult for an MCMC
method to achieve.
- in Probabilistic Methods for Algorithmic Discrete Mathematics , 1998
Cited by 30 (1 self)
7.2 was jointly undertaken with Vivek Gore, and is published here for the first time. I also thank an anonymous referee for carefully reading and providing helpful comments on a draft of this
chapter. 1. Introduction The classical Monte Carlo method is an approach to estimating quantities that are hard to compute exactly. The quantity z of interest is expressed as the expectation z = ExpZ
of a random variable (r.v.) Z for which some efficient sampling procedure is available. By taking the mean of some sufficiently large set of independent samples of Z, one may obtain an approximation
to z. For example, suppose $S = \{(x, y) \in [0,1]^2 : p_i(x, y) \geq 0 \text{ for all } i\}$.
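The classical Monte Carlo recipe just described (estimate z = E[Z] by averaging independent samples of Z) can be sketched for a toy region. This is my own generic example, not the chapter's; the region here is just the quarter disk:

```python
import random

# Estimate z = E[Z] by averaging independent samples of Z. Here z is the
# area of the quarter disk S = {(x, y) in [0,1]^2 : x^2 + y^2 <= 1}, and
# Z is the indicator that a uniform point of the unit square lands in S.
random.seed(0)
n = 100_000
hits = 0
for _ in range(n):
    x, y = random.random(), random.random()
    if x * x + y * y <= 1.0:
        hits += 1
estimate = hits / n
print(estimate)   # close to pi/4, about 0.785
```

With n draws, the standard error of the estimate shrinks like 1/sqrt(n), which is the sense in which "sufficiently large" is meant in the excerpt.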
- SIAM J. COMPUT , 2004
Cited by 26 (7 self)
Multiway tables with specified marginals arise in a variety of applications in statistics and operations research. We provide a comprehensive complexity classification of three fundamental
computational problems on tables: existence, counting, and entry-security. One outcome of our work is that each of the following problems is intractable already for “slim” 3-tables, with constant
number 3 of rows: (1) deciding existence of 3-tables with specified 2-marginals; (2) counting all 3-tables with specified 2-marginals; (3) deciding whether a specified value is attained in a
specified entry by at least one of the 3-tables having the same 2-marginals as a given table. This implies that a characterization of feasible marginals for such slim tables, sought by much recent
research, is unlikely to exist. Another consequence of our study is a systematic efficient way of embedding the set of 3-tables satisfying any given 1-marginals and entry upper bounds in a set of
slim 3-tables satisfying suitable 2-marginals with no entry bounds. This provides a valuable tool for studying multi-index transportation problems and multi-index transportation polytopes.
Remarkably, it enables us to automatically recover a famous example due to Vlach of a “real-feasible integer-infeasible ” collection of 2-marginals for 3-tables of smallest possible size (3, 4, 6).
, 2003
Cited by 23 (8 self)
We describe a divide-and-conquer technique for generating a Markov basis that connects all tables of counts having a fixed set of marginal totals
- Theoretical Computer Sciences , 1998
Cited by 22 (6 self)
In this paper a Markov chain for contingency tables with two rows is defined. The chain is shown to be rapidly mixing using the path coupling method. The mixing time of the chain is quadratic in the
number of columns and linear in the logarithm of the table sum. We prove a lower bound for the mixing time, which is quadratic in the number of columns and linear in the logarithm of the number of
columns. Two extensions of the new chain are discussed: one for three-rowed contingency tables and one for m-rowed contingency tables. We show that, unfortunately, it is not possible to prove rapid
mixing for these chains by simply extending the path coupling approach used in the two-rowed case. 1. Introduction. A contingency table is a matrix of nonnegative integers with prescribed positive row and column sums. Contingency tables are used in statistics to store data from sample surveys (see for example [3, Chapter 8]). For a survey of contingency tables and related problems, see [8].
- Proceedings of the 43rd Annual Symposium on Foundations of Computer Science (FOCS , 2002
Cited by 21 (4 self)
Abstract. We consider the problem of sampling almost uniformly from the set of contingency tables with given row and column sums, when the number of rows is a constant. Cryan and Dyer [J. Comput.
System Sci., 67 (2003), pp. 291–310] have recently given a fully polynomial randomized approximation scheme (fpras) for the related counting problem, which employs Markov chain methods indirectly.
They leave open the question as to whether a natural Markov chain on such tables mixes rapidly. Here we show that the “2 × 2 heat-bath” Markov chain is rapidly mixing. We prove this by considering
first a heat-bath chain operating on a larger window. Using techniques developed
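The "2 × 2 heat-bath" move is easy to sketch in code. This is my reading of the chain, not the authors' implementation: pick two rows and two columns, then resample the induced 2 × 2 submatrix uniformly over all fillings with the same submatrix margins:

```python
import random

def heat_bath_step(T, rng=random):
    """One 2x2 heat-bath move on a contingency table T (list of lists of
    nonnegative ints). Row and column sums of T are preserved."""
    m, n = len(T), len(T[0])
    i, j = rng.sample(range(m), 2)   # two distinct rows
    k, l = rng.sample(range(n), 2)   # two distinct columns
    a, b = T[i][k], T[i][l]
    c, d = T[j][k], T[j][l]
    r1, r2 = a + b, c + d            # submatrix row sums
    c1 = a + c                       # first submatrix column sum
    # Feasible values of the top-left entry given the submatrix margins:
    lo = max(0, c1 - r2)
    hi = min(r1, c1)
    a_new = rng.randint(lo, hi)      # uniform over all consistent fillings
    T[i][k] = a_new
    T[i][l] = r1 - a_new
    T[j][k] = c1 - a_new
    T[j][l] = r2 - (c1 - a_new)

# Example: run the chain and check the table margins are preserved.
random.seed(1)
T = [[3, 0, 2], [1, 4, 0]]
rows = [sum(r) for r in T]
cols = [sum(c) for c in zip(*T)]
for _ in range(1000):
    heat_bath_step(T)
print(T)
```

Because only a 2 × 2 submatrix changes and its margins are fixed, every move leaves the full table's row and column sums intact; the interval [lo, hi] is exactly the set of top-left values keeping all four entries nonnegative.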
- Annals of Statistics , 2005
Cited by 21 (3 self)
We describe an algorithm for the sequential sampling of entries in multiway contingency tables with given constraints. The algorithm can be used for computations in exact conditional inference. To
justify the algorithm, a theory relates sampling values at each step to properties of the associated toric ideal using computational commutative algebra. In particular, the property of interval cell
counts at each step is related to exponents on lead indeterminates of a lexicographic Gröbner basis. Also, the approximation of integer programming by linear programming for sampling is related to
initial terms of a toric ideal. We apply the algorithm to examples of contingency tables which appear in the social and medical sciences. The numerical results demonstrate that the theory is
applicable and that the algorithm performs well.
, 2005
Cited by 21 (6 self)
We study the problem of counting and randomly sampling binary contingency tables. For given row and column sums, we are interested in approximately counting (or sampling) 0/1 n×m matrices with the
specified row/column sums. We present a simulated annealing algorithm with running time O((nm)^2 D^3 d_max log^5(n + m)) for any row/column sums, where D is the number of non-zero entries and d_max is the maximum row/column sum. In the worst case, the running time of the algorithm is O(n^11 log^5 n) for an n × n matrix. This is the first algorithm to directly solve binary contingency tables for all row/column sums. Previous work reduced the problem to the permanent, or restricted attention to row/column sums that are close to regular. The interesting aspect of our simulated annealing algorithm is that it starts at a non-trivial instance, whose solution relies on the existence of short alternating paths in the graph constructed by a particular Greedy algorithm.
Cosec Meaning and Definition
WordNet (r) 2.0
cosec n : ratio of the hypotenuse to the opposite side of a right-angled triangle [syn: cosecant]
In mathematics, the trigonometric functions (also called circular functions) are functions of an angle. They are used to relate the angles of a triangle to the lengths of the sides of a triangle.
Trigonometric functions are important in the study of triangles and modeling periodic phenomena, among many other applications.
The most familiar trigonometric functions are the sine, cosine, and tangent. In the context of the standard unit circle with radius 1, where a triangle is formed by a ray originating at the origin
and making some angle with the x-axis, the sine of the angle gives the length of the y-component (rise) of the triangle, the cosine gives the length of the x-component (run), and the tangent function
gives the slope (y-component divided by the x-component). More precise definitions are detailed below. Trigonometric functions are commonly defined as ratios of two sides of a right triangle
containing the angle, and can equivalently be defined as the lengths of various line segments from a unit circle. More modern definitions express them as infinite series or as solutions of certain
differential equations, allowing their extension to arbitrary positive and negative values and even to complex numbers.
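The WordNet ratio definition above can be checked against the unit-circle description (a trivial sketch using the standard library):

```python
import math

theta = math.radians(30)
opposite = math.sin(theta)       # "rise" of the unit-hypotenuse triangle
hypotenuse = 1.0
cosec = hypotenuse / opposite    # ratio of hypotenuse to opposite side
print(cosec)                     # mathematically csc(30 degrees) = 2
```

For a right triangle inscribed in the unit circle the hypotenuse is 1, so the cosecant reduces to 1/sin, consistent with both definitions quoted above.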
January 13th 2010, 01:19 AM #1
Super Member
Aug 2009
can someone explain this proof to me?
Let $s_n$ be a convergent sequence with $s_n \to l$ as $n$ tends to infinity. Then every subsequence of $s_n$ converges to $l$.
From my notes, it states that by induction, you can easily establish that $n_k > k$ for all $k$...
Sorry, I'm new to the whole idea of proofs, so I'm not sure how to use induction to prove that $n_k > k$.
January 14th 2010, 01:05 AM #2
MHF Contributor
Aug 2008
Paris, France
I guess you mean: if $(n_k)_{k\geq 0}$ is a strictly increasing integer-valued sequence, then $n_k\geq k$ for all $k\in\mathbb{N}$.
Since $n_0\in\mathbb{N}$, we have $n_0\geq 0$, this is the base case.
Let $k\in\mathbb{N}$. Assume that $n_k\geq k$. Let us prove that $n_{k+1}\geq k+1$. Because $(n_k)_k$ is strictly increasing, $n_{k+1}>n_k$, hence $n_{k+1}>n_k\geq k$, and $n_{k+1}\in\mathbb{N}$, thus $n_{k+1}\geq k+1$. This concludes the induction.
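The induction in the reply can even be machine-checked. Below is a sketch in Lean 4 (my own formalization, not from the thread; the hypothesis is stated as n k < n (k + 1), which is equivalent to strict monotonicity):

```lean
-- If (n k) is a strictly increasing sequence of naturals, then k ≤ n k.
theorem index_le_of_strictMono (n : Nat → Nat)
    (h : ∀ k, n k < n (k + 1)) : ∀ k, k ≤ n k := by
  intro k
  induction k with
  | zero => exact Nat.zero_le _
  | succ k ih =>
    -- k ≤ n k < n (k + 1), hence k + 1 ≤ n (k + 1)
    exact Nat.lt_of_le_of_lt ih (h k)
```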
Math Tutors
Union City, CA 94587
Expert Math Tutor
I am a result-oriented math tutor. I have 7+ years of tutoring experience. I can help you or your child to excel in math and overcome any problem areas. I specialize in all areas of math: Arithmetic, Algebra, Geometry, Trigonometry, Pre-Calculus, Calculus, etc. I...
Offering 10+ subjects including algebra 1, algebra 2 and calculus
The intuition of robust standard errors
Commonly econometricians conduct inference based on covariance matrix estimates which are consistent in the presence of arbitrary forms of heteroskedasticity; the associated standard errors are
referred to as “robust” (also, confusingly, White, or Huber-White, or Eicker-Huber-White) standard errors. These are easily requested in Stata with the “robust” option, as in the ubiquitous
reg y x, robust
Everyone knows that the usual OLS standard errors are generally “wrong,” that robust standard errors are “usually” bigger than OLS standard errors, and it often “doesn’t matter much” whether one uses
robust standard errors. It is whispered that there may be mysterious circumstances in which robust standard errors are smaller than OLS standard errors. Textbook discussions typically present the
nasty matrix expressions for the robust covariance matrix estimate, but do not discuss in detail when robust standard errors matter or in what circumstances robust standard errors will be smaller
than OLS standard errors. This post attempts a simple explanation of robust standard errors and circumstances in which they will tend to be much bigger or smaller than OLS standard errors.
Expressions for OLS and robust standard errors.
Consider the univariate linear model
\((y_i - \bar y) = \beta (x_i - \bar x) + u_i,\)
where \(y\) is the dependent variable, \(x\) is a covariate, \(u\) is the error term, and \(\beta\) is the parameter over which we would like to make inferences. I’ve omitted a constant by expressing
the model in deviations from sample means, denoted with overbars. Assume \(u\) is mean independent of \(x\) and serially uncorrelated, but allow heteroskedasticity, \(V(u_i) = \sigma^2_i\). Let \(\
hat\beta\) denote the OLS estimate of \(\beta\).
If we erroneously assume the error is homoskedastic, we estimate the variance of \(\hat\beta\) with
\(\hat V^{OLS}(\hat\beta) = \frac{s^2}{\sum_i (x_i - \bar x)^2} \approx \frac{\bar\sigma^2}{\sum_i (x_i - \bar x)^2}, \)
where \(s^2 = (n-2)^{-1}(SSR)\). I will refer to the square root of this estimate throughout as the “OLS standard error.” When the errors are heteroskedastic, \(s^2\) converges to the mean of \(\sigma_i^2\), denote that \(\bar\sigma^2\). However, the true sampling variance of \(\hat\beta\) can easily be shown to be
\(V(\hat\beta) = \left ( {\frac{1}{\sum_i (x_i - \bar x)^2}}\right )^2 \sum_i \sigma_i^2 (x_i - \bar x)^2. \)
Robust standard errors are based on estimates of this expression in which the \(\sigma_i^2\) are replaced with squared OLS residuals, or sometimes slightly more complicated expressions designed to
perform better in small samples; see, for example, Imbens and Kolesár (2012).
When do robust standard errors differ from OLS standard errors?
Compare the expressions above to see that OLS and robust standard errors are (asymptotically) identical in the special case in which \(\sigma_i^2\) and \((x_i - \bar x)^2\) are uncorrelated, in which
\(\sum_i \sigma_i^2(x_i - \bar x)^2 \rightarrow \bar\sigma^2 \sum_i (x_i - \bar x)^2. \)
If, on the other hand, \(\sigma_i^2\) and \((x_i - \bar x)^2\) are positively correlated, then OLS standard errors are too small and robust standard errors will tend to be larger than OLS standard
errors. And if \(\sigma_i^2\) and \((x_i - \bar x)^2\) are negatively correlated, then OLS standard errors are too big and robust standard errors will tend to be smaller than OLS standard errors.
These cases are illustrated in the graphs: in the left panel, the variance of the error terms increases with the distance between \(x_i\) and its mean \(\bar x\), whereas in the right panel
observations are most dispersed around the regression line when \(x_i\) is at its mean.
The graphs have been constructed such that the unconditional variance of the errors terms and the variance of \(x\) are the same in each graph. But by inspection we can guess that our estimate of the
slope is much less precise if the data look like the left panel than the right panel: perform a thought experiment to see that lots of regression lines fit the data in the left panel quite well, but
the data in the right panel do a better job pinning down the slope. There is more information about the relationship between \(y\) and \(x\) in the data in the right panel even though the variance of
\(x\) and the unconditional variance of the error term are identical.
We see that heteroskedasticity doesn’t matter per se, what matters is the relationship between the variance of the error term and the covariates—if the errors are heteroskedastic but uncorrelated
with \((x_i-\bar x)^2\), we can safely ignore the heteroskedasticity. To see why this is so, recall that in the homoskedastic case the variance of \(\hat\beta\) is inversely proportional to \(\sum_i
(x_i - \bar x)^2\). If we add one more observation for which \(x_i\) happens to equal \(\bar x\), the variance of our estimate doesn’t change—there is no information in that observation about the
relationship between \(y\) and \(x\). As the draw of \(x_i\) moves farther from its mean, the variance of \(\hat\beta\) falls more and more, because such draws, in the homoskedastic case, are more
and more informative.
Now consider the case in which the variance of \(u_i\) increases with \((x_i-\bar x)^2\), as in the left panel of the graph above. When we get one more observation, the amount of information it
contains increases with \((x_i - \bar x)^2\) for the same reasons as the homoskedastic case, but this effect is blunted by the higher variance of \(u_i\). The amount of information contained in a
draw in which \(x_i\) is far from its mean is lower than the OLS variance estimate “thinks” there is, so to speak, because the OLS variance estimate ignores the fact that such draws are more highly
dispersed around the regression line. The OLS standard errors in this case are too small.
If on the other hand the variance of \(u_i\) decreases with \((x_i-\bar x)^2\), then observations of \(x_i\) far from its mean both contain more information for the usual reason in the homoskedastic
case and are less dispersed around the regression line, as in the right panel of the graph above. These observations are even more highly informative than the OLS variance estimate “thinks” they are,
and the OLS standard errors will tend to be too large. In this case, robust standard errors will tend to be smaller than OLS standard errors.
The upshot is this: if you have heteroskedasticity but the variance of your errors is independent of the covariates, you can safely ignore it, but if you calculate robust standard errors anyways they
will be very similar to OLS standard errors. However, if the variance of your error terms tends to be higher when \(x\) is far from its mean, OLS standard errors will tend to be biased down, and
robust standard errors will tend to be larger than OLS standard errors. In the opposite case in which the variance of the error terms tends to be lower when \(x\) is far from its mean, OLS standard
errors will tend to be too large, and robust standard errors will tend to be smaller than OLS standard errors. With real data it’s commonly but not always going to be the case that the variance of
the error will be higher when \(x\) is far from its mean, explaining the result that robust standard errors are typically larger than OLS standard errors in economic applications.
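The three regimes described above are easy to check in a quick simulation. This is my own sketch, not code from the post; `se_rob` below is the plain HC0 estimator that plugs squared OLS residuals into the sampling-variance formula:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)
xd = x - x.mean()                            # deviations from the sample mean

def ols_vs_robust(sigma_i):
    """OLS and HC0 robust standard errors for y = beta*(x - xbar) + u,
    where the error u_i has standard deviation sigma_i."""
    u = sigma_i * rng.normal(size=n)
    y = 2.0 * xd + u
    bhat = (xd @ y) / (xd @ xd)
    e = y - bhat * xd                        # OLS residuals
    se_ols = np.sqrt((e @ e) / (n - 2) / (xd @ xd))
    se_rob = np.sqrt(np.sum(e**2 * xd**2)) / (xd @ xd)
    return se_ols, se_rob

# V(u_i) rising with (x_i - xbar)^2: robust SE exceeds the OLS SE.
print(ols_vs_robust(np.abs(xd)))
# V(u_i) falling with (x_i - xbar)^2: robust SE falls below the OLS SE.
print(ols_vs_robust(1.0 / (1.0 + xd**2)))
# Homoskedastic errors: the two estimates roughly coincide.
print(ols_vs_robust(np.ones(n)))
```

In the first call, with Gaussian x and the error standard deviation equal to the deviation of x from its mean, the asymptotic ratio of the two variances is E[x^4]/E[x^2]^2 = 3, so the robust SE should come out near sqrt(3) times the OLS SE.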
Tags: econometrics, robust standard error, statistics
Hi Chris. Great to see you blogging again. Thanks for this post; I am sure many applied researchers will find it a very worthwhile read.
thanks a lot for your insight!
Thank you so much!!
It helped a lot with my assignment!
This is the best blog post I’ve ever seen in my life.
Open neutral
Looks to me like the operation of a GFCI and an RCD is identical. However, they can be incorporated into many different devices. The most common type in the U.S. is in a receptacle. One receptacle at
the beginning of the circuit can be wired to protect the rest of the circuit. The loss of a neutral between the transformer and the meter will NOT cause a GFCI incorporated into a receptacle to trip
due to imbalance of current because there won't be an imbalance. The case of the GFCI built into a circuit breaker works in exactly the same way. Losing the neutral is ALWAYS bad news concerning
safety as well as harming equipment. There is a good reason that the U.S. code prohibits interrupting the neutral in ALL cases. The neutral is never, ever, under any circumstance, no matter what, to be interrupted.
Document Type
Conference Proceeding
Publication Date
In frequency-domain photon migration (FDPM), two factors make high modulation frequencies desirable. First, with frequencies as high as a few GHz, the phase lag versus frequency plot has sufficient curvature to yield both the scattering and absorption coefficients of the tissue under examination. Second, because of increased attenuation, high-frequency photon density waves probe smaller volumes, an asset in small-volume in vivo or in vitro studies. This trend toward higher modulation frequencies has led us to reexamine the derivation of the standard diffusion equation (SDE) from the Boltzmann transport equation. We find that a second-order time-derivative term, ordinarily neglected in the derivation, can be significant above 1 GHz for some biological tissue.
The revised diffusion equation, including the second-order time derivative, is often termed the P1 equation. We compare the dispersion relation of the P1 equation with that of the SDE. The P1 phase velocity is slower than that predicted by the SDE; in fact, the SDE phase velocity is unbounded with increasing modulation frequency, while the P1 phase velocity approaches c/sqrt(3) in the high-frequency limit. We emphasize that the phase velocity c/sqrt(3) is attained only at modulation frequencies with periods shorter than the mean time between scatterings of a photon, a frequency regime that probes the medium beyond the applicability of diffusion theory. Finally, we caution that values for optical properties deduced from FDPM data at high frequencies using the SDE can be in error by 30% or more.
Previously linked to as: http://ccdl.libraries.claremont.edu/u?/irw,379.
Presented at the Conference on Optical Tomography, Photon Migration, and Spectroscopy of Tissue and Model Media: Theory, Human Studies and Instrumentation [February 05-07, 1995, San Jose, CA]. Society of Photo-Optical Instrumentation Engineers. ISBN: 0-8194-1736-X
Pdf modified from ILL pdf of published version.
Rights Information
© 1995 International Society of Optical Engineering (SPIE)
Terms of Use & License Information
Recommended Citation
Haskell, R. C., L. O. Svaasand, S. J. Madsen, F. E. Rojas, T. C. C. Feng, and B. J. Tromberg. "Phase velocity limit of high-frequency photon density waves." Conference Proceedings of Optical
Tomography, Photon Migration, And Spectroscopy Of Tissue And Model Media: Theory, Human Studies Instrumentation (San Jose, CA, 7 February 1995). Ed. Britton Chance and Robert R. Alfano. 284-290.
Complex Analysis
October 15th 2008, 04:33 PM #1
Complex Analysis
1. If $f$ is analytic in a closed bounded region $G$ and $f(z) \neq 0$ in $G$, show
that $|f|$ assumes its minimum value on the boundary of $G$.
Hint: consider $\frac{1}{f}$.
2. Use Problem 1 to prove the Fundamental Theorem of Algebra.
October 17th 2008, 11:44 AM #2
Global Moderator
Nov 2005
New York City
The minimum value of $|f|$ is the maximum value of $1/|f|$.
But $1/|f|$ assumes its maximum on the boundary by the maximum modulus principle.
October 17th 2008, 04:55 PM #3
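For Problem 2, one standard route is sketched below (my own addition; it may differ from the argument the textbook intends):

```latex
% Sketch: Problem 1 implies the Fundamental Theorem of Algebra.
% Suppose the nonconstant polynomial p has no zero in the complex plane.
% Problem 1 applied to the closed disk G_R = { z : |z| <= R } gives
\min_{|z| \le R} \lvert p(z) \rvert \;=\; \min_{|z| = R} \lvert p(z) \rvert .
% But |p(z)| -> infinity as |z| -> infinity, so for R large enough
\min_{|z| = R} \lvert p(z) \rvert \;>\; \lvert p(0) \rvert ,
% contradicting that the minimum over G_R is attained on |z| = R.
% Hence p must have a zero.
```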
NEWTON: Finite and Irrational Numbers
Name: Jed
Status: other
Grade: 12+
Location: MN
Country: USA
Date: Fall 2013
Question: Was reading the Pi and Finite questions and answers in the Mathematics Archive. Interesting as I have always pondered the same question. Most answers were leaning toward no
number is finite. Here is another perspective. Let us say you have on paper a given whole number which corresponds to the perimeter of a square. You make the square out of yarn; the exact
length is a whole number, and is known. You take the same yarn and make a perfect circle. Now you know the circle's circumference is an exact whole number. Why is it not possible to
mathematically come up with an exact whole number value as in the square? Or, if you make a perfect circle out of yarn, then open it up straight and measure it, why can math not
achieve the measured exact whole number?
Replies:
Hi Jed,
Thanks for the question. Let N be a whole number which corresponds to the perimeter of a square. Then N/4 is is the length of one side of the square. N/4 may or may not be a whole
number, but such a number exists. The length N can be the circumference of a circle. In terms of the radius of said circle, the relation is N = 2*pi*R. Since pi is irrational, R must
be irrational.
Let me state the equation for the circumference, C, of a circle in terms of the radius R: C = 2*pi*R. If R is irrational, say 1/pi, then C could be rational. You cannot have both C and
R being rational.
I hope this helps. Thanks Jeff Grell
I may be misinterpreting your question; you seem to be arguing that a circle cannot be an exact or whole number. That's not what pi is suggesting. Since pi is the ratio of the
circumference of a circle to its diameter, and pi being an irrational number, then either the circumference (C) or diameter (d), in the ratio of C/d = pi, can be exact numbers -- but
not both at the same time. So while C = 22 (exact), d is nearly exact at 7.002.... In your examples, you can get the circumference to be an exact number, but the diameter will turn out
to be inexact.
Greg (Roberto Gregorius) Canisius College
This is a problem of relating numbers to measurements. If you define a finite number as that which can be measured, then no number can truly be finite. A yardstick is composed of
atoms, and an atom does not have an exact size. In fact, the yardstick will be at a certain temperature, and the atoms within the yardstick will be randomly moving to some extent so
that those at the very edges will not be even fixed in location. You could conceivably get a very accurate length, down to the rough size of an atom (even below), but mathematics
requires perfect accuracy. Such accuracy does not exist in the physical world, and as such, represents the flaw in using the concept of a physical measurement for an abstract concept.
The Greeks recognized the flaw in relating numbers to lengths, but they could not resolve it. Consider, though, the problem like you stated it. Assume a rope has a finite length, say 5
meters long. We have to assume that it actually can be exactly 5 meters long. I have made the argument for physical reasons that it cannot, but let us consider a “perfect” rope exactly
of this length. The problem is not that we join the ends together into a circle; we would still assert that the circumference is a finite length. But the diameter of the circle will be
circumference/pi. If we take another rope and cut it to the length of the diameter of the circle, it cannot be of a finite length. We assert, though, that it must be because we cut the
rope to that length. Thus the paradox. The same problem happens with a right triangle. A right triangle, as you recall, has a 90 degree angle, and if its two sides are exactly 1 meter
long, its hypotenuse (diagonal) has a length of the square root of 2. Thus the two sides would be defined as finite, but the diagonal would not. Our perfect ropes we could cut to each
of these lengths and thus be finite in length. Again, this problem came from the Greek notion of relating the physical concept of measurement to the abstract concepts of both geometry
and numbers. Mathematicians have since realized that numbers are truly abstract, and when we relate numbers to the physical world, we can additionally have numbers that do not
correspond to the physical world in the exact sense. If we drop the concept of “finite” as being “measurable,” however, we can define finite as “bounded or limited in magnitude.” In
other words, “finite” is the opposite of “infinite.” In this sense, a circle’s circumference and diameter are finite but not both measurable at the same time.
Kyle Bunch, PhD, PE
You need to distinguish between “measuring” and “geometry”. In measuring, there is no such thing as an “exact number”. Putting the shape of the object aside, the distance between the beginning and the end of the distance between two points (which are also not “points” if your microscope is powerful enough) is at best the size of a few nanometers. Remember that the sheet of paper you use to draw the line is a mountain range on the microscopic scale. So as an object of “measurement” there is no such thing as a “perfect” anything.
The counterpoint is “geometric shape”. Geometrically a square (or rhombus) has four sides that are “exactly” the same length. This is a mental construction, not an engineering construction. The ratio of the opposite sides of a square or rhombus is EXACTLY UNITY (the mental construct).
In the case of a circle, the ratio of the circumference, C, and the diameter, D, (C/D), is the irrational number “pi”. It does not matter how accurate your measuring device is, “pi” can never be written as an “exact” number.
These are fundamentally different concepts: “physical measurement” and “mathematical concept”.
Vince Calder
Hi, Jed
Thanks for your question.
In a circle, the circumference = pi*D where D is the diameter. In the usual examples, the diameter might be a whole number and the circumference pi*D would be irrational.
But in your example, you are starting with a circumference as a whole number. There is nothing wrong with that. But now the diameter = cirumference/pi so it will be an irrational
number. In a circle, either the diameter or the circumference must be an irrational number. It is also possible that they can both be irrational.
I hope this helps. Best regards, Bob Zwicker
Jed, The circumference in that case can be an exact whole number, but the diameter and radius cannot. If the diameter is a finite (terminating) decimal, then the circumference is not. If the circumference is
a finite decimal, then the diameter is not. This is because pi, which is not a finite decimal, is circumference divided by diameter.
Dr. Ken Mellendorf Physics Instructor Illinois Central College
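A quick numerical illustration of the answers above (a Python sketch, not part of the original thread): pick a whole-number circumference, and the computed diameter C/pi is irrational, so any decimal we print is only an approximation.

```python
import math

# Choose a whole-number circumference, as in the question.
C = 10

# The diameter is then C / pi, which is irrational (pi is irrational,
# and a nonzero rational divided by an irrational is irrational).
D = C / math.pi
print(D)  # an approximation; the true value has no terminating decimal

# Multiplying back by pi recovers the circumference to floating-point accuracy.
print(D * math.pi)
```

The finite printout is a limitation of the representation, not of the circle: exactly one of C and D can be a terminating decimal.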
NEWTON is an electronic community for Science, Math, and Computer Science K-12 Educators, sponsored and operated by Argonne National Laboratory Educational Programs, Andrew Skipor, Ph.D., Head of Educational Programs.
For assistance with NEWTON contact a System Operator (help@newton.dep.anl.gov), or at Argonne's Educational Programs:
NEWTON AND ASK A SCIENTIST
Educational Programs
Building 223
9700 S. Cass Ave.
Argonne, Illinois 60439-4845, USA
Update: November 2011
Shaposhnikov Wetterich predicted 126 GeV Higgs in 2009
Peter Woit links to a talk by Joseph Lykken reviewing a number of non-susy approaches to explaining the tunedness of the Higgs mass. (Woit also links to a more theoretical talk by Nathan Seiberg about the hierarchy problem, which is also worth reading.)
It seems that causal explanations of the tuned Higgs, like Shaposhnikov-Wetterich and Nicolai-Meissner, are beginning to be recognized as a distinct class of theory, alongside "unnatural" and/or
anthropic finetuning (Arkani-Hamed) and new versions of SUSY which restore naturalness (numerous authors). This is heartening, and it's especially gratifying to see Lykken at the fore of this, since
it was his soundbite about the metastability of the universe, and the flurry of media it generated, which prompted my dismay in comment #92.
In fact, Lykken not only reviews several possibilities, but he devotes the most attention to a model in which
dark matter
plays a role in a Nicolai-Meissner-like mechanism. That is, he combines "radiative electroweak symmetry breaking" - in which the destabilizing Mexican-hat self-interaction of the Higgs field (that is
responsible for a ground state with a nonzero VEV, and thus for the Higgs mechanism) is induced by virtual effects - with high-energy boundary conditions that tune the resulting Higgs mass. In this
model, the new particle which induces radiative EWSB is also the dark matter!
So not only are causal models of Higgs tuning beginning to be recognized, but they are being combined with BSM facts from elsewhere in physics. Perhaps this will even become a popular topic while we
wait for the LHC to be switched on again...
What would really be dramatic is a model of a "causally tuned Higgs" which also explains the observation that the mass of the Higgs is half the sum of the Z, W+, and W- masses. Like the tuning of the Higgs mass, this isn't just something that was noticed after the discovery; it was actually used to predict the correct value. Unfortunately, the "theory" which produced that formula is , so the formula really needs some other justification.
Also, like the Koide relation, it's a relation between low-energy masses which shouldn't have simple relations, because of renormalization group running. (This may be contrasted with theories like
Shaposhnikov-Wetterich, where the low-energy Higgs mass acquires its value from a simple boundary condition at high energies.) So most physicists will dismiss it as numerology and a coincidence. But
as Lykken says in his talk (slide 20), "dismissing striking features of the data as coincidence has historically not been a winning strategy..."
Mplus Discussion >> Cutpoints in Monte Carlo Simulation
Sabrina Oesterle posted on Wednesday, October 20, 2004 - 1:58 pm
How do I specify a binary covariate in a Monte Carlo simulation that does not have a 50/50 split, but instead, for example, a 75/25 split?
Linda K. Muthen posted on Wednesday, October 20, 2004 - 2:16 pm
You use the CUTPOINT option of the MONTECARLO command. See Example 11.1 in the Mplus User's Guide. The value given is a z-score. So you use a z-score table to select the value that corresponds to a
75/25 split.
June Zhou posted on Thursday, February 07, 2013 - 3:17 pm
I have a similar question about generating a binary independent variable in a Monte Carlo simulation. I'd like to generate a "Gender" variable with population mean 0.5 and variance 0.25.
What code should I use? Thank you in advance.
Linda K. Muthen posted on Friday, February 08, 2013 - 11:56 am
The continuous variable you generate should have mean zero and variance 1. You should use a cutpoint of zero which cuts the sample 50/50 with a variance of .25.
Jamie Stagl posted on Wednesday, February 12, 2014 - 10:49 am
I am doing a Monte Carlo simulation of a growth model with 3 time points and a nominal predictor (3 intervention groups). I specified 2 binary dummy variables (x2 and x3) to represent my 3 groups.
What CUTPOINT value should I use for these 2 dummy variables, considering they are not a 50/50 split (1/3 of sample gets intervention A, 1/3 gets intervention B, and 1/3 gets intervention C)?
Thank you.
Bengt O. Muthen posted on Wednesday, February 12, 2014 - 4:07 pm
If you specify mean and variance = 0, 1 for the variable that you apply Cutpoints to, you can use a table for a standard normal distribution function to get the cutpoints. For an example, see 12.1.
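To illustrate the z-score lookup described here, the cutpoints can be computed directly from the inverse CDF of the standard normal distribution; a quick sketch in Python (using only the standard library, outside of Mplus):

```python
from statistics import NormalDist

std_normal = NormalDist(mu=0, sigma=1)

# 75/25 split on a binary covariate: cut the standard normal at its 75th percentile.
cut_75_25 = std_normal.inv_cdf(0.75)
print(round(cut_75_25, 4))  # 0.6745

# Three equal groups: cut at the 1/3 and 2/3 quantiles.
cut_lo = std_normal.inv_cdf(1 / 3)
cut_hi = std_normal.inv_cdf(2 / 3)
print(round(cut_lo, 4), round(cut_hi, 4))  # -0.4307 0.4307
```

These are the values one would otherwise read off a standard normal table, as in Example 12.1.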
Jamie Stagl posted on Sunday, February 16, 2014 - 9:06 pm
Thank you, that was very helpful. In general, we expect to see that 2 of the 3 groups do not change over the 3 time points, while the third group improves on the outcome. Would you say that the use
of 2 dummy variables is an accurate way to estimate the necessary sample size (does the simulation know there are 3 linked groups with these 2 dummy variables, or is it only doing a 2-group comparison)?
On a related note, would you suggest setting the slope growth factors at different values to reflect our hypothesis, and reference the power associated with the smaller parameter estimate to
determine sample size?
Bengt O. Muthen posted on Monday, February 17, 2014 - 2:24 pm
Actually, you are better off doing a multiple-group analysis with 3 groups, where you control the number of observations in the groups and have freedom to vary any parameter across the groups. So
don't use dummy variables.
Your hypothesis sounds like you would have the slope mean at zero in two of the groups.
ProTeacher Community - View Single Post - math game ideas
How to Play Multiplication 4 in a Row
This game is for two players. It is great for practicing those facts!
You need:
1 copy of the game board
2 paperclips
2 different kind of place markers (beans, color rocks, small game pieces)
The first player takes the two paper clips and places each of them on a number (2-9) at the bottom of the board. That player then says the multiplication sentence (ex. 4 X 7 = 28) and places one of
their markers on the product.
The second player then gets to move only one of the paperclips to make a new multiplication sentence. That player calls out the multiplication sentence and puts one of their markers on the product.
Play continues as players move one of the paperclips making new multiplication sentences, calling out the fact, and marking the product. The first player with 4 of their markers in a row (up, down,
or diagonal) wins.
If someone’s marker is already on the product, you must come up with a new multiplication sentence.
Paperclips can be stacked on top of each other for number sentences such as 8X8.
If all products are covered moving only one paperclip, both may be picked up and placed on new numbers.
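The game board simply contains the possible products of the factors 2-9. A short Python sketch (not part of the original instructions) enumerates them, which is handy if you are making your own board:

```python
# Enumerate the distinct products reachable with two paperclips on 2-9.
factors = range(2, 10)
products = sorted({a * b for a in factors for b in factors})

print(len(products))  # 31 distinct products fit on the board
print(products[:8])   # [4, 6, 8, 9, 10, 12, 14, 15]
assert 28 in products  # the 4 X 7 = 28 example from the rules
```

Because several factor pairs share a product (e.g. 3 X 4 = 2 X 6), only 31 squares are needed even though there are 64 paperclip placements.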
Adsorption Isotherms via Monte-Carlo-Simulations
Hi there,
I'm having trouble understanding how adsorption isotherms are determined via Monte-Carlo simulations.
What I've learned so far is:
- you do a "typical" Monte-Carlo run with translations, rotations and insertions/deletions
- the insertion/deletion probability is (mostly) determined by a fixed value for the chemical potential
- with this fixed value you eventually get an average number of sorbate particles, so that you could theoretically (after some more simulations for other chem. potentials) plot the sorbate loading
in dependence of the chemical potential.
But how do I get to the pressure corresponding to that chemical potential now?
I always thought one assumes that the system is in (fictitious) contact with an ideal gas, and because that gas must have the same chemical potential in equilibrium, you can calculate the pressure.
But the formula for that is:
[tex]\mu = \mu_{0} + RT \ln\left(\frac{p}{p_{0}}\right)[/tex]
with some unknown "reference" pressures and potentials...how do I obtain them?
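For what it's worth, once a reference state is fixed the ideal-gas relation above inverts directly; a Python sketch (the reference state mu0, p0 here is an assumption, e.g. 1 bar at the simulation temperature, not something the formula itself provides):

```python
import math

R = 8.314462618  # J/(mol K), gas constant
T = 300.0        # K, simulation temperature (assumed)

# Assumed reference state: mu0 defined at p0 = 1 bar.
p0 = 1.0e5       # Pa
mu0 = 0.0        # J/mol, chemical potential at the reference state (by convention)

def pressure_from_mu(mu):
    """Invert mu = mu0 + R*T*ln(p/p0) for the ideal-gas reservoir pressure."""
    return p0 * math.exp((mu - mu0) / (R * T))

# Example: a chemical potential 5 kJ/mol above the reference state.
print(round(pressure_from_mu(5000.0), 1))  # in Pa
```

The physics question of how mu0 and p0 are chosen (i.e. which reference state the simulation's chemical potential refers to) is exactly what the post is asking; the code only shows the algebraic inversion.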
greetings angu
5.2. Some Operations on Relations
Definition 5.6 (Inverse Relation)
If R: X → Y is a relation, then the inverse relation R~: Y → X is defined by R~ = { (y,x) | (x,y) ∈ R }.
Consequently, xRy if and only if yR~x.
Definition 5.7 (Composition of R and S)
Let R: X → Y and S: Y → Z be relations. The composition of R and S, denoted by R O S, contains the pairs (x,z) if and only if there is an intermediate object y such that (x,y) ∈ R and (y,z) ∈ S:
x(R O S)z if and only if there exists y with xRy and ySz.
□ There are five people: A, B, C, D and E. C owns a motorbike called Speedy and E owns a motorbike called Slow. A has the friends B and D, B has the friend C and C has the friend E.
Let R be the relation x has a friend y and let S be the relation y owns a motorbike.
Find the relation R O S.
R: { (x,y) | x has a friend y } with x, y ∈ { A, B, C, D, E }
R = { (A,B), (A,D), (B,C), (C,E) }
S: { (y,m) | y owns motorbike m }
S = { (C, Speedy), (E, Slow) }
So we know: R O S = { (B, Speedy), (C, Slow) }
A graphical representation of the composition
If "x has a friend y" also means "y has a friend x", calculate R O S.
Definition 5.8 (Associative Operations)
If R, S, and P are three relations, then the following holds:
( R O S ) O P = R O ( S O P )
R = { (x1, y3), (x2, y1), (x3, y4) }
S = { (y1, z4), (y2, z3), (y4, z1) }
R O S = { (x2, z4), (x3, z1) }
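The definition translates directly into a set comprehension; a small Python sketch (not from the original notes) reproduces both examples:

```python
def compose(R, S):
    """Composition R O S: pairs (x, z) for which there is an intermediate y
    with (x, y) in R and (y, z) in S."""
    return {(x, z) for (x, y1) in R for (y2, z) in S if y1 == y2}

R = {("x1", "y3"), ("x2", "y1"), ("x3", "y4")}
S = {("y1", "z4"), ("y2", "z3"), ("y4", "z1")}
print(sorted(compose(R, S)))  # [('x2', 'z4'), ('x3', 'z1')]

# The friends/motorbikes example:
friends = {("A", "B"), ("A", "D"), ("B", "C"), ("C", "E")}
owns = {("C", "Speedy"), ("E", "Slow")}
print(sorted(compose(friends, owns)))  # [('B', 'Speedy'), ('C', 'Slow')]
```

Note that (x1, y3) contributes nothing because y3 never appears as a first component in S, matching the worked example.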
Created by unroff & hp-tools. © by Hans-Peter Bischof. All Rights Reserved (1998).
Last modified: 27/July/98 (12:14)
Rotation Bodies - rolling cylinders
November 12th 2009, 01:28 PM #1
Junior Member
Nov 2009
Rotation Bodies - rolling cylinders
Every year at Coopers Hill in Gloucestershire (UK) there is a cheese rolling festival where people race down the hill after a rolling disc of Gloucestershire cheese, the winner getting to keep
the cheese. (No, I'm not kidding look here. It seems there will even be an event in Whistler, BC next year if you are really interested.)
Given that the moment of inertia of a disc of mass m and radius r about its axis of symmetry is (1/2)mr^2, what is the speed of the cheese's centre of mass after it has descended a vertical distance of
17.0 m from the top of the hill? Assume that the cheese rolls without slipping.
[Acceleration due to gravity, g=9.81 ms-2]
[Specify your answer in units of 'm/s' (without quotes)]
Last edited by mr fantastic; November 17th 2009 at 03:08 PM. Reason: Changed post title
use energy principles ...
$mgh = \frac{1}{2}mv^2 + \frac{1}{2}I \omega^2$
remember that if the disk rolls w/o slipping, $v = r\omega$
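Carrying the energy-principle hints through (a sketch, not from the thread): substituting I = (1/2)mr^2 and omega = v/r, both m and r cancel, leaving mgh = (3/4)mv^2, i.e. v = sqrt(4gh/3). Numerically, in Python:

```python
import math

g = 9.81   # m/s^2
h = 17.0   # m, vertical drop

# mgh = (1/2)mv^2 + (1/2)(1/2 m r^2)(v/r)^2 = (3/4) m v^2 -- m and r cancel
v = math.sqrt(4 * g * h / 3)
print(round(v, 2))  # 14.91 m/s
```

Note that the radius cancels, which is why the problem does not need to supply it.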
hey, how would we get the answer of this question without knowing the radius?
st: ado coding help needed
st: ado coding help needed
From "Rajesh Tharyan" <R.Tharyan@exeter.ac.uk>
To <statalist@hsphsun2.harvard.edu>
Subject st: ado coding help needed
Date Thu, 6 Mar 2008 10:24:53 -0000
Following the recent discussion on the bootstrapped skewness-adjusted t statistic, this is my attempt at a program to implement it. The following ado calculates the skewness adjusted t statistic based on Johnson (1978), made very popular in the finance area by LBT (1999). As I mentioned in an earlier post, there is an ado called Johnson which implements this test. But somehow the skewness adjusted t stat values are different when I use that program.
I have double checked the calculation for this by manually calculating the
skewness adjusted t-stats.
I have two programs: one to calculate the skewness adjusted t stats (rtskew.ado) and the other to do the bootstrap (skewt.ado).
Could someone please tell me if I can do this with one program, and how?
Ideally what I would want is for the user to say
. skewt varname
And the varname feeds into the rtskew program.
In the following line in the skewt program
bootstrap r(ratio), saving(C:\mydata, replace) reps(1000) size(int(_N/4)):
is there a way to say something like (int(_N/`x')), and get the value of x
from the user.
My programs follow...
capture program drop rtskew
program define rtskew, rclass
mac def S_1 = . /* the skewness adjusted t statistic */
foreach var of local 0 {
capture confirm numeric variable `var'
if _rc==0 {
drop if `var'==.
quietly sum `var', detail
local n = sqrt(r(N))
local u = r(mean)
local v = r(sd)
local g = r(skewness)
local s = r(mean)/r(sd)
di in gr _col(20) "stats from the sample"
di ""
di in gr _col(20) "N coefficient = `n'"
di in gr _col(20) "S-coefficient = `s'"
di in gr _col(20) "G-coefficient = `g'"
di in gr _col(20) "Sample mean = `u'"
mac def S_1 = (`n') * ((`s') + ((1/3) * (`g') * ((`s')^2)) + ((1/(6*((`n')^2))) * (`g')))
}
else {
di "`var' is not a numeric variable; the skewness adjusted t-statistic cannot be calculated"
}
}
return scalar ratio = $S_1
end
To bootstrap the t statistic I need another program
capture program drop skewt
program define skewt, rclass
rtskew mpg
return scalar ratio = $S_1
end

bootstrap r(ratio), saving(C:\mydata, replace) reps(1000) size(int(_N)): skewt
estat bootstrap, all
use C:\mydata, clear
histogram _bs_1
centile _bs_1, centile(.5, 99.5, 2.5, 97.5, 5, 95)
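For cross-checking, the Johnson (1978) statistic computed by rtskew is easy to reproduce outside Stata. A minimal Python sketch (assuming, as I believe Stata's summarize does, the sample standard deviation for r(sd) and the moment-based skewness m3/m2^1.5 for r(skewness)):

```python
import math

def skew_adjusted_t(data):
    """Johnson (1978) skewness-adjusted t statistic for H0: mean = 0.

    t = sqrt(N) * (S + g*S^2/3 + g/(6N)), with S = mean/sd and g the
    moment-based skewness coefficient.
    """
    N = len(data)
    mean = sum(data) / N
    # Sample standard deviation (divisor N - 1).
    sd = math.sqrt(sum((x - mean) ** 2 for x in data) / (N - 1))
    # Moment-based skewness m3 / m2^(3/2).
    m2 = sum((x - mean) ** 2 for x in data) / N
    m3 = sum((x - mean) ** 3 for x in data) / N
    g = m3 / m2 ** 1.5
    S = mean / sd
    return math.sqrt(N) * (S + g * S ** 2 / 3 + g / (6 * N))

print(round(skew_adjusted_t([1, 2, 3, 4, 10]), 4))  # 3.7008
```

For symmetric data the skewness correction vanishes and the statistic reduces to the ordinary one-sample t ratio.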
Thank you very much
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
Quaternions and spatial rotation
Quaternions are used in computer graphics and related fields because they allow for compact representations of rotations in 3-d space. This will be explained in this article.
Every quaternion z = a + bi + cj + dk can be viewed as a sum a + u of a real number a (called the "real part" of the quaternion) and a 3-vector u = (b, c, d) = bi + cj + dk in R^3 (called the
"imaginary part"). In this view, quaternions are "mixed sums" of scalars and 3-vectors and the quaternions i, j and k correspond to the unit vectors i, j and k.
Two such quaternions are added by adding the real parts and the imaginary parts separately:
(a + u) + (b + v) = (a + b) + (u + v)
The multiplication of quaternions translates into the following rule:
(a + u) (b + v) = (ab - <u,v>) + (av + bu + u×v)
Here, <u,v> denotes the scalar product and u×v the vector product of u and v.
This formula shows that two quaternions z and w commute, i.e. zw = wz, if and only if their imaginary parts are real multiples of each other (because in this case the vector product of these
imaginary parts will commute).
It is well known that the vector product is related to rotation in space. The goal then is to find a formula which expresses rotation in 3-d space using quaternion multiplication, similar to the
formula for a rotation in 2-d using complex multiplication:
f(w) = zw, with z = exp(iα), is used for rotation by an angle α.
The formula in 3-d cannot be a simple multiplication with a quaternion, because multiplying a vector with a non-trivial quaternion yields a result with non-zero real part, and thus not a vector.
Rotating a vector should yield a vector however.
It turns out that we can cancel the real part if we multiply by a quaternion from one side and with its inverse from the other side. Let z = a + u be a non-zero quaternion, and consider the function
f(v) = z v z^-1
where z^-1 is the multiplicative inverse of z and v is a vector, considered as a quaternion with zero real part. The function f is known as conjugation by z. Note that the real part of f(v) is zero, because in general zw and wz have the same real part for any quaternions z and w, and so
Re(z v z^-1) = Re(v z^-1 z) = Re(v 1) = 0.
Furthermore, f is a rotation, and we have f(v) = v if and only if v and the imaginary part u of z are real multiples of each other (because f(v) = v is equivalent to z v = v z). Hence f is a rotation whose axis of rotation passes through the origin and is given by the real multiples of u.
Note that conjugation with z is the same as conjugation with rz for any real number r. We can thus restrict our attention to the quaternions of absolute value 1, the so-called unit-quaternions. (The
absolute value |z| of the quaternions z = a + v is defined as the square root of a^2 + ||v||^2. It is multiplicative: |zw| = |z| |w|.) Inverting unit quaternions is especially easy: if |z| = 1, then
z^-1 = z^* (the conjugate z^* of the quaternion z = a + v is defined as z^* = a - v) and this makes our rotation formula even easier.
It turns out that the angle of rotation α is also easy to read off if we are dealing with a unit quaternion z = a + v: we have cos(α/2) = a. To summarize:
A rotation in 3-d with axis v and angle α can be represented as conjugation with the unit quaternion z = cos(α/2) + sin(α/2)v/||v|| (or with any real multiple of z).
The composition of two rotations corresponds to quaternion multiplication: if one rotation is represented by conjugation with the quaternion z and another rotation is represented by conjugation with w, then their composition (first the z-rotation, then the w-rotation) is represented by conjugation with the product wz.
If one wishes to rotate about an axis that doesn't pass through the origin, then one first translates the vectors into the origin, conjugates, and translates back.
An example
Let us consider the rotation f around the axis u = i + j + k, with an angle of 120°, i.e. 2π/3 radians.
The length of u is √3, the half angle is π/3 with cosine 1/2 and sine (√3)/2. We are therefore dealing with a conjugation by the unit quaternion z = (1 + i + j + k)/2.
f(ai + bj + ck) = z (ai + bj + ck) z^*
which using the ordinary rules for quaternion arithmetic can be simplified to
ci + aj + bk
as expected.
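The 120° example can be checked numerically. A short Python sketch (not part of the original article) implementing quaternion multiplication and conjugation directly from the rules above:

```python
def qmul(p, q):
    """Hamilton product of quaternions given as (a, b, c, d) = a + bi + cj + dk."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def rotate(z, v):
    """Conjugation f(v) = z v z*; for a unit quaternion z, z^-1 = z*."""
    a, b, c, d = z
    z_conj = (a, -b, -c, -d)
    vq = (0.0, *v)  # embed the vector as a pure quaternion
    _, x, y, w = qmul(qmul(z, vq), z_conj)
    return (x, y, w)

# z = (1 + i + j + k)/2: 120 degrees about the axis i + j + k.
z = (0.5, 0.5, 0.5, 0.5)
print(rotate(z, (1.0, 2.0, 3.0)))  # (3.0, 1.0, 2.0): ai+bj+ck -> ci+aj+bk
```

As a sanity check, any vector along the rotation axis i + j + k is left fixed by this conjugation.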
Quaternions vs. other representations of rotations
The representation of a rotation as a quaternion (4 numbers) is more compact than the representation as a matrix (9 numbers). Furthermore, for a given axis and angle, one can easily construct the
corresponding quaternion, and conversely, for a given quaternion one can easily read off the axis and the angle. Both of these are much harder with matrices or Euler angles.
In computer games and other applications, one is often interested in "smooth rotations", meaning that the scene should slowly rotate and not in a single step. This can be accomplished by choosing a
curve in the quaternions, with one endpoint being the identity transformation 1 and the other being the intended total rotation. This is more problematic with other representations of rotations.
When composing several rotations on a computer, rounding errors necessarily accumulate. A quaternion that's slightly off still represents a rotation -- a matrix that's slightly off need not be
orthogonal anymore and therefore need not represent a rotation at all. It is hard to turn such a matrix back into a proper orthogonal one.
How to Measure Gallons per Minute
Measuring gallons per minute is straightforward, though the calculation required depends on the object for which the flow rate must be determined. Essentially, gallons per minute (GPM) is a unit of volumetric flow
rate. Flow through circular pipes, orifice plates and venturi meters must often be calculated for engineering applications. Typically GPM is determined using Bernoulli's
equation. The steps involved in obtaining the GPM value are relatively simple.
Things required :
- Calculator or Microsoft Excel spreadsheet
- Physical data for the instrument for which GPM is to be calculated
- Application data for the instrument
- Graph paper
• 1
Firstly, determine the flow measurement application. For calculating the GPM using differential pressure in a pipe section, consider getting the physical data for the pipe. Once you have obtained the
data, you should determine the internal and external pressure. Internal pressure is the pressure inside the pipe, while external pressure is the environment pressure. Any liquid
in the pipe moves due to a pressure differential, and if you can determine that difference, it becomes very easy to find the flow rate using Bernoulli's equation.
• 2
Pressure differential can be determined by dividing the elevation head by the external (static head) pressure, then subtracting this value from the initial pressure to find the pressure
difference over the length of the selected section of pipe. Now calculate the flow rate in gallons per minute by dividing the differential pressure by the static pressure. Use the
Darcy-Weisbach equation to obtain tabular data for pressure variations inside the pipe.
• 3
Calculating the pressure differential through an orifice plate is also straightforward. Determine the pressure before and after the orifice disk. When water or any other liquid moves
through the orifice plate, the velocity increases considerably while the pressure drops significantly. Use the pressure differential with Bernoulli's equation to calculate the actual flow
through the pipe. If your final answer is not in gallons-per-minute units, then use unit conversions to obtain it.
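As a concrete illustration of this step (a sketch with assumed values, not from the original article: a typical sharp-edged orifice discharge coefficient Cd ≈ 0.61, water density 1000 kg/m^3), the standard orifice equation Q = Cd·A·sqrt(2·ΔP/ρ) gives the volumetric flow, which is then converted to gallons per minute:

```python
import math

# Assumed example values
dP = 10_000.0  # Pa, measured pressure drop across the orifice
rho = 1000.0   # kg/m^3, water
d = 0.02       # m, orifice diameter
Cd = 0.61      # discharge coefficient for a sharp-edged orifice (typical)

A = math.pi * (d / 2) ** 2            # orifice area, m^2
Q = Cd * A * math.sqrt(2 * dP / rho)  # volumetric flow, m^3/s

# Convert m^3/s -> US gallons per minute (1 m^3 = 264.172 US gal)
gpm = Q * 264.172 * 60
print(round(gpm, 1))  # about 13.6 GPM
```

The last line is the unit conversion the article mentions: the physics gives SI units, and only a constant factor is needed to report GPM.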
• 4
It is recommended to use engineering books to become familiar with the equations and quantities used for flow rate calculations in pipe shaped objects. Remember to study as much as you can to
help you understand how to apply these measurements and their overall significance.
Homework Help
Posted by Cassidy on Friday, June 5, 2009 at 9:32am.
I'm having a hard time figuring out a quadratic model for this data.
x y
How do I go about doing this?
Is there a website that could help me?
• Algebra II - MathMate, Friday, June 5, 2009 at 10:00am
First show that this is a quadratic relationship by taking differences and second differences:
y dy d^2y
If the second difference is constant, it is a quadratic relationship.
Assume the quadratic model as:
f(x)=ax^2 + bx + c
Thus, for unit-spaced x values, the second difference = 2a
By substituting values of x and a into one of the formulas, b can be found
(for example, using x=1).
Substituting a and b in f(1) then gives c.
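The original data table did not survive, but the procedure is easy to illustrate with made-up points sampled from f(x) = 2x^2 + 3x + 1 at x = 1..5 (a Python sketch of the difference method; the data here are hypothetical):

```python
xs = [1, 2, 3, 4, 5]
ys = [6, 15, 28, 45, 66]  # values of f(x) = 2x^2 + 3x + 1 (hypothetical data)

dy = [b - a for a, b in zip(ys, ys[1:])]    # first differences
d2y = [b - a for a, b in zip(dy, dy[1:])]   # second differences

# Constant second difference => quadratic; for unit-spaced x it equals 2a.
assert all(d == d2y[0] for d in d2y)
a = d2y[0] / 2

# For unit spacing, f(2) - f(1) = 3a + b, so b follows from the first difference.
b = dy[0] - 3 * a
# Finally c from f(1) = a + b + c.
c = ys[0] - a - b

print(a, b, c)  # 2.0 3.0 1.0
```

The same three steps (check d2y is constant, read off a, back-substitute for b and c) apply to any unit-spaced data set.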
Variance, Mean, Normalizing Functions, Euclidean And Other Distances
Here sum, mean and variance were inspired by Peter's inline sum code:
class Array; def sum; inject( nil ) { |sum,x| sum ? sum+x : x }; end; end
class Array; def mean; self.sum/self.size.to_f; end; end
# NB: as defined, this actually returns the population standard deviation (the square root of the variance)
class Array; def variance; mean = self.mean; Math.sqrt(inject( nil ) { |var,x| var ? var+((x-mean)**2) : ((x-mean)**2)}/self.size.to_f); end; end
If you want to normalize a random variable (array) so that mean = 0 and variance = 1, you can transform your array <b>x</b> by calling:
# inputs a random variable, sets mean = 0 and variance = 1
def standardize_random_variable(x)
mean = x.mean
variance = x.variance
x.map!{|a| (a-mean)/variance }
If you want to compute distance, call these functions between two arrays of data, a and b.
## Distance Functions

# Sum of (x-y)^2
def euclidean_squared_distance(a, b)
  a = a.to_a
  b = b.to_a
  sum_of_diff_sq = 0
  (0...a.size).each { |i| sum_of_diff_sq += ((a[i].to_f - b[i].to_f)**2) }
  sum_of_diff_sq
end

# Square root of sum of (x-y)^2
def euclidean_distance(neighbor, xq)
  Math.sqrt(euclidean_squared_distance(neighbor, xq))
end

# Sum of abs(x-y)
def cityblock_distance(neighbor, xq)
  xq = xq.to_a
  neighbor = neighbor.to_a
  abs_diff = 0
  (0...xq.size).each { |i| abs_diff += (xq[i].to_f - neighbor[i].to_f).abs }
  abs_diff
end
Snippets Manager replied on Sat, 2007/12/08 - 7:18pm
NOTE: the variance function in fact returns the standard deviation (sqrt of the variance)!!! | {"url":"http://www.dzone.com/snippets/variance-mean-normalizing","timestamp":"2014-04-18T14:10:48Z","content_type":null,"content_length":"57241","record_id":"<urn:uuid:6afcae40-92b8-4ce0-aaf2-893f0513dc8e>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00273-ip-10-147-4-33.ec2.internal.warc.gz"} |
Summary: A self-similar tiling generated by the minimal Pisot number
Shigeki Akiyama, Taizo Sadahiro
Let β be a Pisot unit of degree 3 with a certain finiteness condition. A large family of self-similar plane tilings can be constructed by the digit expansion in base β (cf. [7], [5], [8]). In
this paper, we prove that the origin is an inner point of the central tile K. Further, in the case corresponding to the minimal Pisot number, we give a detailed study of the fractal
boundary of each tile. Namely, a sufficient condition for "adjacency" of tiles is given and the "vertex" of a tile is determined. Finally, we prove that the boundary of each tile is a union of
5 self-similar sets of Hausdorff dimension 1.10026...
1991 Mathematics Classification. Primary 11A68, 11R06
Key words and phrases. Fractal, Plane Tiling, Pisot number.
1 Plane tiling and Pisot numeration system
Let β > 1 be a real number. A representation in base β (or a β-representation) of a real number x ≥ 0 is an infinite sequence (x_i)_{k ≥ i > −∞}, x_i ≥ 0, such that
x = x_k β^k + x_{k−1} β^{k−1} + ⋯ + x_1 β + x_0 + x_{−1} β^{−1} + x_{−2} β^{−2} + ⋯
for a certain integer k ≥ 0. It is denoted by
x = x_k x_{k−1} ⋯ x_1 x_0 . x_{−1} x_{−2} ⋯
A particular β-representation -- called the β-expansion -- can be computed by the 'greedy algorithm':
Denote by [y] and {y} the integer part and the fractional part of y. There exists k ∈ Z such that
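The greedy algorithm just mentioned is short enough to sketch in code. A Python illustration (using the golden ratio as the base β rather than the minimal Pisot number, purely because the resulting digit pattern is easy to recognize):

```python
# Greedy beta-expansion of x in [0, 1): repeatedly multiply by beta and
# split off the integer part as the next digit.
def beta_expansion(x, beta, ndigits):
    digits = []
    for _ in range(ndigits):
        x *= beta
        d = int(x)      # greedy digit: the integer part
        digits.append(d)
        x -= d          # keep the fractional part
    return digits

beta = (1 + 5 ** 0.5) / 2            # golden ratio, a Pisot unit
digits = beta_expansion(0.5, beta, 30)
print(digits[:9])                    # [0, 1, 0, 0, 1, 0, 0, 1, 0]

# Reconstruct x = sum of x_{-i} * beta^{-i}; 30 digits give ~1e-6 accuracy.
approx = sum(d * beta ** -(i + 1) for i, d in enumerate(digits))
print(round(approx, 6))  # 0.5
```

Each digit is strictly less than β (here 0 or 1), which is the defining property of the greedy expansion.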
Nonlinear Bayesian Tracking Loops for Multipath Mitigation
International Journal of Navigation and Observation
Volume 2012 (2012), Article ID 359128, 15 pages
Research Article
Nonlinear Bayesian Tracking Loops for Multipath Mitigation
^1Centre Tecnològic de Telecomunicacions de Catalunya (CTTC), Parc Mediterrani de la Tecnologia, Avenida Carl Friedrich Gauss 7, Barcelona, 08860 Castelldefels, Spain
^2DEIMOS Space S.L.U, Ronda de Poniente 19, Tres Cantos 28760, Madrid, Spain
Received 28 February 2012; Revised 22 June 2012; Accepted 3 July 2012
Academic Editor: Maarten Uijt de Haag
Copyright © 2012 Pau Closas et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
This paper studies Bayesian filtering techniques applied to the design of advanced delay tracking loops with multipath mitigation capabilities in GNSS receivers. The analysis includes a tradeoff among realistic propagation channel models and makes use of a realistic simulation framework. After establishing the mathematical framework for the design and analysis of tracking loops in the context of GNSS receivers, we propose a filtering technique that implements Rao-Blackwellization of the linear states and a particle filter for the nonlinear partition, and compare it to traditional delay lock loop/phase lock loop-based schemes.
1. Introduction
Global Navigation Satellite Systems (GNSS) are the general concept used to identify those systems that allow user positioning based on a constellation of satellites. Specific GNSS are the well-known
American GPS, the Russian GLONASS, or the forthcoming European Galileo. All those systems rely on the same principle: the user computes its position by means of measured distances between the
receiver and the set of in-view satellites. These distances are calculated estimating the propagation time that synchronously transmitted signals take from each satellite to the receiver. Therefore,
GNSS receivers are only interested in estimating the delays of signals which are received directly from the satellites, referred to as line-of-sight signal (LOSS), since they are the ones that carry
information on the direct propagation time. In contrast, reflections distort the received signal in a way that may cause a bias in delay and carrier-phase estimations. Multipath is probably the dominant source of error in high-precision applications, especially in urban scenarios, since it can introduce a bias of up to a hundred meters when employing a 1-chip wide (standard) delay lock loop (DLL), a common synchronization method used in spread-spectrum receivers, to track the delay of the LOSS. This error might be unacceptable in many applications.
Sophisticated synchronization techniques estimate not only LOSS parameters but those of multipath echoes. This results in enhanced, virtually bias-free pseudorange measurements. In this paper, we
investigate multipath estimating tracking loops in realistic scenarios, where this effect is known to be severe. The analysis is driven in two directions. Firstly, a review of the statistical characterization of the channel model in such situations is performed, and a commercial signal simulator is employed. Secondly, a novel multipath estimating tracking loop is discussed, providing details on the
implementation, as well as comparisons to state-of-the-art techniques when different channel characteristics are considered. This tracking loop resorts to the Bayesian nonlinear filtering framework,
sequentially estimating the unknown states of the system (i.e., parameters of the LOSS and echoes) and providing robust pseudorange estimates, subsequently used in the positioning solution. The
so-called multipath estimating particle filter (MEPF) considers Rao-Blackwellization of signal amplitudes and the use of a suitable nonlinear filter for the rest of nonlinear states, for example,
time-delays and their rate. More precisely, Rao-Blackwellization involves marginalization of linear states and the use of a standard Kalman filter to track signal amplitudes with the goal of reducing
the estimation variance, since (i) the dimensionality of the problem that nonlinear filters solve is reduced and (ii) linear states are optimally tackled. For the nonlinear part of the state space we
consider sequential Monte-Carlo methods (specifically, the standard particle filtering) as one of the most promising alternatives in advanced GNSS receiver designs. Realistic computer simulation
results are presented using the GRANADA FCM signal simulator and the performance of the MEPF is evaluated.
The remainder of the paper is organized as follows. Section 2.1 provides a brief overview of the fundamentals of GNSS, their signal structure, available channel models, and receivers’ architecture
and describes a realistic simulation platform. Section 3 sketches the basics of particle filters, and Section 4 is devoted to their application to GNSS signal synchronization in the presence of
multipath. Section 5 presents computer simulations, and finally Section 6 concludes the paper. For the sake of completeness, the paper shows in the Appendix the equivalence between precorrelation and
postcorrelation processing of GNSS signals. Notice that in this paper, the MEPF method operates after correlation is performed in order to operate at a lower data rate.
2. Fundamentals of Global Navigation Satellite Systems
GNSS space vehicles broadcast a low-rate navigation message that modulates continuous repetitions of pseudorandom spreading codes, which in turn modulate a carrier signal allocated in the L
band. The navigation message, after proper demodulation, contains among other information the so-called ephemeris, a set of parameters that allow the computation of the satellite position at any
time. These positions, along with the corresponding distance estimations, allow the receiver to compute its own position and time, as we will see hereafter. Basically, a GNSS receiver performs
trilateration, a method for determining the intersections of three or more sphere surfaces given the centers and radii of the spheres. In this case, the centers of the spheres are the satellites,
whose position can be computed from the navigation message, and the radii of the spheres are the distances between the satellites and the receiver, estimated from the time of flight.
The distance between the receiver and a given satellite can be computed as d = c (t_u − t^(s)), where c ≈ 3·10^8 m/s is the speed of light, t_u is the receiving time in the receiver's clock, and t^(s) the time of transmission for satellite s. Receiver clocks are inexpensive and not perfectly in sync with the satellite clock, and thus this time deviation is another variable to be estimated. The clocks on all of the satellites belonging to the same system are in sync with each other, so the receiver's clock will be out of sync with all satellites belonging to the same constellation by the same amount. In GNSS, the term pseudorange is used to identify a range affected by a bias, directly related to the bias between the receiver and satellite clocks. There are other sources of error: since propagation at speed c is only possible in a vacuum, the atmospheric state affects the propagation speed of electromagnetic waves, modifying the propagation time and thus the distance estimation. For instance, the ionosphere, the ionized region of the upper atmosphere, is a plasmatic medium that causes a slowdown of the group velocity and a speedup of the phase velocity, having an impact on code and phase delays and thus impeding precise navigation when its effects are not mitigated. Actually, errors can be on the order of tens of meters in geomagnetic storm episodes [1].
For each in-view satellite s of system S, we can write ρ^(s) = ‖x^(s) − x‖ + c δt_S + ε^(s) (2), where x^(s) is the satellite's position (known from the navigation message), x the receiver's position, and ε^(s) gathers other sources of error. Since the receiver needs to estimate its own 3D position (three spatial unknowns) and its clock deviation with respect to the satellites' time basis, at least 3 + N satellites must be seen by the receiver at the same time, where N is the number of different navigation systems available (in-view) at a given time. Each received satellite signal, once synchronized and demodulated at the receiver, defines one
equation such as the one defined in (2), forming a set of nonlinear equations that can be solved algebraically by means of the Bancroft algorithm [2] or numerically, resorting to multidimensional
Newton-Raphson and weighted least square methods [3]. When a priori information is added we resort to Bayesian estimation, a problem that can be solved recursively by a Kalman filter or any of its
variants. The problem can be further expanded by adding other unknowns (for instance, parameters of ionospheric and tropospheric models), sources of information from other systems, mapping
information, and even motion models of the receiver. In the design of multi-constellation GNSS receivers, the vector of unknowns can also include the receiver clock offset with respect to each system
in order to take advantage of a higher number of in-view satellites and using them jointly in the navigation solution, therefore increasing accuracy.
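As a numerical sketch of the least-squares navigation solution described above, the following Python snippet solves the pseudorange equations by Gauss-Newton iteration for a single constellation (one clock bias). The satellite geometry, receiver position, and clock bias are made-up illustrative values, not data from the paper.

```python
import numpy as np

C = 299_792_458.0  # speed of light [m/s]

def solve_position(sat_pos, pseudoranges, iters=10):
    """Gauss-Newton solution of rho_i = ||p_i - x|| + b, with b = c * (clock bias)."""
    x = np.zeros(4)  # state [x, y, z, b], initialized at Earth's center
    for _ in range(iters):
        d = np.linalg.norm(sat_pos - x[:3], axis=1)   # geometric ranges
        residual = pseudoranges - (d + x[3])          # measurement residuals
        # Jacobian rows: negated unit line-of-sight vectors, plus clock column
        H = np.hstack([-(sat_pos - x[:3]) / d[:, None], np.ones((len(d), 1))])
        x += np.linalg.lstsq(H, residual, rcond=None)[0]
    return x

# Hypothetical geometry (meters): five satellites at GNSS-like altitudes
sats = np.array([[15e6, 0, 15e6], [-12e6, 9e6, 13e6], [4e6, -14e6, 15e6],
                 [2e6, 18e6, 10e6], [-5e6, -6e6, 20e6]], dtype=float)
true_x = np.array([1e6, 2e6, 6.37e6])   # receiver position (near Earth surface)
true_bias = 300.0                       # clock bias expressed in meters
rho = np.linalg.norm(sats - true_x, axis=1) + true_bias

est = solve_position(sats, rho)
print(np.round(est, 3))
```

With noiseless measurements the iteration recovers the receiver position and clock bias to numerical precision; with real pseudoranges the same scheme becomes the weighted least-squares solver mentioned in the text.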
2.1. Signal Model
A general signal model for most navigation systems consists of a direct-sequence spread-spectrum (DS-SS) signal [4], synchronously transmitted by all the satellites in the constellation. This type of
signals enables code division multiple access (CDMA) transmissions; that is, satellite signals are distinguished by orthogonal (or quasi-orthogonal) codes. At a glance, these signals consist of two main components: a ranging code (the PRN spreading sequence) and a low-rate data link (broadcasting information necessary for positioning, such as satellite orbital parameters and corrections). The
complex baseband model of the signal transmitted by a GNSS space vehicle is built from the following parameters: the transmitting power; a parameter controlling the power balance; the data symbols; the bit period; the number of repetitions of a full codeword that spans a bit period; the codeword period; the chips of a spreading codeword of a given length; the transmitting chip pulse shape, which is considered energy-normalized for notational clarity; and the chip period. Figure 1 aims at clarifying the relation between those bit/chip parameters. The subindex I refers to the in-phase component, and all parameters are equivalently defined for the quadrature component, referred to with the subindex Q. This signal model describes all GNSS signals-in-space, for instance GPS L1, GPS L5, Galileo E1, and Galileo E5. Refer to [5] for the details.
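The layered bit/codeword/chip structure can be illustrated with a toy baseband DS-SS generator. The code length, samples per chip, and repetition counts below are small illustrative values, not the actual GPS or Galileo parameters, and the code is a random sequence rather than a true Gold/memory code.

```python
import numpy as np

rng = np.random.default_rng(0)

N_CHIPS = 31        # spreading code length (chips per codeword), assumed
SPS = 4             # samples per chip (rectangular chip pulse)
N_BITS = 4          # data bits to transmit
REPS_PER_BIT = 2    # full codeword repetitions spanning one bit period

# Stand-in PRN code and data bits in {-1, +1}
code = rng.choice([-1.0, 1.0], size=N_CHIPS)
bits = rng.choice([-1.0, 1.0], size=N_BITS)

# Each chip is shaped with a rectangular pulse of SPS samples,
# each bit modulates REPS_PER_BIT consecutive codewords
chip_wave = np.repeat(code, SPS)                    # one sampled codeword
bit_wave = np.tile(chip_wave, REPS_PER_BIT)         # codewords per bit
s = np.concatenate([b * bit_wave for b in bits])    # data-modulated signal

print(s.shape)
```

The total length is N_BITS × REPS_PER_BIT × N_CHIPS × SPS samples, which makes the nesting of data period, codeword period, and chip period explicit.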
2.2. Propagation Channel Model
A key aspect in the definition of the propagation channel model between satellites’ antenna and the user’s receiver antenna is whether it can be considered narrowband or wideband, which depends on
the bandwidth of the propagation channel in which a given signal is transmitted, being assessed with respect to the channel coherence bandwidth. The coherence bandwidth is defined as the frequency
band within which all frequency components are equally affected by fading due to multipath. In narrowband systems, all the components of the signal are equally influenced by multipath, while in
wideband systems the various frequency components of the signal are differently affected by fading. Narrowband systems, therefore, are affected by nonselective fading, whereas wideband systems are
affected by selective fading. The coherence bandwidth depends on the environment and is given by B_c ≈ 1/τ_d, where τ_d is the delay spread, which is the time span between the arrival of the first and the last multipath signals that can be sensed by the receiver. In a fading environment, a propagated signal arrives at the receiver through multiple paths. For a typical GNSS multipath propagation channel, in which τ_d is on the order of microseconds (the limit can be greater in nonurban areas, but in general it is not lower), we obtain that the system is wideband, since transmitted GNSS waveforms (with bandwidths on the order of MHz) are much wider than the resulting coherence bandwidth of a few hundred kHz. Hence, we conclude that we need to define propagation channel models considering wideband systems. Another important definition within this context concerns coherence time. The coherence time, T_c, is defined as the time interval during which the characteristics of the propagation channel remain approximately constant, and it is given as T_c ≈ 1/f_d, where f_d is the maximum Doppler shift. The Doppler shift is given as f_d = v/λ, where v is the radial speed of the mobile terminal with respect to the satellite and λ is the signal wavelength. A channel is considered WSSUS (wide-sense stationary with uncorrelated scatterers) during the coherence time.
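A quick numeric illustration of these coherence definitions follows; the delay spread and radial speed are assumed typical values for an urban GNSS scenario, not figures from the paper.

```python
C = 299_792_458.0          # speed of light [m/s]
F_L1 = 1_575.42e6          # GPS L1 carrier frequency [Hz]

delay_spread = 5e-6        # assumed delay spread tau_d [s]
coherence_bw = 1.0 / delay_spread          # B_c ~ 1/tau_d -> 200 kHz
# GNSS signal bandwidths (MHz) are far larger than B_c, hence "wideband"

v_radial = 30.0            # assumed receiver-satellite radial speed [m/s]
wavelength = C / F_L1      # ~0.19 m at L1
doppler = v_radial / wavelength            # f_d = v / lambda
coherence_time = 1.0 / doppler             # T_c ~ 1/f_d

print(coherence_bw, doppler, coherence_time)
```

For these values the coherence bandwidth is 200 kHz and the coherence time is a few milliseconds, which is why GNSS signals must be treated as wideband and why channel statistics are assumed WSSUS only over short intervals.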
In the following, we describe four of the most relevant satellite channel models found in the literature.
2.2.1. Jahn’s Channel Characterization
Jahn et al. provided a wideband channel model for land mobile satellite services [6]. The model was derived from a channel measurements campaign performed in the L band at 1820MHz. An aircraft
transmitted a spread spectrum signal of 30 MHz bandwidth, received by a mobile receiver (handheld or car terminal). From those measurements, the authors characterized the channel assuming WSSUS conditions and modeling
it as a filter structure with delay taps. Then, they provided statistical models for LOS (Rician probability density function for the amplitude of the direct path), shadowing (ray amplitude following
a Rayleigh distribution with a lognormally distributed mean power), near echoes (the number of near echoes follows a Poisson distribution, with delays being exponentially distributed and amplitudes
following a Rayleigh distribution), and far echoes (same distributions than near echoes but with other parameters). Table 1 summarizes the main features of Jahn’s statistical channel model.
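The "near echo" part of Jahn's characterization can be sketched as a random draw: a Poisson number of echoes, exponentially distributed excess delays, and Rayleigh amplitudes, as described above. The distributional forms follow the model; the numeric parameters (Poisson mean, mean excess delay, Rayleigh scale) and the uniform phases are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(42)

MEAN_N_ECHOES = 3.0          # assumed Poisson mean for the number of near echoes
MEAN_EXCESS_DELAY = 0.3e-6   # assumed exponential mean excess delay [s]
RAYLEIGH_SCALE = 0.2         # assumed Rayleigh scale (amplitude rel. to LOS)

def draw_near_echoes():
    """Draw one realization of the near-echo cluster."""
    n = rng.poisson(MEAN_N_ECHOES)
    delays = rng.exponential(MEAN_EXCESS_DELAY, size=n)   # excess delays >= 0
    amps = rng.rayleigh(RAYLEIGH_SCALE, size=n)           # echo amplitudes
    phases = rng.uniform(0.0, 2.0 * np.pi, size=n)        # assumed uniform phases
    return delays, amps * np.exp(1j * phases)             # complex path gains

delays, gains = draw_near_echoes()
print(len(delays))
```

Repeated draws of this kind are how a statistical channel model turns its fitted distributions back into concrete channel realizations for simulation.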
2.2.2. Loo’s Channel Characterization
The Loo’s land mobile satellite channel model [7] is a statistical model that assumes that the LOS component under foliage attenuation (shadowing) is lognormally distributed and that the multipath
effect is Rayleigh distributed. This model provides complete statistical descriptions for different shadowing and multipath conditions based on an extensive measurement campaign for different
frequency bands. For the L band, the “Inmarsat’s Marecs A” satellite was used as transmitter, while a mobile laboratory was considered for signal reception, resulting in a fixed elevation. Many more
investigations on L-band measurements are also referred to in [8], obtaining results for other elevation angles. Table 1 summarizes the main features of Loo’s statistical channel model.
2.2.3. Pérez-Fontán’s Channel Characterization
The model presented by Fontán et al. in [9] addressed the statistical modeling of shadowing and multipath effects in land mobile satellite applications for a wide range of environments with different
clutter densities (from open to dense urban areas) and elevation angles (from 5° to 90°) at L, S, or Ka Bands, using a comprehensive experimental database to extract the model parameters for the
different bands, environments, and elevations. One of its main contributions consists of producing time series of any channel parameter whose study is required, instead of just cumulative
distribution functions; these may be computed later from the generated series. The model uses a first-order Markov chain to describe the slow variations of the direct signal, basically due to
shadowing/blockage effects. The overall signal variations due to shadowing and multipath effects within each individual Markov state are assumed to follow a Loo distribution with different parameters
for each shadowing condition (Markov state). Up to this point the model is of the narrow-band type since it does not account for time dispersion effects. These effects are introduced by using an
exponential distribution to represent the excess delays of the different echoes. Table 1 summarizes the main features of Pérez-Fontán’s channel model.
2.2.4. Steingass/Lehner’s Channel Characterization
The Steingass/Lehner land mobile channel model presented in [10] was developed using data recorded in a high-resolution measurement campaign carried out in Munich in 2002. Different types of
environments (urban, suburban, and rural) were measured for car and pedestrian applications. It has been approved as standard by the ITU [11]. For the measurements, a 100MHz signal near the GPS L1
band was used. This signal provided a time resolution of about 10ns. The received signal was processed using a super-resolution algorithm to extract the single reflections. With this information,
the probability density distribution of the parameters of the reflected rays, such as Doppler shift, power of echoes, duration of a reflector, and number of echoes, were extracted. In urban
environments, three major obstacles influence the propagation of the LOS signal: house fronts, trees, and lamp posts. The model comprises a deterministic part with a generated scenery, which
computes geometrically the LOS signal shadowing and knife-edge diffraction for house fronts, lamp posts, and trees. The other observables like the number of coexisting echoes, life span of
reflectors, and the mean power of the echoes are generated stochastically, using the probability density distribution extracted from the measurements. The output of the model is a complex
time-variant channel impulse response recalculated each time step. Table 1 summarizes the main features of Steingass/Lehner’s channel model.
2.3. A Realistic Signal/Channel Simulator
When transmitted, satellite signals travel through a propagation channel which modifies their amplitude, phase, and delay. Indeed, many replicas of the same transmitted signal can reach the receiver's antenna due to multipath propagation. In general, these replicas are caused by reflections of the direct signal in surrounding obstacles (e.g., buildings, trees, and the ground). As shown
above, such a propagation channel is generically modeled by a linear time-varying impulse response with M + 1 propagation paths: h^(i)(t, τ) = Σ_{m=0}^{M} a_m^(i)(t) e^{j θ_m^(i)(t)} δ(τ − τ_m^(i)(t)), where a_m^(i), θ_m^(i), and τ_m^(i) are the amplitude, phase, and delay of the m-th propagation path for the i-th satellite, τ is the multipath delay axis, and the index m = 0 stands for the line-of-sight signal. These channel parameters can be seen as realizations of random processes with underlying probability density functions p(a), p(θ), and p(τ), respectively, whose shape and parameters are approximated by the models outlined above.
Therefore, considering N visible satellites, the signal received at the receiver's antenna is the superposition of the N transmitted signals, as propagated through the corresponding channels, and corrupted by additive noise n(t). This reads as r(t) = Σ_{i=1}^{N} (h^(i) ∗ s^(i))(t) + n(t), where s^(i)(t) is the transmitted signal corresponding to the i-th satellite.
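The multipath superposition above can be sketched numerically as a tapped delay line: a LOS replica plus a few delayed, complex-scaled echoes, corrupted by noise. The sampling rate, path gains, and integer-sample excess delays are illustrative assumptions (a real channel would need fractional-delay interpolation).

```python
import numpy as np

rng = np.random.default_rng(1)

N = 1024
# Toy transmitted baseband waveform (stand-in for a spread signal)
s = rng.choice([-1.0, 1.0], size=N) + 0j

# Tapped-delay-line channel: LOS (m = 0) plus two echoes, with assumed
# complex gains and excess delays expressed in whole samples
paths = [
    (1.00 * np.exp(1j * 0.0), 0),    # line-of-sight path
    (0.50 * np.exp(1j * 1.2), 3),    # echo 1: 3 samples excess delay
    (0.25 * np.exp(-1j * 2.0), 7),   # echo 2: 7 samples excess delay
]

r = np.zeros(N, dtype=complex)
for gain, d in paths:
    r[d:] += gain * s[:N - d]        # delayed, scaled replica of s

r += 0.05 * (rng.normal(size=N) + 1j * rng.normal(size=N))  # AWGN
print(r.shape)
```

This is the kind of received waveform on which the correlator bank of the next subsections operates, and whose echo parameters the MEPF attempts to track.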
As shown in [12], the time-varying delay term can be approximated by its first-order Taylor expansion; this yields the general baseband equivalent model used throughout this paper.
The first element in the receiver RF chain is a right hand circularly polarized (RHCP) antenna, usually with nearly hemispherical gain coverage, with the mission to receive the radionavigation
signals of all the satellites in view. The RF signals collected by the antenna are immediately amplified by a low noise amplifier (LNA), a key element which is the most contributing block to the
noise figure of the receiver. The LNA also acts as a filter, minimizing out-of-band RF interferences and setting the sharpness of the received code. After the LNA, the amplified and filtered RF
signals are then downconverted to an intermediate frequency (IF) using signal mixing frequencies from local oscillators (LOs). These LOs are derived from a receiver reference oscillator, often an
oven-stabilized clock with typical accuracies of . There is a need for one LO per down-conversion stage. Two or three down-conversion stages are commonly devoted to reject mirror frequencies or large
out of band jamming signals, in particular the 900MHz used by the GSM mobile communication system. However, depending on the subsequent analog-to-digital converter (ADC) characteristics, a one-stage
downconversion or even a direct L-band sampling is also possible [13]. The lower sideband generated by the mixer process is selected, while the upper sideband is filtered by a postmixer bandpass
filter. It is important to point out that signal Dopplers and PRN codes are preserved after the mixing stage; only the carrier frequency is lowered.
In the sequel, we focus on the contribution of a single satellite and thus omit the satellite index in the signal model. Considering a generic data sequence, chip code, chip-shaping pulse, chip
period , full codes in a whole bit, and data period , the baseband equivalent received signal for a channel model as in (7) but particularized to (i.e., only one line of sight signal) can be put in
the form where is the pulse received at the antenna and then filtered by a precorrelation filter (usually the LNA), is the filtered version of , and the term stands for the filtered thermal noise and
other unmodeled terms. The objective of a synchronization method is to estimate the time delay , Doppler shift and the carrier phase information embedded into the phase of the complex amplitude .
The analog-to-digital conversion and the automatic gain control (AGC) processes take place at IF or baseband, where all the signals from GNSS satellites in view are buried in thermal noise. Once the
received signal is digitized, it is ready to feed each of the digital receiver channels. Every receiver channel is intended to acquire and track the signal of a single GNSS satellite; typical
receivers are equipped with channels. The multiplication of the IF digitized signal by a local replica of its carrier frequency produces the in-phase (I) and quadrature-phase (Q) components
of the digitized signal.
Assuming as additive white Gaussian noise (AWGN), at least in the band of interest, it is well known that the optimum receiver is the code matched filter, expressed as where are local estimates of
the time delay, Doppler shift, and carrier phase of the received signal, and stands for the complex conjugate operator. Theoretically , but actual implementations make use of approximated versions:
while is a rectangular pulse filtered at the satellite, is digitally generated at the receiver and therefore not filtered. In addition, is usually filtered again by a precorrelation filter before the
matched filter, as expressed in (10) with . The code matched filter output can be written in the form
Notice that, in the matched filter, we have substituted the estimates , , and for trial values obtained from previous (in time) estimates of these parameters which we have defined as , , and ,
respectively. This is the usual procedure in GNSS receivers, since the estimates are not really available, but to be estimated after correlation.
In DS-SS terminology, the matched filter is often referred to as correlator, while the processing it performs is called despreading. Since the correlators perform accumulation of the sampled signal
during a period and then release an output, we can write the discrete version of the signal as where is the sampling period, is the integration time (usually, ) and stands for the nearest integer
towards zero.
Equation (13) can be expressed more conveniently by solving the convolution in (12), which yields [14] where we defined , and (i.e., the estimation errors), stands for the nearest integer toward
zero, and means the integer part of , being the navigation bit period, and is the correlation function. An equivalent derivation for the arm leads to
Terms , , and should be regarded as the average local phase error over the integration interval, that is, , assuming a frequency rate error (i.e., a phase acceleration error) equal to zero. In case
of inclusion of such effect in the model, the average phase error can be expanded as In this expression, the terms , , and are referred to the error values at the beginning of the integration
In the following, we will consider as the integer number of samples collected in an accumulation. This number will not be integer in receiver configurations having a sample rate incommensurable with
the chip rate, and thus some integration blocks will have samples instead of . This effect can be considered negligible for the analysis presented in this paper.
In the case of multipath (i.e., more than one received path), (12) becomes a sum of all the replicas convolved with a filter matched to the line of sight signal, whose estimated parameters are possibly
biased by the presence of multipath. Since the convolution is a linear operator, the correlator output will be a linear combination of the contributions made by each signal path.
Note that an arbitrary number of correlators (very early, early, prompt, late, very late, etc.) can be used in the filter update, just adding or subtracting the correlator offset to the argument of
(i.e., , , etc.). The correlators’ output can be stacked in a vector , which will be the measurements used in next section.
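The early/prompt/late correlator outputs just described can be sketched with the ideal triangular code autocorrelation R(Δτ) = max(0, 1 − |Δτ|) (Δτ in chips). The half-chip spacing, the delay error, and the early-minus-late power discriminator below are illustrative choices, not the paper's configuration (a real correlation also depends on filtering and noise).

```python
import numpy as np

def R(dt):
    """Ideal triangular code correlation, dt in chips."""
    return np.maximum(0.0, 1.0 - np.abs(dt))

def epl_outputs(delay_error, spacing=0.5):
    """Early/prompt/late correlator amplitudes for a given delay error (chips)."""
    offsets = np.array([-spacing, 0.0, +spacing])   # E, P, L correlator offsets
    return R(delay_error - offsets)

E, P, L = epl_outputs(delay_error=0.1)
# Noncoherent normalized early-minus-late power discriminator
disc = (E**2 - L**2) / (E**2 + L**2)
print(E, P, L, disc)
```

For a positive delay error the late correlator sits closer to the correlation peak than the early one, so the discriminator output is nonzero and its sign tells the tracking loop which way to steer the local code delay; adding more offsets to the array is exactly the "arbitrary number of correlators" mentioned above.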
In the context of this work, we used the GRANADA (Galileo Receiver ANAlysis and Design Application) simulation platform to simulate realistic channel and receiver scenarios. The GRANADA Factored
Correlator Model (FCM) blockset (see Figure 2) is a MATLAB/Simulink (MATLAB and Simulink are registered trademarks of The MathWorks, Inc.) library that provides a swift, flexible, and realistic way
of simulating different signal processing architectures, either of standalone GNSS receivers or multisystem solutions. The FCM was included in a Simulink blockset, which, since 2007, has been
commercially available as part of the GRANADA product family, whose remaining products were developed by DEIMOS Space in the frame of the Galileo Receiver Development activities (GARDA), funded by
the Galileo Joint Undertaking (now European GNSS Agency, GSA) under the 6th Framework Program of the European Union.
The FCM separates the effects of carrier and code Doppler and misalignment on a GNSS receiver’s correlator outputs into several multiplicative factors and allows the inclusion (or not) of each factor
independently. Since it is an analytical model, the computation rate can be as low as the tracking loop rate, dramatically increasing simulation speed: the FCM provides directly the correlators’
output, precluding the need of simulating the lower-level signal processing stages, significantly reducing the computational load and hence decreasing processing and memory requirements, while still
accounting for various effects (as filtering, carrier phase and frequency errors, code delay error, code Doppler, noise, and multipath), thus keeping a high level of realism [15]. Since,
statistically speaking, it is equivalent to work with samples before or after the correlation process (proof in the Appendix), we take advantage of working at the correlator output since it
considerably reduces the computational load.
Once configured (type of signal, propagation channel, user dynamics, sampling frequency before correlation, number of correlators and their spacing, integration period, environment, etc., see Figure
3), FCM provides the measurements used in the simulations presented in Section 5.
3. Particle Filtering
Bayesian filtering involves the recursive estimation of the states x_k at time k based on all available measurements, y_{1:k}. To that aim, we are interested in the filtering distribution p(x_k | y_{1:k}), which can be recursively expressed as p(x_k | y_{1:k}) ∝ p(y_k | x_k) p(x_k | y_{1:k−1}) (18), with p(y_k | x_k) and p(x_k | y_{1:k−1}) referred to as the likelihood and the prior distributions, respectively. Unfortunately, (18) can only be obtained in closed form in some special cases. For instance, when the model is linear and Gaussian, the Kalman Filter (KF) [16] provides the optimal solution. In more general setups, nonlinear and/or non-Gaussian, we should resort to more sophisticated methods [17]. In this paper we consider particle filters (PFs) [18, 19].
PFs approximate the filtering distribution by a set of weighted random samples, forming the so-called set of particles {x_k^(i), w_k^(i)}, i = 1, …, N_p. These random samples are drawn from the importance density distribution, π(x_k | x_{k−1}^(i), y_k), and weighted according to the general formulation w_k^(i) ∝ w_{k−1}^(i) p(y_k | x_k^(i)) p(x_k^(i) | x_{k−1}^(i)) / π(x_k^(i) | x_{k−1}^(i), y_k).
Algorithm 1 outlines the operation of the Standard PF (SPF) when a new measurement becomes available. After particle generation, weighting, and normalization, a minimum mean square error (MMSE)
estimate can be obtained as a weighted sum of particles. A typical problem of PFs is the degeneracy of particles, where all but one weight tend to zero. This situation causes the particle cloud to collapse to a single state point. To avoid the degeneracy problem, we apply resampling, which consists of eliminating particles with low importance weights and replicating those in high-probability regions [20,
21]. In this work, we consider a multinomial sampling scheme for the resampling step.
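The generate/weight/normalize/resample cycle of Algorithm 1 can be sketched as a minimal bootstrap particle filter for a scalar toy model (random-walk state observed in Gaussian noise). The state model, noise variances, and particle count are illustrative assumptions, not the paper's GNSS state space; multinomial resampling is applied at every step for simplicity.

```python
import numpy as np

rng = np.random.default_rng(7)

N_P = 500              # number of particles (assumed)
Q, R_VAR = 0.01, 0.1   # process / measurement noise variances (assumed)

def pf_step(particles, weights, y):
    # 1) propagate particles through the process model (prior as proposal)
    particles = particles + rng.normal(0.0, np.sqrt(Q), size=N_P)
    # 2) weight each particle by the likelihood p(y | x)
    weights = weights * np.exp(-0.5 * (y - particles) ** 2 / R_VAR)
    weights /= weights.sum()
    # 3) multinomial resampling to fight weight degeneracy
    idx = rng.choice(N_P, size=N_P, p=weights)
    return particles[idx], np.full(N_P, 1.0 / N_P)

# Track a slowly drifting state from noisy observations
truth = 0.0
particles = rng.normal(0.0, 1.0, size=N_P)
weights = np.full(N_P, 1.0 / N_P)
for _ in range(50):
    truth += rng.normal(0.0, np.sqrt(Q))
    y = truth + rng.normal(0.0, np.sqrt(R_VAR))
    particles, weights = pf_step(particles, weights, y)

mmse = np.sum(weights * particles)   # MMSE estimate: weighted particle mean
print(mmse, truth)
```

Using the prior as the importance density makes the weight update collapse to the likelihood alone, which is the "bootstrap" special case of the general weighting formulation given above.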
3.1. Rao-Blackwellized Particle Filter
In this paper, we analyze a way to alleviate the dimensionality problem based on the marginalization of linear states. The basic idea is that a KF can optimally deal with these states, while reducing
the dimension of the state space that the nonlinear filter has to explore. The procedure was proposed in [23, 24] for the case of dealing with the nonlinear states with a PF. The algorithm was termed
Marginalized particle filter (MPF), although the same concept is also referred to as Rao-Blackwellized PF (RBPF) in other works [25, 26]. The latter nomenclature is because marginalization resorts to
a general result due to [27, 28] referred to as the Rao-Blackwell theorem, which shows that the performance of an estimator can be improved by using information about conditional probabilities. The
Rao-Blackwell theorem states: let g(y) represent any unbiased estimator for θ and let T(y) be a sufficient statistic for θ under p(y; θ). Then the conditional expectation g_RB = E[g(y) | T(y)] is independent of θ, and it is the uniformly minimum variance unbiased estimator (cf. [29, 30] for the details). The result of a corollary points out that the use of a Rao-Blackwellized estimator effectively reduces the variance of the estimation error. Therefore, when possible, it is desirable to apply marginalization procedures. Corollary: let g(y) be an unbiased estimator and let g_RB = E[g(y) | T(y)] be the Rao-Blackwell estimator; then Var(g_RB) ≤ Var(g(y)).
Final remarks on Rao-Blackwellization are worth mentioning. (i) Rao-Blackwellization is a procedure suitable when linear substructures are present in the dynamical model. (ii) It is a variance reduction technique, in the sense that the estimation variance of a filter considering this marginalization procedure is less than that of a filter estimating the complete state space. (iii) Filtering linear states with a Kalman filter has twofold benefits: (1) linear states are optimally filtered, and (2) the system handled by the nonlinear filter has reduced dimensionality (with large benefits in terms of computational resources).
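The variance-reduction claim can be checked with a small Monte-Carlo experiment. Take X ~ N(0, 1) and Y | X ~ N(X, 1) and estimate E[Y] = 0 two ways: the crude estimator averages the sampled Y's (variance 2/n), while the Rao-Blackwellized one averages E[Y | X] = X computed analytically (variance 1/n). The toy model is an assumption chosen so that the conditional expectation is available in closed form, mirroring how the MPF marginalizes the linear states.

```python
import numpy as np

rng = np.random.default_rng(3)

def estimators(n):
    """Return (crude, Rao-Blackwellized) estimates of E[Y] from n samples."""
    x = rng.normal(0.0, 1.0, size=n)       # X ~ N(0, 1)
    y = x + rng.normal(0.0, 1.0, size=n)   # Y | X ~ N(X, 1)
    return y.mean(), x.mean()              # crude vs. E[Y|X] averaged exactly

# Empirical variance of each estimator over many independent runs
runs = np.array([estimators(100) for _ in range(2000)])
var_crude, var_rb = runs.var(axis=0)
print(var_crude, var_rb)   # expect roughly 0.02 vs 0.01
```

The Rao-Blackwellized estimator shows about half the variance, matching the corollary: conditioning on an analytically tractable part of the model never increases the estimation variance.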
4. Joint Filtering of LOSS and Multipath Parameters
The technique herein investigated attempts to estimate the synchronization parameters of both the LOSS and multipath components. We refer to the algorithm as the multipath estimating particle
filter, or MEPF for short. Here the term Bayesian means that the algorithm uses some sort of a priori information regarding these parameters (such as interdependencies and time evolution
models). This approach was first introduced in [31] and further refined in [32], although other papers might be found following the same scheme [33] with more complex time-evolving models. The
application of Bayesian filtering techniques becomes straightforward when one describes the problem at hand in terms of a measurement equation and a process equation (i.e., how unknowns evolve
randomly over time).
4.1. Observations
A receiver implementing such Bayesian tracking loops typically processes each satellite independently, and most of the work in the literature discusses architectures operating on the IF signal. Here we are
interested in operating at the output of the bank of correlators.
Observations for the i-th satellite are gathered into a random vector, where we omit the satellite subindex for the sake of clarity. Each element of the vector corresponds to the sample of one correlator, evaluated at the point defined by its early/late offset. As usual, the subindex 0 denotes the LOSS. Here we consider a noncoherent tracking architecture that operates with
the squared outputs. This scheme avoids the estimation of carrier phases, and thus it reduces the state-space dimension. In our implementation, a conventional PLL/FLL network is used in parallel to
the MEPF. Therefore, the observations are the parallel outputs of the correlation bank, which we denote as where is the total number of correlators used at the receiver. We made apparent the
dependence of measures on unknown states: real amplitude and time delay of each replica of the signal.
4.2. Process Dynamics
The state space is composed of the unknown parameters of the model, namely, delay, delay rate, and real amplitude of the LOSS and its multipath replica: where is the delay rate of the -th component,
related to the Doppler shift. We have introduced this delay rate to better capture the dynamics of the time-evolving delay of the signals.
One could adopt many alternatives to specify the time-evolving processes for each state, ranging from the simplistic (although effective in some situations) autoregressive model to more sophisticated
models. Here, we adopt a channel state model based on that presented in [34], adapted to the noncoherent scheme. This model was motivated by channel modeling work for multipath prone environments
such as the urban satellite navigation channel [35].
The dynamics of time delay and delay rate for the LOSS (i.e., ) are described by where is the integration period and the process noise is an uncorrelated zero-mean Gaussian random variable with
diagonal entries and .
The evolution of and for the echoes is modeled with a truncated Gaussian distribution as in [31], which allows us to introduce the fact that due to physical reasons in outdoor propagation channels [6
, 11, 36]. Taking (26) into account, we force this situation using the evolution with and being zero-mean Gaussian random variables with variances and , respectively. For the evolution of each we
consider independent autoregressive models with variance . The overall covariance matrix of the process is denoted as and is constructed with , , , , , and in its diagonal.
4.3. Algorithm Implementation
From the previous modeling, we realize that the state space can be partitioned into linear and nonlinear subspaces. Clearly, these can be identified as
By the chain rule of probability, linear states can be analytically marginalized out from : and, taking into consideration that generates a linear Gaussian state-space, can be updated analytically
via a KF conditional on and only the non-linear part of needs to be estimated with a nonlinear filter. In the proposed scheme, an SPF is run to characterize and a KF is executed to obtain .
Notice that both the linear and nonlinear states are interdependent; thus the algorithm has to be aware of this coupling. The details can be consulted in [23] for the general algorithm and in [12] for the specific GNSS setup considered here. At a glance, each particle in the PF has an associated KF that tracks the amplitudes. Then, before particle generation, the KF prediction is run and the results are used in the particle filter. Similarly, once the particles are weighted, this information is used in the update step of the KF.
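The interleaving just described can be sketched in a deliberately simplified, self-contained example (a single path, a triangular correlation shape, three correlators, and multinomial resampling are all illustrative assumptions; the paper's actual model also includes multipath replicas, delay rates, and truncated-Gaussian dynamics). Each particle samples the delay and carries a scalar Kalman filter on the amplitude:

```python
import math
import random

random.seed(1)

def shape(d):
    """Idealized triangular code-correlation shape (unit peak, 1-chip support)."""
    return max(0.0, 1.0 - abs(d))

OFFSETS = (-0.5, 0.0, 0.5)   # early / prompt / late correlator offsets (chips)
R_MEAS = 0.05                # measurement noise variance assumed by the filter
Q_TAU = 1e-4                 # process noise variance of the delay random walk
Q_AMP = 1e-4                 # process noise variance of the amplitude

def rbpf(observations, n_particles=500):
    """Rao-Blackwellized PF: particles sample the delay (nonlinear state);
    each particle carries a scalar KF (m, P) on the amplitude (linear state)."""
    parts = [{"tau": random.uniform(-0.5, 0.5), "m": 1.0, "P": 1.0}
             for _ in range(n_particles)]
    tau_est = amp_est = 0.0
    for y in observations:
        weights = []
        for p in parts:
            p["tau"] += random.gauss(0.0, math.sqrt(Q_TAU))  # propagate delay
            p["P"] += Q_AMP                                  # KF prediction
            w = 1.0
            for off, yi in zip(OFFSETS, y):
                h = shape(p["tau"] - off)
                s = h * h * p["P"] + R_MEAS                  # innovation variance
                w *= math.exp(-0.5 * (yi - h * p["m"]) ** 2 / s) / math.sqrt(s)
                k = p["P"] * h / s                           # KF gain and update
                p["m"] += k * (yi - h * p["m"])
                p["P"] *= 1.0 - k * h
            weights.append(w)
        total = sum(weights)
        weights = [w / total for w in weights]
        tau_est = sum(w * p["tau"] for w, p in zip(weights, parts))
        amp_est = sum(w * p["m"] for w, p in zip(weights, parts))
        parts = [dict(p) for p in
                 random.choices(parts, weights=weights, k=n_particles)]
    return tau_est, amp_est

# Synthetic correlator outputs: true delay 0.1 chips, true amplitude 1.0
TRUE_TAU, TRUE_AMP = 0.1, 1.0
obs = [[TRUE_AMP * shape(TRUE_TAU - o) + random.gauss(0.0, 0.02) for o in OFFSETS]
       for _ in range(40)]
tau_hat, amp_hat = rbpf(obs)
```

The per-particle likelihood is the prediction-error decomposition that results from marginalizing the amplitude analytically, which is exactly where the variance reduction of the previous section comes from.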
5. Results in Realistic Scenarios
We used the GRANADA FCM blockset of Simulink to simulate the GPS L1 C/A signal, the propagation channel, and the inaccuracies of the receiver front end. An initial set of controlled scenarios is simulated to analyze the method. Then, from the set of reviewed channel models, we selected Jahn's to show simulation results in a realistic environment. The GPS signal is spread spectrum with a code length of 1023 chips and a chip rate of 1.023 Mchips/s (notice that a chip of the signal corresponds to approximately 300 meters in length and the duration of an entire codeword is one millisecond). The carrier frequency of the transmitted signal was 1575.42 MHz and the receiver's precorrelation bandwidth was MHz. Estimates of time delay were performed at a rate of Hz, which corresponds to an integration time of milliseconds, assuming bit synchronization. The carrier-to-noise density ratio () of the simulated satellite was dB-Hz. The dynamics of the scenario were due to the relative motion of the satellite-receiver pair, which is completely simulated by the GRANADA FCM blockset, and the receiver performed a pedestrian-like trajectory at m/s. Simulation time was seconds.
We compared the performance of the MEPF with the results of a narrow -chip spacing DLL (state-of-the-art in GNSS receivers) with an equivalent noise bandwidth of Hz. This architecture uses
correlators. Also, the benchmark receiver implements a coherent phase lock loop (PLL) carrier phase discriminator using a second-order filter and an error accumulator with equivalent noise bandwidth
Hz. The initial time-delay ambiguity at which the filter was initialized was drawn from , with the chip period.
It has been reported in [37] that the number of correlators () used in the PF plays an important role. For instance, in AWGN on the order of correlators are required to obtain stable results. Also,
the algorithm improves its performance with the number of particles although this improvement saturates at particles.
Figures 4–7 show the behavior of the classical DLL/PLL scheme and the proposed MEPF, respectively, in a multipath scenario. In these experiments, we used correlators for the MEPF in order to span correlators along the regions of interest in terms of multipath estimation and mitigation. The results are organized as follows. The top figure represents the obtained pseudorange error. The central figure shows the relative delay between the LOSS and the multipath replica; in the first representative interval () it has been set to chips and in the second interval () to chips. The bottom figure plots the signal-to-multipath ratio (SMR), in linear scale, of the simulated scenario. During the first interval the SMR was kept constant at , and during the second interval it grew linearly from to . Since the MEPF is very sensitive to the tuning of the process covariance matrix (as are many Bayesian filtering solutions), we investigated three different setups with particles. Namely, (i) in Figure 5 we used standard deviations , , , , and ; (ii) in Figure 6 we used , , , , , and ; and finally (iii) in Figure 7 we used , , , , , and . In light of the results, the latter configuration provided good performance, as it allowed for sufficient delay excursions to explore the state space and coped with fast variations in multipath amplitude. A summary of the results in terms of bias, variance, and RMSE over the entire simulation can be consulted in Table 2. We can observe that, compared to DLL schemes, a remarkable performance improvement can be obtained after properly adjusting the process covariance matrix.
Finally, we tested the algorithm in a more realistic scenario. We selected Jahn's channel model with the same receiver parameters as before. In particular, the considered channel was that of a satellite at an elevation angle of in an urban scenario with an average of dB-Hz. The results can be consulted in Figure 8, where it can be observed that the MEPF requires an initial convergence time (depending on the covariance matrix setting) larger than DLL schemes. Conversely, it appears more robust to channel impairments. Numerically, the RMSE over the whole simulation is m and m for the DLL and the MEPF, respectively. For the MEPF we used paths, particles, and , , , , , and .
6. Conclusions
In this paper we have analyzed an advanced tracking loop for time-delay and carrier-phase estimation in a GNSS receiver based on sequential Monte Carlo methods. The algorithm builds upon previous work by the authors on Rao-Blackwellized particle filtering, while introducing more realistic process dynamics and the use of postcorrelation observations, which reduce the computational burden at the receiver. The paper presents the general signal model and the GNSS concept, and discusses the trade-offs among the most common propagation channel models. A realistic scenario simulator based on the FCM blockset of Simulink was used in Section 5. The results point out the need for properly setting not only the number of particles but also the number of correlation outputs used as observations. The degradation of conventional DLL/PLL schemes in multipath-rich scenarios also became clear. Nevertheless, the correct selection of the process covariance matrix was seen to affect the performance of the MEPF significantly, and future work should be devoted to the self-adjustment of this matrix.
Equivalence of Pre/Postcorrelation Receiver Architectures
In this appendix we establish a basic result showing the equivalence between processing pre- and postcorrelation signals. That is to say, from a statistical point of view, an estimator of a given
parameter (e.g., time delay) computed using a bunch of snapshots taken at the IF signal level () is the same as that which is derived using the output of the correlators (). It is a well-known result
in statistical signal processing that both signals are sufficient statistics, and thus one is able to derive an estimator of using either. However, we will see that this equivalence becomes evident
when one examines the likelihood distribution (the density where the information from measurements is gathered) for each approach.
If we first analyze the case of using the IF signal, we should be aware of the following. (i) This approach does not force an implementation based on early, prompt, and late samples, as the observations are directly the baseband signal at the sampling frequency. (ii) It is necessary to use a sufficiently large set of IF data to be able to infer any parameter from it. That is, one has to integrate over a certain integration time, , since GNSS signals are typically received well below the noise level.
The term stands for the vector of snapshots of the IF signal, as gathered for the -th integration interval, defined as , using the same notation conventions as in the rest of the document. Then, the likelihood can be decomposed as the independent contribution of each snapshot and, assuming Gaussianity for the noise term, we can identify that , where stands for the precorrelation signal model, which was defined earlier as
Further manipulation of the log-likelihood yields , with the latter step being clear if one accounts for the definition of as the output of a correlator. Recall that is the true unknown parameter of the signal. An ML estimator of can be obtained by maximizing the latter equation.
The drawbacks of this approach are twofold. (i) It might be computationally expensive, as large data sets need to be processed to increase the signal-to-noise ratio, and thus might be large depending on . (ii) There is a requirement for performing signal processing operations at a high rate, since the approach operates at the sampling frequency.
If we turn our attention to the conventional approach, in which one uses samples at the output of a bank of correlators, we should note the following. (i) This approach forces an implementation based on early, prompt, and late samples; this means that samples are taken assuming a previous estimate (prompt) of the parameters, denoted as . (ii) Few samples are sufficient to infer estimates of . After correlation, an integration over a certain interval has already been done, and therefore the signal-to-noise ratio is relatively high.
In this case, the measurements can be expressed as at the output of the -th integration interval. In this measurement we explicitly expressed that samples are taken with respect to the error between the true and prompt parameters, . Notice that we considered that only the prompt is used, for the sake of clarity. It is easy to obtain a similar result to the one shown here when one accounts for several early and late samples.
Then, the log-likelihood under the Gaussian assumption is with being the postcorrelation signal model and the unknown parameter we want to estimate at .
If we set , we can identify that
From the latter mathematical derivations, we can conclude an important result: for a given integration interval considering snapshots. As said, similar results apply for longer integration intervals and more early/late samples.
As a consequence, we can state the following: the ML estimators of computed from the data sets and are equivalent.
To sum up, from a statistical point of view, both approaches are equivalent, and the choice should be made considering implementation aspects. For instance, it is clear that using precorrelation measurements involves a larger computational burden than using postcorrelation samples. Another important conclusion is that since in the precorrelation approach we also need to integrate in order to increase the signal-to-noise ratio, effects happening faster than the integration time will not be captured by the estimation algorithm. The same happens in the postcorrelation case. Therefore, the limitation on which phenomena can be tracked is inherent to the GNSS signal, not to the way it is processed (i.e., pre- or postcorrelated samples).
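The equivalence can also be illustrated numerically in a toy discrete-time setting (a hypothetical ±1 code and integer delays, not the paper's signal model): the precorrelation ML cost differs from the correlator output only by terms that do not depend on the candidate delay, so both criteria share the same maximizer.

```python
import random

random.seed(2)

N = 64
code = [random.choice((-1.0, 1.0)) for _ in range(N)]  # hypothetical +/-1 spreading code

def replica(delay):
    """Local code replica, circularly shifted by an integer delay."""
    return [code[(n - delay) % N] for n in range(N)]

TRUE_DELAY = 7
x = [s + random.gauss(0.0, 0.3) for s in replica(TRUE_DELAY)]  # received snapshot

def cost_pre(tau):
    """Pre-correlation ML criterion: -||x - s(tau)||^2. For a +/-1 code,
    ||s(tau)||^2 is constant, so only the cross term varies with tau."""
    return -sum((xi - si) ** 2 for xi, si in zip(x, replica(tau)))

def cost_post(tau):
    """Post-correlation criterion: the correlator output <x, s(tau)>."""
    return sum(xi * si for xi, si in zip(x, replica(tau)))

est_pre = max(range(N), key=cost_pre)
est_post = max(range(N), key=cost_post)
```

Since cost_pre(tau) is an affine function of cost_post(tau), the two grids of candidate delays are ranked identically and both estimators pick the same delay.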
P. Closas and C. Fernández-Prades were supported by the European Commission under COST Action IC0803 (RFCSET).
1. M. Hernández-Pajares, J. M. J. Zornoza, J. S. Subirana, R. Farnworth, and S. Soley, “EGNOS test bed ionospheric corrections under the October and November 2003 storms,” IEEE Transactions on
Geoscience and Remote Sensing, vol. 43, no. 10, pp. 2283–2293, 2005. View at Publisher · View at Google Scholar · View at Scopus
2. S. Bancroft, “An algebraic solution of the GPS equations,” IEEE Transactions on Aerospace and Electronic Systems, vol. 21, no. 1, pp. 56–59, 1985. View at Scopus
3. G. Strang and K. Borre, Linear Algebra, Geodesy, and GPS, Wellesley Cambridge Press, 1997.
4. B. Hofmann-Wellenhof, H. Lichtenegger, and E. Wasle, GNSS -Global Navigation Satellite Systems: GPS, GLONASS, Galileo & More, Springer-Verlag, Wien, Austria, 2008.
5. C. Fernández-Prades, L. L. Presti, and E. Falletti, “Satellite radiolocalization from GPS to GNSS and beyond: novel technologies and applications for civil mass market,” Proceedings of the IEEE,
vol. 99, no. 11, pp. 1882–1904, 2011. View at Publisher · View at Google Scholar · View at Scopus
6. A. Jahn, H. Bischl, and G. Heiss, “Channel characterization for spread spectrum satellite communications,” in Proceedings of the 4th International Symposium on Spread Spectrum Techniques &
Applications (ISSSTA '96), pp. 1221–1226, September 1996. View at Scopus
7. C. Loo and J. S. Butterworth, “Land mobile satellite channel measurements and modeling,” Proceedings of the IEEE, vol. 86, no. 7, pp. 1442–1462, 1998. View at Scopus
8. M. A. V. Castro, F. P. Fontan, A. A. Villamarín, S. Buonomo, P. Baptista, and B. Arbesser, “L-band Land Mobile Satellite (LMS) amplitude and multipath phase modeling in urban areas,” IEEE
Communications Letters, vol. 3, no. 1, pp. 12–14, 1999. View at Scopus
9. F. P. Fontán, M. Vázquez-Castro, C. E. Cabado, J. P. García, and E. Kubista, “Statistical modeling of the LMS channel,” IEEE Transactions on Vehicular Technology, vol. 50, no. 6, pp. 1549–1567,
2001. View at Publisher · View at Google Scholar · View at Scopus
10. A. Steingass and A. Lehner, “A channel model for land mobile satellite navigation,” in Proceedings of the the European Navigation Conference, pp. 2132–2138, German Institute of Navigation (DGON),
July 2005. View at Scopus
11. Recommendation ITU-R P.681-7, “Propagation data required for the design of Earth-space land mobile telecommunication systems,” 2009, http://www.itu.int/rec/R-REC-P.681-7-200910-I/en.
12. P. Closas, Bayesian signal processing techniques for GNSS receivers: from multipath mitigation to positioning [Ph.D. dissertation], Universitat Politècnica de Catalunya (UPC), Department of
Signal Theory and Communications, Barcelona, Spain, 2009.
13. D. M. Akos, M. Stockmaster, J. B. Y. Tsui, and J. Caschera, “Direct bandpass sampling of multiple distinct RF signals,” IEEE Transactions on Communications, vol. 47, no. 7, pp. 983–988, 1999.
View at Publisher · View at Google Scholar · View at Scopus
14. B. Parkinson and J. Spilker, Eds., Global Positioning System: Theory and Applications, vol. 1 of Progress in Astronautics and Aeronautics, American Institute of Aeronautics, Washington, DC, USA,
15. J. S. Silva, P. F. Silva, A. Fernández, J. Diez, and J. F. M. Lorga, “Factored correlator model: a solution for fast, flexible, and realistic GNSS receiver simulations,” in Proceedings of the
20th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS '07), pp. 2676–2686, Fort Worth, TX, USA, September 2007. View at Scopus
16. R. E. Kalman, “A new approach to linear filtering and prediction problems,” Transactions of the ASME-Journal of Basic Engineering, vol. 82, pp. 35–45, 1960.
17. Z. Chen, “Bayesian filtering: from Kalman filters to particle filters, and beyond,” Tech. Rep., Adaptive Systems Laboratory, McMaster University, Ontario, Canada, 2003.
18. M. S. Arulampalam, S. Maskell, N. Gordon, and T. Clapp, “A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking,” IEEE Transactions on Signal Processing, vol. 50, no.
2, pp. 174–188, 2002. View at Publisher · View at Google Scholar · View at Scopus
19. P. M. Djurić, J. H. Kotecha, J. Zhang et al., “Particle filtering,” IEEE Signal Processing Magazine, vol. 20, no. 5, pp. 19–38, 2003. View at Publisher · View at Google Scholar · View at Scopus
20. M. Bolić, P. M. Djuric, and S. Hong, “Resampling algorithms for particle filters: a computational complexity perspective,” Eurasip Journal on Applied Signal Processing, vol. 2004, no. 15, pp.
2267–2277, 2004. View at Publisher · View at Google Scholar · View at Scopus
21. R. Douc, O. Cappé, and E. Moulines, “Comparison of resampling schemes for particle filtering,” in Proceedings of the 4th International Symposium on Image and Signal Processing and Analysis (ISPA
'05), pp. 64–69, Zagreb, Croatia, September 2005. View at Scopus
22. GRANADA Galileo Receiver ANalysis And Design Application. The Reference Galileo Simulation Toolkit for GNSS Receiver Research And Development. Factored Correlator Model Blockset v2.0 User Manual,
Deimos Engenharia, S.A., 2009.
23. T. Schön, F. Gustafsson, and P. J. Nordlund, “Marginalized particle filters for mixed linear/nonlinear state-space models,” IEEE Transactions on Signal Processing, vol. 53, no. 7, pp. 2279–2289,
2005. View at Publisher · View at Google Scholar · View at Scopus
24. R. Karlsson, Particle filtering for positioning and tracking applications [Ph.D. dissertation], Linköping University, Linköping, Sweden, 2005.
25. R. Chen and J. S. Liu, “Mixture Kalman filters,” Journal of the Royal Statistical Society B, vol. 62, no. 3, pp. 493–508, 2000. View at Scopus
26. A. Doucet, N. de Freitas, and N. Gordon, Eds., Sequential Monte Carlo Methods in Practice, Springer, 2001.
27. C. Rao, “Information and the accuracy attainable in the estimation of statistical parameters,” Bulletin of Calcutta Mathematical Society, vol. 37, pp. 81–91, 1945.
28. D. Blackwell, “Conditional expectation and unbiased sequential estimation,” The Annals of Mathematical Statistics, vol. 18, no. 1, pp. 105–110, 1947.
29. E. Lehmann, Theory of Point Estimation. Probability and Mathematical Statistics, John Wiley & Sons, 1983.
30. A. Papoulis and S. U. Pillai, Probability, Random Variables and Stochastic Processes, McGraw-Hill, New Delhi, India, 4th edition, 2001.
31. P. Closas, C. Fernández-Prades, and J. A. Fernández-Rubio, “Bayesian DLL for multipath mitigation in navigation systems using particle filters,” in Proceedings of the IEEE (ICASSP '06), Toulouse,
France, May 2006.
32. P. Closas, C. Fernández-Prades, and J. A. Fernández-Rubio, “A Bayesian approach to multipath mitigation in GNSS receivers,” IEEE Journal on Selected Topics in Signal Processing, vol. 3, no. 4,
pp. 695–706, 2009. View at Publisher · View at Google Scholar · View at Scopus
33. M. Lentmaier, B. Krach, and P. Robertson, “Bayesian time delay estimation of GNSS signals in dynamic multipath environments,” International Journal of Navigation and Observation, vol. 2008,
Article ID 372651, 11 pages, 2008. View at Publisher · View at Google Scholar
34. B. Krach, P. Robertson, and R. Weigel, “An efficient two-fold marginalized Bayesian filter for multipath estimation in satellite navigation receivers,” Eurasip Journal on Advances in Signal
Processing, vol. 2010, Article ID 287215, 2010. View at Publisher · View at Google Scholar · View at Scopus
35. A. Steingass and A. Lehner, “Measuring the navigation multipath channel—a statistical analysis,” in Proceedings of the 17th International Technical Meeting of the Satellite Division of the
Institute of Navigation (ION GNSS '04), pp. 1157–1164, Long Beach, Calif, USA, September 2004. View at Scopus
36. M. Irsigler, J. A. Ávila-Rodríguez, and G. W. Hein, “Criteria for GNSS multipath performance assessment,” in Proceedings of the International Technical Meeting of the Institute of Navigation(ION
GPS/GNSS '05), Long Beach, Calif, USA, September 2005.
37. P. Closas, C. Fernández-Prades, J. Diez, and D. de Castro, “Multipath estimating tracking loops in advanced GNSS receivers with particle filtering,” in Proceedings of the IEEE Aerospace
Conference, Big Sky, Mont, USA, March 2012.
Summary: THE SCALING OF FLUVIAL LANDSCAPES
Björn Birnir # Terence R. Smith + \Lambda George E. Merchant +
# Department of Mathematics, University of California at Santa Barbara, California 93106, USA
and University of Iceland, Science Institute, 3 Dunhaga, 107 Reykjavík
+ Department of Geography, University of California at Santa Barbara, California 93106, USA
\Lambda Department of Computer Science, University of California at Santa Barbara, California 93106,
The analysis of a family of physically-based landscape models leads to the analysis of two
stochastic processes that seem to determine the shape and structure of river basins. The
partial differential equations determine the scaling invariances of the landscape through these
processes. The models bridge the gap between the stochastic and deterministic approaches
to landscape evolution because they produce noise by sediment divergences seeded by
instabilities in the water flow. The first process is a channelization process corresponding
to Brownian motion of the initial slopes. It is driven by white noise and characterized by
the spatial roughness coefficient of 0.5. The second process, driven by colored noise, is a
maturation process where the landscape moves closer to a mature landscape determined by
separable solutions. This process is characterized by the spatial roughness coefficient of
0.75 and is analogous to an interface driven through random media with quenched noise.
The values of the two scaling exponents, which are interpreted as reflecting universal, but
KS2, Measures
• It's the only way to access our downloadable files;
• You can use our search box tool;
• Registered users see fewer Adverts;
• You will receive our 'irregular' newsletters;
• It's free.
Unless specified otherwise in the individual descriptions, MathSticks resources are licensed under a Creative Commons Licence.
You are free to use, share, copy, distribute, and transmit the work, provided that you give mathsticks.com credit for the work and logos remain intact. You may not alter, transform, or build upon the work, nor may you use it in any form for commercial purposes.
Optimization Algorithms
There are three groups of optimization techniques available in PROC NLP. A particular optimizer can be selected with the TECH=name option in the PROC NLP statement.
│ Algorithm │ TECH= │
│ Linear Complementary Problem │ LICOMP │
│ Quadratic Active Set Technique │ QUADAS │
│ Trust-Region Method │ TRUREG │
│ Newton-Raphson Method With Line-Search │ NEWRAP │
│ Newton-Raphson Method With Ridging │ NRRIDG │
│ Quasi-Newton Methods (DBFGS, DDFP, BFGS, DFP) │ QUANEW │
│ Double-Dogleg Method (DBFGS, DDFP) │ DBLDOG │
│ Conjugate Gradient Methods (PB, FR, PR, CD) │ CONGRA │
│ Nelder-Mead Simplex Method │ NMSIMP │
│ Levenberg-Marquardt Method │ LEVMAR │
│ Hybrid Quasi-Newton Methods (DBFGS, DDFP) │ HYQUAN │
Since no single optimization technique is invariably superior to others, PROC NLP provides a variety of optimization techniques that work well in various circumstances. However, it is possible to devise problems for which none of the techniques in PROC NLP can find the correct solution. Moreover, nonlinear optimization can be computationally expensive in terms of time and memory, so care must be taken when matching an algorithm to a problem.
All optimization techniques in PROC NLP use O(n^2) memory except the conjugate gradient methods, which use only O(n) memory and are designed to optimize problems with many variables. Since the
techniques are iterative, they require the repeated computation of
• the function value (optimization criterion)
• the gradient vector (first-order partial derivatives)
• for some techniques, the (approximate) Hessian matrix (second-order partial derivatives)
• values of linear and nonlinear constraints
• the first-order partial derivatives (Jacobian) of nonlinear constraints
However, since each of the optimizers requires different derivatives and supports different types of constraints, some computational efficiencies can be gained. The following table shows, for each
optimization technique, which derivatives are needed (FOD: first-order derivatives; SOD: second-order derivatives) and what kind of constraints (BC: boundary constraints; LIC: linear constraints;
NLC: nonlinear constraints) are supported.
│ Algorithm │ FOD │ SOD │ BC │ LIC │ NLC │
│ LICOMP │ - │ - │ x │ x │ - │
│ QUADAS │ - │ - │ x │ x │ - │
│ TRUREG │ x │ x │ x │ x │ - │
│ NEWRAP │ x │ x │ x │ x │ - │
│ NRRIDG │ x │ x │ x │ x │ - │
│ QUANEW │ x │ - │ x │ x │ x │
│ DBLDOG │ x │ - │ x │ x │ - │
│ CONGRA │ x │ - │ x │ x │ - │
│ NMSIMP │ - │ - │ x │ x │ x │
│ LEVMAR │ x │ - │ x │ x │ - │
│ HYQUAN │ x │ - │ x │ x │ - │
Preparation for Using Optimization Algorithms
It is rare that a problem is submitted to an optimization algorithm "as is." By making a few changes in your problem, you can reduce its complexity, which increases the chance of convergence and saves execution time.
• Whenever possible, use linear functions instead of nonlinear functions. PROC NLP will reward you with faster and more accurate solutions.
• Most optimization algorithms are based on quadratic approximations to nonlinear functions. You should try to avoid the use of functions that cannot be properly approximated by quadratic functions. In particular, try to avoid the use of rational functions. For example, the constraint
sin(x)/(x+1) > 0
should be replaced by the equivalent constraint
sin(x)(x+1) > 0
and the constraint
sin(x)/(x+1) = 1
should be replaced by the equivalent constraint
sin(x) - (x+1) = 0
• Try to avoid the use of exponential functions, if possible.
• If you can reduce the complexity of your function by the addition of a small number of variables, that may help the algorithm avoid stationary points.
• Provide the best starting point you can. A good starting point leads to better quadratic approximations and faster convergence.
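The sign equivalence behind the rational-constraint replacement above can be spot-checked numerically; here sin(x)/(x+1) > 0 versus sin(x)(x+1) > 0 (an illustrative Python script, not PROC NLP syntax):

```python
import math

def rational(x):
    return math.sin(x) / (x + 1.0)

def polynomial(x):
    return math.sin(x) * (x + 1.0)

# The two forms agree in sign everywhere except at the pole x = -1,
# where only the polynomial form is well defined.
samples = [x / 10.0 for x in range(-80, 80) if x != -10]
agree = all((rational(x) > 0) == (polynomial(x) > 0) for x in samples)
```

Multiplying through by (x+1)^2 > 0 preserves the sign of the expression, which is why the polynomial form defines the same feasible region while remaining smooth for the optimizer.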
Choosing an Optimization Algorithm
The factors that go into choosing a particular optimizer for a particular problem are complex and may involve trial and error. The following should be taken into account: First, the structure of the
problem has to be considered: Is it quadratic? least-squares? Does it have linear or nonlinear constraints? Next, it is important to consider the type of derivatives of the objective function and the
constraints that are needed and whether these are analytically tractable or not. This section provides some guidelines for making the right choices.
For many optimization problems, computing the gradient takes more computer time than computing the function value, and computing the Hessian sometimes takes much more computer time and memory than
computing the gradient, especially when there are many decision variables. Optimization techniques that do not use the Hessian usually require more iterations than techniques that do use Hessian
approximations (such as finite differences or BFGS update) and so are often slower. Techniques that do not use Hessians at all tend to be slow and less reliable.
The derivative compiler is not efficient in the computation of second-order derivatives. For large problems, memory and computer time can be saved by programming your own derivatives using the GRADIENT, JACOBIAN, CRPJAC, HESSIAN, and JACNLC statements. If you are not able or willing to specify first- and second-order derivatives of the objective function, you can rely on finite-difference gradients and Hessian update formulas. This combination is frequently used and works very well for small and medium-sized problems. For large problems, you are advised not to use an optimization technique that requires the computation of second derivatives.
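The two fallbacks just mentioned can be sketched outside PROC NLP (illustrative Python with a hypothetical test function): a central-difference gradient, and the inverse-Hessian BFGS update, whose result satisfies the secant equation by construction.

```python
def fd_gradient(f, x, h=1e-6):
    """Central-difference gradient: what a solver can fall back to when
    no analytic gradient is supplied."""
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

def bfgs_update(H, s, y):
    """Inverse-Hessian BFGS update: the returned matrix maps the gradient
    change y back onto the step s (secant equation H_new y = s)."""
    n = len(s)
    sy = sum(si * yi for si, yi in zip(s, y))
    Hy = [sum(H[i][j] * y[j] for j in range(n)) for i in range(n)]
    yHy = sum(yi * hyi for yi, hyi in zip(y, Hy))
    return [[H[i][j]
             + (sy + yHy) * s[i] * s[j] / sy ** 2
             - (Hy[i] * s[j] + s[i] * Hy[j]) / sy
             for j in range(n)] for i in range(n)]

# Demo on the hypothetical function f(x) = x1^2 + 2 x2^2
f = lambda x: x[0] ** 2 + 2 * x[1] ** 2
g = fd_gradient(f, [1.0, 1.0])           # close to the exact gradient (2, 4)
H = [[1.0, 0.0], [0.0, 1.0]]             # initial inverse-Hessian guess
s = [0.1, -0.2]                          # step taken
y = [0.2, -0.8]                          # gradient change: (2 s1, 4 s2) for this f
H1 = bfgs_update(H, s, y)
Hy = [sum(H1[i][j] * y[j] for j in range(2)) for i in range(2)]
```

No second derivatives are ever computed: curvature information is accumulated from (s, y) pairs alone, which is the idea behind the quasi-Newton TECH= options.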
The following provides some guidance for matching an algorithm to a particular problem.
• Quadratic Programming
• General Nonlinear Optimization
□ Nonlinear Constraints
☆ Small Problems: NMSIMP
Not suitable for highly nonlinear problems or for problems with n > 20.
☆ Medium Problems: QUANEW
□ Only Linear Constraints
☆ Small Problems: TRUREG (NEWRAP, NRRIDG)
(n(n+1)/2 double words; TRUREG and NEWRAP need two such matrices.)
☆ Medium Problems: QUANEW (DBLDOG)
(n(n+1)/2 double words).
☆ Large Problems: CONGRA
(n > 200) where the objective function and the gradient can be computed much faster than the Hessian and where too much memory is needed to store the (approximate) Hessian. CONGRA in
general needs more iterations than QUANEW or DBLDOG, but each iteration can be much faster. Since CONGRA needs only a factor of n double-word memory, many large applications of PROC NLP
can be solved only by CONGRA.
☆ No Derivatives: NMSIMP
• Least-Squares Minimization
□ Small Problems: LEVMAR (HYQUAN)
□ Medium Problems: QUANEW (DBLDOG)
□ Large Problems: CONGRA
□ No Derivatives: NMSIMP
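The reason CONGRA scales to large n is visible in the linear conjugate-gradient core that the nonlinear variants (Fletcher-Reeves, Polak-Ribière, and so on) generalize: only a few length-n vectors are stored, and the matrix enters only through matrix-vector products (illustrative Python, not PROC NLP's implementation).

```python
def conjugate_gradient(matvec, b, tol=1e-12):
    """Linear CG for A x = b with A symmetric positive definite. Memory use
    is a handful of length-n vectors; A is touched only via matvec."""
    n = len(b)
    x = [0.0] * n
    r = list(b)               # residual b - A x  (x starts at 0)
    d = list(b)               # search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(n):
        Ad = matvec(d)
        alpha = rs / sum(di * adi for di, adi in zip(d, Ad))
        x = [xi + alpha * di for xi, di in zip(x, d)]
        r = [ri - alpha * adi for ri, adi in zip(r, Ad)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        d = [ri + (rs_new / rs) * di for ri, di in zip(r, d)]
        rs = rs_new
    return x

# Toy SPD system: A = diag(1, 2, ..., 50), b = ones -> solution x_i = 1/(i+1)
n = 50
b = [1.0] * n
sol = conjugate_gradient(lambda v: [(i + 1) * vi for i, vi in enumerate(v)], b)
```

Nothing of size n × n is ever formed, which is why conjugate-gradient codes remain usable when storing an (approximate) Hessian of n(n+1)/2 double words is out of the question.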
The QUADAS and LICOMP algorithms can be used to minimize or maximize a quadratic objective function,
f(x) = (1/2) x^T G x + g^T x + c,
with linear or boundary constraints
A x ≥ b and lb ≤ x ≤ ub,
where x = (x[1], ... ,x[n])^T, g = (g[1], ... ,g[n])^T, G is an n ×n symmetric matrix, A is an m ×n matrix of general linear constraints, and b = (b[1], ... ,b[m])^T. The value of c modifies only the
value of the objective function, not its derivatives, and the location of the optimizer x^* does not depend on the value of the constant term c. For QUADAS or LICOMP, the objective function must be
specified using the MINQUAD or MAXQUAD statement or using an INQUAD= data set. In this case, derivatives do not need to be specified, because the gradient vector
g(x) = G x + g
and the n ×n Hessian matrix
G(x) = G
are easily obtained from the data input.
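For a quadratic objective f(x) = (1/2) x^T G x + g^T x + c, the gradient is G x + g and the Hessian is the constant matrix G, which is why no derivative code is needed. A minimal pure-Python sketch (the 2×2 matrix, vector, and evaluation point below are made-up illustration values, not from the SAS documentation) that checks the analytic gradient against finite differences:

```python
def quad_f(G, g, c, x):
    # f(x) = 0.5 * x^T G x + g^T x + c
    n = len(x)
    xGx = sum(x[i] * G[i][j] * x[j] for i in range(n) for j in range(n))
    return 0.5 * xGx + sum(g[i] * x[i] for i in range(n)) + c

def quad_grad(G, g, x):
    # Analytic gradient of the quadratic: G x + g (G symmetric)
    n = len(x)
    return [sum(G[i][j] * x[j] for j in range(n)) + g[i] for i in range(n)]

def fd_grad(G, g, c, x, h=1e-6):
    # Central finite differences, for checking only
    grad = []
    for i in range(len(x)):
        xp = list(x); xp[i] += h
        xm = list(x); xm[i] -= h
        grad.append((quad_f(G, g, c, xp) - quad_f(G, g, c, xm)) / (2 * h))
    return grad

G = [[2.0, 1.0], [1.0, 3.0]]   # symmetric, made-up example
g = [-1.0, 0.5]
c = 4.0
x = [0.3, -0.7]
```

Note that c drops out of the gradient entirely, matching the statement that the optimizer's location does not depend on the constant term.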
Simple boundary and general linear constraints can be specified using the BOUNDS or LINCON statement or an INQUAD= or INEST= data set.
General Quadratic Programming (QUADAS)
The QUADAS algorithm is an active set method that iteratively updates the QT decomposition of the matrix A[k] of active linear constraints and the Cholesky factor of the projected Hessian Z^TGZ
simultaneously. The update of active boundary and linear constraints is done separately; refer to Gill et al. (1984). Here Q is an n[free] ×n[free] orthogonal matrix composed of vectors spanning the
null space Z of A[k] in its first n[free] - n[alc] columns and range space Y in its last n[alc] columns; T is an n[alc] ×n[alc] triangular matrix of special form, t[ij]=0 for i < n-j, where n[free]
is the number of free parameters (n minus the number of active boundary constraints), and n[alc] is the number of active linear constraints. The Cholesky factor of the projected Hessian matrix Z^T[k]
GZ[k] and the QT decomposition are updated simultaneously when the active set changes.
The LICOMP technique solves a quadratic problem as a linear complementarity problem. It can be used only if G is positive (negative) semi-definite for minimization (maximization) and if the
parameters are restricted to be positive.
This technique finds a point that meets the Karush-Kuhn-Tucker conditions by solving the linear complementarity problem
w = M z + q
with constraints
w^T z = 0, w ≥ 0, z ≥ 0.
Only the LCEPSILON= option can be used to specify a tolerance used in computations.
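Whether a candidate point satisfies the complementarity conditions — w = M z + q with w ≥ 0, z ≥ 0, and w^T z = 0 — is easy to verify numerically. A pure-Python sketch (M, q, and the candidate vectors are made-up illustration values; the tolerance argument plays the role that LCEPSILON= plays in PROC NLP):

```python
def is_lcp_solution(M, q, z, eps=1e-8):
    """Check w = M z + q, w >= 0, z >= 0, w^T z = 0, up to tolerance eps."""
    n = len(q)
    w = [sum(M[i][j] * z[j] for j in range(n)) + q[i] for i in range(n)]
    nonneg = all(wi >= -eps for wi in w) and all(zi >= -eps for zi in z)
    complementary = abs(sum(w[i] * z[i] for i in range(n))) <= eps
    return nonneg and complementary

M = [[2.0, 0.0], [0.0, 2.0]]   # positive semi-definite, as LICOMP requires
q = [-2.0, 1.0]
```

For this M and q the point z = [1, 0] gives w = [0, 1], which satisfies all three conditions, while z = [0, 0] gives w = q with a negative component and fails.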
General Nonlinear Optimization
Trust-Region Optimization (TRUREG)
The trust-region method uses the gradient g(x^(k)) and Hessian matrix G(x^(k)) and thus requires that the objective function f(x) have continuous first- and second-order derivatives inside the
feasible region.
The trust-region method iteratively optimizes a quadratic approximation to the nonlinear objective function within a hyperelliptic trust region whose radius reflects the quality of the quadratic approximation (Moré and Sorensen 1983).
The trust-region method performs well for small to medium-sized problems and does not require many function, gradient, and Hessian calls. If the computation of the Hessian matrix is computationally
expensive, use the UPDATE= option for update formulas that gradually build up the second-order information in the Hessian. For larger problems, the conjugate gradient algorithm may be more efficient.
Newton-Raphson Optimization With Line-Search (NEWRAP)
The NEWRAP technique uses the gradient g(x^(k)) and Hessian matrix G(x^(k)) and thus requires that the objective function have continuous first- and second-order derivatives inside the feasible
region. If second-order derivatives are computed efficiently and precisely, the NEWRAP method may perform well for medium-sized to large problems, and it does not need many function, gradient, and
Hessian calls.
This algorithm uses a pure Newton step when the Hessian is positive definite and when the Newton step reduces the value of the objective function successfully. Otherwise, a combination of ridging and
line-search is done to compute successful steps. If the Hessian is not positive definite, a multiple of the identity matrix is added to the Hessian matrix to make it positive definite (Eskow and
Schnabel 1991).
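The ridging idea — add λI to the Hessian, increasing λ until a Cholesky factorization succeeds — can be sketched in a few lines of pure Python. (The 2×2 indefinite matrix is a made-up example, and the simple doubling loop below is only an illustration; the Eskow-Schnabel modified Cholesky used by PROC NLP is considerably more refined.)

```python
import math

def try_cholesky(A):
    """Return the lower-triangular Cholesky factor of A,
    or None if A is not positive definite."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                d = A[i][i] - s
                if d <= 0.0:          # non-positive pivot: not PD
                    return None
                L[i][j] = math.sqrt(d)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

def ridge_to_pd(H, lam0=1e-3):
    """Add lam * I, doubling lam, until H + lam*I is positive definite."""
    n = len(H)
    lam = lam0
    while True:
        A = [[H[i][j] + (lam if i == j else 0.0) for j in range(n)]
             for i in range(n)]
        if try_cholesky(A) is not None:
            return A, lam
        lam *= 2.0

H = [[1.0, 2.0], [2.0, 1.0]]   # eigenvalues 3 and -1: indefinite
A, lam = ridge_to_pd(H)
```

Here H + λI becomes positive definite exactly when λ > 1 (its eigenvalues are 3 + λ and -1 + λ), so the doubling loop stops at λ = 1.024.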
In each iteration, a line-search is done along the search direction to find an approximate optimum of the objective function. The default line-search method uses quadratic interpolation and cubic
extrapolation (LIS=2).
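The quadratic-interpolation part of such a line-search can be illustrated in isolation: sample the objective at three trial step sizes, fit a parabola through them, and jump to its minimizer. (This is a textbook sketch, not the actual LIS=2 implementation; the sample function is a made-up example.)

```python
def quad_interp_min(a0, f0, a1, f1, a2, f2):
    """Minimizer of the parabola through (a0,f0), (a1,f1), (a2,f2)."""
    num = f0 * (a1**2 - a2**2) + f1 * (a2**2 - a0**2) + f2 * (a0**2 - a1**2)
    den = 2.0 * (f0 * (a1 - a2) + f1 * (a2 - a0) + f2 * (a0 - a1))
    return num / den

# Sample f(alpha) = (alpha - 2)^2 + 1 at three trial step sizes.
f = lambda a: (a - 2.0)**2 + 1.0
alpha = quad_interp_min(0.0, f(0.0), 1.0, f(1.0), 3.0, f(3.0))
```

For a truly quadratic function one interpolation step lands exactly on the minimizer (here alpha = 2); for a general objective the step is refined iteratively and combined with cubic extrapolation.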
Newton-Raphson Ridge Optimization (NRRIDG)
The NRRIDG technique uses the gradient g(x^(k)) and Hessian matrix G(x^(k)) and thus requires that the objective function have continuous first- and second-order derivatives inside the feasible region.
This algorithm uses a pure Newton step when the Hessian is positive definite and when the Newton step reduces the value of the objective function successfully. If at least one of these two conditions
is not satisfied, a multiple of the identity matrix is added to the Hessian matrix. If this algorithm is used for least-squares problems, it performs a ridged Gauss-Newton minimization.
The NRRIDG method performs well for small to medium-sized problems and does not need many function, gradient, and Hessian calls. However, if the computation of the Hessian matrix is computationally
expensive, one of the (dual) quasi-Newton or conjugate gradient algorithms may be more efficient.
Since NRRIDG uses an orthogonal decomposition of the approximate Hessian, each iteration of NRRIDG can be slower than that of NEWRAP, which works with Cholesky decomposition. However, usually NRRIDG
needs fewer iterations than NEWRAP.
Quasi-Newton Optimization (QUANEW)
The (dual) quasi-Newton method uses the gradient g(x^(k)) and does not need to compute second-order derivatives since they are approximated. It works well for medium to moderately large optimization
problems where the objective function and the gradient are much faster to compute than the Hessian, but in general it requires more iterations than the techniques TRUREG, NEWRAP, and NRRIDG, which
compute second-order derivatives.
The QUANEW algorithm depends on whether or not there are nonlinear constraints.
Unconstrained or Linearly Constrained Problems
If there are no nonlinear constraints, QUANEW is either
• the original quasi-Newton algorithm that updates an approximation of the inverse Hessian
• the dual quasi-Newton algorithm that updates the Cholesky factor of an approximate Hessian (default)
depending upon the value of the UPDATE= options. For problems with general linear inequality constraints, the dual quasi-Newton methods can be more efficient than the original ones.
Four update formulas can be specified with the UPDATE= option:
• UPDATE=DBFGS performs the dual BFGS (Broyden, Fletcher, Goldfarb, & Shanno) update of the Cholesky factor of the Hessian matrix. This is the default.
• UPDATE=DDFP performs the dual DFP (Davidon, Fletcher, & Powell) update of the Cholesky factor of the Hessian matrix.
• UPDATE=BFGS performs the original BFGS (Broyden, Fletcher, Goldfarb, & Shanno) update of the inverse Hessian matrix.
• UPDATE=DFP performs the original DFP (Davidon, Fletcher, & Powell) update of the inverse Hessian matrix.
In each iteration, a line-search is done along the search direction to find an approximate optimum. The default line-search method uses quadratic interpolation and cubic extrapolation to obtain a step size α satisfying the Goldstein conditions.
Nonlinearly Constrained Problems
The algorithm used for nonlinearly constrained quasi-Newton optimization is an efficient modification of Powell's (1978, 1982) Variable Metric Constrained WatchDog (VMCWD) algorithm. A similar but
older algorithm (VF02AD) is part of the Harwell library. Both VMCWD and VF02AD use Fletcher's VE02AD algorithm (part of the Harwell library) for positive definite quadratic programming. The PROC NLP
QUANEW implementation uses a quadratic programming subroutine that updates and downdates the approximation of the Cholesky factor when the active set changes. The nonlinear QUANEW algorithm is not a
feasible point algorithm, and the value of the objective function need not decrease (minimization) or increase (maximization) monotonically. Instead, the algorithm tries to reduce a linear
combination of the objective function and constraint violations, called the merit function.
The following are similarities and differences between this algorithm and the VMCWD algorithm:
• A modification of this algorithm can be performed by specifying VERSION=1, which replaces the update of the Lagrange vector with the original update that is used in VF02AD. This can be helpful for some applications with
linearly dependent active constraints.
• If the VERSION= option is not specified or if VERSION=2 is specified, the evaluation of the Lagrange vector is performed as described in Powell (1982).
• Instead of updating an approximate Hessian matrix, this algorithm uses the dual BFGS (or DFP) update that updates the Cholesky factor of an approximate Hessian. If the condition of the updated
matrix gets too bad, a restart is done with a positive diagonal matrix. At the end of the first iteration after each restart, the Cholesky factor is scaled.
• The Cholesky factor is loaded into the quadratic programming subroutine, automatically ensuring positive definiteness of the problem. During the quadratic programming step, the Cholesky factor of
the projected Hessian matrix Z^T[k]GZ[k] and the QT decomposition are updated simultaneously when the active set changes. Refer to Gill et al. (1984) for more information.
• The line-search strategy is very similar to that of Powell (1982). However, this algorithm does not call for derivatives during the line-search, so the algorithm generally needs fewer derivative
calls than function calls. VMCWD always requires the same number of derivative and function calls. Sometimes Powell's line-search method uses steps that are too long. In these cases, use the
INSTEP= option to restrict the step length.
• The watchdog strategy is similar to that of Powell (1982); however, it doesn't return automatically after a fixed number of iterations to a former better point. A return here is further delayed
if the observed function reduction is close to the expected function reduction of the quadratic model.
• The Powell termination criterion is still used (as FTOL2), but the QUANEW implementation uses two additional termination criteria (GTOL and ABSGTOL).
The nonlinear QUANEW algorithm needs the Jacobian matrix of the first-order derivatives (constraints normals) of the constraints CJ(x).
You can specify two update formulas with the UPDATE=option:
• UPDATE=DBFGS performs the dual BFGS update of the Cholesky factor of the Hessian matrix. This is the default.
• UPDATE=DDFP performs the dual DFP update of the Cholesky factor of the Hessian matrix.
This algorithm uses its own line-search technique. All options and parameters (except the INSTEP= option) controlling the line-search in the other algorithms do not apply here. In several
applications, large steps in the first iterations were troublesome. You can use the INSTEP= option to impose an upper bound for the step size.
Double Dogleg Optimization (DBLDOG)
The double dogleg optimization method combines the ideas of quasi-Newton and trust region methods. The double dogleg algorithm computes in each iteration the step s^(k) as the linear combination of
the steepest descent or ascent search direction s[1]^(k) and a quasi-Newton search direction s[2]^(k):
s^(k) = α[1] s[1]^(k) + α[2] s[2]^(k).
The step is requested to remain within a prespecified trust region radius, refer to Fletcher (1987, p. 107). Thus, the DBLDOG subroutine uses the dual quasi-Newton update but does not perform a
line-search. Two update formulas can be specified with the UPDATE= option:
• UPDATE=DBFGS performs the dual BFGS (Broyden, Fletcher, Goldfarb, & Shanno) update of the Cholesky factor of the Hessian matrix. This is the default.
• UPDATE=DDFP performs the dual DFP (Davidon, Fletcher, & Powell) update of the Cholesky factor of the Hessian matrix.
The double dogleg optimization technique works well for medium to moderately large optimization problems where the objective function and the gradient are much faster to compute than the Hessian. The
implementation is based on Dennis & Mei (1979) and Gay (1983) but is extended for dealing with boundary and linear constraints. DBLDOG generally needs more iterations than the techniques TRUREG,
NEWRAP, or NRRIDG that need second-order derivatives, but each of the DBLDOG iterations is computationally cheap. Furthermore, DBLDOG needs only gradient calls for the update of the Cholesky factor
of an approximate Hessian.
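The flavor of such combined steps can be illustrated with the classic single-dogleg construction (a deliberate simplification: DBLDOG's actual double-dogleg path and dual quasi-Newton updates differ in detail, and the 2×2 model data below are made-up). The step takes the full quasi-Newton step if it fits in the trust region, falls back to truncated steepest descent if even the Cauchy point lies outside, and otherwise bends from the Cauchy point toward the quasi-Newton point until it hits the trust radius:

```python
import math

def dogleg_step(g, B, delta):
    """Single-dogleg step for the model m(p) = g^T p + 0.5 p^T B p,
    with B a 2x2 positive definite matrix (illustration only)."""
    # Full quasi-Newton step p_B = -B^{-1} g (2x2 inverse by hand)
    det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
    pB = [-( B[1][1] * g[0] - B[0][1] * g[1]) / det,
          -(-B[1][0] * g[0] + B[0][0] * g[1]) / det]
    norm = lambda v: math.sqrt(v[0]**2 + v[1]**2)
    if norm(pB) <= delta:
        return pB
    # Cauchy point p_U = -(g^T g / g^T B g) g along steepest descent
    gg = g[0]**2 + g[1]**2
    gBg = sum(g[i] * B[i][j] * g[j] for i in range(2) for j in range(2))
    pU = [-(gg / gBg) * g[0], -(gg / gBg) * g[1]]
    if norm(pU) >= delta:
        s = delta / math.sqrt(gg)
        return [-s * g[0], -s * g[1]]
    # Otherwise walk from p_U toward p_B until ||p|| = delta
    d = [pB[0] - pU[0], pB[1] - pU[1]]
    a = d[0]**2 + d[1]**2
    b = 2.0 * (pU[0] * d[0] + pU[1] * d[1])
    c = pU[0]**2 + pU[1]**2 - delta**2
    tau = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)
    return [pU[0] + tau * d[0], pU[1] + tau * d[1]]
```

With g = [4, 4] and B = diag(1, 4), a large radius returns the quasi-Newton step [-4, -1], a tight radius returns a scaled steepest-descent step, and intermediate radii return a bent step of exactly the trust-region length.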
Conjugate Gradient Optimization (CONGRA)
Second-order derivatives are not used by CONGRA. The CONGRA algorithm can be expensive in function and gradient calls but needs only O(n) memory for unconstrained optimization. In general, many
iterations are needed to obtain a precise solution, but each of the CONGRA iterations is computationally cheap. Four different update formulas for generating the conjugate directions can be specified
using the UPDATE= option:
• UPDATE=PB performs the automatic restart update method of Powell (1977) and Beale (1972). This is the default.
• UPDATE=FR performs the Fletcher-Reeves update (Fletcher 1987).
• UPDATE=PR performs the Polak-Ribiere update (Fletcher 1987).
• UPDATE=CD performs a conjugate-descent update of Fletcher (1987).
The default value is UPDATE=PB, since it behaved best in most test examples. You are advised to avoid the option UPDATE=CD, which behaved worst in most test examples.
The CONGRA subroutine should be used for optimization problems with large n. For the unconstrained or boundary constrained case, CONGRA needs only O(n) bytes of working memory, whereas all other
optimization methods require O(n^2) bytes of working memory. During n successive iterations, uninterrupted by restarts or changes in the working set, the conjugate gradient algorithm computes a
cycle of n conjugate search directions. In each iteration, a line-search is done along the search direction to find an approximate optimum of the objective function. The default line-search method
uses quadratic interpolation and cubic extrapolation to obtain a step size α satisfying the Goldstein conditions.
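A minimal sketch of conjugate gradients with the Fletcher-Reeves update, run on a small quadratic model (the 2×2 system is a made-up example; on a quadratic the exact line-search step is available in closed form, so no interpolation is needed):

```python
def cg_fletcher_reeves(A, b, x, iters=10):
    """Minimize 0.5 x^T A x - b^T x for symmetric positive definite A."""
    n = len(b)
    mv = lambda M, v: [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    Ax = mv(A, x)
    r = [b[i] - Ax[i] for i in range(n)]       # negative gradient
    d = list(r)                                # first search direction
    for _ in range(iters):
        Ad = mv(A, d)
        if dot(d, Ad) == 0.0:
            break
        alpha = dot(r, r) / dot(d, Ad)         # exact line search on a quadratic
        x = [x[i] + alpha * d[i] for i in range(n)]
        r_new = [r[i] - alpha * Ad[i] for i in range(n)]
        if dot(r_new, r_new) < 1e-20:
            break
        beta = dot(r_new, r_new) / dot(r, r)   # Fletcher-Reeves update
        d = [r_new[i] + beta * d[i] for i in range(n)]
        r = r_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = cg_fletcher_reeves(A, b, [0.0, 0.0])
```

Note the O(n) working set: only a handful of n-vectors are stored, never a matrix factor, which is exactly why a CONGRA-style method scales to large n.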
Nelder-Mead Simplex Optimization (NMSIMP)
The Nelder-Mead simplex method does not use any derivatives and does not assume that the objective function has continuous derivatives. The objective function itself needs to be continuous. This
technique requires a large number of function evaluations. It is unlikely to give accurate results for problems with many parameters.
Depending on the kind of constraints, one of the following Nelder-Mead simplex algorithms is used:
• unconstrained or only boundary constrained problems
The original Nelder-Mead simplex algorithm is implemented and extended to boundary constraints. This algorithm does not compute the objective for infeasible points. This algorithm is
automatically invoked if neither the LINCON nor the NLINCON statement is specified.
• general linearly or nonlinearly constrained problems
A slightly modified version of Powell's (1992) COBYLA implementation is invoked if the LINCON or NLINCON statement is specified.
The original Nelder-Mead algorithm cannot be used for general linear or nonlinear constraints, but can be faster for the unconstrained or boundary constrained case. The original Nelder-Mead algorithm
changes the shape of the simplex, adapting to the nonlinearities of the objective function, which contributes to an increased speed of convergence. The two NMSIMP subroutines use special sets of
termination criteria. For more details, refer to the section "Termination Criteria".
Powell's COBYLA Algorithm (COBYLA)
Powell's COBYLA algorithm is a sequential trust-region algorithm (originally with a monotonically decreasing radius ρ of a spherical trust region) that tries to maintain a regularly shaped simplex over the iterations. Convergence to small values of ρ (high precision) can require many function and constraint evaluations, for two reasons:
• Only linear approximations of the objective and constraint functions are used locally.
• Maintaining the regular-shaped simplex and not adapting its shape to nonlinearities yields very small simplexes for highly nonlinear functions (for example, fourth-order polynomials).
Nonlinear Least-Squares Optimization
Levenberg-Marquardt Least-Squares Method (LEVMAR)
The Levenberg-Marquardt method is a modification of the trust-region method for nonlinear least-squares problems and is implemented as in Moré (1978).
This is the recommended algorithm for small- to medium-sized least-squares problems. Large least-squares problems can be transformed into minimization problems, which can be processed with conjugate
gradient or (dual) quasi-Newton techniques. In each iteration, LEVMAR solves a quadratically constrained quadratic minimization problem that restricts the step to stay at the surface of or inside an
n dimensional elliptical (or spherical) trust region. In each iteration, LEVMAR uses the crossproduct Jacobian matrix J^TJ as an approximate Hessian matrix.
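The core Levenberg-Marquardt iteration — solve (J^T J + λI)Δ = -J^T r, grow λ when a trial step fails, shrink it when a step succeeds — can be sketched for a one-parameter problem. (The exponential model, data, and the simple λ-update schedule below are made-up illustrations; Moré's implementation uses a trust region with scaling rather than this naive damping rule.)

```python
import math

def levmar_1d(theta, xs, ys, iters=100):
    """Fit y = exp(theta * x) by Levenberg-Marquardt on one parameter."""
    def residuals(t):
        return [math.exp(t * x) - y for x, y in zip(xs, ys)]
    def cost(t):
        return sum(r * r for r in residuals(t))
    lam = 1e-3
    for _ in range(iters):
        r = residuals(theta)
        J = [x * math.exp(theta * x) for x in xs]   # dr_i / dtheta
        JtJ = sum(j * j for j in J)                 # scalar J^T J
        Jtr = sum(j * ri for j, ri in zip(J, r))    # scalar J^T r
        step = -Jtr / (JtJ + lam)                   # damped Gauss-Newton step
        if cost(theta + step) < cost(theta):
            theta += step
            lam *= 0.1          # good step: move toward Gauss-Newton
        else:
            lam *= 10.0         # bad step: move toward gradient descent
        if abs(step) < 1e-12:
            break
    return theta

xs = [0.0, 1.0, 2.0, 3.0]
ys = [math.exp(0.5 * x) for x in xs]   # data generated with theta = 0.5
theta = levmar_1d(0.0, xs, ys)
```

Near the solution λ shrinks after repeated successes, so the iteration approaches a pure Gauss-Newton step built from the crossproduct J^T J, as described above.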
Hybrid Quasi-Newton Least-Squares Methods (HYQUAN)
In each iteration of one of the Fletcher and Xu (1987) (refer also to Al-Baali and Fletcher 1985, 1986) hybrid quasi-Newton methods, a criterion is used to decide whether a Gauss-Newton or a dual
quasi-Newton search direction is appropriate. The VERSION= option can be used to choose one of three criteria (HY1, HY2, HY3) proposed by Fletcher and Xu (1987). The default is VERSION=2; that is,
HY2. In each iteration, HYQUAN computes the crossproduct Jacobian (used for the Gauss-Newton step), updates the Cholesky factor of an approximate Hessian (used for the quasi-Newton step), and does a
line-search to compute an approximate minimum along the search direction. The default line-search technique used by HYQUAN is especially designed for least-squares problems (refer to Lindström and Wedin 1984). Two
update formulas can be specified with the UPDATE= option:
• UPDATE=DBFGS performs the dual BFGS (Broyden, Fletcher, Goldfarb, and Shanno) update of the Cholesky factor of the Hessian matrix. This is the default.
• UPDATE=DDFP performs the dual DFP (Davidon, Fletcher, and Powell) update of the Cholesky factor of the Hessian matrix.
The HYQUAN subroutine needs about the same amount of working memory as the LEVMAR algorithm. In most applications, LEVMAR seems to be superior to HYQUAN, and using HYQUAN is recommended only when
problems are experienced with the performance of LEVMAR.
Copyright © 1999 by SAS Institute Inc., Cary, NC, USA. All rights reserved.
Evolutionary Games and Population Dynamics:
Public goods games
by Christoph Hauert, Version 1.0, October 2006.
In a population of varying density, an attempt at gathering N individuals that engage in a public goods interaction might not always be successful at low population densities and instead of a group
of size N, only S≤N individuals participate. If S = 0 or 1 no interaction occurs. This leads to a natural feedback between population density and game theoretical interactions. The dynamics of
cooperators and defectors in public goods interactions is determined by their respective payoffs obtained in randomly formed groups of S individuals. Independent of whether the focal individual is a
cooperator or a defector, it receives the same expected payoff from its S - 1 co-players. Hence, the sole determinant of success is the return of the individual's own investment c, which is (r/S - 1)
c. For 1 < r < S defectors are always better off as required by the traditional formulation of the public goods game. However, for r > S the social dilemma is relaxed and cooperation dominates.
Nevertheless, defectors outperform cooperators in any group consisting of both types (this represents an instance of Simpson's paradox). Also note that this is a fleeting state, since thriving
cooperators increase the average population payoff and hence the population density, which in turn leads to larger interaction groups and puts defectors back into control.
The negative feedback between population density and interaction group size hinges on the fact that the group size can become smaller than r. For pairwise prisoner's dilemma interactions this is not
the case: because S cannot vary (and is always equal to N = 2), either r < S always holds (in which case the population goes extinct) or r > S always holds (in which case defectors disappear but
cooperators persist). The dynamic feed back cannot operate in either case.
This tutorial complements scientific articles co-authored with Miranda Holmes, Michael Doebeli and Joe Yuichiro Wakano and provides interactive Java applets to visualize and explore the systems'
dynamic for parameter settings of your choice.
Dynamical scenarios
The following panels illustrate the rich dynamics of this system. The phase space is spanned by the population density x + y (or 1 - z) and the relative fraction of cooperators f = x / (x + y). The
left boundary (z = 1) is attracting and consists of a line of stable fixed points (filled circles), which represent states where the population cannot maintain itself and disappears. Conversely, the
right boundary, which denotes the maximal population density (z = 0), is repelling. In absence of cooperators (bottom boundary, f = 0), population densities decrease and eventually vanish. Finally,
in absence of defectors (top boundary, f = 1), there are two saddle points (open circles) except for the last scenario where one is a stable node (filled circle). In addition, there may be an
interior fixed point Q present.
The following list illustrates the different dynamical scenarios. A click on the image to the left opens a new window with an interactive Java applet that allows one to explore the dynamics by numerical
integration of the differential equations. The initial condition is set with a mouse-click in the phase plane. Hit 'Run' for forward and backward integration of the differential equations for the
given initial condition. The numerical integration can be interrupted by another mouse-click on the phase plane.
Dynamical scenarios
Trajectory of the interior fixed point Q for increasing multiplication factors r. Along this trajectory, the system undergoes a series of various types of bifurcations:
1. No Q - extinction
2. Q unstable node - extinction
3. Q unstable focus - extinction
4. Q stable focus - co-existence
5. Q stable node - co-existence
6. No Q - cooperation
Series of bifurcations
The position of the fixed point Q changes with the multiplication factor r. Q enters on the top left for low r and leaves at the top right for high r. For increasing r, the system undergoes
a series of different types of bifurcations. The different dynamical scenarios (i)-(vi) apply depending on the location of Q. In (i) and (vi) Q is absent, in (ii), (iii) it is unstable and
in (iv), (v) it is stable. Between scenarios (iii) and (iv) a Hopf bifurcation occurs and over a very narrow range of r, stable and unstable limit cycles can be observed.
No matter what the initial configuration of the cooperators and defectors, the population will invariably go extinct.
Hint: start from different initial configurations to get a better intuition of the dynamics.
The presence of the interior fixed point Q does not affect the evolutionary end state of the system - the population keeps going extinct irrespective of the initial conditions.
Hint: backward integration reveals the location of the unstable node when starting in a suitable part of the phase plane.
For larger r, the interior fixed point Q turns into an unstable focus and - depending on the initial conditions - the population faces extinction in an oscillatory manner.
For slightly higher r the interior fixed point Q is still an unstable focus but now surrounded by a stable limit cycle - the hallmark of a supercritical Hopf bifurcation. Cooperators and
defectors co-exist in never ending periodic oscillations.
Hint: often, the forward integration will not stop and keep tracking the stable limit cycle. Just click on the phase plane to stop forward integration and start the backward integration.
Another click stops backward integration, too.
Increasing r further leads to a Hopf bifurcation, the interior fixed point Q becomes a stable focus and the limit cycle disappears. Depending on the initial conditions, cooperators and
defectors co-exist at some fixed densities. If exploitation by defectors is severe or population densities are too low, the population is unable to recover and goes extinct.
Another increase in r turns the interior fixed point Q into a stable node. As before, cooperators and defectors co-exist at some fixed densities; however, they no longer approach the equilibrium
in an oscillatory manner. Severe exploitation and low population densities again result in extinction.
For high r, the interior fixed point Q disappears and the high density saddle node along f = 1, i.e. in absence of defectors, becomes a stable equilibrium. Cooperators and defectors can no
longer co-exist, but now it is only the defectors that disappear, at least for favorable initial conditions. As always, severe exploitation and low population densities result in extinction.
Complex bifurcations
For larger group sizes N fascinating and much more complex Hopf bifurcations and dynamical scenarios are possible, which includes multiple, stable and unstable limit cycles. However, also note that r
values for which these fascinating bifurcations occur is restricted to a tiny interval. Thus, despite their appeal from a dynamical systems' perspective, the limit cycles might be of only limited
relevance for biological applications.
In this example, for N = 12, a stable and an unstable limit cycle exist on one side of the Hopf bifurcation and another stable limit cycle on the other side.
Hint: Try lowering r slightly to just below the Hopf-bifurcation (set r = 3.04). The interior fixed point Q is now an unstable focus surrounded by a stable limit cycle (see above).
In a population with a fraction x of cooperators, y of defectors and z = 1 - x - y of available space, the average payoff to cooperators f[C] and defectors f[D] is given by:
f[D] = r x / (1 - z) (1 - (1 - z^N) / (N (1 - z)))
f[C] = f[D] - F(z)
F(z) = 1 + (r - 1) z^(N-1) - r (1 - z^N) / (N (1 - z)).
Note that this derivation assumes that the benefit of the public good is contingent on social interactions, i.e. a single participant in the public goods interaction cannot increase its capital. For a
detailed derivation of the formulas please consult the scientific articles in the reference section.
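The payoff expressions translate directly into code. A short sketch (parameter values are arbitrary illustrations; F(z) is written with the exponent N - 1, which gives F(0) = 1 - r/N at full density, consistent with defectors winning for r < N and cooperation dominating for r > N):

```python
def payoff_defector(x, z, r, N):
    """Average defector payoff f_D for cooperator density x and
    empty-space fraction z (with x + y = 1 - z)."""
    return r * x / (1.0 - z) * (1.0 - (1.0 - z**N) / (N * (1.0 - z)))

def F(z, r, N):
    """Payoff advantage of defectors over cooperators, f_D - f_C."""
    return 1.0 + (r - 1.0) * z**(N - 1) - r * (1.0 - z**N) / (N * (1.0 - z))

def payoff_cooperator(x, z, r, N):
    return payoff_defector(x, z, r, N) - F(z, r, N)
```

At z = 0 (maximal density) the sign of F decides the game: positive for r < N (the social dilemma) and negative for r > N (relaxed dilemma).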
Virtual lab
The applet below illustrates the different components. Along the bottom there are several buttons to control the execution and the speed of the simulations. Of particular importance are the Param
button and the data views pop-up list on top. The former opens a panel that allows to set and change various parameters concerning the game as well as the population structure, while the latter
displays the simulation data in different ways.
Color code: Cooperators Defectors Empty
New cooperators New defectors New empty
Note: The shades of grey of the payoff scale are augmented by blueish and reddish shades indicating payoffs for mutual cooperation and defection, respectively.
Params Pop up panel to set various parameters.
Views Pop up list of different data presentations.
Reset Reset simulation
Run Start/resume simulation
Next Next generation
Pause Interrupt simulation
Slider Idle time between updates. On the right your CPU clock determines the update speed while on the left updates are made roughly once per second.
Mouse Mouse clicks on the graphics panels start, resume or stop the simulations.
Data views
Structure - Snapshot of the spatial arrangement of strategies.
Mean frequency Time evolution of the strategy frequencies.
Simplex S[3] Frequencies plotted in the simplex S[3]. Mouse clicks set the initial frequencies of strategies or stops the simulations.
Phase Plane 2D Frequencies plotted in the phase plane spanned by the population density (x + y = 1 - z) and the relative frequency of cooperators (f = x / (x + y)). Mouse clicks set the initial
frequencies of strategies, stop the simulations or switch to backward integration.
Structure - Snapshot of the spatial distribution of payoffs.
Mean Fitness Time evolution of average population payoff bounded by the minimum and maximum individual payoff.
Histogram - Snapshot of payoff distribution in population.
Game parameters
The list below describes only the parameters related to the public goods game and the population dynamics. Follow the link for a complete list and descriptions of all other parameters e.g. referring
to update mechanisms of players and the population.
multiplication factor r of public good.
cost of cooperation c (investment into common pool).
Lone cooperator's payoff:
payoff for a cooperator if no one else joins the public goods interaction.
Lone defector's payoff:
payoff for a defector if no one else joins the public goods interaction.
Base birthrate:
baseline reproductive rate of all individuals. The effective birthrate is affected by the individual's performance in the public goods game and additionally depends on the availability of empty space.
Death rate:
constant death rate of all individuals.
Init Coop, init defect, init empty:
initial densities of cooperators, defectors and empty space. If they do not add up to 100%, the values will be scaled accordingly.
Efficient modular exponentiation algorithms
March 28th, 2009 at 9:51 am
Earlier this week I’ve discussed efficient algorithms for exponentiation.
However, for real-life needs of number theoretic computations, just raising numbers to large exponents isn’t very useful, because extremely huge numbers start appearing very quickly [1], and these
don’t have much use. What’s much more useful is modular exponentiation, raising integers to high powers $$\pmod n$$ [2]
Luckily, we can reuse the efficient algorithms developed in the previous article, with very few modifications to perform modular exponentiation as well. This is possible because of some convenient
properties of modular arithmetic.
Modular multiplication
Given two numbers, a and b, their product modulo n is $$ab \pmod n$$. Consider the number x < n, such that $$x\equiv a\pmod n$$. Such a number always exists, and we usually call it the remainder of
dividing a by n. Similarly, there is a y < b, such that $$y\equiv b\pmod n$$. It follows from basic rules of modular arithmetic that $$xy\equiv ab\pmod n$$ [3]
Therefore, if we want to know the product of a and b modulo n, we just have to keep their remainders when divided by n. Note: a and b may be arbitrarily large, but x and y are always smaller than n.
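This identity is easy to check directly (the numbers below are arbitrary examples):

```python
a, b, n = 123456789, 987654321, 1000003

full = (a * b) % n                   # multiply first, reduce once
reduced = ((a % n) * (b % n)) % n    # reduce both operands first, then reduce

assert full == reduced
```

The second form never needs to hold a number much larger than n squared, which is what keeps the exponentiation algorithms fast.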
A naive algorithm
What is the most naive way you can think of for computing $$a^{b} \pmod n$$? Raise a to the power b, and then reduce modulo n. Right?
Indeed, this is a very unsophisticated and slow method, because raising a to the power b can result in a really huge number that takes long to compute.
For any useful number, this algorithm is so slow that I’m not even going to run it in the tests.
Using the properties of modular multiplication
As we’ve learned above, modular multiplication allows us to just keep the intermediate result $$\pmod n$$ at each step. So we don’t have to ever hold numbers larger than the modulo. Here’s the
implementation of a simple repeated multiplication algorithm for computing modular exponents this way:
def modexp_mul(a, b, n):
    r = 1
    for i in xrange(b):
        r = r * a % n
    return r
It’s much better than the naive algorithm, but as we saw in the previous article it’s quite slow, requiring b multiplications (and reductions modulo n).
We can apply the same modular reduction rule to the more efficient exponentiation algorithms we’ve studied before.
Modular exponentiation by squaring
Here’s the right-to-left method with modular reductions at each step.
def modexp_rl(a, b, n):
    r = 1
    while 1:
        if b % 2 == 1:
            r = r * a % n
        b /= 2
        if b == 0:
            break
        a = a * a % n
    return r
We use exactly the same algorithm, but reduce every multiplication $$\pmod n$$. So the numbers we deal with here are never very large.
Similarly, here’s the left-to-right method:
def modexp_lr(a, b, n):
    r = 1
    for bit in reversed(_bits_of_n(b)):
        r = r * r % n
        if bit == 1:
            r = r * a % n
    return r
With _bits_of_n being, as before:
def _bits_of_n(n):
    """ Return the list of the bits in the binary
        representation of n, from LSB to MSB
    """
    bits = []
    while n:
        bits.append(n % 2)
        n /= 2
    return bits
Relative performance
As I’ve noted in the previous article, the RL method does a worse job of keeping its multiplicands low than the LR method. And indeed, for smaller n, RL is somewhat faster than LR. For larger n, RL
is somewhat slower.
What’s obvious is that now the built-in pow is superior to both hand-coded methods [4]. My tests show it’s anywhere from twice to 10 times as fast.
Why is pow so much faster? Is it only the efficiency of C versus Python? Not really. In fact, pow uses an even more sophisticated algorithm for large exponents [5]. Indeed, for small exponents the
runtime of pow is similar to the runtime of the implementations I presented above.
The k-ary LR method
It turns out that the LR method of repeated squaring can be generalized. Instead of breaking the exponent into bits of its base-2 representation, we can break it into larger pieces, and save some
computations this way.
I’ll present the k-ary LR method that breaks the exponent into its "digits" in base $$m=2^k$$ for some integer k. The exponent can be written as:
$$b=t_{i}m^{i}+t_{i-1}m^{i-1}+\cdots+t_{0}m^{0}$$
Where $$t_i$$ are the digits of b in base m. $$a^b$$ is then:
$$a^{t_{i}m^{i}}\cdot a^{t_{i-1}m^{i-1}}\cdots a^{t_{0}}$$
We compute this iteratively as follows [6]:
Raise $$a^{t_i}$$ to the m-th power and multiply by $$a^{t_{i-1}}$$. We get $$r_1 = a^{t_{i}m+t_{i-1}}$$. Next, raise $$r_1$$ to the m-th power and multiply by $$a^{t_{i-2}}$$, obtaining $$r_2 = a^{t_{i}m^{2}+t_{i-1}m+t_{i-2}}$$. If we continue with this, we’ll eventually get $$a^b$$.
This translates into the following code:
def modexp_lr_k_ary(a, b, n, k=5):
    """ Compute a ** b (mod n)
        K-ary LR method, with a customizable 'k'.
    """
    base = 2 << (k - 1)
    # Precompute the table of exponents
    table = [1] * base
    for i in xrange(1, base):
        table[i] = table[i - 1] * a % n
    # Just like the binary LR method, just with a
    # different base
    r = 1
    for digit in reversed(_digits_of_n(b, base)):
        for i in xrange(k):
            r = r * r % n
        if digit:
            r = r * table[digit] % n
    return r
Note that we save some time by pre-computing the powers of a for exponents that can be digits in base m. Also, the _digits_of_n is the following generalization of _bits_of_n:
def _digits_of_n(n, b):
    """ Return the list of the digits in the base 'b'
        representation of n, from LSB to MSB
    """
    digits = []
    while n:
        digits.append(n % b)
        n /= b
    return digits
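To get a feel for the decomposition (my own example, using an arbitrary exponent), here are the base-32 digits that _digits_of_n would produce for b = 1000003, computed with the same loop in Python 3 syntax:

```python
b, base = 1000003, 32          # base = 2**k with k = 5

digits = []
n = b
while n:
    digits.append(n % base)    # digits from LSB to MSB
    n //= base

print(list(reversed(digits)))  # → [30, 16, 18, 3] (MSB first)

# Reassembling the digits recovers b exactly.
assert sum(d * base ** i for i, d in enumerate(digits)) == b
```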
Performance of the k-ary method
In my tests, the k-ary LR method with k = 5 is about 25% faster than the binary LR method, and is within 20% of the built-in pow function.
Experimenting with the value of k affects these results, but 5 seems to be a good value that produces the best performance in most cases. This is probably why it’s also used as the value of k in the
implementation of pow.
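Before looking at the built-in, here is a quick self-contained correctness check of the k-ary LR method against Python's three-argument pow (my sketch, Python 3 syntax):

```python
import random

def modexp_lr_k_ary(a, b, n, k=5):
    base = 2 << (k - 1)               # base = 2**k
    table = [1] * base                # table[i] holds a**i % n
    for i in range(1, base):
        table[i] = table[i - 1] * a % n
    digits = []
    while b:                          # digits of b in base 2**k, LSB first
        digits.append(b % base)
        b //= base
    r = 1
    for digit in reversed(digits):    # consume digits MSB first
        for _ in range(k):
            r = r * r % n             # r = r**(2**k) % n via k squarings
        if digit:
            r = r * table[digit] % n
    return r

for _ in range(100):
    a = random.randrange(2, 10 ** 9)
    b = random.randrange(1, 10 ** 9)
    n = random.randrange(2, 10 ** 9)
    assert modexp_lr_k_ary(a, b, n) == pow(a, b, n)
```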
Python’s built-in pow
I’ve mentioned Python’s pow function several times in this article. The Python version I’m talking about is 2.5, though I doubt this functionality has changed in 2.6 or 3.0. The pow I’m interested in
is implemented in the long_pow function in objects/longobject.c in the Python source code distribution. As mentioned in [5], it uses the binary LR method for small exponents, and the k-ary LR method
for large exponents.
These implementations follow closely algorithms 14.79 and 14.82 in the excellent Handbook of Applied Cryptography, which is freely available online.
As we’ve seen, exponentiation and modular exponentiation are among those applications in which an efficient algorithm is required for feasibility. Using the trivial/naive algorithms is possible only for small cases which aren’t very interesting. To process realistically large numbers (such as the ones required for cryptographic algorithms), one needs powerful methods in one's toolbox.
[1] For instance, $$3^{10000}$$ is a 4772-digit number.
[2] Modular exponentiation is essential for the RSA algorithm, for example.
[3] To be a bit more rigorous, we start with $$x\equiv a\pmod n$$ and $$y\equiv b\pmod n$$. The first means that $$n|(a-x)$$, so also $$n|(ab-xb)$$. Similarly $$n|(b-y)$$, so also $$n|(bx-yx)$$. Adding these two we get $$n|(ab-yx)$$, which means that $$xy\equiv ab\pmod n$$.
[4] Using the 3-argument form of pow, you can perform modular exponentiation.
[5] FIVEARY_CUTOFF in the code of pow is set to 8. This means that for exponents with more than 8 digits, a special 5-ary algorithm is used. For smaller exponents, the regular LR binary method is
used – just like the one I presented, just coded in C.
[6] Note that for m = 2 this is the familiar binary LR method.
Slovin's formula by Ettevymyr
Slovin's Formula Sampling Techniques
* By Steph Ellen, eHow Contributor
* When it is not possible to study an entire population (such as the population of the United States), a smaller sample is taken using a random sampling technique. Slovin's formula allows a
researcher to sample the population with a desired degree of accuracy. It gives the researcher an idea of how large his sample size needs to be to ensure a reasonable accuracy of results. * When to
Use Slovin's Formula
* If a sample is taken from a population, a formula must be used to take into account confidence levels and margins of error. When taking statistical samples, sometimes a lot is known about a
population, sometimes a little and sometimes nothing at all. For example, we may know that a population is normally distributed (e.g., for heights, weights or IQs), we may know that there is a
bimodal distribution (as often happens with class grades in mathematics classes) or we may have no idea about how a population is going to behave (such as polling college students to get their
opinions about quality of student life). Slovin's formula is used when nothing about the behavior of a population is known at all. * How to Use Slovin's Formula
* -------------------------------------------------
Slovin's formula is written as:
n = N / (1 + Ne^2)
n = Number of samples
N = Total population
e = Error tolerance
To use the formula, first figure out what you want your error of tolerance to be. For example, you may be happy with a confidence level of 95 percent (giving a margin error of 0.05), or you may
require a tighter accuracy of a 98 percent confidence level (a margin of error of 0.02). Plug your population size and required margin of error into the formula. The result will be the number of
samples you need to take.
For example, suppose that you have a group of 1,000 city government employees and you want to survey them to find out which tools are best suited to...
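The computation described above is a one-liner in code. Here is a sketch in Python; the rounding-up step is my addition, since a sample size must be a whole number at least as large as the formula's value:

```python
import math

def slovin(N, e):
    # Slovin's formula: n = N / (1 + N * e**2), rounded up.
    return math.ceil(N / (1 + N * e ** 2))

# 1,000 employees surveyed at a 0.05 margin of error
# (95 percent confidence, as in the earlier example):
print(slovin(1000, 0.05))  # → 286
```

At a tighter 0.02 margin of error the required sample grows to 715, showing how quickly precision demands drive up sample size.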
On Differential Subordinations of Multivalent Functions Involving a Certain Fractional Derivative Operator
International Journal of Mathematics and Mathematical Sciences
Volume 2010 (2010), Article ID 952036, 10 pages
Research Article
On Differential Subordinations of Multivalent Functions Involving a Certain Fractional Derivative Operator
Department of Mathematics Education, Daegu National University of Education, 1797-6 Daemyong 2 dong, Namgu, Daegu 705-715, South Korea
Received 18 December 2009; Accepted 28 February 2010
Academic Editor: Mohamed Kamal Aouf
Copyright © 2010 Jae Ho Choi. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium,
provided the original work is properly cited.
We investigate several results concerning the differential subordination of analytic and multivalent functions which are defined by using a certain fractional derivative operator. Some special cases are also considered.
1. Introduction and Definitions
Let denote the class of functions of the form
which are analytic in the open unit disk Also let denote the class of all analytic functions with which are defined on . If and are analytic in with , then we say that is said to be subordinate to in
, written or , if there exists the Schwarz function , analytic in such that , and In particular, if the function is univalent, then the above subordination is equivalent to and .
Let $a$, $b$, and $c$ be complex numbers with $c \neq 0, -1, -2, \ldots$. Then the Gaussian hypergeometric function is defined by
$${}_2F_1(a,b;c;z)=\sum_{n=0}^{\infty}\frac{(a)_n(b)_n}{(c)_n}\frac{z^n}{n!},$$
where $(\lambda)_n$ is the Pochhammer symbol defined, in terms of the Gamma function, by
$$(\lambda)_n=\frac{\Gamma(\lambda+n)}{\Gamma(\lambda)}=\begin{cases}1, & n=0,\\ \lambda(\lambda+1)\cdots(\lambda+n-1), & n\in\mathbb{N}.\end{cases}$$
The hypergeometric function is analytic in $|z|<1$, and if $a$ or $b$ is a negative integer, then it reduces to a polynomial.
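As a quick numeric illustration (mine, not part of the paper): truncating the Gauss hypergeometric series, whose n-th term is $(a)_n(b)_n/((c)_n\, n!)\, z^n$, reproduces the known closed form ${}_2F_1(1,1;2;z) = -\ln(1-z)/z$:

```python
import math

def hyp2f1(a, b, c, z, terms=60):
    # Partial sum of the Gauss hypergeometric series; term_{n+1} is
    # obtained from term_n via the ratio (a+n)(b+n) / ((c+n)(1+n)) * z.
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= (a + n) * (b + n) / ((c + n) * (1 + n)) * z
    return total

z = 0.5
assert abs(hyp2f1(1, 1, 2, z) - (-math.log(1 - z) / z)) < 1e-12
```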
There are a number of definitions for fractional calculus operators in the literature (cf., e.g., [1, 2]). We use here the Saigo-type fractional derivative operator defined as follows (see [3]; see
also [4]).
Definition 1.1. Let and . Then the generalized fractional derivative operator of a function is defined by The function is an analytic function in a simply-connected region of the -plane containing
the origin, with the order for and the multiplicity of is removed by requiring that be real when .
Definition 1.2. Under the hypotheses of Definition 1.1, the fractional derivative operator of a function is defined by
With the aid of the above definitions, we define a modification of the fractional derivative operator by
for and . Then it is observed that also maps onto itself as follows:
It is easily verified from (1.8) that
Note that , and , where is the fractional derivative operator defined by Srivastava and Aouf [5, 6].
In this manuscript, we will use the method of differential subordination to derive certain properties of multivalent functions defined by fractional derivative operator .
2. Main Results
In order to establish our results, we require the following lemma due to Miller and Mocanu [7].
Lemma 2.1. Let be univalent in and let and be analytic in a domain containing with when . Set , and suppose that(1) is starlike (univalent) in ,(2).If is analytic in , with , , and then and is the
best dominant.
We begin by proving the following
Theorem 2.2. Let and , and let , , , and . Suppose that is univalent in and satisfies If and then and is the best dominant.
Proof. Define the function by Then is analytic in with . A simple computation using (2.5) gives By applying the identity (1.9) in (2.6), we obtain Making use of (2.5) and (2.7), we have In view of (
2.8), the subordination (2.3) becomes and this can be written as (2.1), where Since , we find from (2.10) that and are analytic in with . Let the functions and be defined by Then, by virtue of (2.2),
we see that is starlike and Hence, by using Lemma 2.1, we conclude that , which completes the proof of Theorem 2.2.
Remark 2.3. If we put in Theorem 2.2, then we get new subordination result for the fractional derivative operator due to Srivastava and Aouf [5, 6].
Theorem 2.4. Let and , and let , , , and . Suppose that is univalent in and satisfies If and then and is the best dominant.
Proof. Define the function by Then is analytic in with . By a simple computation, we find from (2.16) that By using the identity (1.9) in (2.17), we obtain Applying (2.16) and (2.18), we have In view
of (2.19), the subordination (2.14) becomes and this can be written as (2.1), where Since , it follows from (2.21) that and are analytic in with . Let the functions and be defined by Then, by virtue
of (2.13), we see that is starlike and Hence, by using Lemma 2.1, we conclude that , which proves Theorem 2.4.
If we put in Theorem 2.4, then we have the following.
Corollary 2.5. Let and , and let . Suppose that is univalent in and satisfies If and then and is the best dominant.
By putting in Corollary 2.5, we obtain the following.
Corollary 2.6. Let and , and let . Suppose that is univalent in and satisfies If and then and is the best dominant.
By using Lemma 2.1, we obtain the following.
Theorem 2.7. Let and , and let , , , and . Suppose that is univalent in and satisfies If and then and is the best dominant.
Proof. Define the function by Then is analytic in with . A simple computation using (1.9) and (2.31) gives By using (2.29), (2.31), and (2.32), we get And this can be written as (2.1) when and . Note
that and and are analytic in . Let the functions and be defined by Then, by virtue of (2.28), we see that is starlike and Hence, by applying Lemma 2.1, we observe that , which evidently proves
Theorem 2.7.
Finally, we prove
Theorem 2.8. Let and , and let , , , and . Suppose that be univalent in and satisfies If and then and is the best dominant.
Proof. If we define the function by then is analytic in with . Hence, by using the same techniques as detailed in the proof of Theorem 2.2, we obtain the desired result.
By taking in Theorem 2.8 and after a suitable change in the parameters, we have the following.
Corollary 2.9. Let and . Suppose that is univalent in and satisfies If and then and is the best dominant.
This work was supported by Daegu National University of Education Research Grant in 2008.
1. H. M. Srivastava and R. G. Buschman, Theory and Applications of Convolution Integral Equations, vol. 79 of Mathematics and Its Applications, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1992.
2. S. G. Samko, A. A. Kilbas, and O. I. Marichev, Fractional Integrals and Derivatives, Theory and Applications, Gordon and Breach, New York, NY, USA, 1993.
3. R. K. Raina and H. M. Srivastava, “A certain subclass of analytic functions associated with operators of fractional calculus,” Computers & Mathematics with Applications, vol. 32, no. 7, pp. 13–19, 1996.
4. R. K. Raina and J. H. Choi, “On a subclass of analytic and multivalent functions associated with a certain fractional calculus operator,” Indian Journal of Pure and Applied Mathematics, vol. 33, no. 1, pp. 55–62, 2002.
5. H. M. Srivastava and M. K. Aouf, “A certain fractional derivative operator and its applications to a new class of analytic and multivalent functions with negative coefficients. I,” Journal of Mathematical Analysis and Applications, vol. 171, no. 1, pp. 1–13, 1992.
6. H. M. Srivastava and M. K. Aouf, “A certain fractional derivative operator and its applications to a new class of analytic and multivalent functions with negative coefficients. II,” Journal of Mathematical Analysis and Applications, vol. 192, no. 3, pp. 673–688, 1995.
7. S. S. Miller and P. T. Mocanu, Differential Subordinations. Theory and Application, vol. 225 of Monographs and Textbooks in Pure and Applied Mathematics, Marcel Dekker, New York, NY, USA, 2000.
MathFiction: The Shiloh Project (David R. Beaucage)
a list compiled by Alex Kasman (College of Charleston)
The Shiloh Project (1993)
David R. Beaucage
This is a Christian science fiction novel with mathematical undertones written by an author with a doctorate in mathematics. In it, a Jewish math teacher falsely accused of sexually abusing a student
travels through time and converses with biblical figures finding, among other things, support for the Christian faith.
One of the reviews of the book claims "He's written about faith without being preachy, love without being mushy, and math without being DULL." I guess each person is entitled to their own opinion. As
I attempted to read the book, I found quite the opposite to be true. The book appears to be very amateurishly written, with the science fiction and romance elements being insipid and cliched. It may
be difficult for anyone who does not share the author's religious views to read the book as it was quite `preachy'. Finally, as far as math goes, I think that what he writes is going to be
meaningless to someone who doesn't already know a lot of math (what is a mathematically naive person to make of a brief description of multivalued logarithms from path integrals in the complex
plane?) but not sufficiently original or eloquent to interest someone with real mathematical training.
Apparently the book was intended to be the first in a series called `Mathematicians in Love', but I have been unable to find a sequel.
Thanks to Vijay Fafat for bringing this book to my attention.
Works Similar to The Shiloh Project
According to my `secret formula', the following works of mathematical fiction are similar to this one:
1. Mister God, This is Anna by Fynn
2. Oracle by Greg Egan
3. The Elusive Chauffeur by David H. Brown
4. False Witness by Randy D. Singer
5. Conceiving Ada by Lynn Hershman-Leeson
6. The Difference Engine by William Gibson / Bruce Sterling
7. The Bones of Time by Kathleen Ann Goonan
8. The Loom of God: Mathematical Tapestries at the Edge of Time by Clifford Pickover
9. Summer Solstice by Charles Leonard Harness
10. Eifelheim by Michael Flynn
Ratings for The Shiloh Project:
Content: 3/5 (2 votes)
Literary Quality: 2/5 (2 votes)
Genre: Historical Fiction, Science Fiction
Motif: Time Travel, Math Education, Religion
Topic: Analysis/Calculus/Differential
Medium: Novels
(Maintained by Alex Kasman, College of Charleston)
SAS-L archives -- August 1996, week 2 (#143), LISTSERV at the University of Georgia
Date: Mon, 12 Aug 1996 14:55:25 -0400
Reply-To: "Dr. Steven P Ellis" <ellis@NEURON.CPMC.COLUMBIA.EDU>
Sender: "SAS(r) Discussion" <SAS-L@UGA.CC.UGA.EDU>
From: "Dr. Steven P Ellis" <ellis@NEURON.CPMC.COLUMBIA.EDU>
Organization: Research Foundation for Mental Hygiene
Subject: proc mixed w/ nested random factors
I'd like to do a variance components analysis using proc mixed and a
model in which the fixed part contains 2 fixed factors, a few
covariates, and assorted interactions. In addition, there are 2 random
factors. The first random factor is subjects (repeated measures). This
is nested in the second random factor, blocks. Different blocks are
independent. Different subjects are conditionally independent given
blocks. I may also want to include an interaction of block and subject.
I've discussed this with colleagues more learned than I concerning these
matters. They have some ideas but none of them are sure of how to
If someone out there knows how to handle this problem, I'd like to hear
from him/her.
Thanks in advance.
-- Steve Ellis
work is force acting over time true or false?
Not exactly; I'd prefer to give a negative answer.
can u explain please
Work is the product of force and the distance over which the force acts. The unit of work is the newton-meter.
Work is said to be done when the point of application of the force MOVES. (The point may also fail to move, e.g. when two equal and opposite forces act at one point; torque is another case; leave those aside for now.) That is, W = FS, where S is the distance moved by the point of application of the force F (= ma).
Work = force x distance moved by the object: W = F.S, or W = F S cos(theta). So there is no dependence of work on time, which means that the statement is false.
thanks alot everybody i have a better understanding now :)
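To make the formula concrete, here is a quick numeric check of W = F S cos(theta); the numbers are my own, purely illustrative:

```python
import math

F = 10.0                  # force in newtons (illustrative value)
S = 2.0                   # displacement in metres (illustrative value)
theta = math.radians(60)  # angle between force and displacement

W = F * S * math.cos(theta)
print(W)  # → about 10.0 joules; note that time appears nowhere
```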
169 cm in inches
You asked:
169 cm in inches
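The conversion itself is simple arithmetic (1 inch is defined as exactly 2.54 cm); a one-line sketch:

```python
cm = 169
inches = cm / 2.54        # 1 inch = 2.54 cm exactly
print(round(inches, 2))   # → 66.54
```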
Second Derivatives and Beyond
Extreme points, also called extrema, are places where a function takes on an extreme value—that is, a value that is especially small or especially large in comparison to other nearby values of the
function. Extrema look like the tops of hills and the bottoms of valleys. Time to go hiking.
There are two types of extreme points, minima (the valleys) and maxima (the hills).
Extreme points can be local or global, but we'll talk about this later.
We need to define minimum and maximum values without the "on an interval" bit.
A minimum value of a function is a y-value of the function that is as low, or lower, than other values of the function nearby. A minimum looks like a valley:
The plural of minimum is minima.
Sample Problem
The minimum value of the function f (x) = x^2 + 1 is y = 1:
Sample Problem
The minimum value of the function f (x) = cos x is y = -1:
A function may have multiple minima.
Sample Problem
The function graphed below has two minima: y = 0 and y = 1.
A function may have infinitely many minima.
Sample Problem
The function graphed below has infinitely many minima:
A function may have no minima at all.
Sample Problem
The function f (x) = -x^2 has no minima, because for every value of the function there are smaller values nearby:
Be Careful: There is a difference between a minimum of a function (a y-value) and where that minimum occurs (an x-value).
Sample Problem
The minimum value of the function f (x) = x^2 + 1 is y = 1, and this minimum occurs at x = 0:
Sample Problem
The function f (x) = cos x has only one minimum value, y = -1. However, this minimum value occurs at infinitely many places, as it occurs at x = π + 2nπ for every integer n:
A maximum value of a function is a y-value of the function that is as high, or higher, than other values of the function nearby. A maximum looks like a hill, and the plural of maximum is maxima.
A function may have multiple maxima.
Sample Problem
The function graphed below has two maxima: y = 2 and y = 3.
A function may have infinitely many maxima.
Sample Problem
The function graphed below has infinitely many maxima:
A function may have no maxima at all.
Sample Problem
The function f (x) = x^2 has no maxima, because for every value of the function there are larger values nearby:
Be Careful: There is a difference between a maximum of a function (a y-value) and where that maximum occurs (an x-value).
Sample Problem
The maximum value of the function f (x) = -x^2 – 1 is y = -1, and this maximum occurs at x = 0:
Sample Problem
The function f (x) = cos x has only one maximum value, y = 1. However, this maximum value occurs at infinitely many places, as it occurs at x = 2nπ for every integer n:
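The distinction between an extreme value (a y-value) and where it occurs (an x-value) can also be seen numerically; this sketch (mine, not part of the lesson) grid-searches f(x) = x^2 + 1:

```python
def f(x):
    return x ** 2 + 1

xs = [i / 100 for i in range(-300, 301)]   # grid on [-3, 3]

min_value = min(f(x) for x in xs)          # the minimum (a y-value)
where = min(xs, key=f)                     # where it occurs (an x-value)

print(min_value, where)  # → 1.0 0.0
```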
[Tutor] How to calculate pi with another formula?
Barry Sperling barry at angleinc.com
Fri Oct 29 21:37:16 CEST 2004
Thanks, Gregor!
As a newby I didn't know about such improvements, so I did what you
suggested and added decimal.py to my lib directory in 2.3 and it worked.
Below is the code for the iterative answer ( different from the
recursive one that Bill Mill gave ) that I suggested to Dick, originally
as a text hint, but now using the decimal module:
import decimal
decimal.getcontext().prec = 40 # ARBITRARY UPPER LIMIT TO PRECISION
numer = decimal.Decimal(100) # ARBITRARY UPPER LIMIT TO NUMERATOR
subtotal = decimal.Decimal(1) # INITIALIZE
Last_Numerator = decimal.Decimal(1) # COUNTING DOWN WE'LL STOP AT 1
Subtraction_Amount = decimal.Decimal(1) # COUNTING DOWN BY 1s
Pi_Correction = decimal.Decimal(2) # SINCE THE PRIOR CALC GIVES PI/2
while numer >= Last_Numerator:
    denom = 2 * numer + 1 # THE FORMULA DERIVING EACH DENOM FROM EACH NUMER
    subtotal = 1 + numer / denom * subtotal
    numer -= Subtraction_Amount # WORKING FROM INSIDE-OUT
print Pi_Correction * subtotal
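For reference, here is the same inside-out iteration in Python 3 syntax (my adaptation of the message's Python 2 code: print is a function and the countdown is a range loop); the nested series it evaluates is pi/2 = 1 + 1/3(1 + 2/5(1 + 3/7(...))):

```python
from decimal import Decimal, getcontext

getcontext().prec = 40                 # arbitrary upper limit to precision
subtotal = Decimal(1)
for numer in range(100, 0, -1):        # numerators 100 down to 1, inside-out
    denom = 2 * numer + 1              # each denominator from its numerator
    subtotal = 1 + Decimal(numer) / denom * subtotal
pi_approx = 2 * subtotal               # the nesting computes pi/2
print(pi_approx)
```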
Gregor Lingl wrote:
> Hi Dick!
> Accidentally I just was tinkering around with the new
> decimal module of Python2.4. (By the way: it also works
> with Python 2.3 - just copy it into /Python23/Lib)
> Dick Moores schrieb:
>> Is it possible to calculate almost-pi/2 using the (24) formula on
>> <http://mathworld.wolfram.com/PiFormulas.html> without using (23)?
>> Dick Moores
>> rdm at rcblue.com
More information about the Tutor mailing list
Mathematics 110, Section 003 - 2011-2012
"Differential Calculus"
Section Details
Lecturer: Thomas Wong.
Email: twong (at) math (dot) ubc (dot) ca
Webpage: www.math.ubc.ca/~twong/
Office: Mathematics Building Room 201
Class time: Mon/Wed/Fri: 09:00 - 10:00.
Location: Chemistry Room C124.
Office Hours:
□ Wednesday 10:00 - 12:00. Rm: MATH201
□ Thursday 11:00 - 13:00. Rm: MATH201
□ By Appointment.
Workshop Times:
□ W1J: Tue 12:30 - 14:00. Rm: MATH103
□ W1L: Thu 09:30 - 11:00. Rm: MATH102
□ W1M: Thu 09:30 - 11:00. Rm: MATH225
Additional Resources: Here
• Apr. 13th
□ Sample MathXL problems updated.
□ Review/Problem sessions on:
☆ Thur Apr 19th 14:00 - 16:00. Rm: Chem C124
☆ Mon Apr 23rd 15:00 - 17:00. Rm: Chem C124
□ Details for Final exam:
☆ Tue Apr 24th 15:30 - 18:00. Rm: Math 100
☆ Remember: To bring ID and leave your valuables at home.
• Apr. 3rd
□ Term 1 Learning Objectives updated (See Learning Objectives).
□ Previous exam papers posted (See exam section).
□ MathXl is due on Thursday.
□ Office hours as normal this week.
□ Review Sessions times currently in the works and will be posted when I know for certain.
• Apr. 1st
□ Reminder that MathXL is due last day of class
□ Quiz 9 and solution posted.
□ Skills test in class tomorrow.
□ No workshops this week.
□ Learning Objectives updated.
□ Workshops updated.
• Mar. 24th
□ MathXL posted.(Due last day of class)
□ Assignment 9 solution posted.
□ Last quiz this Friday on Optimisation.
□ Skills test on the last Monday of term.
□ No workshops in the last week.
• Mar. 16th
□ MathXL posted.
□ Assignment 9 posted.
□ Quiz and solution will be posted after class.
• Mar. 12th
□ MathXL/ Skills Work posted.
□ Assignment solution posted.
□ Learning Objectives posted for Week 2.9.
□ Reminder about Quiz this Friday.
• Mar. 3rd
□ Skills Work posted.
□ Learning Objectives posted for Week 2.7 and 2.8.
• Feb. 29th
□ Assignment 8 posted.
□ Happy Leap Day.
• Feb. 26th
□ MathXL HW now online
□ Workshops start this week.
□ No Assignment/Quiz this week.
Course Outline and Objectives
MATH 110 is a two-term, six-credit course in differential calculus. It covers the same calculus content as the one-term courses MATH 100, 102, 104, 180, and 184, but with additional material designed
to strengthen understanding of essential pre-calculus topics. There is also an increased emphasis on mathematical proofs and problem solving.
• The course outline can be found here.
• For more information about the course, please visit the UBC mathematics department website here.
• Learning Objectives and additional practice problems posted on this website .
• Calculus, Early Transcendentals by William Briggs and Lyle Cochran;
• Just-In-Time, Algebra and Trigonometry for Calculus by Guntram Mueller and Ronald I.Brent
Assessment and Grading Scheme
• The final grades will be calculated as follows
□ 5% = Skills test (September 23rd)
□ 15% = Assignments/In class quizzes/MathXL (5% each)
□ 15% = Workshops
□ 20% = Midterms (October 19th and February 8th)
□ 20% = December Exam
□ 25% = April Exam
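The weighting above is a straightforward weighted sum; a quick sketch with hypothetical component scores (not real data):

```python
# Hypothetical component scores (percentages), purely illustrative.
scores = {
    "skills_test": 80, "assignments": 80, "workshops": 80,
    "midterms": 80, "december_exam": 80, "april_exam": 80,
}
weights = {
    "skills_test": 0.05, "assignments": 0.15, "workshops": 0.15,
    "midterms": 0.20, "december_exam": 0.20, "april_exam": 0.25,
}

final = sum(scores[k] * weights[k] for k in weights)
print(round(final, 2))  # → 80.0 (weights sum to 100%)
```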
• There will be no make-up midterms or exams.
• Allowance for a missed Midterm Test may be granted in the following two circumstances:
□ (a) prior notice of a valid, documented absence (e.g. out-of-town varsity athletic commitment) on the scheduled date; or
□ (b) notification to the instructor within 72 hours of absence due to medical condition.
Otherwise the score is 0. For missed Midterm Tests, original written documentation (for example a doctor's note or letter from a coach) is required.
Academic Integrity
• Please be aware of the guideline for acceptable and non-acceptable conduct for graded assessments.
• All submitted work with your name on it should be your own work, written in your own words, regardless of whether it was done individually or as a group.
• Copying someone else's solution, or copying solutions found through other sources such as the web, is considered a breach of academic integrity.
UBC takes academic integrity very seriously. Students found guilty of breaching these guidelines are usually given a final grade of 0 in the course, suspended from UBC for one year, and a notation is made on their Transcript of Academic Record.
Midterms and Final Exam
• The midterms will be held on Wednesday October 19 and Wednesday February 8 at 6pm. (Location will be announced closer to time.)
• Each Midterm will be 90 minutes in duration.
• The December and April exams will be during the final exam period. Please do not make any travel plans before knowing the dates.
• Each Exam will be 150 minutes in duration.
• Calculators, notes and books are not allowed in the midterms or the final exam.
│Practice │October Midterm │
│Practice │Practice December Exam │
│April Past Paper 1 │April Past Paper 2 │
• Written assignments are due on Friday of every second week (See course outline).
• You should also attempt other questions from the text as we cover each topic.
• Assignments and their solutions will be posted here afterwards.
• Written assignments will be held to standards of writing as well as to standards of mathematics. As an example, here are solutions to an assignment from a previous calculus class.
│Assignment 1│Solutions│ │
│Assignment 2│Solutions│ │
│Assignment 3│Solutions│ │
│Assignment 4│Solutions│ │
│Assignment 5│Solutions│ │
│Assignment 6│Solutions│ │
│Assignment 7│Solutions│Q1 should read "dW/dt = a W(t) - b W(t) D(t)". This has now been fixed. │
│Assignment 8│Solutions│ │
│Assignment 9│Solutions│Survey Link │
• A 10-15 minute in-class quiz will be held in weeks when there are no assignments due (see course outline).
• The quiz will consist of one or two questions very similar to those covered by MathXL and the workshop.
│Quiz 1 │Solutions │
│Quiz 2 │Solutions │
│Quiz 3 │Solutions │
│Quiz 4 │Solutions │
│Quiz 5 │Solutions │
│Quiz 6 │Solutions │
│Quiz 7 │Solutions │
│Quiz 8 │Solutions │
│Quiz 9 │Solutions │
MathXL Homework
• MathXL Homework will be assigned weekly.
• MathXL is a valuable tool for you to practice technical questions and receive immediate feedback.
• To access MathXL, visit mathxl.com
• You will need two pieces of information to register on the website
• Unless it specifies otherwise, please register using the same name as on the UBC SSC website. This will save a lot of hassle later in the term.
□ An access code, which can be found in the textbook package
□ A course code: for this section it is XL0P-M15H-201Z-01V2
• Note: Each access code is only valid for one course, please contact me if you have changed sections.
• Once you have signed up, be sure to complete the "Browser Check" to access all the additional features of the program.
Alternative Homework:
│Week 2.4│Text File │Excel File │
Skills Test
• On September 23, there will be an in-class skills test.
• It will be similar to, but shorter than, the Department's Basic Skills Test.
• This test is assessed on a pass/fail basis (5% for a pass, 0% for a fail).
• If you fail, there will be opportunities to make up the 5% during the year.
• The math department has some useful resources for the skills test. Additional practice material will be posted later this weekend.
• A set of practice problems can also be found Here. Note: Q6 should say 'Find the volume...'. This has been fixed on the pdf.
• More details about the results of the skills test can be found on this page.
• Attendance is compulsory. It makes up 15% of your grade.
• It is very different from lectures. There will be a list of problems every week that you will be working on in small groups.
• Each workshop will be led by a graduate and an undergraduate teaching assistant.
• They are introduced to give you a chance to practice what is covered during lectures as well as additional material that is not covered in lectures.
• The workshops are not a replacement for assignments and studying.
│Workshop 1.2 │Workshop 1.3 │Workshop 1.4 │Workshop 1.5 │
│Workshop 1.6 │Workshop 1.8 │Workshop 1.9 │Workshop 1.10│
│Workshop 1.11│Workshop 1.12│Workshop 1.13│ │
│Workshop 2.2 │Workshop 2.3 │Workshop 2.4 │Workshop 2.5 │
│Workshop 2.6 │Workshop 2.8 │Workshop 2.9 │Workshop 2.10│
│Workshop 2.11│ │ │ │
last updated January 8, 2001
INTRODUCTION: This is a slightly edited and updated version of the Final Report approved by INTAS, omitting some administrative details not of general relevance.
INTAS FINAL REPORT
1. TITLE: Algebraic K-theory, groups and categories
2. REF: INTAS93-436 ext
3. PROJECT COORDINATOR: Professor R. Brown
4. PERIOD COVERED: April, 1997 to February, 2000
2.1 Scientific Objectives
The origin of this project was the amalgamation in 1995 of two separate proposals for INTAS support in the areas of Algebraic K-theory from A. Bak at Bielefeld, and of Categorical Methods in
Algebraic Homotopy and related topics from R. Brown at Bangor, in the general context of Grothendieck's programme in Galois Theory, homotopical algebra, and multiple categories. The INTAS Scientific
Committee ruled that these proposals should be amalgamated. The accepted proposal was extended in 1997 and this is the report on the extension.
The agreed title of the joint proposal `Algebraic K-theory, groups and categories' indicates well the variety of interconnections and analogies which were envisaged. `Algebraic K-theory' is an area
which has been notable from the start for its interactions and the problems it has produced. `Groups' occur as algebraic groups, classical groups, homology groups, homotopy groups, Galois groups,
abstract groups, K-groups, and in many other ways. Further, the Bangor scientific programme has long investigated and developed higher dimensional analogues of groups, including crossed modules, cat^1-groups, crossed complexes, and various forms of multiple groupoids.
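Since crossed modules recur throughout this report, it may help to recall the standard definition (an editorial addition for orientation, not part of the original report): a crossed module consists of a group homomorphism from M to P together with an action of P on M satisfying

```latex
% Standard definition of a crossed module (editorial addition for orientation).
\partial\colon M \to P, \qquad
\partial({}^{p}m) = p\,\partial(m)\,p^{-1}
\qquad\text{and}\qquad
{}^{\partial(m)}m' = m\,m'\,m^{-1}
\qquad\text{for all } m, m' \in M,\ p \in P.
```

The second identity is the Peiffer identity; the inclusion of a normal subgroup N of P, with P acting by conjugation, is the motivating example.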
In all these areas categorical methods are vital, both for guiding the theory as well as achieving specific calculations. The interaction across methods and techniques has been fully vindicated.
2.2 Research Activities
In each of the following sections, the references and links are to the Annexe pages with the publications for each group.
The support of INTAS has been valuable for all the groups, both in terms of extended contact and in terms of support for mathematical activity in the NIS. In particular, some of the NIS-supported members would have found it difficult to continue in mathematical research without this support.
The amalgamation has been a notable success. In particular, members from the Bak proposal and the Brown proposal are collaborating in ways not originally envisaged.
As examples of this we mention:
(i) the appointment of T. Porter (Bangor) to the Editorial Board of the journal K-theory, of which Bak is Managing Editor, with the aim of extending publication into homotopical algebra and its applications,
(ii) a successful British Council/ARC supported collaboration between Bangor and Bielefeld, including a number of visits both ways and several workshops on `Global actions and algebraic homotopy', which invited members from the teams in the original proposals of both Bangor and Bielefeld, and had external participants,
(iii) a successful INTAS proposal `Algebraic homotopy, Galois theory and Descent' (Bangor, with Coimbra (Portugal) and the Georgian Academy of Sciences),
(iv) a successful DFG/RFBR proposal `Structure of classical-like groups over rings, nonabelian K-theory, and algebraic homotopy theory' (Bielefeld with St. Petersburg State University).
Bak's notion of `Global action' has been exploited by members of St. Petersburg State University, and has been developed in a broader context in collaboration with Bangor. `Higher dimensional algebra' as developed at Bangor has been exploited by workers at Tbilisi, who have themselves developed a new range of categorical techniques related to Galois theory and are incorporating into their work nonabelian aspects of the theory of global actions.
The joint Bangor/Bielefeld workshops have been notable for the range of discussion and the free exchange of ideas.
The group at Bangor working on mathematics related to the INTAS project consists of R. Brown (as Project Coordinator), T. Porter, C.D. Wensley, and research students I. Içen, M. Alp, A. Mutlu, Anne
Heyworth, M. A. Kadir, Emma Moore. Prof. L. A. Lambe (Rutgers and Stockholm) has also advised on symbolic computation, under other support, and he was appointed Honorary Professor at Bangor in 1996. In
the year 2000, M.V. Lawson (Bangor) has started a collaboration with A. Patchkoria (Tbilisi) on inverse monoids and simplicial methods, and so some of Lawson's work and the theses of his students
have been added to the publication report. His work is becoming more related to the overall programme because of the relations between inverse semigroups and ordered groupoids, and the influence of
homotopical methods in inverse semigroup theory.
As a result of this INTAS project, an extensive collaboration has been developed between Bangor and Bielefeld supported additionally from 1998 by a British Council/ARC Grant. This has led to a number
of visits in both directions for research discussions and workshops, and the development of joint research on `Global actions and groupoid atlases', in which a long paper is in preparation [5]. This
paper gives a broader basis and also a detailed expository account of this new area, which has applications to unstable higher K-theory and to combinatorial group theory, particularly identities
among relations.
Work at Bangor over many years has investigated the extension of the notion of abstract group from group to groupoid and to multiple groupoid, where the latter is viewed as a form of `higher
dimensional group'. This has led to new results, new calculations, new constructions and new viewpoints in algebraic topology, cohomology theory, group theory and differential topology. The programme
is related to Grothendieck's programme of nonabelian methods in homological algebra, and to the recent increasing use of n-categories, for example in theoretical physics and in computer science. A recent
aspect is the use of current tools of symbolic computation for examples and experimentation.
Considerable progress has been made in utilising the non linear methods of crossed modules and crossed complexes, notably in:
(i) recent work on applying crossed complexes to compute algebraically the module of identities among relations for group presentations [37,38] and for the fundamental groupoid of a graph of groups [
(ii) work of Brown and Wensley giving finiteness theorems and a range of determinations and computations of induced crossed modules (a construction of Brown and Higgins, 1978), with applications to
homotopy 2-types [21];
(iii) work of Brown, Golasi\'nski (Toru\'n), Porter, Tonks applying crossed complexes and homotopy coherence (Cordier and Porter [27]) to equivariant homotopy theory, obtaining results on function
spaces of equivariant maps out of reach of previous methods [9,10];
(iv) the application of crossed modules and double groupoids to second order holonomy, in Içen's thesis [14];
(v) a paper of Brown with Janelidze (Tbilisi), applying the latter's generalised Galois theory to a general Van Kampen theorem in lextensive categories [16], and further work on second order covering
maps of simplicial sets [17];
(vi) work in Ehler's thesis on simplicial groupoids (papers by Ehlers and Porter [28,29]);
(vii) work in Arvasi's thesis (Arvasi and Porter [3,4]) on higher Peiffer identities in commutative algebras, using Carrasco and Cegarra's notion of hypercrossed complex which generalised 1978 work of
Bangor student Ashley on a non Abelian Dold-Kan theorem;
(viii) Porter's paper on TQFTs [56], which uses simplicial groups to generalise to all dimensions low dimensional work of Yetter;
(ix) work of Brown and Porter applying crossed resolutions to recast in modern form work by Turing (1938) on non-abelian extensions and identities among relations, and apply it to computations (cf. [21]);
(x) work of A. Mutlu in his thesis on higher order Peiffer operations, resulting in the publications [50-55].
We should also mention that the notion of non abelian tensor product of groups found by Brown and Loday in 1987 continues to have a wide range of applications and extensions, and is applied by
N. Inassaridze of the Tbilisi group. A bibliography of 74 papers on this tensor product may be found at
http://www.bangor.ac.uk/~mas010/nonabtens.html, including papers by members of the Tbilisi group.
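For orientation (an editorial addition, not part of the original report): in the Brown-Loday construction, for groups G and H equipped with compatible actions on one another, the nonabelian tensor product is the group generated by symbols g tensor h subject to the relations

```latex
% Brown-Loday nonabelian tensor product relations (editorial addition).
gg' \otimes h = ({}^{g}g' \otimes {}^{g}h)\,(g \otimes h)
\qquad\text{and}\qquad
g \otimes hh' = (g \otimes h)\,({}^{h}g \otimes {}^{h}h'),
\qquad g, g' \in G,\ h, h' \in H.
```

When G = H acts on itself by conjugation, the kernel of the commutator map from G tensor G to G sending g tensor h to [g,h] is the third homotopy group of the suspension of the classifying space of G, which is the source of many of the topological applications.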
Closely related to the above is work of Porter on abstract homotopy theory, homotopy coherence, proper homotopy theory, and shape theory. A paper with J.-M. Cordier (Amiens) [27] (in the Trans. Amer.
Math. Soc.) on homotopy coherence has appeared, and has had substantial applications as mentioned above (e.g. (iii)). Another application is in the thesis of M.A. Kadir, which gives fundamental
coherence results for Cech and Vietoris complexes, and hypercoverings. Porter has written a substantial survey article on proper homotopy theory for the Handbook of Algebraic Topology, ed. I. M.
James, and a book with H. K. Kamps (Hagen) on `Abstract homotopy theory', (World Scientific) has been published. The work with Kamps, Kieboom, and Hardie continues with the development of double
groupoid methods in homotopy theory.
Wensley has worked extensively with Brown on the theory and calculation of induced crossed modules, obtaining new determinations, unobtainable by other methods, of homotopy 2-types of mapping cones
(published in 1995-6). With their joint student, M. Alp, he has produced a substantial GAP package for computation of crossed modules which has been accepted by the GAP Council as a share package [58
]. Current work is with research student Emma Moore on GAP code for normal forms in the fundamental groupoid of a graph of groups, and the construction of free crossed resolutions for this
fundamental groupoid in terms of the free crossed resolutions of the individual groups [21,59].
Other work at Bangor on symbolic computation has involved initially the package AXIOM, and collaborations with W. Dreckmann and Prof. L. Lambe. This led to substantial work of research student Anne
Heyworth on generalising rewriting theory to left Kan extensions [10] and to a variety of applications of this and of Gröbner bases [30-36], including the guidance system for a mechanical excavator [
26]. Most of this work is done in the Computer Algebra System GAP. One aim of this work is the computation of free crossed resolutions of a group from a presentation: a new algorithm for this is
presented in the paper with Razak Salleh [22], thus solving a problem going back to Reidemeister (1933) and this algorithm has been implemented [37,38]. A GAP package `IDREL' is in preparation by
Heyworth and Wensley and planned to be submitted in 2001.
An extension of the important notion of local equivalence relation has been obtained with the notion of local subgroupoid [12,13,14]. The main results use delicate previous work on holonomy and
monodromy groupoids. It is expected that this work will give a unification of ideas of holonomy from foliations and from bundle theory, which have previously been unrelated, despite the same name and
some related intuitions. The main point is that earlier work on holonomy based on ideas of J. Pradines allows for an algebraic expression of `iteration of local procedures'. This work is part of the
overall programme and relied on funds from the Bangor allocation to support Dr Içen's visit. Work on higher order holonomy and monodromy is underway with Içen.
The paper with Al-Agl and Steiner [1] solves a 10-year-old problem on the equivalence between two notions of (strict) multiple category, and so allows for useful descriptions of tensor products and internal homs for globular ω-categories. This work has recently been applied in concurrency theory in computer science. For a general survey of work on `Higher dimensional group theory', see the web page.
Professors Mikhalev and Artamonov visited Bangor for one week in February, 2000, and this visit revealed a number of possibilities for future work, particularly in the fields of Gröbner bases, and of
identities among relations.
This INTAS programme also led to an INTAS grant, `Descent Theory and its Higher Dimensional Analogues', involving Bangor, Coimbra and Tbilisi, coordinated by T. Porter from Bangor, which develops ideas of descent, Galois theory, and homotopical algebra.
The main interaction of Bangor is with Tbilisi and Bielefeld.
Joint papers of Brown and Janelidze on coverings and Van Kampen theorems and on extensions of Galois theory have been published (the latest in 1999). This continues the original submission on the
extension of Grothendieck's programme in Galois theory.
The extension of the project allowed for the development and planning of increased and important interactions between these overall areas of research, with regard to the application of categorical
and computational methods in K-theory, and in particular the more general application of methods of the theme of global actions developed by Bak at Bielefeld. This INTAS Project has led to a
successful Bangor/ Bielefeld collaboration supported by the British Council/ARC, with visits to Bielefeld in December, 1997, June 1998, and planned in April, 1999 and later, and from Bielefeld to
Bangor in December 1997, and January 1999. A series of joint papers has been planned, leading from global actions to a new concept of groupoid atlas which seems more suited to homotopy questions.
The Bielefeld group has cooperated closely with members of the groups at St. Petersburg State University, Steklov Institute, Moscow State University, and Bangor University and is beginning
cooperation with the Mathematical Institute of the Georgian Academy of Sciences. Members from all of the universities and institutes in the INTAS project have made research visits to Bielefeld and 5
members (Izhboldin, Merkurjev, Nenashev, Panin, and Vavilov) have been or are currently Humboldt Fellows in Bielefeld. The cooperation with St. Petersburg State University has produced 3 joint
articles [5], [6], and [7] and others are being written. There is one joint article with Bangor [8] and others are being written as well. Furthermore, work at Moscow State University overlaps with
joint work between Bielefeld University and St. Petersburg State University. A. Nenashev from the Steklov group has been for over a year in Bielefeld and has cooperated with A. Bak and members of the
Bangor group who made several short research visits to Bielefeld during the past year. Members of the Bielefeld and Bangor group are cooperating in a joint British-German research project centered
around global actions.
The principal activities in the Bielefeld group have been centered around global actions [3], [4], [9], [10], [17], [18], dimension theory [1], [7], [22], the structure of classical-like groups [5],
[7], [11], [12], [14], [15], [22], Hermitian K-theory [11], [12], [29], and K_1 and K_2 of exact categories [23] - [28].
The joint papers [5], [6] and [7] with the St. Petersburg State University group have been commented on already in the report on that group. The results in [7] overlap with those in [23] (of the
Moscow State University references). Nenashev's articles are discussed in the report on the Steklov Institute group.
Global actions are the algebraic counterpart of topological spaces. Putting a global action structure on an algebraic object such as a group allows one to construct paths in the objects and to
develop in a classical way a homotopy theory of the objects. The papers [3], [4],[8], and [9] develop the foundations of the subject and [10] provides a completely algebraic construction of algebraic
K-theory using global actions. The papers [17] and [18] give a model categorical account in the tradition of Quillen and Baues of the homotopy theory of global actions and simplicial complexes.
The papers [7], [15] and [22] develop a notion of dimension in categories and apply it to determining the structure of group valued functors on categories with dimension. The papers [7] and [22] have
their focus on the general linear group and the paper [15] on the general quadratic group.
The paper [29] provides foundations for the K-theory of not necessarily even Hermitian forms. The articles [11] and [12] establish basic results for this theory.
2.2.C Moscow State University
Several members of the Moscow State University group have made research visits to Bielefeld. Below is a summary of their research output.
A. V. Mikhalev and coworkers obtained a full solution [1], [2], [4], [6] of the Riesz-Radon problem (1908) for integral representations of a Radon measure on an arbitrary Hausdorff space. Results on
Frobenius type theorems for semilinear mappings of matrices over skew fields were established in [5]. Very recently, results giving a description of universal central extensions of matrix Lie
algebras were published in [39] and a book [40] `Differential and Difference Dimension Polynomials' was published by Kluwer.
V. A. Artamonov has carried out extensive work on quantum polynomials in [7] - [16]. The paper [17] provides a useful survey of recent results on quantum polynomials and their applications to K-theory and quantum groups, and [10] a detailed survey of quantum polynomials and their role in noncommutative algebra, including their K-theory and relations with Hopf algebras and noncommutative
geometry. The paper [16] shows that if a general quantum polynomial ring is Morita equivalent to a quantum polynomial ring then the rings are isomorphic, and solves the Zariski problem for quantum
polynomials. The paper [15] written jointly with R. Wisbauer determines all automorphisms of general quantum polynomial rings and finds invariants and trace maps. The article [12] written jointly
with P. M. Cohn shows that the division ring of a coordinate ring of a quantum plane has the property that a centralizer of any nonconstant is commutative and finds generators of the automorphism
group of this division ring.
The article [14] gives a necessary and sufficient condition for the triviality of the center of division rings of coordinate rings of quantum spaces.
The paper [13] surveys new results on division rings of coordinate rings of quantum affine spaces.
Very recently, a classification of automorphisms of division rings of quantum rational functions was given in [42] and a survey of recent results on identities in various classes of algebras was
provided in [43].
Y. P. Solovjev and coworkers have carried out work on elliptic functions and Feynman integrals in their publications [17] - [21]. The article [17] presents a generalization of perturbation theory with convergent series for Feynman integrals. [17] - [18] taken together provide a new construction of Hermitian K-theory based on a root system. The articles [2] and [21] supply a new method of
approximative calculation of Euclidean functional integrals with arbitrary accuracy.
I. Z. Golubchik has conducted research on the Schreier-van der Waerden problem [22] and on the structure of linear groups over P. I. and related rings [23] - [25]. This has relations to work carried
out by A. Bak and A. Stepanov and reported on in the Bielefeld University and St. Petersburg State University groups. The article [22] provides a complete solution of the Schreier-van der Waerden
problem on determining all isomorphisms of projective and linear groups over arbitrary associative rings. The paper [23] gives a description of normal subgroups of linear groups over P. I. and weakly
Noetherian rings and the paper [25] establishes analogs of this result for groups of Lie type.
A. A. Mikhalev and coworkers have done extensive work on noncommutative algebras, free algebras, Lie algebras, and Leibniz algebras. The paper [26] describes algorithms for symbolic computation in
Lie superalgebras. The paper [27] surveys results on orbits of elements of free groups and algebras under the actions of automorphism groups. The article [29] proves that test elements of a free Lie
algebra are elements not contained in proper retracts. The article [34] characterizes test elements in free algebra satisfying the Artin-Schreier property. The paper [31] shows that the variety of
Leibniz algebras has the property of differential separability for subalgebras, that the Jacobian conjecture is true for free Leibniz algebras, and that free Leibniz algebras are finitely solvable.
The papers [32], [33], [36] and [37] obtain algorithms for standard bases of ideals in various algebras.
2.2.D Mathematical Institute, Georgian Academy of Sciences
G. Janelidze
Categorical Galois theory (called CGT below for short) was developed by G. Janelidze in 1984-90, and one of the major objectives of this project was to investigate its various connections with
higher-dimensional homotopical algebra developed by R. Brown, T. Porter and other members of the Bangor team. Since CGT adequately extends Galois theory of commutative rings (see [6] - [9], [30],
[32], [33], [41]) and the theory of central extensions of groups and more general algebraic structures ([30], [36], [42]), those connections should help to realize our extended version of the
Grothendieck program, which is supposed to provide a unified foundation not only to algebraic geometry and algebraic topology, but also to the commutator/homology theory of ``group-like" algebraic
structures. The two most important results in this area of Bangor-Tbilisi collaboration are described in [3] and [4]. The first of them is a new extension of the Van Kampen theorem, based on
Grothendieck's Descent theory: it turns out to be a consequence of the so-called lextensivity property of the category of topological spaces, which simplifies the usual complicated form of the
descent data. The second one applies CGT to the adjunction between simplicial sets and groupoids; the resulting second order covering maps of simplicial sets are classified by the internal actions of
a new double groupoid, which turned out to be a ``many-object version" of certain known constructions, notably of Quillen and Loday. Recently, the geometrical description of that double groupoid was
also obtained ([5]). Independently of that, many new results in CGT, its non-homotopical examples, and in related areas of category theory and categorical algebra have been obtained in collaboration
with colleagues from Australia, Canada, France, Italy, Hungary, USA, and Portugal. Among those are:
1. CGT was extended in three directions ([31], [32], [35]) with new interesting examples. Let us just mention that as shown in [35], CGT contains the so-called Tannaka duality as a special case, and
that the results of [31] establish a deep link between CGT and the Kurosh-Amitsur theory of radicals.
2. The fundamental theorem of Galois theory of commutative rings was originally proved in full generality by A. R. Magid about thirty years ago. Later he found a mistake, and now - together with him
- it was corrected, and the correct formulation and proof based on CGT was obtained ([6]).
3. Separability in lextensive and general categories was investigated. It was shown that various basic results on (commutative) separable algebras and decidable objects in a topos whose known proofs
involved specific techniques (like projective modules or internal logic of the topos) have simple purely categorical proof. The relationship with the categorical notion of a covering morphism (used
in CGT) was established. See [7], [33], [43].
4. Grothendieck's Descent theory. A second expository paper with simplified presentations of various known results (often previously unpublished), and some new results, was written ([28]). All
existing problems of finite topological descent were solved, and for the general topological descent theory some counter-examples to open problems are constructed, and it is shown that the finite
case provides simple motivations for all existing results ([40]).
5. Factorizations in Galois theory. The categorical version of the classical purely inseparable-separable factorization of finite algebraic field extensions was obtained. The well known
monotone-light factorization in topology turned out to be another very special case of this: it was obtained by applying CGT to the adjunction between all compact Hausdorff spaces and the
0-dimensional ones ([8]). More complicated cases (commutative rings, locally connected spaces with local homeomorphisms, and others), where the factorization still exists, but no "good" description
of the purely inseparable morphisms can be given, were also investigated ([30]).
6. Central extensions, internal groupoids, and commutators. In several steps during several trips of G. Janelidze to Australia, it was finally proved that the three approaches to central extensions
of ``group-like" algebraic structures - homological-algebraic (Froehlich's school), universal-algebraic (commutator theory), and of CGT - perfectly agree ([36], [42]). The Galois theory of central
extensions involves internal groupoids in algebraic categories. And there are several important levels of generality, where it is desirable to have certain simplified descriptions of internal
groupoids (for example it is well known that in the category of groups they are precisely crossed modules). A reasonably wide description was obtained in [29]. In connection with this a categorical
reformulation of commutator theory with many new results was given in [34].
7. Semi-abelian categories were discovered in [39]. S. Mac Lane, who proposed the first version of the definition of an abelian category in 1950, in fact proposed to find a nonabelian version of it, in which the isomorphism theorems and some other basic facts and constructions of group/ring/module theory could be formulated. In the next twenty years various attempts were made, especially for the
purposes of homological algebra and theory of radicals. However all the proposed definitions used ``strange" conditions on normal mono-/epimorphisms, which excluded any reasonable use of
general-categorical methods. On the other hand, as shown in [1] using descent theory, there is a categorical notion of semi-direct product in Bourn protomodular categories - and that was one of the ingredients that helped to prove that the above mentioned old attempts yield a notion that can be equivalently expressed with modern, categorically natural axioms. The resulting "semi-abelian"
categories provide a good environment to simplify and unify many algebraic constructions. The results of [2] can be considered as an example of this.
8. Absolute homological algebra in general additive categories with kernels was obtained in [37]. It uses the old work of Yoneda and Grothendieck's descent theory, and includes the known
constructions for abelian topological groups and modules. The reason why such a long-standing problem was solved only in 1998 should probably be attributed to the various improvements in descent
theory and to previous work on the semi-abelian case (see above).
9. The Kurosh-Amitsur radical theory was already mentioned in connection with CGT and with semi-abelian categories. It was also understood that it has a non-trivial purely combinatorial aspect, related to the known fact that the pairs consisting of a radical class and the corresponding semisimple class do not always arise from a Galois connection (e.g. for non-associative rings); an appropriate combinatorial structure to replace the Galois connection was obtained in [38]. Moreover, it was shown there that the categorical setting of the theory of radicals rests on this more fundamental and rather simple combinatorial setting.
A. Patchkoria
The notion of a Schreier internal category in the category of monoids was introduced, and it was proved that the category of Schreier internal categories in the category of monoids is equivalent to the category of crossed semimodules. This extends the well-known equivalence between the category of internal categories in the category of groups and the category of crossed modules [55].
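For orientation, the group-case notion being extended here is standard and can be recalled as follows (a textbook definition, not taken from this report): a crossed module is a group homomorphism ∂: M → P together with an action of P on M, written (p, m) → ᵖm, subject to two identities:

```latex
\partial({}^{p}m) = p\,\partial(m)\,p^{-1}
\qquad\text{and}\qquad
{}^{\partial(m)}m' = m\,m'\,m^{-1}
\quad\text{(the Peiffer identity)},
\qquad m, m' \in M,\ p \in P.
```

The equivalence cited as [55] matches internal categories in groups with exactly such data; the Schreier condition on internal categories in monoids is what makes the analogous correspondence with crossed semimodules possible in the absence of inverses.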
Homology and cohomology monoids of presimplicial semimodules (in particular, presimplicial abelian monoids) were introduced, and some algebraic and topological applications of them were given (constructions of homology and cohomology monoids of topological spaces with coefficients in abelian monoids, a generalisation of the construction of derived functors via simplicial resolutions to semimodule-valued functors, etc.). Relations between these homology monoids and the classical homotopy groups of simplicial abelian monoids were studied. These and other results on the homological algebra of monoids and semimodules are included in [51] - [54] and [56]. This work is perceived as potentially important in homological algebra and its applications wherever finer invariants than abelian groups are needed (for example where a Grothendieck group loses too much information). The origin of this work was in fact in applications to Cousin's problem in analysis.
In joint work with M. Lawson (Bangor team), homological descriptions of the homotopy groups arising from 0-dimensional idempotents of simplicial inverse semigroups were obtained in some special cases.
T. Datuashvili
During the INTAS project period, among other questions, the problem of the internal Kan extension, suggested by G. Janelidze, was investigated. The crossed module approach to this question enabled us to obtain, under certain conditions, necessary and sufficient conditions for the existence of internal Kan extensions [11]. Later the same results were obtained under more general conditions [13]. In working on the topological approach to the above-mentioned problem, the well-known equivalence of the category of 3-types (in the sense of [J. H. C. Whitehead, Combinatorial homotopy I, Bull. A.M.S. 55 (1949) 214-245]) with the localized category of crossed modules [H. J. Baues, Combinatorial homotopy and 4-dimensional complexes (Max Planck Institut 1990), preprint], [J.-L. Loday, Spaces with finitely many non-trivial homotopy groups, J. Pure Appl. Algebra 24 (1982) 179-202] is not useful. One needs to deal with (internal) equivalences of internal categories (equivalently, crossed modules), not with weak equivalences of the corresponding crossed modules. This leads to the search for a more general relation between crossed modules and connected cell complexes. The main result obtained here (see [12]) is the existence of an adjoint pair of functors between the homotopy category of internal categories (= crossed modules) and the category of 3-types. The constructions given in [S. Mac Lane, Cohomology theory in abstract groups III, Ann. Math. 50 (1949) 736-761], [S. Mac Lane and J. H. C. Whitehead, On the 3-type of a complex, Proc. Nat. Acad. Sci. USA 36 (1950) 41-48], [J. H. C. Whitehead, Combinatorial homotopy II, Bull. A.M.S. 55 (1949) 453-496] were used, and their functoriality was shown. It was important to pay attention to the "middle" category between crossed modules and homotopy systems in the process of realization of algebraic 3-types. This is the subcategory XModF of the category XMod of crossed modules whose objects are the crossed modules with a free group of operators. We show that the correspondences between algebraic 3-types, crossed sequences of a special type, 3-dimensional homotopy systems and connected cell complexes define functorial relations, equivalences or adjunctions between the corresponding homotopy categories. These results can be applied to the problem of the existence of internal Kan extensions by reducing it to the problem of the unique extension of a continuous map between connected cell complexes. T. Datuashvili is also working in collaboration with T. Pirashvili [14] on the homology of crossed modules, and with J.-L. Loday on Leibniz algebras.
N. Inassaridze
The important problem of the derived functors of the nonabelian tensor product of Brown and Loday has been solved using methods of nonabelian derived functors. This required extending the original definition to the case of non-compatible actions [1], and it also leads to new problems, such as the finiteness of this new product [2]. Further, the existence of these derived functors leads to new notions of nonabelian homology H[n](G,A), where G and A are two groups acting on each other [1]. These coincide with the usual Eilenberg-Mac Lane homology groups when A is a G-module. A number of new results on this nonabelian homology are obtained in the papers [3,4,5,6,7]. These include: a Mayer-Vietoris sequence; a description of H[n](G,A) as the left derived functors of the functor H[1](G,A); explicit formulas for the second and third nonabelian homology groups using Čech resolutions; and sufficient conditions for the nonabelian homology groups in dimensions ≥ 2 to be finitely generated, finite, p-groups, torsion groups or groups of exponent q. For instance, suppose that the action of A on G is trivial, that G is finite, and that A is a finite group (or p-group, or finitely generated group). Then H[n](G,A), n ≥ 2, is a finite group (or p-group, or finitely generated group).
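The nonabelian tensor product in question is, in the original compatible-action setting of Brown and Loday (standard definition, recalled here for orientation), the group G ⊗ A generated by symbols g ⊗ a, for g in G and a in A, subject to the relations

```latex
gg' \otimes a \;=\; \left({}^{g}g' \otimes {}^{g}a\right)(g \otimes a),
\qquad
g \otimes aa' \;=\; (g \otimes a)\left({}^{a}g \otimes {}^{a}a'\right),
```

for all g, g' in G and a, a' in A. The extension in [1] removes the compatibility requirement on the two actions.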
Some properties of the nonabelian tensor product modulo q of two crossed modules, introduced by Conduché and Rodriguez-Fernandez, are established (commutativity, compatibility with direct limits of crossed modules) [8]. The extension of the tensor product to a tensor product modulo q leads to the introduction and development of certain aspects of a q-modular version (q-homology) of the classical Eilenberg-Mac Lane homology theory of groups, where q is a nonnegative integer [8]. Its functorial properties (exactness, universal coefficient formulas) and calculations (for free groups and finite cyclic groups) are given [8]. The relationship between q-homology groups and the derived functors of the tensor product modulo q is studied [8].
Homology groups modulo q of a precrossed P-module are defined in all dimensions in terms of nonabelian derived functors [9]. A Hopf formula is proved for the second homology group modulo q of precrossed P-modules, which shows that for q = 0 this definition is a natural extension of Conduché and Ellis' definition of the second homology group of precrossed P-modules [9]. Other properties of homology groups modulo q of precrossed P-modules are investigated; in particular, for any short exact sequence of precrossed P-modules a five-term exact homology sequence modulo q is obtained [9].
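For comparison, the classical Hopf formula which the q = 0 case extends reads as follows (a standard result, stated here only for orientation): if G is presented as F/R, with F a free group and R a normal subgroup, then

```latex
H_2(G) \;\cong\; \frac{R \cap [F,F]}{[F,R]}.
```

The Hopf formula proved in [9] is the analogue of this for the second homology modulo q of a precrossed P-module.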
Some properties of the nonabelian tensor product of two Lie algebras M and N acting on each other are established [10]. Using techniques of nonabelian homological algebra, a nonabelian homology of a Lie algebra M with coefficients in any Lie algebra N (where M and N act on each other) is constructed via the nonabelian left derived functors of the nonabelian tensor product of Lie algebras; this generalizes the classical homology theory of Lie algebras [10]. Functorial properties of this nonabelian homology of Lie algebras are established.
Defining and using higher (n-fold) Čech resolutions of groups and the abelianization of crossed n-cubes, a new approach to the classical Hopf formula for the higher homology of groups is given [11].
The notion of q-modular cohomology of a group G with coefficients in a G-module A is introduced [12], where q is a nonnegative integer. Its description in terms of extensions, some of its properties and calculations are given. For a finite group G, Tate cohomology modulo q is defined [11].
B. Mesablishvili
The connection between the abstract Galois theories of Ligon and of Chase and Sweedler was investigated. In particular, it was shown that they give the same Galois theory in the category of modules over an elementary topos [44]. A. Grothendieck's descent theorem was extended to a wider class of morphisms of schemes [45].
D. Zangurashvili
D. Zangurashvili worked on the construction of various factorization systems in general categories, and prepared the papers [57] - [60]; one of them [59] was prepared during her visit to Coimbra University (Portugal), in collaboration with M. Sobral.
Z. Omiadze
Z. Omiadze continued his work on II-categories, which he introduced before ([46], [47]), and on their higher-dimensional versions ([48], [50]). He also began to investigate a new type of
2-dimensional categorical enrichment ([49]).
2.2.E St. Petersburg State University
All members of this group have made research visits to Bielefeld. N. Vavilov visited on a Humboldt Fellowship for a year.
The members of this group have cooperated extensively with one another and with members of the Bielefeld group. Nine joint papers of this kind ([2], [7], [8], [10], [24] - [26], [28]) were written. Several articles will appear in K-Theory.
The paper [3] provides a detailed and extensive study of weight elements in Chevalley groups. This theme is carried further in [11] and [16].
The papers [5], [6] are part of a continuing cooperation between N. Vavilov and L. Di Martino on (p, q)-generation of subgroups and groups of Lie type.
The paper [7] with A. Bak proposes a very interesting and deep extension of Milnor's conjecture relating K-theory to quadratic forms, and provides evidence for the conjecture.
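For context, the classical conjecture of Milnor being generalized here can be stated as follows (standard formulation; the precise form of the extension in [7] is not reproduced here). For a field F of characteristic different from 2, with I(F) the fundamental ideal of even-dimensional forms in the Witt ring W(F),

```latex
K^{M}_{n}(F)/2 \;\cong\; I^{n}(F)/I^{n+1}(F), \qquad n \ge 0,
```

where K^M_n(F) denotes Milnor K-theory and the ideal I^n(F) is generated by the classes of n-fold Pfister forms. This is why criteria for the vanishing of sums of Pfister classes, as in [7], bear directly on the conjecture.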
The paper [8] is part of an ongoing cooperation between A. Bak and N. Vavilov on the structure of hyperbolic unitary groups. The current paper is the first in a planned series and is concerned with
definitions and basic results, including the normality of the elementary subgroup. A successor is currently being written.
The paper [10] of N. Vavilov and A. Stepanov provides new insight of a geometric nature into the normality of elementary subgroups. These ideas are carried further in [14]. Both papers are related to the joint work of A. Bak and N. Vavilov described above.
The papers [17] - [19], [21] and [23] of E. V. Dybkova concern the structure of net subgroups in linear and hyperbolic unitary groups. These papers are related to that of A. Bak and A. Stepanov [28], which develops general procedures for determining when classifying sandwiches in classical-like groups are nilpotent. The results there include applications to net subgroups, which overlap with results of Golubchik [23] (in the Moscow State University references).
2.2.F Steklov Institute, St. Petersburg
All the members of this group have made research visits to Bielefeld, and three have spent, or will spend, a year or longer as Humboldt Fellows. Below is a summary of the results achieved by the group.
The 30-year-old problem of finding a Grothendieck K[0]-construction of K[1] of an exact category is solved in two papers, [25] and [26] (of the Bielefeld references), of A. Nenashev, who is currently finishing his stay in Bielefeld as a Humboldt Fellow. The paper [27] of Nenashev makes a start at solving the same problem for K[2] of an exact category, and the paper [28] applies the results of [25] and [26] to λ-operations on K[1].
A. Suslin and coworkers have made significant advances in the cohomology of group schemes, the K-theory and cohomology of sheaves, Chow groups, and the homology, cohomology, and K-theory of GL and related functors. The results on group schemes are contained in [1] - [3]. Work on Chow groups and on the K-theory and cohomology of sheaves is contained in the articles [6] and [7], which will appear in an upcoming monograph in the Annals of Mathematics Studies. Two very important articles, [8] and [9], concerning the cohomology of GL and related functors are appearing in the Annals of Mathematics.
The article [8] builds on an earlier success [2] of Suslin and Friedlander solving the long standing conjecture that the cohomology algebra of a finite group scheme is finitely generated, by
extending significantly the scope for making Ext-group calculations. Results here include a complete determination of all Ext-groups between classical functors in the category of strict polynomial
functors of finite degree. Methods and results developed in [8] are used in [9]. Further articles on the homology, cohomology, and K-theory of GL are found under [28] - [32] and [34].
A. Merkurjev and coworkers have carried out extensive research on the K-theory of algebraic groups. Several of their articles have appeared in K-Theory. Definitive results on R-equivalence and index theory for algebraic groups are contained in the articles [16] - [19] and [23] - [27]. Further aspects of the K-theory of algebraic groups are found in [20], [21], [26] and [27], and a survey is published in [22].
Problems concerning isotropy and splitting for quadratic forms over fields are studied and solved in [10] - [13] and [36].
The article [25] (of the St. Petersburg State University references) handles certain stable range questions for affine algebras and the paper [5] has results on a conjecture of Grothendieck for
Azumaya algebras.
2.2.H Comparison with the Work Programme
We see the results achieved as fulfilling the essence of the Objectives of the Work Programme, and as properly taking up new opportunities which arose in the course of the work.
2.3 Scientific Results
Here we mention Bak's new method of global actions, giving a purely algebraic version of a topological space, whose homotopy groups give the higher algebraic K-groups. This has also been developed in a number of directions with Bak's direct collaborators, particularly with the groups of Suslin and of Vavilov. A recent development is the link with algebraic homotopy expertise from Bangor and nonabelian homological algebra expertise from Tbilisi.
Another stream comes in via the Tbilisi-Bangor connection: the generalisation of Grothendieck's Galois theory by Janelidze. Links with the theory of descent are already clear. In terms of the impact of INTAS, we mention that the work at Tbilisi has been considerably influenced by the work at Bangor on crossed modules and related topics, such as the non-abelian tensor product of groups. The late addition of Lawson's work on inverse semigroups to the programme is a direct result of INTAS.
Other very significant work includes: the full solution of the Riesz-Radon problem (1908) obtained by Mikhalev and coworkers at Moscow State University; results of Suslin and coworkers on the K-theory and cohomology of sheaves, Chow groups and GL; results of Merkurjev on R-equivalence and the rationality problem for semisimple adjoint classical groups, and on index reduction formulas for twisted flag varieties of semisimple algebraic groups; the solution by Nenashev of the classical problem of finding a Grothendieck K[0]-construction for K[1] of an exact category; and a generalization of Milnor's conjecture for quadratic forms, together with supporting evidence, by Bak and Vavilov.
This is only a brief mention of the considerable body of work described in more detail in the individual reports and in the lists of papers. Of course, the INTAS support is one part of a thriving range of connections, which makes it difficult to quantify exactly how much of the overall progress this support is responsible for. The support has had a clearly significant effect, both in terms of the actual help given to the participating NIS teams and in terms of the cooperation which it has encouraged and will continue to encourage.
│Scientific Output │published│in press/accepted│submitted│in preparation│
│Paper in an International Journal │ 135 │ 19 │ 24 │ 42 │
│Paper in a National Journal *) │ 3 │ 3 │ 4 │ 6 │
│Abstract in proceedings of a conference │ 1 │ │ │ │
│Book, Monograph *) │ 2 │ 0 │ 0 │ 0 │
│Internal Report **) │ │ │ │ │
│Thesis (MSc, PhD, etc.) *) │ 11 │ │ │ │
│Patent │ 0 │ 0 │ 0 │ 0 │
│Oral Presentation, Public Lecture │ Many! │ │ │ │
3.1 Meetings and visits
In general this project has worked as part of a larger multifunded effort. Bangor and Bielefeld have kept in excellent contact, using funding from the British Council/ARC and the SOCRATES Programme. Other funds have supported a seminar in Tbilisi for NIS participants. Bak took part in the Pontrjagin 90th Birthday meeting in Moscow in the summer of 1998 and gave two talks; in fact this meeting was partly sponsored by INTAS, but Bak's funding was from the DFG. Many NIS visitors to Bielefeld have been supported by funding from the DFG and the Humboldt Foundation.
The Bielefeld group has cooperated closely with members of the groups at St. Petersburg State University, Steklov Institute, Moscow State University, and Bangor University and is beginning a
cooperation with the Mathematical Institute of the Georgian Academy of Sciences. Members from all of the universities and institutes in the INTAS project have made research visits to Bielefeld and 5
members (Izhboldin, Merkurjev, Nenashev, Panin, and Vavilov) have been or are currently Humboldt Fellows in Bielefeld.
There is considerable contact and collaboration between the two groups in St. Petersburg.
The INTAS funds at Bangor largely supported: visits of Brown to Utrecht, to a PSSL meeting where he had discussions with Janelidze, and to Dunkerque to meet Janelidze, who was visiting there; a visit of Dr R. Vilanueva (Valencia) to discuss the book project on crossed modules; a visit of Prof. Buchberger (Linz), an expert on Gröbner bases and indeed the founder of the algorithms in the area, to give advice on our developing work in the area; a visit of Bak to Bangor; and a two-month stay of Dr İ. İçen (İnönü) to work with Brown on new methods in holonomy - this work is in press or in preparation.
A. Nenashev from the Steklov group has been for over a year in Bielefeld. INTAS funding extended this support conveniently.
Janelidze from Tbilisi has been travelling in the West throughout this period with other support, and communication between him and Bangor has taken place at various meetings. Inassaridze also visited Bielefeld and Bangor with support from an INTAS Fellowship in Dec 1999 - Jan 2000.
Mikhalev and Artamonov visited Bangor for a week in Jan, 2000.
3.2 Visits
The following visits took place under INTAS funding.
To Bangor:
Tbilisi group: Inassaridze: 1 month October 1997
Patchkoria: 1 month Jan-Feb 2000
Moscow State University: Artamonov 1 week Jan 2000
Mikhalev 1 week Jan 2000
To Bielefeld
St Petersburg State University: Vavilov: 2 weeks Nov 1997, 3 weeks Jan - Feb 2000
Stepanov: 5 weeks May - June 1998
Sivatski: 3 weeks May - June 1998
Dybkova: 3 weeks Nov - Dec 1998
Mischenko: 2 weeks Nov 1997
Moscow State University: Artamonov 2 weeks Jan 1999
Mischenko: 2 weeks Nov 1997
Steklov Institute: Pushin 3 months April - June 1999
Joukovitski 2 months April - May 1999
Yagounov 2-3 weeks
Nenashev 2 weeks
From Bangor:
Brown visited Utrecht, and Dunkerque to meet Janelidze (4 days).
From Bielefeld:
Bak visited Moscow for one week.
Funds at Bangor were used to support other visitors relevant to the work programme:
İçen (İnönü) 2 months subsistence (for work on holonomy and local subgroupoids)
Sivera (Valencia) 2 weeks subsistence (for work on a planned book on higher dimensional group theoretic methods in topology and algebra)
Buchberger (Linz) 1 week travel and subsistence for discussions on Gröbner bases (May 1998).
Visit of Bak (1 week, Jan 1998). Other strong contacts Bangor-Bielefeld were supported by an ARC/British Council grant.
Funds at Bielefeld were used also to support 3 workshops of 1 week to 10 days, two in 1998 and one in 1999.
3.3 Collaboration
│Intensity of Collaboration │high│rather high│rather low│low│
│West ⇔ East │ * │ │ │ │
│West ⇔ West │ * │ │ │ │
│East ⇔ East │ │ * │ │ │
The East-East collaboration is largely at St Petersburg.
3.4 Time Schedule
Time Planning
The results achieved are completely consonant with the overall thrust of the Work Programme.
3.5 Problems encountered
│Problems encountered │major│minor│none│not applicable│
│Co-operation of team members │ │ │ * │ │
│Transfer of funds │ │ * │ │ │
│Telecommunication │ │ │ * │ │
│Transfer of goods │ │ │ │ * │
│Other │ │ │ │ * │
3.6 Actions required
No action required.
4 FINANCES (in EURO)
4.1 This grant
Spending, including the funds for salaries and travel, has been in accordance with the work programme, allowing for the final 10%.
4.2 Other funding
This has been a multifunded project and the various support has contributed greatly to the success of the project.
1. A British Council/ARC Grant supported Bangor/Bielefeld visits for the collaboration (workshops in Bielefeld and in Bangor).
2. Janelidze was widely supported for work in the West in the period of the grant, and meetings of Brown and Janelidze in the West took place as a result.
3. Inassaridze was partially supported by an INTAS Young Scientist Grant, and this included visits to Bielefeld and Bangor Dec 1999-Jan 2000.
4. A DFG/RFBR Grant supported Bielefeld/St. Petersburg and Bielefeld/Moscow visits for collaboration.
5. Each of five NIS members Izhboldin, Merkurjev, Nenashev, Panin, and Vavilov were supported as Humboldt Fellows for a year in Bielefeld.
6. Members of all teams (except Bielefeld) were supported for stays in Bielefeld of various lengths by the SFB 343.
5.1 Summary
Work by participants has led to the publication of some spectacular results.
These include a solution of the long standing conjecture that the cohomology algebra of a finite group scheme is finitely generated and a further development of methods here to achieve a complete
determination of all Ext-groups between classical functors in the category of strict polynomial functors of finite degree.
A solution of the long standing problem of finding a Grothendieck K[0]-construction for K[1] of an exact category is obtained.
A generalization of Milnor's conjecture, giving criteria for when sums of n-fold Pfister classes in the Witt ring are trivial, is made, and evidence in support of these criteria is proved.
The group of R-equivalence classes for all adjoint semisimple classical groups is computed, and index reduction formulas are determined for the twisted flag varieties of any semisimple algebraic group.
A full solution of the Riesz-Radon problem for integral representations of a Radon measure on an arbitrary Hausdorff space is obtained.
A dimension theory for categories is developed and applied to determining structural properties of certain classical-like groups.
Foundations for global actions have been laid and a model category description of the homotopy theory of global actions is developed.
Work on homotopy coherence and on computational methods has confirmed the fundamental rôle of crossed complexes as an extension of chain complex methods which can deal with non-simply-connected spaces. These yield new methods of computing modules of identities among relations for presentations of groups. Rewriting methods have been vastly extended, from presentations of monoids to presentations of induced actions of categories, extending the field to computational category theory.
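As a toy illustration of the string-rewriting idea behind these methods (this sketch is not the project's actual machinery, and the presentation chosen is purely illustrative): given a monoid presentation with a complete rewriting system, normal forms of elements can be computed by repeatedly applying the rules.

```python
def normal_form(word, rules):
    """Rewrite `word` to a normal form using string-rewriting rules.

    `rules` is a list of (lhs, rhs) pairs; each step replaces the leftmost
    occurrence of an lhs by its rhs.  This terminates whenever the rule set
    is a complete (terminating and confluent) rewriting system, as in the
    example below.
    """
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules:
            i = word.find(lhs)
            if i != -1:
                word = word[:i] + rhs + word[i + len(lhs):]
                changed = True
                break
    return word

# The free commutative monoid on a, b, presented as <a, b | ba = ab>:
# the single rule "ba" -> "ab" sorts any word, so normal forms are a^m b^n.
rules = [("ba", "ab")]
print(normal_form("babba", rules))  # -> "aabbb"
```

Two words represent the same monoid element exactly when their normal forms coincide, which is what makes complete rewriting systems a computational tool for word problems.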
A wide range of applications of the Brown-Loday non abelian tensor product have been found, including new forms of non abelian homology and of homology mod q.
The main results in category theory continue the programme of Categorical Galois Theory, which is shown to have a wide range of applications, involving Descent Theory, internal groupoids and
commutator theory, and to link with the homotopical algebra methods developed by the Bangor group.
The project involved about 30 participants, from two EU and four NIS centres.
Some key papers
E. Friedlander and A. Suslin, Cohomology of finite group schemes over a field, Inventiones Math. 127 (1997), 209-270.
V. Franjou, E. Friedlander, A. Scorichenko and A. Suslin, General linear and functor cohomology over finite fields, Annals of Math. 150 (1999), 663-728.
A. Bak, Global actions: the algebraic counterpart of a topological space, invited paper for the 100th anniversary of P. S. Alexandroff, Uspekhi Mat. Nauk 52:5 (1997), 71-112; English translation: Russian Math. Surveys 52:5 (1997), 955-996.
A. V. Mikhalev and V. K. Zakharov, Integral representation for Radon measures on an arbitrary Hausdorff space, Fundamental and Applied Mathematics 3 (1997), no. 4, 1135-1172.
A. Merkurjev, I. A. Panin and A. R. Wadsworth, Index reduction formulas for twisted flag varieties, II, K-Theory 14 (1998), 101-196.
A. Nenashev, Double short exact sequences and K[1] of an exact category, K-Theory 14 (1998), no. 1, 23-41.
G. Janelidze and R. H. Street, Galois theory in symmetric monoidal categories, J. Algebra 220 (1999), 174-187.
R. Brown, M. Golasinski, T. Porter and A. Tonks, On function spaces of equivariant maps and the equivariant homotopy theory of crossed complexes, Indag. Math. 8 (1997), 157-172; II: the general topological group case, K-Theory (to appear).
Coordinator's home page
The extensive article on 'Higher dimensional group theory' there should also be noted. There is also a link to a workshop at Bangor in January 2000.
6 ROLE AND IMPACT OF INTAS
'The project' is very broad, and many aspects would have been started and continued whatever the funding. None the less, the scope of the project and its actual and potential interconnections have been enormously increased by its existence, and I would rate the support as a strong success in scientific terms, both in results already realised and in prospects for the future.
│Role of INTAS │Definitely yes│rather yes│rather not│definitely not│
│Would the project have been started │ │ │ │ │
│without funding from INTAS? │ │ │ * │ │
│Would the project have been carried out │ │ │ │ │
│without funding from INTAS? │ │ │ * │ │
In the above it should be emphasised that the project carried out was multifaceted and multifunded. None the less, I consider that the INTAS funding gave an extension of the overall work plan of the participants, leading to many results which would not have been conceived without this funding.
│Main achievement of the project │Very important│quite important│less important│not important│
│Exciting science │ * │ │ │ │
│new international contacts │ * │ │ │ │
│additional prestige for my lab │ │ * │ │ │
│additional funds for my lab │ │ * │ │ │
│helping scientists in the NIS │ │ * │ │ │
│other: The opening out of a broader range of interactions than originally envisaged, │
│ and so the development of new prospects. │
The project will continue in its multifaceted mode.
A further INTAS proposal with these and some further partner(s) is being planned.
No recommendations.
8 ANNEXES
The relevant publication lists of the participants are accessible as follows:
Moscow State University
St Petersburg State University
Steklov Institute, St Petersburg
File translated from TeX by TTH, version 2.78.
On 6 Jan 2001, 23:25.
PLEASE HELP ME WITH MY MATH H.W
Number of results: 278,219
math urgent please...please...
I couldn't do g). someone help i need it in the morning. help me please. do it for me please. Show me the steps too please...please...
Sunday, December 15, 2013 at 12:56pm by kavi
Math, Please help :/
I cant do this !! PLease help !! -y+12 divided by 4 > 8 PLEASE PLEASE PLEASE HELP!!!!! (:
Thursday, October 7, 2010 at 8:25pm by weronika
To Ms.Sue or any Math helpers
Can you please help me with my math law of exponent quesions, please check my answers, please its due very soon.
Sunday, September 8, 2013 at 8:08pm by Sophie
Quantum Physics
come on please please please please please please please please please tell the right answer
Saturday, October 12, 2013 at 10:36pm by ss01
can someone take a look at my problem please? its Geometry please please please help
Wednesday, December 12, 2012 at 4:19pm by BaileyBubble
Math ms sue please please please help!!!!!!
Can u at least tell me if I was right about numbers 1&2?
Sunday, November 10, 2013 at 6:59pm by Math help
Please help me math is boring and I'm a kinestetic learner please help its not fun please help man!!
Tuesday, November 13, 2012 at 7:13pm by Sorrycanttellyou
Math ms sue please please please help!!!!!!
Number 4 would be 50%
Sunday, November 10, 2013 at 6:59pm by Math help
Need HELP ASAP - PLEASE ` economics
Can i please get this help please so i can get me a place to live for me and my kids please i cry for dis help for 6 years can i please get it and can i please get it bye next weekly please i don't
have a job at all and will like to get me and my little family help please and ...
Wednesday, July 7, 2010 at 12:35am by Linda Snow
Please be quiet. Please stop talking. Please don't talk. Please refrain from speaking. Remember, silence is golden. You should be able to take these five, add yours, and use all of them to make
Wednesday, September 2, 2009 at 9:22pm by DrBob222
OK Ms.Sue But can you please report this person ''PsyDAG'' please :( ??? please the person hurt my feelings :( !!!!!
Wednesday, December 5, 2012 at 3:48pm by Sammy, !!!Please Answer!!
Social Studies
Please Help I know but please I'm the parent and my son tell me to check and i don't know 2 much please in your own words please :( :( :( :( please pretty please please please !!!! :( don't give give
me links please in your own words please please :( !!!!! Identify two ...
Saturday, April 6, 2013 at 9:49pm by Alejandra
hey help me please 1/5 =6/10 = 0.6 is this right n here r some more decimals 4/5 = 9/10 =0/9 and more 2/10 = 5/5 = 0.7 and if u know the answer please help me please
???????????????????????????????????????????????????????????????????? please i need help please?
Monday, May 2, 2011 at 5:07pm by Hailey
computers please check my answer
is there any 4th grade answers to math please help me find it please!!!!!!!!
Wednesday, June 4, 2008 at 8:28am by lilesn
Math ms sue please please please help!!!!!!
#1 independent #2 dependent #3 6/25 * 5/24 #4 1/2 or 1:2 and 50% #5 3/6 * 1/2
Sunday, November 10, 2013 at 6:59pm by Steve
Monday, January 12, 2009 at 8:21pm by George
math(Please please help!!!)
1) Find the period and the amplitude. y= 3 sin 2x Please explain!!! I do not know how to do this.
Monday, February 1, 2010 at 9:32pm by Hannah
math (please please please help)
How to convert into polar form? z = 1 - i w = 1 - √3i
Thursday, October 6, 2011 at 7:47pm by Lynn
Math ms. Sue please help
Or is it 13 27/80? Please answer!!! Please!!
Wednesday, September 11, 2013 at 5:47pm by Gabby
math ergent please please...
Reiny help me with using intervals. u forgot to help me with that. its urgent please
Friday, November 29, 2013 at 9:55am by Kaplan
Math (PLEASE HELP)
2x-3y=-2 -2y+3x=12 ~Explanation too, please! I really need help! Please and thanks!
Saturday, January 21, 2012 at 12:28pm by Izzy
Math, Please Help!!!
how to divide x^3 - x^2 - 22x + 13 by x^2 + 4x - 2. Please Help!!! Please Hurry!!!!
Thursday, September 15, 2011 at 3:09pm by Misty
Well, Can u give me the answer please it because i'm in a rush..... please and later i tell u the rest please please :( :( :( :( ??????? please please
Tuesday, March 12, 2013 at 10:24pm by Destiny
please can you answer this is all about sums and differences of rational algebraic expression...please i need it please ?this is the problem....3/5+4/5=? another x/3-y/3=? another is 7/a+2/a
Friday, October 8, 2010 at 7:34am by jazmin
MS.SUE Here are my answers can you pretty please check them please I'm in a hurry, sorry thank you so much :( !!!!!
Wednesday, December 5, 2012 at 3:48pm by Sammy, !!!Please Answer!!
Math help please last question
Solve the equation. 10. 2(4x -4) + 2(3x + 2) = 360 Can you please help me on this problem? Could you explain it please? I'm confused...
Friday, September 27, 2013 at 5:52pm by Charlotte
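A quick worked check of this equation (an editorial sketch, not part of the original post): distribute, combine like terms, then isolate x.

```python
# Solving 2(4x - 4) + 2(3x + 2) = 360 step by step:
#   distribute:  8x - 8 + 6x + 4 = 360
#   combine:     14x - 4 = 360
#   isolate:     14x = 364, so x = 26
x = (360 + 4) / 14
print(x)  # 26.0

# sanity check against the original equation
assert 2 * (4 * x - 4) + 2 * (3 * x + 2) == 360
```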
Math please help please help
Out of 42 kids in a class twice as many failed Ela as math,4 failed both. If 7 failed neither, find how many failed each subject. Please help and show work it would be great thanks i really need help
Monday, June 6, 2011 at 7:26pm by Ray
math please help me
18n-20 = 36-10n then what is n? please please help me
Wednesday, September 7, 2011 at 11:02pm by jacob
Please help me with this math question please. Am I correct in my computing? 11-6 5 ____ = _____ = undefined 7-7 0 If I am wrong please help and can you show the work? Thank you.
Sunday, August 29, 2010 at 9:59am by cindy
MATH HELP ME NOW PLEASE!!
DETERMINE THE SUM OF THE FOLLOWING GEOMETRIC SERIES A. -1/32+1/16-...+256 B. 50 over Σ 8(.5)^n-2 where n=1 HELP PLEASE!!!!!!
Thursday, January 20, 2011 at 3:57pm by Please help me!
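An editorial numeric check of the two sums (a sketch; the series are read as posted, i.e. ratio -2 with 14 terms for part A, and n = 1..50 for part B):

```python
def sum_a():
    # -1/32 + 1/16 - 1/8 + ... + 256: ratio -2, and 256 = (-1/32) * (-2)**13,
    # so the series has 14 terms.
    return sum(-1 / 32 * (-2) ** k for k in range(14))

def sum_b():
    # sum over n = 1..50 of 8 * (0.5)**(n - 2)
    return sum(8 * 0.5 ** (n - 2) for n in range(1, 51))

print(sum_a())  # 170.65625
print(sum_b())  # just under 32
```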
pre algebra-math
two fractions equivalent to the given factor: 1) 18/21 2)3/17 thank you so much if u helped me
Wednesday, January 26, 2011 at 10:46pm by .please please please help
Math Help Please
Would someone please be kind enough to explain steps for this one. Thank you. If m and n are positive integers and m is 250% of n, what percent of m is 2n?
Monday, February 27, 2012 at 5:54am by Help Please
Math PLEASE HELP PLEASE PLEASE PLEASE
Use the formulas provided with each question. The first term is called a1. In the first problem, clearly a1 = 2 and a2 = 5 Try doing these yourself or you will learn nothing.
Tuesday, July 10, 2012 at 10:42am by drwls
Math ms sue please please please help!!!!!!
You don't know anything?
Sunday, November 10, 2013 at 6:59pm by Math help
but serioulsy i CAN'T DO IT PLEASE PLEASE OH PLEASE GIVE ME THE ANSWERS!!!
Tuesday, November 20, 2007 at 6:20pm by Veronica
Please help
Sunday, November 22, 2009 at 4:58pm by Lisa willsomeone look at this please
Math ms sue please help
Please answer!!!! Please!!!!
Wednesday, September 11, 2013 at 5:26pm by Gabby
Math ms sue please please please help!!!!!!
4. is right.
Sunday, November 10, 2013 at 6:59pm by Anonymous
Math ms sue please please please help!!!!!!
Sunday, November 10, 2013 at 6:59pm by Math help
math is kinded of hard for me but I try my best in math and please solve my math problem and see if its right sinccer kim and 1 think can you spell sinccer please thank you very much anybody and sue
Tuesday, January 6, 2009 at 7:32pm by kim
Whats a consisnent. like suppose 2,3,6,9,8 Can someone help please? Can someone help please: What are conssitent numbers in math? i need help on my math its algebra
Tuesday, December 5, 2006 at 6:29pm by Paris
Math - please, I really need help with this
please help me understand this; I have ten more math problems that are similar to still do.
Thursday, October 23, 2008 at 11:17pm by Luci
I need a good studying website for math 8. The book I have is mathlinks 8. please help :-)
Wednesday, June 10, 2009 at 9:57pm by elina MS Sue Please Help Me
Math for Steve or someone good at math
Please check my answers. 1. b 2. a 3. c 4. b 5. d If i got any wrong please tell me the right ones.
Thursday, November 8, 2012 at 2:05pm by Jman
Math ms sue please please please help!!!!!!
1. independent 2. either both or dependent...not sure. 3. I think this should be 6/25 and 5/24. 4. there are 10 numbers, with 5 being odd. What is the answer?
Sunday, November 10, 2013 at 6:59pm by Anonymous
Math ms sue please please please help!!!!!!
Sunday, November 10, 2013 at 6:59pm by Anonymous
8th grade science :(
Please Please Please Please Please help me with this because I have looked at a million websites and I have read the whole chapter in my science book just trying to find the answer so can someone
please help me with this question? Please? Thank you. Counting on you Ms. Sue ;)
Monday, December 9, 2013 at 6:54pm by Ira
Hi i have a math exam tomorrow and i wanted to get some extra practice so can you please find me free online math tests. Please do not give me "Proprofs" as one of the websites. I have already tried
their website and it did not help :D
Wednesday, January 27, 2010 at 3:56pm by clueless
1) Write a translation rule that maps point D(7, –3) onto point D'(2, 5). Note: PLEASE ANSWER, NEAT & CORRECT PLEASE AND THANK YOU!! =) :-) AND EXPLAIN IT GOOD STEP BY STEP, PLEASE. :)
Friday, November 30, 2012 at 8:03pm by !!!Please Answer!! NOW
please someone tell me how to solve this. im very confused and im homeschooled so my teachers are no help and my mom cant do math. please please help!!!
Thursday, January 17, 2013 at 9:46am by Corie
Please help with this math problem. The math question asks, "Identify the numerator and denominator of the fraction 3/8. Please let me know if I am correct? numerator is the 3 and te denominator is
the 8. Is this a trick question? My professor want me to show my work, so this ...
Friday, August 27, 2010 at 6:31pm by adore001
Reiny help please...
can u please explain to me how to find these sign < >. how do u know which direction it goes to. help me please. i have a test tomorrow. so please let me know. please quick.
Wednesday, November 27, 2013 at 12:31pm by kavi
For my class my topic is circles but i need a catchy phrase or slogan or anything that rhymes with circle to remember circle by. can anyone help me please PLEASE PLEASE!!! thank you
Monday, October 15, 2007 at 4:53pm by Anonymous
Math! Please help! anyone!
Please help me on my previous question for math! I really need help!
Wednesday, January 9, 2013 at 11:56pm by Rina
math 098
I made 3 f's on my tests in math 098, do you think i need to withdraw and take math 090 instead. this is my first semester and i really want to do my best please help. math 098 is algebra math 090 is
basic for college. the score were equations--52,distance problems---55, ...
Friday, July 6, 2012 at 7:09am by %%%Rena%%%
Um do u guys know what dpe ape mpe and spe means please i dont get it its like when u do 2step equations like 5x + 5=15
Wednesday, November 14, 2007 at 9:40pm by please please help
Math repost for Jordan
REALLY NEED HELP WITH MY HOMEWORK AND I WAS WANTING TO KNOW IF YOU GUYS HELP ME WITH IT SO PLEASE PLEASE PLEASE HELP ME WITH IT BECAUSE IT REALLY IS HARD YOU KNOW YEA OR NEA OR MAYBAY!
Friday, February 29, 2008 at 10:27pm by Ms. Sue
Math SOmeone please please help
Find the derivative of the function. g(u) = (5+u^2)^5(3-9u^2)^8 Could someone please explain the steps that would lead me to the answer? I'm completely stuck.
Monday, February 21, 2011 at 11:15pm by Anonymous
Math (please please please help!)
Solve for x: 2 to x+1=x+2 to 3 or 2 to x+3= x+2 to x+5 (the proportions end up being the same in the end) I keep getting to the part after you FOIL, and then I get stuck
Tuesday, April 5, 2011 at 9:58pm by Rosie
How many positive integers in the set {50, 51,, . . . , 298, 299} do not contain any even digits? If you do not mind please explain the steps along the way in detail. Thank you
Wednesday, March 28, 2012 at 12:21am by Math help please
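A brute-force count for this question (a check sketch): a number qualifies exactly when all of its decimal digits are odd.

```python
# Count numbers in {50, ..., 299} whose digits are all odd:
# 15 two-digit numbers (tens digit 5, 7 or 9) plus 25 in 100..199.
count = sum(1 for k in range(50, 300)
            if all(int(d) % 2 == 1 for d in str(k)))
print(count)  # 40
```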
Find the volume of the cylinder. Use 3.14 for (π Pi) 34 m on top and 27 m down on the right. Please Help Me I try and try and nothing please please help me !!!!! :( :(
Saturday, March 16, 2013 at 4:00pm by Selena
1) Write a translation rule that maps point D(7, –3) onto point D'(2, 5). 2) Triangle ABC has coordinates A(1, 4); B(3, –2); and C(4, 2). Find the coordinates of the image A'B'C' after a reflection
over the x-axis. Note: PLEASE ANSWER NICE AND NEAT AND CORRECT PLEASE AND THANK...
Friday, November 30, 2012 at 6:20pm by Tommy Please Answer!!
math (please please please help)
In polar form, the first is (1/sqrt(2),-pi/4) The second is (2,-pi/3)
Thursday, October 6, 2011 at 7:47pm by Steve
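A float check of the two conversions: the modulus of 1 - i evaluates to sqrt(2), about 1.414, so the "1/sqrt(2)" in the reply above looks like a slip for sqrt(2); the arguments and the second modulus check out.

```python
import cmath
import math

z = 1 - 1j
w = 1 - math.sqrt(3) * 1j
print(abs(z), cmath.phase(z))  # 1.4142... (= sqrt(2)), -0.7853... (= -pi/4)
print(abs(w), cmath.phase(w))  # ~2.0, -1.0471... (= -pi/3)
```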
What is the horizontal asymptote? I think it doesn't have vertical and oblique asymptote. Please please help me please
Sunday, November 13, 2011 at 5:26am by Meron
6th grade math (please please answer!)
Please see my answer below.
Friday, September 14, 2012 at 6:35pm by Ms. Sue
Math PLEASE HELP ME!!
Micheal recives a 9.8% raise. He currently earns $1,789.46 per month. Estimate the amount by which his monthly earnings will increase. Help please I do not understand this problem!! :) SHOW ALL WORK
PLEASE & THANK YOU!! :)
Wednesday, March 7, 2012 at 8:18pm by Madelyn Santiago
Math ms sue please please please help!!!!!!
And 4?
Sunday, November 10, 2013 at 6:59pm by Math help
math ms sue please help, PLEASE!!!!
oh. do you know anyone else who can help me? if so, please get them to help me!!!
Friday, January 31, 2014 at 7:03pm by math help
Algebra 1
I need some help please. I dont understand the math that we are doing. Please explains to me in detail how to do radical expressions and like how to simplify them and stuff. I dont get it and it
would be much appreciated
Wednesday, January 7, 2009 at 4:47pm by Can sombody please help me
Is this a trick question? I need to write the number 79 in short word form, would it be seventy 10's and one 9 or 10 sevens and 9 ones? Please, please, please help me Thank You
Wednesday, September 4, 2013 at 7:29pm by Sandy
Harriet needs to ship a small vase. The box she will use has a volume of 216 cubic inches. If the side lengths are all the same, what is the length of each side of the box? Hint: V = S^3 PLEASE
ANSWER NICE AND NEAT AND CORRECT PLEASE AND THANK YOU!! =) :-) AND EXPLAIN IT GOOD...
Friday, November 30, 2012 at 6:13pm by Sammy, !!!Please Answer!!
math ms sue please help, PLEASE!!!!
thnx kuai:)
Wednesday, November 13, 2013 at 6:15pm by math help
math ms sue please help, PLEASE!!!!
oh... 1500 thanks so much!!!!!!!!!!!!!!
Thursday, December 5, 2013 at 4:14pm by math help
math ms sue please help, PLEASE!!!!
wow of course! thnx
Wednesday, December 11, 2013 at 5:34pm by math help
Consider the function f(x) = 3/(x^2 - 25). a) Determine any restrictions on x. b) State the domain and range. c) State equation(s) for the asymptote(s). d) Determine any x- and y-intercepts. e) Sketch a graph of the function. f) Describe the behaviour of the function as x ...
Wednesday, November 27, 2013 at 1:01pm by karthi
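A quick numeric look at this function (an editorial sketch): the denominator of f(x) = 3/(x^2 - 25) vanishes at x = 5 and x = -5, which gives the restrictions and the vertical asymptotes, while f tends to 0 as |x| grows, giving the horizontal asymptote y = 0.

```python
def f(x):
    # f blows up with opposite signs on the two sides of x = 5,
    # and shrinks toward 0 for large |x|.
    return 3 / (x ** 2 - 25)

print(f(4.999), f(5.001))  # large values of opposite sign near x = 5
print(f(1000))             # ~0, consistent with y = 0 as horizontal asymptote
```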
Math (PLEASE ANSWER!)
Hi everyone! May anyone please tell me how to tell how many zeroe's are required in your answer when moving the decimal points in converting metric units? Thank you very much! Please help soon, if
possible, as I have a big test coming up! For example, 5.17 kl= 51,700 Liters. ...
Sunday, October 17, 2010 at 5:12pm by Anonymous
PLEASE, PLEASE HELP ME with this problem.
Tuesday, April 23, 2013 at 6:51pm by Karen PLEASE HELP!
SOMEONE DO THIS FOR ME PLEASE.....PLEASE DO THIS FOR ME PLEASE.....
Monday, January 6, 2014 at 10:54am by kavi
Write the decimal 0.079 as a percent. Please Answer Correct Please :(
Friday, November 30, 2012 at 2:44pm by !!PLEASE ANSWER!!!
Math Algebra
Find the Values of X 12p^2+11p-22=0 I'm having some trouble with this math question can somebody please explain it. Please I would like to know how to do it since I have a Benchmark Tomorrow. -thank
Tuesday, March 11, 2008 at 11:06pm by Jenny
Math(Please Check!)
1. Which line is the flattest (or is less steep)? Why? a.y = 5x -6 b.y = 5x + 6 c.y = x - 3 d.y = 1/2x + 3 I think It is C.It has the smallest slope. Someone Please Help me! ...Help me... Doesn't d
have the smallest slope? 0.5x. Please...I just want to see.. Margie -- You need...
Tuesday, February 6, 2007 at 7:31pm by Margie
sin(θ)=opposite/hypotenuse I don't have the picture of the triangle in front of me. Double check if b corresponds to the side opposite to θ and c correspond to the hypotenuse side.
Monday, December 13, 2010 at 4:08pm by TutorCat
Find sin(θ), cos(θ), tan(θ). Assume a = 40, b = 9, and c = 41. (Do not use mixed numbers in your answers.) Okay for some reason I keep getting this problem wrong. For sin(θ) I put 9/41, but it's
wrong. Could someone please tell what I did wrong.
Monday, December 13, 2010 at 4:08pm by Anonymous
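A quick check for this one (the original figure is not shown here, so which side is opposite the angle is an assumption): 9-40-41 is a right triangle, and if the side of length a = 40 is the one opposite the angle, the sine is 40/41 rather than 9/41.

```python
# Verify the right triangle and list the two candidate sine values.
a, b, c = 40, 9, 41
print(a ** 2 + b ** 2 == c ** 2)  # True: 9-40-41 is a Pythagorean triple
print(a / c, b / c)               # 0.9756... and 0.2195..., the two candidate sines
```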
Bio Help
somebody please help me. I have no clue what the answers are. please please please PLEASE help me!!!
Friday, December 7, 2012 at 3:07pm by BaileyBubble
math please help!!!!!!!!!
Express the fractions 1/2, 3/16, and 7/8 with an LCD. A. 1/4, 3/4, 7/4 B. 1/32, 3/32, 7/32 C. 4/8, 6/8, 14/8 D. 8/16, 3/16, 14/16 I think its b, but im really bad at math. can someone please help me
out? I have to finish this by 12:15 so I can be on time to volunteer at the ...
Wednesday, September 25, 2013 at 12:09pm by corie
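A short check of this one (an editorial sketch): the least common denominator of 2, 16 and 8 is 16, which matches choice D.

```python
from math import lcm

# Rewrite 1/2, 3/16 and 7/8 over their least common denominator.
fracs = [(1, 2), (3, 16), (7, 8)]
denom = lcm(*(d for _, d in fracs))
converted = [(n * denom // d, denom) for n, d in fracs]
print(denom, converted)  # 16 [(8, 16), (3, 16), (14, 16)]
```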
Hi i need one more example of this algebraic expression please. and explain please. Fig# Value 1 0 2 1 3 8 4 27 Math-Nick
Sunday, November 22, 2009 at 1:27pm by Nick
To sam!
Was the five part unknown question for compounds A through E yours? If so, did you get it answered ok? no it was not mine! can you please check my math? can you please check my math?
Monday, December 4, 2006 at 1:26pm by DrBob222
Math SOMEONE PLEASE PLEASE PLEASE HELP
Find sin(θ), cos(θ), tan(θ). Assume a = 40, b = 9, and c = 41. (Do not use mixed numbers in your answers.)
Monday, December 13, 2010 at 3:20pm by Mary-ann
8th grade math
Hi i need one more example of this algebraic expression please. and explain please. Fig# Value 1 0 2 1 3 8 4 27 Math-Nick
Sunday, November 22, 2009 at 1:28pm by Nick
math ms sue please help, PLEASE!!!!
also, did the boys make reasonable prediction based on their own probabilities? explain did they do something wrong with their calculations? explain.
Friday, January 31, 2014 at 7:03pm by math help
You have 33 dogs and 4 kennels. Divide the dogs up evenly so the same number of dogs is in each kennel. I was told to show work and that I could not count part of a dog. Please help. Thank you.
Wednesday, September 14, 2011 at 7:17pm by PLEASE HELP
math ergent please please...
Explain how to find the equation of the vertical asymptotes of a reciprocal function in full details. * Steve told me to look up in the web, and did, but still i couldn't find the correct one.
someone help please....
Monday, December 2, 2013 at 12:54pm by Thaya
this is a math riddle that i can't do so please help i am a fraction eguivalent to 2/4 the sum of my numerator and my denominator is 21 what fraction am i here it is help me please's
Tuesday, November 29, 2011 at 3:15pm by 5 grade
Math repost for jet
Please be patient and wait for a tutor with these math skills to answer you. It's Saturday afternoon, and apparently they aren't at their computers right now. Please also know that posting a topic of
"help" with umpteen exclamation marks won't get you any help any faster. ...
Saturday, September 29, 2007 at 2:19pm by Writeacher
please please please please please help me mrs.sue!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! please
Wednesday, May 16, 2012 at 5:38pm by Lee
To "bad@math" (and other names)
Please post under one name. Please do not post to demand that someone go back and answer your previous questions. A math volunteer will get to them when he/she can. Thanks.
Thursday, September 13, 2007 at 6:32pm by Writeacher
Math Symbols
PLEASE LIST ALL 7TH GRADE MATH SYMBOLS!!! PLEASE!!! I NEED THIS ASAP !!! :) THANK YOU!!!!
Wednesday, October 12, 2011 at 8:04pm by Laruen
math(please help)
mr.wilson charges 215 dollars a month to board horses. if he boards a total of 25 horses,how much money will he earn? please show your work. my answer is 5,375 am i right if not please tell me why
and how.
Sunday, August 26, 2012 at 10:53am by lidia
math please help
Inverse Trigonometric Functions please explain this to me in mathematical steps please how to solve sin^(-1) (-1/2) this equals - pi/6 i know that sin^(-1) domain and range switch from original sin
but i don't know how to apply that... i need an mathematical explanation or if ...
Thursday, April 14, 2011 at 8:59pm by Amy~
excuse me? can you please help me with my math problems please? Thank You.
Wednesday, January 27, 2010 at 11:01am by Kim
Solutions should be submitted to
Dr. Valeria Pandelieva
641 Kirkwood Avenue
Ottawa, ON K1Z 5X5
no later than May 31, 2001, and no sooner than May 21, 2001.
73. Solve the equation:
$\left(\sqrt{2+\sqrt{2}}\right)^{x}+\left(\sqrt{2-\sqrt{2}}\right)^{x}=2^{x}.$
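A numeric sanity check (not a proof) suggests x = 2 is the unique solution: dividing both sides by 2^x leaves a strictly decreasing function of x, and it equals 1 at x = 2 since (2 + sqrt(2))/4 + (2 - sqrt(2))/4 = 1.

```python
import math

def f(x):
    # Left side divided by 2^x; both bases lie in (0, 1), so f is
    # strictly decreasing and f(x) = 1 has at most one root.
    b1 = math.sqrt(2 + math.sqrt(2)) / 2
    b2 = math.sqrt(2 - math.sqrt(2)) / 2
    return b1 ** x + b2 ** x

print(f(2))             # ~1.0: x = 2 solves the equation
print(f(1) > 1 > f(3))  # True: consistent with a single crossing
```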
74. Prove that among any group of $n+2$ natural numbers, there can be found two numbers so that their sum or their difference is divisible by $2n$.
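An exhaustive check of small cases (a verification sketch, not a proof; the usual proof is a pigeonhole argument on residue classes mod 2n):

```python
from itertools import combinations, combinations_with_replacement

def holds(n):
    # Only residues mod 2n matter: 2n | (a - b) iff a, b share a residue,
    # and 2n | (a + b) iff their residues sum to 0 mod 2n.
    m = 2 * n
    return all(
        any((a - b) % m == 0 or (a + b) % m == 0
            for a, b in combinations(residues, 2))
        for residues in combinations_with_replacement(range(m), n + 2)
    )

print(all(holds(n) for n in range(1, 5)))  # True for n = 1..4

# n + 1 numbers are not enough: for n = 3 the residues 0, 1, 2, 3 give
# no pair with sum or difference divisible by 6.
bad = (0, 1, 2, 3)
print(any((a + b) % 6 == 0 or (a - b) % 6 == 0
          for a, b in combinations(bad, 2)))  # False
```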
75. Three consecutive natural numbers, larger than 3, represent the lengths of the sides of a triangle. The area of the triangle is also a natural number.
(a) Prove that one of the altitudes ``cuts'' the triangle into two triangles, whose side lengths are natural numbers.
(b) The altitude identified in (a) divides the side which is perpendicular to it into two segments. Find the difference between the lengths of these segments.
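A small search (an editorial sketch) locates examples: (13, 14, 15) with area 84 and (51, 52, 53) with area 1170. In both, the altitude to the middle side splits it into integer segments whose lengths differ by 4, consistent with a constant answer to part (b).

```python
import math

def integer_area_triples(limit):
    # Heron search over consecutive sides (a - 1, a, a + 1), all sides
    # larger than 3, keeping triples whose area is a whole number.
    out = []
    for a in range(5, limit):
        s = 3 * a / 2  # semiperimeter
        area = math.sqrt(s * (s - a + 1) * (s - a) * (s - a - 1))
        if abs(area - round(area)) < 1e-6:
            out.append((a - 1, a, a + 1, round(area)))
    return out

print(integer_area_triples(60))  # [(13, 14, 15, 84), (51, 52, 53, 1170)]
```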
76. Solve the system of equations:
$\log x+\frac{\log\left(xy^{8}\right)}{\log^{2}x+\log^{2}y}=2 ,$
$\log y+\frac{\log\left(x^{8}/y\right)}{\log^{2}x+\log^{2}y}=0 .$
(The logarithms are taken to base 10.)
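A numeric verification (not a derivation) of two candidate solutions, (x, y) = (1000, 0.01) and (0.1, 100). One route to them: with a = log x, b = log y and z = a + bi, the system combines into z + (1 + 8i)/z = 2, whose roots are 3 - 2i and -1 + 2i.

```python
from math import log10

def residuals(x, y):
    # How far (x, y) is from satisfying each equation of the system.
    a, b = log10(x), log10(y)
    d = a * a + b * b
    return (a + log10(x * y ** 8) / d - 2,
            b + log10(x ** 8 / y) / d)

for x, y in [(1000, 0.01), (0.1, 100)]:
    print(x, y, residuals(x, y))  # both residuals ~ 0
```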
77. $n$ points are chosen from the circumference or the interior of a regular hexagon with sides of unit length, so that the distance between any two of them is less than $\sqrt{2}$. What is the
largest natural number $n$ for which this is possible?
78. A truck travelled from town $A$ to town $B$ over several days. During the first day, it covered $1/n$ of the total distance, where $n$ is a natural number. During the second day, it travelled $1/
m$ of the remaining distance, where $m$ is a natural number. During the third day, it travelled $1/n$ of the distance remaining after the second day, and during the fourth day, $1/m$ of the
distance remaining after the third day. Find the values of $m$ and $n$ if it is known that, by the end of the fourth day, the truck had travelled $3/4$ of the distance between $A$ and $B$.
(Without loss of generality, assume that $m<n$.) | {"url":"http://cms.math.ca/Competitions/MOCP/2001/prob_apr.mml","timestamp":"2014-04-21T02:05:32Z","content_type":null,"content_length":"24946","record_id":"<urn:uuid:54351f24-2d94-4379-a645-2c0acf407a6f>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00089-ip-10-147-4-33.ec2.internal.warc.gz"} |
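A brute-force sketch for this one (a bounded search; the algebra reduces the condition to (n - 2)(m - 2) = 2, which forces m = 3, n = 4):

```python
from fractions import Fraction

def solutions(bound=50):
    # After the four days the remaining fraction of the route is
    # (1 - 1/n)^2 * (1 - 1/m)^2; travelling 3/4 means 1/4 remains.
    out = []
    for m in range(2, bound):
        for n in range(m + 1, bound):  # assume m < n, as stated
            remaining = (1 - Fraction(1, n)) ** 2 * (1 - Fraction(1, m)) ** 2
            if remaining == Fraction(1, 4):
                out.append((m, n))
    return out

print(solutions())  # [(3, 4)]
```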
On Cartesian Products of Orthogonal Double Covers
International Journal of Mathematics and Mathematical Sciences
Volume 2013 (2013), Article ID 265136, 4 pages
Research Article
On Cartesian Products of Orthogonal Double Covers
Department of Physics and Engineering Mathematics, Faculty of Electronic Engineering, Menoufiya University, Menouf 32952, Egypt
Received 9 December 2012; Revised 11 February 2013; Accepted 18 March 2013
Academic Editor: Ilya M. Spitkovsky
Copyright © 2013 R. El Shanawany et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in
any medium, provided the original work is properly cited.
Let be a graph on vertices and a collection of subgraphs of , one for each vertex, where is an orthogonal double cover (ODC) of if every edge of occurs in exactly two members of and any two members
share an edge whenever the corresponding vertices are adjacent in and share no edges whenever the corresponding vertices are nonadjacent in . In this paper, we are concerned with the Cartesian
product of symmetric starter vectors of orthogonal double covers of the complete bipartite graphs and use this method to construct ODCs by new disjoint unions of complete bipartite graphs.
1. Introduction
For the definition of an orthogonal double cover (ODC) of the complete graph by a graph and for a survey on this topic, see [1]. In [2], this concept has been generalized to ODCs of any graph by a
graph .
While in principle any regular graph is worth considering (e.g., the remarkable case of hypercubes has been investigated in [2]), the choice of is quite natural, and also in view of a technical
motivation, ODCs of such graphs are a helpful tool for constructing ODCs of (see [3, page 48]).
In this paper, we assume G = K_{n,n}, the complete bipartite graph with partition sets of size n each.
An ODC of is a collection of subgraphs (called pages) of such that(i)every edge of is in exactly one page of and in exactly one page of ;(ii)for and ; and for all .
If all the pages are isomorphic to a given graph , then is said to be an ODC of by .
Denote the vertices of the partition sets of by and . The length of an edge of is defined to be the difference , where . Note that sums and differences are calculated in (i.e., sums and differences
are calculated modulo ).
Throughout the paper we make use of the usual notation: for the complete bipartite graph with partition sets of sizes and , for the path on vertices, for the cycle on vertices, for the complete graph
on vertices, for an isolated vertex, for the disjoint union of and , and for disjoint copies of .
An algebraic construction of ODCs via “symmetric starters” (see Section 2) has been exploited to get a complete classification of ODCs of by for , a few exceptions apart, all graphs are found this
way (see [3, Table ]). This method has been applied in [3, 4] to detect some infinite classes of graphs for which there are ODCs of by .
In [5], Scapellato et al. studied the ODCs of Cayley graphs and they proved the following. (i) All -regular Cayley graphs, except , have ODCs by . (ii) All -regular Cayley graphs on Abelian groups,
except , have ODCs by . (iii) All -regular Cayley graphs on Abelian groups, except and the -prism (Cartesian product of and ), have ODCs by .
Much research on this subject focused on the detection of ODCs with pages isomorphic to a given graph . For a summary of results on ODCs, see [1, 4]. The other terminologies not defined here can be
found in [6].
2. Symmetric Starters
All graphs here are finite, simple, and undirected. Let be an (additive) abelian group of order . The vertices of will be labeled by the elements of . Namely, for we will write for the corresponding
vertex and define if and only if , for all and . If there is no chance of confusion, will be written instead of for the edge between the vertices .
Let be a spanning subgraph of and let . Then the graph with is called the a-translate of . The length of an edge is defined by .
is called a half starter with respect to if and the lengths of all edges in are mutually distinct; that is, . The following three results were established in [3].
Theorem 1. If is a half starter, then the union of all translates of forms an edge decomposition of ; that is, .
Hereafter, a half starter will be represented by the vector , where and is the unique vertex that belongs to the unique edge of length in .
Two half starter vectors and are said to be orthogonal if .
Theorem 2. If two half starter vectors and are orthogonal, then with is an ODC of .
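As an illustration of Theorems 1 and 2, the following Python sketch builds the translates of a half starter from its vector representation and checks the decomposition and orthogonality conditions directly. The notation here is an assumption reconstructed from the half-starter literature: the vector v = (v_0, ..., v_{n-1}) over Z_n encodes one edge (v_l, v_l + l) of each length l, and the a-translate adds a to both endpoints.

```python
from itertools import product

def pages(v):
    # All n translates of the half starter encoded by v; an edge is a
    # pair (first-partition vertex, second-partition vertex) in Z_n x Z_n.
    n = len(v)
    return [frozenset(((x + a) % n, (x + l + a) % n)
                      for l, x in enumerate(v))
            for a in range(n)]

def decomposes(v):
    # Theorem 1: the translates form an edge decomposition of K_{n,n}.
    n = len(v)
    all_edges = sorted(e for page in pages(v) for e in page)
    return all_edges == sorted(product(range(n), repeat=2))

def orthogonal(v, u):
    # Theorem 2 hypothesis: every page of v meets every page of u in
    # exactly one edge.
    return all(len(p & q) == 1 for p in pages(v) for q in pages(u))

v = (0, 0, 0, 0, 0)   # translates are stars K_{1,5}
u = (0, 1, 2, 3, 4)   # a second candidate vector over Z_5
print(decomposes(v), decomposes(u))  # True True
print(orthogonal(v, u))              # True: together they give an ODC
print(orthogonal(v, v))              # False
```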
The subgraph of with is called the symmetric graph of . Note that if is a half starter, then is also a half starter.
A half starter is called a symmetric starter with respect to if and are orthogonal.
Theorem 3. Let be a positive integer and let be a half starter represented by the vector . Then is symmetric starter if and only if .
The above results on ODCs of graphs motivated us to consider ODCs of if we have the ODCs of by and ODCs of by where are symmetric starters. In this paper, we have settled the existence problem of
ODCs of by few infinite families of graphs presented in the next section.
3. The Main Results
In the following, if there is no danger of ambiguity, if we can write as .
Theorem 4. The Cartesian product of any two symmetric starter vectors is a symmetric starter vector with respect to the Cartesian product of the corresponding groups.
Proof. Let be a symmetric starter vector of an ODC of by with respect to , then Let be a symmetric starter vector of an ODC of by with respect to , then
Then where and .
From (1) and (2), we conclude
Then is a symmetric starter vector of an ODC of , with respect to , by a new graph which can be described as follows.
Since and , then . It should be noted that is not the usual Cartesian product of the graphs and that has been studied widely in the literature.
All our results based on the following two major points:(1)the cartesian product construction in Theorem 4,(2)The existence of symmetric starters for a few classes of graphs that can be used as
ingredients for cartesian product construction to obtain new symmetric starters. These are as follows.(1) which is a symmetric starter of an ODC of whose vector is , see Corollary in [7].(2) which is
a symmetric starter of an ODC of whose vector is , see [7, Lemma ].(3) which is a symmetric starter of an ODC of whose vector is , and it is easily checked that , and hence .(4) which is a symmetric
starter of an ODC of whose vector is , for this vector, and it is easily checked that and hence .(5) which is a symmetric starter of an ODC of whose vector is , see [4, Theorem ].
These known symmetric starters will be used as ingredients for the cartesian product construction to obtain new symmetric starters.
Theorem 5. For all positive integers with , there exists an ODC of by .
Proof. Since and are symmetric starter vectors, then is a symmetric starter vector with respect to (Theorem 4). The resulting symmetric starter graph has the following edges set:
Lemma 6. For any positive integer with , there exists an ODC of by .
Proof. Since and are symmetric starter vectors, then is a symmetric starter vector with respect to (Theorem 4), and the resulting symmetric starter graph has the following edges set:
Lemma 7. For any positive integer with , there exists an ODC of by .
Proof. Since and are symmetric starter vectors, then is a symmetric starter vector with respect to (Theorem 4), and the resulting symmetric starter graph has the following edges set:
The following conjecture generalizes Lemmas 6 and 7.
Conjecture 8. For all positive integers with and , there exists an ODC of by .
Theorem 9. For all positive integers , there exists an ODC of by .
Proof. Since and are symmetric starter vectors, then is a symmetric starter vector with respect to (Theorem 4), and the resulting symmetric starter graph has the following edges set:
Theorem 10. For all positive integers , there exists an ODC of by .
Proof. Since and are symmetric starter vectors, then is a symmetric starter vector with respect to (Theorem 4), and the resulting symmetric starter graph has the following edges set:
Theorem 11. For all positive integers with , there exists an ODC of by .
Proof. Since and are symmetric starter vectors, then is a symmetric starter vector with respect to (Theorem 4), and the resulting symmetric starter graph has the following edges set:
Theorem 12. For all positive integers , there exists an ODC of by .
Proof. Since and are symmetric starter vectors, then is a symmetric starter vector with respect to (Theorem 4), and the resulting symmetric starter graph has the following edges set:
Lemma 13. For any positive integer , there exists an ODC of by .
Proof. Since and are symmetric starter vectors, then is a symmetric starter vector with respect to (Theorem 4), and the resulting symmetric starter graph has the following edges set:
4. Conclusion
In conclusion, the known symmetric starters are used as ingredients for the cartesian product construction to obtain new symmetric starters which are , and .
1. H.-D. O. F. Gronau, M. Grüttmüller, S. Hartmann, U. Leck, and V. Leck, “On orthogonal double covers of graphs,” Designs, Codes and Cryptography, vol. 27, no. 1-2, pp. 49–91, 2002.
2. S. Hartmann and U. Schumacher, “Orthogonal double covers of general graphs,” Discrete Applied Mathematics, vol. 138, no. 1-2, pp. 107–116, 2004.
3. R. El-Shanawany, H.-D. O. F. Gronau, and M. Grüttmüller, “Orthogonal double covers of ${K}_{n,n}$ by small graphs,” Discrete Applied Mathematics, vol. 138, no. 1-2, pp. 47–63, 2004.
4. M. Higazy, A study of the suborthogonal double covers of complete bipartite graphs [Ph.D. thesis], Menoufiya University, 2009.
5. R. Scapellato, R. El-Shanawany, and M. Higazy, “Orthogonal double covers of Cayley graphs,” Discrete Applied Mathematics, vol. 157, no. 14, pp. 3111–3118, 2009.
6. R. Balakrishnan and K. Ranganathan, A Textbook of Graph Theory, Universitext, chapter 1, Springer, New York, NY, USA, 2nd edition, 2012.
7. R. El-Shanawany, Orthogonal double covers of complete bipartite graphs [Ph.D. thesis], Universität Rostock, 2002.
alokkhg @ PaGaLGuY
Guys, can someone please tell me where the centre in Bangalore is?
where is the centre in Bangalore?
Last two digits of 817^673
58 users have answered this question.
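A direct computation for this one (an editorial check), using Python's three-argument pow for modular exponentiation:

```python
# Last two digits of 817^673 are 817^673 mod 100.
print(pow(817, 673, 100))  # 37
```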
An atmosphere that ________ the value and growth of individuals as well as the organization is important, and _________between individuals and the organization must be a team effort.
(a) exudes, discussion (b) encompasses, agglomerate (c) promotes, collaboration (d) assesses, links (e) stimulate...
170 users have answered this question.
There are 4 oranges, 5 apricots and 6 alphonsos in a fruit basket. In how many ways can a person make a selection of fruits from among the fruits in the basket? a. 209 b. 210 c. 120 d. 119
79 users have answered this question.
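A brute-force count (assuming fruits of the same kind are identical, as such questions usually intend) agrees with option (a):

```python
from itertools import product

# Each selection is fixed by how many of each kind it takes: 0..4 oranges,
# 0..5 apricots, 0..6 alphonsos, excluding the empty selection.
selections = [c for c in product(range(5), range(6), range(7)) if any(c)]
print(len(selections))  # 209 = (4 + 1)(5 + 1)(6 + 1) - 1
```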
A cube is painted in such a way that three mutually adjacent faces are painted brown, two faces are painted black and the remaining face is painted blue. The cube is now cut into 343 smaller but
identical cubes.
How many of the smaller cubes are painted on exactly two faces and have exactly tw...
74 users have answered this question.
The ratio of the HM of two numbers to their GM is 12 : 13.
Find the ratio of the numbers.
53 users have answered this question. | {"url":"http://www.pagalguy.com/u/alokkhg","timestamp":"2014-04-16T13:07:28Z","content_type":null,"content_length":"129341","record_id":"<urn:uuid:7077e797-6c08-4353-bb89-1ed02b82b9f2>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00351-ip-10-147-4-33.ec2.internal.warc.gz"} |
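A short search (an editorial sketch) confirms the standard answer to the HM : GM question: with t = sqrt(a/b), HM/GM = 2/(t + 1/t) = 12/13 gives 6t^2 - 13t + 6 = 0, so t = 3/2 or 2/3 and the numbers are in ratio 9 : 4 (equivalently 4 : 9).

```python
def matches(limit=40):
    # Keep integer pairs a < b with HM : GM = 12 : 13. The comparison
    # 13 * 2ab/(a + b) == 12 * sqrt(ab) is squared to stay in integers:
    # (26ab)^2 == (12(a + b))^2 * ab.
    return [(a, b) for a in range(1, limit) for b in range(a + 1, limit)
            if (26 * a * b) ** 2 == (12 * (a + b)) ** 2 * a * b]

print(matches())  # [(4, 9), (8, 18), (12, 27), (16, 36)], all in ratio 4 : 9
```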
NCERT Solutions for Class 6th Maths: Chapter 14 – Practical Geometry
National Council of Educational Research and Training (NCERT) Book Solutions for Class 6th Subject: Maths Chapter: Chapter 14 – Practical Geometry
Class 6th Maths Chapter 14 Practical Geometry NCERT Solution is given below.
Sandy Springs, GA Statistics Tutor
Find a Sandy Springs, GA Statistics Tutor
...Subsequently became the Assistant for TA development for the entire campus. Tutored fellow MBA students in Accounting. Awarded Mason Gold Standard Award for contributing to the academic
achievement of my peers.
28 Subjects: including statistics, calculus, GRE, physics
...I had an overall GPA of 3.75 throughout 6 years of college, and my math GPA was 4.0. I also worked as a math tutor to other college students. More importantly, I know how to make learning fun
and easy.
29 Subjects: including statistics, reading, English, GED
...I possess a degree in Mechanical Engineering. Several courses in my curriculum entail the use of AutoCAD. Additionally, I worked for a laboratory, Center for Biophotonics Science and
Technology, in which I used an AutoCAD-like program in my work.
25 Subjects: including statistics, chemistry, calculus, physics
I have a Ph.D. in sociology and am proficient in SPSS. Not only can I help with data analysis and writing syntax, but also I can explain the logic behind the statistical analyses. With strong
background in mathematics, I can explain the concepts easily.
9 Subjects: including statistics, Japanese, algebra 1, precalculus
I have a wide array of experience working with and teaching kids grades K-10. I have tutored students in Spanish, Biology, and Mathematics in varying households. I have instructed religious
school for 5 years with different age groups, so I am accustomed to working in multiple settings with a lot of material and different student skill.
16 Subjects: including statistics, Spanish, chemistry, calculus
Riverdale Pk, MD Algebra 2 Tutor
Find a Riverdale Pk, MD Algebra 2 Tutor
...There are a lot of similarities but also a lot of differences, particularly when it comes to pronunciation. I find mathematics to be a fascinating and fun subject. I have an MS in Engineering.
10 Subjects: including algebra 2, Spanish, calculus, geometry
...I teach basic through advanced mathematics and sciences. I am a research chemist by profession and hold a PhD in Physical Chemistry with a BS in both Mathematics and Chemistry. I was a
teaching assistant in graduate school, teaching primarily Chemistry.
14 Subjects: including algebra 2, chemistry, physics, algebra 1
...I have developed fun activities for students to actually have fun while they are learning. I look forward to helping your child become a huge success. In fact, several of my past students have
earned As and Bs on their exams and major tests after scoring Ds, Es, and Fs.
18 Subjects: including algebra 2, reading, writing, calculus
...You may not use math a lot in your day to day life, but solving mathematical problems will increase your logical thinking and reasoning. I have experience teaching students with various
backgrounds in math. If you are already smart in math, I can give you further guidance.
12 Subjects: including algebra 2, calculus, prealgebra, precalculus
...I'm a very mature senior, but I can also relate to the people I tutor, which will make for a better tutoring experience. I've had experiences with bad tutoring, which is why I will always ask
for honest feedback so the tutoring sessions will be more helpful. However, transportation might be an ...
7 Subjects: including algebra 2, reading, calculus, algebra 1
A predicate is a single conditional expression which evaluates to either true, false or unknown. Predicates are used in constructing search conditions, see Search Conditions.
Predicate Syntax
The general predicate syntax is shown below:
Each individual predicate construction is explained in more detail in the following sections.
The Basic Predicate
A basic predicate compares a value with one and only one other value, and has the syntax:
The comparison operators, comp-operator, are described in Comparison and Relational Operators.
The expressions on either side of the comparison operator must have compatible data types, see Comparisons.
Within the context of a basic predicate, a select-specification must result in either an empty set or a single value.
The result of the predicate is unknown if either of the expressions used evaluates to NULL, or if the select-specification used results in an empty set.
The Quantified Predicate
A quantified predicate compares an expression with a set of values addressed by a subselect (as opposed to a basic predicate which compares two single-valued expressions).
The form of the quantified expression is:
The comparison operators, comp-operator, are described in Comparison and Relational Operators.
Within the context of a quantified predicate, a select-specification must result in either an empty set or a set of single values.
ALL Predicate
The result is true if the select-specification results in an empty set or if the comparison is true for every value returned by the select-specification.
The result is false if the comparison is false for at least one value returned by the select-specification.
The result is unknown if any of the values returned by the select-specification is NULL and no value is false.
ANY or SOME Predicates
The keywords ANY and SOME are equivalent.
The result is true if the comparison is true for at least one value returned by the select-specification.
The result is false if the select-specification results in an empty set or if the comparison is false for every value returned by the select-specification.
The result is unknown if any of the values returned by the select-specification is NULL and no value is true.
Quantified predicates may always be replaced by alternative formulations using EXISTS, which can often clarify the meaning of the predicates.
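As an illustration of this equivalence (the table and column names below are hypothetical, and the two forms match only when no NULL values are involved):

```sql
-- Hypothetical tables HOTEL(NAME, PRICE) and LONDON_HOTEL(NAME, PRICE).
-- Hotels cheaper than every hotel in LONDON_HOTEL:
SELECT NAME FROM HOTEL
WHERE PRICE < ALL (SELECT PRICE FROM LONDON_HOTEL);

-- The same condition reformulated with NOT EXISTS:
SELECT NAME FROM HOTEL H
WHERE NOT EXISTS (SELECT * FROM LONDON_HOTEL L
                  WHERE L.PRICE <= H.PRICE);
```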
The IN Predicate
The IN predicate tests whether a value is contained in a set of discrete values and has the form:
If the set of values on the right hand side of the comparison is given as an explicit list, an IN predicate may always be expressed in terms of a series of basic predicates linked by one of the
logical operators AND or OR:
│ IN predicate │ Equivalent basic predicates │
│ x IN (a,b,c) │ x = a OR x = b OR x = c │
│ x NOT IN (a,b,c) │ x <> a AND x <> b AND x <> c │
If the set of values is given as a select-specification, an IN predicate is equivalent to a quantified predicate:
│ IN predicate │ Equivalent quantified predicates │
│ x IN (subselect) │ x = ANY (subselect) │
│ x NOT IN (subselect) │ x <> ALL (subselect) │
The result of the IN predicate is unknown if the equivalent predicates give an unknown result.
The BETWEEN Predicate
A BETWEEN predicate tests whether or not a value is within a range of values (including the given limits).
The BETWEEN predicate can always be expressed in terms of two basic predicates.
│ Between predicate │ Equivalent basic predicates │
│ x BETWEEN a AND b │ x >= a AND x <= b │
│ x NOT BETWEEN a AND b │ x < a OR x > b │
All expressions in the predicate must have compatible data types.
The result of the predicate is unknown if the equivalent basic predicates give an unknown result.
The LIKE Predicate
The LIKE predicate compares the value in a string expression with a character string pattern which may contain wildcard characters (meta-characters).
The string-value on the left hand side of the LIKE operator must be a string expression.
The character-pattern on the right hand side of the LIKE operator is a string expression that can be specified as a string literal or by using a host variable.
The character-value must be a string expression of length 1. To search for the escape character itself it must appear twice in immediate succession.
The following meta-characters (wildcards) may be used in the character-pattern:
_ stands for any single character
% stands for any sequence of zero or more characters.
Note: Wildcard characters are only used as such in LIKE predicates. In any other context, the characters _ and % have their exact values.
Escape Characters
The optional escape character is used to allow matching of the special characters _ and %. When the escape character prefixes _ and %, they are interpreted without any special meaning.
An escape character used in a pattern string may only be followed by another escape character or one of the wildcard characters, unless it is itself escaped (i.e. preceded by an escape character).
│ LIKE predicate │ Matches │
│ LIKE '%A%' │ any string containing an uppercase A │
│ LIKE '%A\%\\' ESCAPE '\' │ any string ending with A%\ │
│ LIKE '_ABC' │ any 4-character string ending in ABC │
A LIKE predicate where the pattern string does not contain any wildcard characters is essentially equivalent to a basic predicate using the = operator.
The comparison strings in the LIKE predicate are not conceptually padded with blanks, in contrast to the basic comparison.
'artist ' LIKE 'artist ' is true
'artist ' LIKE 'artist%' is true
'artist ' LIKE 'artist' is false
The NULL Predicate
The NULL predicate is used to test if the specified expression is the NULL value, and has the form:
If the predicate specifies expression IS NULL, then the result is true if any operand in the expression is NULL.
The result is false if no operand in the expression is NULL.
The result of the NULL predicate is never unknown.
The use of composite expressions in NULL predicates provides a shorthand for testing whether any of the operands is NULL.
Thus the predicate A+B IS NULL is an alternative to A IS NULL OR B IS NULL, provided that A+B does not result in overflow.
Note: The actual arithmetical operator(s) used in numerical expressions in NULL predicates is irrelevant since all arithmetical operations involving a NULL value evaluate to the NULL value.
The NULL predicate is the only way to test for the presence of the NULL value in a column, since all other predicates where at least one of the operands is NULL evaluate to unknown.
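The shorthand above can be sketched as follows (the table and column names are hypothetical):

```sql
-- Hypothetical table T(A, B), with numeric columns.
-- The two queries select the same rows, provided A + B cannot overflow:
SELECT * FROM T WHERE A + B IS NULL;
SELECT * FROM T WHERE A IS NULL OR B IS NULL;
```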
The EXISTS Predicate
The EXISTS predicate tests whether the set of values addressed by a select-specification is empty or not, and has the form:
The result of the predicate is true if the select-specification does not result in an empty set. Otherwise the result of the predicate is false. A set containing only NULL values is not empty.
The result is never unknown.
The EXISTS predicate is the only predicate which does not compare a value with one or more other values. The columns selected in the select-specification of an EXISTS predicate are irrelevant.
Most commonly, the SELECT * shorthand is used.
The EXISTS predicate may be negated in the construction of search conditions. Observe however that NOT EXISTS predicates must be handled with care, particularly if empty result sets arise in the
selection condition.
Consider the four following examples, and note particularly that the last example is true if all guests have undefined names:
EXISTS (SELECT * FROM BOOK_GUEST
WHERE GUEST = 'DATE')
requires that at least one guest is called DATE
NOT EXISTS (SELECT * FROM BOOK_GUEST
WHERE GUEST = 'DATE')
requires that no guest may be called DATE
EXISTS (SELECT * FROM BOOK_GUEST
WHERE NOT GUEST = 'DATE')
requires that at least one guest is not called DATE
NOT EXISTS (SELECT * FROM BOOK_GUEST
WHERE NOT GUEST = 'DATE')
requires that no guest may not be called DATE, i.e. every guest must be called DATE (or be NULL).
The OVERLAPS Predicate
The OVERLAPS predicate tests whether two 'events' cover a common point in time or not, and has the form:
Each of the two events specified on either side of the OVERLAPS keyword is a period of time between two specified points on the time-line. The two points can be specified as a pair of datetime
values or as one datetime value and an INTERVAL offset.
Each event is defined by two expressions constituting a row value expression having two columns.
The first column in each row value expression must be a DATE, TIME or TIMESTAMP and the value in the first column of the first event must be comparable, see Datetime Assignment Rules, to the
value in the first column of the second event.
The second column in each row value expression may be either a DATE, TIME or TIMESTAMP that is comparable with the value in the first column or an INTERVAL with a precision that allows it to be
added to the value in the first column.
The value in the first column of each row value expression defines one of the points on the time-line for the event.
If the value in the second column of the row value expression is a datetime, it defines the other point on the time-line for the event.
If the value in the second column of the row value expression is an INTERVAL, the other point on the time-line for the event is defined by adding the values in the two columns of the row value
expression together.
The NULL value is assumed to be a point that is infinitely late in time.
Either of the two points may be the earlier point in time.
If the value in the first column of the row value expression is the NULL value, then this is assumed to be the later point in time.
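As a sketch (the table and column names are hypothetical), an OVERLAPS predicate testing a booking period against a fixed week can be written with either two datetimes or a datetime plus an INTERVAL offset:

```sql
-- Hypothetical table BOOKING(GUEST, ARRIVAL, DEPARTURE), with DATE columns.
-- Bookings overlapping the first week of July:
SELECT GUEST FROM BOOKING
WHERE (ARRIVAL, DEPARTURE)
      OVERLAPS (DATE '2010-07-01', DATE '2010-07-08');

-- The same period given as a start point plus an INTERVAL offset:
SELECT GUEST FROM BOOKING
WHERE (ARRIVAL, DEPARTURE)
      OVERLAPS (DATE '2010-07-01', INTERVAL '7' DAY);
```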
Standard Compliance
This section summarizes standard compliance concerning predicates.
│ Standard │ Compliance │ Comments │
│ X/Open-95 │ EXTENDED │ Support for an IN predicate with only one element is a Mimer SQL extension. │
│ SQL-92 │ │ │ | {"url":"http://developer.mimer.com/documentation/html_91/Mimer_SQL_Mobile_DocSet/express_pred5.html","timestamp":"2014-04-18T15:40:32Z","content_type":null,"content_length":"36169","record_id":"<urn:uuid:14a32f93-2faa-4082-a90f-2ae946bdd18d>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00577-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Vector Library (VL) provides a set of vector and matrix classes, as well as a number of functions for performing arithmetic with them. Equation-like syntax is supported via C++ class operators,
for example:
#include "VLfd.h"
Vec3f v(1.0, 2.0, 3.0);
Mat3d m(1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0);
v = 2 * v + m * v;
v *= (m / 3.0) * norm(v);
cout << v << endl;
Both generic (arbitrarily-sized), and fixed-size (2, 3 and 4 element) vectors and matrices are supported. The latter are provided for the efficient manipulation of vectors or points in 2D or 3D
space, and make heavy use of inlining for frequently-used operations. (One of the design goals of VL was to ensure that it was as fast as the C-language, macro-based libraries it was written to replace.)
Vectors and matrices can be composed of either floats or doubles; the element type is indicated by the suffix. It is possible to mix (for example) matrices of doubles with vectors of floats, as in
the example above. It is also possible to instantiate VL for other element types with their own suffixes (e.g., complex numbers).
VL also contains classes for sparse vector/matrices, sub-vector/matrices, and implementations of some iterative solvers.
VL is free for commercial and non-commercial use; see the LICENSE file for redistribution conditions. (These apply only to the source code; binaries may be freely redistributed, no strings attached.)
VL requires C++. It is known to compile under g++, MSVC++ 5.0, Irix CC, and Metrowerks C++ (macintosh). The latest version can be retrieved from http://www.cs.cmu.edu/~ajw/public/dist/. This
documentation can be found online at http://www.cs.cmu.edu/~ajw/doc/vl.html.
VL contains the following types and classes:
Vec2[fd] 2-vector
Vec3[fd] 3-vector
Vec4[fd] 4-vector
Mat2[fd] 2 x 2 matrix
Mat3[fd] 3 x 3 matrix
Mat4[fd] 4 x 4 matrix
Vec[fd] n-vector
Mat[fd] n x m matrix
SparseVec[fd] n-vector optimised for sparse storage
SparseMat[fd] n x m matrix optimised for sparse storage
SubVec[fd] n-vector which is a subset of another vector
SubMat[fd] n x m matrix which is a subset of another matrix
SubSVec[fd] the same for sparse vectors & matrices
The elements of a vector or matrix are accessed with standard C array notation:
v[2] = 4.0; // set element 2 of the vector
m[3][4] = 5.0; // set row 3, column 4 of the matrix
m[2] = v; // set row 2 of the matrix
For the resizeable vector types, the current size can be obtained from the Elts() method for vectors, and the Rows() and Cols() methods for matrices. To iterate over all members of these types, you
can use code of the form:
for (i = 0; i < v.Elts(); i++)
v[i] = i;
for (i = 0; i < m.Rows(); i++)
for (j = 0; j < m.Cols(); j++)
m[i][j] = i + j;
Though it seems slightly unintuitive, if you have a pointer to a vector or matrix, you must dereference it first before indexing it:
(*vPtr)[20] = 3.0;
If you need a pointer to the data belonging to a vector or matrix, use the Ref() method. (Matrices are stored by row.)
Float *vecDataPtr = v.Ref(), *matDataPtr = m.Ref();
Warning: Any pointer to the data of a generic matrix or vector will become invalid as soon as it is resized.
Note: If you compile with -DVL_CHECKING, index range checks will be performed on all element accesses.
Arithmetic Operators
The following binary operators are defined for all vector and matrix classes, as long as both operands are of the same type.
Basic arithmetic: + - * /
Accumulation arithmetic: += -= *= /=
Comparison: ==, !=
Vector multiplication and division is pairwise: (a * b)[i] = a[i] * b[i]. (See below for how to form the dot product of two vectors with dot().) Matrix multiplication is defined as usual, and matrix
division is undefined.
For both matrices and vectors, multiplication and division by a scalar is also allowed. Matrices can be multiplied either on the left or the right by a vector. In the expression m * v, v is treated
as a column vector; in the expression v * m, it is treated as a row vector.
Vector Functions
The following is a list of the various vector functions, together with a short description of what they return.
Float dot(const Vec[fd] &a, const Vecf &b); // inner product of a and b
Float len(const Vecf &v); // length of v: || v ||
Float sqrlen(const VecNf &v); // length of v, squared
VecNf norm(const VecNf &v); // v / || v ||
Vec2f cross(const Vec2f &a); // vector orthogonal to a
Vec3f cross(const Vec3f &a, const Vec3f &b); // vector orthogonal to a and b
Vec4f cross(const Vec4f &a, const Vec4f &b, const Vec4f &c);
// vector orthogonal to a, b and c
Vec2f proj(const Vec3f &v); // homog. projection: v[0..1] / v[2]
Vec3f proj(const Vec4f &v); // homog. projection: v[0..2] / v[3]
In the above, VecN is either a Vec or a Vec[234], and all functions have corresponding Double/VecNd versions. For more on the use of the proj() operator, see Transformations.
Matrix Functions
The following functions can be used with matrices.
MatNf trans(const MatNf &m); // Transpose of m
Float trace(const MatNf &m); // Trace of m
MatNf adj(const MatNf &m); // Adjoint of m
Float det(const MatNf &m); // Determinant of m
MatNf inv(const MatNf &m); // Inverse of m, if it exists.
Here MatN is any matrix type, though the det() and adj() functions are only defined for Mat[234][fd].
There are a number of 'magic' constants in VL that can be used to initialise vectors or matrices with simple assignment statements. For example:
Vec3f v; Mat3f m; Vecf v8(8);
v = vl_0 [0, 0, 0]
v = vl_y [0, 1, 0]
v = vl_1 [1, 1, 1]
m = vl_0; 3 x 3 matrix, all elts. set to zero.
m = vl_1; 3 x 3 identity matrix
m = vl_B; 3 x 3 matrix, all elts. set to one.
v8 = vl_axis(6); [0, 0, 0, 0, 0, 0, 1, 0]
Below is a summary of the constants defined by VL.
vl_one/vl_1/vl_I vector of all 1s, or identity matrix
vl_zero/vl_0/vl_Z vector or matrix of all 0s
vl_B matrix of all 1s
vl_x, vl_y, vl_z, vl_w x, y, z and w axis vectors
vl_axis(n) zero vector with element n set to 1
vl_pi pi!
vl_halfPi pi/2
In general, a vector or matrix constructor should be given either one of the initialiser constants listed above, or a list of values for its elements. If neither of these is supplied, the variable
will be uninitialised. The first arguments to the constructor of a generic vector or matrix should always be the required size. Thus matrices and vectors are declared as follows:
Vec[fd][234] v([initialisation_constant | element_list]);
Vec[fd] v([elements, [initialisation_constant | element_list]]);
Mat[fd][234] m([initialisation_constant | element_list]);
Mat[fd] m([rows, columns, [initialisation_constant | element_list]]);
If generic vectors or matrices are not given a size when first created, they are regarded as empty, with no associated storage. This state persists until they are assigned a matrix/vector or the
result of some computation, at which point they take on the dimensions of that result.
Vec3f v(vl_1);
Vec3f v(1.0, 2.0, 3.0);
Vecf v(6, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0);
Vecf v(20, vl_axis(10));
Mat2f m(1.0, 2.0, 3.0, 4.0);
Matf m(10, 20, vl_I);
Warning: When initialising a generic vector or matrix with a list of elements, you must always ensure there is no possibility of the element being mistaken for an integer. (This is due to limitations
of the stdarg package.) Make sure that each element value has either an exponent or a decimal point, i.e., use '2.0' rather than just '2'.
Finally, to set the size of a empty matrix or vector explicitly, or resize an existing matrix or vector, use the SetSize method:
m.SetSize(10, 20);
All of the vector and matrix types in VL can be used in iostream-type expressions. For example:
#include <iostream.h>
Vec3d v(vl_1);
cout << v << 2 * v << endl;
cin >> v;
will output
[1 1 1][2 2 2]
and then prompt for input. Vectors and matrices are parsed in the same format that they are output: vectors are delimited by square brackets, elements separated by white space, and matrices consist
of a series of row vectors, again delimited by square brackets.
The following are the transformations supported by VL.
Mat2f Rot2f(Double theta)
// rotate a 2d vector CCW by theta
Mat2f Scale2f(const Vec2f &s)
// scale by s around the origin
Mat3f HRot3f(Double theta)
// rotate a homogeneous 2d vector CCW by theta
Mat3f HScale3f(const Vec2f &s)
// scale by s around the origin, in homogeneous 2d coords.
Mat3f HTrans3f(const Vec2f &t)
// translate a homogeneous 2d vector by t
Mat3f Rot3f(const Vec3f &axis, Double theta)
// rotate a 3d vector CCW around axis by theta
Mat3f Rot3f(const Vec4f &q)
// rotate a 3d vector by the quaternion q
Mat3f Scale3f(const Vec3f &s)
// scale by s around the origin
Mat4f HRot4f(const Vec3f &axis, Double theta)
// rotate a homogeneous 3d vector CCW around axis by theta
Mat4f HRot4f(const Vec4f &q)
// rotate a homogeneous 3d vector by the quaternion q
Mat4f HScale4f(const Vec3f &s)
// scale by s around the origin, in homogeneous 3d coords
Mat4f HTrans4f(const Vec3f &t)
// translate a homogeneous 3d vector by t
All transformations have corresponding Double versions with a 'd' suffix. Transformations with a prefix of 'H' operate in the homogeneous coordinate system, which allows translation and shear
transformations, as well as the usual rotation and scale. In this coordinate system an n-vector is embedded in an (n+1)-dimensional space, e.g., a homogeneous point in 2d is represented by a 3-vector.
To convert from non-homogeneous to homogeneous vectors, make the extra coordinate (usually 1) the second argument in a constructor of/cast to the next-higher dimension vector. To project from a
homogeneous vector down to a non-homogeneous one (performing a homogeneous divide in the process), use the proj() function. This process can be simplified by the use of the xform() function, which
applies a transform to a vector or composes two transformations, performing homogeneous/nonhomogeneous conversions as necessary. For example:
Vec3d x,y;
// apply homogeneous transformations to a 3-vector, assuming column vectors
x = proj(Scale4d(...) * Trans4d(...) * Vec4d(y, 1.0));
// do the same thing with xform()
x = xform(xform(Scale4d(...), Trans4d(...)), y);
By default, VL assumes that transformations should operate on column vectors (v = T * v), though it can be compiled to assume row vectors instead (v = v * T). Using the xform functions is a good way
of isolating yourself from this assumption.
VL contains both a sparse vector type, which stores only the non-zero elements of the vector, and a sparse matrix type, whose rows are sparse vectors. Sparse vectors can be efficiently combined with
other sparse vectors or normal vectors using the standard vector operations:
SparseMatf sm;
SparseVecf sv1, sv2, sv3;
Vecf v;
v = sm * v;
sm[0] += v;
sv1 = sv2 + sv3;
Sparse vectors are typically initialised by giving a list of index, element pairs:
SparseVecf sv; // unsized sparse vector
SparseVecf sv(20); // zero vector of length 20.
SparseVecf sv(5, 1, 1.0, 4, 4.0, VL_SV_END);
// the vector [0.0, 1.0, 0.0, 0.0, 4.0]
The standard vector initialisers can also be used.
Once you have your sparse vector, it can be changed in the following ways:
* Re-initialise with the SetElts() method:
sv.SetElts(1, 1.0, 4, 4.0, VL_SV_END);
// sets sv to [0.0, 1.0, 0.0, 0.0, 4.0]
* Use the SVIter[fd] iterator, which lets you iterate over the elements of a sparse vector, and access them using the methods:
j.Data() : returns the current element's data
j.Index() : returns the current element's index
Note: It is highly recommended that you use the SVIter[fd] class to manipulate sparse vectors, as it is written to be as efficient as possible, even performing a binary search for elements when
necessary. The iterator class can be used in a number of ways:
* Use Begin(), Inc(), AtEnd() to iterate over the non-zero elements of the vector:
SVIterf j;
// sv = sv * 2
for (j.Begin(sv); !j.AtEnd(); j.Inc())
j.Data() *= 2.0;
* Use one of the following methods:
Inc(Int i) moves on to elt i, where i will increase by 1 on each call
IncTo(Int i) moves on to elt i, where i will increase monotonically
within another for-loop to access the elements of the sparse vector corresponding to i. For example:
// v = v + sv
for (j.Begin(sv), i = 0; i < v.Elts(); i++)
{
j.IncTo(i);
if (j.Exists())
v[i] += j.Data();
}
// a += dot(sv1, sv2)
for (j.Begin(sv2), k.Begin(sv1); !k.AtEnd(); k.Inc())
{
j.IncTo(k.Index()); // find corresponding elt in sv2
if (j.Exists())
a += j.Data() * k.Data();
}
* Use the Overlay() method: a.Overlay(b) performs a[i] = b[i] for all non-zero b[i].
* Direct access: call Begin(), add new element pairs in order with AddElt() or AddNZElt(), then call End(). (Use AddNZElt() if you know for certain the added element will be non-zero.) For example:
// set sv to [0.0, 1.0, 0.0, 0.0, 4.0]
sv.Begin();
sv.AddNZElt(1, 1.0);
sv.AddNZElt(4, 4.0);
sv.End();
* As a last resort, use the Get() and Set() methods. These calls are not efficient for multiple accesses, but will at least perform a binary search to locate the requested element quickly.
sv1.Set(10, sv2.Get(20)); // sv1[10] = sv2[20]
Note: The best way to write code for sparse vectors and matrices is to use the SVIter[fd] class, and recast code to use the efficient vector operations where possible.
Sparse Fuzziness
The SparseVec class has a tolerance level for elements to be considered zero, referred to as the fuzz. (If |x| < fuzz, it is treated as zero.) This can be set with the method SparseVec[fd]::SetFuzz
(fuzz). The default value of fuzz is 1e-10.
A convenient way to test if an element is zero according to the current fuzz setting is to use the SparseVec[fd]::IsNonZero(elt) method.
VL provides the following functions for accessing subregions of vectors and matrices:
Vec[fd] sub(const Vec[fd] &v, Int start, Int length);
Vec[fd] first(const Vec[fd] &v, Int length);
Vec[fd] last(const Vec[fd] &v, Int length);
SubMat[fd] sub(const Mat[fd] &m, Int top, Int left, Int height, Int width = 1);
SubMat[fd] sub(const Mat[fd] &m, Int rows, Int cols);
SubVec[fd] col(const Mat[fd] &m, Int i);
SubVec[fd] row(const Mat[fd] &m, Int i);
SubVec[fd] diag(const Mat[fd] &m, Int diagNum);
The utility of these functions is best illustrated with some examples:
u = sub(v, 2, 4); // return the 4 elements of v starting at element 2.
u = first(v, 2); // return the first 2 elements of v.
u = last(v, 2); // return the last 2 elements of v.
v = m[i]; // extract row i of m
v = col(m, i); // extract column i of m
v = row(m, i); // extract row i of m
v = diag(m); // extract main diagonal of m
v = diag(m, i) // extract diagonal starting on column i
v = diag(m, -i) // extract diagonal starting on row i
n = sub(m, 2, 3); // returns the upper-left 2 rows and three columns of m.
n = sub(m, i, j, 2, 3); // returns the 2 rows and 3 columns of m starting at
// row i, column j.
Warning: remember that indexing is 0-based in VL, so row 2 above refers to the third row from the top of the matrix, and so on.
The subvector and submatrix types returned by the sub(), col() and diag() functions can, in addition to being assigned to normal vectors as above, also be assigned to:
diag(m) = diag(m) * 2.0; // multiply diagonal elements of m by 2
sub(m, 2, 2) = sub(m, 2, 2) + Matf(2, 2, vl_1);
// add 1 to each of the upper-left 2 x 2
// elements of m.
Warning: The standard in-place operations are not defined on submatrix regions, so the following will not work:
diag(m) *= 2.0;
sub(m, 2, 2) += Matf(2, 2, vl_1);
Solvers
VL comes with a number of solvers: routines for finding the solution of the linear system of equations Ax = b. Currently these include SolveOverRelax(), which uses the overrelaxation form of Gauss-Seidel iteration, and SolveConjGrad(), which uses the conjugate gradient algorithm. Conjugate gradient is asymptotically faster than Gauss-Seidel, but it assumes that A is both positive definite and symmetric. If A is not symmetric, the routine instead solves the system 0.5(A + A^T)x = b.
The solvers are defined as follows:
Double SolveOverRelax(const [Sparse]Matd &A, Vec[fd] &x, const Vec[fd] &b,
Double epsilon, Double omega = 1.0, Int *steps = 0);
Double SolveConjGrad(const [Sparse]Matd &A, Vec[fd] &x, const Vec[fd] &b,
Double epsilon, Int *steps = 0);
Each iteration of a solver modifies the current approximate solution x. You must set x to an initial guess before first calling the solver routine; a good starting value is often just b.
The solvers return the squared residual of the linear system, ||Ax - b||^2, which is a measure of the error in the solution.
The epsilon parameter controls the accuracy of the solution: the solver will return as soon as its estimate of the squared residual drops below epsilon.
For SolveOverRelax, omega controls the amount of overrelaxation. A value of one gives pure Gauss-Seidel iteration. Values higher than that cause the solver to overshoot towards the estimated solution
on each iteration. If the system is smooth and well behaved, this can lead to faster convergence times. Generally, setting omega somewhere between 1 and 2 results in the fastest convergence rate, but
the exact value is system-specific.
If you want, you can supply a maximum number of iterations to perform via steps. In this case, the routines will set steps to the actual number of iterations performed when they return. This can be useful if you wish to interleave steps of the solver with some other activity.
// solve Ax = b from an initial guess of x = b
x = b;
SolveOverRelax(A, x, b, 1e-6);
// perform one iteration of the conjugate gradient solver
Int steps = 1;
error = SolveConjGrad(A, x, b, 0, &steps);
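The loop the solvers run can be sketched in a few lines of Python (an illustration of overrelaxed Gauss-Seidel with the epsilon/omega semantics described above, not VL's actual implementation):

```python
def solve_over_relax(A, x, b, epsilon, omega=1.0, max_steps=1000):
    """Gauss-Seidel iteration with overrelaxation factor omega.

    x holds the initial guess and is updated in place; the squared
    residual ||Ax - b||^2 is returned, as with VL's solvers.
    """
    n = len(b)
    residual = float("inf")
    for _ in range(max_steps):
        for i in range(n):
            # Plain Gauss-Seidel value for x[i], using the freshest x values.
            sigma = sum(A[i][j] * x[j] for j in range(n) if j != i)
            gs = (b[i] - sigma) / A[i][i]
            # omega = 1 is pure Gauss-Seidel; omega > 1 overshoots toward gs.
            x[i] += omega * (gs - x[i])
        residual = sum((sum(A[i][j] * x[j] for j in range(n)) - b[i]) ** 2
                       for i in range(n))
        if residual < epsilon:
            break
    return residual

# Solve Ax = b from an initial guess of x = b, as suggested above.
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = list(b)
r = solve_over_relax(A, x, b, 1e-12, omega=1.2)
```

The exact solution here is x = (1/11, 7/11); on this small, diagonally dominant system an omega between 1 and 2 converges in a handful of sweeps.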
Matrix Factorization
VL contains two routines for factoring matrices: the QR factorization and the SVD (singular value decomposition). The former factors any given matrix A into two matrices Q and R, such that A = QR. The Q matrix is orthogonal, and the R matrix is upper-triangular.
The SVD decomposes an m x n matrix A into three matrices, A = UDV^T, where:
• U is m x n, and orthogonal.
• D is n x n, and is diagonal and positive semi-definite; its elements are the square roots of the eigenvalues of A^TA.
• V is n x n, and orthogonal.
The SVD has the following interesting properties:
• The SVD says that we can view any matrix transformation as: rotate the vector from source space, scale about the axes, and rotate the vector again into the destination space.
• The matrix's condition number is the ratio of the largest entry of D to the smallest.
• U and V^T are column-orthonormal.
• The smallest least-squares solution of Ax = b is x = V F U^T b, where F is D^-1 with the zero entries on the diagonal of D kept as zero rather than inverted.
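The last property is easiest to see on a matrix whose SVD is trivial. A toy Python sketch of our own (U = V = I, so A is just D and x = V F U^T b reduces to F b):

```python
def pseudo_inverse_diag(d, tol=1e-12):
    """Invert the diagonal of D, keeping (near-)zero entries as zero
    rather than replacing them with their (infinite) inverse."""
    return [0.0 if abs(s) < tol else 1.0 / s for s in d]

# A = U D V^T with U = V = I and D = diag(2, 0), i.e. A = [[2, 0], [0, 0]].
d = [2.0, 0.0]
f = pseudo_inverse_diag(d)              # F = diag(0.5, 0)
b = [4.0, 3.0]
x = [f[i] * b[i] for i in range(2)]     # x = V F U^T b reduces to F b here
```

x comes out as (2, 0): the unreachable second component of b is ignored by least squares, and keeping F's second entry at zero (instead of 1/0) gives the smallest-norm solution.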
The factoring routines are defined as follows:
#include "Factor.h"
Void QRFactorization(Matd &A, Matd &Q, Matd &R);
Void SVDFactorization(Matd &A, Matd &U, Matd &V, Vecd &diagonal);
Both routines destroy the input matrix, A. Currently, it is required that A have the same or more rows than columns. If your matrix has more columns than rows, add enough zero rows to the bottom of
it to make it square.
Linking
For basic use, the only header file needed is VL.h.
For your final build, link with -lvl (libvl.a). To use the debugging version of VL, which has assertions and range checking turned on, use -lvl.dbg (libvl.dbg.a), and add -DVL_CHECKING to your compile flags. This debugging version includes checks for correct matrix and vector sizes during arithmetic operations.
Compile options
VL uses the following compile-time options:
VL_CHECKING - turn on index checking and assertions
VL_ROW_ORIENT - transformations operate on row vectors instead of column vectors
OpenGL Support
VL comes with a header file, VLgl.h, which makes using VL vectors with OpenGL more convenient; it lets calls such as glVertex() accept VL vector types directly. For example:
#include "VLgl.h"
Vec3f x(24, 0, 100), y(40, 20, 10);
glVertex(x);
glVertex(y);
Please forward bug reports, comments, or suggestions to:
Andrew Willmott (ajw+vl@cs.cmu.edu), Graphics Group, SCS, CMU.
Chen Formula
The Chen formula is a system for scoring different starting hands in Texas Hold'em. It was created by Bill Chen for use in the book Hold'em Excellence by Lou Krieger. Bill Chen is also the guy that
wrote The Mathematics of Poker.
The process looks a little tricky at first, but it's really quite straightforward and logical after you have worked through a handful of examples.
The Chen formula.
1. Score your highest card only. Do not add any points for your lower card.
□ A = 10 points.
□ K = 8 points.
□ Q = 7 points.
□ J = 6 points.
□ 10 to 2 = 1/2 of card value. (e.g. a 6 would be worth 3 points)
2. For pocket pairs, multiply the score of one card by 2. The minimum score for a pair is 5.
□ (e.g. KK = 16 points, 77 = 7 points, 22 = 5 points)
3. Add 2 points if cards are suited.
4. Subtract points if there is a gap between the two cards.
□ No gap = -0 points.
□ 1 card gap = -1 points.
□ 2 card gap = -2 points.
□ 3 card gap = -4 points.
□ 4 card gap or more = -5 points. (Aces are high this step, so hands like A2, A3 etc. have a 4+ gap.)
5. Add 1 point if there is a 0 or 1 card gap and both cards are lower than a Q. (e.g. JT, 75, 32; this bonus point does not apply to pocket pairs)
6. Round half point scores up. (e.g. 7.5 rounds up to 8)
For step 5, it's easier to refer to this extra 1 point as a "straight bonus" to save confusion between steps 4 and 5. Subtracting 1 point for 1 gap and then adding it back again for lower cards seems
a bit awkward I know, but that's the way it works.
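The six steps translate directly into a small scoring function. A Python sketch (the function and table names are ours, not an official tool):

```python
import math

VALUE = {'2': 2, '3': 3, '4': 4, '5': 5, '6': 6, '7': 7, '8': 8,
         '9': 9, 'T': 10, 'J': 11, 'Q': 12, 'K': 13, 'A': 14}

def chen_score(card1, card2, suited=False):
    hi, lo = sorted((VALUE[card1], VALUE[card2]), reverse=True)
    # Step 1: score the highest card only.
    score = {14: 10.0, 13: 8.0, 12: 7.0, 11: 6.0}.get(hi, hi / 2.0)
    if hi == lo:
        # Step 2: pairs are doubled, with a minimum pair score of 5.
        return math.ceil(max(2 * score, 5))
    if suited:
        score += 2                                    # Step 3
    gap = hi - lo - 1
    score -= {0: 0, 1: 1, 2: 2, 3: 4}.get(gap, 5)    # Step 4
    if gap <= 1 and hi < VALUE['Q']:
        score += 1                                    # Step 5: straight bonus
    return math.ceil(score)                           # Step 6: round halves up
```

math.ceil rounds 5.5 up to 6 and -1.5 up to -1, which matches the "round half point scores up" rule in step 6.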
Chen Formula calculator.
Chen formula hand example scores.
• A-K suited
□ A = +10 points.
□ Suited = +2 points.
□ Final score = 12 points.
• T-T (pocket tens)
□ T = 10 x 1/2 = +5 points.
□ Pair = multiply by 2.
□ Final score = 10 points.
• 7-5 suited
□ 7 = 7 x 1/2 = +3.5 points.
□ Suited = +2 points.
□ 1 card gap = -1 point.
□ 0 - 1 card gap, both cards under Q = +1 point.
□ Final score = 6 points. (5.5 points rounded up)
• 7-2 offsuit
□ 7 = 7 x 1/2 = +3.5 points.
□ 4+ card gap = -5 points.
□ Final score = -1 point. (-1.5 points rounded up)
• A-A (pocket aces)
□ A = +10 points.
□ Pair = multiply by 2.
□ Final score = 20 points.
So now we know how to work out how many points different hands are worth, what can we do with the numbers to help us with starting hand selection?
Have you checked out my videos section yet? There are a bunch of free strategy vids there for NLHE cash games.
Using Chen formula hand points.
The main reason behind using the Chen formula for different starting hands was so that you can categorize them based on the Sklansky and Malmuth hand groups table.
That's all well and good for helping you to compare the strength of different starting hands in Hold'em, but it doesn't really do much in the way of strategy for starting hand selection. Therefore, I have done a little bit of work and created a starting hand strategy using the Chen formula.
Chen formula starting hand strategy.
• Only ever consider calling a raise with 10 points or more.
• Always raise or reraise with 12 points or more.
Short-handed strategy. (6 players)
Early position.
• Raise = 9 points or more.
• Fold = 8 points or less.
Mid position.
• Raise = 8 points or more.
• Fold = 7 points or less.
Late position.
• Raise = 7 points or more.
• Fold = 6 points or less.
Full-ring strategy. (10 players)
Early position.
• Raise = 10 points or more.
• Fold = 9 points or less.
Mid position.
• Raise = 9 points or more.
• Fold = 8 points or less.
Late position.
• Raise = 7 points or more.
• Fold = 6 points or less.
"Raise" = Raise if there have been no raises or calls before you.
"Fold" = Fold regardless if there has been a raise before you or not. Just fold.
About my Chen formula starting hand strategy.
As with any set of rules or guidelines in poker, this Chen formula starting hand strategy isn't perfect and will have its flaws. However, I like to think that this is an easy-to-use and solid preflop strategy using the Chen formula.
Most of the strategy involves either raising or folding preflop, which is a solid approach to take as a new player and a style that you will grow accustomed to as your game progresses. The starting
hand requirements are also a little tight, but that's only to be expected if you're using a guide and you haven't quite found your feet when it comes to starting hand selection yet.
I took inspiration from the Chen formula article at SimplyHoldem.com to create this starting hand strategy. I decided to develop my own because I believe that the guidelines at Simply Holdem were
flawed because:
1. It does not distinguish between short and full ring games.
2. Just calling the big blind is not a profitable way to play NL Hold'em for the most part.
Did you think this article was useful? Wait until you see the strategy videos at Deuces Cracked.
Chen formula evaluation.
The Chen formula is never going to be a complete substitute for proper preflop starting hand strategy. It will also take a little getting used to if you want to work hand scores out on the fly.
However, this is as good a formula as you are going to find for working out preflop starting hand strengths in NL Hold'em.
The starting hand strategy I worked out will also have its own flaws, but again this is as good as a simple guideline is going to get for those preflop decisions.
At the end of the day, if you're new to Texas Hold'em and like the idea of the Chen formula it's not a bad place to start.
Go back to the awesome Texas Hold'em Strategy.
MATH 1010 Survey of Mathematics 3 Credits
Topics include critical thinking skills, problem solving, logic, geometry, measurement, consumer math, probability and statistics. Prerequisite(s): High school algebra I and algebra II and ACT math
score of at least 19, or learning support math requirements or equivalent math placement score
MATH 1030 Introduction to College Mathematics 3 Credits
This course includes the study of quadratics and rational functions and their graphs, exponents, polynomial expressions and factoring, quadratic equations, rational expressions and equations, radical
expressions, and related applications. The TI-83 or TI-84 Plus calculator is required and used throughout the course. This course is a prerequisite to MATH 1130, 1710, and 1730 for students with MATH
ACT scores below 19. Prerequisite(s): High school algebra I and algebra II and ACT math score of at least 19, or learning support math requirements or equivalent math placement score
MATH 1130 College Algebra 3 Credits
This course is designed for students who are not in University Parallel/College Transfer programs of science, mathematics, engineering, or computer science. Topics include linear, polynomial,
rational, exponential, and logarithmic functions and their graphs and applications; linear and nonlinear regression models. Prerequisite(s): High school algebra I and algebra II and ACT math score of
at least 21, or MATH 1030 or equivalent course
MATH 1410 Numbers & Operations for Teachers 3 Credits
Topics include problem solving, numeration systems, integers, elementary number theory and rational numbers with an emphasis on mathematical understanding necessary to teach effectively. Prerequisite
(s): High school algebra I and algebra II and geometry and ACT math score of at least 19, or learning support math requirements or equivalent math placement score
MATH 1420 Geometry for Teachers 3 Credits
Topics include two- and three-dimensional geometry, congruence and similarity, constructions, transformations, area, volume, surface area and measurements, with an emphasis on mathematical
understanding necessary to teach effectively. Prerequisite(s): High school algebra I and algebra II and geometry and ACT math score of at least 19, or learning support math requirements or equivalent
math placement score
MATH 1530 Elementary Probability & Statistics 3 Credits
Topics include elementary probability theory, concepts of descriptive statistics, discrete and continuous distributions, hypothesis testing, confidence intervals, sample sizes, correlation,
regression, multinominal and contingency tables. Noncalculus-based computer applications will be investigated. Prerequisite(s): High school algebra I and algebra II and ACT math score of at least 19,
or learning support math requirements or equivalent math placement score
MATH 1630 Finite Mathematics 3 Credits
Linear functions and applications, interest, annuities, amortization, systems of linear equations, including Gauss-Jordan elimination, and matrix theory. Linear programming using graphical and
simplex methods. Prerequisite(s): High school algebra I and algebra II and precalculus and ACT math score of at least 22, or MATH 1130, or 1710
MATH 1710 Precalculus Algebra 3 Credits
Precalculus algebra for students in University Parallel/Transfer Programs of science, mathematics, engineering or computer science. This is the first of two courses in a sequence that prepares
students for Calculus I. Topics include algebraic concepts, equations, inequalities, complex numbers, maximization, and exponential and logarithmic functions. Prerequisite(s): High school algebra I
and algebra II and ACT math score of at least 22, or MATH 1030 or equivalent course
MATH 1720 Precalculus Trigonometry 3 Credits
Precalculus trigonometry for students in University Parallel/Transfer Programs of science, mathematics, engineering or computer science. This is the second of two courses in a sequence that prepares
students for Calculus I. Topics include the unit circle, right triangle trigonometry, graphs of trigonometric functions, inverse trigonometric functions, verifying trigonometric identities, solving
trigonometric equations, law of sines, law of cosines and vectors. Prerequisite(s): MATH 1710 or consent of mathematics department
MATH 1730 Precalculus 5 Credits
Precalculus for students in University Parallel/College Transfer programs of science, mathematics, engineering or computer science. This course prepares students for Calculus I. Review of algebraic,
trigonometric, logarithmic and exponential functions for students with a previous precalculus/trigonometry course. All topics in MATH 1710 and MATH 1720 will be covered in this course. MATH 1710
followed by MATH 1720 is recommended for students with an ACT math score below 22 or no previous precalculus/trigonometry course. Prerequisite(s): High school algebra I and algebra II and precalculus
/trigonometry ACT math score of at least 23, or MATH 1030, or equivalent course
MATH 1830 Basic Calculus & Modeling 4 Credits
Topics include differentiation and integration of polynomial, rational, exponential, and logarithmic functions and methods of numerical integration. Topics from business modeling, such as economic
applications and case studies, are explored with computer simulations, computer labs, or calculators. A graphing calculator is required. Prerequisite(s): High school algebra I and algebra II and
precalculus and an ACT math score of at least 23, or MATH 1130 or 1710 or 1730
MATH 1910 Calculus I 4 Credits
Single variable calculus for students majoring in science, mathematics, engineering and computer science. Limits and differentiation of polynomial, rational, trigonometric, exponential and
logarithmic functions and applications. Prerequisite(s): High school algebra I and algebra II and geometry and precalculus/ trigonometry and an ACT math score of at least 26, or MATH 1730, or MATH
1710 and 1720
MATH 1920 Calculus II 4 Credits
Integral calculus with applications. Topics include methods of integration, sequences, series, polar coordinates and differential equations. Applications include real-world problems in physics,
engineering, economics and biology. Prerequisite(s): MATH 1910
MATH 2000 Matrix Computations 1 Credit
Introduction to matrix calculations, including determinants, eigenvalues and eigenvectors. For students in engineering transfer programs. Prerequisite(s): MATH 1920
MATH 2010 Matrix Algebra 3 Credits
Topics include solutions of systems of linear equations and Euclidean vector operations. Concepts of linear independence, basis and dimension, rank, and nullity are defined and illustrated.
Additional topics include eigensystems and general linear transformations. A computer laboratory component is required. Prerequisite(s): MATH 1920
MATH 2050 Introduction to Statistics 3 Credits
Descriptive statistics, including bivariate trends, time series, concepts of probability and probability distributions, binomial and normal distributions, linear correlation and regression,
estimation and significance tests for means, contingency tables, chi-square tests for goodness of fit and independence. A computer laboratory component is included. Prerequisite(s): MATH 1830 or 1910
MATH 2110 Calculus III 4 Credits
Calculus of functions in two or more dimensions. Topics include solid analytic geometry, partial differentiation, multiple integration and selected topics in vector calculus. Prerequisite(s): MATH
MATH 2120 Differential Equations 3 Credits
A first course in differential equations emphasizing solution techniques. Includes first-order equations and applications, theory of linear equations, basic second-order equations and applications,
Laplace transforms, and series solutions. Prerequisite(s): MATH 1920
Professor, Department of Astronomy and Astrophysics
University of Chicago
Baryons in the Power Spectrum
Key Concepts
• Power spectrum shows baryons enhance every other peak.
• Second peak is suppressed compared with the first and third
• Additional effects on the peak position and damping yield consistency checks
When we do the full calculation of the power spectrum, the basic physics of a mass on a spring appears as advertised. The odd-numbered acoustic peaks in the power spectrum are enhanced in amplitude over the even-numbered ones as we increase the baryon density of the universe.
[Note: Cosmologists label the baryon density in terms of its fraction of the critical density, Ω_b, times the Hubble constant squared in units of 100 km/s/Mpc (that is, Ω_b h^2), to get something proportional to the physical density of the baryons.]
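The mass-on-a-spring picture can be made concrete with a toy oscillator: baryon loading shifts the zero point of the oscillation, so compressional (odd) extrema are enhanced while rarefaction (even) extrema are suppressed. A deliberately simplified Python sketch (an illustration of the analogy only, not a CMB computation):

```python
import math

def effective_temperature(phase, R):
    """Toy acoustic oscillation whose zero point is shifted by the
    baryon loading R (R = 0 means no baryons)."""
    return math.cos(phase) - R

def peak_heights(R, n_peaks=4):
    """|amplitude| at the first few extrema, phase = pi, 2*pi, 3*pi, ..."""
    return [abs(effective_temperature(k * math.pi, R))
            for k in range(1, n_peaks + 1)]

no_baryons = peak_heights(0.0)    # all peaks equal height
with_baryons = peak_heights(0.3)  # odd (compression) peaks enhanced
```

With R = 0.3 the heights alternate 1.3, 0.7, 1.3, 0.7: every other peak is enhanced, as in the first key concept above.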
There are two other related effects due to the baryons: since adding mass to a spring slows the oscillation down, adding baryons to the plasma decreases the frequency of the oscillations, pushing the position of the peaks to slightly higher multipoles l.
Baryons also affect the way the sound waves damp, and hence how the power spectrum falls off at high multipole moment l, or small angular scales, as we will see later.
The many ways that baryons show up in the power spectrum imply that the power spectrum has many independent checks on the baryon density of the universe. The baryon density is a quantity that the CMB
can measure to exquisite precision.
Statistical validation of megavariate effects in ASCA
Innovative extensions of (M)ANOVA are gaining common ground for the analysis of designed metabolomics experiments. ASCA is such a multivariate analysis method; it has successfully estimated effects in megavariate metabolomics data from biological experiments. However, rigorous statistical validation of megavariate effects is still problematic because megavariate extensions of the classical F-test do not exist.
A permutation approach is used to validate megavariate effects observed with ASCA. By permuting the class labels of the underlying experimental design, a distribution of no effect is calculated. If the observed effect is clearly different from this distribution, the effect is deemed significant.
The permutation approach is studied using simulated data, which gave successful results. It was then used on a real-life metabolomics data set dealing with bromobenzene-dosed rats. In this metabolomics experiment the dosage and time-interaction effects were validated; both effects are significant. Histological screening of the treated rats' livers agrees with this finding.
The suggested procedure gives approximate p-values for testing effects underlying metabolomics data sets. Therefore, performing model validation is possible using the proposed procedure.
1 Background
In life science research many measuring tools have emerged in recent years. These tools give a coarse profile of biological classes such as transcripts (transcriptomics), proteins (proteomics) and metabolites (metabolomics). This paper focuses on the field of metabolomics: the comprehensive quantitative and qualitative analysis of all small molecules of cells, body fluids, and tissues. The mix of hypothesis- and discovery-driven omics experiments creates novel biostatistical challenges, noted ever since pattern recognition was first combined with body fluid profiling in the early eighties [1]. Interpreting the multivariate metabolomics results means integrating biological knowledge with possible contributing metabolites.
Metabolomics data sets comprise hundreds of metabolites measured in typically tens of samples. Multivariate statistics on data that have fewer samples than metabolites is cumbersome. Usually there is an experimental design underlying the metabolomics data sets. The obvious technique for analyzing such data, Multivariate Analysis of Variance (MANOVA) [2], cannot deal with data that consist of more metabolites than samples.
The recent introduction of ANOVA-based extensions of multivariate data analysis methods may open new angles to analyze metabolomics data. These methods aim to analyze designed experiments with more
measured metabolites than samples. Among the new methods are ANOVA-principal component analysis (PCA), principal response curves (PRC) and ASCA [3-5]. All these methods are a combination of PCA and
ANOVA. In this paper we will provide a validation procedure for ASCA using a randomization strategy. The models of other ANOVA-based methods may also use this validation procedure.
Analysis of variance simultaneous component analysis (ANOVA-SCA, or ASCA) generalizes analysis of variance from the univariate to the multivariate case [5]. With this method it is possible to isolate the variation in the data induced by a factor varied in the experimental design. Analyzing this isolated variation with simultaneous component analysis may reveal the relation between the samples and the metabolic profile. ASCA successfully helped the quality control in an application of the metabolomics platforms NMR, GC-MS and LC-MS [6]. In an experiment with toxin-dosed animals, ASCA successfully disentangled the effects and helped to visualize the homeostatic capacity of the animals [7].
The independent factors in the experimental design translate into a mathematical model that associates the factors with the measured metabolites. It is essential to question whether an effect found in the sample reflects the effect of this specific factor in the population or whether it is merely a sampling fluctuation. This paper tries to answer that question and to provide a way to validate ASCA models. Experiments in metabolomics typically have few samples, and normality and equal variances can neither be assumed nor tested. Therefore, we propose a procedure for validating megavariate effects in ASCA without the common assumptions of normality or equal variance.
Section two defines the goal and explains some of the theory of statistical validation. That section also explains the ASCA method by defining the model constraints and the notation scheme used. Some of the essential properties, like orthogonality of effect estimates, are explained. The explanation of ASCA ends with an example of the ASCA model in SCA notation. The section that follows details how to randomize the data given the experimental setup of the study. It also details why not to use the jackknife or bootstrap, and why permutations are the way to go. A simple example details the model validation, followed by an explanation of how to randomize the data. In section three a simulated data set serves as an example to verify the validation procedure. Also in that section, an experiment with bromobenzene-dosed rats is analyzed and validated. Finally, the last section gives some closing remarks.
2 Methods and Theory
2.1 Definitions and purpose
The experiments in a metabolomics study often follow an experimental design with varying levels of treatment conditions, also known as factors [8]. Typically the observed metabolic profiles of two
different levels of one factor are not the same. This inequality of levels is due to sampling fluctuations and the effect of the varied factor.
2.2 ASCA models
This section explains the ANOVA-SCA method. The basis of ASCA is the variation-partitioning property of ANOVA, which allows estimating the effects of the factors encoded in the experimental design [5]. ASCA has some desirable properties such as orthogonality of effect estimates. Orthogonal effect estimates suit metabolomics experiment analysis well, as they allow unique isolation of effect-specific variation. Consider, for instance, the case where the treatment regime consists of metabolite data from two dosage levels and three measured time points. ASCA allows isolating the time effects independently from the dosage effects; it can isolate general aging from drug intervention effects.
The variation isolation works as in ANOVA; the preceding example of metabolite data from two dosage levels at three time points translates to a two-way ANOVA design. This design consists of two main effects, time and dosage, and a time-dosage interaction effect. The main effects and the interaction effect are all orthogonal; this enables perfect isolation of effect-specific variation.
In the following text the boldface uppercase characters represent matrices (X), vectors are in lowercase bold-italic (x) and scalars in lowercase italic (x). The experimental data is shown as X (I × J). The I rows contain the samples while the J columns in X describe the metabolite levels within the samples.
The following text assumes that X is mean centered, that is, the mean of each column in X is 0 (equation 1):
(1/I) Σ_i x_ij = 0 for j = 1, ..., J    (1)
If the matrix X_δ contains the estimates of an effect, then equation 2 defines the sum of squares (SSQ) of that effect, here shown for effect δ:
||X_δ||^2 = Σ_i Σ_j x_δ,ij^2    (2)
X_τ, X_δ and X_τδ represent the isolated variation due to time, dose, and their interaction respectively. X_e contains the individual variation that is not induced by the factors.
A general two-way ANOVA model is shown in equation 3, where τ and δ are the main effects with level indices c and d, and j is the variable index. A two-way ANOVA model of this kind is common in the metabolomics field: the variation is composed of time effects, dosage effects, interaction between time and dosage, and residuals (equations 4 and 5). Each of the effect partitions in equation 4 is orthogonal to the others (equation 6). This orthogonality allows the variation decomposition shown in equation 4 [5]. The effect estimates are not normalized.
x_cdj = τ_cj + δ_dj + (τδ)_cdj + e_cdj    (3)
Alternatively, equation 3 can be written in matrix form, shown in equation 4.
X = X_τ + X_δ + X_τδ + X_e    (4)
The variation, measured in sums of squares, can be uniquely partitioned into the effects (equation 5):
||X||^2 = ||X_τ||^2 + ||X_δ||^2 + ||X_τδ||^2 + ||X_e||^2    (5)
The orthogonality of the effects is shown in equation 6:
X_α^T X_β = 0 for α ≠ β, with α, β ∈ {τ, δ, τδ, e}    (6)
The SCA estimates the information in the partitions time, dosage and dosage-time interaction. The two-way ANOVA style ASCA model (equations 3, 4) gives the following ASCA model after SCA (equation 7), where the T matrices contain scores and the P matrices loadings of the simultaneous component analyses of the effect matrices:
X = T_τ P_τ^T + T_δ P_δ^T + T_τδ P_τδ^T + E    (7)
A more detailed review of the ASCA method properties is found elsewhere [9].
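The decomposition in equations 4-6 can be checked numerically on a toy balanced design: two time levels, two dose levels, two replicates per cell and two metabolites. A pure-Python sketch of our own (illustrative, not the authors' code):

```python
def column_means(rows):
    """Mean of each column over a list of rows."""
    n = len(rows)
    return [sum(r[j] for r in rows) / n for j in range(len(rows[0]))]

# (time level, dose level) per sample; 2 replicates per cell, 2 metabolites.
labels = [(0, 0), (0, 0), (0, 1), (0, 1), (1, 0), (1, 0), (1, 1), (1, 1)]
X = [[1.0, 2.0], [2.0, 1.0], [4.0, 0.0], [3.0, 1.0],
     [5.0, 6.0], [6.0, 5.0], [9.0, 8.0], [8.0, 7.0]]

# Mean-center each column, as in equation 1.
gm = column_means(X)
X = [[x - m for x, m in zip(row, gm)] for row in X]

def level_mean(selector, level):
    """Column means over the samples whose label matches the given level."""
    return column_means([r for r, lab in zip(X, labels) if selector(lab) == level])

Xt, Xd, Xtd, Xe = [], [], [], []
for row, (c, d) in zip(X, labels):
    t = level_mean(lambda lab: lab[0], c)        # time effect estimate
    dd = level_mean(lambda lab: lab[1], d)       # dose effect estimate
    cell = column_means([r for r, lab in zip(X, labels) if lab == (c, d)])
    td = [cell[j] - t[j] - dd[j] for j in range(2)]  # interaction estimate
    e = [row[j] - cell[j] for j in range(2)]         # residual
    Xt.append(t); Xd.append(dd); Xtd.append(td); Xe.append(e)

def ssq(M):
    return sum(v * v for r in M for v in r)

total = ssq(X)
parts = ssq(Xt) + ssq(Xd) + ssq(Xtd) + ssq(Xe)
```

For this balanced, centered design the effect matrices are mutually orthogonal, so total and parts agree exactly up to floating-point error, in line with equation 5.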
2.3 Type of resampling to use
A way to tackle the problem of validating ASCA models is by using resampling techniques: jackknife, bootstrapping and permutation tests [10]. The basic idea will be explained by a univariate analysis of two groups of equal size. Later, this will be generalized to the megavariate case. The standard way of testing the difference between group means, with the underlying null hypothesis that the population group means are not different, is with a t-test. The ANOVA F-test comes down to a t-test for the two-group case. Under the assumptions of normality and equal group variances the t-statistic is
t = (x̄_1 − x̄_2) / (s_p √(2/n))    (8)
where x̄_1 and x̄_2 are the group means, n the number of samples in each group and s_1 and s_2 are the group standard deviations [11]. The pooled standard deviation s_p can be calculated easily from s_1 and s_2 given the assumptions of normality and equal group variances. Actually, s_p √(2/n) is the standard deviation of (x̄_1 − x̄_2), showing the rationale of the t-statistic: a measure of the deviation (x̄_1 − x̄_2) in its standard-deviation units. Including the proper degrees of freedom allows for testing the null hypothesis of equal group means.
The bootstrap and jackknife work by resampling the samples in the groups, keeping the grouping structure intact, and estimating from those resamplings the group standard deviations s_1 and s_2. However, this does not directly give the wanted result, because the value needed for the t-test is the standard deviation of (x̄_1 − x̄_2). Assuming normality and equal variances, this value can be calculated from the group variances using equation 9:
sd(x̄_1 − x̄_2) = √((s_1^2 + s_2^2)/n)    (9)
This is a reasonable assumption for analytical replicates of a sample, but not directly for the biological variation across subjects. The assumption of equal group variances is questionable in this case. These assumptions cannot easily be tested given the small group sample sizes. Thus, it is not clear how to obtain a standard-deviation value for (x̄_1 − x̄_2) from the jackknifed or bootstrapped s_1 and s_2 without making extra assumptions.
Permutation tests work directly on the variability of ( − ) by randomly permuting class labels and recalculating the group-mean differences. Actually, such permutation tests go back a long way [12]
as an alternative for t-tests and are now also routinely used in gene-expression data analysis, as for instance Significance Analysis of Microarrays (SAM) [13].
The standard deviation of $(\bar{x}_1 - \bar{x}_2)$ has the squared Euclidean distance in its numerator; the denominator is constant over the permutations. In centered data the squared Euclidean distance equals the sum of squares (SSQ) of $(\bar{x}_1 - \bar{x}_2)$. Using the SSQ as effect statistic, the generalization from univariate to multivariate follows from summing the univariate SSQ's for all variables.
2.4 How to randomize the data
Randomizing or permutation is the uncoupling of the data from the group labels [14,15]. Take note that in data with a zero mean (equation 1) the random-sampling expected value is 0. Considering the
level averages, the randomization procedure tests whether the results with randomized labels are as different from zero as the original result is. The randomization, or permutation, does not change
the metabolite values for a sample, but it reassigns each sample randomly to one of the treatment groups.
2.5 Model validation example
This section gives a detailed example of how the permutation works and how it will help to validate models.
In most experimental designs it is important to assess the statistical confidence of the effect estimates. An experiment with two measurement series and three measurements in each series will serve as example for the validation. If these series are a and b, the two series comprise the levels of the effect δ, giving the model shown in equation 10. This equation holds for both vectors and matrices; shown here is the matrix form of the effect of factor δ. The null hypothesis ($H_0$) is that the sum-of-squares (SSQ) associated with the effect of factor δ is zero (equation 11). The alternative hypothesis ($H_1$) states that the SSQ of the effect of factor δ is larger than zero.
$$ X = X_\delta + X_e \qquad (10) $$
$$ H_0: \|X_\delta\|^2 = 0; \qquad H_1: \|X_\delta\|^2 > 0 \qquad (11) $$
The chosen distance measure that marks how far the group averages are apart is the squared Euclidean distance. In the hypothetical case with a known population, the factors without effect have an SSQ that is zero. Due to the small sample size, the distance between a and b will never be exactly zero, giving $\|X_\delta\|^2 > 0$. The SSQ is by its nature also a distance measure that describes how far the effect levels are from zero. In the univariate context, usually variances of the groups are analyzed. In the multivariate context the analysis focuses on SSQ's. The SSQ also conveniently describes the variation in the data.
The measured results for series a are 5, 4 and 3; for series b the results are -3, -4 and -5. These values satisfy equation 1. The average of level a is 4 and the average of level b is -4, shown in equation 12:

$$ \bar{x}_a = 4, \qquad \bar{x}_b = -4 \qquad (12) $$
Randomization is uncoupling the group labels from the data and randomly reassigning them. To show the randomization, the samples with the values ±3 switch groups. Level a now has the measured values -3, 4 and 5, while level b has 3, -4 and -5 (equation 13); $x_p$ denotes the permuted x, with level averages $\bar{x}_{p,a} = 2$ and $\bar{x}_{p,b} = -2$.
The distance between the averages is much smaller in the randomized set than between the averages of the original data (equation 14).
The distance between series a and b is much larger than any of the SSQ's after randomization: there is no permutation that gives a larger SSQ than the original grouping. The larger distance in the original model suggests a significant difference between series a and b (equation 15).
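The toy example can be checked exhaustively in a short sketch — with six samples there are only C(6,3) = 20 ways to pick the members of group a:

```python
from itertools import combinations

x = [5, 4, 3, -3, -4, -5]          # series a then series b (already zero-mean)
n = len(x)

def effect_ssq(group_a_idx):
    """SSQ of the level-average effect for a given assignment to group a."""
    a = [x[i] for i in group_a_idx]
    b = [x[i] for i in range(n) if i not in group_a_idx]
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return len(a) * ma**2 + len(b) * mb**2

original = effect_ssq((0, 1, 2))                 # 3*4^2 + 3*(-4)^2 = 96
reference = [effect_ssq(c) for c in combinations(range(n), 3)]
# p-value counts strictly larger SSQs, as defined in the text
larger = sum(s > original for s in reference)
p_value = larger / len(reference)
```

No reassignment beats the original grouping, so `larger` is 0 here, matching the claim that no permutation gives a larger SSQ.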
Randomly reassigning multivariate samples to a group works in the same way as described in the preceding paragraphs for univariate data. The randomization leaves the order of metabolites of the
sample unaffected. The SSQ, equation 2, allows for univariate and multivariate calculation of the sum of squares; thereby forming the generalization to the multivariate case.
Repeating the randomization procedure many times gives just as many SSQ values. These values define the reference distribution. When most of the randomization SSQ results are larger than the original group-assignment result, the effect SSQ is a sampling fluctuation and $H_0$ is not to be rejected. Finally, the probability value or p-value is defined to be the fraction of SSQs in the reference distribution that are larger than the original SSQ. So when 35 of the 1000 SSQs in the reference distribution are larger than the original SSQ, the probability of finding a larger-than-original SSQ value is (p-value) 35/1000 = 0.035.
A good estimate of the randomized SSQ distribution needs many randomization iterations. How many iterations are enough for a good probability estimate is difficult to establish beforehand, because that largely depends on the data. However, repeating the randomization series should give similar results; this indicates that the number of iterations is sufficient. The permutations are a random subset of all the possible permutations [15]. This approach is also known as Monte Carlo resampling.
This method validates the multivariate ANOVA partitioning, not the SCA part of ASCA. The SCA method subsequently describes most variation within each partition.
2.6 One-Way ANOVA Design
The preceding example illustrates a one-way ANOVA design with two levels (equation 12). To get a reference distribution one can simply permute the group labels. If the original SSQ is larger than most of the reference distribution, the model is considered significant; otherwise it is not. In the preceding example, series a is significantly different from series b.
2.7 Two-Way ANOVA Design
A two-way ANOVA extends the one-way ANOVA design to two factors. In metabolomics experiments, common examples of factors are time and a drug intervention.
Unlike one-way designs, two-way ANOVA designs may also include an interaction term. The interaction captures the relation between the two main factors. In a time-and-drug example, an interaction effect means the drug shows a different effect at different time points.
Each main effect needs to be validated separately, getting the reference distribution for the main effect is the same as for the one-way ANOVA. Getting the reference distribution for the interaction
term is a bit more complicated. The best option is to permute the residual samples (equation 16) [14]. Residual samples are samples that have the main effects removed.
$$ X_r = X - X_\tau - X_\delta \qquad (16) $$
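A minimal numpy sketch of this residual permutation (the balanced design and all numbers are made up; effect estimates are taken as level averages of a column-centered X):

```python
import numpy as np

rng = np.random.default_rng(0)
# toy balanced design: 2 time levels x 2 dose levels x 3 replicates
time = np.repeat([0, 0, 1, 1], 3)            # 12 samples
dose = np.tile(np.repeat([0, 1], 3), 2)
X = rng.normal(size=(12, 4))                 # 4 "metabolites"
X -= X.mean(axis=0)                          # overall centering (equation 1)

def level_effect(M, labels):
    """ANOVA effect estimate: replace each row by its level average."""
    E = np.zeros_like(M)
    for lev in np.unique(labels):
        E[labels == lev] = M[labels == lev].mean(axis=0)
    return E

X_r = X - level_effect(X, time) - level_effect(X, dose)   # equation 16

# one draw of the interaction reference distribution:
Xp = X_r[rng.permutation(len(X_r))]          # permute the residual samples
cell = time * 2 + dose                       # interaction cell labels
inter = level_effect(Xp, cell) - level_effect(Xp, time) - level_effect(Xp, dose)
ssq_int = float(np.sum(inter**2))
```

In a balanced design the residuals carry no main-effect variation, so only the interaction is probed by the permutation.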
2.8 Nested ANOVA Design
Nested ANOVA designs are extensions of the ANOVA design with another factor nested in a main effect. Some special cases need nested ANOVA models, like experiments that measure one animal at different times. The repeated measuring nests the factor time within the animal. The randomization strategy in such cases only allows for placing whole animal time series in other levels of the one-way ANOVA factor. In a nested ANOVA design, the permutable unit is the animal itself [2,14,15].
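A sketch of this block-wise randomization — shuffling group labels over whole animals while keeping each time series intact (all values made up):

```python
import random

rng = random.Random(0)
animals = {                      # animal id -> (group label, time series)
    "a1": ("control", [1.0, 1.2, 0.9]),
    "a2": ("control", [0.8, 1.1, 1.0]),
    "a3": ("treated", [2.0, 2.4, 2.2]),
    "a4": ("treated", [1.9, 2.5, 2.1]),
}
labels = [g for g, _ in animals.values()]
rng.shuffle(labels)              # permute group labels over whole animals
permuted = {aid: (lab, series)
            for (aid, (_, series)), lab in zip(animals.items(), labels)}
```

Each animal's measurements stay together; only its group membership is reassigned.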
2.9 Related Methods
A widely used method in metabolomics is principal component analysis. This method, however, does not take group structures into account, hindering the analysis of effects. Methods that are more
closely related to ASCA are SMART and PRC [4,16]. However, these methods differ on a key issue, namely orthogonality of effect estimates (equation 5). The effect estimates of SMART are not
orthogonal, as a result the here proposed validation procedure cannot be used. In PRC the effect estimates are orthogonal up to the deflation of the control condition. The proposed validation
procedure can be used in PRC as long as it is used before deflating the control effect.
2.10 Experimental environment
The ASCA algorithm was implemented in MATLAB script code, using The MathWorks MATLAB version 7.1 release 14 running on Fedora Core 3 on an Intel Corporation Pentium IV (3.0 GHz) computer.
The ASCA algorithm is in the download section of our website [17]. The validation algorithm can be found there as well.
3 Results & Discussion
This section shows results to certify the proposed validation method for synthetic data and real world experimental data.
The real world experiment deals with toxin-dosed rats [7]. Various other methods already analyzed this experiment; the results strongly suggest the toxin is affecting the animal. This experiment
serves as a real world certification of the suggested statistical validation approach in multivariate data sets.
3.1 Examples; certifying the procedures with designed data
This example study showcases two data sets. The first data set has two effect levels that are significantly different. The second data set has two effect levels that are not significantly different.
This is to test the suggested procedure in the simple case with known true statistics.
ASCA describes each effect level by the averages of the metabolites in that level. In this example study, ASCA will test if the multivariate average of the first 10 rows is different from the
multivariate average of the last 10 rows. When many randomizations give an SSQ that is equally large as the original SSQ, the groups probably do not differ. When only a minor fraction of the
randomizations give a larger group distance, the groups most likely differ.
In the model the effect δ has two levels (equation 10). In the first data set the first 10 rows are filled with ones and the last 10 rows are filled with zeros. Normally distributed white noise (N(μ = 0, σ = 1)) is added to this data. The second data set is filled with zeros and white noise (N(μ = 0, σ = 1)) is added to it.
Figures 1b &1d show the two example data sets, the rows are individual samples and the columns are the metabolites. The colored cells show each metabolite value of every sample.
Figure 1. Example study to certify the validation procedure; it consists of one significantly different and one non-significantly different data set. Figures A and C show the SSQ reference distribution found by permuting the data. If the red dot lies outside most of the reference distribution, on the right side, the group difference is significant. Figures B and D show the data from this example experiment. Careful inspection of figure B reveals that the top half differs from the bottom half: it is more yellow and red than the bottom half. Figure D lacks this property.
In the true significant example the effect δ is designed to be different. The top half of figure 1b has more red colored cells while the bottom half has more blue colored cells. Figure 1a shows the
reference distribution of randomized SSQ's, using a vertical line to show the SSQ of the original grouping.
Following the proposed validation procedure, the conclusion is clear: the halves are unlikely to be the same because all the permuted SSQ's are smaller than the original SSQ (p = 0.00012, SSQ =
57.96). Conclusion: the difference in levels is significant. This model validation used 100,000 randomization iterations taking about 5 minutes of computing time.
Repeating the validation procedure on data without a designed difference between the two dosage levels, serves as a negative control. The level averages will differ a little, but these differences
are sampling fluctuations.
Figure 1d is similar to figure 1b, but without the designed differences in the dosage levels. Figure 1d does not show an apparent difference between the top and bottom half. Figure 1c shows the reference distribution for the data set with equal level averages.
Following the proposed validation procedure, the conclusion is clear: the halves are likely to be the same because many (19.46%) of the permuted SSQ's are larger than the original SSQ (p = 0.19463,
SSQ = 15.91). Conclusion: the difference in levels is not significant. This model validation used 100,000 randomization iterations taking about 5 minutes of computing time.
To test if the proposed validation procedure rejects $H_0$ in the fraction given by the significance threshold, the model from equation 10 was used with 1000 different realisations of white noise (μ = 0, σ = 1). With a significance threshold of α = 0.05, 50 of the 1000 $H_0$'s are expected to be rejected. The number of rejections was in the expected range, given a 95% confidence interval from a binomial distribution with α = 0.05 for n = 1000.
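The expected-rejections check can be reproduced with a normal approximation to the binomial 95% interval (a sketch; the paper does not state which interval construction was used):

```python
import math

n, alpha = 1000, 0.05
mean = n * alpha                                    # ~50 expected rejections
half = 1.96 * math.sqrt(n * alpha * (1 - alpha))    # normal approximation
lo, hi = mean - half, mean + half                   # roughly 36.5 .. 63.5
```

An observed rejection count inside (lo, hi) is consistent with the nominal α = 0.05 level.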
3.2 Experimental results: Rats dosed with hepatotoxicant bromobenzene
In this experiment there are five groups of rats: a control group, a corn oil (the toxin vehicle) control group, and a low, medium and high dosage of bromobenzene. The collected urine from three individual rats of each treatment group is measured on the NMR platform at 6, 24 and 48 hours after the toxin administration [7,18]. The rats are sacrificed after each sampling to collect tissue samples for histology and transcriptomics analysis.
One sample from the highest dosage group is missing. To avoid unbalanced ANOVA issues we assume this missing sample equals the average of the two samples collected and measured from that group at the
same time point.
The main effects and the interaction effect of the two-way ANOVA models were tested by the ASCA validation. Here the focus is on the factor dosage and the dosage-time interaction. The models are significant, with a drug dose difference p ≤ 0.0001, SSQ = 3.181 (figure 2a) and a dosage-time interaction p ≤ 0.0001, SSQ = 1.344 (figure 2b). The interaction significance was calculated on the residuals, thus after removing the time and dosage effects (equation 16). Because the experimental design is not nested, a simple two-way ANOVA permutation scheme can be used.
Figure 2. Validation of the ASCA model for bromobenzene-treated rats: validation of the dosage and the dosage-time interaction, and the $X_\delta + X_{\tau\delta}$ score plot. This experiment deals with the urine analysis of bromobenzene-treated rats; the experimental design includes two types of controls and 3 dosage levels of the hepatotoxicant bromobenzene. The dosage and the interaction models are both significant, as is clear from the reference distributions (p ≤ 0.0001). Because the dosage and the interaction models are significant, they are superimposed and analyzed by SCA. The score plot of the SCA solution is shown. From this plot it is clear by visual inspection that the average dosage levels differ and that the interaction effect exists.
The dosage and the interaction effect are significant, combining the dosage and interaction gives a data set that describes all effects that depend on dosage (equation 17). SCA helps to reduce the
dimensionality of this data set.
$$ X_{\delta+\tau\delta} = X_\delta + X_{\tau\delta} \qquad (17) $$
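The SCA step on a combined effect matrix of this kind amounts to a singular value decomposition; a minimal sketch with made-up numbers (effect matrices are column-centered by construction):

```python
import numpy as np

# made-up combined effect matrix: 6 samples x 2 metabolites, columns sum to zero
X_dt = np.array([[ 2.0,  1.0], [ 2.0,  1.0],
                 [-1.0,  0.5], [-1.0,  0.5],
                 [-1.0, -1.5], [-1.0, -1.5]])
U, s, Vt = np.linalg.svd(X_dt, full_matrices=False)
T = U * s        # component scores; rows can be grouped by factor level
P = Vt.T         # component loadings
```

Grouping the rows of T by factor level gives the kind of score plot shown in figure 2c.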
SCA summarizes the validated toxin and interaction variation. Grouping the scores (T in equation 18) according to the factor levels gives figure 2c. The conclusion is that the treatment with the hepatotoxicant differs between dosage groups and that the dosage responses change over time. Additionally, the results suggest the animals treated with the lowest dosage fully recover, i.e. go back to the state of the controls. The animals dosed with the medium dosage need more time, but also go back to the control state. The animals given the highest dosage do not recover to the control state. Histological liver examination revealed extensive damage caused by the bromobenzene, corroborating these findings.
4 Conclusion
Extending ASCA with a permutation procedure enables validation of ASCA models. Referencing the ASCA models to the permutation based reference distribution gives validation statistics. If the model is
significant, the following SCA decomposition describes the validated induced effects.
The proposed method gives validation statistics to the ASCA models. ASCA itself allows for summarizing the designed experimental data. Combining ASCA and the ASCA model validation forms a powerful
summary of designed experimental data.
Margriet Hendriks is thanked for suggesting permutation based validation, Huub Hoefsloot is thanked for his discussions that helped to solidify the theory and Marieke Timmerman is thanked for
critical reading.
Set Manipulation
September 30th 2009, 03:17 PM #1
I am having trouble with this problem.
Suppose that A and B are subsets of R (reals) and that |a - L| < p for all a in A and that |3b - L| < q for all b in B.
Prove that |a - 6b| < p + 2q + |L|.
Any help would be appreciated.
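One standard way to attack this (a sketch): rewrite a − 6b so that the given bounds appear, then apply the triangle inequality.

```latex
|a - 6b| = \bigl| (a - L) - 2(3b - L) - L \bigr|
         \le |a - L| + 2\,|3b - L| + |L|
         <   p + 2q + |L|.
```

The last step uses $|a - L| < p$ and $|3b - L| < q$ for any $a \in A$, $b \in B$.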
[FOM] can the classicist understand the intuitionist? if not, why?
Joao Marcos botocudo at gmail.com
Wed Nov 23 17:40:11 EST 2005
> Arnon Avron wrote:
> >2) Can one define in intuitionistic logic counterparts of the
> >classical connectives so that the resulting translation
> >preserves the consequence relation of classical logic?
Giovanni Sambin then wrote:
> The so-called double-negation interpretation, or Goedel-Gentzen
> translation of the 30s, which sends a formula A to a formula A*,
> allows one to prove that:
> Gamma|- Delta is provable in classical logic
> if and only if
> Gamma*|- Delta* is provable in intuitionistic logic,
> actually, in minimal logic (Gamma* is of course A1*,...,An*
> if Gamma=A1,...An)
> (for a proof, see e.g. Troelstra-van Dalen, Constructivism in
> mathematics, an introduction, vol. 1, North-Holland 1988, pp. 57-59)
> When I explain this to students I say: the intuitionist can
> understand what the classicist says, including his proofs,
> but not conversely (unless one adds an extra modality).
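The quoted A ↦ A* mapping can be made concrete for the propositional fragment; a toy sketch (the clauses follow the standard Gödel-Gentzen definition, and the tuple encoding of formulas is an illustrative choice):

```python
# formulas as nested tuples:
#   ("atom", "p"), ("and", A, B), ("or", A, B), ("imp", A, B), ("not", A)

def neg(a):
    return ("not", a)

def gg(f):
    """Return the Goedel-Gentzen translation f* of a propositional formula f."""
    tag = f[0]
    if tag == "atom":
        return neg(neg(f))                                  # p* = not not p
    if tag == "and":
        return ("and", gg(f[1]), gg(f[2]))                  # (A and B)* = A* and B*
    if tag == "or":
        return neg(("and", neg(gg(f[1])), neg(gg(f[2]))))   # (A or B)* = not(not A* and not B*)
    if tag == "imp":
        return ("imp", gg(f[1]), gg(f[2]))                  # (A -> B)* = A* -> B*
    if tag == "not":
        return neg(gg(f[1]))                                # (not A)* = not A*
    raise ValueError(tag)
```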
This last observation makes me wonder:
Does anyone know of a PROOF that there is NO converse translation,
from (the consequence relation of) intuitionistic logic into classical
logic, i.e., a proof that there is no (recursive?) mapping * such that
Gamma |- Delta is provable in intuitionistic logic
if and only if
Gamma* |- Delta* is provable in classical logic
Or maybe someone can exhibit here such a translation, for my illustration?
BTW, how is the procedure of "adding an extra modality" that will help
the classicist understand the intuitionist?
Methods of Information Geometry, volume 191 of Translations of mathematical monographs
Results 1 - 10 of 17
, 2004
Cited by 87 (6 self)
A family of kernels for statistical learning is introduced that exploits the geometric structure of statistical models. The kernels are based on the heat equation on the Riemannian manifold defined
by the Fisher information metric associated with a statistical family, and generalize the Gaussian kernel of Euclidean space. As an important special case, kernels based on the geometry of
multinomial families are derived, leading to kernel-based learning algorithms that apply naturally to discrete data. Bounds on covering numbers and Rademacher averages for the kernels are proved
using bounds on the eigenvalues of the Laplacian on Riemannian manifolds. Experimental results are presented for document classification, for which the use of multinomial geometry is natural and well
motivated, and improvements are obtained over the standard use of Gaussian or linear kernels, which have been the standard for text classification.
- Journal of Machine Learning Research , 2003
Cited by 42 (6 self)
In biological data, it is often the case that observed data are available only for a subset of samples. When a kernel matrix is derived from such data, we have to leave the entries for unavailable
samples as missing. In this paper, the missing entries are completed by exploiting an auxiliary kernel matrix derived from another information source. The parametric model of kernel matrices is
created as a set of spectral variants of the auxiliary kernel matrix, and the missing entries are estimated by fitting this model to the existing entries. For model fitting, we adopt the em algorithm
(distinguished from the EM algorithm of Dempster et al., 1977) based on the information geometry of positive definite matrices. We will report promising results on bacteria clustering experiments
using two marker sequences: 16S and gyrB.
- Neural Computation , 2004
Cited by 23 (8 self)
We aim to extend from AdaBoost to U-Boost in the paradigm to build up a stronger classification machine in a set of weak learning machines. A geometric understanding for the Bregman divergence
defined by a generic function U being convex leads to U-Boost method in the framework of information geometry for the finite measure functions over the label set. We propose two versions of U-Boost
learning algorithms by taking whether the domain is restricted to the space of probability functions or not. In the sequential step we observe that the two adjacent and the initial classifiers
associate with a right triangle in the scale via the Bregman divergence, called the Pythagorean relation. This leads to a mild convergence property of the U-Boost algorithm as seen in the EM
algorithm. Statistical discussion for consistency and robustness elucidates the properties of U-Boost methods based on a probabilistic assumption for a training data. 1
, 2001
Cited by 6 (5 self)
The mystery of belief propagation (BP) decoder, especially of the turbo decoding, is studied from information geometrical viewpoint. The loopy belief network (BN) of turbo codes makes it difficult to
obtain the true "belief" by BP, and the characteristics of the algorithm and its equilibrium are not clearly understood. Our study gives an intuitive understanding of the mechanism, and a new
framework for the analysis. Based on the framework, we reveal basic properties of the turbo decoding. 1
Cited by 6 (1 self)
Recently, several attempts have been made for deriving datadependent kernels from distribution estimates with parametric models (e.g. the Fisher kernel). In this paper, we propose a new kernel
derived from any distribution estimators, parametric or nonparametric. This kernel is called the Leave-one-out kernel (i.e. LOO kernel), because the leave-one-out process plays an important role to
compute this kernel. We will show that, when applied to a parametric model, the LOO kernel converges to the Fisher kernel asymptotically as the number of samples goes to infinity.
- Advances in Neural Information Processing Systems 15 , 2003
Cited by 4 (1 self)
Recently the Fisher score (or the Fisher kernel) is increasingly used as a feature extractor for classification problems. The Fisher score is a vector of parameter derivatives of loglikelihood of a
probabilistic model. This paper gives a theoretical analysis about how class information is preserved in the space of the Fisher score, which turns out that the Fisher score consists of a few
important dimensions with class information and many nuisance dimensions. When we perform clustering with the Fisher score, K-Means type methods are obviously inappropriate because they make use of
all dimensions. So we will develop a novel but simple clustering algorithm specialized for the Fisher score, which can exploit important dimensions. This algorithm is successfully tested in
experiments with artificial data and real data (amino acid sequences).
- Neural Computation , 2003
Cited by 3 (0 self)
This paper analyses the Fisher kernel (FK) from a statistical point of view. The FK is a particularly interesting method for constructing a model of the posterior probability that makes intelligent
use of unlabeled data, i.e. of the underlying data density. It is important to analyse and ultimately understand the statistical properties of the FK. To this end, we first establish sufficient conditions that the constructed posterior model is realizable, i.e. that it contains the true distribution.
Cited by 2 (2 self)
Abstract. This paper studies the geometrization of spaces of stochastic processes. Our main motivation is the problem of pattern recognition in high-dimensional time-series data (e.g., video sequence
classification and clustering). First, we review some existing approaches to defining distances on spaces of stochastic processes. Next, we focus on the space of processes generated by (stochastic)
linear dynamical systems (LDSs) of fixed size and order (this space is a natural choice for the pattern recognition problem). When the LDSs are represented in state-space form, the space of LDSs can
be considered as the base space of a principal fiber bundle. We use this fact to introduce a large class of easy-to-compute group action-induced distances on the space of LDSs and hence on the
corresponding space of stochastic processes. We call such a distance an alignment distance. One of our aims is to demonstrate the usefulness of control-theoretic tools in problems related to
stochastic processes.
- In the
"... Abstract. Kullback-Leibler relative-entropy, in cases involving distributions resulting from relative-entropy minimization, has a celebrated property reminiscent of squared Euclidean distance:
it satisfies an analogue of the Pythagoras ’ theorem. And hence, this property is referred to as Pythagoras ..."
Cited by 1 (1 self)
Add to MetaCart
Abstract. Kullback-Leibler relative-entropy, in cases involving distributions resulting from relative-entropy minimization, has a celebrated property reminiscent of squared Euclidean distance: it
satisfies an analogue of the Pythagoras' theorem. And hence, this property is referred to as Pythagoras' theorem of relative-entropy minimization or triangle equality and plays a fundamental role in geometrical approaches of statistical estimation theory like information geometry. An equivalent of Pythagoras' theorem in the generalized nonextensive formalism is established in (Dukkipati at
, 2003
Cited by 1 (0 self)
Bethe Free Energy and Contrastive Divergence Approximations for Undirected Graphical Models. Yee Whye Teh, Doctorate of Philosophy, Graduate Department of Computer Science, University of Toronto, 2003. As
the machine learning community tackles more complex and harder problems, the graphical models needed to solve such problems become larger and more complicated. As a result, performing inference and learning exactly for such graphical models becomes ever more expensive, and approximate inference and learning techniques become ever more prominent.
A new representation for linear lists
Cited by 20 (0 self)
inserted or deleted key is known). Our data structure is quite natural and much simpler than previous worst-case optimal solutions. It is based on two techniques : 1) bucketing, i.e. storing an
ordered list of 2 log n keys in each leaf of an (a; b) tree, and 2) lazy splitting, i.e. postponing necessary splits of big nodes until we have time to handle them. It can also be used as a finger
tree with O(log n) worst-case update time. 1. Introduction. One of the most common (and most important) data structures used in efficient algorithms is the balanced search tree. Hence there exists a
great variety of them in literature. Basically, they all store a set of n keys such that location, insertion and deletion of keys can be accomplished in O(log n) worst-case time. In general, updates
(insertions or ...
, 1992
[Figure 1: A red-black tree. The darkened nodes are black nodes. The external nodes are denoted by squares. Shown with each node is its rank.]
Tarjan and Van Wyk give another, simpler, implementation of finger trees. They describe a finger data structure which is a modification of red-black trees, but other forms of balanced trees could be used as a basis for the structure. The two problems presented in
Chapters 3 and 4 rely on the use of redblack and finger trees respectively. In this chapter we give a fairly complete overview of red-black trees, of the finger trees introduced by Tarjan and Van
Wyk, and of a variant of these which we use in Chapter 4. The material here is intended to be comprehensive and useful as an introduction to these two types of data structures.
Red-Black Trees
A red-black tree is a full binary tree in which each node is assigned a color, either red or black. The leaves are called | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=2846882","timestamp":"2014-04-21T09:09:47Z","content_type":null,"content_length":"15159","record_id":"<urn:uuid:bb97bb8a-80c7-4ca5-9a64-b6305363da15>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00419-ip-10-147-4-33.ec2.internal.warc.gz"}
Understanding :nth-child Pseudo-class Expressions
CSS3 provides four powerful pseudo-classes that allow the CSS designer to select multiple elements according to their positions in a document tree. Using these pseudo-classes can be a little
confusing at first, but it's easy once you get the hang of it. The pseudo-classes are:
• :nth-child(N)
• :nth-last-child(N)
• :nth-of-type(N)
• :nth-last-of-type(N)
The argument, N, can be a keyword, a number, or a number expression of the form an+b.
These pseudo-classes accept the keywords odd, for selecting odd-numbered elements, and even, for selecting even-numbered elements.
If the argument N is a number, it represents the ordinal position of the selected element. For example, if the argument is 5, the fifth element will be selected.
The argument N can also be given as an+b, where a and b are integers (for example, 3n+1).
In that expression, the number b represents the ordinal position of the first element that we want to match, and the number a represents the ordinal number of every element we want to match after
that. So our example expression 3n+1 will match the first element, and every third element after that: the first, fourth, seventh, tenth, and so on. The expression 4n+6 will match the sixth element
and every fourth element after that: the sixth, tenth, fourteenth, and so on. The keyword value odd is equivalent to the expression 2n+1.
If a and b are equal, or if b is zero, b can be omitted. For example, the expressions 3n+3 and 3n+0 are equivalent to 3n—they refer to every third element. The keyword value even is equivalent to the
expression 2n.
If a is equal to 1, it can be omitted. So, for example, 1n+3 can be written as n+3. If a is zero, which indicates a non-repeating pattern, only the element b is required to indicate the ordinal
position of the single element we want to match. For example, the expression 0n+5 is equivalent to 5, and as we saw above, it’ll match the fifth element.
Both a and b can be negative, but elements will only be matched if N has a positive value. If b is negative, replace the + sign with a - sign.
If your head’s spinning by now, you’re not alone, but hopefully Table 1 will help put things into perspective. The expression represents a linear number set that’s used to match elements. Thus, the
first column of the table represents values for n, and the other columns display the results (for N) of various example expressions. The expression will match if the result is positive and an element
exists in that position within the document tree.
Table 1. Result Sets for Pseudo-class Expressions
│n│ 2n+1 │ 4n+1 │ 4n+4 │ 4n │ 5n-2 │ -n+3 │
│0│ 1 │ 1 │ 4 │- │ - │ 3 │
│1│ 3 │ 5 │ 8 │4 │ 3 │ 2 │
│2│ 5 │ 9 │ 12 │8 │ 8 │ 1 │
│3│ 7 │ 13 │ 16 │12│ 13 │ - │
│4│ 9 │ 17 │ 20 │16│ 18 │ - │
│5│ 11 │ 21 │ 24 │20│ 23 │ - │
Thus the expression 4n+1 will match the first, fifth, ninth, thirteenth, seventeenth, twenty-first, and so on, elements if they exist, while the expression -n+3 will match the third, second, and
first elements only.
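The an+b matching rule described above is easy to mirror in code. The helper below (its name and interface are mine, not part of any CSS tooling) lists the 1-based positions that an expression selects among `count` siblings:

```python
def nth_matches(a, b, count):
    """Positions matched by the expression an+b among `count`
    siblings: all values a*n + b, for n = 0, 1, 2, ..., that
    land in the range 1..count."""
    if a == 0:
        return [b] if 1 <= b <= count else []
    hits = []
    n = 0
    while True:
        pos = a * n + b
        if a > 0 and pos > count:   # positions only grow from here
            break
        if a < 0 and pos < 1:       # positions only shrink from here
            break
        if 1 <= pos <= count:
            hits.append(pos)
        n += 1
    return sorted(hits)
```

For example, `nth_matches(4, 1, 24)` reproduces the 4n+1 column of Table 1, and `nth_matches(-1, 3, 10)` confirms that -n+3 selects only the third, second, and first elements.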
The difference, then, between the nth- and nth-last- pseudo-classes is that nth- pseudo-classes count from the top of the document tree down—they select elements that have N-1 siblings before them;
meanwhile, the nth-last- pseudo-classes count from the bottom up—they select elements that have N-1 siblings after them.
User-contributed notes
by Rayzur
Thu, 10 Jun 2010 05:47:00 GMT
["mention that these only work in opera 9.5 and konqueror at the top of the reference"]
The browser support chart is shown on the :nth-child page
by sinagod
Thu, 23 Apr 2009 22:33:55 GMT
It would have been nice to mention that these only work in opera 9.5 and konqueror at the top of the reference
Related Products | {"url":"http://reference.sitepoint.com/css/understandingnthchildexpressions","timestamp":"2014-04-16T13:03:34Z","content_type":null,"content_length":"52394","record_id":"<urn:uuid:21b3d449-bbe9-4fe8-915b-772768ca27b8>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00304-ip-10-147-4-33.ec2.internal.warc.gz"} |
Meeting Details
For more information about this meeting, contact Anatole Katok.
Title: The new (and wonderful) notion of entropy for actions of higher rank abelian groups, and its connections to slow entropy and rigidity
Seminar: Center for Dynamics and Geometry Seminars
Speaker: Anatole Katok, Penn State
Since ordinary measure-theoretic entropy for a smooth measure-preserving action of any countable group other than cyclic or its finite extension vanishes, alternative notions of entropy for such
actions are of interest. In particular, for a smooth action of Z^k slow entropy based on the scale function n^{1/k} provides a proper normalization and gives a first cut into ``zero entropy'' and
``positive entropy'' actions. However, in order to ascribe a numerical value to slow entropy one needs to fix a norm on the acting group and this (unlike fixing a ``volume element'' ) is somewhat
arbitrary. In this talk I will discuss a new and natural notion of average entropy that is equal to the inverse of the volume of the unit ball in the entropy norm. Thus it is positive if and only if
all non-identity elements of the action have positive entropy. Average entropy is equal to the infimum of the values of the n^{1/k} slow entropy over all norms on the acting group normalized to the
volume element. If it is positive, the infimum is achieved at the entropy norm. A corollary of strong rigidity results that are discussed in Federico Rodriguez Hertz's talk is that for the maximal
rank actions (where the rank is at least two and dimension is greater than rank by one), the average entropy is either equal to zero or is bounded from below by a positive number that depends only on
the dimension. Conjecturally this bound is uniform in dimension. This is a joint work in progress with Federico Rodriguez Hertz.
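One way to transcribe the definition stated above (average entropy as the inverse of the volume of the unit ball of the entropy norm) is the following; the notation is mine, not the speaker's:

```latex
\[
  h_{\mathrm{avg}}(\alpha)
  \;=\;
  \Bigl(\operatorname{vol}\,\bigl\{\, v \in \mathbb{R}^k
      \;:\; \lVert v \rVert_{\mathrm{ent}} \le 1 \,\bigr\}\Bigr)^{-1},
\]
```

where $\alpha$ is a smooth $\mathbb{Z}^k$-action and $\lVert\cdot\rVert_{\mathrm{ent}}$ denotes the entropy norm; the claim in the abstract is then that $h_{\mathrm{avg}}(\alpha) > 0$ exactly when every non-identity element of the action has positive entropy.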
Room Reservation Information
Room Number: MB106
Date: 09 / 26 / 2011
Time: 03:35pm - 05:30pm | {"url":"http://www.math.psu.edu/calendars/meeting.php?id=11427","timestamp":"2014-04-19T17:03:29Z","content_type":null,"content_length":"4624","record_id":"<urn:uuid:44ae799c-bf27-44fc-bd3c-4dc5c188b3f1>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00542-ip-10-147-4-33.ec2.internal.warc.gz"} |
An Introduction to Dynamical Systems: Continuous and Discrete
ISBN: 9780821891353 | 0821891359
Edition: 2nd
Format: Hardcover
Publisher: Amer Mathematical Society
Pub. Date: 1/12/2013
Why Rent from Knetbooks?
Because Knetbooks knows college students. Our rental program is designed to save you time and money. Whether you need a textbook for a semester, quarter or even a summer session, we have an option
for you. Simply select a rental period, enter your information and your book will be on its way!
Top 5 reasons to order all your textbooks from Knetbooks:
• We have the lowest prices on thousands of popular textbooks
• Free shipping both ways on ALL orders
• Most orders ship within 48 hours
• Need your book longer than expected? Extending your rental is simple
• Our customer support team is always here to help | {"url":"http://www.knetbooks.com/introduction-dynamical-systems-continuous/bk/9780821891353","timestamp":"2014-04-18T09:05:56Z","content_type":null,"content_length":"24248","record_id":"<urn:uuid:504f405b-335c-4b3c-b920-eb055d9766ab>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00585-ip-10-147-4-33.ec2.internal.warc.gz"} |
Determine whether the following relation is transitive
May 4th 2010, 09:53 AM #1
May 2010
Determine whether the following relation is transitive
Q) Determine whether the following relation is transitive
Relation R in the set N of natural numbers defined as
R = { (x,y) : y=x+5 and x<4}
The text book says that it is transitive. I did not understand how it is transitive. Please help.
This is a trick question in that the relation $\mathcal{R}$ is vacuously transitive.
If $(a,b)\in \mathcal{R}$ then $b=a+5$ which implies that $b\ge 5$.
So that means that $\left( {\forall c \in \mathbb{N}} \right)\left[ {(b,c) \notin \mathcal{R}} \right]$
So the relation is transitive by default.
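The vacuous-transitivity argument can be checked by brute force. The sketch below assumes N starts at 1, so x ranges over {1, 2, 3}; including 0 would only add the pair (0, 5) and change nothing.

```python
def is_transitive(R):
    """True iff whenever (a, b) and (b, c) are in R, (a, c) is too."""
    return all((a, c) in R
               for (a, b) in R
               for (b2, c) in R
               if b == b2)

# R = {(x, y) : y = x + 5 and x < 4} over the natural numbers;
# only x in {1, 2, 3} qualifies, so a finite enumeration suffices.
R = {(x, x + 5) for x in range(1, 4)}
```

Every second coordinate is at least 6 while every first coordinate is at most 3, so no chain (a, b), (b, c) exists in R and the transitivity condition holds vacuously, exactly the argument above.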
May 4th 2010, 10:25 AM #2 | {"url":"http://mathhelpforum.com/discrete-math/143007-determine-whether-following-relation-transitive.html","timestamp":"2014-04-19T20:50:48Z","content_type":null,"content_length":"35735","record_id":"<urn:uuid:d8d6028b-72a6-46cc-b5df-1eed93bfb5b1>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00631-ip-10-147-4-33.ec2.internal.warc.gz"} |
Miscellaneous Polyhedra
Near Misses based on dodecahedra
The left hand figure above with tetrahedral symmetry is termed a 'tetrated dodecahedron' by Robert Austin. It was discovered (independently) by Alex Doskey and Robert Austin. It consists of four
groups of three pentagons separated by six pairs of triangles and four single triangles. With the pentagons regular, the six triangle-triangle edges are lengthened by 0.07 (stress map). Distortion (E=0.42, P=0, A=113°).
The centre figure is a tetrahedrally expanded tetrated dodecahedron, again with the pentagons regular. (E=0.567, P=1.776, A=67.0°). Stress Map. OFF.
The right hand figure above is a 'snub expanded tetrated dodecahedron' discovered by Mick Ayrton. It can also be generated with the pentagons regular (stress map). Distortion (E=0.66, P=0 , A=88°).
I am grateful to Mick Ayrton for allowing me to display his discoveries of some near misses that have a dodecahedral origin. In all cases the distortion is confined to the triangle-triangle edges.
The left hand model (which Mick terms a 'saucer') contains two caps from the Johnson Solid 'trigyrate rhombicosidodecahedron'. Stress map. Distortion (E=0.214, P=0 , A=56°). Interestingly, if the
pentagons are replaced by pentagonal pyramids (here), the model becomes regular. Mick has pointed out that this occurs to a surprising number of near misses.
The right hand model (which Mick terms a 'curvy octahedroid' due to the fact that it can be envisaged as having eight slightly curved compound faces) contains an interestingly twisted 'cingulum' of
pentagons and triangles. Stress map. Distortion (E=0.529, P=0 , A=104°). Again this forms a regular figure if the pentagons are replaced by pentagonal pyramids (here).
The left hand model is termed a 'tripentagonal snub dodecahedron'. The pentagons of the original dodecahedron are left in four groups of three and snub triangles added between these groups. This
polyhedron is not convex Stress map. Distortion (E=0.64, P=0 , A=179°).
The right hand model is a variation on this theme, the pentagons are now left in six groups of two to form a 'bipentagonal snub dodecahedron' with a distortion of 0.646 (stress map). Distortion (E=
0.64, P=0 , A=107°).
The above polyhedron was discovered by Mason Green in 2006. The distortion is confined to the triangular faces. Stress map. Distortion (E=0.570, P=0 , A=66°). It has 74 faces (6 hexagons, 12
pentagons, and 56 triangles in eight chiral clusters of seven). The hexagons and pentagons are regular, with the distortion confined to the triangles. The triangle-triangle edges are compressed by
1.78% (if adjacent to threefold axial triangles), or compressed by 0.6% (if not adjacent to axial triangles). Mason calls this the "hexagonally expanded snubbed dodecahedron"
The above polyhedron is an example of a 'fullerene', and was brought to my attention by Robert Austin and Roger Kaufman. It has tetrahedral symmetry with 12 pentagons and 4 hexagons. The above model
has the distortion confined to the pentagons, stress map. Distortion (E=0.316, P=3.07 , A=52°). If the hexagons are allowed to distort (here, stress map) then the distortion becomes (E=0.059, P=2.19
, A=108°) | {"url":"http://www.orchidpalms.com/polyhedra/acrohedra/nearmiss/Tetrated%20Dodecahedra.html","timestamp":"2014-04-19T07:13:21Z","content_type":null,"content_length":"7719","record_id":"<urn:uuid:d079e944-ad31-4b82-bcfd-84d62bc00d66>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00239-ip-10-147-4-33.ec2.internal.warc.gz"} |
On This Day in Math - Nov 7
I am one of those who think, like Nobel, that humanity will draw more good than evil from new discoveries.
~Marie Curie
The 311th day of the year; 311 is not only prime, but remains prime under any permutation of its digits (113, 131, 311). (Students might search for the next smaller, or larger, number which is also a permutable prime.)
Also, 311 is the sum of three, five, seven, eleven, and thirteen consecutive primes. *PB
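The suggested search for neighbouring permutable primes is easy to script; `is_permutable_prime` is my own helper name, and trial division is plenty for three-digit numbers.

```python
from itertools import permutations

def is_prime(n):
    if n < 2:
        return False
    f = 2
    while f * f <= n:
        if n % f == 0:
            return False
        f += 1
    return True

def is_permutable_prime(n):
    """True iff every rearrangement of n's digits is prime."""
    return all(is_prime(int(''.join(p)))
               for p in permutations(str(n)))
```

Scanning downward and upward from 311 with this helper turns up 199 and 337 as the nearest permutable primes.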
EVENTS
1631
Transit of Mercury across the sun, the first observation of a transit of a planet, observed by Pierre Gassendi. This had been predicted by Kepler in 1629. [Scott, Works of Wallis, p. 191, had 1621]
*VFR When Gassendi observed the dot of Mercury passing across the face of the Sun, he was surprised - it seemed far too small, according to ancient conceptions of the relative sizes of heavenly
objects. With a Galilean telescope he observed the transit by projecting the sun's image on a screen of paper. He recorded this in Mercurius in sole visus (1632; Mercury in the Face of the Sun) as
support for the new astronomy of Johannes Kepler. His instrument was not strong enough, however, to disclose the occultations and transits of Jupiter's satellites. *TIS (for more on the march to
accepting a heliocentric system, see this blog by The Renaissance Mathematicus.)
1749 Benjamin Franklin enters in his notebook a list of 12 ways in which lightning and electrical fluid agree, from 1) "giving light", to 12) "sulphurous smell". He then considers whether lightning
will be as equally attracted to "points" and lays out the framework for an experiment. *A history of physics in its elementary branches By Florian Cajori
In 1849, the official opening of Queen's College in Cork, Ireland, took place. George Boole was the professor of mathematics—the only university post he ever applied for. [MacHale, George Boole, His Life and Work, p 88].
In 1908, Prof. Ernest Rutherford announced in London that he had isolated a single atom of matter. *TIS
In 1915, in connection with the celebration of the centenary of his birth (31 October 1815), a memorial tablet was unveiled at his birthplace, Osterfelde, near Warendorf in Westphalia. It reads: “An dieser Stätte wurde am 31•X•1815 Karl Weierstrass, der grosse Mathematiker, eine Leuchte der Berliner Universität, geboren.” (“At this site, on 31 October 1815, Karl Weierstrass, the great mathematician, a luminary of the Berlin University, was born.”) *VFR
In 1917, the “October Revolution” of the Bolsheviks broke out in Russia. It is now celebrated on 7 November, as the Gregorian calendar was not adopted there until 1918. *VFR
In 1940, at approximately 11:00 am, the first Tacoma Narrows suspension bridge collapsed due to wind-induced vibrations. Situated on the Tacoma Narrows in Puget Sound, near the city of Tacoma, Washington, the bridge had only been open for traffic a few months. *TIS “Galloping Gertie,” the suspension bridge over the Narrows of Puget Sound, Tacoma, Washington, broke up from a torsional oscillation of steadily increasing amplitude caused by the wind, known as a von Kármán vortex street. The film is instructive for classes in Differential Equations.
BIRTHS
1660 Thomas Fantet de Lagny
(7 Nov 1660 in Lyon, France - 11 April 1734 in Paris, France) De Lagny is well known for his contributions to computational mathematics, calculating π to 120 places and also making useful comments on
the convergence of the series he was using. In about 1690 he developed a method of giving approximate solutions of algebraic equations and, in 1694, Halley published a twelve page paper in the
Philosophical Transactions of the Royal Society giving his method of solving polynomial equations by successive approximation which is essentially the same as that given by Lagny a few years earlier.
One should note that although methods based on the differential calculus were being developed at this time, neither Lagny nor Halley used these new ideas. Lagny's publications on this topic are
Méthodes nouvelle infiniment générale et infiniment abrégée pour l'extraction des racines quarrées, cubique (1691) and Méthodes nouvelles et abrégée pour l'extraction et l'approximation des racines
Lagny constructed trigonometric tables and used binary arithmetic in his text Trigonométrie française ou reformée published in Rochefort in 1703. In 1733 he examined the continued fraction expansion
of the quotient of two integers and, as an example, considered adjacent Fibonacci numbers as the worst case expansion for the Euclidean algorithm in his paper Analyse générale ou Méthodes nouvelles
pour résoudre les problèmes de tous les genres et de tous les degrés à l'infini.*SAU
1799 Karl Gräffe
was a German mathematician best remembered for his method of numerical solution of algebraic equations.*SAU
1867 Marie Marja Sklodowska Curie
(7 Nov 1867; 4 Jul 1934) was a Polish-born French chemist and physicist. In 1898, her celebrated experiments on uranium minerals led to discovery of two new elements. First she separated polonium,
and then radium a few months later. The quantity of radon in radioactive equilibrium with a gram of radium was named a curie (subsequently redefined as the emission of 3.7 × 10^10 alpha particles per
sec.) With Henri Becquerel and her husband, Pierre Curie, she was awarded the 1903 Nobel Prize for Physics. She was then sole winner of a second Nobel Prize in 1911, this time in Chemistry. Her
family won five Nobel awards in two generations. She died of radiation poisoning from her pioneering work before the need for protection was known. *TIS
1878 Lise Meitner
(7 Nov 1878; 27 Oct 1968) Austrian physicist who shared the Enrico Fermi Award (1966) with the chemists Otto Hahn and Fritz Strassmann for their joint research beginning in 1934 that led to the
discovery of uranium fission. She refused to work on the atom bomb. In 1917, with Hahn, she had discovered the new radioactive element protactinium. She was the first to describe the emission of
Auger electrons. In 1935, she found evidence of four other radioactive elements corresponding to atomic numbers 93-96. In 1938, she was forced to leave Nazi Germany, and went to a post in Sweden. Her
other work in the field of nuclear physics includes study of beta rays, and study of the three main disintegration series. Later, she used the cyclotron as a tool. *TIS
1888 Sir Chandrasekhara Venkata Raman
(7 Nov 1888; 21 Nov 1970) Indian physicist whose work was influential in the growth of science in India. He was the recipient of the 1930 Nobel Prize for Physics for the 1928 discovery now called
Raman scattering: a change in frequency observed when light is scattered in a transparent material. When monochromatic or laser light is passed through a transparent gas, liquid, or solid and is
observed with the spectroscope, the normal spectral line has associated with it lines of longer and of shorter wavelength, called the Raman spectrum. Such lines, caused by photons losing or gaining
energy in elastic collisions with the molecules of the substance, vary with the substance. Thus the Raman effect is applied in spectrographic chemical analysis and in the determination of molecular
structure. *TIS
1906 Jean Leray
was a French mathematician who worked on algebraic topology and differential equations. *SAU
DEATHS
1872 Rudolf Friedrich Alfred Clebsch
(19 January 1833 – 7 November 1872) was a German mathematician who made important contributions to algebraic geometry and invariant theory. He attended the University of Königsberg and was
habilitated at Berlin. He subsequently taught in Berlin and Karlsruhe. His collaboration with Paul Gordan in Giessen led to the introduction of Clebsch–Gordan coefficients for spherical harmonics,
which are now widely used in quantum mechanics.
Together with Carl Neumann at Göttingen, he founded the mathematical research journal Mathematische Annalen in 1868. *Wik
1918 Artemas Martin
was a self-taught mathematician and book-collector whose output covered a wide range of mathematical problems. *SAU
1913 Alfred Russel Wallace
(8 Jan 1823, 7 Nov 1913) British naturalist, and biogeographer (who studies the distribution of organisms). He was the first westerner to describe some of the most interesting natural habitats in the
tropics. He is best known for devising a theory of the origin of species through natural selection made independently of Darwin. Between 1854 and 1862, Wallace assembled evidence in the Malay
Archipelago, sending his conclusions to Darwin in England. Their findings were presented to the Linnaean Society in 1858. Wallace found that Australian species were more primitive, in evolutionary
terms, than those of Asia, and that this reflected the stage at which the two continents had become separated. He proposed an imaginary line (now known as Wallace's line) dividing the fauna of the
two regions.*TIS
1936 Gury Vasilievich Kolosov
was a Russian mathematician who worked on the theory of elasticity.*SAU
1968 Aleksandr Osipovich Gelfond
(24 Oct 1906, 7 Nov 1968) Russian mathematician who originated basic techniques in the study of transcendental numbers (numbers that cannot be expressed as the root or solution of an algebraic
equation with rational coefficients). He profoundly advanced transcendental-number theory, and the theory of interpolation and approximation of complex-variable functions. He established the
transcendental character of any number of the form a^b, where a is an algebraic number different from 0 or 1 and b is any irrational algebraic number, which is now known as Gelfond's theorem. This
statement solved the seventh of 23 famous problems that had been posed by the German mathematician David Hilbert in 1900. *TIS
2003 Donald R(edfield) Griffin
(3 Aug 1915, 7 Nov 2003) American biophysicist, known for his research in animal navigation, animal behaviour, and sensory biophysics. With Robert Galambos, he studied bat echolocation (1938), a term
he coined (1944) for how the bat's ears replace eyes in flight guidance. Using specialized high-frequency sound equipment by G.W. Pierce, they found that bats in flight produced ultrasonic sounds
used to avoid obstacles. In WW II, he used physiological principles to design such military equipment as cold-weather clothing and headphones. Griffin also worked extensively on bird navigation. In
the late 1940s, he flew in a Piper Cub to observe the flight paths of gannets and gulls. In his career, he pioneered rigorous techniques to study animals in their natural environment. *TIS
*VFR = V Frederick Rickey, USMA
*TIS= Today in Science History
*Wik = Wikipedia
*SAU=St Andrews Univ. Math History
*CHM=Computer History Museum | {"url":"http://pballew.blogspot.com/2011/11/on-this-day-in-math-nov-7.html","timestamp":"2014-04-20T00:38:11Z","content_type":null,"content_length":"109696","record_id":"<urn:uuid:b715a66f-8dcf-4f4a-b714-0b52b98411a3>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00473-ip-10-147-4-33.ec2.internal.warc.gz"} |
Approximate Bayesian computation in population genetics
Beaumont, M.A. and Zhang, W. and Balding, D.J. (2002) Approximate Bayesian computation in population genetics. Genetics, 162 (4). pp. 2025-2035. ISSN 0016-6731. (The full text of this publication is
not available from this repository)
We propose a new method for approximate Bayesian statistical inference on the basis of summary statistics. The method is suited to complex problems that arise in population genetics, extending ideas
developed in this setting by earlier authors. Properties of the posterior distribution of a parameter, such as its mean or density curve, are approximated without explicit likelihood calculations.
This is achieved by fitting a local-linear regression of simulated parameter values on simulated summary statistics, and then substituting the observed summary statistics into the regression
equation. The method combines many of the advantages of Bayesian statistical inference with the computational efficiency of methods based on summary statistics. A key advantage of the method is that
the nuisance parameters are automatically integrated out in the simulation step, so that the large numbers of nuisance parameters that arise in population genetics problems can be handled without
difficulty. Simulation results indicate computational and statistical efficiency that compares favorably with those of alternative methods previously proposed in the literature. We also compare the
relative efficiency of inferences obtained using methods based on summary statistics with those obtained directly from the data using MCMC.
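The rejection-plus-regression scheme of the abstract can be sketched on a toy problem. Everything below is illustrative rather than the authors' method: a Uniform(-5, 5) prior on a normal mean, the sample mean as the single summary statistic, and a plain least-squares slope standing in for the paper's weighted local-linear regression.

```python
import random
import statistics

def abc_regression(observed_stat, n_sims=2000, accept=200, seed=1):
    """Toy ABC: draw theta from the prior, simulate data, keep the
    draws whose summary statistic is closest to the observed one,
    then regression-adjust each kept theta to the observed summary."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n_sims):
        theta = rng.uniform(-5, 5)                        # prior draw
        data = [rng.gauss(theta, 1) for _ in range(30)]   # simulate
        draws.append((theta, statistics.mean(data)))      # summary
    # rejection step: keep the draws with the closest summaries
    draws.sort(key=lambda ts: abs(ts[1] - observed_stat))
    kept = draws[:accept]
    stats = [s for _, s in kept]
    thetas = [t for t, _ in kept]
    # least-squares slope of theta on the summary statistic
    sbar, tbar = statistics.mean(stats), statistics.mean(thetas)
    beta = (sum((s - sbar) * (t - tbar) for t, s in kept)
            / sum((s - sbar) ** 2 for s in stats))
    # linear adjustment toward the observed summary
    return [t + beta * (observed_stat - s) for t, s in kept]
```

The adjustment step is what lets the method tolerate a loose acceptance window: accepted theta values are shifted along the fitted line to where the summary equals its observed value.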
• Depositors only (login required): | {"url":"http://kar.kent.ac.uk/10599/","timestamp":"2014-04-16T08:20:03Z","content_type":null,"content_length":"24157","record_id":"<urn:uuid:d27f6213-6a5d-473d-bd05-e68b6b0bf2f7>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00313-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fraction Concepts
Certain division problems create a need for numbers that are not integers. For example, fractions make it possible to write the solution to 17 ÷ 3 as
17 ÷ 3 = 17/3
When a and b are integers and b ≠ 0, the solution of the division problem a ÷ b can be expressed as the fraction a/b. If 0 ≤ a ≤ b and b ≠ 0, the fraction a/b is called a proper fraction. If a ≥ b ≥ 0 and b ≠ 0, then a/b is called an improper fraction. Improper fractions can also be written as the sum of a whole number and a proper fraction. For example,
17/3 = 5 + 2/3
When the plus sign is omitted and the whole number and fraction are written side by side (5 2/3), the result is called a mixed number.
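The improper-to-mixed rewriting is just division with remainder, which Python's standard fractions module makes easy to illustrate (the helper name is mine):

```python
from fractions import Fraction

def to_mixed(frac):
    """Split a fraction into (whole part, proper fractional part)."""
    whole, rem = divmod(frac.numerator, frac.denominator)
    return whole, Fraction(rem, frac.denominator)
```

`to_mixed(Fraction(17, 3))` gives the whole part 5 and the remainder fraction 2/3, i.e. the mixed number 5 2/3.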
At this grade level, students should learn to identify fractions with models that convey their properties. Proper fractions can be modeled in terms of a “part of a whole.” Here the “whole” may be a group consisting of n objects where “part” of the group consists of k objects and k < n. In such a case, the part is represented by the fraction k/n. Equivalently, the whole may consist of a region that is divided into n congruent parts, k of which belong to a subregion. The fraction k/n then names the part of the region covered by the subregion.
A unit fraction is defined as a fraction with a numerator of 1 (for example, 1/n). The denominator tells how many equal parts the whole is divided into: divide the whole into n equal parts, and one of these smaller parts is the amount represented by the unit fraction 1/n.
[Figure: number line divided for n = 3, showing the unit fraction 1/3]
The fraction m/n is the quotient of m and n, or m ÷ n. If the unit fraction 1/n is taken as the basic part, then m/n is m unit fractions of 1/n: on the number line it is m abutting segments each of length 1/n, that is, m × 1/n.
[Figure: number line divided for m = 5, n = 6, showing the fraction 5/6]
Mixed Numbers
A fraction like
Teaching Model 18.1: Fractions and Regions | {"url":"http://www.eduplace.com/math/mw/models/overview/3_18_1.html","timestamp":"2014-04-16T10:46:27Z","content_type":null,"content_length":"9701","record_id":"<urn:uuid:bcae6eef-b384-4df5-be9f-c98a2297ce16>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00650-ip-10-147-4-33.ec2.internal.warc.gz"} |
From the Not-Sure-I-Really-Want-To-Know Department...
From the Not-Sure-I-Really-Want-To-Know Department…
As readers know, some physicists believe that the universe as we know it is actually a giant hologram, giving us the illusion of three-dimensions, while in fact all the action is occurring on a
two-dimensional boundary region (see here, here, and here)… shadows on the walls of a cave, indeed.
But lest one mistake that for the frontier of freakiness, others (c.f., e.g., here and here) believe that the existence we experience is nothing more (or less) than a Matrix-like simulation…
A common theme of science fiction movies and books is the idea that we’re all living in a simulated universe—that nothing is actually real. This is no trivial pursuit: some of the greatest minds
in history, from Plato, to Descartes, have pondered the possibility. Though, none were able to offer proof that such an idea is even possible. Now, a team of physicists working at the University
of Bonn have come up with a possible means for providing us with the evidence we are looking for; namely, a measurable way to show that our universe is indeed simulated. They have written a paper
describing their idea and have uploaded it to the preprint server arXiv…
Phys.Org has the whole story at “Is it real? Physicists propose method to determine if the universe is a simulation“; the paper mentioned above can be downloaded here.
As we reach for the “reset” button, we might send carefully-calculated birthday greetings to Paul Isaac Bernays; he was born on this date in 1888. A close associate of David Hilbert (of “Hilbert’s
Hotel” fame), Bernays was one the foremost philosophers of mathematics of the Twentieth Century, who made important contributions to mathematical logic and axiomatic set theory. Bernays is perhaps
best remembered for his revision and improvement of the (early, incomplete) set theory advanced by John von Neumann in the 1920s; Bernays’s work, with some subsequent modifications by Kurt Gödel, is
now known as the Von Neumann–Bernays–Gödel set theory.
Lest, per the simulation speculation above suggest that cosmology has a hammerlock on weirdness: Set theory is used, among other purposes, to describe the symmetries inherent in families of
elementary particles and in crystals. Materials such as a liquid or a gas in equilibrium, made of uniformly distributed particles, exhibit perfect spatial symmetry—they look the same everywhere and
in every direction… a condition that “breaks” at very low temperature, when the particles form crystals (which have some symmetry, but less)… Now Nobel Laureate Frank Wilczek has suggested that
there may exist “Time Crystals“– whose structure would repeat periodically, as with an ordinary crystal, but in time rather than in space… a kind of “perpetual motion ‘machine’” (weirder yet, one
that doesn’t violate the laws of thermodynamics). | {"url":"http://roughlydaily.com/2012/10/17/from-the-not-sure-i-really-want-to-know-department/","timestamp":"2014-04-17T09:35:46Z","content_type":null,"content_length":"46396","record_id":"<urn:uuid:5dabbb9e-ee5c-4f1d-82d5-bafe0c60fc78>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00182-ip-10-147-4-33.ec2.internal.warc.gz"} |
simultaneous equations problem
January 25th 2010, 04:18 PM #1
Dec 2009
simultaneous equations problem
for what value of a is there no unique solution for the following ?
3x - 2y = 5
5x + ay = 4
There are two answers, here.
1) No solution AT ALL, certainly would be "No Unique Solution"
2) More than one solution also would be "No Unique Solution"
#1 is trivial: 3*a - (5)*(-2) = 3a + 10 = 0 ==> a = -10/3
Another way to do this would be to put both linear equations into Slope-Intercept form. Setting the slopes equal and solving for 'a' gives the desired result. If the slopes are equal, the lines
must be parallel and there is no common solution, unless...
#2 is a bit trickier, since we must show the two linear equations to represent exactly the same line. Unfortunately, we have a Degrees of Freedom problem. We can make EITHER the slopes equal OR
the y-intercepts equal. We can't do both simultaneously.
Having said that, I sincerely hope the problem statement meant #1.
There is a unique solution if the lines have a single point of intersection,
in which case the gradients must differ.
$\frac{3}{2} \ne -\frac{5}{a}$, for non-parallel lines
$a \ne -\frac{10}{3}$
If $a=-\frac{10}{3}$, there is no unique solution.
If the lines are parallel, there is no unique solution,
as there is no solution, since a solution gives the point of intersection.
A unique solution means the point of intersection of the two lines.
There are two answers, here.
1) No solution AT ALL, certainly would be "No Unique Solution"
2) More than one solution also would be "No Unique Solution"
#1 is trivial: 3*a - (5)*(-2) = 3a + 10 = 0 ==> a = -10/3
Another way to do this would be to put both linear equations into Slope-Intercept form. Setting the slopes equal and solving for 'a' gives the desired result. If the slopes are equal, the lines
must be parallel and there is no common solution, unless...
#2 is a bit trickier, since we must show the two linear equations to represent exactly the same line. Unfortunately, we have a Degrees of Freedom problem. We can make EITHER the slopes equal OR
the y-intercepts equal. We can't do both simultaneously.
No, this can't happen. If we multiply the second equation by $\frac{3}{5}$, we have
$3x- 2y= 5$ and
$3x + \frac{3a}{5} y = \frac{12}{5}$
There is NO value of a which will make the second equation the same as the first.
Having said that, I sincerely hope the problem statement meant #1.
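The answer in the thread can also be double-checked mechanically. A small illustrative sketch (mine, not from the thread), using exact rational arithmetic:

```python
# Mechanical double-check of the answer above: the system
#   3x - 2y = 5,  5x + ay = 4
# loses its unique solution exactly when the coefficient determinant
# 3*a - (-2)*5 vanishes, i.e. a = -10/3.
from fractions import Fraction

a = Fraction(-10, 3)
det = 3 * a - (-2) * 5
print(det)                       # -> 0: parallel lines, no unique solution

# the slopes then agree: 3/2 for the first line, -5/a for the second
print(Fraction(3, 2), -5 / a)    # -> 3/2 3/2
```

Using `Fraction` avoids any floating-point noise in the determinant test.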
Apr 2005 | {"url":"http://mathhelpforum.com/pre-calculus/125450-simultaneous-equations-problem.html","timestamp":"2014-04-18T04:02:06Z","content_type":null,"content_length":"51884","record_id":"<urn:uuid:70e9b9f5-240b-4f3e-ab07-91a400739d28>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00158-ip-10-147-4-33.ec2.internal.warc.gz"} |
Relationship Between Load, Shear, and Moment
The vertical shear at section C in the figure from the previous section (also shown to the right) is taken as

V_C = R_1 - wx = wL/2 - wx

where R_1 = R_2 = wL/2

The moment at C is

M_C = R_1 x - wx(x/2) = (wL/2)x - wx^2/2

If we differentiate M with respect to x:

dM/dx = wL/2 - wx = V

Thus, the rate of change of the bending moment with respect to x is equal to the shearing force, or V = dM/dx.

Differentiating V with respect to x gives

dV/dx = -w

Thus, the rate of change of the shearing force with respect to x is equal to the load, or Load = dV/dx.
Properties of Shear and Moment Diagrams
The following are some important properties of shear and moment diagrams:
1. The area of the shear diagram to the left or to the right of the section is equal to the moment at that section.
2. The slope of the moment diagram at a given point is the shear at that point.
3. The slope of the shear diagram at a given point equals the load at that point.
4. The maximum moment occurs at the point of zero shear. This is in reference to property number 2: when the shear (also the slope of the moment diagram) is zero, the tangent drawn to the moment diagram is horizontal.
5. When the shear diagram is increasing, the moment diagram is concave upward.
6. When the shear diagram is decreasing, the moment diagram is concave downward.
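Properties 2 and 4 can be verified numerically for a simply supported beam with a uniform load, the case treated in the derivation above. The span and load values below are assumed purely for illustration:

```python
# Numerical check of the properties above, for a simply supported beam
# with uniform load w over span L:
#   V(x) = wL/2 - w x,   M(x) = (wL/2) x - w x^2 / 2
L, w, n = 4.0, 10.0, 400
dx = L / n
xs = [i * dx for i in range(n + 1)]
V = [w * L / 2 - w * x for x in xs]
M = [w * L / 2 * x - w * x**2 / 2 for x in xs]

# property 2: the slope of the moment diagram is the shear (dM/dx = V)
for i in range(1, n):
    assert abs((M[i + 1] - M[i - 1]) / (2 * dx) - V[i]) < 1e-6

# property 4: the maximum moment occurs at the point of zero shear
i_max = max(range(n + 1), key=lambda i: M[i])
print(xs[i_max], max(M))         # midspan L/2, moment w L^2 / 8
```

For these values the maximum moment is wL²/8 = 20 at midspan, exactly where V changes sign.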
Sign Convention
The customary sign conventions for shearing force and bending moment are represented by the figures below. A force that tends to bend the beam downward is said to produce a positive bending moment. A
force that tends to shear the left portion of the beam upward with respect to the right portion is said to produce a positive shearing force.
An easier way of determining the sign of the bending moment at any section is that upward forces always cause positive bending moments regardless of whether they act to the left or to the right of
the exploratory section.
Without writing shear and moment equations, draw the shear and moment diagrams for the beams specified in the following problems. Give numerical values at all change of loading positions and at all
points of zero shear. (Note to instructor: Problems 403 to 420 may also be assigned for solution by semi-graphical method describes in this article.) | {"url":"http://www.mathalino.com/reviewer/mechanics-and-strength-of-materials/relation-between-load-shear-and-moment","timestamp":"2014-04-20T00:42:48Z","content_type":null,"content_length":"60731","record_id":"<urn:uuid:76969ac7-418c-4a73-add5-98dbf9dbb565>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00553-ip-10-147-4-33.ec2.internal.warc.gz"} |
Polar Coordinates
Many systems and styles of measure are in common use today. When graphing on a flat surface, the rectangular coordinate system and the polar coordinate system are the two most popular methods for
drawing the graphs of relations. Polar coordinates are best used when periodic functions are considered. Although either system can usually be used, polar coordinates are especially useful under
certain conditions.
The rectangular coordinate system is the most widely used coordinate system. Second in importance is the polar coordinate system. It consists of a fixed point 0 called the pole, or origin. Extending
from this point is a ray called the polar axis. This ray usually is situated horizontally and to the right of the pole. Any point, P, in the plane can be located by specifying an angle and a
distance. The angle, θ, is measured from the polar axis to a line that passes through the point and the pole. If the angle is measured in a counterclockwise direction, the angle is positive. If the
angle is measured in a clockwise direction, the angle is negative. The directed distance, r, is measured from the pole to point P. If point P is on the terminal side of angle θ, then the value of r
is positive. If point P is on the opposite side of the pole, then the value of r is negative. The polar coordinates of a point can be written as an ordered pair ( r, θ). The location of a point can
be named using many different pairs of polar coordinates. Figure 1 illustrates three different sets of polar coordinates for the point P (4,50°).
Figure 1
Polar forms of coterminal angles.
Conversion between polar coordinates and rectangular coordinates uses the relations x = r cos θ and y = r sin θ in one direction, and r^2 = x^2 + y^2 with tan θ = y/x in the other, as illustrated in Figure 2.
Figure 2
Polar to rectangular conversion.
Example 1: Convert P(4,9) to polar coordinates.
The polar coordinates for P (4, 9) are P (9.85, 66°).
Example 2: Convert P (5,20°) to rectangular coordinates.
The rectangular coordinates for P (5,20°) are P (4.7, 1.7).
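The two worked examples can be reproduced with a few lines of code (an illustrative sketch, not part of the original article):

```python
# Check of Examples 1 and 2, using r = sqrt(x^2 + y^2),
# theta = atan2(y, x) and x = r cos(theta), y = r sin(theta):
import math

def to_polar(x, y):
    return math.hypot(x, y), math.degrees(math.atan2(y, x))

def to_rect(r, theta_deg):
    t = math.radians(theta_deg)
    return r * math.cos(t), r * math.sin(t)

r, theta = to_polar(4, 9)             # Example 1: P(4, 9)
print(round(r, 2), round(theta, 1))   # -> 9.85 66.0
x, y = to_rect(5, 20)                 # Example 2: P(5, 20 degrees)
print(round(x, 1), round(y, 1))       # -> 4.7 1.7
```

`atan2` handles the quadrant automatically, which a bare arctangent of y/x would not.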
Example 3: Transform the equation x^2 + y^2 + 5x = 0 to polar coordinate form.
Substituting x^2 + y^2 = r^2 and x = r cos θ gives r^2 + 5r cos θ = 0, that is, r(r + 5 cos θ) = 0.
The equation r = 0 is the pole. Thus, keep only the other equation, r = -5 cos θ.
Graphs of trigonometric functions in polar coordinates are very distinctive. In Figure 3 , several standard polar curves are illustrated. The variable a in the equations of these curves determines
the size (scale) of the curve.
Figure 3
Graphs of some common figures in polar form. | {"url":"http://www.cliffsnotes.com/math/trigonometry/polar-coordinates-and-complex-numbers/polar-coordinates","timestamp":"2014-04-19T20:18:54Z","content_type":null,"content_length":"111613","record_id":"<urn:uuid:14744c19-84cf-40ae-8fde-b0d0e306a2a1>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00100-ip-10-147-4-33.ec2.internal.warc.gz"} |
How Would We Count?
Copyright © University of Cambridge. All rights reserved.
Why do this problem?
This activity enables the adult to learn something of how children visualise numbers when counting and/or adding. The activity is also a good catalyst for children having a discussion together and sharing their
ideas, as it is easy to see that there is no "right" way of going about it. A key element of this is the adult creating an encouraging environment where children feel comfortable to share their thinking.
Possible approach
Display the 'blue' image so that everyone can see it - it may be helpful to use a whiteboard, if one is available. The aim is to encourage the children to 'say' what is in their heads about the
counting such as,
'I can see three lines. One is a line of six and one is a line of five and there's a line of four and then there is these other ones in between.'
They may well see something different to what you see. Be prepared for surprises!
A valuable discussion can then follow with the children saying what they think about others' 'say' about their counting.
Here are some further examples that can be used in a similar way.
Key questions
How many?
How do you know?
What did you do?
How would you check?
What made you decide to change your way of counting this time?
Possible extension
A handful of pebbles scattered on to a tray can lead to similar questions and discussions.
The children could then go on to try counting other things that they come across in the real world such as seeds on a sunflower, patterns on clothing, flowers in a flower-bed or spots on different
animals' skin.
Possible support
A handful of pebbles scattered on to a tray may help and be easier to count than dots on a page.
The difficulty is often keeping track of what has been counted and what there is still to count. See whether the children can devise strategies to help such as making a small dot beside the dots as
they count or moving pebbles from one side of the tray to the other. Counting is not as easy as you might think!
Try working one-to-one and break the activity down into small steps.
Use 'we' rather than 'you' and 'you may like to start with...'
You could say: 'We want to find out how many dots there are.....can we do that?' | {"url":"http://nrich.maths.org/8123/note?nomenu=1","timestamp":"2014-04-16T10:11:39Z","content_type":null,"content_length":"6142","record_id":"<urn:uuid:e781ef59-0759-403a-a6e7-d9efc1fed17a>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00391-ip-10-147-4-33.ec2.internal.warc.gz"} |
Antenna, transmission line
I want to clarify some notions about antennas. The major components of an antenna system are a generator, a transmission line and an antenna (dipole). Let's say the whole system is perfectly matched, i.e. the dipole is about 70 ohms, the same as the transmission line. My generator's output voltage is an impulse with a duration of one microsecond (let's say 1 µs). So, the frequency of this
generator is 1 Mhz? If the impluse is a square, the main fundamental frequency is 1 MHz and also , with others harmonics, I means if I do the Fourier Transformation am I right? | {"url":"http://www.physicsforums.com/showthread.php?t=42262","timestamp":"2014-04-18T21:25:08Z","content_type":null,"content_length":"31764","record_id":"<urn:uuid:0dd9d93e-56cc-4e13-ac11-00b8e7122557>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00340-ip-10-147-4-33.ec2.internal.warc.gz"} |
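The Fourier reasoning in the question above can be checked numerically. The sketch below (my own, not from the thread) samples a periodic square wave with a 1 µs period and confirms that its strongest spectral lines sit at 1 MHz and the odd harmonics; a single isolated 1 µs pulse would instead have a continuous sinc-shaped spectrum rather than a 1 MHz line.

```python
# A periodic square wave with a 1 microsecond period has its fundamental
# at 1 MHz plus odd harmonics.  Working in microseconds makes the FFT
# bin frequencies come out directly in MHz.
import numpy as np

T = 1.0                          # period: 1 us
n = 4096
dt = 16 * T / n                  # sample 16 full periods
t = np.arange(n) * dt
sig = np.sign(np.sin(2 * np.pi * t / T))   # +/-1 square wave

spec = np.abs(np.fft.rfft(sig))
freqs_mhz = np.fft.rfftfreq(n, d=dt)

# the three strongest spectral lines, in MHz
top = sorted(float(f) for f in freqs_mhz[np.argsort(spec)[-3:]])
print(top)                       # -> [1.0, 3.0, 5.0]
```

Sampling an exact whole number of periods avoids spectral leakage, so the harmonics land on exact FFT bins.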
Fluid Mechanics program help
03-08-2008 #1
Registered User
Join Date
Mar 2008
Fluid Mechanics program help
Hey all, I have a programming assignment in school. I need some mathematical help with it.
I don't want you to do the whole programming assignment for me. I just have a problem with how the calculation is done for the given sample. If I understand how the calculation is done, I can sit down and write the code myself. Thanks a lot.
The Walid acqua Company (WAC) manages water storage facilities. They are considering a system of storing water in a series of connected vertical tanks. Each tank has a horizontal cross-sectional
area of one square meter, but the tanks have different heights. The base of each tank is at ground level. Each tank is connected by a pipe to the previous tank in the series and by another pipe
to the next tank in the
series. The pipes connecting the tanks are level and are at increasing heights (that is, the pipe connecting tank i to tank i+1 is at a higher level than the pipe connecting tank i to tank i-1.)
Tank 1 is open so that air and water can flow into it freely at the top. All the other tanks are closed so that air and water can flow in and out only through the connecting pipes. The connecting
pipes are large enough that water and air can
flow through them freely and simultaneously but small enough that their dimensions can be ignored in this problem.
The series of tanks is filled by pouring water slowly into the top of tank 1, continuing until the water level reaches the top of tank 1.
As the water level rises above the connecting pipes, water flows among the tanks. WAC needs a program to compute the cubic meters of water that can be poured into the series of tanks before the
water level reaches the top of tank 1.
The figure below illustrates a simple case involving only two tanks. After the filling procedure is completed, the air in the upper part of the second tank is compressed (its air pressure is
greater than one atmosphere), so the water level in the second tank is lower than the water level in the first tank.
The following physical principles are helpful in solving this problem (some of these are approximations that are acceptable for the purposes of this problem):
1. Water flows downhill.
2. In an open space, the air pressure is equal to one atmosphere.
3. Air is compressible (the volume occupied by a given amount of air depends on pressure). Water is not compressible (the volume occupied by a given amount of water is constant, independent of
4. Air pressure is the same everywhere within a closed space. If the volume of the closed space changes, the product of the volume and the air pressure within the space remains constant. For
example, suppose an enclosed airspace has an initial volume V1 and pressure P1. If the volume of the airspace changes to V2, then the new pressure P2 satisfies P1V1 = P2V2.
5. In a column of water below an airspace, the water pressure at a level D meters below the water surface is equal to the air
pressure at the surface plus 0.097·D atmospheres. This is true regardless of whether the airspace is open or enclosed.
6. In a connected body of water (for example, when two or more tanks are connected by pipes below the water line), the water
pressure is constant at any given level.
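Principles 4 and 5 amount to two lines of arithmetic; a tiny illustration (the numbers below are arbitrary examples, not part of the assignment input):

```python
# Numeric illustration of principles 4 and 5, working in atmospheres:
P1, V1 = 1.0, 4.0                # trapped air: 1 atm in 4 m^3
V2 = 2.74                        # same air compressed to 2.74 m^3
P2 = P1 * V1 / V2                # principle 4 (Boyle): P1 V1 = P2 V2
print(round(P2, 3))              # -> 1.46

depth = 6.0                      # metres below an open water surface
print(round(1.0 + 0.097 * depth, 3))   # principle 5 -> 1.582
```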
The input consists of several test cases representing different series of water tanks. Each test case has three lines of data. The first line
contains an integer N (2 ≤ N ≤ 10) which is the number of tanks in the test case. The second line contains N positive floating point
numbers that are the heights, in meters, of tanks 1 through N. The third line contains N-1 floating point numbers. On this line, the kth
number represents the height above the ground of the pipe that connects tank k and tank k+1. The numbers on the third line are
increasing (each number is greater than the preceding number).
The last test case is followed by a line containing the integer zero.
For each test case, print a line containing the test case number (beginning with 1) followed by the amount of water, in cubic meters,
that can be poured into tank 1 before the water level reaches the top of tank 1. Print the results with three digits to the right of the
decimal point.
Print a blank line after the output for each test case. Use the format of the sample output.
Sample input
2
10.0 8.0
4.0
0
Sample output for the input above:
Case 1: 15.260
Last edited by Hajjo; 03-08-2008 at 09:13 AM.
SPOJ problems have an http address, maybe try posting a link to that problem. And besides the whole purpose of the SPOJ site, is that you dissect and analyze a problem and that is why it accepts
a variety of languages as solutions, however C tends to provide the fastest...
goto johny_walker_red_label;
johny_walker_blue_label: exit(-149$);
johny_walker_red_label : exit( -22$);
A typical example of ...cheap programming practices.
I am not getting what you are saying.
Can you just show me the mathematical calculation for the sample?
There are 2 tanks: the height of tank 1 is 10.0 and the height of tank 2 is 8.0. The pipe is at height 4.0.
15.260 cubic meters can be poured into tank 1 before the water level reaches the top of tank 1.
|~~~~~~~~~~~| |P |
|:::::::::::| | |
|:::::::::::| |~~~~~~~~~~~|
|:::::::::::| |:::::::::::|
|::::::::::: ===::::::::::::|
|:::::::::::| |:::::::::::|
------------- -------------
tank1 tank2
Unfortunately, tank2 is broken, but I think the below equations are enough to solve the problem:
P = (rho * g * delta_h_water) + 1 atmosphere
V = pi r^2 * h
P2 = P1V1/V2
|~~~~~~~~~~~| |P |
|:::::::::::| | |
|:::::::::::| |~~~~~~~~~~~|
|:::::::::::| |:::::::::::|
|::::::::::: ===::::::::::::|
|:::::::::::| |:::::::::::|
------------- -------------
tank1 tank2
Unfortunately, tank2 is broken, but I think the below equations are enough to solve the problem:
P = (rho * g * delta_h_water) + 1 atmosphere
V = pi r^2 * h
P2 = P1V1/V2
Can you tell me what each one of those variables are?
Thanks for the help. Is that how the computation is done?
Can you show it on the given example?
Last edited by Hajjo; 03-08-2008 at 09:16 AM.
If you use SI:
P = pressure in N/m^2
rho = density in kg/m^3
h = height in m
V = volume in m^3
The math isn't that hard. Try two tanks first, with a pen and a piece of paper. If you get stuck, show how far you've got and where you got stuck.
what is g , delta_h_water and r?
g is 0.097 like given..and delta_h_water 4.0(considering the pipe is 4 meter above ground)..and what is r?
g = 9.8
r = the radius of the tank
delta = the difference between one thing and another (of the same kind)
rho is density, how can I get that number?
v = 3.14 x 0.5 * 0.5* 10 for tank 1 which is equal to 7.85
v = 3.14 * 0.5 *0.5 * 8 for tank 2 which is 6.sthg..
whats the use of p's? whats rho? its 1???
delta_h_water is 4 right?
I'm still clueless, I need more hints.
Thanks a lot heras
1 what? Does 1 cubic meter of water weigh 1 kg? How much does 1 liter weigh?
delta_h_water is the relative difference in water levels between the tanks.
The volume V you care about is that of the trapped air in tank2. It is under pressure P.
1 cubic meter is 1000 kg
p2 = 1000 * 9.8 * (10 - 1.72) +1 = s
v20 = pi * pow(r, 2) * h; = 3.14 * 0.5*0.5 * 8 = 6.28
p2 = x and v21 = 8 - 6.28 = 1.72
v1 =( p2 * v21)/p1 = s * 1.72/p1
is this the way to solve it?
the relative difference how can I bring it. V1 is height 10, so i can fill 10..while v2 i can fill 1.72 as 6.28 is trapped air...
Hajjo, please note that I'm a C noob and am here for help also, so I do not understand your implementation or where all those magic numbers came from. I can only help you with the 'pen and paper'
math. Additionally, I only skimmed your opening post the first time so I missed that "Each tank has a horizontal cross-sectional area of one square meter". This means that you may replace pi r^2
with 1 m^2 (which I will call A). The following is irrelevant but I'll point it out anyway:
r = ((1 m^2 / pi)^0.5) = 0.564... and not 0.5
P1 = 1 atm (~100,000 N/m^2)
V1 = h1 * A
V2 = h2 * A
P2 = 1000 * 9.8 * h2 + P1
--------- ---
|~~~~~~~|---| | ^
| | ^ | air | |
| | | | | |
| | h2| | |
| | | | | h1
| |---|~~~~~~~| |
| water | | | |
| | | | |
| === | ---
| | | water |
| | | |
--------- ---------
Last edited by heras; 03-09-2008 at 12:48 PM.
Man, thanks for all the help you're providing. But I still can't do it.
Can you please show me how it's done, so I can get on with the programming? I pretty much wasted big time thinking about the mathematical part.
Please, thanks.
If you calculate p2, it's a huge number.
h2 is 8..
8 * 9.8 * 1000 + 1 is a huge number..
v1 = 10 * 0.564
v2 = 8 * 0.564
p1 = 1
p1v1 is not equal to p2v2..in this case..
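For what it's worth, the two-tank sample can be solved numerically straight from principles 4 to 6 of the problem statement, staying in the assignment's own units (atmospheres, with 0.097 atm per metre of depth). The sketch below is my own, not from the thread; the unknown is the final water level x in tank 2, found by balancing the water pressure at the pipe level.

```python
# Numeric solution of the two-tank sample (heights 10 and 8, pipe at 4,
# cross-section 1 m^2).  Tank 2 traps air above the pipe at 1 atm; as
# tank 1 fills to the top, that air is compressed per Boyle's law.

def pressure_from_tank1(top_level, pipe_h):
    # open tank: 1 atm at the surface, plus 0.097 atm per metre of depth
    return 1.0 + 0.097 * (top_level - pipe_h)

def pressure_from_tank2(x, tank2_h, pipe_h):
    # air trapped above the pipe starts at 1 atm in (tank2_h - pipe_h) m^3
    air_p = (tank2_h - pipe_h) / (tank2_h - x)   # Boyle: P1 V1 = P2 V2
    return air_p + 0.097 * (x - pipe_h)

def solve_two_tanks(h1=10.0, h2=8.0, pipe=4.0, tol=1e-9):
    target = pressure_from_tank1(h1, pipe)
    lo, hi = pipe, h2 - 1e-12        # tank-2 level lies between pipe and top
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if pressure_from_tank2(mid, h2, pipe) < target:
            lo = mid                 # too little compression: raise the level
        else:
            hi = mid
    x = 0.5 * (lo + hi)
    return h1 + x                    # cross-section is 1 m^2, so volume = level

print("Case 1: %.3f" % solve_two_tanks())   # -> Case 1: 15.260
```

This reproduces the sample output 15.260: tank 1 holds 10 m³ and tank 2's level settles near 5.26 m.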
Mar 2008 | {"url":"http://cboard.cprogramming.com/c-programming/100078-fluid-mechanics-program-help.html","timestamp":"2014-04-20T01:05:59Z","content_type":null,"content_length":"99885","record_id":"<urn:uuid:1d6db3e9-15e3-446e-b10d-bab5cf1a9d83>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00469-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: st: Correlation of Dummy and Metric Variables?
Re: st: Correlation of Dummy and Metric Variables?
From Stas Kolenikov <skolenik@gmail.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: Correlation of Dummy and Metric Variables?
Date Mon, 21 Sep 2009 14:20:37 -0500
In psychometrics, there are concepts of polychoric and polyserial
correlations. The first one is between two ordinal variables, and the
second one is between an ordinal variable and a continuous variable.
If your variables are truly nominal (like gender or geography), then
the correlations are likely meaningless, although you can meaningfully
ask whether the distributions of the continuous variables differ
between the values of the discrete variable (answered by ANOVA,
Kruskal-Wallis test and such). I wrote -polychoric- package some while
ago that computes these correlations.
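As a side note on the dummy-versus-continuous case: the ordinary Pearson correlation between a 0/1 dummy and a metric variable is the point-biserial correlation, which simply re-expresses the difference in group means. An illustrative check (mine, in Python rather than Stata):

```python
# The Pearson correlation of a 0/1 dummy with a metric variable equals
# the point-biserial correlation, a rescaled difference in group means.
import numpy as np

rng = np.random.default_rng(0)
dummy = rng.integers(0, 2, size=1000)
metric = 2.0 * dummy + rng.normal(size=1000)   # group means differ by 2

r = np.corrcoef(dummy, metric)[0, 1]

# point-biserial formula: (m1 - m0) / s * sqrt(p * (1 - p)),
# with s the (population) standard deviation of the metric variable
m1 = metric[dummy == 1].mean()
m0 = metric[dummy == 0].mean()
p = dummy.mean()
rpb = (m1 - m0) / metric.std() * np.sqrt(p * (1 - p))

print(abs(r - rpb) < 1e-12)   # -> True
```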
On Mon, Sep 21, 2009 at 9:45 AM, Christian Weiß <mail@cweiss.org> wrote:
> Dear Statalist,
> although it's not a particularly Stata specific question , I am hoping
> to get advise on the following (basic?) question:
> I am using the following command to get a correlation matrix
> quietly estpost correlate `vars', matrix
> esttab using correlations.csv, not unstack compress noobs star(* 0.10
> ** 0.05 *** 0.01) long b(%9.2f) replace
> `vars' containts a battery of mostly metric variables. Besides the
> metric variables, there is also three dummy variables.
> I am wondering now if the reported (relatively high) correlation
> coefficients among the dummy variables and between some of the metric
> variables and the dummy variables are actually meaningful. How to
> interpret them / which correlation test to use?
Stas Kolenikov, also found at http://stas.kolenikov.name
Small print: I use this email account for mailing lists only.
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2009-09/msg00848.html","timestamp":"2014-04-20T05:51:24Z","content_type":null,"content_length":"7369","record_id":"<urn:uuid:2d7299a6-ea34-4494-8c6c-3634131d7698>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00286-ip-10-147-4-33.ec2.internal.warc.gz"} |
Structural insight
Just a small structural insight into US politics (as advertised).
Republicans in Congress have allowed their agenda to be set by President Obama.
Republicans in Congress are being obstructionist; this shouldn't be a controversial statement, since historically they haven't made a secret of it (though, in something rather like a Catch-22, the
insight I'm heading for makes it natural to expect disagreement on this along party lines). But what one ought to be asking is why.
It's simple, really. When an administration comes in, the opposition usually aligns itself squarely against the central priority of the new administration. Although this
may be a disagreement that predates the new administration, a sadder scenario —for all parties, and for the electorate— is that the opposition may be in disarray and simply not have any better focus
than, well, opposing. On secondary issues there may be all kinds of cooperation, but by default, not on that central priority.
And here we have a president who was really pretty explicit, before elected, that his central priority is
bipartisanship. It's the message that got him national attention in the first place: just because we have disagreements doesn't mean we can't cooperate.
Now, consider how one would go about opposing that priority, and compare it to the current situation.
And, to see the other side of the coin, consider how one would go about pursuing
that priority — in the face of opposition to it.
[On the phone] There's a man here with some sort of a parasite on his arm, assimilating his flesh at a frightening speed. I may have to get ahead of it and amputate. No... I don't know what it is
or where it came from.
— Dr. Hallen, The Blob
I took some tolerably advanced math courses in college and graduate school. My graduate research group of choice was the Theory Umbrella Group (THUG), joint between the math and computer science
departments. But one thing I never encountered in any of those courses, nor that-I-recall even in the THUG talks, was a type. Sets aplenty, but not types. Types seem to arise from specifically
studying computer science, mathematics proper having no native interest in them. There are the "types" in Russell and Whitehead's Principia Mathematica, but those don't seem to me to have anything
really to do with types as experienced in programming.
Yet, over in the computer science department, we're awash in types. They're certainly used for reasoning about programs (both practically and theoretically) — but at some point our reasoning may
become more about the types themselves than about the programs they apply to. Type systems can be strikingly reminiscent of bureaucratic red tape when one is getting tangled up in them. So, if they
aren't a natively mathematical concept, why are they involved in our reasoning at all? Are they natural to what we're reasoning about (programs), or an unfortunate historical artifact? From the other
side, is reasoning in mathematics simpler because it doesn't use types, or does it not need to use types because what it's reasoning about is simpler?
Representation format
Looking back at the early history of programming, types evidently arose from the need to keep track of what format was being used by a given block of binary data. If a storage cell was assigned a
value using a floating-point numerical representation, and you're trying to treat it as a series of ASCII characters, that's probably because you've lost track of what you meant to be doing. So we
associate format information with each such cell. Note that we are not, at this point, dealing directly with the abstract entities of mathematics, but with sequences of storage bits, typically
fixed-width sequences at that. Nor does the type even tell us about a sort of mathematical entity that is being stored, because within the worldview presented by our programming language, we aren't
storing a mathematical entity, we're representing a data value. Data values are more abstract than bit sequences, but a lot less abstract than the creatures we'd meet in the math department. The
essential difference, I'd suggest, is that unlike their mathematical cousins, data values carry about with them some of their own representation format, in this case bit-level representation format.
A typical further development in typing is user-defined (which is to say, programmer-defined) types. Each such type is still stored in a sequence of storage bits, and still tells us how the storage
is being used to represent a data value, rather than store a mathematical entity. There is a significant difference from the earlier form of typing, in that the language will (almost certainly)
support a practically infinite number of possible user-defined types, so that the types themselves have somewhat the character of mathematical abstract entities, rather than data values (let alone
bit sequences). If, in fact, mathematics gets much of its character by dealing with its abstract entities unfettered by representational issues (mathematics would deal with representation itself as
just another abstract domain), a computer scientist who wants that character will prefer to reason as much as possible about types rather than about data values or storage cells.
Another possible development in typing, orthogonal to user-defined types, is representation-independence, so that the values constrained by types are understood as mathematical entities rather than
data values. The classic example is type bignum, whose values are conceptually mathematical integers. Emphasis on runtime efficiency tends to heighten awareness of representational issues, so one
expects an inverse relation between that emphasis, and likelihood of representation-independent types. It's not a coincidence that bignums flourish in Lisp. Note also that a key twist in the
statement of the expression problem is the phrase "without recompiling existing code".
Complicated type systems as crutches
Once we have types, since we're accustomed to thinking about programs, we tend to want to endow our type systems with other properties we know from our programming models. Parametric types. Dependent
types. Ultimately, first-class types.
I've felt the lure of first-class types myself, because they abandon the pretense that complicated types systems aren't treating types computationally. There's an incomplete language design in my
files wherein a type is an object with two methods, one for determining membership and one for determining sub/supertyping. That way leads to unbounded complications — the same train of thought has
led me more recently to consider tampering with incompleteness of the continuum (cf. Section 8.4.2 of my dissertation; yet another potential blog topic [later did blog on this, here]). As soon as I
envisioned that type system, I could see it was opening the door to a vast world of bizarre tricks that I absolutely didn't want. I really wanted my types to behave as mathematical sets, with stable
membership and transitive subtyping — and if that's what you want, you probably shouldn't try to get there by first giving the methods Turing power and then backing off from it.
But, from the above, I submit that these complicated type systems are incited, to begin with, when we start down the slippery slope by
• tangling with data values —halfway between the abstract and concrete worlds— instead of abstract mathematical entities, and
• placing undue emphasis on types, rather than the things they describe. This we did in the first place, remember, because types were more nearly mathematical; the irony of that is fairly intense.
In contrast to the muddle of complicated typing in computer science, folks over in the math department deal mostly with sets, a lightweight concept that fades comfortably into the background and only
has to be attended carefully under fairly extreme circumstances. Indeed, contrasting types and sets, a major difference between them is that types have object identity — which is itself a borderline
representational concept (able to come down on either side of the line), and jibes with experience that sophisticated types become data structures in their own right. Yes, there are such things as
types that don't have object identity; but somehow it seems we've already crossed the Rubicon on that one, and can no longer escape from the idea even in languages that don't endorse it.
Where next?
What we need, it seems, is the lightweight character of mathematical reasoning. There's more to it than mathematical "purity"; Haskell is fairly pure, but tbh I find it appallingly heavy. I find no
sense of working with simple primitives — it feels to me more like working on a scaffold over an abyss. In mathematics, there may be several different views of things any one of which could be used
as a foundation from which to build the others. That's essentially perfect abstraction, in that from any one of these levels, you not only get to ignore what's under the hood, but you can't even tell
whether there is anything under the hood. Going from one level to the next leaves no residue of unhidden details: you could build B from A, C from B, and A from C, and you've really gotten back to A,
not some flawed approximation of it that's either more complicated than the original, more brittle than the original, or both.
Making that happen in a language design should involve some subtle shifts in the way data is conceptualized. That isn't a digression in a discussion of types, because the way we conceptualize data
has deep, not to say insidious, effects on the nature of typing. As for the types themselves, I suggest we abandon the whole notion of types in favor of a lightweight mathematical notion of sets —
and avoid using the word "type" as it naturally drags us back toward the conceptual morass of type theory that we need to escape.
Here are two problems in programming language design that are often treated as if they had to be traded off against each other. I've found it enormously productive to assume that high-level tradeoffs
are accidental rather than essential; that is, to assume that if only we find the right vantage to view the problems, we'll see how to have our cake and eat it too. A good first step toward finding a
fresh vantage on a problem is to eliminate unnecessary details and assumptions from the statement of the problem. So here are spare, general statements of these two problems.
• Allow maximally versatile ways of doing things, with maximal facility.
• Disallow undesirable behavior.
I've been accused of promoting unmanageable chaos because my publicly visible work (on Kernel and fexprs) focuses on the first problem with some degree of merry disregard for the second. So here I'll
explain some of my thoughts on the second problem and its relationship to the first.
How difficult are these problems? One can only guess how long it will actually take to tame a major problem; there's always the chance somebody could find a simple solution tomorrow, or next week.
But based on their history, I'd guess these problems have a half-life of at least half a century.
To clarify my view of these problems, including what I mean by them, it may help to explain why I consider them important.
Allowing is important because exciting, new, and in any and all senses profitable innovations predictably involve doing things that hadn't been predicted. Software technology needs to grow
exponentially, which is a long-term game; in the long term, a programming language either helps programmers imagine and implement unanticipated approaches, or the language will be left in the dust by
better languages. This is a sibling to the long-term importance of basic research. It's also a cousin to the economic phenomenon of the Long Tail, in which there's substantial total demand for all
individually unpopular items in a given category — so that while it would be unprofitable for a traditional store to keep those items in stock, a business can reap profits by offering the whole range
of unpopular items if it can avoid incurring overhead per item.
Disallowing is important because, bluntly, we want our programs to work right. A couple of distinctions immediately arise.
• Whose version of "right" are we pursuing? There's "right" as understood by the programmer, and "right" as understood by others. A dramatic divergence occurs in the case of a malicious programmer.
Of course, protecting against programmer malfeasance is especially challenging to reconcile with the allowing side of the equation.
• Some things we are directly motivated to disallow, others indirectly. Direct motivation means that thing would in itself do something we don't want done. Indirect motivation means that thing
would make it harder to prove the program doesn't do something we don't want done.
If allowing were a matter of computational freedom, the solution would be to program in machine code. It's not. In practice, a tool isn't versatile or facile if it cannot be used at scale. What we
can imagine doing, and what we can then work out how to implement, depends on the worldview provided by the programming language, within which we work, so allowing depends on this worldview. Nor is
the worldview merely a matter of crunching data — it also determines our ability to imagine and implement abstractions within the language — modulating the local worldview, within some broader
metaphysics. Hence my interest in abstractive power (on which I should blog eventually [note: eventually I did]).
How ought we to go about disallowing? Here are some dimensions of variation between strategies — keeping in mind, we are trying to sort out possible strategies, rather than existing strategies (so
not to fall into ruts of traditional thinking).
• One can approach disallowance either by choosing the contours of the worldview within which the programmer works, or by imposing restrictions on the programmer's freedom to operate within the
worldview. The key difference is that if the programmer thinks within the worldview (which should come naturally with a well-crafted worldview), restriction-based disallowance is directly
visible, while contour-based disallowance is not. To directly see contour-based disallowance, you have to step outside the worldview.
To reuse an example I've suggested elsewhere: If a Turing Machine is disallowed from writing on a blank cell on the tape, that's a restriction (which, in this case, reduces the model's
computational power to that of a linear bounded automaton). If a Turing Machine's read/write head can move only horizontally, not vertically, that's a contour of the worldview.
• Enforcement can be hard vs soft. Hard enforcement means programs are rejected if they do not conform. Soft enforcement is anything else. One soft contour approach is the principle I've blogged
about under the slogan dangerous things should be difficult to do by accident. Soft restriction might, for example, take the form of a warning, or a property that could be tested for (either by
the programmer or by the program).
• Timing can be eager vs lazy. Traditional static typing is hard and eager; traditional dynamic typing is hard and lazy. Note, eager–lazy is a spectrum rather than a binary choice. Off hand, I
don't see how contour-based disallowance could be lazy (i.e., I'd think laziness would always be directly visible within the worldview); but I wouldn't care to dismiss the possibility.
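To pin down the hard/soft and eager/lazy axes, a small sketch (my own illustration in Python; nothing here is from the post):

```python
import warnings

# Hard + lazy enforcement: the violation is rejected outright, but only when
# execution actually reaches it (the shape of traditional dynamic typing).
def divide_hard(a: float, b: float) -> float:
    if b == 0:
        raise ZeroDivisionError("b must be nonzero")
    return a / b

# Soft + lazy enforcement: the violation is flagged, but the program proceeds.
def divide_soft(a: float, b: float) -> float:
    if b == 0:
        warnings.warn("b is zero; substituting infinity")
        return float("inf")
    return a / b

# Hard + eager enforcement would correspond to a static checker rejecting the
# program before it runs at all, so there is nothing to demonstrate at runtime.
```

Contour-based disallowance, by contrast, would not show up as a check in the code at all: it would consist of there being no way, within the worldview, to arrive at a zero divisor in the first place.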
All of which is pretty straightforward. There's another dimension I'm less sure how to describe. I'll call it depth. Shallow disallowance is based on simple, locally testable criteria. A flat type
system, with a small fixed set of data types that are mutually exclusive, is very shallow. Deep disallowance is based on more sophisticated criteria that engage context. A polymorphic function type
has a bit of depth to it; a proof system that supports sophisticated propositions about code behavior is pretty deep.
Shallow vs deep tends to play off simplicity against precision. Shallow disallowance strategies are simple, therefore easily understood, which makes them more likely to be used correctly and
—relatively— less likely to interfere with programmers' ability to imagine new techniques (versatility/facility of allowance). However, shallow disallowance is a blunt instrument, that cannot take
out a narrow or delicately structured case of bad behavior without removing everything around it. So some designers turn to very deep strategies —fully articulated theorem-proving, in fact— but
thereby introduce conceptual complexity, and the conceptual inflexibility that tends to come with it.
Recalling my earlier remark about tradeoffs, the tradeoffs we expect to be accidental are high-level. Low-level tradeoffs are apt to be essential. If you're calculating reaction mass of a rocket,
you'd best accept the tradeoff dictated by F=ma. On the other hand, if you step back and ask what high-level task you want to perform, you may find it can be done without a rocket. With disallowance
depth, deep implies complex, and shallow implies some lack of versatility; there's no getting around those. But does complex disallowance imply brittleness? Does it preclude conceptual clarity?
One other factor that's at play here is level of descriptive detail. If the programming language doesn't specify something, there's no question of whether to disallow some values of it. If you just
say "sort this list", instead of specifying an algorithm for doing so, there's no question —within the language— of whether the algorithm was specified correctly. On the other hand, at some point
someone specified how to sort a list, using some language or other; whatever level of detail a language starts at, you'll want to move up to a higher level later, and not keep respecifying
lower-level activities. That's abstraction again. Not caring what sort algorithm is used may entail significantly more complexity, under the hood, than requiring a fixed algorithm — and again, we're
always going to be passing from one such level to another, and having to decide which details we can hide and how to hide them. How all that interacts with disallowance depth may be critical: can we
hide complex disallowance beneath abstraction barriers, as we do other forms of complexity?
Merry disregard
You may notice I've had far more to say about how to disallow, than about how to allow. Allowing is so much more difficult, it's hard to know what to say about it. Once you've chosen a worldview, you
have a framework within which to ask how to exclude what you don't want; but finding new worldviews is, rather by definition, an unstructured activity.
Moreover, thrashing about with specific disallowance tactics may tend to lock you in to worldviews suited to those tactics, when what's needed for truly versatile allowing may be something else
entirely. So I reckon that allowing is logically prior to disallowing. And my publicly visible work does, indeed, focus on allowing with a certain merry disregard for the complementary problem of
disallowing. Disallowing is never too far from my thoughts; but I don't expect to be able to tackle it properly till I know what sort of allowing worldview it should apply to.
Seminar Schedule
Location: Math Dept Seminar Room (25-208B)
Time: Thursdays, 12:10pm-1:00pm
The applied math seminar is open to any and all interested participants. We discuss topics in which mathematics is used to solve a wide range of scientific and engineering problems. Applied math
faculty will present some of the seminars, but faculty from other departments are also encouraged to discuss their projects in this seminar series.
Date | Speaker | Title
Nov. 5, 2008 | Colleen Kirk | Numerical Approaches to Blow-up Problems
Dec. 3, 2008 | Paul Choboter | Wind-driven coastal upwelling: How deep is the source of upwelled water
March 4, 2009 | Al Jimenez | Computer Solutions of Linear Equations: Detecting when solutions are inaccurate.
April 23, 2009 | Charles Camp | The Science and Mathematics of Climate
June 4, 2009 | Paul Choboter | Exact solutions of upwelling and downwelling over sloping topography
Previous years' seminars:
Winter and Spring 2006
Fall 2006 - Spring 2007 Fall 2007 - Spring 2008
To schedule a seminar, please contact
Charles D. Camp
Archives of the Caml mailing list > Message from Andreas Rossberg
Re: [Caml-list] Polymorphic Variants and Number Parameterized Types
Date: -- (:)
From: Andreas Rossberg <rossberg@p...>
Subject: Re: [Caml-list] Re: Encoding "abstract" signatures
Hi François,
Francois Pottier wrote:
> > This is just one reason. More generally, it's the need for a coherent
> > encoding in the higher-order setting we face. If we say that type
> >
> > functor(X : sig type t val x : t end) -> ...
> >
> > maps to something like
> >
> > forall t. t -> ...
> >
> > then consequently
> >
> > functor(Y : sig module type T end) -> functor(X : Y.T) -> ...
> >
> > must map to some type that yields the above as the result of some sequence
> > of applications.
> Oh, I see what you mean. It's a good point. But still I think I can encode
> the second functor as
> forall T. () -> T -> ...
> (where (), the empty structure type, corresponds to Y and T corresponds to X)
> which, when applied to an empty structure, yields
> forall T. T -> ...
> as expected (provided the ``forall'' quantifier doesn't get in the way of
> application, i.e. polymorphic instantiation and abstraction are transparent,
> as in ML).
OK, consider applying
module type T = sig type t type u val x : t * u end
Then, in the encoding, application should yield the result type
forall t,u. t * u -> ...
So you cannot simply `reuse' T's quantifier. Also note that in general
the quantifier(s) in question might be buried deep inside the RHS of the
arrow, even in contravariant position. (Moreover, it is not obvious to
me whether we could use implicit type application, because polymorphism
is first-class (a field or argument of functor type would be
> > functor in question would not differ from
> >
> > functor F (X : sig module type T end) (Y : X.T) = Y
> Indeed it wouldn't. But I fail to see the point; if Y's type is X.T,
> there is no difference between Y and (Y : X.T), is there? The two
> functors in question have the same type in O'Caml.
Ah, yes, you are right, I forgot about that. Actually, I see that as an
unfortunate limitation of OCaml's module typing: it has to forget some
sharing because it lacks proper singleton types (and thus loses
principality). Ideally, the type of the above variant of F should be
functor(X : sig module type T end) -> functor(Y : X.T) -> that Y
where I write "that Y" for the singleton type inhabited by Y only (a
subtype of X.T). That functor is essentially the polymorphic identity
functor, while the other variation was a polymorphic eta-expansion of
the abstraction operator.
But in fact, what that means is that in OCaml both functors must be
represented by a type with an existential quantifier. Otherwise you
would not witness any difference between module expressions
F (struct module type T = ABSTRACT_SIG_OF_M end) (M)
Andreas Rossberg, rossberg@ps.uni-sb.de
"Computer games don't affect kids; I mean if Pac Man affected us
as kids, we would all be running around in darkened rooms, munching
magic pills, and listening to repetitive electronic music."
- Kristian Wilson, Nintendo Inc.
To unsubscribe, mail caml-list-request@inria.fr Archives: http://caml.inria.fr
Bug reports: http://caml.inria.fr/bin/caml-bugs FAQ: http://caml.inria.fr/FAQ/
Beginner's list: http://groups.yahoo.com/group/ocaml_beginners
Necessary and Sufficient Conditions for Collision-Free Hashing
, 1993
"... When we ask what makes a hash function `good', we usually get an answer which includes collision freedom as the main (if not sole) desideratum. However, we show here that given any
collision-free function, we can derive others which are also collision-free, but cryptographically useless. This explai ..."
Cited by 24 (3 self)
When we ask what makes a hash function `good', we usually get an answer which includes collision freedom as the main (if not sole) desideratum. However, we show here that given any collision-free
function, we can derive others which are also collision-free, but cryptographically useless. This explains why researchers have not managed to find many interesting consequences of this property. We
also prove Okamoto's conjecture that correlation freedom is strictly stronger than collision freedom. We go on to show that there are actually rather many properties which hash functions may need.
Hash functions for use with RSA must be multiplication free, in the sense that one cannot find X , Y and Z such that h(X)h(Y ) = h(Z); and more complex requirements hold for other signature schemes.
Universal principles can be proposed from which all the freedom properties follow, but like most theoretical principles, they do not seem to give much value to a designer; at the practical level, the
main imp...
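A concrete aside (my sketch, not from the paper): collision freedom is always relative to computational effort. On a hash truncated to a handful of bits, a naive birthday search finds collisions almost instantly:

```python
import hashlib

def tiny_hash(data: bytes, bits: int = 16) -> int:
    """SHA-256 truncated to `bits` bits: deliberately weak for the demo."""
    digest = hashlib.sha256(data).digest()
    return int.from_bytes(digest, "big") >> (256 - bits)

def find_collision(bits: int = 16):
    """Birthday search: a collision is expected after about 2**(bits/2) trials."""
    seen = {}
    i = 0
    while True:
        msg = i.to_bytes(8, "big")
        h = tiny_hash(msg, bits)
        if h in seen:
            return seen[h], msg  # two distinct messages, same truncated hash
        seen[h] = msg
        i += 1

a, b = find_collision()
assert a != b and tiny_hash(a) == tiny_hash(b)
```

For the full 256-bit output the same search would need on the order of 2^128 trials, which is the sense in which SHA-256 is conjectured collision-free.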
- In EUROCRYPT , 2005
"... We revisit the following question: what are the minimal assumptions needed to construct statistically-hiding commitment schemes? Naor et al. show how to construct such schemes based on any
one-way permutation. We improve upon this by showing a construction based on any approximable preimage-size one ..."
Cited by 24 (8 self)
We revisit the following question: what are the minimal assumptions needed to construct statistically-hiding commitment schemes? Naor et al. show how to construct such schemes based on any one-way
permutation. We improve upon this by showing a construction based on any approximable preimage-size one-way function. These are one-way functions for which it is possible to efficiently approximate
the number of pre-images of a given output. A special case is the class of regular one-way functions where all points in the image of the function have the same number of pre-images. We also prove
two additional results related to statistically-hiding commitment. First, we prove a (folklore) parallel composition theorem showing, roughly speaking, that the statistical hiding property of any
such commitment scheme is amplified exponentially when multiple independent parallel executions of the scheme are carried out. Second, we show a compiler which transforms any commitment scheme which
is statistically hiding against an honest-but-curious receiver into one which is statistically hiding even against a malicious receiver. 1
- In Proc. Vietcrypt ’06 , 2006
"... Abstract. There is a foundational problem involving collision-resistant hash-functions: common constructions are keyless, but formal definitions are keyed. The discrepancy stems from the fact
that a function H: {0, 1} ∗ → {0, 1} n always admits an efficient collision-finding algorithm, it’s just t ..."
Cited by 22 (0 self)
Abstract. There is a foundational problem involving collision-resistant hash-functions: common constructions are keyless, but formal definitions are keyed. The discrepancy stems from the fact that a
function H: {0, 1} ∗ → {0, 1} n always admits an efficient collision-finding algorithm, it’s just that us human beings might be unable to write the program down. We explain a simple way to sidestep
this difficulty that avoids having to key our hash functions. The idea is to state theorems in a way that prescribes an explicitly-given reduction, normally a black-box one. We illustrate this
approach using well-known examples involving digital signatures, pseudorandom functions, and the Merkle-Damg˚ard construction. Key words. Collision-free hash function, Collision-intractable hash
function, Collision-resistant hash function, Cryptographic hash function, Provable security. 1
- In Proc. Crypto ’04 , 2004
"... Abstract. Many cryptographic primitives begin with parameter generation, which picks a primitive from a family. Such generation can use public coins (e.g., in the discrete-logarithm-based case)
or secret coins (e.g., in the factoring-based case). We study the relationship between publiccoin and secr ..."
Cited by 22 (0 self)
Abstract. Many cryptographic primitives begin with parameter generation, which picks a primitive from a family. Such generation can use public coins (e.g., in the discrete-logarithm-based case) or
secret coins (e.g., in the factoring-based case). We study the relationship between public-coin and secret-coin collision-resistant hash function families (CRHFs). Specifically, we demonstrate that: –
there is a lack of attention to the distinction between secret-coin and public-coin definitions in the literature, which has led to some problems in the case of CRHFs; – in some cases, public-coin
CRHFs can be built out of secret-coin CRHFs; – the distinction between the two notions is meaningful, because in general secret-coin CRHFs are unlikely to imply public-coin CRHFs. The last statement
above is our main result, which states that there is no black-box reduction from public-coin CRHFs to secret-coin CRHFs. Our proof for this result, while employing oracle separations, uses a novel
approach, which demonstrates that there is no black-box reduction without demonstrating that there is no relativizing reduction.
- In Proceedings of the 2nd Theory of Cryptography Conference , 2005
"... Abstract. We present several new constructions of collision-resistant hash-functions (CRHFs) from general assumptions. We start with a simple construction of CRHF from any homomorphic
encryption. Then, we strengthen this result by presenting constructions of CRHF from two other primitives that are i ..."
Cited by 14 (2 self)
Abstract. We present several new constructions of collision-resistant hash-functions (CRHFs) from general assumptions. We start with a simple construction of CRHF from any homomorphic encryption.
Then, we strengthen this result by presenting constructions of CRHF from two other primitives that are implied by homomorphic-encryption: one-round private information retrieval (PIR) protocols and
homomorphic one-way commitments. Keywords. Collision-resistant hash functions, homomorphic encryption, private information-retrieval. 1 Introduction Collision resistant hash-functions (CRHFs) are an
important cryptographic primitive. Their applications range from classic ones such as the "hash-and-sign" paradigm for signatures, via efficient (zero-knowledge) arguments [14, 17, 2],
to more recent applications such as ones relying on the non-black-box techniques of [1]. In light of the importance of the CRHF primitive, it is natural to study its relations with other primitives and
try to construct it from the most general
- COLUMBIA UNIVERSITY , 2002
"... In the analysis of many cryptographic protocols, it is useful to distinguish two classes of attacks: passive attacks in which an adversary eavesdrops on messages sent between honest users and
active attacks (i.e., “man-in-the-middle ” attacks) in which — in addition to eavesdropping — the adversary ..."
Cited by 12 (2 self)
In the analysis of many cryptographic protocols, it is useful to distinguish two classes of attacks: passive attacks in which an adversary eavesdrops on messages sent between honest users and active
attacks (i.e., “man-in-the-middle ” attacks) in which — in addition to eavesdropping — the adversary inserts, deletes, or arbitrarily modifies messages sent from one user to another. Passive attacks
are well characterized (the adversary’s choices are inherently limited) and techniques for achieving security against passive attacks are relatively well understood. Indeed, cryptographers have long
focused on methods for countering passive eavesdropping attacks, and much work in the 1970’s and 1980’s has dealt with formalizing notions of security and providing provably-secure solutions for this
setting. On the other hand, active attacks are not well characterized and precise modeling has been difficult. Few techniques exist for dealing with active attacks, and designing practical protocols
secure against such attacks remains a challenge. This dissertation considers active attacks in a variety of settings and provides new, provably-secure protocols preventing such attacks. Proofs of
security are in the standard cryptographic model and rely on well-known cryptographic assumptions. The protocols presented here are efficient and
- In Automata, Languages and Programming: 31st International Colloquium, ICALP 2004 , 2003
"... A consistent query protocol allows a database owner to publish a very short string c which commits her to a particular database D with special consistency property (i.e., given c, every
allowable query has unique and well-defined answer with respect to D.) Moreover, when a user makes a query, any ..."
Cited by 8 (1 self)
A consistent query protocol allows a database owner to publish a very short string c which commits her to a particular database D with special consistency property (i.e., given c, every allowable
query has unique and well-defined answer with respect to D.) Moreover, when a user makes a query, any server hosting the database can answer the query, and provide a very short proof # that the
answer is well-defined, unique, and consistent with c (and hence with D). One potential application of consistent query protocols is for guaranteeing the consistency of many replicated copies of
D---the owner can publish c, and users can verify the consistency of a query to some copy of D by making sure # is consistent with c. This strong guarantee holds even for owners who try to cheat,
while creating c.
- In ISC05, LNCS 3650 , 2005
"... We present a universally composable time-stamping scheme based on universal one-way hash functions. ..."
What is the difference between a failure criterion and a yield condition?
Submitted by W. Brocks on Thu, 2013-03-21 12:33.
You may meet natural and engineering scientists who blame their colleagues from social sciences or humanities for working unscholarly, not adhering to an explicit and unique terminology but
substituting scientific cognition by adopting novel terms. Those sitting in a glasshouse should not throw stones, however. Imprecise terminology and hazy definitions are not at all a “privilege” of
social scientists. When I started learning fracture mechanics, I discovered that nearly every anomaly in the real failure behaviour of components which did not fit into the common concept was
attributed to “constraint” - but few people had a precise idea what constraint actually is and how to quantify it. The multifarious usage of “damage” in the current literature is an actual example,
and “plasticity” is another.
Though von Mises, Drucker, Hill and many others established a precise foundation of phenomenological plasticity, it has become a bad habit to call any inelastic, nonlinear mechanical behaviour
“plastic”. One will find applications of the Mises-Prandtl-Reuss equations to polymers, and the authors do not even query, much less justify this approach. In my previous blog, #4, I criticised
Mäkelä and Östlund (Engineering Fracture Mechanics, Vol. 79, 2012) for modelling the deformation of paper by means of plasticity. One year later I find an “application” to wood.
Henrik Danielsson and Per Johan Gustafsson: A three dimensional plasticity model for perpendicular to grain cohesive fracture in wood, Engineering Fracture Mechanics Vol. 98 2013, pp.137–152.
The authors’ misconception is a different one. The deformation behaviour of wood is considered as linear elastic and, of course, orthotropic. But they add a new facet to the term “plasticity”, namely
the irreversible and unstable material softening in some process zone: “Initiation of softening, i.e. the formation of a fracture process zone, is determined by an initial yield function F according
to the Tsai–Wu failure criterion”. This is a failure criterion, correct, and the respective limit surface in the stress space may be assumed as convex as the yield surface in the theory of
plasticity. For the sake of a thermodynamically consistent theory, one may also define a corresponding damage potential, but this is not a plastic potential! Once again: The theory of plasticity
deals with the stress-strain relationship of ductile materials, having metals in mind, where plastic flow occurs by sliding along crystallographic planes or by twinning. “A physical theory of
plasticity starts with these microscopic details and attempts to explain why and how plastic flow occurs” (Khan & Huang: Continuum Theory of Plasticity, Wiley, 1995, p. 310). Following Drucker,
classical phenomenological plasticity describes stable, i.e. strain-hardening, material behaviour.
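For readers who want the distinction in formulas, here are the standard textbook forms (my illustration; the specific coefficients of the paper under discussion are not reproduced). A yield condition bounds the onset of stable plastic flow in a hardening metal; a failure criterion marks the limit of material integrity:

```latex
% Von Mises yield condition (plasticity; metals; \kappa = hardening variable):
% plastic flow begins when the equivalent stress reaches the yield stress.
f(\boldsymbol{\sigma},\kappa) \;=\; \sqrt{3\,J_2(\boldsymbol{\sigma})} \;-\; \sigma_{\mathrm{Y}}(\kappa) \;\le\; 0

% Tsai-Wu failure criterion (anisotropic materials such as wood or composites;
% Voigt notation, i, j = 1,...,6; F_i, F_{ij} determined from strength tests):
F_i\,\sigma_i \;+\; F_{ij}\,\sigma_i\,\sigma_j \;=\; 1
```

Both define convex surfaces in stress space, which is exactly why the two are so easily conflated; but only the first is tied to the micromechanics of crystallographic slip that the theory of plasticity presupposes.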
The authors continue “The change in size of the yield surface f is described by the softening parameter K which is a function of an internal variable that memorizes the plastic loading and determines
the softening behavior”, and they introduce a “dimensionless deformation δeff“, as internal variable, which is „related to the plastic straining of the material” (wood?), whatever this is supposed to
mean. It is not just the “size” of the failure surface that changes, by the way, as Fig 2 shows. In the context of cohesive models, δ is commonly called “separation”, i.e. a jump in the discontinuous
displacement field, and Fig 3 is a typical traction-separation law. So why introduce a terminology divergent from the established one?
Roberto Balarini stated in a blog http://www.imechanica.org/node/7622 : "Cohesive models are linear elasticity”. In contrast, the present authors apparently assert that cohesive models “are”
plasticity. What is so difficult in understanding the model of a cohesive zone? Cohesive models “are” neither elasticity nor plasticity. They describe the nonlinear decohesion process in a continuum
that obeys any kind of constitutive equations, for instance plasticity, visco-plasticity or, as in the present case, orthotropic elasticity.
More generally: What is so complicated in applying a unique terminology which is established in the scientific community, and how about the reviewers of manuscripts like this: Are they not aware of
the correct terminology themselves or do they just don’t care about it? Remember: The corruption of reasoning starts with a false handling of language!
descent for formally smooth maps
Let $f:X\rightarrow Y$ be a morphism between schemes and $Y'\rightarrow Y$ an fpqc morphism such that the base change $f'$ of $f$ to $Y'$ is formally smooth; does this imply that $f$ is formally smooth?
schemes ag.algebraic-geometry smoothness
2 The answer to this MO question seems relevant: mathoverflow.net/questions/10731/… – Alberto García-Raboso Jun 16 '13 at 22:54
In the infinitesimal lifting property, can we reduce to easier rings, such local henselian or local for example? – prochet Jun 17 '13 at 9:42
Section 1.7 of arxiv.org/abs/math/9812034 contains the claim that formal smoothness is a local property in the fpqc topology (presumably meaning local on the target), and says that Gabber can
explain why. – S. Carnahan♦ Jul 29 '13 at 6:22
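For reference, the definition in question (the standard infinitesimal lifting criterion, stated here from memory; see EGA IV or the Stacks Project for the precise formulation):

```latex
% f : X -> Y is formally smooth if for every affine scheme T = Spec(A) over Y
% and every nilpotent ideal I of A, writing T_0 = Spec(A/I), the natural map
\mathrm{Hom}_Y(T, X) \;\longrightarrow\; \mathrm{Hom}_Y(T_0, X)
% is surjective: every Y-morphism T_0 -> X lifts (not necessarily uniquely) to T.
```

The question is thus whether surjectivity of these restriction maps can be checked after a faithfully flat quasi-compact base change.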
Math and Multimedia Blog Carnival 13
Welcome to the 13th edition of the Mathematics and Multimedia blog carnival.
Before we begin the carnival, let’s have some trivia about the number thirteen:
• The fear of thirteen is called triskaidekaphobia.
• Thirteen is the number of stripes in the US flag.
• The number of characters in E Pluribus Unum
• The number of Archimedean Solids
• The atomic number of Aluminum.
Now, let the carnival begin!
Romeo Vitelli gives an excellent account of the life and works of the brilliant Kurt Gödel in Deconstructing Godel posted at Providentia.
vmathtutor presents an ‘almost prime number’ generator in A remarkable quadratic polynomial posted at Virtual Math Tutor.
Mr. Ho takes us back to ancient paradoxes in his article Zeno’s Paradox in Ancient China posted at Mathing.
Erlina Ronda discusses Counting Principles, Pascal’s Triangles, and Powers of 2 posted at Mathematics for Teaching
Mathematics Teaching
Earl Samuelson has explanations and notes about Integration – Surface Areas & Volumes of Revolution posted at samuelson mathxp.
Sam Shah’s guest blogger presents several excellent tips on how to take down notes from the board in his Sticky Notes posted at Continuous Everywhere, Differentiable Nowhere
Terrance Banks presents Another Menu and Back 2 School.. posted at So I Teach Math and Coach?.
Jennifer Bardsley shares her experience about constructivist’s approach in teaching math in her article titled Math posted at Teaching My Baby To Read.
Chris Solomon shares about Creative Problem Solving Tools and Technique (Part 1/2) posted at Jazz Presentation.
David Wees has a great series on Math in the Real World.
Technology Integration
John Golden shares his experience in conducting a GeoGebra training in GeoGebra 4 Teachers posted at Math Hombre.
Another good calculator is in town, the Desmos Calculator, presented by Colleen Young in her blog Mathematics, Learning, and Web 2.0.
William Emeny has discovered a new interactive time line software Time Toast in his blog Great Maths Teaching Ideas.
Guillermo Bautista (that’s me) presents an applet on Triangle Angle Sum Proof at GeoGebra Applet Central.
Are you sending your child to Kumon? If you do, you may want to read Caroline Mikusa’s Kumon: The Good, the Bad, and the Ugly.
That concludes this edition. Submit your blog article to the next edition of Mathematics and Multimedia blog carnival using our carnival submission form. The next edition will be posted at
Mathematics, Learning and Web 2.0. Past posts and future hosts can be found on our blog carnival index page.
[Maxima] Non-exclusive pattern matching? e.g. distributiveness of a function, was: Predictable pattern matching?
Robert Dodier robert.dodier at gmail.com
Wed Dec 24 12:29:46 CST 2008
On 12/23/08, Martin Schönecker <ms_usenet at gmx.de> wrote:
> (1) int[a_ + b_, x] := int[a, x] + int[b, x]
> (2) int[c_ f_, x] := c int[f, x] /; FreeQ[c, x]
> (3) int[x^n_, x] := 1/(n + 1) * x^(n + 1) /; n != -1
> (4) int[c_, x] := c x /; FreeQ[c, x]
The part that is difficult for the Maxima pattern matcher at present
is the freeof stuff, since the result for one pattern variable depends
on another. Maxima matches variables in the order they are
encountered in the pattern, so it matches x after c (too late).
It seems feasible to detect dependencies among pattern variables
and to order them accordingly. That would not require any change
to the run-time code, in particular, it doesn't require backtracking.
Of course, it's easy to make up examples which have cycles in
the dependency graph.
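The ordering idea described above (bind x before any pattern variable whose predicate mentions x) is just a topological sort of the dependency graph. A minimal Python illustration, where the variable names and the dependency map are hypothetical stand-ins, not Maxima internals:

```python
from graphlib import TopologicalSorter

# Hypothetical dependency graph for the rules quoted above: matching `c`
# and `f` requires `x` to be bound first, because of the freeof(c, x) test.
# The mapping is variable -> set of variables it depends on.
deps = {"c": {"x"}, "f": {"x"}}

# static_order() yields dependencies before the nodes that need them,
# so `x` comes out before `c` and `f`.
order = list(TopologicalSorter(deps).static_order())
print(order)
```

A cycle in `deps` makes `static_order()` raise `graphlib.CycleError`, which matches the email's caveat about cyclic dependencies.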
I'll take a look at it.
PS. On second glance, it appears x is not a variable.
Is that right? If so, that makes the matching problem simpler.
Number formatting
Hi all, I am using the method below to get a string from an NSNumber:
-(NSString *)stringFromNumber:(NSNumber *)number
{
    NSNumberFormatter *numFormatter = [[[NSNumberFormatter alloc] init] autorelease];
    [numFormatter setMaximumFractionDigits:5];
    [numFormatter setMinimumFractionDigits:2];
    //[numFormatter setExponentSymbol:@"e"];
    NSString *str_num = [numFormatter stringFromNumber:number];
    return str_num;
}
In the console I am getting output like below,
but I need the output to look like below for the above inputs (in order).
How would I do it? Someone please help me.
iphone objective-c ios
1 i think this should do it stackoverflow.com/questions/2215867/… – Chakalaka Apr 23 '12 at 10:47
2 Answers
I tried this:
-(NSString *)stringFromNumber:(NSNumber *)number
{
    NSNumberFormatter *numFormatter = [[[NSNumberFormatter alloc] init] autorelease]; // autorelease to avoid a leak under MRC
    [numFormatter setMaximumFractionDigits:5];
    [numFormatter setMinimumFractionDigits:2];
    NSString *temp = [NSString stringWithFormat:@"%@", number];
    NSRange range = [temp rangeOfString:@"e"];
    if (range.length > 0) {
        // The default "%@" description used exponent form,
        // so switch the formatter to scientific style.
        [numFormatter setNumberStyle:NSNumberFormatterScientificStyle];
        [numFormatter setExponentSymbol:@"e"];
    }
    NSString *str_num = [numFormatter stringFromNumber:number];
    return str_num;
}

and got output like this:
Thank you very much @Bala. – Narayana Apr 24 '12 at 11:54
Still it's not proper for me, but your answer is very good. – Narayana Apr 24 '12 at 11:55
The second input has got the output you want; did you notice that? – Bala Apr 24 '12 at 11:56
Take a look at the NSNumberFormatter class, in particular setMinimumIntegerDigits:.
Annex K: Language-Defined Attributes
For a prefix P that denotes a subprogram:
P'Access yields an access value that designates the subprogram denoted by P. The type of P'Access is an access-to-subprogram type, as determined by the expected type.
X that denotes an aliased view of an object:
X'Access yields an access value that designates the object denoted by X. The type of X'Access is an access-to-object type, as determined by the expected type. The expected type shall be a general
access type. See
For a prefix X that denotes an object, program unit, or label:
Denotes the address of the first of the storage elements allocated to X. For a program unit or label, this value refers to the machine code associated with the corresponding body or statement. The value of this attribute is of type System.Address.
For every subtype S of a floating point type T:
S'Adjacent denotes a function with the following specification:
function S'Adjacent (X, Towards : T) return T
If Towards = X, the function yields X; otherwise, it yields the machine number of the type T adjacent to X in the direction of Towards, if that machine number exists. If the result would be outside the base range of S, Constraint_Error is raised. When T'Signed_Zeros is True, a zero result has the sign of X. When Towards is zero, its sign has no bearing on the result.
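As an aside (not part of the Annex), IEEE-style libraries expose the same "adjacent machine number" operation; Python's math.nextafter is a close analogue for a binary floating point type, shown here purely as an illustration:

```python
import math

x = 1.0
# Machine number adjacent to 1.0 in the direction of 2.0.
# For IEEE doubles the spacing just above 1.0 is 2**-52.
up = math.nextafter(x, 2.0)
# Adjacent machine number toward 0.0; the spacing just below 1.0 is 2**-53.
down = math.nextafter(x, 0.0)
# Towards = X yields X itself, as in S'Adjacent.
same = math.nextafter(x, x)
print(up, down, same)
```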
For every fixed point subtype S:
S'Aft yields the number of decimal digits needed after the decimal point to accommodate the delta of the subtype S, unless the delta of the subtype S is greater than 0.1, in which case the attribute yields the value one. (S'Aft is the smallest positive integer N for which (10**N)*S'Delta is greater than or equal to one.) The value of this attribute is of the type universal_integer.
For every subtype S:
The value of this attribute is of type universal_integer, and nonnegative.
For an object X of subtype S, if S'Alignment is not zero, then X'Alignment is a nonzero integral multiple of S'Alignment unless specified otherwise by a representation item.
For a prefix X that denotes an object:
The value of this attribute is of type universal_integer, and nonnegative; zero means that the object is not necessarily aligned on a storage element boundary. If X'Alignment is not zero, then X is aligned on a storage unit boundary and X'Address is an integral multiple of X'Alignment (that is, the Address modulo the Alignment is zero).
For every scalar subtype S:
S'Base denotes an unconstrained subtype of the type of S. This unconstrained subtype is called the base subtype of the type.
For every specific record subtype S:
Denotes the bit ordering for the type of S. The value of this attribute is of type System.Bit_Order.
For a prefix P that statically denotes a program unit:
Yields a value of the predefined type String that identifies the version of the compilation unit that contains the body (but not any subunits) of the program unit.
For a prefix T that is of a task type:
Yields the value True when the task denoted by T is callable, and False otherwise.
For a prefix E that denotes an entry_declaration:
Yields a value of the type Task_Id that identifies the task whose call is now being serviced. Use of this attribute is allowed only inside an entry_body or accept_statement corresponding to the entry_declaration denoted by E.
For every subtype S of a floating point type T:
S'Ceiling denotes a function with the following specification:
function S'Ceiling (X : T) return T
The function yields the value Ceiling(X), i.e., the smallest (most negative) integral value greater than or equal to X. When X is zero, the result has the sign of X; a zero result otherwise has a negative sign when S'Signed_Zeros is True.
For every subtype S of an untagged private type whose full view is tagged:
Denotes the class-wide subtype corresponding to the full view of S. This attribute is allowed only from the beginning of the private part in which the full view is declared, until the declaration of the full view. After the full view, the Class attribute of the full view can be used.
For every subtype S of a tagged type T (specific or class-wide):
S'Class denotes a subtype of the class-wide type (called T'Class in this International Standard) for the class rooted at T (or if S already denotes a class-wide subtype, then S'Class is the same as S). S'Class is unconstrained. However, if S is constrained, then the values of S'Class are only those that when converted to the type T belong to S.
For a prefix X that denotes an array subtype or array object (after any implicit dereference):
Denotes the size in bits of components of the type of X. The value of this attribute is of type universal_integer.
For every subtype S of a floating point type T:
S'Compose denotes a function with the following specification:
function S'Compose (Fraction : T; Exponent : universal_integer) return T
Let v be the value Fraction · T'Machine_Radix**(Exponent – k), where k is the normalized exponent of Fraction. If v is a machine number of the type T, or if |v| ≥ T'Model_Small, the function yields v; otherwise, it yields either one of the machine numbers of the type T adjacent to v. Constraint_Error is optionally raised if v is outside the base range of S. A zero result has the sign of Fraction when S'Signed_Zeros is True.
For a prefix A that is of a discriminated type (after any implicit dereference):
Yields the value True if A denotes a constant, a value, or a constrained variable, and False otherwise.
For every subtype S of a floating point type T:
S'Copy_Sign denotes a function with the following specification:
function S'Copy_Sign (Value, Sign : T) return T
If the value of Value is nonzero, the function yields a result whose magnitude is that of Value and whose sign is that of Sign; otherwise, it yields the value zero. Constraint_Error is optionally raised if the result is outside the base range of S. A zero result has the sign of Sign when S'Signed_Zeros is True.
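For comparison only (not part of the Annex), Python's math.copysign performs essentially the same operation, with one divergence worth noting in the zero case:

```python
import math

# Magnitude from the first argument, sign from the second.
a = math.copysign(3.0, -0.0)   # sign is taken even from a negative zero
b = math.copysign(-2.5, 1.0)

# Divergence: S'Copy_Sign yields plain zero when Value is zero,
# whereas copysign propagates the sign bit onto the zero.
z = math.copysign(0.0, -1.0)
print(a, b, z)
```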
For a prefix E that denotes an entry of a task or protected unit:
Yields the number of calls presently queued on the entry E of the current instance of the unit. The value of this attribute is of the type universal_integer.
For a prefix S that denotes a formal indefinite subtype:
S'Definite yields True if the actual subtype corresponding to S is definite; otherwise it yields False. The value of this attribute is of the predefined type Boolean.
For every fixed point subtype S:
S'Delta denotes the delta of the fixed point subtype S. The value of this attribute is of the type universal_real.
For every subtype S of a floating point type T:
Yields the value True if every value expressible in the form ±mantissa · T'Machine_Radix**T'Machine_Emin, where mantissa is a nonzero T'Machine_Mantissa-digit fraction in the number base T'Machine_Radix, the first digit of which is zero, is a machine number of the type T; yields the value False otherwise. The value of this attribute is of the predefined type Boolean.
For every decimal fixed point subtype S:
S'Digits denotes the digits of the decimal fixed point subtype S, which corresponds to the number of decimal digits that are representable in objects of the subtype. The value of this attribute is of the type universal_integer.
For every floating point subtype S:
S'Digits denotes the requested decimal precision for the subtype S. The value of this attribute is of the type universal_integer.
For every subtype S of a floating point type T:
S'Exponent denotes a function with the following specification:
function S'Exponent (X : T) return universal_integer
The function yields the normalized exponent of X.
For every subtype S of a tagged type T (specific or class-wide):
S'External_Tag denotes an external string representation for S'Tag; it is of the predefined type String. External_Tag may be specified for a specific tagged type via an attribute_definition_clause; the expression of such a clause shall be static. The default external tag representation is implementation defined.
For a prefix A that is of an array type (after any implicit dereference), or denotes a constrained array subtype:
A'First denotes the lower bound of the first index range; its type is the corresponding index type.
For every scalar subtype S:
S'First denotes the lower bound of the range of S. The value of this attribute is of the type of S.
For a prefix A that is of an array type (after any implicit dereference), or denotes a constrained array subtype:
A'First(N) denotes the lower bound of the N-th index range; its type is the corresponding index type.
For a component C of a composite, non-array object R:
If the nondefault bit ordering applies to the composite type, and if a component_clause specifies the placement of C, denotes the value given for the first_bit of the component_clause; otherwise, denotes the offset, from the start of the first of the storage elements occupied by C, of the first bit occupied by C. This offset is measured in bits. The first bit of a storage element is numbered zero. The value of this attribute is of the type universal_integer.
For every subtype S of a floating point type T:
S'Floor denotes a function with the following specification:
function S'Floor (X : T) return T
The function yields the value Floor(X), i.e., the largest (most positive) integral value less than or equal to X. When X is zero, the result has the sign of X; a zero result otherwise has a positive sign.
For every fixed point subtype S:
S'Fore yields the minimum number of characters needed before the decimal point for the decimal representation of any value of the subtype S, assuming that the representation does not include an exponent, but includes a one-character prefix that is either a minus sign or a space. (This minimum number does not include superfluous zeros or underlines, and is at least 2.) The value of this attribute is of the type universal_integer.
For every subtype S of a floating point type T:
S'Fraction denotes a function with the following specification:
function S'Fraction (X : T) return T
The function yields the value X · T'Machine_Radix**(–k), where k is the normalized exponent of X. A zero result, which can only occur when X is zero, has the sign of X.
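As an illustration outside the Annex: for a binary type (Machine_Radix = 2), the S'Fraction/S'Exponent decomposition corresponds to the classic frexp operation, available in Python's math module:

```python
import math

# 6.0 decomposes as fraction * 2**exponent with the fraction in [0.5, 1),
# i.e. 6.0 == 0.75 * 2**3, matching the radix-2 normalized form.
m, e = math.frexp(6.0)

# Recomposing (in the spirit of S'Compose) recovers the original value.
x = math.ldexp(m, e)
print(m, e, x)
```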
For a prefix T that is of a task type:
Yields a value of the type Task_Id that identifies the task denoted by T.
For a prefix E that denotes an exception:
E'Identity returns the unique identity of the exception. The type of this attribute is Exception_Id.
For every scalar subtype S:
S'Image denotes a function with the following specification:
function S'Image (Arg : S'Base) return String
The function returns an image of the value of Arg as a String.
For every subtype S'Class of a class-wide type T'Class:
S'Class'Input denotes a function with the following specification:
function S'Class'Input (Stream : not null access Ada.Streams.Root_Stream_Type'Class) return T'Class
First reads the external tag from Stream and determines the corresponding internal tag (by calling Tags.Descendant_Tag(String'Input(Stream), S'Tag), which might raise Tag_Error) and then dispatches to the subprogram denoted by the Input attribute of the specific type identified by the internal tag; returns that result. If the specific type identified by the internal tag is not covered by T'Class or is abstract, Constraint_Error is raised.
For every subtype S of a specific type T:
S'Input denotes a function with the following specification:
function S'Input (Stream : not null access Ada.Streams.Root_Stream_Type'Class) return T
S'Input reads and returns one value from Stream, using any bounds or discriminants written by a corresponding S'Output to determine how much to read.
For a prefix A that is of an array type (after any implicit dereference), or denotes a constrained array subtype:
A'Last denotes the upper bound of the first index range; its type is the corresponding index type.
For every scalar subtype S:
S'Last denotes the upper bound of the range of S. The value of this attribute is of the type of S.
For a prefix A that is of an array type (after any implicit dereference), or denotes a constrained array subtype:
A'Last(N) denotes the upper bound of the N-th index range; its type is the corresponding index type.
For a component C of a composite, non-array object R:
If the nondefault bit ordering applies to the composite type, and if a component_clause specifies the placement of C, denotes the value given for the last_bit of the component_clause; otherwise, denotes the offset, from the start of the first of the storage elements occupied by C, of the last bit occupied by C. This offset is measured in bits. The value of this attribute is of the type universal_integer.
For every subtype S of a floating point type T:
S'Leading_Part denotes a function with the following specification:
function S'Leading_Part (X : T; Radix_Digits : universal_integer) return T
Let v be the value T'Machine_Radix**(k – Radix_Digits), where k is the normalized exponent of X. The function yields the value
● Floor(X/v) · v, when X is nonnegative and Radix_Digits is positive;
● Ceiling(X/v) · v, when X is negative and Radix_Digits is positive.
Constraint_Error is raised when Radix_Digits is zero or negative. A zero result, which can only occur when X is zero, has the sign of X.
For a prefix A that is of an array type (after any implicit dereference), or denotes a constrained array subtype:
A'Length denotes the number of values of the first index range (zero for a null range); its type is universal_integer.
For a prefix A that is of an array type (after any implicit dereference), or denotes a constrained array subtype:
A'Length(N) denotes the number of values of the N-th index range (zero for a null range); its type is universal_integer.
For every subtype S of a floating point type T:
S'Machine denotes a function with the following specification:
function S'Machine (X : T) return T
If X is a machine number of the type T, the function yields X; otherwise, it yields the value obtained by rounding or truncating X to either one of the adjacent machine numbers of the type T. Constraint_Error is raised if rounding or truncating X to the precision of the machine numbers results in a value outside the base range of S. A zero result has the sign of X when S'Signed_Zeros is True.
For every subtype S of a floating point type T:
Yields the largest (most positive) value of exponent such that every value expressible in the canonical form (for the type T), having a mantissa of T'Machine_Mantissa digits, is a machine number of the type T. This attribute yields a value of the type universal_integer.
For every subtype S of a floating point type T:
Yields the smallest (most negative) value of exponent such that every value expressible in the canonical form (for the type T), having a mantissa of T'Machine_Mantissa digits, is a machine number of the type T. This attribute yields a value of the type universal_integer.
For every subtype S of a floating point type T:
Yields the largest value of p such that every value expressible in the canonical form (for the type T), having a p-digit mantissa and an exponent between T'Machine_Emin and T'Machine_Emax, is a machine number of the type T. This attribute yields a value of the type universal_integer.
For every subtype S of a fixed point type T:
Yields the value True if overflow and divide-by-zero are detected and reported by raising Constraint_Error for every predefined operation that yields a result of the type T; yields the value False otherwise. The value of this attribute is of the predefined type Boolean.
For every subtype S of a floating point type T:
Yields the value True if overflow and divide-by-zero are detected and reported by raising Constraint_Error for every predefined operation that yields a result of the type T; yields the value False otherwise. The value of this attribute is of the predefined type Boolean.
For every subtype S of a fixed point type T:
Yields the radix of the hardware representation of the type T. The value of this attribute is of the type universal_integer.
For every subtype S of a floating point type T:
Yields the radix of the hardware representation of the type T. The value of this attribute is of the type universal_integer.
For every subtype S of a floating point type T:
S'Machine_Rounding denotes a function with the following specification:
function S'Machine_Rounding (X : T) return T
The function yields the integral value nearest to X. If X lies exactly halfway between two integers, one of those integers is returned, but which of them is returned is unspecified. A zero result has the sign of X when S'Signed_Zeros is True. This function provides access to the rounding behavior which is most efficient on the target processor.
For every subtype S of a fixed point type T:
Yields the value True if rounding is performed on inexact results of every predefined operation that yields a result of the type T; yields the value False otherwise. The value of this attribute is of the predefined type Boolean.
For every subtype S of a floating point type T:
Yields the value True if rounding is performed on inexact results of every predefined operation that yields a result of the type T; yields the value False otherwise. The value of this attribute is of the predefined type Boolean.
For every scalar subtype S:
S'Max denotes a function with the following specification:
function S'Max (Left, Right : S'Base) return S'Base
The function returns the greater of the values of the two parameters.
For every subtype S:
Denotes the maximum value for Size_In_Storage_Elements that could be requested by the implementation via Allocate for an access type whose designated subtype is S. For a type with access discriminants, if the implementation allocates space for a coextension in the same pool as that of the object having the access discriminant, then this accounts for any calls on Allocate that could be performed to provide space for such coextensions. The value of this attribute is of type universal_integer.
For every scalar subtype S:
S'Min denotes a function with the following specification:
function S'Min (Left, Right : S'Base) return S'Base
The function returns the lesser of the values of the two parameters.
For every modular subtype S:
S'Mod denotes a function with the following specification:
function S'Mod (Arg : universal_integer) return S'Base
This function returns Arg mod S'Modulus, as a value of the type of S.
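Outside the Annex, the Arg mod S'Modulus computation is ordinary mathematical modulus, which Python's % operator already provides for a positive modulus (the result always lands in 0 .. modulus-1, even for negative arguments). A sketch with a hypothetical modulus of 256:

```python
MODULUS = 2 ** 8  # hypothetical modular type with S'Modulus = 256

def s_mod(arg: int) -> int:
    # Arg mod S'Modulus, in the sense S'Mod computes for a modular subtype.
    return arg % MODULUS

# 300 wraps past the modulus; -1 maps to the top of the range.
print(s_mod(300), s_mod(-1))
```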
For every subtype S of a floating point type T:
S'Model denotes a function with the following specification:
function S'Model (X : T) return T
If the Numerics Annex is not supported, the meaning of this attribute is implementation defined; the Numerics Annex gives the definition that applies to implementations supporting that annex.
For every subtype S of a floating point type T:
If the Numerics Annex is not supported, this attribute yields an implementation defined value that is greater than or equal to the value of T'Machine_Emin. The Numerics Annex imposes further requirements on implementations supporting it. The value of this attribute is of the type universal_integer.
For every subtype S of a floating point type T:
Yields the value T'Machine_Radix**(1 – T'Model_Mantissa). The value of this attribute is of the type universal_real.
For every subtype S of a floating point type T:
If the Numerics Annex is not supported, this attribute yields an implementation defined value that is greater than or equal to Ceiling(d · log(10) / log(T'Machine_Radix)) + 1, where d is the requested decimal precision of T, and less than or equal to the value of T'Machine_Mantissa. The Numerics Annex imposes further requirements on implementations supporting it. The value of this attribute is of the type universal_integer.
For every subtype S of a floating point type T:
Yields the value T'Machine_Radix**(T'Model_Emin – 1). The value of this attribute is of the type universal_real.
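To make these parameters concrete (illustration only, assuming an IEEE double where the model parameters coincide with the machine parameters), Python's sys.float_info exposes the radix and mantissa length, and the Model_Epsilon-style formula reproduces the familiar machine epsilon:

```python
import sys

# For IEEE double: radix 2, 53-bit mantissa.
radix = sys.float_info.radix
mant = sys.float_info.mant_dig

# radix**(1 - mantissa) is the Model_Epsilon-style value; for IEEE
# doubles it equals the standard float epsilon, 2**-52.
eps = float(radix) ** (1 - mant)
print(radix, mant, eps)
```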
For every modular subtype S:
S'Modulus yields the modulus of the type of S, as a value of the type universal_integer.
For every subtype S'Class of a class-wide type T'Class:
S'Class'Output denotes a procedure with the following specification:
procedure S'Class'Output (Stream : not null access Ada.Streams.Root_Stream_Type'Class; Item : in T'Class)
First writes the external tag of Item (by calling String'Output(Stream, Tags.External_Tag(Item'Tag))) and then dispatches to the subprogram denoted by the Output attribute of the specific type identified by the tag. Tag_Error is raised if the tag of Item identifies a type declared at an accessibility level deeper than that of S.
For every subtype S of a specific type T:
S'Output denotes a procedure with the following specification:
procedure S'Output (Stream : not null access Ada.Streams.Root_Stream_Type'Class; Item : in T)
S'Output writes the value of Item to Stream, including any bounds or discriminants.
For a prefix D that denotes a library-level declaration, excepting a declaration of or within a declared-pure library unit:
Denotes a value of the type universal_integer that identifies the partition in which D was elaborated. If D denotes the declaration of a remote call interface library unit, the given partition is the one where the body of D was elaborated.
For every discrete subtype S:
S'Pos denotes a function with the following specification:
function S'Pos (Arg : S'Base) return universal_integer
This function returns the position number of the value of Arg, as a value of type universal_integer.
For a component C of a composite, non-array object R:
If the nondefault bit ordering applies to the composite type, and if a component_clause specifies the placement of C, denotes the value given for the position of the component_clause; otherwise, denotes the same value as R.C'Address – R'Address. The value of this attribute is of the type universal_integer.
For every scalar subtype S:
S'Pred denotes a function with the following specification:
function S'Pred (Arg : S'Base) return S'Base
For an enumeration type, the function returns the value whose position number is one less than that of the value of Arg; Constraint_Error is raised if there is no such value of the type. For an integer type, the function returns the result of subtracting one from the value of Arg. For a fixed point type, the function returns the result of subtracting small from the value of Arg. For a floating point type, the function returns the machine number immediately below the value of Arg; Constraint_Error is raised if there is no such machine number.
For a prefix P that denotes a protected object:
Denotes a non-aliased component of the protected object P. This component is of type System.Any_Priority and its value is the priority of P. P'Priority denotes a variable if and only if P denotes a variable. A reference to this attribute shall appear only within the body of P.
For a prefix A that is of an array type (after any implicit dereference), or denotes a constrained array subtype:
A'Range is equivalent to the range A'First .. A'Last, except that the prefix A is only evaluated once.
For every scalar subtype S:
S'Range is equivalent to the range S'First .. S'Last.
For a prefix A that is of an array type (after any implicit dereference), or denotes a constrained array subtype:
A'Range(N) is equivalent to the range A'First(N) .. A'Last(N), except that the prefix A is only evaluated once.
For every subtype S'Class of a class-wide type T'Class:
S'Class'Read denotes a procedure with the following specification:
procedure S'Class'Read (Stream : not null access Ada.Streams.Root_Stream_Type'Class; Item : out T'Class)
Dispatches to the subprogram denoted by the Read attribute of the specific type identified by the tag of Item.
For every subtype S of a specific type T:
S'Read denotes a procedure with the following specification:
procedure S'Read (Stream : not null access Ada.Streams.Root_Stream_Type'Class; Item : out T)
S'Read reads the value of Item from Stream.
For every subtype S of a floating point type T:
S'Remainder denotes a function with the following specification:
function S'Remainder (X, Y : T) return T
For nonzero Y, let v be the value X – n·Y, where n is the integer nearest to the exact value of X/Y; if |n – X/Y| = 1/2, then n is chosen to be even. If v is a machine number of the type T, the function yields v; otherwise, it yields zero. Constraint_Error is raised if Y is zero. A zero result has the sign of X when S'Signed_Zeros is True.
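This is the IEEE 754 remainder operation (nearest integer quotient, ties to even), so libraries that expose it behave the same way; Python's math.remainder is a direct analogue, shown here only as an illustration:

```python
import math

# n is the integer nearest X/Y, ties to even, and the result is X - n*Y.
r1 = math.remainder(5.0, 2.0)  # X/Y = 2.5, tie -> n = 2, 5 - 4 = 1
r2 = math.remainder(7.0, 2.0)  # X/Y = 3.5, tie -> n = 4, 7 - 8 = -1
r3 = math.remainder(5.0, 3.0)  # X/Y ~ 1.67 -> n = 2, 5 - 6 = -1
print(r1, r2, r3)
```

Note the result can be negative even for positive operands, unlike the % operator.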
For every decimal fixed point subtype S:
S'Round denotes a function with the following specification:
function S'Round (X : universal_real) return S'Base
The function returns the value obtained by rounding X (away from 0, if X is midway between two values of the type of S).
For every subtype S of a floating point type T:
S'Rounding denotes a function with the following specification:
function S'Rounding (X : T) return T
The function yields the integral value nearest to X, rounding away from zero if X lies exactly halfway between two integers. A zero result has the sign of X when S'Signed_Zeros is True.
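Worth flagging for anyone porting numeric code: many languages' default rounding ties to even rather than away from zero, so it does not model S'Rounding. A hedged Python sketch of the away-from-zero rule (illustration only; exact halfway detection in binary floats has the usual representation caveats):

```python
import math

def ada_rounding(x: float) -> float:
    """Round to nearest, with halfway cases away from zero (S'Rounding-style)."""
    return math.floor(x + 0.5) if x >= 0 else math.ceil(x - 0.5)

# Contrast with Python's round(), which ties to even.
print(ada_rounding(2.5), ada_rounding(-2.5), round(2.5))
```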
For every subtype S of a floating point type T:
Yields the lower bound of the safe range of the type T. If the Numerics Annex is not supported, the value of this attribute is implementation defined; the Numerics Annex gives the definition that applies to implementations supporting that annex. The value of this attribute is of the type universal_real.
For every subtype S of a floating point type T:
Yields the upper bound of the safe range of the type T. If the Numerics Annex is not supported, the value of this attribute is implementation defined; the Numerics Annex gives the definition that applies to implementations supporting that annex. The value of this attribute is of the type universal_real.
For every decimal fixed point subtype S:
S'Scale denotes the scale of the subtype S, defined as the value N such that S'Delta = 10.0**(–N). The scale indicates the position of the point relative to the rightmost significant digits of values of subtype S. The value of this attribute is of the type universal_integer.
For every subtype S of a floating point type T:
S'Scaling denotes a function with the following specification:
function S'Scaling (X : T; Adjustment : universal_integer) return T
Let v be the value X · T'Machine_Radix**Adjustment. If v is a machine number of the type T, or if |v| ≥ T'Model_Small, the function yields v; otherwise, it yields either one of the machine numbers of the type T adjacent to v. Constraint_Error is optionally raised if v is outside the base range of S. A zero result has the sign of X when S'Signed_Zeros is True.
For every subtype S of a floating point type T:
Yields the value True if the hardware representation for the type
has the capability of representing both positively and negatively signed zeros, these being generated and used by the predefined operations of the type
as specified in IEC 559:1989; yields the value False otherwise. The value of this attribute is of the predefined type Boolean. See
For every subtype S:
If S is definite, S'Size denotes the size (in bits) that the implementation would choose for the following objects of subtype S:
● A record component of subtype S when the record type is packed.
● The formal parameter of an instance of Unchecked_Conversion that converts from subtype S to some other subtype.
If S is indefinite, the meaning is implementation defined. The value of this attribute is of the type universal_integer. See 13.3.
For a prefix X that denotes an object:
X'Size denotes the size in bits of the representation of the object. The value of this attribute is of the type universal_integer. See 13.3.
For every fixed point subtype S:
S'Small denotes the small of the type of S. The value of this attribute is of the type universal_real. See 3.5.10.
For every access-to-object subtype S:
S'Storage_Pool denotes the storage pool of the type of S. The type of this attribute is Root_Storage_Pool'Class. See 13.11.
For every access-to-object subtype S:
S'Storage_Size yields the result of calling Storage_Size(S'Storage_Pool), which is intended to be a measure of the number of storage elements reserved for the pool. The type of this attribute is universal_integer. See 13.11.
For a prefix T that denotes a task object (after any implicit dereference):
T'Storage_Size denotes the number of storage elements reserved for the task. The value of this attribute is of the type universal_integer. The Storage_Size includes the size of the task's stack, if any. The language does not specify whether or not it includes other storage associated with the task (such as the “task control block” used by some implementations.) See 13.3.
For every subtype S of an elementary type T:
S'Stream_Size denotes the number of bits occupied in a stream by items of subtype S. Hence, the number of stream elements required per item of elementary type T is:
T'Stream_Size / Ada.Streams.Stream_Element'Size
The value of this attribute is of type universal_integer and is a multiple of Stream_Element'Size. See 13.13.2.
For every scalar subtype S:
S'Succ denotes a function with the following specification:
function S'Succ(Arg : S'Base)
return S'Base
For an enumeration type, the function returns the value whose position number is one more than that of the value of Arg; Constraint_Error is raised if there is no such value of the type. For an integer type, the function returns the result of adding one to the value of Arg. For a fixed point type, the function returns the result of adding small to the value of Arg. For a floating point type, the function returns the machine number (as defined in 3.5.7) immediately above the value of Arg; Constraint_Error is raised if there is no such machine number. See 3.5.
For a prefix X that is of a class-wide tagged type (after any implicit dereference):
X'Tag denotes the tag of X. The value of this attribute is of type Tag. See 3.9.
For every subtype S of a tagged type T (specific or class-wide):
S'Tag denotes the tag of the type T (or if T is class-wide, the tag of the root type of the corresponding class). The value of this attribute is of type Tag. See 3.9.
For a prefix T that is of a task type (after any implicit dereference):
T'Terminated yields the value True if the task denoted by T is terminated, and False otherwise. The value of this attribute is of the predefined type Boolean. See 9.9.
For every subtype S of a floating point type T:
S'Truncation denotes a function with the following specification:
function S'Truncation (X : T)
return T
The function yields the value Ceiling(X) when X is negative, and Floor(X) otherwise. A zero result has the sign of X when S'Signed_Zeros is True. See A.5.3.
For every subtype S of a floating point type T:
S'Unbiased_Rounding denotes a function with the following specification:
function S'Unbiased_Rounding (X : T)
return T
The function yields the integral value nearest to X, rounding toward the even integer if X lies exactly halfway between two integers. A zero result has the sign of X when S'Signed_Zeros is True. See A.5.3.
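S'Rounding and S'Unbiased_Rounding differ only when the argument lies exactly halfway between two integers. The sketch below is Python, not Ada, and purely illustrative (it does not model signed zeros); it shows the two tie-breaking rules:

```python
import math

def rounding(x):
    """S'Rounding rule: nearest integral value, ties rounded away from zero."""
    return math.floor(x + 0.5) if x >= 0 else math.ceil(x - 0.5)

def unbiased_rounding(x):
    """S'Unbiased_Rounding rule: nearest integral value, ties to the even integer."""
    n = math.floor(x)
    frac = x - n
    # round up when clearly past the midpoint, or at the midpoint next to an odd integer
    if frac > 0.5 or (frac == 0.5 and n % 2 == 1):
        n += 1
    return n
```

Python's built-in round happens to follow the same ties-to-even rule as S'Unbiased_Rounding.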
For a prefix X that denotes an aliased view of an object:
All rules and semantics that apply to X'Access (see 3.10.2) apply also to X'Unchecked_Access, except that, for the purposes of accessibility rules and checks, it is as if X were declared immediately within a library package. See 13.10.
For every discrete subtype S:
S'Val denotes a function with the following specification:
function S'Val(Arg : universal_integer)
return S'Base
This function returns a value of the type of S whose position number equals the value of Arg. See 3.5.5.
For a prefix X that denotes a scalar object (after any implicit dereference):
X'Valid yields True if and only if the object denoted by X is normal and has a valid representation. The value of this attribute is of the predefined type Boolean. See 13.9.2.
For every scalar subtype S:
S'Value denotes a function with the following specification:
function S'Value(Arg : String)
return S'Base
This function returns a value given an image of the value as a String, ignoring any leading or trailing spaces. See 3.5.
For a prefix P that statically denotes a program unit:
P'Version yields a value of the predefined type String that identifies the version of the compilation unit that contains the declaration of the program unit. See E.3.
For every scalar subtype S:
S'Wide_Image denotes a function with the following specification:
function S'Wide_Image(Arg : S'Base)
return Wide_String
The function returns an image of the value of Arg as a Wide_String. See 3.5.
For every scalar subtype S:
S'Wide_Value denotes a function with the following specification:
function S'Wide_Value(Arg : Wide_String)
return S'Base
This function returns a value given an image of the value as a Wide_String, ignoring any leading or trailing spaces. See 3.5.
For every scalar subtype S:
S'Wide_Wide_Image denotes a function with the following specification:
function S'Wide_Wide_Image(Arg : S'Base)
return Wide_Wide_String
The function returns an image of the value of Arg, that is, a sequence of characters representing the value in display form. See 3.5.
For every scalar subtype S:
S'Wide_Wide_Value denotes a function with the following specification:
function S'Wide_Wide_Value(Arg : Wide_Wide_String)
return S'Base
This function returns a value given an image of the value as a Wide_Wide_String, ignoring any leading or trailing spaces. See 3.5.
For every scalar subtype S:
S'Wide_Wide_Width denotes the maximum length of a Wide_Wide_String returned by S'Wide_Wide_Image over all values of the subtype S. It denotes zero for a subtype that has a null range. Its type is universal_integer. See 3.5.
For every scalar subtype S:
S'Wide_Width denotes the maximum length of a Wide_String returned by S'Wide_Image over all values of the subtype S. It denotes zero for a subtype that has a null range. Its type is universal_integer. See 3.5.
For every scalar subtype S:
S'Width denotes the maximum length of a String returned by S'Image over all values of the subtype S. It denotes zero for a subtype that has a null range. Its type is universal_integer. See 3.5.
For every subtype S'Class of a class-wide type T'Class:
S'Class'Write denotes a procedure with the following specification:
procedure S'Class'Write(
Stream : not null access Ada.Streams.Root_Stream_Type'Class;
Item : in T'Class)
Dispatches to the subprogram denoted by the Write attribute of the specific type identified by the tag of Item. See 13.13.2.
For every subtype S of a specific type T:
S'Write denotes a procedure with the following specification:
procedure S'Write(
Stream : not null access Ada.Streams.Root_Stream_Type'Class;
Item : in T)
S'Write writes the value of Item to Stream. See 13.13.2.
flipcode - Dirtypunk's Column
Well, here we must go forward and define some things elementary to what will be discussed below.
The first thing to introduce is that visibility is defined in terms of lines, normally maximal free segments. This is basic: a line which stabs (intersects/touches) two polygons/points/edges without stabbing anything in between means that the two things in question are visible to each other. Pretty simple.
From this we can extract the concept of visibility events. In singularity theory these are what are known as catastrophes, and they are often also referred to as discontinuities (e.g. discontinuity meshing, which gets more accurate results in radiosity by subdividing patches based on visibility events instead of dividing uniformly).
Visibility events are where a change of visibility occurs. For example, where does an occluder (or set of occluders) stop occluding an occludee? In 2D, a line tangent between two objects defines all these events. However, in 3D a line is not a hyper-plane as it is in 2D (if we got on to superstring theory etc., we could go on about a line being a brane of 3D), meaning that a line does not divide the space into two half-spaces. This means that we cannot use lines as a bound to a 3D space, whereas in 2D we can, as a set of three non-parallel lines forms a bounded area.
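As an illustration of the 2D case just described, the tangent lines from a viewpoint to a circular occluder bound the region that the occluder hides. The helper below is hypothetical (not from the column); it returns each tangent line as coefficients (a, b, c) of ax + by = c:

```python
import math

def tangent_lines(px, py, cx, cy, r):
    """Two tangent lines from the external point P = (px, py) to the
    circle with centre C = (cx, cy) and radius r (requires |PC| > r)."""
    dx, dy = cx - px, cy - py
    d = math.hypot(dx, dy)
    lines = []
    for sign in (+1, -1):
        # rotate the direction P -> C by +/- asin(r/d) to get the tangent direction
        ang = math.atan2(dy, dx) + sign * math.asin(r / d)
        ux, uy = math.cos(ang), math.sin(ang)
        # line through P with unit direction (ux, uy); unit normal is (-uy, ux)
        lines.append((-uy, ux, -uy * px + ux * py))
    return lines
```

Each returned line passes through the viewpoint and lies at distance exactly r from the circle's centre, i.e. it touches the occluder; in 2D these two tangents are precisely where the visibility of the disc changes.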
In 3D, there are a few types of visibility events, such as VV, VE and EEE. VV stands for vertex-vertex, and is just a line between two vertices (if there isn't something in between). VE stands for vertex-edge, as it is (oddly enough) the visibility event between an edge and a vertex. This event can be defined in 3D space as a planar area restricted by the two lines passing through the vertex and the edge's endpoints. EEE events are the interactions in visibility of three edges, and in 3D they represent a ruled quadric surface, also known as a swath (all visibility events in 3D can be defined by a ruled quadric, as a quadric surface can be flat). Moving into line space, the 4D representation of all lines in 3D space, we can define these as line swaths, which are the bounds of the visibility event for lines. You may hear the term “critical line swath” when referring to 3D visibility. Note that in degenerate cases other interactions are possible. For example, it is possible to get E^X events where a swath intersects more than 3 edges. However, in most cases you don't have to worry about these, as no more than 3 edges are important.
Another thing about 3D is what is known as an extremal stabbing line. This is a line that is a locus of visibility. These are the lines which “rule” the swath; they are the furthest lines which stab all the objects in an event. They are also the lines where two adjacent visibility events meet. There are a few types of extremal stabbing line, such as VV (vertex to vertex), VEE (vertex through 2 edges) and EEEE (through 4 edges). Face lines can also be used, but these are basically other stabbing lines that happen to stab a vertex or an edge on the same face.
A property of scenes is that if a line exists that stabs a set of objects, then extremal stabbing lines which stab all the objects must also exist. This property can be used to build an accurate PVS.
[Cython] New early-binding concept [was: CEP1000]
mark florisson markflorisson88 at gmail.com
Thu Apr 19 12:53:55 CEST 2012
On 19 April 2012 08:17, Dag Sverre Seljebotn <d.s.seljebotn at astro.uio.no> wrote:
> On 04/19/2012 08:41 AM, Stefan Behnel wrote:
>> Dag Sverre Seljebotn, 18.04.2012 23:35:
>>> from numpy import sqrt, sin
>>> cdef double f(double x):
>>> return sqrt(x * x) # or sin(x * x)
>>> Of course, here one could get the pointer in the module at import time.
>> That optimisation would actually be very worthwhile all by itself. I mean,
>> we know what signatures we need for globally imported functions throughout
>> the module, so we can reduce the call to a single jump through a function
>> pointer (although likely with a preceding NULL check, which the branch
>> prediction would be happy to give us for free). At least as long as sqrt
>> is
>> not being reassigned, but that should hit the 99% case.
>>> However, here:
>>> from numpy import sqrt
> Correction: "import numpy as np"
>>> cdef double f(double x):
>>> return np.sqrt(x * x) # or np.sin(x * x)
>>> the __getattr__ on np sure is larger than any effect we discuss.
>> Yes, that would have to stay a .pxd case, I guess.
> How about this mini-CEP:
> Modules are allowed to specify __nomonkey__ (or __const__, or
> __notreassigned__), a list of strings naming module-level variables where
> "we don't hold you responsible if you assume no monkey-patching of these".
> When doing "import numpy as np", then (assuming "np" is never reassigned in
> the module), at import time we check all names looked up from it in
> __nomonkey__, and if so treat them as "from numpy import sqrt as 'np.sqrt'",
> i.e. the "np." is just a namespace mechanism.
I like the idea. I think this could be generalized to a 'final'
keyword, that could also enable optimizations for cdef class
attributes. So you'd say
cdef final object np
import numpy as np
For class attributes this would tell the compiler that it will not be
rebound, which means you could check if attributes are initialized in
the initializer, or just pull such checks (as wel as bounds checks),
at least for memoryviews, out of loops, without worrying whether it
will be reassigned in the meantime.
> Needs a bit more work, it ignores the possibility that others could
> monkey-patch "np" in the Cython module.
> Problem with .pxd is that currently you need to pick one overload (np.sqrt
> works for n-dimensional arrays too, or takes a list and returns an array).
> And even after adding 3-4 language features to Cython to make this work,
> you're stuck with having to reimplement parts of NumPy in the pxd files just
> so that you can early bind from Cython.
> Dag
> _______________________________________________
> cython-devel mailing list
> cython-devel at python.org
> http://mail.python.org/mailman/listinfo/cython-devel
More information about the cython-devel mailing list
Department of Physics and Astronomy
Dr. Mohamed Azzouz
Full Professor
Department of Physics and Astronomy,
Laurentian University,
Sudbury, Ontario, Canada, P3E 2C6
Phone: +1-705-675-1151 Ext 2224
Office: Fraser Building F-518
email: mazzouz@laurentian.ca
My current research work deals with the study of:
• high-temperature superconductors
• quantum magnetism,
• the microscopic origin of the friction phenomenon, and
• models of spatiotemporal dynamics in ecology
In the past, I have also worked in the following areas:
• the spin-Peierls materials, and
• the crossover scaling effects in the quasi-one-dimensional organic conductors.
Journal Articles
M. Azzouz and B. Doucot, "Effect of small interchain coupling on one-dimensional antiferromagnetic quantum Heisenberg model: The integer-spin case", Physical Review B 47, 8660 (1993).
M. Azzouz, "Interchain coupling effect on the one-dimensional spin 1/2 antiferromagnetic Heisenberg model", Physical Review B 48, 6136 (1993).
M. Azzouz, L. Chen and S. Moukouri, "Calculation of the singlet-triplet gap of the antiferromagnetic Heisenberg model on the ladder", Physical Review B 50, 6233 (1994).
M. Azzouz and T. Dombre, "The motion of holes on the triangular lattice studied using the t-J model", Physical Review B 53, 402 (1996).
M. Azzouz and C. Bourbonnais, "Mean-field theory of the spin-Peierls state under magnetic field: Application to CuGeO3", Physical Review B 53, 5090 (1996).
M. Azzouz, B. Dumoulin, and A. Benyoussef, "Incommensurate nodes in the energy spectrum of coupled antiferromagnetic Heisenberg ladders", Physical Review B 55, R11957 (1997).
M. Azzouz, H. J. Kreuzer, and M. R. A. Shegelski, "Long jumps in surface diffusion: A microscopic derivation of the jump frequencies", Physical Review Letters 80, 1477 (1998).
M. Azzouz, "Identification of the physical parameters of the paramagnetic phase of the one-dimensional Kondo lattice model done by introducing a nonmagnetic quantum state with rotating order parameters", Physical Review B 62, 710 (2000).
B. Bock and M. Azzouz, "The generalization of the Jordan-Wigner transformation in three dimensions and its application to the Heisenberg bilayer antiferromagnet", Physical Review B 64, 054410.
M. Azzouz, H. J. Kreuzer, and M. R. A. Shegelski, "Microscopic derivation of the master and Fokker-Planck equations for surface diffusion and friction", Physical Review B 66, 125403 (2002).
M. Azzouz, "Rotating antiferromagnetism in high-temperature superconductors", Physical Review B 67, 134510 (2003).
M. Azzouz, "Thermodynamics of high-Tc materials in the rotating antiferromagnetism theory", Physical Review B 68, 174523 (2003).
M. Azzouz, "Chemical potentials of high-temperature superconductors", Physical Review B 70, 052501 (2004).
F. Hanke and M. Azzouz, "Modeling the mid-infrared optical gap in La2-xSrxCuO4", Moroccan Journal of Condensed Matter 6, 1 (2005).
C. Pagnutti, M. Anand, and M. Azzouz, "Lattice geometry, gap formation and scale invariance in forests", Journal of Theoretical Biology 236, 79 (2005).
H. Saadaoui and M. Azzouz, "Doping dependence of coupling between charge carriers and bosonic modes in the normal state of high-Tc superconductors", Physical Review B 72, 184518 (2005).
M. Azzouz and K. A. Asante, "Spin locking and freezing phenomena in the antiferromagnetic Heisenberg model on the three leg-ladder", Physical Review B 72, 094433 (2005).
M. Azzouz, "Field-induced quantum criticality in low-dimensional Heisenberg spin systems", Physical Review B 74, 144422 (2006).
Last Modified: 28 January 2007 by C. Roy
On double-diffusive convection and cross diffusion effects on a horizontal wavy surface in a porous medium
An analysis of double diffusive convection induced by a uniformly heated and salted horizontal wavy surface in a porous medium is presented. The wavy surface is first transformed into a smooth
surface via a suitable coordinate transformation and the transformed nonsimilar coupled nonlinear parabolic equations are solved using the Keller box method. The local and average Nusselt and
Sherwood numbers are given as functions of the streamwise coordinate and the effects of various physical parameters are discussed in detail. The effects of the Lewis number, buoyancy ratio, and wavy
geometry on the dynamics of the flow are studied. It was found, among other observations, that the combined effect of Dufour and Soret parameters is to reduce both heat and mass transfer.
MSC: 34B15, 65N30, 76M20.
Keywords: double diffusive convection; nonsimilar solutions; porous medium; Keller box method
1 Introduction
The study of double-diffusive convection has received considerable attention during the last several decades because of its occurrence in a wide range of natural and technological settings.
Double-diffusive convection is an important fluid dynamic phenomenon that involves motions driven by two different density gradients diffusing at different rates (Mojtabi and Charrier-Mojtabi [1]). A
common example of double diffusive convection is seen in oceanography, where heat and salt concentrations exist with different gradients and diffuse at differing rates. Double diffusive convection
manifests in the form of “salt-fingers” (see Stern [2,3]) which are observable in laboratory settings. The input of cold fresh-water from an iceberg can affect both of these variables.
Double-diffusive convection has also been cited as being important in the modeling of solar ponds (Akbarzadeh and Manins [4]) and magma chambers (Huppert and Sparks [5], Fernando and Brandt [6]).
Double-diffusive free convection is also seen in sea-wind formations, where upward convection is also modified by Coriolis forces. This is of particular interest in oceans where the Earth’s rotation
plays a dominant role in many of the motions observed.
In engineering applications, double diffusive convection is commonly visualized in the formation of microstructures during the cooling of molten metals, and fluid flows around shrouded
heat-dissipation fins. Typical technological motivations for the study of double-diffusive convection range from such diverse fields as the migration of moisture through air contained in fibrous
insulations, grain storage systems, the dispersion of contaminants through water-saturated soil, crystal growth, solidification of binary mixtures, and the underground disposal of nuclear wastes.
Other important applications can be found in the fields of geophysical sciences and electrochemistry. A comprehensive review of the literature concerning natural convection in fluid-saturated porous
media may be found in the books by Ingham and Pop [7,8], Nield and Bejan [9], Vafai [10,11], and Vadasz [12].
Most free convection studies mainly focus on the cases where the thermal boundary conditions allow the use of similarity transformations to reduce the governing boundary layer equations to a system
of ordinary differential equations which can be handled either analytically or numerically. The existence of self-similar solutions points to the fact that, in general, the heated surface must have a
plane geometry. In reality, the heated surfaces need not always be planar. Surfaces are sometimes deliberately roughened to achieve enhanced heat transport. Heat transfer devices like flat-plate
solar collectors and flat-plate condensers in refrigerators possess a nonuniform surface. Further, in cavity wall insulating systems and grain storage systems, one can witness pronounced surface
roughness. Yao [13] and Moulic and Yao [14,15] were the first to include the effects of surface nonuniformities on the free convection thermal boundary layer flow of a Newtonian fluid. Rees and Pop [
16,17] investigated parallel thermal boundary layers due to natural convection induced by a vertical surface with sinusoidal undulations embedded in a porous medium. Subsequent studies by Rees and Pop [18,19] considered the effect of surface waves on the convection due to either horizontal or vertical wavy surfaces embedded in a porous medium. In the case of a horizontal wavy surface, Rees and Pop [18] found that the amplitude of the surface waves must lie within a certain range in order to balance direct and indirect buoyancy forces. They showed that for wall amplitudes greater than a certain critical value, the flow separates with one or more regions of reverse flow. Rees and Pop [19] obtained similarity solutions for the case of longitudinal surface waves with (i) a prescribed power-law temperature set on the surface, and (ii) a prescribed power-law heat flux.
Cheng [20-23], in a series of papers, investigated free convection due to vertical/inclined wavy surfaces in porous media. Cheng [20] studied natural convection heat and mass transfer near a vertical wavy surface with constant wall temperature and concentration in a porous medium. He showed that the average Nusselt and Sherwood numbers for a sinusoidal wavy surface are consistently smaller than the corresponding results for a flat plate. This work was generalized in Cheng [21] to a non-Darcy model for natural convection heat and mass transfer from a vertical wavy surface in a porous medium.
Cheng [22] considered a non-Newtonian power law liquid to study combined heat and mass transfer in free convection flow due to a vertical wavy surface in a porous medium with thermal and mass
stratifications. It was found that an increase in the power-law index, the thermal stratification parameter, or the concentration stratification parameter leads to a smaller fluctuation of the local
Nusselt and Sherwood numbers with the streamwise coordinate. Double diffusive natural convection along an inclined wavy surface in a porous medium was investigated by Cheng [23] and showed that the
inclination angle enhanced the total heat and mass transfer rates.
Pop and Na [24] studied natural convection due to a frustum of a wavy cone in a porous medium and discussed the effects of geometry of a wavy cone on the heat transfer. Cheng [25] studied the natural
convection heat and mass transfer near a wavy cone with constant wall temperature and concentration in a porous medium. Using a cubic spline method of solution, he showed that the buoyancy ratio
leads to higher heat and mass transports while the diffusivity ratio (Lewis number) reduces heat while increasing mass transports. Cheng [26] obtained nonsimilar solutions for double diffusive
convection near a frustum of a wavy cone in porous media and showed that the average Nusselt and Sherwood numbers for the wavy cone are smaller than the corresponding smooth cone. Double-diffusive
natural convection along a vertical truncated wavy cone in non-Newtonian fluid saturated porous media with thermal and mass stratification has been investigated by Cheng [27]. In this study, he
showed that the power-law index leads to smaller fluctuations in the local Nusselt and Sherwood numbers and increasing the thermal and concentration stratification parameters reduce the heat and mass
transfer rates.
An energy flux is often generated by both temperature and solute gradients leading to Dufour or diffusion-thermo effect and Soret or thermal-diffusion effects. Both effects have been extensively
studied in gases, while the Soret effect has been studied both theoretically and experimentally in liquids; see Mortimer and Eyring [28]. It is generally accepted that the Dufour and the Soret
effects are small compared to the effects described by Fourier and Fick’s laws (Mojtabi and Charrier-Mojtabi [1]) and can therefore be neglected in many heat and mass-transfer processes. However, it
has been shown in a number of studies that there are exceptions, in areas such as the geosciences, where Dufour and Soret effects are significant and cannot be ignored; see for instance Kafoussias and
Williams [29], Awad et al.[30], and the references therein.
Mortimer and Eyring [28] used an elementary transition state approach to obtain a simple model for Soret and Dufour effects in thermodynamically ideal mixtures of substances with molecules of nearly
equal size. In their model, the flow of heat in the Dufour effect was identified as the transport of the enthalpy change of activation as molecules diffuse. The results were found to fit the Onsager
reciprocal relationship, Onsager [31]. Shariful et al.[32] investigated the Dufour and Soret effects on steady combined free-forced convective and mass transfer flow past a semi-infinite vertical
flat plate of hydrogen-air mixtures. They used the fourth-order Runge-Kutta method to solve the governing equations of motion. Their study showed that the Dufour and Soret effects should not be
neglected. Shateyi et al.[33] investigated the effects of diffusion-thermo and thermal-diffusion on MHD fluid flow over a permeable vertical plate in the presence of radiation and Hall currents.
Recently, Narayana et al.[34] and Malashetty and Biradar [35] have investigated the cross diffusion effects on the onset of double diffusive convection in a binary Maxwell fluid saturated porous
medium. Other related recent studies on the effects of Soret and Dufour parameters include those by Makinde [36-38].
In this paper, we extend the work by Rees and Pop [18] to include the solute diffusion in natural convection induced by a horizontal wavy surface profile in a porous medium. The wall temperature and
concentrations are assumed to be constants and the amplitude of the wave is assumed to be small enough so as not to engender any regions of reverse flow (these issues are dealt with in detail by Rees and Pop [18]). Assuming the Rayleigh number Ra to be very large, the boundary layer approximation is invoked, leading to a set of nonsimilar parabolic partial differential equations whose solution is obtained using the Keller box method (see Keller [39], Keller and Cebeci [40]).
2 Mathematical formulation
Consider the problem of double-diffusive convection in a fluid around a horizontal surface with transverse sinusoidal undulations embedded in a porous medium. Figure 1 shows the schematic sketch of
the problem. The wavy surface profile is described by ŷ = â sin(2πx̂/l + φ), where â is the amplitude of the surface wave, l is the characteristic length of the wave, and φ is the phase of the wave.
Figure 1. Schematic diagram of the physical problem.
The temperature and the solute concentration at the surface are assumed to be constants T_w and C_w, which are greater than the ambient values T_∞ and C_∞, respectively. We also assume a steady convective
with the properties of the fluid and porous medium to be constant except for the buoyancy term. Following the Boussinesq approximation, the governing continuity, momentum, heat, and solute
concentration equations can be written in the form
subject to boundary conditions
Here, g is the acceleration due to gravity, K is the permeability of the porous medium, ν is the kinematic viscosity, β is the coefficient of thermal expansion, β* is the coefficient of solutal expansion, κ is the thermal diffusivity, D is the solutal diffusivity, and D_CT and D_TC are the coefficients of cross diffusion. We now use the following nondimensional variables:
The governing equations (2)-(5) take the form:
The dimensionless parameters appearing in the above set of equations are the Rayleigh number Ra, the buoyancy ratio N, the Dufour number D_f, the Soret number Sr, and the Lewis number Le, defined as
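The defining expressions for these groups are in the displayed equation above (not reproduced here). As a rough guide, the standard porous-medium forms of three of these groups (an assumption, not taken from this paper) can be evaluated as follows:

```python
def dimensionless_groups(g, beta_T, beta_C, K, l, nu, kappa, D, dT, dC):
    """Standard porous-medium forms (assumed) of the Darcy-modified
    Rayleigh number, the buoyancy ratio, and the Lewis number.
    dT = T_w - T_inf, dC = C_w - C_inf."""
    Ra = g * beta_T * K * dT * l / (nu * kappa)  # Darcy-modified Rayleigh number
    N = (beta_C * dC) / (beta_T * dT)            # solutal-to-thermal buoyancy ratio
    Le = kappa / D                               # thermal-to-solutal diffusivity ratio
    return Ra, N, Le
```

For instance, a thermal diffusivity two orders of magnitude larger than the solutal diffusivity gives Le = 100, the regime in which thermal and solutal boundary layers have very different thicknesses.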
The boundary conditions in (6) take the following nondimensional form:
Introducing the stream function Ψ, such that
the continuity equation (8) is satisfied automatically and Eqs. (9)-(11) can be written in the following form:
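That a stream function satisfies continuity identically is easy to verify numerically. The sketch below assumes the usual convention U = ∂Ψ/∂Y, V = −∂Ψ/∂X (the paper's own sign convention may differ) and evaluates ∂U/∂X + ∂V/∂Y by central differences for an arbitrary smooth Ψ:

```python
import math

def divergence_from_streamfunction(psi, x, y, h=1e-4):
    """Central-difference estimate of dU/dX + dV/dY, where U = dPsi/dY
    and V = -dPsi/dX (usual stream-function convention)."""
    U = lambda X, Y: (psi(X, Y + h) - psi(X, Y - h)) / (2 * h)
    V = lambda X, Y: -(psi(X + h, Y) - psi(X - h, Y)) / (2 * h)
    dUdX = (U(x + h, y) - U(x - h, y)) / (2 * h)
    dVdY = (V(x, y + h) - V(x, y - h)) / (2 * h)
    return dUdX + dVdY

# any smooth Psi works: the mixed partials cancel
psi = lambda X, Y: math.sin(X) * math.exp(-Y) + X * Y**2
```

The result is zero up to floating-point rounding because the two cross-derivative stencils use the same four function values with opposite signs.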
with the boundary conditions
The effect of the wavy surface and the usual boundary layer scalings are incorporated into the governing equations (15)-(17) using the transformations,
along with the substitutions
Equations (19) transform the wavy surface to a smooth surface in the physical configuration. Substituting (19) and (20) into Eqs. (15)-(18) and letting Ra → ∞, we obtain the following boundary layer equations
subject to the boundary conditions
The associated local Nusselt and Sherwood numbers
The mean Nusselt and Sherwood numbers from the leading edge to the free stream position x are given by
3 Solution procedure
The nonsimilar parabolic partial differential equations that govern the dynamics of the fluid flow were solved using the Keller box implicit finite difference method. This section gives details of
the Keller box scheme used to solve the coupled nonlinear parabolic equations (21)-(23). We present the implementation of the Keller box method for a problem that involves coupling, nonlinearity, and variable coefficients. The method consists of four main steps: decomposition, discretization, linearization, and the solution of the linearized difference equations.
The decomposition of Eqs. (21)-(23) into first-order equations is achieved by setting
where . The boundary conditions take the form
In the discretization and formation of finite difference equations, the computational domain (ξ-η plane) is divided into a finite number of mesh points that form the corners of the rectangle shown in
Figure 2(a). The net points are defined as
where and are respectively the spacings in the ξ and η directions. The choice of net spacings is done manually in the present paper but can be generated automatically. A typical mesh rectangle over
which Eqs. (27)-(32) are to be approximated is indicated in Figure 2(b). The equations are centered around one of the points marked by “×.”
A finer grid that points toward the surface ( ) and a coarser grid that points away from the surface ( ) are usually chosen for nonsimilar boundary layer flows, however, here we use uniform grid
points in both directions. We fix which is well outside the plane boundary layer surface. Since the boundary-layer equations have been formulated as a first-order system, all derivatives can be
approximated by simple centered differences and two-point averages, using only values at the corners of the box. This type of differencing is as compact as possible and is, as we shall see, one of
the most attractive features of the Box scheme. We use the notation for any quantity represented midway between net points and this notion should not be confused with the tensorial one. Then, for
any net quantity w we use the following finite difference approximations:
Using the box-scheme approximations (35), Eqs. (27)-(32) yield the following nonlinear finite difference equations:
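The two-point averages and centered differences used in the box-scheme approximations can be sketched for a single net quantity w; the uniform spacing h and the function names are illustrative, not the authors' code.

```python
def box_average(w, j):
    """Two-point average of a net quantity at the midpoint j - 1/2."""
    return 0.5 * (w[j] + w[j - 1])

def box_derivative(w, j, h):
    """Centered difference approximating dw/d(eta) at the midpoint j - 1/2."""
    return (w[j] - w[j - 1]) / h
```

For a quadratic profile such as w = η², the centered difference at the midpoint is exact, which reflects the second-order accuracy of the box scheme.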
Further manipulation yields the following equations:
The boundary conditions take the form
The difference equations (42)-(47) are imposed at each grid point, giving rise to 6J algebraic equations. The boundary conditions (48), together with the difference equations (42)-(47), comprise a system with as many equations as unknowns. There are several methods for solving these equations. In this study, we employ Newton's method, which is easy to implement.
We use Newton's method to linearize the nonlinear finite difference equations (42)-(47), which possess quadratic nonlinearity. Assuming that the nth iterates of the variables are known, the (n+1)th iterates are given by
We further define the following variables for brevity.
Substituting Eqs. (49)-(50) into Eqs. (42)-(47) and retaining only linear terms in δ, the linearized finite difference equations can be written as:
From the boundary conditions, we get the following difference equations:
The linearized difference system of Eqs. (51)-(57) can be solved by any standard method, such as direct elimination or iteration. Since the linearized system has a block tridiagonal structure, we employ a block-elimination method. The block tridiagonal system in matrix form is written as follows:
We first factorize the coefficient matrix as follows:
where I is an identity matrix of order 6. The procedure involves two sweeps, i.e., a forward sweep and a backward sweep. In the forward sweep, the unknowns , , and the intermediate variables are
determined and in the backward sweep the required solution is determined in terms of and .
Forward sweep
Initial matrices:
Using the above, the successive matrices are obtained using the following formulae:
Backward sweep
The required solution of the linearized system can be written as
After obtaining the solution of the linearized system, the corrections are added to the solutions at the nth iteration and the procedure is repeated until convergence is achieved. This gives the solution at a given streamwise location; the procedure is then repeated at the next streamwise location, and so on. In boundary layer flows, the greatest error is associated with the wall gradients, and hence these are used to determine convergence. The iterations are stopped when successive iterate values of the gradients agree to within 10^−6. Since Newton's method has second-order convergence, three to four iterations usually suffice to achieve the chosen tolerance.
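The forward and backward sweeps of the block-elimination method described above can be sketched for a generic block-tridiagonal system A x = d. For brevity 2×2 blocks are used here (the linearized Keller box system of this paper has 6×6 blocks), and all names are illustrative rather than the authors' code.

```python
def solve_block_tridiagonal(A, B, C, d):
    """Block Thomas algorithm for a block-tridiagonal system with 2x2
    sub-diagonal blocks A[i], diagonal blocks B[i], super-diagonal
    blocks C[i] (A[0] and C[-1] are ignored) and right-hand sides d[i]."""
    def inv2(m):
        (a, b), (c, e) = m
        det = a * e - b * c
        return [[e / det, -b / det], [-c / det, a / det]]

    def matmat(m, n):
        return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]

    def matvec(m, v):
        return [sum(m[i][k] * v[k] for k in range(2)) for i in range(2)]

    n = len(B)
    # Forward sweep: eliminate the sub-diagonal blocks row by row.
    Bp, dp = [B[0]], [d[0]]
    for i in range(1, n):
        W = matmat(A[i], inv2(Bp[i - 1]))
        WC = matmat(W, C[i - 1])
        Bp.append([[B[i][r][s] - WC[r][s] for s in range(2)] for r in range(2)])
        Wd = matvec(W, dp[i - 1])
        dp.append([d[i][r] - Wd[r] for r in range(2)])
    # Backward sweep: substitute from the last block row upward.
    x = [None] * n
    x[-1] = matvec(inv2(Bp[-1]), dp[-1])
    for i in range(n - 2, -1, -1):
        Cx = matvec(C[i], x[i + 1])
        x[i] = matvec(inv2(Bp[i]), [dp[i][r] - Cx[r] for r in range(2)])
    return x
```

A residual check of A x = d is the simplest way to validate such a solver.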
4 Results and discussion
The problem of double diffusive convection induced by a horizontal wavy surface in a porous medium in the presence of cross diffusion has been analyzed. The problem is governed by the system of coupled nonlinear parabolic partial differential equations (21)-(24). The Keller box method was used to find numerical solutions of the problem. The amplitude of the wavy surface was assumed to be small and the wave phase was taken to be zero. The computations were carried out over a range of values of the governing parameters Ra, N, Le, Sr, and the Dufour number. We note that positive values of the buoyancy ratio N represent the aiding buoyancy condition while negative values represent the opposing buoyancy condition. The computations are shown in Figures 3-13. We focus mainly on the parametric effects on the slip velocity and the heat and mass transport coefficients.
Figure 3. Effect of cross diffusion on velocity, temperature, and concentration profiles for different values of the streamwise location ξ.
Figure 3 shows the cross diffusion effects on the velocity, temperature, and concentration profiles for different values of ξ. Due to the wavy nature of the surface, the dynamics of the fluid flow differ between streamwise locations. We notice the existence of locations where the velocity of the fluid attains extreme values over every undulation of the wavy surface. The profiles are shown for two selected streamwise locations. At both of these locations, the effect of cross diffusion is to increase both the transverse and axial velocity profiles. The thermal and concentration boundary layers also thicken as a result of cross diffusion, as can be seen from Figures 3(c) and (d).
The variation of the slip velocity as a function of the streamwise position ξ is shown for different wave amplitudes in Figure 4. For the plane surface, the slip velocity remains constant, indicating that a self-similar solution exists in that case. For a wavy surface, the slip velocity is oscillatory, first increasing to a maximum and then decreasing to a minimum before the cycle repeats. The fluctuations in the slip velocity increase with increasing amplitude.
Figure 4. Variation of slip velocity with ξ for varying wave amplitude α.
The heat transfer results are shown in Figures 5(a) and (b) for different values of the wave amplitude α. The local Nusselt number remains smooth and increases with ξ for the plane surface. For the wavy surface, the local Nusselt number fluctuates over every cycle, and these fluctuations grow as ξ increases. The fluctuations in the local Nusselt number increase with the wave amplitude. The mean Nusselt number increases with the streamwise position ξ and with increasing amplitude.
Figure 5. Variation of local and mean Nusselt numbers with ξ for different values of the wave amplitude α.
Figures 5(c) and (d) highlight the mass transfer results for different values of α. Analogous to the heat transfer case, the local Sherwood number increases with ξ for the plane surface, while for a wavy surface the local Sherwood number fluctuates. These fluctuations increase as ξ increases and also grow with the wave amplitude. Figure 5(d) shows that the mean Sherwood number increases with the streamwise position ξ and with increasing values of α. These results are qualitatively similar to those of Rees and Pop [18].
Figure 6 shows the effect of the Lewis number Le and the buoyancy ratio N on the slip velocity. The Lewis number reduces the slip velocity while the buoyancy ratio has an enhancing effect as
expected. Increasing aiding buoyancy generates faster moving fluid flow as is evident in Figure 6(b).
Figure 6. Variation of slip velocity with ξ for different values of the Lewis number Le and the buoyancy ratio N.
Figure 7 shows the effect of the buoyancy ratio N on the local and mean Nusselt and Sherwood numbers. It is evident that the buoyancy ratio enhances both the local and mean heat and mass transfer coefficients. As observed earlier, the transfer coefficients exhibit fluctuations which grow with the streamwise coordinate ξ. These results are in agreement with those reported by Cheng [21] for a vertical wavy surface.
Figure 7. Variation of local and mean heat and mass transfer rates with ξ for different values of the buoyancy ratio N.
Figure 8 shows the effect of the Lewis number Le on the local and mean Nusselt and Sherwood numbers. The Lewis number tends to reduce the local and average heat transfer rates. The fluctuations again increase with the streamwise coordinate ξ. These results for the heat and mass transfer coefficients coincide with those reported by Cheng [21] for the vertical wavy surface. The effect of the cross diffusion coefficients on the slip velocity is shown in Figure 9. The slip velocity is higher in the presence of cross diffusion than when cross diffusion is absent.
Figure 8. Variation of local and mean heat and mass transfer rates with ξ for different values of the Lewis number Le.
Figure 9. Cross diffusion effect on the slip velocity for varying ξ.
The cross diffusion effects on the local and mean heat and mass transfer are depicted in Figure 10. These figures suggest that the presence of cross diffusion reduces both heat and mass transfer rates. The effect is more pronounced for heat transfer, while the variation is marginal for mass transfer. Figures 11(a) and (b) show the effect of the Dufour and Soret numbers on the slip velocity. Both parameters tend to accelerate the fluid in the boundary layer, thereby increasing the slip velocity.
Figure 10. Cross diffusion effect on local and mean heat and mass transfer rates for varying ξ.
Figure 11. Variation of slip velocity with ξ for different values of the Dufour number and the Soret number Sr.
The effects of the Dufour number on the local and mean Nusselt and Sherwood numbers are shown in Figures 12(a)-(d). It is evident that the Dufour number reduces the local and mean heat transfer rates while increasing the local and mean mass transfer rates. Here also the effect is more significant for heat transfer than for mass transfer.
Figure 12. Variation of local and mean Nusselt and Sherwood numbers with ξ for different values of the Dufour number.
The effects of the Soret number on the local and mean Nusselt and Sherwood numbers are shown in Figures 13(a)-(d). These results show that the Soret number enhances the local and mean heat transfer rates while reducing the local and mean mass transfer rates. The effect of the Soret number on heat and mass transfer is thus the exact opposite of that of the Dufour number.
Figure 13. Variation of local and mean Nusselt and Sherwood numbers with ξ for different values of the Soret number Sr.
5 Conclusions
The paper analyzed double diffusive convection induced by a horizontal wavy surface embedded in a porous medium taking the cross diffusion effect into account. The governing parabolic partial
differential equations have been solved using the Keller box method. The convergence rate depends on the parametric values chosen. Computations show that the Lewis number enhances mass transfer while
reducing heat transfer. The buoyancy ratio enhances both the heat and mass transfer rates. The surface geometry also plays a crucial role in controlling heat and mass transfer rates. The effect of
the Dufour number is to reduce heat transfer and enhance mass transfer. The effect of the Soret number is the exact opposite of the Dufour effect. We observe reduced heat and mass transfer in the
presence of cross diffusion terms.
Authors’ contributions
MN carried out the numerical computations and drafted the manuscript. PS, SSM, and PGS participated in the design of the study and helped to draft the manuscript. All authors read and approved the
final manuscript.
The authors are grateful to the National Research Foundation (NRF) and the University of KwaZulu-Natal for financial support.
1. Mojtabi, A, Charrier-Mojtabi, MC: Double diffusive convection in porous media. In: Vafai, K (ed.) Handbook of Porous Media, pp. 269–320. Taylor and Francis, New York (2005)
2. Stern, ME: The 'salt fountain' and thermohaline convection. Tellus 12, 172–175 (1960)
3. Stern, ME: Collective instability of salt fingers. J. Fluid Mech. 35, 209–218 (1969)
4. Akbarzadeh, A, Manins, P: Convective layers generated by side walls in solar ponds. Sol. Energy 41(6), 521–529 (1988)
5. Huppert, HE, Sparks, RSJ: Double-diffusive convection due to crystallization in magmas. Annu. Rev. Earth Planet. Sci. 12, 11–37 (1984)
6. Fernando, HJS, Brandt, A: Recent advances in double-diffusive convection. Appl. Mech. Rev. 47, c1–c7 (1994)
7. Yao, LS: Natural convection along a vertical wavy surface. J. Heat Transf. 105, 465–468 (1983)
8. Moulic, SG, Yao, LS: Mixed convection along a wavy surface. J. Heat Transf. 111, 974–979 (1989)
9. Moulic, SG, Yao, LS: Natural convection along a vertical wavy surface with uniform heat flux. J. Heat Transf. 111, 1106–1108 (1989)
10. Rees, DAS, Pop, I: A note on free convection along a vertical sinusoidally wavy surface in a porous medium. J. Heat Transf. 116, 505–508 (1994)
11. Rees, DAS, Pop, I: Free convection induced by a vertical wavy surface with uniform heat flux in a porous medium. J. Heat Transf. 117, 545–550 (1995)
12. Rees, DAS, Pop, I: Free convection induced by a horizontal wavy surface in a porous medium. Fluid Dyn. Res. 14, 151–166 (1994)
13. Rees, DAS, Pop, I: The effect of longitudinal surface waves on free convection from vertical surfaces in porous media. Int. Commun. Heat Mass Transf. 24, 419–425 (1997)
14. Cheng, CY: Natural convection heat and mass transfer near a vertical wavy surface with constant wall temperature and concentration in a porous medium. Int. Commun. Heat Mass Transf. 27, 1143–1154 (2000)
15. Cheng, CY: Non-Darcy natural convection heat and mass transfer from a vertical wavy surface in saturated porous media. Appl. Math. Comput. 182, 1488–1500 (2006)
16. Cheng, CY: Combined heat and mass transfer in natural convection flow from a vertical wavy surface in a power-law fluid saturated porous medium with thermal and mass stratification. Int. Commun. Heat Mass Transf. 36, 351–356 (2009)
17. Cheng, CY: Double diffusive natural convection along an inclined wavy surface in a porous medium. Int. Commun. Heat Mass Transf. 37, 1471–1476 (2010)
18. Pop, I, Na, TY: Natural convection of a frustum of wavy cone in a porous medium. Mech. Res. Commun. 22, 181–190 (1995)
19. Cheng, CY: Natural convection heat and mass transfer near a wavy cone with constant wall temperature and concentration in a porous medium. Mech. Res. Commun. 27, 613–620 (2000)
20. Cheng, CY: Nonsimilar solutions for double diffusive convection near a frustum of a wavy cone in porous media. Appl. Math. Comput. 194, 156–167 (2007)
21. Cheng, CY: Double-diffusive natural convection along a vertical wavy truncated cone in non-Newtonian fluid saturated porous media with thermal and mass stratification. Int. Commun. Heat Mass Transf. 35, 985–990 (2008)
22. Mortimer, RG, Eyring, H: Elementary transition state theory of the Soret and Dufour effects. Proc. Natl. Acad. Sci. USA 77, 1728–1731 (1980)
23. Kafoussias, NG, Williams, EW: Thermal-diffusion and diffusion-thermo effects on mixed free-forced convective and mass transfer boundary layer flow with temperature dependent viscosity. Int. J. Eng. Sci. 33, 1369–1384 (1995)
24. Awad, FG, Sibanda, P, Motsa, SS: On the linear stability analysis of a Maxwell fluid with double-diffusive convection. Appl. Math. Model. 34, 3509–3517 (2010)
25. Onsager, L: Reciprocal relations in irreversible processes-I. Phys. Rev. 37, 405–426 (1931)
26. Alam, MS, Rahman, MM, Maleque, MA, Ferdows, M: Dufour and Soret effects on steady MHD combined free-forced convective and mass transfer flow past a semi-infinite vertical plate. Thammasat Int. J. Sci. Tech. 11, 1–12 (2006)
27. Shateyi, S, Motsa, SS, Sibanda, P: The effects of thermal radiation, Hall currents, Soret, and Dufour on MHD flow by mixed convection over a vertical surface in porous media. Math. Probl. Eng. 2010, Article ID 627475 (2010). doi:10.1155/2010/627475
28. Narayana, M, Sibanda, P, Motsa, SS, Lakshmi-Narayana, PA: Linear and nonlinear stability analysis of binary Maxwell fluid convection in a porous medium. Heat Mass Transf. 48(5), 863–874 (2012)
29. Malashetty, MS, Biradar, BS: The onset of double diffusive convection in a binary Maxwell fluid saturated porous layer with cross-diffusion effects. Phys. Fluids 23, Article ID 064102 (2011)
30. Makinde, OD: On MHD mixed convection with Soret and Dufour effects past a vertical plate embedded in a porous medium. Lat. Am. Appl. Res. 41, 63–68 (2011)
31. Makinde, OD, Olanrewaju, PO: Unsteady mixed convection with Soret and Dufour effects past a porous plate moving through a binary mixture of chemically reacting fluid. Chem. Eng. Commun. 198(7), 920–938 (2011)
32. Makinde, OD: MHD mixed-convection interaction with thermal radiation and nth order chemical reaction past a vertical porous plate embedded in a porous medium. Chem. Eng. Commun. 198(4), 590–608 (2011)
33. Keller, HB: Numerical methods in boundary-layer theory. Annu. Rev. Fluid Mech. 10, 417–433 (1978)
Rules for Arithmetic Operations
Fixed-point arithmetic refers to how signed or unsigned binary words are operated on. The simplicity of fixed-point arithmetic functions such as addition and subtraction allows for cost-effective
hardware implementations.
The sections that follow describe the rules that the Simulink^® software follows when arithmetic operations are performed on inputs and parameters. These rules are organized into four groups based on
the operations involved: addition and subtraction, multiplication, division, and shifts. For each of these four groups, the rules for performing the specified operation are presented with an example
using the rules.
Computational Units
The core architecture of many processors contains several computational units including arithmetic logic units (ALUs), multiply and accumulate units (MACs), and shifters. These computational units
process the binary data directly and provide support for arithmetic computations of varying precision. The ALU performs a standard set of arithmetic and logic operations as well as division. The MAC
performs multiply, multiply/add, and multiply/subtract operations. The shifter performs logical and arithmetic shifts, normalization, denormalization, and other operations.
Addition and Subtraction
Addition is the most common arithmetic operation a processor performs. When two n-bit numbers are added together, the result can require n + 1 bits because of a carry out of the leftmost digit. For two's complement addition of two numbers, there are three cases to consider:
● If both numbers are positive and the result of their addition has a sign bit of 1, then overflow has occurred; otherwise the result is correct.
● If both numbers are negative and the sign of the result is 0, then overflow has occurred; otherwise the result is correct.
● If the numbers are of unlike sign, overflow cannot occur and the result is always correct.
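The three cases above can be checked mechanically. The sketch below models an 8-bit word in Python; the width parameter and function name are illustrative, not a Simulink API.

```python
def add_twos_complement(a, b, bits=8):
    """Add two values as `bits`-bit two's complement words and report
    whether overflow occurred, following the three cases above."""
    mask = (1 << bits) - 1
    sign_bit = 1 << (bits - 1)
    raw = (a + b) & mask                       # wrap to the word size
    result = raw - (1 << bits) if raw & sign_bit else raw
    # Overflow occurs only when the operands share a sign and the
    # result's sign differs; unlike signs can never overflow.
    overflow = (a >= 0) == (b >= 0) and (a >= 0) != (result >= 0)
    return result, overflow
```

For example, 100 + 28 wraps to -128 in 8 bits with overflow reported, while 100 + (-50) is always correct.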
Fixed-Point Simulink Blocks Summation Process
Consider the summation of two numbers. Ideally, the real-world values obey the equation
where V[b] and V[c] are the input values and V[a] is the output value. To see how the summation is actually implemented, the three ideal values should be replaced by the general [Slope Bias] encoding
scheme described in Scaling:
The equation in Addition gives the solution of the resulting equation for the stored integer, Q[a]. Using shorthand notation, that equation becomes
where F[sb] and F[sc] are the adjusted fractional slopes and B[net] is the net bias. The offline conversions and online conversions and operations are discussed below.
Offline Conversions. F[sb], F[sc], and B[net] are computed offline using round-to-nearest and saturation. Furthermore, B[net] is stored using the output data type.
Online Conversions and Operations. The remaining operations are performed online by the fixed-point processor, and depend on the slopes and biases for the input and output data types. The worst (most
inefficient) case occurs when the slopes and biases are mismatched. The worst-case conversions and operations are given by these steps:
It is important to note that bit shifting, rounding, and overflow handling are applied to the intermediate steps (3 and 4) and not to the overall sum.
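The net effect of the worst-case conversion can be sketched in floating point: first undo the input's [Slope Bias] encoding, then re-quantize into the output encoding. On a real target this is carried out with the integer shifts and adds described above; the function names here are illustrative, and round-to-floor is chosen to match the worked example that follows.

```python
import math

def real_world(q, slope, bias):
    """Recover the real-world value V = slope * Q + bias."""
    return slope * q + bias

def reencode(q_in, slope_in, bias_in, slope_out, bias_out):
    """Re-encode a stored integer from one [Slope Bias] scaling to
    another, rounding toward floor."""
    v = real_world(q_in, slope_in, bias_in)
    return math.floor((v - bias_out) / slope_out)
```

For instance, 5.4375 stored at slope 2^-4 and zero bias is Q = 87; re-encoded at slope 2^-3 it becomes Q = 43, representing 5.375 with one bit of precision lost.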
Streamlining Simulations and Generated Code
If the scaling of the input and output signals is matched, the number of summation operations is reduced from the worst (most inefficient) case. For example, when an input has the same fractional
slope as the output, step 2 reduces to multiplication by one and can be eliminated. Trivial steps in the summation process are eliminated for both simulation and code generation. Exclusive use of
binary-point-only scaling for both input signals and output signals is a common way to eliminate mismatched slopes and biases, and results in the most efficient simulations and generated code.
The Summation Process
Suppose you want to sum three numbers. Each of these numbers is represented by an 8-bit word, and each has a different binary-point-only scaling. Additionally, the output is restricted to an 8-bit
word with binary-point-only scaling of 2^-3.
The summation is shown in the following model for the input values 19.875, 5.4375, and 4.84375.
Applying the rules from the previous section, the sum follows these steps:
1. The first number to be summed (19.875) has a fractional slope that matches the output fractional slope. Furthermore, the binary points and storage types are identical, so the conversion is
2. The second number to be summed (5.4375) has a fractional slope that matches the output fractional slope, so a slope adjustment is not needed. The storage data types also match, but the difference
in binary points requires that both the bits and the binary point be shifted one place to the right:
Note that a loss in precision of one bit occurs, with the resulting value of Q[Temp] determined by the rounding mode. For this example, round-to-floor is used. Overflow cannot occur in this case
because the bits and binary point are both shifted to the right.
3. The summation operation is performed:
Note that overflow did not occur, but it is possible for this operation.
4. The third number to be summed (4.84375) has a fractional slope that matches the output fractional slope, so a slope adjustment is not needed. The storage data types also match, but the difference
in binary points requires that both the bits and the binary point be shifted two places to the right:
Note that a loss in precision of two bits occurs, with the resulting value of Q[Temp] determined by the rounding mode. For this example, round-to-floor is used. Overflow cannot occur in this case because the bits and binary point are both shifted to the right.
As shown here, the final result differs from the ideal sum:
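The conversion-and-sum steps above can be reproduced with plain integers. The input binary points 2^-3, 2^-4, and 2^-5 are assumptions read off from the one-bit and two-bit right shifts described in steps 2 and 4, and round-to-floor is modeled with an arithmetic right shift.

```python
def to_q(value, exp):
    """Store a real-world value as an integer with scaling 2**exp."""
    return round(value / 2**exp)

def align(q, exp_in, exp_out):
    """Re-align a stored integer to the output binary point; discarded
    bits are rounded toward floor (arithmetic right shift)."""
    return q >> (exp_out - exp_in) if exp_out >= exp_in else q << (exp_in - exp_out)

# Inputs as described: 19.875 @ 2^-3, 5.4375 @ 2^-4, 4.84375 @ 2^-5.
terms = [(19.875, -3), (5.4375, -4), (4.84375, -5)]
q_sum = sum(align(to_q(v, e), e, -3) for v, e in terms)   # 159 + 43 + 38
result = q_sum * 2**-3
ideal = sum(v for v, _ in terms)
```

The computed sum 30.0 falls short of the ideal 30.15625 by exactly the 0.15625 discarded in the two right shifts.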
Blocks that perform addition and subtraction include the Sum, Gain, and Discrete FIR Filter blocks.
The multiplication of an n-bit binary number with an m-bit binary number results in a product that is up to m + n bits in length for both signed and unsigned words. Most processors perform n-bit by
n-bit multiplication and produce a 2n-bit result (double bits) assuming there is no overflow condition.
Fixed-Point Simulink Blocks Multiplication Process
Consider the multiplication of two numbers. Ideally, the real-world values obey the equation
where V[b] and V[c] are the input values and V[a] is the output value. To see how the multiplication is actually implemented, the three ideal values should be replaced by the general [Slope Bias]
encoding scheme described in Scaling:
The solution of the resulting equation for the output stored integer, Q[a], is given below:
Multiplication with Nonzero Biases and Mismatched Fractional Slopes. The worst-case implementation of the above equation occurs when the slopes and biases of the input and output signals are
mismatched. In such cases, several low-level integer operations are required to carry out the high-level multiplication (or division). Implementation choices made about these low-level computations
can affect the computational efficiency, rounding errors, and overflow.
In Simulink blocks, the actual multiplication or division operation is always performed on fixed-point variables that have zero biases. If an input has nonzero bias, it is converted to a
representation that has binary-point-only scaling before the operation. If the result is to have nonzero bias, the operation is first performed with temporary variables that have binary-point-only
scaling. The result is then converted to the data type and scaling of the final output.
If both the inputs and the output have nonzero biases, then the operation is broken down as follows:
These equations show that the temporary variables have binary-point-only scaling. However, the equations do not indicate the signedness, word lengths, or values of the fixed exponent of these
variables. The Simulink software assigns these properties to the temporary variables based on the following goals:
● Represent the original value without overflow.
The data type and scaling of the original value define a maximum and minimum real-world value:
The data type and scaling of the temporary value must be able to represent this range without overflow. Precision loss is possible, but overflow is never allowed.
● Use a data type that leads to efficient operations.
This goal is relative to the target that you will use for production deployment of your design. For example, suppose that you will implement the design on a 16-bit fixed-point processor that
provides a 32-bit long, 16-bit int, and 8-bit short or char. For such a target, preserving efficiency means that no more than 32 bits are used, and the smaller sizes of 8 or 16 bits are used if
they are sufficient to maintain precision.
● Maintain precision.
Ideally, every possible value defined by the original data type and scaling is represented perfectly by the temporary variable. However, this can require more bits than is efficient. Bits are
discarded, resulting in a loss of precision, to the extent required to preserve efficiency.
For example, consider the following, assuming a 16-bit microprocessor target:
where Q[Original] is an 8-bit, unsigned data type. For this data type,
The minimum possible value is negative, so the temporary variable must be a signed integer data type. The original variable has a slope of 1, but the bias is expressed with greater precision with two
digits after the binary point. To get full precision, the fixed exponent of the temporary variable has to be -2 or less. The Simulink software selects the least possible precision, which is generally
the most efficient, unless overflow issues arise. For a scaling of 2^-2, selecting signed 16-bit or signed 32-bit avoids overflow. For efficiency, the Simulink software selects the smaller choice of
16 bits. If the original variable is an input, then the equations to convert to the temporary variable are
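The three goals above amount to a small search: compute the stored-integer range implied by the real-world range at the chosen exponent, then pick the smallest signed word that holds it. The sketch below is illustrative, and the bias value -1.75 in the usage example is an invented number standing in for "a bias with two digits after the binary point", not a value from this document.

```python
import math

def choose_temp_type(min_v, max_v, exponent, widths=(8, 16, 32)):
    """Return the smallest signed word width (in bits) that represents
    the real-world range [min_v, max_v] at a scaling of 2**exponent
    without overflow; precision loss is allowed, overflow is not."""
    scale = 2.0 ** exponent
    q_min = math.floor(min_v / scale)
    q_max = math.ceil(max_v / scale)
    for w in widths:
        if -(1 << (w - 1)) <= q_min and q_max <= (1 << (w - 1)) - 1:
            return w
    raise OverflowError("range does not fit any efficient width")
```

An 8-bit unsigned original (stored integers 0..255, slope 1) combined with a hypothetical bias of -1.75 spans [-1.75, 253.25]; at exponent -2 the stored range is [-7, 1013], so 8 bits would overflow and 16 bits are selected.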
Multiplication with Zero Biases and Mismatched Fractional Slopes. When the biases are zero and the fractional slopes are mismatched, the implementation reduces to
The quantity
is calculated offline using round-to-nearest and saturation. F[Net] is stored using a fixed-point data type of the form
where E[Net] and Q[Net] are selected automatically to best represent F[Net].
Online Conversions and Operations
Multiplication with Zero Biases and Matching Fractional Slopes. When the biases are zero and the fractional slopes match, the implementation reduces to
No offline conversions are performed.
Online Conversions and Operations
1. The integer values Q[b] and Q[c] are multiplied:
To maintain the full precision of the product, the binary point of Q[RawProduct] is given by the sum of the binary points of Q[b] and Q[c].
2. The previous product is converted to the output data type:
This conversion includes any necessary bit shifting, rounding, or overflow handling. Signal Conversions discusses conversions.
The Multiplication Process
Suppose you want to multiply three numbers. Each of these numbers is represented by a 5-bit word, and each has a different binary-point-only scaling. Additionally, the output is restricted to a
10-bit word with binary-point-only scaling of 2^-4. The multiplication is shown in the following model for the input values 5.75, 2.375, and 1.8125.
Applying the rules from the previous section, the multiplication follows these steps:
1. The first two numbers (5.75 and 2.375) are multiplied:
Note that the binary point of the product is given by the sum of the binary points of the multiplied numbers.
2. The result of step 1 is converted to the output data type:
Signal Conversions discusses conversions. Note that a loss in precision of one bit occurs, with the resulting value of Q[Temp] determined by the rounding mode. For this example, round-to-floor is
used. Furthermore, overflow did not occur but is possible for this operation.
3. The result of step 2 and the third number (1.8125) are multiplied:
Note that the binary point of the product is given by the sum of the binary points of the multiplied numbers.
4. The product is converted to the output data type:
Signal Conversions discusses conversions. Note that a loss in precision of 4 bits occurred, with the resulting value of Q[Temp] determined by the rounding mode. For this example, round-to-floor
is used. Furthermore, overflow did not occur but is possible for this operation.
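The four steps above can be traced with integers. The input scalings 2^-2, 2^-3, and 2^-4 are assumptions consistent with the one-bit and four-bit precision losses noted in steps 2 and 4, and round-to-floor is again modeled with an arithmetic right shift.

```python
def to_q(value, exp):
    """Store a real-world value as an integer with scaling 2**exp."""
    return round(value / 2**exp)

# Step 1: multiply 5.75 @ 2^-2 by 2.375 @ 2^-3 -> raw product @ 2^-5.
q = to_q(5.75, -2) * to_q(2.375, -3)   # 23 * 19 = 437
# Step 2: convert to the output scaling 2^-4 (one bit discarded).
q >>= 1                                # 218, i.e. 13.625
# Step 3: multiply by 1.8125 @ 2^-4 -> raw product @ 2^-8.
q *= to_q(1.8125, -4)                  # 218 * 29 = 6322
# Step 4: convert to 2^-4 (four bits discarded).
q >>= 4                                # 395
result = q * 2**-4                     # 24.6875
```

The ideal product 5.75 × 2.375 × 1.8125 ≈ 24.752; the shortfall is the precision discarded in steps 2 and 4.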
Blocks that perform multiplication include the Product, Discrete FIR Filter, and Gain blocks.
This section discusses the division of quantities with zero bias.
Fixed-Point Simulink Blocks Division Process
Consider the division of two numbers. Ideally, the real-world values obey the equation
where V[b] and V[c] are the input values and V[a] is the output value. To see how the division is actually implemented, the three ideal values should be replaced by the general [Slope Bias] encoding
scheme described in Scaling:
For the case where the slope adjustment factors are one and the biases are zero for all signals, the solution of the resulting equation for the output stored integer, Q[a], is given by the following
This equation involves an integer division and some bit shifts. If E[a] > E[b]–E[c], then any bit shifts are to the right and the implementation is simple. However, if E[a] < E[b]–E[c], then the bit
shifts are to the left and the implementation can be more complicated. The essential issue is that the output has more precision than the integer division provides. To get full precision, a
fractional division is needed. The C programming language provides access to integer division only for fixed-point data types. Depending on the size of the numerator, you can obtain some of the
fractional bits by performing a shift prior to the integer division. In the worst case, it might be necessary to resort to repeated subtractions in software.
In general, division of values is an operation that should be avoided in fixed-point embedded systems. Division where the output has more precision than the integer division (i.e., E[a] < E[b] - E[c]) should be used with even greater reluctance.
The Division Process
Suppose you want to divide two numbers. Each of these numbers is represented by an 8-bit word, and each has a binary-point-only scaling of 2^-4. Additionally, the output is restricted to an 8-bit
word with binary-point-only scaling of 2^-4.
The division of 9.1875 by 1.5000 is shown in the following model.
For this example,
Assuming a larger data type is available, this could be implemented as
where the numerator uses the larger data type. If a larger data type were not available, integer division combined with four repeated subtractions would be used. Both approaches produce the same result, with the former being more efficient.
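Both approaches can be sketched in Python (an illustrative sketch of the arithmetic, not the Simulink blocks): the wide-numerator version pre-shifts before the integer division, and the fallback recovers the fractional bits one at a time by shift-and-subtract (restoring division).

```python
def fixed_div(qb, qc, extra_bits):
    """Qa = (Qb * 2**extra_bits) // Qc, using a wider intermediate value."""
    return (qb << extra_bits) // qc

qb = int(9.1875 * 16)   # 147, binary-point-only scaling 2**-4
qc = int(1.5000 * 16)   # 24
qa = fixed_div(qb, qc, 4)      # output is also scaled by 2**-4
print(qa, qa / 16)             # 98 6.125

# Without a wider type: plain integer division first, then recover the
# 4 fractional bits by repeated shift-and-subtract on the remainder.
q_int, rem = divmod(qb, qc)    # 6, remainder 3
frac = 0
for _ in range(4):
    rem <<= 1
    frac <<= 1
    if rem >= qc:
        rem -= qc
        frac |= 1
assert (q_int << 4) | frac == qa   # both approaches agree on 98
```

Here 9.1875 / 1.5 = 6.125 is exactly representable with 4 fractional bits, so no rounding error appears in this particular case.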
Nearly all microprocessors and digital signal processors support well-defined bit-shift (or simply shift) operations for integers. For example, consider the 8-bit unsigned integer 00110101. The
results of a 2-bit shift to the left and a 2-bit shift to the right are shown in the following table.
┃ Shift Operation │ Binary Value │ Decimal Value ┃
┃ No shift (original number) │ 00110101 │ 53 ┃
┃ Shift left by 2 bits │ 11010100 │ 212 ┃
┃ Shift right by 2 bits │ 00001101 │ 13 ┃
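The table above can be reproduced with ordinary integer shifts (an illustrative Python sketch; the mask keeps the left-shift result inside the 8-bit word):

```python
x = 0b00110101            # 53, an 8-bit unsigned integer
left = (x << 2) & 0xFF    # shift left by 2 bits, masked to 8 bits
right = x >> 2            # shift right by 2 bits
print(left, right)        # 212 13
print(format(left, '08b'), format(right, '08b'))   # 11010100 00001101
```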
You can perform a shift using the Simulink Shift Arithmetic block. Use this block to perform a bit shift, a binary point shift, or both.
Shifting Bits to the Right
The special case of shifting bits to the right requires consideration of the treatment of the leftmost bit, which can contain sign information. A shift to the right can be classified either as a
logical shift right or an arithmetic shift right. For a logical shift right, a 0 is incorporated into the most significant bit for each bit shift. For an arithmetic shift right, the most significant
bit is recycled for each bit shift.
The Shift Arithmetic block performs an arithmetic shift right and, therefore, recycles the most significant bit for each bit shift right. For example, given the fixed-point number 11001.011 (-6.625),
a bit shift two places to the right with the binary point unmoved yields the number 11110.010 (-1.75), as shown in the model below:
To perform a logical shift right on a signed number using the Shift Arithmetic block, use the Data Type Conversion block to cast the number as an unsigned number of equivalent length and scaling, as shown in the following model. The model shows that the fixed-point signed number 11001.011 (-6.625) becomes 00110.010 (6.25).
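Both kinds of right shift can be sketched in Python (an illustrative sketch: Python's `>>` on a negative integer floors, which matches an arithmetic shift right; reinterpreting the bits as unsigned models the logical shift):

```python
FRAC = 3                      # binary point: value = stored * 2**-3

q = -53                       # stored bits 11001.011
assert q / 2**FRAC == -6.625

# Arithmetic shift right by 2: the sign bit is recycled.
print((q >> 2) / 2**FRAC)     # -1.75  (11110.010)

# Logical shift right by 2: cast to unsigned first, then shift.
u = q & 0xFF                  # 0b11001011 = 203
print((u >> 2) / 2**FRAC)     # 6.25   (00110.010)
```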
Which statement is the contrapositive of the given statement?If a person is a banjo player, then the person is a musician. 1.) If a person is not a musician, then the person is not a banjo player.
2.) If a person is not a banjo player, then the person is a musician. 3.) If a person is not a banjo player, then the person is not a musician. 4.) If a person is a musician, then the person is a
banjo player.
If a person is not a musician, then the person is not a banjo player. (banjo player -> musician; the contrapositive is: not musician -> not banjo player.) I just googled this so I don't know how correct this answer is. I'm fairly confident though.
okay. It's ok. thanks though. :)
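The equivalence the answer relies on can also be checked mechanically. A small Python sketch (illustrative, not from the thread) verifies that an implication and its contrapositive agree on every truth assignment:

```python
from itertools import product

def implies(p, q):
    """Material implication: p -> q is false only when p is true and q is false."""
    return (not p) or q

# "banjo player -> musician" vs. "not musician -> not banjo player":
# the two agree for every combination of truth values.
for p, q in product([False, True], repeat=2):
    assert implies(p, q) == implies(not q, not p)

print("statement and contrapositive are logically equivalent")
```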
MathGroup Archive: August 2004 [00164]
Re: populate a list with random numbers from normal distribution?
• To: mathgroup at smc.vnet.net
• Subject: [mg49974] Re: populate a list with random numbers from normal distribution?
• From: Bill Rowe <readnewsciv at earthlink.net>
• Date: Sun, 8 Aug 2004 05:38:03 -0400 (EDT)
• Sender: owner-wri-mathgroup at wolfram.com
On 8/7/04 at 3:52 AM, sean_incali01 at yahoo.com (sean_incali) wrote:
>Only reason I wanted to use the integers is because of the issues
>raised previously, and because i didn't understand them fully.
>I wanted to pick the integers from a distribution in a range and
>then scale the integers to make real numbers.
While there are some technical issues with the subtract with borrow algorithm used by Mathematica for reals, these issues don't have any impact on many applications of pseudo-random numbers. Quite possibly by using Mathematica to generate psuedo-random integers then converting these to reals, you are going to a lot of trouble without having any significant effect on your application. Obviously, for me or someone else to determine whether this is the case or not, details of your application are needed which you haven't yet supplied.
>You said the discrete uniform distribution will pick intergers in
>the range {10000,99999}, or any other distribution.
>Will it do normal or poissonian distribution in that range?
I take "it" here to mean Random[Integer,{10000, 99999}]. If this is correct, then the answer to your question is no. Integers uniformly selected in a given range cannot have either a Poisson distribution or a normal distribution. Each of these distributions has different statistical properties, different relationships between say the mean, standard deviation etc.
>if so how do I implement that?
If you need normal deviates then the simplest way to get them would be
Random[NormalDistribution[mean, stdDev]]
or if it is Poisson deviates you need, then try
Random[PoissonDistribution[mu]]
Also, do note the Poisson distribution is a discrete distribution, meaning the output of Random[PoissonDistribution[mu]] is an integer. So, starting with uniformly distributed integers and converting these to reals would be counterproductive if what you want are Poisson distributed integers.
If for some reason the algorithms used by Mathematica to generate pseudo-random values are not adequate for your application, then it isn't difficult to implement your own algorithm in Mathematica. Knuth in Seminumerical Algorithms Vol 2 discusses a variety of algorithms for generating pseudo-random values from any desired distribution. But do note there are a lot of very bad algorithms that have been used in the past. Writing a good algorithm for generating pseudo-random values and validating it is definitely a non-trivial exercise.
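The same recipes translate readily outside Mathematica. As an illustrative sketch in Python's standard library (not part of the original post): `random.gauss` plays the role of `Random[NormalDistribution[...]]`, and a few lines implement the classic Poisson algorithm from Knuth's Seminumerical Algorithms (fine for small mu; it slows down and underflows for large mu).

```python
import math
import random

random.seed(42)

# Normal deviates, analogous to Random[NormalDistribution[mean, stdDev]]
samples = [random.gauss(0.0, 1.0) for _ in range(100_000)]
print(sum(samples) / len(samples))   # close to 0

# Poisson deviates via Knuth's multiplicative algorithm: multiply uniforms
# until the product drops below exp(-mu); the count of successes is the draw.
def poisson_knuth(mu):
    limit = math.exp(-mu)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

draws = [poisson_knuth(3.0) for _ in range(100_000)]
print(sum(draws) / len(draws))       # close to mu = 3
```

Note the Poisson draws are nonnegative integers, consistent with the point above that the Poisson distribution is discrete.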
How to get the equation from a graph

April 7th 2013, 11:06 AM #1
How to get the equation from a graph

April 7th 2013, 02:20 PM #2
MHF Contributor
Re: How to get the equation from a graph
There exist an infinite number of functions that will give those (finite number of) points. What you have, A cos((pi/12)t + 240) + D, won't work. That would have each loop going up to A + D and down to D - A, not varying as in your graph.

April 7th 2013, 04:05 PM #3
Re: How to get the equation from a graph
A cos((pi/12)t + 240) + D is not correct? I know that it wouldn't work because I need to find A and D. But I think cos((pi/12)t + 240) is correct at least. In A cos(Bt + C) + D, B and C are constant in this graphic, I think. What kind of trigonometric function describes this graph?
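For a sinusoid of the form A cos(Bt + C) + D, the amplitude A and vertical shift D can be read straight off the graph's maximum and minimum. A small Python sketch (the y_max/y_min readings here are hypothetical examples, not values from the thread):

```python
import math

# Hypothetical graph readings -- substitute the actual max/min from the plot.
y_max, y_min = 22.0, 10.0

A = (y_max - y_min) / 2       # amplitude: half the peak-to-peak height
D = (y_max + y_min) / 2       # vertical shift: the midline
print(A, D)                   # 6.0 16.0

# B comes from the period; a period of 24 units gives B = pi/12,
# matching the coefficient in the thread.
period = 24.0
B = 2 * math.pi / period
assert math.isclose(B, math.pi / 12)
```

C is then found from the horizontal position of a maximum (a maximum of cos occurs where Bt + C is a multiple of 2*pi).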
188 helpers are online right now
75% of questions are answered within 5 minutes.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/users/boilacture/asked","timestamp":"2014-04-17T04:09:59Z","content_type":null,"content_length":"106790","record_id":"<urn:uuid:36af9001-8965-4bf4-91a5-526f11f61a9c>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00524-ip-10-147-4-33.ec2.internal.warc.gz"} |
Maximal Ideals of Q[x]
Find all maximal ideals of $Q[x]$ which contain $\langle x^3+2x\rangle$. Thanks!!
both ideals $I=<x>, \ J=<x^2+2>$ are maximal in $R=\mathbb{Q}[x]$ and $IJ=<x^3+2x>.$ if $K$ is a maximal ideal of $R$ and $IJ \subseteq K,$ then we have either $I \subseteq K$ or $J \subseteq K,$ because every maximal ideal is prime (more precisely, since $R$ here is a PID, every nonzero prime ideal is maximal.) now since $I,J$ are maximal, we will either have $K=I$ or $K=J. \ \ \ \Box$
SP: Software for Semidefinite Programming. User’s Guide
Results 1 - 10 of 30
- SIAM REVIEW , 1996
, 1997
Cited by 149 (13 self)
. We consider least-squares problems where the coefficient matrices A; b are unknown-butbounded. We minimize the worst-case residual error using (convex) second-order cone programming, yielding an
algorithm with complexity similar to one singular value decomposition of A. The method can be interpreted as a Tikhonov regularization procedure, with the advantage that it provides an exact bound on
the robustness of solution, and a rigorous way to compute the regularization parameter. When the perturbation has a known (e.g., Toeplitz) structure, the same problem can be solved in polynomial-time
using semidefinite programming (SDP). We also consider the case when A; b are rational functions of an unknown-but-bounded perturbation vector. We show how to minimize (via SDP) upper bounds on the
optimal worst-case residual. We provide numerical examples, including one from robust identification and one from robust interpolation. Key Words. Least-squares, uncertainty, robustness, second-order
, 1997
"... this paper is organized as follows. First, we discuss the formulation of the semidefinite programming problem used by CSDP. We then describe the predictor corrector algorithm used by CSDP to
solve the SDP. We discuss the storage requirements of the algorithm as well as its computational complexity. ..."
Cited by 144 (1 self)
Add to MetaCart
this paper is organized as follows. First, we discuss the formulation of the semidefinite programming problem used by CSDP. We then describe the predictor corrector algorithm used by CSDP to solve
the SDP. We discuss the storage requirements of the algorithm as well as its computational complexity. Finally, we present results from the solution of a number of test problems. 2 The SDP Problem We
consider semidefinite programming problems of the form max tr (CX)
- In Recent advances in LMI methods for control , 1995
"... . A variety of analysis and design problems in control, communication and information theory, statistics, combinatorial optimization, computational geometry, circuit design, and other fields can
be expressed as semidefinite programming problems (SDPs) or determinant maximization problems (max-det pr ..."
Cited by 46 (19 self)
Add to MetaCart
. A variety of analysis and design problems in control, communication and information theory, statistics, combinatorial optimization, computational geometry, circuit design, and other fields can be
expressed as semidefinite programming problems (SDPs) or determinant maximization problems (max-det problems). These problems often have matrix structure, i.e., some of the optimization variables are
matrices. This matrix structure has two important practical ramifications: first, it makes the job of translating the problem into a standard SDP or maxdet format tedious, and, second, it opens the
possibility of exploiting the structure to speed up the computation. In this paper we describe the design and implementation of sdpsol, a parser/solver for SDPs and max-det problems. sdpsol allows
problems with matrix structure to be described in a simple, natural, and convenient way. Although the current implementation of sdpsol does not exploit matrix structure in the solution algorithm, the
- IEEE Transactions on Automatic Control, 1997
Cited by 30 (0 self)
Abstract—This paper describes a linear matrix inequality (LMI)-based algorithm for the static and reduced-order output-feedback synthesis problems of nth-order linear time-invariant (LTI) systems
with nu (respectively, ny) independent inputs (respectively, outputs). The algorithm is based on a “cone complementarity ” formulation of the problem and is guaranteed to produce a stabilizing
controller of order m n 0 max(nu;ny), matching a generic stabilizability result of Davison and Chatterjee [7]. Extensive numerical experiments indicate that the algorithm finds a controller with
order less than or equal to that predicted by Kimura’s generic stabilizability result (m n0nu0ny+1). A similar algorithm can be applied to a variety of control problems, including robust control
synthesis. Index Terms — Complementarity problem, linear matrix inequality, reduced-order stabilization, static output feedback. I.
- AIAA Journal of Guidance, Control, and Dynamics , 1999
Cited by 30 (14 self)
In this paper we address the problem of low-authority controller (LAC) design. The premise is that the actuators have limited authority, and hence cannot significantly shift the eigenvalues of the
system. As a result, the closed-loop eigenvalues can be well approximated analytically using perturbation theory. These analytical approximations may suffice to predict the behavior of the
closed-loop system in practical cases, and will provide at least a very strong rationale for the first step in the design iteration loop. We will show that LAC design can be cast as convex
optimization problems that can be solved efficiently in practice using interior-point methods. Also, we will show that by optimizing the ℓ1 norm of the feedback gains, we can arrive at sparse
designs, i.e., designs in which only a small number of the control gains are nonzero. Thus, in effect, we can also solve actuator/sensor placement or controller architecture design problems.
Keywords: Low-authority control, actuator/sensor placement, linear operator perturbation theory, convex optimization, second-order cone programming, semi-definite programming, linear matrix
inequality. 1
- in Proc. Int. Conf. on Computer Aided Design , 1997
Cited by 28 (11 self)
Conventional methods for optimal sizing of wires and transistors use linear RC circuit models and the Elmore delay as a measure of signal delay. If the RC circuit has a tree topology the sizing
problem reduces to a convex optimization problem which can be solved using geometric programming. The tree topology restriction precludes the use of these methods in several sizing problems of
significant importance to high-performance deep submicron design including, for example, circuits with loops of resistors, e.g., clock distribution meshes, and circuits with coupling capacitors,
e.g., buses with crosstalk between the lines. The paper proposes a new optimization method which can be used to address these problems. The method uses the dominant time constant as a measure of
signal propagation delay in an RC circuit, instead of Elmore delay. Using this measure, sizing of any RC circuit can be cast as a convex optimization problem which can be solved using the recently
developed efficient interi...
- In Proc. 38th IEEE Conf. Decision Control , 1999
Cited by 27 (0 self)
Abstract — In this paper we consider dynamical systems which are driven by “events ” that occur asynchronously. It is assumed that the event rates are fixed, or at least they can be bounded on any
time period of length T. Such systems are becoming increasingly important in control due to the very rapid advances in digital systems, communication systems, and data networks. Examples of such
systems include, control systems in which signals are transmitted over an asynchronous network; distributed control systems in which each subsystem has its own objective, sensors, resources and level
of decision making; parallelized numerical algorithms in which the algorithm is separated into several local algorithms operating concurrently at different processors; and queuing networks. We
present a Lyapunov-based theory for asynchronous dynamical systems and show how Lyapunov functions and controllers can be constructed for such systems by solving linear matrix inequality (LMI) and
bilinear matrix inequality (BMI) problems. Examples are also presented to demonstrate the effectiveness of the approach.
, 1999
Cited by 19 (3 self)
In this paper we present a path-following (homotopy) method for (locally) solving bilinear matrix inequality (BMI) prob- lems in control. The method is to linearize the BMI using a first order
perturbation approximation, and then iteratively compute a perturbation that "slightly" improves the controller performance by solving a semidefinite program (SDP). This process is repeated un- til
the desired performance is achieved, or the performance cannot be improved any further. While this is an approximate method for solving BMIs, we present several examples that illustrate the
effectiveness of the approach.
, 1996
Cited by 16 (8 self)
We propose to use the dominant time constant of a resistor-capacitor (RC) circuit as a measure of the signal propagation delay through the circuit. We show that the dominant time constant is a
quasiconvex function of the conductances and capacitances, and use this property to cast several interesting design problems as convex optimization problems, specifically, semidefinite programs
(SDPs). For example, assuming that the conductances and capacitances are affine functions of the design parameters (which is a common model in transistor or interconnect wire sizing), one can
minimize the power consumption or the area subject to an upper bound on the dominant time constant, or compute the optimal tradeoff surface between power, dominant time constant, and area. We will
also note that, to a certain extent, convex optimization can be used to design the topology of the interconnect wires. This approach has two advantages over methods based on Elmore delay
optimization. First, it handles a far wider class of circuits, e.g., those with non-grounded capacitors. Second, it always results in convex optimization problems for which very efficient
interiorpoint methods have recently been developed. We illustrate the method, and extensions, with several examples involving optimal wire and transistor sizing. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=1634685","timestamp":"2014-04-20T02:17:44Z","content_type":null,"content_length":"38706","record_id":"<urn:uuid:2467ed29-c62b-4c45-a290-2d8ed2915492>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00566-ip-10-147-4-33.ec2.internal.warc.gz"} |