content — stringlengths 86 to 994k
meta — stringlengths 288 to 619
Lennart Axel Edvard Carleson

Born: 18 March 1928 in Stockholm, Sweden

Lennart Carleson completed his secondary schooling in Karlstad, Sweden, graduating in 1945. He then entered Uppsala University, obtaining his first degree (Fil. kand.) in 1947, and his Master's degree (Fil. lic.) in 1949. Carleson's research thesis was supervised by Arne Beurling and he was awarded his doctorate in 1950 for On a Class of Meromorphic Functions and Its Exceptional Sets. He spoke of his supervisor in 1984 [1]:- It was my great fortune to have been introduced to mathematics by Arne Beurling; the tradition he, T Carleman and Marcel Riesz initiated is very obviously responsible for the good standard of mathematics in our country. Personally I am very happy for this opportunity to express to Arne Beurling my gratitude for having guided me into a fruitful area of mathematics and for having given an example that only hard problems count.

Following the award of his doctorate he was appointed as a lecturer in mathematics at Uppsala University. Carleson spent session 1950-51 in the United States, undertaking post-doctoral work at Harvard University. There he was greatly influenced by A Zygmund and R Salem, who were both at Harvard that year, and, as we explain below, it was Zygmund's influence which set him on the path to proving his most famous result. Carleson returned to Sweden, taking up his lectureship at Uppsala University at the beginning of session 1951-52. In 1954 he was appointed to a professorship at the University of Stockholm but he returned to Uppsala in the following year, holding a chair of mathematics there until 1993. During this time he made a number of research visits to the United States, being a visiting research scientist at MIT during the autumn of 1957, spending session 1961-62 at the Institute for Advanced Study, Princeton, being a guest professor at Stanford University in session 1965-66, and holding a similar position at MIT during 1974-75.

Among the many important roles which Carleson has occupied, we should mention three in particular. First, his very significant role as Director of the Mittag-Leffler Institute, Stockholm, from 1968 to 1984, during which time he built the Institute from a small base into one of the leading mathematical research institutes in the world. His other highly significant roles were those of editor of Acta Mathematica from 1956 to 1979, and President of the International Mathematical Union from 1978 to 1982. In this last-mentioned position, he worked tirelessly to have the People's Republic of China represented on the Union and was the main driving force behind the creation of the Nevanlinna Prize, which honours young theoretical computer scientists for contributions to the mathematical aspects of computer science.

Carleson's mathematical contributions have been far too many, and much too deep, to be described in any detail in a biography of this type. However, we will try to give some idea of the importance of his contributions. We begin with a quote from Marcus du Sautoy, who writes:- The mark of a great mathematician is someone who not only cracks a big open problem that has defeated previous generations of mathematicians but who then goes on to create tools for future generations.
During his career as a mathematician Carleson has been influential in several major areas of analysis and dynamical systems over nearly half a century of mathematical activity. Carleson's mathematics is characterized by a deep geometric insight combined with an amazing control of the branching complexities of the proofs. His contributions have provided future generations with tools to carry out a systematic study of analysis and dynamical systems.

A major problem solved by Carleson in 1962 was the famous 'corona problem', in the paper Interpolations by bounded analytic functions and the corona problem. As so often in his work, not only did he solve the problem but in doing so he introduced what are today called 'Carleson measures', which went on to become a fundamental tool in complex analysis and harmonic analysis. In 1967 Hörmander introduced some ideas to simplify Carleson's proof, and Carleson lectured on The corona theorem to the Fifteenth Scandinavian Congress in Oslo in 1968. The conference Proceedings contains a complete proof by Carleson:- In the paper ... a complete proof is given incorporating Hörmander's ideas. Moreover, the presentation is quite clear, so that the proof, while remaining non-trivial, is now reasonably easy to follow.

In 1966 Carleson solved one of the outstanding problems of mathematics in his paper On convergence and growth of partial sums of Fourier series. Fourier, in 1807, had claimed that every function equals the sum of its Fourier series. Of course Fourier was thinking about 'well-behaved' functions, so his initial claim has to be modified somewhat. A major research area throughout the 19th century concerned the convergence of Fourier series, and a continuous function whose Fourier series diverges at a point was constructed by du Bois-Reymond. In 1913 Luzin conjectured that if a function f is square integrable then the Fourier series of f converges pointwise to f Lebesgue almost everywhere. Kolmogorov proved results in 1928 which seemed to suggest that Luzin's conjecture must be false, but Carleson amazed the world of mathematics when he proved Luzin's long-standing conjecture in 1966. He explained in [4] how he was led to prove the theorem:- ... the problem of course presents itself already when you are a student and I was thinking about the problem on and off, but the situation was more interesting than that. The great authority in those days was Zygmund and he was completely convinced that what one should produce was not a proof but a counter-example. When I was a young student in the United States, I met Zygmund and I had an idea how to produce some very complicated functions for a counter-example and Zygmund encouraged me very much to do so. I was thinking about it for about 15 years on and off, on how to make these counter-examples work, and the interesting thing that happened was that I realised why there should be a counter-example and how you should produce it. I thought I really understood what was the background and then to my amazement I could prove that this "correct" counter-example couldn't exist and I suddenly realised that what you should try to do was the opposite, you should try to prove what was not fashionable, namely to prove convergence. The most important aspect in solving a mathematical problem is the conviction of what is the true result. Then it took 2 or 3 years using the techniques that had been developed during the past 20 years or so.
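For reference, the result just described can be stated precisely (this formulation is the standard one and is added here for clarity; it is not part of the original biography). Carleson's theorem, i.e. Luzin's conjecture, says that the partial sums of the Fourier series of any square-integrable function converge almost everywhere:

$$ f \in L^2(-\pi,\pi) \quad\Longrightarrow\quad S_N f(x) = \sum_{|n|\le N} \hat{f}(n)\, e^{inx} \;\longrightarrow\; f(x) \quad \text{as } N \to \infty, \text{ for almost every } x. $$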
Carleson lectured on his spectacular result at the International Congress of Mathematicians at Moscow in 1966 when he gave the address Convergence and summability of Fourier series. He began his address with the words:- I do not intend to give in this lecture any survey of the very large field covered by the title. I rather want to present my personal interests which are concentrated on the almost everywhere behaviour of the partial sums. Also the subject of summability will only be touched upon.

In 1967 he published the book Selected problems on exceptional sets which Ahlfors describes as follows:- The author announces that he had originally prepared a survey of the theory of small sets in 1959. At that time several books covering parts of the subject were published, and he found that a survey was less desirable. In 1961 he collected those parts that seemed to contain new or less known aspects, methods of proof or results. Some later results were added, and the author states that the selection reflects his own personal tastes. Readers will agree that he has successfully eliminated the dull parts. A substantial portion of the results are original and once again bear witness to the author's extraordinary technical skill.

Carleson received the Wolf Prize in 1992 together with John G Thompson. The authors of [6] write:- The citation emphasizes not only Carleson's fundamental scientific contributions, the best known of which perhaps are the proof of Luzin's conjecture on the convergence of Fourier series, the solutions of the corona problem and the interpolation problem for bounded analytic functions, the solution of the extension problem for quasiconformal mappings in higher dimensions, and the proof of the existence of 'strange attractors' in the Hénon family of planar maps, but also his outstanding role as scientific leader and advisor.

In 2006 Carleson received his greatest honour when he received the Abel Prize:- ... for his profound and seminal contributions to harmonic analysis and the theory of smooth dynamical systems. The citation by the Abel Committee states:- Carleson is always far ahead of the crowd. He concentrates on only the most difficult and deep problems. Once these are solved, he lets others invade the kingdom he has discovered, and he moves on to even wilder and more remote domains of Science. ... Carleson's work has forever altered our view of analysis. Not only did he prove extremely hard theorems, but the methods he introduced to prove them have turned out to be as important as the theorems themselves. His unique style is characterized by geometric insight combined with amazing control of the branching complexities of the proofs.

On 23 May 2006 he received the Prize from Queen Sonja. He said in reply:- Carl Friedrich Gauss once described mathematics as the queen of science, and for a servant of this queen like me to stand here in these beautiful surroundings and receive the grand Abel Prize from a real queen is really an overwhelming event in my life.

Peter W Jones in [5] gives this summary:- Carleson's influence extends far beyond his research, a fact well known to the broad mathematical community. Besides his papers Carleson has published an influential book on potential theory 'Selected problems in the theory of exceptional sets' and helped make accessible the unpublished work of Arne Beurling (i.e., as co-editor with P Malliavin, J Neuberger, and J Wermer of 'The collected works of Arne Beurling', 2 Vols, 1989) ...
But Carleson's influence extends far beyond his publications. He has trained many PhD students, and many more mathematicians who came from around the world to learn from him. As director of the Mittag-Leffler Institute, he not only developed a world-class research centre, but moulded an entire generation of analysts. His research in analysis is a series of towering and fundamental discoveries. His friends know well his generosity, encouragement and selfless giving of himself.

Carleson has received a host of honours for his truly outstanding contributions. Many learned societies around the world have been eager to elect him to membership. These are the Royal Swedish Academy of Sciences, the American Academy of Arts and Sciences, the Russian Academy of Sciences, the Royal Society, London, the French Academy of Sciences, the Royal Danish Academy of Sciences and Letters, the Norwegian Academy of Science and Letters, the Royal Norwegian Society of Sciences and Letters, the Finnish Academy of Science and Letters, and the Hungarian Academy of Sciences. He has also won numerous prizes, some of which we have mentioned above. These are the Leroy P Steele Prize from the American Mathematical Society (1984), the Wolf Prize (1992), the Lomonosov Gold Medal from the Russian Academy of Sciences (2002), the Sylvester Medal from the Royal Society, London (2003), and the Abel Prize (2006). He has been awarded honorary doctorates by the University of Helsinki (1982), the University of Paris (1988), and the Royal Institute of Technology, Stockholm (1989).

Article by: J J O'Connor and E F Robertson

Honours awarded to Lennart Carleson: Speaker at International Congress 1966; BMC Plenary speaker 1971; LMS Honorary Member 1981; AMS Steele Prize 1984; Wolf Prize 1992; Sylvester Medal 2003; Abel Prize 2006.

JOC/EFR © August 2006, School of Mathematics and Statistics, University of St Andrews, Scotland.
{"url":"http://www-history.mcs.st-andrews.ac.uk/Biographies/Carleson.html","timestamp":"2014-04-19T17:01:21Z","content_type":null,"content_length":"27921","record_id":"<urn:uuid:cd4f75ff-ad3a-4721-9f40-f2ba8c1275bb>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00263-ip-10-147-4-33.ec2.internal.warc.gz"}
OpenStudy question: What is the function rule for the graph shown? A graph of the ordered pairs: (9, 0), (11, 2), (12, 3), and (14, 5).
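A quick check, added for clarity (the original page records no answer; this assumes the listed pairs are (x, y) values): each y-value is 9 less than its x-value, so a linear rule consistent with all four points is y = x - 9; for example, x = 11 gives y = 11 - 9 = 2.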
{"url":"http://openstudy.com/updates/4fab0e00e4b059b524f6d886","timestamp":"2014-04-16T13:44:02Z","content_type":null,"content_length":"72316","record_id":"<urn:uuid:3e4ff5a8-612f-4199-b8cc-7239cb4a2beb>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00363-ip-10-147-4-33.ec2.internal.warc.gz"}
Yelm Math Tutor Find a Yelm Math Tutor ...I also tutored a student in Algebra 2 who received A's on every test following my instruction. I enjoy working one on one with students, whether helping them with homework or preparing for an exam. I am willing to create practice tests for students to ensure their success. 19 Subjects: including calculus, reading, statistics, ACT Math ...I strive to assist students in understanding the 'big picture' as well as the specifics of the material at hand. Organic chemistry proves more challenging for most due to the fact that you are, in essence, learning a new language. However, with sufficient 'vocabulary', the individual topics ble... 12 Subjects: including geometry, ASVAB, algebra 1, algebra 2 I have tutored math for over 3 years. I am happy to help with many different math classes, from Elementary math to Calculus. I have helped my former classmates and my younger brother many times with Physics. 16 Subjects: including algebra 1, algebra 2, calculus, chemistry ...I have extensive knowledge of windows systems and how to make the OS more efficient. If you don't see it listed please ask as I have formally worked with most Windows operating systems. I am a highly adaptable self starter with exceptional SDLC, design integration, testing, technical writing, presentation/CBT, problem solving, IT project management, and administrative engineering 53 Subjects: including algebra 1, chemistry, ACT Math, geometry ...After that, I came over States and attended University of Washington. I can read and speak Korean as a native Korean. When I was a UW student, after I took differential equations, I tutored this subject to college students. 20 Subjects: including trigonometry, linear algebra, algebra 1, algebra 2
{"url":"http://www.purplemath.com/Yelm_Math_tutors.php","timestamp":"2014-04-16T16:44:17Z","content_type":null,"content_length":"23297","record_id":"<urn:uuid:4484b7b7-225e-4d1b-9d98-7cf04f857c42>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00548-ip-10-147-4-33.ec2.internal.warc.gz"}
What is Mathematics?

Date: 09/22/2000 at 04:40:38
From: Erum
Subject: Definition of maths

Please tell me the definition of maths. If there is no single definition, is there a group of definitions?

Date: 09/22/2000 at 16:15:03
From: Doctor Ian
Subject: Re: Definition of maths

Hi Erum,

Stripped to its barest essence, mathematics is the derivation of theorems from axioms. So what does that mean? It means that mathematics is a collection of extended, collaborative games of 'what if', played by mathematicians who make up sets of rules (axioms) and then explore the consequences (theorems) of following those rules. For example, you can start out with a few rules like: A point has only location. A line has direction and length. Two lines intersect at a point. and so on, and then you see where that takes you. That's what Euclid did, and ended up more or less inventing geometry. And that's what other mathematicians have done over the centuries, inventing arithmetic, and number theory, and calculus, and group theory, and so on.

It's a little like what you do when you invent a board game like chess. You specify that there are such-and-such pieces, and they can move in such-and-such ways, and then you let people explore which board positions are possible or impossible to achieve. The main difference is that in chess, you're trying to win, while in math, you're just trying to figure out what kinds of things can - and can't - happen.

So a 'chessamatician', instead of playing complete games, might just sit and think about questions like this: If I place a knight (the piece that looks like a horse, and moves in an L-shaped jump) on any position, can it reach all other positions? What is the minimum number of moves that would be required to get from any position to any other position?

But they would also think about questions like this: What would happen if I changed the shape of the chessboard? What would happen if I allowed some pieces ('ghosts') to move through other pieces as if they weren't there? What would happen if I made the board three dimensional, or let pieces disappear for specified periods, or made them appear and disappear at regular intervals (for example, if a rook becomes invisible for three moves, then visible for three, then invisible again, and so on)? What would happen if I allowed more than two players, or let players take turns in parallel instead of in sequence?

In other words, mathematicians are interested not only in what happens when you adopt a particular set of rules, but also in what happens when you change the rules. For example, mathematicians in Germany and Russia started with Euclid's geometry, but asked: "What if parallel lines _could_ intersect each other? How would that change things?" And they ended up inventing an entirely new branch of geometry, which turned out to be just what Einstein needed for his theory of general relativity.

I hope this helps. Write back if you'd like to discuss this some more, or if you have any other questions about math.

- Doctor Ian, The Math Forum
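The knight questions above are just the sort of thing a computer can check directly. The short C program below is an illustrative addition (it is not part of the original exchange): it runs a breadth-first search over the eight L-shaped knight moves on an ordinary 8x8 board, starting from a corner square, and reports whether every square is reachable and how many moves the farthest square needs.

#include <stdio.h>

#define N 8

int main(void)
{
    /* The eight L-shaped knight moves. */
    const int dr[8] = { 1, 1, -1, -1, 2, 2, -2, -2 };
    const int dc[8] = { 2, -2, 2, -2, 1, -1, 1, -1 };

    int dist[N][N];
    int qr[N * N], qc[N * N];      /* simple array-based queue for the BFS */
    int head = 0, tail = 0, r, c, k, max = 0;

    for (r = 0; r < N; r++)
        for (c = 0; c < N; c++)
            dist[r][c] = -1;       /* -1 marks "not yet reached" */

    /* Start the knight in a corner, square a1 = (0,0). */
    dist[0][0] = 0;
    qr[tail] = 0; qc[tail] = 0; tail++;

    while (head < tail) {
        r = qr[head]; c = qc[head]; head++;
        for (k = 0; k < 8; k++) {
            int nr = r + dr[k], nc = c + dc[k];
            if (nr >= 0 && nr < N && nc >= 0 && nc < N && dist[nr][nc] < 0) {
                dist[nr][nc] = dist[r][c] + 1;   /* one more move than its parent */
                qr[tail] = nr; qc[tail] = nc; tail++;
            }
        }
    }

    /* Every square reached?  And what is the longest shortest path from the corner? */
    for (r = 0; r < N; r++)
        for (c = 0; c < N; c++)
            if (dist[r][c] > max) max = dist[r][c];

    printf("squares reached: %d of %d\n", tail, N * N);
    printf("most moves needed from the corner: %d\n", max);
    return 0;
}

When run, it confirms that the knight can reach every one of the 64 squares from the corner; the largest number of moves any square requires is printed by the program.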
{"url":"http://mathforum.org/library/drmath/view/52350.html","timestamp":"2014-04-18T10:49:50Z","content_type":null,"content_length":"8060","record_id":"<urn:uuid:f1aed7b5-b8a8-4c92-88d1-cde5fb0e9926>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00622-ip-10-147-4-33.ec2.internal.warc.gz"}
CMS Winter 2005 Meeting Plenary Speakers

We shall discuss the use of probabilistic methods in the generation of finite groups. We shall give a survey of some of the main results in the area and also discuss some more recent results. We shall also discuss some related results on derangements in finite primitive permutation groups.

Does every operator T on a Hilbert space H have a non-trivial closed invariant subspace? This is the famous and still open "invariant subspace problem" for operators on a Hilbert space. A natural generalization of the problem is: Let M be a von Neumann algebra on a Hilbert space H. Does every operator T in M have a non-trivial closed invariant subspace K affiliated with M? (K is affiliated with M iff the orthogonal projection on K belongs to M.) In the special case when M is a II_1-factor (i.e., an infinite-dimensional von Neumann factor with a bounded trace), it turns out that "almost all" operators in M have non-trivial closed invariant subspaces affiliated with M. More precisely, it holds for all operators in M for which L. G. Brown's spectral distribution measure for T is not concentrated in a single point of the complex plane. The result is obtained in collaboration with Hanne Schultz, and it relies in a crucial way on Voiculescu's free probability theory.

A beautiful result in combinatorial number theory is Szemeredi's Theorem: a set of integers with positive upper density contains arbitrarily long arithmetic progressions. In the 1970s, Furstenberg established the deep connections between combinatorics and ergodic theory, using ergodic theory to prove Szemeredi's Theorem. This development led to the field of Ergodic Ramsey Theory, and many new combinatorial and number theoretic statements were proven using ergodic theory. In the last year, this interaction took a new twist, with ergodic methods playing an important role in Green and Tao's proof that the prime numbers contain arbitrarily long arithmetic progressions. I will give an overview of this interplay, with a focus on recent developments in ergodic theory.

One of the most important prototypical multi-scale systems involves a comprehensive model for the coupled atmosphere-ocean system for both climate change and longer-term weather prediction. One of the striking recent observational discoveries is the profound impact of variations in the tropics on all of these issues. The talk has four parts: (1) an introduction to these issues; (2) novel behavior of waves in the simplest tropical climate models and their mathematical analysis compared/contrasted with relaxation limits and combustion waves; (3) systematic mathematical strategies for coarse graining stochastic lattice models for both material science and climate physics; (4) application of (3) to show the dramatic effect in simple tropical climate models of stochastic effects on both the climatology and fluctuations.

There are numerous predictions from statistical physics regarding random systems in the plane which were until recently beyond the reach of mathematical understanding. Some of the better-known examples include percolation and the Ising model. We will focus on percolation and describe our growing understanding of it through a sequence of insights (which are simple in hindsight) from 1960 through today.
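For reference, the combinatorial statement quoted at the start of the third abstract can be written precisely as follows (this formulation is an added aside, not part of the abstracts). Szemerédi's theorem: if a set A of positive integers has positive upper density,

$$ \overline{d}(A) = \limsup_{N\to\infty} \frac{|A \cap \{1,2,\dots,N\}|}{N} > 0, $$

then for every k ≥ 1 the set A contains an arithmetic progression a, a+d, a+2d, ..., a+(k-1)d with common difference d ≥ 1.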
{"url":"http://cms.math.ca/Events/winter05/abs/Plen.html","timestamp":"2014-04-18T03:01:16Z","content_type":null,"content_length":"14168","record_id":"<urn:uuid:c31a03e3-026e-4c8b-bd29-453121116f56>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00599-ip-10-147-4-33.ec2.internal.warc.gz"}
Donald Coxeter

Taking geometry into the realms of art, footballing, kaleidoscopes and relativistic quantum field theory

Donald Coxeter was one of the leading mathematicians, and perhaps the greatest geometer, of the last century. British by birth and training, he spent his working life in Canada, as a professor at the University of Toronto from 1936 until his retirement. His tremendous output, extending over 70 years, included 12 books -- at least four of them classics -- and some two hundred papers. At two and a half millennia old, geometry has claims to be the oldest and noblest of the branches of mathematics. It was, with arithmetic and algebra, part of the core curriculum of school mathematics and especially of university mathematics until the middle of the last century, when it began to lose ground. Coxeter's best-known book, Introduction to Geometry (1961), was a deliberate and partly successful attempt to halt this erosion.

The Ancient Greeks were aware of the five "Platonic solids", solid figures whose faces were identical regular polygons. Three have (equilateral) triangular faces -- the tetrahedron (three faces meeting at each vertex), the octahedron (with four) and the icosahedron (with five). In addition, there are the cube (three squares meeting at each vertex), and dodecahedron (three regular pentagons at each vertex). Coxeter's Regular Polytopes (1948) gives a systematic account of these, and their relatives. One of these relatives, involving both hexagons and pentagons, is now well-known for its use on soccer balls, and it is an illustration of the power and universal reach of mathematics that this figure has had a profound impact in two quite different fields. The first is chemistry, where it led Sir Harry Kroto and collaborators to the discovery of carbon 60. The second is architecture, where it inspired Buckminster Fuller to create his famous geodesic dome: the carbon 60 molecule is accordingly called Buckminsterfullerene or the Buckyball.

The ordinary geometry of the world, studied by the Ancient Greeks, is called Euclidean geometry. But in the 19th century mathematics received a profound and liberating shock when it was discovered that other geometries exist, in which, for instance, parallel lines meet, and the angles of a triangle do not add up to two right angles. Coxeter's Non-Euclidean Geometry (1942) was a classic treatment of this field. The Real Projective Plane (1949) was an equally important treatment of the mathematics of perspective (and now of computer graphics and virtual reality).

When Coxeter spoke at the International Congress of Mathematicians in 1954, he attended the special exhibition there of the graphic work of the Dutch artist M. C. Escher (1898-1972). The two men became friends, and there was a remarkable cross-fertilisation. Coxeter's mathematical knowledge of non-Euclidean geometry inspired Escher's series of prints Circle Limit I-IV. These are based on the standard model of non-Euclidean geometry, represented in the interior of a circle where as one approaches the circumference one "goes off to infinity". This represents one of the three basic types of geometry (negative curvature). The other two, ordinary plane geometry (with zero curvature) and spherical geometry (with positive curvature), are also represented in Escher's work, the first in his honeycomb patterns and the second in his woodcuts on spheres.
Escher wrote to Coxeter in 1958 thanking him for his booklet A Symposium on Symmetry and adding "the text of your article on Crystal Symmetry and its Generalisations is much too learned for a simple, self-made pattern man like me." Coxeter's comment on this collaboration was: "Escher did it by instinct, I did it by trigonometry." Much of Coxeter's time was devoted to group theory, or ways of measuring symmetry. This concerns the geometry of, for instance, kaleidoscopes and reflections in different planes, now known as Coxeter groups. His book Generators and Relations for Discrete Groups (with W. O. J. Moser, 1957) contains an introduction to his extensive work on geometrical figures. Another strand of his thinking became influential in theoretical physics, where his ideas played a role in such areas as relativistic quantum field theory, the marriage of quantum theory with Einstein's special theory of relativity. Coxeter numbers, Coxeter diagrams and the like play their part in the physics of elementary particles and their classification. Although he never used computers in his own mathematics, similar ideas are crucial to the information age, specifically, in the theory of error-correcting codes, and Coxeter's work abounds in such marvellous illustrations of the power of mathematics in areas apparently far removed from it. Harold Scott MacDonald Coxeter was born in Kensington in 1907. His father was in manufacturing, his mother was an artist. He was a mathematical prodigy, and also highly musical, becoming a fine pianist and composing an opera at the age of 12. His achievements at St George's School, Harpenden, led his father to consult Bertrand Russell about his son's future. The recommendation that he should leave school and use a private maths tutor led to a scholarship to Trinity College, Cambridge. There he attended lectures by Ludwig Wittgenstein, took a first, and in 1931 obtained a doctorate under H. F. Baker, who was then the leading figure in geometry in Britain. He became a research fellow in Cambridge, and spent two years at Princeton as a visiting fellow. The year 1936 was a crucial one in Coxeter's life. He accepted the offer of an assistant professorship at the University of Toronto, where he was to stay for the rest of his life. In the same year he married Hendrina Brouwer. Coxeter was widely honoured. He was elected a Fellow of the Royal Society of Canada in 1948 and a Fellow of the Royal Society in 1950. He was president of the Canadian Mathematical Society (1962-63), vice-president of the American Mathematical Society (1968), president of the International Congress of Mathematicians in Vancouver (1974), a foreign member of the American Academy of Arts and Sciences, and a Companion of the Order of Canada (1997). Coxeter, whose hobby was magic, attributed his longevity to vegetarianism and his love of mathematics. His happy family life and the esteem of students and colleagues worldwide doubtless contributed His wife died in 1999. He is survived by his son and daughter. [Donald Coxeter, mathematician, was born on February 9, 1907.He died on March 31, 2003, aged 96.] Copyright © The Times, 2003
{"url":"http://www-gap.dcs.st-and.ac.uk/history/Obits/Coxeter.html","timestamp":"2014-04-20T08:15:48Z","content_type":null,"content_length":"7373","record_id":"<urn:uuid:69a2bab2-13ad-4a68-9c57-58b9e7d17017>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00149-ip-10-147-4-33.ec2.internal.warc.gz"}
nag_2d_spline_fit_scat (e02ddc)

NAG Library Function Document: nag_2d_spline_fit_scat (e02ddc)

1 Purpose

nag_2d_spline_fit_scat (e02ddc) computes a bicubic spline approximation to a set of scattered data. The knots of the spline are located automatically, but a single argument must be specified to control the trade-off between closeness of fit and smoothness of fit.

2 Specification

#include <nag.h>
#include <nage02.h>

void nag_2d_spline_fit_scat (Nag_Start start, Integer m, const double x[], const double y[], const double f[], const double weights[], double s, Integer nxest, Integer nyest, double *fp, Integer *rank, double *warmstartinf, Nag_2dSpline *spline, NagError *fail)

3 Description

nag_2d_spline_fit_scat (e02ddc) determines a smooth bicubic spline approximation $s(x,y)$ to the set of data points $(x_r, y_r, f_r)$ with weights $w_r$, for $r=1,2,\dots,m$. The approximation domain is considered to be the rectangle $[x_{\min}, x_{\max}] \times [y_{\min}, y_{\max}]$, where $x_{\min}$ ($y_{\min}$) and $x_{\max}$ ($y_{\max}$) denote the lowest and highest data values of $x$ ($y$).

The spline is given in the B-spline representation

$s(x,y) = \sum_{i=1}^{n_x-4} \sum_{j=1}^{n_y-4} c_{ij} M_i(x) N_j(y),$   (1)

where $M_i(x)$ and $N_j(y)$ denote normalized cubic B-splines, the former defined on the knots $\lambda_i$ to $\lambda_{i+4}$ and the latter on the knots $\mu_j$ to $\mu_{j+4}$. For further details, see Hayes and Halliday (1974) for bicubic splines and de Boor (1972) for normalized B-splines.

The total numbers $n_x$ and $n_y$ of these knots and their values $\lambda_1, \dots, \lambda_{n_x}$ and $\mu_1, \dots, \mu_{n_y}$ are chosen automatically by the function. The knots $\lambda_5, \dots, \lambda_{n_x-4}$ and $\mu_5, \dots, \mu_{n_y-4}$ are the interior knots; they divide the approximation domain $[x_{\min}, x_{\max}] \times [y_{\min}, y_{\max}]$ into $(n_x-7) \times (n_y-7)$ subpanels $[\lambda_i, \lambda_{i+1}] \times [\mu_j, \mu_{j+1}]$, for $i=4,5,\dots,n_x-4$ and $j=4,5,\dots,n_y-4$. Then, much as in the curve case (see nag_1d_spline_fit (e02bec)), the coefficients $c_{ij}$ are determined as the solution of the following constrained minimization problem:

minimize $\eta$,   (2)

subject to the constraint

$\theta = \sum_{r=1}^{m} \varepsilon_r^2 \le S,$   (3)

where $\eta$ is a measure of the (lack of) smoothness of $s(x,y)$. Its value depends on the discontinuity jumps in $s(x,y)$ across the boundaries of the subpanels. It is zero only when there are no discontinuities and is positive otherwise, increasing with the size of the jumps (see Dierckx (1981b) for details). $\varepsilon_r$ denotes the weighted residual $w_r (f_r - s(x_r, y_r))$, and $S$ is a non-negative number to be specified.

By means of the argument s, 'the smoothing factor', you will then control the balance between smoothness and closeness of fit, as measured by the sum of squares of residuals in (3). If $S$ is too large, the spline will be too smooth and signal will be lost (underfit); if $S$ is too small, the spline will pick up too much noise (overfit). In the extreme cases the method would return an interpolating spline were $S$ set to zero, and the least squares bicubic polynomial were $S$ set very large. Experimenting with values between these two extremes should result in a good compromise. Note, however, that this function, unlike nag_1d_spline_fit (e02bec) and nag_2d_spline_fit_grid (e02dcc), does not allow $S$ to be set exactly to zero.

The method employed is outlined in Section 8.5 and fully described in Dierckx (1981a) and Dierckx (1981b). It involves an adaptive strategy for locating the knots of the bicubic spline (depending on the function underlying the data and on the value of $S$), and an iterative method for solving the constrained minimization problem once the knots have been determined.
Values and derivatives of the computed spline can subsequently be computed by calling nag_2d_spline_eval (e02dec) nag_2d_spline_eval_rect (e02dfc) nag_2d_spline_deriv_rect (e02dhc) as described in Section 8.6 4 References de Boor C (1972) On calculating with B-splines J. Approx. Theory 6 50–62 Dierckx P (1981a) An improved algorithm for curve fitting with spline functions Report TW54 Department of Computer Science, Katholieke Univerciteit Leuven Dierckx P (1981b) An algorithm for surface fitting with spline functions IMA J. Numer. Anal. 1 267–283 Hayes J G and Halliday J (1974) The least-squares fitting of cubic spline surfaces to general data sets J. Inst. Math. Appl. 14 89–103 Peters G and Wilkinson J H (1970) The least-squares problem and pseudo-inverses Comput. J. 13 309–316 Reinsch C H (1967) Smoothing by spline functions Numer. Math. 10 177–183 5 Arguments 1: start – Nag_StartInput On entry must be set to $start=Nag_Cold$ (cold start) The function will build up the knot set starting with no interior knots. No values need be assigned to $spline→nx$ and $spline→ny$ and memory will be internally allocated to $spline→lamda$, $spline→mu$ and $spline→c$. $start=Nag_Warm$ (warm start) The function will restart the knot-placing strategy using the knots found in a previous call of the function. In this case, all arguments except s must be unchanged from that previous call. This warm start can save much time in searching for a satisfactory value of $S$. Constraint: $start=Nag_Cold$ or $Nag_Warm$. 2: m – IntegerInput On entry , the number of data points. The number of data points with nonzero weight (see ) must be at least 16. 3: x[m] – const doubleInput 4: y[m] – const doubleInput 5: f[m] – const doubleInput On entry: $x[r-1]$, $y[r-1]$, $f[r-1]$ must be set to the coordinates of $x r , y r , f r$, the $r$th data point , for $r=1,2,…,m$. The order of the data points is immaterial. 6: weights[m] – const doubleInput On entry must be set to $w r$ , the th value in the set of weights, for . Zero weights are permitted and the corresponding points are ignored, except when determining $x min$ $x max$ $y min$ $y max$ Section 8.4 ). For advice on the choice of weights, see the e02 Chapter Introduction Constraint: the number of data points with nonzero weight must be at least 16. 7: s – doubleInput On entry : the smoothing factor, . For advice on the choice of , see Section 3 Section 8.2 Constraint: $s>0.0$. 8: nxest – IntegerInput 9: nyest – IntegerInput On entry : an upper bound for the number of knots $n x$ $n y$ required in the directions respectively. In most practical situations, $nxest = nyest = 5 + m$ is sufficient. See also Section 8.3 Constraint: $nxest≥8$ and $nyest≥8$. 10: fp – double *Output On exit : the weighted sum of squared residuals, , of the computed spline approximation. should equal within a relative tolerance of 0.001 unless $spline→nx = spline→ny = 8$ , when the spline has no interior knots and so is simply a bicubic polynomial. For knots to be inserted, must be set to a value below the value of produced in this case. 11: rank – Integer *Output On exit gives the rank of the system of equations used to compute the final spline (as determined by a suitable machine-dependent threshold). When $rank = spline→nx-4 × spline→ny-4$ , the solution is unique; otherwise the system is rank-deficient and the minimum-norm solution is computed. 
The latter case may be caused by too small a value of 12: warmstartinf – double *Output On exit: if the warm start option is used, its value must be left unchanged from the previous call. 13: spline – Nag_2dSpline * Pointer to structure of type Nag_2dSpline with the following members: nx – IntegerInput/Output On entry: if the warm start option is used, the value of $nx$ must be left unchanged from the previous call. On exit: the total number of knots, $n x$, of the computed spline with respect to the $x$ variable. lamda – double *Input/Output On entry : a pointer to which if , memory of size is internally allocated. If the warm start option is used, the values $lamda[0] , lamda[1] , … , lamda[nx-1]$ must be left unchanged from the previous call. On exit contains the complete set of knots $λ i$ associated with the variable, i.e., the interior knots $lamda[4] , lamda[5] , … , lamda[nx-5]$ as well as the additional knots $lamda[0] = lamda[1] = lamda[2] = lamda[3] = x min$ $lamda[nx-4] = lamda[nx-3] = lamda[nx-2] = lamda[nx-1] = x max$ needed for the B-spline representation (where $x min$ $x max$ are as described in Section 3 ny – IntegerInput/Output On entry: if the warm start option is used, the value of $ny$ must be left unchanged from the previous call. On exit: the total number of knots, $n y$, of the computed spline with respect to the $y$ variable. mu – double *Input/Output On entry : a pointer to which if , memory of size is internally allocated. If the warm start option is used, the values $mu[0] , mu[1] , … , mu[ny-1]$ must be left unchanged from the previous call. On exit contains the complete set of knots $μ i$ associated with the variable, i.e., the interior knots as well as the additional knots $mu[0] = mu[1] = mu[2] = mu[3] = y min$ $mu[ny-4] = mu[ny-3] = mu[ny-2] = mu[ny-1] = y max$ needed for the B-spline representation (where $y min$ $y max$ are as described in Section 3 c – double *Output On exit : a pointer to which, if , memory of size $nxest-4 × nyest-4$ is internally allocated. $c[ n y - 4 × i-1 + j - 1 ]$ is the coefficient $c ij$ defined in Section 3 Note that when the information contained in the pointers is no longer of use, or before a new call to nag_2d_spline_fit_scat (e02ddc) with the same , you should free this storage using the NAG macro . This storage will have been allocated only if this function returns with or, possibly, 14: fail – NagError *Input/Output The NAG error argument (see Section 3.6 in the Essential Introduction). 6 Error Indicators and Warnings If the function fails with an error exit of NE_NUM_KNOTS_2D_GT_SCAT, NE_NUM_COEFF_GT, NE_NO_ADDITIONAL_KNOTS or NE_SPLINE_COEFF_CONV, then a spline approximation is returned, but it fails to satisfy the fitting criterion (see (2) and (3)) – perhaps by only a small amount, however. On entry, all the values in the array must not be equal. On entry, all the values in the array must not be equal. Dynamic memory allocation failed. On entry, argument had an illegal value. at the first call of this function. must be set to at the first call. On entry, $nxest=value$. Constraint: $nxest≥8$. On entry, $nyest=value$. Constraint: $nyest≥8$. No more knots added; the additional knot would coincide with an old one. Possibly an inaccurate data point has too large a weight, or is too small. On entry, the number of data points with nonzero weights $=value$. Constraint: the number of nonzero weights $≥ 16$. No more knots can be added because the number of B-spline coefficients already exceeds . 
Either is probably too small: The number of knots required is greater than allowed by . Possibly is too small, especially if $nyest > 5 + m$ On entry, must not be less than or equal to 0.0: The iterative process has failed to converge. Possibly is too small: 7 Accuracy On successful exit, the approximation returned is such that its weighted sum of squared residuals is equal to the smoothing factor , up to a specified relative tolerance of 0.001 – except that if $n x = 8$ $n y = 8$ may be significantly less than : in this case the computed spline is simply the least squares bicubic polynomial approximation of degree 3, i.e., a spline with no interior knots. 8 Further Comments 8.1 Timing The time taken for a call of nag_2d_spline_fit_scat (e02ddc) depends on the complexity of the shape of the data, the value of the smoothing factor $S$, and the number of data points. If nag_2d_spline_fit_scat (e02ddc) is to be called for different values of $S$, much time can be saved by setting $start=Nag_Warm$ after the first call. It should be noted that choosing $S$ very small considerably increases computation time. 8.2 Choice of $S$ If the weights have been correctly chosen (see the e02 Chapter Introduction ), the standard deviation of $w r f r$ would be the same for all , equal to , say. In this case, choosing the smoothing factor in the range $σ 2 m ± 2m$ , as suggested by Reinsch (1967) , is likely to give a good start in the search for a satisfactory value. Otherwise, experimenting with different values of will be required from the start. In that case, in view of computation time and memory requirements, it is recommended to start with a very large value for and so determine the least squares bicubic polynomial; the value returned for , call it $fp 0$ , gives an upper bound for . Then progressively decrease the value of to obtain closer fits – say by a factor of 10 in the beginning, i.e., $S = fp 0 / 10$ $S = fp 0 / 100$ , and so on, and more carefully as the approximation shows more details. To choose very small is strongly discouraged. This considerably increases computation time and memory requirements. It may also cause rank-deficiency (as indicated by the argument ) and endanger numerical stability. The number of knots of the spline returned, and their location, generally depend on the value of $S$ and on the behaviour of the function underlying the data. However, if nag_2d_spline_fit_scat (e02ddc) is called with $start=Nag_Warm$, the knots returned may also depend on the smoothing factors of the previous calls. Therefore if, after a number of trials with different values of $S$ and $start=Nag_Warm$, a fit can finally be accepted as satisfactory, it may be worthwhile to call nag_2d_spline_fit_scat (e02ddc) once more with the selected value for $S$ but now using $start=Nag_Cold$. Often, nag_2d_spline_fit_scat (e02ddc) then returns an approximation with the same quality of fit but with fewer knots, which is therefore better if data reduction is also important. 8.3 Choice of nxest and nyest The number of knots may also depend on the upper bounds . Indeed, if at a certain stage in nag_2d_spline_fit_scat (e02ddc) the number of knots in one direction (say $n x$ ) has reached the value of its upper bound ( ), then from that moment on all subsequent knots are added in the other direction. This may indicate that the value of is too small. On the other hand, it gives you the option of limiting the number of knots the function locates in any direction. 
For example, by setting (the lowest allowable value for ), you can indicate that you want an approximation which is a simple cubic polynomial in the variable 8.4 Restriction of the Approximation Domain The fit obtained is not defined outside the rectangle $λ 4 , λ n x - 3 × μ 4 , μ n y - 3$. The reason for taking the extreme data values of $x$ and $y$ for these four knots is that, as is usual in data fitting, the fit cannot be expected to give satisfactory values outside the data region. If, nevertheless, you require values over a larger rectangle, this can be achieved by augmenting the data with two artificial data points $a,c,0$ and $b,d,0$ with zero weight, where $a,b × c,d$ denotes the enlarged rectangle. 8.5 Outline of Method Used First suitable knot sets are built up in stages (starting with no interior knots in the case of a cold start but with the knot set found in a previous call if a warm start is chosen). At each stage, a bicubic spline is fitted to the data by least squares and $θ$, the sum of squares of residuals, is computed. If $θ>S$, a new knot is added to one knot set or the other so as to reduce $θ$ at the next stage. The new knot is located in an interval where the fit is particularly poor. Sooner or later, we find that $θ≤S$ and at that point the knot sets are accepted. The function then goes on to compute a spline which has these knot sets and which satisfies the full fitting criterion specified by 2 and 3. The theoretical solution has $θ=S$. The function computes the spline by an iterative scheme which is ended when $θ=S$ within a relative tolerance of 0.001. The main part of each iteration consists of a linear least squares computation of special form. The minimal least squares solution is computed wherever the linear system is found to be rank-deficient. An exception occurs when the function finds at the start that, even with no interior knots $n x = n y = 8$, the least squares spline already has its sum of squares of residuals $≤ S$. In this case, since this spline (which is simply a bicubic polynomial) also has an optimal value for the smoothness measure $η$, namely zero, it is returned at once as the (trivial) solution. It will usually mean that $S$ has been chosen too large. For further details of the algorithm and its use see Dierckx (1981b) 8.6 Evaluation of Computed Spline The values of the computed spline at the points $tx r-1 , ty r-1$ , for , may be obtained in the array , of length at least , by the following code: e02dec(n, tx, ty, ff, &spline, &fail) is a structure of type Nag_2dSpline which is an output argument of nag_2d_spline_fit_scat (e02ddc). To evaluate the computed spline on a rectangular grid of points in the plane, which is defined by the coordinates stored in $tx q-1$ , for , and the coordinates stored in $ty r-1$ , for , returning the results in the array which is of length at least $kx × ky$ , the following call may be used: e02dfc(kx, ky, tx, ty, fg, &spline, &fail) is a structure of type Nag_2dSpline which is an output argument of nag_2d_spline_fit_scat (e02ddc). The result of the spline evaluated at grid point is returned in element $ky × q-1 + r - 1$ of the array 9 Example This example program reads in a value of , followed by a set of data points $x r , y r , f r$ and their weights $w r$ . It then calls nag_2d_spline_fit_scat (e02ddc) to compute a bicubic spline approximation for one specified value of S, and prints the values of the computed knots and B-spline coefficients. 
Finally it evaluates the spline at a small sample of points on a rectangular grid. 9.1 Program Text 9.2 Program Data 9.3 Program Results
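The program text, data and results themselves are not reproduced in this extract (they are linked from the original page). The fragment below is a minimal, hypothetical sketch of how the routine might be driven; it is based only on the signature in Section 2, the structure members described in Section 5 and the evaluation call shown in Section 8.6. The data values, the smoothing factor and the evaluation point are illustrative placeholders, not the data of the official example program, and the INIT_FAIL / NE_NOERROR / NAG_FREE idioms are the usual NAG C Library error-handling and deallocation conventions rather than items documented on this page.

#include <stdio.h>
#include <nag.h>
#include <nage02.h>

int main(void)
{
    /* Illustrative scattered data: at least 16 points with nonzero weight are required. */
    double x[30], y[30], f[30], weights[30];
    Integer m = 30, nxest = 14, nyest = 14, rank;
    double s = 10.0, fp, warmstartinf;
    Nag_2dSpline spline;
    NagError fail;
    Integer i;

    INIT_FAIL(fail);

    for (i = 0; i < m; i++) {          /* replace with real observations */
        x[i] = (double) (i % 6);
        y[i] = (double) (i / 6);
        f[i] = x[i] * x[i] - y[i];
        weights[i] = 1.0;
    }

    /* Cold start: the function places the knots itself and allocates spline.lamda, spline.mu, spline.c. */
    nag_2d_spline_fit_scat(Nag_Cold, m, x, y, f, weights, s,
                           nxest, nyest, &fp, &rank, &warmstartinf,
                           &spline, &fail);
    if (fail.code != NE_NOERROR) {
        printf("nag_2d_spline_fit_scat failed: %s\n", fail.message);
        return 1;
    }
    printf("weighted sum of squared residuals fp = %f, rank = %ld\n", fp, (long) rank);

    /* Evaluate the fitted spline at a single point, as in Section 8.6 (e02dec). */
    {
        double tx[1], ty[1], ff[1];
        tx[0] = 2.5;  ty[0] = 1.5;
        nag_2d_spline_eval(1, tx, ty, ff, &spline, &fail);
        if (fail.code == NE_NOERROR)
            printf("s(%.1f, %.1f) = %f\n", tx[0], ty[0], ff[0]);
    }

    /* Free the storage allocated inside the spline structure, as noted in Section 5. */
    NAG_FREE(spline.lamda);
    NAG_FREE(spline.mu);
    NAG_FREE(spline.c);
    return 0;
}

In the documented example the data are instead read from a file, a single value of s is used, and the resulting spline is evaluated on a small rectangular grid with nag_2d_spline_eval_rect (e02dfc), as described in Sections 8.6 and 9.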
{"url":"http://www.nag.com/numeric/cl/nagdoc_cl23/html/E02/e02ddc.html","timestamp":"2014-04-16T13:50:09Z","content_type":null,"content_length":"61893","record_id":"<urn:uuid:2c40c7b6-f230-46c0-87f2-c111075759f1>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00407-ip-10-147-4-33.ec2.internal.warc.gz"}
Categories, Allegories Categories, Allegories Categories, Allegories is a highly original work on categorical algebra by Peter Freyd and Andre Scedrov. • Peter Freyd and Andre Scedrov, Categories, Allegories, Mathematical Library Vol 39, North-Holland (1990). ISBN 978-0-444-70368-2. (sometimes whimsically referred to as “Cats, Alligators” or “Cats and Alligators”). On the Categories side, the book centers on that part of categorical algebra that studies exactness properties, or other properties enjoyed by nice or convenient categories such as toposes, and their relationship to logic (for example, geometric logic). A major theme throughout is the possibility of representation theorems (aka completeness theorems or embedding theorems) for various categorical structures, spanning back now about five decades (as of this writing) to the original embedding theorems for abelian categories, such as the Freyd-Mitchell embedding theorem. On the Allegories side: it may be said they were first widely publicized in this book. They comprise many aspects of relational algebra corresponding to the categorical algebra studied in the first part of the book. The book, while it covers an extraordinary amount of ground in less than 300 pages, is fairly idiosyncratic, especially in the choice of terminology and in the overall arrangement (designed to be self-contained for the diligent reader). There is no list of references given. Revised on August 25, 2012 22:45:28 by Urs Schreiber
{"url":"http://ncatlab.org/nlab/show/Categories%2C+Allegories","timestamp":"2014-04-19T02:29:26Z","content_type":null,"content_length":"13575","record_id":"<urn:uuid:a4aecabd-c326-4380-b0e6-840140e61b07>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00641-ip-10-147-4-33.ec2.internal.warc.gz"}
Department of Physics and Astronomy

Density is something that affects many of our everyday decisions. Consciously or not, we make mental calculations of density every time we interact with the physical world around us. Can we slide that box? Can we lift that rock? This lab examines some of the ways density affects our everyday lives.

People are often confused about the difference between weight and density. There is an old riddle which highlights this confusion: "What weighs more – a pound of feathers or a pound of lead?" The answer, of course, is that both weigh the same – one pound. However, feathers are much less dense than lead, and therefore take up much more space. Density is the ratio of an object's mass to its volume. This means that to find density, you must measure an object's mass and divide it by the amount of space it takes up. The standard units of density are [kg/m^3], although other units are commonly used such as [g/ml], [g/cm^3], or [kg/l]. 1 ml has the same volume as 1 cm^3.

Preliminary questions
1. What two things do you need to know about a sample if you are to determine its density?
2. What does density indicate?
3. If you measured the density of a nail on the earth and then on the moon, would the densities vary? Why?
4. If you measured the density of a gallon of water and then a teaspoon of water, would the densities vary? Why?
5. Consider the concrete blocks below. Explain your answers. □ Which has the greatest volume? □ The greatest density? □ The greatest mass?
6. Consider the balloons above. These balloons were the same size, but the second one has gotten smaller due to a change in temperature. Explain your answers. □ Which has the greatest volume? □ The greatest density? □ The greatest mass?

Equipment
• Copper sample
• Graduated cylinder
• Triple beam balance with platform
• Stack of post-1982 pennies

Activity 1: Finding density using volume
1. Design and describe an experiment that determines the density of water.
2. Measure and record the mass of the copper sample.
3. Determine the volume of the copper sample, first geometrically, then by calculating the amount of water displaced in the graduated cylinder. Show your measured values and all your calculations.
4. Which method do you think is more accurate? Why?
5. Using the method you feel is more accurate, determine the density of copper.
6. Using the table below, compare the value you just calculated to the accepted value using percent error.

Substance   Density (g/cm^3)
Aluminum    2.70
Zinc        7.08
Tin         7.31
Iron        7.87
Copper      8.92
Silver      10.50
Lead        11.34
Gold        19.32

Activity 2: Finding density without measuring the volume
It is difficult to find the volume of an irregularly shaped object, e.g. an intricate golden crown. First of all, it is very difficult to determine the volume geometrically. Secondly, it is difficult to attain great precision by observing a change in water level. As observed in the density lab, volume is needed in order to determine the density of an object. ρ = m/V (1) If volume cannot be determined to any great accuracy, then how can one accurately determine the density of an object? Archimedes, according to legend, solved this problem while bathing. King Hieron had provided a quantity of pure gold to a smith to make into a crown. When the crown was complete, the king suspected the goldsmith of stealing some of the gold and substituting some other metal.
The crown weighed the same as the original measure of gold, so Archimedes needed to know the density of the crown in order to determine whether there had been any foul play. He knew that the volume of the crown was equal to the amount of water it displaced, but needed a more precise method of measurement. While pondering this in the bath, Archimedes suddenly realized that he didn't need to know the volume; only the weight of the water displaced. Since Archimedes was able to measure weight much more accurately than volume, this was very good news indeed!

Archimedes' Principle: A body immersed in fluid is buoyed up by a force equal to the weight of the fluid displaced. F[buoyant] = W[water displaced] (2) Since the immersion of an object in water results in some water being lifted, that action causes an equal and opposite force back on the object (see Newton's 3rd law). This is why an object seems to weigh less in water than out of the water. The observed difference in weight is equal to the weight of the water that the object displaces. W[water displaced] = (W[object out] – W[object in]) (3) Now Archimedes could determine the weight of the displaced water. He already knew the density of water (1.000 g/ml), and so he was now ready to calculate the volume of the displaced water, and then go on to calculate the density of the crown! (A worked numerical example of this calculation is given at the end of this lab.)

1. How might Archimedes have determined the buoyant force (the difference in the weight of the crown in and out of the water)?
2. How is the buoyant force related to the weight of the displaced water?
3. If Archimedes knew the density of water and the weight of the displaced water, how could he then calculate the volume of the displaced water?
4. How is the volume of the displaced water related to the volume of the crown?
5. If Archimedes knew the volume of the crown and the weight of the crown, how could he then calculate the density of the crown?
6. As the tradition goes, Archimedes discovered that a quantity of silver had been mixed in with the gold, and so the goldsmith was exposed as a thief. What about his density results might have made Archimedes think that there could be some silver mixed in?
7. Did Archimedes ever have to directly measure the volume of the crown or the volume of the displaced water?
8. Given that the density of water is 1.000 g/ml, use Archimedes' Principle to experimentally determine the density of the copper sample. When measuring the sample in the water, be sure that it is completely submerged and try to remove as many air bubbles as possible. Use the platform of the triple beam balance to support the cup while you measure the mass of the copper in and out of the water. Record these measurements and show all your calculations in determining the density.
9. Compare your value of the density of copper to the accepted value using percent error.
10. Which method of determining the density of copper gave you the better result? The result from step #11 or from step #20?

Activity 3: What are pennies really made of?
Before the middle of 1982, pennies were made of solid copper. Beginning about halfway into the year 1982, pennies started being made of a less dense metal core with very thin copper plating. You will determine what post-1982 pennies are made of.
1. Is it better to use a single penny and determine its density, or is it better to combine multiple pennies together to determine their density as a unit? Why?
2. What is your method for determining the density of a penny? Describe in detail.
3.
3. Calculate the density of a post-1982 penny. Show all your work, and all your measured values.
4. How will the small coating of copper on the outside of the mystery metal affect your measured density? How did you determine this?
5. Compare your density calculation to the densities of the metals in the table above. What is your best estimate of the metal of the post-1982 pennies?
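Since Activities 1–3 all reduce to the same few calculations (a density from an Archimedes-style weighing, a percent error against the table, and picking the closest metal), a short worked script may help. The "measured" masses in it are invented placeholders, not data from this lab; the water and metal densities are the values quoted above.

```python
# Worked example for Activities 1-3. All "measured" masses are invented placeholders.

WATER_DENSITY = 1.000  # g/mL
METALS = {  # g/cm^3, from the table above
    "Aluminum": 2.70, "Zinc": 7.08, "Tin": 7.31, "Iron": 7.87,
    "Copper": 8.92, "Silver": 10.50, "Lead": 11.34, "Gold": 19.32,
}

def density_by_archimedes(mass_out_g, mass_in_g):
    """Density from the apparent mass loss in water (Archimedes' principle).

    mass_out_g: balance reading in air; mass_in_g: reading while submerged.
    The "missing" mass equals the mass of displaced water, whose volume in mL
    is numerically the same because water is 1.000 g/mL.
    """
    displaced_water_g = mass_out_g - mass_in_g
    volume_cm3 = displaced_water_g / WATER_DENSITY
    return mass_out_g / volume_cm3

def percent_error(measured, accepted):
    return abs(measured - accepted) / accepted * 100.0

def closest_metal(density):
    return min(METALS, key=lambda m: abs(METALS[m] - density))

if __name__ == "__main__":
    # Hypothetical copper sample: 53.5 g in air, 47.5 g submerged.
    rho_cu = density_by_archimedes(53.5, 47.5)
    print(f"copper: {rho_cu:.2f} g/cm^3, "
          f"{percent_error(rho_cu, METALS['Copper']):.1f}% error")

    # Hypothetical stack of post-1982 pennies: 25.0 g in air, 21.5 g submerged.
    rho_penny = density_by_archimedes(25.0, 21.5)
    print(f"pennies: {rho_penny:.2f} g/cm^3 -> closest to {closest_metal(rho_penny)}")
```

With these made-up numbers the penny stack comes out near 7.1 g/cm^3, which the lookup matches to zinc; your own measurements are what the lab actually asks for.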
{"url":"http://physics.appstate.edu/laboratory/quick-guides/density","timestamp":"2014-04-17T13:11:35Z","content_type":null,"content_length":"23176","record_id":"<urn:uuid:c2b6cd8e-3d71-4743-8191-78688fdf017c>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00085-ip-10-147-4-33.ec2.internal.warc.gz"}
Homework Help
What is the distance from the point P(x,y,z) to the z axis? Explain your answer by graphing.
• calculus - mo, Saturday, April 4, 2009 at 9:52am
Determine whether the plane passing through the points A(1,2,3), B(0,1,0) and C(0,2,2) passes through the origin
• calculus - Count Iblis, Saturday, April 4, 2009 at 9:55am
That's the distance from the point p1 = (0, 0, z) to p2 = (x, y, z). The square of the distance is the square of the norm of the difference of p1 and p2, which is the inner product of that difference vector with itself (also conventionally denoted as the square of the vector): d^2 = (p1-p2)^2 = x^2 + y^2
• calculus - Count Iblis, Saturday, April 4, 2009 at 10:42am
Determine whether the plane passing through the points A(1,2,3), B(0,1,0) and C(0,2,2) passes through the origin.
Let's perform a translation, so that B moves to the origin and the old origin moves to minus B:
A = (1,1,3)
C = (0,1,2)
(I prefer this notation instead of writing A(x,y,z), so I consider A, B, C etc. as vectors.) The location of the old origin is denoted by X:
X = (0,-1,0)
If X can be written as a linear combination of A and C, then X is on the plane. So, what you need to do is to determine the rank of the matrix which has A, C and X as its rows (or columns). I find that the rank is 3, so X is not a point on the plane.
• calculus - Reiny, Saturday, April 4, 2009 at 11:11am
>> Determine whether the plane passing through the points A(1,2,3), B(0,1,0) and C(0,2,2) passes through the origin. <<
alternate method:
vector AB = (1,1,3)
vector AC = (1,0,1)
normal to these is (1,2,-1), I took the cross-product
so the equation of the plane is x + 2y - z = k
put in (1,2,3), or any of the other two points, 1 + 4 - 3 = k = 2
plane equation is x + 2y - z = 2, and the point (0,0,0) does not satisfy this, so the plane does not pass through the origin
• calculus - Reiny, Saturday, April 4, 2009 at 11:24am
>> What does the following equation represent in R^3 ? Justify your answer. x=y=z <<
suppose I rewrote the statement this way
(x-0)/1 = (y-0)/1 = (z-0)/1
does that help?
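Both approaches in the thread can be checked numerically; a quick sketch (assuming NumPy), using the same points and the same translation idea:

```python
import numpy as np

A = np.array([1, 2, 3])
B = np.array([0, 1, 0])
C = np.array([0, 2, 2])

# Normal to the plane through A, B, C
n = np.cross(A - B, A - C)          # -> [1, 2, -1]
k = n.dot(A)                        # plane: n . x = k
print("plane coefficients:", n, "k =", k)       # x + 2y - z = 2

# Does the origin satisfy the plane equation?
print("origin on plane:", bool(np.isclose(n.dot([0, 0, 0]), k)))   # False

# Same conclusion via the rank test: translate B to the origin, then ask
# whether the old origin (-B) lies in the span of A-B and C-B.
M = np.vstack([A - B, C - B, -B])
print("rank:", np.linalg.matrix_rank(M))        # 3, so the origin is not on the plane
```

Both tests agree: the plane is x + 2y - z = 2, which does not contain the origin.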
{"url":"http://www.jiskha.com/display.cgi?id=1238852570","timestamp":"2014-04-19T02:43:10Z","content_type":null,"content_length":"11001","record_id":"<urn:uuid:47e87020-0275-4a69-9f70-8f0d06651c14>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00271-ip-10-147-4-33.ec2.internal.warc.gz"}
Inverse hyperbolic cosine: Transformations (subsection 16/01) Transformations and argument simplifications Argument involving basic arithmetic operations Involving cosh^-1(- z) Involving cosh^-1(-z) and cosh^-1(z) Involving cosh^-1(cz) Involving cosh^-1(i z) and cosh^-1(1+2 z^2) Involving cosh^-1(-i z) and cosh^-1(1+2 z^2) Involving cosh^-1((z[2])^1/2) Involving cosh^-1((z[2])^1/2) and cosh^-1(z) Involving cosh^-1(a (b z^c)^m) Involving cosh^-1(a (b z^c)^m) and cosh^-1(a b^m z^m c) Involving cosh^-1(1-2z^2) Involving cosh^-1(1-2 z^2) and cosh^-1(z) Involving cosh^-1(2z^2-1) Involving cosh^-1(2 z^2-1) and cosh^-1(z) Involving cosh^-1(z^2-2/z^2) Involving cosh^-1(z^2-2/z^2) and cosh^-1(1/z) Involving cosh^-1(2-z^2/z^2) Involving cosh^-1(2-z^2/z^2) and cosh^-1(1/z) Involving cosh^-1((1-z)^1/2) Involving cosh^-1((1-z)^1/2) and cosh^-1(z^1/2) Involving cosh ^-1(1+c z/2^1/2) Involving cosh^-1(1+z/2^1/2) and cosh^-1(z) Involving cosh^-1(1-z/2^1/2) and cosh^-1(z) Involving cosh^-1((z-1)^1/2/z^1/2) Involving cosh^-1((z-1)^1/2/z^1/2) and cosh^-1(1/z^1/2) Involving cosh^-1((z-1)^1/2/z^1/2) and cosh^-1(1/z^1/2) Involving cosh^-1((1-z)^1/2/(-z)^1/2) Involving cosh^-1((1-z)^1/2/(-z)^1/2) and cosh^-1(1/z^1/2) Involving cosh^-1((1-z)^1/2/(-z)^1/2) and cosh^-1(1/z^1/2) Involving cosh^-1(z-1/z^1/2) Involving cosh^-1(z-1/z^1/2) and cosh^-1(1/z^1/2) Involving cosh^-1(z-1/z^1/2) and cosh^-1(1/z^1/2) Involving cosh^-1((z+c)^1/2/(2 z)^1/2) Involving cosh^-1((z-1)^1/2/(2 z)^1/2) and cosh^-1(1/z) Involving cosh^-1((z+1)^1/2/(2 z)^1/2) and cosh^-1(1/z) Involving cosh^-1((a-z)^1/2/(-2 z)^1/2) Involving cosh^-1((-z-1)^1/2/(-2 z)^1/2) and cosh^-1(1/z) Involving cosh^-1((1-z)^1/2/(-2 z)^1/2) and cosh^-1(1/z) Involving cosh^-1(z+c/2 z^1/2) Involving cosh^-1(z-1/2 z^1/2) and cosh^-1(1/z) Involving cosh^-1(z+1/2 z^1/2) and cosh^-1(1/z) Involving cosh^-1((1-z^2)^1/2) Involving cosh^-1((1-z^2)^1/2) and cosh^-1(z) Involving cosh^-1((z^2-1)^1/2/z) Involving cosh^-1((z^2-1)^1/2/z) and cosh^-1(1/z) Involving cosh^-1((z^2-1)^1/2/(z[2])^1/2) Involving cosh^-1((z^2-1)^1/2/(z[2])^1/2) and cosh^-1(1/z) Involving cosh^-1((1-z^2)^1/2/(-z^2)^1/2) Involving cosh^-1((1-z^2)^1/2/(-z^2)^1/2) and cosh^-1(1/z) Involving cosh^-1(z^2-1/z^2^1/2) Involving cosh^-1(z^2-1/z^2^1/2) and cosh^-1(1/z) Involving cosh^-1(2 z (1-z^2)^1/2) Involving cosh^-1(2 z (1-z^2)^1/2) and cosh^-1(z) Involving cosh^-1(2 (-1+z^2)^1/2/z^2) Involving cosh^-1(2 (z^2-1)^1/2/z^2) and cosh^-1(1/z) Involving cosh^-1(((1-(1+c z^2)^1/2)/2)^1/2) Involving cosh^-1(((1-(1+z^2)^1/2)/2)^1/2) and cosh^-1(i z) Involving cosh^-1(((1-(1-z^2)^1/2)/2)^1/2) and cosh^-1(z) Involving cosh^-1(z (1-(1-z^2)^1/2)^1/2/(2z^2)^1/2) Involving cosh^-1(z (1-(1-z^2)^1/2)^1/2/(2z^2)^1/2) and cosh^-1(z) Involving cosh^-1(z ((1-(1-z^2)^1/2)/(2z^2))^1/2) Involving cosh^-1(z ((1-(1-z^2)^1/2)/(2z^2))^1/2) and cosh^-1(z) Involving cosh^-1((z-(z^2-1)^1/2)^1/2/(2z)^1/2) Involving cosh^-1((z-(z^2-1)^1/2)^1/2/(2z)^1/2) and cosh^-1(1/z) Involving cosh^-1(((z-(z^2-1)^1/2)/(2z))^1/2) Involving cosh^-1(((z-(z^2-1)^1/2)/(2z))^1/2) and cosh^-1(1/z)
{"url":"http://functions.wolfram.com/ElementaryFunctions/ArcCosh/16/01/ShowAll.html","timestamp":"2014-04-18T18:20:27Z","content_type":null,"content_length":"125662","record_id":"<urn:uuid:cb97af05-1dd7-4ca7-8443-2a1922742f51>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00113-ip-10-147-4-33.ec2.internal.warc.gz"}
Online learning for matrix factorization and sparse coding Results 1 - 10 of 90 , 904 "... We consider the empirical risk minimization problem for linear supervised learning, with regularization by structured sparsity-inducing norms. These are defined as sums of Euclidean norms on certain subsets of variables, extending the usual ℓ1-norm and the group ℓ1-norm by allowing the subsets to ov ..." Cited by 97 (15 self) Add to MetaCart We consider the empirical risk minimization problem for linear supervised learning, with regularization by structured sparsity-inducing norms. These are defined as sums of Euclidean norms on certain subsets of variables, extending the usual ℓ1-norm and the group ℓ1-norm by allowing the subsets to overlap. This leads to a specific set of allowed nonzero patterns for the solutions of such problems. We first explore the relationship between the groups defining the norm and the resulting nonzero patterns, providing both forward and backward algorithms to go back and forth from groups to patterns. This allows the design of norms adapted to specific prior knowledge expressed in terms of nonzero patterns. We also present an efficient active set algorithm, and analyze the consistency of variable selection for least-squares linear regression in low and high-dimensional settings. - In: ICML "... This paper proposes to combine two approaches for modeling data admitting sparse representations: On the one hand, dictionary learning has proven very effective for various signal restoration and representation tasks. On the other hand, recent work on structured sparsity provides a natural framework ..." Cited by 72 (19 self) Add to MetaCart This paper proposes to combine two approaches for modeling data admitting sparse representations: On the one hand, dictionary learning has proven very effective for various signal restoration and representation tasks. On the other hand, recent work on structured sparsity provides a natural framework for modeling dependencies between dictionary elements. We propose to combine these approaches to learn dictionaries embedded in a hierarchy. We show that the proximal operator for the tree-structured sparse regularization that we consider can be computed exactly in linear time with a primal-dual approach, allowing the use of accelerated gradient methods. Experiments show that for natural image patches, learned dictionary elements organize themselves naturally in such a hierarchical structure, leading to an improved performance for restoration tasks. When applied to text documents, our method learns hierarchies of topics, thus providing a competitive alternative to probabilistic topic models. Learned sparse representations, initially introduced by Olshausen and Field [1997], have been the focus of much research in machine learning, signal processing and neuroscience, leading to state-of-theart algorithms for several problems in image processing. Modeling signals as a linear combination of a "... We develop an online variational Bayes (VB) algorithm for Latent Dirichlet Allocation (LDA). Online LDA is based on online stochastic optimization with a natural gradient step, which we show converges to a local optimum of the VB objective function. It can handily analyze massive document collection ..." Cited by 61 (8 self) Add to MetaCart We develop an online variational Bayes (VB) algorithm for Latent Dirichlet Allocation (LDA). 
Online LDA is based on online stochastic optimization with a natural gradient step, which we show converges to a local optimum of the VB objective function. It can handily analyze massive document collections, including those arriving in a stream. We study the performance of online LDA in several ways, including by fitting a 100-topic topic model to 3.3M articles from Wikipedia in a single pass. We demonstrate that online LDA finds topic models as good or better than those found with batch VB, and in a fraction of the time. 1 "... Sparse and redundant representation modeling of data assumes an ability to describe signals as linear combinations of a few atoms from a pre-specified dictionary. As such, the choice of the dictionary that sparsifies the signals is crucial for the success of this model. In general, the choice of a p ..." Cited by 44 (3 self) Add to MetaCart Sparse and redundant representation modeling of data assumes an ability to describe signals as linear combinations of a few atoms from a pre-specified dictionary. As such, the choice of the dictionary that sparsifies the signals is crucial for the success of this model. In general, the choice of a proper dictionary can be done using one of two ways: (i) building a sparsifying dictionary based on a mathematical model of the data, or (ii) learning a dictionary to perform best on a training set. In this paper we describe the evolution of these two paradigms. As manifestations of the first approach, we cover topics such as wavelets, wavelet packets, contourlets, and curvelets, all aiming to exploit 1-D and 2-D mathematical models for constructing effective dictionaries for signals and images. Dictionary learning takes a different route, attaching the dictionary to a set of examples it is supposed to serve. From the seminal work of Field and Olshausen, through the MOD, the K-SVD, the Generalized PCA and others, this paper surveys the various options such training has to offer, up to the most recent contributions and structures. , 2010 "... Sparse coding consists in representing signals as sparse linear combinations of atoms selected from a dictionary. We consider an extension of this framework where the atoms are further assumed to be embedded in a tree. This is achieved using a recently introduced tree-structured sparse regularizatio ..." Cited by 39 (8 self) Add to MetaCart Sparse coding consists in representing signals as sparse linear combinations of atoms selected from a dictionary. We consider an extension of this framework where the atoms are further assumed to be embedded in a tree. This is achieved using a recently introduced tree-structured sparse regularization norm, which has proven useful in several applications. This norm leads to regularized problems that are difficult to optimize, and we propose in this paper efficient algorithms for solving them. More precisely, we show that the proximal operator associated with this norm is computable exactly via a dual approach that can be viewed as the composition of elementary proximal operators. Our procedure has a complexity linear, or close to linear, in the number of atoms, and allows the use of accelerated gradient techniques to solve the tree-structured sparse approximation problem at the same computational cost as traditional ones using the ℓ1-norm. Our method is efficient and scales gracefully to millions of variables, which we illustrate in two types of applications: first, we consider fixed hierarchical dictionaries of wavelets to denoise natural images. 
Then, we apply our optimization tools in the context of dictionary learning, where learned dictionary elements naturally organize in a prespecified arborescent structure, leading to a better performance in reconstruction of natural image patches. When applied to text documents, our method learns hierarchies of topics, thus providing a competitive alternative to probabilistic topic models. - In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2010. 8. Image taken with a Canon 1D Mark III, at 35mm f/4.5. Images "... Blur from camera shake is mostly due to the 3D rotation of the camera, resulting in a blur kernel that can be significantly non-uniform across the image. However, most current deblurring methods model the observed image as a convolution of a sharp image with a uniform blur kernel. We propose a new p ..." Cited by 35 (3 self) Add to MetaCart Blur from camera shake is mostly due to the 3D rotation of the camera, resulting in a blur kernel that can be significantly non-uniform across the image. However, most current deblurring methods model the observed image as a convolution of a sharp image with a uniform blur kernel. We propose a new parametrized geometric model of the blurring process in terms of the rotational velocity of the camera during exposure. We apply this model to two different algorithms for camera shake removal: the first one uses a single blurry image (blind deblurring), while the second one uses both a blurry image and a sharp but noisy image of the same scene. We show that our approach makes it possible to model and remove a wider class of blurs than previous approaches, including uniform blur as a special case, and demonstrate its effectiveness with experiments on real images. 1. - IN ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS , 2010 "... Sparse methods for supervised learning aim at finding good linear predictors from as few variables as possible, i.e., with small cardinality of their supports. This combinatorial selection problem is often turnedinto a convex optimization problem byreplacing the cardinality function by its convex en ..." Cited by 30 (9 self) Add to MetaCart Sparse methods for supervised learning aim at finding good linear predictors from as few variables as possible, i.e., with small cardinality of their supports. This combinatorial selection problem is often turnedinto a convex optimization problem byreplacing the cardinality function by its convex envelope (tightest convex lower bound), in this case the ℓ1-norm. In this paper, we investigate more general set-functions than the cardinality, that may incorporate prior knowledge or structural constraints which are common in many applications: namely, we show that for nonincreasing submodular set-functions, the corresponding convex envelope can be obtained from its Lovász extension, a common tool in submodular analysis. This defines a family of polyhedral norms, for which we provide generic algorithmic tools (subgradients and proximal operators) and theoretical results (conditions for support recovery or high-dimensional inference). By selecting specific submodular functions, we can give a new interpretation to known norms, such as those based on rank-statistics or grouped norms with potentially overlapping groups; we also define new norms, in particular ones that can be used as non-factorial priors for supervised learning. "... A label consistent K-SVD (LC-KSVD) algorithm to learn a discriminative dictionary for sparse coding is presented. 
In addition to using class labels of training data, we also associate label information with each dictionary item (columns of the dictionary matrix) to enforce discriminability in sparse ..." Cited by 21 (6 self) Add to MetaCart A label consistent K-SVD (LC-KSVD) algorithm to learn a discriminative dictionary for sparse coding is presented. In addition to using class labels of training data, we also associate label information with each dictionary item (columns of the dictionary matrix) to enforce discriminability in sparse codes during the dictionary learning process. More specifically, we introduce a new label consistent constraint called ‘discriminative sparse-code error ’ and combine it with the reconstruction error and the classification error to form a unified objective function. The optimal solution is efficiently obtained using the K-SVD algorithm. Our algorithm learns a single over-complete dictionary and an optimal linear classifier jointly. It yields dictionaries so that feature points with the same class labels have similar sparse codes. Experimental results demonstrate that our algorithm outperforms many recently proposed sparse coding techniques for face and object category recognition under the same learning conditions. 1. "... Abstract. Sparse coding of sensory data has recently attracted notable attention in research of learning useful features from the unlabeled data. Empirical studies show that mapping the data into a significantly higherdimensional space with sparse coding can lead to superior classification performan ..." Cited by 20 (0 self) Add to MetaCart Abstract. Sparse coding of sensory data has recently attracted notable attention in research of learning useful features from the unlabeled data. Empirical studies show that mapping the data into a significantly higherdimensional space with sparse coding can lead to superior classification performance. However, computationally it is challenging to learn a set of highly over-complete dictionary bases and to encode the test data with the learned bases. In this paper, we describe a mixture sparse coding model that can produce high-dimensional sparse representations very efficiently. Besides the computational advantage, the model effectively encourages data that are similar to each other to enjoy similar sparse representations. What’s more, the proposed model can be regarded as an approximation to the recently proposed local coordinate coding (LCC), which states that sparse coding can approximately learn the nonlinear manifold of the sensory data in a locally linear manner. Therefore, the feature learned by the mixture sparse coding model works pretty well with linear classifiers. We apply the proposed model to PASCAL VOC 2007 and 2009 datasets for the classification task, both achieving stateof-the-art performances. Key words: Sparse coding, highly over-complete dictionary training, mixture model, mixture sparse coding, image classification, PASCAL VOC challenge - JMLR "... We consider a class of learning problems regularized by a structured sparsity-inducing norm defined as the sum of ℓ2- or ℓ∞-norms over groups of variables. Whereas much effort has been put in developing fast optimization techniques when the groups are disjoint or embedded in a hierarchy, we address ..." Cited by 16 (5 self) Add to MetaCart We consider a class of learning problems regularized by a structured sparsity-inducing norm defined as the sum of ℓ2- or ℓ∞-norms over groups of variables. 
Whereas much effort has been put in developing fast optimization techniques when the groups are disjoint or embedded in a hierarchy, we address here the case of general overlapping groups. To this end, we present two different strategies: On the one hand, we show that the proximal operator associated with a sum of ℓ∞norms can be computed exactly in polynomial time by solving a quadratic min-cost flow problem, allowing the use of accelerated proximal gradient methods. On the other hand, we use proximal splitting techniques, and address an equivalent formulation with non-overlapping groups, but in higher dimension and with additional constraints. We propose efficient and scalable algorithms exploiting these two strategies, which are significantly faster than alternative approaches. We illustrate these methods with several problems such as CUR matrix factorization, multi-task learning of tree-structured dictionaries, background subtraction in video sequences, image denoising with wavelets, and topographic dictionary learning of natural image patches.
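For readers who want to experiment with the ideas in these abstracts, scikit-learn ships a mini-batch (online) dictionary-learning estimator in the spirit of the title paper. The sketch below runs it on random stand-in data rather than real image patches, and the parameter choices (100 atoms, alpha = 1.0, batch size 200) are arbitrary illustration values, not settings from any of the cited papers.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

# Toy data standing in for e.g. 8x8 image patches (n_samples x n_features).
rng = np.random.RandomState(0)
X = rng.randn(2000, 64)
X -= X.mean(axis=1, keepdims=True)          # patch-wise centering

# Online dictionary learning: the dictionary is updated from mini-batches,
# so the method scales to streams of patches or documents.
dico = MiniBatchDictionaryLearning(
    n_components=100,   # over-complete: 100 atoms for 64-dimensional data
    alpha=1.0,          # sparsity penalty on the codes
    batch_size=200,
    random_state=0,
)
codes = dico.fit(X).transform(X)            # sparse codes, shape (2000, 100)
D = dico.components_                        # learned dictionary, shape (100, 64)

sparsity = np.mean(codes != 0)
print(f"dictionary {D.shape}, fraction of non-zero coefficients: {sparsity:.3f}")
print("relative reconstruction error:",
      np.linalg.norm(X - codes @ D) / np.linalg.norm(X))
```

On real image patches the learned atoms look like edge and texture detectors; the structured and tree-structured penalties discussed above replace the plain ℓ1 term with group norms, which this plain sketch does not attempt.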
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=10310326","timestamp":"2014-04-21T16:34:25Z","content_type":null,"content_length":"41386","record_id":"<urn:uuid:bf48d932-5e2d-418e-aeb2-0d47329a982a>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00307-ip-10-147-4-33.ec2.internal.warc.gz"}
Game theory
September 29th 2012, 10:26 PM #1 Sep 2012
Game theory
In the two-player game "Morra", the players simultaneously hold up some fingers and each guesses the total number of fingers held up. If exactly one player guesses correctly, then the other player pays her the amount of her guess (in dollars, say). If either both players guess correctly or neither does, then no payments are made.
Consider a version of the game in which the number of fingers each player may hold up is restricted to either one or two.
a. Given the symmetry of the game, each player's equilibrium payoff is 0 by the result from (some exercise that says: show that in any symmetric strictly competitive game in which U2 = -U1, where Ui is player i's expected payoff function for i = 1,2, each player's payoff in every mixed strategy Nash equilibrium is 0). Find the mixed strategies of player 1 that guarantee that her payoff is at least 0 (i.e. the strategies such that her payoff is at least 0 for each pure strategy of player 2) and hence find all the mixed strategy equilibria of the game.
b. Find the rationalizable actions of each player in this game.
Re: Game theory
Help would be appreciated!
September 29th 2012, 10:47 PM #2 Sep 2012
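One way to get a feel for part (a) is to tabulate the payoffs and brute-force the candidate mixtures. The sketch below is my own check, not part of the original post; it restricts attention to mixtures of the two "guess 3" strategies, since the payoff matrix rules out putting weight on the other two.

```python
# Pure strategies: (fingers shown, guess of the total), feasible guesses only.
STRATS = [(1, 2), (1, 3), (2, 3), (2, 4)]

def payoff(p1, p2):
    """Payoff to player 1: the lone correct guesser wins her guess."""
    (s1, g1), (s2, g2) = p1, p2
    total = s1 + s2
    if g1 == total and g2 != total:
        return g1
    if g2 == total and g1 != total:
        return -g2
    return 0

M = [[payoff(r, c) for c in STRATS] for r in STRATS]
for r, row in zip(STRATS, M):
    print(r, row)

# Mix (1,3) with probability p and (2,3) with probability 1-p, and record the
# p's whose worst-case expected payoff against any pure strategy is >= 0.
i13, i23 = STRATS.index((1, 3)), STRATS.index((2, 3))
good = []
for k in range(10001):
    p = k / 10000
    worst = min(p * M[i13][j] + (1 - p) * M[i23][j] for j in range(len(STRATS)))
    if worst >= 0:
        good.append(p)
print(f"worst-case payoff >= 0 for p roughly in [{good[0]:.4f}, {good[-1]:.4f}]")
# The scan comes out close to the interval [4/7, 3/5].
```

The grid search reproduces the usual textbook answer: mixing (1, guess 3) and (2, guess 3), with the weight on (1, guess 3) between about 4/7 and 3/5, guarantees a non-negative payoff.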
{"url":"http://mathhelpforum.com/advanced-statistics/204323-game-theory.html","timestamp":"2014-04-17T06:04:02Z","content_type":null,"content_length":"31956","record_id":"<urn:uuid:c0070f3d-8dc1-4af8-9225-6c184625b253>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00322-ip-10-147-4-33.ec2.internal.warc.gz"}
Finding A Delta For A Given Epsilon
May 31st 2012, 07:22 AM
Finding A Delta For A Given Epsilon
I need help finding the delta for a given epsilon. I tried to solve it using algebra but I hit a wall.
May 31st 2012, 07:29 AM
Possible Solution
Would this work?
May 31st 2012, 08:10 AM
Re: Finding A Delta For A Given Epsilon
You need to prove that the function $f(x)=2-\frac{1}{x}$ is continuous at $x=1$, that means you have to prove: $\forall \epsilon>0, \exists \delta>0, \forall x: |x-1|<\delta \Rightarrow |f(x)-1|<\epsilon$
Choose an arbitrary $\epsilon>0$; we obtain: $|f(x)-1| = \left|\left(2-\frac{1}{x}\right)-1\right|=\left|1-\frac{1}{x}\right|=\left|\frac{x-1}{x}\right|=\frac{|x-1|}{|x|}<\epsilon$
You know that $x$ lies in a $\delta-$ neighbourhood of 1, thus choose an upper bound for $|x-1|$ so that you can get rid of the $|x|$.
May 31st 2012, 09:22 AM
Re: Finding A Delta For A Given Epsilon
to continue, suppose that we require that, no matter what, δ ≤ 1/2. this means that x is between 1/2 and 3/2. hence |x| = x, and we have:
|x| ≥ 1/2, so that 1/|x| ≤ 2.
if we also require that δ ≤ ε/2, we have:
|f(x) - 1| = |x-1|/|x| ≤ 2|x-1| < 2δ ≤ 2(ε/2) = ε.
therefore, one possibility is: δ = min(1/2,ε/2).
(intuitively, you can see we want δ < 1, for if we let x get near 0, f(x) behaves very badly, and it will be hard to "make sure it's changing less than ε").
May 31st 2012, 09:33 AM
Re: Finding A Delta For A Given Epsilon
I lost you at $|\frac{x-1}{x}|<\epsilon$ because $\epsilon=0.1$. (see initial post above) Also, I could not understand the answer provided at CalcChat.com: <http://www.calcchat.com/book/Calculus-9e/>, (chapter 1, section 2, problem 37).
May 31st 2012, 09:49 AM
Re: Finding A Delta For A Given Epsilon
Let $\delta=0.05$. If $|x-1|<\delta$ then $\left| {\left( {2 - \frac{1}{x}} \right) - 1} \right| = \frac{{\left| {x - 1} \right|}}{{\left| x \right|}} \leqslant \frac{\delta }{{\left| x \right|}} < \frac{{0.05}}{{0.95}} < 0.1$
May 31st 2012, 10:18 AM
Re: Finding A Delta For A Given Epsilon
Thanks for helping me see the problem a little better. I apologize, your statements were correct. I forgot about $|f(x)-L|<\epsilon$ and $|x-c|<\delta$. I guess I had run into the wall too many times and got discouraged. Lol! Also, your word choice caught me off guard; though, one day I will be able to correctly interpret statements in this manner by self studying. I'm currently studying Anatomy of Mathematics by R. B. Kershner for fun and hopefully I can get my hands on many more post-modern books of mathematics. A lot of 21st century books are not written with the same understanding!
May 31st 2012, 11:30 PM
Re: Finding A Delta For A Given Epsilon
i think it can be hard to see how "epsilon-delta" arguments capture the essence of continuity. intuitively, we think of continuous functions as ones for which, if we only move "over" (left-or-right) a "little bit", we only move "up-or-down" a "little bit". perhaps a little more clearly, we mean if x is near the number a, then f(x) should be near the number f(a). so one of the first things we do is quantify what we mean by "near". the distance between two numbers a and b (how far apart they are) can be expressed by |a-b|. so to say that x is near a is to say |x - a| < δ, where δ = "a small positive number". we want to have this imply f(x) is near f(a), so: |f(x) - f(a)| < ε, where ε = "another small number". so why do we start with ε first, and THEN find δ? this is hard to explain.
but the basic idea is: discontinuities can be very slight: the graph can be broken, but you might only see it under a magnifying glass, or a microscope. so we don't just want the difference between f(x) and f(a) to be "small", we want it to be arbitrarily small (constant functions are nice, they don't vary a bit, so f(x) - f(a) = 0, no matter what. but other functions usually vary a lot more than that. they might even go up and down very rapidly even as x travels a short distance, but we want to consider these as continuous, too). so if we want the difference between f(x) and f(a) to be "arbitrarily small", we have to find a delta for EACH ε > 0 (especially the very tiny ones). note that the closer f(x) is to "flat", the bigger a delta we can use, since f(x) doesn't change very much even if x changes a LOT. so to ensure that "ε" is "arbitrarily small", we might not even need a "small" δ (but if a "big one" works, a "smaller" one will, too). another way to look at this is: "how large a deviation in our input" (think of δ = "deviation") can we tolerate while still keeping the "error of our results" small (think of ε = "error")? for continuous functions, a small error in input, should mean a small error in output. functions that are discontinuous, like: f(x) = -1, x < 0 f(x) = 1, x ≥ 0 fail this in a BIG way: we could move a TINY little bit left of 0 (like 0.000001), or the same tiny bit right of 0 (0.000001 again), and yet the difference of values is HUGE compared to the error of input (it is 2, which is 20,000,000 times the "delta". and making delta smaller doesn't help). you can see that if we pick 0 < ε < 2, we're not going to find ANY δ that works for a = 0.
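The recipe worked out above (δ = min(1/2, ε/2) for f(x) = 2 − 1/x near x = 1) is easy to stress-test numerically; a small random sanity check:

```python
import random

def f(x):
    return 2 - 1 / x

def delta_for(eps):
    # The choice derived in the thread: keep x away from 0 and scale with eps.
    return min(0.5, eps / 2)

random.seed(1)
for eps in (1.0, 0.1, 0.01, 0.001):
    d = delta_for(eps)
    worst = 0.0
    for _ in range(100_000):
        # Sample x with 0 < |x - 1| < delta.
        x = 1 + random.uniform(-d, d)
        if x == 1:
            continue
        worst = max(worst, abs(f(x) - 1))
    print(f"eps={eps:<6} delta={d:<6} worst |f(x)-1| seen = {worst:.6f} ok={worst < eps}")
```

Every sampled x stays within ε of the limit, and for ε = 0.1 the worst deviation hovers around 0.053, matching the δ/0.95 bound used above.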
{"url":"http://mathhelpforum.com/calculus/199507-finding-delta-given-epsilon-print.html","timestamp":"2014-04-16T05:18:57Z","content_type":null,"content_length":"15570","record_id":"<urn:uuid:fc4ff7dc-81cb-43d0-91f6-c4e6676e3083>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00330-ip-10-147-4-33.ec2.internal.warc.gz"}
Pine Lake ACT Tutor Find a Pine Lake ACT Tutor ...I have a degree in genetics, and have taught genetics as a teacher in 9th grade biology. Also I was an instructor at Emory University in an introductory genetics course. As an undergraduate and graduate student in genetics, this subject is one that I know inside and out. 15 Subjects: including ACT Math, chemistry, geometry, biology ...Helped two twins with the math portion of the PSAT. Taught similar topics as a GMAT instructor for three years. Tutored ACT math topics during high school, college, and as a GMAT instructor for three years. 28 Subjects: including ACT Math, calculus, GRE, finance ...As well as tutoring, I have volunteered in my local elementary school to help student with their homework for their homework club. Also I mentor students from middle school to high school on behavior, studies, and other topics. I have attended many lectures on best study skills, tutored other c... 14 Subjects: including ACT Math, chemistry, geometry, biology ...Math and science has opened many doors for me and they can do the same for you!Differential Equations is an intimidating and potentially frustrating course. The course is usually taken by engineering students and taught by mathematics professors. The pure mathematical approach can be discouraging to engineering students and make the course seem like a waste of time. 15 Subjects: including ACT Math, calculus, physics, algebra 2 ...I have been teaching for 4 years. I am a certified teacher in prek-5th grade and have been teaching reading and phonics for 4 years. I am certified in prek-5th. 12 Subjects: including ACT Math, reading, writing, grammar
{"url":"http://www.purplemath.com/pine_lake_act_tutors.php","timestamp":"2014-04-17T07:31:30Z","content_type":null,"content_length":"23392","record_id":"<urn:uuid:8efe6578-cadb-4656-b97d-75ac6f441061>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00413-ip-10-147-4-33.ec2.internal.warc.gz"}
Mplus Discussion >> a simple identifiaction problem LR posted on Monday, March 04, 2013 - 11:39 am In the path model which is discussed here: there are 4 observed variables. Cursory inspection tells me that there should be 1 model degree of freedom: 5*4/2=10 available DF, and 9 estimated parameters. However, the "Degrees of Freedom" is reported by Mplus as 0. Could you explain what has happened here ? Thanks !! Linda K. Muthen posted on Monday, March 04, 2013 - 3:42 pm The model is just-identified. The sample statistics for the dependent variables are two means, two variances, one covariance, and four covariances between the dependent and independent variables for nine total. LR posted on Monday, March 04, 2013 - 4:20 pm Thanks Linda. I also worked out nine total DF used. But I thought there were 10 available: 5*4/2=10, leaving 1 free ? Where is my logic going wrong ? Linda K. Muthen posted on Monday, March 04, 2013 - 4:51 pm You are assuming all variables are dependent variables. The degrees of freedom are calculated differently when there is a combination of dependent and independent variables. In the model you refer to both the H1 and H0 models have nine parameters resulting in a just-identified model with zero degrees of freedom. LR posted on Monday, March 04, 2013 - 10:52 pm Thanks again Linda. Could you indulge me and explain how this calculation of 9 available DF (free parameters) is done ? Linda K. Muthen posted on Tuesday, March 05, 2013 - 6:33 am The sample statistics for the H1 model are two means, two variances, one covariance for the two dependent variables, and four covariances between the dependent and independent variables for nine total parameters. Linda K. Muthen posted on Tuesday, March 05, 2013 - 7:46 am If you want to treat all of the variables as dependent variables, the model still has zero degrees of freedom. The H1 model has 4 means and 10 variances and covariances for a total of 14 parameters. The H0 model has 2 means, 2 variances, and one covariance for the exogenous variables and 2 intercepts, 2 residual variances, and 5 regression coefficients for the endogenous variables for a total of 14 parameters and zero degrees of freedom. LR posted on Wednesday, March 06, 2013 - 4:54 am Hi Linda, and thanks again. I know what has caused my confusion. Mplus does not output the covariance between the exogenous variables. If hs with col; is added to the model, Mplus very helpfully outputs 14 estimates so the model can easily be seen as saturated, and all original estimates are unchanged. Linda K. Muthen posted on Wednesday, March 06, 2013 - 6:57 am The reason Mplus does not give the means, variances, and covariances of the observed exogenous variables is that in regression the model is estimated conditional on these variables. When you include them in the MODEL command, you treat them as dependent variables and distributional assumptions are made about them. Back to top
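The parameter counting in this thread can be mechanized. The sketch below only reproduces the arithmetic for the two ways of counting (conditioning on the x's versus treating all four variables as dependent); the 5 regression paths are taken from Linda's description of the model, and nothing here calls Mplus itself.

```python
def sample_stats(n_dv, n_iv, condition_on_iv=True):
    """Number of sample statistics (means, variances, covariances) in H1."""
    if condition_on_iv:
        # DV means + DV variances/covariances + covariances of each DV with each IV
        return n_dv + n_dv * (n_dv + 1) // 2 + n_dv * n_iv
    p = n_dv + n_iv
    return p + p * (p + 1) // 2      # all means plus the full covariance matrix

# The path model in the thread: 2 dependent and 2 independent observed variables.
h1_cond = sample_stats(2, 2, condition_on_iv=True)   # 2 + 3 + 4 = 9
h0_cond = 2 + 2 + 5                                   # intercepts + residual variances + paths
print("conditional:", h1_cond, "vs", h0_cond, "-> df =", h1_cond - h0_cond)

h1_full = sample_stats(2, 2, condition_on_iv=False)   # 4 + 10 = 14
h0_full = h0_cond + (2 + 2 + 1)                       # + means, variances, covariance of the x's
print("all dependent:", h1_full, "vs", h0_full, "-> df =", h1_full - h0_full)
```

Either way the count gives zero degrees of freedom, which is why Mplus reports the model as just-identified.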
{"url":"http://www.statmodel.com/cgi-bin/discus/discus.cgi?pg=prev&topic=11&page=11946","timestamp":"2014-04-18T13:11:25Z","content_type":null,"content_length":"26765","record_id":"<urn:uuid:4b189975-2015-47d9-a2a5-efe7aa8f0328>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00568-ip-10-147-4-33.ec2.internal.warc.gz"}
UNIVERSITY PHYSICS LAB #5: Projectile Motion
Physical measurements Lab #3: Projectile Motion
Purpose: To study projectile motion and determine the muzzle velocity of the PASCO mini-launcher.
Equipment: PASCO mini-launcher, carbon paper, white paper, meter stick, C-clamp, masking tape.
The coordinate system used in this experiment is assumed to be at the edge of the table where the projectile (a steel ball) is given its initial speed v[o]. In general, the projection angle θ and the time t can be eliminated between the equations of motion x = (v[o] cosθ)t and y = -(v[o] sinθ)t + (1/2)g t^2 (with y measured downward from the launch point) to yield

y = -x tanθ + g x^2 / (2 v[o]^2 cos^2θ)

Note that when θ = 0, the equation above simplifies to y = g x^2 / (2 v[o]^2), so that v[o] = x sqrt(g / (2y)).

A - Measuring the muzzle velocity:
1) Clamp the projectile launcher to the tabletop using C-clamps. Set the angle of projection to zero by using the plumb line and protractor attached to the launcher.
2) Place the steel ball inside the launching tube and push it with the clear plastic rod into the first trigger locking position. At the first click, the ball is in the first firing position. Now the projectile is ready to fire. CHECK TO SEE THAT THE RANGE IS CLEAR OF ANY OBSTACLE, ESPECIALLY A HUMAN BEING!
3) Fire the projectile (by pulling the yellow string) to see roughly where it will land. It should be about 1.25 m from the initial position. Tape a piece of paper at the point where the projectile hit. Place a carbon paper on top of it, with the inked face down. Do not tape the carbon paper.
4) Also tape a blank paper on the floor to mark the point (x = 0), which will be right underneath the projectile’s starting point. You can mark it either with the help of a plumb bob (if you have one available) or a dangling meter stick. Measure the distance y as indicated in the figure above.
5) Now you are ready to fire the projectile. Fire the projectile five times (x[i], i=1,2,3,4,5) and record the distance between the plumb line and the point on the white paper (distance x in the figure). Compute the average x.
6) Use the equation v[o] = x sqrt(g / (2y)) to compute the muzzle velocity v[o].
B - Projection at an angle:
7) Incline the launcher to 20.0°, and lock it in place.
8) Fire the projectile as a practice shot to determine where you should lay the white paper.
9) Lay the white paper and the carbon paper as instructed earlier.
10) Repeat the shots five times and record them in the second table. Take an average of the measured range and record it as x[measured].
11) Compute the expected distance x[expt] from y = -x tanθ + g x^2 / (2 v[o]^2 cos^2θ) with θ = 20°. Note that this will give you a quadratic equation in x. Seek help if you do not know how to solve the quadratic equation.
12) Compute the percentage difference from % error = |x[measured] - x[expt]| / x[expt] × 100%.
C - Maximum Range:
To find the maximum range, fire the projectile at angles of 20, 25, 30, 35, 40, 45, 50, 55, 60 and 65 degrees and record the range. Enter your data into Table 3 below.
Projectile Motion NAME_______________
TABLE 1: Horizontal projection
Average = x = _______________
Height of the projection position = y = _______________
Muzzle velocity v[o] = ________________
TABLE 2: Projection with θ = 20°.
average = x[measured] = _________________
x[expt] = ___________________
% Error = _____________________
Table 3: The maximum range occurs at ______ degrees.
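A sketch of the calculations the lab asks for, using g = 9.8 m/s² and made-up measurements (a 0.95 m launch height and a 1.25 m average horizontal range) in place of real data; the formulas are the range equations quoted in the write-up above.

```python
import math

g = 9.8  # m/s^2

def muzzle_velocity(x_avg, y):
    """Horizontal launch: y = g x^2 / (2 v0^2)  ->  v0 = x sqrt(g / (2 y))."""
    return x_avg * math.sqrt(g / (2 * y))

def expected_range(v0, y, theta_deg):
    """Solve  g x^2 / (2 v0^2 cos^2(theta)) - x tan(theta) - y = 0  for x > 0."""
    th = math.radians(theta_deg)
    a = g / (2 * v0**2 * math.cos(th)**2)
    b = -math.tan(th)
    c = -y
    return (-b + math.sqrt(b**2 - 4 * a * c)) / (2 * a)

def percent_error(measured, expected):
    return abs(measured - expected) / expected * 100.0

if __name__ == "__main__":
    y, x_avg = 0.95, 1.25                 # hypothetical drop height and average range
    v0 = muzzle_velocity(x_avg, y)
    print(f"muzzle velocity ~ {v0:.2f} m/s")

    x_expt = expected_range(v0, y, 20.0)
    print(f"expected range at 20 deg ~ {x_expt:.2f} m")
    print(f"percent error vs a (made-up) measured 1.60 m: "
          f"{percent_error(1.60, x_expt):.1f}%")

    # Part C: scan the listed launch angles for the largest predicted range.
    best = max(range(20, 70, 5), key=lambda a: expected_range(v0, y, a))
    print(f"largest predicted range occurs near {best} degrees")
```

Because the ball lands below the launch height, the predicted optimum angle falls noticeably below 45°, which is what the Part C scan should show.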
{"url":"http://www.utm.edu/staff/cerkal/201lab3.html","timestamp":"2014-04-20T05:44:29Z","content_type":null,"content_length":"33070","record_id":"<urn:uuid:07c0410c-5d6a-440c-bad7-221a90ebd044>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00393-ip-10-147-4-33.ec2.internal.warc.gz"}
Encryption #11
The purpose of this article is to give you an idea of how to attack Encryption #11 logically and successfully. This is certainly one of the harder cryptography missions on this site. However, with research into the type of encryption used and some programming experience it can be solved. There are a number of things you should familiarize yourself with before attempting this mission. I will attempt to cover some of them briefly. They are as follows:
one time pad encryption techniques
XOR encryption (a type of one time pad)
the use of a dictionary to aid in deciphering
These ideas are crucial to solving this challenge.
The One Time Pad
When using a one time pad, the plain text is encrypted using a pseudo-random key that is the same length as said plain text. For example, encrypting the word 'hacker' with the random key 'grvtbs' yields the following result.
plain text: HACKER
key: GRVTBS
result: NRXDFJ
As you can see, the process involved is adding the numeric values of the two letters, then taking that value modulo 26 and converting it back to a letter. The most common numbering system starts with A = 0 and ends with Z = 25. Ex: K = 10, T = 19. 10 + 19 = 29. 29 % 26 = 3. Therefore the encrypted letter would be D.
XOR Encryption
XOR encryption is a type of one time pad encryption that utilizes binary numbers, and happens to be the encryption used in this mission. To encrypt, the first step is to convert the letters of the plain text word into all caps, then into their ASCII values. Next, take those numbers and convert them to binary. Ex: H in ASCII is 72, which is 01001000 in binary. Then do the same with the key. To encode, line up the binary representations of the plain text and the key and XOR the corresponding bits (a key bit of 1 flips the plain text bit, a key bit of 0 leaves it unchanged). Example using one letter of plain text and a one letter key:
H = 01001000
G = 01000111
Enc = 00001111
This is where XOR gets its name. XOR stands for 'exclusive or'. Using exclusive or, for an expression to evaluate to true only one of the operands can be true. Ex (using 0 = False and 1 = True):
0 xor 0 = 0
1 xor 0 = 1
0 xor 1 = 1
1 xor 1 = 0
Deciphering text with XOR uses the same process as encryption. You take the coded binary value and the value of the key and XOR them again. This leaves you with the binary representation of the original word.
Using a Dictionary
The above forms of encryption, when used properly, are reputed to be impossible to crack. The operative phrase there is 'used properly.' Two mistakes that can make a one time pad encryption crackable were made in this mission. They are:
using a nonrandom key
encrypting multiple messages with the same key
Keeping in mind the hints given on the challenge page, this encryption is vulnerable to a dictionary attack. This is where programming comes in. We know the following facts:
each packet is a four letter word, as is the key
the same key was used to encrypt each packet
Using this knowledge you need to find a way, using a dictionary, to find a four letter word that, when put through the XOR algorithm with each of the three packets, produces valid English words. How you do this is up to you, but doing it by hand is probably not an option. Good luck!
korgon February 07 2010 - 14:43:16
Good basic coverage on how the one time pad works. Should help people with this challenge.
ArgonQon February 14 2010 - 15:18:42
Clean, straightfoward, easy to follow and no spoilers. Nice
kodeizxon July 20 2010 - 18:38:36
kindaa good help.......:ninja::ninja:......
tuere816on January 29 2012 - 09:45:05
nice 1 article , scourged the net and found a lot of information , but here it is summed up well
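To make the dictionary attack concrete, here is a self-contained sketch. It does not use the mission's real packets; it encrypts three known four-letter words under a made-up key and then recovers them exactly as the article describes. A real attempt would swap in the actual packets and a full word list such as /usr/share/dict/words.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings (the one-time-pad operation)."""
    return bytes(x ^ y for x, y in zip(a, b))

# A toy word list; a real attack would load a full dictionary and keep the
# four-letter words only.
DICTIONARY = ["ABLE", "BEAR", "CODE", "FISH", "GOLD", "HACK", "KEYS",
              "LOCK", "MATH", "PADS", "RUNS", "TIME", "WORD", "ZERO"]

# ---- build a toy instance (the mission supplies its own three packets) ----
SECRET_KEY = b"KEYS"                      # the flawed, non-random 4-letter key
packets = [xor_bytes(w.encode(), SECRET_KEY) for w in ("HACK", "GOLD", "MATH")]

# ---- the attack: try every dictionary word as the key ---------------------
words = set(DICTIONARY)
for candidate in DICTIONARY:
    key = candidate.encode()
    plains = [xor_bytes(p, key).decode(errors="replace") for p in packets]
    if all(p in words for p in plains):   # all three decrypt to English words
        print("key:", candidate, "plaintexts:", plains)
```

The design point is the one the article makes: because the same short, non-random key was reused, a wrong key guess almost never turns all three packets into dictionary words at once, so the correct key stands out.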
{"url":"https://www.hellboundhackers.org/articles/read-article.php?article_id=915","timestamp":"2014-04-18T08:33:04Z","content_type":null,"content_length":"23328","record_id":"<urn:uuid:c3dc7df4-d71a-4e40-b42d-2805f26c4442>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00446-ip-10-147-4-33.ec2.internal.warc.gz"}
Roman Numerals │ Contents of this Page │ │What are Roman Numerals? Playings with Matches │ │Values of Roman Numerals Roman Numerals Today │ │Conversions Roman Numerals on the Internet│ │To the History of Roman Numerals Solutions │ │Chronograms References │ │666 . │ What are Roman Numerals? ... Roman numerals are the letters on the left, which are used for writing natural numbers. On this page the (standard-) rules are described, which are (were?) taught in German schools. Examples: LXIV=64, CCXXVI=226, CM=900, CMXCVIII=998 or MDCCXII=1712 You don't find the words Roman numbers in an encyclopedia or a reference book of mathematics. Otherwise you could think that the Romans had their own numbers and didn't use the natural numbers. Obviously Roman numbers mean the way of writing a number. - On the other hand you must not be too exact in number names. A number like MDCCXII is called a Roman number and is understood as that. I say this because google.com e.g. finds many web sites only with the string "roman numbers". Values of Roman Numerals top Each numeral has a certain value: I=1 V=5 X=10 L=50 C=100 D=500 M=1000. You can keep the values in your mind as follows. >V is the upper half of X >C stands for centum=100, known from centimeter=cm >L is the lower half of C, if you have some imagination >M stands for mille=1000, known from millimeter=mm >D is the right half of (I), an old writing of 1000 You must not think that the numerals developed this way in former times. Conversions top From the Roman to the Decimal System First you look at numbers which are written in Roman numerals in decreasing values. There is always a numeral with a smaller value on the right side of a numeral. This is a number like MCCXII. In this case you must only add the values: MCCXII = M+C+C+X+I+I = 1212. Sometimes there are numbers with a numeral of a smaller value on the left side. This happens twice in CMXLVIII. Here you calculate the differences CM=M-C=1000-100=900 and XL=L-X=50-10=40 first and then you add: CMXLVIII = CM+XL+V+I+I+I = 900+40+5+1+1+1=948. From the Decimal to the Roman System If you have the opposite problem to convert a number written in Arabic numerals to one in Roman numbers, choose numbers without 4 or 9 in the beginning. An example is 1687. You break the number into thousands, hundreds, tens and units. In a second step you also consider the numbers 5, 50 and 500 (if possible) in the reduction. In the end you use Roman numerals. If the number has 4 or 9, you must take differences. Take the example 1942. Use the difference M-C=CM for 900 and L-X=XL for 40. Then you add: 1942 = 1000+900+40+2 = M+CM+XL+II = MCMXLII. Maybe this way of using differences is confusing. Obviously one wanted to avoid four equal letters side by side. You write instead of IIII=IV=4 VIIII=IX=9 XXXX=XL=40 LXXXX=XC=90 CCCC=CD=400 DCCCC=CM=900. Only these six differences are allowed. You can understand other figures like VM=995 or IC=99 easily and they are elegant, but if you allow them, there are different writings of the same number. So 995=VM=CMXCV and 99=IC=XCIX would be possible. And where is the sign for the Roman zero? There is none, because there is no need for zero. To the History of Roman Numerals top If you want to describe the history of these numerals, you must go far afield. I will restrict myself to three remarks. The "old Romans" used four equal numerals like IIII, XXXX, CCCC and MMMM and didn't allow differences. The letters D and M as numerals came up later. 
The Roman numerals spread in many countries up to the end of the Middle Ages. So you can understand that there were and are many different ways of writing the numbers over the centuries. In this respect the rules I describe on this page, which are used in schools, are constructed in some way, but they have become standard today. Maybe the main reason is that the writing is unambiguous.
Chronograms
When letters are used as numerals, there are ingenious connections between words/sentences and numerals. There is the number M+M+I+C+L+I=2152 in MATHEMATISCHE BASTELEIEN. (This isn't ingenious, but an example.) The equation EINTAUSENDUNDZWEI=1002 is well known. If the sum is a date, you call such words or sentences chronograms. The priest Johann Loofher lived in my hometown Bad Salzuflen around 1630. He designed Latin house inscriptions as chronograms for several houses. The year of building was hidden in the inscriptions. You can read more about this on my page chronograms.
666
The sum of six of the seven numerals is D+C+L+X+V+I=666. The letter M is missing. This is a nice playing with numbers and results from the pairs (1,5), (10,50) and (100,500). It is more interesting that 666 is the largest triangular number made of identical digits. There is 666=1+2+3+...+35+36=36*37/2. The number 666 fell into disrepute because it was called the "number of the animal" in the Bible: "Here is wisdom! Let him who has understanding reckon the number of the beast, for it is the number of a man, and his number is six hundred and sixty-six." (Revelation of John 13:18, quoted after Luther's translation)
In interpretations of the Bible the number of the animal is a bad number and is also called Number of the Beast, Satan's Number or Antichrist's Number. Consequently people found 666 in the Roman emperors' names Nero and Diocletian, because they persecuted Christians. In the 16th century, the times of the Religious Wars, 666 was connected with Luther's name and - in reverse - with that of the Pope. (Book 2, p. 347 ff.)
The Pope's example uses the principle of the chronogram. The Pope was called VICARIUS FILII DEI (Representative of God's Son). If you add the values of the Roman numerals, you get 666 (VICARIVS FILII DEI). Look on the internet with the string 666 and you are flooded with information, if you like.
Playings with Matches
There is a tradition of laying out equations with matches which are obviously wrong. If you move only one match, the equation becomes correct. Here are some classics among others. The solutions are at the bottom of this page. It is easy to discover more examples.
Roman Numerals Today
Today you don't use Roman numerals often. You use them for numbering chapters in books or pages of forewords or lists of contents, depending on taste or fashion. Sometimes dates are given in Roman numerals, for instance on houses or in connection with the copyright of books. Even Word for Windows is able to number lines automatically [ :-( ] in Roman numerals. Most frequently you see Roman numerals on clocks. Some public clocks of my hometown Bad Salzuflen follow as examples. Watchmaker shops use large clocks for gaining attention. They prefer decorative Roman faces to show their high quality.
Clock in the Begastraße of Schötmar, part of Bad Salzuflen
A second clock hangs in the Brüderstraße.
It is conspicuous that most of the Roman faces have letters directed to the centre and that IV is written as IIII. So there is no confusion with VI (=I/\). This is one explanation. Please look at Gordon T.
Uber's page, URL below for further informations. One of three clocks of the Stadtkirche (town church) follows. Clock at the former high school and Volkshochschule (adult education centre) now. Clock in the gable of a farm house in Rhiene. Clock in the Bismarckstraße. The clock was at the tower of the village church of Lieme. It is at the top of an advertisement tower for the casino at Bad Oeynhausen. far away You can visit a strange clock in the nice village Siána on the island of Rhodos in Greece. ... ... There are two clock towers beside the village church. One has a clock and it always tells the time 9.33 o'clock. It is only painted ;-). Today the village could afford a real clock because of the many tourists who visit the village. But why should it loose such an attraction? Roman Numerals on the Internet top Horst Hicke (Unterrrichtsmaterial-schule.de) Römische Zahlen Römische Ziffern Michael Bradke Numeralia - Zahlwörter Römische Zahlen, Mathematik-digital/Römische Zahlen Sechshundertsechsundsechzig Wolfgang Back Römische Zahlen Christopher Handy Roman Numeral Year Dates Gordon T. Uber FAQ: Roman IIII vs. IV on Clock Dials Jim Loy Roman Numerals Defined On Roman Numerals Paul Lewis Roman numerals, Number of the Beast Solutions top References top (1) Jan Gullberg: Mathematics - From the Birth of Numbers, New York / London 1997 [ISBN 0-393-04002-X] (2) Georges Ifrah: Universalgeschichte der Zahlen, Köln 1998 [ISBN 3-88059-956-4] (3) Johannes Lehmann: So rechneten Griechen und Römer, Reinhardt Becker Verlag [ISBN 3-930640-11-2] or (3) Johannes Lehmann: So rechneten Griechen und Römer, Urania , Leipzig, Jena, Berlin , 1994 [ISBN 3-332-00522-7] Thank you Gail from Oregon Coast for supporting me in my translation. Feedback: Email address on my main page This page is also available in German. URL of my Homepage: © 2003 Jürgen Köller
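The conversion rules described earlier on this page translate directly into code; a small sketch that also checks several of the page's examples:

```python
VALUES = [
    (1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
    (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
    (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I"),
]

def to_roman(n: int) -> str:
    """Decimal -> Roman, using only the six allowed subtractive pairs."""
    out = []
    for value, symbol in VALUES:
        while n >= value:
            out.append(symbol)
            n -= value
    return "".join(out)

def from_roman(s: str) -> int:
    """Roman -> decimal: subtract a numeral when a larger one follows it."""
    digits = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}
    total = 0
    for i, ch in enumerate(s):
        v = digits[ch]
        total += -v if i + 1 < len(s) and digits[s[i + 1]] > v else v
    return total

# Examples taken from the text above:
for n, r in [(64, "LXIV"), (226, "CCXXVI"), (948, "CMXLVIII"),
             (998, "CMXCVIII"), (1712, "MDCCXII"), (1942, "MCMXLII")]:
    assert to_roman(n) == r and from_roman(r) == n, (n, r)
print("all examples check out")
```

The from_roman routine deliberately accepts non-standard but readable forms such as IC as well, which mirrors the remark above that such writings are understandable even though only six differences are allowed.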
{"url":"http://www.mathematische-basteleien.de/romannumerals.htm","timestamp":"2014-04-18T18:10:13Z","content_type":null,"content_length":"21549","record_id":"<urn:uuid:e507e52f-6f99-4bd9-b8b4-78ef3b674ee7>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00631-ip-10-147-4-33.ec2.internal.warc.gz"}
The Easter Egg Published January 20, 2010 curriculum , pedagogy 3 Comments Tags: easter egg, row game, worksheets Gotta thank Kate for introducing me to the Row Game. I also like the idea of using box.net as a way for teachers to upload and share them. But, me being me, I couldn’t leave well enough alone. I like the fact that these activities are self checking and that if students find that they have different answers, then there is a mistake. The problem with that is if I create two sets of 10 problems, I would like my students to work as many of them as possible. So I introduced the “Easter Egg.” I have used this concept in the past when doing test review. Basically, I hide wrong answers so students need are a little more alert when looking at the solution to a problem. How does this work for row games? Well, in the row game, if the partners have different answers, then someone messed up. This opens the door for discussion. But what if they never disagree? Then there was no real need to discuss anything. With the Easter Egg, I will make a couple of the problems diverge, that way agreement doesn’t necessarily equal correctness. Now they have to talk even if they get the same answer. Today I rolled out this row game on slope with my 7th graders. Once they got used to the concept, the did pretty well. I look forward to doing more of these. I don’t know, maybe this defeats the purpose of the row game. Maybe not. What say you? 3 Responses to “The Easter Egg” 1. January 20, 2010 at 3:45 pm Looks like a great activity, David. Now that you and Kate have posted so many times (relatively speaking) about these row game activities, I think I’ll give it a try. I like the idea in theory, but wonder if it’s really much different than just posting the answers? What I mean by this is…if Students A and B are working together and B thinks A is “smarter” then won’t B just use A’s answer and work backwards (or do whatever students do when provided the answer)? I’m not trying to discount this idea at all, David, because it’s beyond what I’ve attempted to do myself up to this point. Just trying to pick your brain. How do you see these types of activities as being fundamentally different from a regular worksheet based on my comments above? 2. January 20, 2010 at 4:08 pm I suppose that can happen. That’s why I embedded the “Easter Egg.” Student B can’t use A’s answer because there is no guarantee that their answer should match. Now, if student B still assumes that A has the same answer, then we have an entirely different problem. I think making answers available is a good thing because it does allow for working backwards. However, the row game fosters communication between students and is self checking at the same time. I know that my allowing for times when their answers won’t match might fly in the face of the row game purists, but I like the possiblities it provides. 3. August 19, 2013 at 10:36 am It has been three and a half years since this blog was posted but I recently discovered the work you and Kate put in to making various Row Games. Initially, the idea intrigued me and I am curious to see how the self checking and discussion work in the class setting. Would you be able to provide me with any information to your revised and expanded thoughts of Row Games after the fact? I too, considered implementing “Easter Eggs” into the row games I develop but I’d like to know if the time investment of creating such activities is worth it. 
Do you still use Row Games in class and would you recommend them today as a meaningful classroom activity?
{"url":"http://coxmathblog.wordpress.com/2010/01/20/the-easter-egg/","timestamp":"2014-04-21T05:13:05Z","content_type":null,"content_length":"59110","record_id":"<urn:uuid:b19b45a6-6135-4cb5-b90f-665dc373808e>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00584-ip-10-147-4-33.ec2.internal.warc.gz"}
Near-wall treatment for k-omega models
From CFD-Wiki
As described in Two equation turbulence models, low and high Reynolds number treatments are possible.
Standard wall functions
Main page: Two equation near-wall treatments
For $k$ and $\omega$ the boundary conditions imposed at the solid boundary are: $\begin{matrix} \frac{\partial k}{\partial n} = 0 & & \frac{\partial \omega}{\partial n} = 0 \end{matrix}$ where $n$ is the normal to the boundary. Moreover the centroid values in cells adjacent to the solid wall are specified as $\begin{matrix} k_p = \frac{u^2_\tau}{\sqrt{C_\mu}}, && \omega_p = \frac{u_\tau}{\sqrt{C_\mu}\kappa y_p} = \frac{\sqrt{k_p}}{{C_\mu^{1/4}}\kappa y_p}. \end{matrix}$
In the alternative approach the $k$ production term is modified.
Automatic wall treatments
The purpose of automatic wall treatments is to make results insensitive with respect to wall mesh refinement. Many blending approaches have been proposed. The one by Menter takes advantage of the fact that the solution of the $\omega$ equation is known for both the viscous and the log layer: $\begin{matrix} \omega_\text{vis} = \frac{6\nu}{\beta y^2} & \omega_\text{log} = \frac{u_\tau}{\sqrt{C_\mu} \kappa y} \end{matrix}$ where $y$ is the cell centroid distance from the wall. Using this, a blending can take the following form: $\omega_p = \sqrt{\omega_{\text{vis}}^2 + \omega_{\text{log}}^2}.$
Note that for low $y$ values the $1/y^2$ term will dominate and therefore the viscous value of $\omega$ will be reproduced. Conversely, for larger values of $y$, the $1/y$ term is dominant and the logarithmic value will be recovered. Menter subsequently also proposes a blending for the friction velocity. The friction velocities for the viscous and logarithmic regions are: $\begin{matrix} u^\text{vis}_\tau = \frac{U}{y^{+}} & & u_\tau^\text{log} = \frac{\kappa U}{\ln (E y^{+})} \end{matrix}$
And the blending suggested: $u_\tau = \sqrt[4]{(u_\tau^{\text{vis}})^4 + (u_\tau^{\text{log}})^4}.$
Both k-omega models (std and sst) are available as low-Reynolds-number models as well as high-Reynolds-number models. The wall boundary conditions for the k equation in the k-omega models are treated in the same way as the k equation is treated when enhanced wall treatments are used with the k-epsilon models. This means that all boundary conditions for wall-function meshes will correspond to the wall function approach, while for fine meshes, the appropriate low-Reynolds-number boundary conditions will be applied. In Fluent, that means:
If the Transitional Flows option is enabled in the Viscous Model panel, low-Reynolds-number variants will be used, and, in that case, mesh guidelines should be the same as for the enhanced wall treatment (y+ at the wall-adjacent cell should be on the order of y+ = 1. However, a higher y+ is acceptable as long as it is well inside the viscous sublayer (y+ < 4 to 5).)
If the Transitional Flows option is not active, then the mesh guidelines should be the same as for the wall functions. (For [...] wall functions, each wall-adjacent cell's centroid should be located within the log-law layer, 30 < y+ < 300. A y+ value close to the lower bound y+ = 30 is most desirable.)
References
• Menter, F., Esch, T. (2001), "Elements of industrial heat transfer predictions", COBEM 2001, 16th Brazilian Congress of Mechanical Engineering.
• ANSYS (2006), "FLUENT Documentation".
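To see what the blending does numerically, here is a small sketch of the ω blend and the friction-velocity blend from the formulas above. The helper functions are mine, and the constants (κ = 0.41, Cμ = 0.09, β = 0.075, E = 9.8) and first-cell values are illustrative assumptions, not tied to any particular solver or case.

```python
import math

# Typical model constants; beta is the near-wall beta_1 of the k-omega model.
KAPPA, C_MU, BETA, E = 0.41, 0.09, 0.075, 9.8

def omega_wall_blend(y, nu, u_tau):
    """Blend the viscous-sublayer and log-layer omega at the first cell centroid."""
    omega_vis = 6.0 * nu / (BETA * y**2)
    omega_log = u_tau / (math.sqrt(C_MU) * KAPPA * y)
    return math.hypot(omega_vis, omega_log), omega_vis, omega_log

def u_tau_blend(U, y, nu, u_tau_guess):
    """Fourth-power blend of the viscous and log-law friction velocities."""
    y_plus = y * u_tau_guess / nu
    u_vis = U / y_plus                         # from u+ = y+
    u_log = KAPPA * U / math.log(E * y_plus)   # from u+ = ln(E y+)/kappa
    return (u_vis**4 + u_log**4) ** 0.25

# Illustrative first-cell values (air-like kinematic viscosity).
nu, U, u_tau = 1.5e-5, 1.0, 0.05
for y in (1e-4, 1e-3, 1e-2):                   # spans viscous to log-layer y+
    w, wv, wl = omega_wall_blend(y, nu, u_tau)
    print(f"y={y:.0e}  y+={y*u_tau/nu:5.1f}  omega_vis={wv:9.3g}  "
          f"omega_log={wl:9.3g}  blended={w:9.3g}")
print("blended u_tau at y = 1e-2 m:", u_tau_blend(U, 1e-2, nu, u_tau))
```

The printout shows the intended behaviour: at the smallest y the 1/y² viscous value dominates the blend, while at the largest y the blend follows the 1/y logarithmic value, so the wall treatment degrades gracefully as the mesh is refined or coarsened.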
{"url":"http://www.cfd-online.com/Wiki/Near-wall_treatment_for_k-omega_models","timestamp":"2014-04-18T01:32:06Z","content_type":null,"content_length":"49819","record_id":"<urn:uuid:db65eb21-c226-47a6-afcc-97c2a08eea50>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00385-ip-10-147-4-33.ec2.internal.warc.gz"}
Starting Log and Ln Chapter 4 (Need Help)

April 18th 2006, 04:58 PM
Starting Log and Ln Chapter 4 (Need Help)
In class we are briefly covering chapter 4 through 4.5 and then jumping to chapter 9, so I have to get chapter 4 down quickly. In my homework are two questions I'm getting frustrated on. The first should be simple since it's basically substituting for variables. The second deals with the relationship between ln and powers of e. Any help would be much appreciated; I have typed out the questions in full below, thanks!

Question on Chapter 4
Use the formula below: M = Lrk / [12(k - 1)], where k = (1 + r/12)^(12t) and t is the number of years that the loan is in effect. Some lending institutions calculate the monthly payment M on a loan of L dollars at an interest rate r (expressed as a decimal) by using this formula.
a) Find the monthly payment on a 30-year $90,000 mortgage if the interest rate is 7%.
b) Find the total interest paid on the loan in (a).
c) Find the largest 25-year home mortgage that can be obtained at an interest rate of 8% if the monthly payment is to be $800.

Question on LOG and LN
The Jenss model is generally regarded as the most accurate formula for predicting the height of preschool children. If y is the height in cm and x is the age in years, then y = 79.041 + 6.39x - e^(3.261 - 0.993x) for 1/4 ≤ x ≤ 6 years. What is the height of a typical two year old?

April 18th 2006, 08:51 PM
Originally Posted by OverclockerR520
In class we are briefly covering chapter 4 through 4.5 and then jumping to chapter 9, so I have to get chapter 4 down quickly. In my homework are two questions I'm getting frustrated on. The first should be simple since it's basically substituting for variables. The second deals with the relationship between ln and powers of e. Any help would be much appreciated; I have typed out the questions in full below, thanks!
Question on Chapter 4
Some lending institutions calculate the monthly payment M on a loan of L dollars at an interest rate r (expressed as a decimal) by using the formula where and t is the number of years the loan is in effect
a) Find the monthly payment on a 30-year $90,000 mortgage if the interest rate is 7%.
b) Find the total interest paid on the loan in (a).
c) Find the largest 25-year home mortgage that can be obtained at an interest rate of 8% if the monthly payment is to be $800.
Question on LOG and LN
The Jenss model is generally regarded as the most accurate formula for predicting the height of preschool children. If y is the height in cm and x is the age in years, then for years. What is the height of a typical two year old?
Vital equations have disappeared from your post.

April 19th 2006, 02:24 AM
I typed them back in as best I could.

April 19th 2006, 04:30 AM
Originally Posted by OverclockerR520
Question on Chapter 4
Use the formula below: M = Lrk / [12(k - 1)], where k = (1 + r/12)^(12t) and t is the number of years that the loan is in effect. Some lending institutions calculate the monthly payment M on a loan of L dollars at an interest rate r (expressed as a decimal) by using this formula.
a) Find the monthly payment on a 30-year $90,000 mortgage if the interest rate is 7%.
b) Find the total interest paid on the loan in (a).
c) Find the largest 25-year home mortgage that can be obtained at an interest rate of 8% if the monthly payment is to be $800.
First let's write your repayment formula clearly:
$M = L\,\frac{r}{12}\,\frac{(1+r/12)^{12t}}{(1+r/12)^{12t}-1}$
Now 7% is a rate of 0.07, so plugging the given values for part a) into the formula gives:
$M = 90000\cdot\frac{0.07}{12}\cdot\frac{(1+0.07/12)^{360}}{(1+0.07/12)^{360}-1} \approx 598.77$
so the monthly payment is about $598.77.

b) Total interest $TI$ is the total repaid minus the principal:
$TI = 360 \times 598.77 - 90000 \approx 125557$
that is, about $125,557.

c) Putting $r = 0.08$ and $t = 25$ into the formula gives, for $L = 90000$, $M \approx 694.64$. But the monthly repayment over a fixed period at a fixed rate is proportional to the loan amount, so if the repayments are to be $800 then the loan is:
$L = \frac{800}{694.64}\times 90000 \approx 103650$
(you will need to check the arithmetic - no guarantee given)

April 19th 2006, 06:08 AM
Thank you for your help! I'm checking it in the calc right now :D

April 19th 2006, 10:26 AM
Originally Posted by OverclockerR520
Question on LOG and LN
The Jenss model is generally regarded as the most accurate formula for predicting the height of preschool children. If y is the height in cm and x is the age in years, then y = 79.041 + 6.39x - e^(3.261 - 0.993x) for 1/4 ≤ x ≤ 6 years. What is the height of a typical two year old?

Let's write the equation clearly:
$y = 79.041 + 6.39x - e^{3.261-0.993x}$ for $1/4 \le x \le 6$
For a two year old $x = 2$, so the typical height is:
$y = 79.041 + 6.39\times 2 - e^{3.261-0.993\times 2} = 91.821 - e^{1.275}$
Now your calculator should have an "exp" function. This allows you to find $e^{1.275} = \exp(1.275) \approx 3.579$, so:
$y = 91.821 - 3.579 \approx 88.242 \mbox{ cm}$
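A quick numerical check of the loan answers and the Jenss height above, as a sketch (the variable and function names are my own, not from the thread; figures in the comments are rounded):

```python
import math

def monthly_payment(L, r, years):
    """M = L * (r/12) * k / (k - 1), with k = (1 + r/12)**(12*t)."""
    k = (1 + r / 12) ** (12 * years)
    return L * (r / 12) * k / (k - 1)

# (a) 30-year $90,000 mortgage at 7%
M = monthly_payment(90_000, 0.07, 30)
print(round(M, 2))                        # ~598.77

# (b) total interest = total repaid minus principal
print(round(360 * M - 90_000, 2))         # ~125,557

# (c) largest 25-year loan at 8% with an $800 monthly payment
M_per_90k = monthly_payment(90_000, 0.08, 25)   # ~694.64
print(round(800 / M_per_90k * 90_000))    # ~103,650

# Jenss model: typical height of a two-year-old, in cm
print(round(79.041 + 6.39 * 2 - math.exp(3.261 - 0.993 * 2), 3))   # ~88.242
```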
{"url":"http://mathhelpforum.com/pre-calculus/2607-starting-log-ln-chapter-4-need-help-print.html","timestamp":"2014-04-17T10:08:21Z","content_type":null,"content_length":"14127","record_id":"<urn:uuid:2eb0a790-1d30-4470-b7af-12f4e84c13ca>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00451-ip-10-147-4-33.ec2.internal.warc.gz"}
Easy word problem has me stumped.

October 28th 2008, 07:16 PM
Easy word problem has me stumped.
For the person responding, can you please explain the logic of how you set up the equation? Thanks to anyone who responds in advance.
Q) The sum of two numbers is 15. Three times one of the numbers is 11 less than five times the other. Find the numbers.
*From the back of the book, I already know that the answer is 6, 7.
What I put together was 3x = 5x - 11, but considering that there are two numbers to answer for, I realize I am wrong.

October 29th 2008, 08:08 AM
No, the answer is not 6, 7. How could 6 + 7 be 15?
Well, you got one equation right (3x = 5y - 11). Since you have two variables you also need two equations. The other one is in the first sentence of the problem (the sum of those two numbers is 15), so it is x + y = 15. Then you can write out x, which is x = 15 - y, and substitute it into the other equation. The rest should be easy enough.

October 29th 2008, 10:38 AM
Thank you. Yes, I mis-wrote it; the answer should be 7 and 8.
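As an editorial sanity check (not part of the original thread), a brute-force search over small whole numbers confirms the corrected answer:

```python
# Find number pairs with x + y = 15 and 3x = 5y - 11
solutions = [(x, y) for x in range(16) for y in range(16)
             if x + y == 15 and 3 * x == 5 * y - 11]
print(solutions)   # [(8, 7)], i.e. the two numbers are 7 and 8
```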
{"url":"http://mathhelpforum.com/algebra/56332-easy-word-problem-has-me-stumped-print.html","timestamp":"2014-04-17T20:03:21Z","content_type":null,"content_length":"4549","record_id":"<urn:uuid:a14d07a5-4744-48d5-a707-d28afc7d5c90>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00207-ip-10-147-4-33.ec2.internal.warc.gz"}
Solve by substitution method

Hello, I need help on this problem. Solve by substitution:
x + y = 6
y = x + 2

As its name suggests, when we solve by substitution, we substitute the expression for one variable in one equation for that variable in the other equation. From the second equation, we see that y = x + 2, so we simply substitute x + 2 for y in the other equation. We then have an equation with only x, which is easy to solve.
$x + y = 6 ...........(1)$
$y = x + 2 ...........(2)$
From (2) we see $y = x + 2$. Substituting $x + 2$ for $y$ in (1), we get:
$x + (x + 2) = 6$
$\Rightarrow 2x + 2 = 6$
$\Rightarrow 2x = 4$
$\Rightarrow x = 2$
Now, can you tell me what $y$ is?

No, elimination is a different concept. Here we try to ELIMINATE one variable from one of our equations by adding or subtracting the two equations. Sometimes we may have to multiply one or both equations by a constant so we can eliminate a variable. Here we see that we have a y in one equation and a -y in the other. If we add the two equations, we get rid of the y, since y + (-y) = 0y.
$x + y = 15 ............(1)$
$6x - y = 41 ..........(2)$
$\Rightarrow 7x = 56 ............(1)+(2)$
$\Rightarrow x = 8$
Can you tell me what $y$ is?

Correct!
Last edited by ThePerfectHacker; June 2nd 2007 at 05:24 PM.
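A short sketch of the two worked examples above in code, mirroring the substitution and elimination steps (purely illustrative; the thread itself leaves the final y values to the student):

```python
# Substitution: x + y = 6 with y = x + 2  ->  x + (x + 2) = 6
x = (6 - 2) / 2
y = x + 2
print(x, y)    # 2.0 4.0

# Elimination: x + y = 15 and 6x - y = 41  ->  adding the equations gives 7x = 56
x = (15 + 41) / 7
y = 15 - x
print(x, y)    # 8.0 7.0
```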
{"url":"http://mathhelpforum.com/algebra/15558-solve-substitution-method.html","timestamp":"2014-04-18T21:21:17Z","content_type":null,"content_length":"45844","record_id":"<urn:uuid:ad0d4026-1732-4fbf-aa41-e69a7c231d92>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00643-ip-10-147-4-33.ec2.internal.warc.gz"}
How many feet to the moon? You asked: How many feet to the moon? Say hello to Evi Evi is our best selling mobile app that can answer questions about local knowledge, weather, books, music, films, people and places, recipe ideas, shopping and much more. Over the next few months we will be adding all of Evi's power to this site. Until then, to experience all of the power of Evi you can download Evi for free on iOS, Android and Kindle Fire.
{"url":"http://www.evi.com/q/how_many_feet_to_the_moon","timestamp":"2014-04-17T16:01:16Z","content_type":null,"content_length":"55958","record_id":"<urn:uuid:e542a9ea-711b-44f9-a785-32583fbb3edc>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00585-ip-10-147-4-33.ec2.internal.warc.gz"}
Compaction and Separation Algorithms for Non-Convex Polygons and their Results 1 - 10 of 20 - Proc. ACM SIGGRAPH ’01 , 2001 "... a b c d e f Figure 1: By overwriting voronoi regions, tile centroids are displaced away from an edge. Recentering tiles at their new centroids eventually moves them clear of the edge. This paper presents a method for simulating decorative tile mosaics. Such mosaics are challenging because the square ..." Cited by 79 (0 self) Add to MetaCart a b c d e f Figure 1: By overwriting voronoi regions, tile centroids are displaced away from an edge. Recentering tiles at their new centroids eventually moves them clear of the edge. This paper presents a method for simulating decorative tile mosaics. Such mosaics are challenging because the square tiles that comprise them must be packed tightly and yet must follow orientations chosen by the artist. Based on an existing image and user-selected edge features, the method can both reproduce the image’s colours and emphasize the selected edges by placing tiles that follow the edges. The method uses centroidal voronoi diagrams which normally arrange points in regular hexagonal grids. By measuring distances with an manhattan metric whose main axis is adjusted locally to follow the chosen direction field, the centroidal diagram can be adapted to place tiles in curving square grids instead. Computing the centroidal voronoi diagram is made possible by leveraging the z-buffer algorithm available in many graphics cards. 1 , 2002 "... A new paradigm for rigid body simulation is presented and analyzed. Current techniques for rigid body simulation run slowly on scenes with many bodies in close proximity. Each time two bodies collide or make or break a static contact, the simulator must interrupt the numerical integration of velocit ..." Cited by 36 (1 self) Add to MetaCart A new paradigm for rigid body simulation is presented and analyzed. Current techniques for rigid body simulation run slowly on scenes with many bodies in close proximity. Each time two bodies collide or make or break a static contact, the simulator must interrupt the numerical integration of velocities and accelerations. Even for simple scenes, the number of discontinuities per frame time can rise to the millions. An efficient optimization-based animation (OBA) algorithm is presented which can simulate scenes with many convex threedimensional bodies settling into stacks and other “crowded” arrangements. This algorithm simulates Newtonian (second order) physics and Coulomb friction, and it uses quadratic programming (QP) to calculate new positions, momenta, and accelerations strictly at frame times. The extremely small integration steps inherent to traditional simulation techniques are avoided. Contact points are synchronized at the end of each frame. Resolving contacts with friction is known to be a difficult problem. Analytic force calculation can have ambiguous or non-existing solutions. Purely impulsive techniques avoid these ambiguous cases, but still require an excessive and computationally expensive number of updates in the case of - Computational Geometry , 1998 "... An algorithm and a robust floating point implementation is given for rotational polygon containment:given polygons P 1 ,P 2 ,P 3 ,...,P k and a container polygon C, find rotations and translations for the k polygons that place them into the container without overlapping. A version of the algorithm a ..." 
Cited by 34 (6 self) Add to MetaCart An algorithm and a robust floating point implementation is given for rotational polygon containment:given polygons P 1 ,P 2 ,P 3 ,...,P k and a container polygon C, find rotations and translations for the k polygons that place them into the container without overlapping. A version of the algorithm and implementation also solves rotational minimum enclosure: givenaclass C of container polygons, find a container C in C of minimum area for which containment has a solution. The minimum enclosure is approximate: it bounds the minimum area between (1-epsilon)A and A. Experiments indicate that finding the minimum enclosure is practical for k = 2, 3 but not larger unless optimality is sacrificed or angles ranges are limited (although these solutions can still be useful). Important applications for these algorithm to industrial problems are discussed. The paper also gives practical algorithms and numerical techniques for robustly calculating polygon set intersection, Minkowski sum, and range in... - Algorithmica, special issue on Computational , 1996 "... In Part I we present an algorithm for finding a solution to the two-dimensional translational approximate multiple containment problem: find translations for k polygons which place them inside a polygonal container so that no point of any polygon is more than 2ffl inside of the boundary of any other ..." Cited by 17 (9 self) Add to MetaCart In Part I we present an algorithm for finding a solution to the two-dimensional translational approximate multiple containment problem: find translations for k polygons which place them inside a polygonal container so that no point of any polygon is more than 2ffl inside of the boundary of any other polygon. The polygons and container may be nonconvex. The value of ffl is an input to the algorithm. In industrial applications, the containment solution acts as a guide to a machine cutting out polygonal shapes from a sheet of material. If one chooses ffl to be a fraction of the cutter's accuracy, then the solution to the approximate containment problem is sufficient for industrial purposes. Given a containment problem, we characterize its solution and create a collection of containment subproblems from this characterization. We solve each subproblem by first restricting certain two-dimensional configuration spaces until a steady state is reached, and then testing for a solution inside the... , 1997 "... Designers often need to decompose a product into functioning parts during the product design stage. This decomposition is critical for product development, as it determines the geometric configuration of parts, and has direct impact on product cost. Most decomposition decisions are based primarily u ..." Cited by 14 (0 self) Add to MetaCart Designers often need to decompose a product into functioning parts during the product design stage. This decomposition is critical for product development, as it determines the geometric configuration of parts, and has direct impact on product cost. Most decomposition decisions are based primarily upon end-user requirements instead of product manufacturability. The resulting parts can be expensive to manufacture or are sometimes impossible to make. This thesis presents a manufacturability-driven approach which can help designers decompose bent sheet metal products into manufacturable parts. 
The decomposition approach presented in this thesis takes the geometric description of an initial product design, analyzes its manufacturability, and decomposes the product into manufacturable parts. The decomposition continues until all decomposed parts are manufacturable. Near-optimal solutions are generated based on some primary concerns of design for manufacture (DFM) and design for assembly (DFA). Designers can then examine the decomposition results and decide whether they meet end-user requirements. Cutting, bending, and assembly processes are considered as the major manufacturing , 1994 "... We present exact algorithms for finding a solution to the two-dimensional translational containment problem: find translations for k polygons which place them inside a polygonal container without overlapping. The term kCN denotes the version in which the polygons are convex and the container is non ..." Cited by 13 (7 self) Add to MetaCart We present exact algorithms for finding a solution to the two-dimensional translational containment problem: find translations for k polygons which place them inside a polygonal container without overlapping. The term kCN denotes the version in which the polygons are convex and the container is nonconvex, and the term kNN denotes the version in which the polygons and the container are nonconvex. The notation (r; k)CN, (r; k)NN, and so forth refers to the problem of finding all subsets of size k out of r objects that can be placed in a container. The polygons have up to m vertices, and the container has n vertices, where n is usually much larger than m. We present exact algorithms for the following: 2CN in O(mn log n) time, (r; 2)CN in O(r 2 m log n) time (for r ?? n), 3CN in O (m 3 n log n) time, kCN in O(m 2k n k log n) or O((mn) k+1 ) time, and kNN in O((mn) 2k+1 LP(2k; 2k(2k + 1)mn + k(k \Gamma 1)m 2 )) time, where LP(a; b) is the time to solve a linear program with... , 1995 "... Layout and packing are NP-hard geometric optimization problems of practical importance for which finding a globally optimal solution is intractable if P!=NP. Such problems appear in industries such as aerospace, ship building, apparel and shoe manufacturing, furniture production, and steel construct ..." Cited by 13 (6 self) Add to MetaCart Layout and packing are NP-hard geometric optimization problems of practical importance for which finding a globally optimal solution is intractable if P!=NP. Such problems appear in industries such as aerospace, ship building, apparel and shoe manufacturing, furniture production, and steel construction. At their core, layout and packing problems have the common geometric feasibility problem of containment: find a way of placing a set of items into a container. In this thesis, we focus on containment and its applications to layout and packing problems. We demonstrate that, although containment is NP-hard, it is fruitful to: 1) develop algorithms for containment, as opposed to heuristics, 2) design containment algorithms so that they say "no" almost as fast as they say "yes", 3) use geometric techniques, not just mathematical programming techniques, and 4) maximize the number of items for which the algorithms are practical. Our approach to containment is based on a new , 2000 "... Exact implementations of algorithms of computational geometry are subject to exponential growth in running time and space. 
In particular, coordinate bit-complexity can grow exponentially when algorithms are cascaded: the output of one algorithm becomes the input to the next. Cascading is a signic ..." Cited by 12 (4 self) Add to MetaCart Exact implementations of algorithms of computational geometry are subject to exponential growth in running time and space. In particular, coordinate bit-complexity can grow exponentially when algorithms are cascaded: the output of one algorithm becomes the input to the next. Cascading is a signicant problem in practice. We propose a geometric rounding technique: shortest path rounding. Shortest path rounding trades accuracy for space and time and eliminates the exponential cost introduced by cascading. It can be applied to all algorithms which operate on planar polygonal regions, for example, set operations, transformations, convex hull, triangulation, and Minkowski sum. Unlike other geometric rounding techniques, shortest path rounding can round vertices to arbitrary lattices, even in polar coordinates, as long as the rounding cells are connected. (Other rounding techniques can only round to the integer grid.) On the integer grid, shortest path rounding introduces less com... , 2000 "... A translation lattice packing of k polygons P 1 ; P 2 ; P 3 ; : : : ; P k is a (non-overlapping) packing of the k polygons which can be replicated without overlap at each point of a lattice i 0 v 0 + i 1 v 1 , where v 0 and v 1 are vectors generating the lattice and i 0 and i 1 range over all inte ..." Cited by 7 (3 self) Add to MetaCart A translation lattice packing of k polygons P 1 ; P 2 ; P 3 ; : : : ; P k is a (non-overlapping) packing of the k polygons which can be replicated without overlap at each point of a lattice i 0 v 0 + i 1 v 1 , where v 0 and v 1 are vectors generating the lattice and i 0 and i 1 range over all integers. A densest translational lattice packing is one which minimizes the area jv 0 v 1 j of the fundamental parallelogram. An algorithm and implementation is given for densest translation lattice packing. This algorithm has useful applications in industry, particularly clothing manufacture. 1 Introduction A number of industries generate new parts by cutting them from stock material: cloth, leather (hides), sheet metal, glass, etc. These industries need to generate dense non-overlapping layouts of polygonal shapes. Because fabric has a grain, apparel layouts usually permit only a nite set of orientations. Since cloth comes in rolls, the most common layout problem in the apparel - Computational Geometry: Theory and Applications , 1998 "... An effective and fast algorithm is given for rotational overlap minimization: given an overlapping layout of polygons P 1 ,P 2 ,P 3 ,...,P k in a container polygon Q, translate and rotate the polygons to diminish their overlap to a local minimum. A (local) overlap minimum has the property that any p ..." Cited by 5 (1 self) Add to MetaCart An effective and fast algorithm is given for rotational overlap minimization: given an overlapping layout of polygons P 1 ,P 2 ,P 3 ,...,P k in a container polygon Q, translate and rotate the polygons to diminish their overlap to a local minimum. A (local) overlap minimum has the property that any perturbation of the polygons increases the overlap. Overlap minimization is modified to create a practical algorithm for compaction: starting with a non-overlapping layout in a rectangular container, plan a non-overlapping motion that diminishes the length or area of the container to a local minimum. 
Experiments show that both overlap minimization and compaction work well in practice and are likely to be useful in industrial applications. 1998 Published by Elsevier Science B.V. Keywords: Layout; Packing or nesting of irregular polygons; Containment; Minimum enclosure; Compaction; Linear programming 1. Introduction A number of industries generate new parts by cutting them from stock mater...
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=796752","timestamp":"2014-04-18T13:28:48Z","content_type":null,"content_length":"40318","record_id":"<urn:uuid:15047378-0cbd-4c03-ab28-100137249ee3>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00623-ip-10-147-4-33.ec2.internal.warc.gz"}
Bethesda, MD
Find a Bethesda, MD Math Tutor

...Let me introduce myself briefly: I have a B.S. and M.S. in Chemistry and taught high school chemistry in Baltimore for six years. Currently, I am working toward my Ph.D. in Chemical Education at the Catholic University of America in Washington, D.C.. This doctoral degree involves the in-depth ...
5 Subjects: including algebra 1, algebra 2, geometry, chemistry

...I have previously taught economics at the undergraduate level and can help you with microeconomics, macroeconomics, econometrics and algebra problems. I enjoy teaching and working through problems with students since that is the best way to learn. Have studied and scored high marks in econometric...
14 Subjects: including geometry, linear algebra, probability, STATA

...Teachers diagnose learning difficulties on a daily basis. Invest in a quality tutor to get quality results. As a certified PA Math (7-12) teacher, you can trust that I have the knowledge, skills and experience to address your student's and your needs.
19 Subjects: including precalculus, algebra 1, algebra 2, SAT math

...You can expect to improve your command of the subject right away. American history is uniquely important to Americans, because it influences our lives and stirs our emotions more than any other, just as Japanese history is more important to the Japanese, or Micronesian history to Micronesians. Bu...
15 Subjects: including ACT Math, Spanish, English, reading

...I do my best to keep the learning as practical and hands-on as possible, using word problems, pictures, and topics of interest to the student. I am a certified teacher, specializing in math and English, as well as a home-schooling mom with years of experience in teaching reading. For the beginning reader, I combine repetitious practice with fun activities like the memory game.
17 Subjects: including algebra 1, SAT math, trigonometry, geometry
{"url":"http://www.purplemath.com/bethesda_md_math_tutors.php","timestamp":"2014-04-17T01:36:52Z","content_type":null,"content_length":"24043","record_id":"<urn:uuid:7d2dbdde-32ad-41a6-a23d-8e54232bd89c>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00075-ip-10-147-4-33.ec2.internal.warc.gz"}
O/R Series Epilogue 2: Towards a not-so-simple explanation of Object Relational Database Management Systems

Object relational databases build on the relational model but focus on answering questions regarding information that is derived from other information, using a variety of general purpose tools. Modelling of the information then shifts not only to what is known but to what can be easily (or even not so easily) extrapolated from what is known. Existing ORDBMS-type systems appear to include Informix (of course, given the PostgreSQL legacy), Oracle, and DB2, but PostgreSQL and Informix have the longest lineage in this area.

The key features of an ORDBMS are:
1. The ability to model easily extrapolated information within the data model itself, through use of user defined functions
2. Extensibility with regard to types and functions, written in general purpose programming languages

These features work together in important ways, and it is possible, as we have seen, to build a great deal of intelligence into a data model by only moving slightly beyond the standard relational approach. Although every step further brings with it complexity costs, these are very often useful and allow problems to be solved close to the database which cannot otherwise be solved easily.

Towards an Object-Relational Algebra

One way to think of object-relational modelling is in the language of relational algebra. I haven't been able to find accepted notation for functions on individual tuples, so here I use f(relation) notation to show that the function operates over the set of tuples in the relation. Relational algebra as defined by Dr Codd is extremely important, but it cannot solve certain classes of important queries and so SQL has gone beyond it. Important blind spots include:
1. Transitive operations at arbitrary depth. Transitive closure and "n next highest values" are both examples of this.
2. Calculated data problems

Most problems outside the relational model fall into one of these two categories.

We already know that elements of a tuple are functionally dependent on others if, and only if, for each value of the dependency there is only one functionally dependent value. So if a is functionally dependent on b, for every b there is exactly one valid value of a. Functional dependency, as in algebra, is value-based, not expression-based. I am choosing an algebra-inspired notation for this, where f(R) is a function of relation R if, and only if, for every tuple in relation R there is only one f(R).

f(R) is trivial if, for every tuple (a, b, c) in relation R, f(R) returns a value or a tuple that is a subset of the tuple processed. So if for every tuple (a, b, c) we have f(R) = a, or f(R) = (a, c), then the function is trivial. Every trivial function also represents a trivial functional dependency within the tuple.

A function is relational if it can be expressed solely through relational algebra. All trivial functions can be expressed relationally (using π operations) and therefore are also relational. A relational function thus always specifies a functional data dependency in or between relations. Relational functions have the property of always denoting global functional dependencies.

A function is non-relational if it cannot be expressed solely through relational algebra, for example if it involves processing of the actual value of one or more of the tuple elements or their functional dependencies. If we have a relation Emp containing an attribute salary_per_wk, and annual_salary(Emp) = π_{salary_per_wk * 52}(Emp), then annual_salary is non-relational because it involves actual processing of data inside the tuple. Relational functions can often be expanded into relational operations, but as far as relational operations are concerned, non-relational functions are black boxes and behave very much like attributes of a relation. For example, id(R) = π_{id}(R) and c(R) = π_{c}(σ_{θ}(R × C)) are both relational functions, but only id(R) is trivial; c(R) represents essentially a join and a subselect. An example of a general operation in functional notation might be σ_{age(R) = 41}(R). Similarly we can project: π_{name}(σ_{age(R) = 41}(R)).

Of course, we must be careful. Since age(R) is only locally functionally dependent, indexes are out of the question and we must be careful about specifying constraints. Defining a relation such that age(R) < 65 might prove problematic unless we are re-checking every day. This would be similar to the following statement in PostgreSQL:

SELECT r.name FROM employee r WHERE r.age = 41;

where name and age are table methods. This allows us to store less information in the database (and hence with less duplication and fewer chances for error) and extrapolate important information out of the data that is stored. It also allows us to store data in ways which are less traditional (nested structures, etc.) for the sole purpose of writing functions against it in that specific format, and thus to model constraints which cannot be modelled using more traditional structures (though, as we have seen, that poses significant gotchas and complexity costs).

Similarly, recursive queries require recursive operators to work. I place a capital Greek sigma (Σ) above the join operator to signify recursion. This symbol is borrowed because it is the series designator elsewhere in mathematics. An optional maximum depth is specified as a subscript to the Σ, so Σ_{5} would indicate that the expression or join should be subject to no more than 5 iterations. In a recursive join, the join is repeated until the θ condition is no longer satisfied. The functions path(R) and depth(R) are functionally dependent on the output of a recursive join, so Σ_{5} is identical to σ_{depth(r) ≤ 5}(Σ(...)). The choice of the Σ is also helpful because while Σ always returns an extended superset, σ always returns a subset. Since path is functionally dependent on the output of a recursive join, we can prove transitive closure over a finite set using recursive self-joins, functions, and boolean operators. We can also express "next highest N tuples" results. Alternatively, the Σ can be followed by parentheses to show that an entire expression should be repeated until it brings no further results into the set. In a Σ expression the set is divided into two subsets: previous and new. New results are those returned by the last iteration, and are the only ones processed for join conditions. On each iteration, the "new" tuples are moved into the previous set and the tuples which satisfied the join condition become the new "new" set.

I also use three other symbols for specifying order-dependent information. ω (omega) denotes a "window" order and partition in which aggregates can be applied to tuples in order; tuples can be removed from the beginning (α with a subscript for the number of tuples to "skip") or truncated from the end (τ with a subscript for the number of tuples after which the set is to be truncated). These allow me to approach the problems which SQL can address but relational algebra cannot.
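To make the Σ (recursion) and ω (window ordering) operators concrete, here is a small, self-contained sketch using SQLite from Python; recursive CTEs and window functions behave much like their PostgreSQL counterparts. The employee table, its manager_id column, and the sample rows are hypothetical and exist only for illustration (window functions need SQLite 3.25 or newer).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT,
                       manager_id INTEGER, salary_per_wk REAL);
INSERT INTO employee VALUES
  (1, 'Ada',   NULL, 2000),
  (2, 'Bob',   1,    1500),
  (3, 'Carol', 1,    1600),
  (4, 'Dave',  2,    1200);
""")

# Sigma-style recursion: walk the "reports to" relation to arbitrary depth,
# carrying depth and path along, much like the recursive join described above.
for row in conn.execute("""
    WITH RECURSIVE chain(id, name, depth, path) AS (
        SELECT id, name, 0, name FROM employee WHERE manager_id IS NULL
        UNION ALL
        SELECT e.id, e.name, c.depth + 1, c.path || ' > ' || e.name
        FROM employee e JOIN chain c ON e.manager_id = c.id
    )
    SELECT * FROM chain ORDER BY depth;
"""):
    print(row)

# Omega-style window ordering: "next N" style questions without recursion.
for row in conn.execute("""
    SELECT name, salary_per_wk,
           ROW_NUMBER() OVER (ORDER BY salary_per_wk DESC) AS pay_rank
    FROM employee
    ORDER BY pay_rank;
"""):
    print(row)
```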
An interesting property of ω is that the window order is only valid for some specific operations and is lost on any join or select operations. These operations have interesting properties as well, but they are somewhat outside the scope of this posting. I will, however, note that it is usually cleaner to solve "next N" result issues with window ordering, tuple omission, and truncation than with recursion and aggregates.

Next: Towards a simple explanation of object-relational database systems.
{"url":"http://ledgersmbdev.blogspot.com/2012/09/or-series-epilogue-2-towards-not-so.html","timestamp":"2014-04-18T19:12:15Z","content_type":null,"content_length":"72140","record_id":"<urn:uuid:2f55d660-2330-412c-995e-d7507df546a9>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00403-ip-10-147-4-33.ec2.internal.warc.gz"}
A New Algorithm Based on Givens Rotations for Solving Linear Equations on Fault-Tolerant Mesh-Connected Processors
August 1998 (vol. 9 no. 8), pp. 825-832

K. N. Balasubramanya Murthy, K. Bhuvaneswari, C. Siva Ram Murthy, "A New Algorithm Based on Givens Rotations for Solving Linear Equations on Fault-Tolerant Mesh-Connected Processors," IEEE Transactions on Parallel and Distributed Systems, vol. 9, no. 8, pp. 825-832, Aug. 1998, doi:10.1109/71.706053

Abstract—In this paper, we propose a new I/O-overhead-free parallel algorithm based on Givens rotations for solving a system of linear equations. The algorithm uses a new technique called two-sided elimination and requires an N × (N + 1) mesh-connected processor array to solve N linear equations in (5N − log N − 4) time steps. The array is well suited for VLSI implementation, as identical processors with a simple and regular interconnection pattern are required. We also describe a fault-tolerant scheme based on an algorithm-based fault tolerance (ABFT) approach. This scheme has small hardware and time overhead and can tolerate up to N processor failures.

Index Terms: Linear equations, Givens rotations, parallel algorithm, mesh-connected processor array, fault tolerance.
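For readers unfamiliar with the underlying numerics, here is a small serial sketch of how Givens rotations triangularize a dense system before back-substitution. This is plain NumPy; it is not the paper's two-sided elimination or its mesh-connected, fault-tolerant algorithm, only an illustration of the basic rotation step.

```python
import numpy as np

def givens_solve(A, b):
    """Zero the subdiagonal of A with Givens rotations applied to [A | b],
    then solve the resulting upper-triangular system."""
    A = A.astype(float)
    b = b.astype(float)
    n = A.shape[0]
    for j in range(n - 1):
        for i in range(j + 1, n):
            r = np.hypot(A[j, j], A[i, j])
            if r == 0.0:
                continue
            c, s = A[j, j] / r, A[i, j] / r
            G = np.array([[c, s], [-s, c]])   # 2x2 rotation acting on rows j and i
            A[[j, i], :] = G @ A[[j, i], :]
            b[[j, i]] = G @ b[[j, i]]
    return np.linalg.solve(np.triu(A), b)     # A is now (numerically) upper triangular

A = np.array([[4.0, 1.0, 2.0],
              [2.0, 5.0, 1.0],
              [1.0, 2.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])
x = givens_solve(A, b)
print(x, np.allclose(A @ x, b))   # the residual check should print True
```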
{"url":"http://www.computer.org/csdl/trans/td/1998/08/l0825-abs.html","timestamp":"2014-04-18T22:16:03Z","content_type":null,"content_length":"55759","record_id":"<urn:uuid:5d4952f9-0e99-4e30-9365-04d4b1fac9d8>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00483-ip-10-147-4-33.ec2.internal.warc.gz"}
FOM: standard model, FD work, etcetera V.Z. Nuri vznuri at yahoo.com Tue Jul 31 15:05:28 EDT 2001 hi all. it is true that theory-edge could be considered higher noise compared to this list. on the other hand FOM has a half-dozen dedicated editors/moderators & lower traffic. theory-edge is a little more fastmoving & rambunctious for sure and relies more on readers hitting "delete" rather than moderator(s). not for the fainthearted. let us agree civily at least for the moment that it is akin to apples & oranges, grin to sign up see the home page I write this again because someone asked me in email how to sign up. re: JS's msg. I probably wrote the "confused query" he refers to asking about the large cardinal hypothesis related to P=?NP on theory-edge. its true its all somewhat new to me; I approached the field from complexity theory literature & not logic. I'm a clueless newbie on various logic theory aspects, not at all embarrassed to admit that even among those who are most familiar with it in the world. I found references to "standard model" in FOM archives after hearing FD talk about it. its not clear to me how "standard model" is formally defined. is it a set of axioms? how does it relate to ZFC+AC? so far I havent found a nice online survey of the key aspects of logic theory. I promise to read one when I find one. those who work in the field, consider having something like this for "outsiders" if you want to bring in new recruits. I always have a bunch of surveys on complexity theory handy & on the tip of my fingers for anyone who wants to get into it. imho, its "professional courtesy". re: AU's note about the FD los alamos preprint. (I share the skepticism about P=?NP being unprovable.) francisco doria has a very intuitive style of math that is well suited for the theory-edge forum. he's been working out kinks in his paper & lines of thinking based on responses from while I dont think all of it is fully baked yet, that's what science is all about imho-- trying to get half baked ideas into fully baked form. on theory-edge we celebrate this process, rather than smirk at it. imho (from 6 mos of talking to him almost every week, and much personal email) FD is at the cutting edge of world scientific inquiry into the P=?NP question. some may wait until they have solid results before using the LANL archives, others may dash off some musings. its great it can be used either way. & how many great researchers get into a kind of perfection-paralysis where they dont want to publish yet because its not perfect? arguably this happened to einstein in the latter half of his life, possibly to the detriment of scientific advancement. FD surely embodies the opposite extreme! yes, I'm an admirer; I should let FD expand on his work further here if he wants to (it seems very well suited for this forum in some ways), I suspect he may pop in to chat about it when he finds its caused a little stir, I'll let him know. he has a really premiere background imho, "your mileage may vary" <wink> by the way, "the mathematical experience" coauthored by reuben hersh is one of my favorite books on the subject & I object somewhat to MD's devaluation of it I found in the archives. but thats a whole other thread I guess haha re: continuum hypothesis. well it looks to me like FD is one of the first to try to link up results in logic/set theory to complexity theory. lacking something definitive, I think there are some open questions about how they relate, CH may be one of them. 
what I suspect is that just about everything in logic can be reformulated in complexity theory terms & this process is just beginning such as with FD's work.
{"url":"http://www.cs.nyu.edu/pipermail/fom/2001-July/004988.html","timestamp":"2014-04-17T22:01:46Z","content_type":null,"content_length":"6142","record_id":"<urn:uuid:3df63cf3-8465-4bf2-8636-4816ae173597>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00249-ip-10-147-4-33.ec2.internal.warc.gz"}
12 February 2001 Vol. 6, No. 7 THE MATH FORUM INTERNET NEWS Planetqhe.com - D.K. Harris | Visual Fractions - R.E. Rand LOGO Foundation - Michael Tempel PLANETQHE.COM - David Kay Harris David Kay Harris started the Planetqhe project in April of 1999, when he was a graduate student at the University of Bristol, England. The site focuses on using methods of probability to solve counter-intuitive problems and deepen students' understanding and competence in the domain. The content of the site emphasizes the many situations where human beings answer not only the fundamental questions of "How many?" or "How much?" but also the question, "How Likely?" Sections include: - Why Planetqhe? Brief history and rationale of the site - Essential questions: The frame for the Planetqhe Curriculum and a call for papers - Teacher's notes: Please read these before use - Student support materials: Learn the basics needed - Communication Centre - Student projects - Recommended reading - The Activities: Probabilistic learning activities, applets, spreadsheets, and suggested Websites - Technical specifications VISUAL FRACTIONS - Richard E. Rand Visual Fractions aims to reduce "fractionitis," or fraction anxiety, by helping adults and children alike picture fractions and the operations that can be performed on them. There are instructions and problems to work through for the operations of addition, subtraction, multiplication, and division, first using fractions and then working with mixed numbers. Number lines are used to picture the addition and subtraction problems, while an area grid model is used to illustrate multiplication and division problems. Also included are short games where the user guesses fractional parts of a hedge while trying to find the location of Grampy's and then Granny's various hiding places. Separate activities include: - Identify fractions - Rename fractions - Compare fractions - Add fractions - Subtract fractions - Multiply fractions - Divide fractions A Java plug-in is required. Each fraction application should run on the latest versions of Netscape Navigator or Internet Explorer. Specific set-up instructions are included on the information page: LOGO FOUNDATION - Michael Tempel Information and resources for learning and teaching LOGO. This programming environment has roots in constructivist educational philosophy, and has been developed over the past 28 years to support constructive learning. Table of Contents includes: - What's New - Calendar of Events - About the Foundation - What is LOGO? - LOGO Foundation Services - On Line Publications - LOGO Resources - LOGO Products One of the Web-based and public domain versions of LOGO can be accessed here: This site includes a tutorial consisting of eight sample lessons, programmer's reference, source information, and a set of downloadable Java runtime classes that incorporates rLogo programs into your web pages. CHECK OUT OUR WEB SITE: The Math Forum http://mathforum.org/ Ask Dr. Math http://mathforum.org/dr.math/ Problems of the Week http://mathforum.org/pow/ Mathematics Library http://mathforum.org/library/ Teacher2Teacher http://mathforum.org/t2t/ Discussion Groups http://mathforum.org/discussions/ Join the Math Forum http://mathforum.org/join.forum.html Send comments to the Math Forum Internet Newsletter editors _o \o_ __| \ / |__ o _ o/ \o/ __|- __/ \__/o \o | o/ o/__/ /\ /| | \ \ / \ / \ /o\ / \ / \ / | / \ / \
{"url":"http://mathforum.org/electronic.newsletter/mf.intnews6.7.html","timestamp":"2014-04-19T17:29:43Z","content_type":null,"content_length":"7770","record_id":"<urn:uuid:1df3ef7b-6656-4977-9f1a-5693cf055b2c>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00004-ip-10-147-4-33.ec2.internal.warc.gz"}
Eoq Model Questions Eoq Model Questions DOC Sponsored High Speed Downloads Logistics Exam #2 Review Questions. ... What are characteristics of the fixed order quantity EOQ model? What are characteristics of the fixed order period model ... What is the formula for the Economic Order Quantity for the fixed order quantity model? One of the models that tries to answer these questions is the EOQ model. EOQ model is applicable when the demand rate is known. While ordering very frequently results in higher setup costs and smaller average inventory, ... Chapter 22. Other Topics in Working Capital Management. ANSWERS TO BEGINNING-OF-CHAPTER QUESTIONS. 22-1 The Economic Ordering Quantity (EOQ) model combines the annual costs associated with ordering different quantities with the annual costs of carrying different average inventory balances. The EOQ model assumes any real quantity is feasible. The actual quantity ordered may need to be an integer value and may be affected by packaging or other item characteristics. In the following Problems an EOQ of 268 is assumed. Problem 4: Answer the following questions as they pertain to the simple EOQ model: a. What costs are included in this model? b. What are some of the limiting assumptions of this model or, under what conditions is . this model appropriate? OMS511 Review Questions (Chapters 8, 11, 13, E, and G) 1) Which one of the following statements about capacity is best? ... A cycle inventory derived from using the EOQ model can be reduced by increasing the ordering cost. Five assumptions made when using the simplest version of the EOQ model are: 1. The same quantity is ordered at each reorder point. 2. Demand, ... 20-17 (20 min.) Economic order quantity, effect of parameter changes (continuation of 20-16). 1. D = 10,000, P = $30, C = $7 These same questions are crucial for a manufacturer but are placed in terms of the supply chain with the manufacturer dependent upon that retailer, ... Balance ordering costs with carrying costs using the economic-order-quantity (EOQ) decision model. Using the economic order quantity model, which of the following is the total ordering cost of inventory given an annual demand of 36,000 units, ... What two basic questions must be answered by an inventory-control decision rule? 5 Table 1. Use the output to answer the questions. LINEAR PROGRAMMING PROBLEM. MAX 25X1+30X2+15X3. S.T. 1) 4X1+5X2+8X3<1200. 2) 9X1+15X2+3X3<1500. OPTIMAL SOLUTION. ... The EOQ model. a. determines only how frequently to order. b. considers total cost. For every type of item held in inventory, two questions must be asked: 1. When to order ( When should a replenishment order be placed) 2. How much to order (what is the order quantity Q)? ... The Economic Order Quantity (EOQ) Inventory Model ... Answer questions 6-11 based upon using the EOQ model. What is the economic order quantity (EOQ) that will minimize inventory costs? 661.8 456.5 371.7 278.3 What is the average number of units in inventory based upon ordering using the EOQ? CHAPTER 12. DISCUSSION QUESTIONS. 1. The advent of low-cost computing should not be seen as obviating the need for the ABC inventory classification scheme. The following questions are worth three (3) points each. 1. ... 8. What costs are considered in the basic EOQ model? a) annual ordering costs + annual holding costs. b) ... What is the economic order quantity for your product? DISCUSSION QUESTIONS: With the advent of low-cost computing, ... The standard EOQ model assumes instantaneous delivery ... 
Economic Order Quantity: where: D = annual demand, S = setup or order cost, H = holding cost. Problem 12.18. What are the two basic questions in inventory management? 15. What are EOQ models used for? 16. Which EOQ model should be used if replenishment occurs over time instead of as a . single delivery? 17. When is the fixed order interval model appropriate? The following questions are asked using the economic order quantity model: 1) What is the total annual relevant cost of the company's current inventory policy? 2) What are the optimal order quantity and its cost? Will ... DISCUSSION QUESTIONS. 1. ... 12.17 Economic Order Quantity, noninstantaneous delivery: or 1,651 units. where: ... The solution below uses the simple EOQ model with reorder point and safety stock. It ignores the seasonal nature of the demand. In transportation model analysis the stepping-stone method is used to. a. obtain an initial optimum solution. ... The two most basic inventory questions answered by the typical inventory model are. ... d. results in larger average inventory than an equivalent EOQ model. Compared to the basic EOQ model Q* = (2DS/H) ... The economic order quantity has been computed to be 500. Your accountant now informs you that the holding cost you used was too high. ... Review Questions for Topics 13 & 14: #1, 2, 4, 5, 6, 7, 9, 14. 4. The two most basic inventory questions answered by the typical inventory model are. a. timing and cost of orders. b. quantity and cost of orders. ... Which of the following statements about the basic EOQ model is true? a. If the ordering cost were to double, the EOQ would rise. b. ** TRUE-FALSE QUESTIONS (3 pts. each: 63 pts) ____ 1. The term "MRP" in the inventory management stands for Materials Resources Planning. ... The basic EOQ model cannot be used if holding costs are stated as a percentage of unit price. ____15. Fundamental Questions: - How much to order? - When to order? 1. Continuous Review System: ... -- EOQ Model. Assumptions. There is one product type. Demand is known and constant. Lead time is known and constant. Receipt of inventory is instantaneous(one batch, same time) OSM311 Review Questions (Chapters 3, 5, 13, and 15) 1. Which would not generally be considered as a feature common to all forecasts? ... The EOQ model is most relevant for which one of the following? A) ordering items with dependent demand . B) ... Multiple Choice Questions (i) Walter’s Model suggests for 100% DP Ratio when (a) ke = r, (b) ke < r, (c) ke ... Jest in Time (JIT),(d)Economic Order Quantity. 18. A firm has inventory turnover of 6 and cost of goods sold is Rs. 7,50,000. With better inventory management, the inventory turnover ... Economic Order Quantity (EOQ) Model. Economic Production Quantity Model. Quantity Discounts Model. Reorder Point (Q System) ... Managing independent demand inventory involves answering two questions: How much to order? When to order? Five Assumptions of EOQ. Demand is known and constant. Whole lots. Long Questions: How does EOQ Model select most economical order quantity? State assumptions, limitations and exceptions. Why is safety stock necessary? Establish its relation with re order, consumption and lead time. What is an ideal warehouse? Review Questions II: MGMT 3165. ... Compared to the basic EOQ model Q* = (2DS/H)1/2, the inclusion of a shortage cost into the. model has the effect of making Q* larger. 21. The basic logic in ABC type analysis is that generally, very few items are very important and. Discussion Questions. 
Consider a supermarket deciding on the size of its replenishment order from Proctor & Gamble. ... Recall that the EOQ model is based on a one-product-at-a-time assumption; if multiple products are aggregated, ... QUESTIONS ON “TRANSPORTATION ... 100 200 400 5,000 How do carrying costs and order costs vary in the simple EOQ model? according to the time of the year and seasonality of demand. directly. inversely. not at all. Review Questions for Final Exam: USTB – Beijing 20. 12. ... Compared to the basic EOQ model Q* = (2DS/H) ... The economic order quantity has been computed to be 500. Your accountant now informs you that the holding cost you used was too high. Review Questions II (Final Exam): MGMT 3165. I. True or False Questions: 1. Holding costs include costs for handling, insurance, theft, and breakage. ... Compared to the basic EOQ model Q* = (2DS/H) ½, the inclusion of a shortage cost into the model has. the effect of making Q* larger. Recall that the EOQ model is based on a one-product-at-a-time assumption; if multiple products are aggregated, ... Chapter 11 discussion questions Subject: Supply Chain Management - 5th edition Last modified by: UMURRM2 Company: BASIC ECONOMIC ORDER QUANTITY (EOQ) MODEL. The EOQ model is a technique for determining the best answers to the how much and when questions. It is based on the premise that there is an optimal order size that will yield the lowest possible value of the total inventory cost. EOQ. EOQ model assumptions. factors affecting sales in companies. family grouping. fixed locator method. FOB destination/FOB origin. general area configuration. grid technique. ... BONUS QUESTIONS (5 points) There are five bonus questions based on two articles: The following questions are worth three (3) points each. 1. Hand tools, lubricants, and cleaning supplies are usually examples of what? ... 2. What costs are considered in the basic EOQ model? a) annual ordering costs + annual holding costs. b) annual purchasing costs + annual holding costs. Q= economic order quantity = EOQ = D= annual demand (expressed in units/ year) ... Questions: Q-1: Why is it important to have an accurate ordering cost for your inventory management? ... (standard EOQ model), ... 2.After the students have worked though the basic EOQ model and costs, have them split into small groups to try to identify other costs beyond the basic ordering and holding costs that might affect inventory decisions. Sensitivity analysis on the economic order quantity (EOQ) formula can help the operations manager answer several questions on how to manage inventories. Which one of the following questions is NOT answered by EOQ sensitivity analysis? A. A good inventory control system must answer three questions. What to stock? ... Economic Order Quantity (EOQ) Based on the cost of owning inventory ... Review of the Economic Order Quantity model ( There are 20 multiple questions on the test. Write multiple choice answers on the . summary. ... In the economic order quantity (EOQ) model, if the holding cost and the ordering cost both double, the value of Q* will: decrease by 50%. remain unchanged. There are 20 multiple questions on the test. (Questions 1-19: 5 points each, Question 20: 15 points) Write multiple choice answers on the . summary. ... In the basic economic order quantity (EOQ) model, a doubling of estimated annual demand would lead to what change in Q*? doubling. No change. University of Maryland College Challenge Questions: ____ 1. ... 
All of the following are assumptions of the simple EOQ model except: A. price is dependent on quantity. B. no inventory in transit. C. continuous, constant, and known rate of demand. My exam questions tend to come from those materials ... the operation of the classic EOQ model, and how to calculate such things as the optimal order size (i.e., the EOQ) once the EOQ has been calculated, ... State the questions answered in a simple inventory model. Give an overview of the simplest inventory model: ... In a plain economic-order-quantity (EOQ) model, you are assuming: Periodic review. Known constant demand rate. Out-of-stock conditions are permitted. Questions: Consider the changes that Woolworths Limited has introduced to its inventory management practices, ... each of the components in the economic order quantity (EOQ) model equation, and what is likely to happen to the level of the optimal order quantity (Q*), ... Five assumptions made when using the simplest version of the EOQ model are: 1. The same fixed quantity is ordered at ... 20-17 (20 min.) Economic order quantity, effect of parameter changes ... The case questions challenge students to apply the concepts learned in the chapter to a ... Discussion Questions: With the advent of low-cost computing, do you see alternatives to the popular ABC classification? What is the difference between the standard EOQ model and the production inventory model? What are the main reasons that an organization keeps inventory? Some direct questions that a firm needs to pose about its customers are: ... The Economic Order Quantity Model (EOQ) The EOQ model, which is one of the earliest applications of operations management, was proposed by F. W. Harris in 1913. Sample questions/ problems for Inventory Management: 1. When developing inventory cost models, which of the following are not included as costs to place an order? A) Phone calls . ... Using the economic order quantity model, ...
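The review questions quoted above keep returning to the same closed-form result, Q* = (2DS/H)^1/2, and to sensitivity questions about doubling demand or doubling both cost parameters. The short sketch below works those out numerically; it is only an illustration of the standard basic-EOQ formula, and the function names and input values (D = 1,200 units/yr, S = 50, H = 6) are ours, not taken from any of the quoted exercises.

    import math

    def eoq(D, S, H):
        # Basic economic order quantity: Q* = sqrt(2DS/H)
        return math.sqrt(2.0 * D * S / H)

    def total_relevant_cost(D, S, H, Q):
        # Annual ordering cost + annual holding cost (purchase cost is
        # excluded, as in the basic model)
        return (D / Q) * S + (Q / 2.0) * H

    D, S, H = 1200.0, 50.0, 6.0            # illustrative inputs
    Q_star = eoq(D, S, H)
    print(round(Q_star, 1))                                  # ~141.4 units
    print(round(total_relevant_cost(D, S, H, Q_star), 2))    # ~848.53 per year

    # Sensitivity checks matching the quiz items quoted above:
    print(eoq(2 * D, S, H) / Q_star)       # doubling demand scales Q* by sqrt(2), about 1.41
    print(eoq(D, 2 * S, 2 * H) / Q_star)   # doubling both S and H leaves Q* unchanged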
{"url":"http://ebookily.org/doc/eoq-model-questions","timestamp":"2014-04-23T07:47:32Z","content_type":null,"content_length":"41356","record_id":"<urn:uuid:400dfbe7-955a-4942-9eec-f70fd341a60a>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00172-ip-10-147-4-33.ec2.internal.warc.gz"}
The Origin of the Half-Angle Identities for Sine
The trig identities come in sums, differences, ratios, multiples, and halves. With a half-angle identity, you can get the value of the sine of a 15-degree angle using a function of a 30-degree angle. You can also get the value of the tangent of a 22 1/2-degree angle by using a function of an angle of 45 degrees. These identities just create more and more ways to establish an exact value for many of the more commonly used trig functions. These half-angle identities find the function value for half the measure of angle θ; for sine,
sin(θ/2) = ±√[(1 − cos θ)/2]
The half-angle identities are a result of taking the double-angle identities and scrunching them around. A more-technical term for scrunching is to solve for the single angle in a double-angle identity. Here's how the half-angle identity for sine came to be:
1. Write the double-angle identity for cosine that has just a sine in it.
cos 2θ = 1 − 2sin^2θ
Using the double-angle identity for cosine works better than the double-angle identity for sine, because the sine formula has both sine and cosine functions on the right side of the equation, and you can't easily get rid of one or the other.
2. Solve for sin θ. First, get the sin^2θ term by itself on the left.
2sin^2θ = 1 − cos 2θ
3. Divide each side by 2 and then take the square root of each side.
sin^2θ = (1 − cos 2θ)/2
sin θ = ±√[(1 − cos 2θ)/2]
4. Replace 2θ with α and θ with α/2.
sin(α/2) = ±√[(1 − cos α)/2]
By switching the letters, you can see the relationship between the two angles, namely that one is half as big as the other, more easily.
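As a quick numerical check of the identity derived above (a minimal sketch; the function name is ours), evaluating the right-hand side at α = 30 degrees reproduces sin 15°:

    import math

    def half_angle_sin(alpha_rad, sign=1):
        # sin(alpha/2) = +/- sqrt((1 - cos(alpha)) / 2); the sign depends on
        # the quadrant that alpha/2 falls in
        return sign * math.sqrt((1.0 - math.cos(alpha_rad)) / 2.0)

    alpha = math.radians(30.0)
    print(half_angle_sin(alpha))          # ~0.2588190
    print(math.sin(math.radians(15.0)))   # same value, confirming the identity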
{"url":"http://www.dummies.com/how-to/content/the-origin-of-the-halfangle-identities-for-sine.navId-420745.html","timestamp":"2014-04-18T13:41:33Z","content_type":null,"content_length":"52843","record_id":"<urn:uuid:966bb933-519c-460c-9f1a-7f45a0327097>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00173-ip-10-147-4-33.ec2.internal.warc.gz"}
Games and Full Abstraction for the Lazy lambda-calculus Results 1 - 10 of 101 - Handbook of Logic in Computer Science , 1994 "... Least fixpoints as meanings of recursive definitions. ..." - Information and Computation , 1996 "... New tools are presented for reasoning about properties of recursively defined domains. We work within a general, category-theoretic framework for various notions of `relation' on domains and for actions of domain constructors on relations. Freyd's analysis of recursive types in terms of a property o ..." Cited by 99 (5 self) Add to MetaCart New tools are presented for reasoning about properties of recursively defined domains. We work within a general, category-theoretic framework for various notions of `relation' on domains and for actions of domain constructors on relations. Freyd's analysis of recursive types in terms of a property of mixed initiality/finality is transferred to a corresponding property of invariant relations. The existence of invariant relations is proved under completeness assumptions about the notion of relation. We show how this leads to simpler proofs of the computational adequacy of denotational semantics for functional programming languages with user-declared datatypes. We show how the initiality/finality property of invariant relations can be specialized to yield an induction principle for admissible subsets of recursively defined domains, generalizing the principle of structural induction for inductively defined sets. We also show how the initiality /finality property gives rise to the co-induct... - ACM TRANSACTIONS ON PROGRAMMING LANGUAGES AND SYSTEMS , 2001 "... Stack inspection is a security mechanism implemented in runtimes such as the JVM and the CLR to accommodate components with diverse levels of trust. Although stack inspection enables the finegrained expression of access control policies, it has rather a complex and subtle semantics. We present a ..." Cited by 90 (4 self) Add to MetaCart Stack inspection is a security mechanism implemented in runtimes such as the JVM and the CLR to accommodate components with diverse levels of trust. Although stack inspection enables the finegrained expression of access control policies, it has rather a complex and subtle semantics. We present a formal semantics and an equational theory to explain how stack inspection a#ects program behaviour and code optimisations. We discuss the security properties enforced by stack inspection, and also consider variants with stronger, simpler properties. - Semantics and Logics of Computation , 1997 "... ..." , 1997 "... We introduce a calculus which is a direct extension of both the and the π calculi. We give a simple type system for it, that encompasses both Curry's type inference for the -calculus, and Milner's sorting for the π-calculus as particular cases of typing. We observe that the various continuation pas ..." Cited by 64 (2 self) Add to MetaCart We introduce a calculus which is a direct extension of both the and the π calculi. We give a simple type system for it, that encompasses both Curry's type inference for the -calculus, and Milner's sorting for the π-calculus as particular cases of typing. We observe that the various continuation passing style transformations for -terms, written in our calculus, actually correspond to encodings already given by Milner and others for evaluation strategies of -terms into the π-calculus. Furthermore, the associated sortings correspond to well-known double negation translations on types. 
Finally we provide an adequate cps transform from our calculus to the π-calculus. This shows that the latter may be regarded as an "assembly language", while our calculus seems to provide a better programming notation for higher-order concurrency. - In Proceedings of the Twenty-Third Annual ACM Symposium on Principles of Programming Languages , 1996 "... Bisimilarity (also known as `applicative bisimulation ') has attracted a good deal of attention as an operational equivalence for -calculi. It approximates or even equals Morris-style contextual equivalence and admits proofs of program equivalence via co-induction. It has an elementary construction ..." Cited by 43 (2 self) Add to MetaCart Bisimilarity (also known as `applicative bisimulation ') has attracted a good deal of attention as an operational equivalence for -calculi. It approximates or even equals Morris-style contextual equivalence and admits proofs of program equivalence via co-induction. It has an elementary construction from the operational definition of a language. We consider bisimilarity for one of the typed object calculi of Abadi and Cardelli. By defining a labelled transition system for the calculus in the style of Crole and Gordon and using a variation of Howe's method we establish two central results: that bisimilarity is a congruence, and that it equals contextual equivalence. So two objects are bisimilar iff no amount of programming can tell them apart. Our third contribution is to show that bisimilarity soundly models the equational theory of Abadi and Cardelli. This is the first study of contextual equivalence for an object calculus and the first application of Howe's method to subtyping. By the... - Proc. POPL'99, ACM , 1999 "... Machine The semantics presented in this section is essentially Sestoft's \mark 1" abstract machine for laziness [Sestoft 1997]. In that paper, he proves his abstract machine 6 A. K. Moran and D. Sands h fx = Mg; x; S i ! h ; M; #x : S i (Lookup) h ; V; #x : S i ! h fx = V g; V; S i (Update) h ; ..." Cited by 40 (7 self) Add to MetaCart Machine The semantics presented in this section is essentially Sestoft's \mark 1" abstract machine for laziness [Sestoft 1997]. In that paper, he proves his abstract machine 6 A. K. Moran and D. Sands h fx = Mg; x; S i ! h ; M; #x : S i (Lookup) h ; V; #x : S i ! h fx = V g; V; S i (Update) h ; M x; S i ! h ; M; x : S i (Unwind) h ; x:M; y : S i ! h ; M [ y = x ]; S i (Subst) h ; case M of alts ; S i ! h ; M; alts : S i (Case) h ; c j ~y; fc i ~x i N i g : S i ! h ; N j [ ~y = ~x j ]; S i (Branch) h ; let f~x = ~ Mg in N; S i ! h f~x = ~ Mg; N; S i ~x dom(;S) (Letrec) Fig. 1. The abstract machine semantics for call-by-need. semantics sound and complete with respect to Launchbury's natural semantics, and we will not repeat those proofs here. Transitions are over congurations consisting of a heap, containing bindings, the expression currently being evaluated, and a stack. The heap is a partial function from variables to terms, and denoted in an identical manner to a , 1996 "... This paper provides foundations for a reasoning principle (coinduction) for establishing the equality of potentially infinite elements of self-referencing (or circular) data types. As it is well-known, such data types not only form the core of the denotational approach to the semantics of programmin ..." 
Cited by 37 (3 self) Add to MetaCart This paper provides foundations for a reasoning principle (coinduction) for establishing the equality of potentially infinite elements of self-referencing (or circular) data types. As it is well-known, such data types not only form the core of the denotational approach to the semantics of programming languages [SS71], but also arise explicitly as recursive data types in functional programming languages like Standard ML [MTH90] or Haskell [HPJW92]. In the latter context, the coinduction principle provides a powerful technique for establishing the equality of programs with values in recursive data types (see examples herein and in [Pit94]). , 1995 "... In this thesis we present and analyse a set of automatic source-to-source program transformations that are suitable for incorporation in optimising compilers for lazy functional languages. These transformations improve the quality of code in many different respects, such as execution time and memory ..." Cited by 32 (1 self) Add to MetaCart In this thesis we present and analyse a set of automatic source-to-source program transformations that are suitable for incorporation in optimising compilers for lazy functional languages. These transformations improve the quality of code in many different respects, such as execution time and memory usage. The transformations presented are divided in two sets: global transformations, which are performed once (or sometimes twice) during the compilation process; and a set of local transformations, which are performed before and after each of the global transformations, so that they can simplify the code before applying the global transformations and also take advantage of them afterwards. Many of the local transformations are simple, well known, and do not have major effects on their own. They become important as they interact with each other and with global transformations, sometimes in non-obvious ways. We present how and why they improve the code, and perform extensive experiments wit... , 1992 "... We present a very simple and powerful framework for indeterminate, asynchronous, higher-order computation based on the formula-as-agent and proof-ascomputation interpretation of (higher-order) linear logic [Gir87]. The framework significantly refines and extends the scope of the concurrent constrai ..." Cited by 30 (5 self) Add to MetaCart We present a very simple and powerful framework for indeterminate, asynchronous, higher-order computation based on the formula-as-agent and proof-ascomputation interpretation of (higher-order) linear logic [Gir87]. The framework significantly refines and extends the scope of the concurrent constraint programming paradigm [Sar89] in two fundamental ways: (1) by allowing for the consumption of information by agents it permits a direct modelling of (indeterminate) state change in a logical framework, and (2) by admitting simply-typed -terms as dataobjects, it permits the construction, transmission and application of (abstractions of) programs at run-time. Much more dramatically, however, the framework can be seen as presenting higher-order (and if desired, constraint-enriched) versions of a variety of other asynchronous concurrent systems, including the asynchronous ("input guarded") fragment of the (first-order) ß-calculus, Hewitt's actors formalism, (abstract forms of) Gelernter's Lin...
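The abstract-machine rules quoted in the call-by-need citation above (Sestoft's "mark 1" machine for laziness) revolve around one idea: the first time a heap-bound expression is demanded it is evaluated (Lookup), and the binding is then overwritten with the resulting value (Update) so that later demands cost nothing. The toy Python sketch below illustrates only that memoised-thunk idea; it is not an implementation of the machine or of any calculus cited here, and all names in it are made up.

    class Thunk:
        """A heap binding that is evaluated at most once (call-by-need)."""
        def __init__(self, expr):
            self.expr = expr          # a zero-argument callable: the suspended term
            self.value = None
            self.forced = False

        def force(self):
            if not self.forced:       # 'Lookup': first demand evaluates the term
                self.value = self.expr()
                self.forced = True    # 'Update': the binding now holds a value
                self.expr = None      # the suspended term can be dropped
            return self.value

    calls = []
    x = Thunk(lambda: (calls.append("evaluated"), 21)[1] * 2)

    print(x.force())   # 42, evaluation happens here
    print(x.force())   # 42 again, returned from the updated binding
    print(calls)       # ['evaluated'] -> the body ran only once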
{"url":"http://citeseerx.ist.psu.edu/showciting?doi=10.1.1.17.863","timestamp":"2014-04-19T18:53:19Z","content_type":null,"content_length":"36881","record_id":"<urn:uuid:60a95cfe-d62f-4c05-b6f3-66f48ab041cd>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00119-ip-10-147-4-33.ec2.internal.warc.gz"}
SCALE 6.1.2 Scale is a comprehensive modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). Scale provides a comprehensive, user-friendly tool set for criticality safety, reactor physics, radiation shielding, radioactive source term characterization, and sensitivity and uncertainty analysis. For over 30 years, regulators, licensees, and research institutions around the world have used Scale for safety analysis and design. Scale provides a 'plug-and-play' framework with 89 computational modules including 3 deterministic and 3 Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. Scale includes current nuclear data libraries and problem-dependent processing tools for continuous-energy and multigroup neutronics calculations, multigroup coupled neutron-gamma calculations, as well as activation and decay calculations. Scale includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. Scale's graphical user interfaces assist with accurate system modeling and convenient access to desired results. See the developers' website and the Scale 6 electronic notebook for news on Scale, updates, and tips on running the code. Scale 6.1 Overview: http://info.ornl.gov/sites/publications/Files/Pub30885.pdf Scale website: http://scale.ornl.gov Scale electronic notebook: http://scale.ornl.gov/notebooks.shtml Material Input and Problem-Dependent Cross-Section Data: A foundation of Scale is MIPLIB (Material Information Processor Library). The purpose of MIPLIB is to allow users to specify materials using easily remembered and easily recognizable keywords that are associated with mixtures, elements, and nuclides provided in the Scale Standard Composition Library. MIPLIB also uses other keywords and simple geometry input specifications to prepare input for the modules that perform the problem-dependent cross-section processing. Even when performing multigroup calculations, Scale begins with continuous-energy cross-section data and generates problem-dependent multigroup data based on a pointwise spectrum generated with the CENTRM (Continuous Energy Transport Module) and PMC (Produce Multigroup Cross Sections) modules. A keyword supplied by the user selects the cross-section library from a standard set provided in Scale or designates the reference to a user-supplied library. Criticality Safety Analysis: The CSAS (Criticality Safety Analysis Sequence) control module provides for the calculation of the neutron multiplication factor of a system. Computational sequences accessible through CSAS provide automated problem-dependent processing of cross-section data and enable general analysis of a one-dimensional (1D) system model using deterministic transport with XSDRNPM or three-dimensional (3D) Monte Carlo transport solution using KENO V.a. CSAS also provides the capability to search on geometry spacing or nuclide concentrations, and provides problem-dependent cross-section processing without subsequent transport solutions for use in executing stand-alone functional modules. CSAS6 is a separate criticality control module that provides automated problem-dependent cross-section processing and Monte Carlo criticality calculations via the KENO-VI functional module that uses the Scale Generalized Geometry Package (SGGP). 
The Scale Material Optimization and Replacement Sequence (SMORES) is a Scale control module developed for 1D eigenvalue calculations to perform system criticality optimization. The STARBUCS (Standardized Analysis of Reactivity for Burnup Credit using Scale) control module has been developed to automate the generation of spatially varying nuclide compositions in a spent fuel assembly, and to apply the spent fuel compositions in a 3D Monte Carlo analysis of the system using KENO, primarily to assist in performing criticality safety assessments of transport and storage casks that apply burnup credit. The KMART (Keno Module for Activity-Reaction Rate Tabulation) module produces reaction rates and group collapsed data from KENO. The USLSTATS (Upper Subcritical Limit Statistics) tool provides trending analysis for bias Shielding Analysis: The MAVRIC (Monaco with Automated Variance Reduction Using Importance Calculations) fixed-source radiation transport sequence is designed to apply the multigroup fixed-source Monte Carlo code Monaco to solve problems that are too challenging for standard, unbiased Monte Carlo methods. The intention of the sequence is to calculate fluxes and dose rates with low uncertainties in reasonable times even for deep penetration problems. MAVRIC is based on the CADIS (Consistent Adjoint Driven Importance Sampling) methodology, which uses an importance map and biased source that are designed to work together. MAVRIC generates problem-dependent cross-section data and then automatically performs a coarse mesh, 3D discrete ordinates transport calculation using Denovo to determine the adjoint flux as a function of position and energy, and to apply the information to optimize the shielding calculation in Monaco. The SAS1 (Shielding Analysis Sequence No. 1) control module provides general 1D deterministic shielding capabilities, and QADS (Quick and Dirty Shielding) provides for 3D point-kernel shielding analysis. Depletion, Decay, and Radioactive Source Term Analysis: The ORIGEN (Oak Ridge Isotope Generation) code applies a matrix exponential expansion model to calculate time-dependent concentrations, activities, and radiation source terms for a large number of isotopes simultaneously generated or depleted by neutron transmutation, fission, and radioactive decay. Provisions are made to include continuous nuclide feed rates and continuous chemical removal rates that can be described with rate constants for application to reprocessing or other systems that involve nuclide removal or feed. ORIGEN includes the ability to utilize multigroup cross sections processed from standard ENDF/B evaluations. Within Scale, transport codes can be used to model user-defined systems, and the COUPLE code can be applied to calculate problem-dependent neutron-spectrum-weighted cross sections that are representative of conditions within any given reactor or fuel assembly, and convert these cross sections into a library that can be used by ORIGEN. Time-dependent cross-section libraries may be produced that reflect fuel composition variations during irradiation. An alternative sequence for depletion/decay calculations is ORIGEN-ARP, which interpolates pre-generated ORIGEN cross-section libraries versus enrichment, burnup, and moderator density. 
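ORIGEN's central device, as described above, is to cast simultaneous generation, depletion and decay as a linear system dN/dt = A N and to advance it with a matrix exponential. The snippet below is only a generic illustration of that idea on a made-up two-member decay chain; it uses none of ORIGEN's data, cross sections or numerical refinements.

    import numpy as np
    from scipy.linalg import expm

    # Hypothetical chain A -> B -> C (stable); decay constants in 1/s (illustrative)
    lam_a, lam_b = 1.0e-3, 5.0e-4

    # Transition matrix for dN/dt = A @ N, with N = [N_A, N_B, N_C]
    A = np.array([[-lam_a,    0.0, 0.0],
                  [ lam_a, -lam_b, 0.0],
                  [   0.0,  lam_b, 0.0]])

    N0 = np.array([1.0e20, 0.0, 0.0])    # initial atom inventory
    t = 3600.0                           # advance one hour in a single step
    N = expm(A * t) @ N0                 # matrix-exponential solution
    print(N)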
Reactor Analysis: The TRITON (Transport Rigor Implemented with Time-Dependent Operation for Neutronic Depletion) control module provides flexible capabilities to meet the challenges of modern reactor designs by providing 1D pin-cell depletion capabilities using XSDRNPM, two-dimensional (2D) lattice physics capabilities using the NEWT 2D flexible mesh discrete ordinates code, or 3D Monte Carlo depletion using KENO. With each neutron transport option in TRITON, depletion and decay calculations are conducted with ORIGEN. Additionally, TRITON can produce assembly-averaged few-group cross sections for use in core simulators. Improved resonance self-shielding treatment for nonuniform lattices can be achieved through use of the MCDancoff (Monte Carlo Dancoff) code that generates Dancoff factors for generalized 3D geometries. Sensitivity and Uncertainty Analysis: TSUNAMI-1D and -3D (Tools for Sensitivity and Uncertainty Analysis Methodology Implementation) are Scale control modules that facilitate the application of adjoint-based sensitivity and uncertainty analysis theory to criticality safety analysis. Additionally, a TSUNAMI-2D eigenvalue sensitivity analysis capability is available through the TRITON control module. TRITON also provides a generalized perturbation theory capability for 1D and 2D analysis that computes sensitivities and uncertainties for reactor responses such as reaction rate and flux ratios as well as homogenized few-group cross sections. TSAR (Tool for Sensitivity Analysis of Reactivity) provides sensitivity coefficients for reactivity differences, and TSUNAMI-IP (TSUNAMI Indices and Parameters) and TSURFER (Tool for Sensitivity and Uncertainty Analysis of Response Functions Using Experimental Results) provide code and data validation capabilities based on sensitivity and uncertainty data. Nuclear Data: The cross-section data provided with Scale include comprehensive continuous-energy neutron and multigroup neutron and coupled neutron-gamma data based on ENDF/B-VI.8 and ENDF/B-VII.0. Additional ENDF /B-V multigroup neutron libraries are also available. The comprehensive ORIGEN data libraries are based on ENDF/B-VII and JEFF-3.0/A and include nuclear decay data, neutron reaction cross sections, neutron-induced fission product yields, delayed gamma-ray emission data, and neutron emission data. The photon yield data libraries are based on the most recent Evaluated Nuclear Structure Data File (ENSDF) nuclear structure evaluations. The libraries used by ORIGEN can be coupled directly with detailed problem-dependent physics calculations to obtain self-shielded problem-dependent cross sections based on the most recent evaluations of ENDF/B-VII. Scale also contains a comprehensive library of neutron cross-section-covariance data for use in sensitivity and uncertainty analysis. Graphical User Interfaces: Scale includes a number of graphical user interfaces to provide convenient means of generating input, executing Scale, and visualizing models and data. GeeWiz (Graphically Enhanced Editing Wizard) is a Windows user interface that provides a control center for setup, execution, and viewing results for most of Scale's computational sequences including CSAS, MAVRIC, TRITON, and TSUNAMI. GeeWiz is coupled with the KENO3D interactive visualization program for Windows for solid-body rendering of KENO geometry models. The ORIGEN-ARP user interface for Windows provides for rapid problem setup and plotting of results for spent fuel characterization. 
The Javapeno (Java Plots Especially Nice Output) multiplatform interface provides 2D and 3D plotting of cross-section and cross-section-covariance data, multigroup fluxes and reaction rates from KENO and KMART, sensitivity data from TSUNAMI, and pointwise fluxes from CENTRM. The MeshView multiplatform interface produces 2D contour views of mesh data and mesh results from Monaco and KENO, and ChartPlot provides for energy-dependent plots of Monaco results. The ExSITE tool provides a dynamic multiplatform interface for the sensitivity and uncertainty analysis tools TSUNAMI-IP, TSURFER, and TSAR. The USLSTATS multiplatform interface allows for trending analysis with integrated plotting, and VIBE (Validation Interpretation and Bias Estimation) assists with interpretation of sensitivity data and couples with the DICE database from the International Criticality Safety Benchmark Evaluation Program. Additionally, several codes provide HTML-formatted output, in addition to the standard text output, to provide convenient navigation through the computed results using most common Web browsers with interactive color-coded output and integrated data visualization tools.
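The sensitivity and uncertainty capability described above (TSUNAMI, TSURFER and the covariance libraries) rests on first-order "sandwich" propagation: the relative variance of a response is S^T C S, where S holds the relative sensitivities and C is the relative covariance matrix of the underlying data. The numbers below are invented purely to show the arithmetic; they are not Scale data and the code is not part of Scale.

    import numpy as np

    # Relative sensitivities of a response (e.g. k-eff) to three data parameters
    S = np.array([0.35, -0.12, 0.08])

    # Relative covariance matrix of those parameters (illustrative values only)
    C = np.array([[4.0e-4, 1.0e-5, 0.0   ],
                  [1.0e-5, 9.0e-4, 0.0   ],
                  [0.0,    0.0,    2.5e-4]])

    rel_variance = S @ C @ S
    print("relative standard deviation of the response:", np.sqrt(rel_variance))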
{"url":"http://www.oecd-nea.org/tools/abstract/detail/ccc-0785/","timestamp":"2014-04-21T09:41:43Z","content_type":null,"content_length":"34939","record_id":"<urn:uuid:e288b654-bc9a-4f36-af7c-ae5d7f28b859>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00127-ip-10-147-4-33.ec2.internal.warc.gz"}
Statistical Thermodynamics
The total number of particles in our assembly is N or, expressed intensively, NA per mole ... boiling temperature of 4.22 K, one mole of He occupies 3.46 x 10 ...
– PowerPoint PPT presentation, 128 slides
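For context on the truncated helium figure in the excerpt: assuming ideal-gas behaviour, the molar volume at helium's normal boiling point (4.22 K, 1 atm) comes out to about 3.46 x 10^-4 m^3, which appears to be the quantity being quoted. A quick check:

    R = 8.314          # J / (mol K)
    T = 4.22           # K, normal boiling point of helium
    p = 101325.0       # Pa, 1 atm

    V_molar = R * T / p          # ideal-gas molar volume
    print(V_molar)               # ~3.46e-4 m^3 per mole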
{"url":"http://www.powershow.com/view/6fb85-MGZlM/Statistical_Thermodynamics_powerpoint_ppt_presentation","timestamp":"2014-04-20T23:33:31Z","content_type":null,"content_length":"137597","record_id":"<urn:uuid:212682fb-d4ef-418a-a640-b69b005bea91>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00577-ip-10-147-4-33.ec2.internal.warc.gz"}
Mercer Island Algebra Tutor Find a Mercer Island Algebra Tutor ...It is one of my favorites. If you need help getting it to do what you need, let me know. I can help! 46 Subjects: including algebra 1, algebra 2, reading, English ...I have always sought avenues that allowed me to work with students and peers in teaching settings. As an undergrad I was a peer mentor for an intro to engineering class, where I guided students across various disciplines within the engineering field, but mostly helped with the finer details of t... 14 Subjects: including algebra 2, algebra 1, geometry, writing ...I'm also nearly fluent in Spanish, and would be happy to converse with students taking Spanish classes. I like to communicate plainly and simply, and have always enjoyed presenting material in a way that I find easy to understand, and like to approach the subject matter so that it becomes engagi... 39 Subjects: including algebra 2, algebra 1, English, reading ...I have been a musician from a very young age, having played clarinet in bands and orchestras from 4th grade through adulthood. I have had several years of classical training in piano with two of those at University of Puget Sound and Oregon State. I am an excellent sight reader and have been paid as an accompanist and have training in music theory. 43 Subjects: including algebra 1, algebra 2, chemistry, geometry ...I scored perfect on my first ASVAB exam and I've been able to score perfect on repeat exams of the GRE, ACT, and SAT. I get requests from all over the country, so I usually use online meeting software. The software allows us to talk in real time (just like we're on Skype or on the phone), and we see and work the same problems together. 15 Subjects: including algebra 1, algebra 2, GRE, ASVAB
{"url":"http://www.purplemath.com/mercer_island_algebra_tutors.php","timestamp":"2014-04-17T13:31:48Z","content_type":null,"content_length":"24002","record_id":"<urn:uuid:b533b2c2-f4df-499c-9fb0-13ffc16b9cc5>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00151-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] Flattening an array
Charles R Harris charlesr.harris@gmail....
Wed Dec 9 23:59:22 CST 2009

On Tue, Dec 8, 2009 at 5:29 PM, Jake VanderPlas <jakevdp@gmail.com> wrote:
> Hello,
> I have a function -- call it f() -- which takes a length-N 1D numpy
> array as an argument, and returns a length-N 1D array.
> I want to pass it the data in an N-D array, and obtain the N-D array
> of the result.
> I've thought about wrapping it as such:
> #python code:
> from my_module import f  # takes a 1D array, raises an exception otherwise
> def f_wrap(A):
>     A_1D = A.ravel()
>     B = f(A_1D)
>     return B.reshape(A.shape)
> #end code

If the function treats both types of input the same and the input arrays are genuinely C/F contiguous, then you can just reshape them

A_1D = A.reshape(-1, order='C')  # c order
A_1D = A.reshape(-1, order='F')  # fortran order

Warning: if they aren't contiguous of the proper sort, copies will be made.
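To make the copy warning at the end of the reply concrete, here is a small self-contained check (our own example, not part of the original thread) showing which reshape is a view and which silently copies for a C-contiguous array:

    import numpy as np

    A = np.arange(6).reshape(2, 3)            # C-contiguous 2-D array

    flat_c = A.reshape(-1, order='C')         # no copy: memory is already row-major
    flat_f = A.reshape(-1, order='F')         # column-major walk of a C array -> copy

    print(np.shares_memory(A, flat_c))        # True  -> a view
    print(np.shares_memory(A, flat_f))        # False -> a copy; edits won't propagate back

    print(flat_c)                             # [0 1 2 3 4 5]
    print(flat_f)                             # [0 3 1 4 2 5]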
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2009-December/047370.html","timestamp":"2014-04-17T10:51:10Z","content_type":null,"content_length":"3948","record_id":"<urn:uuid:492833ee-8d1f-4db6-9305-168b60996335>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00292-ip-10-147-4-33.ec2.internal.warc.gz"}
7th International Conference on Multiphase Flow ICMF 2010, Tampa, FL USA, May 30-June 4, 2010
Droplet Entrainment in Churn Flow
Masroor Ahmad*, Deng J. Peng†, Colin P. Hale†, Simon P. Walker* and Geoffrey F. Hewitt†
* Department of Mechanical Engineering, Imperial College London, SW7 2AZ, UK
† Department of Chemical Engineering, Imperial College London, SW7 2AZ, UK
m.ahmad06@imperial.ac.uk and g.hewitt@imperial.ac.uk
Keywords: entrainment, churn flow, axial view photography
Churn flow is an important intermediate flow regime between slug and annular flow. In the Taylor bubbles which are characteristic of slug flow, there is a falling film at the tube wall with an upwards gas flow in the core. The transition from slug flow to churn flow occurs when the conditions in the Taylor bubbles occurring in slug flow are such as to promote flooding (Jayanti and Hewitt, 1992). Churn flow is a region in which there are large interfacial waves travelling upwards with falling film regions between the waves; the transition to annular flow occurs when the (film) flow becomes continuously upwards. In both churn flow and annular flow, a (sometimes large) proportion of the liquid flow is in the form of entrained droplets in the gas core. In annular flow systems with evaporation, the entrainment of droplets from the film surface plus the evaporation of the film may not be sufficiently offset by droplet deposition, and film dryout occurs. Dryout can be predicted provided expressions can be invoked for local entrainment, deposition and evaporation rates and these expressions integrated to establish the conditions under which the film flow rate becomes zero (Hewitt and Govan, 1990). This integration procedure needs a boundary value for entrained droplet flow rate at the onset of annular flow (i.e. at the transition between churn flow and annular flow), and the predictions obtained for dryout may be sensitive to this boundary value for short tubes. Some data and a correlation for the entrained fraction at the onset of annular flow were obtained by Barbosa et al. (2002), but this correlation applies only to adiabatic and equilibrium conditions. The work described in this paper is focused on the question of droplet entrainment in churn flow and the development of a methodology for predicting the entrained fraction at the onset of annular flow for diabatic (i.e. non-equilibrium) systems. New expressions are given which allow the integration procedure to be extended to cover both churn flow and annular flow. New experiments were also conducted on the churn-annular region using the axial view photography technique. This allowed the entrainment processes to be visualized starting at the onset of churn flow and passing into the annular flow regime.
In gas-liquid mixture flows in vertical pipes, the phases distribute themselves in a variety of spatial and temporal distributions, referred to as flow regimes or flow patterns. A wide variety of names have been given to these phase distributions; a reasonably well accepted set of descriptions for vertical upflow is as follows: bubbly flow, slug flow, churn flow and annular flow. These flow regimes and the corresponding transitions between them are of immense importance in predicting the transition from a regime of high heat transfer coefficient to one of greatly reduced heat transfer coefficient; this transition is referred to by a number of names, including dryout, critical heat flux (CHF) and boiling crisis.
The accurate prediction of this transition is of great importance not only in the design and safe operation of nuclear power plants but also in many other types of industrial heat transfer equipment. Annular film dryout is arguably the most important mechanism for the onset of reduced heat transfer coefficient. In annular flows, liquid is lost from the film as a result of droplet entrainment and evaporation, and it is gained by the film through droplet deposition; a net loss of liquid leads to liquid film dryout. In the annular flow model, the equations for entrainment, deposition and evaporation are integrated from the onset of annular flow (i.e. from the churn-annular transition); when the film flow rate is predicted to be zero, then dryout is predicted to occur. However, this integration process requires an initial value for entrained fraction (IEF) at the churn flow-annular flow transition; the IEF could, in principle, vary between zero and unity. It was observed that dryout prediction may be a strong function of IEF at the onset of annular flow, especially at high liquid mass fluxes. In order to solve the problem of IEF at the churn-annular transition, an understanding of droplet entrainment in churn flow is vital.
As noted by Barbosa et al. (2001b), the word 'CHURN' is used by different research groups to describe different flow types. Thus, Zuber & Findlay (1965) described churn-turbulent flow as a type of bubble flow, whereas Taitel et al. (1980) considered it to be a developing slug flow. However, the most widely accepted definition of churn flow is that of Hewitt & Hall-Taylor (1970), who considered it to be an intermediate regime between slug and annular flow.
Churn flow occurs due to the breakdown of slug flow caused by flooding of the liquid film in the Taylor bubble (Jayanti and Hewitt, 1992; McQuillan, 1985). It is characterized by large interfacial waves with flow reversal between the waves. The highly oscillatory liquid film in churn flow is accompanied by a continuous gas core containing a considerable amount of entrained liquid. Ultimately, as the gas flow rises, the periodic downward flow of the liquid film ceases and gives rise to a unidirectional annular flow. This flow reversal point is described in terms of the dimensionless superficial gas velocity by (Hewitt and Wallis, 1963; Wallis, 1969) as:
U*_GS = U_GS ρ_G^(1/2) [g d (ρ_L - ρ_G)]^(-1/2) = 1    (1)
Although extensive data for pressure gradient and liquid holdup are available for churn flow and the corresponding transition regions (McQuillan, 1985; Govan, 1991; Barbosa, 2001a), experimental data regarding film thickness and droplet entrainment behaviour are quite scarce. This might be because conventional measurement techniques for film thickness and entrained liquid flow in annular flow could not be applied to churn flow due to its chaotic nature. Wallis (1962) carried out entrained fraction measurements in the churn and churn-annular transition region using a 12.7 mm internal diameter vertical tube fitted with a single axially located sampling probe. The results, shown in Figure 1, indicated a considerable amount of liquid entrained as droplets in churn flow; the entrained fraction passes through a minimum around the churn-annular transition.
Figure 1: Variation of entrained fraction with gas velocity (Wallis, 1962). [Axes: entrained fraction vs. superficial gas velocity (m/s); legend: superficial liquid velocity 0.042, 0.084 and 0.147 m/s (upflow), 0.042, 0.084 and 0.168 m/s (downflow).]
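As a small worked illustration of Equation (1) (the numbers are our own: air and water at roughly atmospheric conditions in the 32 mm tube described later in the paper, with approximate property values), the dimensionless gas velocity and the superficial gas velocity at which it reaches unity can be computed directly:

    import math

    rho_L, rho_G = 998.0, 1.2      # kg/m^3, water and air (approximate)
    g, d = 9.81, 0.032             # m/s^2 and tube internal diameter in m

    def u_g_star(u_gs):
        # Equation (1): dimensionless superficial gas velocity (Wallis parameter)
        return u_gs * math.sqrt(rho_G / (g * d * (rho_L - rho_G)))

    print(u_g_star(10.0))                                    # ~0.62, i.e. churn flow
    u_reversal = math.sqrt(g * d * (rho_L - rho_G) / rho_G)  # gas velocity where U*_GS = 1
    print(u_reversal)                                        # ~16 m/s for these values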
Recently, similar experiments encompassing a wider range of liquid flow rates and pressures were carried out by Barbosa et al. (2002) using an isokinetic probe technique in a 31.8 mm internal diameter, 10.8 m long vertical tube facility. The results are presented in Figures 2 and 3 and show that the liquid entrained fraction decreases with increasing gas velocity in churn flow, passes through a minimum around the transition to annular flow and increases again in the annular flow region. It is also worth noting that the entrained fraction in these regions is a function of liquid mass flux, and extrapolation of these results reveals a very high proportion of liquid entrained as droplets at the onset of churn flow. Barbosa et al. (2002) proposed the following correlation for IEF at the onset of annular flow:
e_0 (in %) = 0.95 + 342.55 (ρ_L ... d ρ_G G_G ...)    (2)
Figure 2: Liquid entrained fraction as a function of the total liquid mass flux: p = 2 bara (Barbosa, 2002).
Figure 3: Liquid entrained fraction as a function of the total liquid mass flux: p = 3.6 bara (Barbosa, 2002).
This paper addresses the problem of IEF at the onset of annular flow in the context of dryout predictions using an annular flow dryout model. Firstly, pictures of churn flow and the corresponding transitions, obtained by an axial view photography method, were analyzed. On the basis of visual observation and the available (Barbosa et al., 2002) data, a new methodology for entrainment rate calculation in churn flow is proposed. Also, the annular flow dryout model is
The water flow is diverted away from the camera window using an arrangement of air jets. The camera can then capture events occurring at the tube cross section in the illuminated plane. The principle of this method is described, for instance, by Hewitt and Whalley (1980). The schematic diagram of LOTUS facility and axial view photography setup is shown in Figure 4 & 5. The test conditions were adjusted to cover the churn and annular flow regions and the transition between them. Visualize position 3rd floor Figure 5: Axial view photography setup at LOTUS Results and Discussion IEF Problem: The problem of initial entrained fraction (IEF) input for the annular flow dryout model at high mass fluxes is a long standing one. Information on the amount of entrained liquid at the transition from chur to annular flow boundary is essential as it is the starting point of integration of entrainment/deposition processes in a heated channel. In order to demonstrate the problem, the annular film dryout model (as described by Hewitt and Govan, 1990 and Paper No Paper No embodied in the Imperial College computer code GRAMP) is applied to the uniform and non-uniform heated tube dryout data of Bennett et al. (1966) and Keeys et al. (1971). The results, as indicated in Figure 6&7, confirmed that at medium to high liquid mass fluxes, dryout location is a strong function of IEF at onset of annular flow. The analysis also revealed that high IEF values, normally in range of 0.65-0.95, are required to predict the dryout data at these high liquid flow rates. 46- A 42- * U % (Run 5358_G=380 kg/m2 /sec) 32 -* % (Run 5273_G=1020 kg/m2/sec) 30- % (Run 5374_G=3850 kg/m2/sec) Initial Entrained Fraction (-) Figure 6: IEF effect on the dryout predictions in uniformly heated tubes dryout data Keeys-141kw_G=720 kg/m2/sec 18 Keeys-191kw_G=2000 kg/m2/sec Initial Entrained Fraction Figure 7: IEF effect on the dryout predictions in non-uniformly heated tubes dryout data The same phenomena could be demonstrated by investigating the effect of IEF on dryout qualities, as illustrated in Figure 8&9. At low liquid flow rates, the IEF effect is minimal but at high flow rates dryout quality increases considerably with increasing IEF at onset of annular flow. Also, in the present study, the only available IEF correlation of Barbosa et.al (2002) was employed initially in the annular flow dryout model to predict the dryout data. The correlation yielded IEF values of 0.2 to 0.3 at the onset of annular flow and these values were much lower than the entrained fractions required to fit the dryout data for medium and high mass fluxes using the annular flow dryout model. A major problem with the Barbosa et al (2002) 7th International Conference on Multiphase Flow ICMF 2010, Tampa, FL USA, May 30-June 4, 2010 model is that it is essentially for near-equilibrium flows. However, in heated channels, the flows will not be in equilibrium. A "memory" of the high entrained fractions at the onset of chum flow will persist since the entrained fraction does not immediately relax to the equilibrium values as the churn flow regime is traversed along the channel as a result of the increase in quality due to evaporation. What are needed, therefore, are local values of entrainment rate and deposition rate in chum flow so that the same type of integration can be carried out in chum flow as is done in the annular flow model for annular flow. )- -Dryout Point- .: - S - - IEF =0.9 -" ---. IEF = 0.7 S- -..... IEF = 0.5 .-----IEF =0.3 .. ...... 
IEF = 0.1 S Onset of annular Flow -005 000 005 010 015 020 025 030 035 040 Figure 8: IEF effect on the dryout quality predictions at high liquid mass fluxes (G = 3850 kg m-2 sec-1) *5 06 Figure 9: IEF effect on the dryout quality predictions at low liquid mass fluxes (G = 380 kg m-2 sec-1) Experimental Results: In view of role of droplet entrainment behaviour in churn and chum-annular transition region, axial view photography experiments were carried out at LOTUS facility. The visual evidence, as shown in Figure 10, also supported the experimental data of Barbosa et.al (2002) i.e. the amount of liquid entrained is high as the (chaotic) chum flow regime is entered and it passes through a minimum around the churn-annular transition, leading to an increase in entrained fraction with increasing gas velocity in annular flow. Paper No 7th International Conference on Multiphase Flow ICMF 2010, Tampa, FL USA, May 30-June 4, 2010 predictions. In the absence of detailed data, it was assumed that the droplet deposition rate in churn flow could be predicted from a model identical to that used for annular flow. Specifically, the model suggested by Hewitt and Govan (1990) (given in Annex-A) is used. Thus, in equilibrium chum flow, it is assumed that: Echurn = Dchurn = (kD )annular C (3) For the equilibrium chum flow data, (such as that of Barbosa et al, 2002), the value of C (the concentration of drops in the gas core in kg/m3, calculated on the assumption of a homogeneous core flow) is known and (kD) .annular can be calculated from the equations given in Annex-A, Thus, Echurn can be calculated. Since C is higher in chum flow a) than would be expected in annular flow (and increases with decreasing gas flow rate) it follows that Echurn follows a similar trend (Figure 11 a,b,c). A simple correlation which gives an approximate representation of the data calculated churn = -8.73U + 9.73 Eannular,local (4) where Eannuar,loca1 is the value of entraimnent rate for annular flow calculated for annular flow from the relationships given in Annex-A and for the local values of gas and liquid flow rates. Thus, Equation 4 is a representation of the enhancement of entrainment rate above that for annular flow in the chum flow region. The enhancement factor becomes unity at UG=1. 0 34 --- GL=22 kg/m2/sec (a) 032. GL=48 kg/m2/sec S030. GL=126kg/m2/se S028. --GL=218 kg/m2/se 024 GL=331 kg/m /se E 022 E 012- a 010-* UG (-) 035 - E 025 2(c) 020 Figure 10: Axial view photography experiments undertaken at superficial liquid velocity ULS = 0.165 m/s in (a) Chum 01 GL23kg/ E GL=23 kg/m2 /sec Flow (UG = 0.461), (b) Chum-to-Annular Flow transition I 010 GL=48 kg/m2/sec (UC = 0.968) and (c) Annular Flow (U = 1.066) 005 GL=128 kg/m /sec S05- GL=215 kg/m2/sec GL=330 kg/m /sec Entrainment Rate in Churn Flow: 000o 2 0304 06 0708091011121314151617 UG (-) In order to predict the IEF at the onset of annular flow, it is necessary to have a model for the chum flow region. Even an approximate model would in principle allow the IEF to be predicted accurately enough to give better dryout Paper No u b -) GL=23 kg/m2/sec 045-- GL=48 kg/m2/sec 040- GL= 128 kg/m2/sec 0 35 ---GL=214kg/m2/sec .3\ GL=302 kg/m2/sec 0 30 - 0 25- 020- \v - 015 \ 0 10 \ 0 00 - UG (-) u 1. 
- 0 34- E 030- 0 28- 5 024- L 020- UG (-) Figure 11: Entrainment rate variation in chum and annular flow (a) p = 2 bar (b) p = 3.5 bar (c) p = 5 bar (d) Liquid Mass Flux = 215 kg m2 sec1 The application of this methodology supported the idea, depending upon different flow conditions, that the variation of entrained droplet flow with length in heated tubes ceases to follow the equilibrium curve; rather, the departure from equilibrium leads to maintenance of high values of entrained fraction at start of annular flow as indicated in Figure 12. Dryout Point S .... ...... S... .... . ", - - q = 5 94E05 W/m' .* q = 7 2E05 W/m2 - - q = 8 8E05 W/m2 -..... q = 1 OE06 W/m2 - - q = 1 2E06 W/m2 Onset of Churn Flow Figure 12: Churn flow methodology (G = 654 kg m2 sec-1) In order to complete the proposed dryout model, an initial value of entrained fraction at the start of chum flow is still required. In order to test the proposed methodology, predictions were made of dryout data for uniformly and 7th International Conference on Multiphase Flow ICMF 2010, Tampa, FL USA, May 30-June 4, 2010 non-uniformly heated tube (Bennett, 1966; Keeys, 1971) with IEF at the start of chum flow chosen to be 0.9. The predictions, as indicated in Figure 13, of dryout data are encouraging i.e. within 20%. Measured Dryout Location Figure 13: Prediction of dryout data (IEF for chum flow = 0.9) Work is continuing on searching for an improved correlation for the IEF in chum flow. However, the approximate approach taken has already yielded much more realistic results, particularly for high mass fluxes. In the present study, the behaviour of droplet entrainment in churn flow is analysed with particular reference to the prediction of the value of entrained fraction at the start of annular flow. For this purpose, new equation for entrainment rate in chum flow is proposed, which allows the dryout model to start integration of entrainment and deposition processes from the start of chum flow rather than the start of annular flow. The prediction of dryout location, for both uniform and non-uniform heated tubes cases, by employing the new proposed methodology led to improved results i.e. within 20%. This work was carried out as part of the TSEC programme KNOO, and we are grateful to the EPSRC for funding under Grant EP/C549465/1. One author (M. Ahmad) would like to acknowledge the Higher Education Commission (HEC) of Pakistan and Pakistan Institute of Engineering & Applied Sciences (PIEAS) for funding his PhD studies at Imperial College London. Barbosa J.R., Govan A.H., Hewitt G.F., Visualisation and modelling studies of chum flow in vertical pipe, Int. J. Multiphase Flow, Vol. 27, Issue 12, pp. 2105-2127, (2001a) Barbosa J.R., Hewitt, G.F., Konig G. and Richardson S.M., Liquid entrainment, droplet concentration and pressure gradient at the onset of annular flow in a vertical pipe, Int. J. Multiphase Flow, Vol. 28, pp943-961, (2002) Barbosa, J.R., Richardson, S., Hewitt, G.F, Churn flow: (d) P= 2 bar P= 3 5 bar P= 5 bar Paper No myth, magic and mystery. In: 39th European Two-Phase Flow Group Meeting, Aveiro, Portugal, 18-20 June, (200 lb) Bennett A.W., Hewitt G.F, Kearsey H.A., Keeys, R.K.F, Heat transfer to steam water mixtures flowing in uniformly heated tubes in which the critical heat flux has been exceeded. AERE-R-5373, (1966). Govan, A.H., Hewitt, G.F., Richter, H.J., Scott, A., Flooding and churn flow in vertical pipes. Int. J. Multiphase Flow 17, 27-44 (1991) Hewitt G.F, Flow regimes: Transitions and flow behaviour. 
Multiphase Science and Technology, 15:131-143 (2003)
Hewitt G.F. and Govan A.H., Phenomena and prediction in annular two-phase flow: Invited Lecture, Symposium on Advances in Gas-Liquid Flows, Dallas, November 1990 (Winter Annual Meeting of ASME), ASME FED-Vol. 99 / HTD-Vol. 155, pp. 41-56 (1990)
Govan A.H. and Hewitt G.F., Phenomenological modelling of non-equilibrium flows with phase change. Int. J. Heat Mass Transfer, 33:229-242 (1990)
Hewitt, G.F., Hall-Taylor, N.S., Annular Gas-Liquid Flow. Pergamon Press, Oxford (1970)
Hewitt G.F. and Jayanti S., Prediction of the slug-to-churn flow transition in vertical two-phase flow. Int. J. Multiphase Flow, Vol. 18, pp. 847-860 (1992)
Hewitt, G.F., Wallis, G.B., Flooding and associated phenomena in falling film in a vertical tube. In: Proceedings of Multi-Phase Flow Symposium, Philadelphia, PA, 17-22 November, pp. 62-74 (1963)
Hewitt, G.F. and Whalley, P.B., Advanced optical instrumentation methods. Int. J. Multiphase Flow, Vol. 6, No. 12, pp. 136-156 (1980)
Keeys, R.F.K., Ralph, J.C. & Roberts, D.N., Post burnout heat transfer in high pressure steam-water mixtures in a tube with cosine heat flux distribution. AERE-R 6411 (1971)
McQuillan, K.W., Whalley, P.B., Hewitt, G.F., Flooding in vertical two-phase flow. Int. J. Multiphase Flow 11, 741-760 (1985)
Taitel, Y., Barnea, D., Dukler, A.E., Modelling flow pattern transitions for steady upward gas-liquid flow in vertical tubes. AIChE J. 26, 345-354 (1980)
Wallis, G.B., The onset of droplet entrainment in annular gas-liquid flows. General Electric Report No. 62GL127 (1962)
Wallis, G.B., One-Dimensional Two-Phase Flow. McGraw-Hill, New York (1969)
Zuber, N., Findlay, J.A., Average volumetric concentration in two-phase flow systems. J. Heat Transfer 87, 453-468 (1965)
Annex-A: Hewitt and Govan droplet entrainment and deposition correlation
A method for calculating the entrainment and deposition rates has been proposed by Hewitt & Govan (1990). These are as follows. The entrainment rate is
E = 0, if G_LF <= G_LFC
E = 5.75 x 10^-5 G_G [(G_LF - G_LFC)^2 d ρ_L / (σ ρ_G^2)]^0.316, if G_LF > G_LFC    (A-1)
where G_LFC is a critical liquid film mass flux given by
G_LFC = (μ_L / d) exp[5.8504 + 0.4249 (μ_G / μ_L)(ρ_L / ρ_G)^(1/2)]    (A-2)
The droplet deposition rate is expressed as
D = k_D C    (A-3)
where C is the concentration of droplets in the core,
C = ρ_L ρ_G G_LE / (ρ_L G_G + ρ_G G_LE)    (A-4)
and the droplet deposition transfer coefficient, k_D, is given by
k_D = 0.18 [σ / (ρ_G d)]^(1/2), if C/ρ_G < 0.3
k_D = 0.083 [σ / (ρ_G d)]^(1/2) (C/ρ_G)^(-0.65), if C/ρ_G >= 0.3    (A-5)
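A compact sketch of the Annex-A correlations together with the churn-flow enhancement of Equation (4), as reconstructed above from the partly garbled scan (so the coefficients should be checked against Hewitt & Govan, 1990, before any real use). The property values in the demo are rough air-water numbers of our own choosing, not data from the paper.

    import math

    def g_lfc(mu_l, mu_g, rho_l, rho_g, d):
        # Critical film mass flux below which entrainment ceases (A-2)
        return (mu_l / d) * math.exp(5.8504 + 0.4249 * (mu_g / mu_l) * math.sqrt(rho_l / rho_g))

    def entrainment_annular(g_g, g_lf, rho_l, rho_g, sigma, d):
        # Hewitt-Govan annular-flow entrainment rate (A-1);
        # water/air viscosities below are assumed values
        g_crit = g_lfc(1.0e-3, 1.8e-5, rho_l, rho_g, d)
        if g_lf <= g_crit:
            return 0.0
        return 5.75e-5 * g_g * ((g_lf - g_crit) ** 2 * d * rho_l / (sigma * rho_g ** 2)) ** 0.316

    def deposition(c, rho_g, sigma, d):
        # Deposition rate D = kD * C with kD from (A-5)
        k = 0.18 * math.sqrt(sigma / (rho_g * d))
        if c / rho_g >= 0.3:
            k = 0.083 * math.sqrt(sigma / (rho_g * d)) * (c / rho_g) ** -0.65
        return k * c

    def entrainment_churn(e_annular_local, u_g_star):
        # Equation (4): enhancement of the local annular-flow rate in churn flow;
        # the factor falls to unity at U*_G = 1
        return (9.73 - 8.73 * u_g_star) * e_annular_local

    rho_l, rho_g, sigma, d = 998.0, 2.4, 0.072, 0.032   # ~2 bar air-water, 32 mm tube
    e_ann = entrainment_annular(g_g=60.0, g_lf=120.0, rho_l=rho_l, rho_g=rho_g, sigma=sigma, d=d)
    print(e_ann)                                  # annular-flow entrainment rate, kg/m^2/s
    print(entrainment_churn(e_ann, u_g_star=0.8)) # enhanced churn-flow value
    print(deposition(0.5, rho_g, sigma, d))       # deposition rate for C = 0.5 kg/m^3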
{"url":"http://ufdc.ufl.edu/UF00102023/00408","timestamp":"2014-04-18T16:18:41Z","content_type":null,"content_length":"45615","record_id":"<urn:uuid:90eb8e8f-2d88-4cfb-aa13-433c09904cf2>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00333-ip-10-147-4-33.ec2.internal.warc.gz"}
Selected Research Papers of Jonathan Valvano Nonlinear Conductance-Volume Relationship for Murine Conductance Catheter Measurement System The conductance catheter system is a tool to determine instantaneous left ventricular volume in vivo by converting measured conductance to volume. The currently adopted conductance-to-volume conversion equation was proposed by Baan, and the accuracy of this equation is limited by the assumption of a linear conductance-volume relationship. The electric field generated by a conductance catheter is nonuniform, which results in a nonlinear relationship between conductance and volume. This paper investigates this nonlinear relationship and proposes a new nonlinear conductance-to-volume conversion equation. The proposed nonlinear equation uses a single empirically determined calibration coefficient, derived from independently measured stroke volume. In vitro experiments and numerical model simulations were performed to verify and validate the proposed equation. Evidence of Time-Varying Myocardial Contribution by In Vivo Magnitude and Phase Measurement in Mice Cardiac volume can be estimated by a conductance catheter system. Both blood and myocardium are conductive, but only the blood conductance is desired. Therefore, the parallel myocardium contribution should be removed from the total measured conductance. Several methods have been developed to estimate the contribution from myocardium, and they only determine a single steady state value for the parallel contribution. Besides, myocardium was treated as purely resistive or mainly capacitive when estimating the myocardial contribution. We question these assumptions and propose that the myocardium is both resistive and capacitive, and its contribution changes during a single cardiac cycle. In vivo magnitude and phase experiments were performed in mice to confirm this hypothesis. Thermal Conductivity and Diffusivity of Biomaterials Measured with Self-Heated Thermistors Thermal Properties measured with self-heat thermistors, and includes theory, instrumentation, calibration, and results measured from 3 to 45 C. International Journal of Thermophysics, 6 (3), 301-311, 1985. A Small Artery Heat Transfer Model for Self-Heated Thermistor Measurements of Perfusion in the Kidney Cortex A small artery model (SAM) for self-heated thermistor measurements of perfusion in the canine kidney is developed based on the anatomy of the cortex vasculature. In this model interlobular arteries and veins play a dominant role in the heat transfer due to blood flow. Effective thermal conductivity, kss , is calculated from steady state thermistor measurements of heat transfer in the kidney cortex. This small artery and vein model of perfusion correctly indicates the shape of the measured kss versus perfusion curve. It also correctly predicts that the sinusoidal response of the thermistor can be used to measure intrinsic tissue conductivity, km , in perfused tissue. Although this model is specific for the canine kidney cortex, the modeling approach is applicable for a wide variety of biologic tissues. Journal of Biomechanical Engineering. 116, 71-78, Feb. 1994. Bioheat Properties of Biomaterials The transport of thermal energy in living tissue is a complex process involving multiple phenomenological mechanisms including conduction, convection, radiation, metabolism, evaporation, and phase change. The equilibrium thermal properties presented in this chapter were measured after temperature stability had been achieved. 
2-D Finite Difference Modeling of Microwave Heating in the Prostate
Accurate prediction of temperatures in the prostate undergoing thermally based treatments is crucial to assessing efficacy and safety. A two-dimensional transient finite difference model for predicting temperatures in the prostate undergoing microwave heating via a transurethral fluid-cooled catheter is presented. Unconditional stability and good accuracy are achieved by using the alternating direction implicit method. A transverse section of the prostate centered at the urethra is modeled in cylindrical coordinates. The model geometry consists of a hollow silicone cylinder, representing the catheter, surrounded by multiple regions of tissue. Cold fluid flowing through the catheter minimizes the temperature in the periurethral tissue. This flow is modeled as a convective boundary condition at the surface between the catheter lumen and wall. The outer surface of the tissue is assumed to remain at baseline temperature. Microwave heating has both a radial and an angular dependence. In order to maximize the heat delivered to the target tissue, the microwave field emitted from the transurethral catheter focuses heat away from the rectum. Different perfusion situations within the prostate are simulated. Pennes' perfusion term is assumed to model the effect of perfusion on heat transfer. Results of the numerical model are compared to phantom experiment results. The model parameters that provided the best fit for the phantom were then extended to model the canine prostate.

Treatment of Benign Prostatic Hyperplasia
The treatment of benign prostatic hyperplasia (BPH) has implications which affect the majority of the adult male population. Although benign compared to prostate cancer, BPH produces clinical symptoms that can dramatically alter the quality of life. The hyperplastic tissue can cause constriction of the urethra and thus affect voiding of urine. Factors to consider for thermally based treatments of the prostate include minimization of thermal injury to the urethra and rectum, and maximal delivery of thermal energy to the target tissue. Minimizing the temperature rise in the urethra allows for minimal or no anesthesia, and has been shown to reduce post-operative complications. Protection of the rectal wall is imperative since injury can lead to clinical complications as severe as a rectal fistula. Due to its location immediately dorsal to the prostate, the ventral aspect of the rectal wall is susceptible to overheating when a uniform radiating microwave heat source is applied transurethrally to treat the prostate.

Interactive 6811 Simulator for Microcontroller Software Interfacing
This paper presents a microcontroller hardware/software simulator which is used in a laboratory setting to educate undergraduate electrical engineering students. The specific objectives of the course include microcomputer architecture, assembly language programming, data structures, modular programming techniques, debugging strategies, hardware/software interfaces and embedded microcontroller applications. In this paper, I present both basic concepts and specific implementations which create an effective learning environment for my students. In particular, I wrote a DOS-based interactive simulator for the Motorola 6811. The application runs on a standard IBM-PC compatible with minimal requirements: Intel 386DX, 640K RAM, VGA color monitor, and 2 Megabytes of hard drive space. The student develops Motorola 6811 software which is cross-assembled and simulated.
The major features of this interactive programming environment include user-configurable interactive external I/O devices, multiple display windows, extensive information available describing the activity both inside and outside the processor, elaborate protection against and explanation of programming errors, effective mechanisms for setting breakpoints, and user-defined scan points which allow the user program to interact with the graphics display.

Analysis of the Weinbaum-Jiji Model of Blood Flow in the Canine Kidney Cortex for Self-Heated Thermistors
The Weinbaum-Jiji equation can be applied to situations where: 1) the vascular anatomy is known; 2) the blood velocities are known; 3) the effective modeling volume includes many vessels; and 4) the vessel equilibration length is small compared to the actual length of the vessel. These criteria are satisfied when steady-state heated thermistors are placed in the kidney cortex. In this paper, the Weinbaum-Jiji bioheat equation is used to analyze the steady-state response of four different-sized self-heated thermistors in the canine kidney. This heat transfer model is developed based on actual physical measurements of the vasculature of the canine kidney cortex. In this model, parallel-structured interlobular arterioles and venules with a 60 µm diameter play the dominant role in the heat transfer due to blood flow. Continuous power is applied to the thermistor, and the instrument measures the resulting steady-state temperature rise. If an accurate thermal model is available, perfusion can be calculated from these steady-state measurements. The finite element simulations correlate well in shape and amplitude with experimental results in the canine kidney. In addition, this paper shows that the Weinbaum-Jiji equation cannot be used to model the transient response of the thermistor because the modeling volume does not include enough vessels and the vessel equilibration length is not small compared to the actual length of the vessel. Journal of Biomechanical Engineering, 116, 201-207, May 1994.

Modeling of Temperature Probes in Convective Media
This paper discusses the dynamic behavior of probes embedded in convective media during temperature measurements. Under certain conditions the temperature measured by a probe can be written as the convolution of the true temperature with the impulse response of the probe. We present a general method to find the natural response of any kind of probe, and then present results for a more realistic 1-D model of the thermistor probe in a thermodilution catheter. The results of these analyses can be applied to enhance the dynamic response of temperature measurements made by probes in convective media. 17th Southern Biomedical Engineering Conference, Feb. 7, 1998.

Measurement of the Dynamic Response of a Contact Probe Thermosensor in Conductive Media
This paper describes a method for characterizing the step response of a thermistor probe embedded in a low-conductivity solid. We define the step response as the dynamic response of a finite-size thermosensor instantaneously plunged into an infinite homogeneous conductive solid. The final goal of this research is to evaluate and enhance the time-dependent response of contact-type thermosensors. We will use the step response as the parameter for optimizing the probe's time-dependent behavior. Although our research focuses on thermistors, the results could be applied to other contact-type sensors such as thermocouples and RTDs.
Methodology for Modeling the Response of Temperature Probes in Convective Media
This paper discusses the dynamic behavior of probes embedded in convective media during temperature measurements. It is shown that under certain conditions the temperature measured by a probe can be written as the convolution of the true temperature with the impulse response of the probe. We present a general method to find the natural response of any kind of probe, and then present results for a more realistic 1-D model of the thermistor probe in a thermodilution catheter. The results of these analyses can be applied to enhance the dynamic response of temperature measurements made by probes in convective media.

Thermal Properties by Kenneth Holmes
The following physiological properties were compiled by Professor Kenneth R. Holmes <krholmes@ux1.cso.uiuc.edu> and were published in part previously. The tabulation includes values for both the native thermal conductivity of biomaterials (Appendix A) and the blood perfusion rates for specific tissues and organs (Appendix B). Original sources are documented in the dedicated list of references at the end of each appendix. Knowledge of the perfusion behavior of tissues is important in that the flow of blood can have a direct quantitative effect on the temperature distribution within living tissue.

Real Time Data Acquisition and Control
This paper presents a laboratory environment for the development of real-time data acquisition and control on the IBM-PC platform. The laboratory station involves the integration of low-cost computer technology with powerful software components which empower the student to efficiently and effectively construct real-time systems. The software base integrates an editor, a spreadsheet, and a real-time programming environment built around Druma FORTH. We have written multiple FORTH libraries to assist the student in the translation of engineering concepts into creation. Real-time events are managed using a rich set of FORTH software routines which guarantee that time-critical software is executed on schedule. The real-time color-VGA graphics library includes many types of windows. We have developed an extendible debugging tool called PROSYM (PROfiler and SYMbolic debugger). PROSYM provides a simple set of primitives with a high expressive power that may be used singly or may be combined to construct customized debugging tools. In addition to providing basic debugging functions, PROSYM supports an event-action model of debugging. We have evaluated this development system on the full range of PC platforms from the original PC-XT to the newest 486 systems. The environment has been used for two years by Biomedical and Electrical Engineering graduate students performing both teaching and research projects. Gulf-Southwest Section of the American Society of Engineering Education, Austin, pp. 597-604, 1993.
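The convolution statement in the two probe-modeling abstracts above can be written out explicitly (standard linear-systems notation; this note is an editorial addition rather than text from the papers):

    T_m(t) = (h * T)(t) = \int_0^{t} h(t - \tau)\, T(\tau)\, d\tau

where T is the true temperature, T_m the measured temperature, and h(t) the impulse response of the probe. Once h is characterized, for example from the step response described above (for a linear time-invariant probe, h is the time derivative of the step response), the measured signal can in principle be deconvolved to sharpen the probe's dynamic response.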
{"url":"http://users.ece.utexas.edu/~valvano/research/right.html","timestamp":"2014-04-16T07:13:15Z","content_type":null,"content_length":"19096","record_id":"<urn:uuid:74fedc8f-e0ad-4300-831a-28fcdc4fa1f2>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00414-ip-10-147-4-33.ec2.internal.warc.gz"}
physics grade 11 (vector) Number of results: 160,423 A line in space has the vector equation shown here. Fill in the blanks to find the position vector to the fixed point on the line that appears in the equation: vector r=(5+(6/11)d)vector i + (3+(9/ 11)d)vector j + (7+(2/11)d)vector k vector v= _____ vector i + _____ vector j + ... Wednesday, January 18, 2012 at 7:07pm by Grace Let vector A = 4i^ + 4j^, vector B = -2i^ - 5j^, and vector F = vector A - 5(vector B). a) Write vector F in component form. vector F = ? b) What is the magnitude of vector F? F = ? c) What is the direction of vector F? theta = ? Thank you! Saturday, October 8, 2011 at 11:24pm by Sara Let vector A = 4i^ + 4j^, vector B = -2i^ - 5j^, and vector F = vector A - 5(vector B). a) Write vector F in component form. vector F = ? b) What is the magnitude of vector F? F = ? c) What is the direction of vector F? theta = ? Thank you! Monday, October 10, 2011 at 6:49pm by Sara physics grade 11 (vector) break the NE into N, and E components. Then add to the W (W=-E) Recombined into one vector. Sunday, October 17, 2010 at 2:17pm by bobpursley cartesian vectors 2 let vector U = (vector u1, vector u2) vector V = (vector v1, vector v2) and vector W = (vector w1, vector w2) Prove each property using Cartesian vectors: a) (vector U+V)+W = vector U+(v+W) b) k (vector U+V) = k vector U + k vector V c) (k+m)vector U = k vector U + m vector U Saturday, July 31, 2010 at 1:18pm by Shaila College Physics Vector Algebra let A vector=6ihat+3jhat, B vector= -3ihat-6jhat D vector=A vector-B vector what is D vector? Thank You! Tuesday, September 13, 2011 at 8:37pm by Sher vector a+ vector b= 2i Vector a+ vector b=4j Then the angle between vector a and vector b is ? Plz.solve Friday, June 10, 2011 at 1:34am by marley I have: A. vector n= 3*vector i + 2*vector j + 5* vector k B. vector v= 7*vector i + 10*vector j + vector k C. 52.46 degrees And I'm not quite sure for D... Wednesday, January 18, 2012 at 8:02pm by Grace College Physics Take the vector from O to B and subtract the vector from O to A (which is the same as adding the vector from A to O). The result will be the displacement vector from A to B. The vector components of the vector from A to b will tell you the direction. Divide the displacement ... Saturday, September 20, 2008 at 11:56pm by drwls Math - vectors In the product F(vector)=q(V(vector)xB(vector), take q = 4, V(vector)= 2.0i + 4.0j + 6.0k and F(vector)= 136i -176j + 72k. What then is B(vector) in unit-vector notation if Bx = By? Wednesday, March 28, 2007 at 6:46pm by sam Vector vector A has a magnitude of 27 units and points in the positive y-direction. When vector vector B is added to vector A , the resultant vector vector A + vector B points in the negative y-direction with a magnitude of 18 units. Find the magnitude of vector B ? Thursday, February 10, 2011 at 5:42pm by jj University Physics If vector B is added to vector C = 6.7 + 6.9, the result is a vector in the positive direction of the y axis, with a magnitude equal to that of vector C. What is the magnitude of vector B? Tuesday, September 6, 2011 at 3:43pm by Phil Vector vector A has a magnitude of 6.90 units and makes an angle of 46.5° with the positive x-axis. Vector vector B also has a magnitude of 8.00 units and is directed along the negative x-axis. Using graphical methods find the following. (a) The vector sum vector A + vector B... 
Wednesday, September 29, 2010 at 11:22am by Bill Muary When two vectors vector A and vector B are drawn from a common point, the angle between them is phi. If vector A and vector B have the same magnitude, for which value of phi will their vector sum have the same magnitude as vector A or vector B? Thursday, August 26, 2010 at 10:46pm by Ami 1. a centripetal-acceleration addict rides in uniform circular motion with period T=2s and radius r=3m. at t1 his acceleration vector= 6(m/s^2)i + -4(m/s^2)j. At that instant, what are the values of vector b (dot) vector a and vector r (dot) vector a All I could figure out was... Tuesday, June 2, 2009 at 10:08pm by physics If the expression for a given vector is Vector A-Vector B=(4-5)i^+(3+2)j^ and the magnitude of the difference of the vector is equal to 5.1. What is the direction of the difference Vector A - Vector B. (in the counterclockwise and +x direction) Thursday, April 7, 2011 at 12:04pm by Allison The three vectors have magnitude a =3m, b= 4m and c=10m . On the x axis is vector a at 0 degrees then 30 degrees is vector b then 120 degrees is vector c. If vector c =p*vector a + q*vector b .What are the values of p? and q? I am expose to get -6.67 for p and then 4.33 for q... Friday, October 6, 2006 at 7:06pm by Jenny Calc: PLEASE HELP! There is something wrong with what you are trying to prove. "Vector a" is a vector, but (Vector b • Vector c) is a scalar (just a number). I am assuming you really meant to write a dot product when you wrote Vector b • Vector c. You cannot add a scalar to a vector. Tuesday, July 1, 2008 at 2:51pm by drwls For (a), draw and solve a vector diagram. Air velocity + Airspeed vector = Ground speed vector For (b) Multiply the Ground speed vector by 2 hours, and the answer will be the displacement vector. I will be glad to critique your work. Sunday, October 28, 2007 at 3:29pm by drwls Vector has a magnitude of 122 units and points 37.0 ° north of west. Vector points 64.0 ° east of north. Vector points 17.0 ° west of south. These three vectors add to give a resultant vector that is zero. Using components, find the magnitudes of (a) vector and (b) vector . Monday, January 27, 2014 at 8:57pm by lawrenvce The drawing shows a force vector vector F that has a magnitude of 600 newtons. It lies at alpha = 67° from the positive z axis, and beta = 31° from the positive x axis. (that's the diagram given) webassign. (delete this space) net/CJ/p1-39alt.gif (a) Find the x component of ... Thursday, September 10, 2009 at 9:53pm by RE You are given vectors A = 5.0i - 6.5j & B = -3.5i + 7.0j. A third vector C lies on the xy-plane. Vector C is perpendicular to vector A, & the scalar product of C with B is 15.0. From this information, find the components of vector C. Sunday, June 23, 2013 at 6:19am by Cathy The acceleration of a particle moving only on a horizontal xy plane is given by a=5ti+6tj, where a is in meters per second-squared and t is in seconds. At t=0, the position vector r=(19.0m)i+(39.0m)j locates the particle, which then has the velocity vector v=(5.70m/s)i+(3.40m/... Saturday, March 15, 2014 at 11:16pm by Stacey Suppose vector A=3i-2j, B= -i-4j and vector C are three vectors in xy plane with the property that vectors A+B-C. What is the unit vector along vector C? Saturday, January 28, 2012 at 7:25am by Abdela said yimer physics (vector) the resultant of two vector A and B pependicular to vector A and its magnitude is equalto half the magnitude of vector b what is the angle between them . How? 
Monday, June 6, 2011 at 11:34pm by shwet For the three vectors A+B+C= -1.4i. Vector A is 4i and vector C is -2j. Write vector B as components separated by a comma. What is the magnitude of vector B? How many degrees above the negative x-axis does vector B point? Measure the angle clockwise from the negative x-axis. Tuesday, September 11, 2012 at 1:06am by bobby i dont see this in my book in a coordinate system, a vector is oriented at angle theta with respect to the x-axis. The y component of the vector equals the vector equals the vector's magnitude multiplied by which trigonometric function? Wednesday, February 18, 2009 at 10:17am by y912f The displacement vector required to reach the station is the original vector to the station, minus the vector of their actual initial travel. This is a vector subtraction ptoblem. You will have to do the steps yourself. We are not here to do your homework for you. Wednesday, August 18, 2010 at 10:36pm by drwls physics- TEST TMRW PLS HELP! In general The moment vector about any point O of force vector F is M = R x F where R is the vector from point O to a (any) point on vector F and F is the force vector Sunday, December 9, 2012 at 7:11pm by Damon physics- TEST TMRW PLS HELP! In general The moment vector about any point O of force vector F is M = R x F where R is the vector from point O to a (any) point on vector F and F is the force vector Sunday, December 9, 2012 at 7:11pm by Damon The components of vector A are Ax and Ay (both positive), and the angle that it makes with respect to the positive x axis is θ. Find the angle θ if the components of the displacement vector A are (a) Ax = 11 m and Ay = 11 m, (b) Ax = 19 m and Ay = 11 m, and (c) Ax = ... Sunday, January 30, 2011 at 8:04pm by thaokieu physics [vectors] The concept i get, but somehow i just can't execute this problem, please help me! You are given vectors A = 5.5 6.2 and B = - 3.7 7.4 . A third vector C lies in the xy-plane. Vector C is perpendicular to vector A and the scalar product of C with B is 19.0. I need x and y ... Sunday, August 30, 2009 at 10:45pm by lina maths vector if a vector+ b vector+ c vector =0, & a vector=3, b=5, c=9, find angle between a & b vector Saturday, March 16, 2013 at 11:25am by Anonymous physics grade 11 (vector) An airplane is flying 400 km/h west. There is a wind blowing 100 km/h NE. What is the airplane's actual speed and heading? Sunday, October 17, 2010 at 2:17pm by lawrence Draw the vector diagram. Draw the N vector. THen, from the top of the N vector, sketch the flow of water vector. Now, the resultant is from the bottom of the N vector to the tip of the water vector. Using the law of cosines: R^2=B^2 + W^2 -2BWcos60 and the direction, from the ... Sunday, September 4, 2011 at 9:51am by bobpursley A stone is shown at rest on the ground. A. The vector shows the weight of the stone. Complete the vector diagram showing another vector that results in zero net force on the stone. B. What is the conventional name of the vector you have drawn? Tuesday, July 6, 2010 at 8:44am by Anonymous A stone is shown at rest on the ground. A. The vector shows the weight of the stone. Complete the vector diagram showing another vector that results in zero net force on the stone. B. What is the conventional name of the vector you have drawn? Tuesday, July 6, 2010 at 8:44am by Anonymous A stone is shown at rest on the ground. A. The vector shows the weight of the stone. Complete the vector diagram showing another vector that results in zero net force on the stone. B. 
What is the conventional name of the vector you have drawn? Tuesday, July 6, 2010 at 8:44am by Anonymous Break the second force into a S and a W vector. The S subtracts from the original N vector. With that resultant, add to the W vector. Wednesday, October 7, 2009 at 8:48pm by bobpursley The x-component of vector A is -25.0 m and the y-component is +40.0 m. a) what is the magnitude of vector A? b) What is the angle between the direction of vector A and the positive direction of vector x? can you please help me with the steps to accomplishing these? Sunday, September 13, 2009 at 12:47pm by Mallory Data: theta_1 = 44.9 theta_2 = 146.1 A = 4.8 cm B = 8.7 cm A)What is the x component of vector A? B)What is the y component of vector A? C)What is the x component of vector B? D)What is the y component of vector B? E)What is the magnitude of vector (A + B)? Saturday, January 25, 2014 at 10:46am by Sam if vector a is fixed but vector b can be rotated in any direction. what should be the angle of vector b to give teh maximum resultant sum Monday, December 10, 2012 at 6:53pm by Anonymous We need the direction of their line of intersection.... double the 2nd plane equation, then add that to the 1st x+2y-3z = -6 6x - 2y + 4z = 8 7x + z = 2 z = 2-7x let x=1 then z = -5 back in the 1st 1 + 2y + 15 = -6 y = -11 -----------> point (1,-11,-5) let x = 0 then z = 2 ... Monday, May 7, 2012 at 8:28pm by Reiny A vector has components Ax = 44 m and Ay = 30 m. Find the length of the vector vector A and the angle it makes with the x axis. in meters and in degrees Wednesday, January 28, 2009 at 2:46am by kj Calc: PLEASE HELP! How can this be proven! I have tried so many ways! PLEASE help! Verify using an example that Vector a + (Vector b • Vector c) = (Vector a • Vector b) + Vector c? Explain your reasoning. Tuesday, July 1, 2008 at 2:51pm by Sick Sick 1.Find vectors v*w if vector v = 5 vector i – 4 vector j + 4 vector k and vector w = –6 vector i + 3 vector j – 2 vector k. 2.Find vectors v*w if vector v = -3 vector i – 4 vector j - 8 vector k and vector w = 2 vector i + 6 vector j + 4 vector k. Thursday, August 30, 2012 at 4:20pm by batmo Not the average of the two - you need to find the vector which, when added to the windspeed vector, will give the vector to the desired destination. So you are really subtracting vectors. Try Monday, September 6, 2010 at 11:15pm by kristen 4. When writing a vector as the sum of two vector components, the first vector to calculate is just the projection of a vector onto another vector. True 5. Work represents a scalar value. True Monday, June 17, 2013 at 4:53pm by mysterychicken Calculus - Dot Product consider a rhombus ABCD a) find the resultant of vector AB + vector AD and vector AB - vector AD? (cosine rule) b) What will be the value of the dot product of vector AB + vector AD and vector AB - vector AD always be? (always zero) c) Is this value of the dot product of ... Monday, August 9, 2010 at 9:06am by Shaila In a coordinate system, a vector is oriented at angle theta with respect to the x-axis. They y component of the vector equals the vector's magnitude multiplied by which trig function? ... please Wednesday, February 4, 2009 at 9:27pm by y912f In a coordinate system, a vector is oriented at angle theta with respect to the x-axis. They y component of the vector equals the vector's magnitude multiplied by which trig function? ... please Wednesday, February 4, 2009 at 9:28pm by y912f I will be glad to critique your work. 
We are getting more physics questions than we can handle in a timely manner, and have to be selective about who receives help. . Your information on the first cyclist establishes the vector distance of campground from the starting point. ... Thursday, September 16, 2010 at 10:55pm by drwls Which of the following statements is a true statement? A. A vector can have positive or negative magnitudes. B. A vector's magnitude cannot be more than the magnitude of one of its components. C. If the x-component of a vector is smaller than its y-component then that vector ... Friday, December 6, 2013 at 11:45am by laura A diagram has vector A at 30 degrees above the positive x axis. length = 3 m Vector B is ON the negative x-axis. length = 1 m Vector C is ON the negative y-axis. Length is 2 m What is the direction in degrees of vector a + vector B + vector C? Sunday, January 29, 2012 at 3:39pm by Sushmitha AP physics Vector vector A has a magnitude of 7.60 units and makes an angle of 48.5° counter-clockwise from the positive x-axis. Vector vector B has a magnitude of 8.00 units and is directed along the negative Sunday, August 23, 2009 at 12:51am by emeka Dot Product Verify using an example that Vector a • (Vector b • Vector c) = (Vector a • Vector b) • Vector c is not true. Explain your reasoning both numerically and by using the definition of the dot product. I am very confused as to what this means?! Thanks for the assistance. Tuesday, July 1, 2008 at 2:20pm by Putnam Vactors 12 Let A and B be any two points on the plane and O be the origin. Prove that vector AB = vector OB - vector OA. Let x be any other point on the plane prove that vector AB = vector xB - vector xA. Tuesday, July 20, 2010 at 2:36am by Shaila partscoretotalsubmissions1--10/12--10/13--10/14--10/1----4--The tail of a vector is fixed to the origin of an x, y axis system. Originally the vector points along the +x axis. As time passes, the vector rotates counterclockwise. Describe how the sizes of the x and y components... Monday, May 7, 2007 at 10:23am by Panda A vector of magnitude 10 unit combine in the nurth direction combine with another vector to give a zero resultant what is the other vector? Wednesday, May 11, 2011 at 6:04am by Mashkur The acceleration of a particle moving only on a horizontal xy plane is given by , where is in meters per second-squared and t is in seconds. At t = 0, the position vector locates the paticle, which then has the velocity vector . At t = 3.60 s, what are (a) its position vector ... Monday, February 3, 2014 at 12:10pm by Catherine Mathematics - Dot Product Consider rhombus ABCD a) Find the resultants of vector AB + vector AD and vector AB - vector AD b) What will the value of (vector AB + vector AD) dot product (vector AB - vector AD) always be? Explain. c) Is the value of (vector AB + vector AD) dot product (vector AB - vector ... Friday, August 20, 2010 at 12:50am by Suhani Mamthematics - Vectors a) If vector u and vector v are non-collinear vectors show that vector u, vector u cross product vector v and (vector u cross product vector v) cross product vector u are mutually othogonal. b) Verify this property using vectors collinear with the unit vector, i, j and k c) ... Friday, August 20, 2010 at 12:38am by Suhani What does this mean? For any vector Vector a find Vector a × Vector a. Explain why (this is cross product stuff). Thanks Tuesday, July 1, 2008 at 12:44pm by Derek Calculus and vectors Vector AB is a vector whose tail is at (-4,2) and whose head is at (-1,3). 
Calculate the magnitude of vector AB Determine the coordinates of point D on vector CD, if C (-6,0) and vector CD= vector AB. Please I need some help. Is there a formula to solve this? Pls help Tuesday, February 5, 2013 at 4:11pm by Ted Calculus and vectors Vector AB is a vector whose tail is at (-4,2) and whose head is at (-1,3). Calculate the magnitude of vector AB Determine the coordinates of point D on vector CD, if C (-6,0) and vector CD= vector AB. Please I need some help. Is there a formula to solve this? Pls help Tuesday, February 5, 2013 at 4:11pm by Ted 1. The problem statement, all variables and given/known data Find the vector product of Vector A cross Vector B(expressed in unit vectors) of the two vectors. What is the magnitude of the vector product? 2. Relevant equations Vector A= 5.00i + 2.00j Vector B= 3.00i - 1.00j 3... Thursday, September 17, 2009 at 10:18pm by RAYMOND Compute the velocity change vector during the 30 seconds and divide it by 30 for the average acceleration vector. Multiply that by the mass for the net force vector, Fnet. You know two of the three vectors that make up Fnet. Use that to determine the third (Wind) force vector Sunday, February 12, 2012 at 12:43am by drwls make a sketch by completing the usual parallelogram , if U+V = R R^2 = 11^2 + 9^2 - 2(9)(11)cos 137° = 346.808 R = 18.62 now use the Sine Law to findØ, the angle between V and R. sinØ/9 = sin137°/ 18.62 sinØ = .329595 Ø = 19.24° with vector V, then 23.76° with vector U Sunday, July 24, 2011 at 7:45pm by Reiny 24. In a 2-dimensional Cartesian system, the y-component of a vector is known, and the angle between vector and x-axis is known. Which operation is used to calculate the magnitude of the vector? (taken with respect to the y-component) Friday, June 11, 2010 at 9:52pm by anonymous The vector A -5.2 has a magnitude of 38m and points in the positive x direction. What is the x component of vector A? What is the magnitude of vector A? Friday, September 3, 2010 at 4:40pm by Jayson Displacement = (final vector location) - (initial vector location) = (-16 sin45 j + 16 cos45 i) - 13 i = -24.3i + 11.3 j Which means it moved 24.3 km west and 11.3 km north. "i" and "j" are unit vectors east and north. The magnitude of the displacement is sqrt[(24.3)^2 + (11.3... Thursday, September 9, 2010 at 3:43pm by drwls Which of the following is an accurate statement? a) a vector cannot have zero magnitude if one of its components is not zero b) the magnitude of a vector can be less than the magnitude of one of its componenets c) if the magnitude of vector A is less than the magnitude of ... Wednesday, September 19, 2007 at 2:42pm by Tammy Vector a has a magnitude of 5 m and is directed east. vector b has magnitude of 4m and is directed 35 degrees west of north. A)What is the magnitude and direction of vector a + vector b b)What is the magnitude and direction of vector b- vector a? I should have gotten a)4.2m at... Monday, October 2, 2006 at 3:31pm by Jesse Vector A has a magnitude 12m and is angled at 60 degrees counterclockwise from the positive direction of the x axis of an xy coord. system. Also. Vector B = (12m)i + (8m)j on that same coord system. Rotate the system counterclockwise about the origin by 20 degrees to form an x... 
Sunday, June 27, 2010 at 4:27pm by Rick vector m•a=vector m•g +vector F(spring) a=0 => 0=vector m•g +vector F(spring) x: 0 = m•g•sinα –k•x, x=m•g•sinα/k=6.1•9.8•sin 39°/126 =0.3 m Monday, November 5, 2012 at 5:00pm by Elena The vector sum of the velocity with respect to air (speed and heading) and the wind velocity vector must be 150 km/s in a south direction. Write that as a vector equation and solve for the unknown Tuesday, November 23, 2010 at 3:48pm by drwls AP physics Vector A has a magnitude of 8.00 units and makes an angle of 45.0° with the positive x-axis. Vector B also has a magnitude of 8.00 units and is directed along the negative x-axis. Using graphical methods, find (a) the vector sum A + B and (b) the vector difference A – B. Could... Sunday, August 23, 2009 at 12:51am by jackie Given the plane 3x+2y+5z=54 and the points P0(6, 8, 4)[on plane] and P1(13, 18, 5) [not on plane] A. Find vector n, a vector normal to the plane B. Find vector v from P0 to P1 C. Find the angle between vector n and vector v D. Find p, the scalar projection of vector v on ... Wednesday, January 18, 2012 at 8:02pm by Grace Math - Algebraic Vectors Do not confuse the problem of finding the length of a line segment if you know the two end points with finding the length of a vector. Suppose we have 2 points A(8,3) and B(11,7) then vector AB = [3,4] and │AB│ = √(3^2+4^2) = 5 line segment AB = √((11-8... Tuesday, February 12, 2008 at 7:27am by Reiny The tail of A vector is fixed to the origen of an X, Y axis system.Originally the vector points along the X axis. As time passes, the vector rotates counterclockwise. Describe how the sizes of the X and Y components of the vector compare to the size of the original vector for ... Tuesday, February 6, 2007 at 6:52pm by anonymous Given VectorA=12i+7j-5k and Vector B= 2i-4j+3k 1)Find angle theta between vector A and vector B? 2)Fine VectorB x VectorA How would I go about doing these problems ? Tuesday, September 14, 2010 at 11:50pm by Rima The acceleration of a particle moving only on a horizontal plane is given by a= 3ti +4tj, where a is in meters per second-squared and t is in seconds. At t = 0s, the position vector r= (20.0 m)i + (40.0 m)j locates the particle, which then has the velocity vector v= (5.000 m/s... Friday, February 24, 2012 at 3:06am by amy The acceleration of a particle moving only on a horizontal plane is given by a= 3ti +4tj, where a is in meters per second-squared and t is in seconds. At t = 0s, the position vector r= (20.0 m)i + (40.0 m)j locates the particle, which then has the velocity vector v= (5.000 m/s... Friday, February 24, 2012 at 3:14am by amy B = C - A in vector notation. Subtracting a vector is the same as adding its negative. Do the vector subtraction for the answer. Using the components method will get you the x and y components of B. From those two components, get the magnitude and direction Sunday, September 13, 2009 at 11:54pm by drwls Let's pick an arbitrary value for the first component, say 1. then let the vector be (1,b) (1,b)∙(3,4) = |(1,b)||(3,4)cos60° 3 + 4b = √(1+b^2)(5)(1/2) 6 + 8b = 5√(1+b^2) 36 + 96b + 64b^2 = 25(1+b^2) after squaring both sides 39b^2 + 96b + 11 = 0 Using the ... Wednesday, August 31, 2011 at 12:46pm by Reiny Vector A has a magnitude of 8.00 units and makes an angle of 45.0° with the positive x-axis. Vector B also has a magnitude of 8.00 units and is directed along the negative x-axis. Using graphical methods, find (a) the vector sum A + B and (b) the vector difference A – B. Could... 
Friday, August 28, 2009 at 3:14pm by benjamin Use the definition of scalar product( vector A* vector B = abcos theda and the fact that vector A * vector B = axbx+ ayby+azbz to calculate the angle between the two vectorgiven by vector A= 3i + 3j + 3k and vector B= 2i + 1j + 3k. The book said the answer is 22 degreess but I... Friday, October 6, 2006 at 5:02pm by Jamie Position vector r has magnitude of 14.8m and direction angle 240 degrees. 1. Find it's components, enter x and y components of the vector 2. Find the components of the vector -2r, enter x and y components of the vector 3. Find the magnitude of the vector -2r. Thursday, September 20, 2012 at 10:14pm by Jessica Your question does not make sense, especially "I think it is true..". Torque is specific to axis of rotation, and perpendicular distance to the axis of rotation. if axis of rotation is a vector, then distance is a vector perpendicular to that axis vector. Torque= force (a ... Friday, December 16, 2011 at 11:42am by bobpursley Take the "dot product" of the force vector and the displacement vector. If i is the x unit vector and j is the y unit vector, the force vector is F = 2.4 sin 100 i + 2.4 cos 100 j The displacement vector is deltaR = 3.0 i + 4.1 j W = F . (deltaR) = 7.2 sin 100 + 9.84 cos 100 Friday, October 30, 2009 at 12:50pm by drwls Triangle ABC is an equalateral triangle, with O it's centroid. a) Show that vector OA + vector OB + vector OC = vector 0 Saturday, April 24, 2010 at 4:31pm by Anonymous The equation of motion is (in vector form) vector( ma) = vector (m•g) + vector F(fr)+ vector N Projections on the axes x: m•a = m•g•sinα - F(fr) y: 0 = N - m•g•cosα F(fr) =k•N = k• m•g•cosα a = g•( sinα - k•cosα) s =Vo•t +at^2/2. Friday, April 6, 2012 at 1:11pm by Elena math-complex numbers If z = -3+4i, determine the following related complex numbers. a)vector z b)3(vector z) c) 1/(vector z) d) |z| e) |vector z| f) (vector z)/(|z|^2) Friday, October 11, 2013 at 10:45am by Rinchan Given that vector c and vector d are non zero vectors such that vector c =x1i + y1j + z1k and vector d =(y1z2-y2z1)+(x1y2 -x2y1)show that the two vectors are perpendicular. Saturday, February 18, 2012 at 8:25am by victor m a square is defined by the unit vectors i(vector) and j(vector). FInd the projections of i(vector) and j(vector) on each of the diagonals of the square. Monday, October 15, 2012 at 6:03pm by Jennifer physics (vector) Let the direction of the 1 n vector be up (+y direction). The components of the resultant are: 1 n: 1 j (j is a unit vector in +y direction) 2 n: -1 j + sqrt3 i (i is a unit vectyor in +x direction) 3 n: -1.5 j -(3/2)sqrt3 i vector sum: -1.5 j + -(1/2)sqrt3 i magnitude = sqrt[... Monday, June 6, 2011 at 11:46pm by drwls ap calculus.linear earnings = 0.05 A•f A+f is meaningless, since you can't add ft^2 and n. A•f computes the total ft^2 mowed. For various earnings, you'd also need a vector of rates. Not sure how to do that with just vector operations. Say the rates were 0.05, 0.10, 0.07, 0.12, 0.05, 0.08, 0.13... Wednesday, February 26, 2014 at 3:11am by Steve It will take the particle t = 18.4/9.4 = 1.957 s to go from wall to wall. In doing so, it will acquire a velocity component parallel to the wall equal to Vy = 5.8*t = 11.35 m/s Compute the vector sum of 11.35 m/s and 9.4 m/s at right angles Sunday, December 16, 2012 at 9:35pm by drwls The velocity vefot of a ball is v(t)=-3x+4y at any time. Its initial position is r=12x-4y. 
The components of the velocity vector are in m/s and the components of the position vector are in meters. The symbols x and y are the unit vectors in x and y directions respectively. Find... Sunday, September 26, 2010 at 6:14pm by Sarah Express the vector as a combination of the standard unit vectors i and j. v = AB where A = (10,-11) and B = (-11,8) I want to make sure I did this right: <-11-10, 8--11> = <-21, 19> So it's -21i + 19j? Thank you! Saturday, April 25, 2009 at 1:56pm by Momo
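Nearly all of the component questions above come down to the same recipe: resolve each vector into x and y parts, add them, and convert back to magnitude and direction. A short numpy sketch of that recipe, using the 6.90-unit / 46.5° and 8.00-unit (negative x-axis) vectors that appear several times above; the sketch is an editorial illustration, not one of the posted answers:

    import numpy as np

    # (magnitude, angle in degrees counterclockwise from the +x axis) for each vector
    vectors = [(6.90, 46.5), (8.00, 180.0)]

    total = np.zeros(2)
    for mag, ang_deg in vectors:
        ang = np.radians(ang_deg)
        total += mag * np.array([np.cos(ang), np.sin(ang)])   # resolve and accumulate components

    magnitude = np.linalg.norm(total)
    direction = np.degrees(np.arctan2(total[1], total[0]))    # angle measured from the +x axis
    print(f"resultant: {magnitude:.2f} units at {direction:.1f} degrees")   # about 5.97 units at 123 degrees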
{"url":"http://www.jiskha.com/search/index.cgi?query=physics+grade+11+(vector)","timestamp":"2014-04-18T07:55:09Z","content_type":null,"content_length":"43129","record_id":"<urn:uuid:91edc6b8-f7ea-4305-a9e1-ec8110abceb3>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00603-ip-10-147-4-33.ec2.internal.warc.gz"}
[SciPy-User] raising a matrix to float power
josef.pktd@gmai...
Sat Jul 10 18:45:36 CDT 2010

On Sat, Jul 10, 2010 at 7:39 PM, Sturla Molden <sturla@molden.no> wrote:
> Alexey Brazhe wrote:
>> Hi,
>> I failed to find a way to raise a matrix to a non-integer power in numpy/scipy.
>> In Octave/Matlab, one would write M^0.5 to get the result, whereas in numpy
>> >>> maxtrix(M, 0.5)
>> raises the "TypeError: exponent must be an integer".
>> Is there a way to do matrix exponentiation to non-integer powers in numpy or scipy?
>> Hope the answer is positive :)
> Sure, M**0.5 is cho_factor(M). For other non-integers I am not sure what matrix exponentiation could possibly mean.
> Are you sure you don't mean array exponentiation?

scipy.linalg has several matrix functions: 'expm', 'expm2', 'expm3', 'sqrtm', 'logm'.
'sqrtm' solves dot(B,B) = A, not dot(B.T,B) = A.
Besides cholesky, I use eigenvector decomposition to get the powers and other functions.

> Sturla
> _______________________________________________
> SciPy-User mailing list
> SciPy-User@scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

More information about the SciPy-User mailing list
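A short sketch of the eigendecomposition approach Josef mentions, assuming M is diagonalizable; this is an illustration added here, not code from the thread:

    import numpy as np

    def fractional_matrix_power(M, p):
        # Eigendecomposition: M = V diag(w) V^-1  =>  M**p = V diag(w**p) V^-1
        w, V = np.linalg.eig(M)
        return V @ np.diag(w.astype(complex) ** p) @ np.linalg.inv(V)

    M = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
    B = fractional_matrix_power(M, 0.5)
    print(np.allclose(B @ B, M))   # True: B solves dot(B, B) = M, the same sense as sqrtm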
{"url":"http://mail.scipy.org/pipermail/scipy-user/2010-July/026051.html","timestamp":"2014-04-20T09:20:18Z","content_type":null,"content_length":"4056","record_id":"<urn:uuid:b3db6125-9c13-483f-81fc-63245d5adcb0>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00306-ip-10-147-4-33.ec2.internal.warc.gz"}
Let's solve a generic 2x2 system of linear equations:

    a x + b y = e
    c x + d y = f

To solve for x, let's eliminate the y's by multiplying the top equation by d and the bottom equation by -b. When you add the two equations together and solve for x, you get

    x = (d e - b f) / (a d - b c)

Now, let's solve for y by eliminating the x's. Multiply the top equation by -c and the bottom equation by a. When you add the two equations together and solve for y, you get

    y = (a f - c e) / (a d - b c)

Now, consider the following definitions.

The determinant D formed by taking the coefficients:

    D = | a  b | = a d - b c
        | c  d |

The determinant D[x] formed by taking the coefficient matrix and replacing the x's by the constants on the right hand side:

    D[x] = | e  b | = d e - b f
           | f  d |

The determinant D[y] formed by taking the coefficient matrix and replacing the y's by the constants on the right hand side:

    D[y] = | a  e | = a f - c e
           | c  f |

Have you seen those determinants anywhere before? If not, then you've not been reading the lecture notes.
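A direct translation of the above into code, as a small added sketch (not part of the original lecture page):

    def solve_2x2_cramer(a, b, c, d, e, f):
        # Solve  a*x + b*y = e,  c*x + d*y = f  by Cramer's rule
        D = a * d - b * c            # determinant of the coefficient matrix
        if D == 0:
            raise ValueError("No unique solution: determinant is zero")
        Dx = e * d - b * f           # x-column replaced by the constants
        Dy = a * f - e * c           # y-column replaced by the constants
        return Dx / D, Dy / D

    # Example: 2x + 3y = 8 and x - y = -1  gives  x = 1, y = 2
    print(solve_2x2_cramer(2, 3, 1, -1, 8, -1))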
{"url":"https://people.richland.edu/james/lecture/m116/matrices/cramer.html","timestamp":"2014-04-24T11:02:49Z","content_type":null,"content_length":"3296","record_id":"<urn:uuid:1c264562-5df4-4960-9f12-00275371f795>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00269-ip-10-147-4-33.ec2.internal.warc.gz"}
DELTA by Jasmine Renold

February 22, 2013: Deeper questioning (Posted by jrenold under Communication, Edtech, Teaching)

The first video is a prototype for a starter for a G2O facilitators' session on questioning and challenge. It is made using Explain Everything but needs the sound re-recording. The second video is made using Windows Movie Maker and I like having a music background. It does not play on mobile devices yet but I am looking into that. The aim is to make the viewer/participant frustrated by the time allowed to answer questions. The challenge of the questions increases (à la Bloom's) as does the time allowed to answer. The participants will be asked to either answer a question on a sticky note or ask further questions. These will make starting points for the discussion on developing deeper questioning.

I think that creativity is so important, yet many of us lose the ability to be creative or have it drummed out of us. David Kelley explores this in this TED talk.

Here's the video of the script below…

Imagine travelling to a city to buy a length of cloth to make a dress. You know you need 8 metres for the pattern and so when you visit the cloth merchant, you ask for 8 m of the finest cotton, which he cuts and puts in a paper bag. You then go back to your home in the village 100 miles away, open the bag and find that you have only 7.2 m. It is too long a journey to go back and so you have to make a smaller dress and breathe in to wear it. Strange? Well, hundreds of years ago this could easily have been the case. You wouldn't have used metres as units; you might have used yards or ells (derived from elbows), bolts or barleycorns! Sometimes, even though different areas of the country used the same word for the unit of length, the units could actually differ. A similar story would also be the case for trading masses of flour, spices and other materials. Numerous arguments and fights would ensue from people thinking that they were being short-changed, caused by the lack of consistency between units of measurement.

This was certainly the case after the French Revolution, when the French thought that it would be a good idea to standardise units of measurement to avoid such conflicts. The standard metre was therefore born, similar to the British yard, just a bit bigger. A metre bar, made from a platinum/iridium alloy, was kept centrally and people would have their own metre stick made from this standard. There was a similar block made for the standard kilogram. Phew, that's better: now quantities could be compared more fairly.

Across the Channel in Britain and in many other countries, there was another problem: multiples of different quantities appeared to make little sense and had just been derived from quantities that seemed useful at the time. For example, for mass: there were 16 ounces in a pound, fourteen pounds in a stone and 160 stones in a ton. For length: there were 12 inches in a foot, 3 feet in a yard, 22 yards in a chain, 10 chains in a furlong, 8 furlongs in a mile and 1760 yards in a mile. The coin system again had different multiples. YOU had to remember each and every one!

In 1960, when it was becoming more and more crucial to keep units standard, the Système International d'Unités was set up. Known as the SI system, it was adopted nearly world-wide (the Americans weren't that keen!). In the SI system, alongside the use of standard units, one of the rules was that any multiples of units must use powers of ten.
As bigger and bigger measurements were needed, names were given to particular powers of ten, each one being 1000 times bigger than the previous unit. Similarly, smaller and more precise measurements were becoming important and so again names were given to units that were one thousand times smaller than the previous. Prefixes such as mega, nano, pico and tera were set to describe these quantities. But what do these words mean and are they fitting? Well, the names derive from words used in different languages to describe size. Think of some of the words in English used to describe size: tiny, diddy, little, small, medium, large, massive, huge, gigantic.

Let's take a closer look at the SI prefixes, starting with those that get smaller and smaller. Well, first there is milli, as in millimetre or millisecond. This comes from the Latin, mille, for one thousand, and the milli prefix means one thousand times smaller (or x10^-3). So 1 mm is one thousandth of a metre, 1 x10^-3 m. So that makes sense. The next smallest is micro. This comes from mikros, which is Greek for small. How small? A million times smaller. So 1 microsecond is one millionth of a second (or x10^-6). What is smaller than that? Nano- is the answer. This comes from nanos, Greek for dwarf. So dwarves, it seems, are smaller than 'small'. Nano- is one thousand million times smaller, or one billion times smaller. So a nanometre is the same as 1 x10^-9 metres. Pico is the next smallest and, less creatively, pico is simply Italian for small. It stands for x10^-12 of the original quantity, or one million millionth. So it appears that what the Greeks considered to be small was in fact one million times bigger than what the Italians thought was small. Think of the implications!!

Let's now go bigger. One thousand times bigger than a metre is the kilometre and one thousand times bigger than a gram is a kilogram. So kilo- represents one thousand times bigger (or x10^3). Kilo comes from khilioi, the Greek for – guess what? – one thousand. Mega is the next one up, representing one million times bigger (or x10^6). Mega derives from the Greek word for GREAT, megas. We seem to have adopted this in common parlance. That's mega, mate. I.e. that's great! A billion times bigger (or x10^9) is the prefix giga. For example, 1 gigahertz is one thousand million times faster than 1 hertz. Giga is from gigas, Greek for giant. And finally, my favourite, tera. Tera stands for one million million times bigger than the original quantity (or x10^12). And where does tera come from? Again it's Greek and it's from teras, which means MONSTER! So monsters are bigger than giants, which are bigger than 'great' characters, and all are bigger than dwarves!! We knew that…

So what's the age of the universe? Well, we think at the moment it is about 13.7 billion years old and this equates to: 5 monster days, or 5000 giant days, or 5,000,000 great days… How cool is that?

This app is supposed to make blogging from my iPad so much easier. Pictures and other media should be easier to add. Well, let's see.
1. Adding a photo. To get an image off the web, you still need to copy from Safari. After trying to figure out how to select a photo, it was brilliant. (You slide it into the envelope!)
2. Adding a video. Just needed to add my YouTube account and then drag.
3. Adding a link. Highlight the text. Then use the top-right button, which opens a browser. Find the page you want and then drag the button to the left of the URL onto your text. Blogsy website.
This also means that I don't have to open Safari to get an image. Just search, save to library and drag into the envelope. YEAY!

I posted an explanation of how to solve a problem using kinetic and potential energy on YouTube and embedded the link on the Unit 2 GCSE Physics wiki. My Year 10 students were tasked with watching it for homework, knowing that they were going to get a similar question at the beginning of the next lesson. I said to them that, in this case, it was a good idea to do this homework the night before the lesson. I gave each student a piece of paper in the lesson and gave them the same question but with different data. It became very obvious who had not done the homework. I collected in the papers, went through the answer and proceeded with the lesson. At the end of the lesson, I kept back those who had got the problem wrong. They are a set of very bright students and it was because they had not studied the homework (i.e. watched the flip video). I gave them another similar question and they could all do it and all got the right answer. Success – all the students understood the method and all were clear that they needed to do the homework.

January 2, 2013: How to build your creative confidence (Posted by jrenold under Communication, creativity | Tags: creativity)
January 2, 2013: Sketch it (Posted by jrenold under Edtech | Tags: elearning)
November 3, 2012: Dwarves, giants and monsters video (Posted by jrenold under Uncategorized)
July 29, 2012: Dwarves, giants and monsters – origins of SI prefixes (Posted by jrenold under Teaching)
July 8, 2012: Just signed up to the Blogsy App (Posted by jrenold under Communication, Edtech)
July 8, 2012: Another FLIP – How high will the ball go? (Posted by jrenold under Edtech, Teaching)
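Coming back to the age-of-the-universe arithmetic in the "Dwarves, giants and monsters" script above, a quick check (a rough editorial calculation, assuming 365.25-day years):

    AGE_YEARS = 13.7e9                 # age of the universe in years
    days = AGE_YEARS * 365.25          # roughly 5.0e12 days

    print(days / 1e12, "tera-days  (monster days)")
    print(days / 1e9,  "giga-days  (giant days)")
    print(days / 1e6,  "mega-days  (great days)")

which indeed comes out at about 5 monster days, 5000 giant days and 5,000,000 great days.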
{"url":"http://jrenold.wordpress.com/","timestamp":"2014-04-20T00:54:00Z","content_type":null,"content_length":"46228","record_id":"<urn:uuid:c55b193a-531a-4136-aefc-902ca34351dc>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00340-ip-10-147-4-33.ec2.internal.warc.gz"}
diyAudio - Spice JFET and MOSFETs .model

Osvaldo de Banfield - 28th June 2012, 12:22 PM
Spice JFET and MOSFETs .model
According to this stuff, and the manufacturers' data sheets, I have made my own .models for some common JFETs and MOSFETs for Spice software. I want to share them. Before long I'll do the same for other field-effect transistors. These have been tested in a simulation of a real working circuit, my OSF. Enjoy.

.model 2SK117Y NJF(beta=6.323m vto=-0.547 is=-1n cgd= 8.0445p cgs=4.9555p lambda=65.89 mfg=TOSHIBA)
.model 2SK117GR NJF(beta=13.7m vto=-0.547 is=-1n cgd= 8.0445p cgs=4.9555p lambda=30.41 mfg=TOSHIBA)
.model 2SK117BL NJF(beta=30.55m vto=-0.547 is=-1n cgd= 8.0445p cgs=4.9555p lambda=13.64 mfg=TOSHIBA)
.model 2SK170GR NJF(beta=13.7m vto=-0.547 is=-1n cgd= 16.089p cgs=13.91p lambda=20m mfg=TOSHIBA)
.model 2SK170BL NJF(beta=28.283m vto=-0.547 is=-1n cgd= 1.119p cgs=11.881p lambda=9.72m mfg=TOSHIBA)
.model 2SK170V NJF(beta=47.14m vto=-0.547 is=-1n cgd= 1.119p cgs=11.881p lambda=5.833m mfg=TOSHIBA)

datjblvietnam - 30th June 2012, 06:15 AM
This is interesting. Thanks a lot!

Osvaldo de Banfield - 5th July 2012, 12:04 PM
More models
Here are more models I generated on my own:

.model 2SJ103Y PJF(beta=1.053m vto=-1.342 is=1n cgd=8.0445p cgs=4.9555p lambda=34.8 mfg=TOSHIBA)
.model 2SJ103GR PJF(beta=2.283m vto=-1.342 is=1n cgd=8.0445p cgs=4.9555p lambda=16 mfg=TOSHIBA)
.model 2SJ103BL PJF(beta=5.092m vto=-1.342 is=1n cgd=8.0445p cgs=4.9555p lambda=7.2 mfg=TOSHIBA)
.model 2SK246Y NJF(beta=13.7m vto=-0.648 is=-1n cgd=16.089p cgs=13.91p lambda=35.14m mfg=TOSHIBA)
.model 2SK246GR NJF(beta=28.283m vto=-0.648 is=-1n cgd=1.119p cgs=11.881p lambda=16.22m mfg=TOSHIBA)
.model 2SK246BL NJF(beta=47.14m vto=-0.648 is=-1n cgd=1.119p cgs=11.881p lambda=7.274m mfg=TOSHIBA)
.model 2SK389GR NJF(beta=13.7m vto=-0.547 is=-1n cgd=14.74p cgs=10.25p lambda=6m mfg=TOSHIBA)
.model 2SK389BL NJF(beta=28.283m vto=-0.547 is=-1n cgd=14.74p cgs=10.25p lambda=2.9m mfg=TOSHIBA)
.model 2SK389V NJF(beta=47.14m vto=-0.547 is=-1n cgd=14.74p cgs=10.25p lambda=1.74m mfg=TOSHIBA)

kevinkr - 12th July 2012, 04:35 PM
:cop: 2SJ103 model corrected per Osvaldo's request.

Osvaldo de Banfield - 12th July 2012, 04:58 PM
Originally Posted by kevinkr (Post 3089980)
:cop: 2SJ103 model corrected per Osvaldo's request.
Many thanks, Kevin. Apologies for the inconvenience. Here I add one more:

.MODEL 2SK2692 NMOS(mfg=TOSHIBA LEVEL=3 L=1n W=5.952 KP=62.5u VTO=1.265 CGSO=120p CGDO=20p)

Good luck!

tvrgeek - 7th August 2012, 01:24 PM
Wooo Hooo! Much thanks

Osvaldo de Banfield - 7th August 2012, 01:34 PM
Originally Posted by tvrgeek (Post 3117748)
Wooo Hooo! Much thanks
Ok, boy. I did this work on my own because I found that some models out there really didn't work properly. I built an all-FET audio amplifier that worked fine on the protoboard, but when I simulated it there was too much error between the real circuit and the simulations. Following the models I made from the above-mentioned stuff, the simulation was pretty close to the real circuit. For example: the real circuit was giving near 10 Vpp at 440 Hz immediately after clipping with a 12 V power supply, and the first models I found and used clipped at 8.3 V or so. With my self-made models, clipping was very near 10 V, circa 9.8 V. So the model appears to be more accurate. And none of them distinguish between the manufacturer's IDSS grades; mine do.

EUVL - 20th August 2012, 09:13 AM
The Cgd values for different grades of 2SK170 & 2SK246 are inconsistent, and I believe incorrect.
Jay - 20th August 2012, 11:03 AM
Originally Posted by Osvaldo de Banfield (Post 3117756)
For example: the real circuit was giving near 10 Vpp at 440 Hz immediately after clipping with a 12 V power supply, and the first models I found and used clipped at 8.3 V or so. With my self-made models, clipping was very near 10 V, circa 9.8 V. So the model appears to be more accurate.
Hopefully your device is original from Toshiba. Even so, I think the spread of typical performance in JFETs is quite large. Later I will compare yours with the ones from Fred Dieckmann that I found accurate. I'm not comfortable with low decimal places, incomplete parameters, etc.

Osvaldo de Banfield - 28th August 2012, 12:23 PM
Originally Posted by EUVL (Post 3133273)
The Cgd values for different grades of 2SK170 & 2SK246 are inconsistent, and I believe incorrect.
Ok, yes, maybe there is an error in my calculations, but in this case try your own values and compare with mine.
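For readers who want to reproduce this kind of model-fitting, here is a rough sketch of how the level-1 SPICE JFET parameters can be pulled from a couple of datasheet numbers. This is an editorial illustration of the standard square-law relations (Id = beta*(Vgs - Vto)^2*(1 + lambda*Vds)), not the exact procedure or values used in the thread, and the datasheet numbers below are hypothetical:

    # Square-law JFET model used by SPICE:  Id = beta * (Vgs - Vto)**2 * (1 + lam * Vds)
    # At Vgs = 0 in saturation:             Idss = beta * Vto**2
    def jfet_params(idss, vgs_off, id1=None, vds1=None, id2=None, vds2=None):
        vto = vgs_off                    # pinch-off voltage from the datasheet (negative for an N-JFET)
        beta = idss / vto ** 2           # transconductance parameter
        lam = 0.0
        if None not in (id1, vds1, id2, vds2):
            # Slope of Id vs Vds in saturation gives lambda (channel-length modulation)
            g = (id2 - id1) / (vds2 - vds1)
            lam = g / (id1 - g * vds1)   # from Id = Id0 * (1 + lam * Vds)
        return beta, vto, lam

    # Hypothetical datasheet values, NOT real 2SK170 numbers:
    beta, vto, lam = jfet_params(idss=10e-3, vgs_off=-0.5)
    print(f".model EXAMPLE_NJF NJF(beta={beta:.4g} vto={vto} lambda={lam:.3g})")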
{"url":"http://www.diyaudio.com/forums/software-tools/215282-spice-jfet-mosfet-s-model-print.html","timestamp":"2014-04-17T20:16:15Z","content_type":null,"content_length":"16279","record_id":"<urn:uuid:f44f74a6-055d-412a-913f-de58bc686709>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00376-ip-10-147-4-33.ec2.internal.warc.gz"}
Have you ever wanted your very own math tutor? The algebra tutor below, known as Mrs. Lindquist, is a computerized tutor, available to you 24 hours per day, seven days per week. It can help you with your most difficult algebra problems. Why not give it, or one of the other links listed below, a try?
{"url":"http://wcb.neit.edu/asc/mathtut.htm","timestamp":"2014-04-17T00:51:10Z","content_type":null,"content_length":"7086","record_id":"<urn:uuid:ceb13a7c-7d38-4311-b073-b1f77664e350>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00340-ip-10-147-4-33.ec2.internal.warc.gz"}
Which set(s) of numbers does -0.002, -3.4, and the square root of 36 fall under? (Real numbers) Thanks! All of these are rational numbers, i.e. they can all be easily expressed as ratios or fractions. -0.002 is `-2/1000=-1/500` -3.4 is `-3 4/10=-3 2/5` and `sqrt(36)=6` which is not only rational, it is a whole number, and an integer.
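A quick way to confirm the answer above (a small added sketch, not from the original page):

    from fractions import Fraction
    from math import isqrt

    print(Fraction("-0.002"))   # -1/500
    print(Fraction("-3.4"))     # -17/5, i.e. -3 2/5
    print(isqrt(36))            # 6, an integer (and therefore also rational)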
{"url":"http://www.enotes.com/homework-help/which-set-s-numbers-does-0-002-3-4-square-root-36-451839","timestamp":"2014-04-21T07:52:44Z","content_type":null,"content_length":"25206","record_id":"<urn:uuid:ddd72e06-4350-42ab-aa82-0e583d2500ea>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00247-ip-10-147-4-33.ec2.internal.warc.gz"}
Eigenvalue Solvers for Structural Dynamics

There is one thing I didn't get, or didn't understand clearly: it is usually assumed that within each 0.005 s time step the seismic excitation can be treated as a linearly varying function. Even if I stick to that and don't allow the user to meddle with it, does it still stand as a "not exact" method?

The time marching output should converge to the exact solution of the equation of motion as the timesteps are reduced. A typical time marching algorithm samples the value of the applied force at one time point in each step. It can't be more "accurate" than assuming the forcing function is a piecewise linear function between those sampling points, because it doesn't have any more data to work with.

Time marching algorithms introduce errors in the frequencies of the response in each mode, and in the damping levels, compared with the equations of motion. These errors are small if the timestep is small compared with the period of the mode, but for your example of a 5 ms step the output cannot represent any vibration with a period shorter than two step lengths (this is the same idea as the Nyquist frequency in digital signal processing), which corresponds to a frequency of 100 Hz. The reduced model probably has many modes with frequencies higher than that, which are of no particular interest in themselves, but they represent the "stiff" load paths in the structure. (Hence throwing them all away with modal reduction can be a bad idea!) The time marching algorithm effectively reduces the frequencies of all those modes to something of the order of 100 Hz (the precise details are obviously algorithm dependent).

Another way to look at this is to consider the speed at which forces can propagate through the structure. That is simple to answer for an explicit algorithm: a force can only propagate from one point to its nearest neighbours in one time step. This is part of the reason why explicit algorithms are conditionally stable - if dynamic effects can't propagate through the structure "one step at a time" at something approximating the speed of sound in the material, the results won't make any sense. Implicit, unconditionally stable methods have the opposite defect: dynamic effects can propagate right across the model in one timestep, even though that is physically impossible.

If you want to look at this experimentally, set up a single degree of freedom model of a mass on a spring with no damping. With your step of 5 ms, make models with frequencies of say 1, 3, 10, 30, 100, 300 Hz, give them some initial conditions to create free vibration (no applied right-hand-side force) and compare the frequency and damping of the output with the correct values. Then repeat including some damping in the model (say 10% of critical damping) and see what damping levels you get in the output. The results for 1 Hz and 3 Hz should be pretty good, but the others will probably not be! The amount of damping (if any) in the high frequency modes will probably be an artefact of the time marching algorithm itself, and almost independent of the damping matrix in the equations of motion.
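The single-degree-of-freedom experiment suggested above is easy to script. The sketch below is an illustration added here, assuming the average-acceleration Newmark scheme as the time-marching algorithm; any unconditionally stable implicit scheme would show the same qualitative behaviour. It integrates a free, undamped oscillator at a 5 ms step and compares the apparent period of the computed response with the exact one.

```python
import numpy as np

DT = 0.005  # 5 ms step, as in the thread

def newmark_free_vibration(f_hz, zeta=0.0, dt=DT, n_steps=4000):
    """Average-acceleration Newmark integration of a free SDOF oscillator,
    started from a unit displacement with zero velocity."""
    m = 1.0
    wn = 2.0 * np.pi * f_hz
    k = m * wn ** 2
    c = 2.0 * zeta * m * wn
    beta, gamma = 0.25, 0.5
    u = np.zeros(n_steps); v = np.zeros(n_steps); a = np.zeros(n_steps)
    u[0] = 1.0
    a[0] = -(c * v[0] + k * u[0]) / m
    keff = k + gamma / (beta * dt) * c + m / (beta * dt ** 2)
    for i in range(n_steps - 1):
        # "effective load" built from the state at step i; no external force here
        p = (m * (u[i] / (beta * dt ** 2) + v[i] / (beta * dt) + (1 / (2 * beta) - 1) * a[i])
             + c * (gamma / (beta * dt) * u[i] + (gamma / beta - 1) * v[i]
                    + dt * (gamma / (2 * beta) - 1) * a[i]))
        u[i + 1] = p / keff
        a[i + 1] = ((u[i + 1] - u[i]) / (beta * dt ** 2)
                    - v[i] / (beta * dt) - (1 / (2 * beta) - 1) * a[i])
        v[i + 1] = v[i] + dt * ((1 - gamma) * a[i] + gamma * a[i + 1])
    return u

for f in (1, 3, 10, 30, 100, 300):
    u = newmark_free_vibration(f)
    zero_cross = np.where(np.diff(np.sign(u)) != 0)[0]   # indices of sign changes
    T_num = 2.0 * np.mean(np.diff(zero_cross)) * DT      # two crossings per period
    print(f"{f:5d} Hz  exact T = {1.0 / f:.5f} s   integrated T = {T_num:.5f} s")
```

As the prose above predicts, the period error is negligible at 1 Hz and 3 Hz and grows rapidly once the step is no longer small relative to the period.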
{"url":"http://www.physicsforums.com/showthread.php?p=3904344","timestamp":"2014-04-16T04:33:26Z","content_type":null,"content_length":"44807","record_id":"<urn:uuid:ddb0160a-ad58-4252-8399-3ced6d64f977>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00577-ip-10-147-4-33.ec2.internal.warc.gz"}
Willis E on Hansen and Model Reliability Another interesting post from Willis: James Hansen of NASA has a strong defense of model reliability here In this paper, he argues that the model predictions which have been made were in fact skillful (although he doesn’t use that word.) In support of this, he shows the following figure: (Original caption)Fig. 1: Climate model calculations reported in Hansen et al. (1988). The three scenarios, A, B, and C, are described in the text as follows: Scenario A has a fast growth rate for greenhouse gases. Scenarios B and C have a moderate growth rate for greenhouse gases until year 2000, after which greenhouse gases stop increasing in Scenario C. Scenarios B and C also included occasional large volcanic eruptions, while scenario A did not. The objective was to illustrate the broad range of possibilities in the ignorance of how forcings would actually develop. The extreme scenarios (A with fast growth and no volcanos, and C with terminated growth of greenhouse gases) were meant to bracket plausible rates of change. All of the maps of simulated climate change that I showed in my 1988 testimony were for the intermediate scenario B, because it seemed the most likely of the three scenarios. I became curious about how that prediction had held up in the years since his defense of modeling was written (January 1999). So I started looking more closely at the figure. The first thing that I noted is that the four curves (Scenarios A, B, C, and Observations) don’t start from the same point. All three scenarios start from the same point, but the observations start well above that point … hmmm. In any case, I overlaid his figure with the very latest, hot off the presses, HadCRUT3 data from Phil Jones at the CRU … and in this case, I started the HadCRUT3 curve at the same point where the scenarios started. Here’s the result: Fig. 2: Climate model calculations reported in Hansen et al. (1988), along with HadCRUT3 data. A few things are worthy of note here. One is that starting the scenarios off at the same point gives a very different result from Hansen’s. The second is the size of the divergence. Scenario C, where greenhouse gases stop increasing in 2000, can be ignored “€? obviously, that didn’t happen. Looking at the other scenarios, the observed temperature in 2005 is a quarter of a degree C below Scenario B, and 0.6°C below Scenario A. Finally, the observations have mostly been below both all of the scenarios since the start of the record in 1958. Since according to Hansen Scenarios A and C were "meant to bracket plausible rates of change", I would say that they have not done so. A final note: I am absolutely not accusing James Hansen of either a scam or intellectual dishonesty, he clearly believes in what he is saying. However, he has shaded his original conclusions by starting the observational record well above where the three scenarios all start. Mainly, the problem is that the world has not continued to heat up as was expected post 1998, while his Scenarios A and B did continue to warm. The post-1998 climate actually is acting very much like his Scenario C … except, of course, that the CO2 emissions didn’t level off in 2000 as in Scenario C. The values for Hansen’s scenarios are not archived anywhere. Willis obtained them by digitizing the graphic in the pdf file; the values are provided in comment #63 below. 
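The re-baselining step at the heart of the post, shifting each observational series so that it starts from the scenarios' common 1958 value, can be sketched in a few lines. The file name below is a placeholder; the column layout follows the comma-delimited format given in comment #63 and described in the note that follows.

```python
import pandas as pd

# Hypothetical local file holding the digitized values (Year, GISTEMP, HadCRUT3,
# Scenario A, Scenario B, Scenario C), per the format described in comment #63.
df = pd.read_csv("hansen_scenarios.csv")

first = df["Year"].min()                    # 1958, where the three scenarios coincide
common_start = df.loc[df["Year"] == first, "Scenario A"].iloc[0]

for col in ("GISTEMP", "HadCRUT3"):
    # shift each observational series so it begins at the scenarios' common value
    offset = common_start - df.loc[df["Year"] == first, col].iloc[0]
    df[col] = df[col] + offset
```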
Willis reports that he downlowded the HadCRUT3 dataset from http://hadobs.metoffice.com/hadcrut3/diagnostics/global/nh+sh/monthly and the GISTEMP dataset is from http://data.giss.nasa.gov/gistemp/tabledata/GLB.Ts+dSST.txt Hansen aligned all three scenarios at the same starting point as noted by Willis, who aligned the two temperature series at the same starting value as used by Hansen. (See comment 63.) This procedure has been criticized by Tim Lambert. 174 Comments A final note: I am absolutely not accusing James Hansen of either a scam or intellectual dishonesty, he clearly believes in what he is saying Then Hansen is deluded. Who’s going to tell him? However, he has shaded his original conclusions by starting the observational record well above where the three scenarios all start. Not for the first time in his career James Hansen has cherrypicked his end-dates and cut-offs for maximum effect. Mainly, the problem is that the world has not continued to heat up as was expected post 1998, while his Scenarios A and B did continue to warm. The post-1998 climate actually is acting very much like his Scenario C … except, of course, that the CO2 emissions didn’t level off in 2000 as in Scenario C. ..or maybe the reality is that his models are far too sensitive to carbon dioxide forcing and have insufficient negative feedbacks. In any case, the observations have invalidated his model forecasts. In normal science, this would lead to a retraction or a modification of his theory in mitigation. But this isn’t normal 2. Even if you graft the hadCRUT3 data to the end of Hansen’s observed data (red line) you still get observations that fall below Scenario “C”, as the latter shows a sharp increase in 2006 while observations don’t. 3. Re #2. True. In fact the GISS observations (“Table of global-mean monthly, annual and seasonal land-ocean temperature index”) show mean temperature in the first seven months of 2006 as 0.10° C lower than in the corresponding months of 2005. That’s a big decrease – half the size of the cells in the vertical grid in Figs. 1 & 2. 4. Any idea why Hansen’s “observed” and HADCRUT3, while similar, aren’t identical? Also, is Hansen et al 1988, or is it 1998? 5. Recently, on the Discovery Channel here in the states, there was a program with Tom Brokaw called “What you need to know about Climate Change”. Tom interviewed Jim Hansen and during that interview, Hansen essentially stated without any caveats that the (his team?) modeling predicted that half the world’s species would be extinct by the turn of the century. Tom nodded knowingly and said nothing. I didn’t tape the show, so I am paraphrasing what Hansen said, but it appears that Jim has “moved on” in his modeling. 6. Re #4, the original Hansen forecasts were in 1988, followed by the paper cited above, in January of 1999. As an aside, the GISS dataset is very, very different from the HadCRUT3 dataset, and I have no idea why that might be. In particular, the difference for 2005 is very striking, with the GISS dataset showing it as much warmer than 1998, while the HadCRUT3 dataset shows it as much cooler … go figure. 7. Just for good due dillegance purposes what is the HADCRUT3 and why did you choose it? 8. Re #7, Joshua, thank you for the excellent due diligence question. HadCRUT3 is the latest version of the global temperature record maintained by Phil Jones. It is a joint project, as I understand it, of the Hadley Centre of the UK Met Office and the Climate Research Unit at East Anglia, UK. 
See http://www.cru.uea.ac.uk/cru/data/temperature/ for full details and data access. I picked it because it was not the other main temperature record, the GISSTemp dataset, maintained by among other people, James Hansen … Full details and data access for the GISSTemp dataset are available at http://data.giss.nasa.gov/gistemp/ 9. Re #5, Phil, thanks for the posting, wherein you note that: Hansen essentially stated without any caveats that the (his team?) modeling predicted that half the world’s species would be extinct by the turn of the century. These predictions of extinctions are among the most bogus forecasts arising out of the models, although they originally came, not from the models, but from estimates of tropical deforestation. I wrote a paper on this subject which is available here. I found when researching the subject that there have been very few bird or mammal extinctions on the continents “¢’ ¬? only nine in the last 500 years, six birds and three mammals. This was a great surprise to me, with all of the hype I expected many more. In any case, my paper shows that there has been no increase in extinctions … have a read, it’s an interesting paper. 10. Re fig’s 1 and 2, I don’t see the point of going from a graph which uses the GISS (Hansen et al) data to another which uses another updated data set – but leave the Hansen data as was. It is to compare apples with oranges. Surely it’s better, and simply less confusing, to update the obbserved data using updated GISS records and show that graph as Fig 2? Here’s one I found some time ago – C or B I’d say. 11. Re#10 The graphic you show still appears to have observed temperatures starting higher than model temperatures by about 0.1C, enough to bring the smoothed line of the observable temperature to below the zero emissions plot. 12. if the prediction was published in 1988,it is woth noting that the temperatures for the three scenarios and the giss data seem to be quite close in 1988. I have to say I am not yet convinced by this prediction. From 1988 to 2007 it predicts a rise of ~~0.3C, but from 1988 to 2019 it will be ~0.6C. If you put a ruler through the same temperature data you would get the same answer; I think you need quite robust testing to have confidence in a model. By the way, if the modelling software was so predictive in 1988, why have we changed it ? Surely future iterations of the modelling software will give even better predcitions ? 13. The observed and calculated initial values will be the same only if the calculated value equals the observed value. Clearly this is not the case. Additionally, the calculated initial values for the Global Average Temperature (GAT) anomalies could be determined in at least two different ways. The first would be the difference between the calculated ensemble-average (GAT)ea at the end of the control runs and the individual (GAT)cj values; these would all be different The second would be the difference between the individual model/code (GAT)cj values at the end of the control runs and its (GAT)cj; and so would be 0.0. One of many questions is should the initial values on the graph be shifted so as to correspond? If it is shifted then the graph does not represent the actual results of the calculations. And if the calculations had used the value corresponding to the shifted value the calculated response would have been different. Whatever the case, it seems to me that only the very, very rough trends of the calculations are the same as the observations; a very long-term increase. 
And even that doesn’t look so hot to me. The variability shown in the observations is not at all captured, even over periods of a decade. What then are the effects of this clearly evident lack of agreement on the calculated responses of local environmental, social, and health issues such as biomass, long-range weather, ice, ocean temperatures, etc. Finally, someone has noted that some (GAT) calculations are done with the sea surface temperatures fixed. Given the recent observations on the changes in the energy content of the oceans, this does not seem to be a good assumption. 14. The starting point for the data and the models is, of course, very important because the data is plotted in a comparison chart. If the model runs start in 1958?, then the observed temperature data trend should start at 0 in 1958 and so should the models. However, it is possible that the model runs actually start earlier than 1958 (where the temperature trend started at 0 initially as well) but Hansen did not show the earlier than 1958 simulations and observations. I don’t know if anyone can answer this, but why do the HadCRUT3 and GISSTemp datasets differ so much in recent years? 15. What would a plot, of A,B,C, HADCRUT3 and GISS, zeroed in 1988 as noted, look like? It would be nice to understand why the two sets of observations (GISS and HADCRUT3) differ. 16. Willis E, a fine piece of Validation work. Thanks for your efforts. 17. HADCRUT3 measures the anomaly from the 1961-1990 mean, while Hansen’s scenarios used the anomaly from the 1951-1980 mean. The 1961-1990 mean is higher than the 1951-1980 mean. Furthermore, Willis has not plotted HADCRUT3 accurately – all his numbers are too low by about 0.05 degrees. When you compare apples with apples and not Willis’s oranges, Hansen’s model has been eerily correct. 18. Thank you Willis; The answer to #15 is we need Jones to disclose his method but he has consistently refused to do so. This is only the start of the problems with the models. A few years ago I noted the discrepancy between the GISS and HADCRUT3 models was 0.5°C in one year and that this was equivalent to Jones’ estimate for increase of the GAT in approximately 130 years. Nobody was interested, the models were in full flight scaring the world and driving government policy. The data problem is only the first of many severe limitations of the models, but probably the most signifiant because they are basis of the entire cubic construct. Here you are only talking abut the surface data (actually Stevenson Screen measures are not the surface and in many instances above the cirtical boundary layer), consider the problems with anything above the surface where there is virtually no data. Consider this quote from an article in Science Vol.313 (4 August 2006) “Waiting for the Monsoons” about an attempt by the models to forecast precipitation for Africa, “One obvious problem is a lack of data. Africa’s network of 1152 weather watch stations, which provide real-time data and supply international climate archives, is just one-eighth the minimum density recommended by the World Meteorological Organization (WMO). Furthermore, the stations that do exist often fail to report.” As I keep repeating, probably ad nauseum for some, but I won’t stop until the severe problems are exposed, Canada has less weather stations now than in 1960 and many of those retained were equipped with unreliable Automatic Weather Observing Stations (AWOS). Warwick Hughes has documented the data problem more completely than anyone to my knowledge. 
I suspect you can add most of Asia, South America and Antarctica to this list not to mention the oceans. Indeed, I would like to know what percentage of the globe meets the WMO minimum density requirement. Maybe now thanks to your work more people will begin to examine the complete inadequacy of the models because of the data base on which they are constructed and then move on to look at the unjustified assumptions, grossly inadequate mechanisms incorporated and manipulations apparently more designed to achieve a predetermined outcome. Before anyone starts bleating about the value of models let me say they have a place in the laboratory where there is a scientific responsibility. I would argue that what Wilis is exposing appears to suggests this is not being met. The problem becomes worse when you go public and let the policymakers believe the models work, have credibillity, and can be the basis of global policy. And again before anyone starts bleating about the warnings put on the model output let me say that most of the public and the politicians have no understanding of the process or the warnings, especially when they are accompanied by ‘end of the world’ messages. 19. An interesting chart would be to average each data set over, say, five-year periods, and, using 1959-63 as the zero point, see what a combined plot looks like. I would do this but I cannot read the data points on the graphs very well. The data groups would be updated GISS, HADCRUT3, scenarioA, scenarioB and scenarioC. I tried this for 1985-89 and 2000-2004 and got actuals running a bit below scenarioC (the no-CO2-growth scenario). But, again it is hard to read the points. 20. Obviously I am not getting it – the charted trendlines all seem to start around 1958, and seem pretty close – I don’t see the selective bias. In any case, of more importance (to me) are the trend lines. It would seem the current models track one data set somewhat OK (GISS, “B”) and, presumably, their range of assumptions could be tweaked to follow HADCRUT3 as well. The questions that remains: 1) What is the “real” data set and 2) Are the slopes of increase between the two sets the same? These two data sets seem to be the range of reasonable prediction? 21. I have plotted out giss, hadcrut and crutem, and the difference between giss and the two hadley datasets. There is a bigger difference opening up between giss and hadcrut, presumably because giss and crutem are just over land, whereas hadcrut includes sea temperatures. I have posted the graphic on to steve if he wants to incorporate it. 22. Re #20 To me, the chart presented by Hansen, taken at a glance, indicates that temperature rises are running at about a scenario B rate, indicating the models have it “about right”. If the base is adjusted such that all data starts at the same point that chart, at a glance, would show GISS running below B and show HADCRUT3 running well below B. The visual impact is As a side note, I’m surprised that scenario C (apparently) levels out only five years after CO2 growth ends. I thought there is a longer lag, due to the oceans. 23. Re # 17, Tim, thanks for your contribution. You say: HADCRUT3 measures the anomaly from the 1961-1990 mean, while Hansen’s scenarios used the anomaly from the 1951-1980 mean. The 1961-1990 mean is higher than the 1951-1980 mean. Furthermore, Willis has not plotted HADCRUT3 accurately – all his numbers are too low by about 0.05 degrees. When you compare apples with apples and not Willis’s oranges, Hansen’s model has been eerily correct. 
As I stated very clearly, I have not plotted the HadCRUT3 data inaccurately. Instead, I have started the observational data at the same place the scenarios start. You could say this puts the HadCRUT data too low … or you could say that puts the scenarios too low … but if you want to compare them, you have to start everyone off at the same place. Otherwise, you could just pick your offset to suit your purposes, as Hansen has done. To compare apples with apples means that we start everyone out at the same temperature, and see where the observations and the scenarios go from there. That is what I have done. Remember, these are anomalies, not actual temperatures. The starting point is arbitrary. We are interested in comparing the trends, not the absolute values, and to do that, we need to start all of them from the same point. This is implicitly shown by the fact that all three scenarios start from the same identical temperature. If your analysis were correct, Tim, they would all start from different temperatures. Hansen is the one comparing apples and oranges, by starting off from a much warmer temperature than the scenarios. 24. Re 10, Peter, you raise an interesting question when you present Hansen’s updated graph, and ask why not use that one? Three reasons: 1) That updated graph repeats the earlier error of starting too high. This puts it way out of position. 2) The GISS dataset contains an anomalously high value for 2005. This was because they (coincidentally?) changed their dataset at the end of 2005. Jay Lawrimore, chief of NOAA’s climate monitoring branch, believes 2005 will be very close to 1998, the warmest year on record for the nation. “In fact it’s likely to only be second warmest according to the data set we are currently using as our operational version,” he told National Geographic. “(But) an improved data set for global analyses currently undergoing final evaluation will likely show 2005 slightly warmer than 1998. Me, I’m suspicious of “improved” datasets that (coincidentally?) say that things are even warmer than we thought … especially when they say: Our analysis differs from others by including estimated temperatures up to 1200 km from the nearest measurement station. Right … I take the temperature in London … and from that you can estimate the temperature in Italy … right … The third reason is that the model which Hansen is using is tuned to the GISS dataset. Therefore, we should not be surprised that it reproduces the data up until 1988, as it is tuned to do so. But that close correspondence proves nothing. I wanted to see how it would compare to the other major temperature dataset, HadCRUT3. That’s the three reasons I don’t use Hansen’s graph … PS – A final reason … the GISS temperature dataset is designed, analysed, evaluated, and maintained by James Hansen. Using that temperature dataset to verify Hansen’s claims about his models leads to the obvious inferences of data manipulation, particularly given their drive to have 2005 be the “warmest ever”. (Hansen had predicted in February 2005 that 2005 would be the warmest year ever … then, the old analysis showed that it wasn’t the warmest ever … and they changed to the new analysis which showed it was the warmest ever. Coincidence? Perhaps.) While these inferences may not be true, I thought it best to use another dataset for the analysis, to avoid such problems. 25. 
Re #11, per, you say: if the prediction was published in 1988,it is woth noting that the temperatures for the three scenarios and the giss data seem to be quite close in 1988. No, it means nothing, because the model is tuned to reproduce that data. 26. RE: #22 i think one point is that there is a different trend on land, versus (land + sea). If you compare the average difference between giss and crutem3 from 61 to 05, it is +0.067+/-0.064, whereas the difference for giss and hadcrut3 is 0.13 +/-0.06. it is clear that for global temperature that you want to have sea +land, but since giss is only for land, that is the data set hansen has, and therefore uses. No, it means nothing, because the model is tuned to reproduce that data I do not know how Hansen zeroed his predictions, and the giss data set. If there are facts here, I welcome them. What I was suggesting was that Hansen set the zero and the models to be very close in 1988, which is when the prediction was made. I am not suggesting that “means anything”, merely this is what his zeroing convention might be. 28. Per, you say (#20) There is a bigger difference opening up between giss and hadcrut, presumably because giss and crutem are just over land, whereas hadcrut includes sea temperatures. Per, GISS maintains 2 datasets, one of just land, and one of combined land/sea. The land/sea dataset is available at http://data.giss.nasa.gov/gistemp/tabledata/GLB.Ts+dSST.txt 29. All- We discussed this over on our blog a few months ago. Hansen himself has admitted that his 1988 predictions of climate forcings overshot, hence so too did his temperature predictions. This is obvious from Figures 5A and B in this publication by Hansen et al.: Hansen reissued his 1988 preditions in the 1999 paper, because they overshot (otherwise why resissue them!). As well it looks like (to date at least) that his 1999 updated predictions have also overshot in terms of predicted change in aggregate forcings, though it is very early for such judgments. When evaluating predictions it is not enough to look only at the predicted temperature change, but also predictions for changes in various components of climate forcings that enter into the prediction. A good prediction will get the right answer and for the right reasons. 30. OK, a number of people people wanted to see what the graph looks like with the GISTEMP data rather than the HadCRUT data … so by popular demand, here it is. Note that there are a number of small year-by=year differences between the original GISTEMP dataset and the new analysis GISTEMP dataset. The net result is the same as with the HadCRUT3 dataset, however “¢’ ¬? the various scenarios are all too high. Hansen reissued his 1988 preditions in the 1999 paper, because they overshot (otherwise why resissue them!). As well it looks like (to date at least) that his 1999 updated predictions have also overshot in terms of predicted change in aggregate forcings, though it is very early for such judgments. Will we have to wait until 2009 before Hansen admits that he’s been flat wrong for all of this time? 32. Re 28, Roger, always a pleasure to have you contribute on any thread. Let me take this opportunity to recommend your excellent blog to anyone interested in climate science. However, you say that “Hansen himself has admitted that his 1988 predictions of climate forcings overshot, hence so did his temperature predictions”, and you say this is shown in the paper you cited. 
However, in the paper Hansen says: The climate index is strongly correlated with global surface temperature, which has increased as rapidly as projected by climate models in the 1980s. So it appears that he agrees that the forcing predictions were too high, but he still says the temperature rose “as rapidly as predicted” … go figure … Thanks again, 33. Per, thanks for the observation where you say: I do not know how Hansen zeroed his predictions, and the giss data set. If there are facts here, I welcome them. What I was suggesting was that Hansen set the zero and the models to be very close in 1988, which is when the prediction was made. I am not suggesting that “means anything”, merely this is what his zeroing convention might be. However, examination of the chart shows that the three scenarios are far apart in 1988, and are the same in 1958. This agrees with the reference given by Roger Pielke Jr. above, which shows that Hansens’ forcings start out together in 1958, and diverge from there. Thus, we need to start our comparison with observations in 1958. Per, GISS maintains 2 datasets,… mea culpa, my ignorance. I replotted the data, but zeroing everything at 1988 arbitrarily. If you do this, all three data sets (GISS, Crutem3, Hadcrut3) come out very close in 2005. However, they are then divergent in 35. Here is 36. Let me ask a Cohn and Lins type question (maybe exactly their question — I don’t have the article so I can’t say). Suppose we do a semi-naà Æ à ⮶e (or semi-sophisticated, depending on your point of view) time-series forecast, with trend estimation, of temperature. Make the forecast coincident with the time of a GCM forecast. (Cohn and Lins, if I recall, include drivers — the forcing variables — in their analysis. What I am suggesting here is more naà Æ à ⮶e: just do the simple time-series forecast, say a ARMA(3,3), and look at the results in that context.) For example for the 1988 Hansen forecast one would use the 1880-1988 data set and forecast 1989-2005. Put the forecast confidence intervals in and compare to the actual temperature (of the specific series) and the model. Then test if the model better predicts than the time-series forecast. (Somebody wants to tell me where Hansen’s model forecast data is and I will do it.) From looking at the pattern of temperature and the forecast confidence interval — roughly +/- 0.3 degrees C — it does not appear that standard forecast evaluation tests on levels would lead one to pick the model over the time series forecast. However, if you include an evaluation of turning points, the results might be more interesting (particularly with the Hadley data that produces a very large AR(2) terms with insignificant AR(1) and AR(3) terms, producing a strong sawtooth pattern to the forecast). 37. #135 I bet time series work better then computer models because we have a better grasp of what the confidence intervals are for the predictions and we are less likely to make a coding error. 38. Willis, Do you know if the delta-T is with respect to a climate “normals” period (e.g.,1931-1960, 1951-1980 etc.), or wrt a specific year? Since the standard in climatology is to use at least a 30-year averaging period, that might explain why the plot of the annual observations don’t start where you think they should. I really do not know, but I do think it would be odd to use an arbitrary year (given interannual variability) as a baseline. The validity of this entire discussion rests on whether each plot is supposed have the same origin. 
I am not certain that they are. Again, if a climatological baseline is used for comparison, then it would make sense that the first delta-t observation has a nonzero value. And since the model scenarios don’t appear to start at zero (and I can’t squint well enough to see that they even have the same starting point, as you claim), then there may be more going on than you suspect. In which case, you would have to move your “correction” back up to where Hansen put it. The answers may be in the 1988 paper. Maybe someone here has read it and knows the baseline? 39. I have glanced at (and I mean that literally, 40. Kenneth, you say: I really do not know, but I do think it would be odd to use an arbitrary year (given interannual variability) as a baseline. Odd or not, Hansen has started his forecast in 1958. All three of the scenarios start in 1958, at exactly the same temperature. If we are to compare these scenarios to observations, what justification is there for using any starting point other than that temperature? 41. Willis, If the temperature baseline is from a climate normals period, then there is no reason that the delta-T for the first observation should be zero. It would simply be the difference between the observed value and the climate normals value (in the case of 1958, it is positive). Your methodology only makes sense if the delta-t is with respect to 1958. Note that I am not saying that it is or is not. All I am saying is before you go changing a graph, you should be absolutely certain that you understand the context of it. I am admitting I do not; do you? 42. BTW, apologies for (current) #38. I was saying that a glance at the 1998 paper (linked by RP Jr) indicates they used the 1951-1980 normals period for their index values. But it really only was a glance, and I can not commit to giving it more than that right now. 43. #22 Willis You quote me but it is not anything I said in #17. 44. Perhaps, in the end, what counts are the slopes. What I’ve eyeballed, from the graph start to the final five years, are the following delta Ts: Scenario A: +0.92 Scenario B: +0.62 Scenario C: +0.55 adjusted GISS; +0.55 (using the starting point chosen by Hansen) HADCRUT3: +0.42 So, GISS and HADCRUT3 are running at or below scenario C (the one with the end of CO2 growth as of 2000). Besides all of that, I have wondered how the models were able to predict a multi-year dip in temperatures around 1964 which came true. That is remarkable. Perhaps it is just chance. I am also surprised that temperatures seem to level off so soon after 2000, in scenario C. I expected a longer lag. 45. I got to thinking about the Hansen scenarios, and I thought I’d compare their statistics to the actual observations. One of the comparisons we can make regards the year-to-year change in temperature. The climate models are supposed to reproduce the basic climate metrics, and the average year-to-year change in temperature is an important one. Here are the results: These are “boxplots”. The dark horizontal line is the median, the notches represent the 95% confidence intervals on the medians, the box heights show the interquartile ranges, the whiskers show the data extent, and the circles are outliers. It is obvious that the “Scenarios” do a very poor job at reproducing the year-to-year changes of the real world. Whether or not they can reproduce the trend (they don’t, but that’s a separate question), they do not reproduce the basic changes of the climate system. They should be disqualified on that basis alone. 46. 
Re 42, Tim Ball, I think the numbers have been messed up by retro-spanking … the post was actually from Tim Lambert. I’ll use last names always from now on 47. Re 43, David Smith, I appreciate your contributions. You ask: Besides all of that, I have wondered how the models were able to predict a multi-year dip in temperatures around 1964 which came true. That is remarkable. Perhaps it is just chance. The answer, of course, is that up until 1988 the models were tuned to the reality. It is only for the post-1988 period that they are actually forecasting rather than hindcasting. I just noted another oddity about the scenarios versus the data. The lag-1 autocorrelation of the scenarios is incredibly high: GISTEMP 0.79 HadCRUT3 0.82 Scenario A 0.98 Scenario B 0.96 Scenario Cà ‚⠠ 0.94 I remind everyone that the modeler’s claim is that they don’t need to reproduce the actual decadal swings, that it is enough to reproduce the statistical parameters of the climate system. To quote again from Alan Thorpe: However the key is that climate predictions only require the average and statistics of the weather states to be described correctly and not their particular sequencing. In this case, they’ve done a very poor job at describing the average and statistics … 48. Kenneth, they did use 1951-1980 as a base line for comparisons. Willis’s use of 1958 instead is erroneous, and the reason why he gets the result he does. If you do the comparisons the normal way, scenarios B and C are very close to what happened, whether you use GISS or HADCRUT3. Re#10-Here’s one I found some time ago – C or B I’d say. and Re#21 – Re #20 To me, the chart presented by Hansen, taken at a glance, indicates that temperature rises are running at about a scenario B rate, indicating the models have it “about right”. So you are picking the “correct” model scenario(s) based on the results? That’s inappropriate. The “correct” scenario is the one which meets the scenario assumption criteria, not the one that most closely matches the observed results. One has to take the scenario A, B, or C which has occurred and compare those model results with the observed temperature. So the questions is…which greenhouse growth scenario has most closely occurred since 1988 – A, B, or C? That’s the ONLY scenario that should be compared to the observations. If the model runs start in 1958?, then the observed temperature data trend should start at 0 in 1958 and so should the models. However, it is possible that the model runs actually start earlier than 1958 (where the temperature trend started at 0 initially as well) but Hansen did not show the earlier than 1958 simulations and observations. I’m curious how the pre-1988 results look. Other posters here have argued for/zeroes observed values to match the model predictions of 1988, and I think that’s valid – especially since the y-axis is not T but delta T. 50. Re #47, Michael, you ask an interesting question when you say: So the questions is…which greenhouse growth scenario has most closely occurred since 1988 – A, B, or C? That’s the ONLY scenario that should be compared to the observations. Unfortunately, there is no easy answer to your question. 
Hansen's scenarios assumed the following:

Scenario A
- CH4: 0.5% annual emissions increase
- N2O: 0.25% annual emissions increase
- CO2: 3% annual emissions increase in developing countries, 1% in developed

Scenario B
- CH4: 0.25% annual emissions increase
- N2O: 0.25% annual emissions increase
- CO2: 2% annual emissions increase in developing countries, 0% in developed

Scenario C
- CH4: 0.0% annual emissions increase
- N2O: 0.25% annual emissions increase
- CO2: 1.6 ppm annual atmospheric increase

and the results were:
- CH4: 0.5% annual emissions increase
- N2O: 0.9% annual emissions increase
- Developed countries CO2 emissions increase, 1988-1998: -0.2%
- Developing countries CO2 emissions increase, 1988-1998: 4.4%
- CO2 atmospheric increase (Mauna Loa, 1998-2003): 1.6 ppmv

In the event, Scenario A was closest for methane, C was closest for CO2 (up until 2000, when it leveled off), and B wasn't closest for anything …

However, life is never that simple. The problem is that we are mixing apples and oranges here, because we are looking at both emissions (methane, N2O, and CO2 in Scenarios A and B) and atmospheric levels (CO2 in Scenario C). As one example of the problem that causes, Scenario A had the highest estimate (0.5%) for the annual increase in methane emissions. This was also the best estimate. Now, methane emissions have been increasing radically, going from about 0.1% growth during the 1990s to over 1.0% from 2000-2005. Atmospheric methane growth, on the other hand, has been dropping steadily, from about 0.5% growth per year in the early '90s to about zero now. The concentration of atmospheric methane is currently about stable, neither increasing nor decreasing. So … which of the three scenarios best captures that change in methane? We can't tell, because we don't know how the model responded to the assumed growth in methane. I doubt greatly, however, that the drop in atmospheric methane growth to the current steady state was forecast by any of the scenarios … And the simple answer to your question about which scenario was closest to reality? … We don't know.

51. Bender and others ragged me so much about overfitting that I think I finally understand it. And it looks to me like these models are just a very complex type of overfitting. If you have to change a model every few years to reflect reality, then it seems to me that you definitely are engaging in overfitting. If the model stands the test of time, then maybe you are on to something.

52. Re #50: IOW, some of the scenarios almost predicted the right temperature with the WRONG DATA. LOL.

53. One question: How do the contributions of Thomas R. Karl and Patrick J. Michaels from the 2002 U.S. National Climate Change Assessment, "Do the Climate Models Project a Useful Picture of Regional Climate?", fit into this picture? The link to the hearing was recently posted here, but there were no comments…

54. Re 49, Michael, you say: Other posters here have argued for/zeroes observed values to match the model predictions of 1988, and I think that's valid – especially since the y-axis is not T but delta T.
We can do that, this is a full-service thread … here are the results: For short-term estimates, we have a couple of comparisons that we can make. The first is just to compare the scenarios to the observations. As you can see, the observations have been running cooler than all of the scenarios for almost the whole period.
(Curiously, Hansen said that the three scenarios were picked to “illustrate the broad range of possibilities”, but post 1888, the B and C scenarios are very similar … but I digress …) The other comparison we can make is to a straight linear extension of the trend of the previous decade. If the models are any good, they should perform with more skill than a straight extension of the trend. However, they do not perform with more skill. The results are: Scenario A r^2 = 0.58 Scenario B r^2 = 0.34 Scenario C r^2 = 0.54 Trend Only r^2 = 0.60 In other words, the straight line trend is better correlated with the outcome than any of the scenarios. Not exactly a resounding vote of confidence for the model and the scenarios thereof … 55. #54 Willis, the r^2 is a bit unfair. B gets low r^2 because it’s in antiphase with observations, but it still the closest to obs. The problem is that observations include stuff such as the Pinatubo eruption and the anomalous 1998 ElNino that cannot be predicted by the models (maybe eventually ElNino, but not volcanic eruptions). Wouldn’t a better comparison use the linear trends from the 3 scenarios r-squared with observations? What is typically used as a measure of model skill? #51 you make an interesting point. If the models are changed all the time, then we can never really know if they are “better” than the previous versions. The skill of the model is in predicting the unknown, not reproducing the known. One could then argue that we never know if we are “improving” the models or not. That’s a kind of fundamental philosophical argument against models, similar to what Hank Tennekes did, following Karl Popper. Does anyone know how many parameters are needed to “tune” the model? An ideal model would have no free parameter. Can we evaluate the skill of a model by how few free parameters it has? 56. The model being sponsored by BBC, (ClimatePrediction.net) has 34 adjustable parameters(3 values each)in their model. Not sure how comparable it is to other models. It sure seems to be an amazing amount of places to tune the model. Here’s a link to their parameters. http://www.climateprediction.net/science/parameters.php 57. RE 55, you ask, “how many parameters are needed to “tune” the model? An ideal model would have no free parameter.” I am reminded of Freeman Dyson’s story of taking his results to Enrico Fermi for evaluation: . . .[Fermi] delivered his verdict in a quiet, even voice: “There are two ways of doing calculations in theoretical physics”, he said. “One way, and this is the way I prefer, is to have a clear physical picture of the process that you are calculating. The other way is to have a precise and self-consistent mathematical formalism. You have neither.” . . .”To reach your calculated results, you had to introduce arbitrary cut-off procedures that are not based either on solid physics or on solid mathematics.” In desperation, I asked Fermi whether he was not impressed by the agreement between our calculated numbers and his measured numbers. “How many arbitrary parameters did you use for your calculations?” I thought for a moment about our cut-off procedures and said “Four.” He said “I remember my friend Johnny von Neumann used to say, with four parameters I can fit an elephant, and with five I can make him wiggle his trunk.” With that, the conversation was over. So yes, the models can reproduce the past, and the elephant can wiggle his trunk … 58. The models just predict a 0.1 to 0.4 C increase per decade. 
They are just computer models that spit out a particular increasing trend-line. The fact that observed temperatures happen to increase at something like the same rate is a spurious relationship. The models do not have the right coefficients for GHG effects or for their secondary effects. The models do not explain the past history of the climate (including ice ages and the climate of the distant past). Other factors that affect the climate are more important (or at least not sufficiently taken into account) in these models. It is a fluke that we are even examining predictions versus actuals.

59. Re 56, Francois, thanks for your questions. You say: Willis, the r^2 is a bit unfair. B gets low r^2 because it's in antiphase with observations, but it is still the closest to obs. The problem is that observations include stuff such as the Pinatubo eruption and the anomalous 1998 El Nino that cannot be predicted by the models (maybe eventually El Nino, but not volcanic eruptions). Wouldn't a better comparison use the linear trends from the 3 scenarios r-squared with observations? What is typically used as a measure of model skill?
Actually, the scenarios include random volcanic eruptions, but you are right regarding the general effect of unpredictable future events. As a full-service thread, I will show the results, but I don't put too much stock in them. Why? Because: 1) the models are supposed to do better than just a straight-line trend … otherwise, why have models? and 2) it's quite possible, especially when we reduce a complex situation (climate changes) to one single number (trend per decade), to get the right answer for the wrong reasons. I have already shown that the models are far from lifelike, in that their year-to-year changes in temperature are much smaller than reality. They are very poor at modeling the statistics of the climate. Given all that, here are the trends and the RMS error. The RMS errors are:
Scenario A: RMS = 1.05°C
Scenario B: RMS = 0.44°C
Scenario C: RMS = 0.56°C
Extended 1979-1988 trend: RMS = 0.48°C
Note that despite the claim by Hansen that the scenarios should give the range of possibilities, the trend line of the observations is below the trend line of all of the scenarios.

60. #56 — Bob, do they anywhere give the uncertainties in these parameters?

61. Willis, can you post up some of the digital series for these models so that people can do simple time series work on them? Also, what is the URL for the source of the digital version? Thx

62. This entire analysis and the analysis of the analysis (and indeed the entire debate) relies on temperature data that is highly suspect. Just because the temperature data is "all we have" doesn't make it particularly useful. Any number of signals could be buried in the assumptions, discontinuities and their smooth-outs. The failure to zero in Hansen's graph is probably not malicious; more likely a "close enough for government work" mentality. The comments on weather station scarcity are most telling. I guess it is just more fun to drill holes in the ice or trees and play with satellites than undertake actual observations of the data. Instead of trillions in CO2 control, how about some nice instruments all over the place? Gather data for a statistically significant period, plot it against other real data from other real instruments and see what we have. Even in the worst case scenario, if it takes 15 years to get valid data, you only have a 1C temp. increase.
This might keep the polar bears out of Churchill, and chop a day or two off the ski season at Telluride. We need about a $ billion a year to maintain collect and analyze. You might even be able to tell how far apart you can maintain your sensors before they lose validity. “Oh, that is too much like real science. Too much like work” The fact that no one with any power is screaming for more observations tells you everything you need to know about the debate. 63. Re 61, Steve M., here’s the data. There’s no URL, these guys rarely publish their results, it is digitized from the Hansen graph. It was then checked by using Excel to generate graphs of the digitized data, then overlaying those graphs on the original Hansen graph to make sure there are no errors. I’ve done this data comma delimited, to make it easy to import into Excel or R. The GISTEMP and HadCRUT3 data has been aligned at the start with the three scenarios. Year,GISTEMP,HadCRUT3,Scenario A,Scenario B,Scenario C My best to everyone, have fun, 64. Proving a climate model even with a 0.6 Celsius error that accumulated in four decades requires a really skillful scientist. ;-) Maybe the explanation is that the actual weather is bribed by the oil industry. :-) 65. Remember the old saw “All models are wrong; some models are useful”. But that begs the question “Useful for what?” I think for pure research, to try to ferret out all the complexities of the physics of climate and weather, the GCMs (and the coupled versions) are interesting and probably useful. For the purpose of trying to say something meaningful about the future state of the climate, either locally or globally, they are pretty much useless. Just think about the initial conditions problem: these models compute the flow of heat (flux) through the different layers of a gas (or liquid for the oceans) and try to account for the mixing of the different layers, etc. They do this by breaking up the surface of the planet into cells, and the newest ones slice these cells vertically as well. Now before they can start a model run they need to seed all the cells with data, but how do they know what the right values are for any particular starting point in time? There’s a lot of data for each cell, and most of it must be in the form of a vector – for example it’s not good enough to just know how much heat is in a cell, you have to keep track of which direction it is flowing in. As complex as these models are it would seem to me that even small changes in the initial conditions could change the outcome drastically. The flux adjustment issue is a whole other can of worms. Some modelers believe that the drift they are seeing in the outputs are caused by fundimental erros in the models, with improper cloud modeling at the top of the suspect list. Others think that the model resolution is just too low and the drift is a consequence of subtle data-losses that accumulate over simulation time. Newer, faster computers will solve the problem they think. Either way, this problem introduces unknown, and maybe unknowable errors into the results. I won’t even go into the Linearity Assumption and whether it can even be proven; the known problems with the models already show why the results should not be quoted outside the lab. Finally, Jae is right. Comparing these model outputs to observation is a massive exercise in overfitting. 66. Re #65 This is the precise reason I came to this blog. 
To hear a discussion – any discussion, pro or con – on this point: Comparing these model outputs to observation is a massive exercise in overfitting. I am inherently skeptical of the GCMs, but (being unqualified to judge for myself) I am prepared to shift my view based on an objective analysis of model structure, function, & performance. Unfortunately, what I am seeing from the various web-based resources (pro and con) only serves to fuel my skepticism, not quell it. Chief among my concerns is – as alluded to in #60 – the pretense that there is either no uncertainty in model parameters, or that any uncertainty that does exist is not worth studying formally. Again, I ask – as with hurricane frequency data, as with the tree-ring proxies – are policy-makers intentionally being fed uncertainty-free pablum? If so, then why? And is this acceptable? I look forward to following this thread very closely. Proving a climate model even with a 0.6 Celsius error that accumulated in four decades requires a really skillful scientist. Proving even the 0.6 C is problematic when your error is +/- 1.0 C at best. 68. #57 Willis, that is one of the best stories about modeling I’ve seen. Thank you! 69. That is a great quote, and I was going to mention it in the “overfitting”” discussion with jae … along with another one by a French mathematician (who escapes my memory): “give me enough parameters and I will give you a universe”. Unfortunately, that’s a paraphrase. I couldn’t track down the original. 70. Willis, On climateprediction.net, they say : By using each model to produce a ‘hindcast’ for 1920-2000, and then comparing the spread of forecasts with observations of what actually happened, we will get an idea of how good our range of models is – do most of them do a good job of replicating what actually happened? This will also let us ‘rank’ models according to how well they do. All the models will also be used to produce a forecast for the future – until 2080. When this experiment finishes, we will have a range of forecasts for 21st century climate. I’m wondering how they do that (couldn’t find the details). A sensible method would be to start with the period 1920-1960, and find the “best” parameters. Then test the period 1960-2000 with those same parameters, and evaluate the skill. Now conversely, one should do the same exercise starting with the 1960-200 period, and then using it to test skill on the 1920-1960 period. Ideally, the set of parameters that best fits one period should be the same that best fits the other, and the predictive skills for those symmetrical experiments should be the same. That would give a lot of confidence in the choice or parameters. But then there is another question: how many different sets of parameters give an acceptable fit? With that many parameters, it is likely that there are a great many combinations of parameter values giving similar results. How do you chose between them? Of course, there is also the problem that maybe some unknown parameters are not included in the models. I see that most parameters are related to cloud formation. If it is, as some believe, related to the amount of cosmic rays, then that is a parameter that is not, to my knowledge, in any of the current GCM’s. Thus it is impossible to know if there would be a better fit with its inclusion. Good skill from current models would be pure luck, and predictive skill inexistent. 
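The symmetric calibrate/validate test described just above is straightforward to mock up. The following sketch is a toy illustration only: the "model" is a low-order polynomial in time and the data are synthetic, both stand-ins chosen so the example runs on its own; it is not a description of how climateprediction.net actually ranks its runs.

```python
import numpy as np

def split_sample_skill(years, temps, split_year, degree=2):
    """Calibrate a toy model on one period and score RMSE on the other,
    then repeat with the two periods swapped."""
    early = years <= split_year
    results = {}
    for label, fit_mask in [("calibrate early / validate late", early),
                            ("calibrate late / validate early", ~early)]:
        test_mask = ~fit_mask
        coeffs = np.polyfit(years[fit_mask], temps[fit_mask], degree)
        pred = np.polyval(coeffs, years[test_mask])
        results[label] = float(np.sqrt(np.mean((pred - temps[test_mask]) ** 2)))
    return results

# Synthetic stand-in series (trend + slow wiggle + noise) so the sketch is runnable.
rng = np.random.default_rng(0)
yrs = np.arange(1920, 2001)
anoms = (0.005 * (yrs - 1920) + 0.1 * np.sin((yrs - 1920) / 8.0)
         + rng.normal(0.0, 0.1, yrs.size))
print(split_sample_skill(yrs, anoms, split_year=1960))
```

If the two RMSE scores differ greatly, or if the best-fitting parameters change substantially between the two calibration periods, that is exactly the warning sign the comment above is asking about.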
For those interested, ClimateScience has a good post today on how algae in the Pacific have a significant effect on SST, something that, until recently, was not included in models. 71. re #60 To my knowledge uncertainty in values isn’t quantified by them. They do say they have three values available for use with each of their 34 parameters. I got the impression they have low, medium and high values for each. Sort of like an electric stove. These parameters don’t even cover the external forcings. I don’t know how they handle them. Had a discussion in a previous thread with a fellow named Carl that works for CP.net. Found here. http://www.climateaudit.org/?p=635#comments Starting around comment #174. He wasn’t at all helpful in answering questions. Although he did post a link to this letter they had in Nature. I’m not the sharpest tack in the box, since my formal education ended with HS in the mid 60′s. But I critiqued it and found it to be extremely biased in their selection of model runs to discard. Essentially almost all that showed negative temps in the training period. 72. If you blindly produce a million random models with 34 parameters each, then choose the one that best fits the data, you are going to get an overfit which may perform as well as an insightfully constructed (yet overfit) model. This is, in some ways, the worst kind of overfit because the selection process is unpenalized re: the number of candidates tested. i.e. The larger the number of candidates tested, the higher the likelihood of getting a spurious fit. If you are a serious college football fan … you will understand immediately the analagous problem of figuring out which teams are the top 8 teams that deserve to go to the BCS. You have hundreds of teams, but very few head-to-head experiments to analyse (especially among top teams, which actively seek to avoid the toughest competitions!), so you are woefully ill-eqipped to determine win-probabilities from what is a heavily under-determined data matrix. Enter the “process-models” (QB ratings, special teams performance stats, etc.) to solve the fundamental data inadequacy problem. But these models (there are dozens of them) perform poorly, and worse, are regionally biased. This is so well-recognized that rather than pick one, the BCS method averages them all in a great melting pot of bad math. (Sounding familiar?) And finally, the “computer rankings” are weighted as only a minor component compared to “expert opinion” – where opinion in year t is heavily correlated with opinion in year t-1. (It is an AR(1) process.) Climate modeling is about as flaky as the BCS college football performance modeling system. But whereas we are free to debate the BCS, there are many who believe we are unqualified to ask questions of the GCMs. 73. I think the parameters in the model should not be identified by using the computer model to find the best fit as the computer model are far too complex. Things like cloud feed back can be estimated from satellite and instrumental data. I assume other experiments could be done to describe the various energy exchanges between phases of mater. As for fitting the initial conditions I would recommend using Bayesian probability. We have some prior idea of the probability of the initial states p(a) based on principles like maximum entropy. We have a set of measurements b we want to find the expectation of a given the distribution p(a,b)=p(b|a)p(a). 74. Right on point Bender. 
In that old thread, I mentioned that if you give me the Daily Racing Form data from last year, I could write a program that would show a profit when run over that data. It wouldn’t have any predictive power tho’. Additionally. If there was a program that accurately predicted the outcome of sporting events, it would quickly become useless as the general public started using it, due to the odds being adjusted to compensate for better public knowledge. All gambling programs lose any edge they might have had once they are made public. 75. In financial forecasting, there is always a reckoning when the out-of-sample validation test data come in. A judgement day, if you will. The pain on that day is palpable. In climate modeling science, do you know what they do to avoid that judgement day? They “move on“. 76. Timmy-me-boy has some comments over at Deltoid to the effect that a 30-year average should be used instead of lining up the starting points at 1958. To me this doesn’t make sense as you have to run the models first to get their averages. Anyway, wouldn’t the averages be different for the A, B, and C scenarios? Seems like just a lot of confusion thrown in to give the Alarmists something to grasp on to. Can you comment Willis? Thanks. 77. Re # 70, Francois, because the models are tuned (not to mention overfitted) to the past trends, how well they do in replicating the past trends is meaningless. However, there are a number of other measures that we can use to determine how well the models are doing, measures to which they are not tuned. See for example my post #45 in this thread, which examined the models perfomance regarding ‘ˆ’€ T, the monthly change in temperature. That analysis made it clear that Hansen’s model did not hindcast anywhere near the natural range of Other valuable measures include the skew, kurtosis, normality, autocorrelation, and second derivatives (‘ˆ’€ ‘ˆ’€ T) of the temperature datasets. Natural temperatures tend to have negative kurtosis (they are “fat tailed”, for example. Let me go calculate the Hansen data … I’m going out on a limb with this prediction, I haven’t done the calculations before on this …OK, here it is. From the detrended datasets, we find the kurtosis to be: GISTEMP , -0.85 HadCRUT3 , -0.50 Scenario A , 0.68 Scenario B , 0.15 Scenario C , 1.14 You see what I mean, that although the models are tuned to duplicate the trend, they do not replicate the actual characteristics of the natural world. In this case, unlike natural data, the scenarios all exhibit positive kurtosis. Re the comment that a “30 year trend” should be used instead of lining up the starting points … why 30 years? Why not 10, or 35? Each one will give you a different answer. It seems to me that the answer we are looking for is “where will we be in X years from date Y?” To find that out, we have to line up starting points at date Y. But first, we’d have to have models that I call “lifelike”, that is to say, models whose results stand and move and turn like the real data does, they match observations in kurtosis, and average and standard deviation of ‘ˆ’€ T, and a bunch of other measures. Then we need models that don’t have parameters and flux adjustments to keep them in line. At that point, we can talk about starting points and thirty year averages. Until then, the models are only valuable in the lab in limited situations. 78. 
Re 73, John Creighton, you have a good idea when you say: We have some prior idea of the probability of the initial states p(a) based on principles like maximum entropy. We have a set of measurements b we want to find the expectation of a given the distribution p(a,b)=p(b|a)p(a). Unfortunately, our ground based data are abysmal, far too poor to do what you suggest. Africa, for instance, has 1152 weather stations, many of them unreliable. That’s one station for every 26,000 square km if they were evenly spaced, which they’re not. Garbage in, garbage out … 79. Just an observation. Willis quoted the original article as saying: Scenarios B and C also included occasional large volcanic eruptions, while scenario A did not. If I recall correctly, since Pinatubo in 1991 there haven’t been any large volcanic eruptions. This suggests again that perhaps the AGW forcing is overstated in the model(s). 80. #78 Whether the combination of instrumental and satellite data currently available is enough to initialize the models to get meaningful runs from a reasonable number of Monte Carlo trials is secondary to the question of weather the proper approach is to try and initialize the models using as much of the current instrumental and satellite measurements as possible. Of course given a model with fine resolution (a large number of initial states) it is necessary to use every measurement possible for initialization. Additionally without some prior understanding about the statistical distribution of the initial states (e.g. maximum entropy, seasonable variability, day nighttime variations, latitude and altitude variations, inland vs coastal variations) it is impossible to initialize the initial states beyond the resolution of the measurement system in any kind of meaningful way. 81. #65 — another can of worms to add to your list, Paul, is that the data calls in the GCMs, while running a simulation, can act like a periodicity in climate and end up accelerating some momenta artifactually. I remember reading a paper warning about this a while back. #71 — Thanks for your thoughts, Bob. I recall Carl from ClimatePrediction. Calling him ‘non-helpful’ is granting him far too much grace. As I recall, Willis critiqued the CP Nature paper here in CA. After that crushing expose’, it was easy to see the paper was worthless. It then became more easy to conceptualize something that had been bothering me in a visceral sort of way, namely that Nature isn’t really a science journal. It’s an editorializing magazine specializing in science and its more expert imitators. 82. Thanks for your analysis, Willis. I’m enjoying it. But a question about #77: But first, we’d have to have models that I call “lifelike”, that is to say, models whose results stand and move and turn like the real data does, they match observations in kurtosis, and average and standard deviation of à ⣃ ’ ” ¬⟔, and a bunch of other measures. Could you explain in more detail why it is necessary for the model to be be “lifelike”? I’m not sure that it is necessarily an indication of a good model. I’d guess that the statistical characteristics of the historical temperature record are due in large part to the exact sampling method used. Unless the model is artificially simulating the same measurement errors, why should it produce data with the same characteristics? Since the purpose of the model is to show the general trend, wouldn’t a smoother, less chaotic, model be more useful? For that matter, why is the year-to-year noise so large for the models? 
Wouldn’t the predictions be smoothed out over multiple runs of the model? Or is the model only run once? Along those lines, I wonder if any model that too closely matches the historical record should be viewed suspiciously, as the ability to track (what are most likely) random fluctuations seems like it must be evidence of overfitting. 83. Very interesting thread — I’ve been studying the global warming issue some time as something of a hobby and am continually struck by the utter lack of appreciation for the essential variability of global climate as best as we can determine it from paleontological records over very long time scales. Ice ages. Drought and desert. Humans not present at all for most of it. The data presented above is of course a snapshot of the merest flicker of time. It correlates well with almost any upwardly trended number (with suitable one or two parameter adjustment) — the rate of growth in the GDP, for example. However, correlation is not causality is a standard adage of statistics. Correlation can make one think of causality, however. The most plausible explanation I have seen for climate variations over time periods of centuries has to be that given here: There is lovely and (I think) compelling correlation between various direct and proxy measures of solar activity stretching back over at least 1000 years and historical records of temperature including the medieval optimum. To me the most interesting thing about it (and the reason that I bring it up now) is that it is pretty much the only single predictor that very clearly reproduces the “bobble” in temperatures that occurred in the 50′s through the 70′s, that provoked doom-and-gloom warnings at that time not of global warming but of the impending Ice Age! Figure 3 is a pretty picture of the trend through the late 80′s — I don’t know of anyone that has extended the result out through the present although that would clearly be a useful thing to do. Plots like this, either lagged 3-5 years or otherwise smoothed, exhibit correlations in the 0.9 and up range, including all the significant variations. I have no idea what goes into the multiparameter fits like Hansen’s, but a one parameter fit that beats them hands down over a far longer time scale is obviously a much better candidate for a causal connection. Beyond the statistics, I have to say (as a physicist) that I find solar dynamics to be an appealing dynamical mechanism for climate variation as well. To begin with, I’m far from convinced that the physics of the greenhouse effect for CO_2 supports a linear relationship between CO_2 concentration and heat trapping. Is an actual greenhouse with thick glass panes better at trapping radiation than one with thin ones? I think not, and that isn’t even considering the quantum mechanics (where the efficiency of CO_2 as a greenhouse gas actually drops with as the mean temperature of the blackbody radiator rises and shifts the curve peak away from the relevant resonances, as I understand it). Then there is the monotonic increase of CO_2 over hundreds of years but the clearly visible and significant variations in temperature with no observed fluctuation correlations with this trend. Clearly CO_2 can be no more than a factor in global temperature, and equally clearly solar irradiance variations must be another. 
Of course that leaves one requiring a two or more parameter fit, and one can either make CO_2 dominant (and explain the fluctuations with solar irradiance) or make solar irradiance dominant (and throw the CO_2 contribution out all together). Occam would prefer the latter, but the former could be right, if one looks at only 200 years of data. If you look all the way back one to two thousand years, however, then things like accurate estimations of global temperature become paramount. CO_2 from human sources (at least — there are many sources of CO_2 and it isn’t clear that the dynamics of CO_2 itself in the ecosystem is well understood or entirely predictable) was clearly negligible over most of this time, yet historically there appear to have been periods when the temperature was as high as it is today if not higher. Finally there is the nastiness of politics. I’ve just finished reading the papers that absolutely shred the statistical analysis behind the “approved” view of global warming with its “hockey stick” end and shoddy (and dare I say ignorant) statistical methodology. This isn’t science — this is about money, getting votes, saving rain forests, pursuing secondary agendas. Science is an open process, and one that doesn’t tailor results and ignore (or worse, hide in a mistreatment of a dizzying array of numbers) “inconvenient” truths — like the fact that an unbiased examination of global temperatures over long time scales suggests that our current climate is “nothing special” 84. Would you care to comment on this? 85. Would you care to comment on this? Someone seems to think you are doctoring the data. 86. eh nevermind I see you commented. That’s what I get for posting right after I wake up. :-p 87. #83 RB Can I ask how and where you started your research on AGW and what caused you to look into it in the first place? It sounds like you’ve had a similar experience to me. In my case Tim Lambert’s dissing of someone who I respect caused me to wonder whether there was something not quite right and sure enough (as you’ve seen for yourself) it isn’t. Like me, I hope you have lots of friends who are also scientists and engineers. Are any of them also waking up to the poor science that underpins the AGW myth? 88. Re #83 and Fig 3 of the SWFG article, I wonder what you think of this? 89. Another example of the models not reflecting past reality. 90. Solar variation should have an excellent correlation with global temperature except where other forcings rear their ugly head, in particular volcanic eruptions (the effects of which clear in a few years) and perturbation of greenhouse gases and aerosols by human emissions. There is an early anthroprogenic hypothesis that the rise of civilization has had an effect over the past 8K years, but that would have created a fairly steady state so that until about 200 years ago solar would still have dominated any changes. That being so, the argument that there is an excellent correlation of solar activity vs. global temperature over the past 1000 years is irrelevant. What one has to do is compare solar activity with global temperature over the past 200 years. If you do so you find that solar activity becomes a worse proxy, the closer to the present one comes, and that you cannot explain measurements over the past 50 years or so without either waving your hands or making very, shall we say, contested claims about various mechanisms. This can even be seen in Soon and Baliunas (Ap J 472 (1996) 472 summarized in this graph. 
Note that the greenhouse gas effect takes off in 1950 and dominates in later years. The paper was severely criticised (well as those things go) in the IPCC TAR for not having enough variability. Which brings up the basic criticism of what was done in this post. Since both models and observations have real variability (in the latter case both observational and climate related), one should set baselines by averaging over a fairly long and relatively quiescent period. Moreover it should be the same period so that one compares ducks with ducks and not chickens, similar beasts, different tastes. Finally, per has a good point. HadCrut3 includes sea surface temperatures, GISSTEMP does not. The 1988 Hansen graph is for “Annual mean global surface air temperatures from the model compared with the Hansen and Lebedeff GISSTEMP series. By the way it is contructed, the latter does not cover most of the ocean. Therefore comparing HadCrut3 with the 1988 calculations is comparing ducks with geese, even if all of the sets were properly zeroed. If you are going to use a Hadley Center product Crutem3 is a much better choice. It is quite clear from looking at the data that HadCrut3 has much less variability that Crutem3 which, of course, is no big surprise. There are more fowl arguments here for later, but we do have to forego the pleasure for other tasks 91. Re #89, Eli, thank you for your posting. Among other points which I will answer when time permits, you say: Finally, per has a good point. HadCrut3 includes sea surface temperatures, GISSTEMP does not. The 1988 Hansen graph is for “Annual mean global surface air temperatures from the model compared with the Hansen and Lebedeff GISSTEMP series. By the way it is contructed, the latter does not cover most of the ocean. Therefore comparing HadCrut3 with the 1988 calculations is comparing ducks with geese, even if all of the sets were properly zeroed. If you are going to use a Hadley Center product Crutem3 is a much better choice. It is quite clear from looking at the data that HadCrut3 has much less variability that Crutem3 which, of course, is no big surprise. Eli, I fear you are in error about the GISTEMP datasets. GISTEMP has two datasets, one of which includes SST, and one of which does not. The 1988 Hansen graph uses the dataset which includes the SST, which is why it correlates well with the HadCRUT3 dataset. In short, the HadCRUT3 dataset is the proper one for comparison. I have used both the GISTEMP + SST and the HadCRUT3 datasets in my analysis, since they are quite similar, and which one you choose doesn’t make much difference to the outomes or the statistics. 92. Sorry for the double post, I posted this in the wrong thread first: #89 I think a good part of temperature variation cannot be explained with simple linear models as a result of the combined forcing agents, solar, volcanic and CO2. I believe the dynamics of the earth induce randomness in the earth climate system. I’ve tried doing regression with a simultaneous identification of the noise here: I don’t think the results are that different from stand multiple regression for the estimates of the deterministic coefficients but it does show that the system can be fit very well, to an ARMA model plus a colored noise input. Regardless of what regression technique you use a large part of the temperature variance is not explained by the standard three forcing agents alone. Possible other forcing agents (sources of noise) could be convection, evaporation, clouds, jet streams and ocean currents. 93. 
Oh, gosh and somehow I messed up my html twice in a row: 94. #63. Willis, could you provide a desciption of the exact methods used to align HadCRUT3 and GISS and the exact url’s for the data sets that you used. I’ll update to the post to add in this as well as the data source. Also in the original article when Hansen started out in 1958, what was his explanation for the starting value. 95. Here is an idea of how good the above fit is: The fit uses an AR (4,4) model for each input. The standard deviation of the residual y-Ax-S shows how close the fit is with the estimated noise, while the standard deviation of residual y-Ax shows how good the deterministic part of the fit is. In this case the estimation of the noise did not effect the deterministic fit much because there were enough measurements for the noise and the deterministic output to be nearly orthogonal. We see that around 60% of the standard deviation is due to noise with an AR(4,4) fit. Looking at the above plot for a standard regression fit AR(1,0) we see that there is much more noise then signal. I suspect the same results with a standard regression fit for AR(4,4) because in the algorithm I used the first iteration only fits a small part of the noise and thus should not significantly effect the regression parameters. If I had of used less measurements though I would expect a difference between my algorithm and the standard regression fit. I’ll compare that 96. To Eli: I fundamentally disagree that the last 200 years is in any way crucial to the discussion, or rather, it is crucial only because it covers the entirety of the era for which we have approximately reliable thermometers and hence have any real (non-proxied) opportunity for a discussion. However, if one grants only one thing — that we do not quantitatively understand the dynamics of global climate from first principles, period — then everything else is reduced to curve fitting and models. I’m far from an expert on solar dynamics, but I know a fair bit about curve fitting and models where one has no real quantitative basis for the model forms in use. It is clear that there are fluctuations in global temperature with very, very long time scales compared to a year, or even 200 years. Very significant fluctuations. Furthermore, there are fluctuations on a time scale of hundreds of years evident in various proxy data that stretch our knowledge of the EXISTENCE of these fluctuations (but not their magnitude, which is inferrable only from e.g. tree ring proxies and the like via extrapolation subject to many unprovable assumptions). Depending on whether you “favor” Mann et. al. or McIntyre and McKitrick alone, there is or is not anything vaguely approximating a temperature anomaly to consider, let alone one that could be forced by human sources of greenhouse gases. Are bristlecone pines a good or bad proxy? I couldn’t say, but it is pretty clear that Mann et. al. did a lousy, sloppy job of extrapolating their proxy data and that (in my opinion, based on reading much of the debate) it is almost certainly true that it was as warm or warmer seven hundred years ago as it is today. There are similarly questions about the contemporary data that is used to develop the proxies in the first place. Since both sides in the issue (with the exception of e.g. 
M&M) seem to have abandoned all pretext of real scientific objectivity and refuse to open up the process of just how to compute global average temperatures anyway to a public scientific debate, there is basically an unknown source of noise, very likely (politically) biased noise, superimposed on the data being used to fuel the public political debate. These factors make it entirely possible that ALL attempts to fit the short time scale data are ignoring long time scale or chaotic dynamic factors that in fact dominate but are omitted from all models. None of the models have a plausible (or if you prefer, verifiable) explanation for the causes of ice ages, for example, or for a geologic period where (from observed peak ocean levels recorded by various proxies and so forth) it was even warmer (globally) than it is today. We don’t know how to fit the signal. We don’t even know how to separate signal from noise. We KNOW that the signal has contributions of more or less unknown strength from many time scales. And yet people are asserting that their models are valid on the basis of short time scale fits with many, many adjustable parameters. Even atmospheric CO_2 concentrations are not particularly well understood, from what I’ve been able to tell. For one thing, instead of increasing monotonically ONLY in the last 200 years, there is proxy data derived from multiple sources and as far as I know not challenged that asserts that global CO_2 levels have varied on a geological scale almost precisely with VERY coarse grained temperature, right through the last ice age, to a point back in the last warm interlude when they were as high as they are today. And (I would guess) it was pretty much as warm as it is today. And (I would further guess) that the pre-humans of those days were not affecting CO_2 levels much. One perfectly reasonable explanation is that CO_2 levels are forced by global temperature, which in turn is forced by solar dynamics on a very long time scale — possibly even true solar variability due to some serious physics going on in the core that people are just beginning to be able to understand or model or predict — as I presume that no one will seriously suggest that CO_2 levels force solar activity as an alternative. This is equally evident in shorter time scale CO_2 fluctuations from the recent past — there are clear trend correlations with temperature (IIRC, I don’t have time right now to dig for the figures I’ve seen in past digs). My primary conclusion isn’t that human forced global warming is or isn’t true. It is that it is absurd to claim that we can even THINK of answering the question at this point in time, and that there exists substantial evidence to the contrary, with solar dynamics known and unknown being a very plausible contender as primary agent for global climate with CO_2 and other greenhouse gases quite possibly being driven by it (with positive feedback effects, sure) rather than the other way around. Or it might be something else — tidal effects, magnetic effects — there are lots of sources of free energy out there and we don’t have a very accurate understanding of how they interact with the very complex system that produces weather, let alone climate. 97. Which, by the way, agrees qualitatively with John’s observation that he can fit the recent data decently by a multiparameter model with significant “noise” that can be interpreted as “missing dynamics in the model. 
Indeed, it extends it as some of the source of noise could have very long timescales compared to 500 or so years. I do have a question about the fit. Some of the sources of noise one might expect to be driven by global temperatures and hence possess some covariance with model parameters. If (say) cloud coverage was related to CO_2 concentrations via the possibly multi-timescale lagged effect of the former on temperature and hence oceanic evaporation, there are opportunities galore for chaotic (nonlinear) mesoscale fluctuations. Ditto, by the way, if (say) solar dynamics drives temperature drives CO_2 concentrations which feeds back onto temperature which feeds back onto cloud coverage which affects the temperature and maybe the rate of CO_2 being scrubbed into e.g. the oceanic reservoir… one would expect a highly non-Markovian description of the actual true energy dynamics. Would your model account for any portion of that sort of dependent covariance, or would it only account for effects with a straightforward (e.g. linear or near linear) non-delayed effect on temperature? Outside of the noise, I mean. The entire existence of ice ages seems to suggest that either the Sun has serious long period variability (sufficient to significantly lower global temperatures for hundreds of thousands of years stably) or there exist multiple attractors in a chaotic model driven by a more uniform heat source, with transitions that are perhaps initiated by fluctuations of one sort or another to different stable modes. I know there are people looking at this, but I don’t know that they’ve gotten good answers. A chaotic model might well exhibit “interesting” behavior on a whole spectrum of time scales, though, as various parts of the dynamic cycle undergo epicyclic oscillations. 98. Steve M, you ask: #63. Willis, could you provide a desciption of the exact methods used to align HadCRUT3 and GISS and the exact url’s for the data sets that you used. I’ll update to the post to add in this as well as the data source. Also in the original article when Hansen started out in 1958, what was his explanation for the starting value. The HadCRUT3 dataset is from http://hadobs.metoffice.com/hadcrut3/diagnostics/global/nh+sh/monthly The GISTEMP dataset is from http://data.giss.nasa.gov/gistemp/tabledata/GLB.Ts+dSST.txt I don’t know how Hansen picked the 1958 starting value. I aligned the two temperature datasets by adding the same amount to each datapoint, in a manner such that their 1958 values matched the 1958 values for the scenarios. 99. #98 since the original model prediction work was done in 1988, he presumably did a 30 year run on the models, thus the 1958 start. 100. #97 In my identification all inputs including the noise have the same system poles. The system poles could be due to various feedbacks like cloud cover or CO_2 feedback. The noise which is identified is as a consequence of state changes in the poles that cannot be explained by external inputs. You could predicted outside the region of fit for a limited range based on the system dynamics given passed inputs and passed noise estimates. As for ice age effects I’ve read something about the ice ages correlating with the precession of the earth. I am not completely sure how it works. I am not sure if colder winters and warmer summers causes more ice or warmer winters and cooler summers causes more ice. 101. #97 Robert, you might want to google Demetris Koutsoyiannis (or look )here) on long-memory (LTP) processes and climate. 
Koutsoyiannis has given a lot of thought to this problem. 102. Robert G. Brown: My primary conclusion isn’t that human forced global warming is or isn’t true. It is that it is absurd to claim that we can even THINK of answering the question at this point in time, and that there exists substantial evidence to the contrary, with solar dynamics known and unknown being a very plausible contender as primary agent for global climate with CO_2 and other greenhouse gases quite possibly being driven by it (with positive feedback effects, sure) rather than the other way around. Or it might be something else “¢’ ¬? tidal effects, magnetic effects “¢’ ¬? there are lots of sources of free energy out there and we don’t have a very accurate understanding of how they interact with the very complex system that produces weather, let alone climate. That’s exactly my view, but the problem is that in climate science to posit such a fundamental lack of knowledge is called "denialism" rather than "ignorance". We are dealing with people like James Hansen who have spent their entire scientific careers on a single hypothesis and nothing, but nothing will stand in the way of that. This means of course, that the Greenhouse hypothesis as expressed in these "scenarios" is immune to falsification. To my mind, to attempt to model climate down to one variable (temperature) or two (temperature and precipitation) from hundreds of thousands of poorly controlled and poorly understood parameters is not simply overfitting but wishful thinking on a grand scale using computers. 103. #96 and #102 Robert and JohnA: I share your pessimism about climate modeling (poor data; uncertain physics; shoddy mathematical and statistical methods; etc.). Yet, I’m not sure I would disparage models that reduce “climate down to one variable.” Doesn’t it depend on what you’re trying to accomplish? For example, imagine a model for temperature combining some kind of (log)linear deterministic (physically based — get the physicists involved) predictor related to CO2 (if that is what we are interested in testing) with a realistic stochastic component for natural variability (I would look to the statisticians or Koutsoyiannis for this). No more than 3 fitted parameters altogether. Such parsimonious models are hard to build and easily “falsified” (everyone knows “all models are wrong,” and with small-dimension models you can easily see it), but they are much easier to interpret and understand. Though naive, such models might provide real insight. Alternatively, they might demonstrate that, given the complexity of background variability, we aren’t going to be able to say much. That would be interesting, too. Whatever. Just an early-morning thought; likely it has already been tried… 104. #96 Robert G Brown I have tried for years to say what you have said here so succinctly, especially in the last sentence. It is hard when you are inside the debate to keep perspective, especially when computer models are so glorified and have been so dominant. It is also complicated by the strident and constant noise generated to distract rather than clarify as you will have already noticed on this site. (The dictum that we can disagree but not be disagreeable is too often replaced by the belief that being disagreeable is tantamont to disagreeing.) It is refreshing to have someone from outside summarize and speak directly to the major inconsistencies and problems. Thank you. 105. #103. TAC – I agree entirely. I’d love to see some more articulated 1D and 2D models. 
While I can’t prove it mathematically, my entire instinct is that, if the metric is NH (or global) average temperature, there is a latent 1D model that approximates the 3D model to negligible difference. The 1D “explanations” of AGW are interesting and I wish that more effort was spent in explaining and expanding them. The only exposition of how AGW works in all three (four) IPCC reports is a short section in IPCC 2AR responding to skeptic Jack Barrett, arguing that increased CO2 will lead to absorption and re-radiation in weak lines at higher (colder) altitudes. This is the line of reasoning in radiative-convective models (which would be interesting to consider from a calculus of variations approach for someone with requisite skills). Some of Ramanathan’s articles from the 1970s and 1980s are quite illuminating and, if I ever get to it, I should post up some comments on them. Many of them are now online at AMS Journal of Climate. 106. Re: #103 Such parsimonious models are hard to build and easily “falsified” (everyone knows “all models are wrong,” and with small-dimension models you can easily see it) Just a minor clarification. By “falsification” I assume you mean “invalidation”. Because invalidation of a model is equivalent to falsification of a hypothesis if and only if the model adequately characterizes the hypothesis one is intending to test. That is the question: do simple models give a good and fair test, or are they so easily invalidated that the invalidation does not constitute a refutation? And this is the problem, I imagine, with pasimonious models: they may be adequate to respresent some aspects of climate theory, but not others. I am no expert, but I think the reason GCMs have grown in importance is because the more parsimonious models fail to include some of the local (i.e. non-global) processes (such as feedback proceses)that the vast majority of climate scientists feel are important. e.g. Water vapour cycling is not a global process. Re: #96,104 If there is uncertainty in the science, then does this completely erode the much-sought-after “consensus” on AGW? Or does it just weaken it somewhat? 107. To TAC #97, you are absolutely correct. The preprint linked to this site was a pleasure to read, and contained absolutely precisely the figures I was visualizing and most of the relevant remarks I sought to make, but with full detail. To bring it back to this thread (and indeed this entire site): Figures 1 and 4 of this preprint say it all. They should be required viewing for anybody who wants to even THINK of building a predictive model for this problem. Figure 4, in particular, is relevant to the M&M debate vs Mann, which in turn is ABSOLUTELY relevant to the Hansen graph even if one assumes that the “global temperatures” portrayed therein and being fit are actually meaningful instead of the method-error laden, biased garbage that they almost certainly are. Fitting in any short “window” onto the geologic time-temperature series leaves one absolutely, cosmically blind to underlying functional behavior of longer time scales. The only way one has even a hope of extracting meaningful longer-time behavior from such a fit is one knows PRECISELY what the actual underlying functional form is that one must fit to, and only then if that form possesses certain useful properties, like projective orthogonality. Without wanting to review all of functional analysis or fourier transforms or the like to prove this — it should be obvious or else you shouldn’t be participating in this debate. 
However there are some very simple statistical measures one can use to at least characterize things, positing a state of utter ignorance about the functional behavior and viewing the entire temperature series as “just a bunch of data”. For example, all the statistical moments (cumulants) — mean, variance, skew, kurtosis. Thus the importance of M&M vs Mann et al. If the temperature around the 1300′s was in fact as warm as it is today, the scientific debate about “global warming” as anything but speculative fiction is over. Period. It relies on the temperatures we observe being “unusual”. In a purely statistical sense, according to my good friend the central limit theorem, that means “unlikely to be observed in a random iid sample drawn from the distribution” where in turn “unlikely” is a matter of taste — a p-value of 0.05 might suffice for some, even though of course it occurs one year in 20, others might want it to be less than 0.01 (although that happens too). However, if two extensive excursions, lasting decades or more, of warm weather like we are currently experiencing, are observed in a mere seven hundred year sample, I don’t even need to do a computation to know that the this isn’t a 0.01 event on a millenial-plus scale. More like a 0.1 or 0.2 event. It also utterly, cosmically confounds attempts to fit human-generated CO_2 as THE primary controlling variable in temperature excursions from the mean, as human generated CO_2 is surely a monotonic function on the entire interval of human history. The sun, however, remains not only a plausible causal agent as the primary determinant in the observed variation (given that it IS, after all, the most significant source of free energy for the planet, dwarfing all other sources except possibly tidal heating due to the moon and radioactive heating of the earth’s core, and MAYBE cosmic rays (don’t know what the total power flux is due to cosmic rays but I’ll guess that it equals or exceeds the release of non-solar free energy sources by humans by a few orders of magnitude), averaged over the entire 4\pi solid angle… Besides, a number of measures of solar activity are strongly correlated with global temperature over at least a couple of thousand years, as best as proxies (including plain old history books) can tell us. Here the problem is complicated tremendously by the silliness of what one expects out of the fits and the massaging of the data that everybody does to perform them (effectively ignoring the stochastic richness of our ignorance of all causes and delayed differential effects). First of all, one needs to make it clear that we Do Not Understand Mr. Sun. We are working on it, sure, but in addition to considerable ignorance concerning the actual magnetohydrodynamics models being proposed (that all have to be validated with horribly incomplete information about interior state — initial conditions, if you will — from observations primarily of exterior state, where it is KNOWN that there are very long time scales to consider — as in a major fluctuation that arrives at the surface to significantly alter solar irradiance and this and that might have actually occurred a rather long time ago). Then we don’t understand all aspects of how the sun transmits energy to the planet or affects the rate and ways energy is absorbed or retained except via ordinary E&M radiation, maybe. 
Tidal heating, magnetic heating, cosmic rays and charge particles affecting cloud formation which in turn alters the Earth’s mean albedo (in a way many orders of magnitude more significant than CO_2 — clouds are a hell of a greenhouse “gas”). We do know that the Sun orbits the center of mass of the solar system, and that its orbit is highly irregular. We do know that there appears to be a transfer effect between orbital angular momentum and the sun’s visible spin, and that altering the latter is a process involving truly stupendous amounts of energy with the consequent release of heat. We know that the Sun does all sorts of strange magnetic things as it proceeds through these irregularly spaced (but predictable) orbital events. We know that they are correlated with, but not absolutely predictive of, things like sunspot count and interval. And we know that these things are correlated very strongly with events in the Earth’s global temperature series, not just over the last 200 years where we have something approximating temperature readings but over the entire range of times where we can deduce temperatures via any sort of proxies. It is equally obvious that it is one of MANY parametric inputs (the obviously dominant one, in my opinion) and that our ignorance of many of a near infinity of these inputs (fine grained, they go down to that damnable butterfly in Brazil, after all) requires that a HALFWAY believably model be something like a langevin equation (solved via a fokker-plank equation approach, maybe), if not a full-blown non-Markovian, stochastic, integrodifferential equation. Quasi-linear parametric models (“if CO_2 goes up, global temperatures go up”) are a joke, not serious science. But don’t mind me, this is just one of the things I “do” in other contexts where results are easily falsifiable and where we actually have a handle on the microscopic dynamics. One other thing I do is neural networks, which in the context of problems like this can be viewed as generalized nonlinear function approximators. The result of building a neural network is of course anathema to most model builders, who want to trumpet things like “This parameter is causing this output to happen the way that it is”. A neural network utterly obscures the true functional relationships it discovers. It also can handle input covariance transparently, and is perfectly happy in a high dimensional multivariate problem describing non-separable input relationships (ones that cannot be expressed as an outer product of independent functions of those variables, ones that e.g. encode an exclusive or or more complex relationship for example). They are, however, useful in two ways to the unbiased, even for addressing problems like this. For example, it would be interesting to try training a NN to predict “this year’s mean temperature” from (say) the last twenty-five years’ sunspot data, the last twenty-five years global temperature data (to give it a mechanism for doing the integro-differential non-markovian part of the solution) and with an input for some measure of the total volcanic activity for the last twenty five years. Nothing else. Train it over selected subsets of the last 250 years’ data (it needs to include examples of the bounded variance of the inputs and outputs. Then apply it and see how it does. One can then compare the performance of such a network to one built e.g. only with sunspots as input (no non-Markovian inputs). Only with volcanos. With sunspots and volcanos. Only with CO_2. With CO_2 and temperature. 
Etc. Abstracting information from this process is tedious, but it does not depend on any particular assumptions concerning functional dependence of the inputs. If they are useful, the network will use them. If not, it will ignore them. If they are strongly covariant with other inputs, it won’t matter. Even so, one can sometimes obtain very interesting information about the underlying process dependencies. Oh, and one will likely end up with a quantitative model that is more accurate a predictor than ANY existing model is today by far, but that is besides the point. In fact, if the markovian model proved accurate (and I’ll bet it would:-) one could use the network recursively with simulated inputs drawn from the expected distribution of e.g. sunspots and get an actual prediction of global future temperatures some years in the future that might even accurately describe the hidden feedback stabilization mechanisms that are obviously absent from quasi-linear (monotonic and non-delayed) models. To John A — I also completely agree, especially about the wishful thinking aspect of things. The above is the merest outline of how one might actually attack the problem in a way that didn’t necessary beg all sorts of questions. It still leaves one with the very substantial problem of the base data one attempts to fit — the temperature series one tries to fit is a patchwork of changing methodologies, technologies, locations, and worse with urbanization alone providing systematic errors that are corrected for more or less arbitrarily (so that what one measures depends on who is doing the correcting). The patches don’t even fit together well where they overlap — there are weather stations all over the planet that show no statistically significant global warming EVEN in the last decade and current measures of global temperature are either being “renormalized” by individuals with a stake in the discussion or are being based with extraordinary weight given to parts of the world where there IS NO reliable data that covers hundreds of years to which current measurements can be compared (and where it is easy to argue to the current data itself isn’t terribly reliable). GIGO. The very first problem one has to address, unless/until one agrees to use only satellite data and radiosonde data and dump surface measurements altogether as the load of crap they probably are. That still leaves one with accurately normalizing to the scale of the RELIABLE measurements we do have that stretch back over 250 years, but that is a more manageable problem. In the meantime, I find it overwhelmingly amusing, in a sick sort of way, that Time Magazine published in one month a cover story on the sun that clearly describes the immense complexity of solar dynamics, discusses the high probability that things like its irradiance and magnetic field and solar wind are significant contributers to things like earthbound weather and how important it is to understand them, and then JUST A MONTH OR THREE LATER publishes a cover story on global warming that utterly ignores the sun! No mention of Maunder minima, no glimpse of the Medieval Optimum, no look at temperature varation on a truly geological time scale — just “it is now a known and accepted scientific FACT that human generated CO_2 is about to destroy the planet as we know it”. The greatest tragedy associated with this is that the people betting on this horse had damn well be right. 
If (for example) the current sunspot minimum is in fact the edge of another Maunder minimum (as the solar theories seem to suggest and we’re about to enter a downturn that lasts some thirty years, it will (rightfully) damage the credibility of all scientists, everywhere, for the next fifty years. Chicken Little on a grand scale. As a scientist, I find the sheer probability of this very disturbing, as most ordinary citizens are utterly incapable of differentiating a scientist being enthusiastic about a pet theory from Darwin or Newton, where the “pet theory” is falsifiable and so overwhelming validated by observation that nobody sane expects it to ever be proven false. 108. Sorry, I’m not totally together this morning. I meant that the “non-Markovian” neural network would likely prove very predictive in this problem, not the “Markovian” one (which would not include any delayed data). I also failed to point out the virtue of including precisely 25 years worth of historical inputs on both temperature and sunspots to the NN. Current models all rely on “smoothing” at least e.g. sunspot data over some preselected interval, and often smooth temperature as well. This eliminates some obvious stochastic noise that would be very difficult to fit, but it also makes the resulting model depend on the particular interval one smooths over, and (in some cases) whether the smoothing includes FUTURE data as well as PAST (kind of tough to built a deterministic model that way, huh). Somehow nobody seems to ever do a comparative study of just how many years one SHOULD smooth over or why they choose “ten years” instead of “two years” or “fifty years”. Tough call, given that good old problem with time scales of unknown length and projective orthogonality (how much of figure 1 DOES one need to be able to extract an accurate fourier component to this pure sine function plus noise, hmmm). By including as inputs all the actual numbers over the last 25 years, one eliminates having to think about this in a way that biases any particular window, at least for projective components determinable within the window. If the NN decides it wants to smooth linearly exactly the last eight years, it will learn to ignore the previous seventeen. If it wants to “smooth” by morphing all the inputs through a nonlinear function with support from both the last five years and a second window of five years centered eleven years or twenty two years earlier, it has the data to do so over at least two full cycles. This also is wide enough to permit a physically expected lag of some years between a change in WHATEVER mechanism in solar energy transfer or loss inherent in the nonlinear function the network discovers and the resultant changes in NH temperatures. The earth has a certain “thermal inertia”, just like my house, and setting the thermostat up or down (especially to try to bring about a large change) doesn’t happen right away, especially when the earth has all sorts of “windows” it can open to let energy in or out while the AC or furnace is running. The other brainless thing I did was attribute the solar and global warming articles to time magazine. I meant National Geographic, sorry, specifically July 2004 and September 2004. Pretty amazing, really. 109. Re #107: On the utility/futility of neural networks. The result of building a neural network is of course anathema to most model builders, who want to trumpet things like “This parameter is causing this output to happen the way that it is”. 
Now why do you think that people would like to understand the relationship between input and output? A neural network utterly obscures the true functional relationships it discovers. It also can handle input covariance transparently, and is perfectly happy in a high dimensional multivariate problem describing non-separable input relationships (ones that cannot be expressed as an outer product of independent functions of those variables, ones that e.g. encode an exclusive or or more complex relationship for example). This sounds like a research proposal. Are you sure you’re being objective here? one will likely end up with a quantitative model that is more accurate a predictor than ANY existing model is today by far, but that is besides the point Neural network models are overfit models. They are as likely to fail in an out-of-sample validation test as any other overfit model. No? 110. Dano/Bloom, What do you have to say about Dr. Brown’s posts? I find it very refreshing when someone knowledgable puts things into perspective. That’s why I love this blog. 111. Bender, “This sounds like a research proposal. Are you sure you’re being objective here?” talking about Dr. Brown’s post. If true this puts Brown into the same position as all the grant funded climate scientist. Bloom/Dano are constantly telling us we should trust grant funded scientist because they are objective – receive no FF monies. 112. “As for ice age effects I’ve read something about the ice ages correlating with the precession of the earth.” I think you’re referring to Milankovich(vic?) I believe it is highly correlated to the recent glacial-interglacial periods but doesn’t help too much in explaining ice age-non ice age periods. Sorry no link, just my memory. 113. Re #111 He’s criticizing the traditional approach to climate modeling. I want to know what kind of weapons he’s packing. Could be interesting. I say ‘leave the trolls out of this’. Robert and JohnA: I share your pessimism about climate modeling (poor data; uncertain physics; shoddy mathematical and statistical methods; etc.). Yet, I’m not sure I would disparage models that reduce “climate down to one variable.” Doesn’t it depend on what you’re trying to accomplish? For example, imagine a model for temperature combining some kind of (log)linear deterministic (physically based “¢’ ¬? get the physicists involved) predictor related to CO2 (if that is what we are interested in testing) with a realistic stochastic component for natural variability (I would look to the statisticians or Koutsoyiannis for this). No more than 3 fitted parameters altogether. The only problem is that with three or more degrees of freedom, chaos ensues. 115. There has been some discussion here how averaging effects the model fit. Averaging can be corrected for in a limited way. For instance in audio systems speakers use sin(x)/x compensation to compensate for the distortion as a result of the sampling process. In the case of temperature averaging cause a more serious problem because the total power dissipated from the earth is proportional to the forth power of the temperature yet we average temperature and not the forth power of the temperature. To compensate for this fact I suggest that one of the forcing terms be -^ The regression coefficient for this term should be roughly proportional to the third power of temperature and we expect close agreement with the equations for black body radiation. 
I also suggest that we try to quantity other forcing terms like thermal transfer of energy though convection and evaporation. I wonder if any of these forcing components can be estimated though satellite data. 116. RGB – nice to have you posting. Very interesting comments. I think that I’ll put some threads on solar topics. TAC or RGB – would either of you like to do a post on Demetris’ article linked in 101 clipping out the key Figure? If so, email it to me or post it at Road Map and I’ll transfer. 117. Re: #101 Anyone interested in red noise processes and scaling of uncertainty should read papers on 1/f noise as well. 118. Re 114: The only problem is that with three or more degrees of freedom, chaos ensues Sure, Lorenz (1963). Of course it is possible that the chaos that emerges at one time-space-scale is embedded in higher-order patterns – non-chatic patterns – that emerge at larger time-space-scales. No? 119. Bender: I read Baum’s book What is Thought? and he describes a theorem by Vapnik and somebody else (whose name regrettably slips my mind) that precisely describes the fitting/overfitting behavior of NNs in a binary classification context. The basic idea, loosely, seems to be that if a) the data generating process is stationary and b) the NN is trained on historical samples, then c) the NN’s predictive power is inversely related to the number of free parameters the training has searched over. This is all defined in a precise mathematical way, but that’s the gist of it–if you find a good fit with one or two parameters, then it’s likely to work out of sample, but if you splined the data, there’s no reason to think your fit is going to be good on the next instance. That said, I also recall reading that NNs are no panacea in that training and convergence are often painfully slow and ineffective if the problem structure isn’t “friendly” to the NN’s architecture. It’s probably worth a try, though. 120. RGB, Cosmic rays presumably act on cloud formation, so there’s a huge amplifying mechanism there. No need for a lot of energy. Cosmic rays are in antiphase with the sun’s activity, since the sun’s magnetic field shields us from them. On the other hand, there is mounting evidence that the recent warming can be accounted for by a decreasing albedo over the past 25 years, apparently due to a lack of clouds, especially low-level clouds. We now have satellites observing cloud cover, so we have about 20 years of data. The radiation budget (incoming short wave radiation minus outgoing long wave radiation )has been observed to be much more variable over decadal time scales than is predicted by any model. However, the decreasing albedo trend has apparently reversed over the past 2-3 years. Interestingly, the ocean temperature has also started to drop at the same time, and has lost 20% of the heat it had accumulated over the past 20 years. None of these observations has been predicted by the models. But that’s not surprising since cloud formation is the one thing we don’t understand, and it has to be fully parametrized in the models, with more or less guess values. There is clearly a picture emerging from all this that the sun’s activity, and its direct and indirect effects, are the main driver of the climate dynamics. It could very well be that in the end, we find that CO2 has a very small role in all this. Question for Steve M. and John A.: Is there a directory somewhere where one could upload interesting papers, so that we can reference them in our posts and we are certain to find them? 
I download a lot of papers, but don’t always keep the url where I got them, so whenever I want to post a link, I have to search for them again, and sometimes they’re not there any more, and in any case it’s quite time consuming. I also have papers that were sent to me directly by the authors, and are not available on the web. You could build a bank of relevant papers for the blog. 121. #119 I doubt that the Neural Networks predictive power is always inversely proportional to the number of free parameters. There should be an optimal number of parameters. I’ve tried doing noise removal before in speech with a predictive MA filter and what I found was that the number of parameters that you need it had to be sufficient long enough to describe at least one or two cycles of the vowel frequency. 122. Francois, there may be better choices, but I just started using Google Notebook. It may be a very useful solution. Check it out here 123. #120. Francois, esnips.com (used by jae) has free storage that would be a good place to upload papers. I’m thinking of using it for pdf storage. 124. #120 you may be right. Perhaps cosmic rays are the biggest drivers of temperature changes because of the reaction in the atmosphere that creates clouds. If you look at figure 12 on: You see the plot of C14 anomalies looks like the mirror image of the low frequency information of what we believe the climate looked like for the last 1000 years. C14 is created by the interaction of the cosmic rays with the atmosphere. The paper also shows a very strong relationship between cosmic radiation and the amount of cloud cover. Unfortunately we don’t have data going back very far. Maybe the C14 data could be used to fill in the past data where the records are missing for cosmic rays and cloud cover. More papers on cosmic radiation and cloud cover can be found here: 125. I’m looking at the instrumental temperature record from 1600 to 2000 and I’m looking at the C14 over that same period and I can’t help but think that the C14 which is an indicated of cosmic rays, fits the temperature data much better then solar, greenhouse gases and volcanic eruptions. I wonder if the other drivers are significant at all. Unfortunately the C14 does not explain the high frequency data so I am still left to wonder what cause this. Perhaps the high frequency data has to do with when the clouds were formed. If the clouds were formed evenly throughout the year there would be less cooling I think then if it was concentrated within a shorter period of time. Or perhaps it has something to do with the global distribution of cloud cover. More clouds at the equator would have a greater cooling effect then at the poles I think. There is also the altitude distribution of clouds. I confess I haven’t read the paper yet. 126. #96 — “My primary conclusion isn’t that human forced global warming is or isn’t true. It is that it is absurd to claim that we can even THINK of answering the question at this point in time, and that there exists substantial evidence to the contrary, with solar dynamics known and unknown being a very plausible contender as primary agent for global climate…” Virtually every scientist who has posted a considered ‘global’ opinion of the state of climate science here, with the exception of John Hunter, has offered a similar view. 127. Re #107 Could someone refer me to the preprint that rgb is discussing? 128. 
Here is a plot of Willis' data #63: http://opelinjection.free.fr/imagesforum/gw_models.jpg HadCrudt3 and GISS provide the same temps over the last decades, with the notable exceptions of 1998 (highest temp recorded for HadCrudt3) and 2005 (highest temp recorded for GISS). A new puzzle for climate science. 129. #127 I think the preprint you refer to is one of Koutsoyiannis's latest papers (here). It was cited in #101. I (and/or RGB) will likely be sending SteveM a post on it. 130. I also like this paper (here or here) on the difficulty of determining trend significance in the presence of possible long-term persistence. 131. Just a comment on the Hansen graph. In his article Hansen writes that Scenarios B and C also included occasional large volcanic eruptions. This would suggest that the plots of B and C are 'lower' than they might have been (without volcanos). The last major eruption was in 1991 (Pinatubo), which according to NASA was responsible for a temperature drop of up to 0.5 deg C and affected global temps for up to 3 years. If Hansen were to re-run his model with just the 1991 volcanic eruption, the observed temperatures would be running well below both scenarios B & C. On the discrepancies between GISS and HADCRUT: Tim Lambert is correct about the different anomaly periods, but the 'discrepancy' seems to be growing (it should be reasonably constant). I think it may be due to the way the ocean temperatures are measured. 132. Re. #120 and #124 Here's another interesting paper (abstract) that provides very strong evidence (IMHO) that GCRs modulate cloud cover. 133. OK, I should, of course, be doing something like actual work but this is more fun so I'll see what I can do about the questions/suggestions above. a) Currently I am completely unfunded by federal money, and while I have been so funded in the past, the agency (ARO) that funded me could give a rodent's furry behind for the entire weather debate one way or the other. In fact, I'd say that is true for pretty much all physics funding. Physics has its own problems with politicization of the funding process, but thankfully they are vastly smaller than Climate. So ain't nobody holding a gun to my head here, I have no vested economic interest in this discussion, and although I have no way of "proving" this I am an ethical person — an ex-Boy Scout, a university professor and student advisor, a Beowulf computing guy, open source fanatic, husband, father, and have hardly ever been accused of crimes major or minor over the years. I did get caught driving with an expired registration once, if that counts. So: "Everything I state in this discussion is my actual, unpaid-for opinion based on doing half-assed web-based research and applying whatever I have learned for better or worse in 30 years or so of doing and teaching theoretical physics, advanced computation, statistical mechanical simulation, and in the course of starting up a company that does predictive modelling with neural networks (currently defunct, so no that is not a vested interest either)." Good enough disclaimer? b) Re: Neural networks. I have fairly extensive experience with neural networks (having written a very advanced one that uses a whole bunch of stuff derived from physics/stat mech to accelerate the optimization process for problems of very high input dimensionality). As I said, one of the best ways of viewing a NN is as a generalized multivariate nonlinear function approximator.
In commercial predictive modelling, the multivariate nonlinear function one is attempting to model is the probability distribution function, usually used as a binary classification tool (will he or won't he, e.g., purchase the following product if an offer is made, based on demographic inputs). However, NNs can equally well be trained to just plain model nonlinear complex (in the Santa Fe Institute sense, not complex number sense, though of course they can do that as well) functions presented with noisy training data pulled from that function, not necessarily in an iid sense, since the support of a 100-dimensional function may well live in a teeny weeny subvolume and there isn't enough time in the universe to actually meaningfully sample it (even with only binary inputs, let alone real number inputs). For very high dimensional functions with unknown analytic structure, I personally think that NNs are pretty much the only game in town. Using anything else (except possibly Parzen-Bayes networks or various projective trees for certain classes of problems) restricts the result to preordained projective subspaces of the actual problem space. The process of projection, especially onto separable/orthogonal multivariate bases, can completely erase the significant multivariate features in the data, hiding key relationships and costing you accuracy. NNs have a variety of "interesting" properties (in the sense of the Chinese curse:-). For one, constructing one with a "canned" program (even a commercial canned program) is likely to fail immediately for a novice user, leaving them with the impression that they don't work. Building a successful NN for a difficult problem (one that really DOES have high dimensionality and nontrivial internal correlations between input degrees of freedom and the output desired) is as much art as it is science, simply because the construction process involves solving an NP-complete optimization problem and one needs to bring heavy guns to bear on it to have good success. Training takes as long as days or weeks, not minutes, and may require trying several different structures of network to succeed, facts that elude a lot of casual appliers of canned NNs. Having the source to the NN in question is an excellent idea, because one may need to actually use human intuition, insight, and a devilishly deep understanding of "how it all works" to rebuild the NN on the spot at the source code level to manage certain problems. NNs also have a built-in "Heisenberg"-like process that resembles in some ways the quantum physics one (which is based on vector identities in a functional linear vector space or the properties of the Fourier transform that links position and momentum descriptions, as you prefer). If one builds a network with "too much power", the NN overfits the data, effectively "memorizing specific instances" from the training set and using its internal structure to identify them, but then it does a poor job of interpolating or extrapolating. If one builds a network with too little "power" (too few hidden-layer neurons, basically) then the network cannot encode all of the nonlinear structures that actually contribute to the solution and it again fails to reach its extrapolatory/interpolatory optimum.
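To make the capacity tradeoff just described concrete, here is a minimal sketch using scikit-learn's generic MLPRegressor (an illustrative stand-in, not the genetically optimized network rgb describes): the same feed-forward architecture, trained on the same forty noisy samples of a nonlinear target, with too little, moderate, and far too much "power" in the hidden layer.

```python
# Minimal sketch of the over/under-fitting tradeoff for a feed-forward net.
# Generic scikit-learn example; nothing here is climate data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
x_train = rng.uniform(-3, 3, size=(40, 1))            # few, noisy training points
y_train = np.sin(2 * x_train[:, 0]) + 0.3 * rng.normal(size=40)
x_test = np.linspace(-3, 3, 400).reshape(-1, 1)       # dense, noise-free test grid
y_test = np.sin(2 * x_test[:, 0])

for hidden in (2, 20, 400):                            # too little, moderate, too much power
    net = MLPRegressor(hidden_layer_sizes=(hidden,), activation="tanh",
                       max_iter=20000, tol=1e-6, random_state=1)
    net.fit(x_train, y_train)
    train_mse = mean_squared_error(y_train, net.predict(x_train))
    test_mse = mean_squared_error(y_test, net.predict(x_test))
    print(f"hidden={hidden:4d}  train MSE={train_mse:.3f}  test MSE={test_mse:.3f}")
```

The particular numbers will vary with the seed and training settings; the point is the shape of the result, with training error falling as capacity grows while test error typically bottoms out somewhere in between.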
So in addition to building a NN (solving a rather large optimization problem in and of itself) one has to also optimize the structure of the NN itself in terms of the number of hidden-layer neurons, precisely what inputs to use from a potentially huge set of input variables (building a NN with more than order 100 inputs starts to get very sketchy even with a cluster, even with NNs' significant scaling advantage in searching/sampling the available space, even for my program), and then there are a number of other control variables and possibilities for customized structure to optimize on top of that for truly difficult problems. Finally, NNs perform well for certain mappings of the input numbers into "neural form" and perform absolutely terribly for other encodings of the same input data. One has to actually understand how NNs work and what the data represents to present inputs to the network in a way that makes it relatively "easy" to find a halfway decent optimum (noting that the system is nearly ALWAYS sufficiently complex that you won't find THE optimum, just one that is better than (say) 99.999999% of the local optima one might find by naive Monte Carlo sampling followed by e.g. a conjugate gradient or, worse, a simple regression fit). Oops! I forgot to mention the entire problem of selecting an adequate training set! This problem is coupled to a number of the others above — for example, finding the right resolving power for the network — and above all, is related to the problem of being able to extrapolate any model beyond the range of the functional data used to build the model. This problem is discussed in a separate section below, so I won't say more about it now, but at that time you will see that NNs aren't any more or less capable of solving this problem in the strict sense of Jaynes (the world's best description of axiomatic probability, in my humble opinion) and maximum likelihood, Polya's urn, entropy, etc., for people who know what they are. Extrapolation necessarily involves making assumptions about the underlying functional form (even if they are only "it is an analytic function and hence smoothly extensible"), and one can always find or define an infinite number (literally) of possible exceptions where the assumption breaks down. This leads to some pretty heavy stuff, mathematically speaking, concerning the dimensionality of the actual underlying function, the way that dimensionality projects into the actual dimensions you are fitting (which may be non-orthogonal curvilinear transforms of the true dimensions, or worse), and the log of the missing information — the information lost in the process of projection. WAY more than we want to cover here, way more than I CAN cover here — read Jaynes and prosper. From this you may imagine that I don't think much of published conclusions concerning NNs — they tend to be based on simple feed-forward/back-propagation networks applied to simple problems or naively applied to problems that they cannot possibly solve, although there are exceptions. The exceptions are worth a lot of money, though, so people who work out truly notable exceptions do not necessarily publish at all.
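As a small illustration of the "neural form" point above, here is a generic sketch (not drawn from any particular climate NN) of two encoding choices that make the same information easier or harder for a network to use: a cyclic variable presented raw versus as a sine/cosine pair, and a wide-ranged variable log-transformed and standardized.

```python
# Two common input encodings, shown on hypothetical data.
import numpy as np

months = np.arange(1, 13)

# Raw integer encoding: December (12) and January (1) look maximally far apart,
# even though they are adjacent in the cycle the network must learn.
raw = (months - months.mean()) / months.std()

# Cyclic encoding: two bounded inputs that preserve the Dec/Jan adjacency.
cyc = np.column_stack([np.sin(2 * np.pi * months / 12),
                       np.cos(2 * np.pi * months / 12)])

# A wide-ranged input (spanning orders of magnitude) is usually log-transformed
# and standardized before being presented to the network.
wide = np.array([0.4, 0.9, 2.3, 6.1, 15.0, 42.0])
scaled = (np.log(wide) - np.log(wide).mean()) / np.log(wide).std()

print(raw)
print(cyc[:3])
print(scaled)
```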
With regard to the specific problem at hand — building NNs to model a nonlinear function "T(i)" (temperature as a function of some given vector of "presumed climatological variables that might be either causal agent descriptors or functions of causal agent descriptions") — the problem itself is actually simple enough that I think that NNs would do fairly well, subject to several constraints. The most important constraint is that the network needs to be trained on valid data. NNs aren't "magical" any more than human brains are — they suffer from GIGO as much as any program on earth. They are simply far, far better at searching high dimensional spaces in a mostly UNSTRUCTURED way that makes the fewest possible assumptions about the underlying form of the function being fit, in comparison to using a power series, a Fourier representation, a (yuk!) multivariate logistic representation, or fitting anything with a quasi-linear few-variable form that effectively separates contributions from the different dimensions into a simple product of monotonic forms (assumptions that are absolutely not justified in the current problem, by the way). As previously noted, this leads us back to the M&M vs Mann result, and the lovely papers with links provided by TAC. I don't really see how to uplink and embed jpgs grabbed from the figures in these papers and I have to teach graduate E&M in a couple of hours (which does require some prep:-) so I'll try to summarize the basic idea in (my own) general terms. I STRONGLY urge people to at least skim the actual papers by following the links TAC provided, as they use simpler (although perhaps less precise) language than that below and, besides, have simply LOVELY figures that say it all. c) Suppose you have a completely deterministic function (equation of motion) with (say) 10^whatever degrees of freedom, where whatever is "large", 10^whatever is "very large" and numbers like 10^whatever! are "good friends with infinity, who lives just past the end of their street" (the latter number figures prominently in various computations of probability in this sort of system). OK, so we cannot think about actually solving such a system, so we go from the actual equation of motion to its Generalized Master Equation. The way this works is that one selects a subset of the degrees of freedom or a functional transform thereof and performs a projection from the actual degrees of freedom to the new ones. To do this one has to embrace a statistical description of the underlying process and average over the neglected degrees of freedom. This results in the appearance of new, coarse-grained degrees of freedom (like "temperature", which is a proxy as it were for the average internal energy per degree of freedom in a multiparticle physical system at equilibrium, but there may be many others as well) and a new equation of motion for the quantities that remain in your microscopic description. See in particular links on the master equation page for Chapman-Kolmogorov and Fokker-Planck and Langevin.
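For readers who want to see the shape of the mathematical object described in the next paragraph, here is a schematic generalized master equation for the probability distribution P over the retained, coarse-grained variables a. It is a generic Nakajima-Zwanzig-style sketch; the symbols are illustrative rather than drawn from any specific climate model:

$$\frac{\partial P(\mathbf{a},t)}{\partial t} \;=\; \mathcal{L}\,P(\mathbf{a},t)\;+\;\int_{0}^{t} K(\mathbf{a},\,t-t')\,P(\mathbf{a},t')\,dt'\;+\;F(\mathbf{a},t)$$

Here $\mathcal{L}$ is the local (Markovian) part of the projected dynamics, the kernel $K$ carries the memory generated by the degrees of freedom that were averaged over, and $F$ is the effective noise those neglected degrees of freedom feed back into the retained variables.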
Note well that at the CK level, solutions on the projective subspace are most generally going to be the result of solving non-Markovian integrodifferential equations with a kernel that makes the time derivative of the quantities of interest at the current time (say, the joint probability distribution for global temperatures at all the measurement stations around the planet as a function of time) a function of not only the current values of those states and other input variables describing the current values of projective (coarse grained averaged) quantities, but the values of those variables at a continuum of times into the past that has to be integrated over. The system has a “memory”, and the dynamics are no longer local in time. Note well that the MICROSCOPIC dynamics IS time-local, but in the process of projection and coarse-grain averaging the new variables “forget” critical information from earlier times that would have been encoded on the degrees of freedom averaged over, and that is still at least weakly encoded on previous-time values of those average degrees of freedom. This sort of evolution is a generalized master equation, and alas wikipedia doesn’t yet have a reference for it but they are in use in physics in a number of places in e.g. the quantum optics of open systems. Sorry, I don’t know of any easier way of describing this, as this is the actual bare bones underlying mathematical structure of the actual physical problem one is trying to solve. One CAN (and obviously most people DO) just naively say “hey, global temperature (however that is defined) might be a smooth function of CO_2 concentration (however that is defined) with the following (presumed) parametric form and here is the best parametric fit to that form in comparison to the data” without even thinking about those hidden degrees of freedom and non-Markovian effects, but that is, really, a pretty silly thing to do WITHOUT even explicitly acknowledging the limitations on the likely meaning of the result. The point is that there are many physical systems where the local dynamics depends on the particular history of how one arrived at the current state, not just the values of its variables at that state. Pretty much any non-equilibrium, open system in statistical mechanics, for example. This gives you the merest glimpse at the true complexity of the problem at hand. The neglected degrees of freedom of the coarse grained variables are responsible for the colored stochastic noise that appears in the actual distribution one is trying to time-evolve or state-evolve, the state evolution is generally non-Markovian so that making a Markov approximation and time evolving it only on the basis of current state is itself an additional source of error and erases the POSSIBILITY of certain kinds of dynamics in the result, etc. Now, again for the specific problem at hand, the data being fit is a coarse-grained average of temperature measurements. Those measurements were recorded over hundreds of years and in a very, very inhomogeneous distribution of locations. Those measurements themselves (as measurements always are) are subject to errors both systematic and random — the former become what amounts to an unknown functional transformation of the results from each measurement apparatus and the latter appears as noise of one sort or another. The number of locations, the site of the locations, the unknown transformation of the measurements from these locations, all themselves have varied in ways both known and unknown over time. 
Finally, the results from those locations are THEMSELVES transformed in a specific way into the single number we are calling (e.g.) "global average temperature for the year 1853" (or 2004, or 1932). Note that different transforms are required for all of those years simply because of HUGE differences in the profile of the contributing sites over decadal timescales. Note also that there are those who accuse the transform being used of introducing significant bias, at least in recent years. I cannot address this — it really isn't necessary. The point is that there is an UNCERTAINTY in T(t) for any given year, and that the deviation is almost certainly not going to be a standard normal error but rather an unknown systematic bias with a superimposed variance that is not normal. Curve fitting without an error estimate on the points is already an interesting exercise that we will not now examine, especially when that error estimate is not presumed normal — suffice it to say that this unknown error SHOULD cause us to reduce the confidence we place in the resulting fit. This, then, is the data that is extended by proxies by Mann et al. to produce a temperature estimate back roughly 1000 years, T(t) for t in the general range of 1000-200* CE. Let us call this curve T_mea(t) (for Mann et al.). MEA used tree rings as proxies, and, as I'm sure everybody who is reading this site is aware, used a statistical weighting mechanism while effectively normalizing the T(t) for the last couple of hundred years to current tree rings that de facto gave a huge weight to just two species of tree in their sample, both with highly questionable growth ring patterns in modern times that may or may not be related to temperature at all and that are not CONSISTENT with patterns observed in the growth rings of hundreds of other neighboring species over the same interval. By doing what amounted to a terrible job with the actual process of proxy extrapolation, they completely erased a warm period back in the 1300-1400s that is well-documented historically and by all the OTHER tree ring proxies they claimed to use. All of this computation was hidden, of course. M&M, by means of some brilliant detective work and bulldog-like dedication to task, had to DEDUCE that this was what had been done by attempting to reproduce their result and by means of limited private communications with MEA, who, as of the last M&M paper I read, still have not disclosed all of their actual code. Steve, it would actually be lovely if you would drop a copy of the basic figure from one of your papers in here — one with and one without the hockey stick form (or if you like, one with and one without bristlecone pines being anomalously weighted). Two more remarks and then I have to go teach, sorry — maybe I'll get back to this if there is still interest in people hearing more. First, at the VERY least this means that there is YET ANOTHER layer of systematic error AND stochastic noise on top of the projection of T(t) back over 1000 years via proxies.
In my opinion, having read M&M and understanding what they are talking about, at least one source of systematic error has been uncovered by them and can be resolved by simply shifting from T_mea(t) to T_mm(t), which shows that temperatures a mere 700 years ago were as warm as or warmer than they are today, in the complete absence of anything LIKE the anthropogenic CO_2 that everybody is worried about. Second, an interval 200 years long, or 1000 years long, is still tiny on a geological scale and we KNOW that there are significant geological-scale variations in global temperature. Forget possible causes — remember the process of projection and coarse graining/averaging that is implicit in any such description. Think instead about the TAC-referenced papers. We could well be in the position of the ant, living on the side of Mount Everest, who is trying to decide how to climb to the top of the world. To figure out which way to go, he wanders around a bit in his immediate neighborhood, and sees a few grains of sand, a gum wrapper, and — look! An acorn sitting just downhill of him! The shortsighted ant climbs to the top of the acorn and proclaims himself king of the world. BEFORE talking about causes, curves, fits, ALL of the above has to be understood and the data itself has to be reliable. Error estimates have to be attached to the data being fit, and those estimates have to include the effects of convolving the measurement process with the actual values, both systematic and random. Finally, NO method for fitting the data to ANY control parameters will succeed if the data has significant variation on timescales larger than the window being fit that are not represented in the basis of the fit. Sigh. Have to go. Perhaps I'll return later to the actual point, which was NNs vs the data span and their likely ability to predict. In an acorn-shell, if the training data spans the REAL range of data in the periods of interest, it should be as good at capturing internal multivariate projective variation as anything. Nothing can extrapolate without assumptions, and there is no way to validate those assumptions without still more data. Period. 134. John Finn said: "On the discrepancies between GISS and HADCRUT. Tim Lambert is correct about the different anomaly periods, but the 'discrepancy' seems to be growing (it should be reasonably constant)." It is growing… or it is lessening, depending on the year. Maybe the discrepancy's amplitude depends on the number of conventions per year climatologists make to "correct" data. Maybe when Hansen is no longer head of GISS, GISTEMP will decrease. Who knows. 135. #133, RGB Hard going but fascinating stuff. Slightly off-thread but keep it up. 136. re: 133, It's such fun reading messages like this. It's just slightly over my head, such that I can reasonably judge it to be correct: in the places where I clearly understand, it's correct, and where I can't there are no glaring problems. And interestingly, it's not as far over my head in one sense as Steve M often is, since it's a more general discussion rather than one relying on detailed technical complexities. Still, I wish I could take a semester-long class on NNs with you, as I'm sure I'd learn lots. For that matter it's too bad I didn't have someone like you as a teacher when I took E&M (though it was just an undergrad class). My teacher may have known what he was talking about, but we were in a small college and there were only 4 students in the class, yet he still lectured like he was talking to 100. Very off-putting. 137. Whew!
I just reread what I wrote and it is far too much, sorry. Let me wrap up (since I have a couple of minutes before I have to do my next daily chore) with the following. This isn't my field, and I have no idea where to get actual data sets, e.g. T_mm(t), T_mea(t), CCO_2(t). I did find sunspots on the web from 1755 on, or so it appears, which unfortunately doesn't extend back to 1300. There are solar dynamics theories that try to extend patterns back that far and there may even be comparable data of some sort, as sunspots were observed a LONG time ago by this culture or that, but I don't know where to get that data if it exists. If anybody can direct me at the data, I'd be happy to build a genetically optimized NN (with bells and whistles added as needed) to see what it can do with various inputs to predict e.g. T(t,i), where i are the input vectors and where t may or may not be explicitly included (probably not, actually). What this can do is give one a reason to believe that a nonlinear functional mapping exists between the inputs used and the target in the different cases. It will not tell one what that relationship is, or whether the relationship is direct or indirect, only that with a certain set of inputs one "can" build a good network and without them one "cannot", all things being equal. It will not be able to address, in all probability, whether or not e.g. CO_2 or sunspots or the cost of futures in orange juice is "the" best or worst variable to use as an input if, as is not unreasonable, it is discovered that all three are correlated in their variation to T() (quite possibly as effect, not cause, in some cases). But it might be fun just to see what one can see. 138. Well, here is the data Michael Mann used. I got it from the Nature website. Sometime I am going to try to find the data in a more raw form and rebuild it, but not today. I don't know if the following links will be helpful but there is more data that you might want to incorporate. I also found some length-of-day data which I posted here: Let me try this again. For some reason if I put too many links the spam filter gets me. 139. Well, here is the data Michael Mann used. I got it from the Nature website. Sometime I am going to try to find the data in a more raw form and rebuild it, but not today. 140. I don't know if the following links will be helpful but there is more data that you might want to incorporate. 141. I also found some length-of-day data which I posted here: P.S. I would have put this all in one post but the spam filter doesn't seem to like me putting a lot of links in one post. 142. Thanks, John. I recorded links and looked at a bunch of data back to 1000 or so. It is really amazing that in Mann's T_mea(t) the medieval optimum has just plain disappeared, and the Maunder and Spörer events are almost invisible. How could anybody take this seriously? I pulled just a couple of proxies (e.g. some African lake data, Chinese river data) and they show beyond any question an extended warm spell in the 1100-1300 range that was clearly global in scope. I thought that this was visible in nearly all the tree ring data on the planet — but I see that now I can look for myself (if I can figure out how — there is a LOT of tree ring data, and of course (sigh) tree growth is itself multivariate and not a trivial or even a monotonic function of temperature). This leaves me with the usual problem — what to fit and how large a range to try to fit (or rather model with a predictive model, not exactly the same thing).
There are sunspot proxies that go back over the 1000-year period — I don't know exactly how that works but they are there. I'll have to look at a couple of papers on solar dynamics and see if I can improve on this with perhaps orbital or magnetic data. I did look over the data on the variation of earth's rotational period. Difficult to know what to make of this, as I'm not sure what this data reflects physically. There is transfer of angular momentum to and from the earth via e.g. tides and the moon and sun and other planets; there can also be CONSERVED angular momentum but a change in the earth's moment of inertia due to internal mass rearrangements (upwelling magma? plate tectonics? something large scale). I'd expect none of these to be elastic processes and for there to be a large release of heat, in particular, accompanying anything but uniform motion as internal forces work to keep the essentially fluid (on this sort of time scale) body rotating roughly homogeneously. I'll have to do a back-of-the-envelope calculation to see if the energy changes that might be associated with the variation are of an order that could affect the temperature of the earth's crust itself. Geodynamics is of course another potential heat source that may or may not be constant. I'd assumed that it was constant or very slowly varying, but this data suggests that it might not be. Anyway, it will take me some time to do the computations, so I'll probably be quiet now until I have something concrete to say (if the thread survives until then:-). 143. An FYI and follow-up on earlier posts about trends versus model scenarios. Because there was an early discussion in this thread of the relative merits of a simple time trend versus the Hansen Scenarios, I thought it would be of interest to run a Diebold-Mariano predictive accuracy test. This test allows testing for different objective values — absolute value difference, squared differences, weighted, etc. — and accounts for the autocorrelations and heteroskedasticity of forecast errors (predicted minus actual values): a characteristic missing in most common tests of forecast efficiency. The tests are a "forecast" comparison of the Scenarios versus both a simple linear trend and a quadratic trend with an ARMA(3,3) structure, and were run for both absolute value and squared differences from the actual anomalies with both the GISS and Hadley series of anomalies. All series were centered on the 1958-1988 (common data set) means. Out of the 24 comparisons there were 12 rejections of "no statistical difference" between the forecasts, and in each case the time series trend forecast was closer to the actuals. The results shown below [how does one post a table into a comment?] should not be interpreted as saying that a simple time series forecast is really superior. Rather, it should more modestly be considered to say that the Hansen scenarios offer no more predictive accuracy over what can be seen from a naive or semi-naive extrapolation of the series.

Scenario  Trend           Data    Rejection  t-stat  % Prob
A         Linear          GISS    Yes        -3.52    0.16%
B         Linear          GISS    No          0.42   33.92%
C         Linear          GISS    No          1.42    8.84%
A         ARMA+quadratic  GISS    Yes        -5.55    0.00%
B         ARMA+quadratic  GISS    Yes        -2.34    1.67%
C         ARMA+quadratic  GISS    No          0.14   44.71%

Scenario  Trend           Data    Rejection  t-stat  % Prob
A         Linear          HADLEY  Yes        -7.44    0.00%
B         Linear          HADLEY  No          0.35   36.46%
C         Linear          HADLEY  No          1.26   11.27%
A         ARMA+quadratic  HADLEY  Yes        -8.06    0.00%
B         ARMA+quadratic  HADLEY  Yes        -2.75    0.74%
C         ARMA+quadratic  HADLEY  No          0.90   19.09%

Scenario  Trend           Data    Rejection  t-stat  % Prob
A         Linear          GISS    Yes        -3.57    0.14%
B         Linear          GISS    No          0.39   35.09%
C         Linear          GISS    No          1.28   10.96%
A         ARMA+quadratic  GISS    Yes        -6.44    0.00%
B         ARMA+quadratic  GISS    Yes        -2.70    0.82%
C         ARMA+quadratic  GISS    No         -0.72   24.13%

Scenario  Trend           Data    Rejection  t-stat  % Prob
A         Linear          HADLEY  Yes        -9.61    0.00%
B         Linear          HADLEY  No          0.73   23.86%
C         Linear          HADLEY  No          1.47    8.12%
A         ARMA+quadratic  HADLEY  Yes        -5.64    0.00%
B         ARMA+quadratic  HADLEY  Yes        -2.10    2.64%
C         ARMA+quadratic  HADLEY  No          0.27   29.42%

144. Re: RGB #133 and preceding: Neural Networks. Suppose a Neural Net is used to make a univariate reconstruction or forecast (from some K number of "forcing," "exogenous" or driving explanatory variables — no causal relationship assumed). How is the distribution of the prediction — presume it is some series — known? Your description of NN "fitting" as "generalized nonlinear approximators" is more than apposite. And generally the problem with nonlinear estimators is that one has to resort to asymptotic methods to determine the distributional properties, which leaves us poor souls living in the finite-sample universe often practicing statistics on faith. Anyway, I was fascinated by your comments and curious how you dealt with the statistical properties of NNs. [Confession: I tried writing a NN program when I was first learning C++. The exercise scared me off object-oriented programming and neural nets ever since.] 145. Speaking of C14, has anyone heard of the Suess effect? Apparently galactic cosmic rays are not the only thing that affects the C14 concentration; so does the burning of C14-depleted fossil fuels. I'm curious, though: wouldn't it be C14-rich fuels that should affect the C14 concentration the most? I wonder how difficult it would be to correct the C14 concentration for fossil fuel burning so we could isolate the solar effects. I also wonder if forest fires affect the C14 concentration at all. 146. Oh, I understand it now. If the fuel we burn has a lower C14 concentration than what is in the atmosphere, we dilute it. Fossil fuels, since they are older, should have a lower C14 concentration. Trees should have a C14 concentration much closer to that of the atmosphere. Thus forest fires should have a much less significant effect on C14 concentration than fossil fuel burning. If we want to use C14 as a solar proxy we have to correct for the Suess effect. 147. Re #144 I just don't worry about the statistical properties of NNs — I view them as "practical" predictive agents, not as formal statistical fits. In fact, I feel the same way about modelling in general in cases where the underlying model is effectively completely unknown, so one gets no help from Bayes and no help from a knowledge of functional forms. I'm working currently on a major random number testing program (GPL) called "dieharder", which incorporates all of the old diehard RNG tests AND will (eventually) incorporate all the STS/NIST tests, some of the Knuth tests, and more as I think of things or find them in the literature. A truly universal RNG testing shell with a fairly flexible scheme for running RNG tests. In this context, I can speak precisely about statistical properties.
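Returning briefly to #143: for anyone who wants to reproduce that style of comparison, here is a bare-bones Diebold-Mariano test. It is a generic sketch using made-up series rather than the GISS/Hadley anomalies reported above, with the simple asymptotic normal approximation for the test statistic.

```python
# Bare-bones Diebold-Mariano test: compare two forecasts of the same actuals
# under a squared or absolute loss, using the asymptotic N(0,1) approximation.
import numpy as np
from math import erfc, sqrt

def dm_test(actual, fcst1, fcst2, h=1, loss="squared"):
    e1 = np.asarray(actual) - np.asarray(fcst1)
    e2 = np.asarray(actual) - np.asarray(fcst2)
    g = (lambda e: e ** 2) if loss == "squared" else np.abs
    d = g(e1) - g(e2)                          # loss differential
    T, dbar = len(d), d.mean()
    # long-run variance of dbar: gamma_0 + 2 * autocovariances up to lag h-1
    gamma = [np.mean((d[k:] - dbar) * (d[:T - k] - dbar)) for k in range(h)]
    var_dbar = (gamma[0] + 2 * sum(gamma[1:])) / T
    dm = dbar / sqrt(var_dbar)
    p = erfc(abs(dm) / sqrt(2))                # two-sided p-value
    return dm, p

# Toy example: a noisy trending "actual", a trend forecast, and a flat forecast.
t = np.arange(30)
actual = 0.01 * t + 0.1 * np.random.default_rng(2).normal(size=30)
trend_fcst = 0.01 * t
flat_fcst = np.full(30, actual.mean())
print(dm_test(actual, trend_fcst, flat_fcst, loss="squared"))
```

A negative statistic with a small p-value indicates the first forecast has significantly lower loss than the second.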
Random number testing works by taking a distribution of some sort that one can generate from random numbers and that has some known property (ideally one that is e.g. normally distributed with known mean and variance). One generates the distribution, evaluates the statistic, compares the mean result to the expected mean, computes e.g. chi-squared, transforms it to a p-value for the null hypothesis, and I can then state "the probability of getting THIS result from a PERFECT RNG is 0.001" or whatever. Now, just how can one do that when one is pulling samples from an unknown distribution, with unknown mean, variance, kurtosis, skew, and other statistical moments, where the long time scale, short time scale, intermediate time scale behavior is not known, and where we KNOW that the underlying system is chaotic, with a high dimensionality to the primary causally correlated variables and clear mechanisms for nonlinear feedback? The answer is, we can't. The basic point is that nobody can make a statement about the p-value associated with any of the fits being discussed, which is why using them in public policy discussions is absurd. What we CAN do by simple inspection of the data is note that if the data range over 1000 years contains two warm excursions like the one we are just finishing this year, a simple maximum entropy assignment of probability (a la Polya's urn) for such excursions occurring in any given century is something like 10% to 20%, or in any given 100-year interval it is not unlikely to find at least one or two decades of similarly warm weather. To make statements beyond that requires accurate and similarly normalized data that stretches back over a longer time, and we observe (when we attempt to do so via proxies) long-term temperature variability that vastly exceeds any of that observed in the tiny fraction of geological time since the invention of the thermometer. So when I build NNs for this problem, it will not be so that I can "succeed" and build one that is highly predictive with some set of inputs and then say "Aha, now we understand the problem" or "Clearly these are the important inputs". That's impossible, given that there are lots of inputs and that their EFFECT is clearly all mixed up by feedback mechanisms so that they are all covariant in various ways. For example, there are clear long-term variations in CO_2 (evident in e.g. ice cores) that seem to occur with temperature. Is this cause? Effect? Both (via positive feedback through any of several positive mechanisms)? There are many worthwhile questions for science in all of this, but it is absolutely essential to separate out the real science, which SHOULD have a healthy amount of self-doubt and a strong requirement for validation and falsifiability (and hence for challenge from those that respectfully disagree), from the politics and public policy. Performing statistical studies is by far the least "meaningful" of all approaches to science, because correlation is not causality. It is easy in so many problems to show correlation. Most introductory stats books (at the college level) contain whole chapters with admonitory examples of how one can falsely claim that smoking causes premarital sex and other nonsense from observing correlations in the populations.
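To make the p-value machinery described a few paragraphs back concrete, here is a toy version of what an RNG test does: bin samples from the generator under test, compare the counts with the perfect-generator expectation, and convert the chi-squared statistic into a p-value. It is a stand-in for the idea, not a piece of dieharder itself.

```python
# Toy RNG test: chi-squared goodness-of-fit of binned uniform deviates against
# the counts a perfect generator would produce, reported as a p-value.
import numpy as np
from scipy.stats import chisquare

def uniformity_pvalue(samples, bins=100):
    observed, _ = np.histogram(samples, bins=bins, range=(0.0, 1.0))
    expected = np.full(bins, len(samples) / bins)   # perfect-RNG expectation
    stat, p = chisquare(observed, f_exp=expected)
    return stat, p

rng = np.random.default_rng(12345)                  # the generator under test
print(uniformity_pvalue(rng.uniform(size=100_000)))
```

A good generator should return p-values that are themselves roughly uniform over repeated runs; the contrast with the climate-fit situation is exactly that here the null distribution is known.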
Alas, people who become politicians, nay, who become presidents of the United States, may well be "C-" students in general and have never taken a single stats course (let alone an advanced one), and besides, one of the lovely things about misusing statistics is that it is a fabulous vehicle for politicians and con artists both to make their pitches. "Help prevent teen pregnancy! Don't let your daughter smoke!" I personally view statistical surveys and models of the sort being bandied about in this entire debate as being PRELIMINARY work one would do in the process of building a real science — exploring the correlations, trying to determine what needs to be explained and what CAN be MAYBE explained, and eventually connecting this back to "theoretical models" in the scientific sense. However, the PHYSICS of current models is so overwhelmingly underdone that I just think that it is absurd that anyone thinks that they can make any sort of statement at all about what causes what. The models contain parameters that are at best estimated and where the estimates can be off by a factor of two or more! They are missing entire physical mechanisms, or include them only by weak proxies (e.g. "solar activity" measures instead of "solar magnetic field" measures instead of "flux of cosmic rays", where all of the above may vary in related ways, but with noise and quite possibly with additional significant functional variation from other systematic, neglected mechanisms). So seriously, the NN is "just for fun" and to see if one can read the tea leaves it produces for some insight, not to be able to make a statement with some degree of actual confidence (in the statistical sense). Sorry about your C++ experience — I actually don't like C++ either, as you can see from my website, where I have the "fake interview" on C++ that is pretty funny, if you are (like me) a C person…;-) 148. Robert, best of luck with the neural network fits. You may also want to look at fitting to satellite data as opposed to instrumental data. I say this since Steve's thread suggests that it is easier to fit the satellite data to an ARMA model than the instrumental data. I haven't looked at this too closely but it could save you some grief. 149. Re # 147 I'm all for reading the tea leaves with a hope of insight. When I look at the various ARMA estimates I have made of annual or even monthly data, I don't see any common structure to the patterns. I contrast this with daily and hourly temperature data, which has been pretty consistent, at least for the data sets I have seen in the US. So maybe NNs catch a pattern which gives an insight which … And maybe the Laplacian efforts to model climate may come to fruition. If you have it, could you post the exact location of the "interview" regarding C++? I can't really claim to be a C person — I haven't written a line of C in over 10 years — but I still think K&R is, if not the greatest, then the cleanest book written on programming. But then the language is pretty clean also. 150. Martin, GIYF but among other places it is here: rgb's C++ Rant (and fake "interview"). Lest this trigger a flame war (always fun, but not necessarily for this venue) let me hasten to point out that while I personally prefer C to C++ for some of the reasons humorously given here, I really think that language preferences of this sort are a matter of religious taste and not worth really arguing about.
John: There are really two NN projects that seem to be implicitly possible — one that uses the extremely accurate satellite data as you suggest (which, alas, doesn't extend back very far at all on the 1000-year scale) to model short-time fluctuations, which almost certainly won't extrapolate but which should have really good basic data, and one that uses the infinitely arguable T(t) from proxies as the model target and SOME sort of input related to e.g. solar activity. The problem with NN predictive models is that one has to be very careful picking one's inputs to avoid certain obvious problems. For example, using a single input of t (the year) would permit a network to be built that pretty much approximates T(t) via interpolation and limited extrapolation. However, this isn't desirable — one could do as well with a Fourier transform and looking for important frequencies. Indeed, one hopes that this latter thing has already been done, as it would certainly yield important information. Yet the SPECIFIC aspects of solar dynamics that may be "the primary variable" in determining T may not be precisely reflected in "just" sunspot count, and sunspot counts per year only go back so far (roughly the 1600s), at least accurately. So one is tempted to extend them with extrapolated patterns, which in turn beg several questions about e.g. long-term Gleissberg-type fluctuations and which CAN become nothing but transforms of t if one isn't careful. This kind of thing has been done many times by Friis-Christensen (who doesn't JUST look at sunspot intervals but attempts to find evidence of deeper patterns in the sun's irregular but predictable orbital behavior and its connection with its rotation and magnetic properties). The solar models seem to be getting there, but are still largely incomplete and to some extent phenomenological. The good thing about the NN in this context is that IF there is a relatively simple (e.g. fivefold) pattern in the underlying forcing/response, a suitably "stupid" NN will be forced to find a nonlinear model for it in order to end up with a good performance. One of my favorite demo problems, for example, is building a NN that can recognize binary integers that are divisible by e.g. 7, presented bitwise on its inputs. The amazing thing is that there exist networks that will "solve" this problem with something like 95% accuracy after being trained with only 25% or so of the data, given that a NN knows nothing about "division" and that the input neurons aren't even ordinally labelled or asymmetric in any way. There is reason to hope for relations like those proposed by FC to be abstractable if the network has the right inputs. Note that all of this reflects the basic problems with all the other kinds of models one might try to build. Since the invention of NASA and weather satellites, we have increasingly accurate and complete data on global weather. Before that we have accurate and complete data only from a tiny fraction of the world, for an appallingly short period of time on a geological scale, making it extremely dangerous to jump to any sort of dynamical model conclusions. 151. With regards to sunspot number: I plotted the sunspot number and I looked at Mann's graph of solar activity (okay, I forget what he called it). Anyway, Mann's graph looked like the sunspot number put through a low-pass filter. In the paper Mann referenced, the figure was constructed from many solar indicators.
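Backing up for a moment to the divisible-by-7 demo mentioned in #150: the experiment is easy to set up with off-the-shelf tools. The sketch below uses scikit-learn's generic MLPClassifier rather than rgb's own network, so the accuracy obtained will depend on the architecture, the random split and the training settings; the point is only how the experiment is posed.

```python
# Divisible-by-7 demo: present 10-bit integers bitwise to a small feed-forward
# net, train on ~25% of them, and test on the rest.
import numpy as np
from sklearn.neural_network import MLPClassifier

n_bits = 10
numbers = np.arange(2 ** n_bits)
X = ((numbers[:, None] >> np.arange(n_bits)) & 1).astype(float)  # bitwise inputs
y = (numbers % 7 == 0).astype(int)

rng = np.random.default_rng(7)
train = rng.random(len(numbers)) < 0.25
net = MLPClassifier(hidden_layer_sizes=(64, 64), activation="tanh",
                    max_iter=5000, random_state=0)
net.fit(X[train], y[train])
print("train accuracy:", net.score(X[train], y[train]))
print("test accuracy :", net.score(X[~train], y[~train]))
```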
So say Mann's graph is an indication of solar flux while the sunspot number (with the mean subtracted) better correlates with solar magnetism and clouds. So a neural network with suitably chosen nonlinearities may be able to extract a good deal of information from sunspot number alone. Additionally, sunspot number extends through most of the years over which Mann did his figure 7 correlations. 152. Two topics of interest are how to combine high frequency data with low frequency data and how to compare estimates measured at different sampling rates. The problems arise, for instance, because of different types of data, either measurements or proxies. My initial thoughts on the issue are: if the systems aren't too stiff then it may be okay to compare the estimates by finding a transformation from one controllable canonical form to another. Otherwise more numerically stable representations of the state-space equations should be found. In either case I would suggest choosing a common form to compare the estimates obtained from both sets of data (e.g. state space, diagonalized, Jordan form, Schur decomposition). In the case where a controllable canonical form is chosen as the form to compare the estimates, an initial estimate can be obtained by estimating the ARMA coefficients from one set of data and then transforming those coefficients to the form of the other set of data. It is important to map the uncertainties as well as the estimates because this information will be used as a priori information in the improved estimate, which incorporates this a priori information plus the other set of data. It should be noted that recursive least squares is equivalent to the Bayesian estimate where the a priori information is obtained from the previous estimates via the RLS algorithm. I bring this up to point out that there is a wide variety of theory about how to recursively incorporate new sets of data to improve an estimate that often gives the same result. The advantage of using a controllable canonical form as the basis of comparison is that only the model estimate due to one of the sets of data has to be transformed. A transformation can introduce numerical error and statistical bias. As the uncertainty in the transformation approaches zero, the bias introduced by the transformation approaches zero. The problem with controllable canonical form is that it may not represent stiff systems in a numerically stable way. If the poles are well separated the system can be put in diagonalized form, but a diagonalized form becomes a Jordan form when there are repeated poles. A Schur decomposition is a form that is a numerically stable alternative to Jordan form but not as computationally efficient. I bring these issues up because there is a lot of talk here about the statistical issues of fits. Robert points out how difficult it is to provide meaningful statistical results, and this gets ever harder in the presence of numerical instabilities. The procedures I describe retain the prospect of calculating error bars, but open up the questions of how much numerical error affects these error bars and whether the algorithms proposed properly account for this error. 153. I was thinking about the orthogonality of the proxies and my first thought was that it is probably a precipitation index. Precipitation indexes can be related to cloud cover, which plays a big part in warming. Of course low clouds cause cooling and high clouds cause warming, so precipitation is not directly related to warming.
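As one concrete version of the ARMA-to-state-space step sketched in #152: scipy's tf2ss returns a controller (controllable) canonical realization of an ARMA transfer function, after which the poles can be inspected or the realization moved to a more numerically stable form such as the Schur decomposition mentioned there. The coefficients below are made up for illustration.

```python
# ARMA(3,3) coefficients -> controllable canonical state space -> pole/Schur checks.
import numpy as np
from scipy.signal import tf2ss
from scipy.linalg import schur

ar = [0.75, -0.20, 0.05]           # hypothetical AR coefficients
ma = [0.40, 0.10, 0.02]            # hypothetical MA coefficients

num = [1.0] + list(ma)             # numerator, descending powers
den = [1.0] + [-a for a in ar]     # denominator, descending powers

A, B, C, D = tf2ss(num, den)       # controller (controllable) canonical form
poles = np.linalg.eigvals(A)       # well-separated poles -> safe to diagonalize
T, Z = schur(A)                    # Schur form: stable alternative to Jordan form
print("poles:", np.round(poles, 3))
print("Schur diagonal:", np.round(np.diag(T), 3))
```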
I then recall one of Robert's posts: "I pulled just a couple of proxies (e.g. some African lake data, Chinese river data) and they show beyond any question an extended warm spell in the 1100-1300 range that was clearly global in scope. I thought that this was visible in nearly all the tree ring data on the planet — but I see that now I can look for myself (if I can figure out how — there is a LOT of tree ring data, and of course (sigh) tree growth is itself multivariate and not a trivial or even a monotonic function of temperature)." and I then wonder if maybe Mann took care to select the worst of the tree proxies. I am not sure if Robert was saying the MWP and LIA were in most tree data or not. Interestingly enough, tree proxies are supposedly best for high frequency information, so if low frequency proxies are first used to identify the low frequency model, and the low frequency part of the signal is then removed by an inverse filter (similar to differencing), then maybe trees will provide a more robust method of identifying the high frequency part of the signal. Anyway, we may be able to use trees to get low frequency information but we have better ways of doing it. I think tree proxies should only be used where they are supposed to excel. 154. Re: #6 If memory serves me right, GISS does not use ship-based SSTs, only buoys, whereas Hadley does use them. Their SSTs are therefore likely very different. 155. GISS has SST back to the 1870s for 0.5N 159E in the Hansen PNAS study, while HadCRU has virtually no values around 1900 – so something else must be going on as well. 156. Re 155, the HadISST (Ice and Sea Surface Temperature) database goes back to 1870. I believe that's the data Hansen used … but like so many things in climate "science", who knows? 157. I've been having a bit of a discussion with a rather "in-your-face" gentleman called Eli Rabett on another blog not to be named, about the changes in CO2 in Hansen's different forcing scenarios. Eli's claim is that there is "not a tit's worth" of difference in CO2 in Scenarios B and C until the year 2000. In support of this, he provided the following chart: He does not say where the data for the chart comes from … or how he did the calculations … or anything. He just claims that's the truth. But it can't be, because if the difference were only a few parts per million between the observations and all three scenarios, why are the scenarios so different from each other and from the observations? (And in a bizarre twist, he has the Scenario C levels flattening out in 2000, whereas Hansen states that "Slow growth [Scenario C] assumes that the annual increment of airborne CO2 will average 1.6 ppm until 2025, after which it will decline linearly to zero in 2100.") Now, I've been putting off actually doing the exercise of figuring out the CO2 levels in Hansen's scenarios, because it's somewhat complex. The problem comes from Hansen's specification of the inputs to the models, which are as follows: Scenario A CO2: 3% annual emissions increase in developing countries, 1% in developed. Scenario B CO2: 2% annual emissions increase in developing countries, 0% in developed. Scenario C CO2: 1.6 ppm annual atmospheric increase until 2025, and decreasing linearly to zero by 2100. (He also specifies changes in methane and nitrous oxide, but as he says, "Comparable assumptions are made for the minor greenhouse gases.
These have little effect on the results.") As you can see, the CO2 inputs are in different units, with A and B given as emission changes, and C given as a change in atmospheric ppmv. That's the difficulty. However, I am nothing if not persistent, so I tackled the problem. I got the historical carbon emission data by country from the CDIAC for 1958, the start of the run. I divided it into developed and developing countries. It breaks down like this (in gigatonnes of carbon emitted): World: 2.33 GtC; Developed: 1.90 GtC; Developing: 0.43 GtC. That was the laborious part, splitting out the emission data. Next, the atmospheric data. Not all of the CO2 that is emitted stays in the atmosphere. To account for this, I had to calculate the percentage that remained in the atmosphere each year. This varies from year to year. To compute this, I took the Mauna Loa data for the change in CO2 year by year. Knowing the atmospheric concentration and the amount emitted, I then calculated the amount retained by the atmosphere. Over the period, this varied in the range of 80% to 40%. I was then ready to do the calculations. For Scenarios A and B, I calculated each succeeding year's increased emissions, multiplied that by the percentage retained in the atmosphere, and then used that to calculate the new atmospheric concentration. Scenario C was much easier: it was a straight 1.6 ppmv increase annually. Here are the final results: A couple of things of note. First, Scenarios A, B and C are fairly indistinguishable until about 1980. This is also visible in the model results, where those scenarios do not diverge significantly until 1980. Second, all of the scenarios assume a higher rate of CO2 growth than actually occurred. Finally, since these scenarios were designed by Hansen to encompass the range of high and low possibilities, this sure doesn't say much for the scenarios … 158. As I mentioned, I've been discussing this issue on another blog. Tim Lambert kindly pointed out to me that I was looking at the wrong specification for the forcings of the models. (This was followed by a very nasty response from Eli Rabett.) Lambert was right. Here is my response. Tim, thank you for pointing out this error. You are 100% correct: Hansen describes two sets of scenarios A, B, and C in his paper. One is for the 1988 graph, and one is for the 2006 graph. Guess the Rabett was a far-sighted lagomorph after all. However, none of this changes a couple of things. 1) The CO2 projections by Hansen are quite good up until 1988, which makes sense, because the paper was written in 1988 and the scenarios were designed, understandably, to fit the history. And as Eli Rabett pointed out, all three CO2 scenarios are identical until the C scenario goes flat in 2000. However, after 1988, all three scenarios show more CO2 than observations. C drops off the charts when it goes flat in 2000, but A and B continue together, and they continue to be higher than observations. The CO2 forcings from A and B are higher than observations every year after 1988 to the present. 2) Including 4 of the other 5 major GHGs (CH4, N2O, CFC-11, and CFC-12) gives scenarios that diverge before 1988. A and B diverge immediately, and C diverges from B around 1980. Hansen's claim that the scenarios were accurate can only be maintained by tiny graphs that don't show the details.
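For anyone wanting to check the arithmetic in #157, here is the bookkeeping in sketch form. Willis derived the airborne fraction year by year from the Mauna Loa record; the version below assumes a constant 55% purely for illustration, uses the rough conversion of 2.13 GtC per ppmv, and starts from the 1958 emission split quoted above.

```python
# Scenario-style CO2 bookkeeping: grow the 1958 emission split at given rates,
# keep an assumed airborne fraction in the atmosphere, and accumulate ppmv.
GTC_PER_PPMV = 2.13                       # rough conversion factor

def scenario_ppmv(years, developed0=1.90, developing0=0.43,
                  g_developed=0.01, g_developing=0.03,
                  airborne_fraction=0.55, start_ppmv=315.0):
    dev, dvg, conc = developed0, developing0, start_ppmv
    trajectory = []
    for _ in range(years):
        dev *= 1.0 + g_developed          # developed-country emissions growth
        dvg *= 1.0 + g_developing         # developing-country emissions growth
        conc += airborne_fraction * (dev + dvg) / GTC_PER_PPMV
        trajectory.append(conc)
    return trajectory

# Scenario A-style growth rates (1% developed, 3% developing), 1958 + 30 years:
print(round(scenario_ppmv(30)[-1], 1), "ppmv after 30 years")
```

Changing the growth rates to the Scenario B numbers, or swapping in a year-by-year airborne fraction, reproduces the rest of the exercise.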
Once the details are seen, it is obvious that the forcings from the scenarios are all, every one of them, higher than observations, and the scenarios are still diverging from the observations to this day. Here is Hansen's graph … And here is a clear graph of the same data … The actual 5-gas forcings based on observations follow B very closely until 1988. Again, this is no surprise: B was designed to be as close as possible to observations, with A above and C below. But after 1988, once again the observations diverged from all three scenarios. By 1998, just ten years into the experiment, observations were below all three scenarios. And the distance between them continued to increase right up to the present. A few more thoughts on the Hansen paper. He says: "The standard deviation about the 100-year mean for the observed surface air temperature change of the past century (which has a strong trend) is 0.20°C; it is 0.12°C after detrending [Hansen et al., 1981]. The 0.12°C detrended variability of observed temperatures was obtained as the average standard deviation about the ten 10-year means in the past century; if, instead, we compute the average standard deviation about the four 25-year means, this detrended variability is 0.13°C. For the period 1951-1980, which is commonly used as a reference period, the standard deviation of annual temperature about the 30-year mean is 0.13°C. … We conclude that, on a time scale of a few decades or less, a warming of about 0.4°C is required to be significant at the 3σ level (99% confidence level). There is no obviously significant warming trend in either the model or observations for the period 1958-1985. During the single year 1981, the observed temperature nearly reached the 0.4°C level of warming, but in 1984 and 1985 the observed temperature was no greater than in 1958. Early reports show that the observed temperature in 1987 again approached the 0.4°C level [Hansen and Lebedeff, 1988], principally as a result of high tropical temperatures associated with an El Niño event which was present for the full year. Analyses of the influence of previous El Niños on northern hemisphere upper air temperatures [Peixoto and Oort, 1984] suggest that global temperature may decrease in the next year or two. The model predicts, however, that within the next several years the global temperature will reach and maintain a 3σ level of global warming, which is obviously significant. Although this conclusion depends upon certain assumptions, such as the climate sensitivity of the model and the absence of large volcanic eruptions in the next few years, as discussed in Section 6, it is robust for a very broad range of assumptions about CO2 and trace gas trends, as illustrated in Figure 3." Now, is this all true? Are we in the midst of "significant", unusual warming? The answer requires a short detour into the world of statistics. "Standard deviation" is a measure of the average size of the short-term variations in a measurement, such as a yearly measurement of temperature. A "3σ" (three sigma) level of significance means that the odds of such an event occurring by chance are about one in a thousand. However, there are a couple of caveats … 1) All of these types of standard statistical calculations, such as Hansen used above, are only valid for what are called "stationary i.i.d. datasets". "Stationary" means that there is no trend in the data. If there is a trend in the data, all bets are off.
For example, suppose we are measuring the depth of a swimming pool with someone swimming in it, and we can measure the depth of the water every second. Since someone is swimming in the pool, we get different numbers every second for the depth. After a while, we can determine the standard deviation (average size) of the waves that the person makes. We can then say that if the depth of the water is less than the average depth minus three times the standard deviation (average size) of the waves, this is a “three sigma” event, one that is unusual. It means, perhaps, that someone has jumped in the pool. Now suppose that we pull the plug on the pool, and the water level slowly starts to fall. Sooner or later, the trough of one of the waves from the swimmer will be less than the three sigma depth … does this mean that that someone has jumped in the pool? No. It just means that initially we were dealing with “stationary” (trendless) data, so we could analyze the situation statistically. But once we started emptying the pool, we introduced a trend into the data, and at that point, we can no longer use standard statistics. In other words, all bets are off. The same is true for temperature, it always has a trend. As we know from the history of the world, temperature is never stable. It has trends on scales from months to millenia. Because of this, the analysis Hansen did is meaningless. 2) “i.i.d” stands for “independent identically distributed”. “Independent” means that the numbers in the dataset are not related to each other, that one does not depend on another. But this is not true of temperature data. A scorching hot month is not usually followed by a freezing month, for example. This type of dependence on the previous data point is called “autocorrelation”. In other words, the temperatures are not independent of each other, so we can’t use standard statistical methods as Hansen did. We need to use different statistical methods when a dataset is autocorrelated. One of the effects of autocorrelation is that it increases the standard deviation. Hansen observes (above) that the standard deviation during the 1951-1980 period was 0.13°C, which makes a 3 sigma event three times that, or 0.39°C. But the temperature record is autocorrelated, which increases the standard deviation. Adjusted for autocorrelation, the standard deviation for the ’51-’80 period increases to 0.19°C, which makes a 3 sigma event 0.57°C, not 0.39°C. Now, the average temperature anomaly from 1951-1980 was -0.11°C. The average anomaly 1996-2005 was 0.39°C. So, despite Hansen’s dire 1988 predictions, and even ignoring the fact that the global temperature dataset is not stationary, it has not happened that “within the next several years the global temperature will reach and maintain a 3à ?à ’ level of global warming” as Hansen Will we see such an event? Almost assuredly … because we can’t actually ignore the fact that the temperature is not stationary. Because of the trend, even if we adjust for autocorrelation, we cannot say that a particular data point in a series containing a trend is significant at any level. So sooner or later, we will see a three sigma event, which because of the trend won’t mean anything at all … but we haven’t seen it yet. My best to everyone, 159. From Steve Milloy Junkscince.com Check the link out for maths of exagerated warming trend forecast. 
“Real-world measures suggest moderate to strong negative feedback, currently unnamed and un-quantified, mitigates the Earth’s thermal response to additional radiative forcing from both human activity and natural variation. Justification for amplification factors >2.5 for unmitigated positive feedback mechanisms is not evident in empirical measures. It is not clear whether any amplification factor should be applied or even what sign any such factor should be. Nor is there evidence to support such large ? values in GCMs. Division of real-world measures continue to exhibit the same surface thermal response derived by Idso for contemporary local, regional and global climate, for ancient climate under a younger, weaker sun and for Earth’s celestial neighbors, Mars and Venus. In the absence of support for amplification factors and in view of their erroneously large ? values it is apparent that the wiggle fitting so far achieved with climate model output is accidental or that these models contain equally large opposing errors in other portions of their calculations such that a comedy of errors produce seemingly plausible results in the short-term. In either case no confidence is inspired. On balance of available evidence then the current model-estimated range of warming from a doubling of atmospheric carbon dioxide should probably be reduced from 1.4 – 5.8 °C to about 0.4 °C to suit observations or ËÅ” 0.8 °C to accommodate theoretical warming — and that’s including ?F of 3.7 Wm-2 from a doubling of pre-Industrial Revolution atmospheric carbon dioxide levels, a figure we suspect is also inflated. The bottom line is that climate models are programmed to overstate potential warming response to enhanced greenhouse forcing by a huge margin. The median estimate 3.0 °C warming cited by the IPCC for a doubling of atmospheric carbon dioxide is physically implausible.” “We do not know why modelers persist in using their 2.5 times amplification factor when empirical measure repeatedly demonstrates 0.5 to be the correct ratio. We would like to think competition for a share of the multi-billion-dollar global warming research largesse had nothing to do with it but we can see how difficult it would be to get published in such a frenetic field with results reflecting trivial response. With such a large cash cow to roast we expect heat settings to remain on “high” for the foreseeable future.” 160. # 158 Careful with the definitions. SteveM should put somewhere formal definitions of statistical terms, so we could talk with same language (in my case, equations+ bad English :). See e.g. #156 in http://www.climateaudit.org/?p=833#comments . (I’m not statistician, but here’s what I think, pl. correct if I’m wrong) 1) All of these types of standard statistical calculations, such as Hansen used above, are only valid for what are called “stationary i.i.d. datasets”. “Stationary” means that there is no trend in the data. If there is a trend in the data, all bets are off. i.i.d is enough. i.i.d process is stationary process. Not necessarily vice versa. However, talking about 3-sigmas with 99 % confidence implies that Hansen means Gaussian i.i.d. So, he shows that it is very unlikely that global temperature is a realization of Gaussian i.i.d process. I agree with that. We can then say that if the depth of the water is less than the average depth minus three times the standard deviation (average size) of the waves, this is a “three sigma” event, one that is This is a good example. 
3-sigma with unusual refers to Gaussian distribution. But we can assume that it is normal that sometimes somebody jumps in the pool. Then the 3-sigma event is not very rare. But neither is the distribution Gaussian. Hmm, am I getting confusing again ? ;) But once we started emptying the pool, we introduced a trend into the data, and at that point, we can no longer use standard statistics. In other words, all bets are off. But now here is a change for statistical inference: after we get 5-sigmas we can drop the ‘Gaussian i.i.d’ hypothesis. One of the effects of autocorrelation is that it increases the standard deviation. Yes, in autocorrelated case, sample standard deviation from small sample will usually underestimate the process standard deviation. Shortly: Hansen shows that global temperature is not Gaussian i.i.d process. 161. Thanks, UC, for the clarification. My understanding, open to correction, is that “stationary and “iid” are different things. All iid means is that they are not autocorrelated, and that they have the same distribution (gaussian, poisson, etc.). These data points might or might not contain a trend. Adding a linear trend to gaussian data merely produces a new distributition, let me call it “trended gaussian”. As long as all of the data points are “trended gaussian” in distribution, are they not “identically distributed”? Also, you say: Yes, in autocorrelated case, sample standard deviation from small sample will usually underestimate the process standard deviation. This is true regardless of the size of the sample. “stationary and “iid” are different things All iid means is that they are not autocorrelated.. Even more, i.e. they are independent. For Gaussian random variables uncorrelation implies independency, not necessarily for other distributions. These data points might or might not contain a trend. Need to think about this.. Short samples can show a trend, but generally no trend. As long as all of the data points are “trended gaussian” in distribution, are they not “identically distributed”? And trended Gaussian is not stationary either, because the first moment (mean) changes over time. Read with caution, these are open to correction as well. 163. “identical” means the distribution doesn’t change. If there is a trend, then a key parameter of the distribution – the mean – is changing. “non-autocorrelated” and “independent” are synonymous. “stationary” typically means first order *and* second order moments (mean, variance) do not change. 164. There might be many definitions, but “non-autocorrelated” and “independent” are synonymous is this true? For independent F(x1,x2)=F(x1)F(x2), for uncorrelated E(x1x2)=E(x1)E(x2). F is the distribution function. Correlation is weaker property than independence. “stationary” typically means first order *and* second order moments (mean, variance) do not change. E(x) does not change over time, and autocorrelation can be expresses as R(delta_t), I think that is the definition of weak-sense-stationary process (?) 165. UC, if x1 is dependent on x2, and so on, then xi is, by definition, autocorrelated. Now, the ability to infer autocorrelation based on a sample autocorrelation statistic, that’s an issue. It’s nigh impossible if the series is very short. The dependency among xi needs to be somewhat persistent before the sample autocorrelation coefficient becomes significantly different from zero. The dependency may exist, but not be detectable via a (small) sample autocorrelation coefficient. 
Similarly, if the autocorrelation relationship varies from one (or some) xi to the next, then a dependency is always there, but it will not yield a significant autocorrelation coefficient, because it is not one, homogeneous dependency. This illustrates your point. But then I ask you: how would you characterize this moving dependency? You can’t. Therefore you have the same problem with the term “dependence” as you do with “autocorrelation” – whether the series is short, or whether the dependency varies. Sure, the terms can mean different things; but they are synonymous in the context you and Willis were using them. 166. Just a nitpick, I don’t think Lambert got it “100%” right, as he said, “You have given the definition for the one starting in 2006, not the one in his 1988 paper” but it was first outlined in the 98 paper. 167. Man, I love this blog. I learn more here in one day than I can even process. Thanks, guys. My original post that led to this discussion was originally intended for a less mathematically knowledgeable blog, so I tried to simplify the math, describing the standard deviation as the “average size” of the residuals, which is not strictly true, etc. UC, if x1 is dependent on x2, and so on, then xi is, by definition, autocorrelated. What is your definition for autocorrelation? I would use ‘E(x1x2)=E(x1)E(x2) means no autocorrelation’. My point is, if we criticize Hansen about faulty stats, we should be quite accurate with our terms then. (I know what Willis means and I agree with him) Let’s see if I find an example of dependent but non-autocorrelated process.. Change AR1 x(k+1)=alpha*x(k)+w(k) to x(k+1)=alpha*x(k)*w(k), would that do? 169. #168 is consistent with #165. Of course, the nature of w(k) matters. The correlation in x(k) will degrade as the variance in w(k) increases. That does not mean x(k) is not autocorrelated. It means the autocorrelation coefficient is a weak model for describing the autoregressive effect of alpha. Your caution about definitions, I imagine, stemmed from this line in #158: A scorching hot month is not usually followed by a freezing month, for example. This type of dependence on the previous data point is called “autocorrelation“. If you “agree with Willis” on this point, then why raise the issue about definitions, particularly the distinction between of “dependence” and “autocorrelation”? His statement is accurate enough for a blog and accurate enough to make his case. If he wanted to be more accurate he might have said: This type of dependence on the previous data point is [S:callled :S]what leads to “autocorrelation”. But then we’re splitting hairs here. And I just don’t think it’s necessary. Last post. #168 is consistent with #165. I don’t agree, E[x(k)x(k+1)]=0 and that means no autocorrelation (to me.) w(k) matters, that’s true, should add E[w(k)]=0, w(k) i.i.d. Your caution about definitions, I imagine, stemmed from this line in #158: My caution about definitions is in #160. You added “non-autocorrelated” and “independent” are synonymous. which I didn’t agree with. Last post. 171. Out of curiosity, have Hansen et al 2006 ever provided a source for the values in their charts? I’d like to see the number in the graph. FWIW, I think the difficulty with lining everything up in 1958 is that the initial conditions (IC) for the runs were probably midnight, Dec. 31, 1957. Hansen et al. doesn’t say this, but one must provide initial conditions to a run, and setting the IC to match that particular time is the only thing that makes any real sense. 
The Annual Average temperatures in 1958 did rise, and however the initialized the model didn’t. It does make complete sense to put HADCRUT and GISS on the same basis time basis, so what willis does makes sense there. You need to normalize everything to the same year. I’m actually not sure quite what is correct to do about matching or not matching start points. I could be wrong, but it seems to me there are challenges revolving around with setting initial conditions. You can’t set them for a full average year, you must set them for a precise time. How well can any modeler know everything in Dec. 31, 1957? Whatever choices are made have some effects on climate. Some choices — for example individual storms– may have short term effects on predicted climate; others’ long term effects on predicted climate. (Anomolously high or low amounts of stored heat in the oceans could have a quite long term effect.) The sensitivity to these initial conditions is not discussed in the 1988 papers, I don’t run these models, so I don’t know. But… anyway, did anyone ever find the data for the Hansen graph on line? I’m hankerin’ for the unshifted stuff, and I’d like 2006 and 2007! 172. I’d say in the case of “If August was 90 F, September won’t be -30 F” has little or nothing to do with statistics, and it’s certainly not iid. It’s a pattern of nature. (Or so I say!) If you use statistics to infer something about an unknown aspect of some sample, you can use the z-test to see if the difference between that sample mean and the population mean is large enough to be significant. In order to satisfy the central limit theorem (enough observations of variables with a fininte variance will be normally distributed (Gaussian or bell-curve), the observations are considered beforehand to be i.i.d. by default. A collection of random variables is i.i.d. (independent and identically distributed) if each has the same probability distribution and they are all independent of each other. If observations in a sample are assumed to be iid for statistical inference, it simplifies the underlying math, but may not be realistic from a practical standpoint. Examples of iid: Spinning a roulette wheel Rolling a die Flipping a coin Ceteris paribus of course. (A statement about a causal connection between two variables should rule out the other factors which could offset or replace the relationship between the antecedent (first half of the hypothetical proposition, in this case throwing a die) and the consequent (the second half, in this case that the die will land without influence that would make the throw be not iid in the sample, such as weighting one side of it before throwing it) 173. FWIW, here is my crude (sorry, I had to use Paint) update of the Hansen plot. For the sake of argument, I have done it on the “apples to apples” basis supported by Peter Hearnden in #10. I have estimated the GISS anomaly for 2007 to be +0.74, based on the J-N data and the fact that it looks like Dec will about the same or cooler than Nov. 174. It would appear that Hansen’s 1988 climate models are beginning to diverge from the actual temperature observations The latest GISS readings are shown in the diagram below: [wp_caption id="" align="alignnone" width="450" caption="Scenarios A, B and C Compared with Measured GISS Surface Station and Land-Ocean Temperature Data"] [/wp_caption] The original diagram can be found in Fig 2 of Hansen (2006) and the latest temperature data can be obtained from GISS. 
The red line in the diagram denotes the Surface Station data and the black line the Land-Ocean data. My estimate for 2008 is based on the first six months of the year. Scenarios A and C are upper and lower bounds. Scenario A is “on the high side of reality” with an exponential increase in emissions. Scenario C has “a drastic curtailment of emissions”, with no increase in emissions after 2000. Scenario B is described as “most plausible” and closest to reality. Hansen (2006) states that the best temperature data for comparison with climate models is probably somewhere between the Surface Station data and the Land-Ocean data. A good agreement between Hansen’s premise and measured data is evident for the period from 1988 to circa 2005; especially if the 1998 El Nino is ignored and the hypothetical volcanic eruption in 1995, assumed in Scenarios B and C, were moved to 1991 when the actual Mount Pinatubo eruption occurred. However, the post-2005 temprature trend is below the zero-emissions Scenario C and it is apparent that a drastic increase in global temperature would be required in 2009 and 2010 for there to be a return to the “Most-Plausible” Scenario B. Will global warming resume in 2009-2010, as predicted by the CO2 forcing paradigm, or will there be a stabilsation of temperatures and/or global cooling, as predicted by the solar-cycle/ cosmic-ray fraternity? Watch this space! P.S: It would be very interesting to run an “Actual Emissions” Scenario on the Hansen model to compare it with actual measurements. The only comments that I can glean from a literature survey is that Scenario B is closest to reality, but it would appear that CO2 measurements are above this scenario, but unexpectedly, methane emissions are significantly below. Does anyone have the source code and/or input data to enable this run? One Trackback 1. [...] In 1988, Hansen made a famous presentation to Congress, including predictions from then current Hansen et al (JGR 1988) online here . This presentation has provoked a small industry of commentary. Lucia has recently re-visited the topic in an interesting post ; Willis discussed it in 2006 on CA here . [...] Post a Comment
{"url":"http://climateaudit.org/2006/08/26/willis-e-on-hansen-and-model-reliability/?like=1&source=post_flair&_wpnonce=1fd78fe65c","timestamp":"2014-04-16T05:08:05Z","content_type":null,"content_length":"437744","record_id":"<urn:uuid:43b09c62-dcfe-4767-af31-635a344fd68e>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00101-ip-10-147-4-33.ec2.internal.warc.gz"}
Knowing .NET Due to popular demand… Here is source code for a preliminary Xamarin.iOS binding for Google’s ChromeCast Here is C# source code for a simple iOS app that casts a video URL In order for this to work, you’ll need: This is just source code, not a step-by-step walkthrough. Everything associated with this is in beta and I don’t want to invest a lot of time making things just so at this point. You can read an overview of the programming model here. As I blogged about last weekend, I got a ChromeCast and had a simple-enough time creating an iOS-binding library for Xamarin.iOS, allowing me to program the ChromeCast in C# (or F#, maybe next This weekend, I wrote a simple Home Media Server that allows me to stream… well, all my ChromeCast-compatible media, primarily mp4s. Here’s how I did it… ChromeCast Programming: Intro Essentially the ChromeCast is nothing but a Chrome browser on your TV. If you want to display HTML, no problem, but what you probably want to display is a great big video div: <video id="vid" style="position:absolute;top:100;left:0;height:80%;width:100%"> But where does this HTML come from? Here’s the first kind-of-bummer about ChromeCast: Every ChromeCast application is associated with a GUID that Google provides you. Google maintains a map of GUID-> URLs. And, since you have to send them your ChromeCast serial to get a GUID, it’s a safe bet they check the hardware, too. When you start an application with: session.StartSessionWithApplication ("93d43262-ffff-ffff-ffff-fff9f0766cc1"), the ChromeCast always loads the associated URL (in my case, “http://10.0.1.35/XamCast”): So, as a prerequisite, you need: • A ChromeCast that’s been “whitelisted” for development by Google; • A Google-supplied GUID that maps to a URL on your home network (a URL you decided during the “whitelist” application to Google) • A WebServer at that URL It’s important to realize that what’s at that URL is not your media, but your “receiver app”: which might be plain HTML but which is likely to be HTML with some JavaScript using the ChromeCast Receiver API that allows you to manipulate things like volume and playback position, etc. I basically just use this file from Google’s demo, with minor tweaks. Home Media Server : Intro So if you want to stream your home media, you need a WebServer configured to serve your media. This doesn’t have to be the same as your App Server (it probably will be, but conceptually it doesn’t have to be): The structure is straightforward: 1. The mobile controller gets a list of media from the Media Server 2. The application user selects a piece of media 3. The controller sends the selected URL (and other data) to the ChromeCast 4. The ChromeCast loads the media-URL from the Media Server For me, the “App Server” and “Media Server” are the same thing: an Apache instance running on my desktop Mac. ChromeCast Media-Serving : Components and Life-Cycle This is a rough sequence diagram showing the steps in getting a piece of media playing on the ChromeCast using the Xamarin.iOS binding: 1. Initialization 1. Create a GCKContext; 2. Create a GCKDeviceManager, passing the GCKContext; 3. Create a GCKDeviceManagerListener; hand it to the GCKDeviceManager; 4. Call GCKDeviceManager.StartScan 2. Configuring a session 1. When GCKDeviceManagerListener.CameOnline is called… 2. Create a GCKApplicationSession; 3. Create a GCKSessionDelegate, passing the GCKApplicationSession 3. Playing media 1. After GCKSessionDelegate.ApplicationSessionDidStart is called… 2. 
Create a GCKMediaProtocolMessageStream; 3. Get the Channel property of the GCKApplicationSession (type GCKApplicationChannel); 4. Attach the GCKMediaProtocolMessageStream to the GCKApplicationChannel 5. Create a GCKContentMetadata with the selected media’s URL 6. Call GCKMediaProtocolMessageStream.LoadMediaWithContentId, passing in the GCKContentMetadata Here’s the core code: public override void ApplicationSessionDidStart() var channel = session.Channel; if(channel == null) Console.WriteLine("Channel is null"); Console.WriteLine("We have a channel"); mpms = new GCKMediaProtocolMessageStream(); Console.WriteLine("Initiated ramp"); private void LoadMedia() Console.WriteLine("Loading media..."); var mediaUrl = Media.Url; var mediaContentId = mediaUrl.ToString(); var dict = new NSDictionary(); var mData = new GCKContentMetadata(Media.Title, Media.ThumbnailUrl, dict); var cmd = mpms.LoadMediaWithContentID(mediaContentId, mData, true); Console.WriteLine("Command executed? " + cmd); The core of a real home media server for the ChromeCast is the Web Server and the UI of the mobile application that browses it and chooses media. To turn this hack into a turnkey solution, you’d need • Run a public Chromecast application server that □ Deferred the URL of the media server to the client • Write the media server, with all the necessary admin • Write a nice client app, that stored the mapping between the public ChromeCast app server and the (strictly-local) media server • Make a great user interface for selecting media • Make a great user interface for controlling the media I have no plans on doing any of that stuff. What I plan on doing once ChromeCast and iOS 7 are out of beta is: • Make a nicer binding of the ChromeCast API and put it up for free on the Xamarin Component Store; and • Play around with serving media and blogging about anything interesting that comes up The real thing that I wanted to do was see if Xamarin.iOS worked well with ChromeCast (resounding “Yes!”) and come up with a hack for my own use. Achievement Unlocked. ChromeCast Notes I guess I got under the wire with the Netflix deal, so the net cost of the thing was $11. Even at $35, it’s a no-brainer for a developer to pick up and see if they can target. Very good OOBE: plug it in to HDMI port, power via USB, and… yeah, that works. Setup via iOS didn’t work for me (hung), so I set it up via Chrome on laptop: fine. Add extension to Chrome, can “cast” any single tab. Works great with Comedians in Cars Getting Coffee. Integrated is better, though: very easy to watch Netflix and cue up next issue of “Breaking Bad, Season 5″ (they’ve just released, dontcha’ know). YouTube app was a little confusing. Local files cast from Chrome Mixed bag. Worked well with raw GoPro MP4s, but not my QuickTime output Photos cast perfectly, but obviously would benefit from a native app. The one that jumps out is, of course, “DLNA -> Cast.” This would presumably require setting up an auto-transcode to supported formats. Would be best with an XPlat mobile controller: use iOS, Android, or Computer to select files on DLNA server. ? Is there a barebones DLNA library / app that could be hacked? “It’s not a slide projector, it’s a time machine…” Photo browser. Video logger: Watch raw footage on TV, hit “in/out”, make notes, triage. Imperfect information turn-based games (e.g., card games, Eurogames): TV is public, devices are private. Better than “pass-and-play” for, e.g., “Ticket to Ride”. Poker. 
Party photos: QR code on screen specifies photos taken in next N hours with device are shown / shared with others with same guid. (How to make work with different photosite / storage options?) Beta SDK available and simple apps at Github. I downloaded the iOS SDK and used Objective Sharpie to create Xamarion.iOS C# bindings. Very straightforward; tool did 95% of work. Needed to massage some stuff (some things improperly changed to fields, needed to change FieldAttribute. “Hello world” Sender app easy-peasy lemon-squeezie: var gckContext = new GCKContext("net.knowing.xamcast"); var deviceManager = new GCKDeviceManager(gckContext); var dmListener = new DeviceManagerListener(); dmListener.CameOnline += (s,e) => CreateSession(gckContext, e.Device); BUT… No generic media-receiver app? Can’t just write Sender app and send “GET endpoint to supported format”? That means all dev requires going through “whitelisting” phase, which takes at least 48 hours. Just figured this out this AM, so guess limited dev this weekend. It’s a beta SDK, so I’m not going to invest much effort in “C#”-ifying the bindings yet. Eventually, I’d like to make it available as a free component on the Xamarin Component Store, but initially I’ll probably just put it up on Github. I’ve already put up the silly Hello XamCast!. Scala has several nice language features, including the elegant use of val for immutable variables and var for mutable, but the feature that I miss the most on a day-to-day basis is “traits.” Traits allow you to implement one or more methods of an interface. The canonical use is to “mix-in” behavior while avoiding the “diamond-problem.” DCI has the idea that Objects (domain-meaningful entities that correspond to user conceptions) adopt Roles, which are context-specific. Roles interact to produce value. So, for instance, when you’re transferring money at an ATM, you’re dealing with two accounts that are the same type of Object (Account), but which are in two different roles in the context of “Transfer Money”: a TransferSource and a TransferSink. And an Account in a TransferSource role has different behavior than an Account in a TransferSink role (e.g., TransferSource expects to withdraw(Money amount) while TransferSink expects to credit(Money amount)). In C#, the way to specify that a class has a certain set of behaviors is to specify those behaviors in an interface and specify that the class implements them: public class Account: TransferSource, TransferSink And then, of course, you would implement the various methods of TransferSource and TransferSink within Account. But the very essence of DCI is the premise that classic OOP type-systems don’t appropriately capture the relationships between Objects-in-Roles, even though “Objects-in-Roles working with each other” is the domain-users mental model (“I pick a source account, and a destination account, and specify an amount, and the amount is debited from the source and credited to the destination”). So DCI says that the TransferTo method that corresponds to the use-case should be elevated to a first-class object. But in C# you cannot partially implement an interface. But you can create and implement an extension method on an interface! 
public static class TransferContextTrait public static void TransferTo(this TransferSource self, TransferSink sink, Decimal amount) if(self.Funds < amount) self.FailTransfer(new TransferFailedReason("Insufficient Funds")); var details = new TransferDetails(self.Name, sink.Name, amount); catch(Exception x) self.FailTransfer(new TransferFailedReason(x.ToString())); Note an interesting restriction, though: You cannot trigger an event from within an extension method! So in this case, although I would have preferred to propagate the results of the calculation by self.TransferAccomplished(this, details) I have to use a proxy function in Account: public void AccomplishTransfer(TransferDetails details) TransferAccomplished(this, new TArgs&lt;TransferDetails>(details)); public event EventHandler&lt;TArgs &lt;TransferDetails>> TransferAccomplished = delegate {}; I’ll be talking more about DCI and other cross-platform architectural techniques at MonkeySpace in Chicago next week. Hope to see you there! One of the emerging themes at this conference is the need to move “examples” (and their older siblings, scenarios and use-cases) “into the code,” so that examples/stories/scenarios/use-cases, which are tremendously meaningful to the subject-matter experts, are actually traceable directly into the code, which is tremendously meaningful to, you know, the machine. I very much enjoyed a talk on “Use-case Representation in Programming Languages,” which described a system called UseCasePy that added a @usecase annotation to Python methods. So you would have: def drawLine(ptA, ptB) … etc … Now, even if you go no further, you’re doing better than something in a documentation comment, since you can easily write a tool that iterates over all source-code, queries the metadata and builds a database of what classes and methods participate in every use-case: very useful. Even better, if you have a runtime with a decent interception hook, you can run the program in a particular use-case (perhaps from your BDD test suite, perhaps from an interactive tool, acquire the set of methods involved, and determine, by exercising a large suite of use-cases, metrics that relate the codes “popularity” to user-meaningful use-cases, which could be very helpful in, for instance, prioritizing bug-fixes. Oh, by the way, apparently we no longer call them “users” or even “domain experts,” they are now “Subject Matter Experts” or even SMEs (“Smees”). I think when people saw that Dart was from Gilad Bracha and Lars Bak there was an expectation that Dart was going to be a grand synthesis: a blazingly-fast NewSpeak-with-curly-brackets. It’s very much not such a language. It doesn’t seem, academically, vastly innovative because it doesn’t add much. But, in truth, optional types are a radical design decision in that they take away runtime aspects that a lot of mainstream programmers expect. (Of course, this raises the question of how to define the “mainstream”…) Pros and Cons of Mandatory Typing In Descending Order of Importance (per Gilad Bracha): • machine-checkable documentation • types provide conceptual framework • early error detection • performance advantages • expressiveness curtailed • imposes workflow • brittleness Having said that, I attended a lecture in which someone, perhaps from Adobe, measured the performance impact of optional typing. 
Their conclusion, although admittedly done on the troublesome-ly small and artificial SunSpider benchmarks, was that the performance penalty of implicit types amounts to 40% (with a very large standard of deviation). That “feels” about right to me — definitely significant but not the overwhelming performance benefit you might get from either parallelization or an algorithmic change. Gilad Bracha started the day’s Dynamic Languages Symposium with an invited talk on Dart, a new Web programming language (read: JavaScript replacement) in which “Sophisticated Web Applications need not be a tour de force.” OOPSLA is attended by academics, who are typically less interested in the surface appearance of a program (they’ve seen just about variation) and more interested in semantic questions whose impact in the real-world might not be felt for many years. So Bracha begins his talk by disavowing the “interesting-ness” of Dart: it’s a language whose constraints are entirely mundane: • Instantly familiar to mainstream prorgammer • Efficiently compile to JavaScript (Personally, I take it as a damnation of the audience that “Of interest to 90% of the programming world” is not of importance, but the gracious interpretation is that these are the trail-blazers who are already deep in hostile territory.) The gist of Bracha’s talk was on Dart’s “optional types” semantics. The great takeaway from this, I think, is that: “Dart’s optional types are best thought of as a type assertion mechanism, not a static type system” which allows for code that can make your blood run cold; what certainly looks like a statement of programmer intention (“this variable is of type Foo”) can be blithely trod over at runtime (“in fact, this variable is of type Bar”) without so much as a by-your-leave. The type expression is only evaluated at compilation time and, if the developer puts the compiler in “development” mode, you get warnings and errors. But once out of development mode, there are no runtime semantics of the type expressions. They have no behavior, but on the other hand, they have no cost. And, argues Bracha, this seemingly extreme position is important to support a language that remains truly dynamic and does not “put you in a box” wherein the type system becomes a restriction on expressiveness. One of the seemingly-obscure corners of language design are the semantics of generics (the building blocks of collection classes). Generics in Dart are reified and covariant, which to an academic means “the type system is unsound.” Bracha acknowledges this and says that he’s “given up” on fighting this battle. Another interesting design element of Dart is its recognition that the “classic” object-oriented constructor is a failed abstraction that only allows for “I want to allocate a new instance…” instead of common scenarios such as “I want to get an object from cache,” “I want an object from a pool of a specific size (often 1),” etc. So you can declare something that looks an awfully lot like a classical constructor, but in fact is “allowed” to return whatever the heck it wants. (I put “allowed” in quotes because, remember, all this type stuff is just epiphenomenal <– mandatory big word every paragraph!) The lack of mandatory types preclude the creation of type classes or C#-style extension methods. Those are grating, but really of concern to me is that their lack also precludes type-based initialization. 
This leads to the disturbing design that variables will have the value null until they are assigned to; a “disturbing design” that is standard in the mainstream but hated by all. …off to lunch, more later… I am in Portland for OOPSLA / SPLASH, a conference that is my sentimental favorite. I think my first OOPSLA was in New Orleans circa 1990 and OOPSLA Vancouver 92 is filled with memories (mostly because Tina came and we dove Orcas Island in wetsuits). OOPSLA is traditionally the big academic conference for programming language theory and implementation. When I was a magazine editor and track chair for the Software Development Conferences, OOPSLA is where I trolled for new fish — concepts and writers that were ready for trade exposure. That’s no longer my business, and I wonder if I’ll get the same thrill from attending that I used to. The program looks promising and I’ve just spent a few hours going over the papers in the proceedings DVD (no more phonebook-sized proceedings to bow the bookshelves, but I’m sure I can still steal some article ideas…). I’m happy by the late addition of talks by Gilad Bracha and Lars Bak on Dart, the new programming language from Google. I’m unabashedly a fan of Bracha’s NewSpeak and the one time I heard Bak talk, I said he was “dynamite….Concrete, informed, impressive….” so I’m favorably disposed to like their language, even if it does have null (and not just have it, but In Dart, all uninitialized variables have the value null, regardless of type. Numeric variables in particular are therefore best explicitly initialized; such variables will not be initialized to 0 by default. Which strikes me as flat-out crazy, reiterating Tony Hoare’s “Billion-Dollar Mistake.” Early reaction to Dart has been pretty harsh, it will be interesting to discuss it in-person (where the tone will be 1000x more reasonable and respectful than on the Internet). Prime numbers are not my thing, but generating them is a common task in the early Project Euler problems. The one algorithm I know for generating primes is the Sieve of Eratosthenes, which I defined in Scala as: def successor(n : Double) : Stream[Double] = Stream.cons(n, successor(n + 1)) def sieve(nums : Stream[Double]) : Stream[Double] = Stream.cons(nums.head, sieve ((nums tail) filter (x => x % nums.head != 0)) ) val prime_stream = sieve(successor(2)) The first function is the only function that I’ve ever written that I’m sure is properly “functional.” It’s stuck in my head from circa 1982 LISP. It uses Scala’s Stream class, which is like a List but is “lazily evaluated,” in other words, it only calculates the next value in the List when it’s needed (the Stream pattern is to create a List whose head is the next value and whose tail is a recursive call that, when executed will produce the next value). The 2nd function sieve is my take on the Sieve of Eratosthenes. It too returns a Stream of primes. (By the way, the reason I use Double rather than an Int or Long is that one of the early Project Euler problems involves a prime larger than LONG_MAX.) In case you’re not familiar with the algorithm, the Sieve is conceptually simple. Begin with a list containing all positive integers starting at 2 (the first prime) [2, 3, 4, ...] . Remove from the list every multiple of your current prime. The first number remaining is the next prime. For instance, after removing [2, 4, 6, ... ], the first number remaining is 3. Prime! So remove [3, 6 (already removed), 9, ... ]. Since 4 was removed as a multiple of 2, the next available is 5. 
Prime! Remove [5, 10 (already removed), 15 (already removed), ...] … The 7th Project Euler problem is “What is the 10001st prime number?” Unfortunately, scala> prime_stream take 10001 print 2.0, 3.0, 5.0, 7.0, ...SNIP ... 29059.0, 29063.0, java.lang.OutOfMemoryError: Java heap space at scala.Stream$cons$.apply(Stream.scala:62) at scala.Stream.filter(Stream.scala:381) at scala.Stream$$anonfun$filter$1.apply(Stream.scala:381) at scala.Stream$$anonfun$filter$1.apply(Stream.scala:381) at scala.Stream$cons$$anon$2.tail(Stream.scala:69) at scala.Stream$$anonfun$filter$1.apply(Stream.scala:381) at scala.Stream$$anonfun$... That will never do. Obviously, I could run Scala with more heap space, but that would only be a bandage. Since a quick Google search shows that the 1000th prime number is 104,729 and I’m running out of heap space near 30K, it seems that “messing around with primes near the 10Kth mark” requires some memory optimization. Converting the Sieve If I really wanted to work with very large sequences of primes, I should certainly move away from the Sieve of Eratosthenes. But I’m not really interested in prime number algorithms, I’m interested in the characteristics of the Scala programming language, so I’m going to intentionally ignore better algorithms. My first thought was “OK, I’ll allocate a chunk of memory and every time I find a prime, I’ll set every justFoundPrime-th bit to 1.” But that would depend upon my allocated memory being sufficient to hold the nth prime. With my Google-powered knowledge that the 10001st prime is only 100K or so, that would be easy enough, but (a) it seemed like cheating and (b) it would require a magic number in my code. My next thought was “OK, when I run out of space, I’ll dynamically allocate double the space– no, wait, I only need justFoundPrime-(2 * justFoundPrime) space, since I’ve already checked the numbers up to justFoundPrime.” My next thought was “And really I only need half that space, since I know 2 is a prime and I can just count by 2…And, y’know, I know 3 is prime too, so I can check–” At which point, I engaged in a mental battle over what was appropriate algorithmic behavior. On the one hand, I didn’t want to change algorithms: if I moved from the Sieve to a slightly better algorithm, then wasn’t it Shameful not to move to at least a Quite Good algorithm? On the other hand, the instant I opened the door to allocating new memory, I committed to keeping around the list of already-discovered primes, since I would have to apply that list to my newly-allocated memory. But if you have a list of numbers, checking if your candidate number is a multiple of any of them can be done without consuming any additional memory. But is it the same algorithm? Isn’t the Sieve fundamentally about marking spots in a big array? Finally, I decided that checking a candidate number against the list of already-discovered primes was the Sieve algorithm, just with a smallest possible amount of memory allocation — one number. (By the way, did you read the article in which scientists say that rational thought is just a tool for winning arguments to which you’re already emotionally committed?) Here then, is what I wrote: def multiple_of = (base : Long, target : Long) => target % base == 0; val twoFilter = multiple_of(2, _ : Long) val threeFilter = multiple_of(3, _ : Long) The first function multiple_of is a function literal (?) that returns true if the target is a multiple of the base. 
The next two lines, where I define twoFilter and threeFilter are an example of the functional idiom of partial function application (I think — “currying” is the use of partial function application to accomplish a goal, right?). This is an undeniably cool feature of functional languages. Without any fuss, these lines create new functions that require one less argument to have their needed context. Once you have a twoFilter, you don’t need to keep the value “2″ around to pass in. Which might not seem like a big win, since a function named twoFilter or threeFilter is no more compact than calling multiple_of(2, x) or multiple_of(3,x). But… def filter = (x : Long) => multiple_of(x, _ : Long); val fs = List(filter(2), filter(3)) for(f <- fs){ println("Eight is a multiple of this filter: " + f(8)) OK, now that’s nice and compact! Now we have a new tk function literal? tk called filter and rather than have a bunch of variables called twoFilter and threeFilter and fiveFilter, we just have a List of filters. With such a list in hand, it’s easy to figure out which numbers in a list are relatively prime: def relatively_prime(fs : List[(Long)=>Boolean], target : Long) : Boolean = { for(f <- fs){ return false; return true; println("4 is prime? " + relatively_prime(fs, 4)) println("5 is prime? " + relatively_prime(fs, 5)) val list = List[Long](2, 3, 4, 5, 6, 7, 8, 9, 10, 11) println(list.map(relatively_prime(fs, _))) Which leads to a simple recursive function to find the next prime: def next_prime(fs : List[(Long)=>Boolean], x : Long) : Long = { if (relatively_prime(fs, x)) { return x return next_prime(fs, x + 1) println(next_prime(fs, 4)) println(next_prime(fs, 8)) Which leads to our solution: def primes(fs : List[(Long)=>Boolean], ps: List[Long], x : Long, nth : Long) : List[Long] = { if(ps.size == nth){ return ps; val np = next_prime(fs, x) val sieve = fs ::: List(filter(np)); primes(sieve, ps ::: List(np), np + 1, nth); println("Missing 3 because its in fs" + primes(fs, List[Long](2L), 2, 8)) println((primes(List(filter(2)), List(2L), 2, 8) reverse) head) def nth_prime(nth : Long) : Long = { ) reverse ) head println("The 10001st prime is " + nth_prime(10001)) # ChromeCast Home Media Server: Xamarin.iOS FTW! ChromeCast Home Media Server with Xamarin Programming the ChromeCast with Xamarin Using Extension Methods on a C# Interface to Enable DCI in Xamarin OOPSLA Day 2: Explicit Use-Case Representation in Programming Languages OOPSLA Day 2: More on Dart OOPSLA Day 2: Gilad Bracha on Dart OOPSLA Day 0 Getting To Know Scala: Project Euler Primes
{"url":"http://www.knowing.net/index.php/category/languages/page/2/","timestamp":"2014-04-18T10:35:51Z","content_type":null,"content_length":"74472","record_id":"<urn:uuid:3735c77b-21e4-4ffd-a7a8-e666259501e5>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00316-ip-10-147-4-33.ec2.internal.warc.gz"}
Alpha Spectral Analysis One of the questions of interest is the optimal sampling frequency to use for extracting the alpha signal from an alpha generation function. We can use Fourier transforms to help identify the cyclical behavior of the strategy alpha and hence determine the best time-frames for sampling and trading. Typically, these spectral analysis techniques will highlight several different cycle lengths where the alpha signal is strongest. The spectral density of the combined alpha signals across twelve pairs of stocks is shown in Fig. 1 below. It is clear that the strongest signals occur in the shorter frequencies with cycles of up to several hundred seconds. Focusing on the density within this time frame, we can identify in Fig. 2 several frequency cycles where the alpha signal appears strongest. These are around 50, 80, 160, 190, and 230 seconds. The cycle with the strongest signal appears to be around 228 secs, as illustrated in Fig. 3. The signals at cycles of 54 & 80 (Fig. 4), and 158 & 185/195 (Fig. 5) secs appear to be of approximately equal strength. There is some variation in the individual pattern for of the power spectra for each pair, but the findings are broadly comparable, and indicate that strategies should be designed for sampling frequencies at around these time intervals. Fig. 1 Alpha Power Spectrum If we look at the correlation surface of the power spectra of the twelve pairs some clear patterns emerge (see Fig 6): Focusing on the off-diagonal elements, it is clear that the power spectrum of each pair is perfectly correlated with the power spectrum of its conjugate. So, for instance the power spectrum of the Stock1-Stock3 pair is exactly correlated with the spectrum for its converse, Stock3-Stock1. But it is also clear that there are many other significant correlations between non-conjugate pairs. For example, the correlation between the power spectra for Stock1-Stock2 vs Stock2-Stock3 is 0.72, while the correlation of the power spectra of Stock1-Stock2 and Stock2-Stock4 is 0.69. We can further analyze the alpha power spectrum using PCA to expose the underlying factor structure. As shown in Fig. 7, the first two principal components account for around 87% of the variance in the alpha power spectrum, and the first four components account for over 98% of the total variation. Fig. 7 Stock3 dominates PC-1 with loadings of 0.52 for Stock3-Stock4, 0.64 for Stock3-Stock2, 0.29 for Stock1-Stock3 and 0.26 for Stock4-Stock3. Stock3 is also highly influential in PC-2 with loadings of -0.64 for Stock3-Stock4 and 0.67 for Stock3-Stock2 and again in PC-3 with a loading of -0.60 for Stock3-Stock1. Stock4 plays a major role in the makeup of PC-3, with the highest loading of 0.74 for Fig. 8 PCA Analysis of Power Spectra Comments are closed on this post, sorry!
{"url":"http://jonathankinlay.com/index.php/2011/05/alpha-spectral-analysis/","timestamp":"2014-04-19T22:05:28Z","content_type":null,"content_length":"43531","record_id":"<urn:uuid:150f3f65-7e5e-4ef4-973e-2eb289fc6589>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00051-ip-10-147-4-33.ec2.internal.warc.gz"}
pre-calc stress September 14th 2008, 06:54 PM #1 Sep 2008 pre-calc stress I've been staring at the following problems for at least half an hour. Would anyone help me? Do you know how to use the quad formula? $\frac{-b_-^+ \sqrt{b^2-4ac}}{2a}$ I've tried, but I keep getting answers I'm sure are way off from being right. What kind of answers do you keep getting? for the first one i got .396 and 8.48 I'm going to do one for you for an example $\frac{-12_-^+ \sqrt{144-4(-9)(-4)}}{-18}$ $\frac{-12_+^- 0}{-18}$ Thank you so much. I think I have an idea of how to do the rest. Ok, I got stuck on x^3+2x^2-4x-8=0 For 2x^2+8x=0, both the terms have x in them so this can be factorised out: I was going to do the rest of them by finding a factor, but they all appear to be decimals or surds apart from the last one. I'll show you the method so you can use it on other questions: Since f(2)=0, f(x) has a factor x-2 ie. when x=2 f(x)=0. This can then be factorised using this (see icemanfan's post). A graphical method can help: If in doubt, draw the graph and see if that helps. September 14th 2008, 06:56 PM #2 September 14th 2008, 06:59 PM #3 Sep 2008 September 14th 2008, 07:04 PM #4 Junior Member Sep 2008 September 14th 2008, 07:12 PM #5 Sep 2008 September 14th 2008, 07:14 PM #6 September 14th 2008, 07:30 PM #7 Sep 2008 September 14th 2008, 07:46 PM #8 Sep 2008 September 15th 2008, 12:58 PM #9 September 15th 2008, 03:58 PM #10 MHF Contributor Apr 2008 September 16th 2008, 04:37 AM #11
{"url":"http://mathhelpforum.com/algebra/49093-pre-calc-stress.html","timestamp":"2014-04-18T09:40:41Z","content_type":null,"content_length":"57419","record_id":"<urn:uuid:26ad3a44-9f7c-4458-b268-c7ff4788fc6f>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00427-ip-10-147-4-33.ec2.internal.warc.gz"}
The NRICH Project aims to enrich the mathematical experiences of all learners. To support this aim, members of the NRICH team work in a wide range of capacities, including providing professional development for teachers wishing to embed rich mathematical tasks into everyday classroom practice. More information on many of our other activities can be found here.
{"url":"http://nrich.maths.org/public/leg.php?group_id=44&code=208","timestamp":"2014-04-16T07:14:28Z","content_type":null,"content_length":"42125","record_id":"<urn:uuid:f9a97ab9-59ff-479c-9598-e663a91b3b8b>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00352-ip-10-147-4-33.ec2.internal.warc.gz"}
Magnolia, TX Math Tutor Find a Magnolia, TX Math Tutor ...My first year teaching, my classrooms TAKS scores increased by 40%. This last year I had a 97% pass rate on the Geometry EOC and my students still contact me for math help while in college. I know I can help you.I currently teach Algebra 1 on a team that was hand selected because of our success... 8 Subjects: including algebra 1, algebra 2, biology, geometry Howdy, I'm J.C., and I'd love to be your tutor! I'm a recent graduate of Texas A&M and received my degree in industrial engineering. I'm an experienced tutor and will effectively teach all subjects in a way that is easily understood. 17 Subjects: including geometry, elementary math, reading, ACT Math ...My education, which includes 33 credit hours of graduate study, has more than prepared me for the material with which elementary students may need assistance. Also, I have successfully passed your qualification requirements for elementary math, elementary science, general reading, and writing. ... 73 Subjects: including calculus, chemistry, grammar, business ...Prealgebra tutoring for high school. Have also taught Math 0306, 0308 and 0310 at college level for past 10 years. Have tutored math for about 14 years. 20 Subjects: including SAT math, precalculus, ACT Math, algebra 1 ...I pride myself in differentiating my lessons to meet the individual needs of each student. I use manipulatives, supplemental resources, and other tools necessary for learning. I am a certified EC-12 Special Education teacher in the state of Texas. 14 Subjects: including prealgebra, reading, ADD/ADHD, Microsoft Word Related Magnolia, TX Tutors Magnolia, TX Accounting Tutors Magnolia, TX ACT Tutors Magnolia, TX Algebra Tutors Magnolia, TX Algebra 2 Tutors Magnolia, TX Calculus Tutors Magnolia, TX Geometry Tutors Magnolia, TX Math Tutors Magnolia, TX Prealgebra Tutors Magnolia, TX Precalculus Tutors Magnolia, TX SAT Tutors Magnolia, TX SAT Math Tutors Magnolia, TX Science Tutors Magnolia, TX Statistics Tutors Magnolia, TX Trigonometry Tutors Nearby Cities With Math Tutor Cut And Shoot, TX Math Tutors Cypress, TX Math Tutors Dobbin Math Tutors Hempstead, TX Math Tutors Hockley Math Tutors Hockley Mine, TX Math Tutors Hufsmith Math Tutors Oak Ridge N, TX Math Tutors Oak Ridge North, TX Math Tutors Pinehurst, TX Math Tutors Plantersville, TX Math Tutors Stagecoach, TX Math Tutors Todd Mission, TX Math Tutors Tomball Math Tutors Willis, TX Math Tutors
{"url":"http://www.purplemath.com/magnolia_tx_math_tutors.php","timestamp":"2014-04-16T16:23:50Z","content_type":null,"content_length":"23601","record_id":"<urn:uuid:54ea8c18-79e1-4a31-830e-039b74d7fe0f>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00450-ip-10-147-4-33.ec2.internal.warc.gz"}
Elementary proof of the Hurwitz formula

I am aware of two forms of the Hurwitz formula. The first is more common, and deals only with the degrees. So if $f:X \rightarrow Y$ is a non-constant map of degree $n$ between two projective non-singular curves, with genera $g_X$ and $g_Y$, then
$$ 2(g_X-1) = 2n(g_Y-1) + \deg(R), $$
where $R$ is the ramification divisor of $f$. The proof of this was given to me as an exercise when I started my PhD, and I am very happy with it.

However, in some other work that I was doing it appeared that one could strengthen this to say that if $K_X={\rm div}(f^*(dx))$ and $K_Y={\rm div}(dx)$ are canonical divisors of $X$ and $Y$, then
$$ K_X = f^*K_Y + R. $$
For ease of reference I will call this the "strong" Hurwitz formula. I have found this alluded to in a number of places, and even stated in Algebraic Curves Over Finite Fields by Carlos Moreno. However, this was without proof, and every idea of a proof that I have seen is in sheaf-theoretic language. I am slowly getting through sheaves and schemes, but I am currently trying to prove this in an elementary manner (fiddling around with orders of $dx$ etc.) in the wildly ramified case (the tamely ramified case is fine). This has led me to the following questions:

1. Is it possible to prove this "strong" Hurwitz formula without using sheaves etc. in the wildly ramified case?
2. If so, are there any references that would help with this?
3. Is there a different name for this "strong" Hurwitz formula?

I imagine/assume that an elementary proof would rely on computing the order of $dx$ at a point, but the best I am able to get from this is a lower bound on the order, not the precise value.

Previously asked at Stack Exchange (http://math.stackexchange.com/questions/174168/elementary-proof-of-the-hurwitz-formula/174189) but with no joy.

ag.algebraic-geometry algebraic-curves reference-request ramification

You'll find a proof of what you call the strong Hurwitz formula, not using sheaves, in full generality, e.g. in "Introduction to the Theory of Algebraic Functions of One Variable" by C. Chevalley. If you want a more recent reference, look up Stichtenoth's book "Algebraic Function Fields and Codes". – Felipe Voloch Jul 30 '12 at 11:42

Thank you for the references, I was unaware of the first one and will check that. I did look in Stichtenoth's book, but I really can't find it in there for the wildly ramified case. It may be because it is an old edition though. Thank you. – Tait Jul 30 '12 at 11:49

Your putative equation of divisors has one divisor on X and the other partly on Y and partly on X, so puzzles me. Maybe you mean div(f*(dx)) = f*(div(dx)) + R? Then it seems just an elementary computation. But I am not an expert in "wild ramification". – roy smith Jul 30 '12 at 14:21

Thanks for pointing out the error, I have corrected that. In the tamely ramified case it certainly is elementary, but it doesn't seem so elementary otherwise (to me at least), and no-one on Stack Exchange seemed able to provide a reference. – Tait Jul 30 '12 at 15:32

Maybe your confusion stems from the following. The coefficient at a point $P$ of the ramification divisor is the order of vanishing at $P$ of $dx/dt$ for a local parameter $t$ at $P$, and the Hurwitz formula is almost trivial. If you expect this local multiplicity to be $e_P - 1$, where $e_P$ is the multiplicity of $P$ in $f^*(f(P))$, then this is simply false in the wild ramification case. – Felipe Voloch Jul 31 '12 at 12:41
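A standard worked example (not part of the original exchange) of the local computation behind that last comment, with $t$ a local parameter at $P$ and $u$ a local parameter at $f(P)$: in the tame case one can write $u\circ f = a t^{e} + \dots$ with $a \neq 0$ and $e = e_P$ prime to the characteristic, so $d(u\circ f) = (e a t^{e-1} + \dots)\,dt$ and ${\rm ord}_P\big(d(u\circ f)/dt\big) = e - 1$. In characteristic $p$ this can fail under wild ramification: if $u\circ f = t^p + t^{p+1}$, then $d(u\circ f) = \big(p t^{p-1} + (p+1)t^p\big)\,dt = t^p\,dt$, so the order is $p$, strictly greater than $e_P - 1 = p - 1$. This is consistent with the naive computation only giving a lower bound in the wild case.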
{"url":"http://mathoverflow.net/questions/103510/elementary-proof-of-the-hurwitz-formula?answertab=votes","timestamp":"2014-04-18T03:34:36Z","content_type":null,"content_length":"54900","record_id":"<urn:uuid:46eb88e6-5be0-47eb-86da-4fb62c93f2d5>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00098-ip-10-147-4-33.ec2.internal.warc.gz"}
How to calculate point price elasticity of demand with examples

Point elasticity is the price elasticity of demand at a specific point on the demand curve instead of over a range of it. It uses the same formula as the general price elasticity of demand measure, but we can take information from the demand equation to solve for the "change in" values instead of actually calculating a change given two points.

Here is the point price elasticity of demand formula:

Point Price Elasticity of Demand = (% change in Quantity)/(% change in Price)
Point Price Elasticity of Demand = (∆Q/Q)/(∆P/P)
Point Price Elasticity of Demand = (P/Q)(∆Q/∆P)

where (∆Q/∆P) is the derivative of the demand function with respect to P. You don't really need to take the derivative of the demand function; just find the coefficient (the number) next to price (P) in the demand function, and that gives you the value of ∆Q/∆P, because it shows how much Q changes for a 1 unit change in P.

Example 1: Here is an example demand curve: Q = 15,000 - 50P

Given this demand curve we have to figure out what the point price elasticity of demand is at P = 100 and P = 10.

First we need the derivative of the demand function when it's expressed with Q as a function of P. Since quantity goes down by 50 each time price goes up by 1, this gives us (∆Q/∆P) = -50.

Next we need to find the quantity demanded at each associated price and pair it together with the price: (100, 10,000), (10, 14,500)

e = -50(100/10,000) = -0.5
e = -50(10/14,500) = -0.034

And these results make sense: first, because they are negative (downward sloping demand), and second, because the higher price gives a relatively more elastic (larger in magnitude) measure.

Example 2: How to find the point price elasticity of demand with the following demand function: Q = 4,000 – 400P

We know that ∆Q/∆P in this problem is -400, and we need to find the point price elasticity of demand at a price of 10 and 8. At a price of ten, we demand 0 of the good, so the measure is undefined. At a price of 8 we will demand 800 of the good, so the associated measure is:

e = -400(8/800) = -4

What about a demand function of: Q = 8,800 – 1,000P

Here our ∆Q/∆P will be -1,000 and we will need to find the associated measure at prices of 0, 2, 4, and 6. This means we will end up with:

e = -1,000(0/8,800) = 0
e = -1,000(2/6,800) = -0.294
e = -1,000(4/4,800) = -0.833
e = -1,000(6/2,800) = -2.14
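As a quick check on the arithmetic, here is a small Python sketch (my own illustration, not part of the original post; the function name and the assumption of a linear demand curve Q = a + bP are mine):

# Point price elasticity for a linear demand curve Q = a + b*P,
# where b is the coefficient on price, i.e. dQ/dP.
def point_elasticity(a, b, price):
    quantity = a + b * price
    if quantity == 0:
        return None  # undefined when quantity demanded is zero
    return b * (price / quantity)

print(point_elasticity(15000, -50, 100))   # -0.5
print(point_elasticity(15000, -50, 10))    # about -0.034
print(point_elasticity(4000, -400, 10))    # None (undefined)
print(point_elasticity(4000, -400, 8))     # -4.0
for p in (0, 2, 4, 6):
    print(p, point_elasticity(8800, -1000, p))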
{"url":"http://www.freeeconhelp.com/2012/04/how-to-calculate-point-price-elasticity.html","timestamp":"2014-04-16T05:04:20Z","content_type":null,"content_length":"74028","record_id":"<urn:uuid:5ad878ab-0d5f-41af-ba05-e687ec903c89>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00224-ip-10-147-4-33.ec2.internal.warc.gz"}
Homework Help gr12 math - David Q, Sunday, August 31, 2008 at 6:04pm

With no restrictions, just select any four from ten, which is:
factorial(10) / (factorial(4) x factorial(6)) = (10x9x8x7x6x5x4x3x2x1) / {(6x5x4x3x2x1) x (4x3x2x1)} = (10x9x8x7) / (4x3x2x1) = 210.

If you've got to have four women and two men, then you're choosing four women from six, and also two men from four.

The number of ways you can choose four women from six is:
(6x5x4x3x2x1) / {(4x3x2x1) x (2x1)} = (6x5) / (2x1) = 15

[Just to demonstrate that, you can easily enumerate them all in this instance, since the number is very small. Call them A, B, C, D, E and F. All the possible combinations are as follows: ABCD, ABCE, ABCF, ABDE, ABDF, ABEF, ACDE, ACDF, ACEF, ADEF, BCDE, BCDF, BCEF, BDEF, CDEF. That's 15 in total.]

The number of ways to choose two men from four is (4x3x2x1) / {(2x1) x (2x1)} = 6.

[Again, you can easily enumerate them all here. Call them P, Q, R and S. All the possible combinations are PQ, PR, PS, QR, QS and RS. That's 6 in total.]

The total number of ways of getting BOTH (four women from six) AND (two men from four) is just the product of those two previous answers: 15 x 6 = 90.

Are you able to tackle the remaining parts of the question now?
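If you would like to verify these counts without multiplying out the factorials, Python's built-in math.comb does the same computation (this check is my addition, not part of the original answer):

from math import comb

print(comb(10, 4))               # 210: any four people from ten
print(comb(6, 4))                # 15: four women from six
print(comb(4, 2))                # 6: two men from four
print(comb(6, 4) * comb(4, 2))   # 90: four women AND two men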
{"url":"http://www.jiskha.com/display.cgi?id=1220126878","timestamp":"2014-04-21T03:56:29Z","content_type":null,"content_length":"9522","record_id":"<urn:uuid:92d11c7c-8396-4195-815c-d22d0c5ae762>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00027-ip-10-147-4-33.ec2.internal.warc.gz"}
st: RE: nl-function log4-formula

From: Nick Cox <n.j.cox@durham.ac.uk>
To: "'statalist@hsphsun2.harvard.edu'" <statalist@hsphsun2.harvard.edu>
Subject: st: RE: nl-function log4-formula
Date: Wed, 16 Nov 2011 10:36:08 +0000

I don't know, but tucking the -if- condition inside the parentheses looks a bit odd. Try moving it outside. -nl- is a command, not a function.

Jennyfer Wolf

I am working with the nl-function and with the logistic function model log4. Could somebody please explain why I do not get the same output if I type:

1. nl log4: VAR1 VAR2 if VAR3=="xxx"

2. nl(VAR1={b0}+{b1}/(1+exp(-{b2}*(VAR2-{b3}))) if VAR3=="xxx")

Actually 1. should just be an abbreviation of 2.? If I run command 1, I get estimates for b0, b1, b2, b3; if I run command 2, I only get an estimate for b0 and b1-b3 are 0. Could somebody explain how to write the formula "in full" (like in 2.) to get the same results as in the first formula.
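Outside Stata, the "written in full" four-parameter logistic the poster gives, y = b0 + b1/(1 + exp(-b2*(x - b3))), can be fit with SciPy. The sketch below is my own illustration with made-up data (not a Stata equivalent and not from the thread); the subsetting step is the analogue of applying the -if- qualifier before fitting rather than inside the formula:

# Sketch: fit y = b0 + b1/(1 + exp(-b2*(x - b3))) on a subsample of the data.
import numpy as np
from scipy.optimize import curve_fit

def log4(x, b0, b1, b2, b3):
    return b0 + b1 / (1.0 + np.exp(-b2 * (x - b3)))

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)
y = log4(x, 1.0, 5.0, 1.5, 4.0) + rng.normal(scale=0.1, size=x.size)
group = np.where(x < 5, "xxx", "yyy")          # placeholder grouping variable

mask = group == "xxx"                          # keep only the subsample of interest
params, _ = curve_fit(log4, x[mask], y[mask], p0=[0.0, 1.0, 1.0, 2.5], maxfev=10000)
print(params)                                  # estimates of b0, b1, b2, b3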
{"url":"http://www.stata.com/statalist/archive/2011-11/msg00786.html","timestamp":"2014-04-17T22:30:22Z","content_type":null,"content_length":"8325","record_id":"<urn:uuid:387993a9-0833-4be9-8941-f09a69ba21a0>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00661-ip-10-147-4-33.ec2.internal.warc.gz"}
My Web 2.0 Journey Saturday, April 30, 2011 Posted from Diigo. The rest of my favorite links are here. Wednesday, April 27, 2011 I swear I titled a blog post this exact same way a few months ago because I was complaining that my precalc kids can't factor. Right now we're factoring in Algebra 1. It wasn't something that I was looking forward to; we've been struggling a lot lately with both behavior and material. (I'm still trying to forget about the weeks we spent solving systems. Sheer torture.) Last week we talked only about GCFs, which I thought were going well until I graded half of the quizzes. There were 4 - 5 kids with 100%+ (I had a couple of bonus problems) and everyone else was below a 50%. A little disparity, anyone? This week we've been factoring quadratics (just ) using the X-box method. We make an X to find the two values whose sum is b and product is c (have you seen the product-sum puzzles on ilovemath.org?) and then make a box to finish up the factoring. I was worried that it was going to be too much writing for the kids but things seem to be going well. Today's plan was to practice some more. I wanted four of those five kids to do a little more today, knowing that they would get bored quickly with doing the X-boxes... and that's when they go crazy and drive me crazy. (The fifth boy is super smart but is autistic and OCD. I didn't want to push.) so I wrote up a sheet with a combination of GCF and trinomials mixed up... and problems with both a GCF and then a factorable trinomial. I had the chosen few work on the first couple of x-box problems with the rest of the class so I could make sure they were comfortable, then gave them the second sheet. The amount of work and concentration that they showed that assignment was amazing! The one boy who is constantly up and around and bugging everyone else got every problem done... with help. I didn't have to tell him one time to sit and work and stop talking. Him on the way out the door: You know, I just get everything down and then you make it harder. Me: Yep. Think about how much smarter you’re getting! Him: Think about how much harder I’m working. Me: Yes! I promised him tomorrow would be more of the same... and then I'm going to move them on to solving the equations that they're factoring! Hopefully this will be the end of the behavior issues.... I can't believe that I almost forgot! Got my new ipad today at school! Woo hoo! Now if any of you can tell me how to use it in class I'll be forever grateful :) (Someone on twitter mentioned that I might be able to use it with geogebra... definitely need to figure that one out!) Tuesday, April 26, 2011 1. We started limits yesterday in precalc. I'm taking it slowly - just did them graphically yesterday, numerically today. It was actually kind of funny... we were doing $\inline \lim_{x\rightarrow 2}(3x-2)$ . I set up a table so we could see the values from both the left side and the right side, and we ended up getting an answer of 4. I could see wheels turning. Finally one of the girls was like, "Can't we just plug it in?" Yes! Big sighs of relief ensued. Of course, then I had to show them that everything's not pluggable. (I found the latex editor 2. As we were discussing last night's assignment (checking out graphs and limits from them), one of the girls asked if we could watch Mean Girls in class. After discovering that several of the kids have the movie at home, I said sure (at least the part where they reference limits!), we'll do that tomorrow. 
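For anyone who wants to generate product-sum puzzles (or check answers) for the X part of the X-box method described above, here is a tiny sketch; the function name and the integers-only assumption are mine:

def product_sum_pair(b, c):
    """Find integers m, n with m + n == b and m * n == c (for x^2 + bx + c), if any."""
    for m in range(-abs(c) - 1, abs(c) + 2):
        n = b - m
        if m * n == c:
            return m, n
    return None

print(product_sum_pair(7, 12))    # (3, 4)   ->  x^2 + 7x + 12 = (x + 3)(x + 4)
print(product_sum_pair(-1, -6))   # (-3, 2)  ->  x^2 - x - 6 = (x - 3)(x + 2)
print(product_sum_pair(1, 1))     # None     ->  x^2 + x + 1 doesn't factor over the integers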
They were all pretty excited until they realized that the juniors have an academic awards breakfast tomorrow during first period and won't be in class. Then they got mad. :) (Several asked if they could skip the breakfast!) 3. I got my possible classes for next year from my department head yesterday. Because of people moving from our school to the other school in the district, there's no one else that wants to teach the Honors Precalc. (Can you imagine?! It's my favorite!) So it looks like I'll be teaching all three of those (down from 4 total this year, 2 are mine), 1 college prep Algebra 2, and I'm picking up the Honors Algebra 1. I'm iffy about the Algebra 1 - I've taught the Honors class before, but the problem is that it's not really honors. The "real" honors kids are now taking Algebra 1 in 7th grade. The next level takes "Honors" Algebra 1 in 8th grade. So these are kids whose parents want them in an honors class but weren't ready for Algebra 1 when the brightest of their class was. Twice. And they're freshmen. Double whammy. The good thing about the class is that I'm getting out of teaching the Integrated Algebra 1 (our general level) but unfortunately, that means my class size will go from 12 to 22-ish. Yuck. Saturday, April 23, 2011 I've avoided it long enough - I need to get back to summarizing my NCTM trip before I forget what all happened! Session 2: Trig Tricks You’ll Love (with Ann Coulson) I got there late but was in time for sin/cos spaghetti, trig cut-ups, patty paper exponential folding, patty paper conic folding. The place was packed – I sat in the “gallery”. People seemed to be enjoying the session, but I'd seen it all before. I left early. Session 3: Supporting Productive Struggling in the Mathematics Classroom (with Susan May and Kathi Cook) Problem: Anxiety about Algebra 1 (Students have trouble transitioning to hs level, math) Students don’t believe they can be successful Must address student motivation in tandem with academic skills Academic Youth Development – help students work on concept skills and identity at the same time Mathematically proficient students… make sense of problems and persevere in solving them. How do you build students who persevere? Students who persevere… understand the role of challenging tasks in learning. Understand that setbacks can be a natural part of learning Engage in self-monitoring Learn from setbacks and struggles Two views of intelligence: fixed and malleable Fixed: (Their intelligence is their identity… whether good or bad) Avoid challenges and seek easy successes Desire to look smart at all costs Worry about failure and question their ability Malleable: (Have control and can change their intelligence) Pursue and enjoy challenges Careless about “looking smart” and self-instruction Engage in self-monitoring Need to break the cycle of kids thinking they’re stuck. (Carol Dweck , 1999) Metacognitive strategies – internal dialogue prompts… chart in packet Make a plan, Monitor work, Evaluate Delicate balance between productive struggle and frustration… need to baby step kids into it. The Bucket Problem (How to split up 8 liters using only 3 liter and 5 liter containers) (Use clip from Die Hard) Strategies: clarify question, trial and error (brute force), discuss with others Why do we need persistent learners? Because the problems get bigger. Problem solving tool is intended to be used side-by-side when they’re working on a challenging task. And then you move on to problems that might take several days to solve…. 
Miles of Tiles: The Value of Persistence 5 levels of problems (A, B, C, D, E) Your role as a teacher is to help them be a problem solver, not to tell them the answers to the questions. NOYCE Foundation Session 4: Fortifying the First Five (with Robert Gerver and Richard Sgroi) Need some help getting your class started? Check this out. This was the one session I attended where the documents were online. Isn't it nice? Session 5: Student Centered Projects to Enrich a Precalculus Class (with Masha Albrecht and Dan Plonsey) These two had some great examples of student projects for precalc. I need to get together a list of these and the ones from my last session to help me plan out my precalc course next year. They gave a handout (but no link, darnit). Session 6: Conics with patty paper and the TI-Nspire (don't know name of presenter) Using the TI-Nspire to mimic the paper folding that is used to illustrate conics. I was getting a little frustrated with it and left early. I definitely need to see if I can borrow one from school for the summer (we have a new class set that no one knows how to use) and figure it out. Session 7: A Day in the Life of a Fractal (with Neil Cooperman) zzzz..... Oh, sorry about that. I was expecting a lot more out of this session. The speaker went through a lot of the upper-level math that creates fractals and examples of them. What I was looking for was how I could use this in my classes (I'd just done something with the Koch Snowflake in class in regards to the geometric sequences formed) and he didn't hit that at all. Fortunately, in the last 5 minutes his wife stood up and showed us a website she uses with her 6th grade classes to create fractals. That saved the session from being a complete washout for me. The website is Aros Session 8: The Unit Circle and Geogebra (with Zyad Bawatneh) I'm wondering if this was the guy's first presentation... he seemed very unorganized, though he did have a flash drive to share his presentation/documents on (I'm glad that I'd taken my laptop and actually had a battery long enough to use it during the session!). He went through the "making a sine graph out of the unit circle" process (that I do in class using string and spaghetti) using geogebra. It was pretty cool and showed me that I need to spend some more time with geogebra this summer. Session 9: More Precalculus Projects (with Luajean Bryan) GREAT ideas. Need to summarize them for my own use. We were given a book of her projects (including student work, rubrics, etc) and a disk with all of the files on it. Very cool. Posted from Diigo. The rest of my favorite links are here. I was just checking out my blog stats for the last year and was amazed at what I saw for the past year. I realize that these numbers are nowhere near some of the blogs I read, but honestly, I'm still amazed that anyone actually reads this. Thanks to those of you who take the time! Thursday, April 21, 2011 At the school where I teach, some of the kids go crazy in asking each other to prom (and answering those queries). Sometimes they get the teachers involved. A couple of days ago, one of my students from last year stopped in after school to ask if I would help her out in saying yes to prom. Her date is in one of my classes this year. Of course I agreed - I'd never want to stand in the way of young love! (gag.) She brought in a list of statement that she wanted me to say to the class. I was to have them stand up and if the statement wasn't true for them they were to sit down. 
It started out pretty general (you're taking precalculus; you're a boy; you're a senior. . .) and after 7 or 8 of these we were left with only the intended recipient. As the winner, he was given a 5 lb bag of flour with a note on it that said to look inside. The girl had told me she thought he'd wait until he got home (because I didn't want a big floury mess) but the other kids urged him to open it up and check it out. He headed up to the trash can (thank goodness) and slowly emptied the bag. Nothing in there. Today the girl came in with a pretty red flower and asked me to give it to him in class. So as we were getting started, I told him that there'd been a mistake yesterday and he'd gotten the wrong flower (flour... get it?). It had a note taped around the stem. Overall, a cute idea. I guess the days of just asking and saying yes are gone! Tuesday, April 19, 2011 I'm going to divert from NCTM talk for a post... I'll get back to it, I promise! Yesterday in Algebra 2 I wanted to take a day and give them a chance to ask questions. We'd started exponential functions last week right before I left and then they had a couple of activities about them while I was gone Thurs/Friday. I knew there would be some questions (that were hopefully cleared up in class!). So after taking questions, I showed them a clip from Mythbusters where they dispell the myth that a piece of paper can only be folded seven times. Here's what I showed. Then I gave them all pieces of patty paper and had them start folding. We looked at the number of layers and the surface area. They all seemed to enjoy it while looking at the exponential functions Of course, then, there were a bunch of kids that wanted to give it a try. On the clip they have a football-field size piece of paper.... I'm not sure where to get that! Where did I go? Twitter. It seems as though a bunch of the kids are now on twitter, so a couple of them pulled out their phones and tweeted the question. Almost immediately I had a response from , who said that his dad is local and works for a paper company. He suggested that I contact them about it. (I did... we'll see what happens!) The funniest thing is that when the kids found out that I was on twitter, we started comparing numbers. They were super jealous when they found out how many followers I have and how many tweets I've made.... they finally realized that I'm cooler than they are! :) I also heard from , who did something like this with his kids in class, but they used a roll of toilet paper . That could be an option if the kids are really interested... though we don't have a miles-long hallway to work with! I just got response from Smart Papers Ms. Fouss, Thank you for your email and thank you for your service in educating our future generations. Your class project sounds interesting. What SMART Papers could give to you is a sheet of paper approximately 10ft' wide, the width of our paper machine. We can vary the length. How long of sheet do you want? Woo hoo! Sunday, April 17, 2011 NCTM, Day 1 1st session: Differentiating for Gifted Learners Craig Russell University of Illinois Laboratory high School He started out by telling everyone that it was "National Poem in your Pocket" day and gave us a little green piece of cardstock with a poem called "Mathematical Mind" on it (written by a former student). Ok.... His take on gifted students: NCLB has left behind the gifted students. 
NAGC: “Regular classroom curricula and instruction must be adapted, modified, or replaced to meet the unique needs of gifted learners.” CCSS: No references to what to do with gifted students. What he does: Daily group work; group work often differentiated; group assignments change at least once per unit; homework assignments include required and suggested problems, most include “alternative” problems (required for certain students if going to calc, etc.); assessments may be differentiated How do you select the "gifted" students? Students self-select for “alternative” homework May be based on pre-test results (determines groupings in classroom… ex. change focus from graphing linear equations to parametric equations, etc.) Students may be “re-directed”, both on in-class work and on homework Assessment: continual, with evidence for parents “open questions” and “parallel tasks” allow for self-selection and re-direction Differentiation basics: 1. Product: gifted students may produce more sophisticated work with less structure in instructions 2. Process: Gifted students learn through exploration, problem solving 3. Learning Environment: Different students have different learning styles and respond to different stimuli 4. Content: Gifted students can learn more, at greater depth (not nec just moving faster) 1. Product differentiation: Student choice vs teacher choice (mandatory or suggested?) on projects Amazing Race – Roadblock. Choose one of two options. (Create or Crititque) Smaller-scale activities may be more open-ended for some students Tests/quizzes may cover enriched curriculum Amusement park: students choose which ride, with knowledge that some are tougher (design ride… basic is roller coaster, pirate ship is easiest, double ferris wheel, scrambler (toughest) – rates of change, parametric eq, accel, vel 2. Process differentiation: Problem-based learning More open-ended problems, less clearly defined Different resource materials More emphasis on “why” than on “what” 3. Learning Environment Differentiation Flexible furniture locations – grouped or individual Access to technology Whole class vs small group instruction Time allocation: “sweep” or “anchor” activity/exploration 4. Content examples What product might be expected: Modular arithmetic Absolute value inequalities Multi-variable functions 3-d conic sections Hyperbolic functions Partial fractions Differentiation by adaptation • Math forum pow • QELP (from community college in nw) – environmental data sets keyed by math topic Differentiation by unit: We do this now: What are the goals? How much classroom time is devoted to each goal? What lesson activities do you use? How will learning be assessed? We also need to think about: With which of the unit goals are gifted students already comfortable? What additional goals (enrichment? Acceleration?) related to this unit might be appropriate for gifted students? Should (or could) the planned lesson activities be differentiated? Materials (time?) Peer/parent pressure Balance: time for pre-testing, de-emphasis on “routine” Some lessons learned Develop a “few” differentiated lessons per year Study non-traditional textbooks: COMAP, Core Plus, SIMMS, IMP, UCSMP, Discovering Geo, college texts (iffy?) Use conferences to examine the literature Look into NCTM Illuminations and math forum PoW Find a partner "Differentiating an insipid curriculum results in a differentiated insipid curriculum." 
– Carol A Tomlinson Thinking about how to differentiate for gifted students actally caused us to think about how we challenge all students, and we have tightened our curriculum overall as a result. Overall, I enjoyed this session - it gave me a lot to think about. I'm not usually one who enjoys the philosophical-type talks, but this guy was entertaining and kept me interested. I've tried to do a bit of differentiating in precalc this year by offering options for assignments, but there's obviously so much more I could do. I'd like to incorporate pre-tests next year and differentiate based on those, but that's a big plan that will take a lot of work to pull together. More to come: Trig Tricks You’ll Love Ann Coulson Supporting Productive Struggling in the Mathematics Classroom Susan May and Kathi Cook Student Centered Projects to Enrich a Precalculus Class Masha Albrecht and Dan Plonsey And this was just from Thursday! Saturday, April 16, 2011 Posted from Diigo. The rest of my favorite links are here. Friday, April 15, 2011 I got home this afternoon from a fun two days at NCTM. I have several (8? 9?) pages of notes and other worksheets to sort through before I post any thoughts, but I was just catching up on my google reader (it was around 480... ugh) and wanted to pass along a few items I read that are definitely sharable. 1. A post at dy/dan by Dan Meyer referencing a clip from The Daily Show. Just thought it would be fun to throw at my Algebra 1 kids and have them check the math. 2. Kate Nowak at f(t) brought back (created by Jason Dyer last year in a response to a challenge thrown out by Kate to create something fun for the binomial theorem). I'll be covering the binomial theorem on Monday in precalc and it might be fun to use something like this with them. That's assuming I find the time this weekend to check it out. A semester review project from Mimi at I Hope This Old Train Breaks Down (which I think I'll refer to now on as IHTOTBD because it takes me forever to type that :) ). They're going to make a geometry magazine.... I thought that sounded cool! I guess it's time to think about what I want to do for the kids to review this semester - that'll definitely be on my list. A guy mentioned to me today at a session that last year he had the kids each pick a topic and they were to make a 5-8 minute presentation on it. He said that most made videos and they LOVED it. Another on my list. 4. I don't know where she finds the time, but Julie Ruelbach just made her first imovie with her class singing their Equation of a Line song. So cute. So totally can't see my high schoolers even giving it a shot. :) 5. I'm going to be hitting conics soon with my Algebra 2 kids and just saw this post on Square Root of Negative One Teach Math from Amy. It talks about a deck of conics cards that she says really helped her kids understand (and like?) conics. Definitely something to try! (I just e-mailed the originator and asked for the files.) One last thing before I crash. I don't know what happened, but all of a sudden my google reader looks different. It used to have the navigation stuff in a column on the left side of the screen. While I was just reading, I guess I clicked something and that's gone. Does anyone know where it went and/or how to fix it? Tuesday, April 12, 2011 I'm heading off to Indy tomorrow after school and was wondering if the gas reimbursement money that school is going to give me will be enough to cover the cost of the gas for my car (which is sky-high right now around here...). 
So I decided I'd let my Algebra 1 kids figure that out for me. I googled directions to Indy from school to get a total number of miles one way (122). I found the specs on my car to give them an average mpg, and I gave them a chart showing the price of gas at some local stations. It was a fun little exercise, especially when they figured out that I'll get a nice little overage! (I recall there being a nice mall in downtown Indianapolis....) Monday, April 11, 2011 I spent waaaaay too much time in class today having to go over arithmetic and geometric sequences. Unfortunately, I didn't have time for mathematical induction (which I know I'll need lots of time for) so for probably the first time all year the kids didn't have an assignment to work on. They were rather upset.... not. I'm hoping that the cheering and applauding didn't disturb the classes around us! I'm going to be out on Thursday and Friday for , so not getting done what I wanted to today changes my plans. Originally, I was going to do this: Mon: Mathematical Induction Tues: Review/practice Weds: Binomial Theorem Thurs/Fri: Review all (plus sequences) for quiz next week I found this picture on flickr... the caption said it's from a subway station in Portland. Random. Now what I'm thinking is this: Mon: Sequences Tues: Mathematical Induction Weds: Review/practice Thurs/Friday: Exploration into Pascal's Triangle This new version will probably be more fun for the kids (hopefully). I'm going to give them some stuff about the history and patterns in Pascal's Triangle for Thursday then the application of it (expansion of binomials, combinations) on Friday. I think this'll work. When I decided to do this during my plan bell 2nd period today, I checked out my links on diigo as a starting point (they're if you're interested). It's so nice to have resources available! Saturday, April 9, 2011 Posted from Diigo. The rest of my favorite links are here. Friday, April 8, 2011 This has to have been one of the strangest weeks of the year (never mind that it's our first week back after spring break!). 1. I was only here 3.5 days. I had a sick boy at home on Tuesday and was out for a half day on Thursday because we had to grade the practice OGT that was given to the freshmen. (Oh the fun.) The good thing about the grading is that it usually only takes us as a department about an hour, but we're out the whole afternoon. That gave us another 1.5 hours or so to catch up on work, chat, and keep tabs on the Reds score. 2. My sub for both days was a retired teacher from school that's a peppy, fun, interesting lady. The kids normally really enjoy having her here, and she likes being here. She actually sat with my Algebra 1 kids both days and did the work with them. She had a ball doing it (she told me) and they're still talking about how much they like her. (And my Algebra 1 class was crazy amazing this week. They were so good.... I asked them what was up with that and they told me it was because they like what we're working on now. Freshmen.) 3. I was looking over some Algebra 2 assignments (solving radical equations) this morning and noticed that a lot of the kids are making the same mistakes, so I pulled out an old favorite activity of mine that I definitely don't use enough. I copied down a few problems, worked them out, and made copies so everyone had one. The key was that every problem had a mistake in it.... and of course they were mistakes that I was constantly seeing. (Note to self: error analysis is the way to go!!) 
They got really frustrated with me as they were trying to find the mistakes because they didn't see what was wrong.... I'm hoping that I made some headway today! 4. As I was grading those assignments, I noticed two that were identical. (Ugh... more cheating.) One of the boys involved was the first in the room, so I asked him about it. He denied copying and said they'd worked together. We had a little talk about the difference between the two (copying vs working together) and he continued to deny. The other boy involved was in the room at this time and wouldn't answer the question to say that there was no copying involved. Boy #1 sat in his seat and pouted the whole class. I was surprised, though, that at the end of class (after everyone had left) he stopped at my desk to tell me that I was right; he'd copied and he was sorry. Said he just gets so caught up in the moment that he lies and then regrets it later. I was really amazed at the maturity of that and gave him the opportunity to do the assignment over the weekend. Hopefully he won't do that again to me. 5. (This just happened as I was typing) One of our math teachers is going to the other high school in our district next year to teach because of different cuts that have occurred. I have her daughter this year in Honors Precalc. I know the daughter has enjoyed having her mom here and is bummed that mom won't be here for her senior year. The daughter just came into my room offering me presents - either salt & pepper shakers or a foam bird that she'd made in an art class in exchange for being her "mommy" next year. Isn't that sweet? I was going to offer my services anyway (though I said I'd stop short of lending money because I never have any) but thought it was cool that she asked. :) (I chose the bird, obviously.) Wednesday, April 6, 2011 Student after returning from missing 2 days: "What did I miss?" Me: "Nothing. We just sat here and waited for you to come back." Student: "Really?" (P.S. - If you don't "get" twitter or think it's dumb, check Saturday, April 2, 2011 Posted from Diigo. The rest of my favorite links are here. Don't you think that the week of spring break is the quickest one all year? (Unless it's winter break, anyway.) I didn't have a whole lot planned for break except for a 2-day vacation from my children. Unfortunately, my mom got sick and sent the kids home early. (She's actually still in the hospital but is expected to go home on Monday, thank goodness.) I did a little bit of work while I was home. In my Integrated Algebra 1 class we're going to start factoring soon... and I'm not looking forward to it. At this point, the only thing I'm looking forward to in that class is June 3rd. Is that horrible? I'll keep going and try to keep as peppy as I can and find stuff that they might actually enjoy doing, but they're dragging me down.... In precalc we're going to do a little bit of sequences and series then jump into limits. It's been several years since I've had the opportunity to get to limits, so I was doing some searching the other day to see if I could find anything interesting to at least intro them. (They'll get the boring version next year. Oops. Did I say that?) I had some help from , and (in my comments ). Still trying to piece things together. There was a conversation (on twitter, of course) about what you expect students to know when they take a certain class. For instance, I would expect anyone entering Algebra 1 to be able to add, subtract, multiply, and divide numbers (like fractions, negatives, etc). 
With my class this year that was a dumb dumb dumb assumption. At this point in the year I make sure they all have calculators; they can usually do the algebra, it's the adding and subtracting that messes them up. put a request on her for topics that she called "Math's Greatest Hits".... what do you think they should know? After my frustration in precalc earlier this year about them not being able to factor (see ) I made up a list of all of the skills I would expect my students to have when starting out the year. I gave a copy to the Honors Algebra 2 teachers who would be sending kids my way. My plan is to quiz them in the first week of school on some of those skills and if need be, get them reviewing on their own. Here's my version (if you have any suggestions, please let me know!): Precalc skills
{"url":"http://myweb20journey.blogspot.com/2011_04_01_archive.html","timestamp":"2014-04-21T10:40:17Z","content_type":null,"content_length":"275508","record_id":"<urn:uuid:81a35609-4420-4588-91ab-416f759c4730>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00230-ip-10-147-4-33.ec2.internal.warc.gz"}
Woodcliff Lake Math Tutor

Find a Woodcliff Lake Math Tutor

...After spending some 17 years in the Film and Television industry I returned to my first love, teaching. For the past 10 years I have been working at Washingtonville High School in Orange County, New York. Among the events I have experienced there were the transitions from New York State's Se...
10 Subjects: including geometry, algebra 1, algebra 2, American history

...By teaching students the art of self-learning, their confidence in their learning ability will improve while they develop and enhance their analytical and problem-solving skills. These skills are important because they are applicable in any endeavor, such as being a doctor, engineer, attorney, or an entrepreneur. TEACHING STYLE: My teaching style is also very simple: teach to be taught.
13 Subjects: including linear algebra, algebra 1, algebra 2, calculus

I have been tutoring Math since 1992 in my home country, Jamaica. I have been tutoring since 2000 in the USA. I first like to do a diagnostics test one grade lower than their present grade to find out what Math concepts are missing.
7 Subjects: including algebra 1, algebra 2, calculus, prealgebra

...I can help you excel in AP physics (B or C), AP calculus, pre-calculus, or related subjects and can provide experienced guidance in C++, python, or other programming languages. My fee is negotiable depending on the complexity of the material and the number of hours for which you anticipate needi...
12 Subjects: including algebra 2, probability, algebra 1, trigonometry

...I have a degree in Civil and Environmental Engineering. I've worked at several positions where I used AutoCAD daily. I started taking AutoCAD classes back in high school.
12 Subjects: including calculus, precalculus, statistics, probability
{"url":"http://www.purplemath.com/woodcliff_lake_nj_math_tutors.php","timestamp":"2014-04-17T01:04:17Z","content_type":null,"content_length":"23996","record_id":"<urn:uuid:bc0423cb-da0d-4215-b66b-7cbb86740898>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00455-ip-10-147-4-33.ec2.internal.warc.gz"}
The Maslov index for paths

Results 1 - 10 of 35

GAFA - Special Volume, Part II, 1995 (Cited by 20, 5 self)
... this paper is to show that this problem can be successfully overcome by using an idea from [11]. We begin with an exposition of the main notions of contact geometry and their symplectic analogs. We then develop an analog of Floer homology theory for the Lagrangian intersection problem in symplectizations of contact manifolds and give applications of this theory to contact geometry. There exist other methods for handling similar problems in contact geometry. Let us mention here Givental's approach through the so-called non-linear Maslov index (see [9]), as well as the approach based on the theory of generating functions and hypersurfaces as it is described in [3]. All these methods, and the method considered in this paper, have common as well as complementary areas of applications. A part of this paper was written while the first and third authors visited IHES. They thank the institute for the hospitality. 2 Contact geometry 2.1 Contact manifolds and their symplectizations

Math. Z. (Cited by 20, 5 self)
Abstract. We study the coherent orientations of the moduli spaces of 'trajectories' in Symplectic Field Theory, following the lines of [3]. In particular we examine their behavior at multiple closed Reeb orbits under change of the asymptotic direction. Analogous to the orientation of the unstable tangent spaces of critical points in finite-dimensional Morse theory, the orientations are determined by a certain choice of orientation at each closed Reeb orbit.

J. Symplectic Geom., 2001 (Cited by 19, 8 self)
We provide a translation between Chekanov's combinatorial theory for invariants of Legendrian knots in the standard contact $R^3$ and a relative version of Eliashberg and Hofer's Contact Homology. We use this translation to transport the idea of "coherent orientations" from the Contact Homology world to Chekanov's combinatorial setting. As a result, we obtain a lifting of Chekanov's differential graded algebra invariant to an algebra over $Z[t, t^{-1}]$ with a full $Z$ grading.

Comm. Math. Phys. (Cited by 13, 3 self)
We prove the homological mirror conjecture for toric del Pezzo surfaces. In this case, the mirror object is a regular function on an ...

2007 (Cited by 8, 7 self)
We construct coherent orientations on moduli spaces of quilted pseudoholomorphic surfaces and determine the effect of various gluing operations on the orientations. We also investigate the behavior of the orientations under composition of Lagrangian correspondences.

Pacific J. Math. (Cited by 7, 0 self)
We give the construction of symplectic invariants which incorporates both the "infinite dimensional" invariants constructed by Oh in 1997 and the "finite dimensional" ones constructed by Viterbo in 1992. 1. Introduction. Let $M$ be a compact smooth manifold. Its cotangent bundle $T^*M$ carries a natural symplectic structure associated to a Liouville form $\theta = p\,dq$. For a given compactly supported Hamiltonian function $H: T^*M \to R$ and a closed submanifold $N \subset M$, Oh [30, 27] defined symplectic invariants of ...

2002 (Cited by 7, 0 self)
Abstract. We show that there exist no Lagrangian embeddings of the Klein bottle into $C^2$. Using the same techniques we also give a new proof that any Lagrangian torus in $C^2$ is smoothly isotopic to the Clifford torus. 1. Lagrangian Embeddings in $C^2$. The topology of closed Lagrangian embeddings into $C^n$ (see [1]) is still an elusive problem in symplectic topology. Before Gromov invented the techniques of pseudo-holomorphic curves it was almost intractable, and the only known obstructions came from the fact that such a submanifold has to be totally real. Then in [7] he showed that for any such closed, compact, embedded Lagrangian there exists a holomorphic disk with boundary on it. Hence the integral of a primitive over the boundary is different from zero and the first Betti number of the Lagrangian submanifold cannot vanish, excluding the possibility that a three-sphere can be embedded into $C^3$ as a Lagrangian. A further analysis of these techniques led to more obstructions for the topology of such embeddings in [18] and [20]. For $C^2$ the classical obstructions restrict the classes of possible closed, compact surfaces ...

2001 (Cited by 7, 2 self)
At the very beginning of the quantum theory, Van Vleck (1928) proposed a nice approximation formula for the integral kernel of the time-dependent propagator for the Schrödinger equation. This formula can be deduced from the Feynman path integral by a formal stationary phase argument. After the fundamental works by Hörmander and Maslov on Fourier integral operators, it became possible to give a rigorous mathematical proof of the Van Vleck formula. We present here a more direct and elementary proof, using propagation of coherent states. We apply this result to give a mathematical proof of the Aharonov-Bohm effect observed on the time-dependent propagator. This effect concerns a phase factor depending on the flux of a magnetic field, which can be non-trivial even if the particle never meets the magnetic field. 1 Introduction. Let us consider the time-dependent Schrödinger equation
$$ i\hbar\,\frac{\partial \psi(t)}{\partial t} = H(t)\,\psi(t), \qquad \psi(t_0) = f, \qquad (1) $$
where $t_0$ is the initial time, $f$ an initial state, and $H(t)$ a quantum ...
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=2292123","timestamp":"2014-04-17T23:33:25Z","content_type":null,"content_length":"33757","record_id":"<urn:uuid:88fb8aaf-d1df-492d-b2d5-bb22c743853c>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00554-ip-10-147-4-33.ec2.internal.warc.gz"}
Some questions on unitarisability of discrete groups

In this post I would like to ask several questions related to the Dixmier problem. I will try to make the post as self-contained as possible.

A discrete group $G$ is unitarisable if for every Hilbert space $H$ and every homomorphism $\pi:G\rightarrow B(H)$ such that $||\pi(g)||<C$ for every $g\in G$ and some constant $C$, there exists an invertible operator $S\in B(H)$ such that for every $g\in G$ the operator $S\pi(g)S^{-1}$ is unitary. Note that the unitarisability property passes to subgroups. The following is still open:

Dixmier problem: $G$ is amenable iff $G$ is unitarisable.

Denote by $T_1(G)$ the space of functions $f:G\rightarrow \mathbb{C}$ with the following norm:
$$||f||_{T_1(G)}=\inf \{ \sup\limits_{s}\sum\limits_{t}f_1(s,t)+\sup\limits_{t}\sum\limits_{s}f_2(s,t)\}$$
where the $\inf$ is taken over all decompositions $f(s^{-1}t)=f_1(s,t)+f_2(s,t)$. It was proved by Bozejko and Fendler that if $G$ is unitarisable then $T_1(G)\subseteq l_2(G)$; equivalently, there exists a constant $C>0$ such that for every $f\in \mathbb{C}[G]$ we have
$$||f||_{l_2(G)}\leq C ||f||_{T_1(G)}.$$

From this result it is immediate that $\mathbb{F}_{\infty}$ is not unitarisable. Indeed, let $f:\mathbb{F}_{\infty}\rightarrow\mathbb{C}$ be the characteristic function of the words of length $1$ with respect to the standard set of generators. Then $||f||_{l_2(G)}=\infty$. Since $f(s^{-1}t)=1_{\{(s,t): |s|>|t|, |s^{-1}t|=1\}}+1_{\{(s,t): |s|<|t|, |s^{-1}t|=1\}}$, we have $||f||_{T_1(G)}\leq 2$ (for fixed $s$ there is at most one $t$ with $|t|<|s|$ and $|s^{-1}t|=1$, namely $s$ with its last letter deleted, and symmetrically at most one $s$ for each $t$ in the second set).

Question 1: Let $G$ be such that $T_1(G)\not\subset l_{2}(G)$. Is it true that there exists a characteristic function of an infinite set $S\subset G$ whose $T_1(G)$ norm is bounded? Are there sets $S_n$ such that $|S_n|\rightarrow \infty$ and $||1_{S_n}||_{T_1(G)}\leq C$ for some constant $C>0$?

Note that the Bozejko-Fendler result helps to catch examples of non-unitarisable groups without free subgroups. The last fact is a combination of recent results of Epstein, Monod and Osin. It is not clear, however, if in their example the function that violates the Bozejko-Fendler condition can be chosen to be characteristic. Is the following true:

Question 2: $G$ is not amenable iff there exists an infinite set $R\subset G$ such that $\Delta(R)=\{(s,t)\in G\times G: s^{-1}t\in R\}$ and $\Delta(R)=R_1\cup R_2$ with $R_1\cap R_2=\emptyset$ and $|\{s:(s,t)\in R_1\}|+|\{t:(s,t)\in R_2\}|<C$ for some constant $C>0$ and all $s,t\in G$.

or, maybe, we just have a positive answer to the following question:

Question 3: Assume $G$ satisfies the second part of Question 2. Namely, there exists an infinite set $R\subset G$ such that $\Delta(R)=\{(s,t)\in G\times G: s^{-1}t\in R\}$ and $\Delta(R)=R_1 \cup R_2$ with $R_1\cap R_2=\emptyset$ and $|\{s:(s,t)\in R_1\}|+|\{t:(s,t)\in R_2\}|<C$ for some constant $C>0$ and all $s,t\in G$. Is it true that $G$ contains the free group on $2$ generators?

Question 2 can be restated as follows:

Question 2': $G$ is amenable iff there exists a sequence of subsets $S_n\subset G$ with $|S_n|\rightarrow \infty$ and a constant $C>0$ such that for every finite sets $A,B\subset G$ we have
$$|\Delta (S_n)\cap A\times B|\leq C (|A|+|B|)$$

Note that in Q. 2' one can take $A=B$. The same for Question 3:

Question 3': Is it true that if $G$ satisfies the second condition in Question 2' then $\mathbb{F}_2$ is a subgroup of $G$?

Disclaimer: Some of the questions above were communicated to me by Gilles Pisier in discussions following my talk in his working group seminar.

gr.group-theory amenability similarity

thanks for sharing these, kate! – Jon Bannon Apr 8 '11 at 19:15

Please take a close look at Question 1. It might be that there is a simple trick that makes it. Also, a similar question on changing a function with certain properties to a characteristic function is posted here: mathoverflow.net/questions/54921/…. – Kate Juschenko Apr 9 '11 at 9:09
{"url":"http://mathoverflow.net/questions/61091/some-questions-on-unitarisability-of-discrete-groups","timestamp":"2014-04-18T18:11:02Z","content_type":null,"content_length":"52128","record_id":"<urn:uuid:5552342e-6626-4c84-a85d-eefea3162b23>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00514-ip-10-147-4-33.ec2.internal.warc.gz"}
Alpha Spectral Analysis

One of the questions of interest is the optimal sampling frequency to use for extracting the alpha signal from an alpha generation function. We can use Fourier transforms to help identify the cyclical behavior of the strategy alpha and hence determine the best time-frames for sampling and trading. Typically, these spectral analysis techniques will highlight several different cycle lengths where the alpha signal is strongest.

The spectral density of the combined alpha signals across twelve pairs of stocks is shown in Fig. 1 below. It is clear that the strongest signals occur at the shorter frequencies, with cycles of up to several hundred seconds. Focusing on the density within this time frame, we can identify in Fig. 2 several frequency cycles where the alpha signal appears strongest. These are around 50, 80, 160, 190, and 230 seconds. The cycle with the strongest signal appears to be around 228 secs, as illustrated in Fig. 3. The signals at cycles of 54 & 80 secs (Fig. 4) and 158 & 185/195 secs (Fig. 5) appear to be of approximately equal strength. There is some variation in the individual patterns of the power spectra for each pair, but the findings are broadly comparable, and indicate that strategies should be designed for sampling frequencies at around these time intervals.

Fig. 1 Alpha Power Spectrum

If we look at the correlation surface of the power spectra of the twelve pairs, some clear patterns emerge (see Fig. 6). Focusing on the off-diagonal elements, it is clear that the power spectrum of each pair is perfectly correlated with the power spectrum of its conjugate. So, for instance, the power spectrum of the Stock1-Stock3 pair is exactly correlated with the spectrum of its converse, Stock3-Stock1. But it is also clear that there are many other significant correlations between non-conjugate pairs. For example, the correlation between the power spectra for Stock1-Stock2 vs Stock2-Stock3 is 0.72, while the correlation of the power spectra of Stock1-Stock2 and Stock2-Stock4 is 0.69.

We can further analyze the alpha power spectrum using PCA to expose the underlying factor structure. As shown in Fig. 7, the first two principal components account for around 87% of the variance in the alpha power spectrum, and the first four components account for over 98% of the total variation.

Fig. 7

Stock3 dominates PC-1 with loadings of 0.52 for Stock3-Stock4, 0.64 for Stock3-Stock2, 0.29 for Stock1-Stock3 and 0.26 for Stock4-Stock3. Stock3 is also highly influential in PC-2 with loadings of -0.64 for Stock3-Stock4 and 0.67 for Stock3-Stock2, and again in PC-3 with a loading of -0.60 for Stock3-Stock1. Stock4 plays a major role in the makeup of PC-3, with the highest loading of 0.74.

Fig. 8 PCA Analysis of Power Spectra
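The post does not include code, but the kind of analysis it describes (estimate a power spectrum per pair, pick out the dominant cycle lengths, then run PCA across the pairs' spectra) can be sketched roughly as follows; all variable names, the placeholder random data, and the one-sample-per-second assumption are mine, not from the original analysis:

# Rough sketch: spectral density of alpha signals plus PCA across pairs.
# Assumes each column of `alphas` is one pair's alpha signal sampled once per second.
import numpy as np
from scipy.signal import periodogram
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n_seconds, n_pairs = 4096, 12
alphas = rng.standard_normal((n_seconds, n_pairs))   # placeholder data

# Power spectrum of each pair (fs = 1 sample per second).
freqs, spectra = periodogram(alphas, fs=1.0, axis=0)

# Dominant cycle lengths (in seconds) for the first pair, ignoring the zero frequency.
top = np.argsort(spectra[1:, 0])[::-1][:5] + 1
print("strongest cycles (s):", np.round(1.0 / freqs[top], 1))

# PCA across the pairs' spectra: variance explained by the leading components.
pca = PCA(n_components=4).fit(spectra.T)
print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 3))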
{"url":"http://jonathankinlay.com/index.php/2011/05/alpha-spectral-analysis/","timestamp":"2014-04-19T22:05:28Z","content_type":null,"content_length":"43531","record_id":"<urn:uuid:150f3f65-7e5e-4ef4-973e-2eb289fc6589>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00051-ip-10-147-4-33.ec2.internal.warc.gz"}
• Frankfurt Institute for Advanced Studies (5)

5 search hits

Learning more by sampling less: subsampling effects are model specific (2013)
Viola Priesemann, Michael Wibral, Jochen Triesch
Poster presentation: Twenty Second Annual Computational Neuroscience Meeting: CNS*2013. Paris, France. 13-18 July 2013.
When studying real world complex networks, one rarely has full access to all their components. As an example, the central nervous system of a human consists of 10^11 neurons, which are each connected to thousands of other neurons [1]. Of these 100 billion neurons, at most a few hundred can be recorded in parallel. Thus observations are hampered by immense subsampling. While subsampling does not affect the observables of single neuron activity, it can heavily distort observables which characterize interactions between pairs or groups of neurons [2]. Without a precise understanding of how subsampling affects these observables, inference on neural network dynamics from subsampled neural data remains limited. We systematically studied subsampling effects in three self-organized critical (SOC) models, since this class of models can reproduce the spatio-temporal activity of spontaneous activity observed in vivo [2,3]. The models differed in their topology and in their precise interaction rules. The first model consisted of locally connected integrate-and-fire units, thereby resembling cortical activity propagation mechanisms [2]. The second model had the same interaction rules but random connectivity [4]. The third model had local connectivity but different activity propagation rules [5]. As a measure of network dynamics, we characterized the spatio-temporal waves of activity, called avalanches. Avalanches are characteristic for SOC models and neural tissue [6]. Avalanche measures A (e.g. size, duration, shape) were calculated for the fully sampled and the subsampled models. To mimic subsampling in the models, we considered the activity of a subset of units only, discarding the activity of all the other units. Under subsampling the avalanche measures A depended on three main factors: First, A depended on the interaction rules of the model and its topology, thus each model showed its own characteristic subsampling effects on A. Second, A depended on the number of sampled sites n. With small and intermediate n, the true A could not be recovered in any of the models. Third, A depended on the distance d between sampled sites. With small d, A was overestimated, while with large d, A was underestimated. Since under subsampling the observables depended on the model's topology and interaction mechanisms, we propose that systematic subsampling can be exploited to compare models with neural data: When changing the number and the distance between electrodes in neural tissue and sampled units in a model analogously, the observables in a correct model should behave the same as in the neural tissue. Thereby, incorrect models can easily be discarded. Thus, systematic subsampling offers a promising and unique approach to model selection, even if brain activity was far from being fully sampled.

Emergence of the mitochondrial reticulum from fission and fusion dynamics (2012)
Valerii M. Sukhorukov, Daniel Dikov, Andreas S. Reichert, Michael Meyer-Hermann
Mitochondria form a dynamic tubular reticulum within eukaryotic cells.
Currently, quantitative understanding of its morphological characteristics is largely absent, despite major progress in deciphering the molecular fission and fusion machineries shaping its structure. Here we address the principles of formation and the large-scale organization of the cell-wide network of mitochondria. On the basis of experimentally determined structural features we establish the tip-to-tip and tip-to-side fission and fusion events as dominant reactions in the motility of this organelle. Subsequently, we introduce a graph-based model of the chondriome able to encompass its inherent variability in a single framework. Using both mean-field deterministic and explicit stochastic mathematical methods we establish a relationship between the chondriome structural network characteristics and underlying kinetic rate parameters. The computational analysis indicates that mitochondrial networks exhibit a percolation threshold. Intrinsic morphological instability of the mitochondrial reticulum resulting from its vicinity to the percolation transition is proposed as a novel mechanism that can be utilized by cells for optimizing their functional competence via dynamic remodeling of the chondriome. The detailed size distribution of the network components predicted by the dynamic graph representation introduces a relationship between chondriome characteristics and cell function. It forms a basis for understanding the architecture of mitochondria as a cell-wide but inhomogeneous organelle. Analysis of the reticulum adaptive configuration offers a direct clarification for its impact on numerous physiological processes strongly dependent on mitochondrial dynamics and organization, such as efficiency of cellular metabolism, tissue differentiation and aging. TRENTOOL: a Matlab open source toolbox to analyse information flow in time series data with transfer entropy (2011) Michael Lindner Viola Priesemann Raul Vicente Michael Wibral Background: Transfer entropy (TE) is a measure for the detection of directed interactions. Transfer entropy is an information theoretic implementation of Wiener's principle of observational causality. It offers an approach to the detection of neuronal interactions that is free of an explicit model of the interactions. Hence, it offers the power to analyze linear and nonlinear interactions alike. This allows for example the comprehensive analysis of directed interactions in neural networks at various levels of description. Here we present the open-source MATLAB toolbox TRENTOOL that allows the user to handle the considerable complexity of this measure and to validate the obtained results using non-parametrical statistical testing. We demonstrate the use of the toolbox and the performance of the algorithm on simulated data with nonlinear (quadratic) coupling and on local field potentials (LFP) recorded from the retina and the optic tectum of the turtle (Pseudemys scripta elegans) where a neuronal one-way connection is likely present. Results: In simulated data TE detected information flow in the simulated direction reliably with false positives not exceeding the rates expected under the null hypothesis. In the LFP data we found directed interactions from the retina to the tectum, despite the complicated signal transformations between these stages. No false positive interactions in the reverse directions were detected. 
Conclusions: TRENTOOL is an implementation of transfer entropy and mutual information analysis that aims to support the user in the application of this information theoretic measure. TRENTOOL is implemented as a MATLAB toolbox and available under an open source license (GPL v3). For the use with neural data TRENTOOL seamlessly integrates with the popular FieldTrip toolbox. Deceleration of fusion–fission cycles improves mitochondrial quality control during aging (2012) Marc Thilo Figge Andreas S. Reichert Michael Meyer-Hermann Heinz D. Osiewacz Mitochondrial dynamics and mitophagy play a key role in ensuring mitochondrial quality control. Impairment thereof was proposed to be causative to neurodegenerative diseases, diabetes, and cancer. Accumulation of mitochondrial dysfunction was further linked to aging. Here we applied a probabilistic modeling approach integrating our current knowledge on mitochondrial biology allowing us to simulate mitochondrial function and quality control during aging in silico. We demonstrate that cycles of fusion and fission and mitophagy indeed are essential for ensuring a high average quality of mitochondria, even under conditions in which random molecular damage is present. Prompted by earlier observations that mitochondrial fission itself can cause a partial drop in mitochondrial membrane potential, we tested the consequences of mitochondrial dynamics being harmful on its own. Next to directly impairing mitochondrial function, pre-existing molecular damage may be propagated and enhanced across the mitochondrial population by content mixing. In this situation, such an infection-like phenomenon impairs mitochondrial quality control progressively. However, when imposing an age-dependent deceleration of cycles of fusion and fission, we observe a delay in the loss of average quality of mitochondria. This provides a rational why fusion and fission rates are reduced during aging and why loss of a mitochondrial fission factor can extend life span in fungi. We propose the ‘mitochondrial infectious damage adaptation’ (MIDA) model according to which a deceleration of fusion–fission cycles reflects a systemic adaptation increasing life span. TRENTOOL: an open source toolbox to estimate neural directed interactions with transfer entropy (2011) Michael Wibral Raul Vicente Viola Priesemann Michael Lindner Poster presentation from Twentieth Annual Computational Neuroscience Meeting: CNS*2011 Stockholm, Sweden. 23-28 July 2011. Poster presentation To investigate directed interactions in neural networks we often use Norbert Wiener's famous definition of observational causality. Wiener’s definition states that an improvement of the prediction of the future of a time series X from its own past by the incorporation of information from the past of a second time series Y is seen as an indication of a causal interaction from Y to X. Early implementations of Wiener's principle – such as Granger causality – modelled interacting systems by linear autoregressive processes and the interactions themselves were also assumed to be linear. However, in complex systems – such as the brain – nonlinear behaviour of its parts and nonlinear interactions between them have to be expected. In fact nonlinear power-to-power or phase-to-power interactions between frequencies are reported frequently. 
To cover all types of non-linear interactions in the brain, and thereby to fully chart the neural networks of interest, it is useful to implement Wiener's principle in a way that is free of a model of the interaction [1]. Indeed, it is possible to reformulate Wiener's principle based on information theoretic quantities to obtain the desired model-freeness. The resulting measure was originally formulated by Schreiber [2] and termed transfer entropy (TE). Shortly after its publication transfer entropy found applications to neurophysiological data. With the introduction of new, data efficient estimators (e.g. [3]) TE has experienced a rapid surge of interest (e.g. [4]). Applications of TE in neuroscience range from recordings in cultured neuronal populations to functional magnetic resonanace imaging (fMRI) signals. Despite widespread interest in TE, no publicly available toolbox exists that guides the user through the difficulties of this powerful technique. TRENTOOL (the TRansfer ENtropy TOOLbox) fills this gap for the neurosciences by bundling data efficient estimation algorithms with the necessary parameter estimation routines and nonparametric statistical testing procedures for comparison to surrogate data or between experimental conditions. TRENTOOL is an open source MATLAB toolbox based on the Fieldtrip data format. We evaluated the performance of the toolbox on simulation data and also a neuronal dataset that provides connections that are truly unidirectional to circumvent the following generic problem: typically, for any result of an analysis of directed interactions in the brain there will be a plausible explanation because of the combination of feedforward and feedback connectivity between any two measurement sites. Therefore, we estimated TE between the electroretinogram (ERG) and the LFP response in the tectum of the turtle (Chrysemys scripta elegans) under visual stimulation by random light pulses. In addition, we also investigated transfer entropy between the input to the light source (TTL pulse) and the ERG, to test the ability of TE to detect directed interactions between signals with vastly different properties. We found significant (p<0.0005) causal interactions from the TTL pulse to the ERG and from the ERG to the tectum – as expected. No significant TE was detected in the reverse direction. CONCLUSION: TRENTOOL is an easy to use implementation of transfer entropy estimation combined with statistical testing routines suitable for the analysis of directed interactions in neuronal data.
{"url":"http://publikationen.ub.uni-frankfurt.de/solrsearch/index/search/searchtype/collection/id/16217/start/0/rows/10/institutefq/Frankfurt+Institute+for+Advanced+Studies","timestamp":"2014-04-20T00:57:40Z","content_type":null,"content_length":"35023","record_id":"<urn:uuid:7a8c7a44-d039-4ffc-96f8-a91b32f44408>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00627-ip-10-147-4-33.ec2.internal.warc.gz"}
Yahoo Groups More Cubics Expand Messages View Source Let ABC be a triangle and P a point on its plane. We construct [the how, and the number of solutions is another story !] three similar triangles PAbAc, PBaBc, PCaCb, whose the vertices Ai, Bi, Ci lie on the sides of ABC. / \ / \ / \ Ac Ma Ab / q f \ / w \ / P \ Bc w w Cb / f q \ / Mb Mc \ / q f \ Angles of the similar triangles PAbAc, PBaBc, PCaCb: P [= AcPAb = BcPBa = CaPCb] := omega [w; not be confused with Ab [=PAbAc] = Bc = Ca := phi (f) Ac [=PAcAb] = Ba = Cb := theta (q) Let now Ma,Mb,Mc be the midpoints of AbAc, BcBa, CaCb. The locus of the points P such that MaMbMc be in perspective with ABC is a cubic [actually a diparametric family of cubics, since the equation has two parameters, namely f,q [the third, w, is expressd by the twos : w = Pi - (f+q)]). If the similar triangles are isosceles with f = q = (Pi - w)/2, then the cubic, according to my computations, is an isogonal one (actually a pencil of isogonal cubics) with equation: x(y^2 + z^2) (BC - sA) + [cyclically] = 0 (in trilinears) where A,B,C are shortcuts for sin(A+w) - sinA, B =..., C = ...., s = sinw. The pivot (sin(B+w) - sinB)(sin(C+w) - sinC) - sinw(sin(A+w) - sinA) ::) can be simplified, but I didn't make the calculations. The most interesting cases are of course for w = 60 d. or 90 d. If w = 60 d. then the three similar triangles are equilateral, and a natural question is: How about their centers K1,K2,K3 ? That is: Which is the locus of P such that K1K2K3 be in perspective with ABC? (I think that the locus is a sextic, but I don't know whether reduces or not) If w = 90 d. then the midpoints Ma,Mb,Mc are centers of three squares (Kenmotu configuration), whose the fourth vertices let be P1,P2,P3: / \ / \ Ac Ma Ab / \ / \ / P \ Bc Cb / \ / Mb Mc \ / \ P3 P2 B-----Ba--------Ca-------C And now the question is about the P1,P2,P3: For which points P the triangle P1P2P3 is in perspective with ABC? (in general, we can also consider P1,P2,P3 as the forth vertices of three similar rhombi - FvL's configuration). (I haven't worked on this locus.) View Source Let ABC be a triangle, and PaPbPc the pedal triangle of P. The circle (P, PPa) intersects the bisector of ang(PbPPc) at A', A" [A' near to A] Similary we define the points B', C'; B", C". Which are the loci of P such that: 1. ABC, A'B'C' 2. ABC, A"B"C" are perspective? Let A1, A2 be the orth. proj. of A', A" (resp.) on BC. Similarly we define the points B1, C1; B2, C2. Which are the loci of P such that: 3. ABC, A1B1C1 4. ABC, A2B2C2 are perspective? Your message has been successfully submitted and would be delivered to recipients shortly.
{"url":"https://groups.yahoo.com/neo/groups/Hyacinthos/conversations/topics/1943?xm=1&m=e&l=1","timestamp":"2014-04-18T00:33:39Z","content_type":null,"content_length":"45973","record_id":"<urn:uuid:75561504-8dfe-40f5-a72c-7627ce771755>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00263-ip-10-147-4-33.ec2.internal.warc.gz"}
Squaring the Circle Copyright © University of Cambridge. All rights reserved. 'Squaring the Circle' printed from http://nrich.maths.org/ Bluey - green squares, white squares, transparent squares with a few odd bits of shapes around the perimeter. But, how many squares are there of each type in the complete circle? Study the picture and make an estimate. Note the totals before embarking on a more rigorous audit of what is there. How accurate was your estimate? Can you give an upper and a lower bound to your estimate? If the blue-green squares are of 1 and 1.5 units of length, the white squares of 1.5 units and the transparent squares 1.75 units of length - how "big" is the circular part of the pavement? If the circular part was used as the design for a Roll- a- Penny stall which paid out on pennies successfully rolling fully into a transparent square: what is the probability of winning on this layout? You might like to think about the best and worst case.
{"url":"http://nrich.maths.org/893/index?nomenu=1","timestamp":"2014-04-20T16:14:34Z","content_type":null,"content_length":"4014","record_id":"<urn:uuid:e47deddc-4624-4cb6-9bbe-f14ccf949329>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00041-ip-10-147-4-33.ec2.internal.warc.gz"}
Logical Consequence Model-Theoretic Conceptions of Logical Consequence One sentence X is said to be a logical consequence of a set K of sentences, if and only if, in virtue of logic alone, it is impossible for all the sentences in the set to be true without X being true as well. One well-known specification of this informal characterization is the model-theoretic conception of logical consequence: a sentence X is a logical consequence of a set K of sentences if and only if all models of K are models of X. The model-theoretic characterization is a theoretical definition of logical consequence. It has been argued that this conception of logical consequence is more basic than the characterization in terms of deducibility in a deductive system. The correctness of the model-theoretic characterization of logical consequence, and the adequacy of the notion of a logical constant it utilizes are matters of contemporary debate. Table of Contents 1. Introduction One sentence X is said to be a logical consequence of a set of sentences, if and only if, in virtue of logic alone, it is impossible for all the sentences in K to be true without X being true as well. One well-known specification of this informal characterization, due to Tarski (1936), is: X is a logical consequence of K if and only if there is no possible interpretation of the non-logical terminology of the language L according to which all the sentence in K are true and X is false. A possible interpretation of the non-logical terminology of L according to which sentences are true or false is a reading of the non-logical terms according to which the sentences receive a truth-value (that is, are either true or false) in a situation that is not ruled out by the semantic properties of the logical constants. The philosophical locus of the technical development of ‘possible interpretation’ in terms of models is Tarski (1936). A model for a language L is the theoretical development of a possible interpretation of non-logical terminology of L according to which the sentences of L receive a truth-value. The characterization of logical consequence in terms of models is called the Tarskian or model-theoretic characterization of logical consequence. It may be stated as follows. X is a logical consequence of K if and only if all models of K are models of X. See the entry, Logical Consequence, Philosophical Considerations, for discussion of Tarski’s development of the model-theoretic characterization of logical consequence in light of the ordinary We begin by giving an interpreted language M. Next, logical consequence is defined model-theoretically. Finally, the status of this characterization is discussed, and criticisms of it are 2. Linguistic Preliminaries: the Language M Here we define a simple language M, a language about the McKeon family, by first sketching what strings qualify as well-formed formulas (wffs) in M. Next we define sentences from formulas, and then give an account of truth in M, that is we describe the conditions in which M-sentences are true. a. Syntax of M Building blocks of formulas Individual names—’beth’, ‘kelly’, ‘matt’, ‘paige’, ‘shannon’, ‘evan’, and ‘w[1]‘, ‘w[2]‘, ‘w[3 ]‘, etc. Variables—’x', ‘y’, ‘z’, ‘x[1]‘, ‘y[1 ]‘, ‘z[1]‘, ‘x[2]‘, ‘y[2]‘, ‘z[2]‘, etc. 1-place predicates—’Female’, ‘Male’ 2-place predicates—’Parent’, ‘Brother’, ‘Sister’, ‘Married’, ‘OlderThan’, ‘Admires’, ‘=’. 
Blueprints of well-formed formulas (wffs) Atomic formulas: An atomic wff is any of the above n-place predicates followed by n terms which are enclosed in parentheses and separated by commas. Formulas: The general notion of a well-formed formula (wff) is defined recursively as follows: (1) All atomic wffs are wffs. (2) If α is a wff, so is (3) If α and β are wffs, so is (4) If α and β are wffs, so is v β) (5) If α and β are wffs, so is (6) If Ψ is a wff and v is a variable, then vΨ (7) If Ψ is a wff and v is a variable, then vΨ Finally, no string of symbols is a well-formed formula of M unless the string can be derived from (1)-(7). The signs ‘~’, ‘&’, ‘v‘, and ‘→’, are called sentential connectives. The signs ‘∀’ and ‘∃’ are called quantifiers. It will prove convenient to have available in M an infinite number of individual names as well as variables. The strings ‘Parent(beth, paige)’ and ‘Male(x)’ are examples of atomic wffs. We allow the identity symbol in an atomic formula to occur in between two terms, e.g., instead of ‘=(evan, evan)’ we allow ‘(evan = evan)’. The symbols ‘~’, ‘&’, ‘v‘, and ‘→’ correspond to the English words ‘not’, ‘and’, ‘or’ and ‘if…then’, respectively. ‘∃’ is our symbol for an existential quantifier and ‘∀’ represents the universal quantifier. vΨvΨfor some v, Ψ, and for all v, Ψ, respectively. For every quantifier, its scope is the smallest part of the wff in which it is contained that is itself a wff. An occurrence of a variable v is a bound occurrence iff it is in the scope of some quantifier of the form vv b. Semantics for M We now provide a semantics for M. This is done in two steps. First, we specify a domain of discourse, that is, the chunk of the world that our language M is about, and interpret M’s predicates and names in terms of the elements composing the domain. Then we state the conditions under which each type of M-sentence is true. To each of the above syntactic rules (1-7) there corresponds a semantic rule that stipulates the conditions in which the sentence constructed using the syntactic rule is true. The principle of bivalence is assumed and so ‘not true’ and ‘false’ are used interchangeably. In effect, the interpretation of M determines a truth-value (true, false) for each and every sentence of M. Domain D—The McKeons: Matt, Beth, Shannon, Kelly, Paige, and Evan. Here are the referents and extensions of the names and predicates of M. Terms: ‘matt’ refers to Matt, ‘beth’ refers to Beth, ‘shannon’ refers to Shannon, etc. Predicates. The meaning of a predicate is identified with its extension, that is the set (possibly empty) of elements from the domain D the predicate is true of. The extension of a one-place predicate is a set of elements from D, the extension of a two-place predicate is a set of ordered pairs of elements from D. The extension of ‘Male’ is {Matt, Evan}. The extension of ‘Female’ is {Beth, Shannon, Kelly, Paige}. The extension of ‘Parent’ is {<Matt, Shannon>, <Matt, Kelly>, <Matt, Paige>, <Matt, Evan>, <Beth, Shannon>, <Beth, Kelly>, <Beth, Paige>, <Beth, Evan>}. The extension of ‘Married’ is {<Matt, Beth>, <Beth, Matt>}. The extension of ‘Sister’ is {<Shannon, Kelly>, <Kelly, Shannon>, <Shannon, Paige>, <Paige, Shannon>, <Kelly, Paige>, <Paige, Kelly>, <Kelly, Evan>, <Paige, Evan>, <Shannon, Evan>}. The extension of ‘Brother’ is {<Evan, Shannon>, <Evan, Kelly>, <Evan, Paige>}. 
The extension of ‘OlderThan’ is {<Beth, Matt>, <Beth, Shannon>, <Beth, Kelly>, <Beth, Paige>, <Beth, Evan>, <Matt, Shannon>, <Matt, Kelly>, <Matt, Paige>, <Matt, Evan>, <Shannon, Kelly>, <Shannon, Paige>, <Shannon, Evan>, <Kelly, Paige>, <Kelly, Evan>, <Paige, Evan>}. The extension of ‘Admires’ is {<Matt, Beth>, <Shannon, Matt>, <Shannon, Beth>, <Kelly, Beth>, <Kelly, Matt>, <Kelly, Shannon>, <Paige, Beth>, <Paige, Matt>, <Paige, Shannon>, <Paige, Kelly>, <Evan, Beth>, <Evan, Matt>, <Evan, Shannon>, <Evan, Kelly>, <Evan, Paige>}. The extension of ‘=’ is {<Matt, Matt>, <Beth, Beth>, <Shannon, Shannon>, <Kelly, Kelly>, <Paige, Paige>, <Evan, Evan>}. (I) An atomic sentence with a one-place predicate is true iff the referent of the term is a member of the extension of the predicate, and an atomic sentence with a two-place predicate is true iff the ordered pair formed from the referents of the terms in order is a member of the extension of the predicate. The atomic sentence ‘Female(kelly)’ is true because, as indicated above, the referent of ‘kelly’ is in the extension of the property designated by ‘Female’. The atomic sentence ‘Married(shannon, kelly)’ is false because the ordered pair <Shannon, Kelly> is not in the extension of the relation designated by ‘Married’. Let α and β be any M-sentences. (IV) v β)v β) The meanings for ‘~’ and ‘&’ roughly correspond to the meanings of ‘not’ and ‘and’ as ordinarily used. We call v β)v‘ corresponds to inclusive or. There are a variety of conditionals in English (e.g., causal, counterfactual, logical), each type having a distinct meaning. The conditional defined by (V) above is called the material conditional. One way of following (V) is to see that the truth conditions for By (II) ‘~Married(shannon, kelly)’ is true because, as noted above, ‘Married(shannon, kelly)’ is false. (II) also tells us that ‘~Female(kelly)’ is false since ‘Female(kelly)’ is true. According to (III), ‘(~Married(shannon, kelly) & Female(kelly))’ is true because ‘~Married(shannon, kelly)’ is true and ‘Female(kelly)’ is true. And ‘(Male(shannon) & Female(shannon))’ is false because ‘Male (shannon)’ is false. (IV) confirms that ‘(Female(kelly) v Married(evan, evan))’ is true because, even though ‘Married(evan, evan)’ is false, ‘Female(kelly)’ is true. From (V) we know that the sentence ‘(~(beth = beth) → Male(shannon))’ is true because ‘~(beth = beth)’ is false. If α is false then Before describing the truth conditions for quantified sentences we need to say something about the notion of satisfaction. We’ve defined truth only for the formulas of M that are sentences. So, the notions of truth and falsity are not applicable to non-sentences such as ‘Male(x)’ and ‘((x = x) → Female(x))’ in which ‘x’ occurs free. However, objects may satisfy wffs that are non-sentences. We introduce the notion of satisfaction with some examples. An object satisfies ‘Male(x)’ just in case that object is male. Matt satisfies ‘Male(x)’, Beth does not. This is the case because replacing ‘x’ in ‘Male(x)’ with ‘matt’ yields a truth while replacing the variable with ‘beth’ yields a falsehood. An object satisfies ‘((x = x) → Female(x))’ if and only if it is either not identical with itself or is a female. Beth satisfies this wff (we get a truth when ‘beth’ is substituted for the variable in all of its occurrences), Matt does not (putting ‘matt’ in for ‘x’ wherever it occurs results in a falsehood). 
As a first approximation, we say that an object with a name, say ‘a’, satisfies a wff vv occurs free if and only if the sentence that results by replacing v in all of its occurrences with ‘a’ is true. ‘Male(x)’ is neither true nor false because it is not a sentence, but it is either satisfiable or not by a given object. Now we define the truth conditions for quantifications, utilizing the notion of satisfaction. The notion of satisfaction will be revisited below when we formalize the semantics for M and give the model-theoretic characterization of logical consequence. Let Ψ be any formula of M in which at most v occurs free. Here are some examples. ‘∃x(Male(x) & Married(x, beth))’ is true because Matt satisfies ‘(Male(x) & Married(x, beth))’; replacing ‘x’ wherever it appears in the wff with ‘matt’ results in a true sentence. The sentence ‘∃xOlderThan(x, x)’ is false because no McKeon satisfies ‘OlderThan(x, x)’, that is replacing ‘x’ in ‘OlderThan(x, x)’ with the name of a McKeon always yields a falsehood. The universal quantification ‘∀x( OlderThan(x, paige) → Male(x))’ is false for there is a McKeon who doesn’t satisfy ‘(OlderThan(x, paige) → Male(x))’. For example, Shannon does not satisfy ‘ (OlderThan(x, paige) → Male(x))’ because Shannon satisfies ‘OlderThan(x, paige)’ but not ‘Male(x)’. The sentence ‘∀x(x = x)’ is true because all McKeons satisfy ‘x = x’; replacing ‘x’ with the name of any McKeon results in a true sentence. Note that in the explanation of satisfaction we suppose that an object satisfies a wff only if the object is named. But we don’t want to presuppose that all objects in the domain of discourse are named. For the purposes of an example, suppose that the McKeons adopt a baby boy, but haven’t named him yet. Then, ‘∃x Brother(x, evan)’ is true because the adopted child satisfies ‘Brother(x, evan) ’, even though we can’t replace ‘x’ with the child’s name to get a truth. To get around this is easy enough. We have added a list of names, ‘w[1]‘, ‘w[2]‘, ‘w[3]‘, etc., to M, and we may say that any unnamed object satisfies vv with a previously unused w[i] assigned as a name of this object results in a true sentence. In the above scenerio, ‘∃xBrother(x, evan)’ is true because, ultimately, treating ‘w[1]‘ as a temporary name of the child, ‘Brother(w[1], evan)’ is true. Of course, the meanings of the predicates would have to be amended in order to reflect the addition of a new person to the domain of McKeons. 3. What is a Logic? We have characterized an interpreted formal language M by defining what qualifies as a sentence of M and by specifying the conditions under which any M-sentence is true. The received view of logical consequence entails that the logical consequence relation in M turns on the nature of the logical constants in the relevant M-sentences. We shall regard just the sentential connectives, the quantifiers of M, and the identity predicate as logical constants (the language M is a first-order language). For discussion of the notion of a logical constant see Section 5c below. At the start of this article, it is said that a sentence X is a logical consequence of a set K of sentences, if and only if, in virtue of logic alone, it is impossible for all the sentences in K to be true without X being true as well. A model-theoretic conception of logical consequence in language M clarifies this intuitive characterization of logical consequence by appealing to the semantic properties of the logical constants, represented in the above truth clauses (I)-(VII). 
In contrast, a deductive-theoretic conception clarifies logical consequence in M, conceived of in terms of deducibility, by appealing to the inferential properties of logical constants portrayed as intuitively valid principles of inference, that is, principles justifying steps in deductions. See Logical Consequence, Deductive-Theoretic Conceptions for a deductive-theoretic characterization of logical consequence in terms of a deductive system, and foror a discussion on the relationship between the logical consequence relation and the model-theoretic and deductive-theoretic conceptions of it. Following Shapiro (1991, p. 3), we define a logic to be a formal language L plus either a model-theoretic or a deductive-theoretic account of logical consequence. A language with both characterizations is a full logic just in case the two characterizations coincide. The logic for M developed below may be viewed as a classical logic or a first-order theory. 4. Model-Theoretic Consequence The technical machinery to follow is designed to clarify how it is that sentences receive truth-values owing to interpretations of them. We begin by introducing the notion of a structure. Then we revisit the notion of satisfaction in order to make it more precise, and link structures and satisfaction to model-theoretic consequence. We offer a modernized version of the model-theoretic characterization of logical consequence sketched by Tarski and so deviate from the details of Tarski’s presentation in his (1936). a. Truth in a structure Relative to our language M, a structure U is an ordered pair <D, I>. (1) D, a non-empty set of elements, is the domain of discourse. Two things to highlight here. First, the domain D of a structure for M may be any set of entities, e.g. the dogs living in Connecticut, the toothbrushes on Earth, the natural numbers, the twelve apostles, etc. Second, we require that D not be the empty set. (2) I is a function that assigns to each individual constant of M an element of D, and to each n-place predicate of M a subset of D^n (that is, the set of n-tuples taken from D). In essence, I interprets the individual constants and predicates of M, linking them to elements and sets of n-tuples of elements from of D. For individual constants c and predicates P, the element I[U](c) is the element of D designated by c under I[U], and I[U](P) is the set of entities assigned by I[U] as the extension of P. By ‘structure’ we mean an L-structure for some first-order language L. The intended structure for a language L is the course-grained representation of the piece of the world that we intend L to be about. The intended domain D and its subsets represent the chunk of the world L is being used to talk about and quantify over. The intended interpretation of L’s constants and predicates assigns the actual denotations to L’s constants and the actual extensions to the predicates. The above semantics for our language M, may be viewed, in part, as an informal portrayal of the intended structure of M, which we refer to as U^M. That is, we take M to be a tool for talking about the McKeon family with respect to gender, who is older than whom, who admires whom, etc. To make things formally prim and proper we should represent the interpretation of constants as I[U]M(matt) = Matt, I[U]M(beth) = Beth, and so on. And the interpretation of predicates can look like I[U]M(Male) = {Matt, Evan}, I[U ]M(Female) = {Beth, Shannon, Kelly, Paige}, and so on. We assume that this has been done. 
A structure U for a language L (that is, an L-structure) represents one way that a language can be used to talk about a state of affairs. Crudely, the domain D and the subsets recovered from D constitute a rudimentary representation of a state of affairs, and the interpretation of L’s predicates and individual constants makes the language L about the relevant state of affairs. Since a language can be assigned different structures, it can be used to talk about different states of affairs. The class of L-structures represents all the states of affairs that the language L can be used to talk about. For example, consider the following M-structure U’. D = the set of natural numbers I[U’](beth) = 2 I[ U’](Male) = {d ∈ D | d is prime} I[U’](matt) = 3 I[ U’](Female) = {d ∈ D | d is even} I[U’](shannon) = 5 I[ U’](Parent) = ∅ I[U’](kelly) = 7 I[ U’](Married) = {<d, d’> ∈ D^2 | d + 1 = d’ } I[U’](paige) = 11 I[ U’](Sister) = ∅ I[U’](evan) = 10 I[ U’](Brother) = {<d, d’> ∈ D^2 | d < d’ } I[ U’](OlderThan) = {<d, d’> ∈ D^2 | d > d’ } I[ U’](Admires) = ∅ I[ U’](=) = {<d, d’> ∈ D^2 | d = d’ } In specifying the domain D and the values of the interpretation function defined on M’s predicates we make use of brace notation, instead of the earlier list notation, to pick out sets. For example, we write {d ∈ D | d is even} to say “the set of all elements d of D such that d is even.” And {<d, d’> ∈ D^2 | d > d’} reads: “The set of ordered pairs of elements d, d’ of D such that d > d’.” Consider: the sentence OlderThan(beth, matt) is true in the intended structure U^M for <I[U]M(beth), I[U]M(matt)> is in I[U]M(OlderThan). But the sentence is false in U’ for <I[U’](beth), I[U’](matt)> is not in I[U’](OlderThan) (because 2 is not greater than 3). The sentence (Female(beth) & Male(beth)) is not true in U^M but is true in U’ for I[U’](beth) is in I[U’](Female) and in I[U’](Male) (because 2 is an even prime). In order to avoid confusion it is worth highlighting that when we say that the sentence ‘(Female(beth) & Male(beth))’ is true in one structure and false in another we are saying that one and the same wff with no free variables is true in one state of affairs on an interpretation and false in another state of affairs on another interpretation. b. Satisfaction revisited Note the general strategy of giving the semantics of the sentential connectives: the truth of a compound sentence formed with any of them is determined by its component well-formed formulas (wffs), which are themselves (simpler) sentences. However, this strategy needs to be altered when it comes to quantificational sentences. For quantificational sentences are built out of open wffs and, as noted above, these component wffs do not admit of truth and falsity. Therefore, we can’t think of the truth of, say, ∃x(Female(x) & OlderThan(x, paige)) in terms of the truth of ‘(Female(x) & OlderThan(x, paige))’ for some McKeon x. What we need is a truth-relevant property of open formulas that we may appeal to in explaining the truth-value of the compound quantifications formed from them. Tarski is credited with the solution, first hinted at in the following. The possibility suggests itself, however, of introducing a more general concept which is applicable to any sentential function [open or closed wff] can be recursively defined, and, when applied to sentences leads us directly to the concept of truth. These requirements are met by the notion of satisfaction of a given sentential function by given objects. (Tarski 1933, p. 
189) The needed property is satisfaction. The truth of the above existential quantification will depend on there being an object that satisfies both ‘Female(x)’ and ‘OlderThan(x, paige)’. Earlier we introduced the concept of satisfaction by describing the conditions in which one element satisfies an open formula with one free variable. Now we want to develop a picture of what it means for objects to satisfy a wff with n free variables for any n ≥ 0. We begin by introducing the notion of a variable assignment. A variable assignment is a function g from a set of variables (its domain) to a set of objects (its range). We shall say that the variable assignment g is suitable for a well-formed formula (wff) Ψ of M if every free variable in Ψ is in the domain of g. In order for a variable assignment to satisfy a wff it must be suitable for the formula. For a variable assignment g that is suitable for Ψ, g satisfies Ψ in U iff the object(s) g assigns to the free variable(s) in Ψ satisfy Ψ. Unlike the earlier first-step characterization of satisfaction, there is no appeal to names for the entities assigned to the variables. This has the advantage of not requiring that new names be added to a language that does not have names for everything in the domain. In specifying a variable assignment g, we write α/v, β/v’, χ/v”, … to indicate that g(v) = α, g(v’ ) = β, g(v” ) = χ, etc. We understand U ⊨ Ψ[g] to mean that g satisfies Ψ in U. U^M ⊨ OlderThan(x, y)[Shannon/x, Paige/y] This is true: the variable assignment g, identified with [Shannon/x, Paige/y], satisfies ‘Olderthan(x, y)’ because Shannon is older than Paige. U^M ⊨ Admires(x, y)[Beth/x, Matt/y] This is false for this variable assignment does not satisfy the wff: Beth does not admire Matt. However, the following is true because Matt admires Beth. U^M ⊨ Admires(x, y)[Matt/x, Beth/y] For any wff Ψ, a suitable variable assignment g and structure U together ensure that the terms in Ψ designate elements in D. The structure U insures that individual constants have referents, and the assignment g insures that any free variables in Ψ get denotations. For any individual constant c, c[g] is the element I[U](c). For each variable v, and assignment g whose domain contains v, v[g] is the element g(v). In effect, the variable assignment treats the variable v as a temporary name. We define t[g] as ‘the element designated by t relative to the assignment g’. c. A formalized definition of truth for Language M We now give a definition of truth for the language M via the detour through satisfaction. The goal is to define for each formula α of M and each assignment g to the free variables, if any, of α in U what must obtain in order for U ⊨ α[g]. (I) Where R is an n-place predicate and t[1], …, t[n] are terms, U ⊨R(t[1], …, t[n])[g] if and only if (iff) the n-tuple <t[1][g], …, t[n][g]> is in I[U](R). (II) U ⊨ ~α[g] iff it is not true that U ⊨ α[g]. (III) U ⊨ (α & β)[g] iff U ⊨ α[g] and U ⊨ β[g]. (IV) U ⊨ (α v β)[g] iff U ⊨ α[g] or U ⊨ β[g]. (V) U ⊨ (α → β)[g] iff either it is not true that U ⊨ α[g] or U ⊨ β[g]. Before going on to the (VI) and (VII) clauses for quantificational sentences, it is worthwhile to introduce the notion of a variable assignment that comes from another. Consider ∃y(Female(x) & OlderThan(x, y)). We want to say that a variable assignment g satisfies this wff if and only if there is a variable assignment g’ differing from g at most with regard to the object it assigns to the variable y such that g’ satisfies ‘(Female(x) & OlderThan(x, y))’. 
We say that a variable assignment g’ comes from an assignment g when the domain of g’ is that of g and a variable v, and g’ assigns the same values as g with the possible exception of the element g’ assigns to v. In general, we represent an extension g’ of an assignment g as follows. [g, d/v] This picks out a variable assignment g’ which differs at most from g in that v is in its domain and g’(v) = d, for some element d of the domain D. So, it is true that U^M ⊨∃y(Female(x) & OlderThan(x, y)) [Beth/x] U^M ⊨ (Female(x) & OlderThan(x, y)) [Beth/x, Paige/y]. What this says is that the variable assignment that comes from the assignment of Beth to ‘x’ by adding the assignment of Paige to ‘y’ satisfies ‘(Female(x) & OlderThan(x, y))’ in U^M. This is true for Beth is a female who is older than Paige. Now we give the satisfaction clauses for quantificational sentences. Let Ψ be any formula of M. (VI) U ⊨∃vΨ[g] iff for at least one element d of D, U ⊨ Ψ[g, d/v]. (VII) U ⊨ ∀vΨ[g] iff for all elements d of D, U ⊨ Ψ[g, d/v]. If α is a sentence, then it has no free variables and we write U ⊨ α[g[∅]] which says that the empty variable assignment satisfies α in U. The empty variable assignment g[∅] does not assign objects to any variables. In short: the definition of truth for language L is A sentence α is true in U if and only if U ⊨ α[g[∅]], that is the empty variable assignment satisfies α in U. The truth definition specifies the conditions in which a formula of M is true in a structure by explaining how the semantic properties of any formula of M are determined by its construction from semantically primitive expressions (e.g., predicates, individual constants, and variables) whose semantic properties are specified directly. If every member of a set of sentences is true in a structure U we say that U is a model of the set. We now work through some examples. The reader will be aided by referring when needed to the clauses (I)-(VII). It is true that U^M ⊨ ~Married(kelly, kelly))[g[∅]], that is, by (II) it is not true that U^M ⊨ Married(kelly, kelly))[g[∅]], because <kelly[g[∅]], kelly[g[∅]]> is not in I[U]M(Married). Hence, by U^M ⊨ (Married(shannon, kelly) v ~Married(kelly, kelly))[g[∅]]. Our truth definition should confirm that ∃x∃y Admires(x, y) is true in U^M. Note that by (VI) U^M ⊨∃yAdmires(x, y)[g[∅], Paige/x] since U^M ⊨ Admires(x, y)[g[∅], Paige/x, kelly/y]. Hence, by (VI) U^M ⊨∃x∃y Admires(x, y)[g[∅]] . The sentence, ‘∀x∃y(Older(y, x) → Admires(x, y))’ is true in U^M . By (VII) we know that U^M ⊨ ∀x∃y(Older(y, x) → Admires(x, y))[g[∅]] if and only if for all elements d of D, U^M ⊨∃y(Older(y, x) → Admires(x, y))[g[∅], d/x]. This is true. For each element d and assignment [g[∅], d/x], U^M ⊨ (Older(y, x) → Admires(x, y))[g[∅], d/x, d'/y], that is, there is some element d’ and variable assignment g differing from [g[∅], d/ x] only in assigning d’ to ‘y’, such that g satisfies ‘(Older(y, x) → Admires(x, y))’ in U^M . d. Model-theoretic consequence defined For any set K of M-sentences and M-sentence X, we write K ⊨ X to mean that every M-structure that is a model of K is also a model of X, that is, X is a model-theoretic consequence of K. (1) OlderThan(paige, matt) (2) ∀x(Male(x) → OlderThan(paige, x)) Note that both (1) and (2) are false in the intended structure U^M . We show that (2) is not a model theoretic consequence of (1) by describing a structure which is a model of (1) but not (2). The above structure U’ will do the trick. 
By (I) it is true that U’ ⊨ OlderThan(paige, matt)[g[∅]] because <(paige)[g[∅]], (matt)[g[∅]]> is in I[U’](OlderThan) (because 11 is greater than 3). But, by (VII), it is not the case that U’ ⊨ ∀x(Male(x) → OlderThan(paige, x))[g[∅]] since the variable assignment [g[∅], 13/x] doesn’t satisfy ‘(Male(x) → OlderThan(paige, x))’ in U’ according to (V) for U’ ⊨ Male(x)[g[∅], 13/x] but not U’ ⊨ OlderThan(paige, x))[g[∅], 13/x]. So, (2) is not a model-theoretic consequence of (1). Consider the following sentences. (3) (Admires(evan, paige) → Admires(paige, kelly)) (4) (Admires(paige, kelly) → Admires(kelly, beth)) (5) (Admires(evan, paige) → Admires(kelly, beth)) (5) is a model-theoretic consequence of (3) and (4). For assume otherwise. That is assume, that there is a structure U” such that (i) U” ⊨ (Admires(evan, paige) → Admires(paige, kelly))[g[∅]] (ii) U” ⊨ (Admires(paige, kelly) → Admires(kelly, beth))[g[∅]] but not (iii) U” ⊨ (Admires(evan, paige) → Admires(kelly, beth))[g[∅]]. By (V), from the assumption that (iii) is false, it follows that U” ⊨ Admires(evan, paige)[g[∅]] and not U” ⊨ Admires(kelly, beth)[g[∅]]. Given the former, in order for (i) to hold according to (V) it must be the case that U” ⊨ Admires(paige, kelly))[g[∅]]. But then it is true that U” ⊨ Admires(paige, kelly))[g[∅]] and false that U” ⊨ Admires(kelly, beth)[g[∅]], which, again appealing to (V), contradicts our assumption (ii). Hence, there is no such U”, and so (5) is a model-theoretic consequence of (3) and (4). Here are some more examples of the model-theoretic consequence relation in action. (6) ∃xMale(x) (7) ∃xBrother(x, shannon) (8) ∃x(Male(x) & Brother(x, shannon)) (8) is not a model-theoretic consequence of (6) and (7). Consider the following structure U”’. D = {1, 2, 3} For all M-individual constants c, I[U”’](c) = 1. I[U”’](Male) = {2}, I[U”’](Brother) = {<3, 1>}. For all other M-predicates P, I[U”’](P) = ∅. Appealing to the satisfaction clauses (I), (III), and (VI), it is fairly straightforward to see that the structure U”’ is a model of (6) and (7) but not of (8). For example, U”’ is not a model of (8) for there is no element d of D and assignment [d/x] such that U”’ ⊨ (Male(x) & Brother(x, shannon))[g[∅], d/x]. Consider the following two sentences (9) Female(shannon) (10) ∃x Female(x) (10) is a model-theoretic consequence of (9). For an arbitrary M-structure U, if U ⊨ Female(shannon)[g[∅]], then by satisfaction clause (I), shannon[g[∅]] is in I[U](Female), and so there is at least one element of D, shannon[g[∅]], in I[U](Female). Consequently, by (VI), U ⊨∃x Female(x)[g[∅]]. For a sentence X of M, we write ⊨ X. to mean that X is a model-theoretic consequence of the empty set of sentences. This means that every M-structure is a model of X. Such sentences represent logical truths; it is not logically possible for them to be false. For example, ⊨ (∀x Male(x) → ∃x Male(x)) is true. Here’s one explanation why. Let U be an arbitrary M-structure. We now show that U ⊨ (∀x Male(x) → ∃x Male(x))[g[∅]]. If U ⊨ ∀x Male(x) [g[∅]] holds, then by (VII) for every element d of the domain D, U ⊨ Male(x)[g[∅], d/x]. But we know that D is non-empty, by the requirements on structures (see the beginning of Section 4.1), and so D has at least one element d. Hence for at least one element d of D, U ⊨ Male(x)[g[∅], d/x], that is by (VI), U ⊨∃x Male(x))[g[∅]]. So, if U ⊨ (∀x Male(x)[g[∅]] then U ⊨∃x Male (x))[g[∅]], and, therefore according to (V), U ⊨ (∀x Male(x) → ∃x Male(x))[g[∅]]. 
Since U is arbitrary, this establishes ⊨ (∀x Male(x) → ∃x Male(x)). If we treat ‘=’ as a logical constant and require that for all M-structures U, I[U](=) = {<d, d’> ∈ D^2| d = d’}, then M-sentences asserting that identity is reflexive, symmetrical, and transitive are true in every M-structure, that is the following hold. ⊨ ∀x(x = x) ⊨ ∀x∀y((x = y) → (y = x)) ⊨ ∀x∀y∀z(((x = y) & (y = z)) → (x = z)) Structures which assign {<d, d’> ∈ D^2| d = d’} to the identity symbol are sometimes called normal models. Letting v)v occurs free, ∀x∀y((x = y) → (Ψ(x) → Ψ(y))) is an instance of the principle that identicals are indiscernible—if x = y then whatever holds of x holds of y—and it is true in every M-structure U that is a normal model. Treating ‘=’ as a logical constant (which is standard) requires that we restrict the class of M-structures appealed to in the above model-theoretic definition of logical consequence to those that are normal models. 5. The Status of the Model-Theoretic Characterization of Logical Consequence Logical consequence in language M has been defined in terms of the model-theoretic consequence relation. What is the status of this definition? We answered this question in part in Logical Consequence, Deductive-Theoretic Conceptions: Section 5a. by highlighting Tarski’s argument for holding that the model-theoretic conception of logical consequence is more basic than any deductive-system account of it. Tarski points to the fact that there are languages for which valid principles of inference can’t be represented in a deductive-system, but the logical consequence relation they determine can be represented model-theoretically. In what follows, we identify the type of definition the model-theoretic characterization of logical consequence is, and then discuss its adequacy. a. The model-theoretic characterization is a theoretical definition of logical consequence In order to determine the success of the model-theoretic characterization, we need to know what type of definition it is. Clearly it is not intended as a lexical definition. As Tarski’s opening passage in his (1936) makes clear, a theory of logical consequence need not yield a report of what ‘logical consequence’ means. On other hand, it is clear that Tarski doesn’t see himself as offering just a stipulative definition. Tarski is not merely stating how he proposes to use ‘logical consequence’ and ‘logical truth’ (but see Tarski 1986) any more than Newton was just proposing how to use certain words when he defined force in terms of mass and acceleration. Newton was invoking a fundamental conceptual relationship in order to improve our understanding of the physical world. Similarly, Tarski’s definition of ‘logical consequence’ in terms of model-theoretic consequence is supposed to help us formulate a theory of logical consequence that deepens our understanding of what Tarski calls the common concept of logical consequence. Tarski thinks that the logical consequence relation is commonly regarded as necessary, formal, and a priori . As Tarski (1936, p. 409) says, “The concept of logical consequence is one of those whose introduction into a field of strict formal investigation was not a matter of arbitrary decision on the part of this or that investigator; in defining this concept efforts were made to adhere to the common usage of the language of everyday life.” Let’s follow this approach in Tarski’s (1936) and treat the model-theoretic definition as a theoretical definition of ‘logical consequence’. 
The questions raised are whether the Tarskian model-theoretic definition of logical consequence leads to a good theory and whether it improves our understanding of logical consequence. In order to sketch a framework for thinking about this question, we review the key moves in the Tarskian analysis. In what follows, K is an arbitrary set of sentences from a language L, and X is any sentence from L. First, Tarski observes what he takes to be the commonly regarded features of logical consequence (necessity, formality, and a prioricity) and makes the following claim. (1) X is a logical consequence of K if and only if (a) it is not possible for all the K to be true and X false, (b) this is due to the forms of the sentences, and (c) this is known a priori. Tarski’s deep insight was to see the criteria, listed in bold, in terms of the technical notion of truth in a structure. The key step in his analysis is to embody the above criteria (a)-(c) in terms of the notion of a possible interpretation of the non-logical terminology in sentences. Substituting for what is in bold in (1) we get (2) X is a logical consequence of K if and only if there is no possible interpretation of the non-logical terminology of the language according to which all the sentences in K are true and X is The third step of the Tarskian analysis of logical consequence is to use the technical notion of truth in a structure or model to capture the idea of a possible interpretation. That is, we understand there is no possible interpretation of the non-logical terminology of the language according to which all of the sentences in K are true and X is false in terms of: Every model of K is a model of X, that is, K ⊨ X. To elaborate, as reflected in (2), the analysis turns on a selection of terms as logical constants. This is represented model-theoretically by allowing the interpretation of the non-logical terminology to change from one structure to another, and by making the interpretation of the logical constants invariant across the class of structures. Then, relative to a set of terms treated as logical, the Tarskian, model-theoretic analysis is committed to (3) X is a logical consequence of K if and only if K ⊨ X. (4) X is a logical truth, that is, it is logically impossible for X to be false, if and only if ⊨ X. As a theoretical definition, we expect the ⊨-relation to reflect the essential features of the common concept of logical consequence. By Tarski’s lights, the ⊨-consequence relation should be necessary, formal, and a priori. Note that model theory by itself does not provide the means for drawing a boundary between the logical and the non-logical. Indeed, its use presupposes that a list of logical terms is in hand. For example, taking Sister and Female to be logical constants, the consequence relation from (A) ‘Sister(kelly, paige)’ to (B) ‘Female(kelly)’ is necessary, formal and a priori. So perhaps (B) should be a logical consequence of (A). The fact that (B) is not a model-theoretic consequence of (A) is due to the fact that the interpretation of the two predicates can vary from one structure to another. To remedy this we could make the interpretation of the two predicates invariant so that ‘∀x(∃y Sister(x, y) → Female(x))’ is true in all structures, and, therefore if (A) is true in a structure, (B) is too. The point here is that the use of models to capture the logical consequence relation requires a prior choice of what terms to treat as logical. 
This is, in turn, reflected in the identification of the terms whose interpretation is constant from one structure to another. So in assessing the success of the Tarskian model-theoretic definition of logical consequence for a language L, two issues arise. First, does the model-theoretic consequence relation reflect the salient features of the common concept of logical consequence? Second, is the boundary in L between logical and non-logical terms correctly drawn? In other words: what in L qualifies as a logical constant? Both questions are motivated by the adequacy criteria for theoretical definitions of logical consequence. They are central questions in the philosophy of logic and their significance is at least partly due to the prevalent use of model theory in logic to represent logical consequence in a variety of languages. In what follows, I sketch some responses to the two questions that draw on contemporary work in philosophy of logic. I begin with the first question. b. Does the model-theoretic consequence relation reflect the salient features of the common concept of logical consequence? The ⊨-consequence relation is formal. Also, a brief inspection of the above justifications that K ⊨ X obtain for given K and X reveals that the ⊨-consequence relation is a priori. Does the ⊨-consequence relation capture the modal element in the common concept of logical consequence? There are critics who argue that the model-theoretic account lacks the conceptual resources to rule out the possibility of there being logically possible situations in which sentences in K are true and X is false but no structure U such that U ⊨ K and not U ⊨ X. Kneale (1961) is an early critic, and Etchemendy (1988, 1999) offers a sustained and multi-faceted attack. We follow Etchemendy. Consider the following three sentences. (1) (Female(shannon) & ~Married(shannon, matt)) (2) (~Female(matt) & Married(beth, matt)) (3) ~Female(beth) (3) is neither a logical nor a model-theoretic consequence of (1) and (2). However, in order for a structure to make (1) and (2) true but not (3) its domain must have at least three elements. If the world contained, say, just two things, then there would be no such structure and (3) would be a model-theoretic consequence of (1) and (2). But in this scenario, (3) would not be a logical consequence of (1) and (2) because it would still be logically possible for the world to be larger and in such a possible situation (1) and (2) can be interpreted true and (3) false. The problem raised for the model-theoretic account of logical consequence is that we do not think that the class of logically possible situations varies under different assumptions as to the cardinality of the world’s elements. But the class of structures surely does since they are composed of worldly elements. This is a tricky criticism. Let’s look at it from a slightly different vantagepoint. We might think that the extension of the logical consequence relation for an interpreted language such as our language M about the McKeons is necessary. For example, it can’t be the case that for some K and X, even though X isn’t a logical consequence of a set K of sentences, X could be. So, on the supposition that the world contains less, the extension of the logical consequence relation should not expand. However, the extension of the model-theoretic consequence does expand. For example, (3) is not, in fact, a model-theoretic consequence of (1) and (2), but it would be if there were just two things. 
This is evidence that the model-theoretic characterization has failed to capture the modal notion inherent in the common concept of logical consequence. In defense of Tarski (see Ray 1999 and Sher 1991 for defenses of the Tarskian analysis against Etchemendy), one might question the force of the criticism because it rests on the supposition that it is possible for there to be just finitely many things. How could there be just two things? Indeed, if we countenance an infinite totality of necessary existents such as abstract objects (e.g., pure sets), then the class of structures will be fixed relative to an infinite collection of necessary existents, and the above criticism that turns on it being possible that there are just n things for finite n doesn’t go through (for discussion see McGee 1999). One could reply that while it is metaphysically impossible for there to be merely finitely many things it is nevertheless logically possible and this is relevant to the modal notion in the concept of logical consequence. This reply requires the existence of primitive, basic intuitions regarding the logical possibility of there being just finitely many things. However, intuitions about possible cardinalities of worldly individuals—not informed by mathematics and science—tend to run stale. Consequently, it is hard to debate this reply: one either has the needed logical intuitions, or not. What is clear is that our knowledge of what is a model-theoretic consequence of what in a given L depends on our knowledge of the class of L-structures. Since such structures are furniture of the world, our knowledge of the model-theoretic consequence relation is grounded on knowledge of substantive facts about the world. Even if such knowledge is a priori, it is far from obvious that our a priori knowledge of the logical consequence relation is so substantive. One might argue that knowledge of what follows from what shouldn’t turn on worldly matters of fact, even if they are necessary and a priori (see the discussion of the locked room metaphor in Logical Consequence, Philosophical Considerations: Section 2.2.1). If correct, this is a strike against the model-theoretic definition. However, this standard logical positivist line has been recently challenged by those who see logic penetrated and permeated by metaphysics (e.g., Putnam 1971, Almog 1989, Sher 1991, Williamson 1999). We illustrate the insight behind the challenge with a simple example. Consider the following two sentences. (4) ∃x(Female(x) & Sister(x, evan)) (5) ∃x Female(x) (5) is a logical consequence of (4), that is, there is no domain for the quantifiers and no interpretation of the predicates and the individual constant in that domain which makes (4) true and not (5). Why? Because on any interpretation of the non-logical terminology, (4) is true just in case the intersection of the set of objects that satisfy Female(x) and the set of objects that satisfy Sister(x, evan) is non-empty. If this obtains, then the set of objects that satisfy Female(x) is non-empty and this makes (5) true. The basic metaphysical truth underlying the reasoning here is that for any two sets, if their intersection is non-empty, then neither set is the empty set. This necessary and a priori truth about the world, in particular about its set-theoretic part, is an essential reason why (5) follows from (4). 
This approach, reflected in the model-theoretic consequence relation (see Sher 1996), can lead to an intriguing view of the formality of logical consequence reminiscent of the pre-Wittgensteinian views of Russell and Frege. Following the above, the consequence relation from (4) to (5) is formal because the metaphysical truth on which it turns describes a formal (structural) feature of the world. In other words: it is not possible for (4) to be true and (5) false because for any extensions of P, P’, if an object α satisfies (P(v) & P’(v, n)), then α satisfies P(v). According to this vision of the formality of logical consequence, the consequence relation between (4) and (5) is formal because that clause expresses a formal feature of reality. Russell writes that, “Logic, I should maintain, must no more admit a unicorn than zoology can; for logic is concerned with the real world just as truly as zoology, though with its more abstract and general features” (Russell 1919, p. 169). If we take the abstract and general features of the world to be its formal features, then Russell’s remark captures the view of logic that emerges from anchoring the necessity, formality and a priority of logical consequence in the formal features of the world. The question arises as to what counts as a formal feature of the world. If we say that all set-theoretic truths depict formal features of the world, including claims about how many sets there are, then this would seem to justify making ∃x∃y~(x = y) (that is, there are at least two individuals) a logical truth since it is necessary, a priori, and a formal truth. To reflect model-theoretically that such sentences, which consist just of logical terminology, are logical truths, we would require that the domain of a structure simply be the collection of the world’s individuals. See Sher (1991) for an elaboration and defense of this view of the formality of logical truth and consequence. See Shapiro (1993) for further discussion and criticism of the project of grounding our logical knowledge on primitive intuitions of logical possibility instead of on our knowledge of metaphysical truths. Part of the difficulty in reaching a consensus with respect to whether or not the model-theoretic consequence relation reflects the salient features of the common concept of logical consequence is that philosophers and logicians differ over what the features of the common concept are. Some offer accounts of the logical consequence relation according to which it is not a priori (e.g., see Koslow 1999, Sher 1991, and see Hanson 1997 for criticism of Sher) or deny that it even need be strongly necessary (Smiley 1995, 2000, section 6). Here we illustrate with a quick example. Given that we know that a McKeon only admires those who are older (that is, we know that (a) ∀x∀y(Admires(x, y) → OlderThan(y, x))), wouldn’t we take (7) to be a logical consequence of (6)?
(6) Admires(paige, kelly)
(7) OlderThan(kelly, paige)
A Tarskian response is that (7) is not a consequence of (6) alone, but of (6) plus (a). So in thinking that (7) follows from (6), one assumes (a). A counter suggestion is to say that (7) is a logical consequence of (6) for if (6) is true, then necessarily-relative-to-the-truth-of-(a) (7) is true. The modal notion here is a weakened sense of necessity: necessity relative to the truth of a collection of sentences, which in this case is composed of (a). Since (a) is not a priori, neither is the consequence relation between (6) and (7).
The motive here seems to be that this conception of modality is inherent in the notion of logical consequence that drives deductive inference in science, law, and other fields outside of the logic classroom. This supposes that a theory of logical consequence must not only account for the features of the intuitive concept of logical consequence but also reflect the intuitively correct deductive inferences. After all, the logical consequence relation is the foundation of deductive inference: it is not correct to deductively infer B from A unless B is a logical consequence of A. Referring to our example, in a conversation where (a) is a truth that is understood and accepted by the conversants, the inference from (6) to (7) seems legitimate. Hence, this should be supported by an accompanying concept of logical consequence. This idea of construing the common concept of logical consequence in part by the lights of basic intuitions about correct inferences is reflected in the Relevance logician’s objection to the Tarskian account. The Relevance logician claims that X is not a logical consequence of K unless K is relevant to X. For example, consider the following two pairs of sentences.
First pair:
(1) (Female(evan) & ~Female(evan))
(2) Admires(kelly, shannon)
Second pair:
(1) Admires(kelly, paige)
(2) (Female(evan) v ~Female(evan))
In the first pair, (1) is logically false, and in the second, (2) is a logical truth. Hence it isn’t possible for (1) to be true and (2) false. Since this seems to be formally determined and a priori, for each pair (2) is a logical consequence of (1) according to Tarski. Against this Anderson and Belnap write, “the fancy that relevance is irrelevant to validity [that is logical consequence] strikes us as ludicrous, and we therefore make an attempt to explicate the notion of relevance of A to B” (Anderson and Belnap 1975, pp. 17-18). The typical support for the relevance conception of logical consequence draws on intuitions regarding correct inference, e.g., it is counterintuitive to think that it is correct to infer (2) from (1) in either pair, for what does being a female have to do with who one admires? Would you think it correct to infer, say, that Admires(kelly, shannon) on the basis of (Female(evan) & ~Female(evan))? For further discussion of the different types of relevance logic and more on the relevant philosophical issues see Haack (1978, pp. 198-203) and Read (1995, pp. 54-63). The bibliography in Haack (1996, pp. 264-265) is helpful. For further discussion on relevance logic, see Logical Consequence, Deductive-Theoretic Conceptions: Section 5.2.1. Our question is, does the model-theoretic consequence relation reflect the essential features of the common concept of logical consequence? Our discussion illustrates at least two things. First, it isn’t obvious that the model-theoretic definition of logical consequence reflects the Tarskian portrayal of the common concept. One option, not discussed above, is to deny that the model-theoretic definition is a theoretical definition and argue for its utility simply on the basis that it is extensionally equivalent with the common concept (see Shapiro 1998). Our discussion also illustrates that Tarski’s identification of the essential features of logical consequence is disputed. One reaction, not discussed above, is to question the presupposition of the debate and take a more pluralist approach to the common concept of logical consequence. On this line, it is not so much that the common concept of logical consequence is vague as it is ambiguous.
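The classical verdict on the two pairs is easy to verify by truth table. The sketch below (my own, with the atomic sentences treated simply as Boolean variables) shows that no assignment makes the first pair’s (1) true and that every assignment makes the second pair’s (2) true, which is why, on the Tarskian account, anything follows from the former and the latter follows from anything:

    # 'f' and 'a' stand in for the truth values of the atomic sentences
    # Female(evan) and Admires(kelly, shannon).
    rows = [(f, a) for f in (False, True) for a in (False, True)]
    contradiction_ever_true = any(f and not f for f, a in rows)      # (Female(evan) & ~Female(evan))
    tautology_always_true   = all(f or not f for f, a in rows)       # (Female(evan) v ~Female(evan))
    explosion_holds         = all(a for f, a in rows if f and not f) # vacuously true: no such row
    print(contradiction_ever_true, tautology_always_true, explosion_holds)   # False True True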
At minimum, to say that a sentence X is a logical consequence of a set K of sentences is to say that X is true in every circumstance (that is logically possible situation) in which the sentences in K are true. “Different disambiguations of this notion arise from taking different extensions of the term ‘circumstance’ ” (Restall 2002, p. 427). If we disambiguate the relevant notion of ‘circumstance’ by the lights of Tarski, ‘Admires (kelly, paige)’ is a logical consequence of ‘(Female(evan) & ~Female(evan))’. If we follow the Relevance logician, then not. There is no fact of the matter about whether or not the first sentence is a logical consequence of the second independent of such a disambiguation.
c. What is a logical constant?
We turn to the second, related issue of what qualifies as a logical constant. Tarski (1936, 418-419) writes,
No objective grounds are known to me which permit us to draw a sharp boundary between [logical and non-logical terms]. It seems possible to include among logical terms some which are usually regarded by logicians as extra-logical without running into consequences which stand in sharp contrast to ordinary usage.
And at the end of his (1936), he tells us that the fluctuation in the common usage of the concept of consequence would be accurately reflected in a relative concept of logical consequence, that is a relative concept “which must, on each occasion, be related to a definite, although in greater or less degree arbitrary, division of terms into logical and extra logical” (p. 420). Unlike the relativity described in the previous paragraph, which speaks to the features of the concept of logical consequence, the relativity contemplated by Tarski concerns the selection of logical constants. Tarski’s observations of the common concept do not yield a sharp boundary between logical and non-logical terms. It seems that the sentential connectives and the quantifiers of our language M about the McKeons qualify as logical if any terms of M do. We’ve also followed many logicians and included the identity predicate as logical. (See Quine 1986 for considerations against treating ‘=’ as a logical constant.) But why not include other predicates such as ‘OlderThan’?
(1) OlderThan(kelly, paige)
(2) ~OlderThan(paige, kelly)
(3) ~OlderThan(kelly, kelly)
Then the consequence relation from (1) to (2) is necessary, formal, and a priori and the truth of (3) is necessary, formal and also a priori. If treating ‘OlderThan’ as a logical constant does not do violence to our intuitions about the features of the common concept of logical consequence and truth, then it is hard to see why we should forbid such a treatment. By the lights of the relative concept of logical consequence, there is no fact of the matter about whether (2) is a logical consequence of (1) since it is relative to the selection of ‘OlderThan’ as a logical constant. On the other hand, Tarski hints that even by the lights of the relative concept there is something wrong in thinking that B follows from A and B only relative to taking ‘and’ as a logical constant. Rather, B follows from A and B we might say absolutely since ‘and’ should be on everybody’s list of logical constants. But why do ‘and’ and the other sentential connectives, along with the identity predicate and the quantifiers have more of a claim to logical constancy than, say, ‘OlderThan’? Tarski (1936) offers no criteria of logical constancy that help answer this question.
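Here is a small check of the claim just made (an illustrative sketch; the age ordering is read off the extension of ‘OlderThan’ displayed in the next paragraph, and everything else — variable names, the use of Python sets — is mine). Holding the extension of ‘OlderThan’ fixed while letting the referents of ‘kelly’ and ‘paige’ vary arbitrarily over the domain, the step from (1) to (2) never fails and (3) never fails:

    from itertools import product

    McKeons = ["Beth", "Matt", "Shannon", "Kelly", "Paige", "Evan"]   # oldest to youngest
    older = {(McKeons[i], McKeons[j]) for i in range(6) for j in range(6) if i < j}

    # Treat 'OlderThan' as a logical constant: its extension stays fixed while the
    # names 'kelly' (here a) and 'paige' (here b) may denote any McKeons at all.
    one_to_two = all((b, a) not in older
                     for a, b in product(McKeons, repeat=2) if (a, b) in older)
    three      = all((a, a) not in older for a in McKeons)
    print(one_to_two, three)   # True True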
On the contemporary scene, there are three general approaches to the issue of what qualifies as a logical constant. One approach is to argue for an inherent property (or properties) of logical constancy that some expressions have and others lack. For example, topic neutrality is one feature traditionally thought to essentially characterize logical constants. The sentential connectives, the identity predicate, and the quantifiers seem topic neutral: they seem applicable to discourse on any topic. The predicates other than identity such as ‘OlderThan’ do not appear to be topic neutral, at least as standardly interpreted, e.g., ‘OlderThan’ has no application in the domain of natural numbers. One way of making the concept of topic neutrality precise is to follow Tarski’s suggestion in his (1986) that the logical notions expressed in a language L are those notions that are invariant under all one-one transformations of the domain of discourse onto itself. A one-one transformation of the domain of discourse onto itself is a one-one function whose domain and range coincide with the domain of discourse. And a one-one function is a function that always assigns different values to different objects in its domain (that is, for all x and y in the domain of f, if f(x) = f(y), then x = y). Consider ‘OlderThan’. By Tarski’s lights, the notion expressed by the predicate is its extension, that is, the set of ordered pairs <d, d’> such that d is older than d’. Recall that the extension is: {<Beth, Matt>, <Beth, Shannon>, <Beth, Kelly>, <Beth, Paige>, <Beth, Evan>, <Matt, Shannon>, <Matt, Kelly>, <Matt, Paige>, <Matt, Evan>, <Shannon, Kelly>, <Shannon, Paige>, <Shannon, Evan>, <Kelly, Paige>, <Kelly, Evan>, <Paige, Evan>}. If ‘OlderThan’ is a logical constant its extension (the notion it expresses) should be invariant under every one-one transformation of the domain of discourse (that is, the set of McKeons) onto itself. A set is invariant under a one-one transformation f when the set is carried onto itself by the transformation. For example, the extension of ‘Female’ is invariant under f when for every d, d is a female if and only if f(d) is. ‘OlderThan’ is invariant under f when <d, d’> is in the extension of ‘OlderThan’ if and only if <f(d), f(d’)> is. Clearly, the extensions of the Female predicate and the OlderThan relation are not invariant under every one-one transformation. For example, Beth is older than Matt, but f(Beth) is not older than f(Matt) when f(Beth) = Evan and f(Matt) = Paige. Compare the identity relation: it is invariant under every one-one transformation of the domain of McKeons because it holds for each and every McKeon. The invariance condition makes precise the concept of topic neutrality. Any expression whose extension is altered by a one-one transformation must discriminate among elements of the domain, making the relevant notions topic-specific. The invariance condition can be extended in a straightforward way to the quantifiers and sentential connectives (see McCarthy 1981 and McGee 1997). Here I illustrate with the existential quantifier. Let Ψ be a well-formed formula with ‘x’ as its only free variable, and let U^M be the intended structure for our language M about the McKeons. Let f be an arbitrary one-one transformation of the domain D of McKeons onto itself. The function f determines an interpretation I’ for Ψ in the range D’ of f.
The existential quantifier satisfies the invariance requirement for U^M ⊨∃x Ψ if and only if U ⊨∃x Ψ for every U derived by a one-one transformation f of the domain D of U^M (we say that the U‘s are isomorphic with U^M ). For example, consider the following existential quantification. ∃x Female(x) This is true in the intended structure for our language M about the McKeons (that is, U^M ⊨∃x Female(x)[g[∅]]) ultimately because the set of elements that satisfy ‘Female(x)’ on some variable assignment that extends g[∅] is non-empty (recall that Beth, Shannon, Kelly, and Paige are females). The cardinality of the set of McKeons that satisfy an M-formula is invariant under every one-one transformation of the domain of McKeons onto itself. Hence, for every U isomorphic with U^M, the set of elements from D^U that satisfy ‘Female(x)’ on some variable assignment that extends g[∅] is non-empty and so U ⊨∃x Female(x)[g[∅]]. Speaking to the other part of the invariance requirement given at the end of the previous paragraph, clearly for every U isomorphic with U^M, if U ⊨∃x Female(x)[g[∅]], then U^M ⊨∃x Female(x)[g[∅]] (since U is isomorphic with itself). Crudely, the topic neutrality of the existential quantifier is confirmed by the fact that it is invariant under all one-one transformations of the domain of discourse onto itself. Key here is that the cardinality of the subset of the domain D that satisfies an L-formula under an interpretation is invariant under every one-one transformation of D onto itself. For example, if at least two elements from D satisfy a formula on an interpretation of it, then at least two elements from D’ satisfy the formula under the I’ induced by f. This makes not only ‘All’ and ‘Some’ topic neutral, but also any cardinality quantifier such as ‘Most’, ‘Finitely many’, ‘Few’, ‘At least two’, etc. The view suggested in Tarski (1986, p. 149) is that the logic of a language L is the science of all notions expressible in L which are invariant under one-one transformations of L’s domain of discourse. For further discussion, defense of, and extensions of the Tarskian invariance requirement on logical constancy, in addition to McCarthy (1981) and McGee (1997), see Sher (1989, 1991). A second approach to what qualifies as a logical constant is not to make topic neutrality a necessary condition for logical constancy. This undercuts at least some of the significance of the invariance requirement. Instead of thinking that there is an inherent property of logical constancy, we allow the choice of logical constants to depend, at least in part, on the needs at hand, as long as the resulting consequence relation reflects the essential features of the intuitive, pre-theoretic concept of logical consequence. I take this view to be very close to the one that we are left with by default in Tarski (1936). The approach is suggested in Prior (1976) and developed in related but different ways in Hanson (1996) and Warmbrod (1999). It amounts to regarding logic in a strict sense and loose sense. Logic in the strict sense is the science of what follows from what relative to topic neutral expressions, and logic in the loose sense is the study of what follows from what relative to both topic neutral expressions and those topic centered expressions of interest that yield a consequence relation possessing the salient features of the common concept. 
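The invariance test itself is easy to run on the McKeon domain. In the sketch below (helper names and data structures are mine; the extensions of ‘Female’ and ‘OlderThan’ are the ones given in the article), the identity relation and the cardinality of the set of satisfiers survive every one-one transformation, while the extensions of ‘Female’ and ‘OlderThan’ do not:

    from itertools import permutations

    McKeons  = ["Beth", "Matt", "Shannon", "Kelly", "Paige", "Evan"]   # oldest to youngest
    females  = {"Beth", "Shannon", "Kelly", "Paige"}
    older    = {(McKeons[i], McKeons[j]) for i in range(6) for j in range(6) if i < j}
    identity = {(d, d) for d in McKeons}

    def set_invariant(ext, f):    # is a unary extension carried onto itself by f?
        return {f[d] for d in ext} == ext

    def rel_invariant(ext, f):    # is a binary extension carried onto itself by f?
        return {(f[d], f[e]) for d, e in ext} == ext

    transformations = [dict(zip(McKeons, p)) for p in permutations(McKeons)]
    print(all(rel_invariant(identity, f) for f in transformations))   # True
    print(all(set_invariant(females,  f) for f in transformations))   # False
    print(all(rel_invariant(older,    f) for f in transformations))   # False
    # The cardinality of the set of satisfiers never changes, which is why
    # '∃x Female(x)' keeps its truth value in every isomorphic structure:
    print(all(len({f[d] for d in females}) == len(females) for f in transformations))   # True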
Finally, a third approach to the issue of what makes an expression a logical constant is simply to reject the view of logical consequence as a formal consequence relation, thereby nullifying the need to distinguish logical terminology in the first place (see Etchemendy 1983 and Bencivenga 1999). We just say, for example, that X is a logical consequence of a set K of sentences if the supposition that all of the sentences in K are true and X false violates the meaning of the component terminology. Hence, ‘Female(kelly)’ is a logical consequence of ‘Sister(kelly, paige)’ simply because the supposition otherwise violates the meaning of the predicates. Whether or not ‘Female’ and ‘Sister’ are logical terms doesn’t come into play.
6. Conclusion
Using the first-order language M as the context for our inquiry, we have discussed the model-theoretic conception of the conditions that must be met in order for a sentence to be a logical consequence of others. This theoretical characterization is motivated by a distinct development of the common concept of logical consequence. The issue of the nature of logical consequence, which intersects with other areas of philosophy, is still a matter of debate. Any full coverage of the topic would involve study of the logical consequence relation between sentences from other types of languages such as modal languages (containing necessity and possibility operators) (see Hughes and Cresswell 1996) and second-order languages (containing variables that range over properties) (see Shapiro 1991). See also the entries, Logical Consequence, Philosophical Considerations, and Logical Consequence, Deductive-Theoretic Conceptions, in the encyclopedia.
7. References and Further Reading
• Almog, J. (1989): “Logic and the World”, pp. 43-65 in Themes From Kaplan, ed. J. Almog, J. Perry, and H. Wettstein. New York: Oxford University Press.
• Anderson, A. R., and N. Belnap (1975): Entailment: The Logic of Relevance and Necessity. Princeton: Princeton University Press.
• Bencivenga, E. (1999): “What is Logic About?”, pp. 5-19 in Varzi (1999).
• Etchemendy, J. (1983): “The Doctrine of Logic as Form”, Linguistics and Philosophy 6, pp. 319-334.
• Etchemendy, J. (1988): “Tarski on truth and logical consequence”, Journal of Symbolic Logic 53, pp. 51-79.
• Etchemendy, J. (1999): The Concept of Logical Consequence. Stanford: CSLI Publications.
• Haack, S. (1978): Philosophy of Logics. Cambridge: Cambridge University Press.
• Haack, S. (1996): Deviant Logic, Fuzzy Logic. Chicago: The University of Chicago Press.
• Hanson, W. (1997): “The Concept of Logical Consequence”, The Philosophical Review 106, pp. 365-409.
• Hughes, G. E. and M. J. Cresswell (1996): A New Introduction to Modal Logic. London: Routledge.
• Kneale, W. (1961): “Universality and Necessity”, British Journal for the Philosophy of Science 12, pp. 89-102.
• Kneale, W. and M. Kneale (1986): The Development of Logic. Oxford: Clarendon Press.
• Koslow, A. (1999): “The Implicational Nature of Logic: A Structuralist Account”, pp. 111-155 in Varzi (1999).
• McCarthy, T. (1981): “The Idea of a Logical Constant”, Journal of Philosophy 78, pp. 499-523.
• McCarthy, T. (1998): “Logical Constants”, pp. 599-603 in Routledge Encyclopedia of Philosophy, vol. 5, ed. E. Craig. London: Routledge.
• McGee, V. (1999): “Two Problems with Tarski’s Theory of Consequence”, Proceedings of the Aristotelian Society 92, pp. 273-292.
• Priest, G. (1995): “Etchemendy and Logical Consequence”, Canadian Journal of Philosophy 25, pp. 283-292.
• Prior, A. (1976): “What is Logic?”, pp. 122-129 in Papers in Logic and Ethics, ed. P. T. Geach and A. Kenny. Amherst: University of Massachusetts Press.
• Putnam, H. (1971): Philosophy of Logic. New York: Harper & Row.
• Quine, W. V. (1986): Philosophy of Logic, 2nd ed. Cambridge: Harvard University Press.
• Ray, G. (1996): “Logical Consequence: A Defense of Tarski”, Journal of Philosophical Logic 25, pp. 617-677.
• Read, S. (1995): Thinking About Logic. Oxford: Oxford University Press.
• Restall, G. (2002): “Carnap’s Tolerance, Meaning, And Logical Pluralism”, Journal of Philosophy 99, pp. 426-443.
• Russell, B. (1919): Introduction to Mathematical Philosophy. London: Routledge, 1993 printing.
• Shapiro, S. (1991): Foundations without Foundationalism: A Case For Second-order Logic. Oxford: Clarendon Press.
• Shapiro, S. (1993): “Modality and Ontology”, Mind 102, pp. 455-481.
• Shapiro, S. (1998): “Logical Consequence: Models and Modality”, pp. 131-156 in The Philosophy of Mathematics Today, ed. Matthias Schirn. Oxford: Clarendon Press.
• Sher, G. (1989): “A Conception of Tarskian Logic”, Pacific Philosophical Quarterly 70, pp. 341-368.
• Sher, G. (1991): The Bounds of Logic: A Generalized Viewpoint. Cambridge, Mass: MIT Press.
• Sher, G. (1996): “Did Tarski Commit ‘Tarski’s Fallacy’?”, Journal of Symbolic Logic 61, pp. 653-686.
• Sher, G. (1999): “Is Logic a Theory of the Obvious?”, pp. 207-238 in Varzi (1999).
• Smiley, T. (1995): “A Tale of Two Tortoises”, Mind 104, pp. 725-36.
• Smiley, T. (1998): “Consequence, Conceptions of”, pp. 599-603 in Routledge Encyclopedia of Philosophy, vol. 2, ed. E. Craig. London: Routledge.
• Tarski, A. (1933): “Pojecie prawdy w jezykach nauk dedukcyjnych”, translated as “On the Concept of Truth in Formalized Languages”, pp. 152-278 in Tarski (1983).
• Tarski, A. (1936): “On the Concept of Logical Consequence”, pp. 409-420 in Tarski (1983).
• Tarski, A. (1983): Logic, Semantics, Metamathematics, 2nd ed. Indianapolis: Hackett Publishing.
• Tarski, A. (1986): “What are Logical Notions?”, History and Philosophy of Logic 7, pp. 143-154.
• Varzi, A., ed. (1999): European Review of Philosophy, vol. 4, The Nature of Logic. Stanford: CSLI Publications.
• Warmbrod, K. (1999): “Logical Constants”, Mind 108, pp. 503-538.
Author Information
Matthew McKeon
Email: mckeonm@msu.edu
Michigan State University
U. S. A.
Higher Ring Derivation and Intuitionistic Fuzzy Stability Abstract and Applied Analysis Volume 2012 (2012), Article ID 503671, 16 pages Research Article Higher Ring Derivation and Intuitionistic Fuzzy Stability Department of Mathematics, Mokwon University, Daejeon 302-729, Republic of Korea Received 3 May 2012; Accepted 12 June 2012 Academic Editor: Bing Xu Copyright © 2012 Ick-Soon Chang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. We take account of the stability of higher ring derivation in intuitionistic fuzzy Banach algebra associated to the Jensen type functional equation. In addition, we deal with the superstability of higher ring derivation in intuitionistic fuzzy Banach algebra with unit. 1. Introduction and Preliminaries The stability problem of functional equations has originally been formulated by Ulam [1]: under what condition does there exist a homomorphism near an approximate homomorphism? Hyers [2] answered the problem of Ulam under the assumption that the groups are Banach spaces. A generalized version of the theorem of Hyers for approximately additive mappings was given by Aoki [3] and for approximately linear mappings was presented by Rassias [4] by considering an unbounded Cauchy difference. The paper work of Rassias [4] has had a lot of influence in the development of what is called the generalized Hyers-Ulam stability of functional equations. Since then, more generalizations and applications of the generalized Hyers-Ulam stability to a number of functional equations and mappings have been investigated (e.g., [5–7]). In particular, Badora [8] gave a generalization of the Bourgin's result [9], and he also dealt with the stability and the Bourgin-type superstability of derivations in [10]. Recently, fuzzy version is discussed in [11, 12]. Quite recently, the intuitionistic fuzzy stability problem for Jensen functional equation and cubic functional equation is considered in [13–15], respectively, while the idea of intuitionistic fuzzy normed space was introduced in [16], and there are some recent and important results which are directly related to the central theme of this paper, that is, intuitionistic fuzziness (see e.g., [17–20]). In this paper, we establish the stability of higher ring derivation in intuitionistic fuzzy Banach algebra associated to the Jensen type functional equation . Moreover, we consider the superstability of higher ring derivation in intuitionistic fuzzy Banach algebra with unit. We now recall some notations and basic definitions used in this paper. Definition 1.1 (see [5]). Let and be algebras over the real or complex field . Let be the set of the natural numbers. From , a sequence (resp., ) of additive operators from into is called a higher ring derivation of rank (resp., infinite rank) if the functional equation holds for each (resp., ) and for all . A higher ring derivation of additive operators on , particularly, is called strong if is an identity operator. Of course, a higher ring derivation of rank 0 from into (resp., a strong higher ring derivation of rank 1 on ) is a ring homomorphism (resp., a ring derivation). Note that a higher ring derivation is a generalization of both a ring homomorphism and a ring derivation. Definition 1.2. 
A binary operation is said to be a continuous t-norm if it satisfies the following conditions: (1) is associative and commutative, (2) is continuous, (3) for all whenever and for each . Definition 1.3. A binary operation is said to be a continuous t-conorm if it satisfies the following conditions: (1) is associative and commutative, (2) is continuous, (3) for all whenever and for each . Using the notions of continuous t-norm and t-conorm, Saadati and Park [16] have recently introduced the concept of intuitionistic fuzzy normed space as follows. Definition 1.4. The five-tuple is said to be an intuitionistic fuzzy normed space if is a vector space, is a continuous t-norm, is a continuous t-conorm, and are fuzzy sets on satisfying the following conditions. For every and ,(1),(2),(3) if and only if ,(4) for each ,(5),(6) is continuous, (7) and ,(8),(9) if and only if ,(10) for each ,(11), (12) is continuous, (13) and . In this case, is called an intuitionistic fuzzy norm. Example 1.5. Let be a normed space, , and for all . For all and every , consider Then is an intuitionistic fuzzy normed space. Example 1.6. Let be a normed space, , and for all . For all and every and , consider Then is an intuitionistic fuzzy normed space. Definition 1.7 (see [21]). The five-tuple is said to be an intuitionistic fuzzy normed algebra if is an algebra, is a continuous t-norm, is a continuous t-conorm, and are fuzzy sets on satisfying the conditions (1)–(13) of the Definition 1.4. Furthermore, for every and ,(14),(15). For an intuitionistic fuzzy normed algebra , we further assume that (16) and for all . The concepts of convergence and Cauchy sequences in an intuitionistic fuzzy normed space are studied in [16]. Let be an intuitionistic fuzzy normed space or intuitionistic fuzzy normed algebra. A sequence is said to be intuitionistic fuzzy convergent to if and for all . In this case, we write or as . A sequence in is said to be intuitionistic fuzzy Cauchy sequence if and for all and . An intuitionistic fuzzy normed space (resp., intuitionistic fuzzy normed algebra) is said to be complete if every intuitionistic fuzzy Cauchy sequence in is intuitionistic fuzzy convergent in . A complete intuitionistic fuzzy normed space (resp., intuitionistic fuzzy normed algebra) is also called an intuitionistic fuzzy Banach space (resp., intuitionistic fuzzy Banach algebra). 2. Stability of Higher Ring Derivation in Intuitionistic Fuzzy Banach Algebra As a matter of convenience in this paper, we use the following abbreviation: In addition, We begin with a generalized Hyers-Ulam theorem in intuitionistic fuzzy Banach space for the Jensen type functional equation. The following result is also the generalization of the theorem introduced in [13]. Theorem 2.1. Let be a vector space, and let be a mapping from to an intuitionistic fuzzy Banach space with . Suppose that is a function from to an intuitionistic fuzzy normed space such that for all and . If is a fixed integer, and for some real number with , then there exists a unique additive mapping such that , for all and , where Proof. Without loss of generality, we assume that . From (2.3) and (2.4), we get for all and . Again, by (2.3) and (2.4), we obtain for all and . Combining (2.7) and (2.8), we arrive at for all and . This implies that for all and . Now we define for all and . Then we have by assumption for all and . Using (2.10) and (2.12), we get for all and . Therefore, for all , we have for all and . Let and be given. Since and , there exists some such that . 
Since , there exists a positive integer such that for all . Then This shows that is a Cauchy sequence in . Since is complete, we can define a mapping by for all . Moreover, if we let in (2.14), then we get for all and . Therefore, we find that Next, we will show that is additive mapping. Note that On the other hand, (2.3) and (2.4) give the following: Letting in (2.18) and (2.19), we yield So we see that is additive mapping. Now, we approximate the difference between and in an intuitionistic fuzzy sense. By (2.17), we get for all and and sufficiently large . In order to prove the uniqueness of , we assume that is another additive mapping from to , which satisfies the inequality (2.5). Then for all and . Therefore, due to the additivity of and , we obtain that Since , and we get that is, and for all . So , which completes the proof. In particular, we can prove the preceding result for the case when . In this case, the mapping . We now establish a generalized Hyers-Ulam stability in intuitionistic fuzzy Banach algebra for the higher ring derivation. Theorem 2.2. Let be an algebra, and let be a sequence of mappings from to an intuitionistic fuzzy Banach algebra with for each . Suppose that is a function from to an intuitionistic fuzzy normed algebra such that for each , for all and , and that is a function from to an intuitionistic fuzzy normed space such that for each , for all , and . If is a fixed integer, , and for some real numbers and with and , then there exists a unique higher ring derivation of any rank such that for each , for all and . In this case, Moreover, the identity holds for each and all . Proof. It follows by Theorem 2.1 that for each and all , there exists a unique additive mapping given by satisfying (2.27) since is an intuitionistic fuzzy normed algebra. Without loss of generality, we suppose that . Now, we need to prove that the sequence satisfies the identity for each and all . It is observed that for each , for all and . On account of (2.26), we see that for each , for all and . Due to additivity of , for each , for all and . In addition, we feel that Letting in (2.31), (2.32), (2.33), and (2.34), we get and . This implies that for each and all . Using additivity of and (2.35), we find that So we obtain . Hence for each , for all and . This relation yields that for each , for all and . On the other hand, we see that Sending in (2.38) and ( 2.40), we have that for each , for all and . Thus, we conclude that for each and all . Therefore, by combining (2.35) and (2.42), we get the required result, which completes the proof. As a consequence of Theorem 2.2, we get the following superstability. Corollary 2.3. Let be an intuitionistic fuzzy Banach algebra with unit, and let a sequence of operators on satisfy for each , where is an identity operator. Suppose that is a function from to an intuitionistic fuzzy normed algebra satisfying (2.25) and (2.14) and that is a function from to an intuitionistic fuzzy normed space satisfying (2.26). If is a fixed integer, , and for some real numbers and with and , then is a strong higher ring derivation on . Proof. According to (2.30), we have for all , and so (=) is an identity operator on . By induction, we get the conclusion. If , then it follows from (2.29) that holds for all since contains the unit element. Let us assume that is valid for all and . Then (2.29) implies that for all . Since has the unit element, for all . Hence we conclude that for each and all . So this tells us that is a higher ring derivation of any rank from and . 
The proof of the corollary is complete. We remark that we can prove the preceding result for the case when and . The authors would like to thank the referees for giving useful suggestions and for the improvement of this paper. This work was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (no. 2012-0002410). 1. S. M. Ulam, A Collection of Mathematical Problems, Interscience Publishers, New York, NY, USA, 1960. 2. D. H. Hyers, “On the stability of the linear functional equation,” Proceedings of the National Academy of Sciences of the United States of America, vol. 27, pp. 222–224, 1941. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 3. T. Aoki, “On the stability of the linear transformation in Banach spaces,” Journal of the Mathematical Society of Japan, vol. 2, pp. 64–66, 1950. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 4. T. M. Rassias, “On the stability of the linear mapping in Banach spaces,” Proceedings of the American Mathematical Society, vol. 72, no. 2, pp. 297–300, 1978. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 5. Y.-S. Jung and I.-S. Chang, “On approximately higher ring derivations,” Journal of Mathematical Analysis and Applications, vol. 343, no. 2, pp. 636–643, 2008. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 6. R. Saadati, Y. J. Cho, and J. Vahidi, “The stability of the quartic functional equation in various spaces,” Computers & Mathematics with Applications, vol. 60, no. 7, pp. 1994–2002, 2010. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 7. R. Saadati and C. Park, “Non-Archimedian $L$-fuzzy normed spaces and stability of functional equations,” Computers & Mathematics with Applications, vol. 60, no. 8, pp. 2488–2496, 2010. View at Publisher · View at Google Scholar 8. R. Badora, “On approximate ring homomorphisms,” Journal of Mathematical Analysis and Applications, vol. 276, no. 2, pp. 589–597, 2002. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 9. D. G. Bourgin, “Approximately isometric and multiplicative transformations on continuous function rings,” Duke Mathematical Journal, vol. 16, pp. 385–397, 1949. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 10. R. Badora, “On approximate derivations,” Mathematical Inequalities & Applications, vol. 9, no. 1, pp. 167–173, 2006. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 11. A. K. Mirmostafaee and M. S. Moslehian, “Fuzzy almost quadratic functions,” Results in Mathematics, vol. 52, no. 1-2, pp. 161–177, 2008. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 12. A. K. Mirmostafaee and M. S. Moslehian, “Fuzzy versions of Hyers-Ulam-Rassias theorem,” Fuzzy Sets and Systems, vol. 159, no. 6, pp. 720–729, 2008. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 13. S. A. Mohiuddine, “Stability of Jensen functional equation in intuitionistic fuzzy normed space,” Chaos, Solitons & Fractals, vol. 42, no. 5, pp. 2989–2996, 2009. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 14. S. A. Mohiuddine, M. Cancan, and H. Şevli, “Intuitionistic fuzzy stability of a Jensen functional equation via fixed point technique,” Mathematical and Computer Modelling, vol. 54, no. 9-10, pp. 2403–2409, 2011. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 15. 
M. Mursaleen and S. A. Mohiuddine, “On stability of a cubic functional equation in intuitionistic fuzzy normed spaces,” Chaos, Solitons & Fractals, vol. 42, no. 5, pp. 2997–3005, 2009. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 16. R. Saadati and J. H. Park, “On the intuitionistic fuzzy topological spaces,” Chaos, Solitons and Fractals, vol. 27, no. 2, pp. 331–344, 2006. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 17. M. Mursaleen, V. Karakaya, and S. A. Mohiuddine, “Schauder basis, separability, and approximation property in intuitionistic fuzzy normed space,” Abstract and Applied Analysis, vol. 2010, Article ID 131868, 14 pages, 2010. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 18. M. Mursaleen and S. A. Mohiuddine, “Statistical convergence of double sequences in intuitionistic fuzzy normed spaces,” Chaos, Solitons & Fractals, vol. 41, no. 5, pp. 2414–2421, 2009. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 19. M. Mursaleen and S. A. Mohiuddine, “On lacunary statistical convergence with respect to the intuitionistic fuzzy normed space,” Journal of Computational and Applied Mathematics, vol. 233, no. 2, pp. 142–149, 2009. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 20. M. Mursaleen, S. A. Mohiuddine, and O. H. H. Edely, “On the ideal convergence of double sequences in intuitionistic fuzzy normed spaces,” Computers & Mathematics with Applications, vol. 59, no. 2, pp. 603–611, 2010. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 21. B. Dinda, T. K. Samanta, and U. K. Bera, “Intuitionistic fuzzy Banach algebra,” Bulletin of Mathematical Analysis and Applications, vol. 3, no. 3, pp. 273–281, 2011.
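The displayed formulas in Definitions 1.1–1.5 and in Section 2 are missing from this copy of the paper. For orientation only, here are the standard statements those definitions appear to be invoking; this is a reconstruction from the usual literature on higher derivations and on intuitionistic fuzzy normed spaces, written in LaTeX, and should not be read as a quotation of the paper’s own formulas.

    % Higher ring derivation (standard definition): a sequence (f_0, f_1, ..., f_m) of
    % additive operators f_j : A -> B is a higher ring derivation of rank m if
    \[
      f_n(xy) = \sum_{i+j=n} f_i(x)\, f_j(y) \qquad (0 \le n \le m,\; x, y \in A),
    \]
    % so that rank 0 reduces to a ring homomorphism and, when f_0 is the identity map,
    % the case n = 1 gives an ordinary ring derivation f_1(xy) = f_1(x) y + x f_1(y).

    % Continuous t-norm * and t-conorm \diamond (standard definitions): continuous,
    % associative, commutative maps [0,1] x [0,1] -> [0,1], monotone in each argument,
    % satisfying
    \[
      a * 1 = a, \qquad a \diamond 0 = a \qquad (a \in [0,1]).
    \]
    % A standard example of an intuitionistic fuzzy norm on a normed space (X, \|\cdot\|)
    % takes, for t > 0,
    \[
      \mu(x,t) = \frac{t}{t + \lVert x \rVert}, \qquad
      \nu(x,t) = \frac{\lVert x \rVert}{t + \lVert x \rVert},
    \]
    % with, for instance, a * b = \min\{a,b\} and a \diamond b = \max\{a,b\}.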
Types Of Matrices, Row, Column, Scalar, Unity - Transtutors
Types of Matrices Assignment Help
A matrix having a single row is called a row matrix, e.g. [1 3 5 7]. A matrix having a single column is called a column matrix. An m x n matrix A is said to be a square matrix if m = n, i.e. the number of rows equals the number of columns. The diagonal running from the upper left hand corner to the lower right hand corner is known as the leading diagonal or principal diagonal. For example, in a 3 × 3 square matrix whose diagonal entries are 1, 3, 5, the diagonal containing the elements 1, 3, 5 is the leading or principal diagonal.
A square matrix, all of whose elements except those in the leading diagonal are zero, is called a diagonal matrix. For a square matrix A = [a_ij] of order n × n to be a diagonal matrix, a_ij = 0 whenever i ≠ j.
A diagonal matrix in which all the leading diagonal elements are equal is called a scalar matrix. For a square matrix A = [a_ij] of order n × n to be a scalar matrix, a_ij = m when i = j and a_ij = 0 when i ≠ j, where m ≠ 0.
Unit Matrix or Identity Matrix: A diagonal matrix of order n which has unity for all its diagonal elements is called a unit matrix of order n and is denoted by I_n. Thus a square matrix A = [a_ij] of order n × n is a unit matrix if a_ij = 1 when i = j and a_ij = 0 when i ≠ j.
A square matrix in which all the elements below the diagonal are zero is called an Upper Triangular matrix, and a square matrix in which all the elements above the diagonal are zero is called a Lower Triangular matrix. Given a square matrix A = [a_ij] of order n × n: for an upper triangular matrix, a_ij = 0 for i > j, and for a lower triangular matrix, a_ij = 0 for i < j.
● A diagonal matrix is both upper and lower triangular.
● A triangular matrix A = [a_ij] of order n × n is called strictly triangular if a_ii = 0 for 1 ≤ i ≤ n.
If all the elements of a matrix (square or rectangular) are zero, it is called a null or zero matrix. For A = [a_ij] to be a null matrix, a_ij = 0 ∀ i, j.
Email Based Homework Assignment Help in Types of Matrices
Transtutors is the best place to get answers to all your doubts regarding the types of matrices, row, column, square, diagonal, scalar, unity, triangular and null matrix with examples. You can submit your school, college or university level homework or assignment to us and we will make sure that you get the answers you need which are timely and also cost effective. Our tutors are available round the clock to help you out in any way with math.
Live Online Tutor Help for Types of Matrices
Transtutors has a vast panel of experienced math tutors who specialize in types of matrices and can explain the different concepts to you effectively. You can also interact directly with our math tutors for a one to one session and get answers to all your problems in your school, college or university level math homework. Our tutors will make sure that you achieve the highest grades for your math assignments. We will make sure that you get the best help possible for exams such as the AP, AS, A level, GCSE, IGCSE, IB, Round Square etc.
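Returning to the definitions above, here is a short NumPy sketch that builds one example of each type of matrix; the particular numerical entries are arbitrary illustrations and are not taken from the page.

    import numpy as np

    row      = np.array([[1, 3, 5, 7]])        # row matrix: a single row
    column   = np.array([[2], [4], [6]])       # column matrix: a single column
    diagonal = np.diag([1, 3, 5])              # diagonal matrix: a_ij = 0 whenever i != j
    scalar   = 4 * np.eye(3)                   # scalar matrix: a common value m on the diagonal
    unit     = np.eye(3)                       # unit (identity) matrix I_n: ones on the diagonal
    upper    = np.triu([[1, 2, 3], [4, 5, 6], [7, 8, 9]])   # zeros below the leading diagonal
    lower    = np.tril([[1, 2, 3], [4, 5, 6], [7, 8, 9]])   # zeros above the leading diagonal
    null     = np.zeros((2, 3))                # null (zero) matrix: every entry is zero

    # A diagonal matrix is both upper and lower triangular:
    print(np.array_equal(diagonal, np.triu(diagonal)) and
          np.array_equal(diagonal, np.tril(diagonal)))      # True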
Re: Graph Coloring Problem
Newsgroups: comp.compilers
From: pat%frumious.uucp@uunet.ca (Patrick Smith)
Organization: Compilers Central
Date: Wed, 28 Oct 1992 03:44:28 GMT
References: 92-10-093
Keywords: theory

|QUESTION: Given a Conflict graph "G" in which the largest clique
| in the graph is of size "k", is the graph "k" colorable?
| (It seems to be true.)

I'm assuming that by a "clique", Peter means a graph (or subgraph of a larger graph) in which there is an edge between every pair of nodes; I'm more used to the term "complete graph". (This definition seemed clear from the earlier posting, but wasn't explicit, so I thought I'd state my understanding, just to avoid misunderstandings.)

One counter-example is a loop of n nodes and n edges, for any odd n > 3. This contains no complete subgraph of 3 nodes but can't be coloured with two colours.

      A
     / \
    E   B
    |   |
    D---C

If, say, A is coloured red and B blue, then C is red, D is blue, and E is red. Oops!

Just as a side note - if the proposition were true, it would be very hard to prove, as it implies the Four Colour Theorem (since a planar graph cannot contain a complete subgraph of size 5).

Patrick Smith
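Patrick Smith's counter-example is easy to verify mechanically. The Python sketch below (not part of the original posting; the names and the brute-force search are mine) confirms that the 5-cycle A-B-C-D-E contains no clique of size 3, has no proper 2-colouring, and does have a proper 3-colouring:

    from itertools import combinations, product

    nodes = "ABCDE"
    edges = {frozenset(p) for p in [("A","B"), ("B","C"), ("C","D"), ("D","E"), ("E","A")]}

    def has_clique(k):
        # Is there a set of k nodes that are pairwise adjacent?
        return any(all(frozenset(p) in edges for p in combinations(c, 2))
                   for c in combinations(nodes, k))

    def colourable(k):
        # Is there an assignment of k colours with no edge joining same-coloured nodes?
        for assignment in product(range(k), repeat=len(nodes)):
            colour = dict(zip(nodes, assignment))
            if all(colour[a] != colour[b] for a, b in (tuple(e) for e in edges)):
                return True
        return False

    print(has_clique(3), colourable(2), colourable(3))   # False False True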
HowStuffWorks "How Time Works" How long is a day? It's the amount of time it takes for the Earth to rotate one time on its axis. But how long does it take the Earth to rotate? That is where things become completely arbitrary. The world has decided to standardize on the following increments: • A day consists of two 12-hour periods, for a total of 24 hours. • An hour consists of 60 minutes. • A minute consists of 60 seconds. • Seconds are subdivided on a decimal system into things like "hundredths of a second" or "millionths of a second." That's a pretty bizarre way to divide a day up. We divide it in half, then divide the halves by twelfths, then divide the twelfths into sixtieths, then divide by 60 again, and then convert to a decimal system for the smallest increments. It's no wonder children have trouble learning how to tell time. Why are there 24 hours in a day? No one really knows. However, the tradition goes back a long way. Take, for example, this quote from Encyclopedia Britannica: The earliest known sundial still preserved is an Egyptian shadow clock of green schist dating at least from the 8th century BC. It consists of a straight base with a raised crosspiece at one end. The base, on which is inscribed a scale of six time divisions, is placed in an east-west direction with the crosspiece at the east end in the morning and the west end in the afternoon. The shadow of the crosspiece on this base indicates the time. Clocks of this kind are still in use in primitive parts of Egypt. The Babylonians seem to be the ones who started the six fetish, but it is not clear why. Why are there 60 minutes in an hour and 60 seconds in a minute? Again, it is unclear. It is known, however, that Egyptians once used a calendar that had 12 30-day months, giving them 360 days. This is believed to be the reason why we now divide circles into 360 degrees. Dividing 360 by 6 gives you 60, and 60 is also a base number in the Babylonian math system. What do a.m. and p.m. mean? These abbreviations stand for ante meridiem, before midday, and post meridiem, after midday, and they are a Roman invention. According to Daniel Boorstin in his book The Discoverers, this simple division of the day into two parts was the Romans' first increment of time within a day: Even at the end of the fourth century B.C., the Romans formally divided their day into only two parts: a.m. and p.m. An assistant to the consul was assigned to notice when the sun crossed the meridian, and to announce it in the Forum, since lawyers had to appear in the courts before noon. Modern man bases time on the second. A day is defined as 86,400 seconds, and a second is officially defined as 9,192,631,770 oscillations of a cesium-133 atom in an atomic clock.
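The arithmetic of the 24/60/60 scheme is easy to play with in a few lines of code (a small illustrative sketch, not from the article):

    def to_hms(seconds):
        # Break a count of seconds into the hour/minute/second units described above.
        minutes, s = divmod(seconds, 60)
        hours, m = divmod(minutes, 60)
        return hours, m, s

    print(24 * 60 * 60)              # 86400 seconds in a day
    print(to_hms(86_400))            # (24, 0, 0)
    print(86_400 * 9_192_631_770)    # cesium-133 oscillations per day under the SI definition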
Evolutionary Monte Carlo: Applications to Cp Model Sampling and Change Point Problem Results 1 - 10 of 18 "... For the problem of model choice in linear regression, we introduce a Bayesian adaptive sampling algorithm (BAS), that samples models without replacement from the space of models. For problems that permit enumeration of all models BAS is guaranteed to enumerate the model space in 2 p iterations where ..." Cited by 9 (4 self) Add to MetaCart For the problem of model choice in linear regression, we introduce a Bayesian adaptive sampling algorithm (BAS), that samples models without replacement from the space of models. For problems that permit enumeration of all models BAS is guaranteed to enumerate the model space in 2 p iterations where p is the number of potential variables under consideration. For larger problems where sampling is required, we provide conditions under which BAS provides perfect samples without replacement. When the sampling probabilities in the algorithm are the marginal variable inclusion probabilities, BAS may be viewed as sampling models “near ” the median probability model of Barbieri and Berger. As marginal inclusion probabilities are not known in advance we discuss several strategies to estimate adaptively the marginal inclusion probabilities within BAS. We illustrate the performance of the algorithm using simulated and real data and show that BAS can outperform Markov chain Monte Carlo methods. The algorithm is implemented in the R package BAS available at CRAN. "... The need to identify a few important variables that affect a certain outcome of interest commonly arises in various industrial engineering applications. The genetic algorithm (GA) appears to be a natural tool for solving such a problem. In this article we first demonstrate that the GA is actually no ..." Cited by 5 (1 self) Add to MetaCart The need to identify a few important variables that affect a certain outcome of interest commonly arises in various industrial engineering applications. The genetic algorithm (GA) appears to be a natural tool for solving such a problem. In this article we first demonstrate that the GA is actually not a particularly effective variable selection tool, and then propose a very simple modification. Our idea is to run a number of GAs in parallel without allowing each GA to fully converge, and to consolidate the information from all the individual GAs in the end. We call the resulting algorithm the parallel genetic algorithm (PGA). Using a number of both simulated and real examples, we show that the PGA is an interesting as well as highly competitive and easy-to-use variable selection tool. "... Computation in Bayesian statistical models is often performed us-ing sampling techniques such as Markov chain Monte Carlo (MCMC) or adaptive Monte Carlo methods. The convergence of the sampler to the posterior distribution is typically assessed using a set of standard diag-nostics; recent draft Food ..." Cited by 3 (1 self) Add to MetaCart Computation in Bayesian statistical models is often performed us-ing sampling techniques such as Markov chain Monte Carlo (MCMC) or adaptive Monte Carlo methods. The convergence of the sampler to the posterior distribution is typically assessed using a set of standard diag-nostics; recent draft Food and Drug Administration guidelines for the use of Bayesian statistics in medical device trials, for instance, advocate this approach for validating computations. 
We give several examples showing that this approach may be in-sufficient when the posterior distribution is multimodal–that lack of convergence due to posterior multimodality can be undetected using the standard convergence diagnostics, including the Gelman-Rubin di-agnostic that was introduced for exactly this problem. We show that the poor convergence can be detected by modifying a validation technique that was originally proposed for detecting coding errors in MCMC soft- - , 2005 "... Sampling from multimodal and high dimensional target distribution posits a great challenge in Bayesian analysis. This paper combines the attractive features of the distributed genetic algorithm and the Markov Chain Monte Carlo, resulting in a new Monte Carlo algorithm Distributed Evolutionary Monte ..." Cited by 1 (0 self) Add to MetaCart Sampling from multimodal and high dimensional target distribution posits a great challenge in Bayesian analysis. This paper combines the attractive features of the distributed genetic algorithm and the Markov Chain Monte Carlo, resulting in a new Monte Carlo algorithm Distributed Evolutionary Monte Carlo (DEMC) for real-valued problems. DEMC evolves a population of the Markov chains through genetic operators to explore the target function efficiently. The promising potential of the DEMC algorithm is illustrated by applying it to multimodal samples, Bayesian Neural Network and logistic regression inference. , 2011 "... We analyze the convergence rate of a popular Gibbs sampling method used for statistical discovery of gene regulatory binding motifs in DNA sequences. This sampler satisfies a very strong form of ergodicity (uniform). However, we show that, due to multimodality of the posterior distribution, the rate ..." Cited by 1 (0 self) Add to MetaCart We analyze the convergence rate of a popular Gibbs sampling method used for statistical discovery of gene regulatory binding motifs in DNA sequences. This sampler satisfies a very strong form of ergodicity (uniform). However, we show that, due to multimodality of the posterior distribution, the rate of convergence often decreases exponentially as a function of the length of the DNA sequence. Specifically, we show that this occurs whenever there is more than one true repeating pattern in the data. In practice there are typically multiple, even numerous, such patterns in biological data, the goal being to detect the most well-conserved and frequently-occurring of these. Our findings match empirical results, in which the motif-discovery Gibbs sampler has exhibited such poor convergence that it is used only for finding modes of the posterior distribution (candidate motifs) rather than for obtaining samples from that distribution. Ours appear to be the first meaningful bounds on the convergence rate of a Markov chain method for sampling from a multimodal posterior distribution, as a function of statistical quantities like the number of observations. Keywords: Gibbs sampler; DNA; slow mixing; spectral gap; binding motifs; multimodal. "... Gene transcription is regulated by interactions between transcription factors and their target binding sites in the genome. A motif is the sequence pattern recognized by a transcription factor to mediate such interactions. With the availability of high-throughput genomic data, computational identifi ..." Add to MetaCart Gene transcription is regulated by interactions between transcription factors and their target binding sites in the genome. 
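Several of the abstracts above turn on the same idea: evolving a population of chains helps a sampler cross between well-separated modes. The sketch below is not the authors' DEMC or EMC algorithm; it is a minimal population sampler of my own in the same spirit, with random-walk "mutation" moves and temperature-ladder "exchange" swaps, targeting a two-mode density, just to illustrate why a population of chains mixes where a single chain tends to get stuck.

    import math, random

    random.seed(0)

    def log_target(x):   # a density with two well-separated modes, at -4 and +4
        return math.log(math.exp(-0.5 * (x + 4) ** 2) + math.exp(-0.5 * (x - 4) ** 2))

    temps = [1.0, 2.0, 4.0, 8.0]                  # temperature ladder; chain 0 is the cold chain
    chains = [random.uniform(-1, 1) for _ in temps]
    cold_samples = []

    for step in range(20000):
        # "Mutation": an independent random-walk Metropolis update for each chain,
        # targeting the tempered density pi(x)^(1/T).
        for i, t in enumerate(temps):
            prop = chains[i] + random.gauss(0, t ** 0.5)
            log_accept = (log_target(prop) - log_target(chains[i])) / t
            if random.random() < math.exp(min(0.0, log_accept)):
                chains[i] = prop
        # "Exchange": propose swapping the states of two neighbouring temperatures.
        i = random.randrange(len(temps) - 1)
        a, b = chains[i], chains[i + 1]
        log_swap = (log_target(b) - log_target(a)) * (1 / temps[i] - 1 / temps[i + 1])
        if random.random() < math.exp(min(0.0, log_swap)):
            chains[i], chains[i + 1] = b, a
        cold_samples.append(chains[0])

    # Both modes should be visited by the cold chain; a single random-walk chain started
    # near one mode would typically sit there for a very long time.
    print(sum(1 for x in cold_samples if x < 0) / len(cold_samples))   # roughly 0.5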
A motif is the sequence pattern recognized by a transcription factor to mediate such interactions. With the availability of high-throughput genomic data, computational identification of transcription factor binding motifs has become a major research problem in computational biology and bioinformatics. In this chapter, we present a series of Bayesian approaches to motif discovery. We start from a basic statistical framework for motif finding, extend it to the identification of cis-regulatory modules, and then discuss methods that combine motif finding with phylogenetic footprinting, gene expression or ChIP-chip data, and nucleosome positioning information. Simulation studies and applications to biological data sets are presented to illustrate the utility of these methods. "... Biostatistics), Alan Gelfand (Duke) When a mammogram is performed, a radiologist decides whether to recall the patient for further testing There is concern about inconsistency in this recall decision between radiologists Database of 500,000+ mammograms Demographic characteristics of the patient Outc ..." Add to MetaCart Biostatistics), Alan Gelfand (Duke) When a mammogram is performed, a radiologist decides whether to recall the patient for further testing There is concern about inconsistency in this recall decision between radiologists Database of 500,000+ mammograms Demographic characteristics of the patient Outcome of the mammogram (false +, true +, false-, or true-) Radiologist data Practice characteristics Demographic characteristics Concerns about malpractice , 2010 "... In this paper, we propose a population stochastic approximation MCMC (SAMCMC) algorithm, and establish its weak convergence (toward a normal distribution) under mild conditions. The theory of weak convergence established for the population SAMCMC algorithm is also applicable for general single chain ..." Add to MetaCart In this paper, we propose a population stochastic approximation MCMC (SAMCMC) algorithm, and establish its weak convergence (toward a normal distribution) under mild conditions. The theory of weak convergence established for the population SAMCMC algorithm is also applicable for general single chain SAMCMC algorithms. Based on the theory, we then give an explicit ratio for the convergence rates of the population SAMCMC algorithm and the single chain SAM-CMC algorithm. The theoretical results are illustrated by a population stochastic approximation Monte Carlo (SAMC) algorithm with a multimodal example. Our results, in both theory and numerical examples, suggest that the population SAMCMC algorithm can be more efficient than the single chain SAMCMC algorithm. This is of interest for practical applications. "... monte carlo algorithm. A crucial problem which arises when dealing with Bayesian neural networks is that of determining their most appropriate size, expressed in terms of number of computational units and/or connections. In fact, too small a network may not be able to learn the sample data, whereas ..." Add to MetaCart monte carlo algorithm. A crucial problem which arises when dealing with Bayesian neural networks is that of determining their most appropriate size, expressed in terms of number of computational units and/or connections. In fact, too small a network may not be able to learn the sample data, whereas one that is too large may give rise to overfitting phenomena and cause poor “generalization ” performance. 
A few solutions have been proposed in the literature to solve this problem, such as the use of a geometric prior probability on the number of hidden units (Müller and Rios Insua, 1998), thereby favouring smaller-size networks, and a reversible jump algorithm to move between architectures having a different number of hidden units (Rios Insua and Müller, 1998). In this work we propose a variable architecture model where input-to-hidden connections and, therefore, hidden units are selected by using a variant of the Evolutionary Monte Carlo (EMC) algorithm developed by Liang and Wong (2000). The , 2001 "... In this article, we study the connections between Bayesian methods and non-Bayesian methods for variable selection in multiple linear regression. We show that each ofthe non-Bayesian criteria, FPE; AIC; Cp and adjusted R 2, has its Bayesian correspondence under an appropriate prior setting. The theo ..." Add to MetaCart In this article, we study the connections between Bayesian methods and non-Bayesian methods for variable selection in multiple linear regression. We show that each ofthe non-Bayesian criteria, FPE; AIC; Cp and adjusted R 2, has its Bayesian correspondence under an appropriate prior setting. The theoretical results are
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1538931","timestamp":"2014-04-23T08:48:17Z","content_type":null,"content_length":"36501","record_id":"<urn:uuid:f4a63490-b922-42ca-a9d1-3d8694346e58>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00151-ip-10-147-4-33.ec2.internal.warc.gz"}
An Isolated System Consists Of Two Conducting Spheres ... | Chegg.com An isolated system consists of two conducting spheres A and B. Sphere A has five times the radius of sphere B. Initially, the spheres are given equal amounts of positive charge and are isolated from each other. The two spheres are then connected by a conducting wire. Note: The potential of a sphere of radius R that carries a charge Q is V = kQ/R, if the potential at infinity is zero. Determine the ratio of the charge on sphere A to that on sphere B, qA/qB, after the spheres are connected by the wire. A. 1 B. 1/5 C. 5 D. 25 E. 1/25 Please show work
{"url":"http://www.chegg.com/homework-help/questions-and-answers/isolated-system-consists-two-conducting-spheres-b-sphere-five-times-radius-sphere-b-initia-q2713152","timestamp":"2014-04-21T03:44:22Z","content_type":null,"content_length":"21650","record_id":"<urn:uuid:88cd1460-0279-4f84-9579-198e28213c0d>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00269-ip-10-147-4-33.ec2.internal.warc.gz"}
Inclusion Exclusion Principle A selection of articles related to the inclusion-exclusion principle from the realmagick.com library and third-party sources, with suggested PDF and web resources.
{"url":"http://www.realmagick.com/inclusion-exclusion-principle/","timestamp":"2014-04-19T04:25:54Z","content_type":null,"content_length":"28551","record_id":"<urn:uuid:760f0707-ec12-4813-ae68-71169d7bb222>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00555-ip-10-147-4-33.ec2.internal.warc.gz"}
recursive type - In Proceedings of the 11th Annual Symposium on Logic in Computer Science , 1996 "... Abstract We study recursive types from a syntactic perspective. In particular, we compare the formulations of recursive types that are used in programming languages and formal systems. Our main tool is a new syntactic explanation of type expressions as functors. We also introduce a simple logic for ..." Cited by 31 (0 self) Add to MetaCart Abstract We study recursive types from a syntactic perspective. In particular, we compare the formulations of recursive types that are used in programming languages and formal systems. Our main tool is a new syntactic explanation of type expressions as functors. We also introduce a simple logic for programs with recursive types in which we carry out our proofs. 1 Introduction Recursive types are common in both programming languages and formal systems. By now, there is a deep and well-developed semantic theory of recursive types. The syntactic aspects of recursive types are also well understood in some special cases. In particular, there is an important body of knowledge about covariant recursive types, which include datatypes like natural numbers, lists, and trees. Beyond the covariant case, however, the syntactic understanding of recursive types becomes rather spotty. Consequently, the relations between various alternative formulations of recursive types are generally unclear. Furthermore, the syntactic counterparts to some of the most basic semantic results are unknown. , 1999 "... We introduce a hypergraph-based process calculus with a generic type system. That is, a type system checking an invariant property of processes can be generated by instantiating the original type system. We demonstrate the key ideas behind the type system, namely that there exists a hypergraph morph ..." Cited by 11 (4 self) Add to MetaCart We introduce a hypergraph-based process calculus with a generic type system. That is, a type system checking an invariant property of processes can be generated by instantiating the original type system. We demonstrate the key ideas behind the type system, namely that there exists a hypergraph morphism from each process graph into its type, and show how it can be used for the analysis of processes. Our examples are input/output-capabilities, secrecy conditions and avoiding vicious circles occurring in deadlocks. In order to specify the syntax and semantics of the process calculus and the type system, we introduce a method of hypergraph construction using concepts from category theory. - Computer Science Logic, 18th International Workshop, CSL 2004, 13th Annual Conference of the EACSL, Karpacz, Poland, September 20-24, 2004, Proceedings, volume 3210 of Lecture Notes in Computer Science , 2004 "... Our contribution to CSL 04 [AM04] contains a little error, which is easily corrected by 2 elementary editing steps (replacing one character and deleting another). Definition of wellformed contexts (fifth page). Typing contexts should, in contrast to kinding contexts, only contain type variable decla ..." Cited by 7 (3 self) Add to MetaCart Our contribution to CSL 04 [AM04] contains a little error, which is easily corrected by 2 elementary editing steps (replacing one character and deleting another). Definition of wellformed contexts (fifth page). Typing contexts should, in contrast to kinding contexts, only contain type variable declarations without variance information. Hence, the second rule is too liberal; we must insist on p = ◦. 
The corrected set of rules is then: ⋄ cxt ∆ cxt ∆, X ◦κ cxt ∆ cxt ∆ ⊢ A: ∗ ∆, x:A cxt Definition of welltyped terms (immediately following). Since wellformed typing contexts ∆ contain no variance information, hence ◦ ∆ = ∆, we might drop the “◦ ” in the instantiation rule (fifth rule). The new set of rules is consequently, (x:A) ∈ ∆ ∆ cxt ∆ ⊢ x: A ∆, X ◦κ ⊢ t: A ∆ ⊢ t: ∀X κ. A ∆, x:A ⊢ t: B ∆ ⊢ λx.t: A → B ∆ ⊢ t: ∀X κ. A ∆ ⊢ F: κ "... We present a new type system for verifying the security of cryptographic protocol implementations. The type system combines prior work on refinement types, with union, intersection, and polymorphic types, and with the novel ability to reason statically about the disjointness of types. The increased ..." Cited by 7 (1 self) Add to MetaCart We present a new type system for verifying the security of cryptographic protocol implementations. The type system combines prior work on refinement types, with union, intersection, and polymorphic types, and with the novel ability to reason statically about the disjointness of types. The increased expressivity enables the analysis of important protocol classes that were previously out of scope for the type-based analyses of protocol implementations. In particular, our types can statically characterize: (i) more usages of asymmetric cryptography, such as signatures of private data and encryptions of authenticated data; (ii) authenticity and integrity properties achieved by showing knowledge of secret data; (iii) applications based on zero-knowledge proofs. The type system comes with a mechanized proof of correctness and an efficient type-checker. "... The aim of this thesis is to describe the semantics of a process calculus by means of hypergraph rewriting, creating a specification mechanism combining modularity of process calculi and locality of graph transformation. Verification of processes is addressed by presenting two methods: barbed congru ..." Cited by 5 (4 self) Add to MetaCart The aim of this thesis is to describe the semantics of a process calculus by means of hypergraph rewriting, creating a specification mechanism combining modularity of process calculi and locality of graph transformation. Verification of processes is addressed by presenting two methods: barbed congruence for relating processes displaying the same behaviour and generic type systems, forming a central part of this work. Based on existing work in graph rewriting... - In: Proc. of the 3 rd International Conference on Typed Lambda Calculus and Applications, TLCA'97 , 1997 "... . We present a technique to study relations between weak and strong fi-normalisations in various typed -calculi. We first introduce a translation which translates a -term into a I-term, and show that a -term is strongly fi-normalisable if and only if its translation is weakly fi-normalisable. We t ..." Cited by 4 (1 self) Add to MetaCart . We present a technique to study relations between weak and strong fi-normalisations in various typed -calculi. We first introduce a translation which translates a -term into a I-term, and show that a -term is strongly fi-normalisable if and only if its translation is weakly fi-normalisable. We then prove that the translation preserves typability of -terms in various typed -calculi. This enables us to establish the equivalence between weak and strong fi-normalisations in these typed -calculi. This translation can deal with Curry typing as well as Church typing, strengthening some recent closely related results. 
This may bring some insights into answering whether weak and strong fi-normalisations in all pure type systems are equivalent. 1 Introduction In various typed -calculi, one of the most interesting and important properties on -terms is how they can be fi-reduced to fi-normal forms. A -term M is said to be weakly fi-normalisable (WN fi (M )) if it can be fi-reduced to a - Applied Categorical Structures , 1996 "... This paper is a translated extract of my diploma thesis, see [Mar95]. I would like to thank Pawe/l Urzyczyn for some useful hints, Martin Hofmann, Mathias Kegelmann and Hermann Puhlmann for their careful proof reading, and especially Achim Jung for his stimulating supervision ..." Cited by 1 (0 self) Add to MetaCart This paper is a translated extract of my diploma thesis, see [Mar95]. I would like to thank Pawe/l Urzyczyn for some useful hints, Martin Hofmann, Mathias Kegelmann and Hermann Puhlmann for their careful proof reading, and especially Achim Jung for his stimulating supervision , 2002 "... This paper is a comparative study of a number of (intensional-semantically distinct) least and greatest fixed point operators that natural-deduction proof systems for intuitionistic logics can be extended with in a proof-theoretically defendable way. Eight pairs of such operators are analysed. The e ..." Add to MetaCart This paper is a comparative study of a number of (intensional-semantically distinct) least and greatest fixed point operators that natural-deduction proof systems for intuitionistic logics can be extended with in a proof-theoretically defendable way. Eight pairs of such operators are analysed. The exposition is centered around a cube-shaped classification where each node stands for an axiomatization of one pair of operators as logical constants by intended proof and reduction rules and each arc for a proof- and reduction-preserving encoding of one pair in terms of another. The three dimensions of the cube reflect three orthogonal binary options: conventional-style vs. Mendler-style, basic (``[co]iterative'') vs. enhanced (``primitive-[co]recursive''), simple vs. course-of-value [co]induction. Some of the axiomatizations and encodings are well-known; others, however, are novel; the classification into a cube is also new. The differences between the least fixed point operators considered are illustrated on the example of the corresponding natural number types.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=975323","timestamp":"2014-04-16T16:34:33Z","content_type":null,"content_length":"33696","record_id":"<urn:uuid:b1252277-374b-4083-8c73-1b46c76f76de>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00556-ip-10-147-4-33.ec2.internal.warc.gz"}
Fran, Henderson & Pingry, and Me: A Tale of Problems vs Exercises By Barry Garelick Fran, by Way of Introduction My high school algebra 2 class which I had in the fall of 1964, was notable for a number of things. One was learning how to solve word problems. Another was a theory that most problems we encountered in algebra class could be solved with arithmetic. Yet another was a girl named Fran who I had a crush on. Fran professed to like neither algebra nor the class we were in, and found word problems difficult. On a day I had occasion to talk to her, I tried to explain my theory that algebra was like arithmetic but easier. Admittedly, my theory had a bit more to go. She appeared to show some interest, but she wasn’t interested. On another occasion I asked her to a football game, but she said she was washing her hair that day. Although Fran had long and beautiful black hair, and I wanted to believe that she had a careful and unrelenting schedule for washing it, I resigned myself to the fact that she would remain uninterested in me, algebra, and any theories about the subject. My theory of arithmetic vs. algebra grew from a realization I had when I was taking Algebra 1. It dawned on me one day that the problems that were difficult for me years ago when I was in elementary school were now incredibly easy using algebra. For example: $24 is 30% of what amount? In arithmetic this involved setting up a proportion while in algebra, it translated directly to 24 = 0.3x, thus skipping the set up of the ratio to 24/x = 30/100. Similarly, it was now much easier to understand that an increase in cost by 25% of some amount could be represented as 1.25x. What had been problems before were now exercises; being able to express quantities algebraically made it obvious what was going on. It seemed I was on to something, but I wasn’t quite sure what. Henderson and Pingry The issue of “problems vs. exercises” is one that has remained a part of education school catechism for many years. I first heard it during a discussion with the teacher of my math teaching methods class in education school. We had been talking about how Singaporean students obtain the highest score on an international math test (the TIMSS exam, given every four years). My teacher was quick to tell me that Singapore students have been successful on multiple choice, short answer tasks where you need to apply a known algorithm that has been practiced extensively. She stated that TIMSS and many of the other tests are focused on “exercises” rather than problems. “What happens,” she asked, “when we get off the ‘script’?” This topic surfaces time and again, frequently appearing in theses and school papers written for education school classes. Since some of these papers show up on the internet I have read a few of them. The papers I’ve seen reference an article written by Henderson and Pingry (1953 which appeared in an annual report published by the National Council of Teachers of Mathematics. In it, Henderson and Pingry addressed the typical difficulties one encounters in differentiating between what is an exercise and what is a problem. Stated simply, “exercises” are the things you do when you’re applying algorithms or routine you know. They point out, however, that this is relative, and what is a problem for second graders is an exercise for fifth graders. Given the changing nature of problem versus exercise, they identified three necessary conditions that define a “problem-for-a-particular-individual”: 1. 
The individual has a clearly defined goal of which he is consciously aware and whose attainment he desires. 2. Blocking of the path toward the goal occurs, and the individual’s fixed patterns of behavior or habitual responses are not sufficient for removing the block. 3. Deliberation takes place. The individual becomes aware of the problem, defines it more or less clearly, identifies various possible hypotheses (solutions), and tests these for feasibility. The Henderson and Pingry article has piqued my interest not only because it addressed the issue of problems and problem-solving, but because Henderson and Pingry were two of the three authors of the algebra textbooks I used in high school. (Aiken, Henderson and Pingry, 1960a and b). Having actually experienced the implementation of their theories as a student, I therefore had a bit more “on the ground” information than the casual author of education school papers or theses. The article by Henderson and Pingry (1953) is rife with familiar tunes. They echo the critics of textbook problems and state the advantages of “real life” or “real world” problems: i.e., real world problems have no definite question, but the student has to figure out what questions to ask at the outset, the student has to collect the data necessary to solve the problem, and a definite answer often is not possible. They argue that lacking such problems in mathematics courses, students will not likely become competent in solving them. (p. 234). They repeat the criticism often heard today during debates on how to teach math, that problems in textbooks are frequently only a practice for a procedural problem solving method. Such an approach, the critics argue, falls short of teaching math as a sense-making, problem-solving discipline. They are “inauthentic”. Today’s critics argue for math to be taught as it is practiced by professionals: problems first, gather data, and then generalizations and abstractions follow. And it would appear that Henderson and Pingry (1953) seem to be heading in that general direction: There is considerable evidence that many mathematics teachers do not understand what problem-solving is. … One example of this is the manner in which many teachers teach the verbal problems of the algebra course. Many of the problems are catalogued into types such as “Mixture problems,” “coin problems,” “age problems,” and others. The teacher demonstrates to the student how to solve the type, and a list of problems of the type is then given to the students. The students do not experience problem-solving. Rather, they experience practice of applying a memorized technique. (p. This is where the similarity between Henderson and Pingry and today’s math education critics ends, however. First, and notably, the authors do not make the claim as many reformers and critics do, that students find these types of problems irrelevant and are therefore not motivated to try to solve them. In fact, they claim that such problems (i.e., mixtures, coin, age, work, distance/rate) –when not taught as memorized types and solutions–actually can be used for improving problem-solving ability. “If the teacher selects verbal problems carefully so as to be at the student’s level, and if he can get the students to identify themselves with these problems, then the verbal “problems” become real problems.” (p. 234) And this is, in fact, what they did in their algebra textbooks. 
They provided instruction on how to identify the data in a particular problem, what is being asked, and how this unknown quantity figures into the organization of data. For example, a typical mixture problem may ask: “How many ounces of 80% sulfuric acid solution must be added to 20 ounces of a 20% solution of the acid to make a 50% solution?” The authors show how to analyze and organize the information in the problem in order to solve it. If there are 20 ounces of 20% sulfuric acid, then the solution contains 4 ounces of sulfuric acid and 16 ounces of water. An unknown amount of an 80% acid solution is added. This would increase the sulfur acid in the original container to 4 ounces plus x ounces times 0.8, or 4 + 0.8x. The total amount of solution would also be increased by x ounces or 20 + x. Thus, the new acid solution could be represented as (4 + 0.8x) / (20 + x) = 0.5 . And then, it is a simple matter to solve for x. The problems that followed were similar to the original problem, which would likely cause critics to jump up and say “There, you see? They are just applying a memorized technique. But as anyone knows who has learned a skill through initial imitation of specific techniques, such as drawing, bowling, swimming, dancing and the like, watching something and doing it are two different things. What looks like it will be easy often is more challenging than it appears. So too with algebra problems. In the worked examples in my algebra textbooks, it wasn’t a simple matter of looking at it and saying “Oh, this number goes here, and that number goes there.” You had to understand what it was you were representing; i.e., you had to think about what you were doing. What things remain equal? What changes? What are the relationships between the unknown quantities and what is known? Henderson and Pingry’s approach was used to good effect in their algebra textbook. Problems of various types were appropriately scaffolded and each was different enough from the last so that they presented a challenge. That is, students could not simply plug numbers into a formula and get an answer. By varying the problems and increasing the difficulty, students remain challenged and problems remain problems. By the same token, there was enough repetition that students could master the basic technique before being presented with novel twists and challenges. Despite the skillful use of scaffolding of problems, however, their algebra textbooks had a minimum of explanation for solving problems and would have benefitted from inclusion of more worked examples. This could have been accomplished without sacrificing student learning, or turning problems into exercises. I base this judgment on my experience in Algebra 1. My teacher, though very good at teaching procedures, admitted he was not good with word problems. Thus, he could offer little more help than what the book provided. He struggled with doing the problems as well as explaining what was going on. As a result, I was not proficient at solving word problems. I mention this because of a blanket assumption made about people like me (I ended up majoring in math), which is “You would have understood math no matter how it was taught”. In fact, it was a combination of the book and the good fortune of my algebra 2 teacher making it a top priority to teach students how to solve such problems. 
Miss Beck provided us with a system of diagrams – boxes—for solving mixture problems, which enabled us to organize the data and to enable us to see how to set up the equations to represent the quantities being mixed. Miss Beck gave us a handout of about five problems of her own devising to supplement what was in the textbook. I did all but the last which was a bit different than the rest. It was something along the lines of: “A tank has a capacity of 10 gallons. When it is full, it contains 15% alcohol. How many gallons must be replaced by an 80% alcohol solution to give 10 gallons of 70% solution?” I recall when Miss Beck showed us how to solve the problem and I realized that it followed the same pattern of organization of data as the other problems except for a subtraction step which had eluded me. With this realization came an elaboration of my theory of arithmetic vs. algebra. I understood then that the problems we had to solve could probably be solved without algebra, but algebra offered a more efficient and concise method for solving involved problems than solving by arithmetic. It occurred to me that eventually I would face problems that could not be solved by arithmetic. But I had faith that even those could be efficiently and concisely solved. Every Problem is Ultimately an Exercise As a high school student, the power and utility of algebra in solving problems was abundantly evident, and the goal of attaining this efficiency seemed well worth pursuing. I liked the idea that “problems” could be reduced to what were ultimately “exercises”. But the trend of educational thinking is that standard mathematics textbook problems are too repetitive, too boring and “inauthentic”. Such a view fails to take into account that such problems are helping to form the building blocks of problem solving thinking, called “schemas”. Schemas allow people to gain additional knowledge by building on previous knowledge and skills. This is true of learning in general. A baby learns to use his hands to grasp and with that skill can then pick up objects. Picking up objects then is a schema that allows for other more complex tasks to be accomplished. The schema involved in solving math problems starts at a basic level, and ultimately can be built upon to non-routine types of problems–after much practice with many problems. Most problems ultimately break down into basics and become exercises. Students do best with very explicit instruction, starting with simple problems. They then begin to develop the knowledge and skills to solve increasingly more difficult problems with novel twists. Without explicit instruction in problem solving, many just give up and don’t try the problems. Students benefit by seeing how to think about the problem before actually working it. Imitation of procedure therefore becomes one of imitation of thinking. This is not something education papers like to focus on, however. For example, in a paper by Yeo (2007), he categorizes various types of problems and establishes a set of problems that are in the “grey area” of higher order thinking and procedures. The ultimate goal, it seems, are to present problems that remain problems, that don’t break down into procedural steps. Yeo suggests that one way to get at this is to reinterpret Henderson and Pingry’s (1953) definition of what a problem is and extend it to open-ended and investigative problems. 
Thus, Yeo finds problems such as “Find patterns in power of 3” or “How many different handshakes occur between 14 people” as stimulating mathematical thinking as opposed to problems that can readily be broken down into procedural methods. These problems are not bad, but when presented without a sufficient learning base of prior knowledge and procedural skills, they do little to promote problem solving skills. It is as if the advocates of open-ended and investigative problems are saying that presenting students with non-routine, and open-ended problems on a constant basis form a “problem solving” schema. Furthermore, they view such problem solving schemes as independent of the mastery of basic types of problems that are learned by example and scaffolded to present more challenge. The thinking is that by giving students a constant does of challenging problems, not only are problem solving “schemas” being developed—so the theory goes —but also all the students in the class are in the same boat. That is, all students will be struggling and there won’t be those few who “get it” while others are left feeling inadequate. The danger in such thinking is that there is a converse to this theory that is usually conveniently ignored. That converse would be that a steady diet of problems held beyond everyone’s reach may well result in students being in the same boat–but it is a boat in which all are feeling lost and inadequate. In fact, there is no schema for solving new and unseen and nonroutine problems. In a paper on whether problem solving can be taught, Sweller, Clark and Kirschner (2010) state that it cannot be taught independent of basic tools and basic thinking. Over time, these tools or schemas contribute to a repertoire of problem solving techniques. These tools are learned by working through examples as I did in Miss Beck’s class. Once someone learns how to solve a particular category of problems, the person becomes much better at solving them and those problems ultimately become exercises. Turning problems into exercises is in fact the way in which problem solving skill is increased. Sweller et al, state that after a half century of effort, there is simply no evidence whatsoever that we can improve general problem solving skill so that people will become better at solving novel problems. The standard problems of Henderson and Pingry, in fact, did present a challenge that students, with proper instruction and guidance, were able to meet. Many textbooks followed using the same techniques. Reformers tend to criticize these type of standard textbook for not requiring mathematical thinking in the same way as the more advanced or non-routine problems. In fact, the standard textbook problems of Henderson and Pingry and others do require mathematical thinking, albeit not at the same level of the more difficult problems. Students must exercise judgment and analytic skills in identifying how to use the data in the problem. Learning these skills allows students to solve more complex and demanding problems. The criticism that textbook problems don’t reflect “real mathematical thinking” confuses pedagogy with epistemology as pointed out by Kirshner, Sweller & Clark (2006). That is, novices do not and cannot think like experts. The methods of the past often strike people as antiquated and ineffective. They are viewed in the same way that one looks at photos of students in old yearbooks from the 50’s and 60’s, or films of students from that era. 
I too am often amused at the formality of those times, and how some aspects of life have improved for the better. While I am tempted at times to wonder how I learned in such a strict environment, I have a strong feeling that many of us from that era received a far better education than many students today for whom problems will almost always remain problems. Girls like Fran, however, remain as poster children for that era and used as evidence by some that math as it was taught in the past was a failure for thousands of students. I don’t know how Fran did in the class, or how she ended up in life, but I do know that based on what I saw of my fellow students working problems at the board (which was the norm back then), most of them seemed to have a good understanding of the subject despite popular belief to the contrary. Although we were novices, we were becoming proficient at reducing problems to their core exercises. Barry Garelick has written extensively about math education in various publications including Education Next, Educational Leadership, and Education News. He is currently doing student teaching at a junior high school in the central coast area of California, and plans to teach math as his second career. Barry recently retired from the federal government and was a co-founder of the U.S. Coalition for World Class Math. http://usworldclassmath.webs.com/ Aiken, D. J., Henderson, K.B. & Pingry, R.E. (1960a). Algebra: Its Big Ideas and Basic Skills; Book I. McGraw Hill. New York. Aiken, D. J., Henderson, K.B. & Pingry, R. E. (1960b). Algebra: Its Big Ideas and Basic Skills; Book II. McGraw Hill. New York. Henderson, K.B., & Pingry, R.E. (1953). Problem Solving in Mathematics. In H. F. Fehr (Ed.), The Learning of Mathematics: Its Theory and Practice (pp. 228-270). Washington, D.C.; National Council of Teachers of Mathematics. Kirschner, Paul A., Sweller, & J., Clark, R.. 2006. Why Minimal Guidance During Instruction Does Not Work: An Analysis of the Failure of Constructivist, Discovery, Problem-Based, Experiential, and Inquiry-Based Teaching. Educational Psychologist, 42(2), 75-86. http://projects.ict.usc.edu/itgs/papers/Constructivism_KirschnerEtAl_EP_06.pdf Sweller, John., Clark R., & Kirschner, P. 2010. Teaching General Problem-Solving Skills Is Not a Substitute for, or a Viable Addition to,Teaching Mathematics. Notices of the American Mathematical Society. Vol. 57., No. 10. November. http://www.ams.org/notices/201010/rtx101001303p.pdf Yeo, Joseph B.W. 2007. Mathematical Tasks: Clarification, Classification and Choice of Suitable Tasks for Different Types of Learning and Assessment. Technical Report ME2007.01. National Institute of Education, Nanyang Technological University, Singapore.
{"url":"http://www.educationnews.org/k-12-schools/fran-henderson-pingry-and-me-a-tale-of-problems-vs-exercises/","timestamp":"2014-04-19T07:03:27Z","content_type":null,"content_length":"59038","record_id":"<urn:uuid:1af72cfb-cd6c-45c3-ba96-1ab8e3360bfc>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00356-ip-10-147-4-33.ec2.internal.warc.gz"}
In the following scenario, if Kate has a taxable income of $100,520, how much tax will she pay? Figure out how much... - Homework Help - eNotes.com In the following scenario, if Kate has a taxable income of $100,520, how much tax will she pay? Figure out how much income tax each of the following individual The marginal tax rates are structured as follows: Income up to $42,350 is taxed at 15 percent. Income earned between $42,350 and $61,400 is taxed at 28 percent. Income between $61,401 and $128,100 is taxed at 31 percent. In this case, many people think that Kate’s tax liability is simply calculated by multiplying her taxable income by the tax rate for people who are making that much money. In other words, they would say we would figure Kate’s tax by multiplying her taxable income by 31 percent. That would get us 100,520*.31 = 31,161.20. But that is not correct; Kate does not pay this much tax. The reason is that the marginal tax system works differently. The money that Kate (or anyone) makes between certain amounts is all taxed at a different rate. As the question tells us, the first $42,350 of someone’s income is taxed at 15%. Kate has more than $42,350 of income. Therefore, she has a full $42,350 to be taxed at that rate. We find out how much she has to pay on that income with the following equation: 42,350*.15 = 6,352.50. Now, Kate also has income in the next tax bracket. All income between $42,351 and $61,400 is taxed at 28%. First, we have to find out how much money Kate makes in this bracket. Since she makes more than $61,400, we find that through this equation: 61,400 – 42,351 = 19,049. Now, we need to multiply that amount of income by 28%. That equation is as follows: 19,049*.28 = 5,333.72. Finally, we know that Kate has income in the 31% bracket. First, we have to find out how much income she has in that bracket. We do that with this equation: 100,520 – 61,401 = 39,119. Now, we find out how much tax she pays on that income by multiplying it by 31%. That is shown in the following equation: 39,119*.31 = 12,126.89. Now we know how much Kate pays for each chunk of her income. We now have to add together all of the taxes she paid on all of the chunks. We use the following equation: 12,126.89 + 5,333.72 + 6,352.50 = 23,813.11. So, instead of paying over $31,000, Kate actually pays more than $7,000 less than that. Kate’s tax liability, then, is $23,813.11. The rate at which income earned is taxed increases with an increase in the taxable income. In most nations the increased rate of taxation is not applicable to the entire income earned; instead, several tax brackets are created with a progressively higher rate of taxation. The information provided in the question gives the rate of taxation as: 15% for income up to $42,350; 28% for income earned between $42,350 and $61,400; 31% for income between $61,401 and $128,100. Kate's income is $100,520. The tax paid by her for the first $42,350 earned is 0.15*42,350 = $6,352.50. The tax paid on the income lying between $42,350 and $61,400 is 0.28*19,050 = $5,334. The rest of her income, i.e. 100,520 - 61,400 = $39,120, is taxed at 31%; this gives the tax to be paid as $12,127.20. The total amount to be paid as income tax is $6,352.50 + $5,334 + $12,127.20 = $23,813.70. (The 59-cent difference from the first answer arises because that answer starts the upper brackets at $42,351 and $61,401 rather than at $42,350 and $61,400.)
{"url":"http://www.enotes.com/homework-help/am-not-sure-under-which-topic-this-should-but-423804","timestamp":"2014-04-19T04:25:46Z","content_type":null,"content_length":"31152","record_id":"<urn:uuid:0cc38f19-f471-473b-b3ce-86286ff586e8>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00047-ip-10-147-4-33.ec2.internal.warc.gz"}
Quaedam Tertia Natura Abscondita 9.8 Quaedam Tertia Natura Abscondita The square root of 9 may be either +3 or -3, because a plus times a plus or a minus times a minus yields a plus. Therefore the square root of -9 is neither +3 nor -3, but is a thing of some obscure third nature. Girolamo Cardano, 1545 In a certain sense the peculiar aspects of quantum spin measurements in EPR-type experiments can be regarded as a natural extension of the principle of special relativity. Classically a particle has an intrinsic spin about some axis with an absolute direction, and the results of measurements depend on the difference between this absolute spin axis and the absolute measurement axis. In contrast, quantum theory says there are no absolute spin angles, only relative spin angles. In other words, the only angles that matter are the differences between two measurements, whose absolute values have no physical significance. Furthermore, the relations between measurements vary in a non-linear way, so it's not possible to refer them to any absolute direction. This "relativity of angular reference frames" in quantum mechanics closely parallels the relativity of translational reference frames in special relativity. This shouldn’t be too surprising, considering that velocity “boosts” are actually rotations through imaginary angles. Recall from Section 2.4 that the relationship between the frequencies of a given signal as measured by the emitter and absorber depends on the two individual speeds v[e] and v[a] relative to the medium through which the signal propagates at the speed c[s], but as this speed approaches c (the speed of light in a vacuum), the frequency shift becomes dependent only on a single variable, namely, the mutual speed between the emitter and absorber relative to each other. This degeneration of dependency from two independent “absolute” variables down to a single “relative” variable is so familiar today that we take it for granted, and yet it is impossible to explain in classical Newtonian terms. Schematically we can illustrate this in terms of three objects in different translational frames of reference as shown below: The object B is stationary (corresponding to the presumptive medium of signal propagation), while objects A and C move relative to B in opposite directions at high speed. Intuitively we would expect the velocity of A in terms of the rest frame of C (and vice versa) to equal the sum of the velocities of A and C in terms of the rest frame of B. If we allowed the directions of motion to be oblique, we would still have the “triangle inequality” placing limits on how the mutual speeds are related to each other. This could be regarded as something like a “Bell inequality” for translational frames of reference. When we measure the velocity of A in terms of the rest frame of C we find that it does not satisfy this additive property, i.e., it violates "Bell's inequality" for special relativity. Compare the above with the actual Bell's inequality for entangled spin measurements in quantum mechanics. Two measurements of the separate components of an entangle pair may be taken at different orientations, say at the angles A and C, relative to the presumptive common spin axis of the pair, as shown below: We then determine the correlations between the results for various combinations of measurement angles at the two ends of the experiment. 
Just as in the case of frequency measurements taken at two different boost angles, the classical expectation is that the correlation between the results will depend on the two measurement angles relative to some reference direction established by the mechanism. But again we find that the correlations actually depend only on the single difference between angles A and C, not on their two individual values relative to some underlying reference. The close parallel between the “boost inequalities” in special relativity and the Bell inequalities for spin measurements in quantum mechanics is more than just superficial. In both cases we find that the assumption of an absolute frame (angular or translational) leads us to expect a linear relation between observable qualities, and in both cases it turns out that in fact only the relations between one realized event and another, rather than between a realized event and some absolute reference, govern the outcomes. Recall from Section 9.5 that the correlation between the spin measurements (of entangled spin-1/2 particles) is simply -cos(q) where q is the relative spatial angle between the two measurements. The usual presumption is that the measurement devices are at rest with respect to each other, but if they have some non-zero relative velocity v, we can represent the "boost" as a complex rotation through an angle f = arctanh(v) where arctanh is the inverse hyperbolic tangent (see Part 6 of the Appendix). By analogy, we might expect the "correlation" between measurements performed with respect to two basis systems with this relative angle would be which of course is Lorentz-Fitzgerald factor that scales the transformation of space and time intervals from one system of inertial coordinates to another, leading to the relativistic Doppler effect, and so on. In other words, this factor represents the projection of intervals in one frame onto the basis axes of another frame, just as the correlation between the particle spin measurements is the projection of the spin vector onto the respective measurement bases. Thus the "mysterious" and "spooky" correlations of quantum mechanics can be placed in close analogy with the time dilation and length contraction effects of special relativity, which once seemed equally counterintuitive. The spinor representation, which uses complex numbers to naturally combine spatial rotations and "boosts" into a single elegant formalism, was discussed in Section 2.6. In this context we can formulate a generalized "EPR experiment" allowing the two measurement bases to differ not only in spatial orientation but also by a boost factor, i.e., by a state of relative motion. The resulting unified picture shows that the peculiar aspects of quantum mechanics can, to a surprising extent, be regarded as aspects of special relativity. In a sense, relativity and quantum theory could be summarized as two different strategies for accommodating the peculiar wave-particle duality of physical phenomena. One of the problems this duality presented to classical physics was that apparently light could either be treated as an inertial particle emitted at a fixed speed relative to the source, ala Newton and Ritz, or it could be treated as a wave with a speed of propagation fixed relative to the medium and independent of the source, ala Maxwell. But how can it be both? Relativity essentially answered this question by proposing a unified spacetime structure with an indefinite metric (viz, a pseudo-Riemannian metric). 
This is sometimes described by saying time is imaginary, so its square contributes negatively to the line element, and yields an invariant null-cone structure for light propagation, yielding invariant light speed. But waves and particles also differ with regard to interference effects, i.e., light can be treated as a stream of inertial particles with no interference (though perhaps "fits and starts) ala Newton , or as a wave with fully wavelike interference effects, ala Huygens. Again the question was how to account for the fact that light exhibits both of these characteristics. Quantum mechanics essentially answered this question by proposing that observables are actually expressible in terms of probability amplitudes, and these amplitudes contain an imaginary component which, upon taking the norm, can contribute negatively to the probabilities, yielding interference effects. Thus we see that both of these strategies can be expressed in terms of the introduction of imaginary (in the mathematical sense) components in the descriptions of physical phenomena, yielding the possibility of cancellations in, respectively, the spacetime interval and superposition probabilities (i.e., interference). They both attempt to reconcile aspects of the wave-particle duality of physical entities. The intimate correspondence between relativity and quantum theory was not lost on Niels Bohr, who remarked in his Warsaw lecture in 1938 Even the formalisms, which in both theories within their scope offer adequate means of comprehending all conceivable experience, exhibit deep-going analogies. In fact, the astounding simplicity of the generalisation of classical physical theories, which are obtained by the use of multidimensional [non-positive-definite] geometry and non-commutative algebra, respectively, rests in both cases essentially on the introduction of the conventional symbol sqrt(-1). The abstract character of the formalisms concerned is indeed, on closer examination, as typical of relativity theory as it is of quantum mechanics, and it is in this respect purely a matter of tradition if the former theory is considered as a completion of classical physics rather than as a first fundamental step in the thorough-going revision of our conceptual means of comparing observations, which the modern development of physics has forced upon us. Of course, Bernhardt Riemann, who founded the mathematical theory of differential geometry that became general relativity, also contributed profound insights to the theory of complex functions, the Riemann sphere (Section 2.6), Riemann surfaces, and so on. (Here too, as in the case of differential geometry, Riemann built on and extended the ideas of Gauss, who was among the first to conceive of the complex number plane.) More recently, Roger Penrose has argued that some “complex number magic” seems to be at work in many of the most fundamental physical processes, and his twistor formalism is an attempt to find a framework for physics that exploits this the special properties of complex functions at a fundamental level. Modern scientists are so used to complex numbers that, in some sense, the mystery is now reversed. Instead of being surprised at the physical manifestations of imaginary and complex numbers, we should perhaps wonder at the preponderance of realness in the world. 
The fact is that, although the components of the state vector in quantum mechanics are generally complex, the measurement operators are all required – by fiat – to be Hermitian, meaning that they have strictly real eigenvalues. In other words, while the state of a physical system is allowed to be complex, the result of any measurement is always necessarily real. So we can’t claim that nature is indifferent to the distinction between real and imaginary numbers. This suggests to some people a connection between the “measurement problem” in quantum mechanics and the ontological status of imaginary numbers. The striking similarity between special relativity and quantum mechanics can be traced to the fact that, in both cases, two concepts that were formerly regarded as distinct and independent are found not to be so. In the case of special relativity, the two concepts are space and time, whereas in quantum mechanics the two concepts are position and momentum. Not surprisingly, these two pairs of concepts are closely linked, with space corresponding to position, and time corresponding to momentum (the latter representing the derivative of position with respect to time). Considering the Heisenberg uncertainty relation, it’s tempting to paraphrase Minkowski’s famous remark, and say that henceforth position by itself, and momentum by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality.
{"url":"http://www.mathpages.com/rr/s9-08/9-08.htm","timestamp":"2014-04-17T10:45:45Z","content_type":null,"content_length":"22024","record_id":"<urn:uuid:18e6f367-d799-46ed-9495-f17462f9005e>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00014-ip-10-147-4-33.ec2.internal.warc.gz"}
expected value and variance (continuous random variables) I would like your help! X1, X2, … are independent continuous random variables with E[Xi] = 2, V(Xi) = 9. Yi = (0.5^i)Xi, i = 1, 2, … An = (1/n)Tn. E and V of Yn, Tn, An ??? E[Yn] = E[(0.5^n)Xn] = (0.5^n)E[Xn] = 2(0.5^n) V[Yn] = V[(0.5^n)Xn] = [(0.5^n)^2]V[Xn] = 9(0.25^n)
{"url":"http://mathhelpforum.com/advanced-statistics/155113-expected-value-variance-continuous-random-variables.html","timestamp":"2014-04-18T06:37:49Z","content_type":null,"content_length":"48307","record_id":"<urn:uuid:4340b08c-1047-408c-9b48-d064350107cf>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00462-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] Godel's First Incompleteness Theorem as it possibly relates to Physics Alasdair Urquhart urquhart at cs.toronto.edu Sat Oct 11 09:43:23 EDT 2008 On Thu, 9 Oct 2008, Brian Hart wrote: > Why doesn't Godel's 1st Incompleteness Theorem imply the > incompleteness of any theory of physics T, assuming that T is > consistent and uses arithmetic? Shouldn't the constructors of the > Theory of Everything be alarmed? I know this suggestion of > application of Godel's theorem was made decades ago but why didn't it > make a bigger impact? Is it because it is wrong or were there some > sociological reasons for mainstream ignorance of it? The basic problem with this idea is that it is consistent with current knowledge (as far as I know) that there could be a Theory of Everything that is in some sense complete in its physical implications, though remaining incomplete in its mathematical foundations. Of course, it remains rather unclear what we mean by "complete in its physical implications." But I would guess that physicists would be very happy with a fundamental theory that predicts all of the basic properties of the elementary particles, including the constants that currently have to be "put in by hand." Of course, gravity would have to be included as well, and that seems at the moment to be a very intractable problem. More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/2008-October/013083.html","timestamp":"2014-04-18T06:14:52Z","content_type":null,"content_length":"4106","record_id":"<urn:uuid:fffab4f3-4c9f-4ab9-bc9b-5343c00db881>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00364-ip-10-147-4-33.ec2.internal.warc.gz"}
Parallel triangular solution in the out-of-core multifrontal approach for solving large sparse linear systems Slavova, Tzvetomila. Parallel triangular solution in the out-of-core multifrontal approach for solving large sparse linear systems. PhD, Institut National Polytechnique de Toulouse, 2009 (Document in English) Official URL: http://ethesis.inp-toulouse.fr/archive/00000774/ We consider the solution of very large systems of linear equations with direct multifrontal methods. In this context the size of the factors is an important limitation for the use of sparse direct solvers. We will thus assume that the factors have been written on the local disks of our target multiprocessor machine during parallel factorization. Our main focus is the study and the design of efficient approaches for the forward and backward substitution phases after a sparse multifrontal factorization. These phases involve sparse triangular solution and have often been neglected in previous works on sparse direct factorization. In many applications, however, the time for the solution can be the main bottleneck for the performance. This thesis consists of two parts. The focus of the first part is on optimizing the out-of-core performance of the solution phase. The focus of the second part is to further improve the performance by exploiting the sparsity of the right-hand side vectors. In the first part, we describe and compare two approaches to access data from the hard disk. We then show that in a parallel environment the task scheduling can strongly influence the performance. We prove that a constraint ordering of the tasks is possible; it does not introduce any deadlock and it improves the performance. Experiments on large real test problems (more than 8 million unknowns) using an out-of-core version of a sparse multifrontal code called MUMPS (MUltifrontal Massively Parallel Solver) are used to analyse the behaviour of our algorithms. In the second part, we are interested in applications with sparse multiple right-hand sides, particularly those with single nonzero entries. The motivating applications arise in electromagnetism and data assimilation. In such applications, we need either to compute the null space of a highly rank deficient matrix or to compute entries in the inverse of a matrix associated with the normal equations of linear least-squares problems. We cast both of these problems as linear systems with multiple right-hand side vectors, each containing a single nonzero entry. We describe, implement and comment on efficient algorithms to reduce the input-output cost during an out-of-core execution. We show how the sparsity of the right-hand side can be exploited to limit both the number of operations and the amount of data accessed. The work presented in this thesis has been partially supported by SOLSTICE ANR project (ANR-06-CIS6-010).
{"url":"http://oatao.univ-toulouse.fr/7786/","timestamp":"2014-04-18T08:16:12Z","content_type":null,"content_length":"23483","record_id":"<urn:uuid:0072a8e3-228a-497b-9da7-77525763721f>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00154-ip-10-147-4-33.ec2.internal.warc.gz"}
Simple Critical Points Question
December 4th 2012, 01:02 PM  #1

I know this is really simple but I can't seem to figure it out:

Classify all critical points of f(x,y) = sin(x)cos(y).

I know that fx = cos(x)cos(y) and that fy = -sin(x)sin(y).

Then the critical points are x = n*pi, y = pi/2 + k*pi, OR x = pi/2 + k*pi, y = n*pi.

How do I go about classifying these? I see that for different choices of n or k, I get alternating positive and negative values in the Hessian matrix.
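One way to check the classification is the second-derivative test applied at a representative point from each family. The sketch below is illustrative only (sympy; the sample points are assumptions chosen from the two families in the question):

    # Sketch: second-derivative test for f(x,y) = sin(x)*cos(y) at sample
    # critical points, using D = fxx*fyy - fxy**2 and the sign of fxx.
    import sympy as sp

    x, y = sp.symbols('x y')
    f = sp.sin(x) * sp.cos(y)
    H = sp.hessian(f, (x, y))
    D = H.det()

    for px, py in [(sp.pi/2, 0), (sp.pi/2, sp.pi), (0, sp.pi/2)]:
        d = D.subs({x: px, y: py})
        fxx = H[0, 0].subs({x: px, y: py})
        if d > 0:
            kind = "local min" if fxx > 0 else "local max"
        elif d < 0:
            kind = "saddle"
        else:
            kind = "test inconclusive"
        print((px, py), kind)
    # (pi/2, 0) -> local max, (pi/2, pi) -> local min, (0, pi/2) -> saddle

In words: the family x = pi/2 + k*pi, y = n*pi gives maxima where sin(x)cos(y) = 1 and minima where it equals -1, while the family x = n*pi, y = pi/2 + k*pi gives saddle points.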
{"url":"http://mathhelpforum.com/calculus/209077-simple-critical-points-question.html","timestamp":"2014-04-18T04:21:58Z","content_type":null,"content_length":"29893","record_id":"<urn:uuid:8233f4aa-3cb0-4652-a8b4-b7aa8f7c1d0a>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00428-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - View Single Post - circular motion

Welcome to PF, regan1.

mg cannot be equal to r cos(theta) simply on dimensional grounds: the former is a force, and the latter is a length. Could you post the equations that you are having trouble with more carefully and in greater detail?

For a banked curve, if you look at the road in cross-section, it is essentially like an inclined plane, with the "downhill" direction being the direction towards the centre of the circle. In the absence of friction, the only two forces that act on the car are gravity and the normal force from contact with the road, which can contribute to the centripetal force. This should be helpful to you as a starting point for working out the force balance equations.
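For reference, a minimal sketch (not part of the original reply, and with made-up numbers) of the frictionless force balance the post points toward: N cos(theta) = mg vertically and N sin(theta) = m v^2 / r horizontally, so that eliminating N gives v = sqrt(g r tan(theta)).

    # Sketch of the frictionless banked-curve balance; numbers are assumptions.
    import math

    def no_friction_speed(radius_m, bank_angle_deg, g=9.81):
        """Speed at which a frictionless banked curve needs no sideways force."""
        theta = math.radians(bank_angle_deg)
        return math.sqrt(g * radius_m * math.tan(theta))

    print(no_friction_speed(50.0, 15.0))   # about 11.5 m/s for r = 50 m, 15 degree bank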
{"url":"http://www.physicsforums.com/showpost.php?p=4223681&postcount=2","timestamp":"2014-04-18T03:12:17Z","content_type":null,"content_length":"7897","record_id":"<urn:uuid:b03ef081-ce57-4d58-927d-8e05fa74f99b>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00548-ip-10-147-4-33.ec2.internal.warc.gz"}
Expressing complementarity problems and communicating them to solvers

Results 1 - 10 of 14

1. Computational Optimization and Applications, 1998. Cited by 48 (17 self).
Several new interfaces have recently been developed requiring PATH to solve a mixed complementarity problem. To overcome the necessity of maintaining a different version of PATH for each interface, the code was reorganized using object-oriented design techniques. At the same time, robustness issues were considered and enhancements made to the algorithm. In this paper, we document the external interfaces to the PATH code and describe some of the new utilities using PATH. We then discuss the enhancements made and compare the results obtained from PATH 2.9 to the new version. 1 Introduction: The PATH solver [12] for mixed complementarity problems (MCPs) was introduced in 1995 and has since become the standard against which new MCP solvers are compared. However, the main user group for PATH continues to be economists using the MPSGE preprocessor [36]. While developing the new PATH implementation, we had two goals: to make the solver accessible to a broad audience and to improve the ...

2. 1999. Cited by 19 (7 self).
Complementarity solvers are continually being challenged by modelers demanding improved reliability and scalability. Building upon a strong theoretical background, the semismooth algorithm has the potential to meet both of these requirements. We briefly discuss relevant theory associated with the algorithm and describe a sophisticated implementation in detail. Particular emphasis is given to robust methods for dealing with singularities in the linear system and to large scale issues. Results on the MCPLIB test suite indicate that the code is robust and has the potential to solve very large problems.

3. 1998. Cited by 14 (0 self).
This survey gives an introduction to some of the recent developments in the field of complementarity and related problems. After presenting two typical examples and the basic existence and uniqueness results, we focus on some new trends for solving nonlinear complementarity problems. Extensions to mixed complementarity problems, variational inequalities and mathematical programs with equilibrium constraints are also discussed.

4. 2002. Cited by 13 (3 self).
Constrained optimization has been extensively used to... This paper briefly reviews some methods available to solve these problems and describes a new suite of tools for working with MPEC models. Computational results demonstrating...

5. 2000. Cited by 6 (0 self).
Complementarity problems arise in a wide variety of disciplines. Prototypical examples include the Wardropian and Walrasian equilibrium models encountered in the engineering and economic disciplines and the first order optimality conditions for nonlinear programs from the optimization community. The main focus of this thesis is algorithms and environments for solving complementarity problems. Environments, such as AMPL and GAMS, are used by practitioners to easily write large, complex models. Support for these packages is provided by PATH 4.x and SEMI through the customizable solver interface specified in this thesis. The main design feature is the abstraction of core components from the code with implementations tailored to a particular environment supplied either at compile or run time. This solver interface is then used to develop new links to the MATLAB and NEOS tools. Preprocessing techniques are an integral part of linear and mixed integer programming codes and are primarily used to reduce the size and complexity of a model prior to solving it. For example, wasted computation is avoided when an infeasible model is detected.

6. 2001. Cited by 3 (2 self).
Preprocessing techniques are extensively used by the linear and integer programming communities as a means to improve model formulation by reducing size and complexity. Adaptations and extensions of these methods for use within the complementarity framework are detailed. The preprocessor developed is comprised of two phases. The first recasts a complementarity problem as a variational inequality over a polyhedral set and exploits the uncovered structure to fix variables and remove constraints. The second discovers information about the function and utilizes complementarity theory to eliminate variables. The methodology is successfully employed to preprocess several models. Keywords: mixed complementarity, preprocessing. 1. INTRODUCTION: General purpose codes for solving complementarity problems have previously lacked one significant feature: a powerful preprocessor. The benefits of preprocessing have long been known to the linear [1, 2] and integer [19] programming communities, yet have no...

7. 2009. Cited by 3 (1 self).
Extended mathematical programs are collections of functions and variables joined together using specific optimization and complementarity primitives. This paper outlines a mechanism to describe such an extended mathematical program by means of annotating the existing relationships within a model to facilitate higher level structure identification. The structures, which often involve constraints on the solution sets of other models or complementarity relationships, can be exploited by modern large scale mathematical programming algorithms for efficient solution. A specific implementation of this framework is outlined that communicates structure from the GAMS modeling system to appropriate solvers in a computationally beneficial manner. Example applications are taken from chemical...

8. Ill-Posed Variational Problems and Regularization Techniques, number 477 in Lecture Notes in Economics and Mathematical Systems, 1998. Cited by 3 (2 self).
Over the past several years, many practitioners have been formulating nonlinear variational inequalities as mixed complementarity problems within modeling languages such as GAMS and AMPL. Sometimes the models generated are poorly specified, either because the function is undefined near the solution or the problem is ill-conditioned or singular. In this paper, we look at information provided by the PATH solver about the model that can be used to identify problem areas and improve formulation. Descriptions and uses of the data provided are detailed via several case studies. 1 Introduction: Developing a practical model of a complex situation is a difficult task in which an approximate representation is initially constructed and then iteratively refined until an accurate formulation is obtained. During the intermediate stages, the models generated have a tendency to be ill-defined, ill-conditioned, and/or singular. Information generated by a solver can help the modeler to detect...

9. Optimization. Lecture Notes in Economics and Mathematical Systems, 2000. Cited by 3 (0 self).
We consider a primal-dual approach to solve nonlinear programming problems within the AMPL modeling language, via a mixed complementarity formulation. The modeling language supplies the first order and second order derivative information of the Lagrangian function of the nonlinear problem using automatic differentiation. The PATH solver finds the solution of the first order conditions which are generated automatically from this derivative information. In addition, the link incorporates the objective function into a new merit function for the PATH solver to improve the capability of the complementarity algorithm for finding optimal solutions of the nonlinear program. We test the new solver on various test suites from the literature and compare with other available nonlinear programming solvers. Keywords: Complementarity problems, nonlinear programs, automatic differentiation, modeling languages. 1 Introduction: While the use of the simplex algorithm for linear programs in the 1940's h...

10. Reformulation: Nonsmooth, Piecewise Smooth, Semismooth and Smoothing Methods, 1998.
We describe several new tools for modeling MPEC problems that are built around the introduction of an MPEC model type into the GAMS language. We develop subroutines that allow such models to be communicated directly to MPEC solvers. This library of interface routines, written in the C language, provides algorithmic developers with access to relevant problem data, including, for example, function and Jacobian evaluations. A MATLAB interface to the GAMS MPEC model type has been designed using the interface routines. Existing MPEC models from the literature have been written in GAMS, and computational results are given that were obtained using all the tools described. Keywords: Complementarity, Algorithm, MPEC, Modeling. 1 Introduction: The Mathematical Program with Equilibrium Constraints (MPEC) arises when one seeks to optimize an objective function subject to equilibrium constraints. These equilibrium constraints may take the form of a variational inequality or complementarity problem, o...
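Several of the abstracts above lean on the semismooth machinery without spelling it out. The sketch below is a toy illustration only: it is not PATH's algorithm and not taken from any of the cited papers. It applies the standard Fischer-Burmeister reformulation to a tiny linear complementarity problem, 0 <= x, Mx + q >= 0, x'(Mx + q) = 0, and solves it with a plain semismooth Newton iteration. The matrix, vector, tolerances and function names are made up for the example.

    import numpy as np

    def fb(a, b):
        """Fischer-Burmeister function: zero iff a >= 0, b >= 0 and a*b = 0."""
        return np.sqrt(a**2 + b**2) - a - b

    def solve_lcp(M, q, x0, iters=50, tol=1e-10):
        """Semismooth Newton on Phi(x) = fb(x, Mx + q)."""
        x = x0.astype(float).copy()
        for _ in range(iters):
            Fx = M @ x + q
            phi = fb(x, Fx)
            if np.linalg.norm(phi) < tol:
                break
            r = np.sqrt(x**2 + Fx**2)
            r[r < 1e-12] = 1e-12               # safeguard at the kink of fb
            Da = np.diag(x / r - 1.0)           # partial of phi_i with respect to x_i
            Db = np.diag(Fx / r - 1.0)          # partial of phi_i with respect to F_i
            J = Da + Db @ M                     # an element of the generalized Jacobian
            x = x - np.linalg.solve(J, phi)
        return x

    M = np.array([[2.0, 1.0], [1.0, 2.0]])
    q = np.array([-3.0, -1.0])
    print(solve_lcp(M, q, np.array([1.0, 1.0])))   # approximately [1.5, 0.0]

Production codes add the line-search, merit-function and singularity safeguards the abstracts describe; the point here is only the reformulation itself.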
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=797846","timestamp":"2014-04-16T22:54:26Z","content_type":null,"content_length":"38774","record_id":"<urn:uuid:a4978e57-ef81-4ab2-a69d-780deba32e63>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00140-ip-10-147-4-33.ec2.internal.warc.gz"}
On the rate of formation of carbon monoxide in gas producers: Electronic Edition (University of Illinois Mass Digitization Project)

UNIVERSITY OF ILLINOIS ENGINEERING EXPERIMENT STATION
BULLETIN No. 30, FEBRUARY, 1909

ON THE RATE OF FORMATION OF CARBON MONOXIDE IN GAS PRODUCERS

BY J. K. CLEMENT, PHYSICIST, U. S. G. S., TECHNOLOGIC BRANCH, ASSISTED BY L. H. ADAMS, JUNIOR CHEMIST, U. S. G. S., TECHNOLOGIC BRANCH; APPENDIX BY C. N. HASKINS, ASSISTANT PROFESSOR OF MATHEMATICS, UNIVERSITY OF ILLINOIS.

I. INTRODUCTORY STATEMENT.

The rapid advance in the use of producer gas in recent years has given rise to a demand for a more accurate knowledge of the processes taking place in the fuel bed of the producer and the effect on these processes of certain variations in the conditions of operation. The primary function of the gas producer is to transform solid fuel into a more readily combustible gaseous fuel. This transformation, which is relatively slow, consists of the following processes:

1. The distillation of the volatile hydrocarbons from the freshly fired fuel at relatively low temperatures.
2. The combustion of fuel by combination with the oxygen of the air.
3. The formation of producer gas proper in accordance with the equations:

    I.  CO2 + C = 2 CO,
    II. H2O + C = CO + H2.

The first of these reactions, the formation of carbon monoxide, is the one with which the present investigation deals. The problem proposed is, in broad terms, to determine the factors that govern the production of CO in the gas producer, and the effect of the temperature and of the time of contact of the gas and carbon on the percentage of CO in the producer gas. The question of the effect of temperature was brought forth by certain experiments made by one of the writers at the Fuel Testing Plant of the U. S. Geological Survey at the Jamestown Exposition, in which it was found that the temperature in the fuel bed of the gas producer varies greatly from one portion of the bed to another. In order, therefore, to ascertain the conditions of temperature most favorable to the efficient operation of the producer, it becomes necessary to determine the temperature requisite for the formation of carbon monoxide and hydrogen in accordance with the reactions quoted in the preceding paragraph.

A study of the conditions for the reduction of CO2 by carbon seems desirable from another consideration. A small amount of CO is invariably contained in the flue gases of boiler furnaces. It was hoped, therefore, that the investigation might furnish an explanation of the formation of CO in boiler furnaces and perhaps suggest a means of preventing such formation.

The investigations herein described were made in, and with the facilities of, the Physical Laboratory of the University of Illinois.

II. FUNDAMENTAL EQUATIONS.

According to the law of chemical mass action, a chemical reaction, as for example the reaction expressed by the equation C + CO2 = 2 CO, proceeds in one direction until equilibrium is established and then stops; and when the system is in equilibrium there is, for a given temperature, a certain constant relation between the amounts of the components entering into the reaction.
Thus, in the system under consideration, let

    [CO]  = the concentration of CO in gram molecules* per liter,
    [CO2] = the concentration of CO2 in gram molecules per liter;

then for equilibrium the relation

    [CO]^2 / [CO2] = constant = K                                          (1)

must be satisfied. (* A gram molecule of a substance is a weight of the substance in grams numerically equal to the molecular weight. Thus a gram molecule of CO is 28 grams, one of CO2 is 44 grams, etc.)

The relation (1) may be thrown into another form as follows. Let

    100 x       = per cent of CO in the gas by volume;
    100 (1 - x) = per cent of CO2 in the gas by volume;
    p = pressure in atmospheres;
    T = absolute temperature;
    R = absolute gas constant = 0.0821 for the system of units here employed;
    n = number of gram molecules of the gas under consideration;
    v = volume of gas in liters.

The characteristic equation of gases is pv = nRT, from which n/v = p/(RT). Now xn and (1 - x)n are respectively the numbers of gram molecules of CO and CO2 in the gas; hence the concentrations of CO and CO2 are respectively

    [CO]  = xn/v = x p/(RT),
    [CO2] = (1 - x)n/v = (1 - x) p/(RT).

Placing these values in (1), the resulting equation is

    x^2/(1 - x) = K RT/p.                                                  (2)

If the pressure and temperature are kept constant, the factor K RT/p is a constant, and (2) may be written

    x^2/(1 - x) = K',                                                      (3)

where K' is a new constant.

When CO2 gas is maintained in contact with carbon at constant temperature and pressure the two will react rapidly at first and then more slowly until the amount of CO formed is 100 x per cent of the total, where x is given by equation (3).

The relation expressed by (1) may be considered as a special case of a more general law. According to the theory of the kinetics of reactions now generally accepted, in reversible reactions two reactions take place simultaneously, one from left to right and one from right to left. In the reaction CO2 + C = 2 CO the velocity of the reaction from left to right, that is, the rate of formation of CO, is at any instant proportional to the number of CO2 molecules in the unit volume; thus, denoting the velocity by v,

    v = k_1 [CO2].

Similarly the velocity of the reaction from right to left, that is, the rate of formation of CO2, is proportional to the square of the number of CO molecules in a unit volume. Hence

    v' = k_2 [CO]^2.

The increase in the number of CO molecules per unit volume in the time dt is the difference of the two velocities v and v'; that is,

    d[CO]/dt = v - v' = k_1 [CO2] - k_2 [CO]^2.                            (4)

As [CO], the number of gram molecules of CO in the unit volume, increases and [CO2], the number of gram molecules of CO2, decreases, the velocity d[CO]/dt will become smaller and smaller until finally [CO] and [CO2] become constant, that is, the system attains equilibrium. In this case we have, therefore, d[CO]/dt = 0, whence

    k_1 [CO2] = k_2 [CO]^2.                                                (5)

That is, the number of CO2 molecules formed in a given time is equal to the number that are decomposed to form CO. The last equation may be written

    [CO]^2 / [CO2] = k_1/k_2 = K.                                          (6)

K is called the equilibrium constant.

Equation (4), giving the rate of formation of CO, may be modified as follows. Let

    100 a = per cent of CO2 by volume at the beginning of the reaction, that is, when t = 0,
    100 x = per cent of CO by volume after the time t has elapsed.

At the beginning of the reaction, that is, when t = 0, there is no CO; hence x = 0. If now n is the number of gram molecules of the gas when t = 0, then na is the number of gram molecules of CO2, and the concentration of the CO2 is therefore

    [CO2] = na/v.

But since pv = nRT, this relation may be written

    [CO2] = a p/(RT) = M gram molecules per liter.

At the time t suppose m gram molecules per liter of CO to have been formed. This involves the disappearance of m/2 gram molecules per liter of CO2, leaving M - m/2. The number of gram molecules of gas is now

    n(1 - a) + (M - m/2 + m) v = n(1 - a) + (M + m/2) v = n(1 + m RT/(2p)),

and the volume of the gas is therefore increased from v to v(1 + m RT/(2p)). The concentrations at the time t are therefore

    [CO]  = m / (1 + m RT/(2p)),                                           (7)
    [CO2] = (M - m/2) / (1 + m RT/(2p)).                                   (8)

But since 100 x per cent of the gas is CO by volume, the concentration of the CO is also given by the relation

    [CO] = x p/(RT).                                                       (9)

Combining the two expressions for [CO] given by (7) and (9), we obtain

    m = (2x/(2 - x)) (p/(RT)),

and introducing this expression for m in (8), the result after slight reduction is

    [CO2] = (a - (a + 1)/2 * x) p/(RT).                                    (10)

Introducing in equation (4) the expressions for the concentrations given by (9) and (10), the result is

    d[CO]/dt = k_1 (a - (a + 1)/2 * x) p/(RT) - k_2 x^2 p^2/(RT)^2.        (11)

But from (9), d[CO]/dt = (p/(RT)) dx/dt, whence

    dx/dt = k_1 (a - (a + 1)/2 * x) - k_2 (p/(RT)) x^2.

Replacing the constant k_2 p/(RT) by the single symbol k_2', the final equation for the reaction velocity is

    dx/dt = k_1 (a - (a + 1)/2 * x) - k_2' x^2.                            (12)

The integration of the differential equation (12) offers no great difficulty. The determination of the constants k_1 and k_2' seems to have been made hitherto by assuming the value of the ratio k_1/k_2'. This method is applicable when the equilibrium conditions are readily realized. As, however, this is not the case in the present reaction, it has been necessary to devise a method for the determination of k_1 and k_2' from two or more pairs of simultaneous observations of x and t. It turns out that the method is applicable not only to the reaction in question, but also to the most general incomplete reactions of the second order.

The great difficulty of realizing, with certainty, the condition of equilibrium in reactions like the one under consideration makes it highly desirable, therefore, to obtain a general solution of the differential equation without introducing a particular numerical value for the ratio k_1/k_2'. We are indebted to Prof. C. N. Haskins for the following solution, the developments of which will be found in the appendix:

    x = 4 a γ tanh(α t) / [(a + 1)(1 + γ tanh(α t))].                      (13)

In equation (13) α and γ are determined by the relations

    k_1  = 4 α γ / (a + 1),
    k_2' = (a + 1) α (1 - γ^2) / (4 a γ).

When the initial percentage of CO2 is 100, then a = 1,

    dx/dt = k_1 (1 - x) - k_2' x^2,

and

    x = 2 γ tanh(α t) / (1 + γ tanh(α t)).                                 (14)

In the case of the gas producer the value of a is about 0.21.
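Equation (13) is easy to evaluate numerically. The short sketch below is not part of the bulletin; it simply plugs in the α and γ values later tabulated for charcoal at 800° C. (Table 7) and approximately reproduces the "calculated" column of Table 1.

    # Sketch: evaluating equation (13),
    #   x = 4*a*gamma*tanh(alpha*t) / ((a + 1)*(1 + gamma*tanh(alpha*t))),
    # with the charcoal constants for 800 deg. C. taken from Table 7.
    import math

    def x_of_t(t, a, alpha, gamma):
        th = math.tanh(alpha * t)
        return 4 * a * gamma * th / ((a + 1) * (1 + gamma * th))

    a, alpha, gamma = 1.0, 0.0276, 0.3568     # pure CO2 over charcoal, 800 deg. C.
    for t in (12.32, 24.20, 45.70):
        print(t, round(x_of_t(t, a, alpha, gamma), 3))
    # prints roughly 0.209, 0.345, 0.466; compare Table 1's calculated column
    # (0.209, 0.345, 0.468).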
III. BOUDOUARD'S EXPERIMENTS.

An elaborate series of determinations of the amount of CO formed at different temperatures has been made by O. Boudouard.(2) Boudouard's observations were made at 650°, 800° and 925° C. In his experiments at 650° and 800°, glass tubes containing charcoal, coke, retort carbon or lamp black were filled with CO2, heated to the temperature of the experiment and then sealed. The tubes were maintained at constant temperature until equilibrium was reached, that is, until further heating at the same temperature produced no increase in the percentage of CO present. At 650° the heating was continued for twelve hours before equilibrium was attained. At 800° equilibrium was reached in one hour in the tubes containing charcoal and in two and one-half hours in those containing lamp black. With coke and retort carbon the process was not complete at the end of nine hours. In the experiment at 925° the carbon was heated in a porcelain tube, through which was passed a stream of CO2 gas. The average time of contact between CO2 and carbon calculated from the data given in Boudouard's account of his experiments was approximately 30 seconds.

((2) O. Boudouard, Comptes Rendus de l'Academie des Sciences, Vol. 128, page 824, 154; 1899. Vol. 131, page 1204, 1900. Vol. 130, page 132, 1900. Bulletin Soc. Chim., Paris, Vol. 21, 1901. Bulletin Soc. Chim., Paris, Vol. 25, 1901. Ann. de Chimie et de Physique, Vol. 354, 1901. See also Haber, Thermodynamics of Technical Gas Reactions, 1908, p. 311.)

A summary of Boudouard's results is given in the following table:

    TEMPERATURE, °C.    PER CENT OF CO2    PER CENT OF CO
    650                 61                 39
    800                  7                 93
    925                  4                 96

These values have been made the basis of computation by many writers on the chemistry of combustion and of the water gas reaction, and especially in treatises on the gas producer. In the first references to Boudouard's work which came to the attention of the writers, no notice was taken of the remarkably low reaction velocity of the formation of CO from CO2 and carbon, and of the great length of time required to obtain the percentages of CO that are given in the preceding table. In at least one case the values shown above were offered as representing the quality of gas that should be obtained in the gas producer at the temperatures given. The writers were therefore led to regard Boudouard's figures as defining the relative proportions of CO2 and CO that should be formed in a gas producer at various temperatures of the fuel bed.

The experiments which form the subject of this paper had originally as their object the confirmation of Boudouard's results as well as a continuation of them at higher temperatures. Preliminary experiments made by Dr. C. S. Hudson demonstrated that the amount of CO formed at a given temperature depends largely on the time of contact or, in other words, on the rate of flow of the gas through the fuel bed. Apparently Boudouard's results represent limiting values, which can be obtained only with a very low gas velocity. In order to ascertain the conditions for the formation of CO in producer furnaces, it is necessary, therefore, to determine the rate of formation of CO from CO2 and carbon at various temperatures; that is, to determine the amount of CO formed with different rates of flow of the gas through the fuel bed.

IV. METHOD OF EXPERIMENT.

The arrangement of the apparatus is shown in Figs. 1, 2, and 3. A porcelain tube of 1.5 cm inside diameter, 60 cm long, and glazed on the outside was filled with charcoal, coal, or coke and heated in an electric furnace. The furnace, which was designed especially for this investigation, is shown in detail in Fig. 3. It has been operated continuously for a period of six months at temperatures of from 800° to 1200°, and even 1300° C. The temperature inside the porcelain tube could be maintained at any value up to 1300° C with no fluctuations greater than 1° or 2°.

[Fig. 1. General arrangement of the apparatus.]
[Fig. 2. The reaction tube and thermocouple.]

The heating coil consists of a coil of No. 13 nickel wire, wound with eight turns per inch on an electrical porcelain insulating tube of 38 mm inside diameter and 35 cm long. At either end of the coil the number of turns was increased slightly to compensate for the cooling effect of the ends of the furnace. As a protection against corrosion, the coil was painted over with a thin layer of magnesite cement, a material that is capable of withstanding very high temperatures. The heating tube was then mounted inside of and concentric with several terra cotta pipes, and the spaces between the pipes were filled with light calcined magnesia. The cost of the material used in the furnace and the amount of labor required were very small. A temperature of 1000° C could be maintained by the expenditure of 600 watts.

[Fig. 3. The electric furnace.]

The temperature inside the porcelain tube was measured by means of a platinum platinum-rhodium thermocouple (see Fig. 2) and a Siemens & Halske milli-voltmeter. As the thermo-electric height of the couple fell slightly with use at high temperature, due probably to the reducing action of CO gas on the insulating tubes and the consequent contamination of the couple, it was found necessary to calibrate the couple from time to time. This was accomplished by determination of the melting points of zinc, silver and copper. The error of individual temperature observations does not exceed 5° below 1100° and 10° to 15° between 1100° and 1300°.

The carbon with which the porcelain tube was filled was crushed to pieces of a uniform size, about 5 mm on a side. Only the central portion of the tube (see Fig. 2) contained carbon, the remainder of the space being occupied by pieces of broken porcelain, which served at one end to heat the gas entering the tube, and at the other end, by reducing the size of the passage way, to increase the velocity of the gas through the region of falling temperature. Through the porcelain tube was passed a stream of CO2 gas. In the earlier experiments CO2 was prepared from marble and hydrochloric acid. Later, CO2 was taken from a tank of liquid carbon dioxide.

The velocity of the gas over the carbon was determined by the dimensions of the tube, the weight and density of the carbon, and the temperature and the volume of gas passed through the tube per minute.(3) The analyses were made by the Hempel method, both CO2 and CO being absorbed. The amount of gas remaining in the burette after the absorption in cuprous chloride was seldom greater than two per cent.

((3) The increase in the volume of the gas in its passage through the reaction tube, due to the formation of two CO molecules in place of every molecule of CO2 which disappears, makes it difficult to determine accurately the time of contact and consequently the velocity of the gas. The values of t, the time of contact, given in the following tables are based on the volume of gas leaving the tube and are therefore somewhat too low. Since the major portion of the expansion takes place within a short distance from the entrance to the tube, the error here introduced is probably not appreciable.)

V. EXPERIMENTS WITH CHARCOAL.

With the apparatus described in the preceding pages, experiments were conducted at temperatures ranging from 700° to 1300° C. The experiments with charcoal extended over a period of several months. The results are contained in Tables 1-6.
TABLE 1. RATE OF FORMATION OF CO FROM CO2 AND CHARCOAL AT A TEMPERATURE OF 800° C.
k_1 = 0.01968, k_2 = 3.031

    t (sec)     1/t        % CO/100 (obs)   % CO/100 (calc)
    infinity    0          .....            0.535
    188.6       0.0053     0.503            0.534
    115.9       0.0086     0.504            0.527
    57.18       0.0175     0.518            0.508
    45.70       0.0219     0.522            0.468
    24.20       0.0413     0.375            0.345
    15.50       0.0645     0.283            0.252
    12.32       0.0810     0.245            0.209
    2.686       0.354      0.063            0.051
    1.550       0.645      0.031            0.030

TABLE 2. RATE OF FORMATION OF CO FROM CO2 AND CHARCOAL AT A TEMPERATURE OF 850° C.
k_1 = 0.07174, k_2 = 3.238

    t (sec)     1/t        % CO/100 (obs)   % CO/100 (calc)
    infinity    0          .....            0.742
    123.0       0.0082     0.743            0.742
    54.18       0.0184     0.702            0.741
    24.43       0.0410     0.572            0.694
    13.23       0.0756     0.526            0.564
    9.268       0.1070     0.297            0.463
    4.630       0.216      0.297            0.281
    3.694       0.271      0.224            0.231
    3.254       0.307      0.225            0.207

TABLE 3. RATE OF FORMATION OF CO FROM CO2 AND CHARCOAL AT A TEMPERATURE OF 900° C.
k_1 = 0.1540, k_2 = 2.599

    t (sec)     1/t        % CO/100 (obs)   % CO/100 (calc)
    infinity    0          .....            0.873
    64.29       0.0156     0.873            0.873
    41.18       0.0226     0.867            0.872
    10.008      0.0999     0.708            0.739
    4.257       0.234      0.498            0.472
    2.840       0.352      0.311            0.351
    2.172       0.461      0.344            0.284

TABLE 4. RATE OF FORMATION OF CO FROM CO2 AND CHARCOAL AT A TEMPERATURE OF 925° C.
k_1 = 0.2175, k_2 = 2.298

    t (sec)     1/t        % CO/100 (obs)   % CO/100 (calc)
    infinity    0          .....            0.914
    118.8       0.0084     0.947            0.914
    81.2        0.0123     0.933            0.914
    12.37       0.0807     0.848            0.875
    5.80        0.1725     0.718            0.697
    4.277       0.234      0.642            0.595
    2.272       0.440      0.375            0.387

TABLE 5. RATE OF FORMATION OF CO FROM CO2 AND CHARCOAL AT A TEMPERATURE OF 1000° C.
k_1 = 0.6404, k_2 = 4.708

    t (sec)     1/t        % CO/100 (obs)   % CO/100 (calc)
    infinity    0          .....            0.942
    70.0        0.0143     0.949            0.942
    18.60       0.0538     0.943            0.941
    8.245       0.1195     0.903            0.938
    3.675       0.272      0.797            0.869
    2.296       0.436      0.795            0.752

TABLE 6. RATE OF FORMATION OF CO FROM CO2 AND CHARCOAL AT A TEMPERATURE OF 1100° C.
k_1 = 1.495, k_2 = 5.275

    t (sec)     1/t        % CO/100 (obs)   % CO/100 (calc)
    infinity    0          .....            0.972
    36.48       0.0274     0.987            0.972
    10.43       0.0958     0.983            0.972
    4.968       0.2010     0.981            0.971
    3.640       0.2745     0.973            0.968
    1.921       0.521      0.946            0.955

The first and second columns of each table give the time of contact t and the reciprocal of the time of contact 1/t, which is equal to the velocity of the gas divided by the length of the charcoal column; thus 1/t = v/l. The third column contains the percentages of CO observed, and the values in the last column were calculated by means of equation (13), viz:

    x = 4 a γ tanh(α t) / [(a + 1)(1 + γ tanh(α t))].

The method of computing α and γ of this equation is described in the appendix. The constants k_1 and k_2' are determined by the relations

    k_1 = 4 α γ / (a + 1),    k_2' = (a + 1) α (1 - γ^2) / (4 a γ).

Values of k_1, k_2, k_2', α, and γ are given in Table 7.

TABLE 7. CONSTANTS USED IN THE COMPUTATION OF x.

    Temp. °C.        800       850       900       925       1000      1100
    α (a = 1)        0.0276    0.0612    0.09998   0.1297    0.3617    0.7921
    γ (a = 1)        0.3568    0.5853    0.7711    0.8388    0.8853    0.9437
    k_2'             0.03373   0.03443   0.02646   0.02291   0.04416   0.04588
    k_2              3.031     3.238     2.599     2.298     4.708     5.275
    k_1              0.01968   0.07174   0.1540    0.2175    0.6404    1.4950
    K = k_1/k_2      0.006493  0.02216   0.05925   0.09465   0.13603   0.28341

The calculated and observed values of x, the per cent of CO, agree within two or three per cent.

A comparison of the results in Tables 1 to 6 shows, in the first place, that with increasing temperature there is a rapid increase in the percentage of CO obtained with any given rate of flow of the gas; in the second place, that with increasing gas velocity the percentage of CO formed falls off very rapidly at low temperatures, very slowly at higher temperatures. These variations are illustrated by the curves in Fig. 4, in which the percentage of CO is plotted as a function of 1/t = v/l.

[Fig. 4. Per cent of CO as a function of the reciprocal of the time of contact, 1/t, for charcoal at 800° to 1100° C.]

When l, the length of the charcoal column, is equal to one, i.e., is equal to the unit of length, then the numbers along the abscissa give the velocity of the gas in terms of the same unit of length and per second. For example, the length of the charcoal column in the experiments here recorded was approximately 20 cm. The velocity corresponding to the point 1/t = 1 at the extreme right of Fig. 4 is therefore 20 cm per second.

The general shape of all the curves in Fig. 4 is the same. The percentage of CO is greatest at zero velocity. With increasing values of 1/t each curve falls off, slowly at first, then more rapidly, passing a point of inflection and finally becoming nearly horizontal. The intersections of the curves with the CO axis give the percentage of CO corresponding to the condition of equilibrium.

That a considerable amount of time is required to reach equilibrium in the reaction under consideration is further illustrated in Fig. 5, in which the percentage of CO is plotted as a function of t, the time of contact (one small division = 1 second). At 800°, for example, the percentage of CO reaches a practically constant value at the end of 50 sec.; at 1000°, in 6 sec.

[Fig. 5. Per cent of CO as a function of the time of contact t, in seconds (charcoal).]

The curves in Fig. 4 were plotted from values of x (= per cent CO) calculated from equation (13). The observed values are indicated by the small circles. By means of equation (13) it is possible to calculate the per cent of CO corresponding to any given gas velocity, provided α and γ or k_1 and k_2 are known.

VI. VARIATIONS OF K AND k_1 WITH TEMPERATURE.

The values of k_1 and K given in Table 7 exhibit a systematic variation with temperature. If equations can be found that will express k_1 and K as functions of the temperature, it will then be possible to calculate the per cent of CO for any time of contact and any desired temperature. Such equations have been deduced by van't Hoff from purely thermodynamical considerations.(4) They are the following:

    d(ln K)/dT = -Q/(R T^2),                                               (15)

and

    d(ln k_1)/dT = A/T^2 + B.                                              (16)

In these equations Q is the latent heat of reaction at the absolute temperature T, A is a function of Q but is selected arbitrarily, and B is an arbitrary function of the temperature.

((4) The symbol ln in the following equations stands for the natural logarithm.)
By integration the latter equation becomes

    ln k_1 = -A/T + B T + C,                                               (17)

where C is an integration constant. The values of A, B, and C in equation (17) have been determined from the simultaneous values of k_1 and T of Tables 1-6. Table 8 contains the values of k_1 obtained at various temperatures, as well as the values of k_1 calculated from equation (17). The agreement is remarkably good.

TABLE 8. VARIATION OF k_1 WITH TEMPERATURE (CHARCOAL).
ln k_1 = -5010/T - 0.0203 T + 65.376

    Temp. °C.   Absolute Temp. T   k_1 (obs)   k_1 (calc)
    800         1073               0.020       0.021
    850         1123               0.073       0.064
    900         1173               0.154       0.159
    925         1198               0.217       0.237
    1000        1273               0.640       0.629
    1100        1373               1.490       1.53

In order to integrate the equation

    d(ln K)/dT = -Q/(R T^2),

it is first necessary to determine the heat Q as a function of the temperature. It has been shown by Kirchhoff that the increase of Q per degree rise in temperature is equal to the difference of the molecular heats of the factors and of the products of the reaction. Following this law, and taking the specific heat of a factor or product as a linear function of the temperature, which is very nearly true for gases, the relation between Q and T is given by the equation(5)

    Q = Q_0 + c_1 T + c_2 T^2.                                             (18)

In this equation Q_0 denotes the heat of reaction for T = 0, and c_1 and c_2 are obtained as follows. Assuming that the mean specific heat of each gas is given by an expression of the form

    c = a + b T,

then c_1 is the difference between the sum of the a's of the factors and the sum of the a's of the products; likewise c_2 is the sum of the b's of the factors less the sum of the b's for the products. Substituting the value of Q given by (18) in (15), the result is

    d(ln K)/dT = -(1/R)(Q_0/T^2 + c_1/T + c_2),

whence by integration

    ln K = (1/R)(Q_0/T - c_1 ln T - c_2 T) + C.                            (19)

((5) Haber, Thermodynamics of Technical Gas Reactions, p. 49, eq. (7a).)

The constants in this equation (19) may be determined by either of two different methods: by experimental determinations of Q_0, c_1, and c_2, or from four or more simultaneous observations of K (obs) and T. The first method was adopted in this instance, the following being the values of the quantities in question:

    Q_0 = -40166,    c_1 = -2.055,    c_2 = 0.003104,    C = 8.604.

In the determination of c_1 and c_2, Langen's values for the specific heats of CO and CO2 and the value of Kunz for the specific heat of charcoal were employed. The value of R (in gram-calories per degree) is 1.985. Hence, taking the above constants and this value of R, (19) reduces to

    ln K = -20235/T + 1.035 ln T - 0.001564 T + 8.604.                     (20)
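Equation (20), together with the equilibrium relation x^2/(1 - x) = K RT/p of equation (2), is all that is needed to reproduce the calculated columns of Table 9 below. A short sketch of that computation (Python; not part of the bulletin):

    # Sketch: K from equation (20) and the equilibrium per cent of CO from
    # x**2/(1 - x) = K*R*T/p (equation (2)), at p = 1 atmosphere.
    import math

    R = 0.0821   # liter-atmospheres per degree per gram molecule, as in the text

    def K_of_T(T):
        return math.exp(-20235.0 / T + 1.035 * math.log(T) - 0.001564 * T + 8.604)

    def x_equilibrium(T, p=1.0):
        c = K_of_T(T) * R * T / p          # x**2/(1 - x) = c; take the positive root
        return (-c + math.sqrt(c * c + 4 * c)) / 2

    for T_celsius in (800, 1000, 1300):
        T = T_celsius + 273
        print(T_celsius, round(K_of_T(T), 4), round(x_equilibrium(T), 3))
    # prints approximately 0.009/0.58, 0.15/0.94 and 2.45/0.997, which agrees
    # with the calculated columns of Table 9 to within rounding.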
Table 9 gives values of K calculated from equation (20), along with values (marked K obs) obtained from the observed values of x and T in Tables 1-6. In the fourth column are the observed values of x∞, the amount of CO in equilibrium with CO2 and charcoal at temperatures from 800° to 1100°; and in the fifth column the values of x∞ corresponding to the values of K in the third column. The constants of equation (19) were calculated also by the method of least squares, from simultaneous values of K obs and T, but the agreement was less satisfactory than by the first method.

TABLE 9. VALUES OF K AND OF x∞.

    Temp. °C.   K (obs)   K (calc)    x∞ (obs)   x∞ (calc)   x∞ (obs), Boudouard
    500         ......    0.000007    ......     0.021       ......
    600         ......    0.00013     ......     0.093       ......
    650         ......    0.00046     ......     0.185       0.39
    700         ......    0.00137     ......     0.283       ......
    800         0.0065    0.0090      0.526      0.582       0.93
    850         0.022     0.020       0.738      0.722       ......
    900         0.059     0.042       0.871      0.832       ......
    925         0.094     0.060       0.912      0.873       0.96
    1000        0.136     0.151       0.939      0.945       ......
    1100        0.283     0.448       0.971      0.981       ......
    1200        ......    1.120       ......     0.994       ......
    1300        ......    2.455       ......     0.997       ......
    1400        ......    4.826       ......     0.9985      ......
    1500        ......    8.671       ......     0.9992      ......
    1600        ......    14.44       ......     0.9996      ......

The agreement between "observed" and "calculated" values of K and k_1 in Tables 8 and 9 shows that the changes of K and k_1 with temperature follow van't Hoff's laws. It is possible, therefore, by means of equations (17) and (19) and the values of the constants of these equations given in Tables 8 and 9, to compute K and k_1 for any desired temperature. Having the values of K and k_1, and consequently of k_2 (k_2 = k_1/K), the per cent x of CO corresponding to any time of contact t can then be calculated by means of equation (13).

VII. EXPERIMENTS AT 700°.

In Fig. 4 the curve for 800° falls off very rapidly with increasing rate of flow of gas. At this temperature the gas velocity must be exceedingly low to obtain the equilibrium percentage of CO. At temperatures below 800° it was practically impossible to reach equilibrium with a finite gas velocity. A great number of experiments were made at 700°, but the results were too inconsistent to admit of mathematical treatment. Some of the observations are given in the following table:

TABLE 10. OBSERVATIONS AT 700° C. (CHARCOAL).

    t (sec)     1/t        % CO/100
    86.9        0.0115     0.012
    86.2        0.0116     0.155
    99.9        0.0100     0.077
    23.4        0.0426     0.004
    15.0        0.0668     0.014
    7.11        0.1405     0.009
    9.71        0.103      0.022
    5.60        0.178      0.006
    5.02        0.199      0.008
    4.18        0.239      0.012

These results show that, except at exceedingly low velocities, the amount of CO formed was never greater than one or two per cent.

VIII. EXPERIMENTS WITH COKE AND COAL.

The experiments with coke and coal were conducted in the same manner as with charcoal. The material was crushed to pieces about 5 mm on a side. The constants α and γ of equation (13) were obtained for each temperature by the method given in the appendix. Tables 11-15 contain the results of the observations with coke. In the last column of each table are given the values of x, the percentage of CO formed, calculated from equation (13).

TABLE 11. RATE OF FORMATION OF CO FROM CO2 AND COKE AT A TEMPERATURE OF 900° C.
k_1 = 0.00231, k_2 = 0.03686

    t (sec)     1/t        % CO/100 (obs)   % CO/100 (calc)
    142.0       0.0070     0.276            0.278
    80.20       0.0124     0.131            0.169
    43.91       0.0228     0.094            0.096
    24.82       0.0403     0.057            0.056
    16.11       0.0620     0.049            0.037
    9.575       0.1045     0.026            0.023
    3.741       0.2671     0.008            0.009

TABLE 12. RATE OF FORMATION OF CO FROM CO2 AND COKE AT A TEMPERATURE OF 1000° C.
k_1 = 0.02323, k_2 = 0.3591

    t (sec)     1/t        % CO/100 (obs)   % CO/100 (calc)
    123.2       0.0081     0.784            0.866
    80.25       0.0125     0.644            0.795
    33.25       0.0301     0.529            0.527
    18.72       0.0535     0.320            0.350
    6.37        0.1571     0.139            0.138
    4.101       0.2439     0.115            0.091
    3.072       0.3258     0.092            0.069
    1.983       0.5045     0.063            0.045

TABLE 13. RATE OF FORMATION OF CO FROM CO2 AND COKE AT A TEMPERATURE OF 1100° C.
k_1 = 0.1335, k_2 = 0.5296

    t (sec)     1/t        % CO/100 (obs)   % CO/100 (calc)
    90.00       0.0111     0.971            0.971
    29.92       0.0334     0.854            0.955
    13.20       0.0758     0.661            0.817
    6.765       0.1476     0.556            0.592
    3.198       0.3135     0.317            0.346
    1.784       0.5606     0.304            0.211
    1.660       0.6030     0.240            0.194
    1.590       0.6299     0.221            0.190
    1.462       0.6840     0.214            0.177
    0.962       1.0399     0.133            0.121

TABLE 14. RATE OF FORMATION OF CO FROM CO2 AND COKE AT A TEMPERATURE OF 1200° C.
k_1 = 0.4095, k_2 = 0.6718

    t (sec)     1/t        % CO/100 (obs)   % CO/100 (calc)
    18.92       0.0528     0.989            0.987
    12.70       0.0788     0.978            0.983
    8.250       0.1213     0.953            0.956
    2.402       0.4160     0.685            0.624
    1.582       0.6320     0.439            0.460
    1.080       0.9260     0.335            0.357

TABLE 15. RATE OF FORMATION OF CO FROM CO2 AND COKE AT A TEMPERATURE OF 1300° C.
k_1 = 1.483, k_2 = 0.7313

    t (sec)     1/t        % CO/100 (obs)   % CO/100 (calc)
    8.860       0.1129     0.999            0.997
    4.149       0.2415     0.979            0.997
    2.100       0.4760     0.932            0.955
    1.130       0.8850     0.834            0.816

The results with coke are shown graphically in Fig. 6. The curves for 900°, 1000° and 1100° are considerably lower than the curves with charcoal for the same temperatures, except for very low velocities.

[Fig. 6. Per cent of CO as a function of the reciprocal of the time of contact, 1/t, for coke at 900° to 1300° C.]

The observations with anthracite coal are given in Tables 16, 17, and 18, and are illustrated graphically in Fig. 7.

TABLE 16. RATE OF FORMATION OF CO FROM CO2 AND ANTHRACITE COAL AT A TEMPERATURE OF 1100° C.
k_1 = 0.119, k_2 = 1.410

    t (sec)     1/t        % CO/100 (obs)   % CO/100 (calc)
    34.20       0.0293     0.878            0.912
    9.370       0.1069     0.610            0.657
    5.415       0.1848     0.477            0.472
    3.301       0.3026     0.302            0.322
    2.439       0.4101     0.265            0.251

TABLE 17. RATE OF FORMATION OF CO FROM CO2 AND ANTHRACITE COAL AT A TEMPERATURE OF 1200° C.
k_1 = 0.2374, k_2 = 0.1767

    t (sec)     1/t        % CO/100 (obs)   % CO/100 (calc)
    47.05       0.0212     0.997            0.993
    10.39       0.0964     0.856            0.901
    5.070       0.1971     0.715            0.688
    2.845       0.3516     0.423            0.472
    1.592       0.6270     0.310            0.309

TABLE 18. RATE OF FORMATION OF CO FROM CO2 AND ANTHRACITE COAL AT A TEMPERATURE OF 1300° C.
k_1 = 0.5791, k_2 = 0.2016

    t (sec)     1/t        % CO/100 (obs)   % CO/100 (calc)
    12.40       0.0806     0.999            0.997
    6.030       0.1659     0.965            0.968
    3.600       0.2779     0.824            0.876
    2.980       0.3358     0.809            0.822
    1.908       0.5249     0.663            0.668
    1.070       0.9350     0.503            0.462

[Fig. 7. Per cent of CO as a function of the reciprocal of the time of contact, 1/t, for anthracite coal at 1100° to 1300° C.]

Here the curves fall off even more rapidly than the curves for coke in Fig. 6. With very low velocities, that is, when the time of contact is sufficient for the reaction to reach equilibrium, the percentage of CO formed is practically the same with each of the three forms of carbon. As the rate of flow of the gas increases, the effect of the difference in the reaction velocities becomes more appreciable.

TABLE 19. VALUES OF x∞ FOR CHARCOAL, COKE, AND COAL.

    Temp. °C.   x∞ (calc)   x∞ (obs), Charcoal   x∞ (obs), Coke   x∞ (obs), Coal
    900         0.832       0.871                0.875            ......
    1000        0.945       0.939                0.886            ......
    1100        0.981       0.971                0.968            0.914
    1200        0.994       ......               0.987            0.994
    1300        0.997       ......               0.996            0.997

Values of x∞, the percentages of CO in equilibrium with CO2 and charcoal, coke, and coal respectively, are given in Table 19. The values in the second column of this table were calculated from the values of K in Table 9, by means of the equation

    x^2/(1 - x) = (RT/p) K.
A comparison of Figs. 5, 6, and 7 shows that the reaction velocity is greatest with charcoal and lowest with anthracite coal. The temperature coefficient of k_1, the coefficient of reaction velocity, was determined for coal and coke in the same manner as for charcoal. The "observed" and "calculated" values of k_1 are shown in Tables 20 and 21.

The constant k_2 in the equation

    d[CO]/dt = k_1 [CO2] - k_2 [CO]^2

is the coefficient of reaction velocity of the reaction CO2 + C = 2 CO taken from right to left. At the temperatures of these experiments, 800° to 1300°, the carbon produced by the decomposition of CO is in the form of lamp black, regardless of the form of carbon present in the reaction tube, viz: charcoal, coke, or coal. At any one temperature, therefore, k_2 should be the same in all three cases. From a comparison of Tables 1-6, 11-15 and 16-18 it will be seen that there is considerable deviation in the values of k_2 for the three forms of carbon used. This is doubtless due in part to experimental errors. There is a further consideration, however, to which attention should be called, viz: that the reaction in question is not reversible. The lamp black produced by the reverse reaction 2 CO = CO2 + C is not identical physically with the form of carbon, charcoal, or coke, that is consumed in the formation of CO. Consequently the law of chemical mass action is not strictly applicable. In the systems under consideration, equilibrium would not be reached until all the carbon has been transformed to lamp black.

TABLE 20. VARIATION OF k_1 WITH TEMPERATURE (COKE).
ln k_1 = -47220/T - 0.009699 T + 45.597

TABLE 21. VARIATION OF k_1 WITH TEMPERATURE (ANTHRACITE COAL).
ln k_1 = 31972/T + 0.02272 T - 56.607

IX. APPLICATION OF EXPERIMENTAL RESULTS TO THE PROCESSES OF THE GAS PRODUCER AND BOILER FURNACE.

As stated in the introduction, the experiments here described were undertaken primarily to determine the temperature necessary for the formation of high percentage CO gas in the fuel bed of the gas producer, and to ascertain the conditions that govern the formation of CO in boiler furnaces. The results here presented indicate that the amount of CO formed in the gas producer depends on three factors: (1) the temperature; (2) the depth of the hot portion of the bed; and (3) the rate of flow of gas through the bed. Stated in a more concise form, the percentage of CO formed depends on the temperature and the time of contact of gas and carbon, i.e., the average time required for a molecule of gas to pass through the fuel bed.

The variation of the percentage of CO with the rate of flow of gas is illustrated in Figs. 4, 6 and 7. The curves for coke, Fig. 6, may be taken as representing the conditions in the fuel bed of the producer. At 1300° C., for example, with zero velocity (time of contact = infinity) practically all the CO2 will be converted to CO; when 1/t = 0.5 (time of contact t = 2 sec.), 90 per cent CO is obtained; and when 1/t = 1, only 80 per cent CO is formed. In a fuel bed one foot in depth, since

    1/t = velocity of gas / depth of fuel bed = v/l,

a time of contact of t = 2 sec. corresponds to a velocity of 0.5 ft. per sec., and t = 1 sec. to a velocity of 1 ft. per sec. At 1300° C., then, in a fuel bed one foot in depth, with a velocity of 0.5 ft. per sec. 90 per cent of CO would be formed, and with a velocity of 1 ft. per sec., 80 per cent. In a fuel bed two feet in depth, the gas velocities corresponding to the same percentage of CO would be twice as great. In other words, for given conditions of temperature and quality of gas, the depth of bed and velocity of gas must vary proportionally and their ratio v/l must remain constant. A fuel bed one foot in depth and a gas velocity of one foot per second should yield the same percentage of CO as a bed two feet in depth with a gas velocity of 2 ft. per sec.

It is impossible to determine accurately the velocity of the gas through the producer fuel bed, on account of the difficulty of estimating the magnitude of the passages through the bed.(6) The velocity lies probably between 0.5 and 5.0 ft. per sec. The right half of the curves in Fig. 6 lies within these limits and therefore corresponds approximately to the conditions of producer operation.

((6) When any given number of pounds of air is passed per second through a fuel bed of given dimensions, the velocity through the bed will increase as the percentage of voids is decreased. Thus the velocity will be much higher with slack coal than with uniformly sized nut coal. Further, the per cent of voids will be influenced by the amount of coking and clinkering.)

In Fig. 8 is shown graphically the variation with temperature of the amount of CO formed with different values of 1/t. The ordinate is the per cent of CO in gas containing initially 21 per cent CO2 (air in which the oxygen has been converted quantitatively to CO2). The abscissa is temperature in degrees Centigrade. The upper curve, 1/t = 0, represents the maximum amount of CO which could be produced from air. The intersection of the curve for any velocity with a given horizontal line, for example the line for CO = 30 per cent, gives the temperature required to form that amount of CO with the particular velocity. Thus to obtain 30 per cent CO with a velocity of one foot per sec. (length of bed = 1 ft.) will require a temperature of 1360° C., and with a velocity of 2 ft. per sec., 1435°. The curves of Figs. 6 and 8 indicate that the temperature of the producer bed should not be less than 1300° C.

[Fig. 8. Per cent of CO in gas containing initially 21 per cent CO2, as a function of temperature (600° to 1600° C.) for different values of 1/t.]

These investigations demonstrate that a very high temperature is necessary for the production of CO from CO2 and carbon. There are other considerations, however, which are opposed to the operation of the fuel bed of the gas producer at extremely high temperature, above 1300° C. A high temperature of fuel bed means that the gases will leave the producer at a high temperature and thus lower the efficiency of the producer. The gain in capacity will therefore be accompanied by a loss in efficiency, unless the heat of the gases can be used efficiently for generating steam and preheating the air blast. Also, a high temperature favors clinkering. In the application of the results of these experiments to commercial producers and furnaces it will be necessary, of course, to consider the various questions that are involved.

Various explanations have been suggested to account for the presence of small amounts of carbon monoxide in the flue gases of boiler furnaces. Perhaps the one most generally accepted by engineers is that the oxygen of the air first unites with carbon to form CO2 and that, as this gas passes up through the hot fuel bed, it combines with carbon in accordance with the equation CO2 + C = 2 CO.
ILLINOIS ENGINEERING EXPERIMENT STATION Assuming this to be the correct explanation, then the ques- tion to be solved is what conditions are favorable to this reaction and what conditions will tend to retard it. In the preceding para- graphs it has been shown that the higher the velocity of the gas and thinner the fuel bed, the less will be the percentage of CO formed. A heavy fuel bed in the boiler furnace would therefore favor the formation of CO. Also, the greater the supply of air to a given depth of bed, the less should be the tendency to form CO. X SUMMARY AND CONCLUSIONS. 1. The rate of formation of CO in the reaction, CO2 + C = 2 CO has been determined with charcoal from 8000 to 1100' C., with coke from 9000 to 13000 C., and with anthracite coal from 1100° to 13000 C. 2. The differential equation for the velocity of incomplete reactions dx a + 1 dt = k a - x ) - kx2 has been solved for given values of Ak' and k,, and it has been shown (in the appendix) that the method is applicable to other cases. 3. Van't Hoff's laws for the variation of equilibrium con- stants and coefficients of reaction velocity with temperature have been applied to the values of k, and K obtained in these experi- ments, and a close agreement between observed and calculated values has been found. 4. By means of the equations expressing the laws referred to in paragraphs (2) and (3) it is possible to compute the per- centage of CO formed at any temperature and with any time of contact. 5. It has been shown that for the production of a high per- centage of CO gas, the producer fuel bed should have a tempera- ture of 1300' C. or over, and that increasing the depth of the hot THE FORMATION OF CO IN GAS PRODUCERS 35 portion of the bed will increase the percentage of CO, and conse- quently the capacity of producer at first rapidly and then more and more slowly. 6. To minimize the production of CO in the boiler furnace the fuel bed should be thin. Increasing the velocity of the gas will tend to decrease rather than increase the percentage of CO formed. THE FORMATION OF CO IN GAS PRODUCERS APPENDIX ON THE COMPUTATION OF THE CONSTANTS OF THE REACTION EQUATION BY CHARLES N. HASKINS. 1. REDUCTION AND INTEGRATION OF THE DIFFERENTIAL EQUATION. The differential equation is dx a+ 1 d = k, ( a - 2 x ) - k2', where 100 a = % ( CO + CO2 ) at time t = 0, (1) 100 x = % CO at time t, t = time in seconds, and k, and k2 are the two constants of the reaction the values of which are sought. The initial condition is that a = 0, when t = 0. (2) To integrate, we introduce a new variable z and new con- stants a, y, defined by the relations a +1 2 i) x b 2 az ( 2 (2 (3) a- _ ( 1 + (-, 7 1- (a +1 (1) S= , 1 a +1 a ( 1 - 2 ) The differential equation becomes, under these substitutions, 38 ILLINOIS ENGINEERING EXPERIMENT STATION dz = -2) (4) with the initial condition z = 0, when t = 0. (5) Integrating, we have In + _ = 2 at ; (6) and solving for z, e at - e -at z = reat + e -at = r tanl at; (7) whence, substituting in (3), 2 ar tank at a + 1 1 (8) 2 )( ± Y 2. THE EQUATION FOR y AND THE CRITERION FOR THE EXISTENCE OF ONE AND ONLY ONE ROOT. We have (equation 8) an expression by means of which the per cent of CO at any time t may be computed if the constants y and a are known. We now wish to determine y and a from two pairs of observed corresponding values of t and x. Let these two pairs be (t,, x), and ( t2 x2), and let t2 > t1. 
Then, since a is known, we may compute by means of (3) the values z1 and z2 corresponding to x1 and x2, and have

ln[(γ + z1)/(γ - z1)] = 2αt1,   ln[(γ + z2)/(γ - z2)] = 2αt2,   (9)

from which γ and α are to be determined. Eliminating α, we readily obtain

ln[(γ + z2)/(γ - z2)] = (t2/t1) ln[(γ + z1)/(γ - z1)].¹   (10)

¹ The symbols ln x, log x will be used to denote the natural and the common logarithm of x, respectively.

The determination of γ and α depends therefore on the solution of this (transcendental) equation in γ. Consideration of the function

U(γ) = t1 ln[(γ + z2)/(γ - z2)] - t2 ln[(γ + z1)/(γ - z1)]   (11)

and of its derivative

U'(γ) = 2 [ t2 z1/(γ² - z1²) - t1 z2/(γ² - z2²) ]   (12)

shows that the equation

ln[(γ + z2)/(γ - z2)] = (t2/t1) ln[(γ + z1)/(γ - z1)]   (10)

has a root γ > z2 when and only when t2 z1 - t1 z2 > 0, that is,

z2/z1 < t2/t1.   (13)

If a root exists there is but one, and it satisfies the inequalities

z2 < γ < z1 z2 (t2 - t1) / (t2 z1 - t1 z2).   (14)

The inequality (13) furnishes a negative criterion for the applicability of the differential equation (1) to a reaction under investigation. For if the reaction is governed by equation (1) and if the observations are made with sufficient accuracy there must exist a γ satisfying equation (10) and hence the inequality (13) must be satisfied. If, then, this inequality is not satisfied, and hence no such γ can be found, then either the assumptions involved in (1) must be invalid, or there must be errors in the observations. On the other hand, if (13) is satisfied we can only conclude that (1) may be applicable, and we proceed to determine whether it is so by computing γ and α and comparing the values of x computed by means of (8) with the observed values of x.

3. SOLUTION OF THE EQUATION ln[(γ + z2)/(γ - z2)] = (t2/t1) ln[(γ + z1)/(γ - z1)].

If the selected pairs (t1, x1), (t2, x2) of observed values satisfy the criterion

z2/z1 < t2/t1,   (13)

we compute γ as follows. Passing, for convenience of computation, from natural to common logarithms, we have

log[(γ + z2)/(γ - z2)] = (t2/t1) log[(γ + z1)/(γ - z1)].   (10a)

Assume now a value for γ, say γ = γ1, where

z2 < γ1 < z1 z2 (t2 - t1) / (t2 z1 - t1 z2).

Then we may compute a quantity log N1 by the equation

log N1 = (t2/t1) log[(γ1 + z1)/(γ1 - z1)].   (15)

Determine now a new value of γ, γ = γ2, by the relation

log[(γ2 + z2)/(γ2 - z2)] = log N1,   (16)

that is,

(γ2 + z2)/(γ2 - z2) = N1,   (16a)

or

γ2 = z2 (N1 + 1)/(N1 - 1).   (16b)

Proceed now to determine a new approximation γ3 from γ2 in the same way that γ2 was determined from γ1, and continue the process until its repetition produces either no change, or a change which is negligible compared with the experimental errors. It will be found that in general the process converges fairly rapidly and only a few repetitions are necessary.¹

¹ This computation may be abridged by the use of Gaussian logarithms.

Suppose, then, that γ has been found by this process. Then α is computed by either of the relations

α = (1/(2t1)) ln[(γ + z1)/(γ - z1)],   α = (1/(2t2)) ln[(γ + z2)/(γ - z2)],   (9a)

or, what amounts to the same thing, if N is the last of the numbers N1, N2, ... used in computing γ,

α = (1/(2t2)) ln[(γ + z2)/(γ - z2)] = (1/(2t2)) ln N = (1/(2t2)) (log N)(ln 10), i.e.

α = (2.3026/(2 t2)) log N.   (17)
4. COMPUTATION OF THE REACTION-CONSTANTS k1, k2, AND VERIFICATION OF THE REACTION-EQUATION.

When the constants α, γ have been computed we can find the original constants k1, k2 by means of the relations (3).

To determine the applicability of the reaction-equation (1) to the case in hand, we have now only to introduce the values α, γ just found into the equation

z = γ tanh αt,   (7)

or into the corresponding expression (8) for x, and compare the values of z or x so obtained with those found by observation. For this purpose equation (7) is the simpler, especially when the z's corresponding to observed values of x have already been computed.

5. CORRECTION OF THE CONSTANTS α, γ BY THE METHOD OF LEAST SQUARES.

The constants α, γ obtained above are determined by two pairs of observations only. It is of course desirable that all the observations be used in fixing their values, as on account of experimental errors the values obtained from different pairs of observations will in general not be identical. We proceed, therefore, to correct the constants by the Method of Least Squares.

Let (t1, x1), (t2, x2), ..., (tn, xn) be n observed pairs of values of t and x. The problem is then to determine α and γ in such a way that if

δi = ln[(γ + zi)/(γ - zi)] - 2αti,

then

nv² = Σ δi²   (18)

shall be a minimum. In order that α and γ shall make nv² a minimum they must satisfy the so-called normal equations

Σ ti δi = 0,   Σ [zi/(γ² - zi²)] δi = 0,   (19)

which may be written

Σ ti { ln[(γ + zi)/(γ - zi)] - 2αti } = 0,   Σ [zi/(γ² - zi²)] { ln[(γ + zi)/(γ - zi)] - 2αti } = 0.   (20)

As their exact solution in their present form is impracticable on account of their complexity we replace them in the usual way by a system of approximately equivalent linear equations, by making use of the fact that the quantities u, v by which α and γ differ from α0, γ0 respectively are small compared with α0, γ0. Substituting

α = α0 + u,   γ = γ0 + v,   (21)

expanding the logarithms by Maclaurin's series, and neglecting terms of the order of uv and v² in comparison with those of the order of u and v, the normal equations become linear in u and v (equations (22), (23); these intermediate forms are not legible in the scanned copy).

Expressing the natural logarithms in terms of common logarithms we have, putting

Ai = ti,   Bi = zi/(γ0² - zi²),   Ci = log[(γ0 + zi)/(γ0 - zi)] - M ti,
K = (ln 10)/2 = 1.15129,   M = (2/ln 10) α0 = 0.86859 α0,   (24)

u Σ Ai² + v Σ Ai Bi = K Σ Ai Ci,
u Σ Bi Ai + v Σ Bi² = K Σ Bi Ci,   (25)

or, in the usual notation of the method of least squares,

u[AA] + v[AB] = K[AC],
u[BA] + v[BB] = K[BC].   (26)

From these equations u and v are readily computed and hence the corrected values

α = α0 + u,   γ = γ0 + v   (21)

are found.

6. APPLICATION OF THE METHOD OF §§ 3, 4 TO OTHER EQUATIONS OF REACTION-VELOCITY.

The well known equation¹

dx/dt = k1 (1 - x)² - k2 x²,   (27)

with the initial condition x = 0 when t = 0, is reducible by the substitutions

z = x/(1 - x),   α = sqrt(k1 k2),   γ = sqrt(k1/k2),   (28)

to the form we have considered, viz.:

¹ Cf. Nernst, Theoretische Chemie, 5 Aufl., p. 564; 2d English edition, p. 568.

dz/dt = (α/γ)(γ² - z²),   (4)

with the initial condition z = 0 when t = 0.
Its integral is, therefore,

ln[(γ + z)/(γ - z)] = 2αt,   (6)

or

z = γ tanh αt,   (7)

or

x = γ tanh αt / (1 + γ tanh αt),   (8)

and the constants are determined by the equation

ln[(γ + z2)/(γ - z2)] = (t2/t1) ln[(γ + z1)/(γ - z1)],   (10)

if the criterion

z2/z1 < t2/t1   (13)

is satisfied.

The more general equation¹

dx/dt = k1 (a1 - x)(b1 - x) - k2 (a2 + x)(b2 + x),   (29)

with initial condition x = 0 when t = 0, is reducible by a substitution of the form

x = (ρ'z + σ)/(1 + z),   (30)

where ρ' and σ are constants depending on a1, b1, a2, b2 but not on k1, k2, to the equation

dz/dt = (α/γ)(γ² - z²).   (4)

The initial conditions, however, are now not

t = 0, z = 0,   (5)

but

t = 0, z = -z0;   (5')

and hence the integral and the equation determining the constants are more complicated.

¹ Cf. Nernst, Theoretische Chemie, 5 Aufl., p. 543; 2d English edition, p. 542.

The equation determining γ is

ln[(γ + z2)/(γ - z2)] - (t2/t1) ln[(γ + z1)/(γ - z1)] = (t2/t1 - 1) ln[(γ + z0)/(γ - z0)],   (31)

and the criterion for the existence of a solution is

t2 z1 - t1 z2 - (t2 - t1) z0 > 0.   (32)

The solution, if existent, is unique, and is determined by a process similar to that of § 3.

7. CONCLUSION.

The analysis in the preceding sections furnishes a simple negative criterion (13) or (32) for the applicability of the differential equation (1) or (29) to a given reaction and, in case this criterion is satisfied, provides a straightforward method of computing the numerical values of the reaction constants k1 and k2 from any two pairs of observed corresponding values of x and t. It renders unnecessary, moreover, except as a matter of control, any observations of equilibrium conditions. The detailed discussion of the more general equation (29) is reserved for subsequent publication.
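Section 3 of the appendix is, in modern terms, a fixed-point iteration for γ followed by a direct evaluation of α. The sketch below restates it in Python: it takes the transformed observations z1 and z2 (assumed already computed from the measured x-values through relation (3)) together with the two observation times, checks criterion (13), and then repeats equations (15)-(16b) until γ stops changing. The starting value is taken in the middle of the bounds (14); this choice, and the convergence tolerance, are illustrative assumptions rather than part of the original procedure.

```python
import math

def fit_gamma_alpha(t1, z1, t2, z2, tol=1e-10, max_iter=100):
    """Successive-substitution scheme of Appendix section 3.

    t1, t2 : observation times (t2 > t1)
    z1, z2 : transformed observations corresponding to x1, x2 via relation (3)
    Returns (gamma, alpha)."""
    # Criterion (13): a root gamma > z2 exists only when z2/z1 < t2/t1.
    if not (z2 / z1 < t2 / t1):
        raise ValueError("criterion (13) violated: equation (1) may not apply, or the data are in error")
    upper = z1 * z2 * (t2 - t1) / (t2 * z1 - t1 * z2)   # upper bound of (14)
    gamma = 0.5 * (z2 + upper)                          # start inside the bounds (14)
    for _ in range(max_iter):
        n = ((gamma + z1) / (gamma - z1)) ** (t2 / t1)  # equation (15), written in exponential form
        new_gamma = z2 * (n + 1) / (n - 1)              # equation (16b)
        if abs(new_gamma - gamma) < tol:
            gamma = new_gamma
            break
        gamma = new_gamma
    alpha = math.log((gamma + z2) / (gamma - z2)) / (2 * t2)   # equation (17)
    return gamma, alpha
```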
{"url":"https://www.ideals.illinois.edu/bitstream/handle/2142/4295/engineeringexperv00000i00030_tei.xml?sequence=1","timestamp":"2014-04-17T10:21:17Z","content_type":null,"content_length":"88184","record_id":"<urn:uuid:149b923a-22ed-44c6-b026-8a6b8022c5bc>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00032-ip-10-147-4-33.ec2.internal.warc.gz"}
Knowledge and wisdom are perhaps the greatest gift and treasure which mankind can possess. In our desire to discover the origins of the universe and to learn of our destiny, many of us will endure considerable physical pain and financial hardship. Some of us will even travel to the very ends of the earth to find it, and even now, mankind reaches out towards the stars, among the distant galaxies of the universe, in the unrelenting quest for more and more knowledge. However, the ultimate truth of mankind's origin may be found locked up in coded information within one of the oldest books of antiquity - The I Ching - the Chinese book of changes. This book, which has continuously inspired sages, philosophers and poets for over three thousand years has just revealed some of its secret knowledge. It originally emerged from obscurity in the north western part of China with a text of peasant omens and divination material. Recent studies by the author has indicated that it contains a mathematical science of continuous change and transformation which underlies all existence. Research has now proved that the objective of its creators was to establish a language for communication between intelligences based on numbers and their symbolism. The commentary text of the I Ching consists of sixty-four linear symbols called hexagrams. These are derived from the association between trigram symbols and the divination interpretation of each hexagram for a given situation. What is particularly significant is the fact that the symbolism within the text forms a binary mathematical system which has been attributed to Baron von Leibniz, a 17th Century German mathematician. It would be easy for any writer of science fiction to speculate upon the possibility that the ancient Chinese used computers over three thousand years ago. Such statements can only lead to the conclusion that at some early stage in the evolution of mankind contact was made with extra terrestrial beings. These beings, continually and periodically, stimulated the minds of a number of selected individuals. This contact over the past decades could have caused the pronounced effects and changes in the way in which mankind formulated his ideas and actions, leading to the advancement of the technological world which we now live in. Although western scholars have usually surmised this significant fact as being purely co-incidental and meaningless (because the binary system, within the I Ching text, appears to have served only as a means to amplify the meaning of the divinatory interpretation) traditional scholars still cling to the belief that the contents of the text constitutes a vast repository of ancient ideas and hidden knowledge, but up to now very little proof has been published which could substantiate their claims. The author's recent research suggests that the I Ching text does in fact contain a considerable amount of unusual mathematical and scientific knowledge, which must have been formulated in its present computerized form by an unknown advanced civilization. The initiation of the research was the discovery of an unknown trigram cyclic sequence that had been incorporated within Tantric symbolism and Buddhist iconography, embossed on an old copper Tibetan Geomantic Calendar Mandala. This particular arrangement of trigram symbols differed from the two standard arrangements known as the Earlier Heaven and the Later Heaven arrangements, which are displayed in most books of the I Ching. 
At first, it was thought that a mistake had been made by the artisan who had fashioned this particular artefact. However, during subsequent investigations, additional and other different trigram cyclic sequence arrangements in a variety of art forms were found. Several artefacts were discovered with the trigram symbols cut into stone, carved from wood blocks and/or embossed onto metal objects of great antiquity. By using a computer, the maximum number of possible combinations and/or permutations for the eight trigram cyclic arrangements was determined to be some 40,320 i.e. factorial 8! The possibility that mathematical and scientific knowledge could have been hidden in a coded form, prompted the author to conduct an in-depth examination and subsequent mathematical and biological analysis of the linear patterns which formed the various trigram cyclic sequences. Quite by chance, a discovery was made which has enabled the author to gain access to some of the immense wealth of hidden knowledge which has been stored for posterity within this strange but practical book of antiquity. There is a considerable number of authors who are only involved in the divination aspect of the I Ching. John C. Compton's research covers the very structure of the mathematical codes which are incorporated within the text of this ancient book. For example, his research notes on Trigram Transition and the Symbolic /Numerical Representation of the numbers - 6, 7, 8 and 9, prove the symbolic / numerical relationship of the I Ching, i.e. QUOTE - How strange it is that Richard Wilhelm, whether by design or perhaps unintentionally, made some of the most profound statements on the I Ching, regarding the nature and relationship of trigram line transition. For instance, in his introduction to the I Ching, he stated in support of an example that: All of the lines of a hexagram do not necessarily change; it depends entirely on the character of a given line. A line whose nature is positive, with increasing dynamism turns into its opposite, a negative line, whereas a positive line of lesser strength remains unchanged. The same principle holds for the negative lines. More definite information about those lines which are considered to be so strongly charged with positive or negative energy that they move, is given in book II in the Great Commentary (pt. 1 chapter IX), and in the special section on the use of the oracle at the end of the book. Richard Wilhelm then made this profound statement - Suffice it to say here that positive lines that move are designated by the number 9, and negative lines that move by the number 6, while none moving lines which serve only as the structural matter in the hexagram, without intrinsic meaning of their own, are represented by the number 7 (positive) and the number 8 (negative).* Thus, when the text reads, "Nine in the beginning means.... " this is the equivalent of saying: "When the positive line in the first place is represented by the number 9, it has the following meaning...." If on the other hand, the line is represented by the number 7, it is disregarded in the interpreting of the oracle. The same principle holds for lines represented by the number 6 and 8 respectively. If we now refer to the Great Commentary, as suggested by Richard Wilhelm, a considerable amount of additional information can be obtained. 
For instance Chapter IX - On the Oracle - Section 1 - informs us that the section contains speculations about numbers similar to those in the section entitled Hung Fan in the Book of History (Shu Ching), which may represent the beginning of the connection between the number speculation of the Book of History and the yin-yang doctrine of the Book of Changes, which played an important role in Chinese thought, especially under the Han Which means, that there is a direct relationship between numbers and the yin-yang doctrine. Section 7 - informs us that the eight signs constitute each a small completion and that a hexagram is made up of two trigrams. The "eight signs" are the eight primary trigrams. In addition it informs us that the lower trigram is also called the inner trigram, and the upper trigram may be called the outer trigram. Which means, that the hexagram is made up of two trigrams, the lower being the inner and the upper the outer trigram. (This is particularly important for determining the trigram arrangement for cyclic sequences or trigram discs). Section 8 - informs us that each of the sixty-four hexagrams can change into another through the appropriate movement of one or more lines. Which means, that each hexagram can change if line movement occurs. There is still more information of trigram transition given in Part 1, Chapter 1 -The Changes in the Universe and in the Book of Changes - In Section 1 - Richard Wilhelm informs us that in the Book of Changes, a distinction is made between three kinds of change: non-change, cyclic change, and sequent change, and he continues with the following statement - Non-change is the background, as it were, against which change is made possible. For in regard to any change there must be a fixed point to which change can be referred; otherwise there can be no definite order and everything is dissolved into chaotic movement. This point of reference must be established and this always requires a choice and a decision. It makes possible a system of co-ordinates into which everything else can be fitted. Which means, that a reference point must be established to enable a system of co-ordinates to be created, into which everything else can be fitted. Richard Wilhelm also states that: The ultimate frame of reference for all that changes is the non-changing. Which means, that the reference point is a non-changing point of reference. Section 2 - informs us that the eight trigrams succeed one another by turns, as the firm and then the yielding displace each other. Richard Wilhelm then makes another profound statement - Here cyclic change is explained. It is the rotation of phenomena, each succeeding the other until the starting point is reached again. Examples are furnished by the course of the day and year, and by the phenomena that occurs in the organic world during these cycles. Cyclic change, then, is recurrent change in the organic world, whereas sequent change means the progressive (non-recurrent change) of phenomena produced by causality. Which means, that cyclic change is a rotation of phenomena that is recurrent, whilst sequent change is progressive (non-recurrent change) of phenomena that is produced by causality. Richard Wilhelm then continues with yet another profound statement on hexagram lines. The firm and the yielding displace each other within the eight trigrams. Thus the firm is transformed, melts as it were, and becomes the yielding; the yielding changes, coalesces, as it were and becomes the firm. 
In this way the eight trigrams change from one another in turn, and the regular alternation of phenomena within the year takes it course. Which means, that the lines within the trigrams may change from one to another to create a different trigram. In Chapter X - The Four Fold Use of the Book of Changes - Section 3 - informs us that the three and five operations are undertaken in order to obtain a change. Divisions and combinations of the numbers are made. If one proceeds through the changes, they complete the forms of Heaven and Earth. If the number of changes is increased to the utmost, they determine all the images on earth. If this were not the most changing thing on earth how could it do Richard Wilhelm then states - A great deal has been said about the "three and five" divisions, and even Chi Hsi is of the opinion that this passage is no longer comprehensible. In fact this may be quite comprehensible, as the three fold division may be associated with the geometric proposition of Pythagoras, and the five fold division may refer to the pentagon which incorporates the mathematical irrational constants of the Universe within its symmetry. What we may ask do all these profound statements by Richard Wilhelm mean? All these statements tell us that: • A trigram symbol may change into another trigram by the transition of individual lines or by the movement of trigram lines within a cyclic sequence. • The statements provide us with The Law of Trigram Transition and its relationship with numerology. • The positive line represented thus_____ becomes the negative line represent thus__ __ and vice versa. • Numerically it means the following: Positive to Positive lines = 7 (non-changing line) Negative to Negative lines = 8 (non-changing line) Positive to Negative lines = 9 (changing line) Negative to Positive lines = 6 (changing line) These simple line transformations form and represent - It also means, that the trigram symbols may be represented as a unique set of three numbers and vice versa. In this manner each trigram that forms a circular arrangement, known as a cyclic sequence or trigram disk, can be given a unique set of clockwise and anti-clockwise numerical values, which are dependent on the lines of the adjoining trigrams next to it. It should be noted that the number created by the transition is always assigned to the trigram line initializing the change. * This statement by Richard Wilhelm is in the author's opinion not correct, because there is a direct mathematical relationship between the I Ching numerical line values and the Genetic Code, i.e, Adenine = 7, Cytosine = 9, Guanine = 6, Thymine / Uracil = 8 See Volume Two of the author's work for the proof of this statement. Similar documentary evidence of the author's research is described in considerable detail within his five books. In the course of the research, the original arrangement of the sixty-four square matrix grid known as the "Key for identifying hexagrams" was discovered and proved to have been utilized as a visual display board, where hexagrams represent computer pixels. The 1st volume of the research work conclusively proves that the I Ching - the book of changes, forms part of an ancient computer system, where a vast library of data and information is stored in a symbolic and pictographic format. The 2nd volume shows the results of the author's research, which conclusively proves the relationship between the I Ching and the Genetic code. 
The evidence presented shows that this relationship is based on a sixty-four numerical eight by eight matrix system, where each individual numerical digit represents the Vedic summation of the electrons contained within the molecular structure of an individual polynucleotide chain. The mathematical methodology utilized appears to be reminiscent of the Chinese lost formalistic natural philosophy that sought to embrace the entire world of thought in a system of number symbols, which saw its beginning during the Ch'in and Han dynasties. The 3rd volume contains examples of the analytical methodology employed. It also contains many technological images, obtained from a number of pictographic samples, extracted from the library of rediscovered hidden symbolic data. The 4th volume contains additional research notes on the Genetic Code matrix system, molecular biological mathematics and a hypothesis on the Diversity of Species. The 5th volume contains the author's research notes on the I Ching and its mathematical relationship with Magic Squares, Genetic Code molecular Hyper-cubes and the Gates of Destiny. It conclusively proves and re-affirms his interpretation of Richard Wilhelm's comments, regarding the Yin/Yang trigram line transformation and non-transformation, which changes the ancient linear line language into a simple numerical line language (Refer to Volume 1, Chapter 7 - Number Mysticism and Yin/Yang line transformation). It shows that the mathematical methodology utilized by the ancient Chinese scribes may have been created in a mystical manner, as it involves the formulation of a unique diagram which represents the "five stages of change" and the mathematical disposition of the natural elements, represented by the Vedic summation of the Heavenly and Earthly numerical characteristics of the Ho T'u - The Dragon Horse diagram and the Lo Shu - The Tortoise diagram. Compton-Kowanz Publications - 2013
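The line-transition numbering quoted earlier (positive to positive = 7, negative to negative = 8, positive to negative = 9, negative to positive = 6) is a simple lookup and can be written down directly. The sketch below does only that; the "yin"/"yang" strings are illustrative labels, not notation taken from the author's books.

```python
LINE_NUMBER = {
    ("yang", "yang"): 7,  # positive line that does not change
    ("yin",  "yin"):  8,  # negative line that does not change
    ("yang", "yin"):  9,  # positive line that changes into its opposite
    ("yin",  "yang"): 6,  # negative line that changes into its opposite
}

def transition_number(old_line, new_line):
    """Return the I Ching line number (6, 7, 8 or 9) for a line transition."""
    return LINE_NUMBER[(old_line, new_line)]
```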
{"url":"http://www.ichingmaster.co.uk/","timestamp":"2014-04-21T04:37:18Z","content_type":null,"content_length":"26966","record_id":"<urn:uuid:62d7d9f3-5025-4939-825b-bd8ab49c1c58>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00247-ip-10-147-4-33.ec2.internal.warc.gz"}
Convert-Me.Com - online units conversion

i want to convert sq feet to sq yard. How do i do it? how many feet form a yard??? i want to buy a carpet and my room is 13 by 10 sq ft. but i am getting carpet in yards so how do i know how much i would need

Re: how to convert sq ft to yard
Answers to your questions are here: How many feet are in a yard? How many square feet are in a square yard?

guest123 wrote: hi. i want to convert sq feet to sq yard. How do i do it? how many feet form a yard??? i want to buy a carpet and my room is 13 by 10 sq ft. but i am getting carpet in yards so how do i know how much i would need

For converting measurements of area (e.g. square feet to square yards), use this page: http://www.convert-me.com/en/convert/area
One square yard = 9 square feet. One yard (linear yard) = 3 feet (linear feet).
If your room is 13 by 10 feet (not square feet!), its area is 13*10 = 130 square feet = 14.44 square yards.
Last edited by convert-me.com on Sat Jan 21, 2006 12:32 pm, edited 1 time in total.

While the provided formula is technically correct, please note that most carpet centers are not willing to cut a piece of carpet for you that will measure your exact measurements of 13' x 10'. Instead, you are probably buying a carpet that is 12' wide x 13' long. Carpets tend to be either 6' or 12' across, and since they will have such a difficult time selling your remnant carpet measuring 2' x 13', they will "give" you that portion along with the carpet you are actually purchasing. Your actual purchase price will be based on the 12' wide x 13' long or 17.33 square yards (12' x 13' = 156 square feet / 9 square feet/yard = 17.33 square yards).

Re: how to convert sq ft to yard
If your room is 13 by 10 feet (not square feet!), its area is 13*10 = 130 square feet = 14.44 square yards.
hi, i would like to know. How many feet are in a yard?

feet and yards
How many feet are in a yard??

Re: yards
guest1981 wrote: hi, i would like to know. How many feet are in a yard?
there are 9 square feet in a square yard

Re: yards
Anonymous wrote:
guest1981 wrote: hi, i would like to know. How many feet are in a yard?
there are 9 square feet in a square yard
3 feet in a lin. yard

How many square feet are in a square yard?

buying carpet
my room is 12 x 15 how many yards of carpet will i need please help..

Re: how to convert sq ft to yard
24 feet equals how many yards

sq. feet in a sq. yard
anyone who doesn't know how many square feet there are in a square yard shouldn't be buying carpet *l* 13x10 room, carpet sold 12 ft wide: purchase 12x13 = 156 sq ft; 156 sq ft divided by 9 = 17.33 sq yards. You have to trim the 2' off the 12'.

Re: How do I convert sq yards to lineal yards
how many yards are in a foot

I would like to buy decking - the measurement for the area is 11 ft x 14 ft and the area of each decking is 17 3/4 x 17 3/4". Please could you work out how many decking I would need to buy - thanks guys

could you please answer this for me. How do I convert my room measurement which is 13 by 13 to sq feet?

carpeting room
how many square yards are in a room measuring 15' by 18' feet

To convert your 13 x 13 room is somewhat easy. Carpet only comes in 12ft rolls. So you first start with a 12ft x 13ft piece of carpet. Now you have a 1 foot by 13 foot piece that needs to be filled. Depending on how many seams you want will determine the amount of carpet you will need. Normally this is done by using 3 foot lengths. So now your room's size is 12ft x 16ft to give you enough carpet to cover the full 13 x 13.
Now let translate this into square yards. If you take the overall length and make it to inches 16ft =192 inches. Take the 192 and divide it by 9 inches(the amount of carpet to equal 1 yard at 12 ft roll length) which will give you 21.33 , so this means your room will require 21 1/3 yards to cover the floor with 5 total seams. The more carpet you buy the less seams you will have, but the more it will cost. Re: carpeting room ables wrote:how many square yards are in a room measuring 15' by 18' feet To do this will require a little more. Remember that carpet normally comes in 12 foot rolls, but some can come in 13.5 and 15 ft rolls. But here goes. Depending on what you buy will determine the true square yards. If your carpet has no pattern to match here is your answer 32 yards with 3 total seams. Re: how to convert sq ft to yard guest123 wrote:hi. i want to convert sq feet to sq yard. How do i do it? how many feet form a yard??? i want to buy a carpet and my room is 13 by 10 sq ft. but i am getting carpet in yards so how do i know how much i would need Here is your answer to this one. Remember that carpet usually comes in 12 ft rolls so you will end up with some let over so your room is actually in carpet is a 12 x 13. Now take the 13 and divide it by 3 and add that amount to the 13 and you get the total sqaure yardage you will need. 13/3= 4.33 + 13 = 17.33 yards. Hope this helped. Re: buying carpet Anonymous wrote:my room is 12 x 15 how many yards of carpet will i need please help.. 12 x 15 equates to 15/3=5+15= 20 sq yrds Re: perimeter and square feet help me to understand why 40*40 square feet = 1600 and the perimetrer or the linear feet is 40+40+40+40 = 160 and as you continue to use the same perimeter such as 30*50 or 20*60 or any combination that amounts to 80 all the perimeter is always 160 but the square feet changes but why doesn't the perimeter or the linear doesn't change ? Re: perimeter and square feet myers wrote:help me to understand why 40*40 square feet = 1600 and the perimetrer or the linear feet is 40+40+40+40 = 160 and as you continue to use the same perimeter such as 30*50 or 20*60 or any combination that amounts to 80 all the perimeter is always 160 but the square feet changes but why doesn't the perimeter or the linear doesn't change ? To understand it try a simple game. Take a lace and tie its ends together. That's your perimeter. Now put it on the table so it makes a circle. Note the area inside. Next line it up so that it makes a long narrow bar. Note again the area inside the lace. It became much smaller now while perimeter remained the same. The experiment above illustrates the fact that area and perimeter are very loosely related. Lineal yard to square feet How do you convert a number in lineal yards to square feet? Re: Lineal yard to square feet Guest wrote:How do you convert a number in lineal yards to square feet? Read this: http://sergey.gershtein.net/blog/2006/1 ... -foot.html Re: buying carpet Anonymous wrote: Anonymous wrote:my room is 12 x 15 how many yards of carpet will i need please help.. 12 x 15 equates to 15/3=5+15= 20 sq yrds Or you can figure it this way. 12 x 15= 180sq ft. To figure sq yrds by sq ft, divide by 9= 20sq yrds Re: how to convert sq ft to yard guest123 wrote:hi. i want to convert sq feet to sq yard. How do i do it? how many feet form a yard??? i want to buy a carpet and my room is 13 by 10 sq ft. but i am getting carpet in yards so how do i know how much i would need = 130 sq ft and convert to yds is = 14.44 sq yds. 
There are 9 sq ft in 1 sq yd so multiply room width by lenght for sq ft and the divide that by 9 to get your yards. Re: buying carpet Anonymous wrote:my room is 12 x 15 how many yards of carpet will i need please help.. Hi my daughter's room is 10 foot long by 8 foot wide how many s.q.y.d would i need in carpet for it hope somebody can help me Thank you Kindest Regards Tina Anonymous wrote:Hi my daughter's room is 10 foot long by 8 foot wide how many s.q.y.d would i need in carpet for it hope somebody can help me Thank you Kindest Regards Tina 10' x 8' = 80 ft^2 80 ft^2 / 9 ft^2 = 8.89 yd^2, so 9 square yards of carpet. Now the problem (you knew there'd be a problem). Most carpet is sold in 13 feet widths so don't just go out and get 9 yards of carpet as it'll be too short in one dimension and too long in the other. You might be able to find a remnant at a carpet store that will work but wherever you buy the carpet, give the tech your dimensions in feet to be sure you get enough carpet. I strongly suggest you have the carpet professionally installed. linear yard Hi. I need help, I dont understand linear yard. How long is it? in meters or feet. Hoping for your kind response. Thank you. Re: linear yard vertigoth wrote:Hi. I need help, I dont understand linear yard. How long is it? in meters or feet. Hoping for your kind response. Thank you. The word "linear" is often over used. When dealing with length, the word "linear" usually doesn't need to be stated. Linear (adj.): Of, relating to, or resembling a line; straight or curved. 1. In, of, along, describing, described by, or related to a straight or curved line: a “linear foot”. 2. Having only one dimension. So, a linear yard is simply a line that's 1 yard long (36"). A linear foot is a line that's 1 foot long (12"). Does that help? reply about "linear" Thank you DIRTMAN. That says it all. I fully understand it now. Re: perimeter and square feet convert-me.com wrote: myers wrote:help me to understand why 40*40 square feet = 1600 and the perimetrer or the linear feet is 40+40+40+40 = 160 and as you continue to use the same perimeter such as 30*50 or 20*60 or any combination that amounts to 80 all the perimeter is always 160 but the square feet changes but why doesn't the perimeter or the linear doesn't change ? To understand it try a simple game. Take a lace and tie its ends together. That's your perimeter. Now put it on the table so it makes a circle. Note the area inside. Next line it up so that it makes a long narrow bar. Note again the area inside the lace. It became much smaller now while perimeter remained the same. The experiment above illustrates the fact that area and perimeter are very loosely related. WRONG....If you tie a lace together and make a circle, it will have the EXACT inside area as if you were to make a square, triangle, or narrow bar. Re: perimeter and square feet Anonymous wrote: convert-me.com wrote: myers wrote:help me to understand why 40*40 square feet = 1600 and the perimetrer or the linear feet is 40+40+40+40 = 160 and as you continue to use the same perimeter such as 30*50 or 20*60 or any combination that amounts to 80 all the perimeter is always 160 but the square feet changes but why doesn't the perimeter or the linear doesn't change ? To understand it try a simple game. Take a lace and tie its ends together. That's your perimeter. Now put it on the table so it makes a circle. Note the area inside. Next line it up so that it makes a long narrow bar. Note again the area inside the lace. 
It became much smaller now while perimeter remained the same. The experiment above illustrates the fact that area and perimeter are very loosely related. WRONG....If you tie a lace together and make a circle, it will have the EXACT inside area as if you were to make a square, triangle, or narrow bar. Wrong? Sorry, but Sergey (Convert-me.com) is correct. Examples with a perimeter of 200 feet: Square = 2500 ft2 Circle = 3183.1 ft2 Equilateral Triangle = 1924.5 ft2 90' * 10' Narrow Rectangle = 900 ft2 And all have a 200' perimeter! Re: how to convert sq ft to yard Thanks for sharing the post and its really good information.. Re: how to convert sq ft to yard I have a room which is 12ft by 15ft and need to know what size carpet in metres i need. Could someone please help me? Thank you Re: how to convert sq ft to yard 1 foot = 0.3048 meters So, it's nearly 3,65m x 5 m if you want to cover the whole floor Last edited by Liandella on Fri Jun 15, 2012 1:08 pm, edited 2 times in total. Re: how to convert sq ft to yard There are many free tools on the web that help with this. Re: how to convert sq ft to yard I have a Carpet cleaning company and when giving out prices over the phone for carpet cleaning I always have a problem converting sq ft to yards , what is the formula Last edited by Dmreed4311 on Sat Nov 05, 2011 7:13 pm, edited 1 time in total. Re: how to convert sq ft to yard ok here are two little tables that should solve the problem sq ft = sq yd 1.0 = 0.11111 2.0 = 0.22222 3.0 = 0.33333 4.0 = 0.44444 5.0 = 0.55556 6.0 = 0.66667 7.0 = 0.77778 8.0 = 0.88889 9.0 = 1.00000 sq yd = sq ft 1.0 = 9.00000 2.0 = 18.00000 3.0 = 27.00000 4.0 = 36.00000 5.0 = 45.00000 6.0 = 54.00000 7.0 = 63.00000 8.0 = 72.00000 9.0 = 81.00000 hope that helps
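The arithmetic that keeps coming up in this thread is compact enough to put into two small functions. The sketch below converts square feet to square yards and estimates the billed carpet yardage for a room that is no wider than the roll; it deliberately ignores seams, pattern repeats and rooms wider than the roll, which the posts above discuss.

```python
def sq_ft_to_sq_yd(square_feet):
    """1 yard = 3 feet, so 1 square yard = 9 square feet."""
    return square_feet / 9.0

def carpet_sq_yd(room_length_ft, room_width_ft, roll_width_ft=12.0):
    """Billed carpet area when the roll width spans the room's narrower side."""
    if room_width_ft > roll_width_ft:
        raise ValueError("room wider than the roll: extra strips and seams are needed")
    return sq_ft_to_sq_yd(roll_width_ft * room_length_ft)

print(sq_ft_to_sq_yd(13 * 10))   # about 14.44 sq yd of floor area
print(carpet_sq_yd(13, 10))      # about 17.33 sq yd of carpet actually billed
```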
{"url":"http://www.convert-me.com/en/bb/viewtopic.php?t=609","timestamp":"2014-04-19T14:51:44Z","content_type":null,"content_length":"92574","record_id":"<urn:uuid:90e62128-c1ee-4cf3-aacc-7cc4905ea764>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00424-ip-10-147-4-33.ec2.internal.warc.gz"}
Google Answers: Diffusion of gases

Hi brittanyl-ga. The theory behind your question goes back to Graham's law of gases. This states that the speed of each particle (atom or molecule) of a gas is proportional to 1 / M^0.5 (i.e., one over the square root of the molar mass of the gas). The molar mass is the atomic mass of the particle expressed in grams.

Here's an example: For Neon (Ne), the atomic mass is about 20.2 daltons, so the molar mass is 20.2 grams. This means that one mole (Avogadro's number of particles, 6.02 x 10^23) has a mass of 20.2 grams. For hydrogen, with an atomic mass of 1 dalton for its single proton, the molar mass would be 1 gram.

For Neon, this means that the speed of the individual Neon atoms is proportional to 1 / sqrt(20.2).

Why is this so? Graham's law is not just arbitrary. Imagine two containers, each with one of two gases at the same given temperature. Temperature is a bulk measure of the kinetic energy of individual particles in a material. At the same temperature, the average energy of the particles in each container will be the same. But... the particles in each container have different masses. Let's say the first container has Neon and the second has Argon (Ar). On average, then, we have the following:

KE(Ne) = KE(Ar)
1/2 M(Ne) v(Ne)^2 = 1/2 M(Ar) v(Ar)^2   [KE = 1/2 mv^2]
M(Ne) v(Ne)^2 = M(Ar) v(Ar)^2
v(Ne)^2 / v(Ar)^2 = M(Ar) / M(Ne)

taking the square root of each side:

v(Ne) / v(Ar) = sqrt[M(Ar)/M(Ne)]

So, as stated in Graham's law, the velocity of individual particles in gases at the same temperature is proportional to the reciprocal of the square root of the molar mass. In simpler terms, the heavier each particle is in a gas, the slower it moves to have the same kinetic energy.

So, the lightest gas particles will need to move the fastest to have equal kinetic energies, from the equation above. The faster a gas particle moves, the faster the gas will disperse overall. Think of the gas particles moving towards the edges of the container... if the particles there are moving faster, the volume of the gas will expand faster.

So, the molar masses of the gases you list are as follows:
Neon (Ne): 20.2 grams
Argon (Ar): 39.9 grams
Krypton (Kr): 83.8 grams
Chlorine gas (Cl2): 71 grams (35.5 x 2)

So, Neon is the lightest and will therefore move the fastest if all of these gases are at the same temperature. For this reason, Neon will diffuse the fastest.

Here is a page that further describes the diffusion of gases, including a lab:

A nice discussion of the motion of particles in gases can be found at this site from the Oklahoma State introductory chemistry course:

Best of luck in your studies. Please feel free to request any clarification.
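Graham's law lends itself to a quick check by computer. The short sketch below compares the four gases discussed above; the molar masses are approximate values in grams per mole, and the rates are expressed relative to krypton, the heaviest of the four.

```python
import math

def relative_rate(molar_mass, reference_molar_mass):
    """Graham's law: rate1 / rate2 = sqrt(M2 / M1)."""
    return math.sqrt(reference_molar_mass / molar_mass)

gases = {"Ne": 20.2, "Ar": 39.9, "Kr": 83.8, "Cl2": 71.0}
for gas, mass in gases.items():
    print(gas, round(relative_rate(mass, gases["Kr"]), 2))
# Ne has the smallest molar mass, so it diffuses fastest (about 2x the rate of Kr).
```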
{"url":"http://answers.google.com/answers/threadview/id/546023.html","timestamp":"2014-04-18T08:04:09Z","content_type":null,"content_length":"9724","record_id":"<urn:uuid:bb58d632-c612-4220-ab8a-9c2bf9ac6023>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00496-ip-10-147-4-33.ec2.internal.warc.gz"}
User minimax
Member for 4 years, 4 months; seen Apr 10 at 18:39; profile views 279.

Recent activity:
Mar 6: comment on "Algorithm for computing basis of zero dimensional ring?": @Qiaochu: I just meant the global section of Spec of that ring, i.e. the ring itself.... It seems unnecessary to use that language so I have changed the working. Sorry for the ...
Mar 6: revised "Algorithm for computing basis of zero dimensional ring?" (added 25 characters in body)
Mar 6: comment on "Algorithm for computing basis of zero dimensional ring?": @Steven: I am curious what algorithm does that command use?
Mar 6: asked "Algorithm for computing basis of zero dimensional ring?"
Feb 21: comment on "Computer package to compute HOMFLY polynomial?": @Steven: I have installed the package and it works nice! One more question, how to generate the cable over trefoil in that package?
Feb 21: comment on "Computer package to compute HOMFLY polynomial?": @Andrew: Thanks!
Feb 20: awarded Commentator
Feb 20: comment on "Computer package to compute HOMFLY polynomial?": @Steven: Thanks, I will try it out!
Feb 20: comment on "Computer package to compute HOMFLY polynomial?": @Ryan: I want to compute the HOMFLY polynomial of (3,19) cable over trefoil.
Feb 19: asked "Computer package to compute HOMFLY polynomial?"
Jan 19: comment on "What are the general techniques for proving a variety is not toric?": @Piotr: Thanks, it has been changed!
Jan 19: revised "What are the general techniques for proving a variety is not toric?" (deleted 4 characters in body)
Jan 17: comment on "What are the general techniques for proving a variety is not toric?": @Piotr: Changed!
Jan 17: revised "What are the general techniques for proving a variety is not toric?" (edited title)
Jan 17: asked "What are the general techniques for proving a variety is not toric?"
Jan 3: comment on "When will the pushforward of a structure sheaf still be a structure sheaf?": Is the reference to Hartshorne III.10.3 correct?
Dec 31: accepted an answer to "Why nilpotent elements must be allowed in modern algebraic geometry?"
Dec 31: awarded Editor
Dec 31: revised "Cell decomposition of punctual Hilbert scheme of points on $A^n$?" (deleted 3 characters in body; edited title)
Nov 20: comment on "How Fine One Must Choose an Affine Cover to get Weil Restriction?": @nosr: I think I figured this out, by infinitesimal criterion for etaleness: a morphism $f: X\to Y$ of locally of finite presentation iff for any $Y$-scheme $T$ and closed subscheme $T_0$ of $T$ defined by a square zero ideal, the map of sets $\Hom_Y(X,T)\to\Hom_Y(X,T_0)$ is bijective. Now etaleness follows if we use the universal property defining Weil restriction and the above criterion....
{"url":"http://mathoverflow.net/users/1992/minimax?tab=activity","timestamp":"2014-04-16T19:29:32Z","content_type":null,"content_length":"44962","record_id":"<urn:uuid:8e86440d-8aa4-475a-88b4-a59fa0f439cd>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00107-ip-10-147-4-33.ec2.internal.warc.gz"}
Realtime Simulation

Robot design optimization

In the EU project RealSim (Real-time Simulation for Design of Multi-physics Systems), DLR and KUKA worked together to develop methods and tools that ease the design of new robots, or of variants of existing robots, by taking into account the interaction of the mechanics, electronics and software systems of a robot early in the design phase. The goal is to reduce development cost and time and to improve robot performance. It is planned to include the developed and demonstrated technology step by step into the actual design process at KUKA.

The figure sketches the design process. After the initial specification of the robot (desired payload, work space, etc.), a first model of the robot is created. In a robot component library, the data of older designs are available; they can be utilized directly or serve as a reference. The initial design is carried out following heuristic rules, e.g., by "static" calculations to check the joint torques in different arm configurations and adjust the kinematic parameters. The initial design is followed by a MOPS design optimization phase in which a good compromise candidate is determined by minimizing the maximum of a set of (appropriately scaled) criteria for a set of typical industrial tasks. Finally, the design can be verified in a real-time simulation using the actual robot control hardware, with the not-yet-existing robot replaced by a real-time simulation.

Real-Time Simulation

Flexible, detailed robot models are the core of robot design optimization. They are difficult to obtain because multi-domain models are required, containing components from multi-body systems, drive trains, electrical systems and controllers. Furthermore, fast simulations are essential for optimization, rapid control prototyping and hardware-in-the-loop simulation. A robot component library describing the robot is the basis for all design phases. A typical robot model is shown in the figure. It is based on the free, object-oriented modeling language Modelica. Modelica models are simulated with the commercial simulation package Dymola from the Swedish company Dynasim. In the RealSim project, Dynasim and DLR jointly developed new algorithms for the real-time simulation of stiff differential-algebraic equation systems, so that detailed robot models can be simulated in real time. For the robots under consideration these algorithms give a speed-up of at least 15 with respect to standard integrators, such as the explicit or implicit Euler method with fixed step size used in real-time simulations, and also with respect to the state-of-the-art offline variable-step integrator DASSL. The basic idea is that stiff variables are discretized with the implicit Euler method and the other variables with the explicit Euler method, and that the symbolic algorithms of Dymola are applied to the discretized model equations. These developments were demonstrated at the 2001 Hanover Fair.

The figure shows one application of the proposed real-time models. The standard KUKA control system, including the KUKA control panel seen in front, controls a virtual robot instead of the real robot. This virtual robot is based on a real-time simulation of a detailed Modelica robot model (about 80 differential and 1000 algebraic equations) together with a CAD-data-based online animation that gives immediate visual feedback, complemented by process data visualization such as path deviations or end-effector vibrations.
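The mixed discretization mentioned above can be illustrated with a small sketch. The actual method applies Dymola's symbolic processing to the discretized equations and solves the implicit part with tailored techniques; the fragment below only shows the basic idea of advancing non-stiff states with explicit Euler and stiff states with implicit Euler, here solved by a few fixed-point sweeps (a production solver would typically use a Newton iteration). All names are illustrative.

```python
def mixed_euler_step(y, z, f, g, h, sweeps=5):
    """One step of a partitioned Euler scheme for
        y' = f(y, z)   (non-stiff states, explicit Euler)
        z' = g(y, z)   (stiff states, implicit Euler)."""
    y_new = y + h * f(y, z)            # explicit update of the non-stiff part
    z_new = z                          # initial guess for the implicit equation
    for _ in range(sweeps):            # solve z_new = z + h * g(y_new, z_new)
        z_new = z + h * g(y_new, z_new)
    return y_new, z_new
```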
{"url":"http://www.dlr.de/rm/en/desktopdefault.aspx/tabid-3798/6002_read-8850/usetemplate-print/","timestamp":"2014-04-20T00:18:40Z","content_type":null,"content_length":"12914","record_id":"<urn:uuid:5cc6fe8c-20d7-4258-807c-b147ff2bde62>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00367-ip-10-147-4-33.ec2.internal.warc.gz"}
{"url":"http://www.studyblue.com/notes/note/n/ch35_1_young_freedmanpdf/file/328105","timestamp":"2014-04-18T15:40:27Z","content_type":null,"content_length":"35464","record_id":"<urn:uuid:74cc0f93-3f5b-4e0b-9319-e7ca9129beec>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00619-ip-10-147-4-33.ec2.internal.warc.gz"}
Solid State Physics

We Excel in Solving Solid State Physics Homework Problems of All Kinds:
Solid-state physics is a branch of physics that deals with the intrinsic properties of solid materials. This includes crystallographic studies of the unit cells that make up solids, including metals. At an elementary level, it also involves calculations on lattices, such as the Bravais lattices. The study of crystallographic dimensions may not be easy if the basic concepts are not clear. In such a case, you may contact us, as we provide the best solid state physics assignment help. We also provide all kinds of solid state physics homework help, so if you have any solid state physics homework problems, you are most welcome.

Why Do Students Need Solid State Physics Problems And Solutions?
• difficulty in multitasking;
• lack of proper guidance and assistance;
• difficulty with calculus;
Most students score low in solid-state physics. The reason is poor performance in solid state physics homework. It is ideal to do your homework on your own, but what do you do when you do not know how to solve solid state physics homework problems? Difficulty with calculus is a fact that most students face. In addition, solid state physics assignments consume a lot of time, and at times students fail to grasp the essence of the problem.

We Provide Complete Assistance With Problems In Solid State Physics:
• we undertake all types of solid state physics assignments;
• we attend to your requests 24 x 7;
• we offer challenging solid state physics solutions;
We provide the best guidance to students facing problems with any kind of assignment. Our long service and experience help us rescue you from all your difficulties with your studies. We have a dedicated help line that attends to your queries around the clock, so if you have difficulty with any solid state physics homework, you can approach us at any time. Moreover, we are committed to providing high-quality services that are based on authentic and reliable information sources. Thus, we claim to provide complete solid state physics problems and solutions for any kind of difficulty that students face.

Why Choose Us To Solve Problems In Solid State Physics?
• dedicated team of experts on board;
• we undertake time-bound projects;
• around-the-clock live support;
Most problems in solid state physics are of a recurring nature, so you need professionals who can help you solve all your difficulties. We have an experienced and knowledgeable team, and you may choose us to solve any difficulty relating to a solid state physics assignment. We are available day and night at your disposal, and you can contact the live support desk at any hour to find us. In addition, all our solid state physics homework is time bound and we take special care to complete it within the time requested.
{"url":"http://www.physicsexpert.com/solid-state-physics/","timestamp":"2014-04-21T04:33:18Z","content_type":null,"content_length":"17740","record_id":"<urn:uuid:c8e82c74-3ab6-4789-b522-683c0bd41a8b>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00361-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent application title: METHOD FOR CONTROLLING AN ELECTRICAL CONVERTER Sign up to receive free email alerts when patent applications with chosen keywords are published SIGN UP A method is provided for predicting pulse width modulated switching sequences for a multi-phase multi-level converter. With a first predicted switching sequence, due to multi-phase redundancies, equivalent switching sequences are determined. From the equivalent switching sequences, one switching sequence optimal with respect to a predefined optimization goal is selected. The selected switching sequence is used to switch the converter. A method for controlling a converter, wherein the converter is configured for generating an AC current for at least two phases by outputting different voltage levels for each phase, wherein the different voltage levels depend on switching states of the converter, the method comprising: (a) generating a first sequence of voltage vectors, each voltage vector including a voltage level for each phase, by (i) generating a reference waveform for each phase, and (ii) determining the voltage levels for a phase for each voltage vector of the first sequence by deriving the voltage levels from the respective reference waveform for the phase; (b) determining a set of equivalent sequences of voltage vectors by: (i) calculating voltage level differences for each voltage vector of the sequence of voltage vectors, the voltage level differences being differences of voltage levels of the voltage vector, (ii) determining an equivalent voltage vector with equal voltage differences, and (iii) generating an equivalent sequence by replacing at least one voltage vector of the first sequence with the equivalent voltage vector; (c) selecting one sequence from the set of equivalent sequences which optimizes an internal state of the converter, when the sequence is applied to the converter; and (d) applying the first voltage vector of the selected sequence to the converter. The method of claim 1, wherein the reference waveform is generated based on at least one of a reference profile, a reference time, and a maximal amplitude. The method of claim 1, wherein in step (a) a voltage level for a phase is derived from the reference waveform by intersecting the reference waveform with at least one carrier waveform having periodic positive and negative slopes, wherein the at least one carrier waveform covers an interval between a lower voltage level and a higher voltage level of the converter, and wherein the voltage level is set to the lower voltage level if the reference waveform intersects a positive slope of the carrier waveform, and the voltage level is set to the higher voltage level if the reference waveform intersects a negative slope of the carrier waveform. The method of claim 3, wherein each voltage level of a phase relates to a time instant, and wherein the time instant is the time instant at which the carrier waveform intersects the reference The method of claim 3, wherein in the positive slope the carrier waveform linearly increases from the lower voltage level to the higher voltage level, and wherein in the negative slope the carrier waveform linearly decreases from the higher voltage level to the lower voltage level. 
The method of claim 3, wherein an additional voltage level for a phase is derived from the reference waveform, when the reference waveform intersects a voltage level between two carrier waveforms, and wherein the additional voltage level for a phase is set to the next higher voltage level if the reference waveform increases at the point of intersection, and the additional voltage level for a phase is set to the next lower voltage level if the reference waveform decreases at the point of intersection. The method of claim 6, wherein the additional voltage level of a phase is related to a time instant, and wherein the time instant is the time instant at which the reference waveform intersects the voltage level between two carrier waveforms. The method of claim 3, wherein the reference waveform includes a sequence of voltage values, each voltage value relating to a time instant, and wherein, when calculating the intersection between a carrier waveform and the reference waveform, the reference waveform is interpolated between the voltage values. The method of claim 8, wherein the reference waveform between a first voltage and a consecutive second voltage values is interpolated as being the first voltage value. The method of claim 1, wherein in step (c) the sequence is selected by: estimating the internal state of the converter by applying the sequence to a model of the converter; and selecting the sequence with the optimal estimated internal state. The method of claim 1, wherein in step (c) the sequence is selected such that at least one of the following internal states of the converter is optimized: a neutral point potential lies within predefined bounds; at least one of switching losses and the switching frequency are minimized; at least one of a common mode voltage and variations of the common mode voltage are minimized; and an average deviation of an internal state from a predefined internal state is minimal. A non-transitory computer-readable recording medium having a computer program recorded thereon that causes a processor of a computer processing device to execute operations for controlling a converter, wherein the converter is configured for generating an AC current for at least two phases by outputting different voltage levels for each phase, wherein the different voltage levels depend on switching states of the converter, and wherein the operations comprise: (a) generating a first sequence of voltage vectors, each voltage vector including a voltage level for each phase, by (i) generating a reference waveform for each phase, and (ii) determining the voltage levels for a phase for each voltage vector of the first sequence by deriving the voltage levels from the respective reference waveform for the phase; (b) determining a set of equivalent sequences of voltage vectors by: (i) calculating voltage level differences for each voltage vector of the sequence of voltage vectors, the voltage level differences being differences of voltage levels of the voltage vector, (ii) determining an equivalent voltage vector with equal voltage differences, and (iii) generating an equivalent sequence by replacing at least one voltage vector of the first sequence with the equivalent voltage vector; (c) selecting one sequence from the set of equivalent sequences which optimizes an internal state of the converter, when the sequence is applied to the converter; and (d) applying the first voltage vector of the selected sequence to the converter. 
13. A controller for controlling a converter, wherein the converter is configured for generating an AC current for at least two phases by outputting different voltage levels for each phase, wherein the different voltage levels depend on switching states of the converter, wherein the controller comprises a processing unit configured to: (a) generate a first sequence of voltage vectors, each voltage vector including a voltage level for each phase, by (i) generating a reference waveform for each phase, and (ii) determining the voltage levels for a phase for each voltage vector of the first sequence by deriving the voltage levels from the respective reference waveform for the phase; (b) determine a set of equivalent sequences of voltage vectors by: (i) calculating voltage level differences for each voltage vector of the sequence of voltage vectors, the voltage level differences being differences of voltage levels of the voltage vector, (ii) determining an equivalent voltage vector with equal voltage differences, and (iii) generating an equivalent sequence by replacing at least one voltage vector of the first sequence with the equivalent voltage vector; (c) select one sequence from the set of equivalent sequences which optimizes an internal state of the converter, when the sequence is applied to the converter; and (d) apply the first voltage vector of the selected sequence to the converter.

14. A converter comprising: a converter circuit with switches, the converter circuit being configured for generating output voltages for at least two phases, the output voltages corresponding to different voltage levels generated by switching states of the switches; and the controller of claim 13, the controller being configured for controlling the switches.

15. The method of claim 2, wherein in step (a) a voltage level for a phase is derived from the reference waveform by intersecting the reference waveform with at least one carrier waveform having periodic positive and negative slopes, wherein the at least one carrier waveform covers an interval between a lower voltage level and a higher voltage level of the converter, and wherein the voltage level is set to the lower voltage level if the reference waveform intersects a positive slope of the carrier waveform, and the voltage level is set to the higher voltage level if the reference waveform intersects a negative slope of the carrier waveform.

16. The method of claim 15, wherein each voltage level of a phase relates to a time instant, and wherein the time instant is the time instant at which the carrier waveform intersects the reference waveform.

17. The method of claim 16, wherein in the positive slope the carrier waveform linearly increases from the lower voltage level to the higher voltage level, and wherein in the negative slope the carrier waveform linearly decreases from the higher voltage level to the lower voltage level.

18. The method of claim 17, wherein an additional voltage level for a phase is derived from the reference waveform, when the reference waveform intersects a voltage level between two carrier waveforms, and wherein the additional voltage level for a phase is set to the next higher voltage level if the reference waveform increases at the point of intersection, and the additional voltage level for a phase is set to the next lower voltage level if the reference waveform decreases at the point of intersection.
19. The method of claim 18, wherein the additional voltage level of a phase is related to a time instant, and wherein the time instant is the time instant at which the reference waveform intersects the voltage level between two carrier waveforms.

20. The method of claim 19, wherein the reference waveform includes a sequence of voltage values, each voltage value relating to a time instant, and wherein, when calculating the intersection between a carrier waveform and the reference waveform, the reference waveform is interpolated between the voltage values.

21. The method of claim 20, wherein the reference waveform between a first voltage value and a consecutive second voltage value is interpolated as being the first voltage value.

22. The method of claim 20, wherein the reference waveform is linearly interpolated between a first voltage value and a consecutive second voltage value.

23. The method of claim 8, wherein the reference waveform is linearly interpolated between a first voltage value and a consecutive second voltage value.

RELATED APPLICATIONS

[0001] This application claims priority as a continuation application under 35 U.S.C. §120 to PCT/EP2010/070518, which was filed as an International Application on Dec. 22, 2010 designating the U.S., and which claims priority to European Application 10151549.2 filed in Europe on Jan. 25, 2010. The entire contents of these applications are hereby incorporated by reference in their entireties.

FIELD

[0002] The present disclosure relates to the field of power electronics. More particularly, the present disclosure relates to a method, a computerized implementation of the method, a controller for controlling a converter, and to such a converter.

BACKGROUND INFORMATION

[0003] A multi-level converter may be used for controlling a multi-phase electrical machine. The multi-level converter includes a phase module for each phase generating a number of different output voltages dependent on the design of the phase module. For example, a two-level phase module generates two output voltages (+UDC, 0) and a three-level phase module generates three output voltages (+UDC, 0, -UDC). A phase module may include a plurality of electrical switches, such as power semiconductor switches, which generate the output voltage of the respective phase according to a switching pattern or switching state, which describes which switches of the phase module are conducting (on) and which switches are blocking (off). There are several possibilities (e.g., modulation methods) for generating these switching patterns. For example, switching patterns may be determined with the concept of optimized pulse patterns (OPP). With optimized pulse patterns, a motor's operation may be based on pre-calculated switching patterns that achieve a certain minimization objective, such as the elimination of certain harmonics or the minimization of the total harmonic distortion of the motor current. However, when the motor speed or the amplitude of the voltage or both go below a certain threshold value, the number of pulses required for an optimized pulse pattern is so high that it may become prohibitive. Moreover, at such low values of the motor speed and/or voltage, the usage of optimized pulse patterns does not provide an advantage in terms of the produced value of total harmonic distortion of the motor current, when compared to other methods, such as pulse width modulation (PWM). Thus, in the case of low motor speed and/or voltage, the concept of pulse width modulation (PWM) may be used.
Here, for example, the average value of the output voltage over a modulation cycle that has to be fed to the electrical machine may be controlled by switching between the possible output voltages with a high frequency compared to the fundamental frequency of the AC output voltage. Another possibility is to use the concept of direct torque control (DTC), in which states of the motor, for example, the torque and the magnetic flux, are estimated and are controlled to stay within their hysteresis bands by switching when the respective variable error reaches its upper or lower limit. When any modulation method is used for the operation of a converter, in particular for one with a five-level topology, a key challenge arises: the proper choice of the actual converter switching patterns that reproduce the required output voltages while balancing the internal voltages of the converter (for example neutral point potential, floating capacitor voltages). SUMMARY [0009] An exemplary embodiment of the present disclosure provides a method for controlling a converter. The converter is configured for generating an AC current for at least two phases by outputting different voltage levels for each phase, wherein the different voltage levels depend on switching states of the converter. The exemplary method includes (a) generating a first sequence of voltage vectors, each voltage vector including a voltage level for each phase, by (i) generating a reference waveform for each phase, and (ii) determining the voltage levels for a phase for each voltage vector of the first sequence by deriving the voltage levels from the respective reference waveform for the phase. The exemplary method also includes (b) determining a set of equivalent sequences of voltage vectors by: (i) calculating voltage level differences for each voltage vector of the sequence of voltage vectors, the voltage level differences being differences of voltage levels of the voltage vector, (ii) determining an equivalent voltage vector with equal voltage differences, and (iii) generating an equivalent sequence by replacing at least one voltage vector of the first sequence with the equivalent voltage vector. The exemplary method also includes (c) selecting one sequence from the set of equivalent sequences which optimizes an internal state of the converter, when the sequence is applied to the converter. In addition, the exemplary method includes (d) applying the first voltage vector of the selected sequence to the converter. An exemplary embodiment of the present disclosure provides a non-transitory computer-readable recording medium having a computer program recorded thereon that causes a processor of a computer processing device to execute operations for controlling a converter. The converter is configured for generating an AC current for at least two phases by outputting different voltage levels for each phase, wherein the different voltage levels depend on switching states of the converter. The operations include (a) generating a first sequence of voltage vectors, each voltage vector including a voltage level for each phase, by (i) generating a reference waveform for each phase, and (ii) determining the voltage levels for a phase for each voltage vector of the first sequence by deriving the voltage levels from the respective reference waveform for the phase. 
The operations also include (b) determining a set of equivalent sequences of voltage vectors by: (i) calculating voltage level differences for each voltage vector of the sequence of voltage vectors, the voltage level differences being differences of voltage levels of the voltage vector, (ii) determining an equivalent voltage vector with equal voltage differences, and (iii) generating an equivalent sequence by replacing at least one voltage vector of the first sequence with the equivalent voltage vector. In addition, the operations include (c) selecting one sequence from the set of equivalent sequences which optimizes an internal state of the converter, when the sequence is applied to the converter. The operations also include (d) applying the first voltage vector of the selected sequence to the converter. An exemplary embodiment of the present disclosure provides a controller for controlling a converter. The converter is configured for generating an AC current for at least two phases by outputting different voltage levels for each phase, wherein the different voltage levels depend on switching states of the converter. The controller includes a processing unit configured to: (a) generate a first sequence of voltage vectors, each voltage vector including a voltage level for each phase, by (i) generating a reference waveform for each phase, and (ii) determining the voltage levels for a phase for each voltage vector of the first sequence by deriving the voltage levels from the respective reference waveform for the phase; (b) determine a set of equivalent sequences of voltage vectors by: (i) calculating voltage level differences for each voltage vector of the sequence of voltage vectors, the voltage level differences being differences of voltage levels of the voltage vector, (ii) determining an equivalent voltage vector with equal voltage differences, and (iii) generating an equivalent sequence by replacing at least one voltage vector of the first sequence with the equivalent voltage vector; (c) select one sequence from the set of equivalent sequences which optimizes an internal state of the converter, when the sequence is applied to the converter; and (d) apply the first voltage vector of the selected sequence to the converter. BRIEF DESCRIPTION OF THE DRAWINGS [0012] Additional refinements, advantages and features of the present disclosure are described in more detail below with reference to exemplary embodiments illustrated in the drawings, in which: FIG. 1 schematically shows a motor system with a converter according to an exemplary embodiment of the present disclosure. FIG. 2 schematically shows a phase module according to an exemplary embodiment of the present disclosure. FIG. 3 shows a diagram with voltage vectors for a converter according to an exemplary embodiment of the present disclosure. FIG. 4 shows a flow diagram for a method for controlling a converter according to an exemplary embodiment of the present disclosure. FIG. 5 shows a flow diagram for a method for predicting a first sequence of voltage vectors according to an exemplary embodiment of the present disclosure. FIG. 6 shows a diagram with an example of a reference waveform according to an exemplary embodiment of the present disclosure. FIG. 7 shows a diagram with a further example of a reference waveform according to an exemplary embodiment of the present disclosure. FIG. 8 shows a diagram with a scaled and shifted reference waveform according to an exemplary embodiment of the present disclosure. FIG. 
9 shows a diagram with an output voltage waveform with additional voltage levels according to an exemplary embodiment of the present disclosure. FIG. 10 shows four diagrams with the results of the prediction method. The reference symbols used in the drawings, and their meanings, are listed in summary form in the list of reference symbols. In principle, identical parts are provided with the same reference symbols in the drawings.

DETAILED DESCRIPTION

[0024] Exemplary embodiments of the present disclosure better balance internal states of an electrical converter that is controlled by the pulse width modulation method. An exemplary embodiment of the present disclosure provides a method for controlling a converter. According to an exemplary embodiment of the present disclosure, the converter is configured for generating an AC current for at least two phases by outputting different voltage levels for each phase, wherein the different voltage levels depend on switching states of the converter. For example, the converter may be used for supplying an electrical motor, for connecting a generator to a power grid, or for the interconnection of two power grids. The converter may be a direct or an indirect converter. For generating the voltage levels for each phase, the converter may include a phase module for each controlled phase that includes switches generating the output voltages for the phase. Generally, the phase modules of the converter may have an equivalent design. The output voltage levels depend on the switching state of the switches (opened/closed for each switch). Due to redundancies that may be inherently available in converters, different switching states of the switches of the phase module may generate the same voltage level. According to an exemplary embodiment of the present disclosure, the method includes the step of: (a) generating a first sequence of voltage vectors, each voltage vector including a voltage level for each phase, by: (i) generating a reference waveform for each phase; and (ii) determining the voltage levels for a phase for each voltage vector of the first sequence by deriving the voltage levels from the respective reference waveform for the phase. A voltage vector may include a voltage level for each phase. When a voltage vector is applied to the converter, the switches of each phase module have to be switched such that the voltage level of the voltage vector for the respective phase is output by the respective phase module. A sequence of voltage vectors may include a set of voltage vectors which may be output by the converter at consecutive time instants to generate a modulated output voltage for each phase. In step (a), the first switching sequence is determined by so-called carrier based pulse width modulation (CB-PWM). In this case, the switching instants may be derived from the intersection of the carrier waveform and the reference waveform. A switching instant may include a voltage level and a switching time. Additionally, a switching instant may include a switching state of the phase module that results in the voltage level. For each phase, the reference waveform may be the waveform that on average should be output by the respective phase module. The reference waveform may be determined based on the frequency, the torque or other variables in order for the motor to fulfill certain requirements. The carrier waveform usually is a periodic waveform with a higher frequency than the reference waveform.
For example, the carrier waveform may have a period in a range of 200 to 1000 μs. For deriving the switching instants for each phase, the carrier waveform is intersected with the reference waveform and the intersection points determine the switching time of the switching instant. The voltage level of the switching instant, for example, the voltage level that should be applied to the respective phase module, may also be derived from the slope of the carrier waveform at the intersection point and from the magnitude of the voltage at the intersection point. From the switching instants at the same switching time, a voltage vector at the switching time may be formed. The time-ordered voltage vectors then may form the first switching sequence. According to an exemplary embodiment of the present disclosure, the method includes the step of: (b) determining a set of equivalent sequences of voltage vectors by: (i) calculating voltage level differences for each voltage vector of the sequence of voltage vectors, the voltage level differences being differences of voltage levels of the voltage vector; (ii) determining an equivalent voltage vector with equal voltage differences; and (iii) generating an equivalent sequence by replacing at least one voltage vector of the first sequence with the equivalent voltage vector. In step (b), the so-called multi-phase redundancy is used for generating equivalent sequences, for example, sequences that generate the same currents in the motor, when the sequence is applied to the motor. (When a sequence is applied to the motor, the voltage vectors of the sequence are applied consecutively to the converter. After the application of a voltage vector, the controller waits a predefined time (which may be defined by the switching times) before applying the next voltage vector. It has to be understood that the sequences are usually not applied to the motor but are used for estimating or simulating the behavior of the converter or the motor for deriving values that have to be optimized.) Since it is not the voltages at the terminals of the motor with respect to the neutral point of the converter, but the voltage differences between the motor terminals that generate the current in the motor, two voltage vectors with equal voltage differences between the phases will generate the same current in the motor. Thus, the voltage differences of all voltage vectors of the sequence may be determined and equivalent voltage vectors with equal voltage differences may be used to generate equivalent sequences. For example, equivalent voltage vectors may be stored in a lookup table. To get all possible equivalent sequences, all possible combinations of equivalent voltage vectors may be substituted into the first sequence. It has to be noted that also so-called one-phase redundancies may be used for generating equivalent sequences. A one-phase redundancy refers to the fact that different switching states of a multi-level converter module can generate the same output voltage level. In this case, a switching state has to be assigned to each voltage level. Thus, the sequences include the switching states of the phase modules. According to an exemplary embodiment of the present disclosure, the method includes the step of: (c) selecting one sequence from the set of equivalent sequences which optimizes an internal state of the converter, when the sequence is applied to the converter.
In step (c), for all equivalent sequences, internal states of the converter are estimated, when the respective sequence would be applied to the converter. For example, the neutral point potentials or potentials of the capacitors may be calculated. Then the sequence is selected which has optimal internal states. According to an exemplary embodiment of the present disclosure, in step (c) the sequence is selected such that at least one of the following internal states of the converter is optimized: a neutral point potential lies within predefined bounds, switching losses and/or the switching frequency are minimized, a common mode voltage and/or variations of the common mode voltage are minimized, an average deviation of an internal state from a predefined internal state is minimal. According to an exemplary embodiment of the present disclosure, in step (c) the sequence is selected by: (i) estimating the internal state (or the development of the internal state with respect to time) of the converter by applying the sequence to a model of the converter; and (ii) selecting the sequence with the optimal estimated internal state. According to an exemplary embodiment of the present disclosure, the method includes the step of: (d) applying the first voltage vector of the selected sequence to the converter. In step (d), not the whole selected sequence, but only the first voltage vector of the sequence is applied to the converter. According to an exemplary embodiment of the present disclosure, the reference waveform is generated based on at least one of a reference profile, a reference time, and a maximal amplitude. The reference profile may be a periodic function, for example a sinusoidal function. Also higher-order periodic deviations may be added. For example, the reference profile sin(x)+sin(3x) may have the advantage of maximizing the utilization of the power converter DC link voltage applied to the motor. The reference profile may include a set of discrete values that are stored in a lookup table. The reference time may be used for defining at which position within the period of the reference profile the reference waveform starts. This may depend on the current angular position of the respective phase, for which the reference waveform has to be calculated. The maximal amplitude may define the maximal value of the reference waveform. The maximal amplitude may be based on a modulation index of the motor the converter is connected to. The modulation index may relate to the maximal amplitude of the voltage that should be applied to the motor and may be derived from the model of the motor, the torque and the angular velocity of the motor. The reference waveform for each phase may be based on the model of the electric machine the converter is connected to. Several rated values exist for the machine: torque, stator flux, rotor flux, and stator current. The reference waveform has to be chosen such that these rated values are met. According to an exemplary embodiment of the present disclosure, in step (a) of the method, a voltage level for a phase is derived from the reference waveform by intersecting the reference waveform with at least one carrier waveform having periodic positive and negative slopes. As already explained, from the intersections of the reference waveform with the carrier waveform, the switching time and the voltage level for the switching instant may be derived.
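A sampled per-phase reference waveform can be illustrated with a short sketch. The sampling at twice the carrier frequency and the cosine form V_ref = (M_i/2)·cos(ω_s·t + θ) + V_off follow the detailed description further below; the concrete parameter values and the function name are illustrative assumptions, not part of the disclosure.

    # Sketch of per-phase reference waveform sampling; parameter values are illustrative assumptions.
    import math

    def sample_reference(m_i, omega_s, theta, f_carrier, n_samples, v_off=0.0):
        """Return n_samples values of V_ref(t) = (m_i/2)*cos(omega_s*t + theta) + v_off,
        sampled at twice the carrier frequency (the peaks and valleys of the carrier)."""
        dt = 1.0 / (2.0 * f_carrier)          # half a carrier period between samples
        return [m_i / 2.0 * math.cos(omega_s * k * dt + theta) + v_off
                for k in range(n_samples)]

    if __name__ == "__main__":
        # Example: modulation index 0.8, 50 Hz fundamental, 500 us carrier period, horizon N = 4,
        # one reference per phase, shifted by 120 degrees.
        f_carr = 1.0 / 500e-6
        for phase, theta in enumerate([0.0, -2 * math.pi / 3, 2 * math.pi / 3]):
            print(phase, [round(v, 3) for v in sample_reference(0.8, 2 * math.pi * 50, theta, f_carr, 4)])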
According to an exemplary embodiment of the present disclosure, each voltage level of a phase relates to a time instant or time point, wherein the time instant may be the time instant at which the carrier waveform intersects the reference waveform. Each derived voltage level for a phase may be related to a certain time instant. If a carrier waveform intersects the reference waveform, a new voltage level for the phase is generated. The new voltage level is related to the time instant of the intersection. All voltage levels for all phases may be generated (or predicted) for a predetermined future time period by the method. After all voltage levels for the predetermined time period have been generated, the voltage levels relating to one time instant are gathered into a voltage vector. If at a time instant voltage levels exist only for certain phases, but not for all phases, the voltage value sustained at that time instant by each missing phase may be inserted into the voltage vector. In this way, a voltage vector may be related to a time instant, for example, the time instant of its voltage levels. The sequence of voltage vectors may be generated from voltage vectors ordered by their time instants. According to an exemplary embodiment of the present disclosure, at least one carrier waveform covers an interval between a lower voltage level and a higher (consecutive) voltage level of the converter. The converter (and in particular a phase module) may be configured to generate N voltage levels. Thus, N-1 carrier waveforms may be generated. For example, if the converter has the voltage levels {-1, 0, 1}, there may be two carrier waveforms, one covering -1 to 0 and one covering 0 to 1. For a five-level converter there may be four carrier waveforms. According to an exemplary embodiment of the present disclosure, the voltage level is set to the lower voltage level, if the reference waveform intersects a positive slope of the carrier waveform and the voltage level is set to the higher voltage level, if the reference waveform intersects a negative slope of the carrier waveform. Each positive slope may be on a first half of a carrier period and each negative slope may be on a second half of the carrier period. In this way, when the reference waveform is between two voltage levels, alternating switching instants are created that produce the same average voltage as the reference waveform. For example, in the positive slope the carrier waveform may linearly increase from the lower voltage level to the higher voltage level, and in the negative slope the carrier waveform may linearly decrease from the higher voltage level to the lower voltage level. Thus, a carrier waveform may be a lambda-shaped (λ) function, the minimal value of the function being the first (lower) voltage level and the maximal value of the function being the second (next higher) voltage level.
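For an N-level converter there are N-1 carrier bands, each bounded by two consecutive voltage levels. A minimal sketch of how a reference value might be mapped to its band; the normalisation of the levels to the range [-0.5, 0.5] follows the scaling used later in the detailed description, and the function name is an assumption:

    # Map a normalised reference value in [-0.5, 0.5] to its carrier band for an n_levels converter.
    def carrier_band(v_ref, n_levels=5):
        """Return (lower_level, upper_level) of the carrier band containing v_ref.
        The n_levels voltage levels are equally spaced over [-0.5, 0.5]."""
        step = 1.0 / (n_levels - 1)                         # width of one carrier band
        levels = [-0.5 + i * step for i in range(n_levels)]
        for lo, hi in zip(levels, levels[1:]):
            if lo <= v_ref <= hi:
                return lo, hi
        raise ValueError("reference value outside the converter range")

    if __name__ == "__main__":
        print(carrier_band(0.3))     # top band of a five-level converter: (0.25, 0.5)
        print(carrier_band(-0.1))    # lower middle band: (-0.25, 0.0)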
With a carrier function having only straight sections, the calculation of the intersections may be straightforward. According to an exemplary embodiment of the present disclosure, an additional voltage level for a phase is derived from the reference waveform, when the reference waveform intersects a voltage level between two carrier waveforms, wherein the additional voltage level for a phase is set to the next higher voltage level, if the reference waveform increases at the instant of intersection, and the additional voltage level for a phase is set to the next lower voltage level, if the reference waveform decreases at the instant of intersection. Like all voltage levels, the additional voltage level of a phase may also be related to a time instant, wherein the time instant is the time instant at which the reference waveform intersects the voltage level between two carrier waveforms. According to an exemplary embodiment of the present disclosure, the reference waveform includes a sequence of voltage values, each voltage value relating to a time instant. These time instants may correspond to the time instants where the carrier slope reverses. The reference waveform may be a discrete curve. Each voltage value may be related to an instant or point in time. Consecutive voltage values may be separated by one half of the carrier period. When calculating the intersection between a carrier waveform and the reference waveform, the reference waveform may be interpolated between the voltage values. For example, the reference waveform between a first voltage value and a consecutive second voltage value may be interpolated as being the first voltage value. In this case, the reference waveform may be seen as a step function. Alternatively, the reference waveform may be linearly interpolated between a first voltage value and a consecutive second voltage value. An exemplary embodiment of the disclosure provides a program element (a computer program) for controlling a converter, which, when being executed by at least one processor, is configured for executing the steps of the method as described in the above and in the following. For example, the processor may be a processor of the controller. In accordance with an exemplary embodiment, the program element (computer program) is tangibly recorded on a non-transitory computer-readable recording medium, which may be any type of non-volatile memory capable of recording such program. An exemplary embodiment of the present disclosure also provides a non-transitory computer-readable medium, in which such a program element is stored (recorded). Examples of a non-transitory computer-readable medium include, but are not limited to, a floppy disk, a hard disk, a USB (Universal Serial Bus) storage device, a RAM (Random Access Memory), a ROM (Read Only Memory), an EPROM (Erasable Programmable Read Only Memory), or a FLASH memory. In accordance with an exemplary embodiment, the program may be downloaded from a data communication network, e.g. the Internet, and recorded on the computer-readable medium. It is also possible that the method is implemented with an FPGA (field-programmable gate array). An exemplary embodiment of the present disclosure provides a controller for controlling a converter, which is configured for executing the method as described in the above and in the following. The controller may include a processor and a memory with the program element to be executed on the processor. Alternatively, the controller may include the FPGA.
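The two interpolation options described above (sustaining the first value, or interpolating linearly) lead to slightly different intersection times with a linear carrier slope. A small sketch, assuming the band-local normalisation used later in the detailed description in which both the carrier and the reference run from 0 to 1 within one carrier band; the function name and the numeric values are illustrative assumptions:

    # Intersection of one carrier half-period with the reference, for step or linear interpolation.
    def intersection_time(v0, v1, slope_positive, t_carr=500e-6, linear=False):
        """Time of the intersection within the half carrier period [0, t_carr/2].
        v0, v1: band-local reference values at the start and end of the half period."""
        half = t_carr / 2.0
        if not linear:                       # reference sustained at v0 (step interpolation)
            x = v0 if slope_positive else (1.0 - v0)
        else:                                # reference linearly interpolated between v0 and v1
            if slope_positive:
                x = v0 / (1.0 - (v1 - v0))
            else:
                x = (1.0 - v0) / (1.0 + (v1 - v0))
        return x * half

    if __name__ == "__main__":
        print(intersection_time(0.3, 0.35, True))                 # step: 0.3 * 250 us = 75 us
        print(intersection_time(0.3, 0.35, True, linear=True))    # slightly later than 75 us

With step interpolation the result reduces to the simple proportionality used later for t_sw; linear interpolation only shifts the crossing slightly when the reference changes little within one half carrier period.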
An exemplary embodiment of the present disclosure also provides a converter. According to an exemplary embodiment of the present disclosure, the converter includes a converter circuit with switches, the converter circuit being configured for generating output voltages for at least two phases, the output voltages corresponding to different voltage levels generated by switching states of the switches. The converter circuit may include the above-mentioned phase modules for generating the output voltages. According to an exemplary embodiment of the present disclosure, the converter includes a controller for controlling the switches, the controller being configured for executing the method as described in the above and in the following. The method as described in the above and in the following is generic and may be applicable to any setup of a multi-level converter controlled by a carrier-based pulse-width modulation technique, for which predictions of the future switching events can be applied to a predictive internal voltage regulation function. To summarize, a method for predicting pulse width modulated switching sequences for a multi-phase multi-level converter is provided. With a first predicted switching sequence, due to multi-phase redundancies, equivalent switching sequences are determined. From the equivalent switching sequences one switching sequence optimal with respect to a predefined optimization goal is selected. The selected switching sequence is used to control the converter. It is to be understood that features of the method as described in the above and in the following may be features of the devices as described in the above and in the following. If technically possible but not explicitly mentioned, also combinations of embodiments of the present disclosure described in the above and in the following may be embodiments of the method and the devices. These and other aspects of the disclosure will be apparent from and elucidated with reference to the exemplary embodiments described hereinafter. FIG. 1 shows a motor system 10 with a three-phase motor 12 and a converter with a converter circuit 14. The converter circuit 14 includes a phase module 16 for each phase P1, P2, P3 to be supplied to the motor. Each of the phase modules 16 has an output 18 connected to the respective phase P1, P2, P3 of the motor 12 and an output 20 that may be used for earthing (grounding) the phase module 16. In accordance with an exemplary embodiment, the terminal 20 is left floating so as to act as a virtual ground. Between the outputs 18 and 20 each phase module 16 is adapted to generate the respective output AC voltage UP1, UP2, UP3. Each of the phase modules further has two inputs 22, 24 which are connected to a separate DC supply voltage UDC. The motor system 10 further includes a transformer 23 which supplies three diode rectifiers 25 with AC current. For example, the transformer 23 may have three or six phase connections on the secondary side. The diode rectifiers 25 may be 6- or 12-pulse rectifiers for generating the supply voltage UDC. The motor system may be a medium voltage system; for example, the supply voltage UDC may be in a range between 1 kV and 50 kV. The converter further includes a controller 26 that is configured to receive control signals like the phase currents from the motor 12 and to control the converter circuit 14 by sending or applying switching state commands to the phase modules, for example, by turning switches on or off in the phase modules.
From the received control signals, the controller 26 estimates internal states of the motor like stator flux, rotor flux, and the electromagnetic torque. With the aid of these quantities the desired phase voltages UP1, UP2, UP3 are calculated by the controller 26. FIG. 2 shows a possible design of the phase module 16, which is configured to generate five different output voltages. With three such phase modules 16, the converter becomes a five-level converter providing the phase voltages UP1, UP2, UP3. The single-phase module 16 is used for ABB's ACS5000 converter topology. The phase module 16 includes a DC link 28 with two capacitors 30, 32 connected in series between the inputs 22 and 24. Between the two capacitors 30, 32, the neutral point 34 of the phase module is located. The phase module 16 further includes a first inverter circuit 36 and a second inverter circuit 38. Each of the inverter circuits 36, 38 includes four power semiconductor switches 40, which are connected in series. The ends of the two series of switches are connected in parallel across the DC link 28 and to the inputs 22, 24. Between the first and the second switch 40 and between the third and the fourth switch 40 of each of the inverter circuits 36, 38, there is a connection to the neutral point 34 of the phase module. The output 18 is connected between the second and third switch 40 of the inverter circuit 38. The output 20 is connected between the second and third switch 40 of the inverter circuit 36. The phase module 16 is configured to generate five different voltage levels between the outputs 18, 20. The voltage levels are generated by connecting the outputs 18, 20 to the input 22 (with a positive potential U+), the input 24 (with a negative potential U-) or to the neutral point 34 (with a neutral point potential U0). The connections are generated by the controller 26 which opens and closes the switches 40 according to a switching state of the phase module. At high motor speed, the controller 26 calculates the switching states according to the optimized pulse pattern (OPP) method, which relies on the precalculation of a set of pulse patterns (that may be coded as sequences of voltage vectors) that represent the voltages the converter needs to supply to the motor at steady state, such that a certain minimization criterion is fulfilled. These pulse patterns are stored in look-up tables. During the operation of the motor, the controller 26 reads out of these tables the pulse patterns that should be applied to the motor 12, depending on the operating conditions. However, the only information that is contained in the OPP and that is read out of the look-up table is the angular position (for example, the time instant) and the voltage levels (e.g., -V, -V/2, 0, V/2, V for each phase module 16) that need to be applied to the motor phase terminals P1, P2, P3. At low motor speed, the controller 26 calculates the switching states according to the carrier based pulse width modulation (CB-PWM) method as explained in the above and in the following. In this operating region (low motor speed) CB-PWM may be advantageous because of the high carrier frequency to fundamental frequency ratio. In particular, the switching states of the phase modules 16 are determined by the controller 26 by executing the control method as explained in the above and in the following. The considerations with respect to the OPP method apply also to the CB-PWM method.
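The split between OPP at high motor speed and CB-PWM at low motor speed can be pictured as a simple dispatch in the controller. The sketch below is illustrative only: the speed threshold, the callable interfaces and the stand-in data are assumptions; the disclosure does not specify a concrete switch-over criterion.

    # Illustrative dispatch between an OPP look-up and online CB-PWM prediction.
    def first_sequence(speed_pu, opp_lookup, cbpwm_predict, low_speed_threshold=0.2):
        """Return the first sequence of voltage vectors for the current operating point."""
        if speed_pu >= low_speed_threshold:
            return opp_lookup(speed_pu)    # pre-calculated optimized pulse pattern
        return cbpwm_predict()             # online CB-PWM prediction (algorithm of FIG. 5)

    if __name__ == "__main__":
        opp = lambda s: [(1, 0, 2), (2, 0, 2)]          # stand-in for a stored pulse pattern
        cbpwm = lambda: [(0, -1, 1), (1, -1, 1)]        # stand-in for a predicted PWM sequence
        print(first_sequence(0.05, opp, cbpwm))         # low speed -> CB-PWM branch
        print(first_sequence(0.8, opp, cbpwm))          # high speed -> OPP branch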
When operating a conventional two-level converter, every possible phase voltage corresponds to a unique switch combination (switching state) that can produce it, creating a one-to-one mapping between the required voltages and the corresponding switch positions. However, this is not the case with the multi-level converter, where the so-called single- and three-phase redundancies are present. Specifically, the term single-phase redundancy describes the availability of two (or more) different switching states of one of the phase modules that produce the same phase voltage but that have the opposite effect on the neutral point potential U0 or on a floating capacitor voltage: if one configuration increases the voltage, the other (for the same current) decreases it. With respect to FIG. 2, one can achieve the same phase voltage by either connecting 18 to 22 and 20 to 34 or 18 to 34 and 20 to 24. The first option will, for a positive current, decrease the neutral point potential U0 while the second will (for the same current) increase it. The single-phase redundancies are commonly exploited for balancing internal converter voltages, as they provide alternatives for the phase voltage required by the modulation scheme that can steer the internal voltages to the desired direction. FIG. 3 shows a diagram with the possible voltage vectors 42 of a five-level converter such as the converter circuit 14. Inside each circle for a voltage vector 42, three numbers are given that indicate the three voltage levels of the voltage vector. For example, the voltage vector 42 includes the voltage levels -1, 2, -2, corresponding to an output of the three phase modules of -UDC/2, UDC and -UDC. The voltage vector 42 may be described by (-1 2 -2). As may be derived from the diagram, only the voltage vector 42 has the voltage differences -3 = -1 - (+2) and 4 = 2 - (-2). FIG. 3 further shows a switching sequence 44 including the voltage vectors (1 0 2), (2 0 2), (2 0 1) and (2 0 0) and an equivalent switching sequence 46 including the voltage vectors (0 -1 1), (1 -1 1), (0 -2 -1) and (1 -1 -1). For example, the first voltage vectors (1 0 2) and (0 -1 1) of the two switching sequences 44, 46 have the same voltage differences 1 and -2. The term three-phase redundancy refers to the redundancies in the voltage differences between the output voltages UP1, UP2, UP3, i.e. the case where different phase voltages can be combined to provide the motor terminals P1, P2, P3 with the same line-to-line voltage. Since all the electrical quantities of the motor depend on the line-to-line voltage rather than the individual phase voltages UP1, UP2, UP3, an even larger number (compared to the single-phase case) of redundant switch positions exist. These switch positions generate equal line-to-line voltages at the machine terminals P1, P2, P3. This is shown in FIG. 3, where one can observe how different combinations of single phase voltages can generate the same line-to-line voltage (voltage differences) and thus the same voltage vector. Thus, in a multi-level converter (in contrast to the two-level converter case), no one-to-one mapping exists, neither between the phase voltages UP1, UP2, UP3 and the corresponding phase module switching states due to single-phase redundancies, nor between the line-to-line voltage (voltage differences) and the overall converter switching states due to the three-phase redundancies.
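The three-phase redundancy can be checked numerically: two voltage vectors produce the same line-to-line voltages exactly when their phase-to-phase differences agree, i.e. when they differ by a common offset on all phases. A small sketch using the sequences 44 and 46 of FIG. 3; the function names are illustrative assumptions:

    # Enumerate three-phase redundant alternatives of a voltage vector and check the FIG. 3 example.
    def differences(vec):
        a, b, c = vec
        return (a - b, b - c)               # the two independent phase-to-phase differences

    def equivalent_vectors(vec, n_levels=5):
        """All vectors with the same differences whose levels stay within the converter range."""
        lo, hi = -(n_levels // 2), n_levels // 2
        out = []
        for shift in range(lo - hi, hi - lo + 1):
            cand = tuple(v + shift for v in vec)
            if all(lo <= c <= hi for c in cand):
                out.append(cand)
        return out

    seq_44 = [(1, 0, 2), (2, 0, 2), (2, 0, 1), (2, 0, 0)]
    seq_46 = [(0, -1, 1), (1, -1, 1), (0, -2, -1), (1, -1, -1)]
    assert [differences(v) for v in seq_44] == [differences(v) for v in seq_46]
    print(equivalent_vectors((1, 0, 2)))   # (-1, -2, 0), (0, -1, 1) and (1, 0, 2) itself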
This implies that when a voltage is required by the OPP or the CB-PWM, an algorithm may be needed to decide (out of the many options that exist) on the appropriate switch positions that achieve the voltage requested by the OPP or the CB-PWM, while balancing the converter internal quantities and additionally reducing its switching losses, reducing its switching frequency, minimizing common mode voltage values, etc. FIG. 4 shows a flow diagram for a method for controlling the converter. In a step S10, a first sequence of voltage vectors is generated. Dependent on the motor speed, the sequence is generated either from OPP patterns or with the algorithm for generating a CB-PWM based sequence described in detail below. In both cases the sequence may be determined dependent on operating conditions of the motor system 10, such as load torque, speed, motor current, and so on. For example, the generated sequence is the sequence 44 shown in FIG. 3. In a step S12, for each voltage vector of the generated (first) sequence, equivalent voltage vectors with equal voltage differences are determined. For example, for the first voltage vector (1 0 2) of the sequence, these would be the voltage vectors (-1 -2 0) and (0 -1 1). This may be done with the aid of a look-up table, which in principle stores the information shown in FIG. 3. Then all possible combinations of equivalent sequences are generated by replacing the voltage vectors of the first sequence with equivalent voltage vectors. One of these sequences would be the sequence 46. Further, to use the one-phase redundancies, in each equivalent sequence, each voltage level of each voltage vector is supplemented with a switching state generating the voltage level. From these sequences, equivalent sequences with equal voltage vectors but with different switching states are derived by replacing the switching states with equivalent switching states, for example, switching states generating the same voltage level for the phase. In a step S14, for each sequence of the set of equivalent sequences generated in step S12, effects of the switchings defined by the sequence on the converter are estimated. In particular, the variation of the neutral point potential U0 of each phase module 16 is calculated with the aid of the integral

ΔU = (1/C) ∫_{T1}^{T2} i(t) dt,

with which the voltage U over each of the capacitors 30, 32 may be calculated. C corresponds to the capacitance of the capacitors 30, 32 of the DC link 28 and i(t) to the current flowing to the respective capacitor 30, 32. The current i(t) depends on the switching state of the phase module 16 encoded in the switching sequence and the motor current of the respective phase. The motor current may be calculated with a model of the motor 12 or may be estimated with a simple sinusoidal function under the assumption that the motor current of the respective phase is substantially determined by its fundamental mode. The times T1 and T2 are determined by the switching sequence which also includes the times when the switchings have to be applied to the converter. In a step S16, the sequences are selected for which the neutral point potential U0 for all phases stays within predefined bounds. In a step S18, for each sequence selected in step S16 the switching losses are estimated. After that, the sequence with the smallest switching losses is selected as the optimal sequence. Alternatively or additionally, further optimization criteria may be the switching frequency or the common mode voltage, etc.
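Steps S14 to S18 can be sketched as a small selection routine. In the sketch below, the discretisation of ΔU = (1/C) ∫ i(t) dt per switching interval, the sinusoidal current mentioned above, the capacitance value, and the simplification that a phase loads the neutral point while its commanded level is 0 are all illustrative assumptions; counting level changes merely stands in for a switching-loss estimate.

    # Rough sketch of steps S14-S18: estimate neutral point drift per candidate sequence and keep
    # the admissible candidate with the fewest level changes (a stand-in for switching losses).
    import math

    C_DC, I_AMP, F_FUND = 5e-3, 100.0, 50.0          # illustrative values (F, A, Hz)

    def phase_current(phase, t):
        return I_AMP * math.sin(2 * math.pi * F_FUND * t - phase * 2 * math.pi / 3)

    def neutral_point_drift(seq, times):
        """Discretised (1/C) * integral of i(t) dt over the intervals between switching times."""
        du = 0.0
        for vec, t0, t1 in zip(seq, times, times[1:]):
            i_np = sum(phase_current(p, t0) for p, level in enumerate(vec) if level == 0)
            du += i_np * (t1 - t0) / C_DC
        return du

    def level_changes(seq):
        return sum(abs(a - b) for v0, v1 in zip(seq, seq[1:]) for a, b in zip(v0, v1))

    def select_sequence(candidates, times, bound=1.0):
        admissible = [s for s in candidates if abs(neutral_point_drift(s, times)) <= bound]
        return min(admissible or candidates, key=level_changes)

    if __name__ == "__main__":
        times = [0.0, 250e-6, 500e-6, 750e-6, 1000e-6]
        cands = [[(1, 0, 2), (2, 0, 2), (2, 0, 1), (2, 0, 0)],
                 [(0, -1, 1), (1, -1, 1), (0, -2, -1), (1, -1, -1)]]
        print(select_sequence(cands, times))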
In a step S20, the first voltage vector of the optimal sequence is applied to the converter. Before applying the next voltage vector, the steps S12 to S18 are executed again, to determine a new optimal sequence, which may deviate from the previously determined optimal sequence, for example due to changes in torque, load or motor currents. The method for optimizing internal states of the converter described herein does not create additional commutations that increase the switching losses, and does not interfere with the harmonic volt-second balance commanded by the pulse width modulator. Therefore, the harmonic distortion of voltages and currents does not increase. The method allows for easy adaptation to different multi-level converter cases. When using the optimization method for PWM, predicted future converter switching instants allow for the balancing of the converter's internal voltages (neutral point potentials, floating capacitor voltages), while satisfying specified objectives (reduced switching losses, as an example). With OPP, predicted future switching instants (for example, the first sequence of voltage vectors) may be already present in the controller 26. When the controller 26 uses PWM, and in particular includes a CB-PWM modulator (programmed in an FPGA, as an example) for controlling the converter, future converter switching instants may not be present for the optimization method, since the CB-PWM modulator may only calculate the next needed switching instant. Thus, one difficulty in the application of CB-PWM may lie in the fact that the future switching instants are not predetermined as for OPPs. For use of the method with a CB-PWM modulator, the upcoming switching instants may be pre-calculated online in an efficient manner according to the following algorithm, an online method of computing the next several switching instants in real time. FIG. 5 shows a flow diagram for a method or algorithm for generating a CB-PWM based sequence of voltage vectors, which may be executed in step S10 of the method of FIG. 4 for generating the first sequence of voltage vectors. The method may be seen as a computational method for prediction of CB-PWM switching instants for the predictive internal voltage balancing algorithm of the multi-level converter. The method may be based on asymmetric sampling of a reference waveform. In a step S30, a reference waveform 50 and the carrier slopes 54 for each phase are calculated. This step will be explained with reference to FIG. 6 and FIG. 7. FIG. 6 shows a diagram with the scaling of a reference waveform 50 and a carrier waveform 52 for a 2-level converter. To simplify the required computation, the reference waveform 50 and the carrier waveform 52 are scaled so that the peak-to-peak base value is one. In particular, the two waveforms 50, 52 only include values between -0.5 and 0.5. The diagram shows the development of the two waveforms 50, 52 with respect to time in seconds. The reference waveform 50 of FIG. 6 can be generated with the following equation:

V_ref = (M_i/2) cos(ω_s t + θ) + V_off,

where M_i is the modulation index (a maximal amplitude), ω_s is the fundamental frequency of the motor 12 and θ is a phase shift. In the present case V_off = 0 since the reference waveform 50 is a simple sinusoidal wave. In principle, any of the well-known reference waveforms may be used. For example, FIG. 7 shows a diagram with the scaling of a reference waveform 50 and four carrier waveforms 52 for a 5-level converter. The reference waveform 50 of FIG.
7 is a superposition of the reference waveform 50 of FIG. 6 with higher-order harmonics. At the beginning, the phase of the carrier waveform 52 and the reference waveform have to be synchronized with the current state of the converter. In particular, for the reference waveform, the parameters M_i, ω_s, θ have to be determined. For synchronizing, the algorithm uses the information from the CB-PWM modulator providing the present operating point of the converter. This information may include the 3-phase voltage level state, the slope of the next carrier cycle and the current clock time of the FPGA. In general, the generated reference waveform 50 is a sequence of N voltage values, where N is the length of the prediction horizon of the method, i.e. V_ref(k) with k = 0, 1, . . . , N-1. The time instants relating to each voltage value (sample instant) of the reference waveform 50 are at the peak and valley of the carrier waveform 52, for example, corresponding to two times the carrier frequency. Due to the special profile of the carrier waveforms 52, only the first carrier slope 54 (positive or negative) of the carrier waveforms 52 has to be determined and is set according to the next carrier signal from the CB-PWM modulator. The future carrier slopes are alternating positive and negative slopes or vice versa. Note that the algorithm does not calculate the carrier waveforms 52, which are only shown in FIGS. 6 and 7 to illustrate the method. In principle, the voltage values of the reference waveforms 50 for each of the three phases are pre-calculated along with the sequence of carrier signal slopes for a predefined prediction horizon N, for example N=4. The prediction horizon of N voltage values of each of the three-phase PWM waveforms begins with a first voltage value synchronized with the current state of the converter. Next, the future voltage values are calculated assuming a steady state operating point, for example, constant modulation index M_i and reference frequency ω_s, for example with the above referenced formula for V_ref. In a step S32 (see FIG. 5), the values of the reference waveform 50 are scaled and shifted from the carrier levels 56 to the range [0, 1] as shown in FIG. 8. A carrier level 56 is defined by two consecutive voltage levels of the converter. If the converter has n_l voltage levels, there are n_l - 1 carrier levels. As is indicated in FIG. 7, a five-level converter has four carrier levels 56 and each carrier level 56 is covered by one carrier waveform 52. An example of scaling and shifting of the waveform begins with the carrier waveforms 52 and reference waveform 50 for a five-level converter shown in FIG. 7. The reference waveform 50 is scaled with (n_l - 1) so that each carrier level 56 has a peak-to-peak magnitude of 1. Each carrier level 56 is then shifted to the range [0, 1], for example the part of the reference in each carrier level is shifted respectively by: top (-1), upper middle (0), lower middle (+1) and bottom (+2). For each carrier level 56, the voltage values of the reference waveform 50 that in the end do not have a value in the range [0, 1] are discarded. After the scaling and the shifting, for each carrier level 56 there is a scaled reference waveform as indicated in FIG. 8. In FIG. 8 also the time axis has been scaled such that the time instants of the voltage values of the reference waveform 50 correspond to the time index k (a natural number). For a five-level converter, this calculation may be done in a way that the values of the reference waveform 50 of FIG.
7 are partitioned with respect to the carrier levels 56. For each k:

The top carrier level: 0.5 ≧ V_ref(k) > 0.25
The upper middle carrier level: 0.25 ≧ V_ref(k) > 0
The lower middle carrier level: 0 ≧ V_ref(k) > -0.25
The bottom carrier level: -0.25 ≧ V_ref(k) ≧ -0.5

After partitioning the reference values for each carrier level 56 they are scaled and shifted:

V_ref_new(k) = (n_l - 1) V_ref(k) - 1 for 0.5 ≧ V_ref(k) > 0.25 (top)
V_ref_new(k) = (n_l - 1) V_ref(k) - 0 for 0.25 ≧ V_ref(k) > 0 (upper middle)
V_ref_new(k) = (n_l - 1) V_ref(k) + 1 for 0 ≧ V_ref(k) > -0.25 (lower middle)
V_ref_new(k) = (n_l - 1) V_ref(k) + 2 for -0.25 ≧ V_ref(k) ≧ -0.5 (bottom)

where n_l = 5 in this particular example. In a step S32 (see FIG. 5) the switching instants (voltage levels and time points, i.e. time instants) for each carrier level 56 (and for each phase) are calculated. For the scaled reference waveform 50 in FIG. 8, only two voltage levels 60, 62 bounding or limiting the carrier level 56 are possible. Due to the scaling the two voltage levels are 0 and 1 in FIG. 8. With the CB-PWM method, the output voltage waveform 58 is set to the lower voltage level 62, if the reference waveform 50 intersects a positive slope of the carrier waveform 52 and the voltage level is set to the higher voltage level 60, if the reference waveform 50 intersects a negative slope of the carrier waveform 52. As a function of the reference waveform 50, the output voltage waveform 58 may be a vector V(k) with k = 0, . . . , N-1:

V(k) = 0 (for a positive carrier slope)
V(k) = 1 (for a negative carrier slope)

To produce the correct output levels, the values V(k) have to be scaled and shifted back to the scaling of FIG. 7. With the scaling of the scaled reference waveform 50, the locations (time instants) of the switching instants within a half carrier cycle are directly proportional to the scaled voltage value, depending on the period of the carrier waveform:

t_sw(k) = V_ref_new(k) T_carr/2 (for a positive carrier slope)
t_sw(k) = (1 - V_ref_new(k)) T_carr/2 (for a negative carrier slope)

where T_carr is the carrier period. During the previous calculations, it has been assumed that the value of the reference waveform 50 is sustained between two time instants k. Alternatively, according to a second embodiment, linear interpolation between the voltage values of the reference waveform 50 is possible to calculate the intersection point of the reference waveform 50 and the carrier waveform 52. In this case S32 is modified to calculate the intersection of two lines. In a step S34 (see FIG. 5), additional switching instants are calculated as necessary. FIG. 9 shows an output voltage waveform 58 with additional switching instants 64a, 64b caused by a jump of the reference waveform 50 from one carrier level 56 to the next due to sampling. The additional switching instants 64a, 64b or additional voltage levels 64a, 64b correspond to points where the reference waveform 50 crosses a voltage level between two carrier waveforms 52. The additional voltage level 64a, 64b is set to the next higher voltage level, if the reference waveform 50 increases at the point of intersection and the additional voltage level for a phase is set to the next lower voltage level, if the reference waveform decreases at the point of intersection. For example, the first additional switching instant 64a at k=1 transitions to a voltage level of 0, since the reference waveform at k=1 intersects the voltage level of -0.25. The second additional switching instant 64b transitions to a voltage level of 0.25 since the reference waveform 50 intersects the voltage level of 0.
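The scaling, shifting and switching-time rules above can be collected in a short per-phase sketch; the function names and the carrier period value are illustrative assumptions:

    # Per-sample CB-PWM switching-instant computation for one phase, following the scaling,
    # shifting and t_sw rules described above; names and numeric values are illustrative.
    def scale_and_shift(v_ref, n_levels=5):
        """Scale a reference sample in [-0.5, 0.5] by (n_levels - 1) and shift its carrier band to [0, 1].
        Returns (v_new, band_index) with band_index 0 for the bottom band."""
        v_scaled = (n_levels - 1) * v_ref                 # five-level case: range becomes [-2, 2]
        band = min(int(v_scaled + (n_levels - 1) / 2), n_levels - 2)
        v_new = v_scaled + (n_levels - 1) / 2 - band      # position inside the band, in [0, 1]
        return v_new, band

    def switching_instant(v_new, slope_positive, t_carr=500e-6):
        """Band-local output level and switching time inside the half carrier period."""
        if slope_positive:
            return 0, v_new * t_carr / 2                  # switch to the lower band level
        return 1, (1.0 - v_new) * t_carr / 2              # switch to the upper band level

    if __name__ == "__main__":
        for k, (v, slope) in enumerate([(0.30, True), (0.28, False), (0.22, True)]):
            v_new, band = scale_and_shift(v)
            level, t_sw = switching_instant(v_new, slope)
            print(k, "band", band, "level", level, "t_sw", round(t_sw * 1e6, 1), "us")

For example, a reference value of 0.30 falls into the top band of a five-level converter; scaled and shifted it becomes 0.2, so on a positive slope the output drops to the lower band level 0.2·T_carr/2 after the start of the half period.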
To determine the locations (time instants) of the voltage level crossings, the algorithm must find neighboring pairs of voltage values of the reference waveform, one of which is below and the other of which is above one of the voltage levels. In step S36 (see FIG. 5), the sequence of voltage vectors is generated. This will be explained with reference to FIG. 10. FIG. 10 shows four diagrams 66, 66a, 66b, 66c with the results of the prediction method with a 4-transition horizon. In all diagrams 66, 66a, 66b, 66c, the x-axis shows the time in seconds. Diagram 66 shows four carrier waveforms 52, a reference waveform 50a for a first phase, a second reference waveform 50b for a second phase and a third reference waveform 50c for a third phase. Diagram 66a shows the output voltage waveform 58a generated from the reference waveform 50a, diagram 66b shows the output voltage waveform 58b generated from the reference waveform 50b and diagram 66c shows the output voltage waveform 58c generated from the reference waveform 50c. In each of the diagrams 66a, 66b, 66c, the switching instants 70 generated for all three phases in the steps S32 and S34 are indicated by small x's. The phase in which the switching transition occurs is indicated by a circle 68 around the x. For every switching transition 68 at a certain time instant t for one of the phases, the voltage level is maintained or sustained in the other phases. After that, for the switching instants at every time instant t, a voltage vector is formed, including the switching time of the switching instants and the three voltage levels for the phases. The time-ordered set of voltage vectors then forms the sequence of voltage vectors predicted by the algorithm. The voltage vectors of the sequence of voltage vectors may be stored in an array V_Vectors:

V_Vectors = (  1   2   1
              -1   0   0
              -1  -1  -1 )

The generated sequence of voltage vectors, for example, the predicted voltage levels along with the switching times, can then be provided to the method of FIG. 4 for determining an optimal or optimized sequence of phase states. It has to be noted that another order of the calculation steps, different from the order indicated in FIG. 5, is possible. For example, the transition times of the reference waveform from one carrier level to another (and thus the time instants of the additional switching instants) may be determined before the switching instants for one carrier level are calculated. Further, for example, the switching instants of step S32 may be calculated separately for the positive and the negative slopes of the carrier waveform. For example, in FIG. 9 the first transition of the reference waveform 50 occurs on a negative carrier slope while the second is on a positive slope. Thus, two selection vectors are created to correctly combine the transitions: positive slope = [2 4 . . . ]; negative slope = [1 3 . . . ]. These two arrays may be combined into one time-ordered array of voltage levels. Out of the time-ordered array for each phase, the array V_Vectors containing the voltage level for each phase corresponding to each switching instant may be created according to the slope of the next carrier and scaled to the previously determined reference waveform partitioning. As a final step, the additional voltage level values due to the level transitions may be inserted into the array V_Vectors at the corresponding time-ordered instants.
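The search for level crossings and the merging of the per-phase switching instants into time-ordered voltage vectors might look as follows; the function names, the sampling interval and the example data are illustrative assumptions:

    # Detect additional level crossings and merge per-phase switching instants into voltage vectors.
    def level_crossings(v_ref, levels, dt):
        """Find neighboring sample pairs that straddle a converter voltage level.
        Returns (time, crossed_level) pairs; the crossing time is placed at the later sample."""
        out = []
        for k in range(len(v_ref) - 1):
            for lvl in levels:
                lo, hi = sorted((v_ref[k], v_ref[k + 1]))
                if lo < lvl < hi:
                    out.append(((k + 1) * dt, lvl))
        return out

    def merge_into_vectors(per_phase_instants, initial_levels):
        """Time-ordered voltage vectors: the switching phase changes, the others sustain their level."""
        events = sorted((t, ph, lvl) for ph, inst in enumerate(per_phase_instants) for t, lvl in inst)
        levels = list(initial_levels)
        seq = []
        for t, ph, lvl in events:
            levels[ph] = lvl
            seq.append((t, tuple(levels)))
        return seq

    if __name__ == "__main__":
        dt = 250e-6                                   # half a carrier period between reference samples
        ref_a = [-0.30, -0.20, -0.05, 0.05]           # illustrative scaled reference samples, one phase
        print(level_crossings(ref_a, [-0.25, 0.0, 0.25], dt))
        inst = [[(100e-6, 1)], [(150e-6, -1)], [(300e-6, 0)]]
        print(merge_into_vectors(inst, (0, 0, 1)))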
While the disclosure has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive; the disclosure is not limited to the disclosed embodiments. Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed disclosure, from a study of the drawings, the disclosure, and the appended claims. In the claims, the words "comprising" and "including" do not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. A single processor or controller or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. Any reference signs in the claims should not be construed as limiting the scope.

It will be appreciated by those skilled in the art that the present invention can be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The presently disclosed embodiments are therefore considered in all respects to be illustrative and not restrictive. The scope of the invention is indicated by the appended claims rather than the foregoing description, and all changes that come within the meaning and range of equivalency thereof are intended to be embraced therein.

LIST OF REFERENCE SYMBOLS

[0145]
10 motor system
12 motor
14 converter circuit
16 phase module
18 phase output
20 earthed output
P1, P2, P3 phase
UP1, UP2, UP3 phase voltage
22, 24 input
23 transformer
25 rectifier
UDC supply voltage
26 controller
28 DC link
30, 32 capacitors
34 neutral point
36, 38 inverter circuit
40 switch
U+ positive potential
U- negative potential
Uo neutral point potential
42 voltage vector
50, 50a, 50b, 50c reference waveform
52 carrier waveform
54 first slope
56 carrier level
58, 58a, 58b, 58c output voltage waveform
60 upper output voltage
62 lower output voltage
64a, 64b additional switching instants
66, 66a, 66b, 66c diagram with results
68, 70 switching instants
CS 184/284A. Foundations of Computer Graphics

Current Schedule (Spring 2014)

Catalog Description: (4 units) Techniques for modeling objects for the purpose of computer rendering: boundary representations, constructive solid geometry, hierarchical scene descriptions. Mathematical techniques for curve and surface representation. Basic elements of a computer graphics rendering pipeline; architecture of modern graphics display devices. Geometrical transformations such as rotation, scaling, translation, and their matrix representations. Homogeneous coordinates, projective and perspective transformations. Algorithms for clipping, hidden surface removal, rasterization, and anti-aliasing. Scan-line based and ray-based rendering algorithms. Lighting models for reflection, refraction, transparency. Light transport and methods for computing global illumination. Basics of animation, non-photorealistic rendering, and image-based rendering.

Prerequisites: CS 61B; programming skills in C or C++; linear algebra and calculus.

Course objectives:
• An understanding of the physical and geometrical principles used in computer graphics
• An understanding of rendering algorithms, and the relationship between illumination models and the algorithms used to render them
• An understanding of the basic techniques used to model three dimensional objects, both as surfaces and as volumes
• An acquaintance with the principles of interaction and of user interfaces

Topics covered:
• Polygon scan conversion (rasterization)
• 2D and 3D Geometric and Modeling Transformations
• Rotation about an arbitrary axis, quaternions, exponential maps
• Homogeneous coordinates and projective geometry
• Planar geometric parallel and perspective projections
• 2D and 3D viewing transformations
• Perspective Pipeline
• Line and Polygon clipping algorithms
• Visible surface determination
• Illumination (Reflectance) models and gamma correction
• Smooth shading methods and Mach band artifacts
• Ray tracing: reflection/refraction/transparency/shadows
• Radiosity, photon mapping, and global illumination
• Texture mapping
• Environment mapping and bump mapping
• Spline curve and surface representations
• Animation
• Image-based and non-photorealistic rendering
Applications of Math: Theory vs. Practice

I have a problem: I learned about a lot of the applications of mathematics from academics. Neither they nor I have had much contact with the "real world" to go and see for ourselves how mathematics are used today (rather than, say, in the pre-computer age). So if there are non-academics out there reading MO, I would very much like to hear from them about how their use of mathematical tools may or may not differ from the academic training they received. I don't expect we'll get a very representative sample of math-users on MO, but that's quite all right, anecdotal evidence is all I'm after.

Precision: I want to read about the tools being used rather than, say, an underlying mathematical motivation (which is another legitimate role of mathematics, but not really part of my question). So for instance, you may say that Google's PageRank algorithm was motivated by the theory of Markov chains, but (from what I can tell), I would not say it uses Markov chains.

Tags: big-picture, applications, teaching, soft-question

Yes, I'd like to see list of desired knowledge for applied mathematician! – Nikita Kalinin Sep 1 '10 at 22:01

@Nikita: This isn't quite the spirit of my question. My impression is that there are many ways in which one can apply mathematics, so each person should have their own list. – Thierry Zell Sep 2 '10 at 15:47

4 Answers

I'm not quite sure how to answer this but I'll take a stab anyway. Once I started working as a mathematician, I found that my grasp of probability and discrete mathematics was very weak (now it is at least adequate). It is quite rare for me to go through the details of writing a proof; instead, I code up an idea in MATLAB (which I also learned outside of academia). Once it works, then I usually have what amounts to a proof embedded in the logical structure of code. Because my initial background was not ideal, the things that I've learned professionally have tended to have direct applications to my work. But this has still been an esoteric bag of tricks, for which I will supply a few examples from the first five years or so of my career (it has been another five years since).

One of the first things I did was to give myself a crash course (now forgotten) in algorithms, crypto and complexity theory. I learned Markov processes and queueing theory to model coarse-grained computer network traffic, and martingales to profile its behavior. I learned the rudiments of graph theory, combinatorics and information theory to develop data structures and work with statistical symmetries in finite strings. I learned about toric varieties and briefly revived my acquaintance with index theory to understand Euler-Maclaurin formulae for polytopes, which were of theoretical import for precisely enumerating/sampling from those same statistical symmetries.

The overarching theme has always been to either develop methods of my own for tackling specific problems determined by needs external to my own narrow interests or to identify if and how someone else's constructions work, as well as to find areas for improvement. In both cases the goal has not been detailed proofs but either code or an argument for doing something in a particular way. I will say that my formal education has been of comparatively little use. The few good techniques and working habits I've developed have come from my professional work and not from school.
My work draws on various bodies of mathematics. Here's a brief description (by no means exhaustive) of how I use math in my work:

• Mathematical programming/optimization: Optimization is used for anything from reconciling actual data to a model (regression), estimating unknown parameters in a system, to finding the best inputs that will extremize some functional in a dynamic system. The applications are endless. When people think mathematical programming, they think Linear Programming. But convex nonlinear programming is actually very well-established. In fact, large problems in nonconvex optimization are routinely solved (although modeling a nonconvex system can be quite an art).

• Real/functional analysis: useful for understanding optimization algorithms. An understanding of convex functions and sets is crucial -- they lead to global solutions (with guarantees) without solving an NP-hard problem, so we exploit convexity properties whenever possible. (Lipschitz) continuity is another important idea; subgradients etc. are important concepts in nonsmooth optimization. Real analysis is not applied directly, but a good understanding of it is required for reading convergence proofs or descriptions of optimization algorithms.

• Computational Geometry: ideas like convex hulls, Voronoi diagrams, etc. are useful in optimization. I use them to partition a problem space into convex regions, or to parametrize a space. The region bounded by convex polytopes can be represented by a set of inequality constraints that can be enforced in an optimization problem. Discrete optimization is used to optimally switch between these regions.

• ODE/DAE theory: used for modeling dynamic systems. In particular, understanding the notion of index in DAEs can help one develop models that are amenable to reliable numerical solution.

• Calculus. Differential calculus is used everywhere (e.g. model sensitivity analysis, automatic differentiation, postoptimality analysis).

• Statistics: projection methods like the Karhunen-Loève transform (related to SVD) are used to reduce the dimensionality of large models constructed from data. They're also the only way to handle correlated/collinear data (in practice, most large datasets in the real world are correlated. The assumption of factor independence built into standard regression techniques often does not hold, so methods like multiple linear regression often have to be modified, for instance into principal components regression, in order for them to be usable on real world large datasets). Also, tools like time-series analysis are used to construct time series models from data.

• Linear algebra: used almost everywhere. These are the basic building blocks for working with nonlinear systems. In particular, efficient numerical solution of sparse structured matrices is crucial to the efficiency of large-scale nonlinear optimization algorithms (the bottleneck is often in the linear algebra solvers, not in the optimization algorithm itself). Tools like SVD are frequently used.

• Numerical methods: used everywhere. Understanding concepts like numerical conditioning is crucial; when modeling, one wants to end up with a system with a Jacobian that is well-conditioned with respect to inversion.

• Misc: Diophantine equations are used to derive certain control laws. Laplace transforms are used for modeling linear-time-invariant systems because they allow differential equations to be manipulated as algebraic ones.
Algebraic Riccati equations are solved in the derivation of the Kalman gain. Fixed-point iteration is used to converge decomposed models.

When I was in grad school, numerical math essentially meant large-scale linear algebra and problems which were reducible to linear algebra (primarily PDEs). When I started working in biostatistics, I was surprised how little use I had for linear algebra beyond basic operations on small matrices. What I did have a need for was random number generation, optimization, and numerical integration.

I use Octave and Matlab to code algorithms to quickly check particular things out. Sometimes, that involves simplifying things. See, for example, my answer to a question which misuses the word "permutation" on this site, when what is really meant is a fairly simple problem. Simplifying complex-sounding questions by looking at all of their aspects is a mathematical trait and habit I've been trained in, especially by mathematics and mathematicians.

Formal education can also mislead you into applying the tools you have at hand rather than the best tool for the job. Just because you've got a chainsaw doesn't mean that you have to or even can use it to solve the problem that needs a delicate chisel or gouge.

I also like to use command line tools such as "sed" and "awk" on large columnar text data files, usually with the "bash" shell in unix or linux.
Group Schemes and Moduli (I) Hi! My name is Matt DeLand, I’m a graduate student at Columbia and I’m responding to Charles’ call for cobloggers. I also study Algebraic Geometry, and have been enjoying Charles’ posts; hopefully I can help out and make some positive contributions. I apologize in advance for the quality of my first post…. Introductions aside, it’s time for some math! In previous posts, the importance of moduli spaces in algebraic geometry has already been underlined (Representable Functors, Grassmannians, Hilbert Schemes, Hom Schemes, Families of Cartier Divisors, etc.). There’s even already a post-series outlining how to take quotients of varieties by finite (in fact reductive) algebraic groups. I’ll start out slowly here toward the goal of explaining what this means. First, a group variety has already been defined in this series. Since the first applications we have in mind are geometric, we’ll work over a fixed algebraically closed field $\text{Spec } k$. All schemes, varieties, and morphisms should be assumed to be over $\text{Spec } k$. At some point we reserve the right to assume that the characteristic is 0. Recall that a group scheme is a scheme $G$ with morphisms $m: G \times G \rightarrow G$, $i: G \rightarrow G$, and $e: \text{Spec } k \rightarrow G$ which satisfy the usual relationships for groups. By an Affine Group Scheme, we will mean that $G$ is isomorphic to an affine scheme $\text{Spec } A$. This will imply that $A$ has a Hopf Algebra structure, but we won’t focus on that for now. For the purposes of this post, all group schemes will be affine. The theory of complete group varieties (also known as Abelian Varieties) is a topic for another day(s). Examples: The following are the classic examples of Group Schemes that we should all be familiar with. 1. $G = \text{Spec } A$ where $A = k[t]$ and the defining morphism are given by: $m : G \times G \rightarrow G$ corresponds to $t \rightarrow t \otimes 1 + 1 \otimes t$, $i: G \rightarrow G$ corresponds to $t \rightarrow -t$, and $e: \text{Spec } k \rightarrow G$ corresponds to $t \rightarrow 0$. This group scheme (in fact variety) is denoted $\mathbb{G}_a(k)$ but we’ll leave out the field notation from here on out. Notice that the functor of points of $G$ defined by $h_G : \text{Affine k-Schemes} \rightarrow \text{Groups}$ (note: for any group scheme $G$, the functor $h_G$ actually takes values in the category of Groups rather than Sets because of the axioms) defined by $Y \rightarrow Hom(Y,G)$ sends $\text{Spec } R$ to the ring $R$ as an additive group. 2. $G = Spec A$ where $A = k[t, t^{-1}]$ and the morphism are given by: $m : G \times G \rightarrow G$ corresponds to $t \rightarrow t \otimes t$, $i: G \rightarrow G$ corresponds to $t \rightarrow t ^{-1}$ and $e: \text{Spec } k \rightarrow G$ corresponds to $t \rightarrow 1$. This group variety is denoted $\mathbb{G}_m$. With the notation as above, $h_G(\text{Spec } R) = R^\times$, the group of invertible elements of $R$, as a group under multiplication. 3. $G = Spec A$ where $A = k[t]/(t^n - 1)$. The morphisms are the same as in Example 2 above, except the map $i$ corresponds to $t \rightarrow t^{n-1}$. For obvious reasons, this scheme is called the group of n-th roots of unity, and is denoted $\mu_n$. Notice that if n is a multiple of the characteristic of the field, then we have encountered our first group scheme which is not a group variety. 4. 
We leave it as an exercise to work out other standard group schemes $GL_n, SL_n, O_n, SO_n, SP_{2n}$ and any others that motivate you.

Definition A morphism of group schemes $f : G \rightarrow H$ is a morphism of schemes which is also a homomorphism of groups. As an example, there is an exact sequence of group schemes $0 \rightarrow \mu_n \rightarrow \mathbb{G}_m \rightarrow \mathbb{G}_m \rightarrow 0$, where the second arrow is given by $t \rightarrow t^n$.

Suppose that $V$ is a finite type scheme; then we define what it means for a group scheme $G$ to act on $V$.

Definition Suppose $G$ is a group scheme and $V$ is a scheme. An action of $G$ on $V$ is given by a map $\rho: G \times V \rightarrow V$ such that $\rho \circ (Id \otimes e) : V \cong V \times pt \rightarrow V \times G \rightarrow V$ is the identity and such that the two maps $\rho \circ (Id \times m), \rho \circ (\rho \times Id) : V \times G \times G \rightarrow V$ are equal. This definition simply encodes the usual definition for a group acting on a set. At the level of rings, there is a dual notion:

Definition: A representation of a group scheme $G = Spec A$ is a k-vector space (not necessarily finite dimensional) $V$ along with a linear map $\mu : V \rightarrow V \otimes A$ which satisfies the dual relations to those of an action. A vector $v \in V$ is called invariant if $\mu(v) = v \otimes 1$ and a subspace $U \subset V$ is called a subrepresentation if $\mu(U) \subset U \otimes A$.

Here are some Examples of group actions on $V = \mathbb{A}^2$: (In each case we leave it to the reader to check that we actually have a group action). If I could draw pictures here I would…

1. $G = \mathbb{G}_m$ acts on $V$ by the map $(t, (x,y)) \rightarrow (tx, ty)$.
2. $G = \mathbb{G}_m$ acts on $V$ in another way by the map $(t, (x,y)) \rightarrow (tx, t^{-1}y)$.
3. $G = \mathbb{G}_a$ acts on $V$ by the map $(s, (x,y)) \rightarrow (x + sy, y)$.
4. $G = \mathbb{G}_m \times \mathbb{G}_a$ acts on $V$ by the map $((t,s), (x,y)) \rightarrow (x + sy, ty)$.
5. $G = \mathbb{G}_m \times \mathbb{G}_a$ acts on $V$ in another way by the map $((t,s), (x,y)) \rightarrow (tx + sy, t^{-1}y)$.
6. Suppose for simplicity that $k$ is algebraically closed and that the characteristic is prime to n. Fix a primitive n-th root of unity, $z$. Then $G = \mu_n$ acts on $V$ by sending $(a, (x,y)) \rightarrow (z^ax, z^ay)$.

Now we can ask what it should mean to take the quotient of $V$ by a group action $G$. The original scheme should certainly map to the quotient, which we will call $Y$. In the best case scenario, the points of $Y$ will correspond to orbits of the action. However, if $f: V \rightarrow Y$ is a quotient, then fibers over closed points are closed in $V$. If there are non-closed orbits, then it can't be the case that points of $Y$ correspond to orbits uniquely. Let's analyze the orbits in the above examples. In Example 1, the origin is an orbit, and all lines through the origin (not including the origin) are also orbits. In Example 2, the origin is an orbit, as are all hyperbolas $xy = a$, as are the x-axis and the y-axis if we leave out the origin. In Example 3, the orbits are all closed: they are "horizontal lines" and all (isolated) points on the x-axis. In Example 4, there is an open orbit which is the complement of the x-axis, and then isolated points on the x-axis. In Example 5, there is the origin, the x-axis minus the origin, and the plane minus the x-axis.
Finally in Example 6, the orbits are collections of n points, except for the origin which is its own orbit. Even with relatively simple group actions we've run into non-closed orbits.

The ideas involved in taking a quotient are made clear by looking at the affine case. Suppose then that $G = \text{Spec } A$ and $V = \text{Spec } R$. Let $R^G = \{ f \in R | \mu(f) = f \otimes 1 \}$ be the set (actually subalgebra) of $G$-invariants for the action. Consider the map $V \rightarrow \mathbb{A}^n$ given by some $G$-invariant functions $f_1, \ldots, f_n \in R^G$. We have an induced (surjective) map $V \rightarrow W = \text{Spec } R^G$ corresponding to the inclusion. Since each $f_i$ is an invariant, this map is constant on $G$-orbits of the action, that is, it sends each orbit to a point. We can ask then: when do distinct orbits map to distinct points? When the algebra $R^G$ is finitely generated, we can take a set of generators to define the map, and hope that the image is a variety and is the quotient we want. We'll see in the future that this is always the case when the action is nice (see below). For the technical definition, we'll follow Mumford:

Definition Suppose a group scheme $G$ acts on a k-scheme $V$. We say that $f: V \rightarrow Y$ is a geometric quotient if:

i) $f \circ \rho = f \circ p_2$ ($p_2$ is the second projection on $G \times V$).
ii) The map $f$ is surjective and the map $(\rho, p_2) : G \times V \rightarrow V \times V$ has image $V \times_Y V$.
iii) The map $f$ is submersive.
iv) The sheaf $\mathcal{O}_Y$ is the subsheaf of $f_*(\mathcal{O}_V)$ consisting of invariant functions. Said another way, if $h \in f_*(\mathcal{O}_V)(U) = \mathcal{O}_V(f^{-1}(U))$, then $h \in \mathcal{O}_Y(U)$ if and only if the two maps $H \circ \rho, H \circ p_2 : G \times f^{-1}(U) \rightarrow \mathbb{A}^1$ are equal. Here $H$ is the map determined by $h$.

It's a mouthful, and vaguely translated, condition i) is the property that the morphism contracts orbits, condition ii) is the property that fibers over closed points correspond to orbits (see the discussion below), and all the conditions together assure that it is the "smallest" variety (that is, satisfying a universal property) that has properties i) and iv).

Now of course, the question becomes, when do geometric quotients exist? The answer will be when the group is reductive (a notion that we won't define until next time). In fact, when $V = \text{Spec } R$ is an affine variety and the action is nice, the coordinate ring of the quotient will be exactly $R^G$. Let's analyze the above examples. In Example 1, the affine quotient is a single point (the only invariant functions are constant)! This will be fixed in the future when we remove the origin and we'll see that the quotient is $\mathbb{P}^1$ as expected. In Example 2, the quotient is $\mathbb{A}^1 = \text{Spec}(k[xy])$. Notice that the non-closed orbits fail to be separated by the quotient map. In Example 3, the quotient is $\mathbb{A}^1 = \text{Spec}(k[y])$. Here, even closed orbits (the points) are not separated; indeed the group $\mathbb{G}_a$ is not reductive. We'll leave the rest as exercises, it should be similar.

Since we haven't covered any theory at all, and since actions of $\mathbb{G}_m$ and $\mathbb{G}_a$ are ubiquitous (though mostly the former), we'll discuss such actions slightly more here. In the special case when $G = \mathbb{G}_m$, the representations are particularly simple.
Given $V$ and an integer $a$, consider the map $V \rightarrow V \otimes k[t,t^{-1}]$ which sends $v \mapsto v \otimes t^a$. This is called a representation of weight a.

Proposition: For each representation $V$ of $\mathbb{G}_m$, there is a direct sum decomposition $V = \bigoplus V_m$ where each $V_m$ is a subrepresentation of weight m.

Proof: Define $V_m = \{ v \in V | \mu(v) = v \otimes t^m \}$. This is a subrepresentation of weight m. To verify the direct sum decomposition, for an arbitrary vector write $\mu(v) = \Sigma v_m \otimes t^m \in V \otimes k[t,t^{-1}]$ (this sum will be finite). By property i) of an action, we'll have $v = \Sigma v_m$, so we just must verify that each $v_m \in V_m$. By property ii) of an action though, we have that $\Sigma \mu(v_m) \otimes t^m = \Sigma v_m \otimes t^m \otimes t^m \in V \otimes k[t,t^{-1}] \otimes k[t,t^{-1}]$. By linear independence of the $t^m$, we must have then that each $v_m \in V_m$.

From this we see that to give a $\mathbb{G}_m$ action on $\text{Spec } R$ is equivalent to specifying a grading decomposition $R = \bigoplus R_m$. The invariants of the action correspond to elements of weight 0.

In characteristic 0, something similar for an action of $\mathbb{G}_a$ is true:

Proposition: Every representation $V$ of $\mathbb{G}_a$ is given by $\mu(v) = \Sigma f^n(v) \otimes \frac{t^n}{n!}$ (sum taken over non-negative integers) for some $f \in End(V)$ which is locally nilpotent (that is, every vector is eventually killed).

We'll leave this proof as an exercise; it's not much harder than the previous one and isn't used as often. Hint: Consider the linear maps $h_n$ defined by $\mu(v) = \Sigma h_n(v) \otimes s^n \in V \otimes k[s]$.

Comments:

August 19, 2008 at 7:02 pm
Concerning the definition of geometric quotient, I thought that it was necessary to strengthen iii) and require the map to the quotient to be submersive (at least following Mumford's definition). Also it is condition ii) that ensures that the geometric fibres agree with the geometric orbits. Regarding the existence of geometric quotients, I don't want to second guess you since you haven't really posted about it, but I just thought I'd point out that being linearly reductive isn't necessary. It is enough to be just reductive (linear reductivity is very restrictive in characteristic p). And then, even when the scheme being acted upon is affine and your group is reductive, the action needs to be closed for the categorical quotient you produce to be geometric. A good post though and I'm looking forward to the follow-ups.

August 19, 2008 at 10:18 pm
Hi Greg, Thanks for keeping me honest on my first post! You're right on both points, I glossed too quickly over Mumford's definition. I should have said the map is submersive, and I should have said that a "nice" action will correspond to a reductive group, though I didn't define what that means. I wanted to stay a little vague about exactly what sorts of quotients existed in what sense until I could make the definitions at least. Thanks for the comments.

August 21, 2008 at 1:06 pm
I think it is more precise to use the notation $\mathbb{G}_{a,k}$ instead of $\mathbb{G}_a(k)$. The first is a group scheme over k, while the second is a set with a group structure that is naturally identified with the underlying additive group of k.

August 21, 2008 at 5:05 pm
Hi Matt!
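As a concrete instance of the grading in the first proposition above, take Example 2 of the post, where $\mathbb{G}_m$ acts on $\mathbb{A}^2$ by $t \cdot (x,y) = (tx, t^{-1}y)$; placing $x$ and $y$ in weights $+1$ and $-1$ (the opposite sign convention works just as well) gives:

```latex
% Weight decomposition of the coordinate ring under the Example 2 action
\[
  k[x,y] \;=\; \bigoplus_{m \in \mathbb{Z}} R_m,
  \qquad
  R_m \;=\; \operatorname{span}_k \{\, x^a y^b \;:\; a - b = m \,\},
\]
\[
  R_0 \;=\; R^{\mathbb{G}_m} \;=\; k[xy],
  \qquad
  \operatorname{Spec} R^{\mathbb{G}_m} \;=\; \operatorname{Spec} k[xy] \;\cong\; \mathbb{A}^1 .
\]
```

This recovers the quotient $\mathbb{A}^1 = \operatorname{Spec}(k[xy])$ computed for Example 2 earlier in the post.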
Somerset Independent Schools

Adjustable Spinner: Students can create a game spinner with variable sized sectors to look at experimental and theoretical probabilities. Parameters: Sizes of sectors, number of sectors, number of trials.
Algebra Four: Students play a generalized version of connect four, gaining the chance to place a piece on the board by solving an algebraic equation.
Algebra Quiz: Similar to Algebra Four, Algebra Quiz gives the user randomized questions to answer on solving algebraic linear and quadratic equations of one variable.
Students practice their knowledge of acute, obtuse and alternate angles.
Area Explorer: Students are shown shapes on a grid after setting the perimeter and asked to calculate areas of the shapes.
Arithmetic Four: A game like Fraction Four but instead of fraction questions the player must answer arithmetic questions (addition, subtraction, multiplication, division) to earn a piece to place on the board.
Caesar Cipher: Students practice simple arithmetic skills by encoding and decoding messages using an affine cipher.
Cantor's Comb: Students learn about fractions between 0 and 1 by repeatedly deleting portions of a line segment, also learning about properties of fractal objects.
Circle Graph: Enter data categories and the value of each category to create a circle graph. Similar to "Pie Chart" but the user can define the data set.
Clock Arithmetic: Students learn about modular arithmetic operations through working with various types of clocks.
Clock Wise: Practice reading a clock.
Coloring Multiples in Pascal's Triangle: Students color numbers in Pascal's Triangle by rolling a number and then clicking on all entries that are multiples of the number rolled, thereby practicing multiplication tables, investigating number patterns, and investigating fractal patterns.
Coloring Remainders in Pascal's Triangle: Students color numbers in Pascal's Triangle by rolling a number and then clicking on all entries that have the same remainder when divided by the number rolled, thereby practicing multiplication tables, investigating number patterns, and investigating fractal patterns.
Comparison Estimator: Compares two sets of objects.
Converts fractions to decimals and decimals to fractions.
Dice Table: Students experiment with the outcome distribution for a roll of two dice by playing a dice throwing game. Parameters: Which player wins on which rolls.
Elapsed Time: Practice finding elapsed time given a starting time and an ending time.
Practice estimation skills by determining the number of objects, length, or area.
Estimator Quiz: Randomized questions to answer on estimating the value of number sentences.
Equivalent Fractions Finder: Visually represent equivalent fractions by dividing squares or circles and shading portions equivalent to a given fraction.
Experimental Probability: Experiment with experimental probability using a fixed size section spinner, a variable section spinner, 2 regular 6-sided number cubes or design your own number cubes. Appropriate for elementary grades.
Factorize, Factorize 2: Learn about factors through building rectangular arrays on a grid.
Fraction Finder: Determine the value of two given fractions represented as points on a number line, then graphically find a fraction whose value is in between the value of the 2 given fractions and determine its value.
Fraction Four: A version of connect four, gaining the chance to place a piece on the board by simplifying a fraction.
Fraction Quiz: Fraction Quiz gives the user randomized questions to answer on simplifying fractions; conversions between fractions, decimals, percents, and percentage problems.
Fraction Sorter: Represent fractions by coloring in the appropriate portions of either a circle or a square, then order those fractions from least to greatest.
Fractured Pictures: Students generate complicated geometric fractals by specifying starting polygon and scale factor.
Function Machine: Students investigate very simple functions by trying to guess the algebraic form from inputs and outputs.
General Coordinates Game: Students investigate the Cartesian coordinate system through identifying the coordinates of points, or requesting that a particular point be plotted.
Graph Sketcher: Students can create graphs of functions by entering formulas -- similar to a graphing calculator.
Students can graph functions and sets of ordered pairs on the same coordinate plane.
Linear Function Machine: Students investigate linear functions by trying to guess the slope and intercept from inputs and outputs.
Students learn about sampling with and without replacement by modeling drawing marbles from a bag. Parameters: Number and color of marbles in the bag, replacement rule.
Students enter data and view the mean, median, variance and standard deviation of the data set. Parameters: Number of observations, range for observations, which statistics to view, identifiers for the data.
Ordered Simple Plot: Another version of "Simple Plot" which allows the user to plot and connect ordered pairs in the order that they are input. This enables pictures to be drawn by connecting the pairs rather than having the computer connect them from left to right.
Pattern Generator: Determine and then continue the pattern generated.
Perimeter Explorer: Students are shown shapes on a grid after setting the area and asked to calculate perimeters of the shapes.
Pie Chart: Students view pie charts. Parameters: Number of sectors, size of sector as a percent.
Plop It!: Students click to build dot plots of data and view how the mean, median, and mode change as numbers are added to the plot. Parameters: Range for observations.
Positive Linear Function Machine: Students investigate linear functions with positive slopes by trying to guess the slope and intercept from inputs and outputs.
Pythagorean Explorer: Students find the length of a side of a right triangle by using the Pythagorean Theorem, and then check their answers.
Racing Game with One Die: Two players each roll a die, and the lucky player moves one step to the finish. Parameters: what rolls win and how many steps to the finish line.
Simple Coordinates Game: Students investigate the first quadrant of the Cartesian coordinate system through identifying the coordinates of points, or requesting that a particular point be plotted.
Simple Plot: Students can plot ordered pairs of numbers, either as a scatter plot or with the dots connected.
Students can create a game spinner with one to twelve sectors to look at experimental and theoretical probabilities.
Surface Area and Volume: Students manipulate dimensions of polyhedra, and watch how the surface area and volume change.
Tortoise and Hare Race: Students step through the tortoise and hare race, based on Zeno's paradox, to learn about the multiplication of fractions and about convergence of an infinite sequence of numbers.
Whole Number Cruncher: Similar to "Number Cruncher" but only generates multiplication and addition functions to avoid outputting any negative numbers.
Class MPInt

public class MPInt extends Object

This is a data type for multi-precision integers of arbitrary sign. Its important data are (1) an array of digits a and (2) a sign s. Also, for convenience the size n of the sub-array of significant digits is maintained. Here is a very simple sample program which multiplies its two arguments.

Fields:
public int a[]
public int n
public int sign

Constructors:
public MPInt(String s) throws NumberFormatException
    Constructs an integer from a string of decimal digits.
public MPInt(int n)
    Constructs a multi-precision integer from a standard Java int.
public MPInt(int a[])
    Constructs a non-negative multi-precision integer from an array of digits.
public MPInt(int a[], int s)
    Allows a sign as well.

Methods:
public MPInt plus(MPInt y)
    This is one of the usual arithmetic operations. They all can be combined as the ordinary symbolic operations. They associate to the left. Thus a.plus(b).times(c) is (a+b)*c, while a.plus(b.times(c)) is (a+(b*c)).
public MPInt minus(MPInt y)
public MPInt times(MPInt y)
public MPInt dividedBy(MPInt y) throws ArithmeticException
public MPInt modulo(MPInt y) throws ArithmeticException
public String toString()
    This function is called implicitly when you print a variable of type MPInt. For example if a is an MPInt then System.out.println(a) will print out this string.
    Overrides: toString in class Object
public static MPInt factorial(int n) throws ArithmeticException
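Based only on the constructors and operations listed above, a usage sketch might look like the following (the wrapper class and the sample values are illustrative, and MPInt itself is assumed to be on the classpath):

```java
// Illustrative use of the MPInt operations documented on this page.
public class MPIntDemo {
    public static void main(String[] args) {
        MPInt a = new MPInt("123456789012345678901234567890");
        MPInt b = new MPInt(42);

        // Operations associate to the left:
        MPInt x = a.plus(b).times(b);   // (a + b) * b
        MPInt y = a.plus(b.times(b));   // a + (b * b)

        System.out.println(x);          // toString() is called implicitly
        System.out.println(y);
        System.out.println(MPInt.factorial(20));
    }
}
```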
Some Fundamental Algebraic Tools for the Semantics of Computation
Part 3. Indexed Categories
R Burstall, J Goguen and A Tarlecki

Abstract: This paper presents indexed categories, which model uniformly defined families of categories, and suggests that they are a useful tool for the working computer scientist. An indexed category gives rise to a single flattened category as a disjoint union of its component categories plus some additional morphisms. Similarly, an indexed functor (which is a uniform family of functors between the component categories) induces a flattened functor between the corresponding flattened categories. Under certain assumptions, flattened categories are (co)complete if all their components are, and flattened functors have left adjoints if all their components do. Several examples are given.
Erik's Chemistry: Chapter 7: Electronic Structure of Atoms

A. Quantum Theory

Electrons in atoms can have only certain discrete energies, referred to as energy states or energy levels. Normally, the electron is in the state of lowest energy, called the ground state (n=1). By absorbing a certain definite amount of energy, the electron can move to a higher level, called an excited state (n=2, 3, ...). When electrons return to lower energy levels, energy may be given off as light. The difference in energy between the levels can be deduced from the wavelength or frequency of the light.

Postulates of the Quantum Theory
1. Atoms and molecules can only exist in certain states, characterized by definite amounts of energy. When an atom or molecule changes its state, it absorbs or emits an amount of energy just sufficient to bring it to another state.
2. When atoms or molecules absorb or emit light in moving from one energy state to another, the wavelength (λ) of the light is related to the energy change: ΔE = E_final − E_initial = E_hi − E_lo (absorption) or E_lo − E_hi (emission), and |ΔE| = hc/λ, where hc = 1.196 x 10^5 kJ·nm/mol.
3. The allowed energy states of atoms and molecules can be described by sets of numbers called quantum numbers.

B. Bohr Model

Bohr postulated that an electron moves about the nucleus in circular orbits of fixed radius. By absorbing energy, it moves to a higher orbit of larger radius; energy is given off as light (photons) when the electron returns. The fixed radii were based on spectroscopy: Lyman Series (ultraviolet), Balmer Series (visible light), Paschen Series (infrared).
● Electrons returning to the ground state (n=1) produce the Lyman Series, ultraviolet.
● Electrons returning to the excited state (n=2) produce the Balmer Series, visible light.
● Electrons returning to the excited state (n=3) produce the Paschen Series, infrared.

Basic equation development:
(1) mvr = nh/2π, where mvr = angular momentum, m = mass, v = velocity, r = radius of the electron's orbit.
From experiment the electron's energy was restricted in the following way:
(2) E_n = B/n^2, where n is an integer (from spectroscopy) and B = -1312 kJ/mol.
Equation (2) allowed Bohr to find the energy of the photon; plugging that energy into Einstein's equation (ΔE = hc/λ) gives the wavelength of the light.
Note 1: Bohr is now capable of finding the wavelength of light theoretically; his calculations match experimental work with spectroscopy (for hydrogen).
Note 2: Using Bohr's equations, the ionization energy could also be found. (Ionization energy is the energy required to remove an electron from a gaseous atom.)

C. Wave Mechanical Model (Quantum Mechanical Atom)

By the mid-1920's it had become apparent that the Bohr model could not be made to work for atoms other than hydrogen. A new approach was formed by three physicists: Werner Heisenberg, Louis de Broglie, and Erwin Schrodinger.
1. Louis de Broglie: proposed that matter, like light, has wave properties, with wavelength λ = h/mv.
2. Werner Heisenberg: Heisenberg uncertainty principle: "There is a fundamental limitation to just how precisely we can know both the position and the momentum of a particle at a given time." We can't determine the position and velocity of an electron at the same time, so Bohr's equation dealing with knowing the electron's position can't be used: Δ(mv)·Δx ≥ h/4π.
3. Erwin Schrodinger: developed an equation based on the probability of an electron's given position in space at any given time. This study of the electron's probable location has come to be called quantum mechanics.

D. Quantum Numbers
Schrodinger's equation is based on the four quantum numbers: n, l, m[l], m[s]
n = principal energy level (period)
l = sublevel (orbital type: s, p, d, f)
m[l] = orientation of the orbitals (angular momentum)
m[s] = spin

1. Principal Energy Level
n = 1, 2, 3, etc. The value of n is the main factor (but not the only one) that determines the energy of an electron and its distance from the nucleus. Maximum capacity for an energy level = 2n^2.

2. Sublevels
l = 0, 1, 2, ..., (n-1)
n = 1: l = 0 (one sublevel)
n = 2: l = 0, 1 (two sublevels)
n = 3: l = 0, 1, 2 (three sublevels)
n = 4: l = 0, 1, 2, 3 (four sublevels)
Electrons for which l = 0 are called s (stands for sharp); spherical.
l = 1 are called p (stands for principal); perpendicular.
l = 2 are called d (stands for diffuse).
l = 3 are called f (stands for fundamental).
The letters come from the atomic spectrum series from the 20th century.

3. Orbitals
Each sublevel contains one or more orbitals. m[l] describes the orientation of the electron cloud. For any value of l, m[l] may have any integral value between -l and +l, e.g. l = 2: m[l] = -2, -1, 0, 1, 2 (5 orbitals). For any l there are 2l + 1 orbitals in that sublevel.

4. Spin
m[s] = the spin of an electron. It can have one of 2 values, +1/2 or -1/2. Electrons that have the same value of m[s] are said to have parallel spins. Electrons that have different m[s] values are said to have opposed spins. For 2 electrons to exist in the same orbital, they must have opposed spins.

Pauli Exclusion Principle: No two electrons in the same atom can have the same set of quantum numbers.

Procedure for placing electrons in an atom:
Aufbau Principle: Electrons are added to sublevels in the order of increasing energy. Generally, each sublevel is filled before beginning the next.
Hund's Rule: When filling orbitals of equal energy (degenerate orbitals), the order is such that as many electrons as possible remain unpaired.

Any comments will be appreciated. Please e-mail me at eepp@altavista.net
URL: http://members.tripod.com/~EppE/elstruct.htm
This page was made by Erik Epp.
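As a quick worked example of the Bohr relations in section B (taking B = -1312 in kJ/mol and hc = 1.196 x 10^5 in kJ·nm/mol, the units assumed above), the n = 3 → n = 2 transition of hydrogen gives the red line of the Balmer Series:

```latex
\[
E_3 = \frac{-1312}{3^2} = -145.8\ \text{kJ/mol}, \qquad
E_2 = \frac{-1312}{2^2} = -328.0\ \text{kJ/mol}
\]
\[
\Delta E = E_3 - E_2 = 182.2\ \text{kJ/mol}, \qquad
\lambda = \frac{hc}{\Delta E}
        = \frac{1.196\times 10^{5}\ \text{kJ}\cdot\text{nm/mol}}{182.2\ \text{kJ/mol}}
        \approx 656\ \text{nm}
\]
```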