content
stringlengths
86
994k
meta
stringlengths
288
619
Fundamental Theorem of Arithmetic Date: 07/08/97 at 18:44:42 From: bill Subject: Fundamental Theorem of Arithmetic What is the big deal? I have excelled in math for many years, all the while ignorant of what I recently learned to be the "Fundamental Theorem of Arithmetic". Date: 07/13/97 at 02:06:48 From: Doctor Chita Subject: Re: Fundamental Theorem of Arithmetic Dear Bill, It's a philosophical thing, like enjoying a Mozart symphony without knowing anything about composition, or a poem without knowing anything about iambic pentameter, or art without knowing anything about Romanticism. Sure, you can combine numbers without quoting the fundamental theorem of arithmetic, solve equations without quoting the fundamental theorem of algebra, differentiate and integrate functions without quoting the fundamental theorem of calculus. But isn't learning all about the search for truth, beauty, and consistency? The fundamental theorem of arithmetic is at the center of number theory, and simply, but elegantly, says that all composite numbers are products of smaller prime numbers, unique except for order. For example, 12 = 3*2*2, where 2 and 3 are prime numbers. The prime numbers, themselves, are unique, starting with 2. (1 is not considered a prime, since prime numbers have exactly two divisors; themselves and 1. The number 1 has only 1 divisor: itself.) Like atoms in chemistry, prime and composite numbers are the building blocks of arithmetic. For a nice, non-technical discussion of number theory and its intriguing mysteries (for example, is there a largest prime number?), check out Ivars Peterson's book, _The Mathematical For me, the fundamental theorem of arithmetic is one of the few things I can depend on. I know I sleep better at night knowing it. -Doctor Chita, The Math Forum Check out our web site! http://mathforum.org/dr.math/ Date: 07/13/97 at 12:25:40 From: Anonymous Subject: Re: Fundamental Theorem of Arithmetic Dr. Chita, Your answer to my inquiry about the fundamental theorem of arithmetic, although elegant, just didn't do it for me. I understood what the theorem states, and perhaps now you have given me an appreciation for its simplicity, but I am looking for something more. What would you say to a student who asks, "Why did we just spend 20 minutes stating/ writing down/discussing this theorem? So what if every number can be expressed as a product of primes?" I would like to be able to explain/show the student where this theorem fits into the big picture: what it leads to, what important concepts use it as a basis, why we would still be in the 18th century without Thanks for your original response. I would also appreciate any further insight which might calm my frustrated engineer brain. Bill Sanford Date: 07/14/97 at 23:11:17 From: Doctor Chita Subject: Re: Fundamental Theorem of Arithmetic Dear Bill, Now that you've centered the original question in a context, I can more easily appreciate your frustration. I know how students have a way of asking questions that can deflate many flights of fancy. I am not sure that students understand the need that mathematicians have for defining their world as one that starts from a few basic premises. For example, in Euclidean plane geometry, there are just a few defined terms (e.g., point, line, plane, space) and a minimum number of postulates (axioms). For example, "A straight line can be drawn from any point to any point." and "All right angles are equal to one another." From this set of geometric building blocks, all else is derived in the form of theorems. In the case of number theory, the subject is of course numbers - the natural numbers, that is: {1, 2, 3, . . . }. Number theorists want to know as much about them as possible. The specialness of the fundamental theorem lies in the fact that there there is an infinite set of prime numbers and that unique combinations of them generate an infinite set of composite numbers. The fact that there is no largest prime suggests that there is no largest composite number. Again, this is a sort of metaphysical observation. As for uses; modern cryptography relies on the fundamental theorem of arithmetic. By combining primes and forming very large composite numbers, code makers can create a key that makes a cipher virtually unbreakable. So, knowing how to construct numbers has a very practial use in our world today. If you will forgive another suggestion, you and your students might want to read a little book called _Cryptology_ by Albrecht Beutelspacher, published by the MAA. And of course, there's the book _Enigma_ about Turing (the father of artificial intelligence) and how he cracked the German's code during WW II. Finally, arguments about the beauty of mathematics may fall on deaf ears when the audience consists of kids who have somehow picked up the idea that mathematics is all about solving arithmetic problems. With such a narrow view, they cannot yet appreciate that mathematics is a way of thinking, made up of many kinds of objects (numbers, equations, figures, etc.), and each conforming to a logical flow of postulates, definitions, and theorems. Encourage your students to read more about math and mathematicians. Have them research Euclid's proof that there is no largest prime. What is a Mersenne prime? What is the current "largest" prime, discovered by a computer just a short while ago? And what is the product of this prime and a smaller prime? Maybe if you could get them to think more about the big ideas of numbers, they will gradually appreciate what makes me sleep more soundly at night. Thanks for getting back. I don't know if I've helped you any further. I hope so. Good luck! -Doctor Chita, The Math Forum Check out our web site! http://mathforum.org/dr.math/
{"url":"http://mathforum.org/library/drmath/view/55798.html","timestamp":"2014-04-19T00:22:21Z","content_type":null,"content_length":"10794","record_id":"<urn:uuid:a908aa0e-0d65-48cd-9504-1af1c03bc6ab>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00305-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: Newsletter: Math Forum Internet News No. 2.38 (Sept. 22) Replies: 0 Newsletter: Math Forum Internet News No. 2.38 (Sept. 22) Posted: Sep 21, 1997 3:47 PM 22 September 1997 Vol.2, No.38 MEGAMATH | Magic Decoder Game | Math Quotations Server Mathematicians experiment and play with creative and imaginative ideas, many of them accessible to children. Other ideas (infinity is a good example) have already piqued many children's curiosity, but their profound mathematical importance is not widely known or understood. The MegaMath project brings unusual and important mathematical ideas to elementary school classrooms so that students and teachers can think about them together. The MegaMath site features: - The Most Colorful Math of All - Games on Graphs - Untangling the Mathematics of Knots - Algorithms and Ice Cream for All - Machines that Eat Your Words - Welcome to the Hotel Infinity - A Usual Day at Unusual School Ideas are provided for browsing MegaMath as - a teacher - a student - or a mathematician Crack the code of a secret message encrypted in a simple substitution cipher. You are supplied with a magic decoder device and some statistics to make the job easier: - Enter your guesses in the box corresponding to the cipher text letter. You do not have to give a value for every letter; you can leave boxes blank. - Hit the "Try It" button to see your substitutions above the cipher text in the message. - Look for patterns which suggest words to refine your From the Puzzle Ring site, by Jeremy T. Teitelbaum. "A Mathematician is a machine for turning coffee into theorems. - Paul Erdös "On two occasions I have been asked [by members of Parliament], 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question." - Charles Babbage "Do not worry about your difficulties in mathematics, I assure you that mine are greater." - Albert Einstein A collection of mathematical quotations culled from many sources. You may conduct a keyword search through the quotation database, download the whole collection at once (there are about 83 printed pages), or browse the quotations page by page. The quotations are organized in alphabetical order by the author's last name. CHECK OUT OUR WEB SITE: The Math Forum http://forum.swarthmore.edu/ Ask Dr. Math http://forum.swarthmore.edu/dr.math/ Problem of the Week http://forum.swarthmore.edu/geopow/ Internet Resources http://forum.swarthmore.edu/~steve/ Join the Math Forum http://forum.swarthmore.edu/join.forum.html SEND COMMENTS TO comments@forum.swarthmore.edu _o \o_ __| \ / |__ o _ o/ \o/ __|- __/ \__/o \o | o/ o/__/ /\ /| | \ \ / \ / \ /o\ / \ / \ / | / \ / \ The Math Forum ** 22 September 1997 You will find this newsletter, a FAQ, and subscription directions archived at
{"url":"http://mathforum.org/kb/thread.jspa?threadID=485924","timestamp":"2014-04-23T17:22:01Z","content_type":null,"content_length":"18696","record_id":"<urn:uuid:5ae5158a-267e-47c9-a2e6-54d8641ad18c>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00042-ip-10-147-4-33.ec2.internal.warc.gz"}
Deterministic system From Math Images A deterministic system is a system where an initial state completely determine the system's future states. Thus, there is no randomness in producing the future states. If a deterministic system is given some initial inputs, the model will produce the same states every time. A very simple example would be a position function: $x(t) = v*t + {x_0}\,$ where x(t) is the position at any given time, v is the velocity, t is time, and ${x_0}$ is the initial position. Given initial input values v and ${x_0}$, we can exactly predict the position of $x(t)$ at any time in the future or the past. Non-Deterministic System A non-deterministic system is a system where a single set of inputs can produce multiple outputs; randomness determines future states. If a non-deterministic system is given some initial inputs, the model will produce a different state for each run. Throwing a dice and recording the number it lands on is a non-deterministic system. If the dice is thrown, we will not be able to predict its outcome. If it has been thrown five times and landed on 6 every time, we will still not be able to determine the outcome of the next roll.
{"url":"http://mathforum.org/mathimages/index.php/Deterministic_system","timestamp":"2014-04-18T00:20:53Z","content_type":null,"content_length":"16895","record_id":"<urn:uuid:2e66e2b4-54ec-4fd3-9766-89f17c1bbbbc>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00533-ip-10-147-4-33.ec2.internal.warc.gz"}
Math for Dummies 21-book eBook Pack Math for Dummies 21-book eBook Pack Publisher: For Dummies | 21 PDF | 251MB 21 "for Dummies" eBooks on the topic of math, logic, statistics etc from basic math to Trigonometry to Calculus II. The only oddball in this collection is the Calculus for Dummies book because it is a 93MB scan as opposed to an official eBook. Algebra I Essentials for Dummies ISBN: 0470618349 With its use of multiple variables, functions, and formulas algebra can be confusing and overwhelming to learn and easy to forget. Perfect for students who need to review or reference critical concepts, Algebra I Essentials For Dummies provides content focused on key topics only, with discrete explanations of critical concepts taught in a typical Algebra I course, from functions and FOILs to quadratic and linear equations. This guide is also a perfect reference for parents who need to review critical algebra concepts as they help students with homework assignments, as well as for adult learners headed back into the classroom who just need a refresher of the core concepts. Algebra I for Dummies - 2nd Edition ISBN: 0470559640 Factor fearlessly, conquer the quadratic formula, and solve linear equations There's no doubt that algebra can be easy to some while extremely challenging to others. If you're vexed by variables, Algebra I For Dummies, 2nd Edition provides the plain-English, easy-to-follow guidance you need to get the right solution every time! Algebra II Essentials for Dummies ISBN: 0470618400 Passing grades in two years of algebra courses are required for high school graduation. Algebra II Essentials For Dummies covers key ideas from typical second-year Algebra coursework to help students get up to speed. Free of ramp-up material, Algebra II Essentials For Dummies sticks to the point, with content focused on key topics only. It provides discrete explanations of critical concepts taught in a typical Algebra II course, from polynomials, conics, and systems of equations to rational, exponential, and logarithmic functions. This guide is also a perfect reference for parents who need to review critical algebra concepts as they help students with homework assignments, as well as for adult learners headed back into the classroom who just need a refresher of the core concepts. Algebra II for Dummies ISBN: 0471775812 Besides being an important area of math for everyday use, algebra is a passport to studying subjects like calculus, trigonometry, number theory, and geometry, just to name a few. To understand algebra is to possess the power to grow your skills and knowledge so you can ace your courses and possibly pursue further study in math. Algebra II For Dummies is the fun and easy way to get a handle on this subject and solve even the trickiest algebra problems. This friendly guide shows you how to get up to speed on exponential functions, laws of logarithms, conic sections, matrices, and other advanced algebra concepts. In no time you’ll have the tools you need to: * Interpret quadratic functions * Find the roots of a polynomial * Reason with rational functions * Expose exponential and logarithmic functions * Cut up conic sections * Solve linear and non linear systems of equations * Equate inequalities * Simplifyy complex numbers * Make moves with matrices * Sort out sequences and sets This straightforward guide offers plenty of multiplication tricks that only math teachers know. It also profiles special types of numbers, making it easy for you to categorize them and solve any problems without breaking a sweat. When it comes to understanding and working out algebraic equations, Algebra II For Dummies is all you need to succeed! Basic Math & Pre-Algebra for Dummies ISBN: 0470135372 Get the skills you need to solve problems and equations and be ready for algebra class. Whether you're a student preparing to take algebra or a parent who wants to brush up on basic math, this fun, friendly guide has the tools you need to get in gear. From positive, negative, and whole numbers to fractions, decimals, and percents, you'll build necessary skills to tackle more advanced topics, such as imaginary numbers, variables, and algebraic equations. Basic Math & Pre-Algebra Workbook for Dummies ISBN: 0470288177 When you have the right math teacher, learning math can be painless and even fun! Let Basic Math and Pre-Algebra Workbook For Dummies teach you how to overcome your fear of math and approach the subject correctly and directly. A lot of the topics that probably inspired fear before will seem simple when you realize that you can solve math problems, from basic addition to algebraic equations. Lots of students feel they got lost somewhere between learning to count to ten and their first day in an algebra class, but help is here! Begin with basic topics like interpreting patterns, navigating the number line, rounding numbers, and estimating answers. You will learn and review the basics of addition, subtraction, multiplication, and division. Do remainders make you nervous? You’ll find an easy and painless way to understand long division. Discover how to apply the commutative, associative, and distributive properties, and finally understand basic geometry and algebra. Find out how to: * Properly use negative numbers, units, inequalities, exponents, square roots, and absolute value * Round numbers and estimate answers * Solve problems with fractions, decimals, and percentages * Navigate basic geometry * Complete algebraic expressions and equations * Understand statistics and sets * Uncover the mystery of FOILing * Answer sample questions and check your answers Complete with lists of ten alternative numeral and number systems, ten curious types of numbers, and ten geometric solids to cut and fold, Basic Math and Pre-Algebra Workbook For Dummies will demystify math and help you start solving problems in no time! Calculus Essentials for Dummies ISBN: 0470618353 Just the key concepts you need to score high in calculus From limits and differentiation to related rates and integration, this practical, friendly guide provides clear explanations of the core concepts you need to take your calculus skills to the next level. It's perfect for cramming, homework help, or review. Calculus for Dummies ISBN: 0764524981 The mere thought of having to take a required calculus course is enough to make legions of students break out in a cold sweat. Others who have no intention of ever studying the subject have this notion that calculus is impossibly difficult unless you happen to be a direct descendant of Einstein. Well, the good news is that you can master calculus. It's not nearly as tough as its mystique would lead you to think. Much of calculus is really just very advanced algebra, geometry, and trig. It builds upon and is a logical extension of those subjects. If you can do algebra, geometry, and trig, you can do calculus. Calculus For Dummies is intended for three groups of readers: * Students taking their first calculus course – If you're enrolled in a calculus course and you find your textbook less than crystal clear, this is the book for you. It covers the most important topics in the first year of calculus: differentiation, integration, and infinite series. * Students who need to brush up on their calculus to prepare for other studies – If you've had elementary calculus, but it's been a couple of years and you want to review the concepts to prepare for, say, some graduate program, Calculus For Dummies will give you a thorough, no-nonsense refresher course. * Adults of all ages who'd like a good introduction to the subject – Non-student readers will find the book's exposition clear and accessible. Calculus For Dummies takes calculus out of the ivory tower and brings it down to earth. This is a user-friendly math book. Whenever possible, the author explains the calculus concepts by showing you connections between the calculus ideas and easier ideas from algebra and geometry. Then, you'll see how the calculus concepts work in concrete examples. All explanations are in plain English, not math-speak. Calculus For Dummies covers the following topics and more: * Real-world examples of calculus * The two big ideas of calculus: differentiation and integration * Why calculus works * Pre-algebra and algebra review * Common functions and their graphs * Limits and continuity * Integration and approximating area * Sequences and series Don't buy the misconception. Sure calculus is difficult – but it's manageable, doable. You made it through algebra, geometry, and trigonometry. Well, calculus just picks up where they leave off – it's simply the next step in a logical progression. Calculus II for Dummies ISBN: 047022522X Calculus II is a prerequisite for many popular college majors, including pre-med, engineering, and physics. Calculus II For Dummies offers expert instruction, advice, and tips to help second semester calculus students get a handle on the subject and ace their exams. Calculus Workbook for Dummies ISBN: 076458782X From differentiation to integration - solve problems with ease Got a grasp on the terms and concepts you need to know, but get lost halfway through a problem or, worse yet, not know where to begin? Have no fear! This hands-on guide focuses on helping you solve the many types of calculus problems you encounter in a focused, step-by-step manner. With just enough refresher explanations before each set of problems, you'll sharpen your skills and improve your performance. You'll see how to work with limits, continuity, curve-sketching, natural logarithms, derivatives, integrals, infinite series, and more! 100s of Problems! * Step-by-step answer sets clearly identify where you went wrong (or right) with a problem * The inside scoop on calculus shortcuts and strategies * Know where to begin and how to solve the most common problems * Use calculus in practical applications with confidence Differential Equations Workbook for Dummies ISBN: 0470472014 Need to know how to solve differential equations? This easy-to-follow, hands-on workbook helps you master the basic concepts and work through the types of problems you'll encounter in your coursework. You get valuable exercises, problem-solving shortcuts, plenty of workspace, and step-by-step solutions to every equation. You'll also memorize the most-common types of differential equations, see how to avoid common mistakes, get tips and tricks for advanced problems, improve your exam scores, and much more! Intermediate Statistics for Dummies ISBN: 0470045205 Need to know how to build and test models based on data? Intermediate Statistics For Dummies gives you the knowledge to estimate, investigate, correlate, and congregate certain variables based on the information at hand. The techniques you’ll learn in this book are the same techniques used by professionals in medical and scientific fields. Linear Algebra for Dummies ISBN: 0470430907 Does linear algebra leave you feeling lost? No worries —this easy-to-follow guide explains the how and the why of solving linear algebra problems in plain English. From matrices to vector spaces to linear transformations, you'll understand the key concepts and see how they relate to everything from genetics to nutrition to spotted owl extinction. Logic for Dummies ISBN: 0471799416 Logic concepts are more mainstream than you may realize. There’s logic every place you look and in almost everything you do, from deciding which shirt to buy to asking your boss for a raise, and even to watching television, where themes of such shows as CSI and Numbers incorporate a variety of logistical studies. Logic For Dummies explains a vast array of logical concepts and processes in easy-to-understand language that make everything clear to you, whether you’re a college student of a student of life. LSAT Logic Games for Dummies ISBN: 0470525142 Improve your score on the Analytical Reasoning portion of the LSAT If you're like most test–takers, you find the infamous Analytical Reasoning or "Logic Games" section of the LSAT to be the most elusive and troublesome. Now there's help! LSAT Logic Games For Dummies takes the puzzlement out of the Analytical Reasoning section of the exam and shows you that it's not so problematic after all! Math Word Problems for Dummies ISBN: 0470146605 Everyone remembers story problems, nowadays called "word problems," from elementary school and middle school math. Solving word problems is the latest way to help students who struggle learning basic math skills, as well as to introduce more complicated math concepts. Math Word Problems For Dummies shows students and adult learners how to solve word problems with a method that works for any word problem at any level. Math-wary readers will use basic math to work through problems, focusing on elementary-level skills before moving on to algebra and geometry. Mary Jane Sterling (Peoria, IL), a teacher for more than 25 years, is the author of numerous For Dummies books, including Algebra For Dummies (0-7645-5325-9) and Trigonometry For Dummies (0-7645-6903-1). Pre-Algebra Essentials For Dummies ISBN: 0470618388 Just the critical concepts you need to score high in pre-algebra This practical, friendly guide focuses on critical concepts taught in a typical pre-algebra course, from fractions, decimals, and percents to standard formulas and simple variable equations. Pre-Algebra Essentials For Dummies is perfect for cramming, homework help, or as a reference for parents helping kids study for exams. Pre-Calculus Workbook for Dummies ISBN: 0470421312 Get the confidence and the math skills you need to get started with calculus! Are you preparing for calculus? This easy-to-follow, hands-on workbook helps you master basic pre-calculus concepts and practice the types of problems you'll encounter in your cour sework. You get valuable exercises, problem-solving shortcuts, plenty of workspace, and step-by-step solutions to every problem. You'll also memorize the most frequently used equations, see how to avoid common mistakes, understand tricky trig proofs, and much more. 100s of Problems! Detailed, fully worked-out solutions to problems The inside scoop on quadratic equations, graphing functions, polynomials, and more A wealth of tips and tricks for solving basic calculus problems Probability for Dummies ISBN: 0471751413 Packed with practical tips and techniques for solving probability problems Increase your chances of acing that probability exam -- or winning at the casino! Whether you're hitting the books for a probability or statistics course or hitting the tables at a casino, working out probabilities can be problematic. This book helps you even the odds. Using easy-to-understand explanations and examples, it demystifies probability -- and even offers savvy tips to boost your chances of gambling success! Discover how to * Conquer combinations and permutations * Understand probability models from binomial to exponential * Make good decisions using probability * Play the odds in poker, roulette, and other games Statistics Essentials for Dummies ISBN: 0470618396 Statistics Essentials For Dummies not only provides students enrolled in Statistics I with an excellent high-level overview of key concepts, but it also serves as a reference or refresher for students in upper-level statistics courses. Free of review and ramp-up material, Statistics Essentials For Dummies sticks to the point, with content focused on key course topics only. It provides discrete explanations of essential concepts taught in a typical first semester college-level statistics course, from odds and error margins to confidence intervals and conclusions. This guide is also a perfect reference for parents who need to review critical statistics concepts as they help high school students with homework assignments, as well as for adult learners headed back into the classroom who just need a refresher of the core concepts. Trigonometry Workbook for Dummies ISBN: 0764587818 From angles to functions to identities - solve trig equations with ease Got a grasp on the terms and concepts you need to know, but get lost halfway through a problem or worse yet, not know where to begin? No fear - this hands-on-guide focuses on helping you solve the many types of trigonometry equations you encounter in a focused, step-by-step manner. With just enough refresher explanations before each set of problems, you'll sharpen your skills and improve your performance. You'll see how to work with angles, circles, triangles, graphs, functions, the laws of sines and cosines, and more! 100s of Problems! * Step-by-step answer sets clearly identify where you went wrong (or right) with a problem * Get the inside scoop on graphing trig functions * Know where to begin and how to solve the most common equations * Use trig in practical applications with confidence to get Premium speed
{"url":"http://carnesdownloads.org/e-books/18793-math-for-dummies-21-book-ebook-pack.html","timestamp":"2014-04-17T01:12:34Z","content_type":null,"content_length":"36990","record_id":"<urn:uuid:643ad074-1f44-4152-a8dc-fc0a3533c34a>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00130-ip-10-147-4-33.ec2.internal.warc.gz"}
Sundae Times Lite Get a free trial now! Teachers, create your own FREE account at Mangahigh. Students will be able to save their scores, earn medals and access Prodigi. Create School Student will be able to earn Silver and Bronze Medals Achievement Goal Bronze Silver Gold Medals won Adding & subtracting up to 10 Beat the Target Rating in all Group A tables 7 8 9 Adding & subtracting up to 20 Beat the Target Rating in all Group B tables 7 8 9 Adding & subtracting up to 100 Beat the Target Rating in all Group C tables 7 8 9 Multiplying & dividing by up to 5 Beat the Target Rating in all Group D tables 7 8 9 Click and Drag Game Goals In this math game you race against two rival players to build the tallest, most incredible ice cream sundae possible! The game tests your mental math skills, challenging you with a range of addition, subtraction, multiplication and division problems. Each problem answered correctly earns you a fresh scoop of delicious ice cream, but mistakes lose you time – so think carefully, because those precious few seconds could cost you the race! How To Play On the main title screen, you can choose to play either Solo or Multiplayer: • Solo: In this mode you race against two computer rivals. Choose from one of 12 levels ranging from ‘Adding up to 10’ to ‘Multiplying & dividing by up to 5’. • Multiplayer: In this mode you race online against other students from around the world, tackling a random mix of problems from adding & subtracting up to 20 and multiplying & dividing up to 5. Once you’ve made your choices, prepare to race! Sundae bowls are shown at the bottom of the screen, with your bowl in the middle and those of your rivals to the left and the right. Mental math problems appear at the top of the screen. If you answer a problem correctly, you earn a scoop of ice cream and your sundae grows taller. If you answer a problem incorrectly, the scoop falls past your bowl and has to be cleaned up – during this time you cannot answer a new problem, giving your rivals the chance to pull ahead. When time runs out, the heights of the sundaes are calculated and then Gold, Silver or Bronze Stars are awarded according to final position, numbers of problems answered correctly, and accuracy. Game Controls Use the keyboard to enter your answers. If you prefer you can access an on-screen input by clicking the When you collect a power-up, activate it by clicking the icon or pressing SPACE. Attack power-ups automatically target your most threatening rival, leaving you free to concentrate on problem solving. Be sure to use power-ups before they disappear off-screen, or you’ll lose them forever! If a rival targets you with the Squirt-Squirt Cream or Frozen Desserts power-ups, rapidly click the mouse button or tap ENTER to eradicate their effects more quickly. After each race you are given a rating based on how well you played. Use the performance read-out to spot where your weak spots are so you can do better next time! There are 11 power-ups in total. Here’s the lowdown on what they do once activated (by either clicking the icon or pressing SPACE): Awards you with extra ice cream. There are three varieties of this tasty treat: single scoop, double scoop, and triple scoop. Squirts blazing hot spicy relish over one of your rivals’ sundaes, robbing them of a precious ice cream scoop. Awards you TWO ice cream scoops for every problem answered correctly. It only lasts for a short time, so make good use of it! Also, this power-up is lost if you enter an incorrect answer. Freezes rivals’ inputs and prevents them from entering answers. If you fall victim to its chilly grasp, rapidly click the mouse or tap ENTER to shatter the ice more quickly. Unleashes a ravenous sweet-toothed mutt who homes in on the tallest sundae and wolfs down a whole stack of ice cream scoops. Down, boy! Steals an ice cream scoop from one of your rivals and adds it to the top of your sundae. Covers one of your rivals’ inputs in fluffy white cream. If you get attacked by this sticky threat, rapidly click the mouse or tap ENTER to shake off the cream more quickly. Protects your sundae from any power-up attack. It only lasts for a short time, so make sure you activate it at the right moment. Releases a swarm of hungry insects who just love ice cream. As they sweep across the screen, they steal three ice cream scoops from each of your rivals. At the end of each race you are awarded a ‘performance rating’ out of 10. This rating is based on your final position (1st, 2nd or 3rd), the number of problems answered correctly, and your accuracy. If your rating is high enough you win Gold, Silver or Bronze Stars, which are then converted into points and added to your overall score. This table shows you what you need to achieve to earn each Star type and what they’re worth: Star Type Rating Needed Rating Requirements Score Position Correct Answers Accuracy Gold 9.00 or higher 1^st 40 or more 100% 50 points Silver 8.00 or higher 1^st or 2^nd 30 or more 95% or higher 20 points Bronze 7.00 or higher Any 20 or more 90% or higher 5 points In Solo mode, Stars are ‘one time only’ awards – i.e. if you race the same level multiple times, you only earn points the first time you get a Star. In addition, if you win a better Star than before, your score is boosted by the points difference only. You enter Solo mode and play the 'Adding up to 20' level for the very first time. At the end of the race your performance earns you a Bronze Star, so you receive 5 points towards your score. You race the 'Adding up to 20' level once more, and again you earn a Bronze Star. As you already won a Bronze Star for this level, your score is not increased. You race the 'Adding up to 20' level a third time and do much better; this time you earn a Gold Star! Although Gold Stars are worth 50 points you already won 5 points from your Bronze Star, so your score is increased by 45 points. In Multiplayer mode, you can win points each time you race – assuming your performance is good enough to earn a Star, of course! As your score increases, so does your Rank. There are 10 different Ranks; what’s the highest Rank you can reach? Click and Drag Improve Your Score Basic Strategies • Some players find it quicker to use the keypad rather than the keyboard – try both to see which feels more comfortable. • If a rival hits you with the Squirt-Squirt Cream or Frozen Desserts power-up, as soon as the power-up takes effect start clicking the mouse button or tapping ENTER rapidly. • If you have an ‘attack’ power-up (Chilli Sauce Splat, Frozen Desserts, Hungry Hound, Scooper Stealer, Squirt-Squirt Cream) and either of your rivals have Shields active, wait until they expire before unleashing it. • When you’re in the lead, you become a target for everyone! If you have a Shield power-up, wait to activate it until your rivals are just about to collect power-ups of their own – this ensures you get maximum protection from it. • In Solo mode, there is a maximum limit to how many points you can score. As there are 12 different levels, the best you can achieve is Gold Stars in each, i.e. 12 × 100 = 1,200 points. If you want to get a high Rank, then Multiplayer is the place to be – each race has the potential to earn you a new Gold Star! Math Strategies Adding small numbers together Start with the bigger number and then keep counting on with the other number. For example: if you were working out 7 + 4 you could start with 7 and then add on 4 by counting from 7: 7, 8, 9, 10, 11 So, 7 + 4 = 11 Adding tens and one together Start by adding the tens and then add the ones. For example: if you were working out 26 + 31 you could start with 20 + 30 and then work out 6 + 1. 20 + 30 = 50 and 6 + 1 = 7 Now add 50 and 7 together. So, 26 + 31 = 57 Subtracting small numbers You can do this by counting backwards. For example: if you were working out 12 − 4 you could start with 12 and then count 4 more backwards: 12, 11, 10, 9, 8 So, 12 − 4 = 8 Subtracting with tens and ones One way to do this is to count back up. For example: if you were working out 40 − 26 you could count up from 26 to make 40. Start with 26, add 4 to make 30. Starting from 30, add 10 to make 40. All together you added 14. So, 40 − 26 = 14 Swapping numbers around when you multiply If you see a times tables question that you find hard, you might find it easier to work out if you think of the two numbers the other way around. For example: 5 × 2 means "five groups of two" or "2 + 2 + 2 + 2 + 2". But if you work out 2 × 5 you get the same answer. 2 × 5 means “two groups of five” or “5 + 5”. You might find it easier to work out 5 + 5 = 10 Multiplying by 0 When you multiply a number by ZERO, the answer is always ZERO. For example: if you were working out 0 × 3 or 5 × 0 you get the answer ZERO. 0 × 3 = 0 and 5 × 0 = 0 Multiplying by 1 When you multiply a number by 1, this is really just saying “What is one group of the number?” For example: if you were working out 1 × 3 or 3 × 1 this means “What is one group of three?” The answer is just 3. 1 × 3 = 3 and 3 × 1 = 3 Multiplying by 2 When you multiply a number by 2, it is the same as doubling the number. Sometimes it’s easier to think of doubling a number than timesing it by 2. For example: if you were working out 4 × 2 or 2 × 4 this would be the same as doubling 4. When you multiply a number by 2, you might find it easier to work out the answer by adding. For example: work out 2 × 3. This can be worked out by adding 3 to 3. So, 3 + 3 = 6 Multiplying by 3 When you multiply a number by 3, you might find it easier to work out the answer by adding. For example: work out 3 × 5. This can be worked out as 5 + 5 + 5 = 15 Multiplying by 4 When you multiply a number by 4, it is the same as doubling the number and then doubling again. Sometimes it’s easier to think of doubling a number than timesing it by 4. For example: if you were working out 3 × 4 this would be the same as doubling 3 and then doubling again. So double 3 to get 6. Then double 6 to get 12. So, 3 × 4 = 12 Multiplying by 5 When you multiply a number by 5 you might find it helpful to count quickly in fives. For example: to work out 3 × 5 you can count in fives to get: 5, 10, 15. So the answer is 15. Dividing by 1 When you divide a number by 1, you get just the same number. For example: if you were working out 7 ÷ 1 you get just the answer 7. So, 7 ÷ 1 = 7 Dividing by 2 When you divide a number by 2, it is the same as halving the number. Sometimes it’s easier to think of halving a number than dividing it by 2. For example: if you were working out 10 ÷ 2 this would be the same as halving 10. So, 10 ÷ 2 = 5 and half of 10 is 5 Dividing by 3 When you divide a number by 3, it means “How many 3s go into the number?” You can work out the answer by counting in 3s. For example: if you were working out 12 ÷ 3 you could count in 3s: 3, 6, 9, 12 This means that the answer is 4. So, 12 ÷ 3 = 4 Dividing by 4 When you divide a number by 4, it means “How many 4s go into the number?” You can work out the answer by counting in 4s. For example: if you were working out 12 ÷ 4 you could count in 4s: 4, 8, 12 This means that the answer is 3. So, 12 ÷ 4 = 3 Dividing by 5 When you divide a number by 5, it means “How many 5s go into the number?” You can work out the answer by counting in 5s. For example: if you were working out 20 ÷ 5 you could count in 5s: 5, 10, 15, 20 This means that the answer is 4. So, 20 ÷ 5 = 4 Click and Drag More Games Math Games Try these short versions of our free math games. Your students can play full length versions for free when you create a school account and issue them with logins.
{"url":"http://www.mangahigh.com/en-us/games/sundaetimeslite","timestamp":"2014-04-20T05:43:44Z","content_type":null,"content_length":"87594","record_id":"<urn:uuid:faa302b7-54d8-4dcb-b2d5-5697882910ac>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00623-ip-10-147-4-33.ec2.internal.warc.gz"}
Rational Functions: Tricks of the trade Zero Quotient Rule. The quotient a/b equals 0 if two conditions are met: (1) a=0 and (2) b≠0. This zero quotient rule applies as well to quotients of any two functions: the quotient p(x)/q(x) equals 0 if two conditions are met: (1) p(x)=0 and (2) q(x)≠0. A typical application involves finding values of x that meet these two conditions. Rational Functions Not Expressible as a Polynomial. Does a rational expression like (x–1)^2/(x–1) reduce, through cancellation, to the polynomial x–1? The answer is no! It reduces to the term x–1 only with the extra condition: x≠1. The cancellation is called "improper" if you exclude the condition x≠1. Because of this extra condition, it does not qualify to be a "full-fledged" polynomial. True, it is "almost" a polynomial, but that is not the same as "being" a polynomial. In fact, unlike the polynomial function y=x–1, the graph of y=(x–1)^2/(x–1) has a hole in it, to signify that the function is not defined at x=1. Here is the graph with the hole in it. Continuity and "Smoothness". Rational functions are continuous at every point in their domain. The domain excludes all denominator zeros (values of x that make the denominator equal to zero). Denominator zeros give rise to breaks in the curve, either holes or vertical asymptotes. As for smoothness, rational functions never have corners. Vertical Asymptotes and Holes. Whenever the denominator is zero at a particular value, say x=c, there is either a hole or vertical asymptote at x=c. If (x–c) is a common factor that can be eliminated from the denominator by improper cancellation, it gives rise to a hole; otherwise, there is a vertical asymptote at x=c. Either way, the function is not defined at x=c. It is for this reason that the curve can never intersect a vertical asymptote (it has no y-value, which is required to have a point of intersection). Maximum Number of x-Intercepts. The numerator of a rational function is a polynomial and x-intercepts can only occur at zeros of the numerator (values of x that make the numerator equal to zero). So, as in the case of polynomial functions, the maximum number of x-intercepts equals the degree of the polynomial in the numerator. Rational Function End Behaviors. Every rational function can be expressed as the sum of a polynomial quotient plus a fractional term in which the numerator has degree less than that of the denominator. For example, (x^2+1)/(x–1) = (x+1) + 2/(x–1) Here, the polynomial quotient is (x+1) and in the fractional term, the numerator, 2, has degree 0, which is less than that of the denominator, x–1, which has degree 1. The end behaviors of a rational function follow those of its polynomial quotient, so in this example the end behaviors follow those of y=x+1: down on the left and up on the right. Also, we say that the rational function is asymptotic (in its end behavior) to the polynomial y=x+1. What are Asymptotes? Asymptotes are dashed lines or dashed polynomial curves that are displayed with the graph of rational and other nonpolynomial functions to help clarify certain types of limiting behaviors. Strictly speaking, asymptotes (whether vertical, horizontal, oblique, or some other polynomial curve) are not part of the function and it is for this reason that they are traditionally displayed as dashed lines, to distinguish them from the solid lines that represent the function. This manner of displaying asymptotes is standard for mathematicians and mathematics textbooks. Less sophisticated graphing calculators display the asymptotes either as a solid line or as no line at all, either of which can cause confusion with standard notation. On WebGraphing.com, asymptotes are always displayed as dashed. Straight Line Asymptotes. A rational function may, but need not, have a vertical asymptote. Further, a rational function may be asymptotic to a nonvertical straight line in two ways. (1) If the degree of the numerator polynomial is less than or equal to the degree of the denominator polynomial, the rational function has a horizontal asymptote. (2) If the degree of the numerator polynomial exceeds by 1 the degree of the denominator polynomial, the rational function has a slant or oblique asymptote. These "rules" are quick ways to determine the existence of horizontal or slant asymptotes. To determine an asymptote explicitly, you need to compute the polynomial quotient. Below is an example of a rational function with a horizontal asymptote. Note that the curve intersects the horizontal asymptote at (0,0). While it is impossible for a rational function to ever intersect any of its vertical asymptotes, this example illustrates how it is possible for a rational function to intersect its end behavior polynomial asymptote. Graphing an Elementary Rational Function Given in Factored Form: Example: y=(x–1)/(x+2). First, plot the y-intercept (set x=0 and solve for y) and plot the x-intercept (set y=0 and solve for x). The y-intercept is y=(0–1)/(0+2)=–(1/2). The x-intercept(s) is (are) determined by solving for x in the quotient: (x–1)/(x+2)=0. According to the zero quotient rule, the quotient is zero only when the numerator is zero and the denominator is not zero. Here, setting the numerator equal to 0, x–1=0, and solving for x we get x=1. Note that the denominator is not zero when x=1, so the zero quotient rule is satisfied and there is one x-intercept at x=1. To determine if there are any vertical asymptotes, set the denominator equal to zero and solve for x: x+2=0. Consequently, x=–2 is a vertical asymptote. Lastly, if you divide the numerator by the denominator, the polynomial quotient is 1 and y=1 represents the polynomial asymptotic end behavior. In this case, y=1 is a horizontal asymptote. Now, you can plot the x- and y-intercepts together with the vertical and horizontal asymptotes. For rational functions, the x-intercepts and vertical asymptotes are the only places where the y-value may, but need not, change sign. So, with one x-intercept at x=1 and one vertical asymptote at x= –2, the x-axis is split into three open intervals, (–∞,–2), (–2,1), and (1,∞), on each of which the function is either always positive (the graph is in the upper half-plane) or always negative (the graph is in the lower half-plane). Selecting convenient "test values" on each interval, say x=–3, x=0, and x=2, and substituting into the formula for y, we construct a table to determine the sign of y on each interval. The sign tells us whether the curve lies above or below the x-axis on each interval. │Test value │y=(x–1)/(x+2) │Sign of y│ │ –3 │ 4 │ + │ │ 0 │ –1/2 │ – │ │ 2 │ 1/4 │ + │ So far, there are no points plotted to the left of x=–2 but we can plot the test values from the table to remedy this and gain further insight. Also, using the information about the sign enables us to begin to sketch the graph nearby the plotted points: To the left of x=–2, we can complete the graph by extending the curve so it is asymptotic to the vertical asymptote, x=–2, and the horizontal asymptote y=2. Note that on this half-plane, since the curve cannot cross the x-axis to the left of x=–2 (otherwise, the crossing would be another zero), there is only one way to approach the vertical asymptote: up (downward would mean crossing the x -axis). To the right of x=–2, we can complete the graph by extending the curve so it is asymptotic to the vertical asymptote, x=–2, and the horizontal asymptote y=2. Note that on this half-plane, since the curve cannot cross the x-axis again, there is only one way to approach the vertical asymptote: down (upward from (0,–1/2) would mean crossing the x-axis). Question: Does the curve intersect the horizontal asymptote y=1? Answer: Suppose there is a value of x for which y=1. "If" there truly is one, we can find it by solving the equation Multiplying both sides by x+2 we get Subtracting x from both sides, we get This is impossible. That is, the equation cannot be solved for x; so the curve cannot intersect the horizontal asymptote.
{"url":"http://webgraphing.com/rationaltricksoftrade.jsp","timestamp":"2014-04-20T10:11:41Z","content_type":null,"content_length":"42362","record_id":"<urn:uuid:344cf108-6464-4e09-b382-1fadd90d88bd>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00267-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts by Total # Posts: 216 The perimeter of a rectangle is to be no greater than 300 in., and the length must be 125 in. Find the maximum width of the rectangle. biology: the study of life the sad thing is, if you're looking at this page its cuz u dont know the answers :P fml all the smart kids ain't sharin.. ! The sum of two numbers is 38. Their difference is 12. What are the two numbers? AP Biology I'm working on a lab to show that light reactions must occur prior to dark reactions in photosynthesis. so far, i thought of the experiment involving two plants, one exposed to sunlight, the other is kept in a box. both receive the same amount of water soil, etc and both a... 3 common challenges are that: 1. we use alot of them 2. not alot of people know how to use them and they waste them 3. We don't have enough of them to last Does dat help? Find the lateral and surface area of the right rectangluar prism whose length is 7 cm, width is 5 cm and height is 1.5 cm. I did 2(7 x 1.5+ 1.5 x 5) and I got 36 as my lateral area. Then I add 36+ 2 (7 x5) and I got 106 as my surface area. Is this right? Peter is reading a 103 page book. He has read three pages more than one fourth of the number of pages he hasn't yet read. How many pages he has bot read? Estimate how many days it will take Peter to finish the book if he reads about 8 pages per day. I'm confused. I nee... intro of my essay say something like: Many nuclear power plants are thought to be bad in many ways, but they help americans fulfill daily life. So, it would be intelligent to keep Nuclear Power Plants around, yet they effect human health. (give some examples what they do to health and say even ... ok so 5X=X-12 -X -X MINUS X FROM BOTH SIDES. THAT GIVES YOU 4X=-12 THEN DIVIDE BOTH SIDES BY FOUR. THEN THAT GIVES YOU X=-3 Is this an example of the transitive property: G+G+G=D G^2=D AP bio Explain why adaptations of organisms to interspecific interactions do not necessarily represent coevolution. What would a researcher have to demonstrate about an interaction between two species to make a convincing case for coevolution? i dont understand what this is asking...... Triple-Beam Balance Human biology Coursework On the net it has been suggested and my experiment it has been proven that Fructose can break down Vitamin C or atleast stop it from being effective in some way. I can not find ant theories on the net to how this breakdown happpens. As someone who has always been rubbish at ch... How does africa's mineral wealth effect the economy? The mineral wealth is the only thing that attracts investment. Without investment, the economy is at a standstill. Political and social instability is making that investment very risky. acids and alkalis Anyone know what household cleaning products are acids and alkilis? doing chemistry in science, really difficult!!! PLEASE HELP!! Pages: <<Prev | 1 | 2 | 3
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=Carly&page=3","timestamp":"2014-04-16T04:32:30Z","content_type":null,"content_length":"9872","record_id":"<urn:uuid:a0d63fd2-0dd9-4fdf-a2df-a1664affa1c1>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00647-ip-10-147-4-33.ec2.internal.warc.gz"}
Proofs: What to do when you're out of time Ever had to write a proof on a test and you just have no clue what to do? Try one of these: Proof by intimidation: I think it's true. I DARE you to prove me wrong. Therefore it is true. Proof by future knowledge: After this test, I will go home and work on it till I prove it's true. Since it will be proven true in the future, it must be true now as well. Proof by I don't want to fail: If I write any sort of a disproof, you will mark it wrong. Therefore, it must be true. Proof by I know the history of math: I have a truly marvelous proof of this proposition which this margin is too narrow to contain. Or the original latin if you got a good memory: Cuius rei demonstrationem mirabilem sane detexi. Hanc marginis exiguitas non caperet. Feel free to submit your own. Be creative, but posting something you heard from somewhere else is fine too. "In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=34586","timestamp":"2014-04-18T21:56:25Z","content_type":null,"content_length":"21611","record_id":"<urn:uuid:e03d27e2-1ec5-4309-a073-04fa1eb1897f>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00429-ip-10-147-4-33.ec2.internal.warc.gz"}
$I’ll take my NLS with weights, please…$ Today I want to advocate weighted nonlinear regression. Why so? Minimum-variance estimation of the adjustable parameters in linear and non-linear least squares requires that the data be weighted inversely as their variances . Only then is the BLUE (Best Linear Unbiased Estimator) for linear regression and nonlinear regression with small errors (http://en.wikipedia.org/wiki/ Weighted_least_squares#Weighted_least_squares), an important fact Introducing ‘propagate’ $Introducing ‘propagate’$ With this post, I want to introduce the new ‘propagate’ package on CRAN. It has one single purpose: propagation of uncertainties (“error propagation”). There is already one package on CRAN available for this task, named ‘metRology’ (http://cran.r-project.org/web/packages/metRology/index.html). ‘propagate’ has some additional functionality that some may find useful. The most important functions are: * propagate: A predictNLS (Part 2, Taylor approximation): confidence intervals for ‘nls’ models $predictNLS (Part 2, Taylor approximation): confidence intervals for ‘nls’ models$ Initial Remark: Reload this page if formulas don’t display well! As promised, here is the second part on how to obtain confidence intervals for fitted values obtained from nonlinear regression via nls or nlsLM (package ‘minpack.lm’). I covered a Monte Carlo approach in http://rmazing.wordpress.com/2013/08/14/predictnls-part-1-monte-carlo-simulation-confidence-intervals-for-nls-models/, but here we will take a different approach: First- and second-order predictNLS (Part 1, Monte Carlo simulation): confidence intervals for ‘nls’ models $predictNLS (Part 1, Monte Carlo simulation): confidence intervals for ‘nls’ models$ Those that do a lot of nonlinear fitting with the nls function may have noticed that predict.nls does not have a way to calculate a confidence interval for the fitted value. Using confint you can obtain the error of the fit parameters, but how about the error in fitted values? ?predict.nls says: “At present se.fit Trivial, but useful: sequences with defined mean/s.d. $Trivial, but useful: sequences with defined mean/s.d.$ O.k., the following post may be (mathematically) trivial, but could be somewhat useful for people that do simulations/testing of statistical methods. Let’s say we want to test the dependence of p-values derived from a t-test to a) the ratio of means between two groups, b) the standard deviation or c) the sample size(s) of the wapply: A faster (but less functional) ‘rollapply’ for vector setups For some cryptic reason I needed a function that calculates function values on sliding windows of a vector. Googling around soon brought me to ‘rollapply’, which when I tested it seems to be a very versatile function. However, I wanted to code my own version just for vector purposes in the hope that it may bigcor: Large correlation matrices in R $bigcor: Large correlation matrices in R$ As I am working with large gene expression matrices (microarray data) in my job, it is sometimes important to look at the correlation in gene expression of different genes. It has been shown that by calculating the Pearson correlation between genes, one can identify (by high values, i.e. > 0.9) genes that share a common The magic empty bracket $The magic empty bracket$ I have been working with R for some time now, but once in a while, basic functions catch my eye that I was not aware of… For some project I wanted to transform a correlation matrix into a covariance matrix. Now, since cor2cov does not exist, I thought about “reversing” the cov2cor function (stats:::cov2cor). Inside Peer-reviewed R packages? Dear R-Users, a question: I am the author of the ‘qpcR’ package. Within this, there is a function ‘propagate’ that does error propagation based on Monte Carlo Simulation, permutation-based confidence intervals and Taylor expansion. For the latter I recently implemented a second-order Taylor expansion term that can correct for nonlinearity. The formulas are quite complex A weighting function for ‘nls’ / ‘nlsLM’ $A weighting function for ‘nls’ / ‘nlsLM’$ Standard nonlinear regression assumes homoscedastic data, that is, all response values are distributed normally. In case of heteroscedastic data (i.e. when the variance is dependent on the magnitude of the data), weighting the fit is essential. In nls (or nlsLM of the minpack.lm package), weighting can be conducted by two different methods: 1) by supplying
{"url":"http://www.r-bloggers.com/author/anspiess/","timestamp":"2014-04-21T04:43:52Z","content_type":null,"content_length":"40266","record_id":"<urn:uuid:8e7c81f6-1512-474a-a622-f485936e9412>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00637-ip-10-147-4-33.ec2.internal.warc.gz"}
so the positioning of the objects does not determine the magnitude of the upthrust ? cos different density? Yes positioning doesn't affect upthrust, as long as the submerged volume is the same. The positioning only affects the fluid resistance/drag force as the object moves through the water, which isn't relevant here. Next qn: a lot of people say that under different fluids, the upthrust will change. But wad about for the same object, at first u put it in fresh water of density 1, and then u shift it to sea water of density 2, the upthrust will change? Yes. Assuming that the volume of the displaced fluid is the same, if the density doubles, the weight of the displaced fluid also doubles and therefore the upthrust doubles as well. Upthrust is proportional to the density of the fluid.
{"url":"http://www.physicsforums.com/showthread.php?t=421203","timestamp":"2014-04-19T02:20:57Z","content_type":null,"content_length":"49885","record_id":"<urn:uuid:01a29c4d-6018-4b1e-a6c6-457dd11ce75e>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00158-ip-10-147-4-33.ec2.internal.warc.gz"}
Instructor Class Description Time Schedule: Milagros C Loreto B CUSP 122 Bothell Campus Introduction to Elementary Functions Covers college algebra with an emphasis on polynomial, rational, logarithmic, exponential, and trigonometric functions. Prerequisite: either a minimum grade of 2.5 in B CUSP 121 or a score of 147-150 on the MPT-GSA assessment test. Offered: AWSp. Class description The emphasis of the course is to learn and improve the algebraic skills necessary to go on taking more math courses. The course covers fractions, exponents, radicals, factorization, linear functions, quadratic functions, some discussion of polynomials and rational functions, and an introduction to exponential and logarithmic functions. Graphing of various functions is also covered. Student learning goals Improve algebraic skills. Learn functional notation, algebraic manipulation of functions, and composition of functions. Recognize and be comfortable using polynomial, exponential, logarithmic and rational functions. Ability to graph various functions, and manipulate functions symbolically. Apply functions and concepts to solve real world problems. Improve problem-solving ability. General method of instruction A typically class will consist of interactive lectures with use of examples from the textbook and small group work, usually involving worksheets. Recommended preparation The Placement test and the desire to learn. Class assignments and grading Online textbook: College Algebra 4/e (4th Edition) by Judith Beecher, Judith Penna, and Marvin Bittinger. Students can purchase an online (eTextbook) version of the textbook from Pearson. There is no need to buy a hard copy of the text. However, if students prefer, they may purchase a hard-copy of the textbook at the UW Bookstore or at www.mypearsonstore.com. There will be homework, several tests and one comprehensive final exam. The course is not graded on a curve. Grades will be determined using the following weighting: Homework - 20% Tests 60% and Final Exam - 20% . The information above is intended to be helpful in choosing courses. Because the instructor may further develop his/her plans for this course, its characteristics are subject to change without notice. In most cases, the official course syllabus will be distributed on the first day of class. Last Update by Milagros C Loreto Date: 12/10/2013
{"url":"http://www.washington.edu/students/icd/B/bcusp/122mloreto.html","timestamp":"2014-04-16T14:26:18Z","content_type":null,"content_length":"5299","record_id":"<urn:uuid:eec9860b-fd76-4460-882c-6676c182eb31>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00444-ip-10-147-4-33.ec2.internal.warc.gz"}
hi guys i have just discovered a link on the list of 100 greatest fictional characters.it is awesome but some things are not right by me.for example: then in #30 to #2 it all goes great.and then: Last edited by anonimnystefy (2011-08-28 23:28:03) The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=248521","timestamp":"2014-04-17T10:25:02Z","content_type":null,"content_length":"17349","record_id":"<urn:uuid:6469ffa0-12df-48ef-9b8c-3de08a00f74d>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00651-ip-10-147-4-33.ec2.internal.warc.gz"}
Idea for board feature 08-08-2006 #1 Idea for board feature Since math and comp sci are so closely related, would it be possible to implement the following on this board? 1. Superscripts (for exponents) 2. Subscripts (for variable notation) 3. Derivative symbol 4. Summation symbol 5. Integral symbol I'm sure you can already do this via PHP here, but perhaps we could have an eaiser method. Right now it is very difficult to write out an equation on here. Some of that could be very difficult to implement, particually the integral and summation symbol if you're going to need upper and lower bound numbers, derivative should be easy enough without any help, I would just use f'(x), superscript you can use ^, and subscript, of course, you could use the subscript operator. Sent from my iPadŽ Well I think Bubba is on to something. :P There are notations for this stuff, I'm not saying that we should be able to write it (as in without some third party piece to generate it) on the board but maybe we could support something like MathML. Or TeX. I forgot about that... thought I'd mention it now. So it looks nice, and you would be able to put equations in context instead of lumped at the bottom with an image. Look at how civil and smart that is. -shiftily looks around- ...I'm not saying this is a terribly easy idea. Last edited by whiteflags; 08-08-2006 at 11:35 PM. I think it would also be difficult to ask people to learn the syntax required to properly use those tools. Leaving aside the fact that it may be very difficult and very hazzardous to implement. Have you ever seen a forum support these utilities? Sent from my iPadŽ > Have you ever seen a forum support these utilities? > Leaving aside the fact that it may be very difficult and very hazzardous to implement. Hazardous is a spooky word. I don't think it's all that 'dangerous', especially since we're only discussing some way of writing math on the board. The most dangerous thing that happens is probably you don't have the right font, and you get some nice boxes instead of math symbols. Other than that I really don't know who would use it. I just thought it's a nice, possible idea. >> Have you ever seen a forum support these utilities? << Physics Forums support LaTeX. The syntax (pdf) can be pretty complex. Extension. Last edited by anonytmouse; 08-09-2006 at 12:45 AM. This site will convert LaTeX into images. You can always create the image and then upload it into your post. If I did your homework for you, then you might pass your class without learning how to write a program like this. Then you might graduate and get your degree without learning how to write a program like this. You might become a professional programmer without knowing how to write a program like this. Someday you might work on a project with me without knowing how to write a program like this. Then I would have to do you serious bodily harm. - Jack Klein [offtopic] pianorain, the link in your sig is broken. [/offtopic] Memorial University of Newfoundland Computer Science Mac and OpenGL evangelist. Does that fix it? If I did your homework for you, then you might pass your class without learning how to write a program like this. Then you might graduate and get your degree without learning how to write a program like this. You might become a professional programmer without knowing how to write a program like this. Someday you might work on a project with me without knowing how to write a program like this. Then I would have to do you serious bodily harm. - Jack Klein Good class architecture is not like a Swiss Army Knife; it should be more like a well balanced throwing knife. - Mike McShaffry You mean ∑ and ∫ ? edit: and ∂ I think it's a great idea bubba, I prefer tex over not tex and I like the idea of stuffing it right in a post. Does that fix it? Not for me. Apparently I'm not cool enough. I don't have the correct privileges to see the fancy whatever-you-linked-to. There is a difference between tedious and difficult. You mean ∑ and ∫ ? edit: and ∂ Oh, what's the ? one for? Just go into MS word, or whatever, screenshot your equation and IMG it into the thread. Simple! EDIT - Example given in next page Last edited by twomers; 08-09-2006 at 03:49 PM. I assume your font isn't cool enough to display the either the summation, integration, or whatever that stylish backwards 6 I'm not familiar with is. Bitstream displays all three. edit: here's a mod that adds a [latex][/latex] bbtag http://www.vbulletin.org/forum/showt...ighlight=latex Last edited by valis; 08-09-2006 at 03:51 PM. 08-08-2006 #2 08-08-2006 #3 08-08-2006 #4 08-09-2006 #5 08-09-2006 #6 08-09-2006 #7 Join Date Feb 2002 08-09-2006 #8 08-09-2006 #9 Join Date Feb 2002 08-09-2006 #10 08-09-2006 #11 08-09-2006 #12 08-09-2006 #13 08-09-2006 #14 08-09-2006 #15
{"url":"http://cboard.cprogramming.com/brief-history-cprogramming-com/81731-idea-board-feature.html","timestamp":"2014-04-17T02:11:17Z","content_type":null,"content_length":"99894","record_id":"<urn:uuid:7e268d55-ac9d-42ac-98aa-448e2818e175>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00394-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] Possible new multiplication operators for Python [Numpy-discussion] Possible new multiplication operators for Python Charles R Harris charlesr.harris@gmail.... Mon Aug 18 12:21:09 CDT 2008 On Mon, Aug 18, 2008 at 10:55 AM, Grégory Lielens <gregory.lielens@fft.be>wrote: > On Sat, 2008-08-16 at 22:03 -0700, Fernando Perez wrote: > > Hi all, > > > > [ please keep all replies to this only on the numpy list. I'm cc'ing > > the scipy ones to make others aware of the topic, but do NOT reply on > > those lists so we can have an organized thread for future reference] > > > > In the Python-dev mailing lists, there were recently two threads > > regarding the possibility of adding to the language new multiplication > > operators (amongst others). This would allow one to define things > > like an element-wise and a matrix product for numpy arrays, for > > example: > > > > http://mail.python.org/pipermail/python-dev/2008-July/081508.html > > http://mail.python.org/pipermail/python-dev/2008-July/081551.html > > > > It turns out that there's an old pep on this issue: > > > > http://www.python.org/dev/peps/pep-0225/ > > > > which hasn't been ruled out, simply postponed. At this point it seems > > that there is room for some discussion, and obviously the input of the > > numpy/scipy crowd would be very welcome. I volunteered to host a BOF > > next week at scipy so we could collect feedback from those present, > > but it's important that those NOT present at the conference can > > equally voice their ideas/opinions. > > > > So I wanted to open this thread here to collect feedback. We'll then > > try to have the bof next week at the conference, and I'll summarize > > everything for python-dev. Obviously this doesn't mean that we'll get > > any changes in, but at least there's interest in discussing a topic > > that has been dear to everyone here. > > > > Cheers, > > > > f > As one of the original author behind the PEP225, I think this is an > excellent idea. > (BTW, thanks for resurrecting this old PEP :-) and considering it > useful :-) :-) ). > I think I do not need to speak to much for the PEP, telling that I did > not change my mind should be enough ;-)...but still, I can not resist > adding a few considerations: > Demands for Elementwise operators and/or matrix product operator is > likely to resurface from time to time on Python-dev or Python-idea, > given that this is a central feature of Matlab and Matlab is a de-facto > standard when it comes to numeric-oriented interpreted languages (well, > at least in the engineering world, it is in my experience the biggest > player by far). > At the time of the original discussion on python-dev and of PEP225 > redaction , I was new to Python and fresh from Matlab, and the default > behavior of elementwise-product annoyed me a lot. Since then, I have > learned to appreciate the power of numpy broadcast (Use it extensively > in my code :-) ), so the default dehavior do not annoy me anymore... > But I still feel that 2 sets of operator would be very useful > ( especially in some code which implement directly heavy linear algebra > formula), the only thing where my point of view has changed is that I > now think that the Matlab way ( defining * as matrix product and .* as > elementwise product) is not necessarily the best choice, the reverse > choice is as valid... > Given the increasing success of Python as a viable alternative, I think > that settling the Elementwise operator issue is probably a good idea. > Especially as the Python 3000 transition is maybe a good time to > investigate syntax changes/extension. > > > > I don't think so, but given that pep 225 exists and is fully fleshed > > out, I guess it should be considered the starting point of the > > discussion for reference. This doesn't mean that modifications to it > > can't be suggested, but that I'm assuming python-dev will want that as > > the reference point. For something as big as this, they would > > definitely want to work off a real pep. > > > > Having said that, I think all ideas are fair game at this point. I > > personally would like to see it happen, but if not I'd like to see a > > final pronouncement on the matter rather than seeing pep 225 deferred > > forever. > > > I agree 100%. Keeping PEP 225 in limbo is the worst situation imho, > given that the discussion about elementwise operator (or matrix product) > keep showing again and again, having a final decision (even if negative) > would be better. And as I said above, I feel the timing is right for > this final decision... Tim Hochberg proposed using the call operator for matrix multiplication, Which has the advantage of using an existing operator. It looks like function composition, which isn't that far off the mark if matrices are looked at as mappings, but might take a bit of getting used to. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://projects.scipy.org/pipermail/numpy-discussion/attachments/20080818/378e4823/attachment.html More information about the Numpy-discussion mailing list
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2008-August/036820.html","timestamp":"2014-04-19T07:24:07Z","content_type":null,"content_length":"9127","record_id":"<urn:uuid:92ab76eb-b796-4d1f-8f6b-e2ff79c0f47f>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00641-ip-10-147-4-33.ec2.internal.warc.gz"}
Solving Trigonometric Equations Solve the following equation algebraically. Give a general solution. 3cos2x+cosx=2 in the interval $[0, 2\pi]$ ... $\cos{x} = {\color{red}+}\frac{2}{3}$ ... $x = \arccos\left(\frac{2}{3}\right)$ and $x = 2\pi - \arccos\left(\frac{2}{3}\right)$ $\cos{x} = -1$ ... you tell me what x value on the unit circle has a cosine of -1 if you want all solutions, add integer multiples of $2\pi$ to all three solutions. For the first one, x= 0.841+2npi And for the second one, x=pi+2npi Where are you getting a third solution from? $\cos{x} = \frac{2}{3}$ has two solutions, one in quad I and one in quad IV for $x = \pi$, the general solution should be written ... $x = \pi + 2k\pi , k \in \mathbb{Z}$ ... and put away the
{"url":"http://mathhelpforum.com/trigonometry/208077-solving-trigonometric-equations.html","timestamp":"2014-04-17T05:49:08Z","content_type":null,"content_length":"55214","record_id":"<urn:uuid:7a404c78-331c-459a-9521-81b1bbe66c25>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00456-ip-10-147-4-33.ec2.internal.warc.gz"}
[ARCHIVE] WEAPON TABLES : FORUM DOFUS : DOFUS FORUM: Discussion forum for the DOFUS MMORPG, Massively Multiplayer Online Role-Playing Game I made these tables so players can compare weapons easier. Comparing by minimum and maximum damage can be hard especially when critical hits come into play, so I decided to list the average damages of the weapons instead. Hopefully this will help players to decide which weapon hits more for their build. The list includes all weapons from lvls 100+ except from a few healing based ones.The numbers inside the [] are the average damages when the weapons are maged with a 85% potion. You can also find these tables at my talk page in the dofus wiki and I advise doing so because they just look better over there :p Sorted by level Sorted by Ap cost Sorted by level Sorted by Ap cost Sorted by level Sorted by Ap cost Sorted by level Sorted by Ap cost Sorted by level Sorted by Ap cost Sorted by level Sorted by Ap cost Sorted by level Sorted by Ap cost Sorted by level Sorted by ap cost This post has been edited by XehanordHeartless - June 16, 2013, 14:48:24. Wow! Nicely done - Bochi This needs a sticky. Very nice to have, thanks. Should help a lot when picking weapons. I don't mean to nit pick, but Mush Mishish has a cha steal, not air. Quote (SiberianCreeper @ 21 November 2012 18:10) Very nice to have, thanks. Should help a lot when picking weapons. I don't mean to nit pick, but Mush Mishish has a cha steal, not air. Oops! I'm using the staff myself. Corrected, thanks a lot Also I'm sorry about the size of the screenshots. This post has been edited by XehanordHeartless - November 23, 2012, 13:21:30. Awesome table. Not sure if it would confuse things more than it would help, but I'd also recommend adding a column (or columns) for average damage per AP, since that's one of my favorite stats to look at when considering weapons. Quote (AFDowntown @ 30 November 2012 18:58) Awesome table. Not sure if it would confuse things more than it would help, but I'd also recommend adding a column (or columns) for average damage per AP, since that's one of my favorite stats to look at when considering weapons. I thought about it but it may be misleading since you can't use all your ap with any weapon. ha wont need this anymore i rather us my punch So much wasted time. sorry to say xeanord... it seems you wasted a lot of time doing this now. I really liked the time and effort you put in to do this. edit. sorry i posted this before reading yours yesterday This post has been edited by clarkymark - March 17, 2013, 09:31:25. I'll try to update this after the changes. Quote (XehanordHeartless @ 17 March 2013 11:39) I'll try to update this after the changes. There will still be room for strategic calculations with regard to weapons. For example it seems obvious that a 4 ap hammer won't be able to do as much damage as a 3 ap dagger that you can use twice for 6 ap total. This will actually allow you to calculate damage per AP as well as the max number of times you can use a weapon to figure out what the most effective fire weapon is for example: Is it stormcloud staff for 5 AP? Or Mallow Marsh daggers for 3 AP? Etc... Sadly the fact that all classes can use any weapon with no penalty and the fact that you can only use your weapon a set number of teams means that no more than ever these is going to be only one weapon of each element that is ideal. Before it made sense to say that one weapon was worth using because of your class, or maybe you had a 10 ap build so a 5 ap weapon worked, or you had a 12 ap build so a four ap weapon was best. But now it doesn't matter what build or class you are. Sadly there is only going to be one weapon for each element which is best, and it will be easy to figure out what that weapon is. This post has been edited by Mishna - March 23, 2013, 21:18:48. Well nice work , seems time consuming but im sorry to say its not usefull any more after weapons update sadly Quote (AboodHamed @ 06 May 2013 07:10) Well nice work , seems time consuming but im sorry to say its not usefull any more after weapons update sadly This is getting a major overhaul in summer. This post has been edited by XehanordHeartless - May 27, 2013, 20:41:10. I hope they bring back the old weapon system! UPDATE: I've started making the new tables. This will go slowly because the items are not in wiki and I have to search the official weapon list. Quote (IAMONEOFAKIND @ 21 November 2012 04:21) Wow! Nicely done - Bochi Quote (Mishna @ 23 March 2013 21:17) Quote (XehanordHeartless @ 17 March 2013 11:39) I'll try to update this after the changes. There will still be room for strategic calculations with regard to weapons. For example it seems obvious that a 4 ap hammer won't be able to do as much damage as a 3 ap dagger that you can use twice for 6 ap total. This will actually allow you to calculate damage per AP as well as the max number of times you can use a weapon to figure out what the most effective fire weapon is for example: Is it stormcloud staff for 5 AP? Or Mallow Marsh daggers for 3 AP? Etc... Sadly the fact that all classes can use any weapon with no penalty and the fact that you can only use your weapon a set number of teams means that no more than ever these is going to be only one weapon of each element that is ideal. Before it made sense to say that one weapon was worth using because of your class, or maybe you had a 10 ap build so a 5 ap weapon worked, or you had a 12 ap build so a four ap weapon was best. But now it doesn't matter what build or class you are. Sadly there is only going to be one weapon for each element which is best, and it will be easy to figure out what that weapon is. I can't disagree more. If you have high stats and low damages, you may prefer a single one hit weapon like slashen axe rather than a elfic shovel (which is worth using only if you have high fixed damage and ch)
{"url":"http://forum.dofus.com/en/5-general-class-discussion/272936-archive-weapon-tables","timestamp":"2014-04-16T18:57:51Z","content_type":null,"content_length":"119657","record_id":"<urn:uuid:de9e1d25-3208-419c-be67-eeda3c3dfce2>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00029-ip-10-147-4-33.ec2.internal.warc.gz"}
Taking a break from Statistical Mechanics I noticed Corey Yanofsky, whom I respect a great deal, is starting a blog. Corey plans to explore Dr. Mayo’s Severity Principle, which he describes as the “strongest defense of frequentism I’ve ever encountered.” A similarly great geometer, Cosma Shalizi, is even more effusive. I believe Dr. Mayo misunderstands error distributions and the basic facts concerning them (see here and here), but philosophy can be argued endlessly. It’s more productive to examine Corey’s (and Mayo’s) claim that “the severity principle scotches many common criticisms of frequentism”. The key paper is this one, which uses the Severity function SEV(H,T,X) to answer 13 common criticisms (“howlers”) of frequentist methods. Here H is a hypothesis, T a test, and X the data. SEV is often abbreviated SEV(H) when T and X are understood. Unfortunately Mayo based this entire discussion on the NIID example and its sufficient statics. For this case, Which is just the posterior probability! With a slight verbal change, Mayo’s paper is a more convincing defense of Bayesian posteriors than most Bayesians can muster. To illustrate consider howlers # 2 and #3, (#2) All statistically significant results are treated the same. (#3) The p-value does not tell us how large a discrepancy is found. Mayo considers So is SEV or Bayes fixing classical statistics? To put SEV to a “severe” test we need an example where they differ. Since Bayes Theorem automatically uses any sufficient statistics if present, let’s use non-sufficient statistics and see what Intuitively, we’d drop the inaccurate data point and say For example, the Bayesian gets Using the natural, but non-sufficient, test statistic which implies, according to Mayo, the data is good evidence for This is far from the only embarrassment for SEV, but I won’t run up the scoreboard by mentioning others. At this rate “Error Statistics” will turn everyone into Bayesians and where’s the fun in that? What’s weird though is that no one thought to do an elementary check on it. It’s almost as though “SEV” was accepted on religious grounds. I don’t want to be entirely negative, so let me finish on a positive note by echoing Corey & Cosma: Dr. Mayo’s SEV is the strongest defense of frequentist statistics out there. UPDATE: Mayo considers it a key selling point of Error Statistics over Bayesian methods that you can use any T to probe a hypothesis H. By probing H with multiple T’s you get a better sense of whether it’s true. Regardless of what T you use you should get results consistent with the truth or falsity of H for reasonable data consistent with H being true/false. All I did was evaluate this claim. For some T, the SEV answer is identical to the Bayesian posterior. So I looked at the first T that makes SEV differ from the posterior. What I found is that this T get’s it exactly wrong. Note: it doesn’t produce a weakened or less useful form of the intuitive conclusion; it completely contradicts the correct answer. It says the data does provide strong evidence for H, when in fact it doesn’t. This doesn’t mean SEV performs poorley compared to Bayes. It means SEV’s wrong regardless of what the Bayesian answer is. September 12, 2013 91 comments » • Even for frequentists, the natural statistic is the precision-weighted mean, not the raw mean. • That’s not the point. The point is SEV only works when it agrees with the Bayesian posterior. It’s the posterior that’s fixing classical statistics, not SEV. If I used a sufficient statistic then I just get the Bayesian answer. The fact is as soon as you move away from the Bayesian posterior, SEV doesn’t merely become inadequate, it’s extraordinarily • Incidentally Corey, I took you to mean on your blog that you intend to show how “severely testing hypothesis” falls naturally out of probability theory to the extent that it makes sense at all, kind of in the style of Polya showing how probability theory naturally mimics the way humans think. Is that where you’re going? • I don’t find much value in criticisms of error statistics that don’t address how it would actually be practiced. In this specific case, I would expect Mayo to point out that your choice of statistic fails to satisfy the criterion laid out in E.S. Pearson’s Step 2. The challenge is to find a probability model (and a statistic satisfactory according to step 2) with the following properties: - the probability model is simple enough that a statistically literate person’s common sense of the can easily grasp it; - the severity analysis and the Bayesian analysis disagree; - one or the other of them offends common sense. I think a normal model with known variance and optional stopping is the way to go here, because it allows us to force severity and Bayes to disagree even when using the exact same information. Contrast this to your example above, in which it takes a straw statistic to make severity produce garbage. • Your prediction is close. Mayo has offered several phrasings of her Severity Principle. I seem to recall that they can be divided into two categories: the first implicitly assumes a dichotomous datum, and the second implicitly assumes a real-valued datum. The difference between the two is that the continuous versions uses phrases like “more extreme” (i.e., tail areas); the dichotomous versions leave it out. I planned on identifying the dichotomous version as suitable for informal situations where black-and-white reasoning might be a useful quick-n-dirty heuristic; and of course, it accords with Bayes. The continuous version I would call “formal” in the sense of intended for use in actual statistical analyses that call on formal probability models. • Corey, I don’t see the problem. There is nothing either in the explanation or mathematics of Mayo’s development which says we can’t use that estimator. Just the opposite actually. It should work without any problems according to Error Statistics and Mayo’s logic, but in reality is an incredible disaster. I’m well aware that in real life anyone applying SEV would spot the absurdity and would just restrict it to those cases where it agrees with the Bayesian posterior. That was kind of my point in Bottom line: either Mayo’s reasoning is sound and this estimator should work, or her reasoning is unsound. It’s clearly the later. • Out of curiosity, what if I repeated this little exercise with the Cauchy distribution where there are no sufficient statistics? What estimator appears natural now? Which ones are approved by the Severity Principle? Which ones aren’t allowed all of a sudden? • My brain’s model of an error statistician asserts that the specification of a sensible statistic is logically prior to the use of severity in much the same way that the specification of the joint prior for data and hypotheses is logically prior to Bayesian updating. A prior distribution that is inappropriate on the prior information will lead to a posterior distribution inappropriate on the posterior information; in likewise fashion, a choice of statistic nonsensical according to Step 2 will lead to a nonsensical severity curve. • Cauchy! — you go right for the jugular don’tcha, you sonuvabitch. I’m assuming that the specified statistic needs to work of an arbitrary sample size (or equivalently, there’s a family of statistics, one for each possible sample size). I’d need to look at the sampling distribution of a bunch of estimators with a bunch of sample sizes to try to come up with a good one. (My gut suggests that posterior mean under a flat prior might be good.) I wonder what Spanos would say… • “you go right for the jugular don’tcha, you sonuvabitch.” It’s a Marine thing. • Some time ago I’ve read Mayo’s rebuttal of Birnbaum’s theorem (SP+CP ~ LP) and I admit to have understood nothing. Too discorsive and obscure for me. I would be glad if I will can read something about it here. • Joseph, What is the posterior P(mu>0.1 | given Ybar) If you calculate SEV with Ybar you have to calculate the posterior with Ybar – or else you are comparing judgments with different sets of information. • More clearly, suppose you only know that raw average of the two measurements, how would you calculate the posterior? Because that is what you did in the SEV. You calculated it supposing you only knew the raw measurement. But if you only knew the raw measurement, your posterior would also not be zero. • Anon, I think you and some others are missing the logic here. I’m not comparing SEV to Bayes. I’m comparing SEV to the intuitively correct answer. Intuitively, there is no evidence at all for The fact is SEV should have given results consistent with the intuitive answer, but it draws the exact opposite conclusion. It gets it exactly 180 degrees wrong. It’s just flat out not right. There’s no escaping it. The fact that Bayes automatically extracts the most it can from the data and automatically gets an answer consistent with the intuitively correct one is just gravy, but not essential. Even if that wasn’t the case, SEV is still wrong. • Antonio, I’m probably not the person to talk to about Mayo’s critique of Birnbaum’s result. I don’t think Bayesians really have a dog in that fight, but my current understanding is that Mayo is basically right. You need an additional assumption to get LP: SP+CP+??? ~ LP. Everything turns on ??? and whether it’s true or not. People can argue about that endlessly and it doesn’t seem very productive. I was thinking about doing something tangentially related though. Corey seems like he has much more to say about stopping rules, so maybe he’s planning on taking it up. • My point is this: We have to know how to best combine the evidence from both tests. The most efficient way to combine both tests, without losing information, is to come up with a sufficient statistic. In that case, the SEV will be in accord to the common sense. Now, you are saying that SEV will violate the common sense. It will when you ignore information. So will Bayes. If you ignore the information that you had tests with different precisions, the posterior will not be 0. • “It will when you ignore information.” Throwing away info might limit the usefulness of SEV. For example, intuitively there may have been strong evidence for H, but since SEV is using less information it concludes “H hasn’t been well probed by T”. But that’s not what happened at all. What actually happened is that SEV concluded “There is strong evidence for H”. Regardless of whether you use all the info or part of it, this is wrong. “In that case, the SEV will be in accord to the common sense.” Yes, but it’s also in accord with the Bayesian posterior. The entire point was to see what happens when it disagrees with the Bayesian posterior. What happens is that it’s wrong. All you’re really saying is you shouldn’t use SEV unless is agrees with the Bayesian answer, because it’s nonsense otherwise. In that I think we are both in agreement. Also please note: Mayo brags about how SEV will work with any test statistic. In her mind it’s a key point in favor of Error Statics over Bayesian methods that you can use different T’s to probe H in different ways. It should have been consistent with the truth for this T, but it isn’t. End of story. • Anon, Based on our converstation, I added an update to the post. • “Throwing away info might limit the usefulness of SEV. For example, intuitively there may have been strong evidence for H, but since SEV is using less information it concludes “H hasn’t been well probed by T”. But that’s not what happened at all. What actually happened is that SEV concluded “There is strong evidence for H”. Regardless of whether you use all the info or part of it, this is But if you throw away info the bayesian answer will not be humble either. If you feed the likelihood of Ybar, ignoring the info that Y1 and Y2 were measured with different precisions, it will say that mu> >.1 has high posterior probability. In your example, the SEV works how it is supposed to work with the info you are giving it. When you sum Y1+Y2 this is equivalent to saying that you have two observations with the same (in)precision. Given the evidence from two observations with the same (in)precision, it is not absurd to say that you have pretty good reason to believe that mu is higher then 0.1. Now, if someone tells you that each observation has a different precision, then you should take this into account. And your assessment of the evidence will change. How is this differente from bayes? If someone gives you Ybar, you will say that 0.1 is probable. But if afterwards someone says that Y1 and Y2 came from different distributions, you take this into account and change your beliefs. • Anon, See the update to the post. But, you’re getting technical facts confused here. If you feed less info into the Bayesian machine it will increase the uncertainty of every hypothesis. In general it will move probabilities of any hypothesis closer to .5 (or whatever that hypothesis would have under the prior) which is the point of maximum uncertainty. Sometimes it will move them a lot, sometimes a little. This is all well and good. But it will NOT start assigning a very high probability to For Bayesians, throwing away data increases uncertainty, it doesn’t decrease it. I say again: SEV doesn’t give a weakened answer, but one still consistent with the truth. It gives the wrong • You have: Ybar = mu1 + e, e~ norm(0,.25) And you observe ybar=1 Assume a very uncertain normal prior on mu1: mu1~normal(0, 1000) then the posterior would be mu1|ybar ~ normal(1,0.25) P(mu>0.1 | Y=1)=0.964 • I have ignored the term 1/1000, but you get the point. If you only have the observation ybar, the bayes analysis will say that you are pretty sure that the mean is above 0.1 • Anon, That’s not the bayesian calculation. Given • What? We are assuming that you only know ybar. You do not know the individual variances, you only know the variance of ybar. How would you do the calculation with that information? Give a concrete example. • Anon, your calculation of the posterior given only Anon, you seem to know a lot about it. Let’s clarify the key point: given the complete data, what stricture of the severity approach (or error statistics writ large) forbids the use of • Whoops! I used \approx ( • Joseph, you don’t need a change of variables and integration (although it doesn’t do any harm). It follows straight from the stability property of the normal distribution that $(Y_1 + Y_2) \sim N (2 \mu, 1)$. If you actually work through your $\int dB P(A,B| \mu)$, you’ll arrive at the same place. • Well, in this case it seems that it is the same justification for both baysesian and frequentist. Given Y1 and Y2, no bayesian would use only ybar! How would they justify it? They would say that when you use ybar you lose important information. This is no different for the frequentist. Given that you know the variables had different precisions, you should use it to get the best inference possible. • Anon, Bayesians would say, “Condition on all of the available data.” This doesn’t seem to be a tenet of frequentism in cases with no sufficient statistic. In particular, it seems to me that severity requires a reduction of the data to a single real statistic. In the space of possible data, this reduction defines level sets of equal statistic value; Pearson would have us choose a statistic such that by moving across the level sets, we become “more and more inclined on the information available, to reject the hypothesis tested in favour of alternatives”. Is this always possible? If not, in cases where no acceptable reduction exists does error statistics simply balk? • Anon, Actually, I take back the last couple of comments and apologize. You’re right that with the average statistic that SEV is getting the Bayesian posterior condition on part of the information. This really changes my intuition for what’s happening with SEV (as well as what Bayes does when you throw away information). It really improves my understanding at least. If SEV uses all the information it gets the Bayesian posterior. If it throws away some of the information it’s getting the Bayesian posterior conditional on that reduced information. Very interesting! But this is where philosophy matters. Suppose an Error Statistician does as Mayo advocates and conducts multiple probes of $H: \mu>.1$, one using the sufficient statistic and one using the average. They would have to conduct multiple tests to get a more complete picture if no sufficient statistic were available. What will they conclude? For the sufficient statistic they will get a very low SEV for H. Indicating “H has not passed a serve test”. When they use the average they will get “H has passed a severe test”. According to Error Statistics the former doesn’t mean H is wrong, it only means it hasn’t passed a stringent test. Once they combine that ambiguous result with a strong pass, they’ll conclude H is true. The Bayesian looking at the two sets of numbers merely says “My best guess based on the best information available is that H is false” This means that if H is false and a Baysian gets a small P(H|K), and there is any functional reduction of the data for which k=F(K) and P(H|k) is large, then the Error Statistician is liable conclude H is true from the same data K. Moreover, if the distribution has no sufficient statistic, there is no test statistic which uses all the information, so every test statistic will effectively be a P(H|k). Who knows what will happen, especially if the amount of data is large, so that any given statistic likely throws lots of the information away. They’re interpreting the numbers wrong, and getting the wrong final answer because of it. They could patch this up by requiring that only sufficient statistics should ever be used, but there’s no frequentist justification for doing that, and it would only sometimes work. All these problems are solved instantly by just using the Bayesian posterior and interpreting it as a probability. The Bayesian posterior really is what’s fixing those frequentist tests. • “Condition on all of the available data.” That is valid for all frequentists and bayesians. Actually, let’s rephrase that. That should be the case for all scientists. I’m not defending applied classical statistics as it is done today. A lot of theoretical statisticians just do is this: make up a statistic, do a taylor expansion, figure out the asymptotic distribution; or make up a statistic, simulate it and figure out the distribution. Then applied researches start using the statistics. Just because you have the distribution of a statistic, it does not mean it is useful. The measure of your statistic can make no sense at all to the problem you are facing. And, as far as I can see, this is a problem BOTH for bayesians and frequentists. If the only information you have available is a nonsensical statistic (like ybar), then you will get nonsensical results. • “The Bayesian looking at the two sets of numbers merely says “My best guess based on the best information available is that H is false”” The scientist facing two tests should do the same. One of the tests treat all observations as if they came from a N(mu, 0.25). The other test treat each observation with their respective precision. Which one use all information available? If you only knew that both observations came from N(mu, 0.25), then your best guess would actually be that mu>0.1. But that is because you are constructing the evidence (the statistic) in this My guess (I say guess because I have not thought of all examples and have not demonstrated it)is that when there is no good information available (that is, when the statistic is not good), both frequentists and bayesians will very likely be wrong. • Now, just repeating, I’m not defending applied classical statistics as it is done today. It is a complete mess. I point to Gigerenzer or McCloskey surveys on that. • Anon, That’s not all the information they have available. Behind the scenes (from a Bayesian perspective) we can see that they’re only using part of the information k=F(K), but they don’t say “H is warranted by k” they say “H is warrented by K” for the very good reason that they do actually have K. A bayesian can see that they’re really doing the former and not get tripped up, but they will claim the later and be wrong. Once SEV deviates from P(H|K), it gives nonsense. There is no getting around this. If you want to artifically strict SEV to cases when it’s equal to P(H|K) then great! Me too! But frequentist don’t want to make this restriction. • “Once SEV deviates from P(H|K), it gives nonsense. There is no getting around this. If you want to artifically strict SEV to cases when it’s equal to P(H|K) then great! Me too!” I don’t think your example proves your claim, for two reasons: i) when we use ybar, both SEV and P(H|K) – with uncertain prior – are giving nonsense (nonsense because we actually know the truth in the example). ii) we could put a strong prior on mu, and we could make P(H|K) as arbitrarily far from SEV as we want. For example, we can calculate SEV with the wheigthed mean, that would provide strong evidence for mu0.1)=90% even if we used all information available. • hat would provide strong evidence for mu0.1)=90% even if we used all information available. (don’t know what happened, but there was a problem in the coment above, some words are missing) • Ok, the same problem happend again! • Anon, if you use the < character, it appears raw in the HTML and your browser interprets it as the opening of an HTML tag. Words following the open tag disappear because your browser wants to treat them as markup, not text. If you write &lt; it will appear as <. • Anon, This is a serious stretch. If you use a prior for mu which contains Seriously, there is nothing, absolutely nothing, in the Severity concept which says we can’t use that test statistic to test how warranted H is by K. In fact, they insist that SEV has exactly this flexibility. It’s a major selling point for them. They don’t want SEV restricted in the way you or I would. Now after the fact, we can see it was a dumb statistic, and from a Bayesian perspective we see it’s not using all the information, but IT’S PERFECTLY OK BY THEIR OWN CRITERION. • We could see it was a dumb statistic before the bayesian calculation, since there was another statistic that used the full information set available. Now, maybe you should come up with a different example where both bayesian and frequentist will use the same info (that is, the bayesian likelihood and the frequentist likelihood will be the same) and while the frequentist gets nonsense, the baysesian does not get a nonsensical result. • I have to go now, but I will come back later to read your thoughts on these topics, (Joseph and Corey). So please elaborate further, I’m enjoying our discussion. Best. • I agree with Joseph: if the SEV claim is that it works with any statistic, then he only needs to demonstrate one choice of statistic that breaks it. We can _imagine_ (without needing to demonstrate) scenarios where we don’t know a priori which statistics make “sense” and which don’t – this is why working on _any_ statistic is a selling point in the first place. Regarding the claim that “condition on all of the available data” is valid for all frequentists and Bayesians: no, false on both counts. First Bayesians: we can condition on whatever we like. Whenever we condition on something different, we are calculating a different quantity – this is fine provided we keep track of these different quantities and don’t start using them as if they were the same thing. Thus p(x|I_0) reflects what we would believe about x if the available information were I_0, while p(x|I_1) reflects what we would believe about x if the available information were I_1. There are many reasons we might want to calculate both of these quantities. E.g. we might be trying to judge what would be most useful to learn, so as to decide which future experiments to perform. Or we might be trying to judge how sensitive our model is to neglecting certain parts of the available information (perhaps to save on computational costs). But what we don’t do is to calculate p(x|I_0) and then proceed as if what we calculated is actually p(x|I_1). Next, frequentists: the whole hypothesis testing setup is based on the idea that you can choose whatever test statistic you like (where a test statistic is a summary of the data, i.e. a subset of the available information). A more skilled modeller may come up with a better statistic than a less skilled modeller, but the point of the framework is that it is supposed to safeguard even the less skilled modeller against incorrect conclusions. Thus a poor choice of test statistic may lead to an underpowered test, but should still provide a guarantee against false positives. When this is not the case, the whole foundation crumbles. The idea that frequentism does _not_ force you to use all the available information is pretty central. • konrad, I agree that we can condition on different things to see what can be inferred in different states of information, and that this can answer interesting and/or instrumentally important questions. My point is that if the question is, “What can be inferred from some specific set of data?” — as it almost always is, in science — then in general we need to condition on all of the data, not just a lower-dimensional function of it. • Agreed – if we condition on a different set of information, we are answering a different question (namely, what can be inferred from _that_ information?). • “First Bayesians: we can condition on whatever we like. Whenever we condition on something different, we are calculating a different quantity” You can condition your test whatever you like too, and you are calculating a different quantity. If you assume that your measurements have the same variance, as Joseph did, then you will conclude that the data is good evidence for the discrepancy. Of course, that is a wrong assumption, but you can do it. Just like you can do the baeysian posterior with the same assumption and get the same results. Both approaches will fail. • “The idea that frequentism does _not_ force you to use all the available information is pretty central.” If anyone has said that, that person is the one to blame. All relevant information must go to testing, including background information. Of course, there are a lot of theoretical statisticians developing tests that makes no sense, both Bayesians and Frequentists. That does not mean you should use it. For example, Bayes Factors. Or “Full Bayesian Tests”. Or testing precise hypothesis with priors on point nulls. People develop this kind of stuff. And the results that come out of it can be as nonsensical as you want. • Anon, I think you are missing the point. Sure, if one makes an incorrect assumption one should not be surprised to get an incorrect answer. But the point is that the frequentist test described in the post _does not_ make the assumption that the measurements have the same variance. It just constructs a test based on a statistic, and one does not need to make any assumptions about measurement variances to construct a test based on a statistic. So the point is that one gets an incorrect answer _without_ making an incorrect assumption – this is why the methodology is • When you test with the statistic Ybar, you are making a wrong judgement, just as if you had update your prior with Ybar. When you sum both variables without taking account of the different precisions, you are acting as if both of them have the same precision. You chose to ignore this information – so, yes, you are acting as if both observations have the same importance, and this is an wrong assumption. Now, if you think it is wrong updating your prior with Ybar, because you know you have more information than is contained in Ybar, you cannot justify testing with Ybar either. You are losing important information in both cases, so if you claim one methodology is wrong because it could use Ybar and get a nonsensical result, the other methodology is also wrong because mathematically it coul also use Ybar to update the prior and also get a nonsensical result. But it is easy to see that the problem in both cases is not the methodology, but the wrong application of it. • Anon, THEY DON’T SEE IT AS A WRONG APPLICATION!!!!!!!!!!! You are seriously getting this wrong. It’s like looking at two brothers, one who saves his money and is rich and the other who spends his money and is poor, and then claiming “see they both have money problems because if the rich brother put his money in a pile and burned it he’d be poor too”. The Bayesian calculation, without effort or special notice, and without making any choices about estimators, automatically uses all the info. Even if there are no sufficient statistics. SEV does not. And if there aren’t any sufficent statistics, then SEV never can. But that’s not the worst of it for SEV, because they themselves don’t see their procedure as “throwing away information”. That understanding makes perfect sense to me from a Jaynesian perspective, but they want to expressly deny that perspective. They view SEV applied to that bad estimator as a perfectly legitimate way to probe the hypothesis. They believe the data (all of it) is showing that H passes a severe test. They are simply wrong in this. Moreover, I think you and most others are greatly underestimating how real a problem this would be in practice. In large scale simulations involving multiple complicated data sources, the sample average and sample variance are often the only statistics ever used. Nate Silver mentioned something about this in a recent talk when he basically said sample average is king. In that realistic setting it would be a highly non-trivial problem to identify when this is occurring and would be practically impossible in most cases. The fact that the Bayesian posterior doesn’t suffer from this problem would be a huge practical advantage. But there is no getting around the basic point. According to their Frequentist ideology SEV should work for this statistic. It doesn’t. Anyone who doesn’t like that is free to artificially (according to their frequentist principles) restrict it to cases where it matches the simple posterior, which is perfectly fine with me. • Ok, I think you are right when you claim that people do not see this in general as wrong application. I do, but in practice you are rigth that most people don’t. In this case in particular, since it is obvious, people would see it as wrong. But in other cases people wouldn’t. I have faced this problem with unit root “tests”. I have shown people that their tests are irrelevant, because the metric chosen is not appropriate for their problem, EVEN if the test has high severity and low type I error – the problem is that when testing, they are making “hidden” assumptions about the data, losing important information. They think that since they have the statistic sample distribution, that is all that matters – but it is not. Most people don’t get this, and do a complete nonsense, for example, appliying all different kinds of testing but having no idea how to combine the evidence -usually saying, ok, this test was significant, this test was not etc. Now, even with what I have exposed above, my take is that – as far as I can see – the bayesian approach would suffer the same problem. Maybe you should come up with a different example that ilustrates this situation Nate Silvers points out. • A straightforward mindless idiot application of bayes theorem doesn’t suffer from this problem. It only suffers from it if the Bayesian goes way out of their way to screw it up. And incidentally, I don’t think the Bayesian’s answer with the reduced statistic is really “wrong”. It’s “right” in the sense that if all you really knew was the reduced statistic then the Bayesian answer is making a reasonable guess. The goal of Bayesian statistics, after all, is to make the best guess possible from a given state of information. A given state of knowledge doesn’t always contain enough information to really be useful. So sometimes when you make a best guess from uninformative information those guesses don’t agree with reality. There is no way to avoid this other than using more information. Of course, Mayo as a frequentist doesn’t think about the problem this way. She’s openly admitted she has no idea why Jaynes is concerned about “information” and views it as at best an unnecessary veneer laid over statistics and at worst utter nonsense. Which in retrospect is why SEV screws things up. So the Bayesian answer is making the best guess possible given the information fed into it. When you feed more information in, you get better guesses. I don’t see how this is a failure of Bayesian Statistics and all I can recommend is that if you have relevant information be sure to us it in your analysis. It would be nice if we could take “nothing” and make accurate inferences about the real world. Indeed I could think of all kinds of ways to use such an oracle if it were possible. But it isn’t. Inferences are based on information and the quality of inferences has to depend on that information somehow. • Joseph, The rationale for severity is that, were you hypothesis false, with high probability you would have had a statistic that fits less well the hypothesis than the one you actually have. In the present case: 1) the measure of fit, of distance, is well defined – the bigger the sample mean, the less it fits with the hypothesis that mu = 0. The only problem is that we do know that measurements have different precisions, so we should take this into account (but I’ll get back to this later) 2) your error probabilities are correct. The distribution of the sample statistic is correctly derived. So your example is correct, and we correctly could say that the hypothesis H (mu >0.1)severely passes test T (96% severity) with outcome y (y1=2, y2=10^-10). Now, why could we say that? Because this test result is “reliable”, in the sense that only 4% of the time we would get it wrong if the truth were mu<0.1. Unfortunetaly, in the present case, we are in that 4% of the time. Because the imprecise measurement gave us a result of 2, an outlier when mu=0, and ~~ because we did not take into account the different precisions of the measurements ~~ the noisy measure, the outlier, dominantes our test. But does that mean that the rationale is nonsensical? No, it doesn't. If this were the only test available for us, it would indeed be rational to believe the result of the test, since it is realible procedure – only 4% of the time we would get it wrong. And this a procedure that would lead us to discover the error, if we were in error. We could repeat our measurements and, even with this simple mean, we would see that this first result was an outlier, thus learning from error. But, we do know another statistic test ~~far superior ~~ than this one. For example, the power of the test with ybar is pretty low for ranges where the power of the test with the wheigthed average is 100%. Given this knowledge, one should use the more reliable test, in the same way that you should update the prior with Y1 and Y2, and not with Ybar. So my point here is that this example does not invalidate the rationale for severity assessment, even though it does warn people that the SEV, like the p-value, is not an absolute number that you can just calculate blindly without proper knowledge of the problem. • “But does that mean that the rationale is nonsensical? No, it doesn’t” Uh, yes it does. What you’re saying is almost right, but SEV gets it wrong for a very specific reason. It takes into account information which is basically irrelevant to the problem (the first measurement), which the Bayesian calculation is effectively ignoring. The error that SEV is making is the same as the one being joked about here: Which was ridiculed by Mayo on the grounds that no Frequentist would be stupid enough to make this mistake. The dice roll is clearly irrelevant to the Sun exploding. Yet that’s exactly the mistake SEV is making here! The first measurement is effectively irrelevant to the question but SEV is taking it into consideration anyway. SEV is getting this wrong specifically because it’s trying to judge things using the just the sampling distribution, rather than keeping the data fixed and using the posterior. The Rational of using the sampling distribution in the way you describe is invalid except in special instances when it gives the same result as the posterior, in part because it’s liable to include information which is irrelevant to the question at hand. So yes, the reasoning is wrong. • Anon, you claimed that using Ybar in a frequentist framework implies an assumption that all measurements have the same variance. On what do you base this claim? Even if I were to agree that it implies some assumption (which I don’t), why would it imply that _particular_ assumption rather than, say, the assumption that the measurement variances are different but unknown? • Konrad, I’m saying that choosing to ignore the known variances is equivalent to treating both observations as having the same precision. Imagine a situation where you do not know each variance in particular, only the variance of ybar. Then the test result is Ok and it actually agrees with the Bayesian posterior with an very flat prior, as we have seen. But in this case you do know the variances. If you do use the variances, you have more reliable test, with 100% power to very small discrepancies. So, you have two tests. One of them is equivalent to assuming equal variances for both observations (which you know it is not true). The other one uses all information you have and it is more reliable (it has more power to the same type I error). If you choose to use the first test, you are acting as if you did not know the variances were different, when you actually do. This is akin to choosing to update your prior with ybar when you do know y1 and y2. • But I’m not saying that it implies a particular assumption, it implies all assumptions that are equivalent to treating both observations as having the same precision. • I am unclear on which notion of equivalence you are using here – it seems to be one in which an assumption can be “equivalent” to a method (i.e. to “treating both obserations as…”) and I’m not sure the issue can be fixed by simple rephrasing. There is clearly a difference between assuming the precisions are the same and assuming they are different but unknown – these assumptions cannot be equivalent to each other for any sensible definition of equivalence (e.g. in the Bayesian framework they would imply different models). So, which of these two assumptions is equivalent to “treating both observations as having the same precision”, and why? Why not the other assumption? Is it accurate to say that ignoring the precisions is treating them as being the same (and why)? More generally, how do we tell whether a given assumption is “equivalent” to a given method? (Avoiding questions of this type is exactly why the frequentist framework is set up so that a test statistic can be used _without_ committing to an assumption.) • Imagine two worlds: A) You have only observations in which the error term is distributed N(0, 0.25). B) You have two sets of observations Y1 and y2 (that is, you have two separte error terms with different precisions). In A you have only that informations, that is, observations with error N(0, 0.25). So, if you want to do bayesian calculation, you can only uptade your prior with the observations with errors N (0, 0.25). If you want to test, you can only test with this precision. Your test is not very powerful, it has only 7% power to discrepancies as big as 0.1, for example, considering a significance level of 5%. In B you have more information. Now the bayesian analysis can uptade the prior considering the different precisions, which gives a more accurate answer. And yoy can also use a test considering the different precision, which is much more powerful then the other test (it has 100% power to a discrepancy as big as 0.1 and significance of 5%). Now, if you are in world B, you could also act as if you were in world A, that is, you could act as if you had only observations with error term N(0, 0.25). That does not mean you should do that, but you could, either to use an inferior information to uptade your prior or to use a inferior test. That is the notion of equivalence. If you choose to ignore the information, you are acting as if you were in the world without that information. • Ok, one more try before I give up: 1) In frequentist analysis (specifically, hypothesis tests controlling false positive error rate), a test is either valid or it is not. If it is valid, it provides a guarantee against false positives. Among valid tests, some may be inferior to others – an inferior test is one which has weaker power (while still retaining the same guarantee against false positives). The point is that if a valid test gives a positive result you can believe it, and do not need to go in search of a more powerful test because the one you used is already powerful enough to detect the signal in your data set. The only way you would need to replace the test with a different one is if it is not valid – for this to be the case there needs to be an actual error in the methodology. 2) You are not addressing my questions at all. Specifically, I raised the possibility where the precisions are unequal but unknown. So we are in your World B, but we cannot plug the precisions into the calculation because we don’t know them. • @Joseph, in reply to “THEY DON’T SEE IT AS A WRONG APPLICATION!” \lt;dons error statistician hat> You keep repeating this, and it keeps not being true. The very first thing I asked Mayo for was rules/guidelines for choosing statistics. She never really answered me, but she alluded to what the answer would look like, both on her blog and in the paper for hers you cite (see the first two occurences of the word “agreement”). \lt;doffs error statistician hat> • Corey, See page 164 of that paper for a definition of what passing a sever test means. S1 and S2 are both satisfied in this case just like the examples in which I took it from. The frequentist rational would apply to any statistic. Some statistics may be more useful than others in the sense that some are a more complete probe than others, but there is absolutely nothing in the frequentist rational which suggests you should get blatantly wrong answers that contradict what a simple look at the data implies. The examples she actually cites require the low p-value for H_0 and high SEV in order to justify the statement “H passes the test with high severity”. Those were provided in this case. See for example Anon’s comment above which includes: “But does that mean that the rationale is nonsensical? No, it doesn’t. If this were the only test available for us, it would indeed be rational to believe the result of the test, since it is reliable procedure ” He’s saying this because frequentist understanding of the problem leads them to believe the procedure would only fail 4% of the time. That’s the criterion Mayo and other frequentists want to use. They believe it’s perfectly legitimate and their belief causes them to get this one wrong. It may only fail 4% if their assumption about the frequency of future events actually holds true, but regardless of whether it does or not, it fails in a way that’s obvious from a simple intuitive look at the data. Note: it doesn’t just fail because they got unlucky. It fails because it trivially contracts what a simple look at the data implies. If Mayo wants to partially reject this frequentist understanding and to restrict the range of test statistics down to those which imply the Bayesian result, then great. I’m all for it. But then how exactly does she claim SEV fixed the relevant howlers when Laplace was using mathematically identical procedures on the identical problem two centuries ago? Look, frequentists have a great advantage here. Every time a problem is found with their procedures they can patch it up with another intuitive ad-hocery . Then we find another problem, and they patch it up again. Always moving ever closer to the Bayesian result, but never acknowledging it. So I wouldn’t be surprised if Mayo wants to shift the goal posts in this way. But I got plenty more examples were that came from. Incidentally, for the NIID case what do you suppose the • Corey, Look at the definition for passing a severe test on 164 and then look at the example mid page on 169. Then think about the frequentest rational for these procedures. Where in any of that do you see even a hint of the idea that if you choose the wrong T, you’ll can get H passing a severe test even though H is over a billion standard deviations from the obviously correct area observed just by inspecting the data? • Or Corey, here’s another way to look at it. Where in the philosophical justification for severe tests on 164 does it explain that one of these test statistics is legitimate while the other shouldn’t be used? • Joseph, it’s in (S-1): “for a suitable notion of accordance”. Annoyingly, what is and is not a suitable notion of accordance is never made explicit, although Pearson’s Step 2 and the “agreement” quotes I cited give an inkling. I feel pretty confident that something reasonable and objective involving first- and/or second-order stochastic dominance could be defined to give a partial preference order on test statistics. • Regarding In the NIID case? Well, you haven't specified the stopping rule, so the question is underspecified. • Joseph, You are mistaking your lack of knowledge about the method as a faiiure of the method. Not every test is equal, there are more powerful tests, there are measures that are relevant to the problem and measures that are not. You are also contradicting yourself. You say that when only Ybar is available it is not rational to believe the test result, when we have showed that it is with a flat prior and you have agreed with that. So if you think one method is flawed, the other is also flawed too, because it would lead to the same conclusion in the same circumstances…we can easily see that the problem in your example is not the method, but the wrong application of it – I could use the same example to “prove” that Bayesian analysis is wrong when updating with Ybar. And you could easily see that it is blatantly wrong to criticize a method wrongly applying it. Now, where is my answer to Konrad? I have answered him earlier and I can’t find my answer. • Corey: “it’s in (S-1): “for a suitable notion of accordance”. Annoyingly, what is and is not a suitable notion of accordance is never made explicit,” It is made explicit in the examples. See mid-down page 169. I met S-1 the same way Mayo did. Both S-1 and S-2 are explicitly satisfied. “I’m pretty sure that equation is only meant to be used with one-sided composite hypotheses” So SEV can’t even handle an absolutely simple and necessary generalization to trivial problems. How again does it fix classical statistics or serve as the foundation to applied statistics? This wasn’t an innocent question either, because as soon as you start to define things like that then Cox’s theorem starts exerting itself. “Not every test is equal,” I’ve said this over and over again: I know not every test is equal. Not every test is equally useful for SEV or other frequentists. But there is absolutely nothing in the philosophy behind SEV to suggest that it will explode with the wrong T. You should be able to use any T, it’s just that some will be deeper probes than others. “You say that when only Ybar is available it is not rational to believe the test result, when we have showed that it is with a flat prior and you have agreed with that.” I agreed with that from a Bayesian perspective. The creators of SEV don’t agree with that Bayesian perspective and think it’s completely wrong and nonsensical. Within the frequentist/SEV world that calculation shows H has passed a sever test, end of story. So they take data which is clearly showing H can’t be true, and use that data to conclude “H has passed a severe test”. How much more wrong would they have to be before you admit they get it wrong? Whether or not a Bayesian can patch SEV up enough to work in this problem is completely irrelevant. • Reposting the answer to Conrad: “The point is that if a valid test gives a positive result you can believe it, and do not need to go in search of a more powerful test because the one you used is already powerful enough to detect the signal in your data set.” Konrad, what you have said above is not true, both in the mathematical point of view and in the methodological point of view. If a test gives a positive result that does not mean that “the one [test] you used is already powerful enough to detect the signal in your data set”. The power of a test to detect a discrepancy as big as 0.1 does not change whether you have a positive or negative result. In our example, the power of the test to detect a 0.1 discrepancy is ALWAYS 7% when alpha is 5%. So this test is always a poor test, in the sense that it is not reliable to correctly detect the 0.1 discrepancy, irrespective of what result it gives. Second, the logic of tests and designing experiments is to: (i) find the more accurate test possible, always searching where you experiment could be in error; and, (ii) to actually repeat your experiments and improve it whenever possible, controlling the sources of errors (both sistematic and non sistematic errors) so to put your theories under stringent scrutinity. Let’s suppose the 0.1 discrepancy is relevant to find out, if it exists. Then if one uses Ybar, one should be seriously questioned why she is doing experiments that will have only 7% chance of finding an effect this big when it exists. And, more seriously, should be questioned why she isn’t using the other much more reliable test that has 100% chance to detect the same effect. When you have two results from two instruments with different reliabilities (power, measures), you will trust the more reliable one (the more powerful one, the more adequate measure to your So this claim is incorrect: “The only way you would need to replace the test with a different one is if it is not valid”. As I have said in all my comments before, you can have formal valid tests that are useless for most practical problems. (even bayesian tests) “You are not addressing my questions at all. Specifically, I raised the possibility where the precisions are unequal but unknown.” Yes, I am. You asked about the concept of equivalence, and I have tried to make clear the concept of equivalence in general. In the specific case that you have unequal unkown variances, this would not be equivalent to equal known variances, because you would have to estimate the variance from the data – that is, you would have even less information. • All, If you consider There is nothing in the philosophical justification for SEV to indicate one of these shouldn’t be used. They may not both be useful, but neither is disallowed. So what happens if we use both? Now if you do a simple mindless Bayesian calculation conditional on both of them you’ll get the truth. What happens if you looked at SEV for both of these? According to the official philosophy, a low value of SEV doesn’t mean H is wrong, it just means that it didn’t pass that particular test with high Severity. So an error statistician would look at both estimators and say, “H didn’t pass one test with high severity, but it did pass the other test with high severity, therefore taken together the data provides some descent evidence for H” A carpenter ignorant of any mathematics above arithmetic would have gotten this right, just like the Bayesian, but an Error Statistician get’s it wrong. • That last comment, by the way is directly related to Cox’s theorem, sinc it is directly exploiting the fact that the Severity Principle isn’t combining evidence in a way consistent with the product rule. • “So an error statistician would look at both estimators and say, “H didn’t pass one test with high severity, but it did pass the other test with high severity, therefore taken together the data provides some descent evidence for H”” Look, I grant you this: people do that in practice. Not error statisticians, or statisticians, but scienstists. They are social scientists that have no clue what they are doing, practicing a nonsense cult. So, when you say that people could do this you are right. If you take a sample of empirical papers in social science, you will see people doing 5 10 different tests and having no clue what to infer, just claiming “This was significant, this wasn’t”. And they think are doing science. And my biggest problem with Mayo is that she does not fight against that. She actually fight against people that show this problem, like Gigerenzer or McCloskey! What a contradiction! She prefers to discuss against bayesians, who are the least of our problems!!! Our biggest problem is that very important and inteligent researches are doing nonsensical signicant testing and, thus, making a bad name of what could be a sensible approach. Now, back to the theory. “So what happens if we use both?” You have two tests. T1 and T2. The first one is very unreliable. It will give you only 7% of the time the correct answer when there is a discrepancy as big as 0.1 The second one is perfectly precise. It has 100% power to detect what you want to detect. You are going to choose the best instrument to your inference. Or, even better, you can combine both tests into a more powerful test. There is no mistery in this. The logic in error assessment is to analyse how you could be in error, and avoid it. • “Look, I grant you this: people do that in practice. Not error statisticians, or statisticians, but scienstists. ” Anon, this is not a mistake. It’s an official part of their philosophy. That’s the way you’re supposed to do it according to them. They brag about it even. Also, note it’s not a question of the results being consistent with the truth. Anyone can be fooled by misleading data. At issue is the consistency between their results and what a simple intuitive look at the data reveals. The data isn’t misleading here. They’re just process it wrong. Once again, the fact that you or I can use our Bayesian understanding to patch up their methods is irrelevant to whether their methods work on their own. If you want to restrict SEV to instances in which it agrees with Bayes I can’t argue with that. If you say “it would be stupid not to restrict them in that way”, well I can’t argue with that either. If that’s the best defense of SEV that anyone can come up with, then why waste anymore time with SEV? • Anon, Consider this quote from page 159: “This is an important source of objectivity that is open to the error statistician: choice of test may be a product of subjective whims, but the ability to critically evaluate which inferences are and are not warranted is not.” or on 160: “Standard statistical hypotheses, while seeming oversimple in and of themselves, are highly flexible and effective for the piece-meal probes the error statistician seeks.” In other words, Mayo is bragging about how we can conduct piece-meal probes (tests) of hypothesis and then evaluate the “severity” of those tests. When you actually carry out what she considers a major selling point of the Error Statistics program you get answers that directly contradict the data! • Joseph wrote: “Consider this quote from page 159: “This is an important source of objectivity that is open to the error statistician: choice of test may be a product of subjective whims, but the ability to critically evaluate which inferences are and are not warranted is not.”” I concede the point — especially given the parenthetical remark that precedes your quote: “(Even silly tests can warrant certain claims.)”. I thank you for correcting my mistake. If you’re interested in being on the right side of disputes, you will refute your opponents’ arguments. But if you’re interested in producing truth, you will fix your opponents’ arguments for I still maintain that severity can be, if not fixed, then improved, by considering the choice of the test statistic along the lines I’ve given above before any given “post-data” severity assessment is carried out. • Corey, I think they can be improved too. According to Cox’s theorem, any improvement either is equivalent to using probabilities for hypothesis or still has problems. So any Frequentist whose dead set against assigning probabilities to hypothesis can put Bayesian critics in a kind of Zeno’s Paradox. Each time a problem is brought up they can fix it but without going full Bayesian. Then the Bayesian has to find where the new methods break. With each iteration it becomes harder and harder to find where they breakdown because each iteration brings it closer to the Bayesian result. Remember SEV is not the first iteration of this process. SEV is there to fix previous frequentist iterations. SEV is already much closer to Bayes than previous efforts like p-values. To see this So the only thing preventing That’s one approach to fixing SEV. Another way is to go full Bayesian. So it’s not like I’m lacking ideas for fixing it. • Joseph, severity aims to quantify how well a test has probed an hypothesis for a certain error. This gives it an out — it’s not aimed at being a generic measure of plausibility per se, so it’s an open question whether it will fall afoul of Cox’s theorem. You’re right that the product rule is where the action is, though — it’s severity’s dependence on tail areas derived from the sampling distribution (as opposed to likelihood) that gives me a wedge to distinguish Bayes and severity in optional stopping. • Corey, My initial thinking was that SEV could save itself and avoid Cox’s Theorem, by agreeing with Bayes sometimes and then when it disagreed, it could say “this case is ambiguous, no determination can be made” or some sort of subterfuge like that. But my thinking has changed quite a bit. Anon’s insight that when T wasn’t a sufficient statistics you’re basically getting Now I don’t see how SEV avoids Cox’s Theorem. The Severity of a hypothesis like But even worse, it’s clear that SEV must have some sort of consistency property. Presumably, if Also, it looks like you can get In addition, you can’t simply restrict T to sufficient statistics, since they don’t always exist. So it looks like SEV(H,T,X) will have to be reinterpreted in some way, because under the current interpretation a non-sufficient statistics can reverse the judgment from a more informative statistic. Taken together, especially with that consistency requirement, I don’t see how SEV avoids the wrath of Cox’s Theorem. • I will elaborate further trying to explain why testing does not need to be consistent as Joseph has put and why this is ok IF you interpret test how they should be. But befor thrat, let me just say something. Mayo is right saying that silly tests can warrant some claims. Some silly tests can sure warrant some claims with high accuracy. In our example, even our Ybar test can warrant us that mu<10 FOR SURE. So, even this silly statistic that nobody would use – given the information we have in the example- would provide us a very good test against mu<10. I' m saying this because I do not know what you are discussing sometimes. If you are trying to discuss what did Mayo want to say in this or that passage, this is a silly discussion, and I do not want to go that road. The interesting discussion is what frequentist reasoning. properly done, can accomplish, and in this case it is clear that frequentist reasoning would tackle the problem correctly, without any bayesian aid. • Sorry for the typos. • Anon, Mayo’s stance on this is clear. It’s repeated in several papers and probably 10 times on her blog, that a big advantage of Error Statistics is that you test hypothesis piecemeal with lots of statistics and you examine those tests to see whether the hypothesis passes them severely. So this isn’t just a matter of interpreting a few words. This is a key part of the Error Statistics As to the more important point about whether consistency is required, first let me say that “consistency” doesn’t mean they get the same answer. You could for example, have one test be inconclusive and another go in favor of H. I would consider those consistent with each other. So there’s plenty of wiggle room here. But how could you ever tolerate having How is the statistician to know which one to use? • I like the consistency requirement approach — I’d be interested to see some examples. But for myself, I’m going to stick to the optional stopping approach to force a distinction between Bayes and • Joseph, in our very example we have the violation of consistency, because SEV(mu>0.1, Ybar) = 96% SEV(mu<0.1, weighted mean)=100% And I have explained how the statistician can know how to choose one result over the another. Now, to make this explanation general to more circumstances I have to come with other examples (like embeded hypothesis) that would not be trivial to calculate the power function and etc, this will take a while. • Anon, it would be more interesting to have a case where two apparently equally good statistics conflict. In the present case the “correct” statistic is too obvious. • Anon, I was thinking about something simpler. Just take two data points from Let the two estimators be I don’t see how SEV avoids a reinterpretation. • This case is even simpler, since you can just combine both observations to get a most powerful test. Informally, the result of T1 is what the evidence would indicate so far given only the observation Y1; the result of T2 is what the evidence would indicate so far given only the observation Y2. And if you combine both observations, you have a more precise test that tell you what the evidence indicates so far given both Y1 and Y2. • I agree, but SEV needs to be reinterpreted before Error Statisticians will agree. Here is the exact wording of the Severity Principle: Severity Principle (full). Data x0 (produced by process G) provides good evidence for hypothesis H (just) to the extent that test T severely passes H with x0. That most powerful test is just the Bayesian (automatically generated) sufficient statistics. So what you’re saying is if you interpret statistics in terms of “information content” (which Mayo in particular has stated she thinks this is a bizarre and unnecessary Jaynesian dead end) then if you do the best job you can of extracting and using information in the data, you get the Bayesian result. Plus you need to interpret it in the Bayesian way, because if you take the above Severity Principle as stated, without reinterpretation, then it leads to nonsense. Sometimes Anon, I can’t tell whether you’re trying to defend Bayes or SEV. • I will try to elaborate this further, but I will take a while. I have to finish a paper due next week and I think I will elaborate my arguments as a longer pdf note, with simulation of some tricky examples that people see as classical statistics giving contradicting answer (and that are not trivial to solve, that is, common sense does not help at first, just after you analise the problem carefully with its frequentist properties). I think this is a worthy discussion, even though sometimes I think most of the discussion problems are communication problems (damn • But just to defend Mayo (and my goal is not to defend her) Her words are ok, see Severity Principle (full). Data x0 (produced by process G) provides good evidence for hypothesis H (just) to the extent that test T severely passes H with x0. So what can we say to our problem? Data Y1 provides good evidence that for H:mu>0 for H mu>0 severely passes T1 with Y1. Data Y2 provides good evidence that for H:mu<0 for H mu0 if we only knew Y1. And Y2 is good evidence that mu<0 if we only knew Y2! Now, with Y1 and Y2 Data (Y1,Y2) does not provide good evidence neither for H:mu0 because these do not severely pass T with (Y1, Y2). So severity is ok here. It tells what is suppose to tell with the evidence you feed it. If you pretend you only have Y1, it is going to tell you (correctly) that Y1 is good evidence that mu>0 – and indeed it is. And so on. • ok, the text there is all messed up because it misunderstood the signs as HTML code. The thing is, in words. Data Y1 provides good evidence for the hypothesis that the mean is greater than zero because this hypothesis severely passes T1 with Y1. Data Y2 provides good evidence for the hypothesis that the mean is less than zero because this hypothesis severely passes T2 with Y2. And does this makes sense? Yes, if we only knew Y1 it is ok to consider it evidence that the mean is greater than zero. And if we only knew Y2 it is ok to consider it evidence that the mean is less than zero. Data (Y1, Y2) does not provide good evidence for the hypothesis that the mean is different than zero because this hypothesis does not severely passes T with (Y1, Y2). And since you have the full data – and not just partial data – that is what the full evidence tells you, with the best test available. So severity is ok here. It tells what is suppose to tell with the evidence you feed it. If you pretend you only have Y1, it is going to tell you (correctly) that Y1 is good evidence that mu>0 – and indeed it is. And so on. • Anon, This is my last comment. See the latest post for more. I understood your point a long time ago, but what you don’t seem to understand is that this really does change SEV and what’s worse (from a frequentist perspective) it directly pushes SEV toward being a posterior probability. In particular, as soon as you start requiring things like: the same information can’t be used in two legitimate Severity analysis to draw contradictory conclusions then this drives you right towards digesting data using Bayes Theorem. For example if you have two test statistics {T_1, T_2} which are informationaly equivalent to {T_3, T_4} in the sense that either pair can be used to calculate the other, then we shouldn’t get that {SEV(H,T_1,X), SEV(H,T_2,X)} directly contradicts the results from {SEV(H,T_3,X), SEV(H,T_4,X)}. Now you may not be bothered by
{"url":"http://www.entsophy.net/blog/?p=163","timestamp":"2014-04-16T07:13:18Z","content_type":null,"content_length":"143714","record_id":"<urn:uuid:d7164e22-1ca0-4ecf-b658-2399bcd0a2c6>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00233-ip-10-147-4-33.ec2.internal.warc.gz"}
RE: st: Graphing Question [Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index] RE: st: Graphing Question From jverkuilen <jverkuilen@gc.cuny.edu> To <statalist@hsphsun2.harvard.edu> Subject RE: st: Graphing Question Date Sat, 26 Jul 2008 11:10:53 -0400 I do have to ask the graph snob question: Why a pie chart? There are almost always better optons that are perceived more accurately by readers, use less space, etc. -----Original Message----- From: "Data Analytics Corp." <dataanalytics@earthlink.net> To: statalist@hsphsun2.harvard.edu Sent: 7/25/2008 8:59 AM Subject: st: Graphing Question Good morning, I have, what should be, a simple graphing problem. I clustered doctors into four groups. I created a table showing the proportion of doctors in each group who use a certain drug with their patients. The proportions are simply the weighted means of a binary variable which indicates whether or not the drug is prescribed by that doctor. Now I want to draw a pie chart showing those proportions. I can easily draw a pie that displays proportions which are not weighted, but how do I tell Stata to draw the pie using the weighted means? In short, I want the table and pie slices to be identical so I can give my client both. Incidentally, how do I get the pie slice labels to be outside the pie and have the percentages next to the slice label? It seems that everything goes inside the pie and we can either get labels or percentages, but not both. Walter R. Paczkowski, Ph.D. Data Analytics Corp. 44 Hamilton Lane Plainsboro, NJ 08536 (V) 609-936-8999 (F) 609-936-3733 * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/ * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2008-07/msg00962.html","timestamp":"2014-04-18T11:39:00Z","content_type":null,"content_length":"6791","record_id":"<urn:uuid:422bd318-36a9-4d24-83a2-2e54ecb673a4>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00444-ip-10-147-4-33.ec2.internal.warc.gz"}
rational {MASS} Find rational approximations to the components of a real numeric object using a standard continued fraction method. rational(x, cycles = 10, max.denominator = 2000, ...) Any object of mode numeric. Missing values are now allowed. The maximum number of steps to be used in the continued fraction approximation process. An early termination criterion. If any partial denominator exceeds max.denominator the continued fraction stops at that point. arguments passed to or from other methods. Each component is first expanded in a continued fraction of the form x = floor(x) + 1/(p1 + 1/(p2 + ...))) where p1, p2, ... are positive integers, terminating either at cycles terms or when a pj > max.denominator. The continued fraction is then re-arranged to retrieve the numerator and denominator as integers and the ratio returned as the value. A numeric object with the same attributes as x but with entries rational approximations to the values. This effectively rounds relative to the size of the object and replaces very small entries by See Also Documentation reproduced from package MASS, version 7.3-29. License: GPL-2 | GPL-3
{"url":"http://www.inside-r.org/r-doc/MASS/.rat","timestamp":"2014-04-20T05:24:08Z","content_type":null,"content_length":"17086","record_id":"<urn:uuid:4f8728b3-1e16-4d87-87cd-3cb2b872c465>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00568-ip-10-147-4-33.ec2.internal.warc.gz"}
Restoring Confidence in Usability Results About Jeff Sauro Jeff Sauro is the founding principal of Measuring Usability LLC, a company providing statistics and usability consulting to Fortune 1000 companies. He is the author of over 20 journal articles and 4 books on statistics and the user-experience. More about Jeff... Posted Comments There are 4 Comments December 8, 2011 | Jeff Sauro wrote: That's always a *healthy* debate and one I've had a lot (more advising than debating sometimes and more debating than advising on others). You've articulated both concerns really well. And this touches on a larger issue with the usability profession. For a long time I think people thought we would be left alone doing our craft. This was especially the case to coexist peaceably with Market Research and Marketing which routinely survey much larger sample sizes. Add numbers and confidence intervals to our work and suddenly a more critical eye comes from Market Research and Marketing and they start questions our methods, our conclusions and well, our In *most* cases I find success in reporting the margin of error or confidence interval in some way. That doesn't mean I'm always advertising the huge margin of error on the 1st slide, but usually, I let people know that I've computed the interval, am aware of the variability, and the data supports my conclusion--although there is risk from the uncertainty. As you point out well, with small sample usability research, it's much easier to show unusability than usability. I have found particular success in focusing on a comparison instead of trying to narrow that interval (which takes a much larger sample size). Sometime this means comparing results statistically to a prior version of the interface (from actually having the users attempt tasks on both versions) and in other cases I've computed the probability that the completion rate or rating is above a certain threshold (e.g. a 70% completion rate). So for example, even with 6 users, if all 6 complete the task, there's a 94% chance the population completion rate is above 70%. At 5/6 completing, that drops to a 73% chance. Not great, but in the right direction. Just a few weeks ago I was presenting to a Senior VP of one of the largest dot.com's the results of an early stage usability test. We had only 13 users and I reported the confidence intervals around all the measures. Half way through the presentation he said "I can't believe you're showing me confidence intervals on a sample size of 13." My response was, actually, at smaller sample sizes, confidence intervals are more important than when the sample sizes are large. Even at this sample size, the confidence intervals are showing us that some tasks are performing statistically better, some statistically worse and many about the same. We're limited to seeing only big changes, but in early research and design, that's what we're interested in--changes that are big, changes that the user will notice and the confidence intervals tell us where we're certain and where we need more evidence. December 7, 2011 | Lisa Maurer wrote: Jeff – My colleagues and I are having a healthy debate about whether to include confidence intervals for task success when we conduct testing with 6-8 users. One point of view is that given “it is much easier to proclaim a task completion rate unacceptable than it is to declare it acceptable. That is, it's hard to show usability, it's much easier to show un-usability,” there is a concern that the amount of energy required in trying to explain Confidence Interval data takes away from the energy to focus on possible solutions when there is a usability issue. Another point of view is that by including confidence intervals (using the Adjusted Wald method) even when there is a large width between the intervals, enhances our credibility and provides clarity around our results. How would you respond to these alternate points of view? &nbsp April 30, 2010 | jim wrote: VERY VERY HELPFUL thank you so much - this is the ONLY place i found this useful info on the web&nbsp December 11, 2007 | Lei wrote: Very useful infomation. thanks.&nbsp Post a Comment Your Name: Your Email Address: To prevent comment spam, please answer the following : What is 2 + 3: (enter the number)
{"url":"http://www.measuringusability.com/conf_intervals.htm","timestamp":"2014-04-18T15:38:56Z","content_type":null,"content_length":"48334","record_id":"<urn:uuid:17ffc5ae-0799-46ff-9d0c-3fe10049a185>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00401-ip-10-147-4-33.ec2.internal.warc.gz"}
Algebra resources Completing the Square (to find MAX and MIN values) Part 1 Completing the square is an algebraic technique which has several applications. These include the solution of quadratic equations. In this unit we use it to find the maximum or minimum values of quadratic functions. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Completing the Square (to find MAX and MIN values) Part 2 Completing the square is an algebraic technique which has several applications. These include the solution of quadratic equations. In this unit we use it to find the maximum or minimum values of quadratic functions. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Completing the Square (to find MAX and MIN values) Part 3 Completing the square is an algebraic technique which has several applications. These include the solution of quadratic equations. In this unit we use it to find the maximum or minimum values of quadratic functions. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Completing the Square (to find MAX and MIN values) Part 4 Completing the square is an algebraic technique which has several applications. These include the solution of quadratic equations. In this unit we use it to find the maximum or minimum values of quadratic functions. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Completing the Square (to find MAX and MIN values) Part 5 Completing the square is an algebraic technique which has several applications. These include the solution of quadratic equations. In this unit we use it to find the maximum or minimum values of quadratic functions. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Completing the Square (to find MAX and MIN values) Part 6 Completing the square is an algebraic technique which has several applications. These include the solution of quadratic equations. In this unit we use it to find the maximum or minimum values of quadratic functions. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Completing the Square 1 In this iPOD video we consider how quadratic expressions can be written in an equivalent form using the technique known as completing the square. This technique has applications in a number of areas, but we will see an example of its use in solving a quadratic equation. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Completing the Square 2 In this iPOD video we consider how quadratic expressions can be written in an equivalent form using the technique known as completing the square. This technique has applications in a number of areas, but we will see an example of its use in solving a quadratic equation. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Completing the Square 3 In this iPOD video we consider how quadratic expressions can be written in an equivalent form using the technique known as completing the square. This technique has applications in a number of areas, but we will see an example of its use in solving a quadratic equation. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Completing the Square 4 In this iPOD video we consider how quadratic expressions can be written in an equivalent form using the technique known as completing the square. This technique has applications in a number of areas, but we will see an example of its use in solving a quadratic equation. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Completing the Square 5 In this unit we consider how quadratic expressions can be written in an equivalent form using the technique known as completing the square. This technique has applications in a number of areas, but we will see an example of its use in solving a quadratic equation. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Completing the Square 6 In this unit we consider how quadratic expressions can be written in an equivalent form using the technique known as completing the square. This technique has applications in a number of areas, but we will see an example of its use in solving a quadratic equation. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Cubic Equations 1 All cubic equations have either one real root, or three real roots. In this video we explore why this is so. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Cubic Equations 2 All cubic equations have either one real root, or three real roots. In this video we explore why this is so. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Cubic Equations 3 All cubic equations have either one real root, or three real roots. In this video we explore why this is so. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Cubic Equations 4 All cubic equations have either one real root, or three real roots. In this video we explore why this is so. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Cubic Equations 5 All cubic equations have either one real root, or three real roots. In this video we explore why this is so. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Cubic Equations 6 All cubic equations have either one real root, or three real roots. In this video we explore why this is so. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Cubic Equations 7 All cubic equations have either one real root, or three real roots. In this video we explore why this is so. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Cubic Equations 8 All cubic equations have either one real root, or three real roots. In this video we explore why this is so. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Expanding and Removing Brackets Part 1 In this unit we see how to expand an expression containing brackets. By this we mean to rewrite the expression in an equivalent form without any brackets in. Fluency with this sort of algebraic manipulation is an essential skill which is vital for further study. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Expanding and Removing Brackets Part 2 In this unit we see how to expand an expression containing brackets. By this we mean to rewrite the expression in an equivalent form without any brackets in. Fluency with this sort of algebraic manipulation is an essential skill which is vital for further study. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Expanding and Removing Brackets Part 3 In this unit we see how to expand an expression containing brackets. By this we mean to rewrite the expression in an equivalent form without any brackets in. Fluency with this sort of algebraic manipulation is an essential skill which is vital for further study. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Expanding and Removing Brackets Part 4 In this unit we see how to expand an expression containing brackets. By this we mean to rewrite the expression in an equivalent form without any brackets in. Fluency with this sort of algebraic manipulation is an essential skill which is vital for further study. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Expanding and Removing Brackets Part 5 In this unit we see how to expand an expression containing brackets. By this we mean to rewrite the expression in an equivalent form without any brackets in. Fluency with this sort of algebraic manipulation is an essential skill which is vital for further study. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Factorising Quadratic Expressions 1 An essential skill in many applications is the ability to factorise quadratic expressions. In this unit you will see that this can be thought of as reversing the process used to �¢??remove�¢?? or 'multiply-out�¢?? brackets from an expression. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Factorising Quadratic Expressions 10 An essential skill in many applications is the ability to factorise quadratic expressions. In this unit you will see that this can be thought of as reversing the process used to �¢??remove�¢?? or 'multiply-out�¢?? brackets from an expression. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Factorising Quadratic Expressions 11 An essential skill in many applications is the ability to factorise quadratic expressions. In this unit you will see that this can be thought of as reversing the process used to �¢??remove�¢?? or 'multiply-out�¢?? brackets from an expression. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Factorising Quadratic Expressions 12 An essential skill in many applications is the ability to factorise quadratic expressions. In this unit you will see that this can be thought of as reversing the process used to �¢??remove�¢?? or 'multiply-out�¢?? brackets from an expression. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Factorising Quadratic Expressions 13 An essential skill in many applications is the ability to factorise quadratic expressions. In this unit you will see that this can be thought of as reversing the process used to �¢??remove�¢?? or 'multiply-out�¢?? brackets from an expression. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Factorising Quadratic Expressions 2 An essential skill in many applications is the ability to factorise quadratic expressions. In this unit you will see that this can be thought of as reversing the process used to �¢??remove�¢?? or 'multiply-out�¢?? brackets from an expression. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Factorising Quadratic Expressions 3 An essential skill in many applications is the ability to factorise quadratic expressions. In this unit you will see that this can be thought of as reversing the process used to �¢??remove�¢?? or 'multiply-out�¢?? brackets from an expression. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Factorising Quadratic Expressions 4 An essential skill in many applications is the ability to factorise quadratic expressions. In this unit you will see that this can be thought of as reversing the process used to �¢??remove�¢?? or 'multiply-out�¢?? brackets from an expression. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Factorising Quadratic Expressions 5 An essential skill in many applications is the ability to factorise quadratic expressions. In this unit you will see that this can be thought of as reversing the process used to �¢??remove�¢?? or 'multiply-out�¢?? brackets from an expression. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Factorising Quadratic Expressions 6 An essential skill in many applications is the ability to factorise quadratic expressions. In this unit you will see that this can be thought of as reversing the process used to �¢??remove�¢?? or 'multiply-out�¢?? brackets from an expression. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Factorising Quadratic Expressions 7 An essential skill in many applications is the ability to factorise quadratic expressions. In this unit you will see that this can be thought of as reversing the process used to �¢??remove�¢?? or 'multiply-out�¢?? brackets from an expression. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Factorising Quadratic Expressions 8 An essential skill in many applications is the ability to factorise quadratic expressions. In this unit you will see that this can be thought of as reversing the process used to �¢??remove�¢?? or 'multiply-out�¢?? brackets from an expression. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Factorising Quadratic Expressions 9 An essential skill in many applications is the ability to factorise quadratic expressions. In this unit you will see that this can be thought of as reversing the process used to �¢??remove�¢?? or 'multiply-out�¢?? brackets from an expression. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Indices or Powers 1 A knowledge of powers, or indices as they are often called, is essential for an understanding of most algebraic processes. In this section you will learn about powers and rules for manipulating them through a number of worked examples. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Indices or Powers 2 A knowledge of powers, or indices as they are often called, is essential for an understanding of most algebraic processes. In this section you will learn about powers and rules for manipulating them through a number of worked examples. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Indices or Powers 3 A knowledge of powers, or indices as they are often called, is essential for an understanding of most algebraic processes. In this section you will learn about powers and rules for manipulating them through a number of worked examples. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Indices or Powers 4 A knowledge of powers, or indices as they are often called, is essential for an understanding of most algebraic processes. In this section you will learn about powers and rules for manipulating them through a number of worked examples. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Indices or Powers 5 A knowledge of powers, or indices as they are often called, is essential for an understanding of most algebraic processes. In this section you will learn about powers and rules for manipulating them through a number of worked examples. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Indices or Powers 6 A knowledge of powers, or indices as they are often called, is essential for an understanding of most algebraic processes. In this section you will learn about powers and rules for manipulating them through a number of worked examples. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Indices or Powers 7 A knowledge of powers, or indices as they are often called, is essential for an understanding of most algebraic processes. In this section you will learn about powers and rules for manipulating them through a number of worked examples. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Indices or Powers 8 A knowledge of powers, or indices as they are often called, is essential for an understanding of most algebraic processes. In this section you will learn about powers and rules for manipulating them through a number of worked examples. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Indices or Powers 9 A knowledge of powers, or indices as they are often called, is essential for an understanding of most algebraic processes. In this section you will learn about powers and rules for manipulating them through a number of worked examples. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Linear Equations in One Variable Part 1 IPOD VIDEO: In this unit we give examples of simple linear equations and show you how these can be solved. In any equation there is an unknown quantity, x say, that we are trying to find. In a linear equation this unknown quantity will appear only as a multiple of x, and not as a function of x such as x squared, x cubed, sin x and so on. Linear equations occur so frequently in the solution of other problems that a thorough understanding of them is essential. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Linear Equations in One Variable Part 2 IPOD VIDEO: In this unit we give examples of simple linear equations and show you how these can be solved. In any equation there is an unknown quantity, x say, that we are trying to find. In a linear equation this unknown quantity will appear only as a multiple of x, and not as a function of x such as x squared, x cubed, sin x and so on. Linear equations occur so frequently in the solution of other problems that a thorough understanding of them is essential. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Linear Equations in One Variable Part 3 IPOD VIDEO: In this unit we give examples of simple linear equations and show you how these can be solved. In any equation there is an unknown quantity, x say, that we are trying to find. In a linear equation this unknown quantity will appear only as a multiple of x, and not as a function of x such as x squared, x cubed, sin x and so on. Linear equations occur so frequently in the solution of other problems that a thorough understanding of them is essential. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Linear Equations in One Variable Part 4 IPOD VIDEO: In this unit we give examples of simple linear equations and show you how these can be solved. In any equation there is an unknown quantity, x say, that we are trying to find. In a linear equation this unknown quantity will appear only as a multiple of x, and not as a function of x such as x squared, x cubed, sin x and so on. Linear equations occur so frequently in the solution of other problems that a thorough understanding of them is essential. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Linear Equations in One Variable Part 5 IPOD VIDEO: In this unit we give examples of simple linear equations and show you how these can be solved. In any equation there is an unknown quantity, x say, that we are trying to find. In a linear equation this unknown quantity will appear only as a multiple of x, and not as a function of x such as x squared, x cubed, sin x and so on. Linear equations occur so frequently in the solution of other problems that a thorough understanding of them is essential. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Linear Equations in One Variable Part 6 IPOD VIDEO: In this unit we give examples of simple linear equations and show you how these can be solved. In any equation there is an unknown quantity, x say, that we are trying to find. In a linear equation this unknown quantity will appear only as a multiple of x, and not as a function of x such as x squared, x cubed, sin x and so on. Linear equations occur so frequently in the solution of other problems that a thorough understanding of them is essential. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Linear Equations in One Variable Part 7 IPOD VIDEO: In this unit we give examples of simple linear equations and show you how these can be solved. In any equation there is an unknown quantity, x say, that we are trying to find. In a linear equation this unknown quantity will appear only as a multiple of x, and not as a function of x such as x squared, x cubed, sin x and so on. Linear equations occur so frequently in the solution of other problems that a thorough understanding of them is essential. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Logarithms 1 Logarithms appear in all sorts of calculations in engineering and science, business and economics. Before the days of calculators they were used to assist in the process of multiplication by replacing the operation of multiplication by addition. Similarly, they enabled the operation of division to be replaced by subtraction. They remain important in other ways, one of which is that they provide the underlying theory of the logarithm function. This has applications in many fields, for example, the decibel scale in acoustics. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Logarithms 10 Logarithms appear in all sorts of calculations in engineering and science, business and economics. Before the days of calculators they were used to assist in the process of multiplication by replacing the operation of multiplication by addition. Similarly, they enabled the operation of division to be replaced by subtraction. They remain important in other ways, one of which is that they provide the underlying theory of the logarithm function. This has applications in many fields, for example, the decibel scale in acoustics. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Logarithms 2 Logarithms appear in all sorts of calculations in engineering and science, business and economics. Before the days of calculators they were used to assist in the process of multiplication by replacing the operation of multiplication by addition. Similarly, they enabled the operation of division to be replaced by subtraction. They remain important in other ways, one of which is that they provide the underlying theory of the logarithm function. This has applications in many fields, for example, the decibel scale in acoustics. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Logarithms 3 Logarithms appear in all sorts of calculations in engineering and science, business and economics. Before the days of calculators they were used to assist in the process of multiplication by replacing the operation of multiplication by addition. Similarly, they enabled the operation of division to be replaced by subtraction. They remain important in other ways, one of which is that they provide the underlying theory of the logarithm function. This has applications in many fields, for example, the decibel scale in acoustics. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Logarithms 4 Logarithms appear in all sorts of calculations in engineering and science, business and economics. Before the days of calculators they were used to assist in the process of multiplication by replacing the operation of multiplication by addition. Similarly, they enabled the operation of division to be replaced by subtraction. They remain important in other ways, one of which is that they provide the underlying theory of the logarithm function. This has applications in many fields, for example, the decibel scale in acoustics. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Logarithms 5 Logarithms appear in all sorts of calculations in engineering and science, business and economics. Before the days of calculators they were used to assist in the process of multiplication by replacing the operation of multiplication by addition. Similarly, they enabled the operation of division to be replaced by subtraction. They remain important in other ways, one of which is that they provide the underlying theory of the logarithm function. This has applications in many fields, for example, the decibel scale in acoustics. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Logarithms 6 Logarithms appear in all sorts of calculations in engineering and science, business and economics. Before the days of calculators they were used to assist in the process of multiplication by replacing the operation of multiplication by addition. Similarly, they enabled the operation of division to be replaced by subtraction. They remain important in other ways, one of which is that they provide the underlying theory of the logarithm function. This has applications in many fields, for example, the decibel scale in acoustics. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Logarithms 7 Logarithms appear in all sorts of calculations in engineering and science, business and economics. Before the days of calculators they were used to assist in the process of multiplication by replacing the operation of multiplication by addition. Similarly, they enabled the operation of division to be replaced by subtraction. They remain important in other ways, one of which is that they provide the underlying theory of the logarithm function. This has applications in many fields, for example, the decibel scale in acoustics. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Logarithms 8 Logarithms appear in all sorts of calculations in engineering and science, business and economics. Before the days of calculators they were used to assist in the process of multiplication by replacing the operation of multiplication by addition. Similarly, they enabled the operation of division to be replaced by subtraction. They remain important in other ways, one of which is that they provide the underlying theory of the logarithm function. This has applications in many fields, for example, the decibel scale in acoustics. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Logarithms 9 Logarithms appear in all sorts of calculations in engineering and science, business and economics. Before the days of calculators they were used to assist in the process of multiplication by replacing the operation of multiplication by addition. Similarly, they enabled the operation of division to be replaced by subtraction. They remain important in other ways, one of which is that they provide the underlying theory of the logarithm function. This has applications in many fields, for example, the decibel scale in acoustics. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Mathematical Language Part 1 IPOD VIDEO: This introductory section provides useful background material on the importance of symbols in mathematical work. It describes conventions used by mathematicians, engineers, and scientists. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Mathematical Language Part 2 IPOD VIDEO: This introductory section provides useful background material on the importance of symbols in mathematical work. It describes conventions used by mathematicians, engineers, and scientists. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Mathematical Language Part 3 IPOD VIDEO: This introductory section provides useful background material on the importance of symbols in mathematical work. It describes conventions used by mathematicians, engineers, and scientists. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Mathematical Language Part 4 IPOD VIDEO: This introductory section provides useful background material on the importance of symbols in mathematical work. It describes conventions used by mathematicians, engineers, and scientists. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Mathematical Language Part 5 IPOD VIDEO: This introductory section provides useful background material on the importance of symbols in mathematical work. It describes conventions used by mathematicians, engineers, and scientists. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Mathematical Language Part 6 IPOD VIDEO: This introductory section provides useful background material on the importance of symbols in mathematical work. It describes conventions used by mathematicians, engineers, and scientists. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Mathematical Language Part 7 IPOD VIDEO: This introductory section provides useful background material on the importance of symbols in mathematical work. It describes conventions used by mathematicians, engineers, and scientists. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Mathematical Language Part 8 IPOD VIDEO: This introductory section provides useful background material on the importance of symbols in mathematical work. It describes conventions used by mathematicians, engineers, and scientists. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Partial Fractions 1 This video segment introduces partial fractions. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Partial Fractions 2 This video segment continues to develop partial fractions. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Partial Fractions 3 This video segment continues to develop partial fractions. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Partial Fractions 4 This video segment continues to develop partial fractions. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Partial Fractions 5 This video segment continues to develop partial fractions. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Partial Fractions 6 This video continues to develop partial fractions. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Pascal's Triangle & the Binomial Theorem 1 A binomial expression is the sum or difference of two terms. For example, x+1 and 3x+2y are both binomial expressions. If we want to raise a binomial expression to a power higher than 2 it is very cumbersome to do this by repeatedly multiplying x+1 or 3x+2y by itself. In this tutorial you will learn how Pascal's triangle can be used to obtain the required result quickly. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Pascal's Triangle & the Binomial Theorem 2 A binomial expression is the sum or difference of two terms. For example, x+1 and 3x+2y are both binomial expressions. If we want to raise a binomial expression to a power higher than 2 it is very cumbersome to do this by repeatedly multiplying x+1 or 3x+2y by itself. In this tutorial you will learn how Pascal's triangle can be used to obtain the required result quickly. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Pascal's Triangle & the Binomial Theorem 3 A binomial expression is the sum or difference of two terms. For example, x+1 and 3x+2y are both binomial expressions. If we want to raise a binomial expression to a power higher than 2 it is very cumbersome to do this by repeatedly multiplying x+1 or 3x+2y by itself. In this tutorial you will learn how Pascal's triangle can be used to obtain the required result quickly. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Pascal's Triangle & the Binomial Theorem 4 A binomial expression is the sum or difference of two terms. For example, x+1 and 3x+2y are both binomial expressions. If we want to raise a binomial expression to a power higher than 2 it is very cumbersome to do this by repeatedly multiplying x+1 or 3x+2y by itself. In this tutorial you will learn how Pascal's triangle can be used to obtain the required result quickly. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Pascal's Triangle & the Binomial Theorem 5 A binomial expression is the sum or difference of two terms. For example, x+1 and 3x+2y are both binomial expressions. If we want to raise a binomial expression to a power higher than 2 it is very cumbersome to do this by repeatedly multiplying x+1 or 3x+2y by itself. In this tutorial you will learn how Pascal's triangle can be used to obtain the required result quickly. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Pascal's Triangle & the Binomial Theorem 6 A binomial expression is the sum or difference of two terms. For example, x+1 and 3x+2y are both binomial expressions. If we want to raise a binomial expression to a power higher than 2 it is very cumbersome to do this by repeatedly multiplying x+1 or 3x+2y by itself. In this tutorial you will learn how Pascal's triangle can be used to obtain the required result quickly. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Pascal's Triangle & the Binomial Theorem 7 A binomial expression is the sum or difference of two terms. For example, x+1 and 3x+2y are both binomial expressions. If we want to raise a binomial expression to a power higher than 2 it is very cumbersome to do this by repeatedly multiplying x+1 or 3x+2y by itself. In this tutorial you will learn how Pascal's triangle can be used to obtain the required result quickly. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Pascal's Triangle & the Binomial Theorem 8 A binomial expression is the sum or difference of two terms. For example, x+1 and 3x+2y are both binomial expressions. If we want to raise a binomial expression to a power higher than 2 it is very cumbersome to do this by repeatedly multiplying x+1 or 3x+2y by itself. In this tutorial you will learn how Pascal's triangle can be used to obtain the required result quickly. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Pascal's Triangle & the Binomial Theorem 9 A binomial expression is the sum or difference of two terms. For example, x+1 and 3x+2y are both binomial expressions. If we want to raise a binomial expression to a power higher than 2 it is very cumbersome to do this by repeatedly multiplying x+1 or 3x+2y by itself. In this tutorial you will learn how Pascal's triangle can be used to obtain the required result quickly. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Polynomial Division 1 In order to simplify certain sorts of algebraic fraction we need a process known as polynomial division. This unit describes this process. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Polynomial Division 2 In order to simplify certain sorts of algebraic fraction we need a process known as polynomial division. This unit describes this process. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Polynomial Division 3 In order to simplify certain sorts of algebraic fraction we need a process known as polynomial division. This unit describes this process. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Polynomial Division 4 In order to simplify certain sorts of algebraic fraction we need a process known as polynomial division. This unit describes this process. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Quadratic Equations 1 This unit is about the solution of quadratic equations. These take the form ax^2+bx+c = 0. We will look at four methods: solution by factorisation, solution by completing the square, solution using a formula, and solution using graphs. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Quadratic Equations 10 This unit is about the solution of quadratic equations. These take the form ax^2+bx+c = 0. We will look at four methods: solution by factorisation, solution by completing the square, solution using a formula, and solution using graphs. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Quadratic Equations 2 This unit is about the solution of quadratic equations. These take the form ax^2+bx+c = 0. We will look at four methods: solution by factorisation, solution by completing the square, solution using a formula, and solution using graphs. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Quadratic Equations 3 This unit is about the solution of quadratic equations. These take the form ax^2+bx+c = 0. We will look at four methods: solution by factorisation, solution by completing the square, solution using a formula, and solution using graphs. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Quadratic Equations 4 This unit is about the solution of quadratic equations. These take the form ax^2+bx+c = 0. We will look at four methods: solution by factorisation, solution by completing the square, solution using a formula, and solution using graphs. Quadratic Equations 5 This unit is about the solution of quadratic equations. These take the form ax^2+bx+c = 0. We will look at four methods: solution by factorisation, solution by completing the square, solution using a formula, and solution using graphs. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Quadratic Equations 6 This unit is about the solution of quadratic equations. These take the form ax^2+bx+c = 0. We will look at four methods: solution by factorisation, solution by completing the square, solution using a formula, and solution using graphs. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Quadratic Equations 7 This unit is about the solution of quadratic equations. These take the form ax^2+bx+c = 0. We will look at four methods: solution by factorisation, solution by completing the square, solution using a formula, and solution using graphs. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Quadratic Equations 8 This unit is about the solution of quadratic equations. These take the form ax^2+bx+c = 0. We will look at four methods: solution by factorisation, solution by completing the square, solution using a formula, and solution using graphs. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Quadratic Equations 9 This unit is about the solution of quadratic equations. These take the form ax^2+bx+c = 0. We will look at four methods: solution by factorisation, solution by completing the square, solution using a formula, and solution using graphs. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Rearranging Formulae Part 1 IPOD VIDEO: It is often useful to rearrange, or transpose, a formula in order to write it in a different, but equivalent form. This unit explains the procedure for doing this. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Rearranging Formulae Part 2 IPOD VIDEO: It is often useful to rearrange, or transpose, a formula in order to write it in a different, but equivalent form. This unit explains the procedure for doing this. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Rearranging Formulae Part 3 IPOD VIDEO: It is often useful to rearrange, or transpose, a formula in order to write it in a different, but equivalent form. This unit explains the procedure for doing this. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Rearranging Formulae Part 4 IPOD VIDEO: It is often useful to rearrange, or transpose, a formula in order to write it in a different, but equivalent form. This unit explains the procedure for doing this. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Rearranging Formulae Part 5 IPOD VIDEO: It is often useful to rearrange, or transpose, a formula in order to write it in a different, but equivalent form. This unit explains the procedure for doing this. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Rearranging Formulae Part 6 IPOD VIDEO: It is often useful to rearrange, or transpose, a formula in order to write it in a different, but equivalent form. This unit explains the procedure for doing this. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Rearranging Formulae Part 7 IPOD VIDEO: It is often useful to rearrange, or transpose, a formula in order to write it in a different, but equivalent form. This unit explains the procedure for doing this. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Simplifying algebraic fractions Part 1 IPOD VIDEO: This video explains how algebraic fractions can be simplified by cancelling common factors This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Simplifying algebraic fractions Part 2 IPOD VIDEO: This video explains how algebraic fractions can be simplified by cancelling common factors. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Simplifying algebraic fractions Part 3 IPOD VIDEO: This video explains how algebraic fractions can be simplified by cancelling common factors. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Simplifying algebraic fractions Part 4 IPOD VIDEO: This video explains how algebraic fractions can be simplified by cancelling common factors. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Simplifying algebraic fractions Part 5 IPOD VIDEO: This video explains how algebraic fractions can be simplified by cancelling common factors. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Simplifying algebraic fractions Part 6 IPOD VIDEO: This video explains how algebraic fractions can be simplified by cancelling common factors. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Simplifying algebraic fractions Part 7 IPOD VIDEO: This video explains how algebraic fractions can be simplified by cancelling common factors. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Simultaneous Linear Equations Part 1 IPOD VIDEO: The purpose of this section is to look at the solution of simultaneous linear equations. We will see that solving a pair of simultaneous equations is equivalent to finding the location of the point of intersection of two straight lines. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Simultaneous Linear Equations Part 2 IPOD VIDEO: The purpose of this section is to look at the solution of simultaneous linear equations. We will see that solving a pair of simultaneous equations is equivalent to finding the location of the point of intersection of two straight lines. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Simultaneous Linear Equations Part 3 IPOD VIDEO: The purpose of this section is to look at the solution of simultaneous linear equations. We will see that solving a pair of simultaneous equations is equivalent to finding the location of the point of intersection of two straight lines. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Simultaneous Linear Equations Part 4 IPOD VIDEO: The purpose of this section is to look at the solution of simultaneous linear equations. We will see that solving a pair of simultaneous equations is equivalent to finding the location of the point of intersection of two straight lines. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Simultaneous Linear Equations Part 5 IPOD VIDEO: The purpose of this section is to look at the solution of simultaneous linear equations. We will see that solving a pair of simultaneous equations is equivalent to finding the location of the point of intersection of two straight lines. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Simultaneous Linear Equations Part 6 IPOD VIDEO: The purpose of this section is to look at the solution of simultaneous linear equations. We will see that solving a pair of simultaneous equations is equivalent to finding the location of the point of intersection of two straight lines. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Simultaneous Linear Equations Part 7 IPOD VIDEO: The purpose of this section is to look at the solution of simultaneous linear equations. We will see that solving a pair of simultaneous equations is equivalent to finding the location of the point of intersection of two straight lines. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Solving Inequalities 1 This video explains linear and quadratic inequalities and how they can be solved algebraically and graphically. It includes information on inequalities in which the modulus symbol is used. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Solving Inequalities 2 This video explains linear and quadratic inequalities and how they can be solved algebraically and graphically. It includes information on inequalities in which the modulus symbol is used. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Solving Inequalities 3 This video explains linear and quadratic inequalities and how they can be solved algebraically and graphically. It includes information on inequalities in which the modulus symbol is used. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Solving Inequalities 4 This video explains linear and quadratic inequalities and how they can be solved algebraically and graphically. It includes information on inequalities in which the modulus symbol is used. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Solving Inequalities 5 This video explains linear and quadratic inequalities and how they can be solved algebraically and graphically. It includes information on inequalities in which the modulus symbol is used. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Substitution and Formulae Part 1 IPOD VIDEO: In mathematics, engineering and science, formulae are used to relate physical quantities to each other. They provide rules so that if we know the values of certain quantities; we can calculate the values of others. In this video we discuss several formulae and illustrate how they are used. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Substitution and Formulae Part 2 IPOD VIDEO: In mathematics, engineering and science, formulae are used to relate physical quantities to each other. They provide rules so that if we know the values of certain quantities; we can calculate the values of others. In this video we discuss several formulae and illustrate how they are used. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Substitution and Formulae Part 3 IPOD VIDEO: In mathematics, engineering and science, formulae are used to relate physical quantities to each other. They provide rules so that if we know the values of certain quantities; we can calculate the values of others. In this video we discuss several formulae and illustrate how they are used. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Substitution and Formulae Part 4 IPOD VIDEO: In mathematics, engineering and science, formulae are used to relate physical quantities to each other. They provide rules so that if we know the values of certain quantities; we can calculate the values of others. In this video we discuss several formulae and illustrate how they are used. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Substitution and Formulae Part 5 IPOD VIDEO: In mathematics, engineering and science, formulae are used to relate physical quantities to each other. They provide rules so that if we know the values of certain quantities; we can calculate the values of others. In this video we discuss several formulae and illustrate how they are used. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Substitution and Formulae Part 6 IPOD VIDEO: In mathematics, engineering and science, formulae are used to relate physical quantities to each other. They provide rules so that if we know the values of certain quantities; we can calculate the values of others. In this video we discuss several formulae and illustrate how they are used. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Substitution and Formulae Part 7 IPOD VIDEO: In mathematics, engineering and science, formulae are used to relate physical quantities to each other. They provide rules so that if we know the values of certain quantities; we can calculate the values of others. In this video we discuss several formulae and illustrate how they are used. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Substitution and Formulae Part 8 IPOD VIDEO: In mathematics, engineering and science, formulae are used to relate physical quantities to each other. They provide rules so that if we know the values of certain quantities; we can calculate the values of others. In this video we discuss several formulae and illustrate how they are used. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Algebra Refresher A refresher booklet on Algebra with revision, exercises and solutions on fractions, indices, removing brackets, factorisation, algebraic frations, surds, transpostion of formulae, solving quadratic equations and some polynomial equations, and partial fractions. An interactive version and a welsh language version are available. Algebra Refresher - Interactive version An interactive version of the refresher booklet on Algebra including links to other resources for further explanation. It includes revision, exercises and solutions on fractions, indices, removing brackets, factorisation, algebraic frations, surds, transpostion of formulae, solving quadratic equations and some polynomial equations, and partial fractions. An interactive version and a welsh language version are available. Cwrs Gloywi Algebra An Algebra Refresher. This booklet revises basic algebraic techniques. This is a welsh language version. Expanding or removing brackets In this leaflet we see how to expand an expression containing brackets. By this we mean to rewrite the expression in an equivalent form without any brackets in. Factorising complete squares There is a special case of quadratic expression known as a complete square. This leaflet explains what this means and how such expressions are factorised. Factorising quadratics This leaflet shows how to take a quadratic expression and factorise it. Special cases of complete squares and difference of two squares are dealt with on other leaflets. Factorising the difference of two squares There is a special case of quadratic expression known as the difference of two squares. This leaflet explains what this means and how such expressions are factorised. Indices or Powers A power, or index, is used when we want to multiply a number by itself several times. This leaflet explains the use of indices and states rules which must be used when you want to rewrite expressions involving powers in alternative forms. Logarithms - changing the base Sometimes it is necessary to find logs to bases other then 10 and e. There is a formula which enables us to do this. This leaflet states and illustrates the use of this formula. Simple linear equations This leaflet shows how simple linear equations can be solved by performing the same operations on both sides of the equation. The laws of logarithms There are rules, or laws, which are used to rewrite expressions involving logs in different forms. This leaflet states and illustrates these rules. What is a logarithm ? Logarithms can be used to write expressions involving powers in alternative forms. This leaflet explains how. Completing the square It is often useful to be able write a quadratic expression in an alternative form - that is as a complete square plus or minus a number. The process for doing this is called completing the square. This booklet explains how this process is carried out. Completing the square - maxima and minima This is a workbook which describes how to complete the square for a quadratic expression. It goes on to show how the technique can be used to find maximum or minimum values of a quadratic expression. Cubic equations This booklet explains what is meant by a cubic equation and discusses the nature of the roots of cubic equations. It explains a process called synthetic division which can be used to locate further roots when one root is known. The graphical solution of cubic equations is also described. Expanding, or removing brackets This is a complete workbook covering the removal of brackets from expressions. It contains lots of examples and exercises. It can be used as a free-standing resource, or can be read in conjunction with mathtutor - the companion on-disk resource. Factorising quadratics The ability to factorise a quadratic expression is an essential skill. This booklet explains how this process is carried out. Indices or Powers This is a complete workbook on Indices covering definitions, rules and lots of examples and exercises. It can be used as a free-standing resource, or can be read in conjunction with mathtutor - the companion on-disk resource. Linear equations in one variable This is a complete workbook introducing the solution of a single linear equation in one variable. It contains plenty of examples and exercises. It can be used as a free-standing resource or in conjunction with the mathtutor DVD. This booklet explains what is meant by a logarithm. It states and illustrates the laws of llogarithms. It explains the standard bases 10 and e. Finally it shows how logarithms can be used to solve certain types of equations. Mathematical language This introductory booklet describes conventions used in mathematical work and gives information on the appropriate use of symbols. Partial fractions An algebraic fraction can often be broken down into the sum of simpler fractions called partial fractions. This process is required in the solution of a number of engineering and scientific problems. This booklet explains how this is done. Polynomial division Polynomial division is a process used to simplify certain sorts of algebraic fraction. It is very similar to long division of numbers. This booklet describes how the process is carried out. Quadratic equations This booklet explains how quadratic equations can be solved by factorisation, by completing the square, using a formula, and by drawing graphs. Simplifying Fractions This booklet explains how an algebraic fraction can be expressed in its lowest terms, or simplest form. Simultaneous linear equations This is a complete workbook introducing the solution of a pair of simultaneous linear equations. It contains plenty of examples and exercises. It can be used as a free-standing resource or in conjunction with the mathtutor DVD. Solving inequalities This booklet explains linear and quadratic inequalities and how they can be solved algebraically and graphically. It includes information on inequalities in which the modulus symbol is used. Substitution and formulae Formulae are used to relate physical quantities to each other. They provide rules so that if we know the values of certain quantities we can calculate the values of others. This booklet discusses several formulae. Transposition, or rearranging formulae It is often necessary to rearrange a formula in order to write it in a different, yet equivalent form. This booklet explains how this is done. Combining algebraic fractions - Numbas 13 questions on combining algebraic fractions. An area in which students often need practice. Numbas resources have been made available under a Creative Commons licence by the School of Mathematics & Statistics at Newcastle University. Completing the square - Numbas Two questions on completing the square. The first asks you to express $x^2+ax+b$ in the form $(x+c)^2+d$ for suitable numbers $c$ and $d$. The second asks you to complete the square on the quadratic of the form $ax^2+bx+c$ and then find its roots. Numbas resources have been made available under a Creative Commons licence by the School of Mathematics & Statistics at Newcastle University. Diagnostic Test - Brackets Diagnostic test for using brackets. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Diagnostic Test - Completing the square 1 Diagnostic test for completing the square. (1 of 2). This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Diagnostic Test - Completing the square 2 Diagnostic test for completing the square. (2 of 2). This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Diagnostic Test - Cubics Diagnostic test for cubic equations. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Diagnostic Test - Factorising quadratics 1 Diagnostic test for factorising quadratics. (1 of 2). This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Diagnostic Test - Factorising quadratics 2 Diagnostic test for factorising quadratics. (2 of 2). This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Diagnostic Test - Inequalities Diagnostic test for solving inequalities. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Diagnostic Test - Linear equations Diagnostic test for linear equations. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Diagnostic Test - Logarithms Diagnostic test for logs and logarithms. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Diagnostic Test - Mathematical language Diagnostic test for mathematical language This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Diagnostic Test - Partial fractions Diagnostic test for partial fractions. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Diagnostic Test - Polynomial division Diagnostic test for polynomial division. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Diagnostic Test - Powers Diagnostic test for powers and indices. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Diagnostic Test - Simplifying fractions Diagnostic test for simplifying fractions. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Diagnostic Test - Simultaneous equations Diagnostic test for simultaneous equations. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Diagnostic Test - Substitution Diagnostic test for substituting formulae. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Diagnostic Test - Transposing formulae Diagnostic test for transposing formulae. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Exercise - Brackets Online exercise on using brackets. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Exercise - Completing the square Online exercise on completing the square. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Exercise - Completing the square - maxima and minima Online exercise on completing the square using maxima and minima. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Exercise - Cubics Online exercise on cubic equations. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Exercise - Factorising Quadratics 1 Online exercise on factorising quadratics. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Exercise - Factorising Quadratics 2 Online exercise on factorising quadratics (2). This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Exercise - Fractions Online exercise on partial fractions. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Exercise - Inequalities Online exercise on inequalities. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Exercise - Linear equations Online exercise on linear equations. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Exercise - Logarithms Online exercise for logarithms. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Exercise - Partial fractions Online exercise on partial fractions. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Exercise - Polynomial division Online exercise on polynomial division. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Exercise - Powers Online exercise for powers. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Exercise - Simultaneous equations Online exercise on simultaneous equations. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Exercise - Substitution Online exercise on substitution. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Exercise - Transposing formulae Online exercise on transposing formulae. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Expanding Brackets - Numbas 9 questions: Expanding out expressions such $(ax+b)(cx+d)$ etc. Numbas resources have been made available under a Creative Commons licence by the School of Mathematics & Statistics at Newcastle Factorising quadratics - Numbas 3 questions on factorising quadratics. The second question also asks for the roots of the quadratic. The third question involves factorising quartic polynomials but which are quadratics in $x^2$. Numbas resources have been made available under a Creative Commons licence by the School of Mathematics & Statistics at Newcastle University. Logarithms and solving equations - Numbas 8 questions using logarithms. 7 questions use logarithms to solve equations. Numbas resources have been made available under a Creative Commons licence by the School of Mathematics & Statistics at Newcastle University. Maths EG Computer-aided assessment of maths, stats and numeracy from GCSE to undergraduate level 2. These resources have been made available under a Creative Common licence by Martin Greenhow and Abdulrahman Kamavi, Brunel University. Partial fractions - Numbas 1 question on partial fractions. Numbas resources have been made available under a Creative Commons licence by the School of Mathematics & Statistics at Newcastle University. Polynomial division - Numbas 2 questions. First question divides a cubic by a linear polynomial. The second divides a degree 4 polynomial by a degree 2 polynomial. Numbas resources have been made available under a Creative Commons licence by the School of Mathematics & Statistics at Newcastle University. Solving simple linear equations - Numbas 2 equations, both linear (the second needs a small amount of algebra to reduce to a linear equation). Numbas resources have been made available under a Creative Commons licence by the School of Mathematics & Statistics at Newcastle University. System of linear equations - Numbas 3 questions. First, two equations in two unknowns, second 3 equations in 3 unknowns, solved by Gauss elimination. The third two equations in 2 unknowns solved by putting into matrix form and finding the inverse of the coefficient matrix. Numbas resources have been made available under a Creative Commons licence by the School of Mathematics & Statistics at Newcastle University. Completing the square - an Animation This mathtutor animation shows how the quadratic equation for a parabola may be transformed by completing the square. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Completing the Square - by Inspection In this unit we consider how quadratic expressions can be written in an equivalent form using the technique known as completing the square. This technique has applications in a number of areas, but we will see an example of its use in solving a quadratic equation. (Mathtutor Video Tutorial) This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Completing the Square - maxima & maxima Completing the square is an algebraic technique which has several applications. These include the solution of quadratic equations. In this unit we use it to find the maximum or minimum values of quadratic functions. (Mathtutor Video Tutorial) This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Expanding & Removing Brackets In this unit we see how to expand an expression containing brackets. By this we mean to rewrite the expression in an equivalent form without any brackets in. Fluency with this sort of algebraic manipulation is an essential skill which is vital for further study. (Mathtutor Video Tutorial) This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Factorising Quadratic Equations An essential skill in many applications is the ability to factorise quadratic expressions. In this unit you will see that this can be thought of as reversing the process used to 'remove' or 'multiply-out' brackets from an expression. (Mathtutor Video Tutorial) This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Logarithms appear in all sorts of calculations in engineering and science, business and economics. Before the days of calculators they were used to assist in the process of multiplication by replacing the operation of multiplication by addition. Similarly, they enabled the operation of division to be replaced by subtraction. They remain important in other ways, one of which is that they provide the underlying theory of the logarithm function. This has applications in many fields, for example, the decibel scale in acoustics. (Mathtutor Video Tutorial) This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Mathematical language This introductory section provides useful background material on the importance of symbols in mathematical work. It describes conventions used by mathematicians, engineers, and scientists. (Mathtutor Video Tutorial) This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Partial Fractions After viewing this tutorial, you should be able to explain the meaning of the terms 'proper fraction' and 'improper fraction', and express an algebraic fraction as the sum of its partial fractions. (Mathtutor Video Tutorial) algebraic fraction as the sum of its partial fractions. (Mathtutor Video Tutorial) This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Pascal's triangle and the binomial expansion A binomial expression is the sum or difference of two terms. For example, x+1 and 3x+2y are both binomial expressions. If we want to raise a binomial expression to a power higher than 2 it is very cumbersome to do this by repeatedly multiplying x+1 or 3x+2y by itself. In this tutorial you will learn how Pascal's triangle can be used to obtain the required result quickly. (mathtutor video) This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Polynomial Division In order to simplify certain sorts of algebraic fraction we need a process known as polynomial division. This unit describes this process. (Mathtutor Video Tutorial) This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. A knowledge of powers, or indices as they are often called, is essential for an understanding of most algebraic processes. In this section you will learn about powers and rules for manipulating them through a number of worked examples. (Mathtutor Video Tutorial) This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Simple Linear Equations In this unit we give examples of simple linear equations and show you how these can be solved. In any equation there is an unknown quantity, x say, that we are trying to find. In a linear equation this unknown quantity will appear only as a multiple of x, and not as a function of x such as x^2, x^3, sin x and so on. Linear equations occur so frequently in the solution of other problems that a thorough understanding of them is essential. (Mathtutor Video Tutorial) This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Simplifying Algebraic Fractions This video explains how algebraic fractions can be simplified by cancelling common factors. (Mathtutor Video Tutorial) This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Simultaneous linear equations - an Animation This mathtutor animation shows how solutions to simultaneous linear equations may be found. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Simultaneous Linear Equations Part 1 The purpose of this section is to look at the solution of simultaneous linear equations. We will see that solving a pair of simultaneous equations is equivalent to finding the location of the point of intersection of two straight lines. (Mathtutor Video Tutorial) This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Solving Cubic Equations All cubic equations have either one real root, or three real roots. In this video we explore why this is so. (Mathtutor Video Tutorial) This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Solving Inequalities This video explains linear and quadratic inequalities and how they can be solved algebraically and graphically. It includes information on inequalities in which the modulus symbol is used. (Mathtutor Video Tutorial) This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Solving Quadratic Equations This unit is about the solution of quadratic equations. These take the form ax^2+bx+c = 0. We will look at four methods: solution by factorisation, solution by completing the square, solution using a formula, and solution using graphs. (Mathtutor Video Tutorial) This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Substitution & Formulae In mathematics, engineering and science, formulae are used to relate physical quantities to each other. They provide rules so that if we know the values of certain quantities; we can calculate the values of others. In this video we discuss several formulae and illustrate how they are used. (Mathtutor Video Tutorial) This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Transposition or Re-arranging Formulae It is often useful to rearrange, or transpose, a formula in order to write it in a different, but equivalent form. This unit explains the procedure for doing this. (Mathtutor Video Tutorial) This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd.
{"url":"http://www.mathcentre.ac.uk/courses/physicalscience/algebra/","timestamp":"2014-04-18T13:07:16Z","content_type":null,"content_length":"169254","record_id":"<urn:uuid:f6653fe5-ad2b-4d20-909c-e14c25a3d81a>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00400-ip-10-147-4-33.ec2.internal.warc.gz"}
Algorithm to check if a number can be rearranged to form a Palindrome This is a slight modification of the Palindrome check question. Can a number be arranged in such a way to form a palindrome: 1. Find out the occurrence of each character 2. Only one character with odd occurrence is allowed because in a palindrome maximum number of character with odd occurrence can be 1 3. All other character should occur for even number of times 4. If condition 1, 2 and 3 does not satisfy then palindrome is not possible out of the string. This approach was used for finding if a string is anagram of another string using namespace std; bool PossiblePal(char *s) int array[256]={0}, Odd=0; //Refer to anagram article similar approach is used for(int i=0;i<256;i++) //On encountering character with odd occurance if(array[i]%2!=0 && Odd==0) //If one more charcter with odd occurances else if(array[i]%2!=0 && Odd==1) return false; return true; int main() char str[100]; cout<<"\nEnter the Number to be checked :\n"; cin.getline(str, 100); cout<<"Palindrome is possible \n"; cout<<"Palindrome is NOT possible \n"; return 0; laptop:~/code$ ./a.out Enter the Number to be checked : Palindrome is possible laptop:~/code$ ./a.out Enter the Number to be checked : Palindrome is possible laptop:~/code$ ./a.out Enter the Number to be checked : Palindrome is NOT possible □ Yeah true. implementing an actual hash table will be excellent and thanks for the idea. Please post your code if you have implemented already or i will post it in near future.
{"url":"http://chinmaylokesh.wordpress.com/2011/01/26/algorithm-to-check-if-a-number-can-be-rearranged-to-form-a-palindrome/","timestamp":"2014-04-18T02:58:07Z","content_type":null,"content_length":"64439","record_id":"<urn:uuid:f56a55f6-1f14-42ff-9311-80ccf14b5dc0>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00174-ip-10-147-4-33.ec2.internal.warc.gz"}
the encyclopedic entry of ANOVA , or analysis of variance – simultaneous component analysis is a method that partitions variation and enables interpretation of these partitions by SCA, a method that is similar to . This method is a multi or even megavariate extension of . The variation partitioning is similar to Analysis of variance (ANOVA). Each partition matches all variation induced by an effect or factor, usually a treatment regime or experimental condition. The calculated effect partitions are called effect estimates. Because even the effect estimates are multivariate, interpretation of these effects estimates is not intuitive. By applying SCA on the effect estimates one gets a simple interpretable result. In case of more than one effect this method estimates the effects in such a way that the different effects are not correlated. See also references ( Many research areas see increasingly large numbers of variables in only few samples. The low sample to variable ratio creates problems known as multicollinearity and singularity. Because of this, most traditional multivariate statistical methods cannot be applied. ASCA algorithm This section details how to calculate the ASCA model on a case of two main effects with one interaction effect. It is easy to extend the declared rationale to more main effects and more interaction effects. If the first effect is time and the second effect is dosage, only the interaction between time and dosage exist. We assume there are four time points and three dosage levels. Let X be a matrix that holds the data. X is mean centered, thus having zero mean columns. A1, A2, A3 and A4, as B1, B2 and B3 indicate the levels in time and dosage. A and B are required to be balanced if the effect estimates need to be orthogonal and the partitioning unique. Matrix E holds the information that is not assigned to any effect. The partitioning gives the following notation: $X = A+B+AB+E ,$ Calculating main effect estimate A (or B) Find all rows that correspond to effect A level 1 and averages these rows. The result is a vector. Repeat this for the other effect levels. Make a new matrix of the same size of X and place the calculated averages in the matching rows. That is, give all rows that match effect (i.e.) A level 1 the average of effect A level 1. After completing the level estimates for the effect, perform an SCA. The scores of this SCA are the sample deviations for the effect, the important variables of this effect are in the weights of the SCA loading vector. Calculating interaction effect estimate AB Estimating the interaction effect is similar to estimating main effects. The difference is that for interaction estimates the rows that match effect A level 1 are combined with the effect B level 1 and all combinations of effects and levels are cycled through. In our example setting, with four time point and three dosage levels there are 12 interaction sets {A1-B1, A1B2, A2B1, A2B2 and so on}. It is important to deflate (remove) the main effects before estimating the interaction effect. SCA on partitions A, B and AB Simultaneous component analysis is mathematically identical to PCA, but is semantically different in that it models different objects or subjects at the same time. The standard notation for a SCA – and PCA – model is: $X=TP^\left\{"\right\}+E ,$ where X is the data, T are the component scores and P are the component loadings. E is the residual or error matrix. Because ASCA models the variation partitions by SCA, the model for effect estimates looks like this: $A=T_\left\{a\right\}P_\left\{a\right\}^\left\{"\right\}+E_\left\{a\right\} ,$ $B=T_\left\{b\right\}P_\left\{b\right\}^\left\{"\right\}+E_\left\{b\right\} ,$ $AB=T_\left\{ab\right\}P_\left\{ab\right\}^\left\{"\right\}+E_\left\{ab\right\} ,$ $E=T_\left\{e\right\}P_\left\{e\right\}^\left\{"\right\}+E_\left\{e\right\} ,$ Note that every partition has its own error matrix. However, algebra dictates that in a balanced mean centered data set every two level system is of rank one. This results in zero errors, since any rank 1 matrix can be written as the product of a single component score and loading vector. The full ASCA model with two effects and interaction including the SCA looks like this: $X=A+B+AB+E ,$ \}P\left\{e\right\}^\left\{\text{'}\right\}+E_\left\{a\right\}+E_\left\{b\right\}+E_\left\{ab\right\}+E_\left\{e\right\}+E ,$ Time as an Effect Because 'time' is treated as a qualitative factor in the ANOVA decomposition preceding ASCA, a nonlinear multivariate time trajectory can be modeled. An example of this is shown in Figure 10 of this reference ( ).
{"url":"http://www.reference.com/browse/ANOVA","timestamp":"2014-04-20T10:03:01Z","content_type":null,"content_length":"82222","record_id":"<urn:uuid:c7100695-8022-41c3-8860-29228545d5c9>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00545-ip-10-147-4-33.ec2.internal.warc.gz"}
MATH 1324: Mathematics for Business and Economics I [TCCN: MATH 1324] Topics include review of basic algebraic concepts, linear equations and inequalities, mathematics of finance, matrices, introduction to linear programming, topics in probability. Prerequisite: Satisfactory score on SAT, ACT, or THEA. Credit not given for both MATH 1324 and MATH 1314 or MATH 1332.
{"url":"http://www.uttyler.edu/catalog/10-12/2543.htm","timestamp":"2014-04-20T18:28:03Z","content_type":null,"content_length":"2981","record_id":"<urn:uuid:40d1f07d-aa20-4fca-a347-c8b910e768d9>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00413-ip-10-147-4-33.ec2.internal.warc.gz"}
An asymmetry-steepness parameterization of the generalized lambda distribution Chalabi, Yohan / Y. and Scott, David J and Wuertz, Diethelm (2012): An asymmetry-steepness parameterization of the generalized lambda distribution. Download (759Kb) | Preview The generalized lambda distribution (GLD) is a versatile distribution that can accommodate a wide range of shapes, including fat-tailed and asymmetric distributions. It is defined by its quantile function. We introduce a more intuitive parameterization of the GLD that expresses the location and scale parameters directly as the median and inter-quartile range of the distribution. The remaining two shape parameters characterize the asymmetry and steepness of the distribution respectively. This is in contrasts to the previous parameterizations where the asymmetry and steepness are described by the combination of the two tail indices. The estimation of the GLD parameters is notoriously difficult. With our parameterization, the fitting of the GLD to empirical data can be reduced to a two-parameter estimation problem where the location and scale parameters are estimated by their robust sample estimators. This approach also works when the moments of the GLD do not exist. Moreover, the new parameterization can be used to compare data sets in a convenient asymmetry and steepness shape plot. In this paper, we derive the new formulation, as well as the conditions of the various distribution shape regions and moment conditions. We illustrate the use of the asymmetry and steepness shape plot by comparing equities from the NASDAQ-100 stock index. Item Type: MPRA Paper Original An asymmetry-steepness parameterization of the generalized lambda distribution Language: English Keywords: Quantile distributions; generalized lambda distribution; shape plot representation Subjects: C - Mathematical and Quantitative Methods > C1 - Econometric and Statistical Methods and Methodology: General > C16 - Specific Distributions Item ID: 37814 Depositing Yohan Chalabi Date 03. Apr 2012 12:42 Last 19. Feb 2013 06:14 W. H. Asquith. L-moments and TL-moments of the generalized lambda distribution. Computational Statistics & Data Analysis, 51(9):4484– 4496, May 2007. M. Bigerelle, D. Najjar, B. Fournier, N. Rupin, and A. Iost. Application of lambda distributions and bootstrap analysis to the prediction of fatigue lifetime and confidence intervals. International Journal of Fatigue, 28(3):223–236, Mar. 2006. C. J. Corrado. Option pricing based on the generalized lambda distribution. Journal of Futures Markets, 21(3):213–236, 2001. B. Dengiz. The generalized lambda distribution in simulation of M/M/1 queue systems. Journal of the Faculty of Engineering and Architecture of Gazi University, 3:161–171, 1988. J. J. Filliben. The probability plot correlation coefficient test for normality. Technometrics, 17(1):111–117, Feb. 1975. B. Fournier, N. Rupin, M. Bigerelle, D. Najjar, and A. Iost. Application of the generalized lambda distributions in a statistical process control methodology. Journal of Process Control, 16(10):1087 – 1098, 2006. B. Fournier, N. Rupin, M. Bigerelle, D. Najjar, A. Iost, and R. Wilcox. Estimating the parameters of a generalized lambda distribution. Computational Statistics & Data Analysis, 51 (6):2813 – 2835, 2007. M. Freimer, G. Kollia, G. Mudholkar, and C. Lin. A study of the generalized Tukey lambda family. Communications in Statistics-Theory and Methods, 17(10):3547–3567, 1988. W. Gilchrist. Statistical modelling with quantile functions. CRC Press, 2000. J. Hastings, Cecil, F. Mosteller, J. W. Tukey, and C. P. Winsor. Low moments for small samples: A comparative study of order statistics. The Annals of Mathematical Statistics, 18(3):pp. 413–426, 1947. D. Hogben. Some properties of Tukey’s test for non-additivity. Unpublished Ph.D. thesis, Rutgers-The State University, 1963. B. L. Joiner and J. R. Rosenblatt. Some properties of the range in samples from Tukey’s symmetric lambda distributions. Journal of the American Statistical Association, 66(334):pp. 394–399, 1971. Z. Karian and E. Dudewicz. Fitting statistical distributions: the generalized lambda distribution and generalized bootstrap methods. Chapman & Hall/CRC, 2000. Z. Karian and E. Dudewicz. Comparison of GLD fitting methods: superiority of percentile fits to moments in L2 norm. Journal of Iranian Statistical Society, 2(2):171–187, 2003. Z. A. Karian and E. J. Dudewicz. Fitting the generalized lambda distribution to data: a method based on percentiles. Communications in Statistics - Simulation and Computation, 28 (3):793–819, 1999. J. Karvanen and A. Nuutinen. Characterizing the generalized lambda distribution by L-moments. Computational Statistics & Data Analysis, 52(4):1971–1983, Jan. 2008. J. Karvanen, J. Eriksson, and V. Koivunen. Maximum likelihood estimation of ICA model for wide class of source distributions. In Neural Networks for Signal Processing X, 2000. Proceedings of the 2000 IEEE Signal Processing Society Workshop, volume 1, pages 445 –454 vol.1, 2000. R. King and H. MacGillivray. Fitting the generalized lambda distribution with location and scale-free shape functionals. American Journal of Mathematical and Management Sciences, 27 (3-4):441–460, 2007. R. A. R. King and H. L. MacGillivray. A starship estimation method for the generalized λ distributions. Australian & New Zealand Journal of Statistics, 41(3):353–374, 1999. A. Lakhany and H. Mausser. Estimating the parameters of the generalized lambda distribution. Algo Research Quarterly, 3(3):47–58, 2000. D. Najjar, M. Bigerelle, C. Lefebvre, and A. Iost. A new approach to predict the pit depth extreme value of a localized corrosion process. ISIJ international, 43(5):720–725, 2003. A. Negiz and A. Çinar. Statistical monitoring of multivariable dynamic processes with state-space models. AIChE Journal, 43(8):2002–2020, 1997. A. Ozturk and R. Dale. A study of fitting the generalized lambda distribution to solar-radiation data. Journal of Applied Meteorology, 21(7): 995–1004, 1982. A. Ozturk and R. Dale. Least-squares estimation of the parameters of the generalized lambda-distribution. Technometrics, 27(1):81–84, 1985. S. Pal. Evaluation of nonnormal process capability indices using generalized lambda distribution. Quality Engineering, 17(1):77–85, 2004. J. S. Ramberg and B. W. Schmeiser. An approximate method for generating asymmetric random variables. Commun. ACM, 17(2):78–82, 1974. S. Shapiro, M. Wilk, and H. Chen. A comparative study of various tests for normality. Journal of the American Statistical Association, 63(324): 1343–&, 1968. S. S. Shapiro and M. B. Wilk. An analysis of variance test for normality (complete samples). Biometrika, 52(Part 3–4):591–611, Dec. 1965. H. Shore. Comparison of generalized lambda distribution (GLD) and response modeling methodology (RMM) as general platforms for distribution fitting. Communications In Statistics-Theory and Methods, 36 (13-16):2805–2819, 2007. S. Su. A discretized approach to flexibly fit generalized lambda distributions to data. Journal of Modern Applied Statistical Methods, 4(2): 408–424, 2005. S. Su. Numerical maximum log likelihood estimation for generalized lambda distributions. Computational Statistics & Data Analysis, 51(8): 3983–3998, May 2007. A. Tarsitano. Fitting the generalized lambda distribution to income data. In COMPSTAT’2004 Symposium, pages 1861–1867. Physica-Verlag/Springer, 2004. A. Tarsitano. Comparing estimation methods for the FPLD. Journal of Probability and Statistics, 2010. J. W. Tukey. The practical relationship between the common transformations of percentages or fractions and of amounts. Technical Report Technical Report 36, Statistical Research Group, Princeton, 1960. 32 J. W. Tukey. The future of data analysis. The Annals of Mathematical Statistics, 33(1):1–67, Mar. 1962. J. Van Dyke. Numerical investigation of the random variable y = c(u^\lambda − (1 − u)^\lambda). Unpublished working paper, National Bureau of Standards, 1961. URI: http://mpra.ub.uni-muenchen.de/id/eprint/37814 Available Versions of this Item • An asymmetry-steepness parameterization of the generalized lambda distribution. (deposited 03. Apr 2012 12:42) [Currently Displayed]
{"url":"http://mpra.ub.uni-muenchen.de/37814/","timestamp":"2014-04-16T11:14:03Z","content_type":null,"content_length":"32802","record_id":"<urn:uuid:b4191421-72f2-46c0-bb9b-460f964e8235>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00498-ip-10-147-4-33.ec2.internal.warc.gz"}
Imlaystown Math Tutor Find a Imlaystown Math Tutor ...I LOVE kids, I work at a local day camp and as a local baby sitter. I have had plenty of hands on experiences teaching students in all subjects but I specialize in math, science, and special education. UDel is known across the country as having one of the best education programs out there, especially in math education. 20 Subjects: including discrete math, reading, prealgebra, geometry ...Due to my mastery of high school subjects, I would easily be able to tutor for the high school equivalency test, the GED. My comprehension skills and strategizing ability allow me to prepare my students with the best ways to approach the reading portion of the SAT. I also work with them to boos... 33 Subjects: including algebra 1, algebra 2, calculus, ACT Math ...I graduated from the University of Maryland in 2007 with a degree in physics and I have been teaching ever since. I am very passionate about my profession and about physics in particular. My tutoring style is centered around getting students to understand concepts instead of blindly plugging and chugging into random equations. 4 Subjects: including algebra 1, algebra 2, geometry, physics ...I have a bachelor's in mathematics from Rutgers University. I have experience passing the GREs as well as the Praxis II for mathematics. Furthermore, I have experience tutoring students taking math classes up to and including calculus. 16 Subjects: including trigonometry, algebra 1, algebra 2, calculus ...I have 10 years experience teaching science in small, private Christian schools and have just received Certificate of Eligibility to teach physics in the State of NJ. Please consider me for your math, physics, astronomy, and calculus needs. I know and understand the frustration levels of not knowing how to even approach the problems that you face day in and day out. 13 Subjects: including calculus, physics, precalculus, trigonometry
{"url":"http://www.purplemath.com/imlaystown_nj_math_tutors.php","timestamp":"2014-04-19T05:25:31Z","content_type":null,"content_length":"23966","record_id":"<urn:uuid:22e42a1b-1ab0-46d5-9d56-970dc5a01cfd>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00493-ip-10-147-4-33.ec2.internal.warc.gz"}
Stright Lines, January 2001, Vol. 4-1 The Official Newsletter of the IUP Mathematics Department January, 2001___________________Volume 4, Issue 1 Welcome to another issue of Stright Lines. For any of you receiving this as a first issue and thinking those IUP Mathematics Faculty can't spell “straight”, I remind you that the Mathematics Department is located in Stright Hall. I am sad to report that we have received no letters from graduates to include in this newsletter. We still hope to hear from you. In the last issue Joe Kirchner recalled several humorous stories from his days at IUP. We hoped to get some more stories from alumni or retired faculty. Joe thought “it might be a little tough getting funny stories from a bunch of math majors” and he seems to have been right since we have received no contributions. We still would welcome humorous stories about your days at IUP. Jim Reber, Editor. IUP Graduates are Involved! In one of the first issues of Stright Lines I extolled the professional activities of IUP graduates in mathematics education. IUP graduates continue to make a difference in education. For example, check out the web page of the Mathematics Council of Western Pennsylvania at www.mcwp.org. It is maintained by David Taylor, an IUP graduate. David taught for 3.5 years in Maryland and then returned to Western Pennsylvania to teach mathematics at South Fayette Township Jr.-Sr. High School. One of his responsibilities at the school has been the Cooperative Satellite Learning Project, a cooperative effort among the school, NASA, Goddard Space Flight Center, and Allied Signal Technical Services Corporation. This fall David became Director of Information Technology at South Fayette. This year from March 15 - 17, 2001, the 50th Annual Meeting of the Pennsylvania Council of Teachers of Mathematics (PCTM) will be held in Pittsburgh at Greentree’s Radisson and Holiday Inn hotels. IUP graduates are certainly prominent among committee chairs, presenters and presiders. Dave Depner is Co-chair of Local Arrangements and Susan Stonebraker is Chair of Meals and Functions. Many alumni who have earned their degrees at IUP are sharing their knowledge and expertise by presenting programs. These alumni include Linda Brecht, Elaine Carbone, Patty Flach, Rhonda Fedyk-Foust, Nina Girard, Bill Hadley, Jennifer Landsman, Peggy Lunardini, Majory Maher, Rita McMinn, Mary Lou Metz, Mary Lynn Raith, Shannon Relihan-Rieger, Cathy Schloemer, Eli Shaheen, Anita Smith, Kirstie Trump, John Uccellini, and Mark Zelinskas. Many of the presenters are also presiding over sessions, as are Adrienne Kapisak, Dorothy Mullin, and nine student teachers. (I wonder who twisted the arms of the latter!) Anyway, these student teachers deserve mention; they are Tracy Birchall, Leah Drane, Jessica Feerst, Melissa Luckey, Shawn Moorhead, Doug Murdoch, John Nelson, Denise Shade, and Jane Shumaker. (If I have forgotten anyone, please advise me, and I will give you proper coverage in the next Stright Lines.) If you are attending the PCTM meeting, please look on the message board by the registration table for IUP announcements. We will try to plan some get-togethers. You may be interested in knowing where some of our recent graduates have taken positions. We are now fortunate to have many of our graduates getting their first positions in Pennsylvania. Recent graduates who are now teaching in Pennsylvania are as follows: Shelly Huston, Shady Side Academy, Pittsburgh; Ralph Santilli, Butler; Matt Rodkey, Homer Center in Homer City; Jeff Ziegler, Pittsburgh Public Schools; Brad Baker, Beaver; LeeAndrea McCullough, Quaker Valley, Sewickly; Karin Rabenold, Marion Center; and Lisa Sargent, Manheim Twp., Lancaster. Among those who have gone to other states are Kim White, Concord, North Carolina; Janel Hartzok, Westminster, MD; and Joyce George, Ocean City, MD. Periodically we get e-mail messages from former students who are recruiting mathematics teachers for their schools. Two of these have come from Chris Clark at Manassas Park, Virginia and Chris O’Rourke at McLean, Virginia. Although our mathematics department does not operate a placement bureau, we are always happy to share job postings. If you are searching for a job or trying to fill a position, please forward information to us. Ann Massey asmassey@grove.iup.edu News about Graduates Dr. Buriok received a note from John A. Miller (J.Miller@connect.xerox.com) who graduated from IUP in 1977 and is now Managing Principal, Document Management and Imaging, with Xerox Connect in Pittsburgh. John noted that he gave one of the commencement speeches in the department. He also mentioned that Xerox Connect is growing quickly and hires many college seniors. Mark Rayha (Class of 1993) resigned his position as a Business Systems Analyst with Citistreet (formally known as the Copeland Companies) and accepted a position as a Lead Systems Analyst at Schering-Plough, a pharmaceutical company. His home email address is m.rayha@gte.net. Tracie A. Moreland (Class of 1996) finished the Applied Statistics graduate program at Villanova University (while working full time). She is currently with Merck & Co. as a marketing analyst. Dr. Rebecca Stoudt received a note from Aurele Houngbedji (amhst44+@pitt.edu) who graduated from IUP with an M.S. degree in August, 1996. Aurele graduated with his Ph.D. on April 30, 2000. He will be working at Ohio Savings Bank in the Capital Markets Department as a Quantitative Analyst. The position is related directly to Aurele’s research, which is stochastic modeling in Finance. He will be doing quantitative research, financial data analysis, derivatives trading and risk management. Cindy Venturino Biedrycki wrote Dr. Massey from Prince William County in Virginia where she and Stephanie Clifton are teaching. Both finished their masters degrees in Curiculum and Instruction at Virginia Tech last August. Alumni Bulletin Board Available at our Web site If you go to the IUP Mathematics Department web site, http://www.ma.iup.edu/, you can leave a message on the Alumni Bulletin Board. One recent posting is from Kirstie Trump ( MDteach4u2@aol.com) on 09/27/00 : Hello everyone! I am currently teaching 8th grade math and algebra in Carroll County, Maryland. IUP prepared me well for teaching and I am grateful to all of my professors and classmates for always supporting me. Carroll County is always looking for good math teachers and loves to recruit IUP graduates. Please email me if you would like more info! Mullin Receives Award Last year Dorothy Mullin received the Award for Outstanding Contributions to MCWP (Mathematics Council of Western Pennsylvania). Dorothy has served as a member of the MCWP board and chairperson of many committees for PCTM and NCTM regional meetings as well as for MCWP meetings. Always she has been willing to give of herself to make professional events successful. Dorothy received her bachelor’s degree in mathematics education from IUP and both her masters in mathematics and her doctorate in mathematics education at the University of Pittsburgh. She taught at Penn State, McKeesport for more than 20 years. We were fortunate to have Dorothy in our department when she returned to her Alma Mater as a temporary instructor for a year. Dale Shafer died in Florida on March 21, 1999. His master’s degree was from Columbia University and his doctorate of education degree was from the University of Oklahoma. He taught for two years in the Oley Valley School District, for three years at Slippery Rock College, and for 30 years in the IUP Mathematics Department from 1964 - 1994. He was the executive secretary of the School, Science and Math Association for 10 years. At IUP he often taught statistics courses. Richard “Dick” Wolfe died in South Carolina on January 24, 2000 from injuries suffered in a traffic accident. His master’s and doctorate degrees were from the University of Illinois in Champaign-Urbana. He taught at Waynesboro High School and then here at IUP from 1967 until his retirement in 1991. He taught mathematics education courses and supervised numerous student teachers over the years. I. “Ike” Leonard Stright died on February 9, 2000. He received his Ph.D. degree from Case Western Reserve University. He taught mathematics in high school and at Baldwin Wallace College and Northern Michigan University. He became Professor of Mathematics at IUP in 1947 and was Dean of the Graduate School from 1957 until 1971. The building which houses the Mathematics Department, and hence this newsletter, is named for Dr. Stright. Word from Daniel Griffith, Class of 1970 Dr. Daniel A. Griffith, now Professor of Geography at Syracuse University, sent us two recent publications. One article appeared in the Journal of Statistical Planning and Inference. He noted that his IUP mathematics education prepared him very well for earning an M.S. in statistics (1985). The other article appeared in Linear Algebra and Its Applications. This article draws upon his undergraduate and graduate work in mathematics at IUP (B.S., 1970; graduate work 1970-72). Daniel observes that training by three of his IUP instructors - Mr. D. McBride (retired), Dr. J. Hoyt (retired) and Mr. C. Maderer - helped make this second article possible. In closing he notes that he continues to appreciate the mathematics training he receive at IUP that has enabled him to both publish in statistics journals and contribute to the linear algebra literature. The SPIRAL Project By Rebecca A. Stoudt and Roberta M. Eddy SPIRAL (Science/Mathematics/ Technology Preparation Involving Real-world Active Learning) is a teacher professional development project funded by the Eisenhower Professional Development Program and IUP matching funds. SPIRAL is a multi-disciplinary program that SPIRALs concepts from K through 12 and out across the disciplines. The disciplines involved are Mathematics, Biology, Chemistry, Geoscience, and Physics. The use of a wide variety of technology is woven throughout the program. The project is co-directed by Rebecca Stoudt (Mathematics) and Roberta Eddy (Chemistry). Other SPIRAL faculty are Janet Walker and Gary Stoudt (Mathematics), Terry Peard (Biology), John Wood (Chemistry), Connie Sutton (Geoscience), and Norman Gaggini and Ken Hershman (Physics). Kent Jackson (Special Education), Mary Ann Rafoth (Educational and School Psychology), and Len Lehman (Curriculum Consultant) complete the SPIRAL staff. The central focus of this project is an 8-day, intensive, residential, summer institute (SI) where preservice teachers, inservice teachers, and administrators come together to learn instructional strategies and to conduct field-tested activities consistent with state and national standards. The SI emphasizes two SPIRAL models, LIGHT and ECOSYSTEM. An awareness of special needs students and diverse learning styles in science and mathematics is stressed throughout the SI. Furthermore the incorporation of SPIRAL activities into the school district’s curricula is facilitated by two SI synthesis and curriculum incorporation sessions. SPIRAL also includes ongoing professional development activities such as follow-up workshops (fall and spring), development of portfolios, and a joint ARIN/SPIRAL Academic Alliance for educators of mathematics and science. A 5-member SPIRAL school district team ideally consists of an administrator (can be a principal, assistant principal, curriculum director, or head of department), a special needs or learning support instructor, and three K-12 teachers of mathematics and science (specifically an elementary teacher, a middle school teacher, and a high school teacher). When each team arrives at the SI, it is linked with two IUP preservice teachers, one elementary and one secondary. The preservice teachers are majoring or concentrating in mathematics and/or science. SPIRAL participants use standard-based models of teaching that emphasize the inquiry approach and cooperative learning. As a result, the participants’ content knowledge in all SPIRAL disciplines has increased significantly in every SI since the beginning of SPIRAL (1998). This significant increase was measured by the pre/post-test scores of 93 inservice and 51 preservice teachers. In fact, for each SI, the post-test score mean was at least double the pre-test score mean. The Eisenhower Professional Development Program has awarded SPIRAL approximately $597,000 since the projects beginning. These awards have been matched with approximately $349,000 from IUP (College of Natural Sciences and Mathematics, College of Education, Graduate School and Research), Texas Instruments, and ARIN IU-28. Hence, SPIRAL is almost a $1 million project to date. A large portion of the grant money is spent on supplies and materials for the teams to take back to their home schools so that they can easily implement SPIRAL activities in their curricula. Each team receives over $4000 of equipment which includes but is not limited to: (1) TI-83 Plus calculator/viewscreen; (2) CBL2 kit with set of probes--biology gas pressure sensor, dissolved oxygen, colorimeter, pH system; (3) CBR system; (4) digital camera; (5) various CD-ROMs; (6) aquatic kick net; (7) Silica Gel GF thin layer chromatography plates; (8) UV lamp; (9) HACH Color Cube kits (iron, nitrogen-nitrate, phosphorous orthophosphate); (10) HACH Color Disc Kits (iron, nitrogen-nitrate, phosphorous orthophosphate); (11) light, image, shadow kits; (12) topographic and geologic maps; (13) Guide Book to Rocks and Soil; (14) rock/mineral set; (15) fossil set; (continued on page 4) (continued from page 3) (16) pocket gem field magnifier; (17) pH tester; (18) fluorescent experiment kit, (19) lightsticks; (20) cool blue light and goofy glowing gel kits; (21) color filters; (22) mirror set; (23) soil percolation kit; (24) bar magnet set; (25) student clinometer; (26) refracting telescope kit; (27) Ecneics kit; (28) solar system floor puzzle; (29) star chart; (30) solar system/planet poster; (31) spectrum analysis chart; (32) spectroscope and (33) numerous activity books. For more information, pictures, sample activities, syllabi for inservice/preservice academic credit, and links to electronic portfolios of SPIRAL teams, we invite you to visit the SPIRAL website at http://www.iup.edu/smetc/spiral/ IUP's Curriculum Through the Years, Part 3 by Gary Stoudt In the last two issues we looked at the opening of the Indiana Seminary and Normal School and the State Normal School of the Ninth District, in Indiana, Pennsylvania. One of the texts used in the curriculum of the Indiana Seminary and Normal School was Ray’s Algebra. Thanks to Dr. Ed Donley who loaned me a copy of the book, I can tell you something about the book in order to help you get a feel for what the mathematical studies at the Normal School where like. Unless otherwise stated, quotes in this article are from this book. Dr. Donley’s copy is of the 1875 edition, so it is most likely that this was the text used at the time of the school’s founding in 1875. The full title of the book is Elements of Algebra for Colleges, Schools, and Private Students, Second Book. The author is Joseph Ray, M.D., professor of mathematics at Woodward College. Woodward College was located in Cincinnati but no longer exists. The publisher was Wilson, Hinkle and Co. in Cincinnati. There are very few diagrams in the text, although it is typeset using modern notation. According to Miami Valley Vignettes, by George C. Crout (http://www.middle-america.org/crout/ mvvig/pioneers.html): Ray wrote a series of texts which made arithmetic understandable to elementary pupils. Joseph Ray was a professor at Woodward College, later becoming its president. In addition to his work at the Cincinnati college, he was a state leader in education. Ray compiled a set of three texts in mathematics, taking the student from simple processes to advanced ones. His third book was used in both high school and colleges. The series was published in Cincinnati. Even after his death in 1865, the Ray textbook series dominated the textbook field in mathematics until the early 1900's. The text has a wonderful Preface, part of which is reproduced here. Algebra is justly regarded one of the most interesting and useful branches of education, and an acquaintance with it is now sought by all who advance beyond the more common elements. To those who would know Mathematics, a knowledge not merely of its elementary principles, but also of its higher parts, is essential; while no one can lay claim to that discipline of mind which education confers, who is not familiar with the logic of algebra. It is both a demonstrative and a practical science - a system of truths and reasoning, from which is derived a collection of Rules that may be used in the solution of an endless variety of problems, not only interesting to the student, but many of which are of the highest possible utility in the arts of life. Those were the days! This sentiment is still alive today in the current debate concerning “algebra for all.” Of course, we also still make the claim that algebra is “useful.” The text starts with definitions, notation, and the fundamental rules of arithmetic, including operations with polynomials, all in Chapter 1. The description of operations with monomials is much like a modern text with the exception of the use of the vinculum (a horizontal bar) along with parentheses. For example, There is no mention of FOIL, but there is an interesting method of multiplying and dividing polynomials called the “method of detached coefficients.” Ray states “this method is applicable where the powers of the same letter increase or decrease regularly.” For example, to multiply by : 1 - 3 + 0 + 1 1 + 0 - 1 1 - 3 + 0 + 1 - 1 + 3 - 0 - 1 1 -3 -1 + 4 -0 -1 the answer is . In the next two chapters we move into factoring (factoring of quadratic trinomials is done “by inspection”) and working with algebraic fractions, which we would call rational expressions. This is all done in the fairly standard “modern” way. The lone exception is the work done on converting fractions into infinite series. For example, ( 1 - x ) / (1 + x ) is written as an infinite series using long division. (continued on page 5) (continued from page 4) In Chapters 4 and 5 Ray moves into solving equations, starting with the “simple equation” (linear equation). This is done in the usual way, but Ray includes some interesting word problems, as in this example: “A smuggler had a quantity of brandy, which he expected would sell for 198 shillings; after he had sold 10 gallons, a revenue officer seized one third of the remainder, inconsequence of which, what he sold brought him only 162 shillings. Required the number of gallons he had, and the price per gallon.” There are also included many problems that we would recognize (Plus ça change...). Classic problems such as division of items (“a sum of money is to be divided among five persons so that ...”); work problems (“If A does a piece of work in 10 days...”); traveling problems (“There are two places, 154 miles distant from each other, from which two persons, A and B, set out at the same instant...”); number problems (“There are three numbers whose sum is 187...”); and purchasing problems (“If 10 apples cost a cent, and 25 pears cost 2 cents, ...”). Ray then discusses systems of two linear equations (no solution by graphing, though) and literal equations. We now move on to powers and roots in Chapter 6. Interestingly, the binomial theorem is stated (as Newton’s Theorem) but Pascal’s triangle is nowhere to be found. Ray shows how to find square roots and cube roots of numbers and polynomials. (For the younger folks out there, send me an email if you want to know the method!) The sections that follow deal with radicals, including fractional exponents and “imaginary, or impossible quantities.” The chapter ends with a section on simple inequalities. The solution of quadratic equations begins in Chapter 7. First we solve the pure quadratic, which “contains only the second power of the unknown quantity, and known terms” and then the “affected quadratic,” which “contains the first and second power of the unknown quantity, and known terms.” The affected quadratic is first solved by completing the square. Next the affected quadratic is solved by the “Hindoo [sic] Method.” This method was known to Brahmagupta (b. 598) and Ray describes it much as Brahmagupta did, except Ray uses modern notation. 1st. Reduce the equation to the form 2nd. Multiply both sides by four times the coefficient of . 3rd. Add the square of the coefficient of x to each side, extract the square root, and finish the solution. As an example, consider . Multiply both sides by 8: . Add 25 to both sides: . Extract the root: 4x - 5 = , etc. Next in the text is a discussion of the theory of quadratic equations, a look at equations that are quadratic in form, theorems concerning the roots of quadratic equations, theorems concerning imaginary roots, and so on. The chapter ends with a discussion of the solution of two simultaneous quadratic equations in two variables. Chapter 8 is concerned with ratios, proportion and progressions. Included here is a discussion of the mean proportion of two numbers, alternation, inversion, and composition of proportions, harmonic proportions, arithmetical, geometrical, and harmonic progressions, including the sums of arithmetic and geometric series. In Chapter 9 Ray discusses permutations, combinations, and the binomial theorem. The notation for combinations is not used. Instead Ck is used, where it is assumed n is known. Infinite series is the topic covered in Chapter 10, along with the general Binomial theorem and decomposition of fractions into partial fractions, which Ray calls “decomposition of rational fractions.” This topic is in this chapter because of its relationship to the technique of indeterminant coefficients for finding the terms of a series expansion. Work with series is done in the spirit of Newton: treating infinite sums as finite sums with respect to performing algebraic operations on them. Here is an example. Thus it is required to develop 1/ (3x - x2) and we assume the series to be , etc., we have after clearing of fractions [multiply both sides by 3x - x2] , , etc. from which, by equation the coefficients of the same powers of x, 1 = 0, 3A = 0, etc. The first equation, 1 = 0, being absurd, we infer that the expression cannot be developed under the assumed form. But, Putting ,etc., clearing of fractions, and equating the coefficients of the like powers of x, we find , , , , etc. Hence Or, since the division of 1 by the first term of the denominator gives , or 3 x -1 we ought to have assumed , etc. (continued on page 6) (continued from page 5) Work done with series is also done in the spirit of Leibniz, using the so-called “differential method of series.” This method is based on sequences of differences. Let the series [Ray’s term] be a, b, c, d, e,... ; then the respective orders of differences are, first order b - a, c - b, d - c, e - d, ... second order c - 2b + a, d - 2c + b, e - 2d + c, ... third order d - 3c + 3b - a, e - 3d +3c - b, ... fourth order e - 4d + 6c - 4b + a, .... If we denote the first terms of the 1st, 2nd, 3rd, 4th, etc., orders of differences by D1, D2, D3, D4, etc., and invert the order of the letters we have D1 = - a + b; D2 = a - 2b + c; D3 = - a + 3b - 3c + d; D4 = a - 4b + 6c - 4d + e, etc. Here, the coefficients of a, b, c, d, etc., in the nth order of differences, are evidently those of the terms of a binomial raised to the nth power; and their signs are alternately positive and negative. From this the author shows how to find the nth term of a series a, b, c, d, e,... using differences: D1 = - a + b; whence b = a + D1 D2 = a - 2b + c; whence c = a + 2D1+D2 D3 = - a + 3b - 3c + d; whence d = a + 3D1+3D2+D3 D4 = a - 4b + 6c - 4d; whence e = a + 4D1+6D2+4D3+D4. This technique is then applied to counting the number of balls in triangular and rectangular piles of cannon balls. This is the only place in the book where illustrations appear; there are illustrations of piles of cannon balls! The chapter concludes with a look at “recurring series,” what we would call recursive sequences. In Chapter 11 Ray discusses continued fractions, logarithms, exponential equations, interest, and annuities. In the logarithm sections, time is spent on computing common logarithms using a table of logarithms. Next there is a brief section on the rules of single and double position. These are techniques for solving linear equations of the form ax + b = m. These techniques were known to the ancient Egyptians and were used in medieval Europe under the name “regla falsi,” or “false position.” The technique involves making two guesses x1 and x2 and finding the differences e1 and e2 between ax1 + b and m and ax2 + b and m. This section is placed here in the text because this technique is used to solve exponential equations of the form xx = a. An example is given, to solve xx = 100. Begin by rewriting as x log x = 2. First supposition Second supposition x = 3.5; log x = .544068 x = 3.6; logx = .556303 x log x = 1.904238 x log x = 2.002690 a = 2 a = 2 error = -.095762 error = .002690 Diff results : diff assumed nos. : : Error 2nd result : Its cor. .098452 : 0.1 : : .002690 : 0.00273 Hence x = 3.6 - .00273 nearly. The sections on interest and annuities are very similar to what is in books now, with the exception that all the formulas are derived using properties of series, instead of just being given. In Chapter 12 the general theory of equations is discussed, including the relationship between the coefficients and roots of equations, the factor theorem, the Fundamental Theorem of Algebra, Descartes’ rule of signs, the transformation of equations, and Sturm’s theorem. Chapter 13 ends the book with a discussion of numerical solutions of polynomial equations, including Horner’s and Newton’s methods. Also included is Cardano’s rule for solving cubics! This is quite a text. We do not know how much of the book was covered in the course that used it. It is important to keep in mind that this course was required of all students at Indiana Normal. I hope you enjoyed this look into the past. It would be interesting to learn how many of these topics were covered in later years. You can help by going through your old textbooks (or just going through your memories) and dropping us a note. As always, let me know what you think and please feel free to get involved. Send (via email, FAX or U.S. Mail) what mathematics/education courses you took, the professors’ names, what textbooks you used, and when to: Gary Stoudt Department of Mathematics Stright Hall Indiana University of PA Indiana, PA 15705 FAX (724) 357-7908 We will get to your era soon enough! Write to Us Send us your comments and suggestions on the newsletter or let us know what you are doing. You can write us at: Department of Mathematics Indiana University of Pennsylvania 233 Stright Hall Indiana, PA 15705-1072 You can send email to us at:
{"url":"http://www.iup.edu/page.aspx?id=18095","timestamp":"2014-04-19T19:47:57Z","content_type":null,"content_length":"55091","record_id":"<urn:uuid:c8dcbe20-d9df-4988-a734-8ff285ed88ef>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00012-ip-10-147-4-33.ec2.internal.warc.gz"}
Conservation of momentum and energy contradiction? I understand it mathematically but not physically. It's my first time looking into these collisions. How does this work? Which process turns the uniform motion into other forms of energy? Why is it not dependent on other properties? Why does sticking together leads to KE loss? Try this. Do NOT assume that the balls stick together afterwards. Assume the initial conditions you gave. Conserve BOTH momentum and energy, and see what you expect the system to do. Now, ask yourself: Why do the balls not naturally stick together. The answer is, you need extra energy to force the balls to stick together. This energy comes from the kinetic energy of the ball. Thus, some of the energy is lost there and we cannot account for it. Of course, total energy = KE of both balls afterwards + Energy lost in forcing balls together is still conserved. But since we cannot calculate the latter (atleast directly), we cannot use energy conservation. You may ask of course, Why is MOMENTUM not lost in forcing the balls to stick together. The answer is that momentum is ALWAYS conserved if there are no net force. (Energy conservation is a more subtle issue). In this case, the balls experience forces that make it stick together, but all these forces are internal and effectively cancel out leaving zero net force. Thus, you may use momentum conservation, but not energy conservation here.
{"url":"http://www.physicsforums.com/showthread.php?s=02ea70d1292f00a62cbd84e0a33e2e76&p=3822717","timestamp":"2014-04-19T09:44:47Z","content_type":null,"content_length":"54166","record_id":"<urn:uuid:c9b99171-bbb8-4f5e-a27f-a927e3ad59ba>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00318-ip-10-147-4-33.ec2.internal.warc.gz"}
probability or odds, not sure which it is June 27th 2010, 09:40 AM probability or odds, not sure which it is I have 338 songs on my ipod. There are 17 songs from artist X. What are the odds a song from artist X will play? What are the odds 5 songs from artist X will play in a row given the ipod is still playing at random? For the life of me I cannot remember how to do this, thanks! June 27th 2010, 11:09 AM for the first one, if we are talking about the very next song, the probability that the author of it is X would be 17/338. So the odds would be 17/(338-17). second one, if we assume that once the song was played in can no longer be repeated: --------------------------- = 0.000000173 This is the probability that there would be 5 songs by X in a row. => The probability that there would not be such situation is 0.999999827. => the odds would be 0.000000173/0.999999827 otherwise, if repetition is allowed, P (5songs in a row) = (17/338)^5, the probability of the opposite one is 1-(17/338)^5. => odds would be ((17/338)^5)/(1-(17/338)^5)
{"url":"http://mathhelpforum.com/statistics/149513-probability-odds-not-sure-print.html","timestamp":"2014-04-17T06:53:06Z","content_type":null,"content_length":"4323","record_id":"<urn:uuid:95c5b3ed-4f26-4669-bdb6-4a1a605f5210>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00262-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions - Re: When math makes sense - w/ cooking, consruction Date: Jun 4, 2013 4:39 AM Author: Robert Hansen Subject: Re: When math makes sense - w/ cooking, consruction On Jun 3, 2013, at 7:32 PM, kirby urner <kirby.urner@gmail.com> wrote: > Basic finding an unknown, rules of equality, we can call those "algebra skills" and we can keep weaving in that same material. That isn't algebra. That is the dumbed down version that it became after decades of mass trying. That is algebra after you strip it of its art and sense. Algebra is making sense of one or more mathematical relationships and using that sense as the means to some mathematical end. It isn't any particular result or some finite set of instructions like the recipe for banana bread. That is why I don't care how many computers you have. Without that sense and art you can't apply those computers to "algebra" any more than a chimpanzee can write Shakespeare using a typewriter. This happens to many teachers. Day in and day out they are going through the motions of teaching mathematics to brand new faces and they forget the point of the process. They forget the pedagogy and development or they never understood it in the first place. Or I suppose they are charged with really difficult cases. They moan "Why am I teaching kids to add numbers? A calculator can add numbers!" And I tell them "You are not teaching them just to add numbers. You are teaching them the sense of ADDITION, and if you don't use numbers what the hell are you going to use?" We don't need to teach children that calculators can add numbers. We need to teach children what "add numbers" means. There is no other way to do that with sufficient payback (acquisition of senses) than to have children add numbers. The same applies to algebra. You cannot show students how to solve with tools (computers) before they have developed the personal sense of what "solve" means. They have to experience it and the nuances of it. Mentally. You are in a state of euphoria about all the things a middle aged Princeton educated man can talk about but your chief problem is that you seem to have no recollection or sense of how you got to where you are. You want to start these kids at the end of that journey, rather than at the beginning. Yet, in previous discussions you are quick to defend older texts like Dolciani. If I were you, I would get a copy of a textbook you studied as a child and go through it start to finish and try to put yourself back in that time. Try to remember the discussions and exercises and your transition from not knowing to knowing. There is no easy button to all of this. There are more computers in this world and they are more powerful than you or I would have ever imagined as children. And everyone has one. There has been no math revolution because it has nothing to do with computers. It has to do with thinking and being smart and the technology of thinking and being smart hasn't changed in the last 50,000 years and will not change in the next 50,000 years. Bob Hansen
{"url":"http://mathforum.org/kb/plaintext.jspa?messageID=9130289","timestamp":"2014-04-21T00:19:16Z","content_type":null,"content_length":"3992","record_id":"<urn:uuid:e4da22b2-88ea-4dff-b98e-1bc9b5ef7a28>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00132-ip-10-147-4-33.ec2.internal.warc.gz"}
3psi for a circle using the metric system April 7th 2011, 12:35 AM 3psi for a circle using the metric system Im trying to figure out a metric formula for Pi(r)²(3psi) without having to do any conversions. I will be using mm instead of in, and kg instead of lbs. Is this possible? Everything I have tried does not equal the same amount as when I do the whole problem in non-metric, then convert the lbs result to kg. I looked at other places online, but that started getting me into kPa which I have no idea about. Thanks in advance for you help! April 7th 2011, 01:04 AM Im trying to figure out a metric formula for Pi(r)²(3psi) without having to do any conversions. I will be using mm instead of in, and kg instead of lbs. Is this possible? Everything I have tried does not equal the same amount as when I do the whole problem in non-metric, then convert the lbs result to kg. I looked at other places online, but that started getting me into kPa which I have no idea about. Thanks in advance for you help! 1. I assume that you want to calculate the value of a force, right? 2. Forces are measured in N (Newton). kg is the unit for a mass. The weight (force!) of a mass of 1 kg is nearly 9.81 N (on Earth). 3. A force of 1 lb correspond to a force of 4.448222 N 4. the area of $1\ square-inch = (25.4\ mm)^2 = 645.16\ mm^2$ 5. Your formula becomes: $\pi \cdot r^2 \cdot 3\ psi = \pi \cdot r^2 \cdot 3 \ lb \cdot \dfrac{4.448222\ \frac N{lb}}{645.16\ mm^2}$ 6. Since you measure the radius in mm the resultant dimenson of your formula is N. April 7th 2011, 05:09 AM got it. Thank you!
{"url":"http://mathhelpforum.com/geometry/177103-3psi-circle-using-metric-system-print.html","timestamp":"2014-04-16T08:24:46Z","content_type":null,"content_length":"6004","record_id":"<urn:uuid:69e3d016-46c5-4b07-9fb1-7911b1d5ccd4>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00659-ip-10-147-4-33.ec2.internal.warc.gz"}
[SciPy-user] negative values in diagonal of covariance matrix Pauli Virtanen pav@iki... Fri Dec 12 03:25:06 CST 2008 Fri, 12 Dec 2008 13:49:55 +0900, David Cournapeau wrote: > Do you mean the python wrapper miss some diagnostic information > available in fortran ? Otherwise, would it make sense to check the ier > value from fortran and at least generate a warning about failed > convergence ? The Python wrapper to minpack.lmder is quite thin, I'd expect any problems to be in the MINPACK code itself (which is actually LMDIF and not LMDER as I wrote earlier). The algorithm itself is something like Levenberg-Marquardt with a trust region. Unless the problem is in the minpack.leastsq code that forms the cov_x matrix from return values of LMDIF: perm = take(eye(n),retval[1]['ipvt']-1,0) r = triu(transpose(retval[1]['fjac'])[:n,:]) R = dot(r, perm) cov_x = inv(dot(transpose(R),R)) I don't have the time to check this now, though. Pauli Virtanen More information about the SciPy-user mailing list
{"url":"http://mail.scipy.org/pipermail/scipy-user/2008-December/019061.html","timestamp":"2014-04-17T13:27:55Z","content_type":null,"content_length":"3500","record_id":"<urn:uuid:8be83f04-319e-498e-8ac8-4f2284ad59ec>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00211-ip-10-147-4-33.ec2.internal.warc.gz"}
It's Okay To Be Smart Time in Tens Today I learned: During the 18th and 19th centuries there were passionate efforts to institute decimal time, a day divided into 10 hours, each consisting of 100 minutes, which would be further divided into 100 decimal seconds. Just like gunpowder, paper money, and countless other things, decimal time was developed long ago in China, as far back as 2,000 years ago, only to be eliminated in the 1600’s by those pesky European Jesuit missionaries and the oh-so-logical dozenal (12-based) time system that we all know and love. The French Revolution saw the most recent push for decimal time, with democratic reformers insisting on a base-10 calendar, and even manufacturing base-10/base-12 combo clocks like the one above (via Wikipedia). As recently as 1893, smart guy extraordinaire Henry Poincaré was pushing for a standard decimal time. But since so much of the world, from maritime navigation to daily appointment-keeping, had been built on the time that we still use today (and since making everyone buy a new clock is just mean), decimal time never caught on. Nice to know that the U.S. and its failure to adopt the metric system isn’t the only decimal failure in modern history! Mathematical knowledge is unlike any other knowledge. Its truths are objective, necessary and timeless. Edward Frenkel, author of the fantastic Love and Math: The Heart of a Hidden Reality, considers whether the universe is a simulation. (via explore-blog) Happy Valentine’s Day! I would have gotten you a card/flowers/candy, but this is so much better. Just copy/paste this into a Google search: sqrt(cos(x))cos(300x)+sqrt(abs(x))-0.7)*(4-x*x)^0.01, sqrt(6-x^2), -sqrt(6-x^2) from -4.5 to 4.5 One, Two, Eleven, Twelve: I drew up one of my favorite math tricks, which works in numerals Arabic and Roman, as well as in letters. Neat! Originally seen on the Numberplay blog. The Infinite Hotel Paradox - One of the best math thought experiments ever devised, by Jeff Dekofsky for TED-Ed. Luckily, I made an infinite reservation, so I should be just fine. After the last, here’s some more great math art from Mike Naylor. From the top: • A star counting 0-444 in base 5 • A staircase counting 0-333 in base 4 • Fractal squares representing the binary numbers 0 to 127 (0 to 1111111) If this is your thing, I’ve featured more math art in the past. Check out these mathematical tapestries from Albuquerque-based artist Donna Loraine Contractor. "Melt Into You" Two bodies, fractally intertwined, eventually dissolving into one. That’s some beautifully poetic mathematical art, courtesy of Mike Naylor. These boards are so pretty. I would have no heart to erase them. Totally forgot to carry the 2 there, pal. (via visualizingmath) Source: magictransistor.com Series of posters created for the love of math, nature, art, and education. Prints available: http://meganemoore.storenvy.com/ If only there were one for reticulated splines, the set would be complete! Source: cultivatevision This GIF makes my brain hurt in the best way. Does this mean coastlines are infinite? I think Veritasium did a video about that once. Source: reuben-thomas
{"url":"http://www.itsokaytobesmart.com/tagged/math/page/2","timestamp":"2014-04-19T12:59:58Z","content_type":null,"content_length":"108888","record_id":"<urn:uuid:07d6d084-52af-488e-8ad7-4b319eb5e8fe>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00355-ip-10-147-4-33.ec2.internal.warc.gz"}
The "IT DOESN'T WORK!" thread Re: The "IT DOESN'T WORK!" thread An erratum: Jplus wrote:[...] and you could be more efficient if you moved indices from both ends of the list. Let the left index go up until it meets a value that is [S:bigger:S]not smaller than the pivot, let the right index go down until it meets a value that is [S:smaller:S]not larger, break the loop if the left index and the right index have met or swap and repeat otherwise. (I'm going to fix this in the quoted post.) I read the dual pivot paper, but I don't think it's faster than the normal, single-pivot method. If you partition the array into three parts you'll need to recurse less deep on average, but you make more recursive function calls. You also have to perform more index calculations and you perform two comparisons per element per partitioning pass instead of one (the paper suggests that single pivot methods need two comparisons per element too, but this is incorrect). Choosing a sensible pivot also becomes slightly harder. The author of the paper clearly didn't have a very strong background in algorithmics. At best it's going to be not much slower , but you can test how it works for you anyway. PM 2Ring wrote: Jplus wrote:Finally, you can improve a little by delaying the insertion sort until the end. If the subrange is shorter than your insertion sort threshold, just return directly from the quicksort function. When the topmost quicksort has returned, do a single insertion sort pass over the entire array. Nice trick, Julian! Thanks. It wasn't my invention though. Read about , the algorithm that was used in the C++ standard library. (It states that the final insertion pass doesn't make sorting faster, but that's in C++ where all function calls can be automatically inlined by the compiler.) Re: The "IT DOESN'T WORK!" thread Julian, I've handled optimizations two through four but I'm having a little trouble grasping what needs to be done for: First of all, you're currently doing more swaps than strictly needed. There is no need to move the pivot to the end of the array, and you could be more efficient if you moved indices from both ends of the list. Let the left index go up until it meets a value that is not smaller than the pivot, let the right index go down until it meets a value that is not larger, break the loop if the left index and the right index have met or swap and repeat otherwise. I'm sure I actually just need to take a close look at what needs to happen and I'll work it out; but could you perhaps clarify a bit. If I understand correctly, I need two indexers (left and right) that will compare the values of the items at those indices with the pivot index value. When the left index finds a value not smaller than the pivot index value and the right index finds a value not larger than the pivot index value I compare them and if the left index value is greater than the right index value I break the loop. Otherwise I swap the left and right index values and repeat the Here's the Insertion Sort so far: Here's the Quicksort so far: I tested some more and changed the switching threshold to 20. The current average runtime is ~8 seconds. Re: The "IT DOESN'T WORK!" thread Breakfast wrote:I'm sure I actually just need to take a close look at what needs to happen and I'll work it out; but could you perhaps clarify a bit. If I understand correctly, I need two indexers (left and right) that will compare the values of the items at those indices with the pivot index value. When the left index finds a value not smaller than the pivot index value and the right index finds a value not larger than the pivot index value I compare them and if the left index value is greater than the right index value I break the loop. Otherwise I swap the left and right index values and repeat the loop. Ok, basically what you're saying here seems to be correct but I now realise I've been a bit ambiguous in my description. Let's distinguish the index value, which represents a position in your list, from the key value at that position. Changing your words accordingly, this would be the correct interpretation: I need two index values (left and right) that will compare the key values at the corresponding positions with the pivot (key value). When the left index points to a key value not smaller than the pivot and the right index points to a key value not larger than the pivot I compare the index values and if the left index value is greater than or equal to the right index value I break the loop. Otherwise I swap the key values pointed to by the left and right indices and repeat the loop. Your insertion sort appears to do exactly what I suggested. Your median-of-3 pivot selection could be more efficient. Currently you're doing at least 2 and at most 8 comparisons, but you could do it in at least 2 and at most 3 comparisons. For the challenge I'll let you figure that out by yourself. It's slightly disconcerting that your sort is still taking 8 seconds, I'd expect that it'd have sped up considerably by now already. Re: The "IT DOESN'T WORK!" thread So i am trying to learn a bit about how computers work at a very low level. ( It's fascinating :O ) And i fount a nice tutorial that explains a lot about how a boot loader works, as well as a short tutorial. Obviously, i'm jumping into a lot, but i want to try. So this should be a simple "Hello world" boot loader. Code: Select all [BITS 16] [ORG 0x7C00] MOV AL, 65 CALL PrintCharacter JMP $ MOV AH, 0x0E MOV BH, 0x00 MOV BL, 0x07 INT 0x10 TIMES 510 - ($ - $$) db 0 DW 0xAA55 It compiled fine, but i can't for the life of me figure out how to ( successfully ) put it into the boot sector of an iso. ( I am hoping to be able to use VirtualBox in order to run this. I should mention that i'm using MAC OS X and have an ok ability with the console. so I compile this Assembly to boot.bin, and have tried using dd in order to turn this into an ISO file. Code: Select all dd if=boot.bin bs=512 of=boot.iso Again, everything worked perfectly. ( I don't see why it shouldn't, it's a direct binary copy! ) However, i'm not sure if i'm doing this right, because the VM complains that it could not use this ISO. Do i need to mount an empty ISO and do instead something like this? Code: Select all dd if=boot.bin bs=512 of=/dev/[mounted iso here] in order to preserver the ISO? How would i create an ISO file to do so if that's the problem? thank you for your help in this. Re: The "IT DOESN'T WORK!" thread The way bootable CDs work is quite weird... they don't actually have an MBR, or a partition table, or anything of the sort. What they actually have, is that you make a 1.44MB (that is, 1.44*1000*1024 bytes exactly) floppy-disk image, and then that gets stuffed into the ISO headers when you build the CD (look up the -b flag for genisoimage). Then when you boot from it, it loads that image as a virtual floppy drive, and then boots from that. Don't ask me why they designed it this way. I'm not 100% certain, but I think the process is something like: Code: Select all dd if=/dev/zero of=boot.img bs=512 count=2880 dd if=boot.bin of=boot.img bs=512 count=1 conv=notrunc genisoimage -b boot.img -o boot.iso somefile.txt (the somefile.txt is because genisoimage will refuse to make a CD with no files on it) Probably an easier way would be to just make that 1.44MB floppy disk image, and then mount it directly in vbox as a floppy disk. That way you don't need to re-gen your ISO every time you make a change to your stuff;. While no one overhear you quickly tell me not cow cow. but how about watch phone? Re: The "IT DOESN'T WORK!" thread phlip wrote:Don't ask me why they designed it this way. Yeah, it's a bit weird, but I guess that design made it easier to get it to work with existing BIOS code. FWIW, there's also no-emulation mode. See El Torito (CD-ROM standard). Your virtual machine suggestion is good, phlip, but I'd probably just use a USB stick for something like this. Of course, that requires the BIOS being able to boot off the stick. While on the topic of bootloaders, I feel obliged to mention the Syslinux project. There's a lot of interesting info in the Syslinux docs. Re: The "IT DOESN'T WORK!" thread I'm not sure if I'm a complete idiot or if it's just been too long since I worked regularly in c++, but I'm having a 2d array problem. The following is an initialization function from a neural net AI I'm developing. Code: Select all void WeightedNet::setup(char* configFile) ifstream in; int numLayers; if (in.is_open()) in >> numLayers; layers = numLayers; layerSizes = new int[numLayers]; allNodes = new WeightedNode*[numLayers]; int nodesThisLayer, connsThisLayer; for (int cntr = numLayers-1; cntr >=0; cntr--) in >> nodesThisLayer >> connsThisLayer; layerSizes[cntr] = nodesThisLayer; allNodes[cntr] = new WeightedNode[nodesThisLayer]; for (int ndx = 0; ndx < nodesThisLayer; ndx++) WeightedNode blank; allNodes[cntr][ndx] = blank; for (int ndx = 0; ndx < connsThisLayer; ndx++) if (cntr < numLayers-1) int thisNode, connNode; float connWeight; in >> thisNode >> connNode >> connWeight; allNodes[cntr][thisNode].addConn(allNodes[cntr+1][connNode], connWeight); allNodes[cntr][thisNode].layer = cntr; allNodes[cntr][thisNode].index = thisNode; allNodes[cntr][ndx].layer = numLayers - 1; allNodes[cntr][ndx].index = ndx; outSize = layerSizes[layers-1]; Execution halts on the line Code: Select all allNodes[cntr] = new WeightedNode[nodesThisLayer]; and I can't figure out why. The only error I get is Code: Select all Signal received: SIGSEGV (?) with sigcode ? (?) From process: ? For program neural_nets, pid -1 You may discard the signal or forward it and you may continue or pause the process To control which signals are caught or ignored use Debug->Dbx Configure Which isn't telling me anything. Oh, and the weird thing is that in checking whether the constructor on the node was being called I found out that it is, in fact, being called 16172 times*. I have no idea what would cause that, "nodesThisLayer" has a value of 1 at this point. Thoughts? *You're probably thinking the same thing I was, but no, 16172 is not a power of 2. 2^14 is 16384. "It is bitter – bitter", he answered, "But I like it Because it is bitter, And because it is my heart." Re: The "IT DOESN'T WORK!" thread I don't see any obvious error (yet), but I do have a question that might help us clarify. What is the value of numLayers? Re: The "IT DOESN'T WORK!" thread At that point in execution "It is bitter – bitter", he answered, "But I like it Because it is bitter, And because it is my heart." Re: The "IT DOESN'T WORK!" thread I've snipped most lines: Spambot5546 wrote: Code: Select all allNodes[cntr] = new WeightedNode[nodesThisLayer]; for (int ndx = 0; ndx < connsThisLayer; ndx++) in >> thisNode >> connNode >> connWeight; allNodes[cntr][thisNode].addConn(allNodes[cntr+1][connNode], connWeight); allNodes[cntr][ndx].layer = numLayers - 1; allNodes[cntr][ndx].index = ndx; Are you sure that ndx < nodesThisLayer? Or rather that connsThisLayer < nodesThisLayer? Are you sure that thisNode < nodesThisLayer? Given that you are reading some numbers from a file, you really must put some sanity checks in so that you see immediately when things start to wrong. An extra space in the file causes all the numbers to go out of sync with your program. I think that you are probably overwriting memory at some iteration of the loop, and that it only crashes at a later iteration. Re: The "IT DOESN'T WORK!" thread I thought that, too, but if I throw in some debug lines thusly Code: Select all void WeightedNet::setup(char* configFile) cout << "In fx" << endl; ifstream in; int numLayers; if (in.is_open()) cout << "is open" << endl; in >> numLayers; layers = numLayers; layerSizes = new int[numLayers]; allNodes = new WeightedNode*[numLayers]; int nodesThisLayer, connsThisLayer; for (int cntr = numLayers-1; cntr >=0; cntr--) cout << "loop1" << endl; in >> nodesThisLayer >> connsThisLayer; layerSizes[cntr] = nodesThisLayer; allNodes[cntr] = new WeightedNode[nodesThisLayer]; for (int ndx = 0; ndx < nodesThisLayer; ndx++) cout << "loop2" << endl; WeightedNode blank; allNodes[cntr][ndx] = blank; for (int ndx = 0; ndx < connsThisLayer; ndx++) cout << "loop3" << endl; if (cntr < numLayers-1) int thisNode, connNode; float connWeight; in >> thisNode >> connNode >> connWeight; allNodes[cntr][thisNode].addConn(allNodes[cntr+1][connNode], connWeight); allNodes[cntr][thisNode].layer = cntr; allNodes[cntr][thisNode].index = thisNode; allNodes[cntr][ndx].layer = numLayers - 1; allNodes[cntr][ndx].index = ndx; outSize = layerSizes[layers-1]; Then I get the output Code: Select all In fx is open RUN SUCCESSFUL (total time: 166ms) Which both shows that it fails on the first execution but also begs the question "Why the tits does it say 'RUN SUCCESSFUL'!?" It so very clearly wasn't. Edit: Running it on my school's dev server (no, this isn't homework) which uses g++ in linux, it says "Segmentation fault (core dumped)" instead of "RUN SUCCESSFUL (total time: 166ms)". Edit2: Also, on the school's server it executes the constructor for WeightedNode 130,945 times, compared to the 16,172 times in Cygwin. This is seriously bizarre. Edit3: Guess who has two thumbs and is a moron. This guy! In trying to figure out why the constructor was being called so many times I took a look in the constructor and realized that I was initializing each nodes connections (which are also WeightedNode objects) resulting in infinite recursion and a segfault. Since I was doing that for literally no reason I just took those out and it executes more-or-less correctly. Still some debugging to do, but one mystery solved. "It is bitter – bitter", he answered, "But I like it Because it is bitter, And because it is my heart." Re: The "IT DOESN'T WORK!" thread Just for the record: when jaap said "insert some sanity checks" I don't think he meant "insert statements that report the point of execution". Rather, he meant "insert assertions to verify that your assumptions are correct". To go with the code that jaap quoted: Code: Select all assert(connsThisLayer < nodesThisLayer); allNodes[cntr] = new WeightedNode[nodesThisLayer]; for (int ndx = 0; ndx < connsThisLayer; ndx++) in >> thisNode >> connNode >> connWeight; assert(thisNode < nodesThisLayer); allNodes[cntr][thisNode].addConn(allNodes[cntr+1][connNode], connWeight); allNodes[cntr][ndx].layer = numLayers - 1; allNodes[cntr][ndx].index = ndx; Since the error wasn't in this part of your code it doesn't really matter anymore, but it's good to remember that inserting assertions into your code is generally a good idea. They help you to quickly identify errors in your assumptions and you can easily disable them at compilation with the option -DNDEBUG. All you need to do in order to use them is to #include <cassert>. Re: The "IT DOESN'T WORK!" thread This isn't so much a "doesn't work" problem as a "where/how to begin" problem, but I don't think it warrants a new thread. I'm using C++ for the first time for a school project emulating the different types of memory paging algorithms in such a way that it evaluates their effectiveness (page fault rates) and I want the number of available pages in "virtual" memory and the sequence of pages to be used/loaded to be input from the user. I'm going to be writing it in the Visual Studio IDE; the problem I'm having is that I've only used VS for BASIC and when doing so there's usually a default UI form to work with....there isn't with C++ and I'm unsure if it's expected that I code the UI myself (or if I should for that matter) or if there's something I'm completely missing because it's my first time working with C++. The interface doesn't need to be pretty or complex, I just need an easy way to prompt the user for their specifications, basically a label and a textbox. For what it's worth, I know this would be simple in Python, but my professor suggested using C++ so that I could use stacks (she feels like that's the simplest way to get the algorithms to work) and I've wanted to start learning the C-family of languages for awhile anyways. Re: The "IT DOESN'T WORK!" thread You're looking for a GUI, but one isn't necessary for this. Just make a basic console application and use cin to prompt the user for input. There ARE ways in Visual Studio to make windowed programs for C++. I think they even do WYSIWYG development. But unless a GUI is one of your requirements, don't waste your time. "It is bitter – bitter", he answered, "But I like it Because it is bitter, And because it is my heart." Re: The "IT DOESN'T WORK!" thread I just came back to say I figured out how to make a window (and that I need to be a little more patient in the future about looking and trying things) but thank you, you probably just saved me a bunch of time. edit: ....command line prompts always make me feel like i'm 6 years old again, in a very good way Re: The "IT DOESN'T WORK!" thread Quick question: When creating a dynamic memory array (not sure if it's specifically called a heap or not) in C++, if no values have been placed within the array are the blocks of memory considered to be holding null values, or something else? The slice of code in question: Code: Select all int virtualmem; char pagename; char * p; p = new (nothrow) char [virtualmem]; So will p[0] = '\0' ? Re: The "IT DOESN'T WORK!" thread No - if you create an object or an array of a non-trivial class, it'll be initialised, but if it's just of a basic data type, then it'll be uninitialised (so it'll contain whatever that patch of memory contained before)... While no one overhear you quickly tell me not cow cow. but how about watch phone? Re: The "IT DOESN'T WORK!" thread phlip wrote:No - if you create an object or an array of a non-trivial class, it'll be initialised, but if it's just of a basic data type, then it'll be uninitialised (so it'll contain whatever that patch of memory contained before)... Can I assume, at least when int virtualmem <=10 (or something comparatively small), that the system will be using free memory to create the new array? More specific to the problem I'm foreseeing for my program is that I need a way to distinguish between the initial value of p[for any k <= virtualmem] and the same block of memory once it has a value stored within it. It might be impossible to know the initial value, but it's my first time coding in C++ so I'm not very knowledgeable about alternate ways to do what I want. Basically, the very end of this piece is what I'm trying to figure out: Code: Select all for (k = 0; k < virtualmem; k++) if (pagename != p[k]; p[k] != // no idea what to put here as I don't know what the initial value will be ) Re: The "IT DOESN'T WORK!" thread You have to track that yourself. This is typically done either with wrapping all your stores in a light data structure (when implemented for real, this is sometimes done by reserving a few bits for tagging purposes, for example) so you can definitely tell the difference between "initial" and "holds a value", or by maintaining a parallel data store that tracks whether the main store has been written to or not in each index. (defun fibs (n &optional (a 1) (b 1)) (take n (unfold '+ a b))) Re: The "IT DOESN'T WORK!" thread Bharrata wrote:Can I assume, at least when int virtualmem <=10 (or something comparatively small), that the system will be using free memory to create the new array? I'm not sure what you mean by "free memory"... if you mean "memory that's not currently in use", then yes, it'll always be put in free memory, it'd be a poor allocator that let things overlap... if you mean "memory that's ever been used", then no, you can't assume that. But even if it was placed in brand new memory that's never been used since the computer was turned on, that still doesn't guarantee that it'd be filled with all-zeros. Bharrata wrote:More specific to the problem I'm foreseeing for my program is that I need a way to distinguish between the initial value of p[for any k <= virtualmem] and the same block of memory once it has a value stored within it. It might be impossible to know the initial value, but it's my first time coding in C++ so I'm not very knowledgeable about alternate ways to do what I want. Well, the usual way to make sure it's initialised is to just initialise it yourself after you create it: Code: Select all char *p = new char[virtualmem]; for (int i = 0; i < virtualmem; i++) p[i] = '\0'; Of course, this only works for your purpose if there's some value that you're never going to want to store in the array, that you can recognise as an "unused" value. If you could potentially want to store the entire range of values in there, then the usual trick is to have a separate value to store the state of the variable: Code: Select all char *p = new char[virtualmem]; bool p_filled = false; (If you need a separate "is filled" flag for each value in the array, you could make an array of such flags). As Xanthir suggested, if you're at all familiar with the classes and objects side of C++, you could wrap the value and is-filled flag together into a single class, and then have just an array (or, preferably, a vector/list/etc) of those. While no one overhear you quickly tell me not cow cow. but how about watch phone? Re: The "IT DOESN'T WORK!" thread phlip wrote: Bharrata wrote:More specific to the problem I'm foreseeing for my program is that I need a way to distinguish between the initial value of p[for any k <= virtualmem] and the same block of memory once it has a value stored within it. It might be impossible to know the initial value, but it's my first time coding in C++ so I'm not very knowledgeable about alternate ways to do what I want. Well, the usual way to make sure it's initialised is to just initialise it yourself after you create it: Code: Select all char *p = new char[virtualmem]; for (int i = 0; i < virtualmem; i++) p[i] = '\0'; Forehead slapping-ly simple solution Of course, this only works for your purpose if there's some value that you're never going to want to store in the array, that you can recognise as an "unused" value. It needs to recognize both "free" and "used but different value than the value being compared to it" so it works perfectly. (If you need a separate "is filled" flag for each value in the array, you could make an array of such flags). As Xanthir suggested, if you're at all familiar with the classes and objects side of C++, you could wrap the value and is-filled flag together into a single class, and then have just an array (or, preferably, a vector/list/etc) of those. Honestly, that was an idea I had initially for this project (in the pseudo-code) but for now I'm just going to use a separate counter variable to ++ each time the conditions for a value change are met, as again, this is my first foray into C-languages and I'm trying to take it step by step. I'd assume a <char, bool> tuple might be what you're describing? For how the project is structured it would work better as a "was used" flag rather than an "is filled". The char p[virtualmem] is smaller than the list being placed into it. As in, the pagename values come from an istream file which, for the sake of clarity, has "a b c d e f g" as individual characters on separate lines, which then line by line get compared with and possibly placed in p [virtualmem] which may only have 3 index positions but is initially n-sized. (So if pagename was an array instead of istream input it would be easy to flag the occurrence of the value use.) The number of occurrences of values being placed in p[virtualmem] is divided by the total number of pagename values to find the efficiency. Not very exciting, I know, but having taken only one programming course so far it gets me excited. It will be more interesting if I can modify it from a pseudo-FIFO policy to a LRU policy. Anyway, thank you muchly Xanthir and phlip, at the very least you've saved me the hassle of tracking down my professor tomorrow and educated me a bit more about the nuances of true OOP. Re: The "IT DOESN'T WORK!" thread Well, none of this is OOP so far... it's still just variables and stuff interacting with it. OOP would be wrapping this all into an object... something like: Code: Select all class FlaggedChar bool set; char value; FlaggedBool() : set(false) {} FlaggedBool(char v) : set(true), value(v) {} char getValue() if (!set) return '\0'; // alternatively: throw an exception return value; void setValue(char v) value = v; set = true; bool isSet() return set; void clear() set = false; (In a more-real C++ snippet, you'd want a bunch more code in here to handle other fanciness, like operator overloading and copy-constructors and templating and such... leaving those out here to just be a simple example for someone new to the language.) Then you'd be able to just go: Code: Select all FlaggedChar *p = new FlaggedChar[10]; // or, better, vector<FlaggedChar> p(10); if (p[0].isSet()) // This won't happen if (p[0].isSet()) // This will happen While no one overhear you quickly tell me not cow cow. but how about watch phone? Re: The "IT DOESN'T WORK!" thread I think OOP would be overkill in this case, though. Unrelated, I'd like to show some variants of this snippet. phlip wrote: Code: Select all char *p = new char[virtualmem]; for (int i = 0; i < virtualmem; i++) p[i] = '\0'; Using modern C++: Code: Select all #include <vector> typedef std::vector<char> vec_char; // ... vec_char p(virtualmem, '\0'); Using oldschool C: Code: Select all #include <stdlib.h> // ... char *p = (char*) calloc(virtualmem, sizeof(char)); Using oldschool C style in C++: Code: Select all #include <cstdlib> // ... char *p = static_cast<char*>(std::calloc(virtualmem, sizeof(char))); Last edited by Jplus on Tue Apr 10, 2012 11:12 am UTC, edited 1 time in total. Re: The "IT DOESN'T WORK!" thread (Technically, old-school C would be using malloc, not new[], so you'd just call calloc and be done with it...) While no one overhear you quickly tell me not cow cow. but how about watch phone? Re: The "IT DOESN'T WORK!" thread phlip wrote:(Technically, old-school C would be using malloc, not new[], so you'd just call calloc and be done with it...) Oops, right. I'll edit my post. Re: The "IT DOESN'T WORK!" thread ++C style (using C++ features to modify/make C code cleaner) Code: Select all template<typename T> T* calloc( size_t count = 1 ) return static_cast<T*>(calloc( count, sizeof(T) ) ); struct OnExit std::function< void() > doThis; static void doNothing() {} OnExit():doThis(doNothing) {} OnExit( std::function< void() > const& doThis_ ): doThis(doThis_) {} void abort( OnExit& onExit ) onExit.doThis = OnExit::doNothing; // ... char *p = calloc<char>( virtualmem ); OnExit cleanup_p( [p]{ if(p) free(p); } ); // ... return p; including a RAII based replacement for the "goto OnExit" C pattern. (This is what I sometimes do when I'm working on old style C code in a C++ compiler, and I don't want to do serious refactoring.) One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision - BR Last edited by JHVH on Fri Oct 23, 4004 BCE 6:17 pm, edited 6 times in total. Re: The "IT DOESN'T WORK!" thread I'm using Python to automate as much of my work as possible. I wrote a program that takes a bunch of word files and moves them to another spot, and I'm trying to add in functionality to print every file that's been moved. I looked into it, and the best way to do this appears to be by making an os.system() call. So here's my trouble: Python's not passing everything I want it to to the command line. The call is like this: Code: Select all os.system('"C:\\Program Files\\Someotherstuff\\Winword.exe "+"C:\\Someotherdirectory\\somedirectory\\file.doc "+"switches for word to make it print how I want"') That runs find when I put it directly into the CLI. However, when I make the os.system() call from Python, it quits faster than I can read it with error code 1. By adding '|| ping 1.1.1.1' to the end, I slowed it down to see what the command line itself was saying, and it's "'C:\Program' is not recognized as an internal or external command, operable program or batch file." So, it looks to me like what's happening is that Python is not escaping the first double quote for some reason, passing the literal 'C:\Program Files' to the CLI instead of what I want it to pass, which is '"C:\Program Files', so Windows is trying to execute C:\Program instead of the entire path given. I can't figure out how to get it to pass what I want, though. What's going on? Re: The "IT DOESN'T WORK!" thread system() is clunky, especially on Windows with its never-reliable shell-quoting system. Stackoverflow says: use subprocess.call instead, as that takes the command-line parameters as separate arguments instead of one long command string. Another alternative is to use os.startfile(filename, "print"), and let shell32 do all the work of looking everything up and calling Word appropriately. This should behave identically to right-clicking the file in Explorer and choosing "Print". While no one overhear you quickly tell me not cow cow. but how about watch phone? Re: The "IT DOESN'T WORK!" thread Oh, heh. My Google Fu is weak. Thanks a bunch, phlip! I'll try those out tomorrow. Re: The "IT DOESN'T WORK!" thread Ahhhh why is the [] operator on STL maps sooooo stupid. I really appreciate how you can't use it on a const map. That's nice. (Sorry, that is all. We return you to your regularly-scheduled broadcast. Also, I know why it is that way, and the choice was somewhat reasonable, but still... it's obnoxious. Actually I guess I don't like the behavior of the [] operators in any STL class. :-p) Re: The "IT DOESN'T WORK!" thread phlip wrote:Another alternative is to use os.startfile(filename, "print"), and let shell32 do all the work of looking everything up and calling Word appropriately. This should behave identically to right-clicking the file in Explorer and choosing "Print". worked nicely. I'll toy around with subprocess if I ever need to twerk the program, but since the os module was already imported (and I am supremely lazy) this was an excellent fix. Re: The "IT DOESN'T WORK!" thread [quote="EvanED"]Ahhhh why is the [] operator on STL maps sooooo stupid. I really appreciate how you can't use it on a const map. That's nice.[\quote]Hmm? If you want to find only, find()!=end() idiom You do not like auto creation eh? One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision - BR Last edited by JHVH on Fri Oct 23, 4004 BCE 6:17 pm, edited 6 times in total. Re: The "IT DOESN'T WORK!" thread Yakk wrote: EvanED wrote:Ahhhh why is the [] operator on STL maps sooooo stupid. I really appreciate how you can't use it on a const map. That's nice.[\quote]Hmm? If you want to find only, find()!=end() idiom works. You do not like auto creation eh? As far as I'm concerned, op[] is the easiest thing to use and should have the most desirable semantics. In the case of vector, I strongly feel this means it should do bounds checking. (And at() -- or even better, some unsafeAt() function -- should not.) If you're not doing bounds checking, it seems to me like half the benefit of vectors over raw arrays has evaporated. And so the present design makes everyone choose between the better syntax (v[x]) which people are used to and essentially every other programming language uses, and better semantics (v.at(x)). Now, in the case of map, consider an alternative design where there was no auto-insertion, and it instead threw an assertion. Under this design, if you did want auto-insertion, you'd have to call some function. m[x] turns into m.setdefault(x) or something (using Python's name). But under the current API, what do I have to do if I want that behavior? I need something like Code: Select all map<blah>::const_iterator iter = mymap.find(item); if (iter == mymap.end()) { throw Something(); then use iter->second. So one expression has exploded into multiple statements. Do you know how many times I've written a little helper function that does that so I can use the damn thing in an expression? Now, in part this isn't really the fault of [], it's the fault of the fact that the standards committee didn't add what you might say is the map equivalent of vector's at(). It'd be perfectly possible to envision such a function being present, and it would at least go a long way to alleviating my complaint, though what I said above about using worse syntax for the more natural operation still applies. And this may be a symptom of the "vocal minority", but I sort of feel like I do more manipulations of const maps than I do of non-const maps. Also, auto-insertion has another problem, which is that you can't use map's [] with a type that doesn't have a default constructor. This is also obnoxious. Re: The "IT DOESN'T WORK!" thread I think you and I have different tastes (for example, I don't think vector::at has "better" semantics than vector::operator[]). I imagine that to your taste, C++ is not a very nice language. Re: The "IT DOESN'T WORK!" thread Jplus wrote:I think you and I have different tastes (for example, I don't think vector::at has "better" semantics than vector::operator[]). I imagine that to your taste, C++ is not a very nice To a large extent that's true, to be honest. I'd much rather be using something else, but circumstances force my hand. Re: The "IT DOESN'T WORK!" thread I hope you don't mind that I fixed yoru quote-chain. EvanED wrote: Yakk wrote: EvanED wrote:Ahhhh why is the [] operator on STL maps sooooo stupid. I really appreciate how you can't use it on a const map. That's nice.[\quote]Hmm? If you want to find only, find()!= end() idiom works. You do not like auto creation eh? As far as I'm concerned, op[] is the easiest thing to use and should have the most desirable semantics. In C++, the decision was that the std library "most desirable semantics" would be "as fast as a hand-written C implementation, with as much free safety as you can pull off given that restriction". They did mess up, in that range based iteration would have been as fast (or faster! It could easily be easier for a compiler to optimize) and much safer than iterator based iteration. Throwing, for example, requires a non-trivial amount of overhead (even the existence of throwing in your compilation unit!), to the extent that many people compile C++ without throwing enabled to reduce that overhead. new returning nullptr instead of throwing on failure, for example, is a pretty common modification to new's syntax. This is one of the reasons why my favorite parts of C++ are the static safety. This offloads the "cost" of safety to compile time, where you aren't running on a (say) battery-operated smartphone, where every operation reduces the device's charge. One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision - BR Last edited by JHVH on Fri Oct 23, 4004 BCE 6:17 pm, edited 6 times in total. Re: The "IT DOESN'T WORK!" thread So I have the following code to multiply to Matrices which gives me a 1.000000000 on the 0,5 index and 1, 6 index of the result matrix. Code: Select all Matrix<T> operator*(Matrix<T> rhs){ Matrix<T> results(rows, rhs.cols); int len = results.cols * results.rows; int matrixBIndex = 0; for(int index = 0; index < len; index++) int curRow = index / rhs.cols; int curCol = index % rows; int matrixAIndex = curRow * cols; T sum = T(); for(int i = 0; i < rhs.rows; i++) int mA = matrixAIndex + i; int mB = matrixBIndex + (i * rhs.cols); double valueA = items[mA]; double valueB = rhs.items[mB]; double value = valueA * valueB; sum += items[mA] * rhs.items[mB]; results.items[index] = sum; matrixBIndex = (matrixBIndex == rhs.cols) ? 0 : matrixBIndex; return results; and when I multiply Code: Select all Re: The "IT DOESN'T WORK!" thread gametaku wrote:So I have the following code to multiply to Matrices which gives me a 1.000000000 on the 0,5 index and 1, 6 index of the result matrix. You haven't said what answer you expected. Looking at the matrices, I would think that is the right answer: The only non-zero term is 0.0057696575348317692 * 173.32051234634636 == 1.0000 The only non-zero term is 0.0069817491263137865 * 143.23058332632735 == 1.0000 Re: The "IT DOESN'T WORK!" thread Freakin' namespaces... EDIT: aha! I found it. Why the heck did the person put this function declaration out of the namespace? The Great Hippo wrote:[T]he way we treat suspected terrorists genuinely terrifies me. Re: The "IT DOESN'T WORK!" thread Ugh, I am having serious trouble with polymorphism in C++. So I have a base class, Component, which is the base of all the components in my system. Every actual component I use is a specialized subclass, so I can define their behavior entirely separately. In this case, I'm doing collision detection and then trying to pass a message consisting of an enumerated value and an int value from the collider to the collidee. It'd be ideal if the collidee was polymorphic, as I don't necessarily want everything to react the same to a collision. Anyway. Component has a virtual function, virtual void receive(int, int) {};. The derived component, Collide, has an switch implemented in its member void receive(colmnessage message, int value). So I iterate over a list of component pointers, using a std::list<Component*>::iterator it, and if I detect a collision between a collider and the thing being pointed at by the pointer pointed at by it, then I call receive (specifically, I call (*it)->receive(collider->getMessage(), collider->getValue())). This calls the Component version of receive. I can get it to work by downcasting using dynamic_cast, but I feel like I don't know quite what's going on, and I don't understand why it doesn't work (I thought that since it's a virtual function, it'd check the subclass for the receive function first). Can someone tell me what is happening?
{"url":"http://forums.xkcd.com/viewtopic.php?p=2896708","timestamp":"2014-04-23T14:22:42Z","content_type":null,"content_length":"147051","record_id":"<urn:uuid:cf54e940-7dc0-4504-a210-f41b4ce7c9cc>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00175-ip-10-147-4-33.ec2.internal.warc.gz"}
Probably Overthinking It This week's post contains solutions to My Favorite Bayes's Theorem Problems , and one new problem. If you missed last week's post , go back and read the problems before you read the solutions! If you don't understand the title of this post, brush up on your memes 1) The first one is a warm-up problem. I got it from Wikipedia (but it's no longer there): Suppose there are two full bowls of cookies. Bowl #1 has 10 chocolate chip and 30 plain cookies, while bowl #2 has 20 of each. Our friend Fred picks a bowl at random, and then picks a cookie at random. We may assume there is no reason to believe Fred treats one bowl differently from another, likewise for the cookies. The cookie turns out to be a plain one. How probable is it that Fred picked it out of Bowl #1? First the hypotheses: A: the cookie came from Bowl #1 B: the cookie came from Bowl #2 And the priors: P(A) = P(B) = 1/2 The evidence: E: the cookie is plain And the likelihoods: P(E|A) = prob of a plain cookie from Bowl #1 = 3/4 P(E|B) = prob of a plain cookie from Bowl #2 = 1/2 Plug in Bayes's theorem and get P(A|E) = 3/5 You might notice that when the priors are equal they drop out of the BT equation, so you can often skip a step. 2) This one is also an urn problem, but a little trickier. The blue M&M was introduced in 1995. Before then, the color mix in a bag of plain M&Ms was (30% Brown, 20% Yellow, 20% Red, 10% Green, 10% Orange, 10% Tan). Afterward it was (24% Blue , 20% Green, 16% Orange, 14% Yellow, 13% Red, 13% Brown). A friend of mine has two bags of M&Ms, and he tells me that one is from 1994 and one from 1996. He won't tell me which is which, but he gives me one M&M from each bag. One is yellow and one is green. What is the probability that the yellow M&M came from the 1994 bag? A: Bag #1 from 1994 and Bag #2 from 1996 B: Bag #2 from 1994 and Bag #1 from 1996 Again, P(A) = P(B) = 1/2. The evidence is: E: yellow from Bag #1, green from Bag #2 We get the likelihoods by multiplying the probabilities for the two M&M: P(E|A) = (0.2)(0.2) P(E|B) = (0.1)(0.14) For example, P(E|B) is the probability of a yellow M&M in 1996 (0.14) times the probability of a green M&M in 1994 (0.1). Plugging the likelihoods and the priors into Bayes's theorem, we get P(A|E) = 40 / 54 ~ 0.74 By introducing the terms Bag #1 and Bag #2, rather than "the bag the yellow M&M came from" and "the bag the green came from," I avoided the part of this problem that can be tricky: keeping the hypotheses and the evidence straight. 3) This one is from one of my favorite books, David MacKay's Information Theory, Inference, and Learning Algorithms Elvis Presley had a twin brother who died at birth. What is the probability that Elvis was an identical twin? To answer this one, you need some background information: According to the Wikipedia article on twins: ``Twins are estimated to be approximately 1.9% of the world population, with monozygotic twins making up 0.2% of the total---and 8% of all twins.'' There are several ways to set up this problem; I think the easiest is to think about twin birth events, rather than individual twins, and to take the fact that Elvis was a twin as background So the hypotheses are A: Elvis's birth event was an identical birth event B: Elvis's birth event was a fraternal twin event If identical twins are 8% of all twins, then identical birth events are 8% of all twin birth events, so the priors are P(A) = 8% P(B) = 92% The relevant evidence is E: Elvis's twin was male So the likelihoods are P(E|A) = 1 P(E|B) = 1/2 Because identical twins are necessarily the same sex, but fraternal twins are equally likely to be opposite sex (or, at least, I assume so). So P(A|E) = 8/54 ~ 0.15. The tricky part of this one is realizing that the sex of the twin provides relevant information! 4) Also from MacKay's book: Two people have left traces of their own blood at the scene of a crime. A suspect, Oliver, is tested and found to have type O blood. The blood groups of the two traces are found to be of type O (a common type in the local population, having frequency 60%) and of type AB (a rare type, with frequency 1%). Do these data (the blood types found at the scene) give evidence in favour [sic] of the proposition that Oliver was one of the two people whose blood was found at the scene? For this problem, we are not asked for a posterior probability; rather we are asked whether the evidence is incriminating. This depends on the likelihood ratio, but not the priors. The hypotheses are X: Oliver is one of the people whose blood was found Y: Oliver is not one of the people whose blood was found The evidence is E: two blood samples, one O and one AB We don't need priors, so we'll jump to the likelihoods. If X is true, then Oliver accounts for the O blood, so we just have to account for the AB sample: P(E|X) = 0.01 If Y is true, then we assume the two samples are drawn from the general population at random. The chance of getting one O and one AB is P(E|Y) = 2(0.6)(0.01) = 0.012 Notice that there is a factor of two here because there are two permutations that yield E. So the evidence is slightly more likely under Y, which means that it is actually exculpatory! This problem is a nice reminder that evidence that is with a hypothesis does not necessarily the hypothesis. 5) I like this problem because it doesn't provide all of the information. You have to figure out what information is needed and go find it. According to the CDC, ``Compared to nonsmokers, men who smoke are about 23 times more likely to develop lung cancer and women who smoke are about 13 times more likely.'' If you learn that a woman has been diagnosed with lung cancer, and you know nothing else about her, what is the probability that she is a smoker? I find it helpful to draw a tree: If y is the fraction of women who smoke, and x is the fraction of nonsmokers who get lung cancer, the number of smokers who get cancer is proportional to 13xy, and the number of nonsmokers who get lung cancer is proportional to x(1-y). Of all women who get lung cancer, the fraction who smoke is 13xy / (13xy + x(1-y)). The x's cancel, so it turns out that we don't actually need to know the absolute risk of lung cancer, just the relative risk. But we do need to know y, the fraction of women who smoke. According to , y was 17.9% in 2009. So we just have to compute 13y / (13y + 1-y) ~ 74% This is higher than many people guess. 6) Next, a mandatory Monty Hall Problem. First, here's the general description of the scenario, from Wikipedia: Suppose you're on a game show, and you're given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say Door A [but the door is not opened], and the host, who knows what's behind the doors, opens Door B, which has a goat. He then says to you, "Do you want to pick Door C?" Is it to your advantage to switch your choice? The answer depends on the behavior of the host when the car is behind Door A. In this case the host can open either B or C. Suppose he chooses B with probability and C otherwise. What is the probability that the car is behind Door A (as a function of The hypotheses are A: the car is behind Door A B: the car is behind Door B C: the car is behind Door C And the priors are P(A) = P(B) = P(C) = 1/3 The likelihoods are P(E|A) = p, because in this case Monty has a choice and chooses B with probability p, P(E|B) = 0, because if the car were behind B, Monty would not have opened B, and P(E|C) = 1, because in this case Monty has no choice. Applying Bayes's Theorem, P(A|E) = p / (1+p) In the canonical scenario, p=1/2, so P(A|E) = 1/3, which is the canonical solution. If p=0, P(A|E) = 0, so you can switch and win every time (when Monty opens B, that it). If p=1, P(A|E) = 1/2, so in that case it doesn't matter whether you stick or switch. When Monty opens C, P(A|E) = (1-p) / (2-p) [Correction: the answer in this case is not (1-p) / (1+p), which what I wrote in a previous version of this article. Sorry!]. 7) And finally, here is a new problem I just came up with: If you meet a man with (naturally) red hair, what is the probability that neither of his parents has red hair? Hints: About 2% of the world population has red hair . You can assume that the alleles for red hair are purely recessive . Also, you can assume that the Red Hair Extinction theory is false, so you can apply the Hardy–Weinberg principle Solution to this one next week! Please let me know if you have suggestions for more problems. An ideal problem should meet at least some of these criteria: 1) It should be based on a context that is realistic or at least interesting, and not too contrived. 2) It should make good use of Bayes's Theorem -- that is, it should be easier to solve with BT than without. 3) It should involve some real data, which the solver might have to find. 4) It might involve a trick, but should not be artificially hard. If you send me something that is not under copyright, or is usable under fair use, I will include it in the next edition of Think Stats and add you to the contributors list. 51 comments: 1. I don't understand the 3/5 answer to problem 1. 2. By Bayes's Theorem, we have P(A|E) = P(A) P(E|A) / P(E) P(E) = P(A) P(E|A) + P(B) P(E|B) 1/2 3/4 1/2 3/4 + 1/2 1/2 = 3/5. Since the priors are equal, they drop out. So we could have skipped a step and just used the likelihoods. 3. In 3), I'm not sure how you came up with the statement "If identical twins are 8% of all twins, then identical birth events are 8% of all birth events", as the source you quote states that identical twin births are 0.2% of total births. Yes, each member of a fraternal twin birth has an equal chance of being either sex (not opposite sex'd). However, since the background of the problem states that Elvis had a twin brother who died at birth, the chance of that arrangement (Male:Male fraternals) is not 1 in 2, but 1 in 4, viz., M:M, M:F, F:M or F:F. 4. EJ: Your first point is correct. I should have said "identical birth events are 8% of all twin birth events", and I have made that correction. But your second point is not correct: if Elvis had a fraternal twin, the probability that the twin was male is 1/2. This is similar to the "Girl Named Florida" problem in Mlodinow's The Drunkard’s Walk. 5. Why is the answer to (3) not simply 16%? Assume 25% of twin births are MM, 50% are MF/FM, and 25% are FF. Then 16% of same-sex twin births need to be identical (given that 8% of twin births are identical, but the opposite-sex half cannot be). Isn't the assumption that fraternal twin births are 50% likely to be opposite sex wrong? The 92% of twin births that are fraternal is made up of the 50% of twin births that are fraternal and opposite-sex, plus the 84%*50% = 42% of twin births that are fraternal and same-sex. So P(E|B) = 42/92 and not 1/2. What am I missing? 6. Boz wrote, "Assume 25% of twin births are MM..." That's not correct. 8% of twin births are identical and 50% of them are MM. 92% of twin births are fraternal, and 25% of them are MM. So the total fraction that are MM is 27%. Of MM twins, 4/27 are identical. 7. Ah, I see, thanks. So there are more same-sex twin births than opposite-sex ones. Makes sense when you think about the biological processes involved hehe. 8. On problem #2, shouldn't the total percentage of all six colors in the new mix add up to 100%? I get 96%. 9. @Woody: Oops. The Blue should be 24%. I'll fix that. Here's the source: 1. This comment has been removed by the author. 2. It needs fixing here as well: Not that it's relevant to the problem, but I scratched my head a couple of times over that as well... Thanks for your great material! 3. Done. Thanks! 10. For the M&M problem, why should we consider the probability of picking the green one since the question is related to yellow? Why not solve it as (p(prior) * p(picking yellow from 1994)) / p (picking yellow from both 1994 and 1996)? 1. It's true that the question is about the yellow M&M, but the answer depends on which bag is which, and the green M&M provides information about that. To see why, imagine if the green M&M had been blue. That would tell you for sure which bag was which, and that would affect the answer. Hope that helps. 2. Yes it helps. Thank you for taking time to answer. 3. Mmm, I really have some problem with the M&Ms one. How do you calculate the likelihoods? Seeing the solution, logically I can relate, but I can't formalize. Originally, I proceeded calculating two separate conditional probability: then I was hoping to "combine" the two. I tried: without luck. Help ^^'. 4. This should work: that is, you should be able to do an update with the yellow M&M followed by an update with the green M&M, and get the same result. I am working on a new book called Think Bayes that uses this example in Chapter 1: If you look at the way I presented the solution there, it might help. 5. First of all, thanks for your reply. Thanks for the link, too, even if it didn't help with this specific problem (being based on the same material you used for your lecture at the PyCon this year, which I have already checked). To solve following my original idea, what helped was "updating", I did what follow: - H-start -> P(94s|Box1)=P(96s|Box1)=0.5 - H-updated -> P(94s|yellow)=P(94u|Box1)=P(96u|Box2)=0.588 - P(96|green)=0.2*0.588/0.15 = 0.784 - P(94|yellow)=0.2*0.666/0.17 = 0.784 Just two more questions: - where does 4% difference come from? - how would you express formally hypotesis A and B? 6. Here's how I state the hypotheses in Think Bayes: A: the yellow M&M is from 1994, which implies that green is from 1996. B: the yellow M&M is from 1996 and green from 1994. I don't understand your first question: what 4% difference do you mean? 7. Doing "my way" the posterior turns out to be 78%, against the 74% of the proposed solution. 8. Hi Allen, for A: why wouldn't yellow from 1994 be 20/34? 20 in 1994/total yellows between the two. and for B: 14/34? Is it because we aren't considering the evidence yet? 11. Allen, am I missing something with the Elvis question? If 1.9% are twins and .2% are identical, wouldn't it be 2/19 or 10.5% of all twins are identical? 1. Hi David. Odd, isn't it? I suspect that the three numbers in the Wikipedia quote come from different sources, because they are not quite consistent with each other. But since 0.2% is reported with only one significant digit, the result of your division (10.5%) has only one sig fig as well. And at that level of precision, 10.5 and 8 are equal. 2. Thanks Allen, I was probably over thinking it ;) I couldn't find your quote from the wikipedia article (It currently has 1.1%) and was just curious if something was lost in translation. 12. The calculated results from the blood problem can't be right, right (type O and type AB)? For hypothesis X,don't we have to multiply by 2, because the AB perp could be either of the two people at the scene? 1. I think it's correct as written. If Oliver accounts for the type O sample, then there was only one other person at the scene who left a sample, and only one sample to explain, so no factor of two required. 13. I'm really frustrated with math teachers being universally suck. This page is no exception. Why can't anyone explain how the hell they get their answers? Is it that hard? 1. I'm sorry this page didn't work for you. You might want to try Think Bayes (at thinkbayes.com) which presents some of these examples in more detail. 14. Allen, I love this blog post! Thank you for putting it together. My girlfriend and I have worked through problem 4 together and got to the same answer. In discussing how we would explain this evidence to a jury, we considered the explanation that it is "20% less likely to expect someone of Oliver's blood type at the scene given the evidence." Would you say this is accurate? We get this by comparing the probabilities 0.01 vs 0.012. Thanks again! 15. I think it would be very hard to explain this result to a jury. Qualitatively, you could say "the evidence would be less likely if Oliver were guilty, so in light of the evidence it is less likely that Oliver is guilty." To make that quantitative, you could say that the likelihood ratio is 5:6. So if your odds before hearing the evidence were 1:1, your odds after hearing the evidence should be 5:6, or 45%. But that's probably too much math for a jury. 1. Great point, and thanks for the clarification. While solving this problem we also calculated the probability that at least 1 person of type O blood be at the scene of the crime and came out to roughly 83% if I recall correctly. Explaining to the jury that there's an 80%+ chance of a type O at the scene makes it pretty difficult to act on the evidence. If the suspect was non-O blood type it might be a very different Thanks again for the post & the explanation. We really enjoyed working through these practice problems. 16. This comment has been removed by the author. 17. Thanks for a great column. The problems illustrate interesting, real-world applications of Bayes Theorem. I would like to say, however, that I believe that your answer to the Monty Hall problem (#6) is not correct. If we are assuming that, after the contestant has made his/her choice, the host will always open the door which does not have the car, then p(A|E) is 1/3 and not 1/2. Therefore, it behooves the contestant to switch doors; it will in fact double his/her chances. 1. Hi and thanks for this comment. In the version of Monty Hall I present here, if the car is behind door A, Monty chooses B with probability p and C with probability 1-p. This is different from the usual statement of the problem, but when p=1/2 it reduces to the usual version with p(A|E)=1/3, as you say. 2. I think what chokurdak khem is pointing out is that your math isn't wrong, your interpretation of the math is wrong. Either that, or I've misunderstood what the problem is. Consider the following simple python code which gives a monte-carlo solution with p=1. Once you look at that, you'll see that the "full result" is independent of p: import numpy as np trials = 100000 (winsbyswitch, winsbynotswitch) = (0, 0) for door in np.random.randint(1,4,trials): if door==2: # monte opens door 3 and switching wins if door==1: # monte opens door 2 and not switching wins if door==3: # monte opens door 2 and switching wins print "Wins by switch: ",float(winsbyswitch)/float(trials) print "Wins by notswitch: ",float(winsbynotswitch)/float(trials) 3. Apologies about not being able to figure out how to get the whitespace right in the above post. Also apologies that I didn't make it clear that door==1 is A, door==2 is B, etc. In the end there are two comments: 1. If the "full problem" is the game where I choose door A and Monty chooses B if possible (i.e. there isn't a goat there) and C otherwise (p=1 case), the answer is that switching still wins 2/3rds of the time. Short answer is that my original choice is right only 1/3rd of the time, and the switching strategy is successful 2/3rds of the time. 2. My comment about "wrong interpretation" is that I think your calculation is correct at calculating the probability that the car is behind door A if Monty opens door B (and it's not there). Which is not the "full problem." Am I missing something? 4. Hi Ken, You are right, I did not answer the full problem. There are three steps: (1) what is P(A|E)? (2) what should you do? (3) assuming you do the right thing, what is your chance of winning? I only solved (1) and left the rest to the reader. As you said, the answer to (2) is that it is to your advantage to switch, except when p=1 (in which case it doesn't matter). But the answer to (3) depends on p. For example, when p=0 and Monty opens door B, switching to C wins 100% of the time. 5. Thanks! I was mainly trying to point out what might be some common confusion. Specifically the strategy of "Choosing Door A and Switching" will win 2/3rds of the time. This is what seems contrary to your statement of the solution. Specifically, it does not matter what Monty Hall does. i.e. Assuming that the car is randomly behind door A,B, or C: The strategy: 1. Pick Door A. 2. Whatever Monty Hall shows, change from Door A to the remaining door. The above strategy will win 2/3rds of the time and is _independent_ of p. i.e. Monty Hall could have p=1 and always open Door B if there isn't a car behind. The code (poorly indented as it is) shows why. It underscores that the original choice (Door A) is wrong 2/3rds of the time and, thus, switching is necessarily right 2/3rds of the time and this is independent of Monty Hall's choice of doors (as long has he doesn't open the door with a car ...). 6. Ah, yes. I think you have identified the point of confusion. Your analysis is correct before Monty opens a door. But after Monty opens a door you have more information, and the question asks specifically about the case where Monty opens door B. In that case, your chance of winning is 2/3 only if p=1/2. For other values of p, your chance of winning might be as low as 1/2 or as high as 1. 7. I'm pretty sure that is not correct. Consider the python code I posted in my first post. This simulates draws from the game: 1. Car is randomly put behind door 1, 2, or 3. 2. I pick door 1 3. Monty Hall operates with p=1. i.e. He picks door 2 if he can (it doesn't have a car). Otherwise Monty picks door 3. 4. I switch from door 1 to the door that Monty hall didn't open. The result is I win 2/3rds of the time. The probability you calculated (with the p=1 case), is the probability that I win by switching if Monte hall opens Door B (and it's not there) [50%]. 8. Good, we are agreeing now. As you say, the probability I report is for the case where Monty opens Door B, because that's what the question asks. 9. Yes. And thanks for your patience! I think we are on the same page. And (to hopefully further clarify) the "full solution" in the case that Monty Hall behaves according to p=1 is: P( I win by switching | Monty Hall opens B) * P( Monty Hall opens B) + P( I win by switching | Monty Hall opens C) * P( Monty Hall opens C) = 1/2 * 2/3 + 1 * 1/3 = 2/3 ... and you've done the general p case of the first term, i.e. you've calculated P( I win by switching | Monty Hall Opens B) 18. Hi, I was looking at and working through these problems. In problem 6, in the case you described where Monty opens door C, I believe that P(A|E) is (1-p)/(2-p), not (1-p)/(1+p). This is because I found: P(E) = 1/3 + (1-p)*1/3 so P(A|E) is (1-p)*1/3 / (1/3 + (1-p)*1/3) which simplifies to (1-p) / (2-p). 1. You are correct! I will make that correction in the article. 19. Hi, I would like to inquire about the working for Q2 quoted below: "The likelihoods are P(E|A) = (0.2)(0.2) P(E|B) = (0.1)(0.14) So P(A|E) = 40 / 54 ~ 0.74" Why is P(E|A)= 0.2*0.2 instead of 0.2*0.5? Why is P(E|B) = 0.1*0.14 instead of 0.5*0.14? I do not understand since P(A)=P(B) = 0.5 Thanks! (: 1. I added some explanatory text to the article. Please let me know if it answers your question. 20. This comment has been removed by a blog administrator. 21. Hi, for the Elvis' twin problem: If the percentage of twins in the world pop. is: #T/#Pop = 1.9%, and that the percentage of monozygotic twins over the worl pop. is: #MZT/#Pop = 0.2% Therefore the percentage of MZT over the number of all twins is: #MZT/#T= #MZT/#Pop*(#Pop/#MZT) = 0.2/1.9 = 0.105 = 10.5% Am I wrong ? 1. Odd, isn't it? I suspect that the three numbers in the Wikipedia quote come from different sources, because they are not quite consistent with each other. But since 0.2% is reported with only one significant digit, the result of your division (10.5%) has only one sig fig as well. And at that level of precision, 10.5 and 8 are equal. 22. OMG, where do I get one of those neon signs? I have never wanted to own a neon sign before. Did you have it made up special? I have been looking over the "Think Bayes" PDF and fondly remembering the estimation and detection course that I took once upon a time at MIT. THANK YOU for writing the missing textbook for that 1. Thanks for your kind words. Sadly, I don't own that sign. According to this thread: It lives in the office of Autonomy Corp. The photo is from Wikipedia. I should have given credit and pointed to this page:
{"url":"http://allendowney.blogspot.com/2011/10/all-your-bayes-are-belong-to-us.html","timestamp":"2014-04-16T13:02:44Z","content_type":null,"content_length":"243707","record_id":"<urn:uuid:862a6060-df69-4bc9-b920-15eb2902b455>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00307-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram Demonstrations Project De Finetti's Game Bruno de Finetti defined probability as a quantity that measures a subjective degree-of-belief rather than an objective quantity. According to de Finetti, such a (possibly very vague) degree-of-belief can be quantified through thought experiments involving urns with a varying fraction of differently colored balls. A degree-of-belief value is quantified by the willingness of people to risk money by betting on the result of randomly drawing a ball of a particular color. Assume you have taken an exam and you want to express your degree-of-belief that you have passed as a subjective probability value. In the de Finetti game you are confronted with the urn above, containing a mixture of red and black balls. Subsequently, you are given two options. Option 1: You randomly have a ball selected from that urn. If it turns out to be red, you will get €1000, while if it turns out to be black, you will have to pay the same amount. Option 2: You wait until the result of the exam is revealed. If you pass, you will get €1000, while if you fail, you will have to pay the same amount. If, looking at the urn, you choose option 1, your subjective probability estimate for passing must be less than the slider value. If you choose option 2, your subjective probability value must be larger than the slider value. You then increase the number of red balls in the urn by moving the slider to the right until the chances to win (or lose) money seem equally likely for options 1 and 2. The resulting fraction of red balls is your subjective probability value for passing the exam. [1] B. de Finetti, Theory of Probability: A Critical Introductory Treatment , Vol. 1 (A. Machí and A. Smith, trans.), New York: Wiley, 1974.
{"url":"http://demonstrations.wolfram.com/DeFinettisGame/","timestamp":"2014-04-19T17:05:12Z","content_type":null,"content_length":"42291","record_id":"<urn:uuid:45100e07-6abe-4b3f-97aa-7fdaf9dd688f>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00327-ip-10-147-4-33.ec2.internal.warc.gz"}
This is an article about the geometrical shape. See The Pentagon for an article about the building near Washington, DC. See also: Pentagon (disambiguation). , a is any five-sided . However, the term is commonly used to mean a regular pentagon , where all sides are equal and all angles are equal (to 108°). The area of a regular pentagon with side length a is given by can be formed from a regular pentagon either by extending its sides or by drawing its diagonals, and the resulting figure contains various lengths related by the golden ratio , φ = (1+√5)/2. Constructing a pentagon A regular pentagram is constructible using a straightedge and compass . This process was described by in his circa 300 B.C. 1. Draw a horizontal line with a circle the size of your desired pentagon that has its center on this line. 2. Put your compasses' needle where the circle's circumference crosses the horizontal line, and draw a half-circle through the center of your first circle, crossing the circumference of the first circle in two places. Draw a vertical line through the points where the half-circle crosses the first circle. This line will pass through a point we call (a). 3. Open your compasses so that you can, when placing the needle in the two intersections between the horizontal line and the first circle, draw a small cross above and below the horizontal line, outside the first circle, with one line of the cross from each point. If you join these crosses you will obtain a line perpendicular to the horizontal line, also passing through the center of the first circle. The point where this line crosses the circumference of the first circle on the top, we call (b). This is the first corner of the pentagon. 4. Put the compasses' needle in (a) and drawing a circle segment passing through (b) and down through the horizontal line, obtaining a point on this line we call (c). 5. Put the needle in (b) and pass a circle segment through (c) and the first circle. These points on the first circle are the second and third corners of the pentagon. 6. Without extending the compasses, put its needle in the second and third corners, and draw circle segments passing through the first circle to find the two remaining corners. 7. Join each corner to the adjacent ones and you have a pentagon. 8. If you join the non-adjacent corners (drawing the diagonals of the pentagon), you obtain a pentagram, with a smaller regular pentagon in the center. Or if you extend the sides until the non-adjacent ones meet, you obtain a larger pentagram.
{"url":"http://july.fixedreference.org/en/20040724/wikipedia/Pentagon","timestamp":"2014-04-20T15:57:28Z","content_type":null,"content_length":"5685","record_id":"<urn:uuid:9a23a512-f76e-43ce-b38f-ffe093f5fc36>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00475-ip-10-147-4-33.ec2.internal.warc.gz"}
Lost on Hard Substitution Indefinite Integral April 11th 2009, 08:33 PM Jim Marnell Lost on Hard Substitution Indefinite Integral $\int ((x^2-1)e^{x^3-3x})dx$ dont know any rules for these type of problems involving e. Thanks for any help in getting me started with this problem! April 11th 2009, 08:42 PM By differentiating $x^3-3x$ we get $3x^2-3=3(x^2-1).$ What's the substitution? April 11th 2009, 08:43 PM you want u and du to substitute for everything inside the integral. 1. You can do this by setting u= x^3-3x which gives du=3(x^2-1)dx get rid of the 3 by dividing both sides by it and you'll get 1/3du= (x^2-1)dx which is what you want. 2. Substitute into integral: should get something like this by taking the 1/3 out as a constant 1/3 integral e^u du , which shows everything is accounted for from the original. 3. Integrate. When you do you should get 1/3 e^u + C 4. Plug u back in and you're done. 1/3 e^(x^3-3x) + C Is it clear now? April 11th 2009, 08:46 PM Jim Marnell Thank you! yea it makes sense, i had what you had in step 1 and then everything else i had was wrong but i'll follow what you did for my other questions like this. Thanks again! April 14th 2009, 08:32 AM Jim Marnell Indefinite Integral Substitution Problem-Just needs checked Just needs checked: Not sure if i did it right $\int (x^2-1)(e^{x^3-3x})dx$ April 14th 2009, 08:36 AM That's OK ! (Clapping)
{"url":"http://mathhelpforum.com/calculus/83298-lost-hard-substitution-indefinite-integral-print.html","timestamp":"2014-04-17T08:05:38Z","content_type":null,"content_length":"8712","record_id":"<urn:uuid:90a4ce01-f615-40f5-9d5a-e3d67b18198a>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00613-ip-10-147-4-33.ec2.internal.warc.gz"}
Detecting Significant Changes In Your Data For statisticians, significance is an essential but often routine concept. For those who don’t remember the details of college statistics courses, significance is a nebulous concept that lends magical credence to whatever data it describes. Sometimes you make a change in your paid search program, watch the data come in, and want to claim that numbers are improving because of your How can you support this claim? Can you discredit the possibility that the apparent improvement is just noise? How can you apply that authoritative label of “significant”? Here I’d like to walk you through a basic test of significance that you can use to de-mystify changes in your paid search data. If you’d like to skip the math, click here. Let’s start with a situational example… say you’ve added Google Site Links to your brand ads and you want to show that brand click-through rate (CTR) has improved as a result. 1. First, you need to know what value brand CTR is potentially improving from. Let’s call this value mu (pronounced myoo), and you can choose it in a variety of ways: the average or median CTR over the past month, the average or median CTR from this time of year last year, etc. It should really be whatever value you believe CTR to truly center around. 2. Next, you need data points.That is, you need several days of CTR data since the Site Links have been running. How many days is up to you. Generally, more is better, but I’ll touch on that later. The number of days you have is n. Take the average of the CTRs from those days; this is called xbar. Lastly, take the standard deviation (excel function stdev) of these CTRs and call it s. 3. Now we can compute a t-score, and with it, the probability that the change in CTR you’re seeing is or isn’t attributable to chance. Set t = |xbar – mu| / (s/squareroot(n)). Then use the function tdist in excel, and for the arguments, plug in t, n-1, and 1. The number that this function returns is the probability that the change in CTR is simply due to chance, aka noise. If this probability is very small, then we say CTR has changed significantly. Enough Math! Is The Change In My Data Significant? I’ve prepared an excel spreadsheet that handles the arithmetic. In this model, change the gray shaded cells to reflect your data. Enter the data that you think has fundamentally changed in column C. Only include data points since the change began. Then, in cell G2, enter the value from which you believe the data to have changed. That is, the average value of the data before the change. The value p, produced in cell G7, is the probability that the change you’re seeing is only due to chance, and thus meaningless. Typically, a p-level must be below 5% to be considered significant. (If you want to be super, super sure, you can use 1% or 0.1% instead.) In other words, if your p-value is 5% or less, you can confidently say that the change in your data is real, definite, and due to something other than statistical noise. It’s a pretty safe bet that whatever initiative you took – whether it was switching landing pages, altering ad copy, or refining your bidding – was the catalyst for the improvement instead. Allow me to fill in the spreadsheet with an example. For an imaginary online retailer, brand CTR hovers around 4.4%, so I fill in cell G2 with the value 4.4. The retailer enables Google Site Links, and CTRs for the 3 days afterward are 4.3, 5.2, and 5. So I enter those three data points into column C. And voila… the p-level comes back as 12.66%. This says that there is a 12.66% chance that the rise in CTR was due only to noise. Not significant. Sorry, click-through-rates haven’t really increased, or at least, we can’t be very confident that the observed change is anything more than random noise. But… three days is not much data. As smart analysts, we are cautious when examining trends over only a few days, and this significance test incorporates such wisdom. As the number of data points (n) you use increases, p-levels fall. For example, if all the numbers in the above example were the same except that you used 7 days instead of 3 (so n=7), the corresponding probability drops to 2.6%. In this instance, it’s very unlikely (2.6% unlikely) that the increase in CTR was due to noise, so here you can rather confidently say, “Yes, CTR has increased, and it wasn’t due to chance. It was probably due to the site links.” 15 Responses to “Detecting Significant Changes In Your Data” 1. I think a lot of people make assumptions or form opinions without considering the stats behind them. By doing so, they come to conclusions that may be nothing more than normal variations that have no significance. I’d like to see more publishing of the raw data, so that statistically insignificant conclusions can be called out by the community. 2. Brilliant. Advanced PPC tactics at its best. I recently incorporated Sitelinks in 2 sites so this is actually very relevant and I am sure I will use your spreadsheet for my analysis. 3. Stephen, I was on a panel not long ago when one of the presenters did exactly that. Claimed that moving from last touch attribution to first touch sometimes moved results 300%, but his slide showed the raw data: moving from 1 order to 3 on 300 clicks or so for a particular term. I thought about calling him out on the fact that it’s random noise, and that going from 1 order to 3 is actually a 200% increase, not 300%…but I didn’t. Enough people in the industry are mad at me as it is :-) 4. Also keep in mind that the T-test (the formula in the excel) can get a little ‘iffy’ if the sample sizes are small, like, under a hundred or so. Everybody’s got a different opinion as to what’s a small sample size, but you shouldn’t get into too much trouble if you’re looking at 100 or bigger. 5. Brian – Are you sure you aren’t thinking of the Z-distribution instead? The t-distribution is like the Z-distribution but meant for a) small sample sizes and b) cases where the population standard deviation is estimated (rather than already known). The t-distribution is suited to smaller sample sizes because sample size itself is a parameter of the distribution (via the degrees of freedom)… that is, the smaller the sample size, the more spread out the distribution is, and the higher your t-statistic must be to get a statistically significant result. 6. Typically when testing for changes in CTR (or CVR for that matter), I consider the sample size to be the number of impressions during a given time frame. This is independent of the number of days, and I model CTR with a Bernoulli distribution to ascertain statistical significance. Do you guys have any thoughts on this? It seems to me that using the number of days as a sample size is rather immaterial. Why not hours? Or weeks? It’s an arbitrary selection. 7. Worth noting that if I were to have been more careful in my previous post – I would have said that in the case of CVR the number of clicks is my sample size. 8. Ken – I don’t immediately see any problems with your method, and I think it sounds like a great way to attack the problem. In that case, you’d use p for the proportion you’d expect to see, phat for the proportion you observe having changed, and the test statistic z = (phat – p)/((p*(1-p))/n). I might just use your method next time! 9. Thanks for the post! We’ve been running analysis to measure (at various confidence intervals) HOW MUCH improvement can be attributed to campaign changes versus metric noise. To do this, we calculate standard deviation of mu, in addition to mean. We then look at CTR following the test, knowing we can attribute (with 68% confidence) any delta that falls outside the mean + or – one standard deviation. For example, say pre-test CTR was 5% and standard deviation over that period was 0.5%. If, after a few days of testing, xbar settles around 5.7% we would say (with 68% confidence) that .2% of the increase was due to the change. Any feedback on our methodology? I’d also love to combine these calculations with those in your post. As more days go by, we’re be more confidant in our calculations as (n) increases. Is there a way to do using a formula? Thanks. 10. Thanks for a fantastic post Jen. An articulate refresher on basic stats that I for one was a bit fuzzy on from my class room days. Also really like Ken’s point. Do you think the same is true for day of week analysis? Looking at individual days leaves you with few data points but seems to underestimate the significance since there are a large number of data points behind each day’s average. 11. Ken, I like your approach, too. Running true A/B tests on web pages and looking for conversion rate differentials can be a very long and frustrating process. The number of conversions needed to detect small differences in conversion rates can make evaluating the results akin to the Bataan Death march. Sometimes, you can get a pretty valid read, with much less data by simply saying: starting today, if version A beats version B five days in a row, the odds of that happening by random chance are: 1/32 or ~3%. Sometimes that helps you identify a winner much sooner. The problem of course is that 1) you have to be disciplined: you can’t say over the last 20 days of the test were their any five day runs by one or the other? That won’t produce the right answer; and 2) if version A is only 2% better than version B, this 5-day test will usually fail to show that, too. 12. Shay – I definitely think the same logic applies to day of week analysis. George – That’s an extremely interesting way of approaching the problem. Have you done any sort of simulations to get a better read on what sort of confidence levels you’re dealing with when taking that approach? What I mean is (oversimplifying here), assume CVR for page A is lower than CVR for page B. What percent of times can page A outperform page B 5 days in a row? I realize this is a computational nightmare, but a Monte Carlo simulation could shed some light on the issue. I really, really wish I had the time to explore that question. :-) I bring it up because I’m hesitant to take an approach where it’s not clear to me what my true confidence level is. This is perhaps not the best solution, but when dealing with CVR, I typically use a higher alpha than I would otherwise. This is a subjective judgement, but I think the potential dangers associated with making a type I error are low – after all, it’s only advertising! 13. Ken, You’re absolutely right if the CR difference between A and B is small (2 or 3%) the odds of A running the table aren’t much worse than the odds of B running the table, hence you may end up picking the wrong winner. The stats tricks for dealing with sparse data are tricks, and as such the results are generally less certain than some folks would have us believe. The assumptions under the hood of MVT analysis, for example, are large and often mean that confidence levels are overstated. There’s no substitute for rich data. Check out what others are saying... 1. [...] Detecting Significant Changes In Your Data, Rimm Kaufman [...] 2. Social comments and analytics for this post… This post was mentioned on Twitter by AllThingsM: Detecting Significant Changes In Your Data http://bit.ly/dBdoQw…
{"url":"http://www.rimmkaufman.com/blog/detecting-significant-changes-in-your-data/24022010/","timestamp":"2014-04-19T09:28:21Z","content_type":null,"content_length":"73838","record_id":"<urn:uuid:111f343c-1cb4-4ce4-9b49-dca0504f5a2a>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00146-ip-10-147-4-33.ec2.internal.warc.gz"}
DIMAP Seminar 17 Jun 2014 Tom Friedetzky TBA Durham University 14 Mar 2014 Thomas Sauerwald Balls into Bins via Local Search University of Cambridge 11 Mar 2014 Zun-Bin Zhao RSP-based analysis for the efficiency of l[1]-minimization for solving l[0]-problems University of Birmingham 04 Mar 2014 Guglielmo Lulli A new optimization framework for dynamic resource allocation problems University of Milano-Bicocca 25 Feb 2014 Nicolas Gillis Fast projection methods for robust separable nonnegative matrix factorization Université de Mons 18 Feb 2014 Paul Spirakis Temporal Networks : Optimization and Connectivity University of Liverpool 11 Feb 2014 László Végh Strongly polynomial algorithm for generalized flow maximization London School of Economics 31 Jan 2014 Ilias Diakonikolas Learning Sums of Independent Integer Random Variables University of Edinburgh 21 Jan 2014 Viresh Patel A domination algorithm for {0,1}-instances of the travelling salesman problem Queen Mary University of London 4 Dec 2013 Jan Bulánek Tight Lower Bounds for the Online Labeling Problem Charles University/Academy of Sciences 3 Dec 2013 Lutz Warnke The Evolution of Subcritical Achlioptas Processes University of Cambridge 26 Nov 2013 Rob van Stee A Unified Approach to Truthful Scheduling on Related Machines University of Leicester 26 Nov 2013 Milan Vojnovic Balanced Graph Partitioning for Massive Scale Computations Microsoft Research Cambridge 20 Nov 2013 Alex Grigoriev Bidimensionality on Geometric Intersection Graphs Maastricht University 19 Nov 2013 Sergey Kitaev Equidistributions on Planar Maps via Involutions on Description Trees University of Strathclyde 15 Nov 2013 Sanjeeb Dash Multi-Branch Split Cutting Planes for Mixed-Integer Programs IBM T. J. Watson Research Center, NY 12 Nov 2013 Ben Barber Partition Regularity in the Rationals University of Birmingham 5 Nov 2013 Zhan Pang Dynamic Financial Hedging Strategies for a Storable Commodity with Demand Uncertainty Lancaster University 30 Oct 2013 Piotr Indyk Faster Algorithms for the Sparse Fourier Transform 22 Oct 2013 Elias Koutsoupias Near-Optimal Multi-Unit Auctions with Ordered Bidders University of Oxford 15 Oct 2013 Robert Calderbank Information Theory and Compressed Sensing Duke University 10 Oct 2013 Alina Ene Routing in Directed Graphs with Symmetric Demands University of Warwick / Princeton 8 Oct 2013 Stanislav Böhm NL-completeness of Equivalence for Deterministic One-Counter Automata VSB - Technical University of Ostrava 1 Oct 2013 Sang-il Oum Vertex-Minors of Graphs 25 July Srikanta Tirthapura On Optimality of Clustering by Space Filling Curves 2013 Iowa State University 25 Jun 2013 Juan-Jose Salazar-Gonzalez Optimizing the Management of Crews and Aircrafts in Canary Islands University of La Laguna 20 Jun 2013 Graham Cormode Streaming Verification of Outsourced Computation University of Warwick 18 Jun 2013 Endre Boros A family of effective payoffs in stochastic games with perfect information Rutgers University 11 Jun 2013 John Fearnley Reachability in Two-Clock Timed Automata is PSPACE-complete University of Liverpool 4 Jun 2013 Martin Anthony Quadratization of pseudo-Boolean functions 28 May 2013 Michel Habib New trends for graph searches University Paris Diderot 21 May 2013 Igor Potapov Reachability Problems for Words, Matrices and Maps University of Liverpool 14 May 2013 Anna Huber Skew Bisubmodularity and Valued CSPs Durham University 13 May 2013 Lila Fontes The Tradeoff Between Privacy and Communication University of Toronto 7 May 2013 Andrei Krokhin Robust algorithms for constraint satisfaction problems Durham University 30 Apr 2013 Daniel Paulusma Coloring (H1,H2)-free graphs Durham University 23 Apr 2013 Alain Hertz On the maximum difference between several graph invariants Polytechnique Montreal 19 Mar 2013 Lehilton Chaves Squared Metric Facility Location Problem and the Upper Bound Factor-Revealing Programs University of Warwick 18 Mar 2013 Richard Cole Truthful Mechanisms for Approximating Proportionally Fair Allocations New York University 12 Mar 2013 Tri-Dung Nguyen Optimal coalition structure and stable payoff distribution in large games University of Southampton 05 Mar 2013 Roman Glebov On bounded degree spanning trees in the random graph University of Warwick 26 Feb 2013 Andrew Treglown More on perfect matchings in uniform hypergraphs Queen Mary, University of London 19 Feb 2013 David Conlon Ramsey multiplicity University of Oxford 12 Feb 2013 Roman Belavkin On Optimality of Deterministic and Non-Deterministic Transformations Middlesex University London 6 Feb 2013 Liron Yedidsion Approximation Algorithm for the Resource Dependent Assignment Problem Technion - Israel Institute of Technology 29 Jan 2013 Thomas Dueholm Hansen Strategy iteration is strongly polynomial for 2-player turn-based stochastic games with a constant discount factor Aarhus University 22 Jan 2013 Steve Alpern A new method of searching a network University of Warwick Claire Mathieu 15 Jan 2013 École Normale Supérieure and Brown Algorithms for optimization over noisy data 8 Jan 2013 Victor Zamaraev On the factorial layer of hereditary classes of graphs 27 Nov 2012 Peter Keevash Turan Numbers of Bipartite Graphs Plus an Odd Cycle Queen Mary, University of London 20 Nov 2012 Wolfram Wiesemann Distributionally Robust Convex Optimisation Imperial College London 13 Nov 2012 Justin Ward A Tight, Combinatorial Algorithm for Submodular Matroid Maximization University of Toronto 30 Oct 2012 Timothy Griffin Local Optimality in Algebraic Path Problems University of Cambridge 23 Oct 2012 Alexander Tiskin Semi-local String Comparison University of Warwick 16 Oct 2012 Marcin Mucha Lyndon Words and Short Superstrings Warsaw University 9 Oct 2012 Stanislav Živný The Power of Linear Programming for Valued CSPs University of Warwick 4 Jul 2012 Piotr Sankowski Algorithmic Applications of Baur-Strassen's Theorem: Shortest Cycles, Diameter and Matchings University of Warsaw 29 Jun 2012 Andrew McGregor Analyzing Graphs via Random Linear Projections University of Massachusetts, Amherst Fabrizio Grandoni 19 Jun 2012 Dalle Molle Institute for Artificial Pricing on Paths: A PTAS for the Highway Problem 06 Jun 2012 Etienne de Klerk Improved Lower Bounds on Crossing Numbers of Graphs Through Optimization Tilburg University 22 May 2012 Olof Sisask Arithmetic Progressions in Sumsets via (Discrete) Probability, Geometry and Analysis Queen Mary, University of London 18 May 2012 Benny Sudakov Induced Matchings, Arithmetic Progressions and Communication 15 May 2012 Igor Razgon Treewidth Reduction Theorem and Algorithmic Problems on Graphs University of Leicester 8 May 2012 Ashley Montanaro Exact Quantum Query Algorithms University of Cambridge 1 May 2012 Sergey Kitaev Representing Graphs by Words University of Strathclyde 24 Apr 2012 Catherine Greenhill Making Markov Chains Less Lazy University of New South Wales 16 Apr 2012 Tim Nonner Polynomial-Time Approximation Schemes for Shortest Path with Alternatives IBM Zurich Research Laboratory 23 Mar 2012 Andrew Treglown Minimum Degree Thresholds for Perfect Matchings in Hypergraphs Charles University 21 Mar 2012 Sebastian Stiller Robust Optimization over Integers TU Berlin 20 Mar 2012 Leslie Goldberg The Complexity of Computing the Sign of the Tutte Polynomial University of Liverpool 13 Mar 2012 Peter Butkovic Combinatorics of Tropical Linear Algebra University of Birmingham 6 Mar 2012 Christian Sohler Every Property of Hyperfinite Graphs is Testable Technical University Dortmund 5 Mar 2012 Justin Ward Improved Approximations for Monotone Submodular k-Set Packing and General k-Exchange Systems University of Toronto 28 Feb 2012 Jonathan Thompson Hybridising Heuristic and Exact Methods to Solve Scheduling Problems Cardiff University 14 Feb 2012 Martin Hoefer Local Matching Dynamics in Social Networks RWTH Aachen University 6 Feb 2012 Ola Svensson Approximating Graphic TSP by Matchings 31 Jan 2012 Eranda Dragoti-Çela The Quadratic Assignment Problem: on the Borderline Between Hard and Easy Special Cases Graz University of Technology Giacomo Zambelli 25 Jan 2012 London School of Economics and Political Extended Formulations in Combinatorial Optimization 24 Jan 2012 Vitaly Strusevich The "Power of ..." Type Results in Parallel Machine Scheduling University of Greenwich 10 Jan 2012 Daniel Kuhn Robust Markov Decision Processes Imperial College London 6 Dec 2011 Wojciech Samotij Independent Sets in Hypergraphs University of Cambridge 5 Dec 2011 J. Ian Munro Creating a Partial Order and Finishing the Sort University of Waterloo 29 Nov 2011 Guilhem Semerjian Statistical Mechanics Approach to the Problem of Counting Large Subgraphs in Random Graphs École Normale Supérieure 22 Nov 2011 Robert Brignall Applications and Studies in Modular Decomposition The Open University 15 Nov 2011 Nikolaos Fountoulakis Random Graphs on Spaces of Negative Curvature University of Birmingham 8 Nov 2011 Boris Bukh Upper Bound for Centerlines University of Cambridge 4 Nov 2011 Daniel Král' Algorithms for Testing FOL Properties Charles University in Prague 1 Nov 2011 Raphaël Clifford Lower Bounds for Online Integer Multiplication and Convolution in the Cell-probe Model University of Bristol 14 Oct 2011 Bernhard von Stengel Nash Codes for Noisy Channels London School of Economics 4 Oct 2011 Andreas Brandstädt On Clique Separator Decomposition of Some Hole-Free Graph Classes Universität Rostock 30 Aug 2011 Arie M.C.A. Koster Network Design under Demand Uncertainties RWTH Aachen University 25 Jul 2011 Laura Galli Cutting-Planes for the Max-Cut Problem Università di Bologna 12 Jul 2011 Leslie Valiant Holographic Algorithms Harvard University 29 Jun 2011 Maxim Sviridenko New and Old Algorithms for Matroid and Submodular Optimization IBM T.J. Watson Research 28 Jun 2011 Konstantinos Panagiotou Ultra-Fast Rumor Spreading in Models of Real-World Networks 21 Jun 2011 Vitaliy Koshelev On the Erdős-Szekeres Problem and its Modifications Moscow Institute of Physics and Technology 20 Jun 2011 Paweł Gawrychowski Optimal (Fully) LZW-compressed Pattern Matching University of Wrocław 14 Jun 2011 Igor Razgon Understanding the Kernelizability of Multiway Cut University of Leicester 7 Jun 2011 Jan van den Heuvel Fractional Colouring and Pre-colouring Extension of Graphs London School of Economics 31 May 2011 Viresh Patel Partitioning Posets Durham University 24 May 2011 Martin Kochol Solution to an Edge-coloring Conjecture of Grunbaum Slovak Academy of Sciences 17 May 2011 Lutz Warnke Achlioptas Process Phase Transitions are Continuous University of Oxford 3 May 2011 David Saad Dynamics of Boolean Networks - An Exact Solution Aston University 27 Apr 2011 David S. Johnson The Traveling Salesman Problem: Theory and Practice in the Unit Square 26 Apr 2011 Ammar Oulamara No-wait Flow Shop with Batching Machines 15 Mar 2011 Michel Deza Space Fullerenes École Normale Supérieure, Paris 8 Mar 2011 Michel Deza Elementary Polycycles and Applications École Normale Supérieure, Paris 23 Feb 2011 David Ellis Triangle-intersecting Families of Graphs University of Cambridge 15 Feb 2011 Fred Holroyd Overlap Colourings of Graphs: A Generalization of Multicolourings The Open University 11 Feb 2011 Amr Elmasry Number Systems and Data Structures University of Copenhagen 8 Feb 2011 Dave Cohen The Complexity of the Constraint Satisfaction Problem: Beyond Structure and Language Royal Holloway 18 Jan 2011 Xiaotie Deng Auction and Equilibrium in Sponsored Search Market Design University of Liverpool 11 Jan 2011 Robert Krauthgamer Polylogarithmic Approximation for Edit Distance and the Asymmetric Query Complexity Weizmann Institute 7 Dec 2010 Anders Yeo All Ternary Permutation Constraint Satisfaction Problems Parameterized Above Average Have Kernels with a Quadratic Number of Variables Royal Holloway 1 Dec 2010 Tim Pigden Defining Real-world Vehicle Routing Problems: An Object-based Approach 30 Nov 2010 Simone Severini A Role for the Lovasz Theta Function in Quantum Mechanics University College London 23 Nov 2010 Bill Jackson Boundedness, Rigidity and Global Rigidity of Direction-Length Frameworks Queen Mary 16 Nov 2010 Nik Ruskuc Grid Pattern Classes University of St Andrews 9 Nov 2010 Gregory Sorkin Unsatisfiability Below the Threshold(s) London School of Economics 4 Nov 2010 Andrew Treglown Matchings in 3-uniform Hypergraphs University of Birmingham 2 Nov 2010 Artem Pyatkin On Incidentor Colorings of Multigraphs Durham University 26 Oct 2010 Michele Zito The Empire Colouring Problem: Old and New Results University of Liverpool 19 Oct 2010 Ilia Krasikov Reconstruction Problems and Polynomials Brunel University 12 Oct 2010 Vadim Lozin A Decidability Result for the Dominating Set Problem University of Warwick 23 Sep 2010 Marc Demange Hardness and Approximation of Minimum Maximal Matching in k-regular graphs ESSEC Business School 12 Aug 2010 Sourav Chakraborty Query Complexity Lower Bounds for Reconstruction of Codes 10 Aug 2010 Alexander Skopalik Altruism in Atomic Congestion Games RWTH Aachen 29 Jun 2010 Wieslaw Kubiak Proportional Optimization and Fairness: Applications Memorial University 22 Jun 2010 Cole Smith A Decomposition Approach for Insuring Critical Paths University of Florida 21 Jun 2010 Prudence Wong Energy Efficient Job Scheduling with Speed Scaling and Sleep Management University of Liverpool 15 Jun 2010 Viresh Patel Edge Expansion in Graphs on Surfaces Durham University 8 Jun 2010 Guy Kortsarz A Survey of Connectivity Approximation via a Survey of the Techniques Rutgers University 2 Jun 2010 Troels Bjerre Sørensen The Computational Complexity of Trembling Hand Perfection and Other Equilibrium Refinements University of Warwick 1 Jun 2010 John Fearnley Exponential Lower Bounds For Policy Iteration University of Warwick 25 May 2010 Petr Golovach Paths of Bounded Length and their Cuts: Parameterized Complexity and Algorithms Durham University 18 May 2010 Paul Sant Colouring Pairs of Binary Trees and the Four Colour Problem - Results and Achievements University of Bedfordshire 11 May 2010 David Conlon An Approximate Version of Sidorenko's Conjecture University of Cambridge 4 May 2010 Juergen Branke Hybridizing Evolutionary Algorithms and Parametric Quadratic Programming to Solve Multi-Objective Portfolio Optimization Problems with University of Warwick Cardinality Constraints 26 Apr 2010 Shiva Kasiviswanathan A Rigorous Approach to Statistical Database Privacy Los Alamos National Laboratory 22 Apr 2010 F. Meyer auf der Heide Local Algorithms for Robotic Formation Problems University of Paderborn 16 Mar 2010 Stephan Kreutzer Algorithmic Meta-Theorems: Upper and Lower Bounds for the Parameterized Complexity of Problems on Graphs University of Oxford 10 Mar 2010 Martin Zimmermann Time-optimal Strategies for Infinite Games RWTH Aachen 9 Mar 2010 Marcin Kamiński Induced Minors and Contractions - An Algorithmic View 3 Mar 2010 Oliver Riordan The Generalized Triangle-triangle Transformation in Percolation University of Oxford 2 Mar 2010 Rico Zenklusen A Flow Model Based on Linking Systems with Applications in Network Coding EPF Lausanne 23 Feb 2010 Robert Krauthgamer A Nonlinear Approach to Dimension Reduction Weizmann Institute 16 Feb 2010 Heiko Röglin k-Means has Polynomial Smoothed Complexity Maastricht University 10 Feb 2010 Daniel Král' Total Fractional Colorings of Graphs with Large Girth Charles University, Prague 9 Feb 2010 Deryk Osthus Hamilton Decompositions of Graphs and Digraphs University of Birmingham 2 Feb 2010 Mikhail Moshkov Time Complexity of Decision Trees 27 Jan 2010 Stefanie Gerke Random Graph Processes Royal Holloway 26 Jan 2010 Peter Keevash Cycles in Directed Graphs Queen Mary 19 Jan 2010 Andrei Toom Solvable and Unsolvable in Cellular Automata University of Pernambuco 12 Jan 2010 Colin Cooper Using Neighbourhood Exploration to Speed up Random Walks King's College London 2 Dec 2009 Tomasz Radzik Robustness of the Rotor-router Mechanism for Graph Exploration King's College London 1 Dec 2009 Vadim Zverovich Discrepancy and Signed Domination in Graphs and Hypergraphs UWE Bristol 25 Nov 2009 Frits Spieksma An Overview of Multi-index Assignment Problems KU Leuven 24 Nov 2009 Troels Bjerre Sørensen Approximability and Parameterized Complexity of Minmax Values LMU Munich 18 Nov 2009 Rajiv Raman Profit-maximizing Pricing: The Highway and Tollbooth Problem University of Warwick 17 Nov 2009 Patrick Briest Pricing Lotteries University of Paderborn 12 Nov 2009 Altannar Chinchuluun Some Multiobjective Optimization Problems Imperial College London 11 Nov 2009 Codruţ Grosu The Extremal Function for Partial Bipartite Tilings University of Bukarest 10 Nov 2009 Gregory Sorkin The Power of Choice in a Generalized Pólya Urn Model IBM Research 10 Nov 2009 Colin McDiarmid Random Graphs with Few Disjoint Cycles University of Oxford 10 Nov 2009 Harald Räcke Oblivious Routing in the L_p-norm University of Warwick 4 Nov 2009 Charilaos Efthymiou Correlation Decay and Applications to Counting Colourings University of Edinburgh 3 Nov 2009 Robert Brignall Antichains and the Structure of Permutation Classes University of Bristol 27 Oct 2009 Oded Lachish Wiretapping a Hidden Network University of Warwick 13 Oct 2009 Peter Cameron Synchronization Queen Mary 6 Oct 2009 Bernard Ries On Graphs that Satisfy Local Pooling University of Warwick 29 Sep 2009 Matthias Westermann The Power of Online Reordering University of Bonn 24 Jun 2009 Alex Scott Triangles in Random Graphs University of Oxford 23 Jun 2009 Imre Leader Positive Projections University of Cambridge 16 Jun 2009 Dieter Rautenbach Cycles, Paths, Connectivity and Diameter in Distance Graphs TU Ilmenau 9 Jun 2009 Amin Coja-Oghlan Discrepancy and eigenvalues University of Edinburgh 3 Jun 2009 Piotr Krysta Social Context Games University of Liverpool 26 May 2009 Darek Kowalski 30 Years with Consensus: from Feasibility to Scalability University of Liverpool 19 May 2009 Daniel Paulusma Disconnecting a Graph by a Disconnected Cut Durham University 13 May 2009 Mary Cryan Algorithms for Counting/Sampling Cell-Bounded Contingency Tables University of Edinburgh 7 May 2009 Diana Piguet 4-Colour Ramsey Number of Paths Charles University Prague 6 May 2009 Julia Böttcher Sparse Random Graphs are Fault Tolerant for Large Bipartite Graphs TU München 6 May 2009 Jan Hladký Counting Flags in Triangle Free Digraphs Charles University Prague 5 May 2009 Mathias Schacht Regularity Lemmas for Graphs Humboldt-Universität Berlin 21 Apr 2009 Russell Martin Time and Space Efficient Anonymous Graph Traversal University of Liverpool 6 Apr 2009 Noga Alon Eliminating Cycles in the Torus Tel Aviv University 17 Mar 2009 Berthold Vöcking Oblivious Interference Scheduling RWTH Aachen 16 Mar 2009 Yoshiaki Oda Polynomial Time Solvable Cases of the Vehicle Routing Problem Keio University 11 Mar 2009 Dolores Romero Morales Bounding Revenue Deficiency in Multiple Winner Procurement Auctions University of Oxford 10 Mar 2009 Gunnar W. Klau Prize-Collecting Steiner Trees and Disease CWI Amsterdam 3 Mar 2009 Kousha Etessami Adding Recursion to Markov Chains, Markov Decision Processes, and Stochastic Games University of Edinburgh 2 Mar 2009 Ayşegül Altin The Robust Network Loading Problem under Polyhedral Demand Uncertainty: Formulation, Polyhedral Analysis, and Computations TOBB University 24 Feb 2009 Rudolf Müller Optimal Mechanisms for Scheduling University Maastricht 10 Feb 2009 Marc Wennink Distributed Optimisation in Network Control BT Innovate 3 Feb 2009 Christian Elsholtz Multidimensional Problems in Additive Combinatorics Royal Holloway 27 Jan 2009 Alexander Tiskin Fast Distance Multiplication of Unit-Monge Matrices University of Warwick 20 Jan 2009 Vadim Lozin Boundary Properties of Graphs and the Hamiltonian Cycle Problem University of Warwick 13 Jan 2009 Peter Allen Minimum Degree Conditions for Large Subgraphs University of Warwick 2 Dec 2008 Hajo Broersma Degree Sequence Conditions for Graph Properties University of Durham 25 Nov 2008 Bas Lemmens Min-Max Functions University of Warwick 18 Nov 2008 Kristina Vušković Even-hole-free Graphs and the Decomposition Method University of Leeds 11 Nov 2008 Piotr Krysta Approximability of Combinatorial Exchange Problems University of Liverpool 4 Nov 2008 Stefanie Gerke Sensor Networks and Random Intersection Graphs Royal Holloway 28 Oct 2008 Yves Crama Throughput Optimization in Two-Machine Flowshops with Flexible Operations University of Liège 21 Oct 2008 Stefan Szeider Tricky Problems for Graphs of Bounded Treewidth Durham University 15 Oct 2008 Jan van den Heuvel The External Network Problem and the Source Location Problem London School of Economics 7 Oct 2008 Andrew Thomason Extremal Graph Theory with Colours University of Cambridge 30 Sep 2008 Sinan Gürel Machine-Job Assignment Problems with Separable Convex Costs and Match-up Scheduling with Controllable Processing Times Warwick Business School 22 Sep 2008 Uri Zwick Discounted Deterministic Markov Decision Processes Tel Aviv University 24 Jun 2008 Haiko Müller On a Disparity Between Relative Cliquewidth and Relative NLC-width University of Leeds 17 Jun 2008 Tobias Müller Colouring Random Geometric Graphs Universiteit Eindhoven 10 Jun 2008 Jakub Mareček A Clique-Based Integer Programming Formulation of Graph Colouring and Applications University of Nottingham 4 Jun 2008 Richard Cole Fast-Converging Tatonnement Algorithms for One-Time and Ongoing Market Problems New York University 29 May 2008 Eli Upfal The Multi-Armed Bandit Meets the Web Surfer Brown University 20 May 2008 Peter Key Optimizing Communication Networks: Incentives, Welfare Maximization and Multipath Transfers Microsoft Research 13 May 2008 Andreas Bley Unsplittable Shortest Path Routing: Hardness and Approximation Algorithms Zuse Institute Berlin 6 May 2008 Sergey Nazarenko Diophantine Problems Arising in the Theory of Nonlinear Waves University of Warwick 29 Apr 2008 Ayalvadi Ganesh Controlling Epidemic Spread on Networks University of Bristol 22 Apr 2008 Bernhard von Stengel Strategic Characterization of the Index of an Equilibrium London School of Economics 11 Mar 2008 Paul Goldberg On Recent Progress on Computing Approximate Nash Equilibria University of Liverpool 7 Mar 2008 Matthias Englert Online Minimum Makespan Scheduling with Reordering RWTH Aachen 4 Mar 2008 Nicole Immorlica Balloon Popping with Applications to Ascending Auctions 26 Feb 2008 Martin Dyer Randomly Colouring a Random Graph University of Leeds 12 Feb 2008 Vassili Kolokoltsov Game theoretic Approach to Hedging of Rainbow Options University of Warwick 5 Feb 2008 Wilfrid Kendall Short-length Routes in Low-cost Networks via Poisson Line Patterns University of Warwick 29 Jan 2008 Christian Sohler Clustering for Metric and Non-Metric Distance Measures University of Paderborn 22 Jan 2008 Alexander Tiskin The Travelling Salesman Problem: Are there any Open Problems left? Part II. University of Warwick 15 Jan 2008 Vladimir Deineko The Travelling Salesman Problem: Are there any Open Problems left? Part I. University of Warwick 11 Dec 2007 Arie Koster Improving Integer Programming Solvers: {0,½}-Chvátal-Gomory Cuts University of Warwick 4 Dec 2007 Peter Bro Miltersen Strategic Game Playing and Equilibrium Refinements University of Aarhus 27 Nov 2007 Mark Jerrum An Approximation Trichotomy for Boolean #CSP Queen Mary 20 Nov 2007 Vadim Lozin From Tree-width to Clique-width: Excluding a Unit Interval Graph University of Warwick 13 Nov 2007 Christian Raack Cut-based Inequalities for Capacitated Network Design Problems Zuse Institute Berlin 6 Nov 2007 Graham Brightwell Submodular Percolation and the Worm Order London School of Economics 30 Oct 2007 Marc Reimann Ant Colony Optimization for Vehicle Routing University of Warwick 23 Oct 2007 Gregory Gutin Parameterized Algorithms for Directed Maximum Leaf Problems Royal Holloway 16 Oct 2007 Harald Räcke Hierarchical Graph Decompositions for Minimising Congestion University of Warwick 9 Oct 2007 Mike Paterson Overhang Bounds University of Warwick 19 Jul 2007 Carl Seger Connecting Bits with Floating-Point Numbers: Model Checking and Theorem Proving in Practice 28 Jun 2007 Vadim Lozin Tree-width and Optimization in Bounded Degree Graphs University of Warwick 29 May 2007 Prahladh Harsha A Survey of Short PCP Constructions TTI Chicago 3 May 2007 Arie Koster Solved and Unsolved Optimization Challenges in Telecommunications University of Warwick 17 June 2014, 16:00, MS.05 14 March 2014, 15:00, MS.05 Balls into Bins via Local Search We study a natural process for allocating m balls (tasks) into n bins (resources) that are organized as the vertices of an undirected graph G. Balls arrive one at a time. When a ball arrives, it first chooses a vertex u in G uniformly atrandom. Then the ball performs a local search in G starting from u until it reaches a vertex with local minimum load, where the ball is finally placed on. Then the next ball arrives and this procedure is repeated. In this talk we derive bounds on the maximum load of this process and the time until every bin has at least one ball allocated to it. 11 March 2014, 16:00, MS.05 RSP-based analysis for the efficiency of l[1]-minimization for solving l[0]-problems Many practical problems (e.g., signal and image processing) can be formulated as l[0]-minimization problems, which seek the sparsest solution to an underdetermined linear system. The recent study indicates that l[1]-minimization is efficient for solving l[0]-problems in many situations. From a mathematical point of view, however, the understanding of the relationship between l[0]- and l[1] -minimization remains incomplete. Their relationship can be further interpreted via the property of the range space of the transpose of matrices, which provides an angle to completely characterize the uniqueness of l[1]-minimizers and the uniform recovery of k-sparse signals. This analysis leads naturally to the concept of range space property (RSP) and the so-called 'full-column-rank' property, which altogether provide a broad understanding of the equivalence and the strong equivalence between l[0]- and l[1]-minimization problems. 04 March 2014, 16:00, MS.05 A new optimization framework for dynamic resource allocation problems Many decision problems can be cast as dynamic resource allocation problems, i.e., they can be framed in the context of a set of requests requiring a complex time-based interaction amongst a set of available resources. For instance scheduling, one of the classic problem in Operations Research, belongs to this class. For this class of problem we present a new optimization framework. The framework a) allows modeling flexibility by incorporating different objective functions, alternative sets of resources and fairness controls; b) is widely applicable in a variety of problems in transportation, services and engineering; and c) is tractable, i.e., provides near optimal solutions fast for large-scale instances. To justify these assertions, we model and report encouraging computational results on three widely studied problems - the Air Traffic Flow Management, the Aircraft Maintenance Problems and Job Shop Scheduling. Finally, we provide several polyhedral results that offer insights on its effectiveness. Joint work with D. Bertsimas and S. Gupta 25 February 2014, 16:00, MS.05 Fast projection methods for robust separable nonnegative matrix factorization Nonnegative matrix factorization (NMF) has become a widely used tool for analysis of high-dimensional data. In this talk, we first present a series of applications of NMF in image processing, text mining, and hyperspectral imaging. Then we address the problem of solving NMF, which is NP-hard in general. However, certain special cases, such as the case of separable data with noise, are known to be solvable in polynomial time. We propose a very fast successive projection algorithm for this case. Variants of this algorithm have appeared previously in the literature; our principal contribution is to prove that the algorithm is robust against noise and to formulate bounds on the noise tolerance. A second contribution is an analysis of a preconditioning strategy based on semidefinite programming that significantly improves the robustness against noise. We present computational results on artificial data and on a simulated hyperspectral imaging problem. (Joint work with Stephen Vavasis; see http://arxiv.org/abs/1208.1237 and http://arxiv.org/abs/1310.2273 for more details.) 18 February 2014, 16:00, MS.05 Temporal Networks : Optimization and Connectivity We consider here Temporal Networks. They are defined by a labelling L assigning a set of discrete time-labels to each edge of the graph G of the network. The labels of an edge are natural numbers. They indicate the discrete time moments at which the edge is available. We focus on path problems of temporal networks. In particular we consider time-respecting paths , i.e. paths whose edges are assigned by L a strictly increasing sequence of labels. We begin by giving efficient algorithms for computing shortest time-respecting paths. We then prove a natural analog of Menger’s theorem holding for arbitrary temporal networks. Finally we propose two cost minimization parameters for temporal network design. One is the “temporality” of G , in which the goal is to minimize the maximum number of labels per edge and the other is the temporal cost of G , where the goal is to minimize the total number of labels used. Optimization of these parameters is subject to some connectivity constraints. We prove several lower and upper bounds for the temporality and the temporal cost of some very basic graphs like rings, trees, directed acyclic graphs. We also examine how hard is to compute (even approximately) such costs. This work is joint with G. Mertzios and O. Michail and appeared in ICALP 2013. 11 February 2014, 16:00, MS.05 Strongly polynomial algorithm for generalized flow maximization The generalized flow model is a classical extension of network flows. Besides the capacity constraints, there is a gain factor given on every arc, such that the flow amount gets multiplied by this factor while traversing the arc. Gain factors can be used to model physical changes such as leakage or theft. Other common applications use the nodes to represent different types of entities, e.g. different currencies, and the gain factors correspond to the exchange rates. We investigate the flow maximization problem in such networks. This is described by a linear program, however, no strongly polynomial algorithm was known before, despite a vast literature on combinatorial algorithms. From a general linear programming perspective, this is probably the simplest linear program where Tardos's general result on strongly polynomial algorithms does not apply. We give the first strongly polynomial algorithm for the problem. It uses a new variant of the classical scaling technique, called continuous scaling. The main measure of progress is that within a strongly polynomial number of steps, an arc can be identified that must be tight in every dual optimal solution, and thus can be contracted, reducing the size of the problem instance. 31 January 2014, 15:00, MS.04 Learning Sums of Independent Integer Random Variables Let S be a sum of n independent integer random variables, each supported on {0,1,...,k−1}. How many samples are required to learn the distribution of S to high accuracy? In this talk I will show that the answer is completely independent of n, and moreover I will describe a computationally efficient algorithm which achieves this low sample complexity. More precisely, the algorithm learns any such S to ε-accuracy (with respect to the total variation distance) using poly(k,1/ε) samples, independent of n. Its running time is poly(k,1/ε) in the standard word RAM model. This result a broad generalization of the main result of [Daskalakis, Diakonikolas, Servedio-STOC'12] which gave a similar learning result for the special case k=2 (when the distribution S is a Poisson Binomial Distribution). Prior to this work, no nontrivial results were known for learning these distributions even in the case k=3. A key difficulty is that, in contrast to the case of k=2, sums of independent {0,1,2}-valued random variables may behave very differently from (discretized) normal distributions, and in fact may be rather complicated --- they are not log-concave, they can have Ω(n) local optima, there is no relationship between the Kolmogorov distance and the total variation distance for the class, etc. The heart of the learning algorithm is a new limit theorem which characterizes what the sum of an arbitrary number of arbitrary independent {0,1,...,k−1}-valued random variables may look like. Previous limit theorems in this setting made strong assumptions on the "shift invariance" of the constituent random variables in order to force a discretized normal limit. We believe that our new limit theorem, as the first result for truly arbitrary sums of independent {0,1,...,k−1}-valued random variables, is of independent interest. The talk will be based on a joint work with Costis Daskalakis, Ryan O'Donnell, Rocco Servedio and Li-Yang Tan. 21 January 2014, 16:00, MS.05 A domination algorithm for {0,1}-instances of the travelling salesman problem In this talk, I shall discuss an approximation algorithm for {0,1}-instances of the travelling salesman problem which performs well with respect to combinatorial dominance. More precisely, given a {0,1}-edge-weighting of the complete graph Kn on n vertices, our algorithm outputs a Hamilton cycle H of Kn with the following property: the proportion of Hamilton cycles of Kn whose weight is smaller than that of H is at most n^-1/29 = o(1). Our analysis is based on a martingale approach. Previously, the best result in this direction was a polynomial-time algorithm with domination ratio 1 /2 - o(1) for arbitrary edge weights. On the hardness side we can show that, if the Exponential Time Hypothesis holds, there exists a constant C such that n^-1/29 cannot be replaced by exp(-(log n)^ C) in the result above. (Joint work with Daniela Kühn and Deryk Osthus) 4 December 2013, 16:00, MS.03 Tight Lower Bounds for the Online Labeling Problem In the online labeling problem with parameters n and m we are presented with a sequence of n keys from a totally ordered universe U and must assign each arriving key a label from the label set {1,2,...,m} so that the order of labels (strictly) respects the ordering on U. As new keys arrive it may be necessary to change the labels of some items; such changes may be done at any time at unit cost for each change. The goal is to minimize the total cost. An alternative formulation of this problem is the file maintenance problem, in which the items, instead of being labeled, are maintained in sorted order in an array of length m, and we pay unit cost for moving an item. Although there exists simple algorithm for solving file maintenance problem for more than 30 years (Itai, Konheim, Rodeh, 1981) no matching lower bound was known until our recent result (Bulanek, Koucky, Saks, STOC 2012). For the case m=n^C for C>1, there was known lower bound (Dietz, Seiferas, Zhang, 2004), however the proof was rather complicated and could not be extended to even bigger m (for example m=n^log n). This was solved in our next result (Babka, Bulanek, Cunat, Koucky, Saks, ESA 2013). In this talk I would like to talk about the importance of the Online Labeling for Cache oblivious algorithms and also I would like to show basic ideas of our results. Joint work with Martin Babka, Vladimír Cunat, Michal Koucky, Michael Saks 3 December 2013, 16:00, MS.05 The Evolution of Subcritical Achlioptas Processes In Achlioptas processes, starting from an empty graph, in each step two potential edges are chosen uniformly at random, and using some rule one of them is selected and added to the evolving graph. Although the evolution of such 'local' modifications of the Erdös-Rényi random graph process has received considerable attention during the last decade, so far only rather simple rules are well understood. Indeed, the main focus has been on 'bounded-size' rules, where all component sizes larger than some constant B are treated the same way, and for more complex rules very few rigorous results are known. We study Achlioptas processes given by (unbounded) size rules such as the sum and product rules. Using a variant of the neighbourhood exploration process and branching process arguments we show that certain key statistics are tightly concentrated at least until the susceptibility (the expected size of the component containing a randomly chosen vertex) diverges. Our convergence result is most likely best possible for certain rules: in the later evolution the number of vertices in small components may not be concentrated. Furthermore, we believe that for a large class of rules the critical time where the susceptibility 'blows up' coincides with the percolation threshold. Joint work with Oliver Riordan. 26 November 2013, 16:00, MS.05 A Unified Approach to Truthful Scheduling on Related Machines We present a unified framework for designing deterministic monotone polynomial time approximation schemes (PTAS's) for a wide class of scheduling problems on uniformly related machines. This class includes (among others) minimizing the makespan, maximizing the minimum load, and minimizing the lp norm of the machine loads vector. Previously, this kind of result was only known for the makespan objective. Monotone algorithms have the property that an increase in the speed of a machine cannot decrease the amount of work assigned to it. The key idea of our novel method is to show that for goal functions that are sufficiently well-behaved functions of the machine loads, it is possible to compute in polynomial time a highly structured nearly optimal schedule. Monotone approximation schemes have an important role in the emerging area of algorithmic mechanism design. In the game-theoretical setting of these scheduling problems there is a social goal, which is one of the objective functions that we study. Each machine is controlled by a selfish single-parameter agent, where its private information is its cost of processing a unit sized job, which is also the inverse of the speed of its machine. Each agent wishes to maximize its own profit, defined as the payment it receives from the mechanism minus its cost for processing all jobs assigned to it, and places a bid which corresponds to its private information. For each one of the problems, we show that we can calculate payments that guarantee truthfulness in an efficient manner. Thus, there exists a dominant strategy where agents report their true speeds, and we show the existence of a truthful mechanism which can be implemented in polynomial time, where the social goal is approximated within a factor of 1+ε for every ε>0. 26 November 2013, 11:00, Mathematics A1.01 Balanced Graph Partitioning for Massive Scale Computations In recent time, there has been a lot of interest in balanced graph partitioning of large scale graphs to scale out computations that use as input a large-scale graph input data by running in parallel on distributed clusters of machines. Traditional balanced graph partitioning asks to partition a set of vertices such that the number of vertices across different partitions is balanced within some degree of slackness, and the number of cut edges is minimized. This problem is known to be NP hard and the best known approximation ratio is O(\sqrt{\log n \log k}) for partitioning of a graph with n vertices to k partitions. Substantial work was recently devoted to studying the online graph partitioning problem where vertices are required to be irrevocably assigned to partitions as they are observed in an input stream of vertices, mostly by studying various heuristics and developing some limited theoretical results. An alternative way to partition a graph that has emerged recently is so called edge partitioning, where instead, the set of edges is partitioned. The motivation for the latter strategy for partitioning a graph is that many real-world graphs exhibit a power-law degree sequence, so some portion of vertices have large degrees and allowing to assign the edges incident to high-degree vertices to different partitions may allow to achieve a good load balance and a small cut. The problem becomes even more challenging with the cut cost defined to account for the aggregation of messages that is allowed by many distributed computations. In this talk, we will present our work and results in the area. 20 November 2013, 16:00, MS.05 Bidimensionality on Geometric Intersection Graphs Let B be a finite collection of geometric (not necessarily convex) bodies in the plane. Clearly, this class of geometric objects naturally generalizes the class of disks, polylines, ellipsoids and even convex polyhedra. We consider geometric intersection graphs GB where each body of the collection B is represented by a vertex, and two vertices of GB are adjacent if the intersection of the corresponding bodies is non-empty. For such graph classes and under natural restrictions on their maximum degree or subgraph exclusion, we prove that the relation between their tree-width and the maximum size of a grid minor is linear. These combinatorial results vastly extend the applicability of all the meta-algorithmic results of the bidimensionality theory to geometrically defined graph 19 November 2013, 16:00, MS.05 Equidistributions on Planar Maps via Involutions on Description Trees Description trees were introduced by Cori, Jacquard and Schaeffer in 1997 to give a general framework for the recursive decompositions of several families of planar maps studied by Tutte in a series of papers in the 1960s. We are interested in two classes of planar maps which can be thought as connected planar graphs embedded in the plane or the sphere with a directed edge distinguished as the root. These classes are rooted non-separable (or, 2-connected) and bicubic planar maps, and the corresponding to them trees are called, respectively, β(1,0)-trees and β(0,1)-trees. Using different ways to generate these trees we define two endofunctions on them that turned out to be involutions. These involutions are not only interesting in their own right, in particular, from counting fixed points point of view, but also they were used to obtain non-trivial equidistribution results on planar maps, certain pattern avoiding permutations, and objects counted by the Catalan The results to be presented in this talk are obtained in a series of papers in collaboration with several researchers. 15 November 2013, 15:00, MS.03 Multi-Branch Split Cutting Planes for Mixed-Integer Programs Cutting planes (cuts, for short), or inequalities satisfied by integral solutions of systems of linear inequalities, are important tools used in modern solvers for integer programming problems. In this talk we present theoretical and computational results on split cuts — studied by Cook, Kannan and Schrijver (1990), and related to Gomory mixed-integer cuts — and recent generalizations, namely multi-branch split cuts. In particular, we give the first pure cutting plane algorithm to solve mixed-integer programs based on multi-branch split cuts. We also show that there are mixed-integer programs with n+1 variables which are unsolvable by (n-1)-branch split cuts. In computational work, we consider a family of quadratic unconstrained boolean optimization problems recently used in tests on the DWave quantum computer and discuss how they can be solved using special families of split cuts in reasonable time. This is joint work with Neil Dobbs, Oktay Gunluk, Tomasz Nowicki, Grzegorz Swirczsz and Marcos Goycoolea. 12 November 2013, 16:00, MS.05 Partition Regularity in the Rationals A system of linear equations is partition regular if, whenever the natural numbers are finitely coloured, it has a monochromatic solution. We could instead talk about partition regularity over the rational numbers, and if the system is finite then these notions coincide. What happens in the infinite case? Joint work with Neil Hindman and Imre Leader. 5 November 2013, 16:00, MS.05 Dynamic Financial Hedging Strategies for a Storable Commodity with Demand Uncertainty We consider a firm purchasing and processing a storable commodity in a volatile commodity price market. The firm has access to both a commodity spot market and an associated financial derivatives market. The purchased commodity serves as a raw material which is then processed into an end product with uncertain demand. The objective of the firm is to coordinate the replenishment and financial hedging decisions to maximize the mean-variance utility of its terminal wealth over a finite horizon. We employ a dynamic programming approach to characterize the structure of optimal time-consistent policies for inventory and financial hedging decisions of the firm. Assuming unmet demand is lost, we show that under forward hedges the optimal inventory policy can be characterized by a myopic state-dependent base-stock level. The optimal hedging policy can be obtained by minimizing the variance of the hedging portfolio, the value of excess inventory and the profit-to-go as a function of future price. In the presence of a continuum of option strikes, we demonstrate how to construct custom exotic derivatives using forwards and options of all strikes to replicate the profit-to-go function. The financial hedging decisions are derived using the expected profit function evaluated under the optimal inventory policy. These results shed new light into the corporate risk and operations management strategies: inventory replenishment decisions can be separated from the financial hedging decisions as long as forwards are in place, and the dynamic inventory decision problem reduces to a sequence of myopic optimization problems. In contrast to previous results, our work implies that financial hedges do affect optimal operational policies, and inventory and financial hedges can be substitutes. Finally, we extend our analysis to the cases with backorders, price-sensitive demand, and variable transaction costs. Joint work with Panos Kouvelis (Olin School of Business, Washington University in St. Louis, US) and Qing Ding (Huazhong University of Science and Technology, Wuhan, China.) 30 October 2013, 16:00, MS.05 Faster Algorithms for the Sparse Fourier Transform The Fast Fourier Transform (FFT) is one of the most fundamental numerical algorithms. It computes the Discrete Fourier Transform (DFT) of an n-dimensional signal in O(n log n) time. The algorithm plays an important role in many areas. It is not known whether its running time can be improved. However, in many applications, most of the Fourier coefficients of a signal are "small" or equal to zero, i.e., the output of the transform is (approximately) sparse. In this case, it is known that one can compute the set of non-zero coefficients faster than in O(n log n) time. In this talk, I will describe a new set of efficient algorithms for the sparse Fourier Transform. One of the algorithms has the running time of O(k log n), where k is the number of non-zero Fourier coefficients of the signal. This improves over the runtime of the FFT for any k = o(n). If time allows, I will also describe some of the applications, to spectrum sensing and GPS locking, as well as mention a few outstanding open problems. The talk will cover the material from the joint papers with Fadel Adib, Badih Ghazi, Haitham Hassanieh, Michael Kapralov, Dina Katabi, Eric Price and Lixin Shi. The papers are available at http:// 22 October 2013, 16:00, MS.05 Near-Optimal Multi-Unit Auctions with Ordered Bidders I will discuss prior-free profit-maximizing auctions for digital goods. In particular, I will give an overview of the area and I will focus on prior-free auctions with ordered bidders and identical items. In this model, we compare the expected revenue of an auction to the monotone price benchmark: the maximum revenue that can be obtained from a bid vector using prices that are non-increasing in the bidder ordering and bounded above by the second-highest bid. I will discuss an auction with constant-factor approximation guarantee for identical items, in both unlimited and limited supply settings. Consequently, this auction is simultaneously near-optimal for essentially every Bayesian environment in which bidders' valuation distributions have non-increasing monopoly prices, or in which the distribution of each bidder stochastically dominates that of the next. 15 October 2013, 16:00, MS.05 Information Theory and Compressed Sensing The central goal of compressed sensing is to capture attributes of a signal using very few measurements. In most work to date this broader objective is exemplified by the important special case of classification or reconstruction from a small number of linear measurements. In this talk we use information theory to derive fundamental limits on compressive classification, on the maximum number of classes that can be discriminated with low probability of error and on the tradeoff between the number of classes and the probability of misclassification. We also describe how to use information theory to guide the design of linear measurements by maximizing mutual information between the measurements and the statistics of the source. 10 October 2013, 16:00, CS1.01 Routing in Directed Graphs with Symmetric Demands In this talk, we consider some fundamental maximum throughput routing problems in directed graphs. In this setting, we are given a capacitated directed graph. We are also given source-destination pairs of nodes (s_1, t_1), (s_2, t_2), ..., (s_k, t_k). The goal is to select a largest subset of the pairs that are simultaneously routable subject to the capacities; a set of pairs is routable if there is a multicommodity flow for the pairs satisfying certain constraints that vary from problem to problem (e.g., integrality, unsplittability, edge or node capacities). Two well-studied optimization problems in this context are the Maximum Edge Disjoint Paths (MEDP) and the All-or-Nothing Flow (ANF) problem. In MEDP, a set of pairs is routable if the pairs can be connected using edge-disjoint paths. In ANF, a set of pairs is routable if there is a feasible multicommodity flow that fractionally routes one unit of flow from s_i to t_i for each routed pair (s_i, t_i). MEDP and ANF are both NP-hard and their approximability has attracted substantial attention over the years. Over the last decade, several breakthrough results on both upper bounds and lower bounds have led to a much better understanding of these problems. At a high level, one can summarize this progress as follows. MEDP and ANF admit poly-logarithmic approximations in undirected graphs if one allows constant congestion, i.e., the routing violates the capacities by a constant factor. Moreover, these problems are hard to approximate within a poly-logarithmic factor in undirected graphs even if one allows constant congestion. In sharp contrast, both problems are hard to approximate to within a polynomial factor in directed graphs even if a constant congestion is allowed and the graph is In this talk, we focus on routing problems in directed graphs in the setting in which the demand pairs are symmetric: the input pairs are unordered and a pair s_i t_i is routed only if both the ordered pairs (s_i,t_i) and (t_i,s_i) are routed. Perhaps surprisingly, the symmetric setting can be much more tractable than the asymmetric setting. As we will see in this talk, when the demand pairs are symmetric, ANF admits a poly-logarithmic approximation with constant congestion. We will also touch upon some open questions related to MEDP in directed graphs with symmetric pairs. This talk is based on joint work with Chandra Chekuri (UIUC). 8 October 2013, 16:00, MS.05 NL-completeness of Equivalence for Deterministic One-Counter Automata Emerging from formal language theory, a classical model of computation is that of pushdown automata. A folklore result is that equivalence of pushdown automata is undecidable. Concerning deterministic pushdown automata, there is still an enormous complexity gap, where the primitive recursive upper bound is not matched by the best-known lower bound of P-hardness. Thus, further subclasses have been studied. The aim of the talk is to sketch the ideas underlying the recent result that has been presented at STOC'13. It is shown that equivalence of deterministic one-counter automata is NL-complete. One-counter automata are pushdown automata over a singleton stack alphabet plus a bottom stack symbol. This improves the superpolynomial complexity upper bound shown by Valiant and Paterson in 1975. The talk is based on joint results with S. Göller and P. Jančar. 1 October 2013, 16:00, MS.05 Vertex-Minors of Graphs The vertex-minor relation of graphs is defined in terms of local complementation and vertex deletion. This concept naturally arises in the study of circle graphs (intersection graphs of chords in a circle) and rank-width of graphs. Many theorems on graph minors have analogous results in terms of vertex-minors and some graph algorithms are based on vertex-minors. We will survey known results and discuss a recent result regarding unavoidable vertex-minors in very large graphs. 25 July 2013, 15:00, CS104 On Optimality of Clustering by Space Filling Curves This talk addresses the following question: “How much do I lose if I insist on handling multi-dimensional data using one-dimensional indexes?”. A space filling curve (SFC) is a mapping from a multi-dimensional universe to a single dimension, and has been used widely in the design of data structures in databases and scientific computing, including commercial products such as Oracle. A fundamental quality metric of an SFC is its “clustering number”, which measures the average number of contiguous segments a query region can be partitioned into. We present the first non-trivial lower bounds on the clustering number of any space filling curve for a general class of multidimensional rectangles, and a characterization of the clustering number of a general class of space filling curves. These results help resolve open questions including one posed by Jagadish in 1997, and show fundamental limits on the performance of this data structuring This is based on joint work with Pan Xu. 25 June 2013, 16:00, MS.05 Optimizing the Management of Crews and Aircrafts in Canary Islands This talk deals with a routing-and-scheduling optimization problem that is faced by an airline company that operates in the Canary Islands. There are 10 airports, with around 180 flights each day in total, and the flight time between any two airports is around 30 minutes. All of the maintenance equipment is based at the airport in Gran Canaria, and therefore each aircraft must go to Gran Canaria after flying for two days. The crew members, however, live not only in Gran Canaria, but also in Tenerife. Each crew member expects to return to his domicile at the end of each day. There are also other regulations on the activities of the crew members to be considered in the optimization problem. The goal is to construct routes and schedules for the aircraft and the crew simultaneously, at minimum cost, while satisfying the above constraints. This problem can be modelled as a "2-depot vehicle routing problem with capacitated vehicles and driver exchanges". It is a new and challenging optimization, closely related to the classical vehicle routing problem, but with some new features requiring an ad-hoc analysis. In this seminar we show mathematical models in Integer Linear Programming, and describe "branch-and-cut" and "branch-and-price" techniques to find optimimal or near-optimal solutions. These techniques are capable of solving real-life instances. The software package is currently being used by the airline company. 20 June 2013, 16:00, MS.05 Streaming Verification of Outsourced Computation When handling large quantities of data, it is desirable to outsource the computational effort to a third party: this captures current efforts in cloud computing, but also scenarios within trusted computing systems and inter-organizational data sharing. When the third party is not fully trusted, it is desirable to give assurance that the computation has been performed correctly. This talk presents some recent results in designing new protocols for verifying computations which are streaming in nature: the verifier (data owner) needs only a single pass over the input, storing a sublinear amount of information, and follows a simple protocol with a prover (service provider) that takes a small number of rounds. A dishonest prover fools the verifier with only polynomially small probability, while an honest prover's answer is always accepted. Starting from basic aggregations, interactive proof techniques allow a quite general class of computations to be verified, leading to practical implementations. 18 June 2013, 16:00, MS.05 A family of effective payoffs in stochastic games with perfect information We consider two-person zero-sum stochastic games with perfect information and, for each integer k>=0, introduce a new payoff function, called the k-total reward. For k = 0 and 1 they are the so called mean payoff and total rewards, respectively. For all k, we prove Nash-solvability of the considered games in pure stationary strategies, and show that the uniformly optimal strategies for the discounted mean payoff (0-reward) function are also uniformly optimal for k-total rewards if the discount factor is close enough (depending on k) to 1. We also demonstrate that k-total reward games form a proper subset of (k + 1)-total reward games for each k. In particular, all these classes contain mean payoff games. Joint work with Vladimir Gurvich (Rutgers, New Brunswick, USA), Khaled Elbassioni (Masdar Institute, Abu Dhabi, UAE), and Kazuhise Makino (RIMS, Kyoto, Japan) 11 June 2013, 16:00, MS.05 Reachability in Two-Clock Timed Automata is PSPACE-complete Timed automata are a successful and widely used formalism, which are used in the analysis and verification of real time systems. Perhaps the most fundamental problem for timed automata is the reachability problem: given an initial state, can we perform a sequence of transitions in order to reach a specified target state? For timed automata with three or more clocks, this problem is PSPACE-complete, and for one-clock timed automata the problem is NL-complete. The complexity of reachability in two-clock timed automata has been left open: the problem is known to be NP-hard, and contained in PSPACE. Recently, Haase, Ouaknine, and Worrell have shown that reachability in two-clock timed automata is log-space equivalent to reachability in bounded one-counter automata. In this work, we show that reachability in bounded one-counter automata is PSPACE-complete, and therefore we resolve the complexity of reachability in two-clock timed automata. 4 June 2013, 16:00, MS.05 Quadratization of pseudo-Boolean functions A pseudo-Boolean function f is a real-valued function of 0,1 variables. Any such function can be represented uniquely by a multilinear expression in the binary input variables, and it is said to be quadratic if this expression has degree two or less. In recent years, several authors have proposed to consider the problem of `quadratizing' pseudo-Boolean functions by expressing f(x) as min {g (x,y): y in {0,1}^m } where g(x,y) is a quadratic pseudo-Boolean function of x and of additional binary variables y. We say that g(x,y) is a quadratization of f. In this talk, we investigate the number of additional variables needed in a quadratization. 28 May 2013, 16:00, MS.05 New trends for graph searches In this talk I will survey some new results of computation of graph parameters using graph search, but also how to discover graph structure using a series of graph searches. First I will describe how to evaluate the diameter of a graph using only 4 successive BFS (Breadth First searches). This heuristic is very acurate and can be applied on huge graphs. Furthermore it can be completed by an exact algorithm which seems to be very efficient (more that is worst case complexity). Then I will focuse on cocomparability graphs showing that first LDFS (Lexicographic Depth First Search) can be used to compute a minimum path cover. I will finish by describing some fixed point properties of graph search acting on a cocomparability graph, describing many open problems. 21 May 2013, 16:00, MS.05 Reachability Problems for Words, Matrices and Maps Most computational problems for matrix semigroups and groups are inherently difficult to solve and even undecidable starting from dimension three or four. The examples of such problems are the membership problem (including the special cases of the mortality and identity problems), vector reachability, scalar reachability, freeness problems and emptiness of matrix semigroups intersection. Many questions about the decidability and complexity of problems for two-dimensional matrix semigroups remain open and are directly linked with other challenging problems in the field. In this talk several closely related fundamental problems for words and matrices will be considered. I will survey a number of new techniques and encodings used to solve long standing open problem about reachability of Identity matrix; discuss hardness and decidability results for 2x2 matrix semigroups as well as highlight interconnections between matrix problems, reachability for iterative maps, combinatorics on words and computational problems for braids. 14 May 2013, 16:00, MS.05 Skew Bisubmodularity and Valued CSPs An instance of the Finite-Valued Constraint Satisfaction Problem (VCSP) is given by a finite set of variables, a finite domain of values, and a sum of rational-valued functions, each function depending on a subset of the variables. The goal is to find an assignment of values to the variables that minimises the sum. In this talk I will investigate VCSPs in the case when the variables can take three values and provide a tight description of the tractable cases. Joint work with Andrei Krokhin and Robert Powell. 13 May 2013, 16:00, MS.04 The Tradeoff Between Privacy and Communication Traditional communication complexity scenarios feature several participants, each with a piece of the input, who must cooperate to compute some function of the input. The difficulty of such problems is measured by the amount of communication (in bits) required. However, in more realistic scenarios, participants may wish to keep their own input (personal preferences, banking information, etc.) as private as possible, revealing only the information necessary to compute the function. In the 80s it was shown that most two-player functions are not computable while maintaining perfect privacy (Kushilevitz '89). Unwilling to let this rest, researchers have in recent years defined several relaxed notions of "approximate" privacy. Not surprisingly, protocols providing better privacy can require exponentially more bits than standard "simple" protocols. I will present all background necessary, as well as our result providing tight lower bounds on the tradeoff between privacy and communication complexity. If time permits, we may explore the relation between different notions of privacy, as well as several open questions that stem from these results. This talk will assume no background in communication complexity. 07 May 2013, 16:00, MS.05 Robust algorithms for constraint satisfaction problems In a constraint satisfaction problem (CSP), one is given a set of variables, a set of values for the variables, and constraints on combinations of values that can be taken by specified subsets of variables. An algorithm for a constraint satisfaction problem is called robust if it outputs an assignment satisfying at least a (1-f(epsilon))-fraction of constraints for each (1-epsilon) -satisfiable instance (i.e., such that at most an epsilon-fraction of constraints needs to be removed to make the instance satisfiable), where f approaches 0 as epsilon goes to 0. Barto and Kozik recently characterized all CSPs admitting a robust polynomial algorithm (with doubly exponential f), confirming a conjecture of Guruswami and Zhou. In the present talk, based on joint work with Victor Dalmau, we shall explain how homomorphism dualities, universal algebra, and linear programming are combined to give large classes of CSPs that admit a robust polynomial algorithm with polynomial loss, i.e., with f(epsilon)=O(epsilon^{1/k}) for some k. 30 April 2013, 16:00, MS.05 Coloring (H1,H2)-free graphs A k-coloring of a graph G=(V,E) is a mapping c: V -> {1,2,..,k} such that c(u) is not equal to c(v) whenever u and v are adjacent vertices. The Colouring problem is that of testing whether a given graph has a k-coloring for some given integer k. If a graph contains no induced subgraph isomorphic to any graph in some family {H1,..,Hp}, then it is called (H1,..,Hp)-free. The complexity of Colouring for H1-free graphs is known to be completely classified. For (H1,H2)-free graphs the classification is still open, although many partial results are known. We survey the known results and present a number of new results for this case. 23 April 2013, 16:00, MS.05 On the maximum difference between several graph invariants A graph invariant is a property of graphs that depends only on the abstract structure, not on graph representations such as particular labellings or drawings of the graph. Although bounds on invariants have been studied for a long time by graph theorists, the past few years have seen a surge of interest in the systematic study of linear relations (or other kinds of relations) between graph invariants. We focus our attention on the average distance in a graph as well as on the maximum orders of an induced (linear) forest, an induced tree, an induced bipartite graph, and a stable set. We give upper bounds on differences between some of these invariants, and prove that they are tight. 19 March 2013, 16:00, MS.04 Squared Metric Facility Location Problem and the Upper Bound Factor-Revealing Programs The facility location problem (FLP) aims to place facilities in a subset of locations, serving a given set of clients, and minimizing the cost of opening facilities and connecting clients to facilities. In the Metric FLP (MFLP), the underling distance function is a metric. We consider a variant of the FLP for different distances. Namely, we study the Squared Metric FLP (SMFLP), when the distance function is a squared metric. We analyze algorithms for the MFLP when applied to the SMFLP. Surprisingly, the (now) standard LP-rounding algorithm for the FLP (Chudak and Shmoys 2003, etc.) achieves the approximation lower bound for the SMFLP of 2.040..., unless P = NP. This lower bound is obtained by extending the 1.463... hardness of the metric case. We have also studied known primal-dual algorithms for the MFLP (Jain et al. 2003, and Mahdian, Ye, and Zhang 2006), and showed that they performed well for the SMFLP. More interestingly, we showed how to obtain 'upper bound factor-revealing programs' (UPFRPs) to bound the so called factor-revealing linear programs used in these primal-dual analyses. Solving one UPFRP suffices to obtain the approximation factor, as a more straightforward alternative to analytical proofs, that could be long and tedious. 18 March 2013, 16:00, B3.02 Truthful Mechanisms for Approximating Proportionally Fair Allocations We revisit the classic problem of fair division from a mechanism design perspective and provide a simple truthful mechanism that yields surprisingly good approximation guarantees for the widely used solution concept of Proportional Fairness, a solution concept used in money-free settings. Unfortunately, Proportional Fairness cannot be implemented truthfully. Instead, we have designed a mechanism that discards carefully chosen fractions of the allocated resources so as to induce the agents to be truthful in reporting their valuations. For a multi-dimensional domain with an arbitrary number of agents and items, for all homothetic valuation functions, this mechanism provides every agent with at least a 1/e fraction of her Proportionally Fair valuation. We also uncover a connection between this mechanism and VCG-based mechanism design. Finally, we ask whether better approximation ratios are possible in more restricted settings. In particular, motivated by the massive privatization auction in the Czech republic in the early 90s we provide another mechanism for additive linear valuations that works particularly well when all the items are in high demand. Joint work with Vasilis Gkatzelis and Gagan Goel. 12 March 2013, 16:00, MS.05 Optimal coalition structure and stable payoff distribution in large games Cooperative games with transferable utilities belong to a branch of game theory where groups of players can form coalitions in order to jointly achieve the groups’ objectives. Two key questions that arise in cooperative game theory are (a) How to optimally form coalitions of players? and (b) How to share/distribute the cost/reward among the players? The first problem can be viewed as a complete set covering problem which can be formulated as an MILP with an exponentially large number of binary variables. For the second problem, the nucleolus and the Shapley value are often considered as the most important solution concepts thank to their attractive properties. However, computing the nucleolus is extremely difficult and existing methods can only solve games with less than 30 players. In this talk, we review some applications of cooperative game theory and present recent developments in solving these two problems. We test the algorithms with a number of simulated games with up to 400 5 March 2013, 16:00, MS.05 On bounded degree spanning trees in the random graph The appearence of certain spanning subraphs in the random graph is a well-studied phenomenon in probabilistic graph theory. In this talk, we present results on the threshold for the appearence of bounded-degree spanning trees in G(n,p) as well as for the corresponding universality statements. In particular, we show hitting time thresholds for some classes of bounded degree spanning trees. Joint work with Daniel Johannsen and Michael Krivelevich. 26 February 2013, 16:00, MS.05 More on perfect matchings in uniform hypergraphs Last year I gave a talk at Warwick on minimum degree conditions which force a hypergraph to contain a perfect matching. In this self-contained talk I will discuss recent work with Yi Zhao (Georgia State University) on this problem: Given positive integers k and r with k/2 ≤ r ≤ k−1, we give a minimum r-degree condition that ensures a perfect matching in a k-uniform hypergraph. This condition is best possible and improves on work of Pikhurko, who gave an asymptotically exact result. Our approach makes use of the absorbing method. 19 February 2013, 16:00, MS.05 Ramsey multiplicity Ramsey's theorem states that for any graph H there exists an n such that any 2-colouring of the complete graph on n vertices contains a monochromatic copy of H. Moreover, for large n, one can guarantee that there are at least c_H n^v copies of H, where v is the number of vertices in H. In this talk, we investigate the problem of optimising the constant c_H, focusing in particular on the case of complete graphs. 12 February 2013, 16:00, MS.05 On Optimality of Deterministic and Non-Deterministic Transformations Dynamical systems can be modelled by Markov semigroups, and we use Markov transition kernels to model computational machines and algorithms. We study points on a simplex of all joint probability measures corresponding to various types of transition kernels. In particular, deterministic or non-deterministic (e.g. randomised) algorithms correspond respectively to boundary or interior points of the simplex. The performance of the corresponding systems is evaluated by expected cost (or expected utility), which is a linear functional. The domain of optimisation is defined by information constraints. We use our result on mutual absolute continuity of optimal measures to show that optimal transition kernels cannot be deterministic, unless information is unbounded. As an illustration, we construct an example where any deterministic kernel can only have unbounded expected cost, unless the information constraint is removed. On the other hand, a non-deterministic kernel can have finite expected cost under the same information constraint. 6 February 2013, 16:00, D1.07 (Zeeman Building) Approximation Algorithm for the Resource Dependent Assignment Problem Assignment problems deal with the question of how to assign a set of n agents to a set of n tasks such that each task is performed only once and each agent is assigned to a single task so as to minimize a specific predefined objective. We define a special kind of assignment problem where the cost of assigning agent j to task i is not a constant cost but rather a function of the amount of resource allocated to this particular task. We assume this function, known as the resource consumption function, to be convex in order to ensure the law of diminishing marginal returns. The amount of resource allocated to each task is a continuous decision variable. One variation of this problem, proven to be NP-hard, is minimizing the total assignment cost when the total available resource is limited. In this research we provide an approximation algorithm to solve this problem, where a ρ-approximation algorithm is an algorithm that runs in polynomial time and produces a solution with cost within a factor of at most ρ of the optimal solution. 29 January 2013, 16:00, MS.05 Strategy iteration is strongly polynomial for 2-player turn-based stochastic games with a constant discount factor A fundamental model of operations research is the finite, but infinite-horizon, discounted Markov Decision Process. Ye showed recently that the simplex method with Dantzig pivoting rule, as well as Howard's policy iteration algorithm, solve discounted Markov decision processes, with a constant discount factor, in strongly polynomial time. More precisely, Ye showed that for both algorithms the number of iterations required to find the optimal policy is bounded by a polynomial in the number of states and actions. We improve Ye's analysis in two respects. First, we show a tighter bound for Howard's policy iteration algorithm. Second, we show that the same bound applies to the number of iterations performed by the strategy iteration (or strategy improvement) algorithm used for solving 2-player turn-based stochastic games with discounted zero-sum rewards. This provides the first strongly polynomial algorithm for solving these games. Joint work with Peter Bro Miltersen and Uri Zwick. 22 January 2013, 16:00, MS.05 A new method of searching a network An object (hider) is at an unknown point on a given network Q, not necessarily at a node. Starting from a known point (known to the hider), a searcher moves around the network to minimize the time T required to find (reach) the hider. The hider's location may be a known distribution or one chosen by the hider to make T large. We study the Bayesian problem where the hider's distribution over Q is known and also the game problem where it is chosen by an adversarial hider. Two types of searcher motion (continuous search or expanding search) are considered. 15 January 2013, 16:00, MS.05 Algorithms for optimization over noisy data To deal with NP-hard reconstruction problems, one natural possibility consists in assuming that the input data is a noisy version of some unknown ground truth. We present two such examples: correlation clustering, and transitive tournaments. In correlation clustering, the goal is to partition data given pairwise similarity and dissimilarity information, and sometimes semi-definite programming can, with high probability, reconstruct the optimal (maximum likelihood) underlying clustering. The proof uses semi-definite programming duality and the properties of eigenvalues of random matrices. The transitive tournament problem asks to reverse the fewest edge orientations to make an input tournament transitive. In the noisy setting it is possible to reconstruct the underlying ordering with high probability using simple dynamic programming. 8 January 2013, 16:00, MS.05 On the factorial layer of hereditary classes of graphs A class of graphs is hereditary if it is closed under taking induced subgraphs. It is known that the rates of growth of the number of n-vertex labeled graphs in hereditary classes constitute discrete layers. Alekseev and independently Balogh, Bollobas and Weinreich obtained global structural characterizations of hereditary classes in the first few lower layers. The minimal layer for which no such characterization is known is the factorial one, i.e. the layer containing classes with the factorial speed of growth of the number of n-vertex labeled graphs. Many classes of theoretical or practical importance belong to this layer, including forests, planar graphs, line graphs, interval graphs, permutation graphs, threshold graphs etc. In this talk we discuss some approaches to obtaining structural characterization of the factorial layer and related results. 27 November 2012, 16:00, MS.05 Turan Numbers of Bipartite Graphs Plus an Odd Cycle The theory of Turan numbers of non-bipartite graphs is quite well-understood, but for bipartite graphs the field is wide open. Many of the main open problems here were raised in a series of conjectures by Erdos and Simonovits in 1982. One of them can be informally stated as follows: for any family F of bipartite graphs, there is an odd number k, such that the extremal problem for forbidding F and all odd cycles of length at most k within a general graph is asymptotically equivalent to that of forbidding F within a bipartite graph. In joint work with Sudakov and Verstraete we proved a stronger form of this conjecture, with stability and exactness, for some specific families of even cycles. Also, in joint work with Allen, Sudakov and Verstraete, we gave a general approach to the conjecture using Scott's sparse regularity lemma. This proves the conjecture for some specific complete bipartite graphs, and moreover is effective for any F based on some reasonable assumptions on the maximum number of edges in a bipartite F-free graph, which are similar to the conclusions of another conjecture of Erdos and Simonovits. 20 November 2012, 16:00, MS.05 Distributionally Robust Convex Optimisation Distributionally robust optimisation is a decision-making paradigm that caters for situations in which the probability distribution of the uncertain problem parameters is itself subject to uncertainty. The distribution is then typically assumed to belong to an ambiguity set comprising all distributions that share certain structural or statistical properties. In this talk, we propose a unifying framework for modelling and solving distributionally robust optimisation problems. We introduce standardised ambiguity sets that contain all distributions with prescribed conic representable confidence sets and with mean values residing on an affine manifold. It turns out that these ambiguity sets are sufficiently expressive to encompass and extend several approaches from the recent literature. They also allow us to accommodate a variety of distributional properties from classical and robust statistics that have not yet been studied in the context of robust optimisation. We determine sharp conditions under which distributionally robust optimisation problems based on our standardised ambiguity sets are computationally tractable. We also provide tractable conservative approximations for problems that violate these conditions. 13 November 2012, 16:00, MS.05 A Tight, Combinatorial Algorithm for Submodular Matroid Maximization In recent joint work with Yuval Filmus, I considered the problem of maximizing a submodular function subject to a single matroid constraint, obtaining a novel 1 - 1/e approximation algorithm based on non-oblivious local search. This is the same approximation ratio previously attained by the "continuous greedy" algorithm of Calinescu, Chekuri, Vondrak, and Pal, and is the best possible in polynomial time for the general value oracle setting (or, alternatively, assuming that P ≠ NP). Unlike the continuous greedy algorithm, however, our algorithm is purely combinatorial, and extremely simple; it requires no rounding and considers only integral solutions. In this talk, I will give a brief overview of both the problem and prior algorithmic approaches, and present our new algorithm, together with some of its analysis. 30 October 2012, 16:00, MS.05 Local Optimality in Algebraic Path Problems Due to complex policy constraints, some Internet routing protocols are associated with non-standard metrics that fall outside of the approach to path problems based on semirings and "globally optimal" paths. Some of these exotic metrics can be captured by relaxing the semiring axioms to include algebras that are not distributive. A notion of "local optimality" can be defined for such algebras as a fixed-point of a matrix equation, which models solutions required by Internet routing protocols. Algorithms for solving such equations are explored, as well as applications beyond network routing. 23 October 2012, 16:00, MS.05 Semi-local String Comparison The computation of a longest common subsequence (LCS) between two strings is a classical algorithmic problem. Some applications require a generalisation of this problem, which we call semi-local LCS. It asks for the LCS between a string and all substrings of another string, and/or the LCS between all prefixes of one string and all suffixes of another. Apart from an important role that this generalised problem plays in string algorithms, it turns out to have surprising connections with semigroup algebra, computational geometry, planar graph algorithms, comparison networks, as well as practical applications in computational biology. The talk will present an efficient solution for the semi-local LCS problem, and will survey some related results and applications. Among these are dynamic LCS support; fast clique computation in special graphs; fast comparison of compressed strings; parallel string algorithms. 16 October 2012, 16:00, MS.05 Lyndon Words and Short Superstrings In the Shortest-Superstring problem, we are given a set of strings S and want to find a string that contains all strings in S as substrings and has minimum length. This is a classical problem in approximation and the best known approximation factor is 2 1/2, given by Sweedyk in 1999. Since then, no improvement has been made, however two other approaches yielding a 2 1/2-approximation algorithms have been proposed by Kaplan et al. and recently by Paluch et al., both based on a reduction to maximum asymmetric TSP path (Max-ATSP-Path) and structural results of Breslauer et al. In this talk we give an algorithm that achieves an approximation ratio of 2 11/23, breaking through the long-standing bound of 2 1/2. We use the standard reduction of Shortest-Superstring to Max-ATSP-Path. The new, somewhat surprising, algorithmic idea is to take the better of the two solutions obtained by using: (a) the currently best 2/3-approximation algorithm for Max-ATSP-Path and (b) a naive cycle-cover based 1/2-approximation algorithm. To prove that this indeed results in an improvement, we further develop a theory of string overlaps, extending the results of Breslauer et al. This theory is based on the novel use of Lyndon words, as a substitute for generic unbordered rotations and critical factorizations, as used by Breslauer et al. 9 October 2012, 16:00, MS.05 The Power of Linear Programming for Valued CSPs The topic of this talk is Valued Constraint Satisfaction Problems (VCSPs) and the question of how VCSPs can be solved efficiently. This problem can also be cast as how to minimise separable functions efficiently. I will present algebraic tools that have been developed for this problem and will also mention a recent result on the connection between linear programming and VCSPs (based on a paper with J. Thapper, to appear in FOCS'12). 4 July 2012, 16:00, MS.05 Algorithmic Applications of Baur-Strassen's Theorem: Shortest Cycles, Diameter and Matchings Consider a directed or an undirected graph with integral edge weights from the set [-W, W], that does not contain negative weight cycles. In this paper, we introduce a general framework for solving problems on such graphs using matrix multiplication. The framework is based on the usage of Baur-Strassen's theorem and of Strojohann's determinant algorithm. It allows us to give new and simple solutions to the following problems: • Finding Shortest Cycles - We give a simple Õ(Wn^ω) time algorithm for finding shortest cycles in undirected and directed graphs. For directed graphs this matches the time bounds obtained in 2011 by Roditty and Vassilevska-Williams. On the other hand, no algorithm working in Õ(Wn^ω) time was previously known for undirected graphs with negative weights. • Computing Diameter - We give a simple Õ(Wn^ω) time algorithm for computing a diameter of an undirected or directed graphs. This considerably improves the bounds of Yuster from 2010, who was able to obtain this time bound only in the case of directed graphs with positive weights. To the contrary, our algorithm works in the same time bound for both directed and undirected graphs with negative weights. • Finding Minimum Weight Perfect Matchings - We present an Õ(Wn^ω) time algorithm for finding minimum weight perfect matchings in undirected graphs. This resolves an open problem posted by Sankowski in 2006, who presented such an algorithm but only in the case of bipartite graphs. We believe that the presented framework can find applications for solving larger spectra of related problems. As an illustrative example we apply it to the problem of computing a set of vertices that lie on cycles of length at most t, for some t. We give a simple Õ(Wn^ω) time algorithm for this problem that improves over the Õ(tWn^ω) time algorithm given by Yuster in 2011. This is joint work work with Marek Cygan and Harold N. Gabow. A preliminary version of the paper appeared in arxiv.1204.1616. 29 Jun 2012, 14:00, MS.05 Analyzing Graphs via Random Linear Projections We present a sequence of algorithmic results for analyzing graphs via random linear projections or "sketches". We start with results for evaluating basic connectivity and k-connectivity and then use these primitives to construct combinatorial sparsifiers that allow every cut to be approximated up to a factor 1+ε. Our results have numerous applications including single-pass stream algorithms for constructing sparsifiers in fully-dynamic graph streams where edges can be added and deleted in the underlying graph. This is joint work work with Kook Jin Ahn and Sudipto Guha. 19 Jun 2012, 16:00, MS.05 Pricing on Paths: A PTAS for the Highway Problem In the highway problem, we are given an n-edge line graph (the highway), and a set of paths (the drivers), each one with its own budget. For a given assignment of edge weights (the tolls), the highway owner collects from each driver the weight of the associated path, when it does not exceed the budget of the driver, and zero otherwise. The goal is to choose weights so as to maximize the total profit. A lot of research has been devoted to this apparently simple problem. The highway problem was shown to be strongly NP-hard only recently [Elbassioni,Raman,Ray,Sitters-'09]. The best-known approximation is O(log n/loglog n) [Gamzu,Segev-'10], which improves on the previous-best O(log n) approximation [Balcan,Blum-'06]. Better approximations are known for a number of special cases. In this work we present a PTAS for the highway problem, hence closing the complexity status of the problem. Our result is based on a novel randomized dissection approach, which has some points in common with Arora's quadtree dissection for Euclidean network design [Arora-'98]. The basic idea is enclosing the highway in a bounding path, such that both the size of the bounding path and the position of the highway in it are random variables. Then we consider a recursive O(1)-ary dissection of the bounding path, in subpaths of uniform optimal weight. Since the optimal weights are unknown, we construct the dissection in a bottom-up fashion via dynamic programming, while computing the approximate solution at the same time. Our algorithm can be easily derandomized. The same basic approach provides PTASs also for two generalizations of the problem: the tollbooth problem with a constant number of leaves and the maximum-feasibility subsystem problem on interval matrices. In both cases the previous best approximation factors are polylogarithmic [Gamzu,Segev-'10,Elbassioni,Raman,Ray,Sitters-'09]. Joint work with Thomas Rothvoß. 06 Jun 2012, 11:00, MS.03 Improved Lower Bounds on Crossing Numbers of Graphs Through Optimization The crossing number problem for graphs is to draw (or embed) a graph in the plane with a minimum number of edge crossings. Crossing numbers are of interest for graph visualization, VLSI design, quantum dot cellular automata, RNA folding, and other applications. On the other hand, the problem is notoriously difficult. In 1973, Erdös and Guy wrote that: "Almost all questions that one can ask about crossing numbers remain unsolved." For example, the crossing numbers of complete and complete bipartite graphs are still unknown in general. The case of the complete bipartite graph is known as Turán's brickyard problem, and was already posed by Paul Turán in the 1940's. Moreover, even for cubic graphs, it is NP-hard to compute the crossing number. Different types of crossing numbers may be defined by restricting drawings; thus the k-page (book) crossing number corresponds to drawings where all vertices are drawn on a line (the spine of a book), and each edge on one of k planes intersecting the spine (the book pages). It is conjectured that the two-page and normal crossing numbers coincide for complete and complete bipartite graphs. In this talk, we will survey some recent results, where improved lower bounds were obtained for (k-page) crossing numbers of complete and complete bipartite graphs through the use of optimization techniques. (Joint work with D.V. Pasechnik and G. Salazar) 22 May 2012, 16:00, MS.05 Arithmetic Progressions in Sumsets via (Discrete) Probability, Geometry and Analysis If A is a large subset of {1,...,N}, then how long an arithmetic progression must A+A = {a+b : a,b in A} contain? Answers to this ostensibly combinatorial question were given by Bourgain and then Green, both using some very beautiful Fourier analytic techniques. Here I plan to discuss a new attack on the problem that rests on a lemma in discrete geometry, proven by a random sampling technique, and applied in a discrete analytic context. We shall not use any deep results, and we shall try to keep things as self-contained as possible. Based on joint work with Ernie Croot and Izabella Laba. 18 May 2012, 16:00, B3.02 Induced Matchings, Arithmetic Progressions and Communication Extremal Combinatorics is one of the central branches of discrete mathematics which deals with the problem of estimating the maximum possible size of a combinatorial structure which satisfies certain restrictions. Often, such problems have also applications to other areas including Theoretical Computer Science, Additive Number Theory and Information Theory. In this talk we will illustrate this fact by several closely related examples focusing on a recent work with Alon and Moitra. 15 May 2012, 16:00, MS.05 Treewidth Reduction Theorem and Algorithmic Problems on Graphs We introduce the so-called Treewidth Reduction Theorem. Given a graph G, two specified vertices s and t, and an integer k, let C be the union of all minimal s-t (vertex) separators of size at most k. Furthermore, let G^* be the graph obtained from G by contracting all the connected components of G - C into single vertices. The theorem states that the treewidth of G^* is bounded by a function of k and that the graph G^* can be computed in linear time for any fixed k. The above theorem allows us to solve the following generic graph separation problem in linear time for every fixed k. Let G be a graph with two specified vertices s and t and let Z be a hereditary class of graphs. The problem asks if G has an s-t vertex separator S of size at most k such that the subgraph induced by S belongs to the class Z. In other words, we show that this generic problem is fixed-parameter tractable. This allows us to resolve a number of seemingly unrelated open questions scattered in the literature concerning fixed-parameter tractability of various graph separation problems under specific constraints. The role of the Treewidth Reduction Theorem is that it reduces an arbitrary instance of the given problem to an instance of the problem where the treewidth is bounded. Then the standard methodology using Courcelle's theorem can be applied. The purpose of this talk is to convey the main technical ideas of the above work at an intuitive level. The talk is self-contained. In particular, no prior knowledge of parameterized complexity, treewidth, and Courcelle's theorem is needed. Everything will be intuitively defined in the first 10-15 minutes of the talk. Joint work with D. Marx and B. O'Sullivan. Available at http://arxiv.org/abs/1110.4765. 8 May 2012, 16:00, MS.05 Exact Quantum Query Algorithms The framework of query complexity is a setting in which quantum computers are known to be significantly more powerful than classical computers. In this talk, I will discuss some new results in the model of exact quantum query complexity, where the goal is to compute a boolean function f with certainty using the smallest possible number of queries to the input. It is known that quantum algorithms exist in this model which can achieve significant speed-ups over any possible classical algorithm; however, when f is a total function (ie. there is no promise on the input) the best quantum speed-up known is a factor of 2. I will present several families of total boolean functions which have exact quantum query complexity which is a constant fraction of their classical query complexity. These results were originally inspired by numerically solving the semidefinite programs characterising quantum query complexity for small problem sizes. I will also discuss the model of nonadaptive exact quantum query complexity, which can be characterised in terms of coding theory. The talk will be based on the paper arXiv:1111.0475, which is joint work with Richard Jozsa and Graeme Mitchison. 1 May 2012, 16:00, MS.05 Representing Graphs by Words A simple graph G=(V,E) is (word-)representable if there exists a word W over the alphabet V such that any two distinct letters x and y alternate in W if and only if (x,y) is an edge in E. If W is k-uniform (each letter of W occurs exactly k times in it) then G is called k-representable. It is known that a graph is representable if and only if it is k-representable for some k. The minimum k for which a representable graph G is k-representable is called its representation number. Representable graphs first appeared in algebra, in study of the Perkins semigroup, which has played a central role in semigroup theory since 1960, particularly as a source of examples and counterexamples. However, these graphs have connections to robotic scheduling and they are interesting from combinatorial and graph theoretical point of view (for example, representable graphs are a generalization of circle graphs, which are exactly 2-representable graphs). Some questions one can ask about representable graphs are as follows. Are all graphs representable? How do we characterize those graphs that are (non-)representable? How many representable graphs are there? How large can the representation number be for a graph on n nodes? In this talk, we will go through these and some other questions stating what progress has been made in answering them. In particular, we will see that a graph is representable if and only if it admits a so-called semi-transitive orientation. This allows us to prove a number of results about representable graphs, not least that 3-colorable graphs are representable. We also prove that the representation number of a graph on n nodes is at most n, from which one concludes that the recognition problem for representable graphs is in NP. This bound is tight up to a constant factor, as there are graphs whose representation number is n/2. 24 April 2012, 16:00, MS.05 Making Markov Chains Less Lazy There are only a few methods for analysing the rate of convergence of an ergodic Markov chain to its stationary distribution. One is the canonical path method of Jerrum and Sinclair. This method applies to Markov chains which have no negative eigenvalues. Hence it has become standard practice for theoreticians to work with lazy Markov chains, which do absolutely nothing with probability 1/2 at each step. This must be frustrating for practitioners, who want to use the most efficient Markov chain possible. I will explain how laziness can be avoided by the use of a twenty-year old lemma of Diaconis and Stroock's, or my recent modification of that lemma. Other relevant approaches will also be discussed. A strength of the new result is that it can be very easy to apply. We illustrate this by revisiting the analysis of Jerrum and Sinclair's well-known chain for sampling perfect matchings of a graph. 16 April 2012, 16:00, MS.04 Polynomial-Time Approximation Schemes for Shortest Path with Alternatives Consider the generic situation that we have to select k alternatives from a given ground set, where each element in the ground set has a random arrival time and cost. Once we have done our selection, we will greedily select the first arriving alternative, and the total cost is the time we had to wait for this alternative plus its random cost. Our motivation to study this problem comes from public transportation, where each element in the ground set might correspond to a bus or train, and the usual user behavior is to greedily select the first option from a given set of alternatives at each stop. We consider the arguably most natural arrival time distributions for such a scenario: exponential distributions, uniform distributions, and distributions with mon. decreasing linear density functions. For exponential distributions, we show how to compute an optimal policy for a complete network, called a shortest path with alternatives, in O(n(log n + δ^3)) time, where n is the number of nodes and δ is the maximal outdegree of any node, making this approach practicable for large networks if δ is relatively small. Moreover, for the latter two distributions, we give PTASs for the case that the distribution supports differ by at most a constant factor and only a constant number of hops are allowed in the network, both reasonable assumptions in practice. These results are obtained by combining methods from low-rank quasi-concave optimization with fractional programming. We finally complement them by showing that general distributions are NP-hard. 23 March 2012, 16:00, MS.05 Minimum Degree Thresholds for Perfect Matchings in Hypergraphs Given positive integers k and r where 4 divides k and k/2 ≤ r ≤ k-2, we give a minimum r-degree condition that ensures a perfect matching in a k-uniform hypergraph. This condition is essentially best possible and improves on work of Pikhurko, who gave an asymptotically exact result. Our approach makes use of the Hypergraph Removal Lemma as well as a structural result of Keevash and Sudakov relating to the Turan number of the expanded triangle. This is joint work with Yi Zhao. 21 March 2012, 16:00, MS.05 Robust Optimization over Integers Robust optimization is an approach for optimization under uncertainty that has recently attracted attention both from theory and practitioners. While there is an elaborate and powerful machinery for continuous robust optimization problems, results on robust combinatorial optimization and robust linear integer programs are still rare and hardly general. We consider robust counterparts of integer programs and combinatorial optimization problems, i.e., seek solutions that stay feasible if at most Γ-many parameters change within a given range. We show that one can optimize a not necessarily binary, cost robust problem, for which one can optimize a slightly modified version of the deterministic problem. Further, in case there is a ρ-approximation for the modified deterministic problem, we give a method for the cost robust counterpart to attain a (ρ+ε)-approximation (for minimization problems; for maximization we get a 2ρ-approximation), or again a ρ-approximation in a slightly more restricted case. We further show that general integer linear programs where a single or few constraints are subject to uncertainty can be solved, in case the problem can be solved for constraints on piecewise linear functions. In case these programs are binary, it suffices to solve the underlying non-robust program (n+1) times. We demonstrate the applicability of our approach on two classes of integer programs, namely, totally unimodular integer programs and integer programs with two variables per inequality. Further, for combinatorial optimization problems our method yields polynomial time approximations and pseudopolynomial, exact algorithms for robust Unbounded Knapsack Problems. This is joint work with Kai-Simon Goetzmann (TU Berlin) and Claudia Telha (MIT). 20 March 2012, 16:00, MS.05 The Complexity of Computing the Sign of the Tutte Polynomial The Tutte polynomial of a graph is two-variable polynomial that captures many interesting properties of the graph. In this talk, I will describe the polynomial and then discuss the complexity of computing the sign of the polynomial. Having fixed the two parameters, the problem is, given an input graph, determine whether the value of the polynomial is positive, negative, or zero. We determine the complexity of this problem over (most of) the possible settings of the parameters. Surprisingly, for a large portion of the parameter space, the problem is #P-complete. (This is surprising because the problem feels more like a decision problem than a counting problem --- in particular, there are only three possible outputs.) I'll discuss the ramifications for the complexity of approximately evaluating the polynomial. As a consequence, this resolves the complexity of computing the sign of the chromatic polynomial. Here there is a phase transition at q=32/27, which I will explain. The talk won't assume any prior knowledge about graph polynomials or the complexity of counting. (Joint work with Mark Jerrum.) 13 March 2012, 16:00, MS.05 Combinatorics of Tropical Linear Algebra Tropical linear algebra is an emerging and rapidly evolving area of idempotent mathematics, linear algebra and applied discrete mathematics. It has been designed to solve a class of non-linear problems in mathematics, operational research, science and engineering. Besides the main advantage of dealing with non-linear problems as if they were linear, the techniques of tropical linear algebra enable us to efficiently describe complex sets, reveal combinatorial aspects of problems and view the problems in a new, unconventional way. Since 1995 we have seen a remarkable expansion of this field following a number of findings and applications in areas as diverse as algebraic geometry, phylogenetics, cellular protein production, the job rotation problem and railway scheduling. We will give an overview of selected combinatorial aspects of tropical linear algebra with emphasis on set coverings, cycles in digraphs, transitive closures and the linear assignment problem. An application to multiprocessor interactive systems and a number of open problems will be presented. 6 March 2012, 16:00, MS.05 Every Property of Hyperfinite Graphs is Testable The analysis of complex networks like the webgraph, social networks, metabolic networks or transportation networks is a challenging problem. One problem that has drawn a significant amount of attention is the question to classify the domain to which a given network belongs, i.e. whether it is, say, a social network or a metabolic network. One empirical approach to solve this problem uses the concept of network motifs. A network motif is a subgraph that appears more frequently in a certain class of graphs than in a random graph. This approach raises the theoretical question about the structural properties we can learn about a graph by looking at small subgraphs and how one can analyze graph structure by looking at random samples, as these will typically contain frequent Obviously, it is not possible to to analyze classical structural properties like, for example, connectivity by only looking at small subgraphs. One needs a relaxed and more robust definition of graph properties. Such a definition is given by the concept of property testing. In my talk and within the framework of property testing I will give a partial answer to the question what we can learn about graph properties from the distribution of constant sized subgraphs. I will show that every planar graph with constant maximum degree is defined up to epsilon n edges by its distribution (frequency) of subgraphs of constant size. This result implies that every property of planar graphs is testable in the property testing sense. (Joint work with Ilan Newman) 5 March 2012, 10:00, MS.05 Improved Approximations for Monotone Submodular k-Set Packing and General k-Exchange Systems In the weighted k-set packing problem, we are given a collection of k-element sets, each with a weight, and seek a maximum weight collection of pairwise disjoint sets. In this talk, we consider a generalization of this problem in which the goal is to find a pairwise disjoint collection of sets that maximizes a monotone submodular function. We present a novel combinatorial algorithm for the problem, which is based on the notion of non-oblivious local search, in which a standard local search process is guided by an auxiliary potential function distinct from the problem's objective. Specifically, we use a potential function inspired by an algorithm of Berman for weighted maximum independent set in (k+1)-claw free graphs. Unfortunately, moving from the linear (weighted) case to the monotone submodular case introduces several difficulties, necessitating a more nuanced approach. The resulting algorithm guarantees a (k + 3)/2 approximation, improving on the performance of the standard, oblivious local search algorithm by a factor of nearly 2 for large k. More generally, we show that our algorithm applies to all problems that can be formulated as k-exchange systems, which we review in this talk. This class of independence systems, introduced by Feldman, Naor, Schwartz, and Ward, generalize the matroid k-parity problem in a wide class of matroids and capture many other combinatorial optimization problems. Such problems include matroid k-parity in strongly base orderable matroids, independent set in (k + 1)-claw free graphs (which includes k-set packing as a special case), k-uniform hypergraph b-matching, and maximum asymmetric traveling salesperson (here, k = 3). Our non-oblivious local search algorithm improves on the current state-of-the-art approximation performance for many of these specific problems, as well. 28 February 2012, 16:00, MS.05 Hybridising Heuristic and Exact Methods to Solve Scheduling Problems The research community has focussed on the use of heuristics and meta-heuristic methods to solve real life scheduling problems, as such problems are too large to solve exactly. However there is much to learn and utilise from exact models. This talk will explain how hybridising exact methods within heuristic techniques can enable better solutions to be obtained. Specifically, exact methods will be used to ensure that feasible solutions are guaranteed, allowing the heuristic to focus on improving secondary objectives. Two test cases will be described. The first is a nurse rostering problem where a knapsack model is used to ensure feasibility, allowing a tabu search method to locate high quality solutions. The second is the problem of allocating medical students to clinical specialities over a number of time periods. A network flow model is hybridised within a Greedy Randomised Adaptive Search Procedure framework. It will be demonstrated that this produces better solutions than using GRASP on its own. 14 February 2012, 16:00, MS.05 Local Matching Dynamics in Social Networks Stable marriage and roommates problems are the classic approach to model resource allocation with incentives and occupy a central position in the intersection of computer science and economics. There are a number of players that strive to be matched in pairs, and the goal is to obtain a stable matching from which no pair of players has an incentive to deviate. In many applications, a player is not aware of all other players and must explore the population before finding a good match. We incorporate this aspect by studying stable matching under dynamic locality constraints in social networks. Our interest is to understand local improvement dynamics and their convergence to matchings that are stable with respect to their imposed information structure in the network. 6 February 2012, 17:00, MS.03 Approximating Graphic TSP by Matchings We present a framework for approximating the metric TSP based on a novel use of matchings. Traditionally, matchings have been used to add edges in order to make a given graph Eulerian, whereas our approach also allows for the removal of certain edges leading to a decreased cost. For the TSP on graphic metrics (graph-TSP), the approach yields a 1.461-approximation algorithm with respect to the Held-Karp lower bound. For graph-TSP restricted to a class of graphs that contains degree three bounded and claw-free graphs, we show that the integrality gap of the Held-Karp relaxation matches the conjectured ratio 4/3. The framework allows for generalizations in a natural way and also leads to a 1.586-approximation algorithm for the traveling salesman path problem on graphic metrics where the start and end vertices are prespecified. 31 January 2012, 16:00, MS.05 The Quadratic Assignment Problem: on the Borderline Between Hard and Easy Special Cases The quadratic assignment problem (QAP) is a well studied and notoriously hard combinatorial optimisation problem both from the theoretical and the practical point of view. Other well known combinatorial optimisation problems as for example the travelling salesman problem or the linear arrangement problem can be seen as special cases of the QAP. In this talk we will first introduce the problem and briefly review some important computational complexity results. Then we will focus on polynomially solvable special cases and introduce structural properties of the coefficient matrices of the QAP which guarantee the efficient solvability of the problem. We will show that most of these special cases have the so called constant permutation property, meaning that the optimal solution of the problem does not depend on the concrete realization of the coefficient matrices as soon as those matrices possess the structural properties which turn the QAP polynomially solvable as mentioned above. We will show that the borderline between hard and easy special cases is quite thin, in the sense that slight perturbations of the structural properties mentioned above are enough to produce NP-hard special cases again. We will conclude with an outlook of further research and some open special cases which could be well worth analysing next in terms of computational complexity. 25 January 2012, 16:00, MS.05 Extended Formulations in Combinatorial Optimization Applying the polyhedral method to a combinatorial optimization problem usually requires a description of the convex hull P of the set of feasible solutions. Typically, P is determined by an exponential number of inequalities, and a complete description is often hopeless, even for polynomial-time solvable problems. In some cases, P can be represented in a simpler way as the projection of some polyhedron Q in a higher-dimensional space, where often Q is defined by a much smaller system of constraints than P. In this talk we discuss techniques to obtain extended formulations for combinatorial optimization problems, and cases where extended formulations prove useful. The work presented includes papers with M. Conforti, A. Del Pia, B. Gerards, and L. Wolsey. 24 January 2012, 16:00, MS.05 The "Power of ..." Type Results in Parallel Machine Scheduling In this talk, I will present the results on parametric analysis of the power of pre-emption for two and three uniform machines, with the speed of the fastest machine as a parameter. For identical parallel machines, I will present a series of results on the impact that adding an extra machine may have on the makespan and the total completion time. For the latter models, the solution approaches to the problem of the cost-optimal choice of the number of machines are reported. 10 January 2012, 16:00, MS.05 Robust Markov Decision Processes Markov decision processes (MDPs) are powerful tools for decision making in uncertain dynamic environments. However, the solutions of MDPs are of limited practical use due to their sensitivity to distributional model parameters, which are typically unknown and have to be estimated by the decision maker. To counter the detrimental effects of estimation errors, we consider robust MDPs that offer probabilistic guarantees in view of the unknown parameters. To this end, we assume that an observation history of the MDP is available. Based on this history, we derive a confidence region that contains the unknown parameters with a pre-specified probability 1-β. Afterwards, we determine a policy that attains the highest worst-case performance over this confidence region. By construction, this policy achieves or exceeds its worst-case performance with a confidence of at least 1-β. Our method involves the solution of tractable conic programs of moderate size. 6 December 2011, 16:00, MS.05 Independent Sets in Hypergraphs We say that a hypergraph is stable if each sufficiently large subset of its vertices either spans many hyperedges or is very structured. Hypergraphs that arise naturally in many classical settings posses the above property. For example, the famous stability theorem of Erdős and Simonovits and the triangle removal lemma of Ruzsa and Szemerédi imply that the hypergraph on the vertex set E(K_n) whose hyperedges are the edge sets of all triangles in K_n is stable. In the talk, we will present the following general theorem: If (H_n)_n is a sequence of stable hypergraphs satisfying certain technical conditions, then a typical (i.e., uniform random) m-element independent set of H_n is very structured, provided that m is sufficiently large. The above abstract theorem has many interesting corollaries, some of which we will discuss. Among other things, it implies sharp bounds on the number of sum-free sets in a large class of finite Abelian groups and gives an alternate proof of Szemerédi's theorem on arithmetic progressions in random subsets of integers. Joint work with Noga Alon, József Balogh, and Robert Morris. 5 December 2011, 14:00, MS.05 Creating a Partial Order and Finishing the Sort In 1975, Mike Fredman showed that, given a set of distinct values satisfying an arbitrary partial order, one can finish sorting the set in a number of comparisons equal to the information theory bound (Opt) plus at most 2n. However, it appeared that determining what comparisons to do could take exponential time. Shortly after this several people, including myself, wondered about the complementary problem Fredman's "starting point" of arranging elements into a given partial order. My long term conjecture was that one can do this, using a number of comparisons equal to the information theory lower bound for the problem plus a lower order term plus O(n) through a reduction to multiple selection. Along the way some key contributions to the problems were made, including the development of graph entropy and the work of Andy Yao (1989) and Jeff Kahn and Jeong Hang Kim (1992). The problems were both solved, with Jean Cardinal, Sam Fiorini, Gwen Joret, Raph Jungers (2009, 2010), using the techniques of graph entropy, multiple selection and merging in polynomial time to determine the anticipated Opt + o(Opt) + O(n) comparisons. The talk will discuss this long term development and some amusing stops along the way. 29 November 2011, 16:00, MS.05 Statistical Mechanics Approach to the Problem of Counting Large Subgraphs in Random Graphs Counting the number of distinct copies of a given pattern in a graph can be a difficult problem when the sizes of both the pattern and the graph get large. This talk will review statistical mechanics approaches to some instances of this problem, focusing in particular on the enumeration of large matchings and long circuits of a graph. The outcomes of these methods are conjectures on the typical number of such patterns in large random graphs, and heuristic algorithms for counting and constructing them. In the case of the matchings, the heuristic statistical mechanics method has been also turned into a mathematically rigorous proof. 22 November 2011, 16:00, MS.05 Applications and Studies in Modular Decomposition The modular decomposition is a powerful tool for describing combinatorial structures (such as graphs, tournaments, posets or permutations) in terms of smaller ones. Since its appearance in a talk by Fraisse in the 1950s, and first appearance in print by Gallai in the 1960s, it has appeared in a wide variety of settings ranging from game theory to combinatorial optimisation. In this talk, after discussing some of these various historical settings, I will present a number of different settings where the modular decomposition has influenced my own research, including the enumeration and structural theory of permutations (in particular the general study of well-quasi-ordering), and -- quite unrelatedly -- recent connections made with the celebrated reconstruction Of increasing importance to this work has been our growing understanding of the "prime" structures: those that cannot be broken down into smaller structures under the modular decomposition. Started by Schmerl and Trotter in the early 1990s, there is now an industry of researchers looking at the fine structure of these objects, and I will present some recent work in this area. Taker a "broader" view, we also know, in the case of permutations, a Ramsey-theoretic result for these prime structures: every sufficiently long prime permutation must contain a subpermutation belonging to one of three families. However, it still remains to translate this result into one for graphs, and I will close by exploring some of the difficulties and differences discovered in our attempts to make this 15 November 2011, 16:00, MS.05 Random Graphs on Spaces of Negative Curvature Random geometric graphs have been well studied over the last 50 years or so. These are graphs that are formed between points randomly allocated on a Euclidean space and any two of them are joined if they are close enough. However, all this theory has been developed when the underlying space is equipped with the Euclidean metric. But, what if the underlying space is curved? The aim of this talk is to initiate the study of such random graphs and lead to the development of their theory. Our focus will be on the case where the underlying space is a hyperbolic space. We will discuss some typical structural features of these random graphs as well as some applications, related to their potential as a model for networks that emerge in social life or in biological 8 November 2011, 16:00, MS.05 Upper Bound for Centerlines Given a set of points P in R^3, what is the best line that approximates P? We introduce a notion of approximation which is robust under the perturbations of P, and analyse how good it is. 4 November 2011, 14:00, MS.01 Algorithms for Testing FOL Properties Algorithmic metatheorems guarantee that certain types of problems have efficient algorithms. A classical example is the theorem of Courcelle asserting that every MSOL property can be tested in linear time for graphs with bounded tree-width. Another example is a result of Frick and Grohe that every FOL property can be tested in almost linear time in graph classes with locally bounded tree-width. Such graph classes include planar graphs or graphs with bounded maximum degree. An example of an FOL property is the existence of a fixed graph as a subgraph. We extend these results in two directions: • we show that FOL properties can be tested in linear time for classes of graphs with bounded expansion (this is a joint result with Zdenek Dvorak and Robin Thomas), and • FOL properties can be polynomially tested (with the degree of the polynomial independent of the property) in classes of regular matroids with locally bounded branch-width (this is a joint result with Tomas Gavenciak and Sang-il Oum). An alternative proof of our first result has been obtained by Dawar and Kreutzer. 1 November 2011, 16:00, MS.05 Lower Bounds for Online Integer Multiplication and Convolution in the Cell-probe Model We will discuss time lower bounds for both online integer multiplication and convolution in the cell-probe model. For the multiplication problem, one pair of digits, each from one of two n digit numbers that are to be multiplied, is given as input at step i. The online algorithm outputs a single new digit from the product of the numbers before step i+1. We give a lower bound of Omega((d/w) *log n) time on average per output digit for this problem where 2^d is the maximum value of a digit and w is the word size. In the convolution problem, we are given a fixed vector V of length n and we consider a stream in which numbers arrive one at a time. We output the inner product of V and the vector that consists of the last n numbers of the stream. We show an Omega((d/w)*log n) lower bound for the time required per new number in the stream. All the bounds presented hold under randomisation and amortisation. These are the first unconditional lower bounds for online multiplication or convolution in this popular model of computation. 14 October 2011, 14:00, MS.01 Nash Codes for Noisy Channels We consider a coordination game between a sender and a receiver who communicate over a noisy channel. The sender wants to inform the receiver about the state by transmitting a message over the channel. Both receive positive payoff only if the receiver decodes the received signal as the correct state. The sender uses a known "codebook" to map states to messages. When does this codebook define a Nash equilibrium? The receiver's best response is to decode the received signal as the most likely message that has been sent. Given this decoding, an equilibrium or "Nash code" results if the sender encodes every state as prescribed by the codebook, which is not always the case. We show two theorems that give sufficient conditions for Nash codes. First, the "best" codebook for the receiver (which gives maximum expected receiver payoff) defines a Nash code. A second, more surprising observation holds for communication over a binary channel which is used independently a number of times, a basic model of information theory: Given a consistent tie-breaking decoding rule which holds generically, ANY codebook of binary codewords defines a Nash code. This holds irrespective of the quality of the code and also for nonsymmetric errors of the binary channel. (Joint work with P. Hernandez) 4 October 2011, 16:00, MS.05 On Clique Separator Decomposition of Some Hole-Free Graph Classes (joint work with Vassilis Giakoumakis and Frédéric Maffray) In a finite undirected graph G=(V,E), a vertex set Q ⊆ V is a clique separator if the vertices in Q are pairwise adjacent and G\Q has more connected components than G. An atom of G is a subgraph of G without clique separators. Tarjan and Whitesides gave polynomial time algorithms for determining clique separator decomposition trees of graphs. By a well-known result of Dirac, a graph is chordal if and only if its atoms are cliques. A hole is a chordless cycle of length at least five. Hole-free graphs generalize chordal graphs; G is chordal if and only if it is C_4-free and hole-free. We characterize hole- and diamond-free graphs (hole- and paraglider-free graphs, respectively) in terms of the structure of their atoms. Hereby a diamond is K_4-e, and a paraglider is the result of substituting an edge into one of the vertices of a C_4; equivalently, it is the complement graph of the disjoint union of P_2 and P_3. Thus, hole- and paraglider-free graphs generalize chordal graphs and are perfect. Hole- and diamond-free graphs generalize chordal bipartite graphs (which are exactly the triangle-free hole-free graphs). Our structural results have various algorithmic implications. Thus, the problems Recognition, Maximum Independent Set, Maximum Clique, Coloring, Minimum Fill-In, and Maximum Induced Matching can be solved efficiently on hole- and paraglider-free graphs. 30 August 2011, 16:00, MS.05 Network Design under Demand Uncertainties Telecommunication network design has been an extremely fruitful area for the application of discrete mathematics and optimization. To cover for uncertain demands, traffic volumes are typically highly overestimated. In this talk, we adapt the methodology of robust optimization to obtain more resource- and cost-efficient network designs. In particular, we generalize valid inequalities for network design to the robust network design problem, and report on their added value. 25 July 2011, 16:00, MS B3.02 Cutting-Planes for the Max-Cut Problem We present a cutting-plane scheme based on the separation of some product-type classes of valid inequalities for the max-cut problem. These include triangle, odd clique, rounded psd and gap inequalities. In particular, gap inequalities were introduced by Laurent & Poljak and include many other known inequalities as special cases. Yet, they have received little attention so far and are poorly understood. This paper presents the first ever computational results, showing that gap inequalities yield extremely strong upper bounds in practice. Joint work with Professor Adam N. Letchford and Dr. Konstantinos Kaparis, Lancaster University. 12 July 2011, 16:00, MS.05 Holographic Algorithms Using the notion of polynomial time reduction computer scientists have discovered an astonishingly rich web of interrelationships among the myriad computational problems that arise in diverse applications. These relationships can be used both to give evidence of intractability, such as that of NP-completeness, as well as to provide efficient algorithms. In this talk we discuss a notion of a holographic reduction that is more general than the traditional one in the following sense. Instead of locally mapping solutions one-to-one, it maps them many-to-many but preserves some measure such as the sum of the solutions. One application is to finding new polynomial time algorithms where none was known before. Another is to give evidence of intractability. There are pairs of related problems that can be contrasted in this manner. For example, for a skeletal version of Cook’s 3CNF problem (restricted to be planar and where every variable occurs twice positively) the problem of counting the solutions modulo 2 is NP-hard, but counting them modulo 7 is polynomial time computable. Holographic methods have proved useful in establishing dichotomy theorems, which offer a more systematic format for distinguishing the easy from the probably hard. Such theorems state that for certain wide classes of problems every member is either polynomial time computable, or complete in some class conjectured to contain intractable problems. 29 June 2011, 16:00, MS.05 New and Old Algorithms for Matroid and Submodular Optimization Many practical problems can be formulated in the language of matroids and submodular functions. Objective functions of many optimization problems are submodular. Such functions also model "economies of scale" and "law of diminishing returns" in a natural way. We study the problem of maximizing a submodular function subject to k matroid constraints. We show how our analytic framework can be adapted for various types of submodular functions such as symmetric, monotone or linear. We show that the local search algorithm has performance guarantee of roughly 2/k for the matroid hypergraph matching problem where k is the size of the largest hyperedge. In the end of the talk, we will discuss randomized rounding algorithms for matroid intersection and matroid base polytops with additional constraints (or objectives) that can be represented as polynomial constraints. While the measure concentration phenomenon is well-studied in probability theory, the concentration of polynomial functions of independent random variables is a relatively recent development (Kim- Vu (2000)). Our randomized rounding is inherently dependent, so we need to prove our own concentration inequalities for such processes. We will discuss multiple applications of such concentration bounds in optimization. 28 June 2011, 16:00, MS.05 Ultra-Fast Rumor Spreading in Models of Real-World Networks In this talk an analysis of the popular push-pull protocol for spreading a rumor on networks will be presented. Initially, a single node knows of a rumor. In each succeeding round, every node chooses a random neighbor, and the two nodes share the rumor if one of them is already aware of it. We present the first theoretical analysis of this protocol on two models of random graphs that have a power law degree distribution with an arbitrary exponent β > 2. In particular, we study preferential attachment graphs and random graphs with a given expected degree sequence. The main findings reveal a striking dichotomy in the performance of the protocol that depends on the exponent of the power law. More specifically, we show that if 2 < β < 3, then the rumor spreads to almost all nodes in Ω(loglog n) rounds with high probability. This is exponentially faster than all previously known upper bounds for the push-pull protocol established for various classes of networks. On the other hand, if β > 3, then Ω(log n) rounds are necessary. I will also discuss the asynchronous version of the push-pull protocol, where the nodes do not operate in rounds, but exchange information according to a Poisson process with rate 1. Surprisingly, if 2 < β < 3, the rumor spreads even in constant time, which is much smaller than the typical distance of two nodes. This is joint work with N. Fountoulakis, T. Sauerwald. 21 June 2011, 16:00, MS.05 On the Erdős-Szekeres Problem and its Modifications In our talk, we shall concentrate on a classical problem of combinatorial geometry going back to P. Erdős and G. Szekeres. First of all, we shall introduce and discuss the minimum number g(n) such that from any set of g(n) points in general position in the plane, one can choose a set of n points which are the vertices of a convex n-gon. Further, we shall proceed to multiple important modifications of the quantity g(n). In particular we shall consider a value h(n): it’s definition is almost the same as the just-mentioned definition of g(n); one should only replace in it “a convex n-gon” by “a convex empty n-gon”. Also we shall generalize h(n) to h(n, k) and to h(n, mod q), where the previous condition “an n-gon is empty” is substituted either by the condition “an n-gon contains at most k points” (so that h(n) = h(n, 0)) or by the condition “an n-gon contains 0 points modulo q”. Finally, we shall speak about various chromatic versions of the above quantities. We shall present a series of recent achievments in the field, and we shall discuss some new approaches and conjectures. 20 June 2011, 16:00, MS.05 Optimal (Fully) LZW-compressed Pattern Matching We consider the following variant of the classical pattern matching problem motivated by the increasing amount of digital data we need to store: given an uncompressed pattern s[1..m] and a compressed representation of a string t[1..N], does s occur in t? I will present a high-level description of an optimal linear time algorithm which detects the occurrence in t compressed using the Lempel-Ziv-Welsch method (widely used in real-life applications due to its simplicity and relatively good approximation ratio), thus answering a question of Amir, Benson, and Farach from 1994. Then I will show how to extend this method to solve the fully compressed version of the problem, where both the pattern and the text are compressed, also in optimal linear time, hence improving the previously known solution of Gąsieniec and Rytter, and essentially closing this line of research. 14 June 2011, 16:00, MS.05 Understanding the Kernelizability of Multiway Cut In this talk I will present results of my ongoing research whose goal is to understand kernelizability of the multiway cut problem. To make the talk self-contained, I will start from the definition of kernelization, accompanied by a simple kernelization procedure for the Vertex Cover problem. Then I will define the multiway cut problem, briefly overview the existing parameterization results, and provide reasons why understanding kernelizability of the problem is an interesting question. In the final part of my talk I will present a kernelization algorithm for a special, yet NP-hard, case of the multiway cut problem and discuss possible ways of its generalization. 7 June 2011, 16:00, MS.05 Fractional Colouring and Pre-colouring Extension of Graphs A vertex-colouring of a graph is an assignment of colours to the vertices of the graph so that adjacent vertices receive different colours. The minimum number of colours needed for such a colouring is called the chromatic number of the graph. Now suppose that certain vertices are already pre-coloured, and we want to extend this partial colouring to a colouring of the whole graph. Because of the pre-coloured vertices, we may need more colours than just the chromatic number. How many extra colours are needed under what conditions has been well-studied, and we will give a short overview of those results. A different way of colouring the vertices is so-called fractional colouring. For such a colouring we are given an interval [0,K] of real numbers, and we need to assign to each vertex a subset of [0,K] of measure one so that adjacent vertices receive disjoint subsets. The fractional chromatic number is the minimum K for which this is possible. Again we can look at this problem assuming that certain vertices are already pre-coloured (are already assigned a subset of measure one). Assuming some knowledge about the pre-coloured vertices, what K is required to guarantee that we can always extend this partial colouring to a fractional colouring of the whole graph? The answer to this shows a surprising dependence on the fractional chromatic number of the graph under consideration. This is joint work with Dan Kral, Martin Kupec, Jean-Sebastien Sereni and Jan Volec. 31 May 2011, 16:00, MS.05 Partitioning Posets It is well known and easy to prove that every graph with m edges has a cut containing at least m/2 edges. While the complete graph shows that the constant ½ cannot be improved, Edwards established a more precise extremal bound that includes lower order terms. The problem of determining such bounds is called the extremal maxcut problem and many variants of it have been studied. In the first part of the talk, we consider a natural analogue of the extremal maxcut problem for posets and some of its generalisations. (Whereas a graph cut is a set of all edges that cross some bipartition of the graph’s vertex set, we shall define a poset cut to be a set of all comparable pairs that cross some order-preserving partition of the poset’s ground set.) The algorithmic maxcut problem for graphs, i.e. the problem of determining the size of the largest cut in a graph, is well known to be NP-hard. In the second part of the talk, we examine the complexity of the poset analogue of the max-cut problem and some of its generalisations. 24 May 2011, 16:00, MS.05 Solution to an Edge-coloring Conjecture of Grunbaum By a classical result of Tait, the four color theorem is equivalent to the statement that each 2-edge-connected 3-regular planar graph has a 3-edge-coloring. An embedding of a graph into a surface is called polyhedral if its dual has no multiple edges or loops. A conjecture of Grunbaum, presented in 1968, states that each 3-regular graph with a polyhedral embedding into an orientable surface has a 3-edge-coloring. With respect to the result of Tait, it aims to generalize the four color theorem for any orientable surface. We present a negative solution to this conjecture, showing that for each orientable surface of genus at least 5, there exists a 3-regular non 3-edge-colorable graph with a polyhedral embedding into the surface. 17 May 2011, 16:00, MS.05 Achlioptas Process Phase Transitions are Continuous It is widely believed that certain simple modifications of the random graph process lead to discontinuous phase transitions. In particular, starting with the empty graph on $n$ vertices, suppose that at each step two pairs of vertices are chosen uniformly at random, but only one pair is joined, namely one minimizing the product of the sizes of the components to be joined. Making explicit an earlier belief of Achlioptas and others, in 2009, Achlioptas, D'Souza and Spencer conjectured that there exists a δ>0 (in fact, δ\ge 1/2) such that with high probability the order of the largest component `jumps' from o(n) to at least δ n in o(n) steps of the process, a phenomenon known as `explosive percolation'. We give a simple proof that this is not the case. Our result applies to all `Achlioptas processes', and more generally to any process where a fixed number of independent random vertices are chosen at each step, and (at least) one edge between these vertices is added to the current graph, according to any (online) rule. We also prove the existence and continuity of the limit of the rescaled size of the giant component in a class of such processes, settling a number of conjectures. Intriguing questions remain, however, especially for the product rule described above. Joint work with Oliver Riordan. 3 May 2011, 16:00, MS.05 Dynamics of Boolean Networks - An Exact Solution In his seminal work Kauffman introduced a very simple dynamical model of biological gene-regulatory networks. Each gene was modeled by a binary variable that can be in an ON/OFF state and interacts with other genes via a coupling Boolean function which determines the state of a gene at the next time-step. It was argued that this model, also known as Random Boolean network (RBN) or Kauffman net, is relevant to the understanding of biological systems. RBNs belong to a larger class of Boolean networks that exhibits a rich dynamical behavior, which is very versatile and has found its use in the modeling of genetic, neural and social networks as well as in many other branches of science. The annealed approximation has proved to be a valuable tool in the analysis of large scale Boolean networks as it allows one to predict the time evolution of network activity (proportion of ON/OFF states) and Hamming distance (the difference between the states of two networks of identical topology) order parameters. The broad validity of the annealed approximation to general networks of this type has remained an open question; additionally, while the annealed approximation provides accurate activity and Hamming distance results for various Boolean models with quenched disorder it cannot compute correlation functions, used in studying memory effects. In particular, there are models with strong memory effects where the annealed approximation is not valid in specific regimes. In the current work we study the dynamics of a broad class of Boolean networks with quenched disorder and thermal noise using the generating functional analysis; the analysis is general and covers a large class of recurrent Boolean networks and related models. We show that results for the Hamming distance and network activity obtained via the quenched and annealed approaches, for this class, are identical. In addition, stationary solutions of Hamming distance and two-time autocorrelation function (inaccessible via the annealed approximation) coincide, giving insight into the uniform mapping of states within the basin of attraction onto the stationary states. In the presence of noise, we show that above some noise level the system is always ergodic and explore the possibility of spin-glass phase below this level. Finally, we show that our theory can be used to study the dynamics of models with strong memory effects. Joint work with Alexander Mozeika 27 April 2011, 16:00, MS.05 The Traveling Salesman Problem: Theory and Practice in the Unit Square In the Traveling Salesman Problem (TSP) we are given a collection of cities and the distance between each pair, and asked to find the shortest route that visits all the cities and returns to its starting place. When writers in the popular press wish to talk about NP-completeness, this is the problem they typically use to illustrate the concept, but how typical is it really? The TSP has also been highly attractive to theorists, who have proved now-classical results about the worst-case performance of heuristics for it, but how relevant to practice are those results? In this talk I provide a brief introduction to the TSP, its applications, and key theoretical results about it, and then report on experiments that address both the above questions. I will concentrate on randomly generated instances with cities uniformly distributed in the unit square, which I will argue provide a reasonable surrogate for the instances arising in many real-world TSP applications. I'll first survey the performance of heuristics on these instances, and then report on an ongoing study into the average length and structure of their optimal tours, based on extensive data generated using state-of-the-art optimization software for the TSP, which can regularly find optimal solutions to TSP instances with 1000 cities or more. 26 April 2011, 16:00, MS.05 No-wait Flow Shop with Batching Machines Scheduling problems with batching machines are extensively considered in the literature. Batching means that sets of jobs which are processed on the same machine must be grouped into batches. Two types of batching machines have been considered, namely s-batching machines and p-batching machines. For s-batching machines, the processing time of a batch is given by the sum of the processing times of the jobs in the batch, whereas, for p-batching machines the processing time of batches is given by the maximum of the processing times of the jobs in the batch. In this talk we consider no-wait flowshop scheduling problems with batching machines. The first problem that will be considered is no-wait flowshop with two p-batching machines and three batching machines. For these problems we characterize the optimal solution and we give a polynomial time algorithm to minimize the makespan for the two-machine problem. For the three-machine problem we show the number of batches can be limited to nine and give an example where all optimal schedules have seven batches. The second problem that will be considered is the no-wait flowshop with two-machines, where the first machine is a p-batching machine, and the second machine is an s-batching machine. We show that the makespan minimization is NP-hard and we present some polynomial cases by reducing the scheduling problem to a matching problem with minimal cost in a specific graph. 15 March 2011, 16:00, MS.05 Space Fullerenes A (geometric) fullerene is a 3-valent polyhedron whose faces are hexagons and pentagons (so, 12 of them). A fullerene is said to be Frank-Kasper if its hexagons are adjacent only to pentagons; there are four such fullerenes: with 20, 24, 26 and 28 vertices. A space fullerene is a 4-valent 3-periodic tiling of R^3 by Frank-Kasper fullerenes. Space fullerenes are interesting in Crystallography (metallic alloys, zeolites, clathrates) and in Discrete Geometry. 27 such physical structures, all realized by alloys, were already known. A new computer enumeration method has been devised for enumerating the space fullerenes with a small fundamental domain under their translation groups: 84 structures with at most 20 fullerenes in the reduced unit cell (i.e. by a Biberbach group) were found. The 84 obtained structures have been compared with the 27 physical ones and all known special constructions: by Frank-Kasper-Sullivan, Shoemaker-Shoemaker, Sadoc-Mossieri and Deza-Shtogrin. 13 obtained structures are among the above 27, including A_{15}, Z, C_{15} and 4 other Laves phases. Moreover, there are 16 new proportions of 20-, 24-, 26-, 28-vertex fullerenes in the unit cell. 3 of them provide the first counterexamples to a conjecture by Rivier-Aste, 1996, and to the old conjecture by Yarmolyuk-Kripyakevich, 1974, that the proportion should be a conic linear combination of proportions (1:3:0:0), (2:0:0:1), (3:2:2:0) of A_{15}, C_{15}, Z. So, a new challenge to practical Crystallography and Chemistry is to check the existence of alloys, zeolites, or other compounds having one of the 71 new geometrical structures. This is joint work with Mathieu Dutour and Olaf Delgado. 8 March 2011, 16:00, MS.05 Elementary Polycycles and Applications Given q \in \mathbb{N} and R \subset \mathbb{N}, an (R,q)-polycycle is a non-empty, 2-connected, planar, locally finite (i.e. any circle contains only a finite number of its vertices) graph G with faces partitioned into two non-empty sets F_1 and F_2, so that: 1. all elements of F_1 (called proper faces) are combinatorial i-gons with i \in R, 2. all elements of F_2 (called holes, the exterior face(s) are amongst them) are pair-wise disjoint, i.e. have no common vertices, 3. all vertices have degree within {2,...,q} and all interior (i.e. not on the boundary of a hole) vertices are q-valent. Such a polycycle is called elliptic, parabolic or hyperbolic when 1/q+1/r-1/2 (where r={max_{i \in R}i}) is positive, zero or negative, respectively. A bridge of an (R,q)-polycycle is an edge, which is not on a boundary and goes from a hole to a hole (possibly the same hole). An (R,q)-polycycle is called elementary if it has no bridges. An open edge of an (R,q)-polycycle is an edge on a boundary, such that each of its end-vertices has degree less than q. Every (R,q)-polycycle is formed by the agglomeration of elementary (R,q)-polycycles along their open edges. We classify all elliptic elementary (R,q)-polycycles and present various applications. 23 Feb 2011, 16:00, B3.03 Triangle-intersecting Families of Graphs A family of graphs F on a fixed set of n vertices is said to be `triangle-intersecting' if for any two graphs G and H in F, the intersection of G and H contains a triangle. Simonovits and S\'{o}s conjectured that such a family has size at most $\frac{1}{8}2^{{n \choose 2}}$, and that equality holds only if F consists of all graphs containing some fixed triangle. Recently, the author, Yuval Filmus and Ehud Friedgut proved a strengthening of this conjecture, namely that if F is an odd-cycle-intersecting family of graphs, then $|F| \leq \tfrac{1}{8} 2^{{n \choose 2}}$. Equality holds only if $F$ consists of all graphs containing some fixed triangle. A stability result also holds: an odd-cycle-intersecting family with size close to the maximum must be close to a family of the above form. We will outline proofs of these results, which use Fourier analysis, together with an analysis of the properties of random cuts in graphs, and some results from the theory of Boolean functions. We will then discuss some related open questions. All will be based on joint work with Yuval Filmus (University of Toronto) and Ehud Friedgut (Hebrew University of Jerusalem). 15 Feb 2011, 16:00, MS.05 Overlap Colourings of Graphs: A Generalization of Multicolourings An r-multicolouring of a graph allocates r colours (from some `palette') to each vertex of a graph such that the colour sets at adjacent vertices are disjoint; in an (r,\lambda) overlap colouring the sets at adjacent vertices must also overlap by \lambda colours. The (r,\lambda) chromatic number, \chi_{r,\lambda}(G), is the smallest possible palette size. Classifying graphs by their overlap chromatic properties turns out to be strictly finer than by their multichromatic properties but not as fine as by their cores. I shall survey what is currently known: basically, everything concerning series-parallel (i.e. K_4-minor-free) and wheel graphs, and the asymptotics for complete graphs. 11 February 2011, 14:00, B3.03 Number Systems and Data Structures The interrelationship between numerical representations and data structures is efficacious. However, in many write-ups such connection has not been made explicit. As far as we know, their usage was first discussed in the seminar notes by Clancy and Knuth. Early examples of data structures relying on number systems include finger search trees and binomial queues. In this talk, we survey some known number systems and their usage in existing worst-case efficient data structures. We formalize properties of number systems and requirements that should be imposed on a number system to guarantee efficient performance on the corresponding data structures. We introduce two new number systems: the strictly-regular system and the five-symbol skew system. We illustrate how to perform operations on the two number systems and give applications for their usage to implement worst-case efficient data structures. We also give a simple method that extends any number system supporting increments to support decrements using the same number of digit flips. The strictly-regular system is a compact system that supports increments and decrements in constant number of digit flips. Compared to other number systems, the strictly-regular system has distinguishable properties. It is superior to the regular system for its efficient support of decrements, and superior to the extended-regular system for being more compact by using three symbols instead of four. To demonstrate the applicability of the new number system, we modify Brodal's meldable priority queues making delete require at most 2lg(n)+O(1) element comparisons (improving the bound from 7lg(n)+O(1)) while maintaining the efficiency and the asymptotic time bounds for all operations. The five-symbol skew system also supports increments and decrements with a constant number of digit flips. In this number system the weight of the ith digit is 2^i-1, and hence it can be used to implement efficient structures that rely on complete binary trees. As an application, we implement a priority queue as a forest of heap-ordered complete binary trees. The resulting data structure guarantees O(1) worst-case cost per insert and O(lg(n)) worst-case cost per delete. 8 February 2011, 16:00, MS.05 The Complexity of the Constraint Satisfaction Problem: Beyond Structure and Language The Constraint Satisfaction Problem (CSP) is concerned with the feasibility of satisfying a collection of constraints. The CSP paradigm has proven to be useful in many practical applications. In a CSP instance, a set of variables must be assigned values from some domain. The values allowed for certain (ordered) subsets of the variables are restricted by constraint relations. The general CSP is NP-hard. However there has been considerable success in identifying tractable fragments of the CSP: these have traditionally been characterised in one of two ways: The sets of variables that are constrained in any CSP instance can be abstracted to give a hypergraph structure for the instance. Subproblems defined by limiting the allowed hypergraphs are called structural. The theory of tractable structural subproblems is analogous to the theory of tractable conjunctive query evaluation in relational databases and many of the tractable cases derive from generalisations of acyclic hypergraphs. We have several important dichotomy theorems for the complexity of structural subproblems. Alternatively, it is possible to restrict the set of relations which can be used to define constraints. Subproblems defined in this way are called relational. It turns out that the complexity of relational subproblems can be studied by analysing a universal algebraic object: the clone of polymorphisms. This algebraic analysis is well advanced and again there are impressive dichotomy theorems for relational subproblems. As such, it is timely to consider so-called hybrid subproblems which can neither be characterised by structural nor relational restrictions. This exciting new research direction shows considerable promise. In this talk we present several of the new results (tractable classes) for hybrid tractability: Turan, Broken Triangle, Perfect and Pivots. 18 January 2011, 16:00, MS.05 Auction and Equilibrium in Sponsored Search Market Design The Internet enabled sponsored market has been one of those that have been attracting extensive attention. Within this framework of a new market design, the price and allocation of on-line advert placement through auction or market equilibrium become very important topic both in theory and in practice. Within this context, we discuss incentive issues of both the market maker (the seller) and the market participants (the buyers) within the market equilibrium paradigm, and discuss existence, convergence and polynomial time solvability results with the comparison to auction protocols. 11 Jan 2011, 16:00, MS.05 Polylogarithmic Approximation for Edit Distance and the Asymmetric Query Complexity We present a near-linear time algorithm that approximates the edit distance between two strings within a significantly better factor than previously known. This result emerges in our investigation of edit distance from a new perspective, namely a model of asymmetric queries, for which we present near-tight bounds. Another consequence of this new model is the first rigorous separation between edit distance and Ulam distance, by tracing the hardness of edit distance to phenomena that were not used by previous [Joint work with Alexandr Andoni and Krzysztof Onak.] 7 December 2010, 16:00, MS.05 All Ternary Permutation Constraint Satisfaction Problems Parameterized Above Average Have Kernels with a Quadratic Number of Variables We will consider the most general ternary Permutation Constraint Satisfaction Problem (CSP) and observe that all other ternary Permutation-CSPs can be reduced to this one. An instance of such a problem consists of a set of variables V and a multiset of constraints, which are ordered triples of distinct variables of V. The objective is to find a linear ordering alpha of V that maximises the number of triples whose ordering (under alpha) follows that of the constraint. We prove that all ternary Permutation-CSPs parameterized above average (which is a tight lower bound) have kernels with a quadratic number of variables. 1 December 2010, 11:00, B3.20 (WBS Scarman Road) Defining Real-world Vehicle Routing Problems: An Object-based Approach Much interest is currently being expressed in "Real-world" VRPs. But what are they and how can they be defined? The conventional method of problem formulations for the VRP are rarely extended to deal with many real-world constraints and rapidly become complex and unwieldy as the number and complexity of constraints increases. In Software Engineering practice, complex systems and constraints are commonplace, and the prevailing modelling and programming paradigm is Object-Oriented Programming. This talk will present an OOP model for the VRP and show it's application to some classic VRP variants as well as some real-world problem domains. To reserve your place please contact Sue Shaw <S.Shaw@wbs.ac.uk>. 30 November 2010, 16:00, MS.05 A Role for the Lovasz Theta Function in Quantum Mechanics The mathematical study of the transmission of information without error was initiated by Shannon in the 50s. Only in 1979, Lovasz solved the major open problem of Shannon concerned with this topic. The solution is based on a now well-known object called the Lovasz theta function. Its role greatly contributed to the developments of areas of Mathematics like semidefinite programming and extremal problems in combinatorics. The Lovasz theta function is an upper bound to the zero-error capacity, however it is not always tight. In the last two decades quantum information theory established itself as the natural extension of information theory to the quantum regime, i.e. for the study of information storage and transmission with the use of quantum systems and dynamics. I will show that the Lovasz theta function is an upper bound to the zero-error capacity when the parties of a channel can share certain quantum physical resources. This quantity, which is called entanglement-assisted zero-error capacity, can be greater than the classical zero-error capacity, in both single use of the channel and asymptotically. Additionally, I will propose a physical interpretation of the Lovasz theta function as the maximum violation of certain noncontextual inequalities. These are inequalities traditionally used to study the difference between classical, quantum mechanical, and more exotic theories of nature. This is joint work with Adan Cabello (Sevilla), Runyao Duan (Tsinghua/Sydney), and Andreas Winter (Bristol/Singapore). 23 November 2010, 16:00, MS.05 Boundedness, Rigidity and Global Rigidity of Direction-Length Frameworks A mixed graph G=(V;D,L) is a graph G together with a bipartition (D,L) of its edge set. A d-dimensional direct-length framework (G,p) is a mixed graph G together with a map p:V->R^d. We imagine that the vertices are free to move in R^d subject to the constraints that the directions of the direction edges and the lengths of the length edges edges remain constant. The framework is rigid if the only such motions are simple translations. Two frameworks (G,p) and (G,q) are equivalent if the edges in D have the same directions and edges in L have the same lengths in both (G,p) and (G,q). The framework (G,p) is globally rigid if every framework which is equivalent to (G,p), can be obtained from (G,p) by a translation or a dilation by -1. It is bounded if there exists K in R such that every framework (G,q) which is equivalent to (G,q), satisfies |q(u)-q(v)| <= K for all u,v in V. I will describe characterizations of boundedness and rigidity of generic 2-dimensional direction-length frameworks, and give partial results for the open problem of characterizing the global rigidity of such frameworks. This is joint work with Peter Keevash and Tibor Jordán. 16 November 2010, 16:00, MS.05 Grid Pattern Classes Matrix griddings are structural means of representing permutations as built of finitely many increasing and decreasing permutations. More specifically, let M be an m × n matrix with entries m_{ij} ∈ { 0,1,-1}. We say that a permutation π admits an M-gridding if the xy-plane in which the graph Γ of π has been plotted can be partitioned into an xy-parallel, m × n rectangular grid with cells C_ {ij}, such that the following hold: • if m_{ij} = 1 then Γ ∩ C_{ij} is an increasing sequence of points; • if m_{ij} = -1 then Γ ∩ C_{ij} is decreasing; • if m_{ij} = 0 then Γ ∩ C_{ij} = ∅. Let G(M) denote the set (pattern class) of all permutations which admit M-griddings. Grid classes have been present in the pattern classes literature from very early on. For example, Atkinson (1999) observed that the class of permutations avoiding 321 and 2143 is equal to G((1 1)) ∩ G((1 1)^t) and used this to enumerate the class. Much more recently, grid classes have played a crucial role in Vatter's (to appear) classification of small growth rates of pattern classes. These past uses hint strongly at the natural importance of grid classes in the general theory of pattern classes. If this is to be so, the next step is to establish `nice' general properties of grid classes themselves. A number of researchers, including M.H. Albert, M.D. Atkinson, M. Bouvel, R. Brignall, V. Vatter and myself, have been engaged on such a project over the past year, and I will report on their findings. The results are proved by an intriguing interplay of language-theoretic and combinatorial-geometric methods, the flavour of which I will try to convey. The talk will conclude with a discussion of some open problems concerning general grid classes, which ought to point the way for the next stage in this project. 9 November 2010, 16:00, MS.05 Unsatisfiability Below the Threshold(s) It is well known that there is a sharp density threshold for a random r-SAT formula to be satisfiable, and a similar, smaller, threshold for it to be satisfied by the pure literal rule. Also, above the satisfiability threshold, where a random formula is with high probability (whp) unsatisfiable, the unsatisfiability is whp due to a large ``minimal unsatisfiable subformula'' (MUF). By contrast, we show that for the (rare) unsatisfiable formulae below the pure literal threshold, the unsatisfiability is whp due to a unique MUF with smallest possible ``excess'', failing this whp due to a unique MUF with the next larger excess, and so forth. In the same regime, we give a precise asymptotic expansion for the probability that a formula is unsatisfiable, and efficient algorithms for satisfying a formula or proving its unsatisfiability. It remains open what happens between the pure literal threshold and the satisfiability threshold. We prove analogous results for the k-core and k-colorability thresholds for a random graph, or more generally a random r-uniform hypergraph. 4 November 2010, 15:00, D1.07 Matchings in 3-uniform Hypergraphs A theorem of Tutte characterises all graphs that contain a perfect matching. In contrast, a result of Garey and Johnson implies that the decision problem of whether an r-uniform hypergraph contains a perfect matching is NP-complete for r>2. So it is natural to seek simple sufficient conditions that ensure a perfect matching. Given an r-uniform hypergraph H, the degree of a k-tuple of vertices is the number of edges in H containing these vertices. The minimum vertex degree of H is the minimum of these degrees over all 1-tuples. The minimum codegree of H is the minimum of all the degrees over all (r-1)-tuples of vertices in H. In recent years there has been significant progress on this problem. Indeed, in 2009 Rödl, Ruciński and Szemerédi characterised the minimum codegree that ensures a perfect matching in an r-uniform hypergraph. However, much less is known about minimum vertex degree conditions for perfect matchings in r-uniform hypergraphs H. Hàn, Person and Schacht gave conditions on the minimum vertex degree that ensures a perfect matching in the case when r>3. These bounds were subsequently lowered by Markström and Ruciński. This result, however, is believed to be far from tight. In the case when r=3, Hàn, Person and Schacht asymptotically determined the minimum vertex degree that ensures a perfect matching. In this talk we discuss a result which determines this threshold exactly. This is joint work with Daniela Kühn and Deryk Osthus. 2 November 2010, 16:00, MS.05 On Incidentor Colorings of Multigraphs An incidentor in a directed or undirected multigraph is an ordered pair of a vertex and an arc incident to it. It is convenient to treat an incidentor as half of an arc incident to a vertex. Two incidentors of the same arc are called mated. Two incidentors are adjacent if they adjoin the same vertex. The incidentor coloring problem (indeed, a class of problems) is to color all incidentors of a given multigraph with the minimum number of colors satisfying some restrictions on colors of adjacent and mated incidentors. A review of various results on incidentor coloring will be given in the talk. 26 October 2010, 16:00, MS.05 The Empire Colouring Problem: Old and New Results Assume that the vertex set of a graph G is partitioned into blocks B_1, B_2, ... of size r>1, so that B_i contains vertices labelled (i-1)r+1, (i-1)r+2, ... , ir. The r-empire chromatic number of G is the minimum number of colours \chi_r(G) needed to colour the vertices of G in such a way that all vertices in the same block receive the same colour, but pairs of blocks connected by at least one edge of G are coloured differently.The decision version of this problem (termed the r-empire colouring problem) dates back to the work of Perci Heawood on the famous Four Colour Theorem (note that the 1-empire colouring problem is just planar graph colouring). In this talk I will present a survey of some old and new results on this problem. Among other things, I will focus on the computational complexity of the r-empire colouring problem and then talk about the colourability of random trees. 19 October 2010, 16:00, MS.05 Reconstruction Problems and Polynomials There are three classical unsolved reconstruction problems: vertex reconstruction of S.M. Ulam and P.J. Kelly (1941), edge reconstruction of F. Harary (1964) and switching reconstruction of R.P. Stanley (1985). It turns out that these and similar questions are intimately connected with a wide range of important and also mostly open problems related to polynomials. For example, a simplest analogue of vertex reconstruction - reconstruction of a sequence from its subsequences leads to Littlewood-type problems concerning polynomials with -1,0, 1 coefficients. In switching reconstruction one has to know the number of zero coefficients in the expansion of (1-x)^n (1+x)^m, which is the same as the number of integer zeros of Krawtchouk polynomials. In this talk I will try to explain these connections and show how they can be applied to reconstruction. 12 October 2010, 16:00, MS.05 A Decidability Result for the Dominating Set Problem We study the following question: given a finite collection of graphs G_1,...,G_k, decide whether the dominating set problem is NP-hard in the class of (G_1,...,G_k)-free graphs or not. In this talk, we prove the existence of an efficient algorithm that answers this question for k=2. 23 September 2010, 14:00, MS B3.03 Hardness and Approximation of Minimum Maximal Matching in k-regular graphs We consider the problem of finding a maximal matching of minimum size in a graph, and in particular, in bipartite regular graphs. This problem was motivated by a stable marriage allocation problem. The minimum maximal matching is known to be NP-hard in bipartite graphs with maximum degree 3. We first extend this result to the class of $k$-regular bipartite graphs, for any fixed $k\geq 3$. In order to find some “good” solutions, we compare the size $M$ of a maximum matching and the size $MMM$ of a minimum maximal matching in regular graphs. It is well known that $M\leq 2MMM$ in any graph and we show that it can be improved to $M\leq (2-1/k)MMM$ in $k$-regular graphs. On the other hand, we analyze a greedy algorithm finding in $k$-regular bipartite graphs a maximal matching of size $MM$ satisfying $MM\leq (1-\epsilon(k))M$. It leads to a $(1-\epsilon(k))(2-1/k)$-approximation algorithm for $k$-regular bipartite graphs. This is joint work with Tinaz Ekim and C. Tanasescu 12 August 2010, 16:00, MS B3.03 Query Complexity Lower Bounds for Reconstruction of Codes We investigate the problem of "local reconstruction", as defined by Saks and Seshadhri (2008), in the context of error correcting codes. The first problem we address is that of "message reconstruction", where given an oracle access to a corrupted encoding $w \in \{0,1\}^n$ of some message $x \in \{0,1\}^k$ our goal is to probabilistically recover $x$ (or some portion of it). This should be done by a procedure (reconstructor) that given an index $i$ as input, probes $w$ at few locations and outputs the value of $x_i$. The reconstructor can (and indeed must) be randomized, but all its randomness is specified in advance by a single random seed, such that with high probability ALL $k$ values $x_i$ for $1 \leq i \leq k$ are reconstructed correctly. Using the reconstructor as a filter allows to evaluate certain classes of algorithms on $x$ efficiently. For instance, in case of a parallel algorithm, one can initialize several copies of the reconstructor with the same random seed, and they can autonomously handle decoding requests while producing outputs that are consistent with the original message $x$. Another example is that of adaptive querying algorithms, that need to know the value of some $x_i$ before deciding which index should be decoded next. The second problem that we address is "codeword reconstruction", which is similarly defined, but instead of reconstructing the message our goal is to reconstruct the codeword itself, given an oracle access to its corrupted version. Error correcting codes that admit message and codeword reconstruction can be obtained from Locally Decodable Codes (LDC) and Self Correctible Codes (SCC) respectively. The main contribution of this paper is a proof that in terms of query complexity, these are close to be the best possible constructions, even when we disregard the length of the encoding. This is joint work with Eldar Fischer and Arie Matsliah. 10 August 2010, 16:00, MS B3.03 Altruism in Atomic Congestion Games We study the effects of introducing altruistic agents into atomic congestion games. Altruistic behavior is modeled by a linear trade-off between selfish and social objectives. Our model can be embedded in the framework of congestion games with player-specific latency functions. Stable states are the pure Nash equilibria of these games, and we examine their existence and the convergence of sequential best-response dynamics. In general, pure Nash equilibria are often absent and existence is \NP-hard to decide. Perhaps surprisingly, if all delay functions are linear, the games remain potential games even when agents are arbitrarily altruistic. This result can be extended to a class of general potential games and social cost functions, and we study a number of prominent examples. In addition to these results for uncoordinated dynamics, we consider a scenario with a central altruistic institution that can set incentives for the agents. We provide constructive and hardness results for finding the minimum number of altruists to stabilize an optimal congestion profile and more general mechanisms to incentivize agents to adopt favorable behavior. 29 June 2010, 16:00, MS B3.03 Proportional Optimization and Fairness: Applications The problem of allocating resources in proportion to some measure has been studied in various fields of science for a long time. The apportionment problem of allocating seats in a parliament in proportion to the number of votes obtained by political parties is one example. This presentation will show a number of other real-life problems, for instance the Liu-Layland problem, stride scheduling and fair queueing which can be formulated and solved as the problems of proportional optimization and fairness. 22 June 2010, 16:00, MS B3.03 A Decomposition Approach for Insuring Critical Paths We consider a stochastic optimization problem involving protection of vital arcs in a critical path network. We analyze a problem in which task finishing times are uncertain, but can be insured a priori to mitigate potential delays. We trade off costs incurred in insuring arcs with expected penalties associated with late completion times, where lateness penalties are lower semi-continuous nondecreasing functions of completion time. We provide decomposition strategies to solve this problem with respect to either convex or nonconvex penalty functions. In particular, we employ the Reformulation-Linearization Technique to make the problem amenable to solution via Benders decomposition. We also consider a chance-constrained version of this problem, in which the probability of completing a project on time is sufficiently large. 21 June 2010, 16:00, MS.03 Energy Efficient Job Scheduling with Speed Scaling and Sleep Management Energy usage has become a major issue in the design of microprocessors, especially for battery-operated devices. Many modern processors support dynamic speed scaling to reduce energy usage. The speed scaling model assumes that a processor, when running at speed s, consumes energy at the rate of s^\alpha, where \alpha is typically 2 or 3. In older days when speed scaling was not available, energy reduction was mainly achieved by allowing a processor to enter a low-power sleep state, yet waking up requires extra energy. It is natural to study job scheduling on a processor that allows both sleep state and speed scaling. In the awake state, a processor running at speed s>0 consumes energy at the rate s^\alpha + \sigma , where \sigma > 0 is the static power and s^\alpha is the dynamic power. In this case, job scheduling involves two components: a sleep management algorithm to determine when to work or sleep, and a speed scaling algorithm to determine which job to run and at what speed to run. Adding a sleep state changes the nature of speed scaling. Without sleep state, running a job slower is a natural way to save energy. With sleep state, one can also save energy by working faster to allow a longer sleep period. It is not trivial to strike a balance. In this talk, we will discuss some new scheduling results involving both speed scaling and sleep management. 15 June 2010, 16:00, MS B3.03 Edge Expansion in Graphs on Surfaces Edge expansion for graphs is a well-studied measure of connectivity, which is important in discrete mathematics and computer science. While there has been much recent work done in finding approximation algorithms for determining edge expansion, there has been less attention in developing exact polynomial-time algorithms to determine edge expansion for restricted graph classes. In this talk, I will present an algorithm that, given an n-vertex graph G of genus g, determines the edge expansion of G in time n^{O(g)}. 8 June 2010, 16:00, MS B3.03 A Survey of Connectivity Approximation via a Survey of the Techniques We survey some crucial techniques in approximating connectivity problems. The most general question we study is the Steiner Network problem, where we are given an undirected weighted graph with costs on the edges, and required number rij of paths between every i, j. The paths need to be vertex or edge disjoint depending on the problem. The goal is to find a minimum cost feasible solution. The full talk has the following techniques and problems: 1. Solving k out-connectivity in polynomial time in the edge (Edmonds) and in the vertex (Frank Tardos) cases. This gives a simple ratio 2 for edge k-connectivity. 2. The cycle lemma of Mader: together with technique 1 it both gives results for minimum power k-connectivity power problems (see the talk for exact definition) and an improved result for k-edge connectivity in the metric case. 3. Laminarity and the new charging scheme by Ravi et. al. getting a much simplified version of Jain's theorem of 2 approximating of Steiner network in the edge disjoint paths case. 2 June 2010, 16:00, MS.04 The Computational Complexity of Trembling Hand Perfection and Other Equilibrium Refinements The king of refinements of Nash equilibrium is trembling hand perfection. In this talk, we show that it is NP-hard and SQRTSUM-hard to decide if a given pure strategy Nash equilibrium of a given three-player game in strategic form with integer payoffs is trembling hand perfect. Analogous results are shown for a number of other solution concepts, including proper equilibrium, (the strategy part of) sequential equilibrium, quasi-perfect equilibrium and CURB. 1 June 2010, 16:00, MS B3.03 Exponential Lower Bounds For Policy Iteration We study policy iteration for infinite-horizon Markov decision processes. In particular, we study greedy policy iteration. This is an algorithm that has been found to work very well in practice, where it is used as an alternative to linear programming. Despite this, very little is known about its worst case complexity. Friedmann has recently shown that policy iteration style algorithms have exponential lower bounds in a two player game setting. We extend these lower bounds to Markov decision processes with the total-reward and average-reward optimality criteria. 25 May 2010, 16:00, MS B3.03 Paths of Bounded Length and their Cuts: Parameterized Complexity and Algorithms We study the parameterized complexity of two families of problems: the bounded length disjoint paths problem and the bounded length cut problem. From Menger's theorem both problems are equivalent (and computationally easy) in the unbounded case for single source, single target paths. However, in the bounded case, they are combinatorial distinct and are both NP-hard, even to approximate. Our results indicate that a more refined landscape appears when we study these problems with respect to their parameterized complexity. For this, we consider several parameterizations (with respect to the maximum length l of paths, the number k of paths or the size of a cut, and the treewidth of the input graph) of all variants of both problems (edge/vertex-disjoint paths or cuts, directed/ undirected). We provide several FPT-algorithms (for all variants) when parameterized by both k and l and hardness results when the parameter is only one of k and l. Our results indicate that the bounded length disjoint-path variants are structurally harder than their bounded length cut counterparts. Also, it appears that the edge variants are harder than their vertex-disjoint counterparts when parameterized by the treewidth of the input graph. Joint work with Dimitrios M. Thilikos (Athens). 18 May 2010, 16:00, MS B3.03 Colouring Pairs of Binary Trees and the Four Colour Problem - Results and Achievements The Colouring Pairs of Binary Trees problem was introduced by Gibbons and Czumaj, and its equivalence to the Four Colour Problem means that it is an interesting combinatorial problem. Given two binary trees Ti and Tj, the question is whether Ti and Tj can be 3-coloured in such a way that the edge adjacent to leaf k is the same colour in Ti and Tj. This talk will introduce the problem and discuss some of the results that have been achieved so far, and will also discuss the potential benefits of finding a general solution to the problem. In particular we present two approaches that lead to linear-time algorithms solving CPBT for specific sub-classes of tree pairs. This is joint work with Alan Gibbons. 11 May 2010, 16:00, MS B3.03 An Approximate Version of Sidorenko's Conjecture A beautiful conjecture of Erdos-Simonovits and Sidorenko states that if H is a bipartite graph, then the random graph with edge density p has in expectation asymptotically the minimum number of copies of H over all graphs of the same order and edge density. This conjecture also has an equivalent analytic form and has connections to a broad range of topics, such as matrix theory, Markov chains, graph limits, and quasirandomness. Here we prove the conjecture if H has a vertex complete to the other part, and deduce an approximate version of the conjecture for all H. Furthermore, for a large class of bipartite graphs, we prove a stronger stability result which answers a question of Chung, Graham, and Wilson on quasirandomness for these graphs. 4 May 2010, 16:00, MS B3.03 Hybridizing Evolutionary Algorithms and Parametric Quadratic Programming to Solve Multi-Objective Portfolio Optimization Problems with Cardinality Constraints The problem of portfolio selection is a standard problem in financial engineering and has received a lot of attention in recent decades. Classical mean-variance portfolio selection aims at simultaneously maximizing the expected return of the portfolio and minimizing portfolio variance, and determining all efficient (Pareto-optimal) solutions. In the case of linear constraints, the problem can be solved efficiently by parametric quadratic programming (i.e., variants of Markowitz’ critical line algorithm). However, there are many real-world constraints that lead to a non-convex search space, e.g., cardinality constraints which limit the number of different assets in a portfolio, or minimum buy-in thresholds. As a consequence, the efficient approaches for the convex problem can no longer be applied, and new solutions are needed. In this talk, we present a way to integrate an active set algorithm into a multi-objective evolutionary algorithm (MOEA). The idea is to let the MOEA come up with some convex subsets of the set of all feasible portfolios, solve a critical line algorithm for each subset, and then merge the partial solutions to form the solution of the original non-convex problem. Because this means the active set algorithm has to be run for the evaluation of every candidate solution, we also discuss how to efficiently implement parametric quadratic programming for portfolio selection. We show that the resulting envelope-based MOEA significantly outperforms other state-of-the-art MOEAs. 26 April 2010, 15:00, CS1.01 A Rigorous Approach to Statistical Database Privacy Privacy is a fundamental problem in modern data analysis. We describe "differential privacy", a mathematically rigorous and comprehensive notion of privacy tailored to data analysis. Differential privacy requires, roughly, that any single individual's data have little effect on the outcome of the analysis. Given this definition, it is natural to ask: what computational tasks can be performed while maintaining privacy? In this talk, we focus on the tasks of machine learning and releasing contingency tables. Learning problems form an important category of computational tasks that generalizes many of the computations applied to large real-life datasets. We examine what concept classes can be learned by an algorithm that preserves differential privacy. Our main result shows that it is possible to privately agnostically learn any concept class using a sample size approximately logarithmic in the cardinality of the hypothesis class. This is a private analogue of the classical Occam's razor result. Contingency tables are the method of choice of government agencies for releasing statistical summaries of categorical data. We provide tight bounds on how much distortion (noise) is necessary in these tables to provide privacy guarantees when the data being summarized is sensitive. Our investigation also leads to new results on the spectra of random matrices with correlated rows. 22 Apr 2010, 16:00, MS B3.03 Local Algorithms for Robotic Formation Problems Consider a scenario with a set of autonomous mobile robots having initial positions in the plane. Their goal is to move in such a way that they eventually reach a prescribed formation. Such a formation may be a straight line between two given endpoints (short communication chain), a circle or any other geometric pattern, or just one point (gathering problem). In this talk, I consider simple local strategies for such robotic formation problems: the robots are limited to see only robots within a bounded radius; they are memoryless, without common orientation. Thus, their decisions where to move next are solely based on the relative positions of robots within the bounded radius. I will present local strategies for short communication chains and gathering, and present runtime bounds assuming different time models. All previous algorithms with a proven time bound assume global view on the positions of all robots. 16 Mar 2010, 16:00, MS B3.03 Algorithmic Meta-Theorems: Upper and Lower Bounds for the Parameterized Complexity of Problems on Graphs In 1990, Courcelle proved a fundamental theorem stating that every property of graphs definable in monadic second-order logic can be decided in linear time on any class of graphs of bounded tree-width. This theorem is the first of what is today known as algorithmic meta-theorems, that is, results of the form: every property definable in a logic L can be decided efficiently on any class of structures with property P. Such theorems are of interest both from a logical point of view, as results on the complexity of the evaluation problem for logics such as first-order or monadic second-order logic, and from an algorithmic point of view, where they provide simple ways of proving that a problem can be solved efficiently on certain classes of structures. Following Courcelle's theorem, several meta-theorems have been established, primarily for first-order logic with respect to properties of structures derived from graph theory. In this talk I will motivate the study of algorithmic meta-theorems from a graph algorithmic point of view, present recent developments in the field and illustrate the key techniques from logic and graph theory used in their proofs. So far, work on algorithmic meta-theorems has mostly focused on obtaining tractability results for as general classes of graphs as possible. The question of finding matching lower bounds, that is, intractability results for monadic second-order or first-order logic with respect to certain graph properties, has so far received much less attention. Tight matching bounds, for instance for Courcelle's theorem, would be very interesting as they would give a clean and exact characterisation of tractability for MSO model-checking with respect to structural properties of the models. In the second part of the talk I will present a recent result in this direction showing that Courcelle's theorem can not be extended much further to classes of unbounded tree-width. 10 Mar 2010, 16:00, MS B3.03 Time-optimal Strategies for Infinite Games The use of two-player games of infinite duration has a long history in the synthesis of controllers for reactive systems. Classically, the quality of a winning strategy is measured in the size of the memory needed to implement it. But often there are other natural quality measures: in many games (even if they are zero-sum) there are winning plays for Player 0 that are more desirable than others. In this talk, we define and compute time-optimal winning strategies for three winning conditions. In a Poset game, Player 0 has to answer request by satisfying a partially ordered set of events. We use a waiting time based approach to define the quality of a strategy and show that Player 0 has optimal winning strategies, which are finite-state and effectively computable. In Parametric LTL, temporal operators may be equipped with free variables for time bounds. We present algorithms that determine whether a player wins a game with respect to some, infinitely many, or all variable valuations. Furthermore, we show how to determine optimal valuations that allow a player to win a game. In a k-round Finite-time Muller game, a play is stopped as soon as some loop is traversed k times in a row. For k=n^2n!+1, the winner of the k-round Finite-time Muller game wins also the classical Muller game. For k=2, this equivalence does no longer hold. For all values in between, it is open whether the games are equivalent. 9 Mar 2010, 16:00, MS B3.03 Induced Minors and Contractions - An Algorithmic View The theory of graph minors by Robertson and Seymour is one of very active fields in modern (algorithmic) graph theory. In this talk we will be interested in two containment relations similar to minors -- contractions and induced minors. We will survey known classic results and present some new work. In particular, I would like to talk about two recent results: (1) a polynomial-time algorithm for finding induced linkages in claw-free graphs, and (2) a polynomial-time algorithm for contractions to a fixed pattern in planar graphs. The talk will be based on joint work with Jiri Fiala, Bernard Lidicky, Daniel Paulusma and Dimitrios Thilikos. 3 Mar 2010, 14:00, MS B3.02 The Generalized Triangle-triangle Transformation in Percolation (joint seminar with Statistical Mechanics) One of the main aims in the theory of percolation is to find the `critical probability' above which long range connections emerge from random local connections with a given pattern and certain individual probabilities. The quintessential example is Kesten's result from 1980 that if the edges of the square lattice are selected independently with probability p, then long range connections appear if and only if p>1/2. The starting point is a certain self-duality property, observed already in the early 60s; the difficulty is not in this observation, but in proving that self-duality does imply criticality in this setting. Since Kesten's result, more complicated duality properties have been used to determine a variety of other critical probabilities. Recently, Scullard and Ziff have described a very general class of self-dual planar percolation models; we show that for the entire class (in fact, a larger class), self-duality does imply criticality. 2 Mar 2010, 16:00, MS B3.03 A Flow Model Based on Linking Systems with Applications in Network Coding The Gaussian relay channel network, is a natural candidate to model wireless networks. Unfortunately, it is not known how to determine the capacity of this model, except for simple networks. For this reason, Avestimehr, Diggavi and Tse introduced in 2007 a purely deterministic network model (the ADT model), which captures two key properties of wireless channels, namely broadcast and superposition. Furthermore, the capacity of an ADT model can be determined efficiently and approximates the capacity of Gaussian relay channels. In 2009, Amaudruz and Fragouli presented the first polynomial time algorithm to determine a relay encoding strategy that achieves the min-cut value of an ADT network. In this talk, I will present a flow model which shares many properties with classical network flows (as introduced by Ford and Fulkerson) and includes the ADT model as a special case. The introduced flow model is based on linking systems, a structure closely related to matroids. Exploiting results from matroid theory, many interesting prop result. Furthermore, classical matroid algorithms can be used to obtain efficient algorithms for finding maximum flows, minimum cost flows and minimum cuts. This is based on joint work with Michel Goemans and Satoru Iwata. 23 Feb 2010, 16:00, MS B3.03 A Nonlinear Approach to Dimension Reduction The celebrated Johnson-Lindenstrauss lemma says that every n points in Euclidean space can be represented using O(log n) dimensions with only a minor distortion of pairwise distances. It has been conjectured that a much-improved dimension reduction representation is achievable for many interesting data sets, by bounding the target dimension in terms of the intrinsic dimension of the data, e.g. by replacing the log(n) term with the doubling dimension. This question appears to be quite challenging, requiring new (nonlinear) embedding techniques. We make progress towards resolving this question by presenting two dimension reduction theorems with similar flavour to the conjecture. For some intended applications, these results can serve as an alternative to the conjectured embedding. [Joint work with Lee-Ad (Adi) Gottlieb.] 16 Feb 2010, 16:00, MS B3.03 k-Means has Polynomial Smoothed Complexity The k-means method is one of the most widely used clustering algorithms, drawing its popularity from its speed in practice. Recently, however, it was shown to have exponential worst-case running time. In order to close the gap between practical performance and theoretical analysis, we study the k-means method in the model of smoothed analysis. Smoothed analysis is a hybrid of worst-case and average-case analysis, which is based on a semi-random input model in which an adversary first specifies an arbitrary input that is subsequently slightly perturbed at random. This models random influences (e.g., measurement errors or numerical imprecision) that are present in most applications, and it often yields more realistic results than a worst-case or average-case analysis. We show that the smoothed running time of the k-means method is bounded by a polynomial in the input size and 1/sigma, where sigma is the standard deviation of the Gaussian perturbations. This means that if an arbitrary input data set is randomly perturbed, then the k-means method will run in expected polynomial time on that input set. This talk is based on joint work with David Arthur (Stanford University) and Bodo Manthey (University of Twente). 10 Feb 2010, 16:00, MS B3.03 Total Fractional Colorings of Graphs with Large Girth A total coloring is a combination of a vertex coloring and an edge coloring of a graph: every vertex and every edge is assigned a color and any two adjacent/incident objects must receive distinct colors. One of the main open problems in the area of graph colorings is the Total Coloring Conjecture of Behzad and Vizing from the 1960's asserting that every graph has a total coloring with at most D+2 colors where D is its maximum degree. When relaxed to fractional total colorings, the Total Coloring Conjecture was verified by Kilakos and Reed. In the talk, we will present a proof of the following conjecture of Reed: For every real ε > 0 and integer D, there exists g such that every graph with maximum degree D and girth at least g has total fractional chromatic number at most D+1+ε. For D=3 and D=4,6,8,10,..., we prove the conjecture in a stronger form: there exists g such that every graph with maximum degree D and girth at least g has total fractional chromatic number equal to Joint work with Tomas Kaiser, František Kardoš, Andrew King and Jean-Sébastien Sereni. 9 Feb 2010, 16:00, MS B3.03 Hamilton Decompositions of Graphs and Digraphs A Hamilton decomposition of a graph or digraph G is a set of edge-disjoint Hamilton cycles which together contain all the edges of G. In 1968, Kelly conjectured that every regular tournament has a Hamilton decomposition. We recently proved an approximate version of this conjecture (joint work with D. Kuhn and A. Treglown). I will also describe an asymptotic solution of a problem by Nash-Williams (from 1971) on the number of edge-disjoint Hamilton cycles in a graph with given minimum degree (joint work with D. Christofides and D. Kuhn). 2 Feb 2010, 16:00, MS B3.03 Time Complexity of Decision Trees We study time complexity of decision trees over an infinite set of k-valued attributes. As time complexity measures, we consider the depth and its extension - weighted depth of decision trees. The problem under consideration is the following. Is it possible for an arbitrary finite set of attributes to construct a decision tree which recognizes values of these attributes for a given input, and has the weighted depth less than the total weight of the considered attributes? The solution of this problem for the case of depth is given in terms of independence dimension (which is closely connected with VC dimension) and a condition of decomposition of granules. Each granule can be described by a set of equations of the kind "attribute = value". The solution of the considered problem for the weighted depth is based on the solution for the depth. We also discuss the place of the obtained results in the comparative analysis of time complexity of deterministic and nondeterministic decision trees. 27 Jan 2010, 16:00, MS B3.03 Random Graph Processes The random triangle-free graph process starts with an empty graph and a random ordering on all the possible edges and in each step considers an edge and adds it too the graph if it remains triangle-free. In the same way one can define the random planar graph process where an edge is added when the graph remains planar. In this talk the two processes are compared. For example we show that with high probability at the end of the random planar process every fixed planar graph is a subgraph whereas in the triangle-process dense triangle-free subgraphs will not appear. 26 Jan 2010, 16:00, MS B3.03 Cycles in Directed Graphs There are many theorems concerning cycles in graphs for which it is natural to seek analogous results for directed graphs. I will survey some recent results of this type, including: 1. a solution to a question of Thomassen on an analogue of Dirac's theorem for oriented graphs, 2. a theorem on packing cyclic triangles in tournaments that "almost" answers a question of Cuckler and Yuster, and 3. a bound for the smallest feedback arc set in a digraph with no short directed cycles, which is optimal up to a constant factor and extends a result of Chudnovsky, Seymour and Sullivan. These are joint work respectively with (1.) Kuhn and Osthus, (2.) Sudakov, and (3.) Fox and Sudakov. 19 Jan 2010, 16:00, MS B3.03 Solvable and Unsolvable in Cellular Automata 12 Jan 2010, 16:00, MS B3.03 Using Neighbourhood Exploration to Speed up Random Walks We consider strategies that can be used to speed-up the cover time of a random walk on undirected connected graphs. The price of this speed up is normally some extra work that can be performed locally by the walk or by the vertices of the graph. Typical assumptions about what is allowed include: Biased walk transitions, Use of previous history, Local exploration of the graph. Methods of local exploration include the neighbour marking process RW-MARK and look-ahead RW-LOOK (searching to fixed depth). The marking process, RW-MARK, made by a random walk on an undirected graph G is as follows. Upon arrival at a vertex v, the walk marks v if unmarked and otherwise it marks a randomly chosen unmarked neighbor of v. Depending on the degree and the expansion of the graph, we prove several upper bounds on the time required by the process RW-MARK to mark all vertices of G. If, for instance G is the hypercube on n vertices the processes marks all vertices in time O(n), with high probability. This significantly reduces the n ln n cover time of the hypercube using a standard random walk. The process RW-MARK can be compared to the marking process where a vertex v is chosen uniformly at random (coupon collecting) at each step. For the hypercube also has a marking time of O(n). In the related look-ahead process RW-LOOK, the walk marks all neighbours of the visited vertex to some depth k. For the hypercube, for example, the performance of the processes RW-LOOK-1, and CC-LOOK-1 is asymptotic to n ln 2 with high probability. This research is joint work with Petra Berenbrink, Robert Elsaesser, Tomasz Radzik and Thomas Sauerwald. 2 Dec 2009, 16:00, MS.05 Robustness of the Rotor-router Mechanism for Graph Exploration We consider the model of exploration of an undirected graph G by a single agent which is called the rotor-router mechanism or the Propp machine (among other names). Let p_v indicate the edge adjacent to a node v which the agent took on its last exit from v. The next time when the agent enters node v, first the "rotor" at node v advances pointer p_v to the next edge in a fixed cyclic order of the edges adjacent to v. Then the agent is directed onto edge p_v to move to the next node. It was shown before that after initial O(mD) steps, the agent periodically follows one established Eulerian cycle (that is, in each period of 2m consecutive steps the agent will traverse each edge exactly twice, once in each direction). The parameters m and D are the number of edges in G and the diameter of G. We investigate robustness of such exploration in presence of faults in the pointers p_v or dynamic changes in the graph. In particular, we show that after the exploration establishes an Eulerian cycle, if at some step k edges are added to the graph, then a new Eulerian cycle is established within O(km) steps. We show similar results for the case when the values of k pointers p_v are arbitrarily changed and when an arbitrary edge is deleted from the graph. Our proofs are based on the relation between Eulerian cycles and spanning trees known as the "BEST" Theorem (after de Bruijn, van Aardenne-Ehrenfest, Smith and Tutte). This is joint work with: E. Bampas, L. Gasieniec, R. Klasing and A. Kosowski. 1 Dec 2009, 16:00, MS.05 Discrepancy and Signed Domination in Graphs and Hypergraphs For a graph G, a signed domination function of G is a two-colouring of the vertices of G with colours +1 and -1 such that the closed neighbourhood of every vertex contains more +1's than -1's. This concept is closely related to combinatorial discrepancy theory as shown by Fueredi and Mubayi [J. Combin. Theory, Ser. B76 (1999) 223-239]. The signed domination number of G is the minimum of the sum of colours for all vertices, taken over all signed domination functions of G. In this talk, we will discuss new upper and lower bounds for the signed domination number. These new bounds improve a number of known results. 25 Nov 2009, 16:00, MS.05 An Overview of Multi-index Assignment Problems In this presentation we give an overview of applications of, and algorithms for, multi-index assignment problems (MIAPs). MIAPs, and relatives of it, have a long history both in applications as well as in theoretical results, starting at least in the 1950's. In particular, we focus here on the complexity and approximability of special cases of MIAPs. A prominent example of a MIAP is the so-called axial three index assignment problem (3AP) which has many applications in a variety of domains including clustering. A description of 3AP is as follows. Given are three n-sets R, G, and B. For each triple in R X G X B a cost-coefficient c(i,j,k) is given. The problem is to find n triples such that each element is in exactly one triple, while minimizing total cost. We show positive and negative results for finding an optimal solution to this problem that depend upon different ways of how the costs c(i,j,k) are specified. 24 Nov 2009, 16:00, MS.05 Approximability and Parameterized Complexity of Minmax Values We consider approximating the minmax value of a multiplayer game in strategic form. Tightening recent bounds by Borgs et al., we observe that approximating the value with a precision of "ε log n" digits (for any constant ε > 0) is NP-hard, where n is the size of the game. On the other hand, approximating the value with a precision of c log log n digits (for any constant c ≤ 1) can be done in quasi-polynomial time.We consider the parameterized complexity of the problem, with the parameter being the number of pure strategies k of the player for which the minmax value is computed. We show that if there are three players, k = 2 and there are only two possible rational payoffs, the minmax value is a rational number and can be computed exactly in linear time. In the general case, we show that the value can be approximated with any polynomial number of digits of accuracy in time n^O(k). On the other hand, we show that minmax value approximation is W[1]-hard and hence not likely to be fixed parameter tractable. Concretely, we show that if k-CLIQUE requires time n^Ω(k) then so does minmax value computation. This is joint work with Kristoffer Arnsfelt Hansen, Thomas Dueholm Hansen, and Peter Bro Miltersen. 18 Nov 2009, 16:00, MS.05 Profit-maximizing Pricing: The Highway and Tollbooth Problem We consider the profit maximizing pricing problem for single-minded buyers. Here, we wish to sell a set of m items to n buyers, each of whom is interested in buying a single set of items. Our goal is to set prices for the items such that the profit obtained by selling the items to the buyers who can afford them is maximized. We also assume in our case, that we have arbitrarily many copies of each item to sell. When the underlying set of items are edges of a graph, and the buyers are interested in buying specific paths, this is called the tollbooth problem. We consider the special case where the graph is a tree, or a path. In the case of a tree, the problem is already known to be APX-hard. We give an O(log n) approximation algorithm. When the graph is a path, the problem is called the highway problem. In this case, we show that the problem is strongly NP-hard, complementing an earlier QPTAS. We also consider the discount model where some items are allowed to have negative prices, and show that a very simple case is already APX-hard. This is joint work with K. Elbassioni, S. Ray and R. Sitters. 17 Nov 2009, 16:00, MS.05 Pricing Lotteries Randomized mechanisms, which map a set of bids to a probability distribution over outcomes rather than a single outcome, are an important but ill-understood area of computational mechanism design. We investigate the role of randomized outcomes ("lotteries") in the context of a fundamental and archetypical multi-parameter mechanism design problem: selling heterogeneous items to unit-demand bidders. To what extent can a seller improve her revenue by pricing lotteries rather than items, and does this modification of the problem affect its computational tractability? We show that the answers to these questions hinge on the exact model of consumer behavior we deploy and present several tight bounds on the increase in revenue obtainable via randomization and the computational complexity of revenue maximization in these different models. This is joint work with Shuchi Chawla, Bobby Kleinberg, and Matt Weinberg. 12 Nov 2009, 14:00, CS1.04 Some Multiobjective Optimization Problems Multiobjective Optimization has many applications in such fields as the internet, finance, biomedicine, management science, game theory and engineering. However, solving MO problems is not an easy task. Searching for all Pareto optimal solutions is expensive and time consuming process because there are usually exponentially large (or infinite) Pareto optimal solutions. Even for the simplest problem determining whether a point belongs to the Pareto curve is NP-hard. In this talk we are going to discuss some continuous and combinatorial multiobjective optimization problems and their applications in management, finance and military. Exact and heuristic techniques for solving these problems are presented. We also consider nondifferentiable multiobjective programming problems involving generalized convex functions and present optimality conditions and duality results for the problems. 11 Nov 2009, 16:00, MS.05 The Extremal Function for Partial Bipartite Tilings For a fixed graph H, let ex(n,H) be the maximum number of edges of an n-vertex graph not containing a copy of H. The asymptotic estimation of ex(n,H) is a central problem to extremal graph theory, and for the case when H is non-bipartite the answer is given by the Erdős-Stone theorem. However despite considerable effort, the problem is still open when H is bipartite. A related topic is the study of ex(n, l × H), the maximum number of edges of an n-vertex graph not containing l-vertex disjoint copies of a graph H. Insofar, this function has been investigated only for some special values of H. In this talk I shall first discuss known results about ex(n, l × H). Then for a given α ∈ (0,1) I shall determine the asymptotic behaviour of ex(n, αn × H), in the particular case when H is a bipartite graph. The proof is an application of the regularity lemma. This is joint work with Jan Hladký. 10 Nov 2009, 16:30, University of Oxford, Mathematical Institute, Room SR2 The Power of Choice in a Generalized Pólya Urn Model We introduce a "Pólya choice" urn model combining elements of the well known "power of two choices" model and the "rich get richer" model. From a set of k urns, randomly choose c distinct urns with probability proportional to the product of a power γ > 0 of their occupancies, and increment one with the smallest occupancy. The model has an interesting phase transition. If γ ≤ 1, the urn occupancies are asymptotically equal with probability 1. For γ > 1, this still occurs with positive probability, but there is also positive probability that some urns get only finitely many balls while others get infinitely many. 10 Nov 2009, 14:45, University of Oxford, Mathematical Institute, Room L3 Random Graphs with Few Disjoint Cycles Fix a positive integer k, and consider the class of all graphs which do not have k+1 vertex-disjoint cycles. A classical result of Erdős and Pósa says that each such graph G contains a blocker of size at most f(k). Here a blocker is a set B of vertices such that G-B has no cycles. We give a minor extension of this result, and deduce that almost all such labelled graphs on vertex set 1,...,n have a blocker of size k. This yields an asymptotic counting formula for such graphs; and allows us to deduce further properties of a graph Rn taken uniformly at random from the class: we see for example that the probability that Rn is connected tends to a specified limit as n → ∞. There are corresponding results when we consider unlabelled graphs with few disjoint cycles. We consider also variants of the problem involving for example disjoint long cycles. This is joint work with Valentas Kurauskas and Mihyun Kang. 10 Nov 2009, 14:00, University of Oxford, Mathematical Institute, Room L3 Oblivious Routing in the L_p-norm Gupta et al. introduced a very general multi-commodity flow problem in which the cost of a given flow solution on a graph G=(V,E) is calculated by first computing the link loads via a load-function l, that describes the load of a link as a function of the flow traversing the link, and then aggregating the individual link loads into a single number via an aggregation function. We show the existence of an oblivious routing scheme with competitive ratio O(log n) and a lower bound of Ω(log n/loglog n) for this model when the aggregation function agg is an Lp-norm. Our results can also be viewed as a generalization of the work on approximating metrics by a distribution over dominating tree metrics and the work on minimum congestion oblivious. We provide a convex combination of trees such that routing according to the tree distribution approximately minimizes the Lp-norm of the link loads. The embedding techniques of Bartal and Fakcharoenphol et al. can be viewed as solving this problem in the L1-norm while the result on congestion minmizing oblivious routing solves it for L∞. We give a single proof that shows the existence of a good tree-based oblivious routing for any Lp-norm. 4 Nov 2009, 16:00, MS.05 Correlation Decay and Applications to Counting Colourings We present two algorithms for counting the number of colourings of sparse random graph. Our approach is based on correlation decay techniques originating in statistical physics. The first algorithm is based on establishing correlation decay properties of Gibbs distribution which are related to Dobrushin's condition for uniqueness of Gibbs measure on infinite trees. More specifically, we impose boundary conditions to a specific set of vertices of the graph and we show that the effect of this boundary decays as we move away. For the second algorithm we set a new context for exploiting correlation decay properties. Instead of imposing boundary conditions -fixing the colouring of vertices-, we impose a specific graph structure to some region (i.e. delete edges) and show that the effect of this change on the Gibbs distribution decays as we move away. It turns out that this approach designates a new set of spatial correlation decay conditions that can be used for counting algorithms. In both cases the algorithms with high probability provide in polynomial time a (1/poly(n))-approximation of the logarithm of the number of k-colourings of the graph ("free energy") with k constant. The value of k depends on the expected degree of the graph. The second technique gives better results than the first one in terms of minimum number of colours needed. Finally, the second algorithm can be applied to another class of graphs which we call locally a-dense graphs of bounded maximum degree Δ. A graph G = (V, E) in this family has following property: For all {u,v} in E the number of vertices which are adjacent to v but not adjacent to u are at most (1-a)Δ, where 0<a<1 is a parameter of the model. For a locally a-dense graph G with bounded Δ the algorithm computes in polynomial time a (1/polylogn)-approximation to the logarithm of the number of k-colourings, for k> (2-a)Δ. By restricting the treewidth of the neighbourhoods in G we can improve the approximation. 3 Nov 2009, 16:00, MS.05 Antichains and the Structure of Permutation Classes The analogue of hereditary properties of graphs for permutations are known as "permutation classes", defined as downsets in the "permutation containment" partial ordering. They are most commonly described as the collection "avoiding" some set of permutations, cf forbidden induced subgraphs for hereditary graph properties. Their origin lies with Knuth in the analysis of sorting machines, but in recent years have received a lot of attention in their own right. While much of the emphasis has been on exact and asymptotic enumeration of particular families of classes, an ongoing study of the general structure of permutations is yielding remarkable results which typically also have significant enumerative consequences. In this talk I will describe a number of these structural results, with a particular emphasis on the question of partial well-order -- i.e. the existence or otherwise of infinite antichains in any given permutation class. The building blocks of all permutations are "simple permutations", and we will see how these on their own contribute to the partial well-order problem. We will see how "grid classes", a seemingly independent concept used to express large complicated classes in terms of smaller easily-described ones, also have significant consequences in determining the existence of infinite antichains. Finally, I will present recent and ongoing work in combining these two concepts, both to describe a general method of constructing antichains and to prove when certain classes are partially well-ordered. 27 Oct 2009, 16:00, MS.05 Wiretapping a Hidden Network We consider the problem of maximizing the probability of hitting a strategically chosen hidden virtual network by placing a wiretap on a single link of a communication network. This can be seen as a two-player win-lose (zero-sum) game that we call the wiretap game. The value of this game is the greatest probability that the wiretapper can secure for hitting the virtual network. The value is shown to be equal the reciprocal of the strength of the underlying graph. We provide a polynomial time algorithm that finds linear-sized description of the maxmin-polytope, and a characterization of its extreme points. It also provides a succint representation of all equilibrium strategies of the wiretapper that minimize the number of pure best responses of the hider. Among these strategies, we efficiently compute the unique strategy that maximizes the least punishment that the hider incurs for playing a pure strategy that is not a best response. Finally, we show that this unique strategy is the nucleolus of the recently studied simple cooperative spanning connectivity game. Joint work with: Haris Aziz, Mike Paterson and Rahul Savani. 13 October 2009, 16:00, MS.05 A reset word for a finite deterministic automaton is a word which takes the machine to a fixed state from any starting state. Investigations into an old conjecture on the length of a reset word (if one exists) has led to new properties of permutation groups lying between primitivity and 2-transitivity, and a surprising fact about the representation of transformation monoids as endomorphism monoids of graphs. In the talk I will discuss some of these things and their connections. 6 October 2009, 16:00, MS.05 On Graphs that Satisfy Local Pooling Efficient operation of wireless networks and switches requires using simple (and in some cases distributed) scheduling algorithms. In general, simple greedy algorithms (known as Greedy Maximal Scheduling - GMS) are guaranteed to achieve only a fraction of the maximum possible throughput (e.g., 50% throughput in switches). However, it was recently shown that in networks in which the local pooling conditions are satised, GMS achieves 100% throughput. A graph G = (V,E) is said to satisfy the local pooling conditions if for every induced subgraph H of G there exists a function g : V(H) → [0, 1] such that Σv∈S g(v)=1 for every maximal stable set S in H. We first analyze line graphs and give a characterization of line graphs that satisfy local pooling. Line graphs are of interest since they correspond to the interference graphs of wireless networks under primary interference constraints. Finally we consider claw-free graphs and give a characterization of claw-free graphs that satisfy local pooling. This is joint work with Berk Birand, Maria Chudnovsky, Paul Seymour, Gil Zussman and Yori Zwols. 29 September 2009, 16:00, MS.03 The Power of Online Reordering Online algorithms studied in theory are characterized by the fact that they get to know the input sequence incrementally, one job at a time, and a new job is not issued until the previous one is processed by the algorithm. In real applications, jobs can usually be delayed for a short amount of time. As a consequence, the input sequence of jobs can be reordered in a limited fashion to optimize the performance. In this talk, the power and limits of this online reodering paradigm is discussed for several problems. 24 June 2009, 16:00, MS.03 Triangles in Random Graphs Let X be the number of triangles in a random graph G(n,1/2). Loebl, Matousek and Pangrac showed that X is close to uniformly distributed modulo q when q=O(log n) is prime. We extend this result considerably, and discuss further implications of our methods for the distribution of X. This is joint work with Atsushi Tateno (Oxford). 23 June 2009, 16:00, MS B3.03 Positive Projections If A is a set of n positive integers, how small can the set {a/(a,b) : a,b ∈ A} be? Here as usual (a,b) denotes the HCF of a and b. This elegant question was raised by Granville and Roesler, who also reformulated it in the following way: given a set A of n points in Z^d, how small can (A-A)^+, the projection of the difference set of A onto the positive orthant, be? Freiman and Lev gave an example to show that (in any dimension) the size can be as small as n^2/3 (up to a constant factor). Granville and Roesler proved that in two dimensions this bound is correct, i.e. that the size is always at least n^2/3, and asked if this held in any dimension. Holzman, Lev and Pinchasi showed that in three dimensions the size is at least n^3/5, and that in four dimensions the size is at least n^6/11 (up to a logarithmic factor), and they also asked if the correct exponent is always 2/3. After some background material, the talk will focus on recent developments, including a negative answer to the n^2/3 question. Joint work with Béla Bollobás. 16 June 2009, 16:00, MS B3.03 Cycles, Paths, Connectivity and Diameter in Distance Graphs Circulant graphs form an important and very well-studied class of graph. They are Cayley graphs of cyclic groups and have been proposed for numerous applications such as local area computer networks, large area communication networks, parallel processing architectures, distributed computing, and VLSI design. Their connectivity and diameter, cycle and path structure, and further graph-theoretical properties have been studied in great detail. Polynomial time algorithms for isomorphism testing and recognition of circulant graphs have been long-standing open problems which were completely solved only recently. Our goal here is to extend some of the fundamental results concerning circulant graphs to the similarly defined yet more general class of distance graphs. We prove that the class of circulant graphs coincides with the class of regular distance graphs. We study the existence of long cycles and paths in distance graphs and analyse the computational complexity of problems related to their connectivity and diameter. Joint work with L. Draque Penso und J.L. Szwarcfiter. 9 June 2009, 16:00, MS B3.03 Discrepancy and eigenvalues A graph has low discrepancy if its global edge distribution is "close" to that of a random graph with the same overall density. It has been known that low discrepancy is related to the spectra of various matrix representations of the graph such as the adjacency matrix or the normalized Laplacian. More precisely, a large spectral gap implies low discrepancy. The topic of this talk is the converse implication: does low discrepancy imply a large spectral gap? The proofs are based on the Grothendieck inequality and the duality theorem for semidefinite programs. 3 June 2009, 15:00, CS 1.01 Social Context Games We introduce the study of social context games. A social context game is defined by an underlying game in strategic form, and a social context consisting of an undirected graph of neighborhood among players and aggregation functions. The players and strategies in a social context game are as in the underlying game, while the players' utilities in a social context game are computed from their payoffs in the underlying game based on the graph of neighborhood and the aggregation functions. Examples of social context games are ranking games and coalitional con- gestion games. A signifcant challenge is the study of how various social contexts affect various properties of the game. In this work we consider resource selection games as the underlying games, and four basic social contexts. An important property of resource selection games is the existence of pure strategy equilibrium. We study the existence of pure strategy Nash equilibrium in the corresponding social context games. We also show that the social context games possessing pure strategy Nash equilibria are not potential games, and therefore are distinguished from congestion games. Joint work with Itai Ashlagi and Moshe Tennenholtz. 26 May 2009, 16:00, MS B3.03 30 Years with Consensus: from Feasibility to Scalability The problem of reaching consensus in a distributed environment is one of the fundamental topics in the area of distributed computing, systems and architectures. It was formally abstracted in late 70's by Lamport, Pease and Shostak. The early work on this problem focused on feasibility, i.e., what are the conditions under which the consensus can be solved. Later, the complexity of solutions has become of great importance. In this talk I'll introduce the problem, present selected classic algorithms and lower bounds, and conclude with my recent work on time and communication complexity of consensus in a message-passing system. 19 May 2009, 16:00, MS B3.03 Disconnecting a Graph by a Disconnected Cut For a connected graph G = (V, E), a subset U of vertices is called a k-cut if U disconnects G, and the subgraph induced by U contains exactly k≥1 components. More specifically, a k-cut U is called a (k, l)-cut if V \ U induces a subgraph with exactly l≥2 components. We study two decision problems, called k-CUT and (k, l)-CUT, which determine whether a graph G has a k-cut or (k, l)-cut, respectively. By pinpointing a close relationship to graph contractibility problems we first show that (k, l)-CUT is in P for k=1 and any fixed constant l≥2, while the problem is NP-complete for any fixed pair k, l≥2. We then prove that k-CUT is in P for k=1, and NP-complete for any fixed k≥2. On the other hand, we present an FPT algorithm that solves (k, l)-CUT on planar graphs when parameterized by k + l. By modifying this algorithm we can also show that k-CUT is in FPT (with parameter k) and DISCONNECTED CUT is solvable in polynomial time for planar graphs. The latter problem asks if a graph has a k-cut for some k≥2. Joint work with Takehiro Ito and Marcin Kamiński. 13 May 2009, 16:00, MS.03 Algorithms for Counting/Sampling Cell-Bounded Contingency Tables This is a survey talk about counting/sampling contingency tables and cell-bounded contingency tables. In the cell-bounded contingency tables problem, we are given a list of positive integer row sums r=(r1,...,rm), a list of positive integer column sums c=(c1,...,cn), and a non-negative integer bound bij, for every 1 ≤ i ≤ m, 1 ≤ j ≤ n. The problem is to count/sample the set of all m-by-n tables X ∈ Z^mn of non-negative integers which satisfy the given row and column sums, and for which 0 ≤ Xij ≤ bij for all i, j. I will outline a complicated (reduction to volume estimation) algorithm for approximately counting these tables in polynomial time, when the number of row is constant. I also hope to outline a more recent, 'cute' dynamic programming algorithm for exactly the same case of constantly-many rows. The case for general m is still open. Joint with Martin Dyer and Dana Randall 7 May 2009, 16:00, MS.03 4-Colour Ramsey Number of Paths The Ramsey number R(l x H) is the smallest integer m such that any l-colouring of the edges of Km induces a monochromatic copy of H. We prove that R(4 x Pn)=2.5n+o(n). Luczak proposed a way how to attack the problem of finding a long monochromatic path in a coloured graph G: one should search for a large monochromatic connected matching in the reduced graph of G. Here we propose a modification of Luczak's technique: we find a large monochromatic connected fractional matching. This relaxation allows us to use LP duality and reduces the question to a vertex-cover problem. This is a joint work with Jan Hladký and Daniel Král'. 6 May 2009, 16:00, MS.03 Sparse Random Graphs are Fault Tolerant for Large Bipartite Graphs Random graphs G on n vertices and with edge probability at least c(log n/n)^(1/Δ) are robust in the following sense: Let H be an arbitrary planar bipartite graph on (1-ε)n vertices with maximum degree Δ. Now you, the adversary, may arbitrarily delete edges in G such that no vertex looses more than a third of its neighbours. However, you will not be able to destroy all copies of H. This result is obtained (joint work with Yoshiharu Kohayakawa and Anusch Taraz) via a sparse version of the regularity lemma. In the talk I will provide some background concerning related results, introduce sparse regularity, and outline how this can be used to prove theorems such as the one above. 6 May 2009, 11:00, MS.03 Counting Flags in Triangle Free Digraphs An important instance of the Caccetta-Häggkvist conjecture asserts that an n-vertex digraph with minimum outdegree at least n/3 contains a directed triangle. Improving on a previous bound of 0.3532n due to Hamburger, Haxell, and Kostochka we prove that a digraph with minimum outdegree at least 0.3465n contains a directed triangle. The proof is an application of a recent combinatorial calculus developed by Razborov. This calculus enables one to formalize common techniques in the area (such as induction or the Cauchy-Schwartz inequality). In the talk I shall describe Razborov's method in general, and its application to the setting of the Cacceta-Häggvist Conjecture. This is joint work with Dan Král' and Sergey Norin. 5 May 2009, 16:00, MS B3.03 Regularity Lemmas for Graphs Szemerédi's regularity lemma is a powerful tool in extremal graph theory, which had have many applications. In this talk we present several variants of Szemerédi's original lemma (due to several researchers including Frieze and Kannan, Alon et al., and Lovász and Szegedy) and discuss their relation to each other. 21 Apr 2009, 16:00, MS B3.03 Time and Space Efficient Anonymous Graph Traversal We consider the problem of periodic graph traversal in which a mobile entity with constant memory has to visit all n nodes of an arbitrary undirected graph G in a periodic manner. Graphs are supposed to be anonymous, that is, nodes are unlabeled. However, while visiting a node, the robot has to distinguish between edges incident to it. For each node v the endpoints of the edges incident to v are uniquely identified by different integer labels called port numbers. We are interested in minimization of the length of the exploration period. This problem is unsolvable if the local port numbers are set arbitrarily (shown by Budach in 1978). However, surprisingly small periods can be achieved when carefully assigning the local port numbers. Dobrev, Jansson, Sadakane, and Sung described an algorithm for assigning port numbers, and an oblivious agent (i.e. an agent with no memory) using it, such that the agent explores all graphs with n vertices within period 10n. Providing the agent with a constant number of memory bits, the optimal length of the period was later proved to be no more than 3.75n (using a different assignment of the port numbers). Following on from this, a period of length at most (4 1/3)n was shown for oblivious agents, and a period of length at most 3.5n for agents with constant memory. This talk describes results in two papers by the speaker, which are joint work with several other authors. 6 April 2009, 12:00, MS B3.02 Eliminating Cycles in the Torus I will discuss the problem of cutting the (discrete or continuous) d-dimensional torus economically, so that no nontrivial cycle remains. This improves, simplifies and/or unifies results of Bollobás, Kindler, Leader and O'Donnell, of Raz and of Kindler, O'Donnell, Rao and Wigderson. More formal, detailed abstract(s) appear in http://www.math.tau.ac.il/~nogaa/PDFS/torus3.pdf and in http:// Joint work with Bo'az Klartag. 17 March 2009, 16:00, MS B3.03 Oblivious Interference Scheduling In the interference scheduling problem, one is given a set of n communication requests described by pairs of points from a metric space. The points correspond to devices in a wireless network. In the directed version of the problem, each pair of points consists of a dedicated sending and a dedicated receiving device. In the bidirectional version the devices within a pair shall be able to exchange signals in both directions. In both versions, each pair must be assigned a power level and a color such that the pairs in each color class can communicate simultaneously at the specified power levels. The feasibility of simultaneous communication within a color class is defined in terms of the Signal to Interference Plus Noise Ratio (SINR) that compares the strength of a signal at a receiver to the sum of the strengths of other signals. This is commonly referred to as the "physical model" and is the established way of modelling interference in the engineering community. The objective is to minimize the number of colors as this corresponds to the time needed to schedule all requests. We study oblivious power assignments in which the power value of a pair only depends on the distance between the points of this pair. We prove that oblivious power assignments cannot yield approximation ratios better than Ω(n) for the directed version of the problem, which is the worst possible performance guarantee as there is a straightforward algorithm that achieves an O(n) -approximation. For the bidirectional version, however, we can show the existence of a universally good oblivious power assignment: For any set of n bidirectional communication requests, the so-called "square root assignment" admits a coloring with at most polylog(n) times the minimal number of colors. The proof for the existence of this coloring is non-constructive. We complement it by an approximation algorithm for the coloring problem under the square root assignment. This way, we obtain the first polynomial time algorithm with approximation ratio polylog(n) for interference scheduling in the physical model. This is joint work with Alexander Fanghänel, Thomas Keßelheim and Harald Räcke 16 March 2009, 11:00, WBS E2.02 Polynomial Time Solvable Cases of the Vehicle Routing Problem The Traveling Salesman Problem (TSP) is one of the most famous NP-hard problems. So, much works have been done to study polynomially solvable cases, that is, to find good conditions for distances between cities such that an optimal tour can be found in polynomial time. These good conditions give some restriction on the optimal solution, for example, Monge property. For a given complete weighted digraph G, a vertex x of G, and a positive integer k, the Vehicle Routing Problem (VRP) is to find a minimum weight connected subgraph F of G such that F is a union of k cycles sharing only the vertex x. In this talk, we apply good conditions for the TSP to the VRP. We will show that if a given weighted digraph satisfies several conditions, which is known for the TSP, then an optimal solution of the VRP can also be computed in polynomial time. 11 March 2009, 14:00, MS B1.01 Bounding Revenue Deficiency in Multiple Winner Procurement Auctions Consider a firm, called the buyer, that satisfies its demand over a T-period time horizon by assigning the demand vector to a supplier via a procurement (reverse) auction; call this the Standard auction. The firm is considering an alternative procedure in which it will allow bids on one or more periods; in this auction, there can be more than one winner covering the demand vector; call this the Multiple Winner auction. Choosing the Multiple Winner auction over the Standard auction will tend to: (1) allow each supplier the option of supplying demand for any subset of periods of the T -period horizon; (2) increase competition among the suppliers, and (3) allow the buyer to combine bids from different suppliers in order to lower his purchase cost. All three effects might lead one to expect that the buyer's cost will always be lower in the Multiple Winner auction than in the Standard auction. To the contrary, there are cases in which the buyer will have a higher cost in the Multiple Winner auction. We provide a bound on how much greater the buyer's cost can be and show that this bound is sharp. 10 March 2009, 16:00, MS B3.03 Prize-Collecting Steiner Trees and Disease The identification of functional modules in protein-protein interaction networks is an important topic in systems biology. These modules might help, for example, to better understand the underlying biological mechanisms of different tumor subtypes. In this talk, I report on results of a cooperation with statisticians and medical researchers from the University of Würzburg. In particular, I will present an exact integer linear programming solution for this problem, which is based on its connection to the well-known prize-collecting Steiner tree problem from Operations Research. 3 March 2009, 16:00, MS B3.03 Adding Recursion to Markov Chains, Markov Decision Processes, and Stochastic Games I will decribe a family of finitely presented, but infinite-state, stochastic models that arise by adding a natural recursion feature to Markov Chains, Markov Decision Processes, and Stochastic These models subsume a number of classic and heavily studied purely stochastic models, including (multi-type) branching processes, (quasi-)-birth-death processes, stochastic context-free grammars, and others. They also provide a natural abstract model of probabilistic procedural programs with recursion. The theory behind the algorithmic analysis of these models, developed over the past few years, has turned out to be very rich, with connections to a number of areas of research. I will survey just a few highlights from this work. There remain many open questions about the computational complexity of basic analysis problems. I will highlight a few such open problems. (Based on joint work with Mihalis Yannakakis, Columbia University) 2 March 2009, 16:00, MS B3.02 The Robust Network Loading Problem under Polyhedral Demand Uncertainty: Formulation, Polyhedral Analysis, and Computations For a given undirected graph G, the Network Loading Problem (NLP) deals with the design of a least cost network by allocating discrete units of facilities with different capacities on the links of G so as to support expected pairwise demands between some endpoints of G. In this work, we relax the assumption of known demands and study robust NLP with polyhedral demands to obtain designs flexible enough to support changing communication patterns in the least costly manner. More precisely, we seek for a final design that remains operational for any feasible demand realization in a prescribed polyhedral set. Firstly, we give a compact multi-commodity formulation of the problem for which we prove a nice decomposition property obtained from projecting out the flow variables. This simplifies the resulting polyhedral analysis and computations considerably by doing away with metric inequalities, an attendant feature of the most successful algorithms on the Network Loading Problem. Then, we focus on a specific choice of the uncertainty description, called the "hose model", which specifies aggregate traffic upper bounds for selected endpoints of the network. We study the polyhedral aspects of the Network Loading Problem under hose demand uncertainty and present valid inequalities for robust NLP with arbitrary number of facilities and arbitrary capacity structures as the second main contribution of the present work. Finally, we develop an efficient Branch-and-Cut algorithm supported by a simple but effective heuristic for generating upper bounds and use it to solve several well-known network design instances. 24 February 2009, 16:00, MS B3.03 Optimal Mechanisms for Scheduling We study the design of optimal mechanisms in a setting where job-agents compete for being processed by a service provider that can handle one job at a time. Each job has a processing time and incurs a waiting cost. Jobs need to be compensated for waiting. We consider two models, one where only the waiting costs of jobs are private information (1-d), and another where both waiting costs and processing times are private (2-d). An optimal mechanism minimizes the total expected expenses to compensate all jobs, while it has to be Bayes-Nash incentive compatible. We derive closed formulae for the optimal mechanism in the 1-d case and show that it is efficient for symmetric jobs. For non-symmetric jobs, we show that efficient mechanisms perform arbitrarily bad. For the 2-d case, we prove that the optimal mechanism in general does not even satisfy IIA, the "independent of irrelevant alternatives" condition. We also show that the optimal mechanism is not even efficient for symmetric agents in the 2-d case. Joined work with Birgit Heydenreich, Debasis Mishra and Marc Uetz. 10 February 2009, 16:00, MS B3.03 Distributed Optimisation in Network Control Communication networks (such as the Internet or BT's network) are held together by a wide variety of interacting network control systems. Examples include routing protocols, which determine the paths used by different sessions, and flow control mechanisms, which determine the rate at which different sessions can send traffic. Many of these control processes can be viewed as distributed algorithms that aim to solve a network-wide optimisation problem. I will present a mathematical framework, based on Lagrangian decomposition, for the design and analysis of such algorithms. As an illustration, I will discuss how this framework has been used in BT Innovate to develop a new resource control system which integrates multi-path routing and admission control decisions. The notion of convexity plays an important role in convergence proofs for our algorithms. We can design a control system with stability guarantees as long as the system's target behaviour can be captured as the solution to a convex optimisation problem. Unfortunately, many standard network design problems are formulated as integer programming problems and therefore inherently non-convex. I will discuss some of the implications of this non-convexity and identify some related research challenges. 3 February 2009, 16:00, MS B3.03 Multidimensional Problems in Additive Combinatorics We discuss bounds on extremal sets for problems like those below: 1. What is the largest subset of (ℤ/nℤ)^r that does not contain an arithmetic progression of length k? 2. What is the largest subset/(multiset) of (ℤ/nℤ)^r that does not contain n elements that sum to 0? 3. What is the largest subset of [1,...,n]^r that does not contain a solution of x+y=z (i.e., which is a sum free set)? 4. Colour the elements of [1,...,n]^r red and blue. How many monochromatic Schur triples are there? 27 January 2009, 16:00, MS B3.03 Fast Distance Multiplication of Unit-Monge Matrices A matrix is called Monge, if its density matrix is nonnegative. Monge matrices play an important role in optimisation. Distance multiplication (also known as min-plus or tropical multiplication) of two Monge matrices of size n can be performed in time O(n^2). Motivated by applications to string comparison, we introduced in a previous work the following subclass of Monge matrices. A matrix is called unit-Monge, if its density matrix is a permutation matrix; we further restrict our attention to a subclass that we call simple unit-Monge matrices. Our previous algorithm for distance multiplication of simple unit-Monge matrices runs in time O(n^3/2). Landau conjectured in 2006 that this problem can be solved in linear time. In this work, we give an algorithm running in time O(n log^4 n), thus approaching Landau's conjecture within a polylogarithmic factor. The new algorithm implies immediate improvements in running time for a number of string comparison and graph algorithms: semi-local longest common subsequences between permutations; longest increasing subsequence in a cyclic permutation; longest pattern-avoiding subsequence in a permutation; longest piecewise monotone subsequence; maximum clique in a circle graph; subsequence recognition in a grammar-compressed string; sparse spliced alignment of genome strings. 20 January 2009, 16:00, MS B3.03 Boundary Properties of Graphs and the Hamiltonian Cycle Problem The notion of a boundary graph property is a relaxation of that of a minimal property. Several fundamental results in graph theory have been obtained in terms of identifying minimal properties. For instance, Robertson and Seymour showed that there is a unique minimal minor-closed property with unbounded tree-width (the planar graphs), while Balogh, Bollobás and Weinreich identified nine minimal hereditary properties with the factorial speed of growth. However, there are situations where the notion of minimal property is not applicable. A typical example of this type is given by graphs of large girth. It is known that for each particular value of k, the graphs of girth at least k have unbounded tree-, clique- or rank-width, their speed of growth is superfactorial and many algorithmic problems remain NP-hard for these graphs, while the “limit” property of this sequence (i.e., acyclic graphs) has bounded tree-, clique- and rank-width, its speed of growth is factorial, and most of algorithmic problems can be solved in polynomial time in this class. The notion of boundary properties of graphs allows to overcome this difficulty. In this talk we survey some available results on this topic and identify the first boundary class for the Hamiltonian cycle problem. Joint work with N. Korpelainen and A. Tiskin 13 January 2009, 16:00, MS D1.07 Minimum Degree Conditions for Large Subgraphs Turán's theorem shows that every n-vertex graph with minimum degree n/2 contains a triangle. A proof (for large n) of the Pósa conjecture shows that every n-vertex graph with minimum degree 2n/3 contains the square of a Hamilton cycle; this is a cyclic ordering of the vertices in which every three consecutive vertices forms a triangle. We fill in the gap between these theorems, giving the correct relationship (for large n) between the minimum degree of an n-vertex graph and the lengths of squared cycles which are forced to exist in the graph. We also discuss generalisations of all three theorems to other graphs with chromatic number three, and offer some conjectures on higher chromatic numbers. This is joint work with Julia Böttcher and Jan Hladký. 2 December 2008, 16:00, MS.05 Degree Sequence Conditions for Graph Properties We discuss recent work on graphical degree sequences, i.e., sequences of integers that correspond to the degrees of the vertices of a graph. Historically, such degree sequences have been used to provide sufficient conditions for graphs to have a certain property, such as being k-connected or hamiltonian. For hamiltonicity, this research has culminated in a beautiful theorem due to Chvátal (1972). This theorem gives a sufficient condition for a graphical degree sequence to be forcibly hamiltonian, i.e., such that every graph with this degree sequence is hamiltonian. Moreover, the theorem is the strongest of an entire class of theorems in the following sense: if the theorem does not guarantee that a sequence π is forcibly hamiltonian, then there exists a nonhamiltonian graph with a degree sequence that majorizes π. Very recently, Kriesell solved an open problem due to Bauer et al. by establishing similar conditions for k-edge connectivity for any fixed k. We will introduce a general framework for such conditions and discuss recent progress and open problems on similar sufficient conditions for a variety of graph properties. This is based on joint work with Bauer, van den Heuvel, Kahl, and Schmeichel. 25 November 2008, 16:00, MS.05 Min-Max Functions Min-Max function appear in various contexts in mathematics, computer science, and engineering. For instance, in the analysis of zero-sum two-player games on graphs, Min-Max functions arise as dynamic programming operators. They also play a role in the performance analysis of certain discrete event systems, which are used to model computer networks and manufacturing systems. In each of these fields it is important to understand the long-term iterative behaviour of Min-Max functions. In this talk I will explain how this problem is related to certain combinatorial geometric problems and discuss some results and a conjecture. 18 November 2008, 16:00, MS.05 Even-hole-free Graphs and the Decomposition Method We consider finite and simple graphs. We say that a graph G contains a graph F, if F is isomorphic to an induced subgraph of G. A graph G is F-free if it does not contain F. Let ℱ be a (possibly infinite) family of graphs. A graph G is ℱ-free if it is F-free, for every F ∈ ℱ. Many interesting classes of graphs can be characterized as being ℱ-free for some family ℱ. The most famous such example is the class of perfect graphs. A graph G is perfect if for every induced subgraph H of G, χ(H) = ω(H), where χ(H) denotes the chromatic number of H and ω(H) denotes the size of a largest clique in H. The famous Strong Perfect Graph Theorem states that a graph is perfect if and only if it does not contain an odd hole nor an odd antihole (where a hole is a chordless cycle of length at least four). In the last 15 years a number of other classes of graphs defined by excluding a family of induced subgraphs have been studied, perhaps originally motivated by the study of perfect graphs. The kinds of questions this line of research was focused on were whether excluding induced subgraphs affects the global structure of the particular class in a way that can be exploited for putting bounds on parameters such as χ and ω, constructing optimization algorithms (problems such as finding the size of a largest clique or a minimum coloring), recognition algorithms and explicit construction of all graphs belonging to the particular class. A number of these questions were answered by obtaining a structural characterization of a class through their decomposition (as was the case with the proof of the Strong Perfect Graph Theorem). In this talk we survey some of the most recent uses of the decomposition theory in the study of classes of even-hole-free graphs. Even-hole-free graphs are related to β-perfect graphs in a similar way in which odd-hole-free graphs are related to perfect graphs. β-Perfect graphs are a particular class of graphs that can be polynomially colored, by coloring greedily on a particular, easily constructable, ordering of vertices. 11 November 2008, 16:00, MS.05 Approximability of Combinatorial Exchange Problems In a combinatorial exchange the goal is to find a feasible trade between potential buyers and sellers requesting and offering bundles of indivisible goods. We investigate the approximability of several optimization objectves in this setting and show that the problems of surplus and trade volume maximization are inapproximable even with free disposal and even if each agent's bundle is of size at most 3. In light of the negative results for surplus maximization we consider the complementary goal of social cost minimization and present tight approximation results for this scenario. Considering the more general supply chain problem, in which each agent can be a seller and buyer simultaneously, we prove that social cost minimization remains inapproximable even with bundles of size 3, yet becomes polynomial time solvable for agents trading bundles of size 1 or 2. This yields a complete characterization of the approximability of supply chain and combinatorial exchange problems based on the size of traded bundles. We finally briefly address the problem of exchanges in strategic settings. This is joint work with Moshe Babaioff and Patrick Briest. 4 November 2008, 16:00, CS1.01 Sensor Networks and Random Intersection Graphs A uniform random intersection graph G(n,m,k) is a random graph constructed as follows. Label each of n nodes by a randomly chosen set of k distinct colours taken from some finite set of possible colours of size m. Nodes are joined by an edge if and only if some colour appears in both their labels. These graphs arise in the study of the security of wireless sensor networks. Such graphs arise in particular when modelling the network graph of the well known key predistribution technique due to Eschenauer and Gligor. In this talk we consider the threshold for connectivity and the appearance of a giant component of the graph G(n,m,k) when n → ∞ with k a function of n such that k≥2 and m=⌊n^α⌋ for some fixed positive real number α. 28 October 2008, 16:00, MS.05 Throughput Optimization in Two-Machine Flowshops with Flexible Operations We discuss the following scheduling problem. A two-machine flowshop produces identical parts. Each of the parts is assumed to require a number of manufacturing operations, and the machines are assumed to be flexible enough to perform different operations. Due to economical or technological constraints, some specific operations are preassigned to one of the machines. The remaining operations, called flexible operations, can be performed on either one of the machines, so that the same flexible operation can be performed on different machines for different parts. The problem is to determine the assignment of the flexible operations to the machines for each part, with the objective of maximizing the throughput rate. We consider various cases depending on the number of parts to be produced and the capacity of the buffer between the machines. Along the way, we underline the fact that this problem belongs to the class of "high multiplicity scheduling problem": for this class of problems, the input data can be described in a very compact way due to the fact that the jobs fall into a small number of distinct job types, where all the jobs of a same type possess the same attribute values. High multiplicity scheduling problems have been recently investigated by several researchers who have observed that the complexity of such problems must be analyzed with special care. Joint work with Hakan Gültekin. 21 October 2008, 16:00, MS.05 Tricky Problems for Graphs of Bounded Treewidth In this talk I will consider computational problems that (A) can be solved in polynomial time for graphs of bounded treewidth and (B) where the order of the polynomial time bound is expected to depend on the treewidth of the instance. Among the considered problems are coloring problems, factor problems, orientation problems and satisfiability problems. I will present an algorithmic meta-theorem (an extension of Courcelle's Theorem) that provides a convenient way for establishing (A) for some of the considered problems and I will explain how concepts from parameterized complexity theory can be used to establish (B). 15 October 2008, 16:00, CS1.04 The External Network Problem and the Source Location Problem The connectivity of a communications network can sometimes be enhanced if the nodes are able, at some expense, to form links using an external network. Using graphs, the following is a possible way to model such situations. Let G = (V,E) be a graph. A subset X of vertices in V may be chosen, the so-called external vertices. An internal path is a normal path in G; an external path is a pair of internal paths P1 = v1 · · · vs, P2 = w1 · · · wt in G such that vs and w1 are from the chosen set X of external vertices. (The idea is that v1 can communicate with wt along this path using an external link from vs to w1.) For digraphs we use similar vocabulary, but then using directed paths. Next suppose a certain desired connectivity for the network is prescribed in terms of edge, vertex or arc-connectivity. Say that for a given k there need to be at least k edge- (or vertex- or arc-) disjoint paths (which can be internal or external) between any pair of vertices. What is the smallest set X of external vertices such that this can be achieved? A related problem is the Source Location Problem : In this we need to find, given a graph or digraph and a required connectivity requirement, a subset S of the vertices such that from each vertex in the graph there are at least the required number of edge- (or vertex- or arc-) disjoint paths between any vertex and the set S. And again, the goal is to minimise the number of vertices in S. It seems clear that the External Network Problem and the Source Location Problem are closely related. In this talk we discuss these relations, and also show some instances where the problems behave quite differently. Some recent results on the complexity of the different types of problems will be presented as well. Joint work with Matthew Johnson (University of Durham) 7 October 2008, 16:00, MS.05 Extremal Graph Theory with Colours We consider graphs whose edges are painted with two colours (some edges might get both colours), and ask, given a fixed such graph H, how large another such graph G can be if it has many vertices but doesn't contain a copy of H. The notion of largeness here is the total weight of G if red edges are given weight p and blue edges weight q, with p+q=1. This question arises in the applications of Szemerédi's Regularity Lemma, in particular to Ramsey-type games, and to the study of edit distance and property testing. We shall describe an approach to answering this question and to settling some outstanding issues concerning edit distance. 30 September 2008, 16:00, MS.05 Machine-Job Assignment Problems with Separable Convex Costs and Match-up Scheduling with Controllable Processing Times In this talk, we discuss scheduling and rescheduling problems with controllable job processing times. We start with the optimal allocation of a set of jobs on a set of parallel machines with given machining time capacities. Each job may have different processing times and different convex compression costs on different machines. We consider finding the optimal machine-job allocation and compression level decisions for the total cost. We propose a strengthened conic quadratic formulation for this problem. We provide theoretical and computational results that prove the quality of the proposed reformulation. We next shortly discuss match-up scheduling approach in the same scheduling environment with an initial schedule subject to a machine breakdown. We show how processing time controllability provides flexibility in rescheduling decisions. 22 September 2008, 11:00, B1.01 Discounted Deterministic Markov Decision Processes We present an O(mn)-time algorithm for finding optimal strategies for discounted, infinite-horizon, Deterministic Markov Decision Processes (DMDP), where n is the number of vertices (or states) and m is the number of edges (or actions). This improves a recent O(mn^2)-time algorithm of Andersson and Vorobyov. Joint work with Omid Madani and Mikkel Thorup. 24 June 2008, 16:00, MS.04 On a Disparity Between Relative Cliquewidth and Relative NLC-width Cliquewidth and NLC-width are two closely related parameters that measure the complexity of graphs. Both clique- and NLC-width are defined to be the minimum number of labels required to create a labelled graph by certain terms of operations. Many hard problems on graphs become solvable in polynomial time if the inputs are restricted to graphs of bounded clique- or NLC-width. Cliquewidth and NLC-width differ by a factor of at most two. The relative counterparts of these parameters are defined to be the minimum number of labels necessary to create a graph while the tree-structure of the term is fixed. We show that the problems Relative Cliquewidth and Relative NLC-width differ significantly in computational complexity: the former is NP-complete, the latter is solvable in polynomial time. Additionally, our technique enables combinatorial characterisations of clique- and NLC-width that avoid the usual operations on labelled graphs. 17 June 2008, 16:00, MS.04 Colouring Random Geometric Graphs The random geometric graph G(n,r) is constructed by picking n vertices X1, · · · , Xn ∈ [0,1] ^d i.i.d. uniformly at random and adding an edge (Xi, Xj) if ‖X1-X2‖ < r. I will discuss the behaviour of the chromatic number of G(n,r) and some related random variables when n tends to infinity and r=r(n) is allowed to vary with n. 10 June 2008, 16:00, MS.04 A Clique-Based Integer Programming Formulation of Graph Colouring and Applications In this talk, we first give an overview of several exact approaches to graph colouring: known integer linear programming formulations, approaches based on SDP relaxations, as well as selected approaches not using any mathematical programming. Second, we introduce an integer programming formulation of graph colouring based on reversible clique partition or, seen another way, optimal polynomial-time transformation of graph colouring to multicolouring. The talk will be concluded by discussion of some specifics of applications of graph colouring in timetabling applications and utility of the proposed formulation in this context. 4 June 2008, 16:00, MS.04 Fast-Converging Tatonnement Algorithms for One-Time and Ongoing Market Problems Why might markets tend toward and remain near equilibrium prices? In an effort to shed light on this question from an algorithmic perspective, we formalize the setting of Ongoing Markets, by contrast with the classic market scenario, which we term One-Time Markets. The Ongoing Market allows trade at non-equilibrium prices, and, as its name suggests, continues over time. As such, it appears to be a more plausible model of actual markets. For both market settings, we define and analyze variants of a simple tatonnement algorithm that differs from previous algorithms that have been subject to asymptotic analysis in three significant respects: the price update for a good depends only on the price, demand, and supply for that good, and on no other information; the price update for each good occurs distributively and asynchronously; the algorithms work (and the analyses hold) from an arbitrary starting point. We show that this update rule leads to fast convergence toward equilibrium prices in a broad class of markets that satisfy the weak gross substitutes property. Our analysis identifies three parameters characterizing the markets, which govern the rate of convergence of our protocols. These parameters are, broadly speaking: • A bound on the fractional rate of change of demand for each good with respect to fractional changes in its price. • A bound on the fractional rate of change of demand for each good with respect to fractional changes in wealth. • The closeness of the market to a Fisher market (a market with buyers starting with money alone). Joint work with Lisa Fleischer. 29 May 2008, 12:00, CS1.01 The Multi-Armed Bandit Meets the Web Surfer The multi-armed bandit paradigm has been studied extensively for over 50 years in Operations Research, Economics and Computer Science literature, modeling online decisions under uncertainty in a setting in which an agent simultaneously attempts to acquire new knowledge and to optimize its decisions based on the existing knowledge. In this talk I'll discuss several new results motivated by web applications, such as content matching (matching advertising to page contest and user's profile) and efficient web crawling. 20 May 2008, 16:00, MS.04 Optimizing Communication Networks: Incentives, Welfare Maximization an1245210 Multipath Transfers We discuss control strategies for communication networks such as the Internet or wireless multihop networks, where the aim is to optimize network performance. We describe how welfare maximization can be used as a paradigm for network resource allocation, and can also be used to derive rate-control algorithms. We apply this to the case of multipath data transfers. We show that welfare maximization requires active balancing across paths by data sources, which can be done provided the underlying "network layer" is able to expose the marginal congestion cost of network paths to the "transport layer". When users are allowed to change the set of paths they use, we show that for large systems the path-set choices correspond to Nash equilibria, and give conditions under which the resultant Nash equilibria correspond to welfare maximising states. Moreover, simple path selection policies that shift to paths with higher net benefit can find these states. We conclude by commenting on incentives, pricing and open problems. 13 May 2008, 16:00, MS.04 Unsplittable Shortest Path Routing: Hardness and Approximation Algorithms Most data networks nowadays employ shortest path routing protocols to control the flow of data packets. With these protocols, all end-to-end traffic streams are routed along shortest paths with respect to some administrative routing weights. Dimensioning and routing planning are extremely difficult for such networks, because end-to-end routing paths cannot be configured individually but only jointly and indirectly via the routing weights. Additional difficulties arise if the communication demands must be sent unsplit through the network - a requirement that is often imposed to ensure tractability of end-to-end traffic flows and to prevent package reordering in practice. In this case, the lengths must be chosen such that the shortest paths are uniquely determined for all communication demands. In this talk we consider the minimum congestion unsplittable shortest path routing problem MinConUSPR: Given a capacitated digraph D = (V, A) and traffic demands d(s,t), (s, t) ∈ K ⊆ V^2, the task is to find routing weights λa ∈ ℤ+, a ∈ A, such that the shortest (s, t)-path is unique for each (s, t) ∈ K and the maximum arc congestion (induced flow/capacity ratio) is minimal. We discuss the approximability of this problem and the relation between the unsplittable shortest path routing paradigm and other routing schemes. We show that MinConUSPR is NP-hard to approximate within a factor of O(|V|^1-ε), but approximable within min(|A|, |K|) in general. For the special cases where the underlying graph is an undirected cycle or a bidirected ring, we present constant factor approximation algorithms. We also construct problem instances where the minimum congestion that can be obtained by unsplittable shortest path routing is a factor of Ω(|V|^2) larger than that achievable by unsplittable flow routing or shortest multi-path routing, and a factor of Ω(|V|) larger than that achievable by unsplittable source-invariant routing. 6 May 2008, 16:00, MS.04 Diophantine Problems Arising in the Theory of Monlinear Waves I will consider waves in finite domains, for example water waves in a swimming pool. For such waves to exchange energy among different modes, these modes must be in resonance in both frequency and wavenumber, which leads to a set of nontrivial conditions/equations in integers. I will discuss some known results and open questions about the solutions of these Diophantine equations. 29 April 2008, 16:00, MS.04 Controlling Epidemic Spread on Networks The observation that many natural and engineered networks exhibit scale-free degree distributions, and that such networks are particularly prone to epidemic outbreaks, raises questions about how best to control epidemic spread on such networks. We will begin with some background on threshold phenomena for epidemics, describing how the minimum infection rate needed to sustain long-lived epidemics is related to the cure rate and the topology of the graph. In particular, we will see that, on scale-free networks, this minimum infection rate tends to zero as the network grows large. We will describe two ways of tackling this problem. One involves allocating curing or monitoring resources non-uniformly, favouring high-degree nodes. Another mechanism, called contact tracing, involves identifying and curing neighbours (or contacts) of infected nodes, and is used in practice. We will present some mathematical results for both these mechanisms. Joint work with Borgs, Chayes, Saberi and Wilson. 22 April 2008, 16:00, MS.04 Strategic Characterization of the Index of an Equilibrium The index of a Nash equilibrium is an integer that is related to notions of ``stability'' of the equilibrium, for example under dynamics that have equilibria as rest points. The index is a relatively complicated topological notion, essentially a geometric orientation of the equilibrium. We prove the following theorem, first conjectured by Hofbauer (2003), which characterizes the index in much simpler strategic terms: Theorem. A generic bimatrix game has index +1 if and only if it can be made the unique equilibrium of an extended game with additional strategies of one player. In an mxn game, it suffices to add 3m strategies of the column player. The main tool to prove this theorem is a novel geometric-combinatorial method that we call the "dual construction," which we think is of interest of its own. It allows to visualize all equilibria of an mxn game in a diagram of dimension m-1. For example, all equilibria of a 3xn game are visualized with a diagram (essentially, of suitably connected n+3 points) in the plane. This should provide new insights into the geometry of Nash equilibria. Joint work with Arndt von Schemde. 11 March 2008, 16:00, MS.05 On Recent Progress on Computing Approximate Nash Equilibria In view of the apparent computational difficulty of computing a Nash equilibrium of a game, recent work has addressed the computation of "approximate Nash equilibrium". The standard criterion of "no incentive for any player to change his strategy" is here replaced by "small incentive for any player to change his strategy". The question is, how small an incentive can we aim for, before once again we have a computationally difficult problem? In this talk, I explain in detail what is meant by Nash equilibrium and approximate Nash equilibrium, and give an overview of some recent results. 7 March 2008, 15:00, MS.02 Online Minimum Makespan Scheduling with Reordering In the classic minimum makespan scheduling problem, we are given an input sequence of jobs with processing times. A scheduling algorithm has to assign the jobs to m parallel machines. The objective is to minimize the makespan, which is the time it takes until all jobs are processed. We consider online scheduling algorithms without preemption. Much effort has been made to narrow the gap between upper and lower bounds on the competitive ratio of this problem. A quite good and easy approximation can be achieved by sorting the jobs according to their size and than assigning them greedily to machines. However, the complete sorting of the input sequence contradicts the notion of an online algorithm. Therefore, we propose a new method to somewhat reorder the input in an online manner. Although our proofs are technically surprisingly simple, we give, for any number of identical machines, tight and significantly improved bounds on the competitive ratio for this new method. This talk is based on joint work with Deniz Özmen and Matthias Westermann 4 March 2008, 16:00, MS.05 Balloon Popping with Applications to Ascending Auctions We study the power of ascending auctions in a scenario in which a seller is selling a collection of identical items to anonymous unit-demand bidders. We show that even with full knowledge of the set of bidders' private valuations for the items, if the bidders are ex-ante identical, no ascending auction can extract more than a constant times the revenue of the best fixed price scheme. This problem is equivalent to the problem of coming up with an optimal strategy for blowing up indistinguishable balloons with known capacities in order to maximize the amount of contained air. We show that the algorithm which simply inflates all balloons to a fixed volume is close to optimal in this setting. Joint work with Anna Karlin, Mohammad Mahdian, and Kunal Talwar 26 February 2008, 16:00, MS.05 Randomly colouring a random graph We will consider generating a random vertex colouring of a graph using a Markov chain. The aim is to determine the smallest number of colours, measured as a multiple of the maximum degree, for which the chain converges in time polynomial in the number of vertices. We will examine this problem for the case of an Erdős-Rényi random graph. This has been studied previously for graphs with small edge density. However, the methods used in the sparse case do not seem applicable as the edge density becomes larger. We will describe a different approach, which can be used to obtain results for denser 12 February 2008, 16:00, MS.05 Game theoretic Approach to Hedging of Rainbow Options We suggest a game theoretic framework (so called now 'interval model') for the analysis of option prices in financial mathematics. This leads to remarkable explicit formulas and effective numeric procedures for rainbow (or coloured) options hedging. 5 February 2008, 16:00, MS.05 Short-length routes in low-cost networks via Poisson line patterns How efficiently can one move about in a network linking a configuration of n cities? Here the notion of "efficient" has to balance (a) total network length against (b) short network distances between cities. My talk will explain how to use Poisson line processes to produce networks which are nearly of shortest total length, which make the average inter-city distance almost Euclidean. This is joint work with David Aldous. 29 January 2008, 16:00, MS.05 Clustering for Metric and Non-Metric Distance Measures We study a generalization of the k-median problem with respect to an arbitrary dissimilarity measure D. Given a finite point set P, our goal is to find a set C of size k such that the sum of errors D (P,C) = Σ{p in P} min{c in C} {D(p,c)} is minimized. The main result in this talk can be stated as follows: There exists an O(n 2^(k/ε)^O(1)) time (1+ε)-approximation algorithm for the k-median problem with respect to D, if the 1-median problem can be approximated within a factor of (1+ε) by taking a random sample of constant size and solving the 1-median problem on the sample exactly. Using this characterization we obtain the first linear time (1+ε)-approximation algorithms for the k-median problem for doubling metrics, for the Kullback-Leibler divergence, Mahalanobis distances and some special cases of Bregman divergences. 22 January 2008, 16:00, MS.05 The Travelling Salesman Problem: Are there any Open Problems left? Part II. We consider the notorious travelling salesman problem (TSP) and such seemingly well-studied algorithms as the Held & Karp dynamic programming algorithm, the double-tree and Christofides heuristics, as well as algorithms for finding optimal pyramidal tours. We show how the ideas from these classical algorithms can be combined to obtain one of the best tour-constructing TSP heuristics. The main message of our presentation: despite the fact that the TSP has been investigated for decades, and thousands(!) of papers have been published on the topic, there are still plenty of exciting open problems, some of which (we strongly believe) are not that difficult to resolve. 15 January 2008, 16:00, MS.05 The Travelling Salesman Problem: Are there any Open Problems left? Part I. We consider the notorious travelling salesman problem (TSP) and such seemingly well-studied algorithms as the Held & Karp dynamic programming algorithm, the double-tree and Christofides heuristics, as well as algorithms for finding optimal pyramidal tours. We show how the ideas from these classical algorithms can be combined to obtain one of the best tour-constructing TSP heuristics. The main message of our presentation: despite the fact that the TSP has been investigated for decades, and thousands(!) of papers have been published on the topic, there are still plenty of exciting open problems, some of which (we strongly believe) are not that difficult to resolve. 11 December 2007, 16:00, CS1.04 Improving Integer Programming Solvers: {0,½}-Chvátal-Gomory Cuts Chvátal-Gomory cuts are among the most well-known classes of cutting planes for general integer linear programs (ILPs). In case the constraint multipliers are either 0 or ½, such cuts are known as {0,½}-cuts. This talk reports on our study to separate general {0,½}-cuts effectively, despite its NP-completeness. We propose a range of preprocessing rules to reduce the size of the separation problem. The core of the preprocessing builds a Gaussian elimination-like procedure. To separate the most violated {0,½} -cut, we formulate the (reduced) problem as an integer linear program. Some simple heuristic separation routines complete the algorithmic framework. Computational experiments on benchmark instances show that the combination of preprocessing with exact and/or heuristic separation is a very vital idea to generate strong generic cutting planes for integer linear programs and to reduce the overall computation times of state-of-the-art ILP-solvers. 4 December 2007, 16:00, CS1.04 Strategic Game Playing and Equilibrium Refinements Koller, Megiddo and von Stengel showed in 1994 how to efficiently find minimax behavior strategies of two-player imperfect information zero-sum extensive games using linear programming. Their algorithm has been widely used by the AI-community to solve very large games, in particular variants of poker. However, it is a well known fact of game theory that the Nash equilibrium concept has serious deficiencies as a prescriptive solution concept, even for the case of zero-sum games where Nash equilibria are simply pairs of minimax strategies. That these deficiencies show up in practice in the AI-applications was documented by Koller and Pfeffer. In this talk, we argue that the theory of equilibrium refinements of game theory provides a satisfactory framework for repairing the deficiencies, also for the AI-applications. We describe a variant of the Koller, Megiddo and von Stengel algorithm that computes a *quasi-perfect* equilibrium and another variant that computes all *normal-form proper* equilibria. Also, we present a simple yet non-trivial characterization of the normal form proper equilibria of a two-player zero-sum game with perfect information. The talk is based on joint papers with Troels Bjerre Sørensen, appearing at SODA'06 and SODA'08. 27 November 2007, 16:00, CS1.04 An Approximation Trichotomy for Boolean #CSP This talk examines the computational complexity of approximating the number of solutions to a Boolean constraint satisfaction problem (CSP). It extends a line of investigation started in a classical paper of Schaefer on the complexity of the decision problem for CSPs, and continued by Creignou and Hermann, who addressed exact counting. We find that the class of Boolean CSPs exhibits a trichotomy. Depending on the set of allowed relations, the CSP may be polynomial-time solvable (even exactly); or the number of solutions may be as hard to approximate as the number of accepting computations of a non-deterministic Turing machine. But there is a third possibility: approximating the number of solutions may be complete for a certain logically defined complexity class, and hence equivalent in complexity to a number of natural approximate counting problems, of which independent sets in a bipartite graph is an example. All the necessary background material on approximate counting and CSPs will be covered. This is joint work with Martin Dyer (Leeds) and Leslie Ann Goldberg (Liverpool). 20 November 2007, 16:00, CS1.04 From Tree-width to Clique-width: Excluding a Unit Interval Graph From the theory of graph minors we know that the class of planar graphs is the only critical class with respect to tree-width. In this talk, we reveal a critical class with respect to clique-width, a notion generalizing tree-width. This class is known in the literature under different names, such as unit interval, proper interval or indifference graphs, and has important applications in various fields. We prove that the unit interval graphs constitute a minimal hereditary class of unbounded clique-width and discuss some applications of the obtained result. 13 November 2007, 16:00, CS1.04 Cut-based Inequalities for Capacitated Network Design Problems In this talk we study basic network design problems with modular capacity assignment. The focus is on classes of strong valid inequalities that are defined for cuts of the underlying network. We show that regardless of the link model, facets of the polyhedra associated with such a cut translate to facets of the original network design polyhedra if the two subgraphs defined by the network cut are (strongly) connected. Accordingly, we focus on the facial structure of the cutset polyhedra. A large class of facets of cutset polyhedra is defined by flow-cutset inequalities. These inequalities are presented in a unifying way for different model variants. We propose a generic separation algorithm and in an extensive computational study on 54 instances from the survivable network design library (SNDlib), we show that the performance of CPLEX can significantly be enhanced by this class of cutting planes. 6 November 2007, 16:00, CS1.04 Submodular Percolation and the Worm Order Suppose we have a grid mxn, two real-valued functions f and g on [m] and [n] respectively, and a threshold value b. We declare a point (i,j) in the grid to be open if f(i) + g(j) is at most b. It turns out that, if there is a path of open points from the bottom point (1,1) to the top point (m,n), then there is a monotone path of open points, i.e., if we can get from bottom to top at all, then we can do so without having to retreat. This is an instance of the following much more general result. Let f be a submodular function on a modular lattice L; we show that there is a maximal chain C in L on which the sequence of values of f is minimal among all paths from 0 to 1 in the Hasse diagram of L, in a certain well-behaved partial order -- the "worm order" -- on sequences of reals. One consequence is that the maximum value of f on C is minimised over all such paths. We discuss the worm order, relate our work to that on searching a graph, and mention some sample applications. Joint work with Peter Winkler. 30 October 2007, 16:00, CS1.04 Ant Colony Optimization for Vehicle Routing In this talk I will first present the basic working principles of Ant Colony Optimization (ACO) and then explain some of our work on applying ACO to vehicle routing problems. The main idea in this context is to exploit problem knowledge to improve the performance of the general purpose heuristic ACO. 23 October 2007, 16:00, CS1.04 Parameterized Algorithms for Directed Maximum Leaf Problems We prove that finding a rooted subtree with at least k leaves in a digraph is a fixed parameter tractable problem. A similar result holds for finding rooted spanning trees with many leaves in digraphs from a wide family L that includes all strong and acyclic digraphs. This settles completely an open question of Fellows and solves another one for digraphs in L. We also prove the following combinatorial result which can be viewed as a generalization of many results for a "spanning tree with many leaves" in the undirected case, and which is interesting on its own: If a digraph D ∈ L of order n with minimum in-degree at least 3 contains a rooted spanning tree, then D contains one with at least (n/4)^1/3-1 leaves. Joint work with Noga Alon, Fedor Fomin, Michael Krivelevich and Saket Saurabh 16 October 2007, 16:00, CS1.04 Hierarchical Graph Decompositions for Minimizing Congestion An oblivious routing protocol makes its routing decisions independent of the traffic in the underlying network. This means that the path chosen for a routing request may only depend on its source node, its destination node, and on some random input. In spite of these serious limitations it has been shown that there are oblivious routing algorithms that obtain a polylogarithmic competitive ratios w.r.t. the congestion in the network (i.e., maximum load of a network link). In this talk I will present a new hierarchical decomposition technique that leads to good oblivious routing algorithms. This decomposition can also be used as a generic tool for solving a large variety of cut based problems in undirected graphs. 9 October 2007, 16:00, CS1.04 Overhang Bounds How far can a stack of n identical blocks be made to hang over the edge of a table? The question dates back to at least the middle of the 19th century and the answer to it was widely believed to be of order log n. Recently, we (Paterson and Zwick) constructed n-block stacks with overhangs of order n^1/3, exponentially better than previously thought possible. The latest news is that we (Paterson, Peres, Thorup, Winkler and Zwick) can show that order n^1/3 is best possible, resolving the long-standing overhang problem up to a constant factor. I shall review the construction and describe the upper bound proof, which illustrates how methods founded in algorithmic complexity can be applied to a discrete optimization problem that has puzzled some mathematicians and physicists for more than 150 years. 19 July 2007, 11:00, CS1.01 Connecting Bits with Floating-Point Numbers: Model Checking and Theorem Proving in Practice High performance floating point hardware is notoriously difficult to design correctly. In fact, until recently, the most expensive hardware bug encountered in industry, was the Intel FDIV bug in 1994 that lead to a charge of 450 million dollars against earnings. At the same time, floating point arithmetic is unique in that there is a fairly unambiguous specification, the IEEE 754 standard, which all implementations should obey. Unfortunately, bridging the gap between the mathematical specification of the standard and a low-level implementation is an extremely difficult task. At Intel we have developed an approach that combines the strengths of theorem proving with symbolic trajectory evaluation (a type of model checking), that allows us to completely verify the correctness of todayÇs floating point units. In this talk I will discuss the evolution of this approach and discuss some of the results that have been obtained in addition to areas of future research. 28 June 2007, 11:00, B3.02 Tree-width and Optimization in Bounded Degree Graphs It is well known that boundedness of tree-width implies polynomial-time solvability of many algorithmic graph problems. The converse statement is generally not true, i.e., polynomial-time solvability does not necessarily imply boundedness of tree-width. However, in graphs of bounded vertex degree, for some problems, the two concepts behave in a more consistent way. We study this phenomenon with respect to three important graph problems -- dominating set, independent dominating set and induced matching -- and obtain several results toward revealing the equivalency between boundedness of the tree-width and polynomial-time solvability of these problems in bounded degree graphs. Joint work with Martin Milanic. 29 May 2007, 12:00, Room CS1.01 A Survey of Short PCP Constructions The PCP theorem [AS, ALMSS] specifies a way of encoding any mathematical proof such that the validity of the proof can be verified probabilistically by querying the proof only at a constant number of locations. Furthermore, this encoding is complete and sound, i.e., valid proofs are always accepted while invalid proofs are accepted with probability at most 1/2. A natural question that arises in the construction of PCPs is by how much does this encoding blow up the original proof while retaining low query complexity. The study of efficient probabilistic checkable proofs was initiated in the works of Babai et. al. [BFLS] and Feige at. al. [FGLSS] with very different motivation and emphases. The work of Babai et. al. considered the direct motivation of verifying proofs highly efficiently. In contrast, Feige et. al. established a dramatic connection between efficient probabilistically checkable proofs (PCPs) and the inapproximability of optimization problems. In this talk, I'll review some of the recent works of Ben-Sasson et. al. [BGHSV'04, BGHSV'05], Dinur-Reingold [DR], Ben-Sasson and Sudan [BS'04] and Dinur [Din'06] that have led to the construction of PCPs that are at most a polylogarithmic factor longer than the original proof and furthermore, are checkable by a verifier running in time at most polylogarithmic in the length of the original An important ingredient in these constructions is a a new variant of PCPs called "PCPs of Proximity''. These new PCPs facilitate proof composition, a crucial component in PCP construction. I'll outline how these new PCPs allow for shorter PCP constructions and efficient proof verification 3 May 2007, 15:30, B2.13 (WBS Scarman Road Meeting Room) Solved and Unsolved Optimization Challenges in Telecommunications We give an overview of the optimization challenges addressed in recent years ranging from frequency assignment in mobile networks to multi-layer network design in backbone networks. We discuss the practical problems, the mathematical modelling, the methodologies used to tackle these problems, and the results achieved.
{"url":"http://www2.warwick.ac.uk/fac/cross_fac/dimap/seminars/index.html","timestamp":"2014-04-19T23:04:58Z","content_type":null,"content_length":"502650","record_id":"<urn:uuid:cb3f0568-9cac-4ef1-b648-ab20da9c3120>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00508-ip-10-147-4-33.ec2.internal.warc.gz"}
\documentclass[12pt]{article} \usepackage{html} \usepackage{epsf} \tolerance=11000 \parindent=0pt \parskip=5pt \voffset=-3cm \hoffset=-1cm \addtolength{\textheight}{5.5cm} \addtolength{\textwidth} {3cm} \begin{document} \title{MATTERS OF GRAVITY, The newsletter of the APS Topical Group on Gravitation} \begin{center} { \Large {\bf MATTERS OF GRAVITY}}\\ \bigskip \hrule \medskip {The newsletter of the Topical Group on Gravitation of the American Physical Society}\\ \medskip {\bf Number 28 \hfill Fall 2006} \end{center} \begin{flushleft} \tableofcontents \vfill \section*{\noindent Editor\ hfill} David Garfinkle\\ \smallskip Department of Physics Oakland University Rochester, MI 48309\\ Phone: (248) 370-3411\\ Internet: \htmladdnormallink{\protect {\tt{garfinkl-at-oakland.edu}}} {mailto:garfinkl@oakland.edu}\\ WWW: \htmladdnormallink {\protect {\tt{http://www.oakland.edu/physics/physics\textunderscore people/faculty/Garfinkle.htm}}} {http://www.oakland.edu/physics/ physics_people/faculty/Garfinkle.htm}\\ \section*{\noindent Associate Editor\hfill} Greg Comer\\ \smallskip Department of Physics and Center for Fluids at All Scales,\\ St. Louis University, St. Louis, MO 63103\\ Phone: (314) 977-8432\\ Internet: \htmladdnormallink{\protect {\tt{comergl-at-slu.edu}}} {mailto:comergl@slu.edu}\\ WWW: \htmladdnormallink{\protect {\tt{http://www.slu.edu/colleges /AS/physics/profs/comer.html}}} {http://www.slu.edu//colleges/AS/physics/profs/comer.html}\\ \bigskip \hfill ISSN: 1527-3431 \begin{rawhtml} \end{rawhtml} %{\bf \Large Contents:} \end{flushleft} \pagebreak \section*{Editorial} It is hard to imagine Matters of Gravity without Jorge Pullin as editor, and yet here it is. Greg Comer and I are new at this and we could use your help. In particular, if you have ideas for topics that should be covered by the newsletter, please email us and/or the relevant correspondent. Any comments/questions /complaints about the newsletter should be emailed to me. The next newsletter is due February 1st. This and all subsequent issues will be available on the web at \htmladdnormallink {\protect {\tt {http://www.oakland.edu/physics/Gravity.htm}}} {http://www.oakland.edu/physics/Gravity.htm} All previous issues are available at \htmladdnormallink {\protect {\tt {http://www.phys.lsu.edu/mog}}} {http://www.phys.lsu.edu/mog} A hardcopy of the newsletter is distributed free of charge to the members of the APS Topical Group on Gravitation upon request (the default distribution form is via the web) to the secretary of the Topical Group. It is considered a lack of etiquette to ask me to mail you hard copies of the newsletter unless you have exhausted all your resources to get your copy otherwise. \hfill David Garfinkle \bigbreak \vspace{-0.8cm} \parskip=0pt \section*{Correspondents of Matters of Gravity} \begin{itemize} \setlength{\itemsep}{-5pt} \setlength{\parsep}{0pt} \item John Friedman and Kip Thorne: Relativistic Astrophysics, \item Bei-Lok Hu: Quantum Cosmology and Related Topics \item Gary Horowitz: Interface with Mathematical High Energy Physics and String Theory \item Beverly Berger: News from NSF \item Richard Matzner: Numerical Relativity \item Abhay Ashtekar and Ted Newman: Mathematical Relativity \item Bernie Schutz: News From Europe \item Lee Smolin: Quantum Gravity \item Cliff Will: Confrontation of Theory with Experiment \item Peter Bender: Space Experiments \item Jens Gundlach: Laboratory Experiments \item Warren Johnson: Resonant Mass Gravitational Wave Detectors \item David Shoemaker: LIGO Project \item Peter Saulson and Jorge Pullin: former editors, correspondents at large. \end{itemize} \section*{Topical Group in Gravitation (GGR) Authorities} Chair: \'{E}anna Flanagan; Chair-Elect: Dieter Brill; Vice-Chair: David Garfinkle. Secretary-Treasurer: Vern Sandberg; Past Chair: Jorge Pullin; Delegates: Bei-Lok Hu, Sean Carroll, Vicky Kalogera, Steve Penn, Alessandra Buonanno, Bob Wagoner. \parskip=10pt \vfill \eject \section*{\centerline {Singularity avoidance in canonical quantum gravity}} \addtocontents{toc}{\protect\ medskip} \addtocontents{toc}{\bf Research Briefs:} \addcontentsline{toc}{subsubsection}{ Singularity Avoidance in Canonical Quantum Gravity, by Viqar Husain} \parskip=3pt \begin{center} Viqar Husain,University of New Brunswick \htmladdnormallink{vhusain-at-unb.ca} {mailto:vhusain@unb.ca} \end{center} {\bf Singularity Avoidance in Canonical Quantum Gravity} \bigskip The question of whether quantum gravity has something to say about spacetime curvature singularities has a long history, from the early investigations in the 1960's, to more quantitative work on symmetry reduced models in the 1970's using the Wheeler-DeWitt quantization. Some examples of this include work by Misner [1], and by Blythe and Isham [2] on Friedmann-Robertson-Walker models with a scalar field. After a lull of several years, the canonical quantum gravity programme was revitalised by the Ashtekar's triad-connection phase space variables for general relativity. The variables naturally represented as operators in this so-called loop quantum gravity approach is the densitized inverse triad, and the holonomy of its conjugate connection variable. The first significant development concerning singularity avoidance in the loop quantum gravity (LQG) programme occurred inadvertantly, in an attempt to define a regularized Hamiltonian constraint operator. This was the construction of a triad operator by Thiemann [3]. It was realized a few years later by Bojowald [4] that the algebraic relation used to define this operator in the full theory could also be used in quantum cosmology to define an ``inverse scale factor'' operator. The ``source'' of singularity avoidance in the symmetry reduced models studied so far in ``loop quantum cosmology'' (LQC) is that the inverse scale factor operator is bounded, at least in the isotropic models. The basic mechanism used in defining the inverse scale factor operator in LQC may be illustrated in a mechanical system. Consider a 2-dimensional phase space $(x,p)$ with planar topology, and consider a quantization on a spatial equi-spaced lattice with spacing $a$. There is a quantization and a basis such that the translation $e ^{ia p}$ is realized as a shift operator, and the position $x$ is diagonal. An inverse $x$ operator can now be defined by starting with the Poisson bracket identity $$ {1\over|x|} = -{4\over a^2} \ left( e^{-iap} \left\{e^{iap},\sqrt{|x|}\right\}\right)^2, $$ and writing an operator expression for the righthand side. Using the expressions for the translation and position operators, it is evident that this inverse position operator is diagonal in the position basis, and is bounded. This is the essence of the singularity avoidance result in LQC. The Poisson bracket ``trick'' used for the singularity avoidance result is applicable in other models for quantum gravity outside the LQG context; it does not depend on the use of the connection-traid variables which are at the basis of LQG, but rather on the choice of representation used for quantization. This observation was exploited by Husain and Winkler to revisit the quantization of models systems formulated using the ADM variables. This has led to singularity avoidance results for FRW models which are qualitatively similar to those found in LQC, and also for the gravitational collapse problem in spherical symmetry [5]. There are two aspects of the results of singularity avoidance. The first is kinematical in the sense that the operators corresponding to the inverse triad in the model systems is bounded. The second is dynamical in that Hamiltonian evolution is well defined and unitary beyond the point of the singularity -- that is evolution does not terminate there. This feature of evolution in LQG based models arises due to its innate lattice structure: the Hamiltonian constraint acts discretely so that the Hamiltonian constraint condition is a difference equation rather than a differential equation as in Wheeler-DeWitt quantization. For regions away from the classical singularity, the difference equation behaves merely like a discretisation of the corresponding Wheeler-DeWitt equation, but for regions close to the singularity this is not the case, due partly to the incorporation of the inverse scale factor operator in the difference equation. This second aspect was demonstrated explicitly by Ashtekar, Singh and Pawloski in an FRW model coupled to a massless scalar field [6]. By considering evolution using the scalar field as a time variable, they showed using a numerical computation that a wave packet loses coherence as it evolves toward the classically singular region, and after a bounce begins to regain coherence as it evolves away from this region. Beyond model systems, there has been a detailed investigation by Brunnemann and Thiemann of triad operators in full LQG [7]. One central result here is that such operators are not bounded above in the full theory in a strict sense -- there are certain classes of states in the kinematical Hilbert space of LQG that lead to the result. While the inverse scale factor in anisotropic LQC models is also unbounded, its eigenvalues on zero volume eigenstates is bounded. In full LQG, however, even on zero volume eigenstates the operator corresponding to the inverse scale factor is unbounded. This work brings to the fore in the LQG setting the old question of the relevance of mini-superspace quantization for understanding quantum gravity. Specifically to the issue of singularity avoidance, it raises the question of what features of the full theory are ultimately responsible for this, and how it manifests itself in symmetry reduced models. For instance, the second conclusion drawn from [7] is that the expectation value of the inverse scale factor operator remains bounded in the sense of expectation values with respect to a one parameter family of coherent states whose peak in phase space follows the classically singular trajectory. This result of full LQG implies a completely different sense of singularity avoidance than the one obtained in LQC, but at least it does not contradict the LQC result. Further understanding of the issues raised by this work entail going beyond the simplest models, to at least those that have some inhomogeneity. This is being studied by several people both in the context of cosmology and gravitational collapse. \vskip 0.5cm \begin{itemize} \item{[1]} C. Misner, Phys. Rev. 186, 1319 (1969). \item{[2]} W. F. Blyth, C. J. Isham, Phys. Rev. D11, 768 (1975). \item{[3]} T. Thiemann, Class. Quant. Grav. 15, 839 (1998) \htmladdnormallink{gr-qc/9606089}{http://arXiv.org/abs/gr-qc/9606089} \item{[4]} M. Bojowald, Phys. Rev. Lett. 86, 5227 (2001) \ htmladdnormallink{gr-qc/0102069}{http://arXiv.org/abs/gr-qc/0102069} ; Living Rev. Rel. 8, 11 (2005) \htmladdnormallink{gr-qc/0601085}{http://arXiv.org/abs/gr-qc/0601085} . \item{[5]} V. Husain, O. Winkler, Phys. Rev. D69, 084016 (2004)\htmladdnormallink{gr-qc/0312094}{http://arXiv.org/abs/gr-qc/0312094} ; Class. Quant. Grav. 22, L127 (2005) \htmladdnormallink{gr-qc/0410125}{http://arXiv.org/ abs/gr-qc/0410125} . \item{[6]} A. Ashtekar, T. Pawlowski, P. Singh, Phys. Rev. Lett. 96, 141301 (2006) \htmladdnormallink{gr-qc/0602086}{http://arXiv.org/abs/gr-qc/0602086} ; Phys. Rev. D73, 124038 (2006) \htmladdnormallink{gr-qc/0604013}{http://arXiv.org/abs/gr-qc/0604013} . \item{[7]} J. Brunnemann, T. Thiemann, Class. Quant. Grav. 23, 1395 (2006)\htmladdnormallink{gr-qc/0505032}{http:// arXiv.org/abs/gr-qc/0505032} ; Class. Quant. Grav. 23, 1429 (2006) \htmladdnormallink{gr-qc/0505033}{http://arXiv.org/abs/gr-qc/0505033} . \end{itemize} \section*{\centerline {What's new in LIGO}} \ addtocontents{toc}{\protect\medskip} \addcontentsline{toc}{subsubsection}{ What's new in LIGO, by David Shoemaker} \parskip=3pt \begin{center} David Shoemaker,MIT \htmladdnormallink {dhs-at-ligo.mit.edu} {mailto:dhs@ligo.mit.edu} \end{center} {\bf What's new in LIGO} \bigskip Here is a brief update on the advances in LIGO -- our name for the LIGO Laboratory (Caltech/MIT) and the greater LIGO Scientific Collaboration (LSC). \medskip {\bf Observing} The LIGO S5 science run, including the 4km- and 2km-length interferometers at Hanford, Washington, the 4km instrument at Livingston, Louisiana, and the GEO600 Detector near Hannover, Germany, continues. There have been some brief breaks for minor commissioning and repairs, and happily both the sensitivity and the uptime of all the instruments have been improving through the run. The LIGO instruments are now exceeding their sensitivity requirement by about a factor of three, and effectively meeting the goal sensitivity curve laid out in 1995. Our commitment to the NSF is to collect one integrated year of data with the instruments at their design sensitivity, and we have now roughly 40\% of those data. The German/UK GEO600 detector is also working nicely, with a very high duty cycle. \medskip {\bf Analysis} The analysis pipelines are continuing to be refined, and we are catching up on the continuous data stream from the instruments. Results are in preparation from the S4, and the first half of the S5 run. Recently published papers on data analysis are ``Search for Gravitational Wave Bursts in LIGO's Third Science Run'' (B. Abbott et al. (LSC), Class. Quantum Grav. 23, S29-S39 (2006); \htmladdnormallink{gr-qc/051146}{http://arXiv.org/abs/gr-qc/0511146} ) and ``Upper Limits on a Stochastic Background of Gravitational Waves'' (B. Abbott et al. (LSC), Phys. Rev. Lett. 95, 221101 (2005); \htmladdnormallink{gr-qc/0507254}{http://arXiv.org/abs/gr-qc/0507254} ). Soon to appear will be ``Search for gravitational wave bursts in LIGO's third science run'' (\htmladdnormallink{gr-qc/0511146}{http://arXiv.org/abs/gr-qc/0511146} ). \medskip {\bf Enhanced LIGO} Once the S5 run is complete, probably in the Fall of 2007, the Collaboration will make some incremental improvements to the detectors, principally to the interferometer readout system. An increase of sensitivity of roughly a factor of 2 over a broad range of frequencies is anticipated, increasing the volume of space searched by a factor of 8. We will run the instruments once again for an extended run with this improved sensitivity. The design for these improvements has made significant progress over the last six months. \medskip {\bf Advanced LIGO} Our plans for significant further improvements to the instrument sensitivity were reviewed by the NSF at the end of May. This `Baseline Review' is intended to complement an earlier technical review, and covered the organization, costing, schedule considerations, and readiness for project funding. The review committee stated in its written review that it was ``impressed'' with the plans for the upgrade, and we feel we have reason to be hopeful that the NSF, OMB, and Congress will find that this is a good use of the taxpayers' money. The Advanced LIGO instrument will have more than a factor of 10 better sensitivity than the Initial LIGO instruments now running, increasing the number of candidate sources by more than 1000, and should make observation of gravitational-wave sources a common event. We plan to start the project in 2008, start installing the new instruments in 2010, and be collected interesting data in 2014. \vfill\eject \section*{\centerline {Scanning New Horizons: GR Beyond 4 dimensions}} \addtocontents{toc}{\ protect\medskip} \addtocontents{toc}{\bf Conference reports:} \addcontentsline{toc}{subsubsection}{ Scanning New Horizons: GR Beyond 4 dimensions, by Donald Marolf} \parskip=3pt \begin{center} Donald Marolf, UC Santa Barbara \htmladdnormallink{marlof-at-physics.ucsb.edu} {mailto:marolf@physics.ucsb.edu} \end{center} For ten weeks this winter, a diverse collection of gravitational physicists gathered at Santa Barbara's Kavli Institute of Theoretical Physics to discuss myriad phenomena related to gravity in $d \neq 4$ spacetime dimensions. Though there was also interest in the case $d=3$, the program (organized by Luis Lehner, Rob Myers, and myself) largely focused on the case $d> 4$. Why study gravity outside of four dimensions? In my opinion, the most basic reason is that the dimension is a parameter one can dial to learn more about the deep nature of gravitational phenomena. Recall, for example, that via Kaluza-Klein reduction a theory with $d > 4$ can in some cases be regarded as a 4-dimensional theory with complicated matter fields. Thus, one might expect any truly universal property of gravity to apply equally well to both high and low dimensions. The Bekenstein-Hawking area law for black hole entropy is perhaps a prime example such such a dimension-independent phenomenon, and as such is widely regarded as a deep principle of gravitational physics. One aim of this program was to discover what other phenomena are similarly universal, and which phenomena are not. Other motivations for studying higher dimensional gravity include string theory, various `large extra dimension' scenarios for our universe, and general fun with mathematical physics. Gravity in higher dimensions exhibits a number of striking features which stretch our 3+1 intuition. For this reason, a primary goal of the program was to bring together higher-dimensional physicists with specialists (e.g. mathematical physicists or numerical physicists) who usually work in 3+1 dimensions. The hope was that by bringing to bear sophisticated tools, progress could readily be made on a number of higher dimensional issues, mostly centered on the physics of black holes. As usual in general relativity, these issues focused on existence, uniqueness, thermodynamics, stability, and dynamics. This fusion was quite successful, and the spectrum of results which came out of the workshop is too broad to fully summarize here. With apologies to those whose work I will not mention (and to those for whom I mention only a small part of their work), I'd like to quickly review a few areas which were the focus of much discussion and where progress was especially significant, and/or there is great potential for further input from inspired readers of this article. I emphasize that only a small fraction of the interesting results obtained are mentioned below. \medskip {\bf Existence, uniqueness, and thermodynamics} What sorts of black objects exist in higher dimensions? Recently, we have learned that higher dimensions host a rich spectrum of black objects, including stationary black rings for which cross-sections of the horizon are not spheres; see [1] for a recent review. In 3+1 dimensions, Hawking's theorem guarantees that the horizon topology is spherical. Using similar methods, some results on the higher dimensional case were already known. However, a forthcoming set of papers ([2] and others to appear) by Lars Andersson, Greg Galloway, Jan Metzger, and Rick Schoen will close an important loophole related to the possibility of Ricci-flat metrics on the horizon, further narrowing the possibilities. As with Hawking's theorem, it can be quite unclear how a given result will generalize to higher dimensions. A particularly interesting example is that of the black hole rigidity theorem, which states that every stationary black hole is axisymmetric; i.e., that it has a rotational Killing field. An important corollary of this result is that the event horizon of a stationary black hole is a Killing horizon, and thus that it has a well-defined surface gravity which is constant over the horizon. In this way, the rigidity theorem is deeply connected to black hole thermodynamics, and one would expect it to generalize readily to all dimensions. However, the standard proof in 3+1 dimensions [3] relies on the fact that cross sections of the horizon have topology $S^2$. The generalization to higher dimensions is highly nontrivial, and has only recently been established by Hollands, Ishibashi and Wald [4], in part as a result of the KITP program and interaction with Jim Isenberg and Vince Moncrief, from whom a related paper is expected soon. \medskip {\bf Stability} Stationary black rings exist, but are they dynamically stable? A number of potential instabilities have been discussed in the literature: radial instabilities, Gregory-Laflamme Instabilities (see below), super-radiant instabilities, and potential instabilities associated with absorption or emission of radiation. Our program saw significant progress in establishing that black rings {\it do} suffer from such instabilities. First, a work by Jordan Hovdebo and Rob Myers established [5] that very large black brings are unstable to a Gregory-Laflamme type instability. In addition, Henriette Elvang, Roberto Emparan and Amitabh Virmani provided evidence [6] that all neutral black rings (at least, in 4+1 dimensions) are unstable in some way. For the branch of the black ring solutions with the largest entropy, they show that such black rings suffer from a radial instability as well as a Gregory-Laflamme instability. Furthermore, they give evidence that the Gregory-Laflamme instability should lead the black ring to break up into a set of black holes with large {\it orbital} angular momentum. Finally, Oscar Dias showed [7] that if `doubly spinning' black rings exist, then they will necessarily have a super-radiant instability. \medskip {\bf Dynamics} A new dynamical issue that arises in $d > 4$ dimensions is the Gregory-Laflamme instability [8] of thin black strings, membranes, etc. It is known that many black string solutions are linearly unstable to perturbations that break translational symmetry along the string but which preserve rotational symmetry around the string. The instability causes the string to become `lumpy,' thickening in some places while thinning at certain `necks.' The endpoint of this instability has been a subject of much debate and discussion. The original work [8] conjectured that the thin necks might shrink to zero size and then `break,' so that the endpoint is a set of separated black holes. Much of the interest in this work revolves around the fact that such a bifurcation would violate certain forms of Cosmic Censorship. Because the interesting question involves the non-linear regime, it is natural to explore this question numerically. Indeed, the 2003 numerical simulation [9] was the focus of much discussion at the program. In particular, when plotted against their asymptotic time coordinate, the evolution of many quantities in the spacetime shows signs of slowing significantly near the point where their code crashes. One might take this as evidence in favor of a scenario (see e.g. [10]) in which the endpoint is simply a static lumpy black string. It goes without saying that better numerical simulations are needed (and the authors of [9] are making progress in this direction). However, our discussions also produced the conclusion that more physics could be obtained by analyzing the data of [9] in terms of a more physical time coordinate; e.g., the retarded time along past null infinity. Because this retarded time might differ substantially from the coordinate time of [9], such a new analysis might suggest very different results more in line with the original bifurcation suggestion of [8]. Interested individuals may wish to consult [10,11,12] for more detailed discussions of possible endpoints. \medskip {\bf Summary} The 2006 KITP program `Scanning new horizons: GR Beyond 4 dimensions' was a period of intense interaction and discussion between a diverse array of physicists which led to a number of exciting new results. Yet much remains to be done, and in particular there is much room for input from both mathematical and numerical relativists. Although the physics concerns large numbers of dimensions, interesting special cases often have sufficient symmetry to reduce problems to either 3 or 2+1 dimensions or less. Examples of such problems include the stability of `ultra-rotating' higher dimensional black holes, the existence of stationary black rings in $d > 5 $ dimensions, and the existence and stability of `braneworld' black holes. In such cases, numerical analyses can be especially useful. I will be only too happy if this short summary encourages others to enter this exciting field and to further develop existence, uniqueness, thermodynamic, stability, and dynamic results in gravity beyond 4 dimensions. Links to the program talks and discussions can be found at http://online.itp.ucsb.edu/online/highdgr06/, and provide a useful introduction to many such topics. {\bf References} [1] R.~Emparan and H.~S.~Reall, ``Black Rings,'' \htmladdnormallink{hep-th/ 0608012}{http://arXiv.org/abs/hep-th/0608012} . %%CITATION = HEP-TH 0608012;%% [2] G.~J.~Galloway, ``Rigidity of outer horizons and the topology of black holes,'' \htmladdnormallink{gr-qc/0608118} {http://arXiv.org/abs/gr-qc/0608118} . %%CITATION = GR-QC 0608118;%% [3] Hawking, S.W.: Black holes in general relativity. Commun. Math. Phys. 25, 152-166 (1972) [4] S.~Hollands, A.~Ishibashi and R.~M.~Wald, ``A higher dimensional stationary rotating black hole must be axisymmetric,'' \htmladdnormallink{gr-qc/o605106}{http://arXiv.org/abs/gr-qc/0605106} . %%CITATION = GR-QC 0605106;%% [5] J.~L.~Hovdebo and R.~C.~Myers, ``Black rings, boosted strings and Gregory-Laflamme,'' Phys.\ Rev.\ D {\bf 73}, 084013 (2006) \htmladdnormallink{hep-th/0601079}{http://arXiv.org/abs/hep-th/0601079} . %%CITATION = HEP-TH 0601079;%% [6] H.~Elvang, R.~Emparan and A.~Virmani, ``Dynamics and stability of black rings,'' \htmladdnormallink{hep-th/0608076}{http://arXiv.org/abs/hep-th/0608076} . %%CITATION = HEP-TH 0608076;%% [7] O.~J.~C.~Dias, ``Superradiant instability of large radius doubly spinning black rings,'' Phys.\ Rev.\ D {\bf 73}, 124035 (2006) \htmladdnormallink{hep-th/0602064} {http://arXiv.org/abs/hep-th/0602064} . %%CITATION = HEP-TH 0602064;%% [8] R.~Gregory and R.~Laflamme, %``Black strings and p-branes are unstable,'' Phys.\ Rev.\ Lett.\ {\bf 70}, 2837 (1993) \ htmladdnormallink{hep-th/9301052}{http://arXiv.org/abs/hep-th/9301052} . %%CITATION = HEP-TH 9301052;%% [9] M.~W.~Choptuik, L.~Lehner, I.~Olabarrieta, R.~Petryk, F.~Pretorius and H.~Villegas, ``Towards the final fate of an unstable black string,'' Phys.\ Rev.\ D {\bf 68}, 044001 (2003) \htmladdnormallink{gr-qc/o304085}{http://arXiv.org/abs/gr-qc/0304085} . %%CITATION = GR-QC 0304085;%% [10] G.~T.~Horowitz and K.~Maeda, ``Fate of the black string instability,'' Phys.\ Rev.\ Lett.\ {\bf 87}, 131301 (2001) \htmladdnormallink{hep-th/0105111}{http://arXiv.org/abs/hep-th/0105111} . %%CITATION = HEP-TH 0105111;%% [11] D.~Garfinkle, L.~Lehner and F.~Pretorius, ``A numerical examination of an evolving black string horizon,'' Phys.\ Rev.\ D {\bf 71}, 064009 (2005) \ htmladdnormallink{gr-qc/0412014}{http://arXiv.org/abs/gr-qc/0412014} . %%CITATION = GR-QC 0412014;%% [12] D.~Marolf, ``On the fate of black string instabilities: An observation,'' Phys.\ Rev.\ D {\bf 71}, 127504 (2005) \htmladdnormallink{hep-th/0504045}{http://arXiv.org/abs/hep-th/0504045} . %%CITATION = HEP-TH 0504045;%% \vfill\eject \section*{\centerline {Quantum Gravity in the Americas III}} \ addtocontents{toc}{\protect\medskip} \addcontentsline{toc}{subsubsection}{ Quantum Gravity in the Americas III, by Jorge Pullin} \parskip=3pt \begin{center} Jorge Pullin, Louisiana State University \ htmladdnormallink{pullin-at-lsu.edu} {mailto:pullin@lsu.edu} \end{center} The third edition of the ``Quantum gravity in the Americas'' workshop was held at PennState on August 23-26. The previous instances of the workshop were at Mexico City and the Perimeter Institute. The workshop was structured into discussion sessions. Each session centered around a topic with one or two ``rapporteur talks'' of 20-40 minutes each, and shorter talks contributed by participants. About 50 researchers from the US, Mexico, Canada and Uruguay attended the meeting. The first day had an interesting session on new results on the renormalization group applied to Einstein's theory, chaired by Max Niedermeier and with a presentation by Frak Saueressig. New results suggest that a fixed point may exist for Euclidean Einstein gravity in perturbation theory and that the theory somehow has dimensionality four at large scales and two at shorter scales, a result that matches those of dynamical triangulations and quantum geometry underlying loop quantum gravity. The second session of the day was about loop quantum cosmology, chaired by David Craig and with Parampreet Singh presenting. Newer formulations of loop quantum cosmology not only include the earlier attractive results of inflation and tunneling through the singularity but they also have a clear prediction for when the quantum regime stops and the classical one starts. The first afternoon session was chaired by Ted Jacobson and centered on cosmology and observations. Daniel Sudarsky spoke about how the quantum fluctuations of the early universe can morph into matter perturbations in cosmology and possible mechanisms for this to happen. Nelson Nunes commented on further results of loop quantum cosmology with possible observational implications. The last session of the first day was on effective descriptions, chaired by Martin Bojowald. Summarizing a very interesting recent development, Laurent Freidel considered 3-dimensional gravity coupled to scalar field and presented the scalar field theory on a non-commutative space that results by integrating the gravitational degrees of freedom. Radu Roiban discussed effective field theories in string theory and discussed their limitations in the regimes in which they predict occurrence of various types of singularities. On day two the morning started with a session on discrete approaches, chaired by Rodolfo Gambini with talks by Luca Bombelli and myself. Luca gave an overview of causal sets and I presented the new results we have on uniform discretizations for quantum gravity I chaired the next session on the physical sector of loop quantum gravity. Very nice talks by Jerzy Lewandowski on the status of the Hamiltonian constraint and by Bianca Dittrich on the ``master constraint program'' summarized our current understanding of the dynamics of the theory. The afternoon sessions started with Laurent Freidel chairing a session on spin foams where Robert Oeckl and Florian Conrady spoke on issues related to the path integral formulation of loop quantum gravity. The last session of the day was on quantum geometry and matter, chaired by Seth Major and with talks by Kevin Vandersloot and Mikhail Kagan on effective descriptions in cosmology and inclusion of inhomogeneities and by Fotini Markopoulou on a recent suggestion of a possible link between the mathematics of braiding in 3-dimensions and the physics of fundamental particles of the standard model. Saturday started with a session on quantum gravity phenomenology, chaired by Daniel Sudarsky and with a comprehensive review by David Mattingly. The second session was chaired by Jerzy Lewandowski on mathematical issues of loop quantum gravity and talks by Jose Antonio Zapata and Daniel Cartin. The afternoon session on Saturday was about the interface of gravity with thermodynamics and cosmology, chaired by Warner Miller and with talks by Ted Jacobson on the `derivation' of Einstein's equations from non-equilibrium thermodynamics and by Stephon Alexander on the possibility of the role of gravitational waves in Baryogenesis. The conference closed with a discussion and summary chaired by Chris Beetle including a round table on black hole entropy and a final summary by Abhay Ashtekar. The small setting of the conference, combined with ample time for discussions and the fact that the audience was technically savvy on the field led to very nice discussions and clarifications of points and left those attending with a crisp overview of this rapidly developing field. \vfill\eject \section*{\centerline {New Frontiers in Numerical Relativity, 2006}} \addtocontents{toc}{\protect\medskip} \addcontentsline{toc}{subsubsection}{ New Frontiers in Numerical Relativity, by Luciano Rezzolla} \parskip=3pt \begin{center} Luciano Rezzolla, Albert Einstein Institute \htmladdnormallink{rezzolla-at-aei.mpg.de} {mailto:rezzolla@aei.mpg.de} \end{center} Traditionally, frontiers represent a treacherous terrain to venture into, where hidden obstacles are present and uncharted territories lie ahead. At the same time, frontiers are also a place where new perspectives can be appreciated and have often been the cradle of new and thriving developments. With this in mind, the numerical-relativity group at the Albert Einstein Institute (AEI) organised a workshop with the goal of exploring and understanding these “New Frontiers”. The workshop took place from July 17-21, 2006 at the AEI campus in Golm, Germany. The meeting was focussed on the numerous issues that occur in numerical relativity, such as: formulations of the Einstein equations, initial data, multiblock approaches, boundary and gauge conditions, and of course relativistic fluids and plasmas. Almost 20 years since the homonymous meeting held at Urbana-Champaign (``Frontiers in Numerical Relativity'', 1988), this meeting saw the enthusiastic participation of a great part of the community, with 127 participants present (in 1988 there were 55) and with a large majority being represented by students and postdocs, a reassuring sign of good health for the community. The program was organised so as to have few talks with ample time dedicated to discussions, which were then continued over breaks, meals and late evenings. In addition, a whole session spanning the last afternoon was dedicated to an ``unconstrained'' discussion which covered some of the most controversial issues that emerged during the conference. During this discussion, led by E. Seidel, particular emphasis was placed on the need for systematic comparisons between waveforms generated by different codes, as well as on the connection to the data-analysis community. A good overview of the conference can be found on the webpage of the conference \texttt{http://numrel.aei.mpg.de/nfnr}, which contains the list of the participants, a copy of the program and downloadable version of the talks. Because of this, in what follows I will simply report the highlights of the different thematic sessions which composed the program. \begin{itemize} \item \textbf{Formulation of the Einstein equations} \smallskip This session saw talks covering issues that go from pointing out clues about ``why do codes crash'' (C. Bona), over to the generalized harmonic gauge conditions in use by the Caltech/Cornell group (L. Lindblom), to the well-posedness and equivalence of different formulations and their relations when ``live'' gauge conditions are used (J.M. Martin-Garcia), to conclude with a prescription on how to deal with constraint violations in first-order evolution systems. Particularly interesting was also the progress report on the ability to perform numerical simulations of the tensor wave equation with pseudospectral methods and which represents the first step towards the solution of a maximally-constrained formulation of the Einstein equations (J. Novak). \item \textbf {Initial-Value problem} This session covered a classical topic in numerical relativity: the construction of initial data for binary black hole systems. The talks focussed on solutions found with a parallel multigrid solver for binary systems with non-trivial spin combination (S. Hawley), on how to improve the Bowen-York prescription the initial data with spinning black holes (M. Hannam), or on how to use matched asymptotic expansions to obtain approximate but hopefully more realistic binary black hole initial data (W. Tichy). Particularly interesting were also the progress reports about the use of ingenious coordinate transformations to build quasi-equilibrium configurations of arbitrary binaries (M. Ansorg) or on how to take properly into account spin in the construction of initial data for binary black holes with spins and in circular orbits (H. Pfeiffer). Both approaches showed the impressive accuracy of pseudospectral methods for this type of problems. \item \textbf {Evolution of vacuum spacetimes} A lot of excitement preceded this session and it was all well motivated. A number of impressive results were in fact presented, some of them simply beyond (a realistic) imagination only a couple of years ago. Some of the results on the ``moving-punctures'' prescription, which have been recently published, were presented in great detail (C. Lousto, J. Baker) and led to a lively discussion. Equally interesting were the talks of other groups reporting their ability to now perform multiple orbits simulations of binary black hole systems when treated using moving punctures and a conformal traceless formulation of the Einstein equations (P. Marronetti, B. Br\"ugmann, F. Hermann, D. Pollney). Of topical relevance to the community engaged in puncture evolutions, was the recent work which studied the stationary slicing of puncture spacetimes and the behavior of fields at the puncture (Br\"ugmann, Pollney). Also rather impressive were the results on binary inspiral and merger carried out within a harmonic formulation of the equations either as second-order systems with finite-difference techniques (F. Pretorius) or as a first-order system with pseudospectral methods (M. Scheel). While the latter approach still needs to find an effective management of the domains at the time of the merger, the quality of the results presented for the inspiral has provided additional evidence of the accuracy of spectral methods. In addition, a useful comparison between the harmonic and conformal-traceless formulations was also presented as a first application of a newly developed code (B. Szil\'agyi). Very interesting work is also being done in areas beyond the binary black hole problem, with simulations of general singularities and the apparent validity of the BKL conjecture (D. Garfinkle), or the formation of naked singularities in the collapse of an ultrarelativistic fluid (M. Snajdr), or on a new prescription to smooth-out a singularity and perform stable and accurate simulations (E. Schnetter). \item \textbf{Evolution of non-vacuum spacetimes} The large number of abstracts submitted to this session is an important indication that numerical relativity is not interested only in evolutions of pure-black-hole spacetimes and that a wider bridge towards numerical relativistic-astrophysics can be built. The session saw talks over a wide range of topics, from the analysis of the dynamical barmode instability and which provided a conceptual framework to determine why and when the instability is suppressed (G. Manca), over to the use of a spectral-methods code to study the behaviour of rotating and magnetized stars in quasi-equilibrium (S. Bonazzola), and to the modelling of radio images of Sgr A* using accretion disk simulations from a General Relativistic Magnetohydrodynamics (GRMHD) code on a Kerr background (S. Noble). Focus of a lot of attention were also simulations of gravitational collapse with talks on either the collapse of stellar cores to proto-neutron stars or of dynamically unstable neutron stars to black holes. More specifically, results were presented of 3D simulations of realistic stellar cores employing a finite-temperature equation of state and an approximate treatment of deleptonization (C. Ott, H. Dimmelmeier) as well as of 2D simulations of magnetized stellar cores in the test-field approximation (T. Font). Also, results were presented of 3D simulations of uniformly rotating neutron stars in which a novel technique avoided the use of excision and has allowed calculation of the first complete waveform of the process (L. Baiotti), as well as of 2D simulations in full GRMHD of differentially rotating neutron stars, whose dynamics could be of help in modelling the engines powering short gamma-ray bursts (B. Stephens). Two newly developed codes were also presented which solve the equations of GRMHD either on a fixed black hole background and developed to model jet formation (Y. Mizuno), or on an arbitrary background and developed to extend the applications of the Whisky code to scenarios in which magnetic fields play an important role (B. Giacomazzo). Last but not least, a critical assessment was made of present techniques to handle surfaces and interfaces in relativistic hydrodynamics, and which will need to be improved for a future description of multiple fluids (I. Hawke). \item \textbf{Multiblock techniques and AMR} This session saw talks on an area of numerical relativity which has grown rapidly in recent years and is expected to be of increasing importance. In particular, details were given about the multidomain pseudospectral collocation methods used by the Caltech/Cornell group to evolve spacetimes with black holes (L. Kidder) as well on 6-patches schemes with either overlapping or simply touching patches. In the first case details were presented on the way in which the patch ghost zones are ``synchronized'' by interpolation, on the tensor basis used in each patch, and on the handling of non-tensor field variables (J. Thornburg). Similarly, the main ingredients for the touching patches approach, such as high-order summation by parts finite-differencing operators and compatible dissipation operators, the use of penalty methods for the inter-block boundaries and adaptive time stepping, were also discussed in detail (P. Diener). Finally, results were presented for a new class of analytic solutions to the linearized Einstein equations for the Bondi-Sachs metric to be used as a testbed in a code employing stereographic coordinates and six angular patches (N. Bishop) and for an AMR code with fourth-order discretization in space and time and exploting compactification in space. Examples were given on the use of this code for the study of non-interacting massive Klein-Gordon field together with Yang-Mills-Higgs systems (P. Csizmadia). \item \textbf{Boundary Conditions and perturbative methods} Recent work on another of the classical areas of research in numerical relativity, that which is interested in the definition of mathematically consistent and numerically accurate boundary conditions, was presented in this session. More specifically, a presentation was made on how to use trapping horizons to provide simple and satisfactory inner boundary conditions in black-hole spacetimes making use of the excision technique (E. Gourgoulhon). In addition, outer boundary conditions were the focus of several talks which addressed issues such as the definition of well-posed radiation-controlling boundary conditions for the harmonic formulation of the Einstein equations (O. Rinne), or the use of absorbing boundary conditions through a geometric approach (O. Sarbach) and the application of absorbing boundary conditions in examples of the linearized form of the Einstein equations (L. Buchman). Finally, results were presented on how to use black-hole perturbation theory and effective-one-body ideas to determine the gravitational-wave signal for the inspiral and merger of binary black-hole systems in the extreme mass-ratio limit (A. Nagar). \end {itemize} In recognition of his important work in the field, the conference hosted a public lecture by Jimmy York on ``Dynamical Principles of General Relativity'', held at the picturesque Schlosstheater im Neuen Palais, within the premises of the Sans Souci Park in Potsdam. Talks given at the conference will appear as regular refereed articles in a special issue of CQG to be published in 2007, with M. Campanelli and L. Rezzolla acting as editors. \vfill\eject \section*{\centerline {Teaching General Relativity to Undergraduates}} \addtocontents{toc}{\protect\medskip} \ addcontentsline{toc}{subsubsection}{ Teaching General Relativity to Undergraduates, by Greg Comer} \parskip=3pt \begin{center} Greg Comer, St. Louis University \htmladdnormallink{comergl-at-slu.edu} {mailto:comergl@slu.edu} \end{center} On July 20 and 21, 2006 the AAPT held a Topical Conference at Syracuse University on Teaching General Relativity to Undergraduates. \ Why? \ Because the time has arrived to incorporate special and general relativity fully into the general physics curriculum. \ If you need to be convinced, consider that the equivalence principle works so well it is almost obscene, GPS fails when GR is ignored, gravitational red-shift is a fact, the Hulse-Taylor Binary Pulsar is producing gravitational waves, gravitational lensing exists, the expansion of the universe is essential for cosmological nucleosynthesis (which produced the lighter elements H, He, Li, etc), supermassive black holes in galactic centers appear to be the norm rather than the exception, and the Laser Interferometer Gravitational-Wave Observatory is near its design operation. \ Let us not forget that Dirac's great prediction of antimatter came about after he merged the physics that is spacetime with quantum mechanics. \ Although not a reason for inclusion in physics courses, it is remarkable that Einstein's $E = m c^2$ is an icon of popular culture, and no doubt recognized by more people than Newton's $F = m a$. \ As always, knowing {\em why} relativity should be incorporated is one thing, knowing {\em how} is an entirely different matter. This conference was my first introduction to the increasingly hot pursuit of pedagogically sound models and curricula for teaching relativity at the popular, high school, and undergraduate levels. \ I learned that one does not have to be a relativity expert to participate fully. \ In fact, one of the goals is to develop a curriculum that does not require such expertise (for the simple reason that we cannot expect schools to have a relativist on staff). \ Another goal is to streamline delivery of the mathematics of relativity so as to deliver the ``goods'' that the students want to study and that educators want to teach: black holes, gravitational waves, cosmology, and so on. Pearson Addison-Wesley and Cambridge University Press provided participants with desk copies of five of their important GR textbooks. \ The authors of four of these texts, Jim Hartle, Bernard Schutz, and Edwin Taylor, were in attendance, and we were able to ``pick their brains'' and get first-hand accounts of their texts. \ Broadly speaking, most of the books reflect two different approaches: math first or physics first. \ The notable exception is a new text by Schutz which is designed for a general audience; namely, students who are taking their first (and perhaps only) physics course. \ The math first approach develops the mathematical foundations (tensors and differential geometry), introduces the Einstein equations, and then provides applications. \ In the physics first approach, applications occur first and only after key physics concepts are encountered do the mathematical underpinnings and Einstein equations appear. \ Understanding of gravity as a curved spacetime phenomenon is acquired via analysis of specific solutions to the Einstein equations (such as those for black holes, gravitational waves, and cosmology). \ Loosely speaking, we can think of math first as going from the general to the specific, and physics first as the other way around. The speakers were well chosen, their presentations were fascinating, and the discussions afterward and in breakout sessions were captivating. \ My desire for teaching relativity was invigorated, and I came away truely optimistic about the possibilities. \ Jorge Pullin gave a very good review of the central ideas of relativity (without inundating us with complicated mathematics) for participants who were new to teaching a GR course. \ Jim Hartle presented his rationale for the physics first approach. \ He has found that students can understand many GR physical effects quickly if they have some knowledge of mechanics. \ He emphasized that the structure of a course on GR very much depends on the context of where it is delivered: Who will teach it? \ How much time is available? \ What is the target audience? \ Tom Moore, who has a remarkable wealth of experience in teaching relativity to undergraduates, discussed ways in which the mathematics of GR can be more easily grasped by the students. \ For example, he has found that basic concepts can be effectively presented via analysis of two-dimensional metrics, such as using non-Cartesian coordinates for the flat-space metric and exploring the metric properties of the sphere. \ His experience also shows that students need lots of drilling on index manipulation. While Neil Ashby and Rai Weiss talked seriously about GPS and observation and experiment in GR, respectively, they also gave us some really juicy tabloid tidbits. \ We learned from Neil Ashby that while GR corrections were available to GPS, the necessary circuitry was not turned on initially (maybe because a highly placed, powerful individual was not convinced the corrections were needed). \ After the satellites were in orbit, it was, more or less, immediately determined that the system was not working. \ Only after the GR circuits were switched on, did GPS live up to its promise. \ As they say in football: ``Score!'' \ Rai Weiss regaled us with his personal experiences of learning and teaching relativity and performing experiments that test GR. \ While preparing to teach relativity for the first time (in the Sixties), he spent some time studying the existing data. \ In his own inimitable style, he expressed his, shall we say, disappointment---OK, it was disgust---with what he found for, say, measurements of light deflection by the Sun. A major point that was emphasized several times is that there should be significantly expanded discussion of acceleration in special relativity. \ Even among professional physicists there is a common misconception (first created, apparently, by Einstein himself) that GR is needed to really understand acceleration. \ It was pointed out, however, that we have no conceptual difficulties with accelerations in elementary particle scattering, so why should we have a problem when applying it to, say, the twin paradox? \ Even better, Don Marolf showed how one can use acceleration in special relativity to understand salient features of horizons in GR. While Marolf's talk indirectly addressed the misconception about acceleration, Peter Saulson's talk was a direct rebuttal of another, which is the misconception that laser interferometry cannot be used to detect gravitational waves. \ The incorrect reasoning says there will be no interference because a passing gravitational wave will stretch and squeeze the legs of the interferometer and the light in the same way. \ Of course, analysis based on GR and the Maxwell theory is uneqivocal; there {\em is} interference. Finally, the talk of Stamatis Vokos had, perhaps, the most to say about roots of misconceptions of relativity. \ He and his colleagues have done rigorous studies on how students go about absorbing, processing, and applying what they learn in a relativity course. \ At the most basic level they have found that a student's understanding of simultaneity is crucial for learning relativity. The main message of the conference is clear: there is much work to be done, but Relativity will soon rise out of the abyss of undergraduate syllabus topics that are labeled ``Time Permitting.'' \ The call is out and a concerted effort is in place. \ If you want to help, now is the time to act. \ As a matter of fact, I have a personal request: I am the editor of a new section on relativity that is to be put together for the ComPADRE project. \ It is a web-based network of educational resources supporting teachers and students in physics and astronomy (see http://www.compadre.org/portal/index.cfm). \ Bruce Mason, a P.I.~of the project, visited the conference and spoke briefly about ComPADRE's basic philosophy, current status, and future goals. \ For those who would like to have their materials on relativity made available, please send an e-mail to comergl@slu.edu with (1) a link to the site, (2) the target students, or level of the course, and (3) a two or three sentence description. The AAPT was assisted by LIGO/Caltech, the NSF Physics Frontier Center for Gravitational Wave Physics at Penn State, and the Syracuse University Department of Physics. \ Finally, many thanks go out to the Organizing Committee for producing such an excellent meeting: Michelle Larson (Chair), James Hartle, Charles Holbrow, Dale Ingram, Richard Price, Peter Saulson, John Thacker, and Stamatis Vokos. \ To learn more about results from the workshop go to http://www.aapt-doorway.org/TGRU.htm. \ Workshop posters, slides from presenters' talks, and workshop proceedings can all be found there. \vfill\eject \section*{\centerline {Ninth Capra Meeting on Radiation Reaction}} \addtocontents{toc}{\protect\medskip} \addcontentsline{toc}{subsubsection}{ Ninth Capra Meeting on Radiation Reaction, by Lior Burko} \parskip=3pt \begin{center} Lior Burko, University of Alabama in Huntsville \htmladdnormallink{burko-at-uah.edu} {mailto:burko@uah.edu} \end{center} The Capra series of meetings (named after the ranch in Southern California---that Caltech alumnus director Frank Capra bequeathed to his alma mater---the venue of the first meeting in 1998) are annual meetings on radiation reaction, that focus on the finite-mass corrections to the motion of a small mass in the gravitational field of a much larger mass, and on the emitted gravitational waves. In addition to being an interesting fundamental problem in general relativity, it is also a timely one, as some of the most promising sources for low frequency gravitational waves, that can be observed by space borne detectors such as LISA, are the waves emitted when a stellar mass compact object inspirals into a supermassive black hole at a galaxy's center. With a typical mass ratio of $10^{-6}$--$10^{-5}$, the description of these sources and their waveforms are the main motivation for the Capra meetings. The Ninth Annual Capra Meeting was hosted by the Center for Gravitation and Cosmology of the University of Wisconsin--Milawaukee from June 9 to 11, 2006 (and followed by the now traditional ``post Capra Workshop" June 12 to 14), and organized by Warren Anderson, John Friedman, Eirini Messaritaki, and Alan Wiseman. Administrative support was provided by Steve Nelson. In addition to the host Center, the meeting was supported financially by the Graduate School at UWM and by the Center for Gravitational Wave Astronomy at the University of Texas at Brownsville. The slides presented at the meeting are available online at the meeting's website, {\tt http://www.lsc-group.phys.uwm.edu/capra9/}). The author of this Summary, and surely all the participants of this highly successful meeting, would like to thank the organizers, and especially Eirini Messaritaki, for all the hard work they have put in it. Seventeen talks were presented. A number of talks were about various aspects of radiation reaction for particle motion in the spacetime of a Kerr black hole: Carlos Sopuerta (work with Pablo Laguna at Penn State) discussed advances in numerical simulations of extreme mass ratio inspirals, using finite element methods to handle spatial derivatives, and finite differences for the temporal derivatives. While in Schwarzschild very good agreement (to $0.05 \% $) with finite difference methods is found \cite{sopuerta}, in Kerr grid density adaptability is still a problem, that leads to accuracy problems. One of the problems related to the so-called ``gauge problem" is that the full and singular parts of the gravitational field of a particle are often written in different gauges. One way to attack this problem is to find instead the Weyl scalars, which are gauge independent. Notably, the reconstruction of the metric perturbations from the Weyl scalars is problematic when a source particle is present, because there are more gauge conditions to satisfy than gauge degrees of freedom. However, if one obtains the regularized Weyl scalars, then one has a solution of the {\em homogeneous} Einstein equation, and therefore one doesn't have the above problem in reconstructing the metric perturbations. Then, the metric perturbations can be found in {\em any} gauge, because the Weyl scalars are gauge independent. Bernard Whiting (work with Larry Price at the University of Florida) reported on work in progress related to the finding of the metric perturbations in Kerr by first regularizing the Weyl scalars, specifically the jump conditions on the radiative modes of the Weyl scalars. Whiting focused on circular but non-equatorial orbits in Kerr. Using this method Whiting found the leading regularization parameter ``$A$", and work is in progress on finding the other parameters. John Friedman (work with Tobias Keidl and Alan Wiseman at UWM) addressed a closely related question, of how to find the regularized Weyl scalars using a special gauge that is exploiting the separability of the Teukolsky equation. Keidl (work with Friedman, Swapnil Tripathi, and Wiseman at UWM) then applied this approach to a simple case of a static mass point in the Schwarzschild spacetime, and showed how to solve the Bardeen--Press equation for $\Psi_4$. For the case of a static electric charge in Schwarzschild, this method successfully reproduces the result of Smith and Will \cite{smith-will}. Dong--Hoon Kim (AEI) discussed work in progress on a mode sum calculation of regularization parameters in Kerr, specifically for scalar field self force for generic orbits in Kerr. Kim describes the singular field in THZ normal coordinates \cite{THZ}, and finds the regularization parameters ``$A$", ``$B$", and ``$C$". Katsuhiko Ganz (Kyoto University, work with Wataru Hikida, Hiroyuki Nakano, and Takahiro Tanaka) addressed the adiabatic evolution of orbits in Kerr with large inclination angles, extending previous work in \cite{ganz}. Ryuichi Fujita (work with Hideyuki Tagoshi at Osaka University) discussed a new method to integrate the Teukolsky equation in the frequency domain, that is based on the MST formalism \cite{mano}, in which one expands the homogeneous solutions in hypergeometric functions near the horizon and in Coulomb wave functions near spatial infinity, and matches the solutions across an overlap region. Fujita applied the method to inclined orbits in Kerr with small eccentricity. For cases where fluxes are available from other methods, agreement to at least 6 significant figures is found. In Schwarzschild, agreement to 15 significant figures was reported \cite{fujita}. Other talks discussed issues related to a Schwarzschild black hole: Lior Burko (UAH) discussed the evolution of quasi-circular orbits under a local self force, including conservative effects \cite{burko}. Nakano (Osaka University, work with Norichika Sago, Hikida, and Misao Sasaki) discussed the solution of the metric perturbations in the Regge--Wheeler gauge, including a change to the asymptotically-flat gauge for the non-radiative modes, and reported on a post Newtonian expansion for the radial component of the self force for a particle in circular orbit. Hikida (Kyoto University, work with Sanjay Jhingan, Nakano, Sago, Sasaki, and Tanaka) discussed the change in the orbital parameters of a scalar charge's motion under a self force, and obtained the latter in an expansion in the eccentricity. Hikida found that for small values of the eccentricity, the conservative effects on the phases are small, and are not likely to accumulate to $\pi$. However, gravitational conservative self force effects are expected to be larger than the scalar field counterpart. Steve Detweiler (work with Ian Vega at the University of Florida) discussed the gravitational self force effects on geodesics in Schwarzschild, extending previous work on the scalar field counterpart \cite{detweiler}. Vega (work with Detweiler) described work in progress on second-order gravitational perturbations due to a point particle, that is based on replacing the point particle with a small black hole at the world line. Abraham Harte (Penn State) reported on work based on Dixon's formalism to evaluate self forces on extended bodies, which was applied to electromagnetism in flat spacetime \cite{harte}. Eric Poisson (work with Roland Haas at the University of Guelph) described a method for calculation of regularized self forces based on a tetrad decomposition of the singular field \cite{poisson}. Instead of expanding a vector in vector harmonics, Poisson described its projection on an orthonormal tetrad, followed by an expansion in scalar harmonics. In a carefully chosen ``Cartesian" tetrad, only a finite number of coefficients are non-zero, which makes this formalism attractive. Haas described work in progress on the scalar self force for eccentric orbits in Schwarzschild based on a time domain calculation. Wiseman (work with Friedman, Keidl, and Tripathi at UWM and Samuel Gralla at Yale University and the University of Chicago) discussed the computation of the self force using a modification of the Quinn--Wald axioms, using a method based on finding an exact mode sum that represents the entire singular field. Carlos Lousto (UTB) reviewed the recent exciting advances in numerical relativity \cite{lousto}. \begin{thebibliography}{99} \bibitem{sopuerta} C.F.~Sopuerta and P.~Laguna, Phys.~Rev.~D {\bf 73}, 044028 (2006). \bibitem{smith-will} A.G.~Smith and C.M.~Will, Phys.~Rev.~D {\bf 22}, 1276 (1980). \bibitem{THZ} K.S.~Thorne and J.B.~Hartle, Phys.~Rev.~D {\bf 31}, 1815 (1985); X.-H.~Zhang, Phys.~Rev.~D {\bf 34}, 991 (1986). \bibitem{ganz} N.~Sago, T.~Tanaka, W.~Hikida, K.~Ganz, and H.~Nakano, \htmladdnormallink{gr-qc/0511151}{http://arXiv.org/abs/gr-qc/0511151} . \bibitem {mano} S.~Mano, H.~Suzuki, and E. Takasugi, Prog.~Theor.~Phys.~{\bf 95}, 1079 (1996). \bibitem{fujita} R.~Fujita and H.~Tagoshi, Prog.~Theor.~Phys.~{\bf 112}, 415 (2004). \bibitem{burko} L.M.~Burko, Class.~Quantum Grav.~{\bf 23}, 4281 (2006). \bibitem{detweiler} L.M.~Diaz--Rivera, E.~Messaritaki, B.~F.~Whiting, and S.~Detweiler, Phys.~Rev.~D {\bf 70}, 124018 (2004). \bibitem{harte} A.I.~Harte, Phys.~Rev.~D {\bf 73}, 065006 (2006). \bibitem{poisson} R.~Haas and E.~Poisson, \htmladdnormallink{gr-qc/0605077}{http://arXiv.org/abs/gr-qc/0605077} . \bibitem{lousto} F.~Pretorius, Phys.~Rev.~Lett.~{\bf 95}, 121101 (2005); M.~Campanelli, C.O.~Lousto, P.~Marronetti, and Y.~Zlochower, Phys.~Rev.~Lett.~{\bf 96}, 111101 (2006); J.G.~Baker, J.~Centrella, D.-I.~Choi, M.~Koppitz, and J.~van Meter, Phys.~Rev.~Lett.~{\bf 96}, 111102 (2006). \end{thebibliography} \end{document}
{"url":"http://www4.oakland.edu/upload/docs/physics/mog/mog28.tex","timestamp":"2014-04-19T17:03:36Z","content_type":null,"content_length":"64072","record_id":"<urn:uuid:867565e4-3d8d-46bf-94f8-7fc986f9f1d1>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00345-ip-10-147-4-33.ec2.internal.warc.gz"}
43: Abstract harmonic analysis [Search][Subject Index][MathMap][Tour][Help!] Abstract harmonic analysis: if Fourier series is the study of periodic real functions, that is, real functions which are invariant under the group of integer translations, then abstract harmonic analysis is the study of functions on general topological groups which are invariant under a (closed) subgroup. This includes topics of varying level of specificity, including analysis on Lie groups or locally compact abelian groups. This area also overlaps with representation theory of topological groups. Mackey, George W. : "Harmonic analysis as the exploitation of symmetry---a historical survey", Bull. Amer. Math. Soc. (N.S.) 3 (1980), no. 1, part 1, 543--698 (MR81d:01017) For other analysis on topological and Lie groups, See 22Exx One can carry over the development of Fourier series for functions on the circle and study the expansion of functions on the sphere; the basic functions then are the spherical harmonics -- see 33: Special Functions. There is only one division (43A) but it is subdivided: This is among the smaller areas in the Math Reviews database. Browse all (old) classifications for this area at the AMS. Berenstein, Carlos A.: "The Pompeiu problem, what's new?", Complex analysis, harmonic analysis and applications (Bordeaux, 1995), 1--11; Pitman Res. Notes Math. Ser., 347; Longman, Harlow, 1996. You can reach this page through http://www.math-atlas.org/welcome.html Last modified 2000/01/14 by Dave Rusin. Mail:
{"url":"http://www.math.niu.edu/~rusin/known-math/index/43-XX.html","timestamp":"2014-04-17T21:24:39Z","content_type":null,"content_length":"7280","record_id":"<urn:uuid:e073ccd3-2700-41ea-879e-c4c60d433b02>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00512-ip-10-147-4-33.ec2.internal.warc.gz"}
Intermediate Algebra Department of Mathematical Sciences Welcome to Intermediate Algebra Updates: August 21st 2011 1)Note that the links on the left hand side have been updated for Fall 2011. 2) Only students enrolled in Section 84839/ instructor Moosai/MW 10-10.50pm and Section 58840/instructor Radulescu in ED 120 are required to spend 2 hrs of supervised lab on online assignments each This course prepares students for MAC 1105, College Algebra. Topics include sets, properties of real numbers, exponents and radicals, factoring of polynomial expressions, algebraic fractions, linear, quadratic and radical equations and their applications. T This course does not satisfy GORDON RULE mathematics graduation requirement but is a necessary prerequisite for GORDON RULE math courses. This course counts as elective credit only. You are not required to purchase a textbook for this course. You will have access to the e-book, Intermediate Algebra, by Sullivan & Struve, Prentice Hall, 2nd edition(2010), online through What is math anxiety? Math Myths Taking possession of math anxiety Strategies for studying mathematics
{"url":"http://math.fau.edu/web/Intermediate/index.htm","timestamp":"2014-04-20T05:42:26Z","content_type":null,"content_length":"65420","record_id":"<urn:uuid:42b89b93-4d0f-41d2-b364-6324546e28d5>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00196-ip-10-147-4-33.ec2.internal.warc.gz"}
R-alpha: two-sided to one-sided formula Douglas Bates bates@stat.wisc.edu Tue, 2 Dec 1997 11:55:49 -0600 (CST) At times we want to convert a two-sided formula to a one-sided formula. In S we can do this by dropping the second entry in the formula. In R that object no longer has a formula class. R> ttt <- score ~ age | Infant R> class(ttt) [1] "formula" R> length(ttt) [1] 3 R> ttt[-2] age | Infant R> class(ttt[-2]) R> do.call("~", ttt[-(1:2)]) ~age | Infant In general it would not be a good idea to propagate the formula class to subsets but it does make sense in this case. We can get around it by replacing ttt[-2] by do.call("~", ttt[-(1:2)]) I suppose. Any opinions on whether ttt[-2] should still be a formula? r-devel mailing list -- Read http://www.ci.tuwien.ac.at/~hornik/R/R-FAQ.html Send "info", "help", or "[un]subscribe" (in the "body", not the subject !) To: r-devel-request@stat.math.ethz.ch
{"url":"https://stat.ethz.ch/pipermail/r-devel/1997-December/017810.html","timestamp":"2014-04-16T16:07:11Z","content_type":null,"content_length":"3266","record_id":"<urn:uuid:dc3d58d3-366c-4780-be93-fdd91f19a6fe>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00212-ip-10-147-4-33.ec2.internal.warc.gz"}
Proof a recursive formula with induction September 27th 2011, 02:27 PM Proof a recursive formula with induction Hi all! I'm new here and I could use some help on a homework question. Given the following recursive definition: $\\ a_0=1 \\ a_1=2 \\ a_n=\frac{(a_{n-1})^2}{a_{n-2}}$ I have to proof, using induction, that $\\ a_n=2^n$ My current proof is as follows: Base step The formula is correct for the cases n=0 and n=1 (Can be easily verified). Induction step Assume the formula is correct for n, we can fill this in: $a_n = \frac{(2^{n-1})^2}{2^{n-2}}$ $a_n = \frac{2^{2n-2}}{2^{n-2}}$ $a_n = \frac{2^{n-2} \times 2^n}{2^{n-2}}$ $a_n = 2^n$ However, I think I am missing some important induction steps. I don't see the connection between the base step and the induction step. (Worried) Any help is appreciated, thanks in advance! (Happy) September 27th 2011, 02:50 PM Re: Proof a recursive formula with induction In the induction step, you should fix an arbitrary n >= 2 and assume that the claim holds for n - 1 and n - 2. Then you need to prove it for n. In other words, you prove the following statement where P(n) denotes the claim for n: "For all n >= 2, if P(n - 2) and P(n - 1), then P(n)." Since you proved P(0) and P(1), the induction step gives P(2), then from P(1) and P(2) you get P(3) and so on. The calculations you did are correct.
{"url":"http://mathhelpforum.com/discrete-math/189010-proof-recursive-formula-induction-print.html","timestamp":"2014-04-17T22:03:54Z","content_type":null,"content_length":"5982","record_id":"<urn:uuid:62bdfe26-60da-49b4-b327-7af6faa37d70>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00195-ip-10-147-4-33.ec2.internal.warc.gz"}
Find the margin error E December 19th 2005, 11:08 PM Find the margin error E Please help me solve this problem. I need to know what is the correct formula in order to solve the problem. Thanks Use the given confidence level and sample data to find (a) the margin error E and (b) a confidence interval for estimation the population mean u. Times between using a dishwasher: 90% confidence or 1.645 (critical value); n = 25, x = 5.24 sec, the population is normally distributed, and o is known to be 2.50 sec. Note: I was unable to put the minus sign over the x symbol. December 20th 2005, 12:03 AM Originally Posted by sinlee2010 Please help me solve this problem. I need to know what is the correct formula in order to solve the problem. Thanks Use the given confidence level and sample data to find (a) the margin error E and (b) a confidence interval for estimation the population mean u. Times between using a dishwasher: 90% confidence or 1.645 (critical value); n = 25, x = 5.24 sec, the population is normally distributed, and o is known to be 2.50 sec. Note: I was unable to put the minus sign over the x symbol. For a sample of size $N=25$, we observe a mean $\bar x = 5.24 \ sec$. We know that the population is normally distributed with known standard deviation $\sigma=2.5\ sec$. Then 90% margin of error for the mean is: $E= \frac{1.645 \frac{\sigma}{\sqrt N}}{\bar x}100\ \%\ \sim 15.67\ \%$ The 90% confidence interval is: $(\bar x-1.645 \frac{\sigma}{\sqrt N},\bar x+1.645 \frac{\sigma}{\sqrt N})=(4.4275,6.0725)$. (Note that this retains more precision than is warranted) December 20th 2005, 06:25 AM Find the Margin of Error Thanks for the help Captainblack...... :)
{"url":"http://mathhelpforum.com/advanced-statistics/1485-find-margin-error-e-print.html","timestamp":"2014-04-23T18:41:19Z","content_type":null,"content_length":"6438","record_id":"<urn:uuid:fab0b8ff-32d0-413e-af5c-b08a0d7e2beb>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00563-ip-10-147-4-33.ec2.internal.warc.gz"}
Penllyn, PA Math Tutor Find a Penllyn, PA Math Tutor ...Education and Teaching Experience: I hold a BA in English from the University of Pennsylvania and an MA in English from the University of Oregon. I am currently pursuing a secondary English teaching certification and MS in Secondary Education at Saint Joseph's University. At the University of ... 17 Subjects: including ACT Math, SAT math, English, reading ...I'm familiar with many of the books taught in most elementary, secondary, and collegiate-level classes. To help my students learn from literature, I try to ask challenging questions and assign a variety of writing prompts to help them learn the underlying concepts of the text. As an English teacher, I am well-practiced in proofreading student writing. 12 Subjects: including prealgebra, reading, English, writing I would like every student to know that Math is achievable and can even be fun! Math can also require work but it should never be so hard as to make a student give up or cry. I am passionate about Math in the early years, from Pre-Algebra through Pre-Calculus. 9 Subjects: including geometry, Microsoft Outlook, algebra 1, algebra 2 ...My strong proficiencies in Mathematics and Science, along with 39 years of experience in the engineering field in both technical and managerial roles provide a strong foundation for working constructively with students to help pave the way toward successful academic results. I have many demonstr... 5 Subjects: including algebra 1, algebra 2, geometry, prealgebra I'm a retired college instructor and software developer and live in Philadelphia. I have tutored SAT math and reading for The Princeton Review, tutored K-12 math and reading and SAT for Huntington Learning Centers for over ten years, and developed award-winning math tutorials. 14 Subjects: including algebra 1, algebra 2, geometry, precalculus Related Penllyn, PA Tutors Penllyn, PA Accounting Tutors Penllyn, PA ACT Tutors Penllyn, PA Algebra Tutors Penllyn, PA Algebra 2 Tutors Penllyn, PA Calculus Tutors Penllyn, PA Geometry Tutors Penllyn, PA Math Tutors Penllyn, PA Prealgebra Tutors Penllyn, PA Precalculus Tutors Penllyn, PA SAT Tutors Penllyn, PA SAT Math Tutors Penllyn, PA Science Tutors Penllyn, PA Statistics Tutors Penllyn, PA Trigonometry Tutors Nearby Cities With Math Tutor Broad Axe, PA Math Tutors Center Square, PA Math Tutors Fair Oaks, PA Math Tutors Foxcroft Square, PA Math Tutors Foxcroft, PA Math Tutors Gulph Mills, PA Math Tutors Gwynedd Valley Math Tutors Jarrettown, PA Math Tutors Lower Gwynedd, PA Math Tutors North Hills, PA Math Tutors Plymouth Valley, PA Math Tutors Prospectville, PA Math Tutors Roslyn, PA Math Tutors Spring House Math Tutors Upper Dublin, PA Math Tutors
{"url":"http://www.purplemath.com/Penllyn_PA_Math_tutors.php","timestamp":"2014-04-19T10:05:09Z","content_type":null,"content_length":"23932","record_id":"<urn:uuid:ff7e5d71-7805-4063-bc2f-2a0e3917ad3a>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00313-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Courses Core Courses These courses end in 7. Phy 507 Mathematical Methods in Physics 1 (3) Topics in theoretical and applied physics will be studied using both analytical and numerical approaches. Formal descriptions of phenomena and interpretations of results will involve a variety of classical and modern mathematical techniques. Prerequisites: Differential equations. Phy 517 Statistical Mechanics (3) An introduction to statistical methods and the description of a variety of phenomena on a statistical basis. Thermodynamics, statistical mechanics, and kinetic theory are presented from a unified point of view. Topics include elements of probability theory, interaction between macroscopic systems and their parameters, equilibrium, ensembles, classical and quantum statistics, systems of interacting particles, Boltzmann equation, irreversible processes, and fluctuations. Prerequisite: Phy 547 and Phy 460 or equivalent. Phy 527 Classical Mechanics (3) The fundamental principles of classical mechanics are covered. These include the Lagrangian formulation, action, variational principles, and equations of motion, Hamilton’s principle, conserved quantities, rigid bodies, Hamiltonian formulation and canonical equations, canonical transformations and generating functions, Liouville’s theorem. Corequisites: Phy507. Phy 537 Electrodynamics 1 (3) An in-depth survey of classical electrodynamics. Topics include: special relativity, motion of charges in electromagnetic fields, Maxwell's equations, energy and momentum in the electromagnetic field, electrostatics, the propagation and generation of electromagnetic waves. Prerequisite: Phy 350 or equivalent. Phy 547 Quantum Mechanics 1 (3) This foundations of quantum mechanics course includes a review of Schroedinger's equation and proceeds to Heisenberg formalism and its properties, basic principles and structure of quantum mechanics, states, observables, measurements and their mathematical descriptions, representation and transformation theory, bound states and scattering, applications to harmonic oscillator and central potentials. Prerequisite: Phy 450 or equivalent. Corequisite: Phy 507. Phy 557 Quantum Mechanics 2 (3) This second semester of quantum mechanics includes discussion of angular momentum, rotation, Clebsch-Gordan coefficients, Wigner-Eckart theorem, approximation methods, perturbation, variation and WKB approaches, identical particles, Thomas-Fermi model, Hartree-Fock equation, and the semiclassical theory of radiation. Prerequisite: Phy 547. Phy 577 Computational Methods (3) Applications of modern computational methods to current topics in physics. Basics of coding and use of standard software packages. Prerequisite Phy 507 or permission of instructor. Phy 587 Solid State Physics 1 (3) A broad survey of the phenomena of solid state physics. Symmetries of crystals and diffraction from periodic structures; vibrational states and electronic band structures in crystalline metals, semiconductors, and insulators; thermal, transport and optical properties of solids. Prerequisites: Phy 517 and Phy 547. Phy 508 Mathematical Methods in Physics 2 (3) Topics in theoretical and applied physics will be studied using both analytical and numerical approaches. Formal descriptions of phenomena and interpretations of results will involve a variety of classical and modern mathematical techniques. Prerequisite: Phy 507. Phy 515 Electronics (3) Topics covered in this course include transistors and their characteristics, electronic circuits, field effect transistors and applications, amplifiers, low and high frequency response, operational amplifiers, consideration of control-circuit design, fast-switching and counting devices, integrated circuits and their designs. Phy 516 Electronics Projects (3) Hands on independent study electronics projects. Prerequisite: Phy 515 and consent of instructor. Phy 519 Experimental Techniques in Physics (3) Techniques of contemporary experimental physics are highlighted through active participation in selected structured projects utilizing the various research facilities in the department. Emphasis will be given to the advantages and limitations of the techniques in the elucidation of the physics involved. Phy 520 Nuclear Physics (3) A broad survey of the phenomena of nuclear physics. Bulk properties of the nucleus; size, shape, spin, moments, and binding energy. Two-nucleon problem, deuteron, scattering, nuclear shell model, unified model, and collective motion. Electromagnetic and weak interactions. Nuclear reactions, compound nucleus, and direct reactions. Prerequisite: Phy 547 or permission of instructor. Phy 526 Introduction to Particle Physics (3) A broad survey of the phenomena of particle physics. Experimental methods. Classification of particles, quantum numbers, and interactions. External and internal symmetries. Electromagnetic, strong, and weak interactions; resonances; unitary symmetry; recent topics. Phy 528 The Physics of Radiation Therapy (3) This course will be taught at a level such that it will be accessible to ambitious undergraduate students also. The course focuses on radiation therapy physics with special emphasis on clinical applications. The course provides basic radiation physics and physical aspects of treatment planning using photon and electron beams and brachytherapy sources. The course consists of three parts: (i) Part I deals with the basic physics of radiation. (ii) Part II deals with classical radiation therapy, which includes dosimetry and treatment planning. (iii) Part III focuses on modern radiation therapy, which deals with conformal and intensity-modulated radiation therapy, stereotactic radiosurgery, high dose rate brachytherapy and prostate implants. The course will also involve lectures by Medical Physics experts from local hospitals. Students will write a report on a topic selected in consultation with the teacher. Prerequisites: Phy 340 and Phy 440 or equivalent. Phy 539 Electrodynamics 2 (3) An in-depth survey of classical electrodynamics in material media. Topics: conductors, dielectrics, magnetostatics, superconductivity, the interaction of electromagnetic waves with matter, including reflection, diffraction, scattering, and the diffraction of x-rays by crystals. Prerequisite: Phy 537. Phy 542 Introduction to General Relativity (3) Review of special relativity. Introduction to tensor analysis and the geometry of curved spaces. Einstein's equations. Applications to gravitational waves, black holes and expanding universes. Phy 543 Introduction to Cosmology (3) An introduction to cosmology, the study of the structure and evolution of the Universe. Topics: Newtonian cosmology, elements of general relativity (metric, geodesics, Einstein equations), Friedman equations and their solutions, dark matter, dark energy, inflation, introduction to quantum gravity. Phy 544 (Chm 544, Bms 570A) Theory and Techniques of Biophysics and Biophysical Chemistry (3) Comprehensive study of the physical chemistry of biopolymers; structure- confirmation-function interrelations, including systematic coverage of theoretical and experimental aspects of such topics as solution thermal dynamics, hydrodynamics, and optical and magnetic characteristics. Prerequisites: One year of biochemistry and one year of physical chemistry. Phy 545 Physics of Nuclear Medicine (3) The fundamental physics of nuclear medicine is explored. Topics to be covered include implantation and imaging of radioisotopes, brachytherapy and implantation for the treatment of tumors. Phy 548 Medical Imaging (3) This introduction to the physics of radiography includes discussions of CAT, PET, MRI, SPECT, fluoroscopy, and nuclear medicine. Image quality assessment concepts such as contrast, MTF, DQE(f), and ROC will also be covered. Phy 549 Introduction to Quantum Foundations and Quantum Information (3) Quantum theory has many mysterious features, such as entanglement and the probabilistic nature of measurements, which seem to defy understanding in terms of the mechanistic clockwork picture of reality that underlies classical physics. What do these features suggest about the nature of physical reality? For example, is there really “spooky action at a distance” in Nature, as Einstein quipped? In this course, we investigate possible answers to these questions, and form an understanding as to why these questions matter. In particular, we look at recent work which views quantum theory as a theory of information manipulation, and see that this provides extraordinary new insights into the nature of physical reality, which leads to new technological possibilities (such as quantum cryptography and entanglement-assisted computation) that harness quantum weirdness, and even helps us to derive the mathematics of quantum theory from simple physical assumptions. Phy 551 (Csi 551, Inf 551) Bayesian Data Analysis and Signal Processing (3) This course will introduce both the principles and practice of Bayesian and maximum entropy methods for data analysis, signal processing, and machine learning. This is a hands-on course that will introduce MATLAB computing language for software development. Students will learn to write their own Bayesian computer programs to solve problems relevant to physics, chemistry, biology, earth science, and signal processing, as well as hypothesis testing and error analysis. Optimization techniques to be covered include gradient ascent, fixed-point methods, and Markov chain Monte Carlo sampling techniques. Prerequisites: Csi 101or Csi 201, Mat 214, or equivalents, or permission of instructor. Phy 553 Microprocessor Applications (3) This course describes applications of microprocessors to data collection and process control. Topics include the capabilities of typical microprocessors and the techniques used to interface them to external devices, input/output programming, use of the data and address busses, interrupt handling, direct memory access, and data communications; characteristics of peripheral devices such as keyboards, printers, A/D and D/A converters, sensors, and actuators. Prerequisite(s): I Csi 201 or 204 or equivalent. Phy 554 Microprocessors Applications Laboratory (3) This course complements the theoretical development presented in A Phy 553. It centers around practical laboratory applications in both hardware and software of a particular microprocessor. Students will prototype a minimum system and expanded system. Applications include keyboard, printer, display, A/D, D/A, and control functions. A knowledge of a microprocessor and digital logic functions is desirable. Prerequisite(s): A Phy 515 or permission of instructor or A Phy 553. Phy 560 Atoms and Molecules (3) A broad survey of the phenomena in atomic and molecular physics. Atomic structure and spectroscopy, scattering and collisions; molecular structure, electronic, rotational and vibrational spectra, photon excitations, scattering and collisions. Prerequisite Phy 517 and Phy 587 or equivalent or permission of instructor. Phy 562 Structure and Properties of Materials (3) The physics of real materials: the structure of crystalline and amorphous solids; x-ray diffraction and electron microscopy; the thermodynamics and kinetics of phase transformations; crystallographic defects and their relation to mechanical properties. Prerequisite: Phy 517 and Phy 587 or permission of instructor. Phy 566 X-Ray optics, Analysis and Imaging (3) A broad survey of x-ray optics and their uses. Introduction to the theory of x-ray interaction with matter, including refraction, diffraction, total reflection, image formation, fluorescence, absorption, and surface roughness. Applications include x-ray astronomy, microscopy, lithography, materials analysis and medical imaging. A paper and presentation are required. Prerequisite Phy 587 or permission of instructor. Phy 568 Particle Physics (3) Particle interactions and symmetries. Introduction to classification and the quark model. Calculation of elementary processes using Feynman diagrams. Prerequisite: Phy 547. Phy 572 Fluid Mechanics (3) Most fluids are described by the Navier-Stokes equation. Simplifications or approximations are often needed to extract the physics from this complicated equation. Topics covered include: 1) Static fluids, pressure and surfaces; 2) The Euler equation, d'Alembert's paradox, Bernoulli's equation and circulation; 3) Viscosity, damping and the Reynolds number; 4) Boundary layers and turbulence; 5) Waves and sound propagation. Prerequisites: Phy 320 and Mat 214 or equivalent. Phy 580 Electron Diffraction and Microscopy (3) Topics covered which are related to electron diffraction and microscopy include the kinematic theory of electron diffraction, reciprocal lattices and fine structure of diffraction patterns, electron guns and electron lenses, the operation and calibration of electron microscopes, the dynamical theory of diffraction contrast, images of various defects in solids, analysis of dislocations and interfaces, many-beam effects and weak-beam images, phase contrast, lattice images and multislice method, sample preparation and analytical electron microscopy techniques. The course includes hands-on electron microscope experiments. Prerequisite Phy 587. Phy 588 Solid State Physics II (3) A broad survey of the phenomena of solid state physics (continuation of Solid State Physics I). Superconductivity; magnetic and dielectric properties of materials; spectroscopy with photons and electrons; point and line defects; surfaces and interfaces; alloys; noncrystalline solids. Prerequisite: Phy 587. Phy 619 Quantum Mechanics 3 (3) Potential scattering; wave packets, scattering amplitude, cross section, partial waves, phase shifts, Born approximation. Effective range, resonances, scattering matrix, dispersion relations. Formal collision theory. Relativistic wave equation. Klein-Gordon equation, Dirac equation, Relativistic electron theory, spin and negative energy, fine structure of the hydrogen atom. Prerequisite: Phy Phy 632 Spectroscopy: Magnetic Resonance (3) This course will develop a foundation for the understanding of modern magnetic resonance spectroscopy including both NMR and EPR. Topics will include, quantum mechanics of spins, and the density matrix, Fourier transform experiments, spin relaxation, double resonance, field gradients and imaging, two-dimensional experiments, multiple quantum coherence, protein structure and dynamics, solid state NMR and magic angle spinning. Prerequisite: Phy 557. Phy 640 Information Physics (3) The basic principles of information theory and their relation to the laws of physics. Probability and entropy as tools for inductive reasoning. The Cox axioms. Bayes’ theorem and its application to elementary data analysis. Relative entropy, the method of maximum entropy and the foundations of statistical mechanics and thermodynamics. Information geometry, the Fisher-Rao information metric. Shannon’s entropy and Shannon’s theorems. Entropy in quantum mechanics. Quantum information theory and quantum computation. Derivation of quantum mechanics from information theory. Prerequisites: Phy 517 and Phys 547, or consent of the instructor. Phy 642 Relativity and Cosmology (3) Tensor analysis-special relativity, Lorenz transformations, covariant formulation of physical laws. Principles of general relativity and solutions of the Einstein field equation. Tests of general relativity; cosmological models; topics in general relativity. Phy 651 Methods for Investigation of Electronic Structures of Atomic, Molecular and Solid State Systems 1 (3) Procedures for quantitative study of electronic structures and properties of atoms, small molecules, large molecules and solid state materials. Topics include Hartree-Fock and Many-Body theories for atomic systems and small molecules, empirical and approximate procedures for large molecules especially for systems of biological interest, band-structure and cluster approaches to solid state systems including metallic, semiconductor, ionic crystal and molecular solids, impurity centers in these systems and surfaces and interfaces. Prerequisites: Phy 587. Phy 652 Methods for Investigation of Electronic Structures of Atomic, Molecular and Solid State Systems 2 (3) Procedures for quantitative study of electronic structures and properties of atoms, small molecules, large molecules and solid state materials. Topics include Hartree-Fock and Many-Body theories for atomic systems and small molecules, empirical and approximate procedures for large molecules especially for systems of biological interest, band-structure and cluster approaches to solid state systems including metallic, semiconductor, ionic crystal and molecular solids, impurity centers in these systems and surfaces and interfaces. Prerequisites: Phy 587. Research, Seminar and Special Topics Courses Phy 680 Seminar in Physics (1) Faculty led seminars in ongoing physics research. Emphasis is placed developing skills to explain and discuss current research. Phy 695 Introduction to Research Problems in Physics (1-12) Individually directed investigation into areas of current research interest in the department. Prerequisite: Consent of faculty member who will act as supervisor for the investigation. Phy 699 Master's Thesis in Physics (1-12) Up to 6 credits can be used for graduation. Phy 782 Advanced Topics in Physics (3) Selected topics in physics. Phy 784 Special Topics in Physics (1-6) Selected coverage of specialized topics. Phy 810 Research in Physics (1-15) Phy 899 Doctoral Dissertation (1) Load graded. Appropriate for doctoral students engaged in research and writing of the dissertation. Prerequisite: Admission to doctoral candidacy.
{"url":"http://www.albany.edu/graduatebulletin/a_phy.htm","timestamp":"2014-04-21T07:49:08Z","content_type":null,"content_length":"39809","record_id":"<urn:uuid:1e154d2d-ad74-4a80-a5c9-f84f422e97df>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00011-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - View Single Post - Equation for transverse wave It does not really matter but I am going to put the c^2 in its conventional place in what follows. Sorry that is what happens when you trust to an aging memory. [tex]\frac{{{\partial ^2}y}}{{\partial {x^2}}} = \frac{1}{{{c^2}}}\frac{{{\partial ^2}y}}{{\partial {t^2}}}[/tex] Now for fixed ends the boundary conditions are [tex]y(0,t) = y(l,t) = 0[/tex] If the ends are not 'free' but still participating in the wave then they can be attributed initial displacement and velocity conditions y(x,0) = f(x) \\ {\left( {\frac{{\partial y}}{{\partial t}}} \right)_{t = 0}} = g(x) \\ The wave equation itself may be solved by the method of separating the variables [tex]y(x,t) = F(x)G(t)[/tex] F is a function of x only and G a function of t only. Substituting and dividing through by y=FG [tex]\frac{1}{F}\frac{{{d^2}F}}{{d{x^2}}} = \frac{1}{{G{c^2}}}\frac{{{d^2}G}}{{d{t^2}}}[/tex] Both sides of this equation can only be equal if they are constant. Convention has this constant as -λ^2. Some algebra on the resultant pair of ordinary diffrential equations will lead to your required trigonometric solution ( not the one you offered ) where F has the form [tex]{F_n}(x) = \sin \frac{{n\pi x}}{l}[/tex] and G has the form [tex]{G_n}(t) = {A_n}\cos {\omega _n}t + {B_n}\sin {\omega _n}t[/tex] [tex]{y_n}(x,t) = {F_n}(x){G_n}(t) = \sin \frac{{n\pi x}}{l}\left[ {{A_n}\cos {\omega _n}t + {B_n}\sin {\omega _n}t} \right][/tex] Where A and B are determined by the intial conditiions. In general the solution above will not be complete since it depends upon n. To obtain a complete solution you need to sum solutions over n from n=1 to ∞ y(x,0) = f(x) = \sum\limits_{n = 1}^\infty {{A_n}} \sin \frac{{n\pi x}}{l} \\ {\left( {\frac{{\partial y}}{{\partial t}}} \right)_{t = 0}} = g(x) = \sum\limits_{n = 1}^\infty {{B_n}} {\omega _n}\sin \frac{{n\pi x}}{l} \\
{"url":"http://www.physicsforums.com/showpost.php?p=4200117&postcount=5","timestamp":"2014-04-17T12:41:28Z","content_type":null,"content_length":"9004","record_id":"<urn:uuid:f7fe2d0e-af9a-4042-b157-5b25dd05aca6>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00024-ip-10-147-4-33.ec2.internal.warc.gz"}
Vector tutorial suggestion June 12th 2010, 10:00 AM #1 Junior Member May 2010 Vector tutorial suggestion Hi I have this problem (attached) and I really need some brush up on vectors. Can you suggest a link for some good notes or tutorials that tackles part b) and c) . 2 lines are perpendicular if their direction vector are perpendicular (dot product equal to 0) 2 lines intersect if you can find one value for $\lambda$ and one for $\mu$ such that the coordinates are equal l1 and l2 intersect and are perpendicular ; in the plane formed by l1 and l2 place A, P and B June 16th 2010, 11:05 AM #2 MHF Contributor Nov 2008
{"url":"http://mathhelpforum.com/geometry/148685-vector-tutorial-suggestion.html","timestamp":"2014-04-18T20:56:11Z","content_type":null,"content_length":"30526","record_id":"<urn:uuid:76c1bc75-d377-45f0-92b4-4cb207d713fb>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00497-ip-10-147-4-33.ec2.internal.warc.gz"}
Bensenville Prealgebra Tutor ...In addition to a Bachelor of Arts in Spanish and Great Books from the University of Notre Dame, I studied pre-med in a post-baccalaureate program at Northwestern University and earned a master degree in counseling psychology from Lewis University. Though I do not have the same competency as with... 20 Subjects: including prealgebra, English, Spanish, grammar ...I worked with children ages 5-8 years old while working at B.A.S.I.C., Before and After School Instructional Care, at St. Walter School in Roselle, IL for one year. While working there I would help the children with their homework and play games with them. 19 Subjects: including prealgebra, reading, calculus, geometry ...I try to get to know the student and his/her method of learning, and I personalize the way I teach to make it easier on individual students. I have tutored students in math from elementary to college level. In high school, I was the student that other students would turn to for help in various subjects. 19 Subjects: including prealgebra, reading, English, writing ...But it is the reality of the current situation. And as such I am as much concerned with comprehension as with the student's ability to pass an exam so they can move to the next level. What I can offer is consistent evaluation throughout the tutoring process and share this with the student and/or parent. 8 Subjects: including prealgebra, reading, algebra 1, grammar ...If you are interested in taking home tutoring classes for your kids, and improving their grades, do not hesitate to contact me. Qualification: Masters in Computer Applications My Approach : I assess the child's learning ability in the first class and then prepare an individual lesson plan. I break down math problems for the child, to make him/her understand in an easy way. 8 Subjects: including prealgebra, geometry, algebra 1, algebra 2
{"url":"http://www.purplemath.com/bensenville_il_prealgebra_tutors.php","timestamp":"2014-04-17T07:43:19Z","content_type":null,"content_length":"24252","record_id":"<urn:uuid:6070820c-9dfe-45de-8550-1560c7afae5e>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00075-ip-10-147-4-33.ec2.internal.warc.gz"}
Well, if day two was the long day, day three was the short one. There were no plenary sessions, just a short moring parallel session, then excursions in the afternoon. It was a very nice day to be out on the Irish countryside, so that was very nice. Day four had the plenary sessions I was most interested in, a close collaborator, Quentin Mason, started off the day talking about lattice perturbaiton theory, which is what I do. Quentin has completed the two-loop calculation of the light quark mass, which allows for a much more accurate determination. Quentin also reviewed the determination of the strong coupling constand, which I've covered in a previous Next up was Zoltan Ligeti, who reveiwed progress in heavy quark physics from a non-lattice perspective. There's been lots of activity on this front over the past few years, with the development of a new expansion, the Soft Colinear Expansion. This theory is complicated enough that I won't even try to explain it. Finally, another collaborator, Masataka Okamoto, gave a very nice overview of the status of lattice calculations of the CKM matrix. This the the matrix which tells you how the various types (or flavours) of quarks interact in the standard model. It has nine entries (not all of them are independent) most of which can be computed from lattice QCD + an experimental result. Masataka has done a large amount of work, both doing many of the calculations himself, and collecting everything into a coherent picture. In the standerd model, the CKM matrix is unitary. If you accept that assumption, the Masataka has produced a complete determination of the CKM matrix from lattice QCD, experimental measurements, and the unitarity of the matrix. Of course, it would be nice to test the unitarity, from just theory+experiment with no extra assumption. In that case, you can check row by row in the matrix. Masataka showed how one row is completly determined without assuming unitarity. And in that row, the matrix is unitary, up to the errors. It will be a big challenge to repeat this for the other two rows. Hello again from Dublin. Day two of Lattice 2005 was the "busy" day, with three plenary sessions, a parallel session, and the poster session all in one day. Twelve solid hours of physics, which is rather tiring, particularly since I was presenting a poster. As such, today's update of the Plenary sessions will be brief. The morning started off with a talk by Herbert Neuberger on simulations of large N field theory. As discovered by 't Hooft, SU(N) gauge theory simplifies as you take the limit N -> infinity. However, this is hard to do in Lattice QCD, as you would need infinite sized matrices. However, there are techniques for attacking the problem. Neuberger reviewed the interesting phase structure you see in this system. The lattice version of large N QCD has 6 different phases. Next up was Simon Catterall, who reviewed his work on Lattice supersymmetry. In the early days of both supersymmetry and lattice QCD, it wasn't thought possible to put a supersymetric theory on the lattice without badly breaking the supersymmetry. However in recent years, a few different methods have been discovered. The basic idea is you construct a continuum theory with lots of supersymmetry and arrange things such that when you put the theory on the lattice, a remmenant of the supersymmetry remains. Simon reviewed his method for doing this, and briefly touched on some of the possible applications of these methods. After a coffee break, the sessions shifted in focus a little bit. Chris Dawson reviewed the state of Kaon Phenomenology on the lattice. The focus here is on kaon decays, which are hard to do in lattice QCD. For example, a kaon can decay into two pions. This is extremely hard to compute, since pions are very large, it's hard to fit two of them inside your finite lattice box. We swapped last names for the next talk, Chris Dawson became Chris Michael, who gave a lively review of the state of hadronic decays on the lattice. The people I work with are interested in doing very high precision calculations. This is good, however, it limits you to a small number of thing you can calculate. However lattice QCD can, in principle, calculate many many more interesting strong interaction processes. Chris gave an update of the state of some of these calculations, which are very very hard to do. You have an unstable particle in the intitial state, two or more hadrons in the final state, and a transition at some point between. I Hello from Dublin. As promised, I'm going to try to deliver daily reports from the plenary sessions. Unfortunately, getting wireless internet access in the conference room has proved problematic, so it'll have to be after the fact reports, rather than live blog updates. These comments are subjective, and I can cover every talk, so that's that. After the usual introductory speechs the conference got off to a bang with a talk by Julius Kuti, from the University of California San Deigo. The topic was Lattice QCD and String Theory, which is a growing field. There is a lot of interesting problems in the field, from more abstract things to practical things. Julius spent most of his talk on a practical goal, namely using lattice QCD simulations to understand effective string models of QCD. In some sense this is a return to the orgins of string theory. The original idea was to model the gluon field connecting two quarks as a peice of relativistic string. The naive application of this idea didn't work, and so string theory went off in a totally different direction. However, with all the things that have been learned about it, effective (four dimensional) string models can now be constructed. And lattice QCD is the ideal tool to test these models against. There are some issues, as there always are, but the results here were promising, and offer a lot of new territory to explore. Next up was one of the best field theorists in the world, Martin Luscher. He talked about effeciently simulating a certain type of dynamical fermions (Wilson quarks, for the experts) much more effeciently than they've been done before. His idea was to split the lattice up into smaller hypercubic blocks, about 0.5 fm on a side. Then you split your update algorithim into three parts, gluon part + inside block quark part + block boundry part Now, in the standard way of doing things, all of these parts are computed the same number of times (say 2000 times per lattice point). What Luscher (and his collaborators) do is take advantage of the physics of the system to drastically reduce the number of times you have to compute the block boundry part, which is the most expensive bit. The essential bit of physics is that the correlation between points on the boundry, and points deep inside the cube is very weak. This means you don't have to compute it's effects nearly as often as when you compute the gluon effects. As Luscher mentioned, comparing computer algorithms is a tricky business, however his simulations with this new method seem to be a factor of ten or more faster than comperable simulations with the standard methods. In the second plenary session we had a talk by Jim Napolitano, who is an experimentalist working on the CLEO-C experiment. CLEO-C is currently studying D meson physics in great detail at the CESR accelertor at Cornell. One of the main motivations for CLEO-C is to test lattice QCD predictions in the charm system, so that results in the B meson system can be confidently predicted. Jim ran over a number of new results from CLEO including the leptonic decay constant fD, and the masses of two new mesons, the h_c and the \Upsilon(1D). These measurements are very tough, they involve looking for rare raditive transitions in decays of highly excited mesons. The reason that the can be done at all is because CLEO has very good control over the initial state. Basically, they're colliding electrons and positrons right on top of a charm anti-charm quark resonance. This resonance decays to a pair of D mesons, almost at rest. In most cases, both D's decay in a shower of crap (pions, kaons, etc). But sometimes one decays into a shower of crap, and one does something rare. When this happens you're happy, because, from the shower of crap you can learn everything about one of the D's that decayed. And sinc the total momentum is nearly zero, conservation of momentum tells you that it's the same for the D that decayed in a rare way. With that information, and the final state of the rare decay, you can very accurately reconstruct what happened. As usual, listening to an experimental talk made me glad I'm in theory. What they do is really hard :) So there's lots going on here. I'll update tomorrow with the next round of talks. Been a while since I posted an update, so I thought I'd check in. The physics blog world is abuzz with the creation of a new physics group blog cosmic variance which features Sean Carroll, Mark Trodden, JoAnne Hewett, Clifford Johnson, and Risa Wechsler. A nice mix of cosmology, particle phenomenology and string theory. I've been busy calculating, and preparing for Lattice 2005, which is the big annual conference for lattice field theory. This year it's being held at Trinity college in Dublin. They have wireless around, so I should be able to liveblog at least some of the sessions. I'm also going to try to post nightly updates. An amusing note, the built in spellchecker for blogger flags "blog" and "liveblog" as spelling errors.
{"url":"http://latticeqcd.blogspot.com/2005_07_01_archive.html","timestamp":"2014-04-17T18:52:44Z","content_type":null,"content_length":"98683","record_id":"<urn:uuid:6920537a-ff79-4003-bfa3-3af8c0999d2d>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00258-ip-10-147-4-33.ec2.internal.warc.gz"}
A set together with a method of combining elements, such as addition or multiplication, to get new ones, which satisfies only some of the properties required to get a group. In particular, a semigroup need not have an identity element and elements need not have inverses. Related category
{"url":"http://www.daviddarling.info/encyclopedia/S/semigroup.html","timestamp":"2014-04-18T10:39:14Z","content_type":null,"content_length":"5500","record_id":"<urn:uuid:d7d92d54-2662-4fa8-810b-a5a3d8db04e8>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00089-ip-10-147-4-33.ec2.internal.warc.gz"}
Direct variation We say that a quantity a varies directly as a quantity b, if, when b changes, a changes in the same ratio. That means that if b doubles in value, a will also double in value. If b increases by a factor of 3, then a will also increase by a factor of 3. While if the value of b becomes half, so will the value of a. Let the initial values of a and b be a[1], b[1], and let their final values be a[2], b[2]. Then, a varies directly as b means: Proportionally, a[2] : a[1] = b[2] : b[1] 10 is five times 2. Therefore, a[2] will be five times 7. Problem 1. a varies directly as b. When b = 12, a = 27. What is the value of a when b = 4? To see the answer, pass your mouse over the colored area. To cover the answer again, click "Refresh" ("Reload"). Do the problem yourself first! The value of b has gone from 12 to 4. Its final value, then, is a third of its initial value. Therefore, the final value of a will be a third of 27, which is 9. Problem 2. When b = 6, a = 42. What is the value of a when b = 9? a[2] : a[1] = b[2] : b[1]. a[2] : 42 = 9 : 6. Alternately, 42 is seven times 6. Therefore, a[2] will be seven times 9, which is 63. Or, 9 is one and a half times 6. (6 + 3 = 9.) Therefore, a[2] will be one and a half times 42. 42 + 21 = 63. The constant of proportionality When a varies directly as b, we often say, "a is proportional to b." When that is the case, the relationship between a and b takes this algebraic form: a = kb. k is called the constant of proportionality. The circumference C of a circle, for example, varies directly as the diameter D. The constant of proportionality is called π. C = πD. That constant has been the subject of investigation for over 2500 years. In scientific problems, the constant of proportionality is determined by experiment. In what is called Hooke's Law, the force F that a stretched spring exerts is proportional to the distance x that the spring has stretched. F = kx. In other words, the greater the stretch x, the greater the force F. Example 3. a) For a given spring, F has the value 35 when the spring has stretched 8 a) inches. What is the constant of proportionality for that spring? Solution. F = kx. That is, 35 = k· 8 When a varies directly as b, the constant of proportionality is the quotient of any observed or given values. Note: The units on the right must equal those of a on the left -- distance, time, force, whatever they might be. Since b[1] is the denominator:, the units of b will cancel as if they were numbers. See the following problem. Problem 3. a) The distance d that an automobile travels varies directly as the time t a) that it travels. After 2 hours, the car has traveled 115 miles. Write a) the equation that relates d and t. Problem 4. Prove: Varies directly is a transitive relation. That is, if a varies directly as b, and b varies directly as c, then a varies directly as c. If a = k[1]b, and b = k[2]c, then a = k[1]k[2]c. Problem 5. If the side of a square doubles, how will the perimeter change? The perimeter will also double, because the perimeter P varies as the side s. P = 4s. The constant of proportionality is 4. Problem 6. If the diameter of a circle changes from 6 cm to 12 cm, how will the circumference change? The circumference will also double, because the circumference varies as the diameter. C = πD. Problem 7. If the diameter of a circle changes from 6 cm to 9 cm, how will the circumference change? In going from 6 cm to 9 cm, the diameter has increased one and a half times; that is the ratio of 9 to 6. Therefore, the circumference will also increase one and a half times. Problem 8. The circumference C of a circle varies directly as the perimeter of the circumscribed square. Varies as the square A quantity a varies as the square of a quantity b, if, when b changes, a changes by the square of that ratio. Thus, if b changes by a factor of 4, then a will change by a factor of 4² = 16. If b changes to one third of its value, then a will change to one ninth of its value. Problem 9. a varies as the square of b. When b = 7, a = 4. What is the value of a when b = 35? In going from 7 to 35, b has changed by a factor of 5. a therefore will change by a factor of 5² = 25. a = 25· 4 = 100. Problem 10. a varies as the square of b. When b = 20, a = 32. What is the value of a when b = 15? In going from 20 to 15, b has become three fourths of its value. 15 is three fourths of 20. a therefore will become nine sixteenths of its value. Theorem. If a varies directly as b, then a² will vary as b². This is easily proved if we write the ratios in fractional form. a varies directly as b means: Therefore, on squaring both sides: This implies This means that a² varies as b²; which is what we wanted to prove. Problem 11. The area A of a circle varies directly as the area of the circumscribed square. That is, as the area of the square changes, the area of the circle changes proportionally. a) Show that this implies that the area A of the circle varies as the a) square of the radius r. The side of the circumscribed square is equal to the diameter D of the circle. Therefore the area of the circumscribed square is equal to D². Hence the area A of the circle varies as D². But D varies directly as r -- D = 2r -- and therefore, according to the theorem, D² varies as r². Therefore, since A varies as D², and D² varies as r², then transitively, A varies as r². The area of the circle varies as the square of the radius. b) If the radius of a circle changes from 6 cm to 12 cm, how will the b) area change? In going from 6 cm to 12 cm, the radius has doubled, that is, it has changed by a factor of 2. The area therefore will change by a factor of 2² = 4. It will be four times larger. c) What is the constant of proportionality that relates the area A to r²? π. A = πr². Example 4. The surface area of a sphere. The surface area of a sphere is proportional to the surface area of the circumscribed cube. Now, each face of the cube is a square whose side is equal to the diameter D of the sphere. And a cube has 6 faces. Therefore, the surface area of the cube is equal to 6D². In other words, the surface area A of a sphere is proportional to the square of its diameter. Do you know what the constant of proportionality is? π. A = πD² Problem 12. Show that the surface area of a sphere varies as the square of its radius. Write the equation that relates the surface area A to the radius r. Since A = πD², and D = 2r, then A = π(2r)² = 4πr². Section 2: Varies inversely. Varies as the inverse square. Table of Contents | Home Please make a donation to keep TheMathPage online. Even $1 will help. Copyright © 2014 Lawrence Spector Questions or comments? E-mail: themathpage@nyc.rr.com
{"url":"http://www.themathpage.com/alg/variation.htm","timestamp":"2014-04-16T10:49:03Z","content_type":null,"content_length":"27750","record_id":"<urn:uuid:c375c4e2-ad37-4fa8-895a-4eb1a2d950a7>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00127-ip-10-147-4-33.ec2.internal.warc.gz"}
The Mathematics of Planet Earth John Baez SAMS 2012, Stellenbosch University October 30, 2012 The Mathematics of Planet Earth The International Mathematical Union has declared 2013 to be the year of The Mathematics of Planet Earth. The global warming crisis is part of a bigger transformation in which humanity realizes that the Earth is a finite system and that our population, energy usage, and the like cannot continue to grow exponentially. If civilization survives this transformation, it will affect mathematics - and be affected by it - just as dramatically as the agricultural revolution or industrial revolution. We cannot know for sure what the effect will be, but we can already make some To watch the talk, click on the video above. To see slides of the talk, click here. To see the source of any piece of information in these slides, just click on it! If you're interested in science, energy and the environment, visit my blog and check out the Azimuth Project, which is a collaboration to create a focal point for scientists and engineers interested in saving the planet. We've got some interesting projects going. © 2012 John Baez
{"url":"http://math.ucr.edu/home/baez/planet/","timestamp":"2014-04-20T10:48:33Z","content_type":null,"content_length":"2371","record_id":"<urn:uuid:321e6086-bc8f-4df9-8191-3a0bd5b03050>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00588-ip-10-147-4-33.ec2.internal.warc.gz"}
Quantum Gravity and String Theory 0908 Submissions [25] viXra:0908.0112 [pdf] submitted on 30 Aug 2009 On Holography and Quantum Mechanics in Yang's Noncommutative Spacetime with a Lower and Upper Scale Authors: Carlos Castro Comments: 14 pages, This article appeared in Progress in Physics vol. 2 April (2006) 86-92. We explore Yang's Noncommutative space-time algebra (involving two length scales) within the context of QM defined in Noncommutative spacetimes; the Holographic principle and the area-coordinates algebra in Clifford spaces. Casimir invariant wave equations corresponding to Noncommutative coordinates and momenta in d-dimensions can be recast in terms of ordinary QM wave equations in d+2-dimensions. It is conjectured that QM over Noncommutative spacetimes (Noncommutative QM) may be described by ordinary QM in higher dimensions. Novel Moyal-Yang-Fedosov-Kontsevich star products deformations of the Noncommutative Poisson Brackets (NCPB) are employed to construct star product deformations of scalar field theories. Finally, generalizations of the Dirac-Konstant and Klein-Gordon-like equations relevant to the physics of D-branes and Matrix Models are presented. Category: Quantum Gravity and String Theory [24] viXra:0908.0111 [pdf] submitted on 28 Aug 2009 On Modified Weyl-Heisenberg Algebras, Noncommutativity Matrix-Valued Planck Constant and QM in Clifford Spaces Authors: Carlos Castro Comments: 22 pages, This article appeared in the Journal of Physics A : Math. Gen 39 (2006) 14205-14229. A novel Weyl-Heisenberg algebra in Clifford-spaces is constructed that is based on a matrix-valued H^AB extension of Planck's constant. As a result of this modifiedWeyl-Heisenberg algebra one will no longer be able to measure, simultaneously, the pairs of variables (x, p[x]); (x, p[y]); (x, p[z]); (y, p[x]), ... with absolute precision. New Klein-Gordon and Dirac wave equations and dispersion relations in Clifford-spaces are presented. The latter Dirac equation is a generalization of the Dirac-Lanczos-Barut-Hestenes equation. We display the explicit isomorphism between Yang's Noncommutative space-time algebra and the area-coordinates algebra associated with Clifford spaces. The former Yang's algebra involves noncommuting coordinates and momenta with a minimum Planck scale λ (ultraviolet cutoff) and a minimum momentum p = ℏ/R (maximal length R, infrared cutoff ). The double-scaling limit of Yang's algebra λ → 0, R → ∞, in conjunction with the large n → ∞ limit, leads naturally to the area quantization condition λR = L^2 = nλ^2 ( in Planck area units ) given in terms of the discrete angular-momentum eigenvalues n. It is shown how Modified Newtonian dynamics is also a consequence of Yang's algebra resulting from the modified Poisson brackets. Finally, another noncommutative algebra ( which differs from the Yang's algebra ) and related to the minimal length uncertainty relations is presented . We conclude with a discussion of the implications of Noncommutative QM and QFT's in Clifford-spaces. Category: Quantum Gravity and String Theory [23] viXra:0908.0108 [pdf] submitted on 29 Aug 2009 The Dirac Equation in a Gravitational Field Authors: Jack Sarfatti Comments: 7 pages Einstein's equivalence principle implies that Newton's gravity force has no local objective meaning. It is an inertial force, i.e. a contingent artifact of the covariantly tensor accelerating (non-zero g's) Local Non-Inertial Frame (LNIF) detector. Indeed, Newton's gravity force disappears in a locally coincident non-accelerating (zero g) Local Inertial Frame (LIF). The presence or absence of tensor spacetime curvature is completely irrelevant to this fact. In the case of an extended test body, these remarks apply only to the Center of Mass (COM). Stresses across separated parts of the test body caused by the local objective tensor curvature are a logically independent separate issue. Garbling this distinction has generated not-even-wrong critiques of the equivalence principle among "philosophers of physics" and even among some venerable confused theoretical physicists. Non-standard terms coupling the spin-connection to the commutator of the Dirac matrices and to the Lorentz group Lie algebra generators are conjectured. Category: Quantum Gravity and String Theory [22] viXra:0908.0107 [pdf] submitted on 28 Aug 2009 Gauge Theories of Kac-Moody Extensions of W[∞] Algebras as Effective Field Theories of Colored W[∞] Strings Authors: Carlos Castro Comments: 11 pages, This article appeared in Advanced Studies in Theoretical Physics, 2, no. 17, (2008) 825 - 835. A novel invariant gauge field theory realization of Kac-Moody extensions of w[∞](w[1+∞]) algebras and based on a Lie group G is constructed. The most relevant physical feature of this theory is that it describes an effective field theory of "colored" internal w[∞](w[1+∞]) strings when G = SU(3). We conclude with a discussion of how these theories might provide infinite higher conformal-spins extensions of Grand Unified Models and the Standard Model in four dimensions. Category: Quantum Gravity and String Theory [21] viXra:0908.0106 [pdf] submitted on 28 Aug 2009 The Extended Relativity Theory in Born-Clifford Phase Spaces with a Lower and Upper Length Scales and Clifford Group Geometric Unification Authors: Carlos Castro Comments: 61 pages, This article appeared in Foundations of Physics vol 35, no.6 (2005) 971. We construct the Extended Relativity Theory in Born-Clifford-Phase spaces with an upper R and lower length λ scales (infrared/ultraviolet cutoff ). The invariance symmetry leads naturally to the real Clifford algebra Cl(2, 6,R) and complexified Clifford Cl[C](4) algebra related to Twistors. A unified theory of all Noncommutative branes in Clifford-spaces is developed based on the Moyal-Yang star product deformation quantization whose deformation parameter involves the lower/upper scale (ℏλ/R). Previous work led us to show from first principles why the observed value of the vacuum energy density (cosmological constant ) is given by a geometric mean relationship ρ ~ L^-2[Planck]R^-2 = L^-4[P](L[Planck]/R)^2 ~ 10^-122M^4[Planck], and can be obtained when the infrared scale R is set to be of the order of the present value of the Hubble radius. We proceed with an extensive review of Smith's 8D model based on the Clifford algebra Cl(1,7) that reproduces at low energies the physics of the Standard Model and Gravity, including the derivation of all the coupling constants, particle masses, mixing angles, ....with high precision. Geometric actions are presented like the Clifford-Space extension of Maxwell's Electrodynamics, and Brandt's action related to the 8D spacetime tangentbundle involving coordinates and velocities ( Finsler geometries ). Finally we outline the reasons why a Clifford-Space Geometric Unification of all forces is a very reasonable avenue to consider and propose an Einstein-Hilbert type action in Clifford-Phase spaces (associated with the 8D Phase space) as a Unified Field theory action candidate that should reproduce the physics of the Standard Model plus Gravity in the low energy limit. Category: Quantum Gravity and String Theory [20] viXra:0908.0104 [pdf] submitted on 27 Aug 2009 Strings and Membranes from Einstein Gravity, Matrix Models and W[∞] Gauge Theories as paths to Quantum Gravity Authors: Carlos Castro Comments: 49 pages, This article appeared in the Int. J. Mod. Phys. A 23, no. 24 (2008) 3891 It is shown how w[∞],w[1+∞] Gauge Field Theory actions in 2D emerge directly from 4D Gravity. Strings and Membranes actions in 2D and 3D originate as well from 4D Einstein Gravity after recurring to the nonlinear connection formalism of Lagrange-Finsler and Hamilton-Cartan spaces. Quantum Gravity in 3D can be described by aW1 Matrix Model in D = 1 that can be solved exactly via the Collective Field Theory method. We describe why a quantization of 4D Gravity could be attained via a 2D Quantum W[∞] gauge theory coupled to an infinite-component scalar-multiplet. A proof that non-critical W [∞] (super) strings are devoid of BRST anomalies in dimensions D = 27 (D = 11), respectively, follows and which coincide with the critical (super) membrane dimensions D = 27 (D = 11). We establish the correspondence between the states associated with the quasi finite highest weights irreducible representations of W[∞],W[∞] algebras and the quantum states of the continuous Toda molecule. Schroedinger-like QM wave functional equations are derived and solutions are found in the zeroth order approximation. Since higher-conformal spin W[∞] symmetries are very relevant in the study of 2D W[∞] Gravity, the Quantum Hall effect, large N QCD, strings, membranes, ...... it is warranted to explore further the interplay among all these theories. Category: Quantum Gravity and String Theory [19] viXra:0908.0102 [pdf] submitted on 27 Aug 2009 On Generalized Yang-Mills Theories and Extensions of the Standard Model in Clifford (Tensorial) Spaces Authors: Carlos Castro Comments: 31 pages, This article appeared in Annals of Physics vol. 321, no.4 (2006) 813-839. We construct the Clifford-space tensorial-gauge fields generalizations of Yang-Mills theories and the Standard Model that allows to predict the existence of new particles (bosons, fermions) and tensor-gauge fields of higher-spins in the 10 Tev regime. We proceed with a detailed discussion of the unique D[4] - D[5] - E[6] - E[7] - E[8] model of Smith based on the underlying Clifford algebraic structures in D = 8, and which furnishes all the properties of the Standard Model and Gravity in four-dimensions, at low energies. A generalization and extension of Smith's model to the full Clifford-space is presented when we write explictly all the terms of the extended Clifford-space Lagrangian. We conclude by explaining the relevance of multiple-foldings of D = 8 dimensions related to the modulo 8 periodicity of the real Cliford algebras and display the interplay among Clifford, Division, Jordan and Exceptional algebras, within the context of D = 26, 27, 28 dimensions, corresponding to bosonic string, M and F theory, respectively, advanced earlier by Smith. To finalize we describe explicitly how the E[8] X E[8] Yang-Mills theory can be obtained from a Gauge Theory based on the Clifford ( 16 ) group. Category: Quantum Gravity and String Theory [18] viXra:0908.0100 [pdf] submitted on 26 Aug 2009 Noncommutative (Super) P-Branes and Moyal-Yang Star Products with a Lower and Upper Scale Authors: Carlos Castro Comments: 8 pages, This article appeared in Phys. Letts B 626 (2005) 209. Noncommutative p-brane actions, for even p+1 = 2n-dimensional world-volumes, are written explicitly in terms of the novel Moyal-Yang ( Fedosov-Kontsevich ) star product deformations of the Noncommutative Nambu Poisson Brackets (NCNPB) that are associated with the noncommuting world-volume coordinates q^A, p^A for A = 1, 2, 3, ...n. The latter noncommuting coordinates obey the noncommutative Yang algebra with an ultraviolet L[P] (Planck) scale and infrared (R ) scale cutoff. It is shown why our p-brane actions in the "classical" limit ℏ[eff] = ℏL[P]/R → 0 still acquire nontrivial noncommutative corrections that differ from ordinary p-brane actions. Super p-branes actions in the light-cone gauge are also amenable to Moyal-Yang star product deformations as well due to the fact that p-branes moving in flat spacetime backgrounds, in the light-cone gauge, can be recast as gauge theories of volume-preserving diffeomorphisms. The most general construction of noncommutative super p-branes actions based on non ( anti ) commuting superspaces and quantum group methods remains an open problem. Category: Quantum Gravity and String Theory [17] viXra:0908.0095 [pdf] submitted on 25 Aug 2009 An Exceptional E[8] Gauge Theory of Gravity in D = 8, Clifford Spaces and Grand Unification Authors: Carlos Castro Comments: 22 pages, This article will appear in the Int J. Geom. Meth in Mod Phys vol 6 , no. 6 (Sept 2009) A candidate action for an Exceptional E[8] gauge theory of gravity in 8D is constructed. It is obtained by recasting the E[8] group as the semi-direct product of GL(8,R) with a deformed Weyl-Heisenberg group associated with canonical-conjugate pairs of vectorial and antisymmetric tensorial generators of rank two and three. Other actions are proposed, like the quartic E[8] group-invariant action in 8D associated with the Chern-Simons E[8] gauge theory defined on the 7-dim boundary of a 8D bulk. To finalize, it is shown how the E[8] gauge theory of gravity can be embedded into a more general extended gravitational theory in Clifford spaces associated with the Cl(16) algebra and providing a solid geometrical program of a grand-unification of gravity with Yang-Mills theories. The key question remains if this novel gravitational model based on gauging the E8 group may still be renormalizable without spoiling unitarity at the quantum level. Category: Quantum Gravity and String Theory [16] viXra:0908.0094 [pdf] submitted on 25 Aug 2009 Complex Gravitational Theory and Noncommutative Gravity Authors: Carlos Castro Comments: 11 pages, This article appeared in Phys Letts B 675, (2009) 226-230 Born's reciprocal relativity in flat spacetimes is based on the principle of a maximal speed limit (speed of light) and a maximal proper force (which is also compatible with a maximal and minimal length duality) and where coordinates and momenta are unified on a single footing. We extend Born's theory to the case of curved spacetimes and construct a deformed Born reciprocal general relativity theory in curved spacetimes (without the need to introduce star products) as a local gauge theory of the deformed Quaplectic group that is given by the semi-direct product of U(1,3) with the deformed (noncommutative) Weyl-Heisenberg group corresponding to noncommutative generators [Z[a],Z[b]] ≠ 0. The Hermitian metric is complex-valued with symmetric and nonsymmetric components and there are two different complex-valued Hermitian Ricci tensors R[μν], S[μν]. The deformed Born's reciprocal gravitational action linear in the Ricci scalars R, S with Torsion-squared terms and BF terms is presented. The plausible interpretation of Z[μ] = E[μ]^a Z[a] as noncommuting p-brane background complex spacetime coordinates is discussed in the conclusion, where E[μ]^a is the complex vielbein associated with the Hermitian metric G[μν] = g[(μν)] + ig[[μν]] = E[μ]^a Ē[ν]^b This could be one of the underlying reasons why string-theory involves gravity. Category: Quantum Gravity and String Theory [15] viXra:0908.0090 [pdf] submitted on 24 Aug 2009 On the Noncommutative and Nonassociative Geometry of Octonionic Spacetime, Modified Dispersion Relations and Grand Unification Authors: Carlos Castro Comments: 23 pages, This article appeared in the J. Math. Phys, 48, no. 7 (2007) 073517. The Octonionic Geometry (Gravity) developed long ago by Oliveira and Marques is extended to Noncommutative and Nonassociative Spacetime coordinates associated with octonionic-valued coordinates and momenta. The octonionic metric G[μν] already encompasses the ordinary spacetime metric g[μν], in addition to the Maxwell U(1) and SU(2) Yang-Mills fields such that implements the Kaluza-Klein Grand Unification program without introducing extra spacetime dimensions. The color group SU(3) is a subgroup of the exceptional G[2] group which is the automorphism group of the octonion algebra. It is shown that the flux of the SU(2) Yang-Mills field strength F[μν] through the area momentum Σ^μν in the internal isospin space yields corrections O(1/M^2[Planck]) to the energy-momentum dispersion relations without violating Lorentz invariance as it occurs with Hopf algebraic deformations of the Poincare algebra. The known Octonionic realizations of the Clifford Cl(8),Cl(4) algebras should permit the construction of octonionic string actions that should have a correspondence with ordinary string actions for strings moving in a curved Clifford-space target background associated with a Cl(3, 1) algebra. Category: Quantum Gravity and String Theory [14] viXra:0908.0089 [pdf] submitted on 24 Aug 2009 The Large N Limit of Exceptional Jordan Matrix Models and M, F Theory Authors: Carlos Castro Comments: 14 pages, This article appeared in the Journal of Geometry and Physics, 57 (2007) 1941-1949 The large N → ∞ limit of the Exceptional F[4],E[6] Jordan Matrix Models of Smolin-Ohwashi leads to novel Chern-Simons Membrane Lagrangians which are suitable candidates for a nonperturbative bosonic formulation of M Theory in D = 27 real, complex dimensions, respectively. Freudenthal algebras and triple Freudenthal products permits the construction of a novel E[7] X SU(N) invariant Matrix model whose large N limit yields generalized nonlinear sigma models actions on 28 complex dimensional backgrounds associated with a 56 real-dim phase space realization of the Freudenthal algebra. We argue why the latter Matrix Model, in the large N limit, might be the proper arena for a bosonic formulation of F theory. To finalize we display generalized Dirac-Nambu-Goto membrane actions in terms of 3 X 3 X 3 cubic matrix entries that match the number of degrees of freedom of the 27-dim exceptional Jordan algebra J[3][0]. Category: Quantum Gravity and String Theory [13] viXra:0908.0088 [pdf] submitted on 24 Aug 2009 On the Riemann Hypothesis and Tachyons in Dual String Scattering Amplitudes Authors: Carlos Castro Comments: 13 pages, This article appeared in the International Journal of Geometric Methods in Modern Physics (IJGMMP) vol 3, no. 2 (2006) 187-199. It is the purpose of this work to pursue a novel physical interpretation of the nontrivial Riemann zeta zeros and prove why the location of these zeros z[n] = 1/2+iλ[n] corresponds physically to tachyonic-resonances/tachyonic-condensates, originating from the scattering of two on-shell tachyons in bosonic string theory. Namely, we prove that if there were nontrivial zeta zeros (violating the Riemann hypothesis) outside the critical line Real z = 1/2 (but inside the critical strip), these putative zeros do not correspond to any poles of the bosonic open string scattering (Veneziano) amplitude A(s, t, u). The physical relevance of tachyonic-resonances/tachyonic-condensates in bosonic string theory, establishes an important connection between string theory and the Riemann Hypothesis. In addition, one has also a geometrical interpretation of the zeta zeros in the critical line in terms of very special (degenerate) triangular configurations in the upper-part of the complex plane. Category: Quantum Gravity and String Theory [12] viXra:0908.0084 [pdf] submitted on 22 Aug 2009 The Extended Relativity Theory in Clifford Spaces Authors: Carlos Castro, M. Pavšič Comments: 65 pages, This article appeared in Progress in Physics vol 1 (April 2005) 31-64 An introduction to some of the most important features of the Extended Relativity theory in Clifford-spaces (C-spaces) is presented whose "point" coordinates are non-commuting Clifford-valued quantities which incorporate lines, areas, volumes, hyper-volumes.... degrees of freedom associated with the collective particle, string, membrane, p-brane,... dynamics of p-loops (closed p-branes) in target Ddimensional spacetime backgrounds. C-space Relativity naturally incorporates the ideas of an invariant length (Planck scale), maximal acceleration, non-commuting coordinates, supersymmetry, holography, higher derivative gravity with torsion and variable dimensions/signatures. It permits to study the dynamics of all (closed) p-branes, for all values of p, on a unified footing. It resolves the ordering ambiguities in QFT, the problem of time in Cosmology and admits superluminal propagation ( tachyons ) without violations of causality. A discussion of the maximalacceleration Relativity principle in phase-spaces follows and the study of the invariance group of symmetry transformations in phase-space allows to show why Planck areas are invariant under acceleration-boosts transformations . This invariance feature suggests that a maximal-string tension principle may be operating in Nature. We continue by pointing out how the relativity of signatures of the underlying n-dimensional spacetime results from taking different n-dimensional slices through C-space. The conformal group in spacetime emerges as a natural subgroup of the Clifford group and Relativity in C-spaces involves natural scale changes in the sizes of physical objects without the introduction of forces nor Weyl's gauge field of dilations. We finalize by constructing the generalization of Maxwell theory of Electrodynamics of point charges to a theory in C-spaces that involves extended charges coupled to antisymmetric tensor fields of arbitrary rank. In the concluding remarks we outline briefly the current promising research programs and their plausible connections with C-space Relativity. Category: Quantum Gravity and String Theory [11] viXra:0908.0083 [pdf] submitted on 22 Aug 2009 The Exceptional E[8] Geometry of Clifford (16) Superspace and Conformal Gravity Yang-Mills Grand Unification Authors: Carlos Castro Comments: 33 pages, This article appeared in the IJGMMP vol 6, no. 3 (2009) 385. We continue to study the Chern-Simons E[8] Gauge theory of Gravity developed by the author which is a unified field theory (at the Planck scale) of a Lanczos-Lovelock Gravitational theory with a E[8] Generalized Yang-Mills (GYM) field theory, and is defined in the 15D boundary of a 16D bulk space. The Exceptional E[8] Geometry of the 256-dim slice of the 256 X 256-dimensional flat Clifford (16) space is explicitly constructed based on a spin connection Ω[M]^AB, that gauges the generalized Lorentz transformations in the tangent space of the 256-dim curved slice, and the 256 X 256 components of the vielbein field E[M]^A, that gauge the nonabelian translations. Thus, in one-scoop, the vielbein E[M]^A encodes all of the 248 (nonabelian) E[8] generators and 8 additional (abelian) translations associated with the vectorial parts of the generators of the diagonal subalgebra [Cl(8) ⊗ Cl(8)][diag] ⊂ Cl(16). The generalized curvature, Ricci tensor, Ricci scalar, torsion, torsion vector and the Einstein-Hilbert-Cartan action is constructed. A preliminary analysis of how to construct a Clifford Superspace (that is far richer than ordinary superspace) based on orthogonal and symplectic Clifford algebras is presented. Finally, it is shown how an E[8] ordinary Yang-Mills in 8D, after a sequence of symmetry breaking processes E[8] → E[7] → E[6] → SO(8, 2), and performing a Kaluza-Klein-Batakis compactification on CP^2, involving a nontrivial torsion, leads to a (Conformal) Gravity and Yang-Mills theory based on the Standard Model in 4D. The conclusion is devoted to explaining how Conformal (super) Gravity and (super) Yang-Mills theory in any dimension can be embedded into a (super) Clifford-algebra-valued gauge field theory. Category: Quantum Gravity and String Theory [10] viXra:0908.0081 [pdf] submitted on 21 Aug 2009 The Clifford Space Geometry of Conformal Gravity and U(4) X U(4) Yang-Mills Unification Authors: Carlos Castro Comments: 22 pages, This article will appear in the Int J. of Mod. Phys A. It is shown how a Conformal Gravity and U(4) X U(4) Yang-Mills Grand Unification model in four dimensions can be attained from a Clifford Gauge Field Theory in C-spaces (Clifford spaces) based on the (complex) Clifford Cl(4,C) algebra underlying a complexified four dimensional spacetime (8 real dimensions). Upon taking a real slice, and after symmetry breaking, it leads to ordinary Gravity and the Standard Model in four real dimensions. A brief conclusion about the Noncommutative star product deformations of this Grand Unified Theory of Gravity with the other forces of Nature is presented. Category: Quantum Gravity and String Theory [9] viXra:0908.0080 [pdf] submitted on 21 Aug 2009 P-Branes as Antisymmetric Nonabelian Tensorial Gauge Field Theories of Diffeomorphisms in P + 1 Dimensions Authors: Carlos Castro Comments: 44 pages, This article has been submitted to the Journal of Mathematical Physics, Aug 2009 Long ago, Bergshoeff, Sezgin, Tanni and Townsend have shown that the light-cone gauge-fixed action of a super p-brane belongs to a new kind of supersymmetric gauge theory of p-volume preserving diffeomorphisms (diffs) associated with the p-spatial dimensions of the extended object. These authors conjectured that this new kind of supersymmetric gauge theory must be related to an infinite-dim nonabelian antisymmetric gauge theory. It is shown in this work how this new theory should be part of an underlying antisymmetric nonabelian tensorial gauge field theory of p+1-dimensional diffs (upon supersymmetrization) associated with the world volume evolution of the p-brane. We conclude by embedding the latter theory into a more fundamental one based on the Clifford-space geometry of the p-brane configuration space. Category: Quantum Gravity and String Theory [8] viXra:0908.0077 [pdf] submitted on 21 Aug 2009 Dark Energy is Needed for the Consistency of Quantum Electrodynamics Heisenberg's Biggest Blunder? Authors: Jack Sarfatti Comments: 4 pages The argument that virtual photons can be globally gauged away I think is spurious. Indeed, the inconsistency with the boson commutation rules in Heisenberg and Pauli's historic 1929 attempt to quantize the electromagnetic field disappears once one uses the recently discovered dark energy density. Category: Quantum Gravity and String Theory [7] viXra:0908.0023 [pdf] replaced on 2013-09-09 06:50:38 Genes and Memes Authors: Matti Pitkänen Comments: 672 Pages. The first part of book discusses the new physics relevant to biology and the vision about Universe as topological quantum computer (tqc). </p><p Second part describes concrete physical models. </p> <p> <OL> <LI> The notion of many-sheeted DNA and a model of genetic code inspired by the notion of Combinatorial Hierarchy predicting what I call memetic code are introduced. The almost exact symmetries of the code table with respect to the third letter inspire the proposal that genetic code could have evolved as fusion of two-letter code and single-letter code. <LI> A model for how genome and cell membrane could act as tqc is developed. Magnetic flux tubes containing dark matter characterized by large value of Planck constant would make living matter a macroscopic quantum system. DNA nucleotides and lipids of the cell membrane would be connected by magnetic flux tubes and the flow of the 2-D liquid formed by lipids would induce dynamical braiding defining the computation. <LI> The net of magnetic flux tubes could explain the properties of gel phase. Phase transitions reducing Planck constant would induce a contraction of the flux tubes explaining why bio-molecules manage to find each other in a dense soup of bio-molecules. The topology of the net would be dynamical and ADP &harr; ATP transformation could affect it. The anomalies related to ionic currents, nerve pulse activity, and interaction of ELF radiation with vertebrate brain find an explanation in this framework. The number theoretic entanglement entropy able to have negative values could be seen as the real quintenssence associated with the metabolic energy transfer, and the poorly understood high energy phosphate bond could be interpreted in terms of negentropic entanglement rather than ordinary bound state entanglement. <LI> The discoveries of Peter Gariaev about interaction of ordinary and laser light with genome combined with ideas about dark matter and water memory lead to a model for the interaction of photons with DNA. Dark &harr; ordinary transformation for photons could allow to "see" dark matter by allowing ordinary light to interact with DNA. <LI> A physical model for genetic code emerged from an attempt to understand the mechanism behind water memory. Dark nuclei which sizes zoomed up to atomic size scale could represent genes. The model for dark nucleon consisting of three quarks predicts counterparts of 64 DNAs, 64 RNAs, and 20 aminoacids and allows to identify genetic code as a natural mapping of DNA type states to amino-acid type states and consistent with vertebrate genetic code. </OL> </p><p> The third part of the book discusses number theoretical models of the genetic code based on p-adic thermodynamics and maximization of entropy or negentropy. These models reproduce the genetic code but fail to make killer predictions. Category: Quantum Gravity and String Theory [6] viXra:0908.0019 [pdf] replaced on 2013-09-09 07:09:54 Towards M-Matrix Authors: Matti Pitkänen Comments: 1119 Pages. This book is devoted to a detailed representation of the recent state of quantum TGD. </p><p> The first part of the book summarizes quantum TGD in its recent form. </p><p> <OL> <LI> General coordinate invariance and generalized super-conformal symmetries are the basic symmetries of TGD and Equivalence Principle can be generalized using generalized coset construction. <LI> In zero energy ontology the basis of classical WCW spinors fields forms unitary U-matrix having M-matrices as its orthogonal rows. M-matrix defines time-like entanglement coefficients between positive and negative energy parts of the zero energy states. M-matrix is a product of a hermitian density matrix and unitary S-matrix commuting with it. The hermitian density matrices define infinite-dimensional Lie-algebra extending to a generalization of Kac-Moody type algebra with generators defined as products of hermitian density matrices and powers of S-matrix. Yangian type algebra is obtained if only non-negative powers of S are allowed. The interpretation is in terms of the hierarchy of causal diamonds with size scales coming as integer multiples of CP<sub>2</sub> size scale. Zero energy states define their own symmetry algebra. For generalized Feynman diagrams lines correspond to light-like 3-surfaces and vertices to 2-D surfaces. <LI> Finite measurement resolution realized using fractal hierarchy of causal diamonds (CDs) inside CDs implies a stringy formulation of quantum TGD involving replacement of 3-D light-like surfaces with braids representing the ends of strings. Category theoretical formulation leads to a hierarchy of algebras forming an operad. <LI> Twistors emerge naturally in TGD framework and several proposal for twistorialization of TGD is discussed in two chapters devoted to the topic. Twistorial approach combined with zero energy ontology, bosonic emergence, and the properties of the Chern-Simons Dirac operator leads to the conjecture that all particles -also string like objects- can be regarded as bound states of massless particles identifiable as wormhole throats. Also virtual particles would consist of massles wormhole throats but bound state property is not assumed anymore and the energies of wormhole throats can have opposite signs so that space-like momentum exchanges become possible. This implies extremely strong constraints on loop momenta and manifest finiteness of loop integrals. </p><p> An essential element of the formulation is exact Yangian symmetry obtained by replacing the loci of multilocal symmetry generators of Yangian algebra with partonic 2-surfaces so that conformal algebra of Minkowski space is extened to infinite-dimensional algebra bringing in also the conformal algebra assigned to the partonic 2-surfaces. Yangian symmetry requires the vanishing of both UV and IR divergences achieved if the physical particles are bound states of massless wormhole throats. </p><p> Rather general arguments suggest the formulation of TGD in terms of holomorphic 6-surfaces in the product CP<sub>3</sub>&times; CP<sub>3</sub> of twistor spaces leading to a unique partial differential equations determining these surfaces in terms of homogenous polynomials of the projective complex coordinates of the two twistor spaces. </OL> </p><p> Second part of the book is devoted to hyper-finite factors and hierarchy of Planck constants. </p><p> <OL> <LI> The Clifford algebra of WCW is hyper-finite factor of type II<sub>1</sub>. The inclusions provide a mathematical description of finite measurement resolution. The included factor is analogous to gauge symmetry group since the action of the included factor creates states not distinguishable from the original one. TGD Universe would be analogous to Turing machine able to emulate any internally consistent gauge theory (or more general theory) so that finite measurement resolution would provide TGD Universe with huge simulational powers. <LI> In TGD framework dark matter corresponds to ordinary particles with non-standard value of Planck constant. The simplest view about the hierarchy of Planck constants is as an effective hierarchy describable in terms of local, singular coverings of the imbedding space. The basic observation is that for K&auml;hler action the time derivatives of the imbedding space coordinates are many-valued functions of canonical momentum densities. If all branches for given values of canonical momentum densities are allowed, one obtains the analogs of many-sheeted Riemann surfaces with each sheet giving same contribution to the K&auml;hler action so that Planck constant is effectively a multiple of the ordinary Planck constant. Dark matter could be in quantum Hall like phase localized at light-like 3-surfaces with macroscopic size and analogous to black-hole horizons. </OL> Category: Quantum Gravity and String Theory [5] viXra:0908.0018 [pdf] replaced on 2013-09-09 07:08:05 TGD as a Generalized Number Theory Authors: Matti Pitkänen Comments: 973 Pages. The focus of this book is the number theoretical vision about physics. This vision involves three loosely related parts. </p><p> <OL><LI> The fusion of real physic and various p-adic physics to a single coherent whole by generalizing the number concept by fusing real numbers and various p-adic number fields along common rationals. Extensions of p-adic number fields can be introduced by gluing them along common algebraic numbers to reals. Algebraic continuation of the physics from rationals and their their extensions to various number fields (generalization of completion process for rationals) is the key idea, and the challenge is to understand whether how one could achieve this dream. A profound implication is that purely local p-adic physics would code for the p-adic fractality of long length length scale real physics and vice versa, and one could understand the origins of p-adic length scale hypothesis. <LI> Second part of the vision involves hyper counterparts of the classical number fields defined as subspaces of their complexificationsnwith Minkowskian signature of metric. Allowed space-time surfaces would correspond to what might be callednhyper-quaternionic sub-manifolds of a hyper-octonionic space and mappable to M<sup>4</sup>&times; CP<sub>2</sub> in natural manner. One could assign to each point of space-time surface a hyper-quaternionic 4-plane which is the plane defined by the induced or modified gamma matrices defined by the canonical momentum currents of K&auml;hler action. Induced gamma matrices seem to be preferred mathematically: they correspond to modified gamma matrices assignable to 4-volume action, and one can develop arguments for why K&auml;hler action defines the dynamics. </p><p> Also a general vision about preferred extremals of K&auml;hler action emerges. The basic idea is that imbedding space allows octonionic structure and that field equations in a given space-time region reduce to the associativity of the tangent space or normal space: space-time regions should be quaternionic or co-quaternionic. The first formulation is in terms of the octonionic representation of the imbedding space Clifford algebra and states that the octonionic gamma "matrices" span a complexified quaternionic sub-algebra. Another formulation is in terms of octonion real-analyticity. Octonion real-analytic function f is expressible as f=q<sub>1</sub>+Iq<sub>2</sub>, where q<sub>i</sub> are quaternions and I is an octonionic imaginary unit analogous to the ordinary imaginary unit. q<sub>2 </sub> (q<sub>1</sub>) would vanish for quaternionic (co-quaternionic) space-time regions. The local number field structure of the octonion real-analytic functions with composition of functions as additional operation would be realized as geometric operations for space-time surfaces. The conjecture is that these two formulations are equivalent. <LI> The third part of the vision involves infinite primes identifiable in terms of an infinite hierarchy of second quantized arithmetic quantum fields theories on one hand, and as having representations as space-time surfaces analogous to zero loci of polynomials on the other hand. Single space-time point would have an infinitely complex structure since real unity can be represented as a ratio of infinite numbers in infinitely many manners each having its own number theoretic anatomy. Single space-time point would be in principle able to represent in its structure the quantum state of the entire universe. This number theoretic variant of Brahman=Atman identity would make Universe an algebraic hologram. </p><p> Number theoretical vision suggests that infinite hyper-octonionic or -quaternionic primes could could correspond directly to the quantum numbers of elementary particles and a detailed proposal for this correspondence is made. Furthermore, the generalized eigenvalue spectrum of the Chern-Simons Dirac operator could be expressed in terms of hyper-complex primes in turn defining basic building bricks of infinite hyper-complex primes from which hyper-octonionic primes are obtained by dicrete SU(3) rotations performed for finite hyper-complex primes. </OL> </p><p> Besides this holy trinity I will discuss in the first part of the book loosely related topics such as the relationship between infinite primes and non-standard numbers. </p><p> Second part of the book is devoted to the mathematical formulation of the p-adic TGD. The p-adic counterpart of integration is certainly the basic mathematical challenge. Number theoretical universality and the notion of algebraic continuation from rationals to various continuous number fields is the basic idea behind the attempts to solve the problems. p-Adic integration is also a central problem of modern mathematics and the relationship of TGD approach to motivic integration and cohomology theories in p-adic numberfields is discussed. </p><p> The correspondence between real and p-adic numbers is second fundamental problem. The key problem is to understand whether and how this correspondence could be at the same time continuous and respect symmetries at least in discrete sense. The proposed explanation of Shnoll effect suggests that the notion of quantum rational number could tie together p-adic physics and quantum groups and could allow to define real-p-adic correspondence satisfying the basic conditions. </p><p> The third part is develoted to possible applications. Included are category theory in TGD framework; TGD inspired considerations related to Riemann hypothesis; topological quantum computation in TGD Universe; and TGD inspired approach to Langlands program. Category: Quantum Gravity and String Theory [4] viXra:0908.0017 [pdf] replaced on 2013-09-09 07:04:49 Physics in Many-Sheeted Space-Time Authors: Matti Pitkänen Comments: 985 Pages. This book is devoted to what might be called classical TGD. </p><p> <OL> <LI> Classical TGD identifies space-time surfaces as kind of generalized Bohr orbits. It is an exact part of quantum TGD. <LI> The notions of many-sheeted space-time, topological field quantization and the notion of field/magnetic body, follow from simple topological considerations. Space-time sheets can have arbitrarily large sizes and their interpretation as quantum coherence regions implies that in TGD Universe macroscopic quantum coherence is possible in arbitrarily long scales. Also long ranged classical color and electro-weak fields are predicted. <LI> TGD Universe is fractal containing fractal copies of standard model physics at various space-time sheets and labeled by p-adic primes assignable to elementary particles and by the level of dark matter hierarchy characterized partially by the value of Planck constant labeling the pages of the book like structure formed by singular covering spaces of the imbedding space M<sup>4</sup>&times; CP<sub>2</sub> glued together along four-dimensional back. Particles at different pages are dark relative to each other since local interactions defined in terms of the vertices of Feynman diagram involve only particles at the same page. </p><p> The simplest view about the hierarchy of Planck constants is as an effective hierarchy describable in terms of local, singular coverings of the imbedding space. The basic observation is that for K&auml;hler action the time derivatives of the imbedding space coordinates are many-valued functions of canonical momentum densities. If all branches for given values of canonical momentum densities are allowed, one obtains the analogs of many-sheeted Riemann surfaces with each sheet giving same contribution to the K&auml;hler action so that Planck constant is effectively a multiple of the ordinary Planck constant. <LI> Zero energy ontology brings in additional powerful interpretational principle. </OL> </p><p> The topics of the book are organized as follows. <OL> <LI> In Part I extremals of K&auml;hler action are discussed and the notions of many-sheeted space-time, topological field quantization, and topological condensation and evaporation are introduced. <LI> In Part II many-sheeted-cosmology and astrophysics are summarized. p-Adic and dark matter hierarchies imply that TGD inspired cosmology is fractal. Cosmic strings and their deformations giving rise to magnetic flux tubes are basic objects of TGD inspired cosmology. Magnetic flux tubes can in fact be interpreted as carriers of dark energy giving rise to accelerated expansion via negative magnetic "pressure". The study of imbeddings of Robertson-Walker cosmology shows that critical and over-critical cosmology are unique apart from their duration. The idea about dark matter hierarchy was originally motivated by the observation that planetary orbits could be interpreted as Bohr orbits with enormous value of Planck constant, and this picture leads to a rather detailed view about macroscopically quantum coherent dark matter in astrophysics and cosmology. </p><p> <LI> Part III includes old chapters about implications of TGD for condensed matter physics. The phases of CP<sub>2</sub> complex coordinates could define phases of order parameters of macroscopic quantum phases manifesting themselves in the properties of living matter and even in hydrodynamics. For instance, Z<sup>0</sup> magnetic gauge field could make itself visible in hydrodynamics and Z<sup>0</sup> magnetic vortices could be involved with super-fluidity. </OL> Category: Quantum Gravity and String Theory [3] viXra:0908.0016 [pdf] replaced on 2013-09-09 07:06:16 Physics as Infinite-Dimensional Geometry Authors: Matti Pitkänen Comments: 478 Pages. The topics of this book is a vision about physics as infinite-dimensional K&auml;hler geometry of the "world of classical worlds" (WCW), with "classical world" identified either as light-like 3-D surface X<sup>3</sup> of a unique Bohr orbit like 4-surface X<sup>4</sup>(X<sup>3</sup>) or X<sup>4</sup>(X<sup>3</sup>) itself. The non-determinism of K&auml;hler action defining K&auml;hler function forces to generalize the notion of 3-surface. Zero energy ontology allows to formulate this generalization elegantly using a hierarchy of causal diamonds (CDs) defined as intersections of future and past directed light-cones, and a geometric realization of coupling constant evolution and finite measurement resolution emerges. </p><p> The general vision about quantum dynamics is that the basis for WCW spinor fields defines in zero energy ontology unitary U-matrix having as orthogonal rows M-matrices. Given M-matrix is expressible as a product of hermitian square root of density matrix and S-matrix. M-matrices define time-like entanglement coefficients between positive and negative energy parts of zero energy states represented by the modes of WCW spinor fields. </p><p> One encounters two challenges. <OL><LI> Provide WCW with K&auml;hler geometry consistent with 4-dimensional general coordinate invariance. Clearly, the definition of metric must assign to given light-like 3-surface X<sup>3</sup> a 4-surface X<sup>4</sup>(X<sup>3</sup>) as kind of Bohr orbit. <LI> Provide WCW with spinor structure. The idea is to express configuration space gamma matrices using super algebra generators expressible using second quantized fermionic oscillator operators for induced free spinor fields at X<sup>4</sup>(X<sup>3</sup>). Isometry generators and contractions of Killing vectors with gamma matrices would generalize Super Kac-Moody algebra. </OL> </p><p> The condition of mathematical existence poses stringent conditions on the construction. </p><p> <OL> <LI> The experience with loop spaces suggests that a well-defined Riemann connection exists only if this space is union of infinite-dimensional symmetric spaces. Finiteness requires that vacuum Einstein equations are satisfied. The coordinates labeling these symmetric spaces do not contribute to the line element and have interpretation as non-quantum fluctuating classical variables. <LI> The construction of the K&auml;hler structure requires the identification of complex structure. Direct construction of K&auml;hler function as action associated with a preferred extremal for K&auml; hler action leads to a unique result. The group theoretical approach relies on direct guess of isometries of the symmetric spaces involved. Isometry group generalizes Kac-Moody group by replacing finite-dimensional Lie group with the group of symplectic transformations of &delta; M<sup>4</sup><sub>+</sub>&times; CP<sub>2</sub>, where &delta; M<sup>4</sup><sub>+</sub> is the boundary of 4-dimensional future light-cone. The generalized conformal symmetries assignable to light-like 3-surfaces and boundaries of causal diamonds bring in stringy aspect and Equivalence Principle can be generalized in terms of generalized coset construction. <LI> Configuration space spinor structure geometrizes fermionic statistics and quantization of spinor fields. Quantum criticality can be formulated in terms of the modified Dirac equation for induced spinor fields allowing a realization of super-conformal symmetries and quantum gravitational holography. <LI> Zero energy ontology combined with the weak form of electric-magnetic duality led to a breakthrough in the understanding of the theory. The boundary conditions at light-like wormhole throats and at space-like 3-surfaces defined by the intersection of the space-time surface with the light-like boundaries of causal diamonds reduce the classical quantization of K&auml;hler electric charge to that for K&auml;hler magnetic charge. The integrability of field equations for the preferred extremals reduces to the condition that the flow lines of various isometry currents define Beltrami fields for which the flow parameter by definition defines a global coordinate. The assumption that isometry currents are proportional to the instanton current for K&auml;hler action reduces K&auml;hler function to a boundary term which by the weak form of electric-magnetic duality reduces to Chern-Simons term. This realizes TGD as almost topological QFT. <LI> There are also number theoretical conjectures about the character of the preferred extremals. The basic idea is that imbedding space allows octonionic structure and that field equations in a given space-time region should reduce to the associativity of the tangent space or normal space so that space-time regions should be quaternionic or co-quaternionic. The first formulation is in terms of the octonionic representation of the imbedding space Clifford algebra and states that the octonionic gamma "matrices" span a quaternionic sub-algebra. Another formulation is in terms of octonion real-analyticity. Octonion real-analytic function f is expressible as f=q<sub>1</sub>+Iq<sub>2</sub>, where q<sub>i</sub> are quaternions and I is an octonionic imaginary unit analogous to the ordinary imaginary unit. q<sub>2</sub> (q<sub>1</sub>) would vanish for quaternionic (co-quaternionic) space-time regions. The local number field structure of octonion real-analytic functions with composition of functions as additional operation would be realized as geometric operations for space-time surfaces. The conjecture is that these two formulations are equivalent. <LI> An important new interpretational element is the identification of the K& auml;hler action from Minkowskian space-time regions as a purely imaginary contribution identified as Morse function making possible quantal interference effects. The contribution from the Euclidian regions interpreted in terms of generalized Feynman graphs is real and identified as K&auml;hler function. These contributions give apart from coefficient identical Chern-Simons terms at wormhole throats and at the space-like ends of space-time surface: it is not clear whether only the contributions these 3-surfaces are present. <LI> Effective 2-dimensionality suggests a reduction of Chern-Simons terms to a sum of real and imaginary terms corresponding to the total areas of of string world sheets from Euclidian and Minkowskian string world sheets and partonic 2-surfaces, which are an essential element of the proposal for what preferred extremals should be. The duality between partonic 2-surfaces and string world sheets suggests that the total area of partonic 2-surfaces is same as that for string world sheets. <LI> The approach leads also to a highly detailed understanding of the Chern-Simons Dirac equation at the wormhole throats and space-like 3-surfaces and K&auml; hler Dirac equation in the interior of the space-time surface. The effective metric defined by the anticommutators of the modified gamma matrices has an attractive interpretation as a geometrization for parameters like sound velocity assigned with condensed matter systems in accordance with effective 2-dimensionality and strong form of holography. </OL> Category: Quantum Gravity and String Theory [2] viXra:0908.0015 [pdf] replaced on 2013-09-09 07:01:40 TGD and Fringe Physics Authors: Matti Pitkänen Comments: 316 Pages. The topics of this book could be called fringe physics involving claimed phenomena which do not have explanation in terms of standard physics. </p><p> Many-sheeted space-time with p-adic length scale hierarchy, the predicted dark matter hierarchy with levels partially characterized by quantized dynamical Planck constant, and the prediction of long ranged color and weak forces alone predict a vast variety of new physics effects. Zero energy ontology predicts that energy can have both signs and that classical signals can propagate in reversed time direction at negative energy space-time sheets and an attractive identification for negative energy signals would be as generalizations of phase conjugate laser beams. This vision leads to a coherent view about metabolism, memory, and bio-control and it is natural to ask whether the reported anomalies might be explained in terms of the mechanisms giving hopes about understanding the behavior of living matter. </p><p> <OL> <LI> The effects involving coin words like antigravity, strong gravity, and electro-gravity motivate the discussion of possible anomalous effects related to long range electro-weak fields and many-sheeted gravitation. For instance, TGD leads to a model for the strange effects reported in rotating magnetic systems. <LI> Tesla did not believe that Maxwell's theory was an exhaustive description of electromagnetism. He claimed that experimental findings related to pulsed systems require the assumption of what he called scalar waves not allowed by Maxwell's electrodynamics. TGD indeed allows scalar wave pulses propagating with light velocity. The dropping of particles to larger space-time sheets liberating metabolic energy, transformation of ordinary charged matter to dark matter and vice versa, dark photons, etc... might be needed to explain Tesla's findings. Also phase conjugate, possibly dark, photons making possible communications with geometric past might be involved. <br> These speculative ideas receive unexpected support from the TGD inspired view about particle physics. The recent TGD inspired view about Higgs mechanism suggests strongly that photon eats the remaining component of Higgs boson and in this manner gets longitudinal polarization and small mass allowing to avoid infrared divergences of scattering amplitudes. <LI> The reports about ufos represent a further application for TGD based view about Universe. Taking seriously the predicted presence of infinite self hierarchy represented by dark matter hierarchy makes it almost obvious that higher civilizations are here, there, and everywhere, and that their relationship to us is like that of our brain to its neurons, so that Fermi paradox (Where are they all?) would disappear. Although the space travel might be quite too primitive idea for the civilizations at higher levels of hierarchy, ufos might be real objects representing more advanced technology rather than plasmoid like life forms serving as mediums in telepathic communications. </OL> Category: Quantum Gravity and String Theory [1] viXra:0908.0014 [pdf] replaced on 2013-09-09 07:11:27 Topological Geometrodynamics: Overview Authors: Matti Pitkänen Comments: 1106 Pages. This book tries to give an overall view about quantum TGD as it stands now. The topics of this book are following. <OL><LI> Part I: An overall view about the evolution of TGD and about quantum TGD in its recent form. Two visions about physics are discussed at general level. According to first vision physical states of the Universe correspond to classical spinor fields in the world of the classical worlds (WCW) identified as 3-surfaces or equivalently as corresponding 4-surfaces analogous to Bohr orbits and identified as special extrema of K&auml;hler action. TGD as a generalized number theory vision leading naturally also to the emergence of p-adic physics as physics of cognitive representations is the second vision. <LI> Part II: The vision about physics as infinite-dimensional configuration space geometry. The basic idea is that classical spinor fields in WCW describe the quantum states of the Universe. Quantum jump remains the only purely quantal aspect of quantum theory in this approach since there is no quantization at the level of the configuration space. Space-time surfaces correspond to special extremals of the K&auml;hler action analogous to Bohr orbits and define what might be called classical TGD discussed in the first chapter. The construction of the configuration space geometry and spinor structure are discussed in remaining chapters. <LI> Part III: Physics as generalized number theory. Number theoretical vision involves three loosely related approaches: fusion of real and various p-adic physics to a larger whole as algebraic continuations of what might be called rational physics; space-time as a hyper-quaternionic surface of hyper-octonion space, and space-time surfaces as a representations of infinite primes. <LI> Part IV: The first chapter summarizes the basic ideas related to von Neumann algebras known as hyper-finite factors of type II<sub>1</sub> about which configuration space Clifford algebra represents canonical example. Second chapter is devoted to the basic ideas related to the hierarchy of Planck constants and related generalization of the notion of imbedding space to a book like structure. <LI> Part V: Physical applications of TGD. Cosmological and astrophysical applications are summarized and applications to elementary particle physics are discussed at the general level. TGD explains particle families in terms of generation-genus correspondence (particle families correspond to 2-dimensional topologies labeled by genus). The general theory for particle massivation based on p-adic thermodynamics is discussed at the general level. </OL> Category: Quantum Gravity and String Theory
{"url":"http://vixra.org/qgst/0908","timestamp":"2014-04-17T06:46:01Z","content_type":null,"content_length":"71173","record_id":"<urn:uuid:8589d198-cc44-43da-9f63-83b0218b0212>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00243-ip-10-147-4-33.ec2.internal.warc.gz"}
And Logic Begat Computer Science: When Giants Roamed the Earth During the past fifty years there has been extensive, continuous, and growing interaction between logic and computer science. In fact, logic has been called "the calculus of computer science". The argument is that logic plays a fundamental role in computer science, similar to that played by calculus in the physical sciences and traditional engineering disciplines. Indeed, logic plays an important role in areas of computer science as disparate as architecture (logic gates), software engineering (specification and verification), programming languages (semantics, logic programming), databases (relational algebra and SQL), artificial intelligence (automated theorem proving), algorithms (complexity and expressiveness), and theory of computation (general notions of computability). This non-technical talk will provide an overview of the unusual effectiveness of logic in computer science by surveying the history of logic in computer science, going back all the way to Aristotle and Euclid, and showing how logic actually gave rise to computer science.
{"url":"http://www.newton.ac.uk/programmes/LAA/vardi2.html","timestamp":"2014-04-19T20:11:56Z","content_type":null,"content_length":"2931","record_id":"<urn:uuid:3982a39e-2119-4998-a929-099025350c78>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00368-ip-10-147-4-33.ec2.internal.warc.gz"}
Brazilian Journal of Physics Services on Demand Related links Print version ISSN 0103-9733 Braz. J. Phys. vol.31 no.1 São Paulo Mar. 2001 Adiabatic plasma rotations in orthogonal coordinate systems Ricardo L. Viana Departamento de Física, Universidade Federal do Paraná, Caixa Postal 19081, 81531-990, Curitiba, Paraná, Brazil Received on 26 July, 2000 An equation for MHD stationary equilibrium of rotating plasmas in the azimuthal direction is derived in the case of an orthogonal curvilinear coordinate system. The basic assumptions we made are: (i) there is an ignorable coordinate so that surface quantities are independent of it; (ii) the entropy is a surface quantity. I Introduction Azimuthal rotation in Tokamaks and other fusion machines is observed, for example, when the confined plasma is subjected to neutral beam heating. The impacts of the beam particles with plasma electrons and ions amounts to a net momentum transfer with causes rotation in the toroidal direction [1, 2]. Plasma rotation with high Mach numbers have been observed in almost all operating regimes of Tokamaks [3, 4], as well as in reversed-field pinches [5]. A key problem in the theoretical study of azimuthal rotation is whether such a plasma flow could coexist with a state of MHD (stationary) equilibrium. The answer turns to be positive provided some requirements are fulfilled by the system. Without resistivity, Alfvén's theorem says that magnetic field lines rotate rigidly with the plasma. If axissymetry exists, field lines lie on magnetic flux surfaces with topology of tori and characterized by surface quantities, like the transversal magnetic flux. The set of ideal MHD equations allows us to derive a partial differential equation for it [6, 7]. Maschke and Perrin [8, 9] obtained a MHD equilibrium equation for azimuthal plasma flows supposing that either the temperature or the entropy were surface quantities. They have considered only cylindrical coordinates, having obtained exact analytical solutions for the transversal magnetic flux. There are a few other solved cases in cylindrical [10] and spherical [11] geometries, but considering the temperature as a surface quantity. The case where the plasma flow is adiabatic, however, demands the use of the entropy as a surface quantity. This is particularly important in the case of anisotropic plasmas, where a double-adiabatic theory is necessary to describe the situation [12, 13]. To apply the MHD equilibrium theory for realistic magnetic confinement schemes, one would need an equilibrium equation in a general curvilinear coordinate system. In a previous paper [14] an equilibrium equation for plasmas with azimuthal rotation was derived, assuming that the temperature was a surface quantity, in an orthogonal curvilinear coordinate system. In this paper we will derive a similar equation, but with the plasma entropy as a surface quantity, according the methodology introduced by Maschke and Perrin [8]. This paper is organized as follows: in the second section we outline the basic equations and thermodynamical relations to be used, the magnetic field and velocity representations. In section III we use these equations to obtain a pressure equilibrium equation, which is supplemented by a Bernoulli-like algebraic equation. Section IV presents a particular form of this equation, obtained by a special choice of some surface quantities. Section V discusses how the, general equations look like in some coordinate systems, like cylindrical, spherical and prolate spheroidal ones. II Basic Equations In a stationary MHD equilibrium theory, we consider an ideal (infinite conductivity) plasma of electrons and singly charged ions, where all partial time derivatives vanish, but allowing a constant velocity. The corresponding set of MHD equations are, in S.I. units [15]: where r = n (m[e] + m[]) is the mass density, n is the particle number density and m[e], m[] are the electronic and ionic masses, respectively. v, p, E, B, and j are the velocity, pressure, electric field, magnetic field and plasma current density, respectively. The specific internal energy e and the specific entropy S satisfy Gibbs' equation and the specific enthalpy h satisfies the thermodynamical identity We also suppose the plasma as an ideal gas, obeying the equation where T is the plasma temperature (sum of electronic and ionic temperatures), k being the gas constant and the Boltzmann constant, respectively. We assume that the thermodynamical processes involved with plasma rotation are adiabatic, so that the caloric equation of state holds where A = A(S) is a constant depending only on the entropy, and g = 5/3 is the ratio of specific heats. The internal energy in this case is so that the plasma temperature is given by The adiabatic sound velocity in the plasma is given by Eqs. (8) and (11) as For an adiabatic process we have an entropy constant in time, and so is any function of entropy: However, the convective derivative of A is given by where the partial derivative vanishes for equilibria, giving the following relation for S We will denote by (x^1, x^2, x^3) the contravariant coordinates in a curvilinear coordinate system, and assume that x^3 is an ignorable coordinate, such that surface quantities do not depend on it. [] (g[] = [] . [] are the covariant components of the metric tensor ^1. Only orthogonal coordinate systems, in which g[] = 0 for ¹ The representation of a solenoidal magnetic field in terms of two scalar surface functions is where Y and I are the transversal flux and current functions, respectively. A plasma flow satisfying mass conservation Eq. (1) may be a rotation with constant angular frequency W along the [3] (azimuthal) direction which satisfies Ferraro's iso-rotation law so that W = W(Y) is also a surface quantity. III The equation of motion The equation of motion for the rotating plasma in this ideal MHD theory is derived from the momentum balance equation (2), in which we have used the magnetic field and velocity representations, and Ampére's law, Eq. (4). A standard calculation would give the plasma current density in terms of the surface functions Y and I where the generalized Shafranov operator is given by The velocity-dependent term in the momentum balance equation Eq. (2) may be written as A straightforward algebra leads to Using the representation (18) for velocity, the condition (16) is identically satisfied due to axisymmetry, so it does not give any further information about the plasma entropy. Hence, we will assume that the entropy is a surface quantity: S = S(Y). Using (7) we have for the pressure gradient Ñp = r(Ñh - T ÑS). Using also the isorotation law (19) we obtain Let us define as a kind of centrifugally corrected enthalpy. Making the cross product with ÑY it follows that Q is also a surface quantity. In this way, for arbitrary and non-vanishing ÑY we obtain the form of Maschke-Perrin equation with entropy as a surface function In the limit of vanishing rotation W = 0 we have simply Q = h, and using Gibbs' equation (7) we re-obtain the Grad-Shafranov equation for static equilibria From Eq. (12), and since S and A are surface quantities, we have for the temperature-dependent term in (26) By the same token, then density can be eliminated by noting that, from (11), the enthalpy is which gives showing that, in this model, the plasma density is not a surface function neither an independent variable, rather being determined by the knowledge of the surface functions A, Q and W. These are fixed, on the other hand, by the solution of the equilibrium equation itself, since it contains four arbitrary surface functions (the fourth being the current function I). The complete system of equations contains Eq. (26) and an algebraic relation defining Q, Eq. (25). With appropriate boundary conditions it becomes a well-posed model. IV Alternative form of the equilibrium equation Let us define, for later convenience and rewrite the equilibrium equation (26) in the form and the primes denote differentiation with respect to the magnetic flux. It may be rewritten as Thus the equilibrium equation assumes the following form which has four surface functions to be specified: I(Y), A(Y), W(Y), and Q(Y). According to Ref. [8] we choose the latter two such that where w is a constant, and l is a characteristic length of the system. It is also convenient to introduce an auxiliary surface function The physical meaning of this function comes from the static limit of the problem. From (25) we have Q = h, and using (10), (11), and (31), a simple calculation shows that G ® p, so we may regard G as a kind of centrifugally corrected plasma pressure. With help of (36) and (37) we rewrite once more the equilibrium equation in a very concise form as where the number of surface functions to be specified has been reduced to just two: G and I. The Mach number for the plasma azimuthal notation is given by where v[< 3 > ]is the "physical" component of the velocity in the azimuthal direction, and we have used Eq. (13). It is worth-noting that the positive-definiteness of the function Q (since Q = (Wl/ w)^2 > 0) imposes a limit on the possible values of the Mach number for our hypothesis (36) to be valid. From (25) it follows that h > (W^2 g[33]/2). Now, using (11) and (13) there results that h(g- 1). Combining the two preceding inequalities we have the following condition for the rotational Mach number For g = 5/3, we have that » 1.73 times the adiabatic sound velocity. This is a rather restrictive condition since it precludes most of the supersonic regime. Finally it follows from the above analysis that the constant w which have entered in the equilibrium equation is related to the Mach number by the following formula For vanishing rotation we have simply w^2 = l^2/g[33], i.e., it is just a geometrical factor. V Particular cases Cylindrical geometry. In this case we have (x^1, x^2, x^3) = (R, Z, f), and take the angle f as our ignorable coordinate, so that g[33] = R^2. The characteristic length in this case will be written l = R[0], which may represent the radius of the cylindrical conductor shell surrounding the rotating plasma, with Y(R[0], f, Z) = 0 as a convenient boundary condition. The Shafranov operator (21) in this case is and the equilibrium equation (38) is In units where m[0] = 1, and after the following changes of notation: D^* ® Y® F, w ® W, and G ® p[s], this equation reduces to Eq. (3.8) of Ref. [8]. This partial differential equation has to be supplemented by an algebraic equation defining Q, obtained from (25) as Since W is the plasma angular velocity, the term R^2 W^2 / 2 is the specific rotational kinetic energy. On a given flux surface one has Y = constant, and so any surface function like Q(Y). As in fluid mechanics, we may write down a Bernoulli-type equation of the form h - (R^2 W^2 / 2) = const. along the points of a stream line of the plasma flow (v × dl = 0). This is possible because the entropy is a surface quantity (v.ÑS = 0), being an absent feature in isothermal plasma flows (in which the temperature is a surface quantity). The equilibrium equation contains two surface functions whose profiles have to be specified a priori. In Ref. [8] linear profiles were chosen for both with v, Y[0], I[0] and M are model constants. Maschke and Perrin [8] have found an exact and analytical solution for the equilibrium equation (43), using elementary functions only. Some features present in their solution may be cited here. First, the magnetic axis (defined as the extremum of the magnetic flux Y at the z = 0 plane) is displaced outwards due to the centrifugal effect. The magnitude of the displacement is proportional to W. Also, the magnetic flux surfaces and pressure (isobaric) surfaces do not coincide, as it occurs in the static case. This is easily understood by noting that the pressure p is no longer a surface quantity. In a later paper [9] an approximate analytical solution was given to a combined poloidal and toroidal adiabatic rotation. Up to our knowledge, there are no further analytical solutions of the equilibrium equation in cylindrical coordinates, when the entropy is a surface quantity. Even in the other case (with the temperature as a surface quantity) there are very few solutions. Spherical geometry. This symmetry turns to be necessary to deal with some fusion plasma schemes, like field-reversed configurations; and astrophysical problems, like magnetic stars. The corresponding equation in spherical coordinates (x^1, x^2, x^3) = (r,q,j) follows from the general equation (38) by choosing j as an ignorable coordinate so that g[33] = r^2 sin^2 q. Introducing a characteristic length r[0] we have the following equilibrium equation and the centrifugally corrected enthalpy is Adopting the same linear profiles, Eqs. (45) and (46), used in the cylindrical case, and using the adimensional radius x = r / r[0] we have the following inhomogeneous partial differential equation where ^* = D^* is a reduced Shafranov operator in spherical coordinates. This equation does not seem to allow for an exact analytical solution for arbitrary h. Approximate solutions (for low w) may be investigated by using a binomial expansion of the h-dependent term. Retaining only the lowest order term the resulting equation would be similar to that solved in Ref. [11], with the difference that here the entropy is supposed to be a surface quantity, rather than the temperature. With similar boundary conditions (at a spherical conductor shell) the analytical solutions would be similar as Prolate spheroidal geometry. This is a convenient coordinate systems to study MHD equilibria in spheromak-type and compact tori configurations. The z-axis is the symmetry axis and the curvilinear coordinates are (x, j), defined as [16]: where 0 £ x < ¥, 0 £ £ p, 0 £ j < 2p, and r = csinhxsinc > 0 being the distance between the foci of coordinate surfaces, which turn to be prolate spheroids of semi-major axis c coshx[0] and semi-minor axis csinhx[0]. For this system g[33] = c^2 sinh^2 xsin^2 l = c as our characteristic length. The generalized Shafranov operator in this coordinate system is entering the equilibrium equation, which reads in this case In the static case, evidently there is no difference between taking the entropy or temperature as surface quantities, since these thermodynamical hypotheses are no longer necessary in this situation. This limit was studied in the early eighties by Kaneko and Takimoto [17], who have used profiles for G ® p and I^2, linear and quadratic in Y, respectively. They also have found analytical solutions involving combinations of angular and radial spheroidal wave functions. Unlike the similar equation we have derived for spherical coordinates, it is very difficult to find a situation in which the Eq. (52) is amenable to analytic, even approximate, treatment. VI Conclusions We have extended the previous results regarding MHD equilibria with constant angular velocity in the azimuthal direction (using the entropy as a surface quantity) to a general context in which it suffices to specify three things about the coordinate system to be used: (i) an ignorable coordinate (with respect to it is the direction of the plasma flow); (ii) a corresponding component of the covariant metric tensor; (iii) a characteristic length, related to some obvious boundary condition or the plasma boundary itself. The equilibrium equation in this case is a nonlinear elliptic partial differential equation where the main variable is the transversal magnetic flux function Y. Four surface quantities (which depend only on Y) have to be previously set up in order to reduce the number of dependent variables. This equation has to be supplemented by a Bernoulli-type algebraic equation which describes the centrifugally corrected plasma enthalpy. A particular form of the equation is obtained by choosing a given form for two of the surface functions, reducing their number to just two. This choice, however, limits our treatment to rotations with Mach numbers up to Besides the cylindrical case, which was already known in the literature, we have applied our general equation to the spherical and prolate spheroidal geometries, which are relevant to describe some magnetic confinement schemes like Spheromaks and compact tori. The equation in spherical coordinates is most likely soluble, at least in an approximate situation (low Mach number). The prolate spheroidal case seems to be only amenable to numerical treatment. The author would like to acknowledge Dr. R. A. Clemente for useful discussions and valuable suggestions. [1] M.G. Bell, Nuclear Fusion, 19, 33 (1979). [ Links ] [2] S. Suckewer, H.P. Eubank, R.J. Goldston, E. Hinnov, and N.R. Sauthoff, Physical Review Letters, 43, 207 (1979). [ Links ] [3] K. Brau, M. Bitter, R. J. Goldston, E. Hinnov, and N. R. Sauthoff, Physical Review Letters 26, 1643 (1979). [ Links ] [4] R. C. Isler, et al. Nuclear Fusion 23, 1017 (1983). [ Links ] [5] L. Carraro, M.E. Puiatti, F. Sattin, P. Scarin, and M. Valisa, Plasma Physics and Controlled Fusion 40, 1021 (1998). [ Links ] [6] H.P. Zehrfeld, and B.J. Green, Nuclear Fusion, 10, 251 (1970). [ Links ] [7] A.I. Morosov, L.S. and Solov'ev, (1980) in Reviews of Plasma Physics, Ed.: M. A. Leontovich, Vol. 8, Chap. 2 (Consultants Bureau, New York). [ Links ] [8] E.K. Maschke, and H. Perrin, Plasma Physics, 22, 579 (1980). [ Links ] [9] E.K. Maschke, and H. Perrin, Phys. Lett. A, 102, 106 (1984). [ Links ] [10] R.A. Clemente, and R. Farengo, Physics of Fluids, 27, 776 (1984). [ Links ] [11] R.L. Viana, R.A. Clemente, and S.R. Lopes, Plasma Physics and Controlled Fusion, 39, 197 (1997). [ Links ] [12] R.A. Clemente, and R.L. Viana, Plasma Physics and Controlled Fusion, 41, 567 (1999). [ Links ] [13] R.A. Clemente, and R.L. Viana, Brazilian Journal of Physics, 29, 457 (1999). [ Links ] [14] R.L. Viana, International Journal of Theoretical Physics, 37, 2657 (1998). [ Links ] [15] W.M. Stacey Jr., (1981). Fusion Plasma Analysis, Wiley, New York. [ Links ] [16] P.M. Morse, and H. Feshbach, (1953). Methods of Theoretical Physics, McGraw Hill, New York, vol. 2, p. 1284. [ Links ] [17] S. Kaneko, and A. Takimoto, (1982). Proceedings of the Fourth US-Japan Workshop on Compact Toroids, Nagoya, Japan, p. 78. [ Links ] ^1 See the appendix of Ref. [14] for further details about curvilinear coordinate systems
{"url":"http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0103-97332001000100011&lng=en&nrm=iso&tlng=en","timestamp":"2014-04-21T01:04:02Z","content_type":null,"content_length":"50398","record_id":"<urn:uuid:75f95cb5-6a6f-43a7-968d-5dab0f2aad8f>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00239-ip-10-147-4-33.ec2.internal.warc.gz"}
IntroductionArchitecture of the ModelPCNN: A Brief OverviewObtain TNF via PCNNThe Tsallis EntropyThe MLP ClassifierFeature extraction and analysis experimentsAttaining the Tsallis entropyTranslation, Scaling, and Rotation IndependenceFace RecognitionConclusionReferencesFigures and Tables Sensors Sensors 1424-8220 Molecular Diversity Preservation International (MDPI) 10.3390/s8117518 sensors-08-07518 Article Pattern Recognition via PCNN and Tsallis Entropy ZhangYuDong^* WuLeNan School of Information Science and Engineering, Southeast University, P.R. China; E-Mail: wuln@seu.edu.cn Author to whom correspondence should be addressed; E-Mail: zhangyudongnuaa@gmail.com 11 2008 25 11 2008 8 11 7518 7529 05 09 2008 07 11 2008 17 11 2008 © 2008 by the authors; licensee MDPI, Basel, Switzerland. 2008 This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/). In this paper a novel feature extraction method for image processing via PCNN and Tsallis entropy is presented. We describe the mathematical model of the PCNN and the basic concept of Tsallis entropy in order to find a recognition method for isolated objects. Experiments show that the novel feature is translation and scale independent, while rotation independence is a bit weak at diagonal angles of 45° and 135°. Parameters of the application on face recognition are acquired by bacterial chemotaxis optimization (BCO), and the highest classification rate is 72.5%, which demonstrates its acceptable performance and potential value. Pattern recognition feature extraction pulse coupled neural network Tsallis entropy Face recognition is a hot problem in image processing [1]. It has many potential applications, for example, for security systems, man-machine interfaces, and searches of video database or the WWW. Therefore, many researchers are actively working in this filed and many face recognition methods are proposed. Since face image data are usually high-dimensional and large-scale, it is crucial to design an effective feature extraction method. Researchers have developed many algorithms, such as the Eigenface method [2], the Fisherface method [3], the direct LDA method [4], the uncorrelated optimal discrimination vector (UODV) method [5], the Kernel PCA method [6], etc. However, such models are either subject to problems determined by geometric transforms (scaling, translation or rotation) or to high computational complexity [7]. Moreover, it is known that parallel processing could solve computational complexity but in order to take advantage of it we need parallelizable models [8]. Neural networks (NN) have been widely employed in face recognition applications. It is feasible for classification and results in similar or higher accuracies from fewer training samples. It has some advantages over traditional classifiers due to two important characteristics: their non-parametric nature and the non-Gaussian distribution assumption. Pulse coupled neural network (PCNN) is called the 3^rd generation NN since it has the following advantages: (i) global optimal approximation characteristic and favorable classification capability; (ii) rapid convergence of learning procedure; (iii) an optimal network to accomplish the mapping function in the feed-forward; (iv) no need to pre-train. Hence, in this article, we present a novel face recognition approach based on PCNN. The structure of this article is as follows. Section 2 introduces the architecture of our proposed novel model. Section 3 gives a brief overview on PCNN. Section 4 discusses how to obtain TNF by PCNN. Section 5 introduces the Tsallis entropy. Section 6 brings forward the MLP. Section 7 is the experiment which checks the Translation, Scaling, and Rotation Independence of our proposed feature. Section 8 is the face recognition system. This proposed method can achieve as high as 72.5% classification rate on a given database. Finally Section 9 concludes this paper. The model proposed in this article is based on three modules: the PCNN, the Tsallis entropy, and the MLP classifier (Figure 1). Information flow is feed-forward but there are also lateral interactions between the PCNN. Pulse coupled neural network (PCNN) is a result of research on artificial neuron model that was capable of emulating the behavior of cortical neurons observed in the visual cortices of animal [9]. According to the phenomena of synchronous pulse burst in the cat visual cortex, Eckhorn developed the linking field network. Since it does not need to pre-train, and inherits the advantages of artificial neural network (ANN), PCNN has been used for various applications, such as image feature extraction [10], and image segmentation [11], etc. Total number of firing (TNF) is an important parameter obtained in PCNN. It is almost unique and has strong anti-noise ability. How to measure the information obtained in TNF? Communication systems are less well defined in PCNN. Thus traditional Shannon entropy is not fit for measuring. R. Sneddon discussed the Tsallis entropy of neurons and found it is more accurate and useful [12]. Hence, we use Tsallis entropy to measure the TNF. Thus, the novel feature of image is extracted now. We put the Tsallis entropy of the TNF into the multi-layer perceptron (MLP). After training, the MLP will automatically classify the images A typical PCNN neuron consists of three parts: the receptive field, the modulation field and the pulse generator. This is shown in Figure 2. Suppose NP is the total number of iterations and n is current iteration, the neuromime of PCNN can be described by the following equations. F i j [ n ] = exp ( − α F ) F i j [ n − 1 ] + V F ∑ m ijkl Y k l [ n − 1 ] + I i j L i j [ n ] = exp ( − α L ) L i j [ n − 1 ] + V L ∑ ω ijkl Y k l [ n − 1 ] U i j [ n ] = F i j [ n ] ( 1 + β L j i [ n ] ) Y i j [ n ] = { 1 , U i j [ n ] > θ i j [ n − 1 ] 0 , U i j [ n ] ≤ θ i j [ n − 1 ] θ i j [ n ] = exp ( − α θ ) θ i j [ n − 1 ] + V θ Y i j [ n − 1 ]where the (i, j) pairs presents the position of a neuron. F, L, U, Y and θ are feeding inputs, linking inputs, internal activity, pulse output, and dynamic threshold, respectively. α[F], α[L] and α[θ] are time constants for feeding, linking and dynamic threshold. V[F], V[L] and V[θ] are normalizing constants, M and ω are the synaptic weights, and I[ij] and J[ij] are external inputs. β is the strength of the linking. Firstly, the neuron (i, j) receive input signals from other neurons and from external source through the receptive fields. Then the signals are divided into two channels. One is feeding channel (F), the other is linking channel (L). Secondly, in the modulation part the linking input L is weighted with β and added a constant bias, then multiplied with the feeding input F. The internal activity U is the output of the modulation part. Finally, in the pulse generator part U compares with the threshold θ. If U is larger than θ, the neuron will emit a pulse. Otherwise it will not emit. Y is the output. The threshold will be adjusted every step. If the neuron has fired, θ will increase; otherwise θ will decay. There exists a one-to-one correspondence between the image pixels and network neurons, which means, each pixel is associated with a unique neuron and vice versa. The exponential decay is too time-consuming for fast realization, so an improvement to conventional PCNN is proposed here: the external input I and the dynamic threshold θ are simplified into corresponding pixel values and an accelerated decrease model, respectively: F i j [ n ] = I i j θ i j [ n ] = { θ 0 , n = 0 , Y [ 0 ] = 0 f [ n ] , Y i j [ n − 1 ] = 0 , θ 0 > θ 1 + inf , Y i j [ n − 1 ] = 1 where j [n] presents a monotonically decreased function. The whole process is as follows: the dynamic threshold θ descends linearly from its original θ[1] to the terminal θ[0], and thus all neurons were initially inhibited (guaranteed by θ[1], meanwhile Y=0), and then transformed gradually to be activated. Once a neuron is activated, namely Y[ij]=1, it will never be activated again (guaranteed by θ[ij]=+inf). During the simulation, each iteration updates the internal activity and the output for every neuron in the network, based on the stimulus signal from the image and the previous state of the network. For each iteration the TNF over the entire PCNN is computed and stored in a global array G. The following describes the details: Initialize the range [θ[0], θ[1]] of dynamic threshold. Simplify the external input F into a corresponding gray value I, and set the inner linking matrix W to a 3 by 3 square matrix with the value of its elements being the reciprocal of square of the Euclidian distance between the central pixel and corresponding pixel. Determine the expression of f(n) as follows: f ( n ) = ( θ 1 − θ 0 ) n + θ 0 N P − θ 1 N P − 1where NP presents the whole steps of PCNN which is usually at the range [10, 50]. Perform the PCNN. Accumulate Y [i] and get S [ n ] = ∑ i = 1 n Y [ i ] , n = 1 , 2 , L , N Psince Y [i] are not overlapped, S [n] is also binary. With n augmented, the areas of “1” in S [j] will also be enlarged. Obtain the TNF from S [n] G [ n ] = ∑ i j S i j [ n ] , n = 1 , 2 , L , N PG [n] is then used at next stage of the system. Entropy is usually used to describe the information contained in the system. Shannon defined the concept of information and described the essential components of a communication system as the following: a sender, a receiver, a communication channel, and an encoding of the information set. However, communication systems are less well defined when they occur in nature, such as the neurons. Neurons appear to both senders and receivers of information. Another question is what are the communication channels for neurons. They might be the synaptic gaps between the axon and dendrites or not. The answer is not acceptable for neurophysiologists. A more question is what is the encoding of the information. There is no clear and obvious answer. In general, traditional Shannon entropy is not fit for measuring the information contained by PCNN. Thus, Tsallis entropy T is chosen for measuring the information contained in TNF. T = 1 − ∑ i = 1 N P q ( x i ) q − 1where, q is a parameter that is greater than 0. x is a random variable with a domain of N informational symbols, x[1]x[2] L, x[n]. Note that, in the limit that q goes to 1, this reduces to the standard Boltzman-Gibbs-Shannon measure. In R. Sneddon's work, he set q =2. However, in Section 8 we use an optimization algorithm to computer the optimal value of q. The information contained in TNF is calculated as in the following equation: T [ n ] = 1 − [ p q { G [ n ] = 0 } + p q { G [ n ] = 1 } q − 1where n stands for the current iteration. The classifier is basically a MLP. The neural architecture consists of one input layer, one hidden layer and one output neuron (Figure 3). The input layer contains a number of inputs equal to NP. Then, the hidden layer has an extension of about 10-20% of the input layer, here we suppose there are m neurons at the hidden layer, and the maximum steps is s. Because of the specific task, the output layer contained only one neuron. An output value of 1 is equivalent to target detection whereas a value of 0 means no target detection. A standard back-propagation algorithm is used for supervised training. The experiments consist of three stages. Firstly we give an example of obtaining Tsallis entropy of TNF. Secondly we check the property of position, scale and rotation independence. Take Lena as an example. First, we normalize the pixel values into the range [0, 1]. Then we perform PCNN with NP=20. Figure 3 shows s [n] at each step. Hence, from Figure 4 it is easy to obtain TNF and the Tsallis entropy, which are shown in Figure 4. The values of TNF vary too large, which is not suitable for directly sended into MLP. And after transforming it to Tsallis entropy, the range of values is compressed to a small interval [0, 0.5]. The T [n] is not linear with the pattern of Figure 4(a). The maximum point of T [n] from Figure 4(b) is at 14. Then it is obvious that the 14th subimage in Figure 4 is the most obvious and has the largest contrast. Thus, T [n] can be understood as the measurement of pattern obviousness. We use the Tsallis entropy T [n] as the feature to classify different patterns. At this experiment we will check whether it is a translation, scaling and rotation invariant feature. We use simple geometric shapes as input images, two of them were shown in Figure 5. As expected, the system showed total translation independence. Then seven different scales of the rectangle were selected for testing scaling independence (Figure 6). We find that the T [n] obtained from these seven scales are quite similar to each other, which demonstrates that this proposed feature is scaling independent. Table 1 shows the feature extracted from different scales of the rectangle. Here MSE is the shortened form of “mean square error”. As for the rotation independent, the triangle had been rotated at different angles and the MSE had been computed for each rotation. Results shown in Figure 7 prove that the maximum MSE is obtained for the two principal diagonals (45° and 135°), which seems the rotation independence is a bit weak especially at those two angles, but may be a clue indicating that the MSE is caused by pixel discretization [13]. Test on other hundreds of images also demonstrate the conclusion. From Section 7 it is obvious that the features extracted via our model is effective. Hence, we apply this method to face recognition. The datasets come from the University of Essex School of Computer Science and Electronic Engineering website (http://cswww.essex.ac.uk/mv/allfaces/faces96.html). A sequence of 20 images per individual was taken. During the sequence the subject takes one step forward towards the camera. This movement is used to introduce significant head variations between images of same individual. There is about 0.5 s between successive frames in the sequence. Figure 8 shows several typical faces used in this experiment. Each individual is averagely split into training and testing sets, namely, 10 images are used for training while the other 10 images are used for testing. The optimal parameters (q, NP, m, S) are acquired by the guidance of bacterial chemotaxis optimization (BCO) described in Ref. [14]. The final values are as listed in Table 2. The highest correct classification rate (CR) of our proposed algorithm has of 72.5%. Although it is not as ideal as some other mature face recognition algorithms, we consider it more potential since research on pattern recognition via PCNN is currently progressing. Firstly we adjust one parameter while fix others to check the robustness of our method. Firstly, the parameter q is tuned. The result is shown in Figure 9. From Figure 9, the classification rate remains an acceptable level (>70%) while q is in the interval [1.63 - 2.23]. It indicates that the algorithm is robust with q. The best value of q is 1.86. Secondly, the parameter NP was tuned while others remained unchanged. From Figure 10 it is obvious that when NP is less than 20, the CR improves with NP. As NP increases to 20, the CR remains nearly steady. Hence, NP is set as 20 taking calculation time into account. Finally we changed the value of m; the results are listed in Figure 11, which implies that the best value of m is 4. The three important parameters are analyzed above. It can conclude that the parameters are essential to the performance of this model. Hence, it is important for researchers to tune these parameters before the network works. In this study, a novel feature extraction method was described and applied for face recognition. This paper is just the first attempt to explore the potential of Tsallis entropy of TNF obtained by PCNN to handle the face recognition problems. Experiments demonstrate that this new feature is unique, translation and scale independent. The future work shall focus on combining this proposed feature with others to improve the classification rate. Another possible research topic is to simplify the procedures in this model. Actually, PCNN does not need pre-training, so the calculation time is expected to decline faster. Furthermore, this proposed approach is new and potential. It can be applied on all sorts of recognition fields. This research was supported under following projects: 1) National Technical Innovation Project Essential Project Cultivate Project (706928); 2) Nature Science Fund in Jiangsu Province (BK2007103). Also thanks to referees for their suggested improvements to the contents of this article. RogersS.K.KabriskyM.SPIE Optical Engineering PressBellingham, WA, USA1991 CevikalpH.NeamtuM.WilkesM.BarkanaA.Discriminative common vectors for face recognition20052741310.1109/TPAMI.2005.9 GulmezogluM.B.DzhafarovV.KeskinM.BarkanaA.A novel approach to isolated word recognition1999762062810.1109/89.799687 GulmezogluM.B.DzhafarovV.BarkanaA.The common vector approach and its relation to principle component analysis2001965566210.1109/89.943343 ScholkopfB.SmolaA.MullerK.Nonlinear component analysis as a Kernel eigenvalue problem1998101299131910.1162/089976698300017467 MartinezA.M.BenaventeR.The AR face databaseBarcelona, SpainJune1998 BecanovicV.Image object classification using saccadic search, spatio-temporal pattern encoding and self-organisation20002125326310.1016/S0167-8655(99)00154-3 HopfieldJ.J.Neural networks and physical systems with emergent collective computational abilities1982792554255810.1073/pnas.79.8.25546953413 EckhornR.ReitboeckH.J.ArndtM.DickeP.W.Feature linking via synchronous among distributed assemblies: simulations of results from cat visual cortex1990212531255 JohnsonJ.L.Pulse-coupled Neural Nets: Translation, Rotation, Scale, Distortion and Intensity Signal Invariance for Image19943362396253 KuntimadG.RanganathH.S.Perfect Image Segmentation Using Pulse Coupled Neural Networks19991059159810.1109/72.76171618252557 SneddonR.The Tsallis entropy of natural information200738610111810.1016/j.physa.2007.05.065 RaulC.M.Pattern recognition using pulse-coupled neural networks and discrete Fourier transforms20035148749310.1016/S0925-2312(02)00727-0 ZhangY.D.WuL.N.Optimizing Weights of Neural Network Using BCO20088318519810.2528/PIER08051403 The architecture of the recognition system. PCNN neuromime. s [n] of each step of PCNN on Lena. (a) Step 1. (b) Step 2. (c) Step 3. (d) Step 4. (e) Step 5. (f) Step 6. (g) Step 7. (h) Step 8. (i) Step 9. (j) Step 10. (k) Step 11. (l) Step 12. (m) Step 13. (n) Step 14. (o) Step 15. (p) Step 16. (q) Step 17. (r) Step 18. (s) Step 19. (t) Step 20. Feature extraction. (a)G [n]. (b)T [n]. Shapes used for testing. (a) A rectangle. (b) A triangle. Seven different scales of the rectangle. (a) Size=50 pixels. (b) Size=100 pixels. (c) Size=150 pixels. (d) Size=200 pixels. (e) Size=250 pixels. (f) Size=300 pixels. (g) Size=350 pixels. MSE for different rotation angles. Several typical faces in the database. The curve of CR with q. The curve of CR with NP. The curve of CR with m. The impact of scaling (Size of the origin image is 200 pixels). Size T [n] MSE 50 0.0016, 0.0361, 0.1247, 0.2414, 0.3714, 0.4608, 0.4969, 0.4920, 0.4548, 0.3875, 0.3233, 0.2516, 0.1813, 0.1100, 0.0612, 0.0423, 0.0245, 0.0135, 0.0096, 0.0000 3.9524e-3 100 0.0006, 0.0280, 0.1163, 0.2376, 0.3657, 0.4581, 0.4967, 0.4930, 0.4539, 0.3887, 0.3148, 0.2509, 0.1818, 0.1119, 0.0612, 0.0380, 0.0243, 0.0139, 0.0086, 0.0000 1.1543e-3 150 0.0004, 0.0287, 0.1143, 0.2368, 0.3629, 0.4577, 0.4967, 0.4926, 0.4536, 0.3908, 0.3169, 0.2482, 0.1809, 0.1098, 0.0603, 0.0379, 0.0239, 0.0142, 0.0085, 0.0000 7.59e-4 200 0.0006, 0.0288, 0.1167, 0.2371, 0.3634, 0.4579, 0.4968, 0.4925, 0.4533, 0.3913, 0.3162, 0.2480, 0.1818, 0.1106, 0.0618, 0.0377, 0.0238, 0.0141, 0.0090, 0.0000 0 250 0.0004, 0.0285, 0.1165, 0.2381, 0.3635, 0.4583, 0.4967, 0.4926, 0.4539, 0.3921, 0.3167, 0.2484, 0.1815, 0.1110, 0.0629, 0.0378, 0.0242, 0.0142, 0.0088, 0.0000 4.6218e-4 300 0.0007, 0.0290, 0.1168, 0.2384, 0.3638, 0.4582, 0.4967, 0.4926, 0.4538, 0.3923, 0.3165, 0.2487, 0.1817, 0.1116, 0.0627, 0.0377, 0.0240, 0.0143, 0.0090, 0.0000 5.2731e-4 350 0.0006, 0.0294, 0.1168, 0.2389, 0.3639, 0.4584, 0.4967, 0.4925, 0.4537, 0.3923, 0.3163, 0.2482, 0.1819, 0.1120, 0.0631, 0.0379, 0.0242, 0.0145, 0.0090, 0.0000 6.6282e-4 The optimal parameters used in face recognition. Parameters q NP m S CR Value 1.86 20 4 50000 72.5%
{"url":"http://www.mdpi.com/1424-8220/8/11/7518/xml","timestamp":"2014-04-21T07:32:41Z","content_type":null,"content_length":"51085","record_id":"<urn:uuid:a92162b3-952b-44ce-a417-1a79e81b1200>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00610-ip-10-147-4-33.ec2.internal.warc.gz"}
An Unreasonable Man Posted by John F. McGowan, Ph.D. in Suggested Reading Unsolved Problems on January 31st, 2010 | 8 responses The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man. — George Bernard Shaw (attributed) On November 11, 2002, Grigory Perelman, a Russian mathematician known to his friends as “Grisha”, posted a research paper to the www.arXiv.org preprint server containing, amongst other things, the outline of a proof of the Poincaré Conjecture, a famous conjecture in topology first articulated in 1904 by the great mathematician Henri Poincaré. Dr. Perelman also e-mailed a few selected mathematicians directly, drawing attention to his somewhat curious paper. This rapidly created a stir as the mathematicians realized that he might well have proven the Poincaré Conjecture, an extremely difficult problem that had eluded the talents of many top mathematicians including Poincaré. Perelman went on to post two more papers to arXiv.org elaborating his proof. The Clay Institute, which had offered a prize of $1 million for the proof (or disproof) of the Poincaré Conjecture, funded two teams of mathematicians to verify Perelman’s proof. The National Science Foundation also funded efforts to verify and expand upon the proof. By 2006, the “consensus” in the mathematical community was that Dr. Perelman had proved the Poincaré Conjecture. Dr. Perelman was offered the prestigious Fields Medal, close to the Nobel Prize of mathematics. He became the first mathematician to decline the Fields for reasons that remain somewhat unclear. Two recent books attempt to tell the story of Grigory Perelman and the Poincaré Conjecture. Masha Gessen’s Perfect Rigor is the first biography of the elusive and enigmatic Perelman. It gives a great deal of information about the world of Soviet mathematics in which Perelman grew up and Perelman’s life to date. The author was unable to interview Perelman who has declined nearly all interviews; he has given an interview to Sylvia Nasar and David Gruber for their New Yorker article “Manifold Destiny“, about which more later. The book suffers from an unremittingly hostile, perhaps jealous, view of the unusual Dr. Perelman, who is variously portrayed as extremely naive, weird, and possibly mentally ill. Dr. Perelman’s father was an electrical engineer and his mother a mathematics teacher at a Soviet trade school. His mother apparently had a strong interest in mathematics and almost pursued a doctorate before marrying his father. Perelman appears to have been involved in mathematics at an early age and joined a competitive math club. He competed and won a gold medal at the International Math Olympiad in Budapest, Hungary in 1982 at the age of 16. He attended a special math and physics school, Leningrad Secondary School #239, usually identified as “School 239″ in Perfect Rigor. He then became a student at Leningrad State University. In 1987, he became a graduate student at the Leningrad (subsequently the St. Petersburg) branch of the Steklov Mathematical Institute, the mathematics division of the Soviet (now Russian) Academy of Sciences. The mathematician Yuri Burago was his adviser. Perelman defended his dissertation in 1990. He continued to work at the Steklov Institute until 1992, publishing a number of papers in Russian and American mathematical journals. In the fall of 1992, Perelman came to the United States for a semester at the Courant Institute at New York University and then another semester at the State University of New York Stony Brook in early 1993. At New York University, he met and may have become friends with the mathematician Gang Tian. Perelman and Gang Tian traveled together from NYU to the Institute for Advanced Study at Princeton to listen to mathematics lectures. Then, Perelman became a prestigious Miller Fellow at Berkeley. During this time he proved the Soul Conjecture, a difficult problem in topology. His Miller Fellowship ended in 1995. He received several job offers from a number of top universities. However, he wanted a tenured position. His job offers appear to have been untenured, tenure-track positions. He returned to Russia and the Steklov Institute in 1995 where he was part of the Mathematical Physics group, dropping almost entirely out of sight, publishing nothing. He appears to have spent the next seven years working on the Poincaré conjecture. In 2002, he stunned the mathematical world by posting his proof to the Internet, flouting tradition by declining to submit the proof to a peer reviewed mathematics journal. The Clay Institute would fund mathematicians John Morgan and Gang Tian (Perelman’s friend or acquaintance at NYU) as well as a separate team at the University of Michigan to verify Perelman’s work in the form of a peer reviewed academic book. In 2006, the prominent mathematician Shing-Tung Yau and two of his former students argued that Perelman had published an incomplete proof which they “fixed” in a lengthy paper published in the Asian Journal of Mathematics. At this point the elusive Dr. Perelman appears to have struck back with a vengeance, possibly exhibiting something other than the naivete imputed in Pefect Rigor. Perelman granted a rare interview to Sylvia Nasar, best known as author of A Beautiful Mind about the mathematician John Forbes Nash, and David Gruber for an article in the New Yorker magazine, “Manifold Destiny,” which all but openly accused Yau and his former students of blatant plagiarism. The article quotes Perelman attributing his decision to decline the Fields medal and withdraw from the mathematics profession to the low ethical standards of the profession (in his opinion). The article also discusses the alleged rivalry between Yau and his former student Gang Tian, Perelman’s acquaintance from NYU and co-author with John Morgan of the book on Perelman’s proof. Yau threatened legal action against the New Yorker which stood by its story. Yau soon appears to have retreated under a storm of negative publicity and criticism within the mathematics “community”. By most accounts, Perelman is an unusual person. He left his job at the Steklov Institute and apparently resides with his aging mother in her apartment in St. Petersburg. He has reportedly indicated that he is no longer interested in mathematics and generally refuses interviews, prizes, and so forth. It is not unlikely that many prominent research universities and institutions would fall over themselves to offer him a tenured professorship or something similar if he expressed any interest. It remains to be seen whether he will decline the Clay Institute’s $1 million prize if offered. Without knowing more about Perelman and his adventures in mathematics than can be found in Perfect Rigor or other accounts to date, it is difficult to draw firm conclusions about the man or even his Notwithstanding, a few thoughts come to mind. Perfect Rigor and some other accounts implicitly criticize Perelman for his decision to turn down the job offers in 1995 and return to the Steklov Institute, imputing arrogance or just plain nuttiness. Some mathematicians and scientists would kill for some of the offers that Perelman turned down. Most major breakthroughs take a long time, usually five years or more. Perelman spent at least seven years on the Poincaré conjecture and he probably was working on it while in the United States. Most tenure track positions involve a seven year period. The assistant professor is up for review typically in six years; he or she usually must produce allegedly ground breaking work within six years. If he or she is denied tenure, he or she has one year, the seventh year, to find another job. Most assistant professors have acquired a spouse and small children by this time. There is considerable pressure to produce research papers, write grant proposals and raise money. Perelman apparently published nothing from 1995 until 2002. He most likely would not have gotten tenure had he tried to do this at any of the jobs that he turned down in 1995. There appears to be a long history of mathematicians developing serious psychological problems. The aforementioned John Forbes Nash succumbed to mental illness, diagnosed as paranoid schizophrenia, and was well known to Princeton students for wandering around campus scribbling incomprehensible formulas on blackboards. Kurt Gödel developed psychological problems and allegedly starved himself to death. Georg Cantor became increasingly erratic as he got older. There are many anecdotal accounts of high levels of concentration and mental efforts sustained over months or years resulting in a kind of mental exhaustion and other problems. Both the western and eastern literature of meditation, which often involves prolonged concentration, contain warnings about various adverse psychological effects including anxiety attacks and hallucinations. Disillusioned former adherents of various meditation movements or “cults” have alleged serious adverse effects of heavy meditation, meaning many hours per day every day, similar to those recounted in ancient traditional sources on meditation. Although computer programming can be exhilarating, many programmers appear to experience mental exhaustion and “burnout” after lengthy programming projects involving high levels of sustained concentration. In engineering there is an adage: “if you are one step ahead, you are a genius; if you are two steps ahead, you are an idiot!” Perfect Rigor portrays Perelman as astonishingly naive, protected from the “real world” by the bizarre Soviet mathematical system. While this may have some truth, a number of Perelman’s actions may exhibit much foresight, like a champion chess player sacrificing a piece for subsequent gain. Is pretending not to notice the alleged anti-Semitism (Perelman is a Russian Jew) in the Soviet mathematical system naive or politically astute? Declining the Fields medal, as some have noted, attracted enormous attention to Perelman. He is now one of the best known recipients (or non-recipients in this case) of the Fields Medal. It also gave him a great deal of moral authority which he seems to have used effectively to fend off Shing-Tung Yau’s alleged attempt to steal credit for proving the Poincaré Conjecture. Refusing to grant interviews also means that Perelman probably has a great deal of leverage with journalists in the rare cases when he grants an interview, as he did with such great effect in The New Yorker in 2006. Perelman was a math prodigy, returning home with a gold medal and a perfect score from the 1982 International Math Olympiad. Prodigies are often not as successful as one might expect. Math and physics prodigies often flame out, sometimes catastrophically. While prodigies are more common among people who make major inventions and scientific discoveries than in the general population, they are not nearly as common as most people probably think. Perfect Rigor portrays Perelman’s success in proving the Poincaré Conjecture as a logical consequence of his youthful training and competition in the sometimes bizarre Soviet mathematical system. Since Perelman has revealed little about the process of his discovery, this is difficult to evaluate. Prodigies often run into problems and don’t realize their seeming potential later in life. This has been observed in math, physics, and other fields for many generations. There are probably several causes. Some prodigies are probably frauds, manufactured by ambitious parents; that such people fail to make major breakthroughs is not surprising. Some prodigies are probably the product of a hothouse environment, driven or manipulated by parents or others to practice heavily and perform at an unusually high level that is difficult to sustain. As they get older and establish their own lives, other interests or needs intervene. Some prodigies undoubtedly fall afoul of politics that they are ill-prepared to deal with. Academic homework, exams, competitions like the International Math Olympiad, admissions exams such as the SAT or GRE exams in the United States, specialized exams and competitions such as the famous Putnam math examinations, and so forth do not necessarily either teach or measure some of the skills required in actual invention or discovery. Exams and homework in math and physics tend to test the ability to accurately and quickly perform certain calculations or apply certain known mathematical methods to a problem. Some people either through heavy practice or rare natural ability can learn to perform these calculations rapidly with negligible error. This does not translate directly into the ability to handle unsolved research problems which often seem to require large amounts of frustrating trial and error and often deeper understanding of concepts, mental visualization, and so forth. Many topics taught at a high school, college, and even beginning graduate school level are quite mature. Logical and technical flaws that abound in original research papers have been cleaned up and eliminated. Teachers and textbook writers have learned how to present the material clearly so that a bright or highly motivated student may be able to easily master the material quickly. Prodigies can sometimes read a textbook and immediately start performing the methods described in the textbook very accurately. This becomes more difficult as one reaches the “bleeding edge” where the available learning materials are original research papers or badly written textbooks that may contain errors, impenetrable jargon, opaque language, and even deliberate obfuscation of logical or technical flaws. Prodigies may encounter a sudden drop off of their remarkable abilities which they may inaccurately attribute to a lack of the magic “ability” required for the field rather than the immature state of the bleeding edge knowledge. Perelman presumably navigated these difficulties as he progressed in mathematical research. One is reminded of the old sayings “actions speak louder than words” and “talk is cheap”. If Perelman’s proof stands the test of time, he has done much. If he is sincere in declining prizes, honors, and adulation, he sets an example by his actions. In reading Perelman’s story, one also cannot shake the impression that he may have had some unhappy experiences during his stay in the United States and went home silently vowing “I’ll show them,” which he apparently has. The Poincare Conjecture Donal O’Shea’s The Poincare Conjecture is a more pleasant book to read than Perfect Rigor, lacking the hostile tone of Perfect Rigor and sugar coating a number of topics. Perelman is “eccentric”. Little is said about “Manifold Destiny” or the ugly priority dispute. O’Shea focuses on the history of geometry, the Poincaré Conjecture, mostly inspiring stories about great mathematicians, and tries to explain the mathematics of the Poincaré Conjecture to a general audience. On the whole, The Poincare Conjecture is an enjoyable and informative book to read. O’Shea carefully debunks the myth that scholars in the Middle Ages and the ancient world believed the Earth was flat. He gives an interesting account of Columbus, the slow discovery of the exact shape and geography of the Earth, confirming the ancient theory of the spherical Earth. He slowly and deftly leads the reader through the history of mathematics and geometry to the Poincaré Conjecture, the many failed attempts to prove it, and the seeming final solution by Perelman. Some of the illustrations leave a bit to be desired. In discussing mathematics in the ancient world, O’Shea uses modern CIA maps of the modern world to show the ancient Greek kingdom of Ionia where Pythagoras was born and to show the Middle East. One map, for example, shows modern Bulgaria which did not exist in the time of Pythagoras. Similarly, O’Shea is discussing ancient Babylonia and Persia but the associated map shows modern Iraq and Iran. Hopefully, this will be fixed in a future edition. Some of the discussion of hyperbolic geometry and most of the chapter on Poincaré’s topology papers, which presents the actual Poincaré conjecture, could be improved. The diagrams and explanation on page 27 in the chapter “Possible Worlds” showing how the surface of a two-holed torus can be mapped to an octagon is hard to follow. O’Shea returns to the two-holed torus and the octagon in Chapter 10, “Poincaré’s Topological Papers”. Probably many readers will have forgotten the discussion on page 27 by then. The term “natural geometry” is used in this chapter but not defined clearly. A number of diagrams in this chapter are small and difficult to follow. Interested readers can find a better explanation of some of the relevant aspects of hyperbolic geometry in the second chapter of Roger Penrose’sThe Road to Reality which features some entertaining Escher prints showing the so-called “Poincaré disc model” of hyperbolic geometry (first discovered not by Poincaré, but by Eugenio Beltrami, Penrose carefully points out). One can only go so far with analogies to rubber sheets or cloth fabric in describing topology and especially differential geometry. This is a problem many popular mathematics and science books encounter. If we had a better way of explaining and introducing differential calculus to a general audience, this would improve the general public’s ability to follow issues in mathematics and science and also improve our educational system. Pure mathematics today suffers from a particularly opaque and confusing language. It now typically takes several months for a skilled person to master the arcane language of modern pure mathematics. Abstraction has been taken to an extreme. Words and phrases such as “algebra”, “ring”, “module”, “field”, and so forth have meanings in pure mathematics that differ both from common usage and the language of applied mathematics used in most engineering and also much physics. The Poincare Conjecture suffers in places from terms like “natural geometry” that have a special meaning in pure Both books focus on the genius of Perelman and famous mathematicians such as Gauss, Riemann, Poincaré, and others. Indeed, the subtitle of Perfect Rigor is “A Genius and the Mathematical Breakthrough of the Century”. This superman theory of scientific progress and a strong focus on extreme intelligence is common in popular science and math books and articles. The story of the Poincaré Conjecture, at least until Perelman, is a story of large amounts of trial and error (lots of error) as both books allude to. Henri Poincaré formulated the conjecture in 1904 and published an incorrect proof. Almost every year has seen publication or presentation of attempts to prove the Poincaré Conjecture. Numerous mathematicians, including very top mathematicians, have published incorrect proofs. Many different approaches to the problem have been developed. Most failed. Richard Hamilton developed the basic approach that Perelman built upon but apparently stopped making progress in the 1980′s or early 1990′s. It is common to find large amounts of trial and error in the detailed history of inventions and discoveries, including discoveries in pure and applied It is clear that Perelman spent at least seven years on the Poincaré conjecture. We have no idea how much trial and error and how much failure took place during those seven years. Perelman reportedly fixed two minor errors in his first paper in the subsequent two papers posted to www.arXiv.org in 2002 and 2003. Other inventors and discovers have frequently gone through long periods of trial and error and repeated failure before their “breakthrough”. While respecting Perelman’s accomplishments, we should also be interested in the precise process used to reach the answer and avoid attributing it to magical genius alone. Both Perfect Rigor and The Poincare Conjecture are interesting and informative books for general audiences. Even practicing mathematicians may gain some insights and new information from Perfect Rigor. Yet, Grigory Perelman remains an enigma. A definitive biography remains to be written. The world might learn a lot from more details on how he discovered his proof of the Poincaré Conjecture. (C) Copyright 2010, John F. McGowan, Ph.D. About the Author John F. McGowan, Ph.D. is a software developer, research scientist, and consultant. He works primarily in the area of complex algorithms that embody advanced mathematical and logical concepts, including speech recognition and video compression technologies. He has extensive experience developing software in C, C++, Visual Basic, Mathematica, and many other programming languages. He is probably best known for his AVI Overview, an Internet FAQ (Frequently Asked Questions) on the Microsoft AVI (Audio Video Interleave) file format. He has worked as a contractor at NASA Ames Research Center involved in the research and development of image and video processing algorithms and technology. He has published articles on the origin and evolution of life, the exploration of Mars (anticipating the discovery of methane on Mars), and cheap access to space. He has a Ph.D. in physics from the University of Illinois at Urbana-Champaign and a B.S. in physics from the California Institute of Technology (Caltech). He can be reached at jmcgowan11@earthlink.net. 8 Responses to “An Unreasonable Man” 1. The characterization of Perfect Rigor as exhibiting an “unremittingly hostile” and “jealous” view of Perelman is in my opinion utterly unjustified. 2. It looks like Dr Perelman has not left mathematics – according to Arxiv one of the papers “Finite extinction time for the solutions to the Ricci flow on certain three-manifolds” was last edited on Dec 21, 2009 3. Anyone with perseverance and talent, besides from solving Poincaré’s conjecture, can rule the world. 4. Excellent Article! I have read many books over the years on mathematics and mathematical geniuses and I agree that the level of concentration and dedication can make many seem nutty. The truth is social convention. If someone isolates themselves and focuses on one thing for years, they will lose track of a lot of social dynamics that the rest of us take for granted. It is also true that gifted mathematical minds are wired differently and do exhibit psychological ailments. The rigorous mathematical work can bring things ailments to the fore front. Perelman may be anti-social, but he may just be angry that someone else is trying to claim credit for something he worked long and hard to achieve. I would be angry as well in his shoes. The vultures always show up when they smell the hint of a rotting corps. In this case the corpse would be Perelman’s seeming retreat from possibly great opportunities in the field of mathematics. Starting with the decline of the Fields Medal. Or! Maybe he’s smarter than we give him credit for and now he is truly remembered and immortalized by having declined the Fields Medal and all those opportunities. He could come back and accept the medal and take the opportunity he really wanted. I suppose that remains to be seen. 5. Dr Perelman spent 7 years on the Poincaré proof. That was 7 years ago. Time to start watching arxiv.org. 6. March 18, 2010 First Clay Mathematics Institute Millennium Prize Announced Today Prize for Resolution of the Poincaré Conjecture Awarded to Dr. Grigoriy Perelman □ Thanks John. I saw that yesterday, and I plan to have a small article about it up on Monday.
{"url":"http://math-blog.com/2010/01/31/an-unreasonable-man/","timestamp":"2014-04-20T15:50:47Z","content_type":null,"content_length":"64776","record_id":"<urn:uuid:2692574d-8cdd-43af-a730-53ea28a8a206>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00642-ip-10-147-4-33.ec2.internal.warc.gz"}
Gerry Browning: Numerical Climate Models Gerry Browning of CIRA has contributed a post today discussing climate models. If you go to Google Scholar and search “Browning Kreiss”, you will get a list of formidable papers on numerical questions. Gerry has tried to distill the issues for a wider audience here. Recent awards include the NOAA Environmental Research Laboratories’ Outstanding Scientific Paper Award for: Browning, G.L. and H.O. Kreiss, “The Role of Gravity Waves in Slowly Varying in Time Mesoscale Motions,” Journal of the Atmospheric Sciences , 54, 9 1166 – 1184 (1997). Here is Gerry’s citation as winner of a 2002 Research Initiative Award from the Cooperative Institute for Research in the Atmosphere: In a series of papers, Browning and Professor Heinz Kreiss, a colleague and mentor, have extended Kreiss’ Bounded Derivate Theory (BDT) to multiscale flows in the atmosphere and oceans. There are many ramifications of this new theory. The first is that the well posed system introduced by Browning & Kreiss as a replacement for the ill-posed primitive equations (used in all models for large-scale atmospheric flows) also accurately describes both the dominant and gravity wave portions of all remaining atmospheric flows, i.e., the new system is the only well-posed multiscale system that accurately describes all atmospheric motions. The second is that the reduced system clearly indicates what balances are appropriate for all diabatic cases, i.e., it is the only method that has provided a hot start initialization for cases where the heating is the controlling influence on the solution. The impact of this theory is being felt in many areas, and Browning’s cuttingedge research and the potential for breakthrough applications in numerical modeling An Introduction to Climate and Weather Models Continuum Considerations Although there remains residual debate about the validity of various time dependent systems used to describe fluid motions, a number of these systems are in general use in both the engineering and scientific communities, e.g. the viscous, compressible Navier-Stokes equations, the Euler equations of gas dynamics (essentially the inviscid, compressible Navier-Stokes equations), and the magneto-hydrodynamic (plasma) equations. The continuum behavior of specific solutions of these systems can sometimes be understood by considering special cases (such as the propagation of sound) that lead to simpler systems that are more amenable to classical analysis. Sometimes simplifications of these systems are also made to make the numerical approximations of specific solutions of the continuum system on a computer more tractable. There can be two problems associated with either type of simplification. Although the original systems usually have known mathematical properties, e.g. the Navier-Stokes equations are a quasilinear differential system, the simplifications sometimes can lead to systems with unknown or even bad mathematical properties. A well known example of an equation with bad mathematical properties is the heat equation run backwards in time. A small perturbation of the initial conditions for this equation can lead to instantaneous, unbounded growth and time dependent systems that exibit this type of behavior are called ill-posed systems. It is quite surprising how often simplifications that have been made in practice have led to this type of problem. Therefore, any simplification of the original continuum equations should be checked to ensure that the simplified system accurately approximates the continuum solution of interest and is properly posed Difference Methods for Initial-Value Problems: Richtmyer and Morton Numerical Modeling Considerations Once the continuum system to be approximated has been determined to be properly posed, it can be approximated by a number of numerical methods, but all must be both accurate (consistent) and stable for the method to converge to the continuum solution of the initial-value problem (see Lax Equivalence Theorem in above reference). The accuracy of the numerical method determines how fast the numerical method will converge to the continuum solution, e.g. a fourth order method will take fewer mesh points than a second order method (assuming both are stable). However, the numerical accuracy can be reduced by a number of factors, e.g. errors in the approximations of the continuum equations or errors in the model. There can be two other significant problems with numerical models. If there are any boundaries present, those boundaries must be dealt with very carefully both in the continuum system and in the numerical model. This is an extremely delicate process and if handled improperly can reduce the accuracy of a numerical method and even lead to an incorrect solution. The other major problem is that the solution of the continuum system may contain a complete spectrum of waves, but a numerical model can only compute a finite part of that spectrum. This is a typical and very serious problem. Henshaw, Kreiss, and Reyna have determined the minimal scale that will be produced by the nonlinear, incompressible, Navier-Stokes equations for a given viscosity coefficient (molasses has a large viscosity and air a very small one). Convergent numerical solutions have shown that the estimates of this scale are extremely accurate. If the numerical model does not resolve the correct number of waves indicated by the estimate, the model blows up. If the model resolves the number of waves indicated by the estimate, the numerical method will converge to the continuum solution for long periods of time. Thus, if a numerical model is unable to resolve the spectrum of the continuum solution, the model is forced to artificially increase the viscosity coefficient or use a numerical method that has nonphysical viscosity built into the method. Analysis of Numerical Methods: Isaacson and Keller Time Dependent Problems and Difference Methods: Gustafsson, Kreiss, and Oliger Initial-Boundary Value Problems and the Navier Stokes Equations: Kreiss and Lorenz Large-Scale Weather Prediction Models Given the above brief introduction to time dependent partial differential equations and numerical methods for those systems, we can now discuss large-scale weather prediction and climate models. Clearly, the atmosphere contains motions of many spatial and time scales and no numerical model can hope to resolve all of those motions. For large-scale motions in the midlatitudes above the turbulent lower boundary layer, the inviscid, unforced Euler equations on a rotating sphere can be scaled under certain mathematical assumptions. For these motions, the vertical acceleration term is approximately 6 orders of magnitude smaller than the remaining terms in the time dependent equation for the vertical velocity and thus are typically neglected in large-scale weather prediction models leaving only the hydrostatic balance terms. The resulting system is sometimes referred to as the primitive or hydrostatic equations. The neglect of the vertical acceleration term made the equations tractable for computing, i.e. the inclusion of the vertical acceleration term would have required too small a time step to satisfy the stability criterion mentioned above, but altered the mathematical properties of the original system. After the derivation of the hydrostatic equations, approximations of the turbulent boundary layer, eddy viscosity (much larger than the true atmospheric viscosity and sometimes even of a different type, e.g. hyperviscosity), and all kinds of approximations to various atmospheric phenomena (parameterizations) are added onto the hydrostatic equations. In the fall of 2001, Sylvie Gravel (RPN) ran a series of tests on the Canadian large-scale operational large-scale numerical weather prediction model. The parameterizations could all be turned off and the turbulent boundary layer approximation greatly simplified without significant difference in the 36 hour model forecast. However, the large-scale weather prediction model quickly started to deviate from the observations at a later time and only by updating the winds in the jet stream every 12 hours did the model stay on track. (The satellite data did not help the model forecast unless there was also radiosonde data available at the same site.) A simple change in the data assimilation program based on the Bounded Derivative Theory had a substantial impact on the forecast and the Canadian global weather prediction model continues to perform better than the NOAA global weather prediction model even though the latter model employs a more accurate numerical method. Browning and Kreiss, 1986: Scaling and Computation of Smooth Atmospheric Motions Tellus, 38A, 295-313 (and Charney reference therein) Browning and Kreiss, 2002: Multiscale Bounded Derivative Initialization for an Arbitrary Domain, JAS, 59, 1680-1696 CMC Website Climate Models The updating discussed above is not possible in a climate model and because climate models use even a coarser mesh than a large-scale weather prediction model, they must use an effectively larger viscosity than a global weather prediction model. Recently (BAMS, 2004), it has been shown that a climate model also deviates from reality in a matter of hours because of the errors in the parameterizations (not unexpected based on result above) and over longer periods of time the effectively larger viscosity causes the numerical solution to produce a spectrum quite different than the real atmosphere unless forced in a nonphysical manner. 51 Comments 1. I’ve discussed some related issues in a couple of earlier posts. The Navier-Stokes equations discussed briefly here are a highly intractable problem. One of the Clay Institute’s 21st Century Hilbert prizes is for any progress on understanding the equations. I also posted up a note on Holloway on ocean dynamics, who seemed to have got to similar conclusions to Browning from a different vantage point. Here are a couple of quotes from Holloway: traditional geophysical fluid dynamics (GFD), with traditional eddy viscosities, violates the Second Law of Thermodynamics, assuring the wrong answers. … In principle we suppose that we know a good approximation to the equations of motion on some scale, e.g., the Navier–Stokes equations coupled with heat and salt balances under gravity and rotation. In practice we cannot solve for oceans, lakes or most duck ponds on the scales for which these equations apply. This enterprise is like seeking to reinvent the steam engine from molecular dynamics’ simulation of water vapour. What a brave, but bizarre, thing to attempt! I also posted up some comments on Kaufmann and Stern who observed that: none of the GCM’s have explanatory power for observed temperature additional to that provided by the radiative forcing variables that are used to simulate the GCM… They have had difficulty getting this paper published, despite being well-known authors. Kaufman entered into an interesting exchange at realclimate raising some pretty salient issues. He was too prominent for Gavin to simply censor him, so Gavin asked that the discussion, which was substantive and very interesting, be taken offline. I expressed my contempt here. 2. There’s also the issue of correct radiative modeling of the atmosphere. Instantaneous doubling of CO2 is not exactly physical, except maybe in the case of a giant meteorite hitting the earth. 3. Re: #2 Somehow I think that the greenhouse effect of large meteorite hitting the earth will be the least of our problems… 4. It’s not clear if the shell game on this one can continue indefinitely. On the one hand we read in Kaufmann’s RealClimate comment he’s told that the 2001 models are too old to be important, but Gavin indicates that the mean SAT is “a done deal”. I wonder who’s signed this contract and where it is? Apologies if I’m reading this wrong. none of the GCM’s have explanatory power for observed temperature additional to that provided by the radiative forcing variables that are used to simulate the GCM… So perhaps instead of a carbon tax to reduce AGW what we need is a VAT for GCMs. 6. It also doesn’t matter how stable and well-posed a numerical model becomes, if the physics underlying the model is poorly understood. “Recently (BAMS, 2004), it has been shown that a climate model also deviates from reality in a matter of hours because of the errors in the parameterizations (not unexpected based on result above) and over longer periods of time the effectively larger viscosity causes the numerical solution to produce a spectrum quite different than the real atmosphere unless forced in a nonphysical Modelers have shown* that state-of-the-art GCMs cannot predict global climate further out than a year, and cannot predict smaller-scale regional responses further than 10 years. Despite this, the GCMs are used to support the ‘disaster in 2100′ claims of AGW. The disconnect between the state of the actual science and the claims made in its name is incredible. *e.g., M. Collins (2002) “Climate predictability on interannual to decadal time scales: the initial value problem” Climate Dynamics 19, 671-692, and; Collins, ea (2002) “How far ahead could we predict El Nino?” GRL 29, 1492-95 7. Apart from the problem of constructing a correct physical model, there is the problem of implementing it correctly. I made some comments about this last year at Roger Pielke Sr’s blog, http:// I can recommend those interested in GCMs the posts over there at http://climatesci.atmos.colostate.edu/category/climate-models/ 8. First I want to thank Steve for his diligence. I visit both this site and RC and appreciate the back and forth between the competing camps, with exception of the occasional venomous ad-hominem attacks. I am a skeptic by nature and believe that the best science occurs when hypothesis and theory survives aggressive challenges intact. Throw the things repeatedly against the rocks and see if they continue to stand. All of you here at ClimateAudit are doing a fine job providing this service. No matter the eventual outcome of this debate, there are few better examples of the real scientific process than this. I know that CM’s can be very helpful in many areas of scientific / engineering research. But it amazes me that so many people put so much faith in them, considering the garbage in / garbage out nature of data crunching for predicting long term trends, especially when dealing with complex systems containing many unknowns. I have heard some PETA types advocate for banning drug or cosmetic testing on animals, because, they say, the modern computer is capable of simulating the testing process. What they can’t seem to understand or grasp is that it is impossible to code every single chemical process in the animal / human body. For as long as we humans have been poking, prodding, analyzing and tearing through the human body, there is still so much we don’t know, which is why drug manufacturers don’t put a drug on the market simply because it did well against a bacteria or virus in a petri dish. GCM’s are no different. They only show possible outcomes based on the limited data fed into the model. Then again, what do I know. I’m a geology school drop-out (calc killed me). PS. I almost misspelled geology. 9. I haven’t found any 2004 climate model papers for either GL Browning or Browning and Kreiss. Is it possible to get the actual citation for “BAMS (2004)”? Thanks 10. Does anyone else have a problem with calling these things ‘gravity waves’ instead of just atmospheric waves? Is this some kind of physics envy? 11. Climate is probably a Chaotic system. Chaos Theory tells us that any any computed prediction of climate will increasingly diverge from reality as time increases. Thus no GCM will be reliable. Climate Prediction Net has surely demonstrated that. 12. Hello all. I think that this is a fascinating discussion, and maybe scale is at the heart of the problem. When we try and anlayse reductively a complex system like the sea, we are unable to make meaningful predictions. For example, we can’t predict the movement of a sand grain in a breaking wave, nor the height of the 10th next wave. This seems to me to be analagous to the chaos and unpredictability of the weather. However, we can predict the evolution of other characteristics of the ocean system (e.g. the timing of high tides). This is analagous to predicting the large scale behaviour of the climate system. Certain large-scale characteristics of complex systems emerge at the large scale. What we appear to be dealing with is the tension between reductionism and emergence as explanatory devices in science. 13. # 10 – Gravity Waves I do have a problem with them, apart from the obvious, but it’s logical when dealing with a strictly mechanical system as the model for the atmosphere. The ubquitous presence of lightning somewhere on the surface of the earth at any given time, together with the enormous voltages measured over hurricanes (some +6K V), would suggest that a simple mechanical model for the atmosphere is over simplistic, despite the elegant mathematics. Much like asking a mechanic to describe a modern computer circuit board when the only objects allowed are spanners, spark plugs and batteries. 14. Re #13. I mean they are just a form of atmospheric wave, not ‘waves of gravity’ in the sense predicted by general relativity theory. 15. http://en.wikipedia.org/wiki/Gravity_wave For the concept in physics, see gravitational radiation. In fluid dynamics, gravity waves are those generated in a fluid medium or on an interface (e.g. the atmosphere or ocean) and having a restoring force of gravity or buoyancy. gravity wave”¢’ ¬?(Also called gravitational wave.) A wave disturbance in which buoyancy (or reduced gravity) acts as the restoring force on parcels displaced from hydrostatic equilibrium. 16. Re #16. It is not that simple. Lorenz found that simple mechanical systems can exhibit chaotic behaviour, i.e. small differences in initial conditions will grow exponential as time progress. But all solutions will stay on the chaotic attractor (the Lorenz butterfly). So even if you cannot say where on the attractor a solution will be, you know that it is on the attractor. Thus, even for chaotic systems you can extract information from the solutions, in principle. There are a lot of things to critique regarding computer modeling, but the fact that a system is chaotic does not rule out the possibility of, e.g., extracting average properties. 17. #17, if you look at the Lorentz Butterfly, here for example with two attractor sites, you’ll see that small changes in initial conditions (click rapidly twice) means that over time your prediction can be ~180 degrees out of phase with physical reality. And for a chaotic system, that would be true even if your physical model was exactly correct. If you think of the inner trajectories of the two attractors as representing glacial or tropical climates, resp., with temperate climates represented by trajectories further out along each radius, then it would be possible for your model to be predicting temperate climates while reality was, e.g., a glaciation. As surface temperatures would certainly follow such climates, the conclusion is that not even global average surface temperatures are predictable at long times, even with a perfect model. 18. Re #18 Yes, that is what I just said: small differences in initial conditions will grow exponential as time progress. My point is that for different set of parameters you get different attractors. Making a hypotetical analogue to climate predictions, x2 CO2 might give you an attractor with different properties (average temperature) compared to the attractor for x1 CO2. One cannot conclude that no information can be extracted just because a system is chaotic. 19. #17-19: You and the climate modelers are assuming that a chaotic attractor is a useful concept for a system with literally millions of degrees of freedom. It works well for the Lorenz equations because there are only three degrees of freedom and it’s easy to integrate the equations for a great many cycles so that it’s possible to extract average properties. To do the same with the climate models you have to make a lot of assumptions. Assumption number one is that the equations of the climate system are completely known and understood. This is a big and very questionable assumption. Just wander over to Roger Pielke Sr’s web site for the new and different climate forcing of the day. Assumption two, you can accurately integrate the equations without wandering off the attractor, whatever it means to have a multi-million dimension attractor. The fluid dynamics people don’t even consider the concept of an attractor particularly useful except at the very onset of turbulence. Once turbulence sets in statistical measures are more meaningful and useful. (See the thread on this web site by Gerry Brown for problems with integrating the climate equations.) Assumption three: Even if one and two are ok, you still have to integrate the equations for a very long time to be able to generate enough phase space trajectories to be able to take the averages. Is there enough computer power in the universe to do this? Integrating one or even a few hundred sets of initial conditions to simulate 100 years of climate is not enough. You have to integrate thousands(millions?) of initial conditions or a few initial conditions over a very long period of simulated time. You’d like to do both to make sure that you’re doing things right. OK, supposing that you are successful at integrating the multi-million degrees of freedom equations. Now you want to use it to predict the climate. You still have to measure all those millions of degrees of freedom in the real world and plug them into your model to make predictions. Good luck or you better start getting real clever in reducing the dimensionality of your model so that it’s actually tractable for use with available data. 20. #20 (and #19), not only that, but in any set of attractor trajectories, the average for any given time is a transect across the entire set of possibilities. Such an average is likely to have very large uncertainties, making a time-wise ‘prediction’ extremely uncertain. One could use a good model to study the behavior of the system under various conditions, and discover what forcings can radically perturb a given quasi-stable state. However, that is not the same as predicting which new stable state, of the set of possible stable states, will emerge. 21. [Just stopping by quickly in the midst of of wild times... no time for substantial reflection for a month or three :(] Question: is there a standard statistical/numerical method that can be used to create an hypothesis surrounding ongoing discovery of important new elements in GCMs? I’m thinking this should be doable in a manner similar to proven/unproven natural resource reserves, and our confidence (or lack thereof) in the current or future state of GCM’s. * January 2006, it is announced that “…plants worldwide produce millions of tonnes of methane each year, with the greatest share coming from the tropics, and that the plant contribution is likely to count for 10–30 per cent of annual methane emissions…plants produce more methane at higher temperatures, the amount doubling every ten degrees above 30 degrees Celsius.” Let’s put that in context. 2001 total radiative forcing was 2.425 W/m2 2005 Updated Total was 2.7911 (a 15.1% increase) 2006 discovery (above) now produces a total of 2.9351 (another 5.2% increase, a 21.0% increase over the 2001 estimate) Interestingly, ALL of the increases recorded are not due to measured changes in prior values, but to additional factors not previously identified. We could be silly and fit this data to a growth curve, to produce a reasonable (???) estimate that the total forcing discovered by 2020 will be ~18.3 W/m2, and thus that perhaps a discount rate of (1 – (2.9/18.3))= 84 percent could be applied to current GW forcing estimates (based on the assumption that we understand ‘A’ sources much better than ‘non-A’ sources). But let’s not do that. Instead, I’m curious how the above facts about ongoing factor-discovery impact quantitative confidence levels attributable to the various models and estimates used in climate science. I’ve seen awesome confidence level statements in many current papers. How much should the error bars grow, to take into account our obvious need for humility in claiming GCM completeness? 22. Oops, a typo: that 18.3 value is for 2025. [For the curious, I just used Excel's GROWTH() function to create the trend.] 23. #22. There were clerical errors in HITRAN for the near infrared absorption of water vapor used in IPCC TAR which were larger in wm-2 than the impact of 2xCO2. To my knowledge, no one ever reported on what the GCM s looked like merely with the changed water vapor values. Instead, the models were re-tuned. 24. While I’m at it: I’ve yet to discover evidence that missing data is generally being handled in a proper way. As others have noted, this can be an extraordinary source of misunderstanding, miscalculation, and so forth. Put simply, zero and “I dunno” are NOT the same. And interpolating to fill gaps is not a valid solution, particularly for noisy data sources. When a new measurement source is added to the mix, it is wildly inaccurate to presume that its data can simply be incorporated into the calculations from year XXYY on, and ignored before that date. Yet that is mostly what appears to happen here! I was stunned when I saw that. As shown in my example posted above, one can observe an “increase” in measured data simply by converting values from “unknown” to a known value. Yet, it may be that no increase of any kind has been actually measured; all that’s happened is more complete data is available. See any parallels to financial hype? Sigh. 25. #24 “the models were re-tuned.” Wow. 20/20 hindsight is such a wonderful thing! Who cares if the hypothesis represented by the old model fails to accomodate the new information? We just create a new model (hypothesis) and “move on.” Does any climate model, anywhere, attempt to accomodate future climate discoveries that might tend to invalidate present understanding? Steve M, I presume that natural resource models/estimates are relatively sophisticated in this regard, correct? (I’m becoming suspicious the whole house may fall down if realistic uncertainty levels are inserted… but don’t have the mathematical muscles to prove it. Is it possible to calculate something along the lines of “this modeling methodology will fail to converge if the data uncertainty exceeds N percent”?) Sorry for my uneducated and ignorant vocabulary. I’m a practician (knowing how numbers “feel” in practice), not an academic (able to prove how the numbers ought to work). No slight intended to either area of expertise BTW — we need both! 26. Re #20 why do you write You and the climate modelers? I did not write about climate models above, but commented on chaotic systems in general. Please read what I wrote in #16: There are a lot of things to critique regarding computer modeling, but the fact that a system is chaotic does not rule out the possibility of, e.g., extracting average properties. Thats all I said. Then you get into a different discussions with all the assumptions going into climate models, that I mainly agree with (and could add some more). 27. #27 Matt, my apologies. I read your statement as a defense of the current state of climate models. 28. #27 Please expand on the add some more in assumptions going into climate models, that I mainly agree with (and could add some more) 29. #26. With respect to the multiproxy models, the claimed confidence intervals are wildly inappropriate. If you look at the posts and exchanges in connection with MBH Confidence Intervals (see MBH98 Category), they calibrate confidence intervals in MBH98 (and other studies) based on calibration period standard errors. The standard error of the residuals is MUCH higher in the verification period (verification r2 of about 0 is the same effect expressed in different terms). Calculation of confidence intervals based on the verification period would lead to confidence intervals from the floor to teh ceiling. Wahl and Ammann argue that MBH somehow salvages “low-frequency” results but then you have only a couple of degres of freedom and again no ability to establish confidence intervals. 30. #30 “they calibrate confidence intervals in MBH98 (and other studies) based on calibration period standard errors” The implication of this statement is finally sinking in. Unbelievable. Another case of obfuscation via fancy terminology. That’s not a confidence level, it is (perhaps) a “quality of fit” measure, and of course only applies to the calibration period itself. I realize I’m saying nothing new: standard error of a calibration period cannot possibly predict confidence levels for the future/past. Obvious when observing any measure you like, for real-world data that has some level of variability. An acquaintance was recently asked to provide a peek into the future for some high net worth folks. He spent the first half hour demonstrating that nobody has ever predicted significant futures with any accuracy. I’ll see if I can dig up his data sometime… 31. I think the tendency to present “scenarios” in climate modeling rather than probabilistic projections is entirely because the researchers want to finesse the issue of how much confidence one could reasonably have in the forecasts. I think there have also been indications that the error terms in the model forecasts are likely to be highly positively skewed so that the extreme upper projections that get emphasized in the media releases are extremely low probability events. I think that scientists in the climate field should start demanding their GCM colleagues produce probabilistic forecasts that would be more useful for others trying to do related research into likely ancillary effects, likely costs and benefits of different actions etc. The “scenario story lines” are really worthless as a basis for any sensible policy analysis. 32. #31. it’s hard to believe that some of these practices could be in usage. I think that’s why people tend to disbelieve some of my findings. 33. The Idso’s web site has an interesting review of what appears to be a related paper Williams, P.D. 2005. Modelling climate change: the role of unresolved processes. Philosophical Transactions of the Royal Society A 363: 2931-2946. It appears to propose “stochastic techniques” as “an immediate, convenient and computationally cheap solution” to the problem that some potentially important climatological phenomena are simply too small to be adequately modeled at the present time. I guess this raises an issue that is perhaps contrary to my previous post #32 as to whether stochastic models can really serve as an adequate simplification of chaotic models that cannot be solved. There is also the issue that even if the underlying non-linear dynamic equations could be solved for known parameter values, there is uncertainty about some of the parameter values. What happens when you mix non-linear dynamics with stochastic parameter distributions? Can some of the physicists here enlighten us? 34. Re 32, thanks, Peter, an interesting post. You say: I think the tendency to present “scenarios” in climate modeling rather than probabilistic projections is entirely because the researchers want to finesse the issue of how much confidence one could reasonably have in the forecasts. I think there have also been indications that the error terms in the model forecasts are likely to be highly positively skewed so that the extreme upper projections that get emphasized in the media releases are extremely low probability events. I think that scientists in the climate field should start demanding their GCM colleagues produce probabilistic forecasts that would be more useful for others trying to do related research into likely ancillary effects, likely costs and benefits of different actions etc. The “scenario story lines” are really worthless as a basis for any sensible policy analysis. There is an interesting paper on this question at the Climate Science web site. The basic message of the study is that the size of the albedo is very poorly represented in the climate models, with a mean! error on the order of 3-4 w/m2. Since this error is about the size of the IPCC value for a doubling of CO2, you can see what this would do to a confidence interval for their results … like Steve has said, “floor to ceiling” … 35. Willis: Knowing that the distribution would reach from “floor to ceiling” certainly would have dramatic implications for greenhouse policy. Such a situation would imply that any investments in controlling CO2 would be much more risky in terms of their payoffs. That in turn would imply that the expected return on those investments would have to be much higher (ie the mean “bad effects” from the increase would have to be greater) in order to justify the policy as a rational expenditure of resources. Perhaps even more significantly, an increase in the variance of the effects would greatly increase the value of any “options” (in the financial sense) associated with greenhouse policy. In particular, the benefits of waiting to get information that reduces the uncertainty in the forecasts before taking action increase substantially. That is, waiting for more satellite observations etc to settle the issue of how well the models really work becomes a much more attractive alternative. 36. Further to #36: Thinking about this some more, the options point implies that we are interested not just in the mean and variance of the distribution but also the “tails” — skewness, kurtosis etc. Indeed, one can think of the “precautionary principle” as an analogous options pricing issue. If there is a really bad outcome and an available option that you are sure would enable you to avoid that outcome, the value of the option would be increased by anything that raised the probability of the bad outcome. The conclusion si that to make sensible deductions about greenhouse policies one needs the models to deliver forecasts not just of means and variances but also higehr moments of the outcomes. The scenario story lines are a wholly inadequate basis for constructing rational policies. 37. #34 What happens when you mix non-linear dynamics with stochastic parameter distributions? I don’t know about stochastic parameter distributions but I can say something about what happens when parameters change in a non-linear dynamical system. Most of the time the attractor changes shape but basically looks the same, it’s fatter, thinner, … But there are parameter values where the system will bifurcate and suddenly the attractor will be different. For example it might change from periodic to chaotic. My favorite is hysteretic transitions. In this case as you increase a parameter P there will come a value Pb where the system switches from attractor A to attractor B. But on lowering the parameter the switch back to attractor A won’t occur until you reach Pa. This means that there is a region Pa Taylor-Couette flow is famous for doing this. This throws an extra twist into nonlinear systems beyond the famous Lorenz “Butterfly effect.” Not only can’t you model the future very well because small errors get amplified exponentially, you also have to know how you got there because a different parameter path to the identical set of parameters could lead you to a completely different scenario. 38. End of the second paragraph #38 should read This means that there is a region Pa less_than P less_than Pb where two distinct states are possible but the one you’re in depends on past history. This may sound like fiction but Taylor-Couette flow is famous for doing this. 39. #36: “an increase in the variance of the effects would greatly increase the value of any “options” (in the financial sense) associated with greenhouse policy.” …particularly so since, as in the matter of tree-sourced methane, what was once thought to be a beneficial “option” may turn out to accomplish the opposite of what was intended at the first level. (Yes, planting trees has other benefits so no we don’t cut down all the forests… just saying that to avoid troll responses…) 40. The problem with computer models is far deeper than getting the equations right. Complex kinematic systems where the underlying equations-functions are known precisely cannot be accurately modeled. The output of the model diverges from reality as time proceeds in the model. In fact, when I visited NCAR in the late 90′s they had a display that illustrated the point. It was a swinging arm kinematic device that looked simple enough to any physics 101 student. But the problem was even though the kinematics was completely understood, the computer models all diverged from reality as time progressed in the model. Reason enough for extreme scepticism at any computer model that claims skill at predicting the future. That would be any future whether the economy, the stock market, climate, resources, population, etc. 41. James Annan has an extremely revealing observation in the context of this thread (see reply 6 at Pielke’s site) You seem to be defining “skill” simply in terms of the model agreeing with the observations (to within their error), which is clearly an invalid use of the term, because under this definition, no model can ever be expected to have skill. That’s a simple exercise in elementary algebra. I’m flabbergasted by this statement, though I probably shouldn’t be. 42. re: #42 In this case I agree with Annan, at least within the context where he was speaking. The definition of skill being used was that the model must do better than some other method of post-dicting the known climate. His point that you’re flabbergasted by was simply that using the actual instrumental data as the ‘method’ creates an impossible barrier. It’s impossible by definition for a model to do better at matching the actual climate than the measurements of climate themselves. Of course, as perhaps Steve will explain, this isn’t where MBH98 fails. 43. Re 42, 43: I don’t understand Annan’s point. He says that “skill’ consists doing better than some other prediction method. Post 43 agrees. But Pielke is not proposing that the computer models should “do better at matching the actual climate than the measurements of climate themselves” as # 43 proposes. For example, Pielke says: Skill can be quantified, for instance, by a model forecast as having a root mean squared error of 1.2C with respect to 500 hPa temperatures, for example. This is one type of model “skill”. The idea that Pielke is asking that the computer models should do better than the observations is a straw man. Pielke is only (and reasonably, in my opinion) asking that the results be what I call “lifelike” — that the standard deviation, mean, first derivatives, etc. of the model results be similar (in some specified mathematical sense) to the observational data. The appropriate metric is not, as Annan proposes, whether a computer model does better than some alternative method such as persistence or climatology. It is whether the results are in close agreement (again in some specified mathematical sense) with observational data. Annan claims that There are no examples using your definition to be found in the literature, because it simply doesn’t make sense. This is simply not true. The “Gerrity Score”, for example, is used by the UK Met Office to assess the “skill” of their forecasts. From their web site: The Monthly Outlook includes forecasts of expected temperature and rainfall categories. Five categories are used; (1) well below average, (2) below average, (3) near average, (4) above average or (5) well above average conditions for the time of year. To assess the accuracy of the forecast we compare the predicted category with the category that was actually observed to occur. We use a points-based scoring system in which maximum points are awarded to forecasts that are ‘spot on’ (i.e. the forecast category exactly matches the category that actually occurred), fewer points are awarded for ‘near misses’ (e.g. the forecast is wrong by one category), and points are subtracted for misleading forecasts (i.e. a forecast of above normal when below normal is observed). The score used is called the Gerrity Skill Score (GSS), and is one of the scores recommended by the World Meteorological Organization (WMO) for evaluation of long-range forecasts. The score is designed so that forecasts that are always ‘spot-on’ would achieve a score of 1.0, and forecasts based on simply ‘forecasting’ the long-term average (category 3) would receive a score of zero. Thus a positive score means the forecast is better than guesswork and better than assuming future conditions will be similar to the long-term average. Although the theoretical maximum score is 1.0, best scores achieved at the monthly range are of order 0.6, and found in the more predictable tropical regions. Note that they are not comparing to other forecasts, but to observational reality. I have repeatedly asked Gavin Schmidt whether they use the Gerrity Score or some other method to assess the “skill” of their models. To date, he has refused to reply to the question. 44. #43. All the uproar in climate science is because the models are predicting catastrophic warming a century from now. Yet Annan doesn’t want to test the models against observations. There is no scientific context in which his statement makes sense. What possible other criterion is there? If the models don’t agree with the observations why the excitement? Model versus model is just an exercise in computer generated fantasy. Who cares except the people who wrote the code? 45. Well, I’m not claiming that the definition of “skill” used by Annan et. al. is all that useful, at least not if you don’t have a process for estimating future temperatures which you’ve presented and are comparing your model to. Still, something like the HS isn’t the same as a model, so one can compare them to see which has the most ‘skill’ in matching the observational data. However there are other possibilities such as Annan mentions, such as assuming the continuation of various cycles and trends. I don’t know that this is actually what MBH98 did however. But I’ll wait to see what Steve has to say on the subject. 46. Hartley’s econ-jock oriented comments were very interesting. However, I doubt that even if the danger is well identified, that we will really take firm steps to limit GHG’s. The large issue is NOT Kyoto. It is the funding of Mannites and other sillies. 47. It’s really annoying to me, as a data driven guy, to see how after he made his post at RC, there came a flurry of scare mongering, junk science links, which were as far away from any sort of real mathmatical rebutal as tree sitters are from particle physics experiments. 48. RE: #20. This is the essence of the problem. Full stop. 49. RE: #32. Monte Carlo Simulation would be a start, but with millions of degrees of freedom, that’d be some fair gnarly calculatin’ ;) 50. There was this very interesting post on modelling on Roger Pielke Sr.’s blog, by a fellow named Gregor L. (http://climatesci.atmos.colostate.edu/). I’m copying it here for your interest: “After reading several of the comments, I had to chime in given that I formerly studied to be both a statistician and atmospheric scientist but am now engaged in creating highly sophisticated economic/statistical models for one of the top 5 largest hedge funds. Because of the proprietary nature of what I do, I cannot go into the nature of the models that I create and work on, but suffice it to say that both the complexity and computing power required are entirely on order of many GCMs – one may notice how many big/fast computers in the top 500 list are at financial institutions or trading organizations. Modeling financial markets is brutally difficult. The fundamental relationships are highly nonlinear and the associated data possess pathological distributions. Even worse, the nature of the markets means that one does not have a convenient set of Navier-Stokes equations upon which to ground the modeling, but instead has to deal with: 1) Underlying equations that may not be knowable from any form of first principles, and 2) Even when known, equations that are nonstationary in a very nasty and nonlinear way owing to the psychological/sociological aspects of financial markets More importantly, we have a very natural measure of skill. It is: 1) Be better on a relative level than our competition, and 2) Be right significantly more often than we are wrong The climate community too often embraces only the first aspect. On the other hand, if we ignore either one of our “skill” dimensions in my line of work, then the lights get turned off and we are out of business. Moreover, because of the very natural feedback of our industry, we cannot “game” our models. For example, I cannot create a model that looks great in testing but loses money in actual use; the finance literature is full of such models, but using them will cause one to go bankrupt. In the same fashion, I cannot examine behavior in individual financial markets and then “tweak” various differential equations and statistical parameters so that the model forecasts better match that market’s returns – such gamed models also fail miserably when used in reality. Now I will tie this in more directly with climate research. The measure of skill quoted by Pielke from Annan’s website: “Forecast skill is generally defined as the performance of particular forecast system in comparison to some other reference technique.” is exactly the definition I would expect from someone who does not have to be right more often than wrong. In fact, unlike what I do in the applied finance world, there has been little feedback whatsoever on whether or not climate model forecasts created to date have been correct thus far. Instead, the feedback seems to be how well one can fit a GCM to an observed data set in a publication (such data include historical near-surface temps, satellite observations, etc.) to see if it indeed matches historical data. Even though such GCM forecasts are integrated over time, the continual process of re-running and re-running them until better matches are created is a form of in-sample maximization that provides no information on what the model’s realized forecast efficacy will be (even the in-sample matches are currently poor, as has been noted above). From an outsider’s viewpoint, I have to ask a troubling question: What are the incentives to create an accurate climate model forecast going forward? I am not talking about short-term (upcoming season) or even early medium-term forecasting, but the long-run enhanced CO2 emissions GCM forecasts. Who will validate these forecasts? Will the validations be during the researchers’ careers? One can cynically note that there is certainly incentive to create attention-getting forecasts, a result that is generating a lot of climate research funding without any direct tie-in as to whether or not the forecasts will later prove to be true (which, I should note, they very well could be right after all). I am left with only two incentives to get the forecasts right: The integrity of the researchers/groups, and the overall integrity of the scientific process. I do believe that most researchers have such integrity, including those posting here, but I call into question the scientific process here, especially in the short-run. “Group think” behavior has been a stunning and recurrent problem throughout the history of scientific progress. And it can flourish when there is a lack of experimental verification and predictive accuracy feedback, especially when performance measures do not require one to be right. I know that in my world, if we could be successful under such lax performance measures, my own life would be a lot less stressful. “ 51. This is rich. My retort to a raypierreism. Probably will never be allowed to post over there, so here it is: RE: “Response: At this point, forecasting hurricane trends based on the supposed future of the AMO seems more speculative and less based on fundamental physics than factoring in the contribution of anthropogenic global warming. –raypierre” So, are you implying some sort of conspiracy, where NOAA are silencing those who share your own bias, and are promoting those who are, as you would call them, “deniers?” You actually believe that NOAA would risk poor forecasts to satisfy some sort of “Bushie political agenda?” You actually believe that? And you call yourself a scientist? Got tin foil? …. by Steve Sadlov One Trackback 1. [...] model predictions, you would indeed need to have faith and/or belief that they are accurate. This is an interesting summary of some of the key concerns about the predictive skill of climate models. [...] Post a Comment
{"url":"http://climateaudit.org/2006/05/15/gerry-browning-numerical-climate-models/","timestamp":"2014-04-16T19:17:00Z","content_type":null,"content_length":"162290","record_id":"<urn:uuid:35fbd92d-72ae-49b3-b26d-80ae913b4092>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00115-ip-10-147-4-33.ec2.internal.warc.gz"}
Reform or Traditional?How do you describe your teaching style? Do you show students examples and then assign... - Homework Help - eNotes.com Reform or Traditional? How do you describe your teaching style? Do you show students examples and then assign homework doing multiple similar problems that mirror those examples, or do you prefer a more constructivist/ reform approach? I used to be a very traditional teacher, but over the years I have made a pretty significant reversal in my thinking about good mathematics teaching. A typical lesson in my class now centers around only a few very rich problems that employ multiple tasks and concepts. I focus my assessments on students understanding of the concepts and less on skill acquisition. One thing I have started doing recently that has helped my students develop fluency is giving my students a problem and four sample-student-solutions with work shown using multiple representations (one solution using a graph, one an equation, one a diagram, etc), then tell them that two of the solutions are correct and two are incorrect. They have to identify the correct solutions and add additional work and explanations to all the answers to justify their choices. Just doing a few of these per lesson has made a tremendous difference in student performance on my assessments. I just created a problem last night using this same strategy, I'll try to get it posted on here so you can take a look if you are interested. I think that this type of question goes at the heart of modern teaching. Having said that, I think there is a way to bring both reform based and traditional notions of teaching into the same pantheon. It would be using different tools for different jobs. There are times when innovative approaches work extremely well and moments when the traditionalist methodology would be more effective. The teacher has to gauge what works for the students, the pulse of the classroom and what approach works best with which groups of students. A group of higher end students might favor one approach, while a group of lower end students could respond better with another. The middle of the road students, who always seem to fall through the cracks, might respond better to one or the other, or even a hybrid of both. Teachers need to be able to be fluent in as many approaches as possible and examining their classroom dynamics, determine which one works for which student at which given moment. I agree with the preceding posts. There is a lot of flexibility in the field of teaching. If you are not considering multiple learning styles in your lesson plans, you aren't reaching as many of the students as you could if your lessons varied to reach visual, auditory, and kinesthetic learners. That having been said, I think I am more traditional--I have set routines, etc., but I also adapt my lessons to appeal to all kinds of learners. I think it was Parker Palmer who said, "We teach who we are." Some of us might be inherently conservative and consequently more traditional, and others of us might be inherently non-traditional and enjoy embracing the next new thing. But I do think that any teacher must be able to connect with the student in a way that reflects self-respect and respect for the student, no matter what. I also think that no matter who we are, we need to understand that some kinds of learning are constructivistand other kinds of learning might be more traditional. For example, I cannot imagine anyone being able to have the multiplication tables mastered without the involvement of some rote learning, and the quadratic formula really needs to be memorized. But having said that, I must also say that neither is of any use whatsoever if not used in a hands-on way. I agree with akannen that we need to understand both approaches and anything along the continuum that will work. I am becoming increasingly disenchanted with the phrase "best practices" because each interaction between teacher and student is unique and ineffable, a connection between who we are and who they #1: I love your idea of presenting actual student work and having them discover the 2 correct and 2 incorrect answers. This encourages students in finding multiple ways of arriving at the correct answer and shows them that there isn't just one way of doing it. I also like that they get to see what it means when we say "SHOW YOUR WORK" Thanks, Phil.. the reason I giving sample student solutions is that it takes a great deal of critical thinking to figure out what the student was thinking. Even when my students are able to figure out the correct solution strategy (or strategies), they find it much more challenging to explain why the strategy works. I teach management subject to students at post-graduate level. The method of teaching that I adopt certainly need to be different from what may be appropriate for younger students. Yet I find that I need to vary my teaching style from situation to situation based on following three consideration. 1. Nature of the subject content. Some topics are quite mechanistic - like Forecasting Methods largely consist of mathematical procedure. These methods can be best taught by describing the methods in classroom using simple example. Other subjects like Strategic Management require students to develop ability to think innovatively based on some general principles. IN cases like these it is better to rely on independent study and research by students followed up by discussions in class. 2. Level of students: When students are able to understand a subject easily, I would rely more on directed self study. However when I find students have difficulty in understanding a subject, I give more stress on classroom lectures with simple examples. I also try to give students some simple assignments to help them reinforce learning in the class. 3. Time available to teach the subject: In most of the institutions, where I teach, I have the flexibility to vary the width and depth of course coverage to some extent, but still there are some limitation. When I find that I am short of time, I find the classroom lectures is the faster method. In answering the original question, I'd say I'm traditional. Coming to teaching high school from the manufacturing sector, I jumped "into the fire" so to speak. What I found was that students lost the basics before they arrived. As a consequence, I am constantly stressing "fundamentals" in my Algebra 2 classes. Weekly, I give quizzes and include on tests problems such as add three numbers, add, subtract, multiply, and divide fractions, multiply and divide numbers and expressions with exponents, etc. Then as we progress through the textbook, I add to the fundamentals radicals and For me, when trying to teach students a new subject, I like to start first with the problem. I intrdocude problem for students then discuss with students how to fing the answer. This method jas proven to be very effective because it allow students to obtain the method themselves so they would never forget the formula because they helped obtained it. This way also helps you make sense of math for students always asking "what's the point" or "why are we learning this". So introducing the problem makes sense of the subject even before introduce it. I once read that an ideal classroom is one where, learning continues even when the teacher is not there. I structured my classroom around this idea...trying to have the students really enjoy the learning activities, encouraging them to talk with each other about the topics, asking them to work together, helping each other. When I needed to write lesson plans for a substitute I could just write "have them work on their independent work" and know that their time was not being wasted. I gave them many choices of things that could be worked on, so that they felt in charge of their learning...discipline problems were very minimal, as a result. Join to answer this question Join a community of thousands of dedicated teachers and students. Join eNotes
{"url":"http://www.enotes.com/homework-help/reform-traditional-391443","timestamp":"2014-04-16T08:51:02Z","content_type":null,"content_length":"49739","record_id":"<urn:uuid:13eb203d-118f-4792-b40d-423a6f47f293>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00619-ip-10-147-4-33.ec2.internal.warc.gz"}
Meditation, The Art of Exploitation Bayes probability model Bayes probability model is an important probability model to predict prior stage probabilities of a multi-stage probability model. Typically it's applied to a 2 stage probability model where the 2nd stage is dependent on the events in the first stage. Unlike 2 part probability model where events in either part are independent of events in the other part, 2 stage probability model is more complicated because 2nd stage probability is dependent on the 1st stage probability, where conditional probability applies. Bayes probability model quantifies the relationship between probabilities of different events happening in the model. It can be used to compute the 1st stage probability given the conditional probability of 2nd stage events. A simple dependent probability for event E and F is: P(E and F) = P(E) P(F|E) (the multiplication law), it says the probability that both E and F happen is the product of E happens and the conditional probability of F happens given that E happens. It often is written as P(F|E) = P(E and F)/P(E). The symmetry in the formulation can be explored: P(E and F) = P(F) P(E|F) = P(E) P(F|E). Thus, given P(E) = P(F) P(E|F)/P(F|E). Given conditional probabilities of P(E|F), P(F|E), and P(F), one can compute P(E). Let E and F represent simple event Ei and Fji (Fj happens given that Ei happens) in event sets, P(Ei and Fj) = pE_i * pF_j_i where P(Ei) = pE_i and P(Fj|Ei) = pF_j_i, therefore the chance Fj happens in a 2 stage experiment is sigma(P(Ei and Fj), i = 0..I) = sigma(pE_i * pF_j_i, i = 0..I) Thus we can rewrote the symmetry formula as P(Fj) P(Eij|Fj) = P(Ei) P(Fji|Ei). This illustrates that given 2nd stage probability P(Fj), we can reliably compute first stage conditional probability P (Eij|Fj) by rewrite it again as: P(Eij|Fj) = P(Ei) P(Fji|Ei)/P(Fj) = P(Ei) P(Fji|Ei) / [sigma(P(Ei) * P(Fji), i = 0..I)]
{"url":"http://meditation-art.blogspot.com/2008/02/bayes-probability-model.html","timestamp":"2014-04-20T03:09:40Z","content_type":null,"content_length":"20381","record_id":"<urn:uuid:50dbc90b-cacc-4eeb-94cc-29a26cdaa9f2>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00496-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts from September 2011 on Mathblogging.org -- the Blog September 27, 2011 § Leave a comment Tuesday — time for some picks from last week! On the expository side of blogging, Alasdair’s Musing gave an introduction on Ruth-Aaron pairs and Tito Eliatron Dixit a primer on the Vitali set (translation). Also, Mind Your Decision shared a case study of an unfriendly takeover bid gone bad. On the education side of blogs, Real Teaching Mean Real Learning described what differentiated assessment means, practically speaking — and check out Math Fail sharing a comic on Engineering. On the researcher side of blogs, Computational Complexity asked where theorems go to die while God Plays Dice nuked a simple problem with generating functions. Regarding research, Libres pensées d’un mathématicien ordinaire reflected upon an old favorite problem and its recent solution and Nuit Blanche examined a stunning claim. You might also want to keep an eye on Gowers’s Weblog starting a new series for beginning maths students to overcome the restrictions of traditional teaching. — and check out Not Even Wrong‘s TEDx talk. Finally, Statistical Modeling, Causal Inference, and Social Science discussed yet another dubious Wegman paper. Regarding the mathematical community, Piece of Mind and Geometry Bulletin Board followed up on the UK development and the recent letter by UK mathematicians to the Prime Minister. Frank Morgan asked for opinions on the NSF’s change of “Mathematical Sciences” to “Mathematical and Statistical Sciences”. September 20, 2011 § Leave a comment The week is on its way so here are some picks from last week. First of all, you should head over to Math Accent and its fantastic crowd-sourcing project for a creative commons book! It’s less than a day to go, so hurry up and give what you can! On the teacher side of blogs, ichoosemath taught without having the answers and Real teaching means real learning changed the definition of an exam. Continuous Everywhere but Differentiable Nowhere found the problem that never fails and Musing Mathematically shaded squares. Finally, Gyre&Gimble reflected on pre-chunking On the researcher side of blogs, Libres pensées d’un mathématicien ordinaire remembered John Michael Hammersley while Azimuth studied Fool’s Gold. Flavors and Seasons shared a reflection on discussions and Regularize started a series on the uncertainty principle for the windowed Fourier transform. On top of that, Peter Cameron’s Blog celebrated the 1254th London Algebra Colloquium. On the art side of blogs, Intersections shared a poem by Allison Hedge Coke and Singing Banana gave the solution to its Game of Nine. On the philosophy side, M-Phi asked if/why mathematicians should care about philosophy of mathematics. Last but not least, Rachel Binx wrote about her data visualization work at this years MTV Video Music Awards, Maurizio Codogno started a series on Gödel’s Incompleteness theorem (translation) and MathBlog.dk gave a short exposition on the pigeon hole principle. September 14, 2011 § Leave a comment It’s already Wednesday — time for some picks from last week! On the research side of blogs, Freakonometrics looked at some old data from “The origin of sex differences in Science” — and finds the difference to be elsewhere (translation). After its epic series on the “crazy Greeks” and their geometry, Gli Studenti Oggi started a new series on how computers calculate logarithms (translation) while Computational Complexity had a guest post (and lots of discussion) from the IT History Society. Finally, Libres pensées d’un mathématicien ordinaire shared a large list of references on Markov Chains and The Geomblog saw some conference blogging via guest-writing students — best practices. On the teacher side of blogs, Math Mambo decided to not just help, but join a student taking an online course (including fun grief). Misscalcul8 shared a conversation with Marcelle Good on textbooks. Think, Thank, Thunk shared the story of students discoverning the product rule by themselves while Exzuberant embraced Rebecca Black to make statistics a hit. Also, Learning and Teaching Maine handed out the criteria for the upcoming Math&Multimedia Carnival — deadline is Sept 23. Elsewhere, Richard Wiseman’s Blog re-shared an amazing illusion, The Mathematical Tourist wrote about “Pythagorean Fractal Tree” by Koos Verhoeff and Math For Grownups had to endure a small shitstorm after a book review didn’t get the math right. September 7, 2011 § Leave a comment We are back with (a lot of) new picks! The story making the most waves last week was probably this article on the NYT homepage on how to fix math education. It was picked up by Mathlog (in German, here is a translation), the Whizz blog, Rational Mathematics Education, dy/dan, and A Recursive Process. Staying with math-education for a second, Lost in Recursion describes an idea of how to gain authority without overpowering the students, I Choose Math wants to give his students time to do what they feel like and decides to try different approach for correcting worksheets this year, Musing Mathematically remembers the wonders of exploring infinity for the first time, and Mr Honner continues his series on the 2011 New York State Math Regents Exams. More essayistic texts were posted on Research Tips, seeing the necessity to change the way mathematical publishing works, Statistical Modeling, Causal Inference, and Social Science, musing about the honesty of an apology offered by an author who plagiarized himself, Computational Complexity gives examples for how to get ideas for things to work on (which, to be honest, is from two weeks ago), Azimuth fears tipping points, QED Insight rants a bit and then gives his opinion on how science and beliefs can coexist, Mind Your Decisions warns that some people making stupid choices can rig the game for everyone, and finally there was a guest-post of Barbara Jolie on Peter Cameron’s blog about randomness, followed up by a post on where randomness originates from. Posts more exclusively concerned with mathematical research included xamuel.com on the compactness theorem, n-Category Café on Hadwiger’s theorem, Terry Tao starting a series of posts on Hilbert’s fifth problem, and Vismath (in German, translation) and 0xDE on tilings. Also noteworthy is the post on Alasdair’s Musings on Matlab-alternatives, Walking Randomly‘s report on developments in math-software in August, and two book reviews on Xi’an’s Og. If you want to relax after reading all these posts, Mathematics for Teaching offers a nice video on mathematical patterns in nature, which goes together well with this post on La Covancha Matematica (in Spanish, translation). You might also enjoy this visualization of a very old goodie on Math Is The New Black. Which brings us to the lighter side of math: Math Fail had some entertaining pics last week, Misscalculate wrote an ode to math, and are these the dangers of applying mathematics to real life we can see on Math Is The New Black? If after all this you should still be hungry for more, Let’s Play Math! collected some recent Carnivals.
{"url":"http://mathblogging.wordpress.com/2011/09/","timestamp":"2014-04-21T10:42:16Z","content_type":null,"content_length":"31640","record_id":"<urn:uuid:bc93d464-9f7f-42d3-9d8c-e6f587f0b1bd>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00392-ip-10-147-4-33.ec2.internal.warc.gz"}
Your intuitions are not magic - Less Wrong Comments (28) Sort By: Best Thanks for the well-written article. I enjoyed the analogy between statistical tools and intuition. I'm used to questioning the former, but more often than not I still trust my intuition, though now that you point it out, I'm not sure why. You shouldn't take this post as a dismissal of intuition, just a reminder that intution is not magically reliable. Generally, intuition is a way of saying, "I sense similarities between this problem and other ones I have worked on. Before I work on this problem, I have some expectation about the answer." And often your expectation will be right, so it's not something to throw away. You just need to have the right degree of confidence in it. Often one has worked through the argument before and remembers the conclusion but not the actual steps taken. In this case it is valid to use the memory of the result even though your thought process is a sort of black box at the time you apply it. "Intuition" is sometimes used to describe the inferences we draw from these sorts of memories; for example, people will say, "These problems will really build up your intuition for how mathematical structure X behaves." Even if you cannot immediately verbalize the reason you think something, it doesn't mean you are stupid to place confidence in your intuitions. How much confidence depends on how frequently you tend to be right after actually trying to prove your claim in whatever area you are concerned with. I do know why I trust my intuitions as much as I do. My intuitions are partly the result of natural selection and so I can expect that they can be trusted for the purposes of surviving and reproducing. In domains that closely resemble the environment where this selection process took place I trust my intuition more, in domains that do not resemble that environment I trust my intuition Black box or not, the fact that we are here is good evidence that they (our intuitions) work (on net). How sexy is that? If you are evaluating intuitions, there are two variables you should account for. The similarity with evolutionary environment, indeed. AND your current posterior belief of the importance of this kind of act in the variance of offspring production. We definitely evolved in an environment full of ants. Does that mean my understanding of ant-colony intelligence is intuitive? I'm very curious how you decide what constitutes a similar environment to that of natural selection, and what sorts of decisions your intuition helps make in such an environment. So then anything that has evolved may be relied upon for survival? It is impossible to rationalize faith in an irrational cognitive process. In the book Blink, the author asserts that many instances of intuition are just extremely rapid rational thoughts, possibly at a sub-conscious level. Intuition seems to be one of the least studied areas of cognitive science, at least until very recently. The Wikipedia entry on cognitive sciences that the post links to has no mention of "intuition", and one paper I found said that the 1999 MIT Encyclopedia of Cognitive Sciences doesn't even have a single index entry for it (while "logic" has almost 100 references). After a bit more searching, I found a 2007 book titled Intuition in Judgment and Decision Making, which apparently represents the current state of the art in understanding the nature of intuition. i don't know why we prefer to hold on to our intuitions. your claim, that " we persist on holding onto them exactly because we do not know how they work" has not been proven, as far as I can tell, and seems unlikely. I also don't know why our own results seem sharper than what we learn from the outside [although about this later point, i bet there's some story about lack of trust in homo hypocritus societies or something] . As somebody who fits into the "new to the site" category, I enjoyed your article. Welcome to Less Wrong! Feel free to post an explicit introduction on that thread, if you're hanging around. I think the critical point is in the next sentence: We only get the feeling of certainty, a knowledge of this being right, and that feeling cannot be broken into parts that could be subjected to criticism to see if they add up. Yes, we don't know what the interiors are - but the original source of our confidence is our (frequently justified) trust in our intuitions. I think another related point is made in How An Algorithm Feels From Inside, which talks about an experience which is illusory, merely reflecting an artifact of the way the brain processes data. The brain usually doesn't bother flagging a result as a result, it just marks it as true and charges forward. And as a consequence we don't observe that we are generalizing from the pattern of news stories we watched, and therefore don't realize our generalization may be wrong. I think it's a combination of not understanding the process with a lifetime of experience where's it's far more right than wrong (Even for younger people, if they have 10-15 years of instinctive behavior being rewarded on some level, it's hard to accept there are situations it doesn't work as well). Combine that with the tendency of positive outcomes to be more memorable than others, and it's not too difficult to understand why people trust their intuition as much as they do. your claim, that " we persist on holding onto them exactly because we do not know how they work" has not been proven, as far as I can tell, and seems unlikely. It may not be the only reason, but an accurate understanding of how intuitions work would make it easier to rely less on it in situations it's not as we'll equipped for, just as an understanding of different biases makes it easier to fight them in our own thought processes. People who know a little bit of statistics - enough to use statistical techniques, not enough to understand why or how they work - often end up horribly misusing them. How often do people harm themselves with statistics, rather than further their goals through deception? Scientists data-mining get publications; financiers get commissions; reporters get readers. ETA: the people who are fooled are harming themselves with statistics. But I think the people want to understand for themselves generally only use statistics that they understand. True, but many of those scientists and reporters really do want to unravel the actual truth, even if it means less material wealth or social status. These people would enjoy being corrected. There is also an opportunity cost to the poor use of statistics instead of proper use. This may be only externalities (the person doing the test may actually benefit more from deception), but overall the world would be better if all statistics were used correctly. I enjoyed your article and as a scientist, I've been interested to understand this: what seems an intuitive method to use to solve a scientific problem is not seen as an intuitive method while solving 'other' problems. By 'other', I mean things like psychological problems or problems that arise from conflicts amongst people. It may be obvious why it is not 'intuitive' but what goes beyond my understanding is most will not even consider using the scientific method for the latter types of problem ever. Having just pressed "Send" on an email that estimates statistics based on my intuitions, this feels particularly salient to me. Really well written. Great work Kaj. Thanks for reminding me that my thoughts aren't magic. Elegantly done - clear and informative. This was an excellent read- I particularly enjoyed the comparison drawn between our intuition and other potentially "black box" operations such as statistical analysis. As a mathematics teacher (and recreational mathematician) I am constantly faced with, and amused by, the various ways in which my intuition can fail me when faced with a particular problem. A wonderful example of the general failure of intuition can be seen in the classic "Monty Hall Problem." In the old TV game show Monty Hall would offer the contestant their choice of one of three doors. One door would have a large amount of cash, the other two a non-prize such as a goat. Here's where it got interesting. After the contestant makes their choice, Monty opens one of the "loosing" doors, leaving only two closed (one of which contains the prize), then offers the contestant he opportunity to switch from their original door to the other remaining door. The question is, should they switch? Does it even matter? For most people (myself included) our intuition tells us it doesn't matter. There are two doors, so there's a 50/50 chance of winning whether you switch or not. However a quick analysis of the probabilities involved shows us that they are in fact TWICE as likely to win the prize if they switch than if they stay with their original choice. That's a big difference- and a very counterintuitive result when first encountered (at least in my opinion) I was first introduced to this problem by a friend who had received as a classroom assignment "Find someone unfamiliar with the Monty Hall problem and convince them of the right answer." The friend in question was absolutely the sort of person who would think it was fun to convince me of a false result by means of plausible-sounding flawed arguments, so I was a very hard sell... I ended up digging my heels in on a weird position roughly akin to "well, OK, maybe the probability of winning isn't the same if I switch, but that's just because we're doing something weird with how we calculate probabilities... in the real world I wouldn't actually win more often by switching, cuz that's absurd." Ultimately, we pulled out a deck of cards and ran simulated trials for a while, but we got interrupted before N got large enough to convince me. So, yeah: counterintuitive. I remember how my roommates and I drew a game tree for the Monty Hall problem, assigned probabilities to outcomes, and lo, it was convincing. It continues to embarrass me that ultimately I was only "convinced" that the calculated answer really was right, and not some kind of plausible-sounding sleight-of-hand, when I confirmed that it was commonly believed by the right people. One of my favorites for exactly that reason- if you don't mind, let me take a stab at convincing you absent "the right people agreeing." The trick is that once Monty removes one door from the contest you are left with a binary decision. Now to understand why the probability differs from our "gut" feeling of 50/50 you must notice that switching amounts to winning IF your original choice was wrong, and loosing IF your original choice was correct (of course staying with your original choice results in winning if you were right and loosing if you were wrong). So, consider the probability that you original guess was correct. Clearly this is 1/3. That means the probability of your original choice being incorrect is 2/3. And there's the rub. If you will initially guess the wrong door 2/3 of the time, then that means that when you are faced with the option to switch doors you're original choice will be wrong 2/3 of the time, and switching would result in you switching to the correct door. Only 1/3 of the time will your original choice be correct, making switching a loosing strategy. It becomes more clear if you begin with 10 doors. In this modified Monty Hall problem, you pick a door, then Monty opens 8 doors, leaving only your original choice and on other (one of which contains the prize money). In this case your original choice will be incorrect 9/10 times, which means when faced with the option to switch, switching will result in a win 9/10 times, as opposed to staying with your original choice, which will result in a win only 1/9 times. (nods) Yah, I'm familiar with the argument. And like a lot of plausible-sounding-but-false arguments, it sounds reasonable enough each step of the way until the absurd conclusion, which I then want to reject. :-) Not that I actually doubt the conclusion, you understand. Of course, I've no doubt that with sufficient repeated exposure this particular problem will start to seem intuitive. I'm not sure how valuable that is. Mostly, I think that the right response to this sort of counterintuitivity is to get seriously clear in my head the relationship between justified confidence and observed frequency. Which I've never taken the time to do. Yet our brains assume that we hear about all those disasters [we read about in the newspaper] because we've personally witnessed them, and that the distribution of disasters in the newspapers therefore reflects the distribution of disasters in the real world. Even if we had personally witnessed them, that wouldn't, in itself, be any reason to assume that they are representative of things in general. The representativeness of any data is always something that can be critically assessed. For many people, representativeness is the primary governing factor in any data analysis, not just a mere facet of reasoning that should be critically assessed. Also, aside from the mentioned media bias that is indeed relatively easily correctable, there are many subtler instances of biasing via representativess, on the level of cognitive processes. "However, like with vitamin dosages and their effects on health, two variables might have a non-linear relationship." if we limit our interval we can make a linear approximation within that interval. this is often good enough if we don't much care about data outside that interval. the easy pitfall of course is people wanting to extend the linearization beyond the bounds of the interval. Voted down because tangential replies that belong elsewhere really get on my nerves. Please comment on the post about the vitamin study, linked in the OP. 0_o I was responding directly to the OP.
{"url":"http://lesswrong.com/lw/2bu/your_intuitions_are_not_magic/","timestamp":"2014-04-20T13:34:39Z","content_type":null,"content_length":"126783","record_id":"<urn:uuid:04aede31-e395-43d1-a43a-ad69126e476f>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00461-ip-10-147-4-33.ec2.internal.warc.gz"}
Semantical analysis of higher-order abstract syntax Results 1 - 10 of 69 - Formal Aspects of Computing , 2002 "... Abstract. The permutation model of set theory with atoms (FM-sets), devised by Fraenkel and Mostowski in the 1930s, supports notions of ‘name-abstraction ’ and ‘fresh name ’ that provide a new way to represent, compute with, and reason about the syntax of formal systems involving variable-binding op ..." Cited by 207 (44 self) Add to MetaCart Abstract. The permutation model of set theory with atoms (FM-sets), devised by Fraenkel and Mostowski in the 1930s, supports notions of ‘name-abstraction ’ and ‘fresh name ’ that provide a new way to represent, compute with, and reason about the syntax of formal systems involving variable-binding operations. Inductively defined FM-sets involving the name-abstraction set former (together with Cartesian product and disjoint union) can correctly encode syntax modulo renaming of bound variables. In this way, the standard theory of algebraic data types can be extended to encompass signatures involving binding operators. In particular, there is an associated notion of structural recursion for defining syntax-manipulating functions (such as capture avoiding substitution, set of free variables, etc.) and a notion of proof by structural induction, both of which remain pleasingly close to informal practice in computer science. 1. - In 14th Annual Symposium on Logic in Computer Science , 1999 "... Syntax Involving Binders Murdoch Gabbay Cambridge University DPMMS Cambridge CB2 1SB, UK M.J.Gabbay@cantab.com Andrew Pitts Cambridge University Computer Laboratory Cambridge CB2 3QG, UK ap@cl.cam.ac.uk Abstract The Fraenkel-Mostowski permutation model of set theory with atoms (FM-sets) ..." Cited by 146 (14 self) Add to MetaCart Syntax Involving Binders Murdoch Gabbay Cambridge University DPMMS Cambridge CB2 1SB, UK M.J.Gabbay@cantab.com Andrew Pitts Cambridge University Computer Laboratory Cambridge CB2 3QG, UK ap@cl.cam.ac.uk Abstract The Fraenkel-Mostowski permutation model of set theory with atoms (FM-sets) can serve as the semantic basis of meta-logics for specifying and reasoning about formal systems involving name binding, ff-conversion, capture avoiding substitution, and so on. We show that in FM-set theory one can express statements quantifying over `fresh' names and we use this to give a novel set-theoretic interpretation of name abstraction. Inductively defined FM-sets involving this name-abstraction set former (together with cartesian product and disjoint union) can correctly encode object-level syntax modulo ff-conversion. In this way, the standard theory of algebraic data types can be extended to encompass signatures involving binding operators. In particular, there is an associated n... - Mathematics of Program Construction, volume 1837 of Lecture Notes in Computer Science , 2000 "... This paper describes work in progress on the design of an ML-style metalanguage FreshML for programming with recursively defined functions on user-defined, concrete data types whose constructors may involve variable binding. Up to operational equivalence, values of such FreshML data types can faithf ..." Cited by 88 (15 self) Add to MetaCart This paper describes work in progress on the design of an ML-style metalanguage FreshML for programming with recursively defined functions on user-defined, concrete data types whose constructors may involve variable binding. Up to operational equivalence, values of such FreshML data types can faithfully encode terms modulo alpha-conversion for a wide range of object languages in a straightforward fashion. The design of FreshML is `semantically driven', in that it arises from the model of variable binding in set theory with atoms given by the authors in [7]. The language has a type constructor for abstractions over names ( = atoms) and facilities for declaring locally fresh names. Moreover, recursive definitions can use a form of pattern-matching on bound names in abstractions. The crucial point is that the FreshML type system ensures that these features can only be used in well-typed programs in ways that are insensitive to renaming of bound names. - in MacroML. In the International Conference on Functional Programming (ICFP ’01 , 2001 "... ..." - In Proceedings of LICS 2000: the 15th IEEE Symposium on Logic in Computer Science (Santa Barbara , 2000 "... We study syntax-free models for name-passing processes. For interleaving semantics, we identify the indexing structure required of an early labelled transition system to support the usual pi-calculus operations, defining Indexed Labelled Transition Systems. For noninterleaving causal semantics we de ..." Cited by 24 (3 self) Add to MetaCart We study syntax-free models for name-passing processes. For interleaving semantics, we identify the indexing structure required of an early labelled transition system to support the usual pi-calculus operations, defining Indexed Labelled Transition Systems. For noninterleaving causal semantics we define Indexed Labelled Asynchronous Transition Systems, smoothly generalizing both our interleaving model and the standard Asynchronous Transition Systems model for CCS-like calculi. In each case we relate a denotational semantics to an operational view, for bisimulation and causal bisimulation respectively. We establish completeness properties of, and adjunctions between, categories of the two models. Alternative indexing structures and possible applications are also discussed. These are first steps towards a uniform understanding of the semantics and operations of name-passing calculi. , 2006 "... Nominal logic is an extension of first-order logic which provides a simple foundation for formalizing and reasoning about abstract syntax modulo consistent renaming of bound names (that is, α-equivalence). This article investigates logic programming based on nominal logic. This technique is especial ..." Cited by 23 (8 self) Add to MetaCart Nominal logic is an extension of first-order logic which provides a simple foundation for formalizing and reasoning about abstract syntax modulo consistent renaming of bound names (that is, α-equivalence). This article investigates logic programming based on nominal logic. This technique is especially well-suited for prototyping type systems, proof theories, operational semantics rules, and other formal systems in which bound names are present. In many cases, nominal logic programs are essentially literal translations of “paper” specifications. As such, nominal logic programming provides an executable specification language for prototyping, communicating, and experimenting with formal systems. We describe some typical nominal logic programs, and develop the model-theoretic, proof-theoretic, and operational semantics of such programs. Besides being of interest for ensuring the correct behavior of implementations, these results provide a rigorous foundation for techniques for analysis and reasoning about nominal logic programs, as we illustrate via two examples. - In Proceedings of the 7th International Conference on Typed Lambda Calculi and Applications , 2005 "... Abstract. Higher-order encodings use functions provided by one language to represent variable binders of another. They lead to concise and elegant representations, which historically have been difficult to analyze and manipulate. In this paper we present the ∇-calculus, a calculus for defining gener ..." Cited by 23 (3 self) Add to MetaCart Abstract. Higher-order encodings use functions provided by one language to represent variable binders of another. They lead to concise and elegant representations, which historically have been difficult to analyze and manipulate. In this paper we present the ∇-calculus, a calculus for defining general recursive functions over higher-order encodings. To avoid problems commonly associated with using the same function space for representations and computations, we separate one from the other. The simply-typed λ-calculus plays the role of the representation-level. The computationlevel contains not only the usual computational primitives but also an embedding of the representation-level. It distinguishes itself from similar systems by allowing recursion under representation-level λ-binders while permitting a natural style of programming which we believe scales to other logical frameworks. Sample programs include bracket abstraction, parallel reduction, and an evaluator for a simple language with first-class continuations. 1 - In IEEE Symposium on Logic in Computer Science , 2008 "... Variable binding is a prevalent feature of the syntax and proof theory of many logical systems. In this paper, we define a programming language that provides intrinsic support for both representing and computing with binding. This language is extracted as the Curry-Howard interpretation of a focused ..." Cited by 21 (6 self) Add to MetaCart Variable binding is a prevalent feature of the syntax and proof theory of many logical systems. In this paper, we define a programming language that provides intrinsic support for both representing and computing with binding. This language is extracted as the Curry-Howard interpretation of a focused sequent calculus with two kinds of implication, of opposite polarity. The representational arrow extends systems of definitional reflection with a notion of scoped inference rules, which are used to represent binding. On the other hand, the usual computational arrow classifies recursive functions defined by pattern-matching. Unlike many previous approaches, both kinds of implication are connectives in a single logic, which serves as a rich logical framework capable of representing inference rules that mix binding and computation. 1 - IN 3RD WORKSHOP ON THE FOUNDATIONS OF GLOBAL UBIQUITOUS COMPUTING , 2004 "... We present a meta-logic that contains a new quantifier (for encoding "generic judgment") and inference rules for reasoning within fixed points of a given specification. We then specify the operational semantics and bisimulation relations for the finite π-calculus within this meta-logic. Since we ..." Cited by 21 (11 self) Add to MetaCart We present a meta-logic that contains a new quantifier (for encoding "generic judgment") and inference rules for reasoning within fixed points of a given specification. We then specify the operational semantics and bisimulation relations for the finite π-calculus within this meta-logic. Since we "... We present parametric higher-order abstract syntax (PHOAS), a new approach to formalizing the syntax of programming languages in computer proof assistants based on type theory. Like higherorder abstract syntax (HOAS), PHOAS uses the meta language’s binding constructs to represent the object language ..." Cited by 20 (2 self) Add to MetaCart We present parametric higher-order abstract syntax (PHOAS), a new approach to formalizing the syntax of programming languages in computer proof assistants based on type theory. Like higherorder abstract syntax (HOAS), PHOAS uses the meta language’s binding constructs to represent the object language’s binding constructs. Unlike HOAS, PHOAS types are definable in generalpurpose type theories that support traditional functional programming, like Coq’s Calculus of Inductive Constructions. We walk through how Coq can be used to develop certified, executable program transformations over several statically-typed functional programming languages formalized with PHOAS; that is, each transformation has a machine-checked proof of type preservation and semantic preservation. Our examples include CPS translation and closure conversion for simply-typed lambda calculus, CPS translation for System F, and translation from a language with ML-style pattern matching to a simpler language with no variable-arity binding constructs. By avoiding the syntactic hassle associated with first-order representation techniques, we achieve a very high degree of proof automation. Categories and Subject Descriptors F.3.1 [Logics and meanings
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=198439","timestamp":"2014-04-16T11:44:11Z","content_type":null,"content_length":"39049","record_id":"<urn:uuid:6ffad692-b646-45c1-93a0-f919a4ba8ac0>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00145-ip-10-147-4-33.ec2.internal.warc.gz"}
De Broglie Wavelength Deriving the De Broglie Wavelength De Broglie derived his equation using well established theories through the following series of substitutions: 1. De Broglie first used Einstein's famous equation relating matter and energy: \[ E = mc^2 \] • \(E\) = energy, • \(m\) = mass, • \(c\) = speed of light 2. Using Planck's theory which states every quantum of a wave has a discrete amount of energy given by Planck's equation: \[ E= h \nu\] • \(E\) = energy, • \(h\) = Plank's constant (6.62607 x 10^-34 J s), • \(\nu\)= frequency 3. Since de Broglie believed particles and wave have the same traits, he hypothesized that the two energies would be equal: \[ mc^2 = h\nu\] 4. Because real particles do not travel at the speed of light, De Broglie submitted velocity (\(v\)) for the speed of light (\(c\)). \[ mv^2 = h\nu \] 5. Through the equation \(\lambda\), de Broglie substituted \( v/\lambda\) for \(\nu\) and arrived at the final expression that relates wavelength and particle with speed. \[ mv^2 = \dfrac{hv}{\lambda} \] \[ \lambda = \dfrac{hv}{mv^2} = \dfrac{h}{mv} \] Although De Broglie was credited for his hypothesis, he had no actual experimental evidence for his conjecture. In 1927, Clinton J. Davisson and Lester H. Germer shot electron particles onto onto a nickel crystal. What they see is the diffraction of the electron similar to waves diffraction against crystals (x-rays). In the same year, an English physicist, George P. Thomson fired electrons towards thin metal foil providing him with the same results as Davisson and Germer. A majority of Wave-Particle Duality Problems are simple plug and chugs with some variation of canceling out units Example 1 Find the de Broglie wavelength for an electron moving at the speed of 6.63 x 10^6 m/s (mass of an electron = 9.1 x 10^-31 kg).
{"url":"http://chemwiki.ucdavis.edu/Physical_Chemistry/Quantum_Mechanics/Quantum_Theory/De_Broglie_Wavelength","timestamp":"2014-04-19T17:06:01Z","content_type":null,"content_length":"45101","record_id":"<urn:uuid:543a436a-d8f0-4924-a325-89c1ebc05553>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00107-ip-10-147-4-33.ec2.internal.warc.gz"}
diefenbach / django-lfs - ceccf90 Cleaned up docs and doc strings. - EQUAL, LESS_THAN, LESS_THAN_EQUAL, GREATER_THAN, GREATER_THAN_EQUAL, IS_SELECTED, IS_NOT_SELECTED, IS_VALID, IS_NOT_VALID, CONTAINS + EQUAL, LESS_THAN, LESS_THAN_EQUAL, GREATER_THAN, GREATER_THAN_EQUAL, IS_SELECTED, IS_NOT_SELECTED, IS_VALID, IS_NOT_VALID, CONTAINS -[DEL: :DEL] Go to TinyMCE website and `download <http://www.tinymce.com/download/download.php>`_[DEL: ``TinyMCE x.x jQuery package``:DEL] -[DEL: :DEL] Extract downloaded ``TinyMCE`` into your :doc:`Theme </developer/howtos/theme/index>`, e.g. ``theme/static/manage/tiny_mce_x_x/`` + Extract downloaded ``TinyMCE`` into your :doc:`Theme </developer/howtos/theme/index>`, e.g. ``theme/static/manage/tiny_mce_x_x/`` -[DEL: :DEL] Go to TinyMCE website and `download <http://www.tinymce.com/i18n/index.php?ctrl=lang&act=download&pr_id=1>`_[DEL: language file(s):DEL] that you need + Go to TinyMCE website and `download <http://www.tinymce.com/i18n/index.php?ctrl=lang&act=download&pr_id=1>`_ that you need - Go to folder where you've just extracted main package of TinyMCE, e.g.: ``theme/static/manage/tiny_mce_x_x/`` + Go to folder where you've just extracted main package of TinyMCE, e.g.: ``theme/static/manage/tiny_mce_x_x/`` -[DEL: :DEL] Copy ``lfs/templates/manage/manage_base.html`` to your theme: ``mytheme/templates/manage/manage_base.html`` + Copy ``lfs/templates/manage/manage_base.html`` to your theme: ``mytheme/templates/manage/manage_base.html`` -[DEL: :DEL] <script type="text/javascript" src="{{ STATIC_URL }}manage/tiny_mce_x_x/jquery.tinymce.js"></script> + <script type="text/javascript" src="{{ STATIC_URL }}manage/tiny_mce_x_x/jquery.tinymce.js"></script> Note that for some languages ``LANGUAGE_CODE`` used by Django may differ from language code used by TinyMCE. For such cases you'll probably have to write your own tag/filter that will map Django's language code to TinyMCE's - EQUAL, LESS_THAN, LESS_THAN_EQUAL, GREATER_THAN, GREATER_THAN_EQUAL, IS_SELECTED, IS_NOT_SELECTED, IS_VALID, IS_NOT_VALID, CONTAINS + EQUAL, LESS_THAN, LESS_THAN_EQUAL, GREATER_THAN, GREATER_THAN_EQUAL, IS_SELECTED, IS_NOT_SELECTED, IS_VALID, IS_NOT_VALID, CONTAINS content_type = models.ForeignKey(ContentType, verbose_name=_(u"Content type"), related_name="content_type") Tip: Filter by directory path e.g. /media app.js to search for public/media/app.js. Tip: Use camelCasing e.g. ProjME to search for ProjectModifiedEvent.java. Tip: Filter by extension type e.g. /repo .js to search for all .js files in the /repo directory. Tip: Separate your search with spaces e.g. /ssh pom.xml to search for src/ssh/pom.xml. Tip: Use ↑ and ↓ arrow keys to navigate and return to view the file. Tip: You can also navigate files with Ctrl+j (next) and Ctrl+k (previous) and view the file with Ctrl+o. Tip: You can also navigate files with Alt+j (next) and Alt+k (previous) and view the file with Alt+o.
{"url":"https://bitbucket.org/diefenbach/django-lfs/commits/ceccf90746325ecf409027775db0a6a99fd6e987","timestamp":"2014-04-17T04:22:31Z","content_type":null,"content_length":"376317","record_id":"<urn:uuid:565998c0-f1a6-48c1-b0a6-085b24c5cc63>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00661-ip-10-147-4-33.ec2.internal.warc.gz"}
An R programmer looks at Julia In January of this year I first saw mention of the Julia language in the release notes for LLVM. I mentioned this to Dirk Eddelbuettel and later we got in contact with Viral Shah regarding a Debian package for Julia. There are many aspects of Julia that are quite intriguing to an R programmer. I am interested in programming languages for "Computing with Data", in John Chambers' term, or "Technical Computing", as the authors of Julia classify it. I believe that learning a programming language is somewhat like learning a natural language in that you need to live with it and use it for a while before you feel comfortable with it and with the culture surrounding it. A common complaint for those learning R is finding the name of the function to perform a particular task. In writing a bit of Julia code for fitting generalized linear models, as described below, I found myself in exactly the same position of having to search through documentation to find how to do something that I felt should be simple. The experience is frustrating but I don't know of a way of avoiding it. One word of advice for R programmers looking at Julia, the names of most functions correspond to the Matlab/octave names, not the R names. One exception is the d-p-q-r functions for distributions, as I described in an earlier posting. Evaluating a new language There is a temptation to approach a new programming language with the idea that it should be exactly like the language I am currently using but better, faster, easier, ... This is, of course, ridiculous. If you want R syntax, semantics, function names, and packages then use R. If you consider another language you should accept that many tasks will be done differently. Some will be done better than in your current language and some will not be done as well. And the majority will just be done differently. Two primary Julia developers, Jeff Bezanson and Stefan Karpinski, recently gave a presentation at the lang.NEXT conference. The slides and video can help you get a flavor of the language. So, what does Julia provide that R doesn't? • A JIT. The first time that a function is called with a particular argument signature it is compiled to machine code using the LLVM compiler backend. This, naturally, provides a speed boost. It can also affect programming style. In R one instinctively writes vectorized code to avoid a performance hit. In Julia non-vectorized code can have similar and, under some circumstances, better performance than vectorized code. • A richer type system. Scalars, vectors/arrays, tuples, composite types and several others can be defined in Julia. In R the atomic types are actually atomic vectors and the composite types are either lists (which is actually a type of vector in R, not a lisp-like list), vectors with attributes (called structures in R but not to be confused with C's structs). The class systems, S3 and S4, are built on top of these structures. Currently it is difficult to work with 64-bit integers in R but not in Julia • Multiple dispatch. In Julia all functions provide multiple dispatch to handle different signatures of arguments. In some sense, functions are just names used to index into method tables. In R terminology all functions are generic functions with S4-like dispatch, not S3-like which only dispatches on the first argument. • Parallelism. A multiple process model and distributed arrays are a basic part of the Julia language. • A cleaner model for accessing libraries of compiled code. As Jeff and Stefan point out, languages like R and Matlab/octave typically implement the high-level operations in "glue" code that, in turn, accesses library code. In Julia one can short-circuit this process and call the library code from the high-level language. See the earlier blog posting on accessing the Rmath library from Julia. This may not seem like a big deal unless you have written thousands of lines of glue code, as I have. There are some parts of R not present in Julia that programmers may miss. • Default arguments of functions and named actual arguments in function calls. The process of matching actual arguments to formal arguments in R function calls is richer. All matching in Julia is by position within function signature. Function design with many arguments requires more thought and care in Julia than in R. • The R package system and CRAN. One could also throw in namespaces and function documentation standards as part of the package system. These are important parts of R that are not yet available. However, that does not preclude simular facilities being developed for Julia. The R package system did not spring fully grown from the brow of Zeus. • Graphics. One of the big advantages of R over similar languages is the sophisticated graphics available in ggplot2 or lattice. Work is underway on graphics models for Julia but, again, it is early days still. An example: Generalized Linear Models (GLMs) Most R users are familiar with the basic model-fitting functions like lm() and glm(). These functions call model.frame and model.matrix to form the numerical representation of the model from the formula and data specification, taking into account optional arguments such as subset, na.action and contrasts. After that they call the numerical workhorses, lm.fit and glm.fit. What I will describe is a function like glm.fit (the name in Julia cannot include a period and I call it glmFit) and structures like the glm "family" in R. A glm family is a collection of functions that describe the distribution of the response, given the mean, and the link between the linear predictor and the mean response. The link function takes the mean response, mu, to the "linear predictor", eta, of the form X %*% beta # in R X * beta # in Julia where X is the model matrix and beta is the coefficient vector. In practice we more frequently evaluate the inverse link, from eta to mu, and its derivative. We define a composite type type Link name::String # name of the link linkFun::Function # link function mu -> eta linkInv::Function # inverse link eta -> mu muEta::Function # derivative eta -> d mu/d eta with specific instances logitLink = mu -> log(mu ./ (1. - mu)), eta -> 1. ./ (1. + exp(-eta)), eta -> (e = exp(-abs(eta)); f = 1. + e; e ./ (f .* f))) logLink = mu -> log(mu), eta -> exp(eta), eta -> exp(eta)) identityLink = mu -> mu, eta -> eta, eta -> ones(eltype(eta), size(eta))) inverseLink = mu -> 1. ./ mu, eta -> 1. ./ eta, eta -> -1. ./ (eta .* eta)) for the common links. Here I have used the short-cut syntax mu -> log(mu ./ (1. - mu)) to create anonymous functions. The muEta function for the logitLink shows the use of multiple statements within the body of the anonymous function. Like R, the last expression evaluated in a function is the value of a Julia function. I also ensure that the functions in the Link object are safe for vector arguments by using the componentwise-forms of multiplicative operators ("./" and ".*"). These forms apply to scalars as well as vector. I use explicit floating-point constants, such as 1., to give the compiler a hint that the result should be a floating-point scalar or vector. A distribution characterizes the variance as a function of the mean, and the "deviance residuals" used to evaluate the model-fitting criterion. For some distributions, including the Gaussian distribution the model-fitting criterion (residual sum of squares) is different from the deviance itself but minimizing this simpler criterion provides the maximum-likelihood (or minimum deviance) estimates of the coefficients. When we eventually fit the model we want the value of the deviance itself, which may not be the same as the sum of squared deviance residuals. Other functions are used to check for valid mean vectors or linear predictors. Also each distribution in what is called the "exponential family", which includes many common distributions like Bernoulli, binomial, gamma, Gaussian and Poisson, has a canonical link function derived from the distribution itself. The composite type representing the distribution is type Dist name::String # the name of the distribution canonical::Link # the canonical link for the distribution variance::Function # variance function mu -> var devResid::Function # vector of squared deviance residuals deviance::Function # the scalar deviance mustart::Function # derive a starting estimate for mu validmu::Function # check validity of the mu vector valideta::Function # check validity of the eta vector with specific distributions defined as ## utilities used in some distributions logN0(x::Number) = x == 0 ? x : log(x) logN0{T<:Number}(x::AbstractArray{T}) = reshape([ logN0(x[i]) | i=1:numel(x) ], size(x)) y_log_y(y, mu) = y .* logN0(y ./ mu) # provides correct limit at y == 0 BernoulliDist = mu -> max(eps(Float64), mu .* (1. - mu)), (y, mu, wt)-> 2 * wt .* (y_log_y(y, mu) + y_log_y(1. - y, 1. - mu)), (y, mu, wt)-> -2. * sum(y .* log(mu) + (1. - y) .* log(1. - mu)), (y, wt)-> (wt .* y + 0.5) ./ (wt + 1.), mu -> all((0 < mu) & (mu < 1)), eta -> true) gammaDist = mu -> mu .* mu, (y, mu, wt)-> -2 * wt .* (logN0(y ./ mu) - (y - mu) ./ mu), (y, mu, wt)-> (n=sum(wt); disp=sum(-2 * wt .* (logN0(y ./ mu) - (y - mu) ./ mu))/n; invdisp(1/disp); sum(wt .* dgamma(y, invdisp, mu * disp, true))), (y, wt)-> all(y > 0) ? y : error("non-positive response values not allowed for gammaDist"), mu -> all(mu > 0.), eta -> all(eta > 0.)) GaussianDist = mu -> ones(typeof(mu), size(mu)), (y, mu, wt)-> (r = y - mu; wt .* r .* r), (y, mu, wt)-> (n = length(mu); r = y - mu; n * (log(2*pi*sum(wt .* r .* r)/n) + 1) + 2 - sum(log(wt))), (y, wt)-> y, mu -> true, eta -> true) PoissonDist = mu -> mu, (y, mu, wt)-> 2 * wt .* (y .* logN0(y ./ mu) - (y - mu)), (y, mu, wt)-> -2 * sum(dpois(y, mu, true) * wt), (y, mu)-> y + 0.1, mu -> all(mu > 0), eta -> true) Finally the GlmResp type consists of the distribution, the link, the response, the mean and linear predictor. Occasionally we want to add an offset to the linear predictor expression or apply prior weights to the responses and these are included. type GlmResp # response in a glm model eta::Vector{Float64} # linear predictor mu::Vector{Float64} # mean response offset::Vector{Float64} # offset added to linear predictor (usually 0) wts::Vector{Float64} # prior weights y::Vector{Float64} # response With just this definition we would need to specify the offset, wts, etc. every time we construct such an object. By defining an outer constructor (meaning a constructor that is defined outside the type definition) we can provide defaults ## outer constructor - the most common way of creating the object function GlmResp(dist::Dist, link::Link, y::Vector{Float64}) n = length(y) wt = ones(Float64, (n,)) mu = dist.mustart(y, wt) GlmResp(dist, link, link.linkFun(mu), mu, zeros(Float64, (n,)), wt, y) ## another outer constructor using the canonical link for the distribution GlmResp(dist::Dist, y::Vector{Float64}) = GlmResp(dist, dist.canonical, y) The second outer constructor allows us to use the canonical link without needing to specify it. There are several functions that we wish to apply to the glmResp object. deviance( r::GlmResp) = r.dist.deviance(r.y, r.mu, r.wts) devResid( r::GlmResp) = r.dist.devResid(r.y, r.mu, r.wts) drsum( r::GlmResp) = sum(devResid(r)) muEta( r::GlmResp) = r.link.muEta(r.eta) sqrtWrkWt(r::GlmResp) = muEta(r) .* sqrt(r.wts ./ variance(r)) variance( r::GlmResp) = r.dist.variance(r.mu) wrkResid( r::GlmResp) = (r.y - r.mu) ./ r.link.muEta(r.eta) wrkResp( r::GlmResp) = (r.eta - r.offset) + wrkResid(r) Our initial definitions of these functions are actually method definitions because we declare that the argument r must be a GlmResp. However, defining a method also defines the function if it has not already been defined. In R we must separately specify S3 generics and methods. An S4 generic is automatically created from a method specification but the interactions of S4 generics and namespaces can become tricky. Note that the period ('.') is used to access components or members of a composite type and cannot be used in a Julia identifier. The function to update the linear predictor requires both a GlmResp and the linear predictor value to assign. Here we use an abstract type inheriting from Number to allow a more general specification updateMu{T<:Number}(r::GlmResp, linPr::AbstractArray{T}) = (r.eta = linPr + r.offset; r.mu = r.link.linkInv(r.eta); drsum(r)) Updating the linear predictor also updates the mean response then returns the sum of the square deviance residuals. Note that this is not functional semantics in that the object r being passed as an argument is updated in place. Arguments to Julia functions are passed by reference. Those accustomed to the functional semantics of R (meaning that you can't change the value of an argument by passing it to a function) should beware. Finally we create a composite type for a predictor type predD # predictor with dense X X::Matrix{Float64} # model matrix beta0::Vector{Float64} # base coefficient vector delb::Vector{Float64} # increment and an outer constructor ## outer constructor predD(X::Matrix{Float64}) = (zz = zeros((size(X, 2),)); predD(X, zz, zz)) Given the current state of the predictor we create the weighted X'X matrix and the product of the weighted model matrix and the weighted working residuals in the accumulate function function accumulate(r::GlmResp, p::predD) w = sqrtWrkWt(r) wX = diagmm(w, p.X) (wX' * (w .* wrkResp(r))), (wX' * wX) This function doesn't need to be written separately but I hope that doing so will make it easier to apply these operations to distributed arrays, which, to me, seem like a natural way of applying parallel computing to many statistical computing tasks. A function to calculate and apply the increment is increment(r::GlmResp, p::predD) = ((wXz, wXtwX) = accumulate(r, p); bb = wXtwX \ wXz; p.delb = bb - p.beta0; updateMu(r, p.X * bb)) and finally we get to the glmFit function function glmFit(p::predD, r::GlmResp, maxIter::Uint, minStepFac::Float64, convTol::Float64) if (maxIter < 1) error("maxIter must be positive") end if (minStepFac < 0 || 1 < minStepFac) error("minStepFac must be in (0, 1)") end cvg = false devold = typemax(Float64) # Float64 version of Inf for i=1:maxIter dev = increment(r, p) if (dev < devold) p.beta0 = p.beta0 + p.delb error("code needed to handle the step-factor case") if abs((devold - dev)/dev) < convTol cvg = true devold = dev if !cvg error("failure to converge in $maxIter iterations") glmFit(p::predD, r::GlmResp) = glmFit(p, r, uint(30), 0.001, 1.e-6) Defining two methods allows for default values for some of the arguments. Here we have a kind of all-or-none approach to defaults. It is possible to create other method signatures for defaults applied to only some of the arguments. In retrospect I think I was being too cute in declaring the maxIter argument as an unsigned int. It only makes sense for this to be positive but it might make life simpler to allow it to be an integer as positivity is checked anyway. Note the use of string interpolation in the last error message. The numeric value of maxIter will be substituted in the error message. Checking that it works The script ## generate a Bernoulli response n = 10000 X = hcat(fill(1, (n,)), 3. * randn((n, 2))) beta = randn((3,)) eta = X * beta mu = logitLink.linkInv(eta) y = float64(rand(n) < mu) println("Coefficient vector for simulation: $(beta)") ## establish the glmResp object rr = GlmResp(BernoulliDist, y) pp = predD(X) glmFit(pp, rr) println("Converged to coefficient vector: $(pp.beta0 + pp.delb)") println("Deviance at estimates: $(deviance(rr))") ## reinitialize objects for timing rr = GlmResp(BernoulliDist, y) pp = predD(X) println("Elapsed time for fitting binary response with X $(size(X,1)) by $(size(X,2)): $(@elapsed glmFit(pp, rr)) seconds") Coefficient vector for simulation: [-0.54105, 0.308874, -1.02009] Converged to coefficient vector: [-0.551101, 0.301489, -1.02574] Deviance at estimates: 6858.466827205941 Elapsed time for fitting binary response with X 10000 by 3: 0.09874677658081055 seconds
{"url":"http://dmbates.blogspot.com/2012/04/r-programmer-looks-at-julia.html","timestamp":"2014-04-19T01:49:41Z","content_type":null,"content_length":"77683","record_id":"<urn:uuid:f004ffdf-2b67-4eb3-8dfa-f1bb549dfab6>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00275-ip-10-147-4-33.ec2.internal.warc.gz"}
36 Arguments for the Existence of God (2010) Rebecca Goldstein This new novel by Rebecca Goldstein, whose Strange Attractors is one of my favorite works of mathematical fiction, features as two main characters a woman known as "the goddess of game theory" and a Hasidic... (more) Against the Odds (2001) Martin Gardner Luther Washington, a young, African-American boy in Butterfield, KS must overcome several kinds of prejudice to become a mathematician. First, he must face the prejudices of his father that his interest... (more) Alex Detail's Revolution (2009) Darren Campo A teenage genius uses (among other things) knowledge of the Golden Ratio to defeat an alien invasion. Campo handles the description of the math a bit better than some other authors ([cough]...Dan Brown...[cough]) but in the end it is nothing other than a bit of unbelievable mumbo jumbo in an otherwise math-free Sci-Fi adventure. (more) An Angel of Obedience (2010) John Giessmann Due to his new obsession with fractal geometry, thirteen year-old prodigy Jackson Carter has just ended an illustrious career as a classical musician and enrolled as a math major at Harvard. There he... (more) Antonia's Line (1995) Marleen Gorris About three or more generations of strong and self-sufficient women who live on a farm and the people around them. Antonia's granddaughter is a genius, namely a mathematician and a musician. But she... (more) Arcadia (1993) Tom Stoppard Stoppard's critically successful play includes long discussions of topics of mathematical interest including: Fermat's Last Theorem and Newtonian determinism, iterated algorithms, the second law of thermodynamics, Fourier's... (more) Batorsag and Szerelem [a.k.a. Beautiful Ohio] (2006) Ethan Canin A very sensitively written story about a child, William, who grows up in the shadow of his brother, Clive, who is a math prodigy. Clive, in addition to his strong mathematical skills, is also a very... (more) A Beautiful Mind (2001) Sylvia Nasar / Akiva Goldsman Although the book A Beautiful Mind: A Biography of John Forbes Nash, Jr. is not fictional, Ron Howard's film (released December 2001) most certainly is. (I say this not as a complaint, but just to justify... (more) The Book of Getting Even (2009) Benjamin Taylor A brilliant homosexual teenager uses mathematics as an escape from the pressures of everyday life, including his father, a rabbi in 1970's New Orleans. Along the way, he gets to know (and love, in a variety of ways) the family of a Nobel prize winning physicist and he himself becomes a cosmologist. (more) Brain Wave (1954) Poul Anderson This debut novel from SF superstar Anderson explains that the human intelligence is far more powerful than we have thus far seen. In fact, once we escape from the effects of a force field that is limiting... (more) Carry On, Mr. Bowditch (1955) Jean Lee Latham The life of early American mathematician Nathaniel Bowditch, famous for his work on techniques of navigation, is fictionalized in this novel for young adults. Although the mathematical details are not... (more) Catching Genius (2007) Kristy Kiernan A novel about a pair of sisters, one of whom is a "math genius". The title refers to the fact that she thinks "eyecue" is a disease when she first hears as a child that she has a high one and warns her... (more) The Clueless Girl's Guide to Being a Genius (2011) Janice Repka An excellent book for 4th – 5th graders but one I would recommend for all teachers and students. Written as an interlaced, first-person account of two young girls – Aphrodite, who is a math prodigy... (more) Continuums (2008) Robert Carr The decisions we make and the difficulty in accepting the consequences is the main focus of this book about a Romanian mathematician who leaves her country and her daughter to be in a place that she could... (more) Disciple of the Masses (2008) Xujun Eberlein A pathos-filled short story set in rural China toward the end of Mao’s Cultural Revolution. It captures beautifully the sense of loss inherent in a centrally-directed and enforced revolution, with... (more) Erasmus with Freckles [aka Dear Brigitte] (1963) John Haase The novel Erasmus with Freckles (1963) about a college English professor who hates math and science whose son is a math prodigy, was adapted into the film Dear Brigitte (1965) and re-released as a novel... (more) Evil Genius (2005) Catherine Jinks I am pleased to report that the titular "evil genius" in this children's novel is not the stereotypical cold mathematician in so many other works of mathematical fiction. In fact, the title character... (more) Eye of the Beholder (2005) Alex Kasman Shortly after a stunning success in her research, personal tragedy forces a math professor to change careers and begin work at the NSA where her work on cryptography involves some difficult ethical decisions.... (more) Family Ties (Episode: My Tutor) (1985) Jace Richdale (Screenplay) / Sam Weisman (Director) I'm writing to bring your attention to a television episode for possible addition to your mathematical fiction website. The television show is "Family Ties" and the episode is entitled, "My Tutor".... (more) Forgotten Milestones in Computing No. 7: The Quenderghast Bullian Algebraic Calculator (1990) Alex Stewart A very creative story about a mathematician which History has entirely forgotten - one "Thaddeus Q. Quenderghast III, of Nettlebend, Wyoming". Born around 1821, a contemporary of Charles Babbage and... (more) The Four-Color Puzzle: Falling Off the Map (2013) Lior Samson A math professor becomes intrigued with a high school student he meets at an online tutoring site when she presents him with what appears to be a short and very clever proof of the four-color theorem.... (more) The Fringe (Episode: The Equation) (2008) J.R. Orci (Screenplay) / David H. Goodman (Screenplay) The ``Fringe Team'' (an FBI agent, a mad scientist and his son) investigate a series of kidnappings in which the victim is hypnotized with red and green lights. In each case, the victim was about to... (more) The Ganymede Club (1995) Charles Sheffield A group of space explorers attempt to protect the secret that they are no longer aging in this well written SF novel. Although these (essentially) immortal characters are not especially mathematical,... (more) Geek Abroad (2008) Piper Banks Miranda Bloom, the mathematical prodigy first introduced in Geek High returns in another novel for teenagers, this time emphasizing her participation in mathematical competitions. For instance, we see... (more) Geek High (2007) Piper Banks Miranda Bloom is a mathematically talented girl trying to deal with normal teenage problems (family, boys, etc.) Although mental calculations have always come easy to Miranda, she does not appear to be... (more) The Geometry of Sisters (2009) Luanne Rice Young Beck hopes her mathematical skills will somehow bring back her dead father. Other reviewers have mostly complained that this novel does not work as the serious family drama it intends to be. From... (more) Gifted: A Novel (2007) Nikita Lalwani This novel tells the coming-of-age story of a girl whose Indian father is a professor of mathematics in Wales. She is talented at mathematics and even uses sophisticated math in her everyday life (e.g.... (more) A Girl Named Digit (2012) Annabel Monaghan A girl nicknamed "Digit" by her classmates because of her mathematical abilities and interests discovers a terrorist plot and begins working with the FBI to catch a double agent in this adventure aimed... (more) The Girl Who Played with Fire (2009) Stieg Larsson In this sequel to the stunningly popular The Girl with the Dragon Tattoo, the self-taught, nearly autistic, young genius, Lisbeth Salander, once again becomes involved in a thrilling mystery allied with... (more) The God Patent (2009) Ransom Stephens After his life falls apart, an engineer tries to revive a collaboration with the fundamentalist Christian with whom he once wrote two patents based on the Bible. While he viewed these patents for what... (more) Gomez (1954) Cyril M. Kornbluth this story is about a physics prodigy, but a mathematical equation appears in it -- the first time I read story the equation didn't make any sense to me, but eventually I realized that it was a... Good Will Hunting (1997) Gus Van Sant (director) / Matt Damon (Screenplay) A young janitor at MIT solves a (supposedly) difficult problem left on a black board by a Fields medalist. This successful film did make many more people aware of the existence of the Fields medal.... (more) Hannah, Divided (2002) Adele Griffin The story of a 13 year old girl living in rural Pennsylvania in 1934, "Hannah" presents us with yet another fictional account of someone who is not only talented in mathematics but also psychologically... (more) The Infinite Tides (2012) Christian Kiefer A somber novel about an astronaut whose daughter dies tragically and wife leaves him while he is in space. Since he and his daughter were both mathematical prodigies, for whom math was not only a beloved... (more) The Ishango Bone (2012) Paul Hastings Wilson Amiele becomes the first female student at Trinity College and goes on to disprove the Riemann Hypothesis at the age of 26, but is denied the Fields Medal. Written as if it were her life story recorded... (more) It's My Turn (1980) Claudia Weill (director) About a mathematician who writes a proof of the Snake Lemma at the speed of light. Her love interest was Michael Douglas, some sort of athlete. One mathematician I know claims he wrote a paper just... (more) Lepel (2005) Willem van de Sande Bakhuyzen (director) / Mieke de Jong (screenplay) In this charming family film from the Netherlands, a boy who believes his name is "Lepel" runs away from the mean button thief who has watched over him since his parents disappeared. If you have come... (more) Life After Genius (2008) M. Ann Jacoby Although his family would normally expect him to stay in their small town and take over the family business (a combination of a furniture store and funeral home), Mead Fegley's "genius" gives him the unprecedented... (more) The Long Chalkboard (2006) Jenny Allen / Jules Feiffer (Illustrator) Allen's book is a collection of three short-short stories spread out over book length with illustrations on every page, in the usual style of children's literature, complete with charmingly simple... (more) Long Division (2003) Michael Redhill The title of this short story refers both to arithmetic, a beloved subject of the school age child at its center, and the separation that his mother feels from him and his father due to the child's extraordinary... (more) Lost (2011) Tamora Pierce A mathematically talented little girl from a mystical medieval realm is abused by her anti-intellectual father and unappreciated by a mean math teacher who insists that she show all of her work. However,... (more) Measuring the World (2006) Daniel Kehlmann Two famous Germans of the 19th Century, mathematician Carl Friedrich Gauss and explorer/geologist Alexander von Humboldt, are irreverently presented in this novel which topped the sales charts in Germany... (more) Mefisto: A Novel (1986) John Banville Although the mathematics is only discussed in this novel in the vaguest terms, it is of the greatest importance to the book. Gabriel Swan, the main character/narrator is so focused on numbers and equations... (more) Mercury Rising (1998) Harold Becker (director) Bruce Willis is an FBI agent trying to protect an autistic child whose mathematical abilities allow him to break the government's top secret codes. Now, it is true that some of the most frequently used... (more) Misfit (1939) Robert A. Heinlein A crew of misfits ships out to the asteroid belt. One member turns out to be a misfit among the misfits: he's a mathematical prodigy. His skills prove to be very valuable. reprinted in THE PAST... Monster's Proof (2009) Richard Lewis With parents and a younger brother who are all "mathematical geniuses", Livey Ell (who is in danger of getting kicked out of cheerleading unless she improves her algebra grades) is a bit too normal. Things... (more) No One You Know (2008) Michelle Richmond Having felt overshadowed by her mathematician older sister when she was alive, the main character becomes obsessed with her murder after the sister is killed. Using her sister's notebook describing her... (more) Nothing but the Truth (and a few white lies) (2006) Justina Chen Headley This is a novel for young adults about a half Asian teenager who is sent to a summer Math Camp at Stanford by her overprotective mother. She enjoys the camp more than she expected to, until her mother... (more) NUMB3RS (2005) Nick Falacci / Cheryl Heuton This TV crime drama (premiered January 2005) follows the adventures of a pair of brothers, one a mathematics professor and the other an FBI agent, as they combine forces to solve mysteries. Cool effects... (more) Numberland (1987) George Weinberg The co-author (with John Schumaker) of STATISTICS: AN INTUITIVE APPROACH, and practicing psychotherapist, tells a charming little fable about Numberland. Peace, harmony,... (more) Numbers (2009) Dana Dane Hip Hop artist Dana Dane wrote this novel about a NYC youth with mathematical talent who gets caught up in a life of crime. There is no actual mathematics discussed. Rather, it appears in a few brief comments only to justify the protagonist's nickname of "Numbers" and presumably to convince us that he had the potential for a bright future under the right circumstances. (more) Prince of Mathematics: Carl Friedrich Gauss (2006) Margaret B.W. Tent A fictionalized account of the life and achievements of one of history's greatest mathematicians, told in a style which is appropriate for children but also maintains the interest of adult readers. (I'm... (more) Pythagoras' Revenge: A Mathematical Mystery (2009) Arturo Sangalli Freelance science journalist Sangalli has written a book which presents some historical information about Pythagoras and his beliefs in the form of a novel of the detail driven conspiracy theory adventure... (more) Ratner's Star (1976) Don DeLillo Billy Terwilliger (aka Twillig) is not your typical 14 year old boy. True, he is beginning to get interested in sex and thinks that the word "fart" is entertaining, but he is also a number theorist and... (more) Recess (Episode: A Genius Among Us) (2000) Brian Hamill This episode of Disney's Saturday Morning cartoon "Recess" is clearly a parody of the film "Good Will Hunting". I hope this doesn't lower anyone's opinion of me...but I personally liked it better than... (more) Regarding Roderer (1994) Guillermo Martinez A short novel about Gustavo Roderer, a brilliant but troubled young man in Argentina. Mathematics is not a central theme, but arises as Roderer's friend (the narrator) talks with him about the philosophical... (more) A Rite of Spring (1977) Fritz Leiber Leiber has stretched out a very flimsy story line into a 50-page trivia-fest on the number seven. A genius of a mathematician yearns for his childhood ability to visualize and play with mathematics as... (more) Simple Genius (2007) David Baldacci A small child with an inexplicable ability to factor large numbers threatens the security of the Western world in this political thriller from popular author Baldacci. Although it is nice to see mathematics... (more) Sophie's Diary (2004) Dora Musielak Sophie Germain famously studied mathematics at night by candlelight despite her parents' insistence that she give up this unfeminine discipline. She then went on to become one of the great mathematician's... (more) Star, Bright (1952) Mark Clifton How would you feel if your daughter could make deep mathematical discoveries, even when she was a toddler? If you were the parent of little Star in this story, you'd feel a combination of pride and... (more) Tangents (1986) Greg Bear There are far too many mathematical stories about finding a way to travel into "other dimensions". Still, this one is one of my favorites. Not only do we see a clever approach to this "old" storyline,... (more) Thursday Next: First Among Sequels (2007) Jasper Fforde As Vijay Fafat points out, the eponymous heroine of this series of humorous, fantasy mysteries has a daughter who is a math prodigy. Among other things, in this novel she finds a counter-example to Fermat's... (more) Time, Like an Ever Rolling Stream (1992) Judith Moffett The aliens have come to save us from ourselves (which they do by passing environmental laws and sterilizing all humans to prevent overpopulation). One of the aliens, as a pet project, recruits eight young... (more) Without a Trace (Episode: Claus and Effect) (2007) David Amann (writer) / Alicia Kirk (writer) / Bobby Roth (Director) In this Christmas special episode of the TV crime drama, a department store Santa turns out to be a mathematical prodigy who has quit his job as a mathematician/programmer due to ethical concerns that his work will cause others to lose their jobs. He becomes involved in a scheme to make money by applying mathematics to gambling. (more) The Wizard (1989) C.S. Godshalk A mathematically talented youth in a bad neighborhood becomes a drug dealer and may not be able to take advantage of his genius by attending the private school which has offered him a scholarship. In... (more) The Writing on the Wall (2005) Steve Stanton When he was eight years old, David was visited by an image of his future self, causing him to write mathematical formulas on the wall. (Unfortunately, his parents paint over it before he has a chance... (more) WWW: Wake (2009) Robert J. Sawyer A blind math prodigy uses her ability to "see" what is going on in the Internet (watch out for the pun: Websight) to discover the emergence of a virtual life form. This is a solid and very readable hard... (more) Young Archimedes (1924) Aldous Huxley A couple vacationing in Italy meet a peasant boy with strong mathematical abilities. The most mathematical portion of the text is a discussion of a proof of the Pythagorean theorem which the boy develops.... (more)
{"url":"http://kasmana.people.cofc.edu/MATHFICT/search.php?go=yes&motif=pro&orderby=title","timestamp":"2014-04-18T03:43:02Z","content_type":null,"content_length":"52380","record_id":"<urn:uuid:4cbdb4e7-b3c5-4c3a-b000-dd921e82c679>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00033-ip-10-147-4-33.ec2.internal.warc.gz"}
V.26 The Prime Number Theorem and the Riemann Hy 714 general harder than problems in P. The P versus N P problem asks for a proof that the complexity classes P and N P really are distinct. For a detailed discussion of the problem, see computational complexity [IV.20]. V. Theorems and Problems these structures are the three-dimensional versions of Euclidean, spherical, and hyperbolic geometry (see [I.3 §6]). Another is the infinite "cylinder" S 2 × R : that is, the product of a 2-sphere with an infinite line. Simi- larly, one can take the product of the hyperbolic plane with an infinite line and obtain a fifth structure. The other three are slightly more complicated to describe. Thurston also gave significant evidence for his con- jecture by proving it in the case of so-called Haken manifolds. The geometrization conjecture implies the Poincaré conjecture; both were proved by Grigori Perelman, who completed a program that had been set out by Richard Hamilton. The main idea of this program was to solve the problems by analyzing ricci flow [III.78]. The solu- tion was announced in 2003 and checked carefully by several experts over the next few years. For more details, see differential topology [IV.7]. V.25 The Poincaré Conjecture The Poincaré conjecture is the statement that a com- pact [III.9] smooth n-dimensional manifold that is homotopy equivalent [IV.6 §2] to the n-sphere S n must in fact be homeomorphic to S n . One can think of a compact manifold as a manifold that lives in a finite region of R m for some m and that has no boundary: for example, the 2-sphere and the torus are compact manifolds living in R 3 , while the open unit disk or an infinitely long cylinder is not. (The open unit disk does not have a boundary in an intrinsic sense, but its realization as the set {(x, y) : x 2 + y 2 < 1} has the set {(x, y) : x 2 + y 2 = 1} as its boundary.) A manifold is called simply connected if every loop in the manifold can be continuously con- tracted to a point. For instance, a sphere of dimen- sion greater than 1 is simply connected but a torus is not (since a loop that "goes around" the torus will always go around the torus, however you continuously deform it). In three dimensions, the Poincaré conjec- ture asks whether two simple properties of spheres, compactness and simple connectedness, are enough to characterize spheres. The case n = 1 is not interesting: the real line is not compact and a circle is not simply connected, so the hypotheses of the problem cannot be satisfied. poincaré [VI.61] himself solved the problem for n = 2 early in the twentieth century, by completely classify- ing all compact 2-manifolds and noting that in his list of all possible such manifolds only the sphere was simply connected. For a time he believed that he had solved the three-dimensional case as well, but then discov- ered a counterexample to one of the main assertions of his proof. In 1961, Stephen Smale proved the con- 5, and Michael Freedman proved the jecture for n n = 4 case in 1982. That left just the three-dimensional problem open. Also in 1982, William Thurston put forward his famous geometrization conjecture, which was a pro- posed classification of three-dimensional manifolds. The conjecture asserted that every compact 3-manifold can be cut up into submanifolds that can be given metrics [III.56] that turn them into one of eight par- ticularly symmetrical geometric structures. Three of V.26 The Prime Number Theorem and the Riemann Hypothesis How many prime numbers are there between 1 and n? A natural first reaction to this question is to define (n) to be the number of prime numbers between 1 and n and to search for a formula for (n). However, the primes do not have any obvious pattern to them and it has become clear that no such formula exists (unless one counts highly artificial formulas that do not actually help one to calculate (n)). The standard reaction of mathematicians to this kind of situation is to look instead for good estimates. In other words, we try to find a simply defined func- tion f (n) for which we can prove that f (n) is always a good approximation to (n). The modern form of the prime number theorem was first conjectured by gauss [VI.26] (though a closely related conjecture had been made by legendre [VI.24] a few years earlier). He looked at the numerical evidence, which suggested to him that the "density" of primes near n was about 1/log n, in the sense that a randomly chosen integer near n would have a probability of roughly 1/log n of being a prime. This leads to the conjectured approx- imation of n/log n for (n), or to the slightly more sophisticated approximation n (n) 0 dx . log x The function defined by the integral on the right-hand side is called li(n) (which stands for the
{"url":"http://my.safaribooksonline.com/book/math/9781400830398/part-v-theorems-and-problems/v26_the_prime_number_theorem_a","timestamp":"2014-04-21T04:42:54Z","content_type":null,"content_length":"96692","record_id":"<urn:uuid:22f8b202-bceb-48b8-91f5-d50814cfd1e3>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00493-ip-10-147-4-33.ec2.internal.warc.gz"}
Don't understand the proof of a simple theorem October 15th 2009, 11:28 AM #1 Oct 2009 Don't understand the proof of a simple theorem I found the theorem and its proof in the attachment file. The thing is that I don't understand why suddenly "either disjoint or equal" become "disjoint" for "Ix=Iy" condition. Could you explain this for me? It is actually easy. But you need to understand that $I_x$ is a connect set. In $R$ the open connected sets are $(-\infty ,a),~(b, \infty ),\text{ or }(a,b)$. If two connected set have a point in common there union is connected and has one of those three forms. Now by definition of $I_x$is a maximum connected open set. Thus, either $I_x=I_y\text{ or }I_x\cap I_y=\emptyset$. October 15th 2009, 12:55 PM #2
{"url":"http://mathhelpforum.com/differential-geometry/108256-don-t-understand-proof-simple-theorem.html","timestamp":"2014-04-17T03:13:09Z","content_type":null,"content_length":"36066","record_id":"<urn:uuid:00808a43-a6b3-423e-9871-1b4143be280e>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00478-ip-10-147-4-33.ec2.internal.warc.gz"}
Units of Measurement/Volume Dimension L^3 Usual Symbol V Coherent units System Unit Symbol SI cubic metre m^3 CGS cubic centimetre cm^3 Imperial cubic foot ft^3 The base unit of volume in the SI system is the liter. There are 1000 liters per cubic meter, or 1 liter contains the same volume as a cube with sides of length 10cm. A cube with sides of length of 1 cm or 1 cm^3 contains a volume of 1 milliliter. A liter contains the same volume as 1000 ml or 1000 cm^3. Imperial unitsEdit Last modified on 8 May 2011, at 21:04
{"url":"http://en.m.wikibooks.org/wiki/Units_of_Measurement/Volume","timestamp":"2014-04-19T01:52:27Z","content_type":null,"content_length":"15331","record_id":"<urn:uuid:ee3e3c3e-16b7-4caf-9df6-e3faccba83d4>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00298-ip-10-147-4-33.ec2.internal.warc.gz"}
Which topological spaces have the property that their sheaves of continuous functions are determined by their global sections? up vote 13 down vote favorite I hope I'm using the terminology correctly. What I mean is this: fix $K = \mathbb{R}$ or $\mathbb{C}$ (I'm interested in both cases). Which topological spaces $X$ have the property that for every open set $U$, every continuous function $f : U \to K$ is a quotient of continuous functions $\frac{g}{h}$ where $g, h : X \to K$ and $h \neq 0$ on $U$? sheaf-theory gn.general-topology 1 Don't you mean that $h$ is nowhere vanishing on U? – Yemon Choi Nov 20 '09 at 6:40 As topological spaces $\mathbb C = \mathbb R \times \mathbb R$, and so for the purposes of detecting spaces there is no difference. – Theo Johnson-Freyd Nov 20 '09 at 8:03 2 @Theo: we are of course using the multiplicative structure on K, at least in the question. So while there might turn out to be no difference I'm not convinced it's for the reason you give. – Yemon Choi Nov 20 '09 at 8:07 1 @Theo: while it's always nice to see love for Gelfand-Naimark, I don't quite understand its relevance to Qiaochu's question... (Also, C*-algebras are very rigid and non-sheafy objects) – Yemon Choi Nov 20 '09 at 8:21 2 For locally compact hausdorff spaces this is equivalent to finding h continuous on X such that fh extends to a continuous function up to the boundary, since than you can extend the product to a g continuous on the full space. – Gian Maria Dall'Ara Nov 20 '09 at 11:27 show 7 more comments 2 Answers active oldest votes This isn't a complete answer, but I think that whatever the family is, it contains compact metric (metrisable) spaces. With a paracompactness argument, I suspect that it would extend to locally compact, and I would not be surprised if one could replace "metrisable" by something weaker (though I think that it would need that separation property one-above-normal which I can never remember the name of: namely that every closed set is the zero set of a continuous function). Here's a proof (I hope): Let $M$ be a compact metric space, $U \subseteq M$ an open subset, $f : U \to \mathbb{R}$ a continuous function. Let's write $K$ for the complement of $U$ in $M$. For each $n \in \mathbb{N}$, let $C_n \subseteq U$ be the subset consisting of points at least distance $1/n$ away from $K$. Then $C_n$ is closed in $M$, hence compact, and $\ bigcup C_n = U$. Let $h_0 : M \to \mathbb{R}$ be the "distance from $K$" function (so that $C_n = h_0^{-1}([1/n,\infty))$). Let $V_n$ be the complement of $C_n$. As $C_n$ is compact, $f$ is bounded on $C_n$. Let $a_n = \max\{|f(x)| : x \in C_n\}$, then $(a_n)$ is an increasing sequence. Let $(b_n)$ be a decreasing sequence that goes to $0$ faster than $(a_n)$ increases, specifically that $(a_nb_n) \to 0$. Let $r : [0,\infty) \to [0,\infty)$ be a continuous decreasing function such that $r(1/n) = b_{n+1}$ (as $(b_n) \to 0$ (this always exists) and let $h = r \circ h_0$. Then for $x \in V_{n-1}$, $h_0(x) \lt 1/(n-1)$ so $h(x) \lt b_n$. Then $h : M \to \mathbb{R}$ is a continuous function. Moreover, $h f$ (the product, with $h$ restricted to $U$) has the property that for $x \in C_n \setminus C_{n-1} = V_{n-1} \ setminus V_n$, $$ |(f h)(x)| = |f(x)| |h(x)| \le a_n b_n $$ Thus as $x \to K$, $(f h)(x) \to 0$ and so we can extend $f h$ to a continuous function $g : M \to \mathbb{R}$ by defining it to be $0$ on $K$. Then on $U$, $f = g/h$. (I made this up, so obviously, there may be something I've overlooked in this so please tell me if I'm not correct.) Edit: This one's been bugging me all weekend. I've even gone so far as to look up perfectly normal. This property holds for perfectly normal spaces. In a perfectly normal space, every closed set is the zero set of a function (to $\mathbb{R}$, and this characterises perfectly normal spaces according to Wikipedia). up vote 5 Here's the proof. Let $X$ be a perfectly normal space. Let $U \subseteq X$ be an open set, and $f : U \to \mathbb{R}$ a continuous function. Let $r : X \to \mathbb{R}$ be such that the down vote zero set of $r$ is the complement of $U$. Let $s : \mathbb{R} \to \mathbb{R}$ be the function $s(t) = \min\lbrace 1, |t|^{-1}\rbrace$. The crucial fact is that if $p : U \to \mathbb{R}$ is a bounded function then the pointwise product $r \cdot p : U \to \mathbb{R}$ (technically, $p$ should be restricted to $U$ here) extends to a continuous function on $X$ by defining it to be zero on $X \setminus U$. From this, the rest follows easily. 1. The composition $s \circ f$ is bounded on $U$, hence $r \cdot (s \circ f)$ extends to a continuous function on $X$, say $h$. 2. The product $(s \circ f) \cdot f$ is also bounded on $U$, since $(s \circ f)(x) = \min\lbrace 1, |f(x)|\rbrace)$. Hence $r \cdot (s \circ f) \cdot f$ extends to a continuous function on $X$, say $g$. 3. As $s(t) \ne 0$ for all $t \in \mathbb{R}$, $(s \circ f)(x) \ne 0$ for all $x \in X$. Hence $h(x) \ne 0$ for all $x \in U$. 4. Finally, on $U$, $g(x) = h(x) \cdot f(x)$, whence, as $h$ is never zero on $U$, $f = g/h$ as required. This isn't a complete characterisation of these spaces. Essentially, this result holds if there are enough continuous functions (as above) on $X$ and if there are too few. As an example of the latter, consider a topological space $X$ where every pair of non-trivial open sets has non-empty intersection. Then there can be no non-constant functions to $\ mathbb{R}$, either on $X$ or on any open subset thereof. Hence every continuous function on an open subset of $X$ trivially extends to the whole of $X$. However, there's probably some argument that says that once you have sufficient continuous functions (say, if the space is functionally Hausdorff - i.e. continuous functions to $\mathbb {R}$ separate points) then it would have to be perfectly normal. The difficulty I have with making this into a proof is that there's no requirement that the function $g$ be zero on the Finally, note that metric spaces are perfectly normal so this supersedes my earlier proof. I leave it up, though, in case it's of use to anyone to see the workings as well as the current state. (Actually, for the record I ought to declare that initially I thought that this was false for almost all spaces. However, once I'd examined my counterexample closely, I realised my error and now I'm having difficulty thinking of a reasonable space where it does not hold.) Metrizability seems too much. After all, what you use is sigma compactness of every open set and the existence of the function going to zero fast enough. If K_n is the exhaustion of U by compact sets than a positive map whose value is b_n on Cl(K_n\K_n-1) and 0 outside Int(K_n+1\K_n-2) should be enough. So I think the right condition, at least for your argument to work is the existence of bump functions (loc comp hausdorff) and sigma compactness of open sets. – Gian Maria Dall'Ara Nov 20 '09 at 13:30 Sigma compactness is a natural condition, which is useful even in measure theory since it ensures the regularity of measures given by Riesz representation theorem. – Gian Maria Dall'Ara Nov 20 '09 at 13:30 I suspect that sigma compactness is too much as well, or at least with paracompactness can be replaced by a sort of local sigma compactness. I'll be interested to learn if there's a simple description of all the topological spaces that have this property. However, for this answer I just wanted something that would work and wasn't too restrictive. – Andrew Stacey Nov 20 '09 at 13:51 Note that I got rid of both metrisability and sigma compactness now. – Andrew Stacey Nov 22 '09 at 21:13 Very nice proof! Now, even if it is not a characterisation, it's much more satisfying. – Gian Maria Dall'Ara Nov 22 '09 at 21:49 show 2 more comments I tried a bit of thinking, but I haven't worked all the details. I have a hint though that may lead to the answer of your question. You may want to regard the continuous functions over an open set as a ring. This ring is reduced and commutative (thus there is a so-called rational completion) and we could then look at rational completion of them and this may lead to an A good and downloadable reference of this is found here. A classical reference (and also the best one) is the book of Lambek "Lectures on Rings and Modules" by Lambek (please don't up vote 0 confuse it with the book of Lam, who happens to have the same first 3 letters in his last name, entitled "Lectures on Modules and Rings"), see for instance sections 2.3 and 4.4 of the down vote book. A few years ago, I had written a small entry in Planetmath that characterized rational extensions of commutative reduced rings. And you can use that as an easy definition. 1 Note: I know the OP asks about the topology of X. But to study the the rational extensions of continuous function, information about Spec C(U) is passed (C(U) is the ring of cont's functions from U to K) and there is a relation between the topology of Spec C(U) and the topology of U. – Jose Capco Nov 21 '09 at 1:47 add comment Not the answer you're looking for? Browse other questions tagged sheaf-theory gn.general-topology or ask your own question.
{"url":"http://mathoverflow.net/questions/6227/which-topological-spaces-have-the-property-that-their-sheaves-of-continuous-func","timestamp":"2014-04-21T07:31:21Z","content_type":null,"content_length":"74708","record_id":"<urn:uuid:3fc0b95a-dfc1-4178-af3a-e8fbfb8e6b8e>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00141-ip-10-147-4-33.ec2.internal.warc.gz"}
Why Shutter Speed and Aperture Numbers are Upside Down - Photo Tips @ Earthbound Light Why Shutter Speed and Aperture Numbers are Upside Down There's a lot of halving and doubling of numbers in photography. Exposure calculation just works that way. But what's confusing for some is that when you double the aperture from, perhaps f/8 to f/ 16, the image ends up darker. Both aperture and shutter speed numbers can seem upside down. But there is a method to the madness once you know what the numbers actually mean. Let's start with shutter speed since it's the easier of the two. When you press the shutter release, the camera goes click. But what sounds like a single click is really the sound of two events — the shutter opening, and the shutter closing. The typical shutter speed is fast enough that those two sounds blur into what we perceive as a single click. Indeed, shutter speeds are so fast that, as you probably already know, they're almost always less than a second. So when your camera shows numbers like 15, 30, 60, 125 and so on, those are really 1/ 30, 1/60 and 1/125 second. Constantly putting that "1/" fraction stuff on every number must have seemed like too much bother to somebody long ago and now we're all stuck with it. When shooting long exposures it is possible to go above one second and things can get even more interesting. Those same 15, 30, 60 and so on numbers now have to have two tick marks (essentially a quotation mark) after them to clarify that now we're talking about seconds rather than fractions of a second. A single tick mark is a short hand for minutes, and two tick marks stands for seconds. So rather than putting the one-over fraction prefix in front of all the short number we get to put the seconds double-quote after then long ones. It does make for a kind of short hand I guess, but it can confuse new photographers. It really doesn't take that long to understand any of this, but it can take a while to get used to it. It can be all too easy to instinctively increase shutter speed when you mean to decrease it because of these upside down numbers. Aperture numbers can similarly seem upside down and for a somewhat similar reason. When we say that a shot was made at f/4, that number does look somewhat like fraction, doesn't it? But in this case, the top number isn't a number at all, it's the letter "f." Aperture values are indeed fractions. The numbers are the ratio of the focal length (that's where the "f" comes from) to the effective diameter of the lens opening. This helps explain why shorter focal length lenses often have a faster maximum aperture (widest open aperture) than do telephotos. It's not unusual to have f/4 wide angle lenses. Standard 50mm lenses with f/1.8 or even f/1.4 are common. But if you own a telephoto that can go to that wide it weighs a bit and costs a lot. To achieve an f/4 aperture on a 40mm lens requires a lens opening of only 10mm. That same f/4 aperture on a 200mm lens means you'll need a 50mm opening. It's worth pointing out that the opening at issue here isn't the front lens element, the one you can immediately see when you take the lens cap off. It's the effective opening of the lens which lies deep in the middle of the lens body and is dictated by design of the lens optics. In rough terms though, you can think about the front of the lens when making aperture comparisons, assuming the lenses have similar construction. This fraction definition also helps explain why the standard series of aperture numbers don't double or halve as you go from one to the next. Instead, they change by multiples of 1.4, or the square root of two. They amount of light that makes it through a lens relates to the area of the opening while the aperture fractions are in terms of the lens opening diameter. The area increases as to the square of the diameter, so if we multiply or divide the area by two, the diameter changes by the square root of two. And as with shutter speed numbers, intellectually understanding that they're fractions is only half the battle to really understanding them. To really become comfortable with these number scales requires using them. So go take some pictures.
{"url":"http://www.earthboundlight.com/phototips/why-shutter-speed-and-aperture-are-upside-down.html","timestamp":"2014-04-20T16:17:02Z","content_type":null,"content_length":"35301","record_id":"<urn:uuid:1302047b-cf16-424b-b689-49b8803af83b>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00538-ip-10-147-4-33.ec2.internal.warc.gz"}
Math 371 Each of these assignments involve elements that must be coompleted using MATLAB. You can access MATLAB on any campus computer. Alternatively, you can access MATLAB by using the Virtual Desktop Infrastructure.The instructions for this can be found here. If you wish to purchase a student version of MATLAB you can do so here. Lab Assignments/Homework Problems: Homework 1 Due Friday Jan. 14 Homework 1 Programming Tips Matlab Intro Other Matlab Intro Files: File 1, File 2 Homework 2 Due Thursday Jan. 27 Bisection Method Lab Homework 3 Due Friday Feb. 11 at 2 PM Divided Difference Lab Homework 4 Due Monday Feb 28th at 2PM Cubic Spline Lab Homework 5 Due March 14th at 2PM MatLab file Lecture Notes: Periodically, I will give you some notes to summarize things we have done in class. These notes are meant to just give an overview of what we have done in class. Rate and Order of Convergence (pdf, Mathematica) Newton vs. Secant (pdf, Mathematica) Accelerating Convergence (pdf, Mathematica) Neville/Newton (pdf) Chebyshev Polynomials (pdf, Mathematica) Cubic Spline (pdf) Derivatives and Richardson Extrapolation (pdf, Mathematica) Richardson Extrapolation Example (pdf) Trapezoid and Simpson's Rule (pdf, Mathematica) Iterative Methods (pdf, Mathematica) Exam Prep: New material outine for the Final Exam
{"url":"http://fac-staff.seattleu.edu/difranco/web/Math_371_W11/homeworkM371W11.html","timestamp":"2014-04-16T16:15:09Z","content_type":null,"content_length":"8351","record_id":"<urn:uuid:480dcbe0-5e5f-42a0-9cfa-d2954c08a3ac>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00361-ip-10-147-4-33.ec2.internal.warc.gz"}
Symmetry: A 'Key to Nature's Secrets' by Steven Weinberg Symmetry: A ‘Key to Nature’s Secrets’ Mike King The five regular polyhedra. Steven Weinberg writes that ‘they satisfy the symmetry requirement that every face, every edge, and every corner should be precisely the same as every other face, edge, or corner…. Plato argued in Timaeus that these were the shapes of the bodies making up the elements: earth consists of little cubes, while fire, air, and water are made of polyhedra with four, eight, and twenty identical faces, respectively. The fifth regular polyhedron, with twelve identical faces, was supposed by Plato to symbolize the cosmos.’ When I first started doing research in the late 1950s, physics seemed to me to be in a dismal state. There had been a great success a decade earlier in quantum electrodynamics, the theory of electrons and light and their interactions. Physicists then had learned how to calculate things like the strength of the electron’s magnetic field with a precision unprecedented in all of science. But now we were confronted with newly discovered esoteric particles—muons and dozens of types of mesons and baryons—most existing nowhere in nature except in cosmic rays. And we had to deal with mysterious forces: strong nuclear forces that hold partiicles together inside atomic nuclei, and weak nuclear forces that can change the nature of these particles. We did not have a theory that would describe these particles and forces, and when we took a stab at a possible theory, we found that either we could not calculate its consequences, or when we could, we would come up with nonsensical results, like infinite energies or infinite probabilities. Nature, like an enemy, seemed intent on concealing from us its master plan. At the same time, we did have a valuable key to nature’s secrets. The laws of nature evidently obeyed certain principles of symmetry, whose consequences we could work out and compare with observation, even without a detailed theory of particles and forces. There were symmetries that dictated that certain distinct processes all go at the same rate, and that also dictated the existence of families of distinct particles that all have the same mass. Once we observed such equalities of rates or of masses, we could infer the existence of a symmetry, and this we thought would give us a clearer idea of the further observations that should be made, and of the sort of underlying theories that might or might not be possible. It was like having a spy in the enemy’s high command.^1 I had better pause to say something about what physicists mean by principles of symmetry. In conversations with friends who are not physicists or mathematicians, I find that they often take symmetry to mean the identity of the two sides of something symmetrical, like the human face or a butterfly. That is indeed a kind of symmetry, but it is only one simple example of a huge variety of possible The Oxford English Dictionary tells us that symmetry is “the quality of being made up of exactly similar parts.” A cube gives a good example. Every face, every edge, and every corner is just the same as every other face, edge, or corner. This is why cubes make good dice: if a cubical die is honestly made, when it is cast it has an equal chance of landing on any of its six faces. The cube is one example of a small group of regular polyhedra—solid bodies with flat planes for faces, which satisfy the symmetry requirement that every face, every edge, and every corner should be precisely the same as every other face, edge, or corner. Thus the regular polyhedron called a triangular pyramid has four faces, each an equilateral triangle of the same size; six edges, at each of which two faces meet at the same angle; and four corners, at each of which three faces come together at the same angles. (See illustration on this page.) These regular polyhedra fascinated Plato. He learned (probably from the mathematician Theaetetus) that regular polyhedra come in only five possible shapes, and he argued in Timaeus that these were the shapes of the bodies making up the elements: earth consists of little cubes, while fire, air, and water are made of polyhedra with four, eight, and twenty identical faces, respectively. The fifth regular polyhedron, with twelve identical faces, was supposed by Plato to symbolize the cosmos. Plato offered no evidence for all this—he wrote in Timaeus more as a poet than as a scientist, and the symmetries of these five bodies representing the elements evidently had a powerful hold on his poetic imagination. The regular polyhedra in fact have nothing to do with the atoms that make up the material world, but they provide useful examples of a way of looking at symmetries, a way that is particularly congenial to physicists. A symmetry is a principle of invariance. That is, it tells us that something does not change its appearance when we make certain changes in our point of view—for instance, by rotating it or moving it. In addition to describing a cube by saying that it has six identical square faces, we can also say that its appearance does not change if we rotate it in certain ways—for instance by 90° around any direction parallel to the cube’s edges. The set of all such transformations of point of view that will leave a particular object looking the same is called that object’s invariance group. This may seem like a fancy way of talking about things like cubes, but often in physics we make guesses about invariance groups, and test them experimentally, even when we know nothing else about the thing that is supposed to have the conjectured symmetry. There is a large and elegant branch of mathematics known as group theory, which catalogs and explores all possible invariance groups, and is described for general readers in two recently published books: Symmetry: A Journey into the Patterns of Nature by Marcus du Sautoy and Why Beauty Is Truth: A History of Symmetry by Ian Stewart. The symmetries that offered the way out of the problems of elementary particle physics in the 1950s were not the symmetries of objects, not even objects as important as atoms, but the symmetries of laws. A law of nature can be said to respect a certain symmetry if that law remains the same when we change the point of view from which we observe natural phenomena in certain definite ways. The particular set of ways that we can change our point of view without changing the law defines that symmetry. Laws of nature, in the modern sense of mathematical equations that tell us precisely what will happen in various circumstances, first appeared as the laws of motion and gravitation that Newton developed as a basis for understanding Kepler’s description of the solar system. From the beginning, Newton’s laws incorporated symmetry: the laws that we observe to govern motion and gravitation do not change their form if we reset our clocks, or if we change the point from which distances are measured, or if we rotate our entire laboratory so it faces in a different direction.^2 There is another less obvious symmetry, known today as Galilean invariance, that had been anticipated in the fourteenth century by Jean Buridan and Nicole Oresme: the laws of nature that we discover do not change their form if we observe nature within a moving laboratory, traveling at constant velocity. The fact that the earth is speeding around the sun, for instance, does not affect the laws of motion of material objects that we observe on the earth’s surface.^3 Newton and his successors took these principles of invariance pretty much for granted, as an implicit basis for their theories, so it was quite a wrench when these principles themselves became a subject of serious physical investigation. The crux of Einstein’s 1905 Special Theory of Relativity was a modification of Galilean invariance. This was motivated in part by the persistent failure of physicists to find any effect of the earth’s motion on the measured speed of light, analogous to the effect of a boat’s motion on the observed speed of water waves. It is still true in Special Relativity that making observations from a moving laboratory does not change the form of the observed laws of nature, but the effect of this motion on measured distances and times is different in Special Relativity from what Newton had thought. Motion causes lengths to shrink and clocks to slow down in such a way that the speed of light remains a constant, whatever the speed of the observer. This new symmetry, known as Lorentz invariance,^4 required profound departures from Newtonian physics, including the convertibility of energy and mass. The advent and success of Special Relativity alerted physicists in the twentieth century to the importance of symmetry principles. But by themselves, the symmetries of space and time that are incorporated in the Special Theory of Relativity could not take us very far. One can imagine a great variety of theories of particles and forces that would be consistent with these space-time symmetries. Fortunately it was already clear in the 1950s that the laws of nature, whatever they are, also respect symmetries of other kinds, having nothing directly to do with space and time. There are four forces that allow particles to interact with one another: the familiar gravity and electromagnetism, and the less well-known weak nuclear force (which is responsible for certain types of radioactive decay) and strong nuclear force (which binds protons and neutrons in the nucleus of an atom). (I am writing of a time, during the 1950s, before the formulation of the modern Standard Model, in which the three known forces other than gravity are now united in a single theory.) It had been known since the 1930s that the unknown laws that govern the strong nuclear force respect a symmetry between protons and neutrons, the two particles that make up atomic nuclei. Even though the equations governing the strong forces were not known, the observations of nuclear properties had revealed that whatever these equations are, they must not change if everywhere in these equations we replace the symbol representing protons with that representing neutrons, and vice versa. Not only that, but the equations are also unchanged if we replace the symbols representing protons and neutrons with algebraic combinations of these symbols that represent superpositions of protons and neutrons, superpositions that might for instance have a 40 percent chance of being a proton and a 60 percent chance of being a neutron. It is like replacing a photo of Alice or of Bob with a picture in which photos of both Alice and Bob are superimposed. One consequence of this symmetry is that the nuclear force between two protons is not only equal to the force between two neutrons—it is also related to the force between a proton and a neutron. Then as more and more types of particles were discovered, it was found in the 1960s that this proton–neutron symmetry was part of a larger symmetry group: not only are the proton and neutron related by this symmetry to each other, they are also related to six other subatomic particles, known as hyperons. The symmetry among these eight particles came to be called “the eightfold way.” All the particles that feel the strong nuclear force fall into similar symmetrical families, with eight, ten, or more members. Mike King A spinning nucleus ejects an electron while decaying, as does its reflection in a mirror. The electron is ejected in the direction of the nuclear spin (represented by the vertical arrow) in the real world, but opposite to the direction of spin in the mirror, violating mirror symmetry. Steven Weinberg writes, ‘In 1957 experiments showed convincingly that, while the electromagnetic and strong nuclear forces do obey mirror symmetry, the weak nuclear force does not. Experiments showed, for example, that it was possible to distinguish a cobalt nucleus in the process of decaying—as a result of the weak nuclear force—from its mirror image, spinning in the opposite direction.’ Adapted from an illustration in A. Zee, Fearful Symmetry: The Search for Beauty in Modern Physics (Princeton University Press, 2007). But there was something puzzling about these internal symmetries: unlike the symmetries of space and time, these new symmetries were clearly neither universal nor exact. Electromagnetic phenomena did not respect these symmetries: protons and some hyperons are electrically charged; neutrons and other hyperons are not. Also, the masses of protons and neutrons differ by about 0.14 percent, and their masses differ from those of the lightest hyperon by 19 percent. If symmetry principles are an expression of the simplicity of nature at the deepest level, what are we to make of a symmetry that applies to only some forces, and even there is only approximate? An even more puzzling discovery about symmetry was made in 1956–1957. The principle of mirror symmetry states that physical laws do not change if we observe nature in a mirror, which reverses distances perpendicular to the mirror (that is, something far behind your head looks in the mirror as if it is far behind your image, and hence far in front of you). This is not a rotation—there is no way of rotating your point of view that has the effect of reversing directions in and out of a mirror, but not sideways or vertically. It had generally been taken for granted that mirror symmetry, like the other symmetries of space and time, was exact and universal, but in 1957 experiments showed convincingly that, while the electromagnetic and strong nuclear forces do obey mirror symmetry, the weak nuclear force does not. Experiments showed, for example, that it was possible to distinguish a cobalt nucleus in the process of decaying—as a result of the weak nuclear force—from its mirror image, spinning in the opposite direction. (See illustration on this page.) So we had a double mystery: What causes the observed violations of the eightfold way symmetry and of mirror symmetry? Theorists offered several possible answers, but as we will see, this was the wrong question. The 1960s and 1970s witnessed a great expansion of our conception of the sort of symmetry that might be possible in physics. The approximate proton–neutron symmetry was originally understood to be rigid, in the sense that the equations governing the strong nuclear forces were supposed to be unchanged only if we changed protons and neutrons into mixtures of each other in the same way everywhere in space and time (physicists somewhat confusingly use the adjective “global” for what I am here calling rigid symmetries). But what if the equations obeyed a more demanding symmetry, one that was local, in the sense that the equations would also be unchanged if we changed neutrons and protons into different mixtures of each other at different times and locations? In order to allow the different local mixtures to interact with one another without changing the equations, such a local symmetry would require some way for protons and neutrons to exert force on each other. Much as photons (the massless particles of light) are required to carry the electromagnetic force, a new massless particle, the gluon, would be needed to carry the force between protons and neutrons. It was hoped that this sort of theory of symmetrical forces might somehow explain the strong nuclear force that holds neutrons and protons together in atomic nuclei. Conceptions of symmetry also expanded in a different direction. Theorists began in the 1960s to consider the possibility of symmetries that are “broken.” That is, the underlying equations of physics might respect symmetries that are nevertheless not apparent in the actual physical states observed. The physical states that are possible in nature are represented by solutions of the equations of physics. When we have a broken symmetry, the solutions of the equations do not respect the symmetries of the equations themselves.^5 The elliptical orbits of planets in the solar system provide a good example. The equations governing the gravitational field of the sun, and the motions of bodies in that field, respect rotational symmetry—there is nothing in these equations that distinguishes one direction in space from another. A circular planetary orbit of the sort imagined by Plato would also respect this symmetry, but the elliptical orbits actually encountered in the solar system do not: the long axis of an ellipse points in a definite direction in space. At first it was widely thought that broken symmetry might have something to do with the small known violations of symmetries like mirror symmetry or the eightfold way. This was a false lead. A broken symmetry is nothing like an approximate symmetry, and is useless for putting particles into families like those of the eightfold way. But broken symmetries do have consequences that can be checked empirically. Because of the spherical symmetry of the equations governing the sun’s gravitational field, the long axis of an elliptical planetary orbit can point in any direction in space. This makes these orbits acutely sensitive to any small perturbation that violates the symmetry, like the gravitational field of other planets. For instance, these perturbations cause the long axis of Mercury’s orbit to swing around 360° every 2,254 centuries. In the 1960s theorists realized that the strong nuclear forces have a broken symmetry, known as chiral symmetry. Chiral symmetry is like the proton–neutron symmetry mentioned above, except that the symmetry transformations can be different for particles spinning clockwise or counterclockwise. The breaking of this symmetry requires the existence of the subatomic particles called pi mesons. The pi meson is in a sense the analog of the slow change in orientation of an elliptical planetary orbit; just as small perturbations can make large changes in an orbit’s orientation, pi mesons can be created in collisions of neutrons and protons with relatively low energy. The path out of the dismal state of particle physics in the 1950s turned out to lead through local and broken symmetries. First, electromagnetic and weak nuclear forces were found to be governed by a broken local symmetry. (The experiments now underway at Fermilab in Illinois and the new accelerator at CERN in Switzerland have as their first aim to pin down just what it is that breaks this symmetry.) Then the strong nuclear forces were found to be described by a different local symmetry. The resulting theory of strong, weak, and electromagnetic forces is what is now known as the Standard Model, and does a good job of accounting for virtually all phenomena observed in our laboratories. It would take far more space than I have here to go into details about these symmetries and the Standard Model, or about other proposed symmetries that go beyond those of the Standard Model. Instead I want to take up one aspect of symmetry that as far as I know has not yet been described for general readers. When the Standard Model was put in its present form in the early 1970s, theorists to their delight encountered something quite unexpected. It turned out that the Standard Model obeys certain symmetries that are accidental, in the sense that, though they are not the exact local symmetries on which the Standard Model is based, they are automatic consequences of the Standard Model. These accidental symmetries accounted for a good deal of what had seemed so mysterious in earlier years, and raised interesting new possibilities. The origin of accidental symmetries lies in the fact that acceptable theories of elementary particles tend to be of a particularly simple type. The reason has to do with avoidance of the nonsensical infinities I mentioned at the outset. In theories that are sufficiently simple these infinities can be canceled by a mathematical process called “renormalization.” In this process, certain physical constants, like masses and charges, are carefully redefined so that the infinite terms are canceled out, without affecting the results of the theory. In these simple theories, known as “renormalizable” theories, only a small number of particles can interact at any given location and time, and then the energy of interaction can depend in only a simple way on how the particles are moving and spinning. For a long time many of us thought that to avoid intractable infinities, these renormalizable theories were the only ones physically possible. This posed a serious problem, because Einstein’s successful theory of gravitation, the General Theory of Relativity, is not a renormalizable theory; the fundamental symmetry of the theory, known as general covariance (which says that the equations have the same form whatever coordinates we use to describe events in space and time), does not allow any sufficiently simple interactions. In the 1970s it became clear that there are circumstances in which nonrenormalizable theories are allowed without incurring nonsensical infinities, but that the relatively complicated interactions that make these theories nonrenormalizable are expected, under normal circumstances, to be so weak that physicists can usually ignore them and still get reliable approximate results. This is a good thing. It means that to a good approximation there are only a few kinds of renormalizable theories that we need to consider as possible descriptions of nature. Now, it just so happens that under the constraints imposed by Lorentz invariance and the exact local symmetries of the Standard Model, the most general renormalizable theory of strong and electromagnetic forces simply can’t be complicated enough to violate mirror symmetry.^6 Thus, the mirror symmetry of the electromagnetic and strong nuclear forces is an accident, having nothing to do with any symmetry built into nature at a fundamental level. The weak nuclear forces do not respect mirror symmetry because there was never any reason why they should. Instead of asking what breaks mirror symmetry, we should have been asking why there should be any mirror symmetry at all. And now we know. It is accidental. The proton–neutron symmetry is explained in a similar way. The Standard Model does not actually refer to protons and neutrons, but to the particles of which they are composed, known as quarks and gluons.^7 The proton consists of two quarks of a type called “up” and one of a type called “down”; the neutron consists of two down quarks and an up quark. It just so happens that in the most general renormalizable theory of quarks and gluons satisfying the symmetries of the Standard Model, the only things that can violate the proton–neutron symmetry are the masses of the quarks. The up and down quark masses are not at all equal—the down quark is nearly twice as heavy as the up quark—because there is no reason why they should be equal. But these masses are both very small—most of the masses of the protons and neutrons come from the strong nuclear force, not from the quark masses. To the extent that quark masses can be neglected, then, we have an accidental approximate symmetry between protons and neutrons. Chiral symmetry and the eightfold way arise in the same accidental way. So mirror symmetry and the proton–neutron symmetry and its generalizations are not fundamental at all, but just accidents, approximate consequences of deeper principles. To the extent that these symmetries were our spies in the high command of nature, we were exaggerating their importance, as also often happens with real spies. The recognition of accidental symmetry not only resolved the old puzzle about approximate symmetries; it also opened up exciting new possibilities. It turned out that there are certain symmetries that could not be violated in any theory that has the same particles and the same exact local symmetries as the Standard Model and that is simple enough to be renormalizable.^8 If really valid, these symmetries, known as lepton and baryon conservation,^9 would dictate that neutrinos (particles that feel only the weak and gravitational forces) have no mass, and that protons and many atomic nuclei are absolutely stable. Now, on experimental grounds these symmetries had been known long before the advent of the Standard Model, and had generally been thought to be exactly valid. But if they are actually accidental symmetries of the Standard Model, like the accidental proton–neutron symmetry of the strong forces, then they too might be only approximate. As I mentioned earlier, we now understand that interactions that make the theory nonrenormalizable are not impossible, though they are likely to be extremely weak. Once one admits such more complicated nonrenormalizable interactions, the neutrino no longer has to be strictly massless, and the proton no longer has to be absolutely stable. There are in fact possible nonrenormalizable interactions that would give the neutrino a tiny mass, of the order of one hundred millionth of the electron mass, and give protons a finite average lifetime, though one so long that typical protons in matter today will last much longer than the universe already has. Experiments in recent years have revealed that neutrinos do indeed have such masses. Experiments are under way to detect the tiny fraction of protons that decay in a year or so, and I would bet that these decays will eventually be observed. If protons do decay, the universe will eventually contain only lighter particles like neutrinos and photons. Matter as we know it will be gone. I said that I would be concerned here with the symmetries of laws, not of objects, but there is one thing that is so important that I need to say a bit about it. It is the universe. As far as we can see, when averaged over sufficiently large scales containing many galaxies, the universe seems to have no preferred position, and no preferred directions—it is symmetrical. But this too may be an There is an attractive theory, called chaotic inflation, according to which the universe began without any special spatial symmetries, in a completely chaotic state. Here and there by accident the fields pervading the universe were more or less uniform, and according to the gravitational field equations it is these patches of space that then underwent an exponentially rapid expansion, known as inflation, leading to something like our present universe, with all nonuniformities in these patches smoothed out by the expansion. In different patches of space the symmetries of the laws of nature would be broken in different ways. Much of the universe is still chaotic, and it is only in the patches that inflated sufficiently (and in which symmetries were broken in the right ways) that life could arise, so any beings who study the universe will find themselves in such patches. This is all quite speculative. There is observational evidence for an exponential early expansion, which has left its traces in the microwave radiation filling the universe, but as yet no evidence for an earlier period of chaos. If it turns out that chaotic inflation is correct, then much of what we observe in nature will be due to the accident of our particular location, an accident that can never be explained, except by the fact that it is only in such locations that anyone could live. 1. 1 This article is based in part on a talk given at a conference devoted to symmetry at the Technical University of Budapest in August 2009. ↩ 2. 2 For reasons that are difficult to explain without mathematics, these symmetries imply important conservation laws: the conservation of energy, momentum, and angular momentum (or spin). Some other symmetries imply the conservation of other quantities, such as electric charge. ↩ 3. 3 Strictly speaking, Galilean invariance applies only approximately to the motion of the earth, since the earth is not moving in a straight line at constant speed. It is true that the earth’s motion in its orbit does not affect the laws we observe, but this is because gravity balances the effects of the centrifugal force caused by the earth’s curved motion. This too is dictated by a symmetry, but the symmetry here is Einstein’s principle of general covariance, the basis of the general theory of relativity. ↩ 4. 4 Lorentz had tried to explain the constancy of the observed speed of light by studying the effect of motion on particles of matter. Einstein was instead explaining the same observation by a change in one of nature’s fundamental symmetries. ↩ 5. 5 Consider the equation x ^3 equals x. This equation has a symmetry under the transformation that replaces x with– x; if we replace x with– x, we get the same equation. The equation has a solution x = 0, which respects the symmetry;–0 = 0. But it also has a solution in which x = 1. This does not respect the symmetry;–1 is not equal to 1. This is a broken symmetry. Of course, this equation is not much like the equations of physics. ↩ 6. 6 Honesty compels me to admit that here I am gliding over some technical complications. ↩ 7. 7 These particles are not observed experimentally, not because they are too heavy to be produced (gluons are massless, and some quarks are quite light), but because the strong nuclear forces bind them together in composite states like protons and neutrons. ↩ 8. 8 Again, I admit to passing over some technical complications. ↩ 9. 9 Lepton number is defined as the number of electrons and similar heavier charged particles plus the number of neutrinos, minus the number of their antiparticles. (This conservation law requires the neutrino to be massless because neutrinos and antineutrinos, respectively, spin only counterclockwise and clockwise around their directions of motion. If neutrinos have any mass then they travel at less than the speed of light, so it is possible to reverse their apparent direction of motion by travelling faster past them, hence converting the spin from counterclockwise to clockwise, and neutrinos to antineutrinos, which changes the lepton number.) Baryon number is proportional to the number of quarks minus the number of antiquarks. ↩
{"url":"http://www.nybooks.com/articles/archives/2011/oct/27/symmetry-key-natures-secrets/?pagination=false","timestamp":"2014-04-19T05:26:22Z","content_type":null,"content_length":"75802","record_id":"<urn:uuid:31312e00-3d4d-4b46-b358-3325bdefddd9>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00198-ip-10-147-4-33.ec2.internal.warc.gz"}
Xenover Ver. 2.0 Article By Grey Rollins ┃Home│News│Equip. Reviews │Show Reports│Partner Mags│ ┃ ┃ Please Support Our Advertisers ┃ ┃ ┃ ┃ ┃ Fall 2009 Xenover Ver. 2.0 Article By Grey Rollins Difficulty Level^ There is a funny thing about active crossovers. Everyone knows that bi-amping your speakers can bring about marvelous improvements in sound quality, but very few people actually do it. Granted, bi-amping means you've got to buy more equipment, decide on crossover frequencies and slopes, and perhaps even modify your speakers if they're not already wired to accept bi-amplification, but given the benefits, you'd think more people would take the plunge. In an attempt to get at least some of you moving, I'll provide a crossover design that's simple, extremely high performance, and Let us begin by talking about JFETs for a moment. Your run of the mill JFET has as near infinite impedance as you could ask for, virtually no Gate current, and will self-bias with the addition of a simple resistor. Yes, tube folks have been enjoying these same qualities for years, but you have to admit that, lacking a "P-channel" version of the classic vacuum tube, it does tend to limit topologies a bit. In particular, direct-coupling a push-pull tube output stage (yes, it can be done) is annoyingly difficult, involving all sorts of power supply gymnastics and other annoyances. JFETs allow us to sidestep all that while retaining the benefits. Back about 40 years ago, John Curl developed a cute little complementary follower circuit utilizing one N-channel and one P-channel device. It has low output impedance and high input impedance. Being a follower, it has unity gain — that is a fancy way of saying that it doesn't amplify the signal. In fact, it'll do an outstanding job of replacing an OpAmps follower, and as a fringe benefit, we can dispense with about ninety of the transistors lurking under the hood of the average OpAmps. On to the matter of filter topologies. Many are the words that could be spilled on filters; few corners of the audiophile's realm are more filled with buzzwords than filters. Passive/active, first/ second/third/fourth order, Butterworth, Bessel, Linkwitz-Riley, and the list goes on.\ The simplest is the first order filter. Each ‘order' represents 6dB per octave of rolloff. A first order crossover has a 6dB/octave slope, meaning that the response drops six decibels for each octave away from the crossover point. It will be down 6dB one octave away (slightly less, actually — it takes a little while for the filter to reach its theoretical slope), then 12dB two octaves away, 18dB three octaves away... each octave representing a doubling or halving of frequency. A first order crossover has one crucial benefit and one deficit. On the plus side, it's the least damaging to the phase relationships between the music above the crossover point and those below. The problem is that 6dB/octave is, well, slow in the rolloff department. It doesn't provide a lot of protection for delicate tweeters. In other words, it's a tradeoff, like everything else. A second order crossover offers 12dB/octave of attenuation, which is an improvement in terms of protection, but at the price of somewhat more disruption of the phase characteristics. The stereo gods giveth and the stereo gods taketh away — ours is to take what they give us and be happy (or try to find a way to cheat, but that's another story). The first order crossover is easy to build. A high pass (which passes high frequencies, like you'd want for a tweeter) filter incorporates one capacitor in series with the signal and one resistor to ground. A low pass reverses the order of things—resistor in series, capacitor to ground. Done. Yes, it's really that simple. The second order crossover is a bit more involved. We can skip topologies like Chebyshev and elliptic and go directly to the three that are likely to be of greatest use in audio: Butterworth, Bessel, and Linkwitz-Riley. Just three? Whew! That simplifies things a lot. The Butterworth is the flattest, frequency-wise. The Bessel is the best in terms of phase relationships. The Linkwitz-Riley is an attempt to tame some of the more unruly phase aspects of higher order filters and is very popular among aficionados of fourth order filters. So how do we achieve these crossover slopes? The first order filter comes pre-packaged with its own topology. For higher order filters, there's only one real contender: Sallen-Key, named after the two fellows who developed it back in the 1950s. Even though we've simplified things by several orders of magnitude, there are still a number of permutations to run through, so with further ado let's start throwing parts together. Click here to download the schematics. Schematic 1 shows the basic JFET buffer building block. I've chosen to run two JFETs in parallel to increase the available output current and lower the output impedance. In most circumstances, you can run just one N-ch and one P-ch and be perfectly happy. The Level control varies the level of the driver. The DC offset pot (V2) sets the output to 0Vdc. We'll come back to that later. For the time being, it's enough to say that this circuit is a non-inverting unity gain buffer and we'll treat it as a unit. (I'd like to note in passing that you can use the circuit as shown as a drop-in replacement for a "passive" preamp. Ta-da! No more high impedance output problems.) Schematic 2 shows a 6dB/octave high pass filter, based on the buffer shown in schematic 1. The filter is comprised of C1 and R6. Calculate the values by choosing a crossover frequency and a value for C1, then calculate R6 with the following formula: R6 = 1/(2*П*F*C1) R6 is the chosen resistor value F is the desired crossover frequency If your calculator doesn't have a PI button, 3.14159 will give you more accuracy than you're likely to need. Let us assume you want a crossover point at 1 kHz. We'll also assume that you want to use a .01uF capacitor. That yields a value of 15915 Ohms for R6. Real world resistor values include 16k in the E-24 (aka 5%) series, and either 15.8k or 16.2k in the E-96 (aka 1%) series. Your choice. And, no, you haven't done anything wrong… electronic engineers face exactly the same sorts of approximations every day. Real parts don't always come in exactly the value that you calculated. Do not worry, the 16k value gives a crossover frequency of 995 Hz. The ‘missing' 5 Hz is not worth losing sleep over as it represents less than a 1% error. Your drivers are more likely to be the problem, here. Rare, indeed is the driver that is manufactured to less than 1% tolerance. The matching low pass filter is created using the alternate circuit. The value for C2 is the same as for C1 and the value for R5 is the same as R6. R7 ground references the input of the second buffer. It's only needed for the low pass, as R6 performs that function in the high pass version. If you feel like fine tuning things a bit, you can recalculate the value of R5 to reflect the fact that C2 sees it in parallel with R7, but as long as you're working with resistor values on the order of, say, 10k for R5, there will be very little difference. Should you only need first order crossover slopes, you're done. The only thing left to do is adjust the DC offset pots in the buffers and you're ready to listen. In an ideal world, the Idss of any JFET would match that of others of its kind. Unfortunately, that's not the case. Worse yet, N-ch and P-ch parts are often miles apart. The DC offset pots allow for imbalances between the N devices and the P devices. All you have to do is crank up the circuit the first time, set the output of the buffer to 0Vdc, relative to ground, and walk away. Thirty minutes later, come back and repeat the adjustment and you're done. If you need a steeper slope, perhaps a second order filter will do the job. This is where the Sallen-Key topology comes in. Schematic 3 shows the same buffer sections with slightly more complicated Beginning with the high pass, calculate the parts values for a Butterworth filter as follows: C = C1 = C3 R6 = 0.7071/(2*П*F*C) R8 = 1.414/(2*П*F*C) The low pass takes the resistor values as constant and calculates the capacitor values: R = R5 = R7 C2 = 1.414/(2*П*F*R) C4 = 0.7071/(2*П*F*R) The only thing that might appear a little odd is that R6 (if you're using the high pass) and C2 (if you're using the low pass) connect to the output. If you trace the signal through, you'll find that this amounts to a bit of positive feedback. Note that this is true regardless of whether you're using OpAmps, FETs, bipolars, or tubes. It's inherent in the Sallen-Key topology and serves to touch up the response around the crossover point a bit. Should you choose to use a Bessel function instead of Butterworth, the topology remains the same, but the formulas change slightly. High pass: C = C1 = C3 R6 = 1.1017/(2*П*F*C) R8 = 1.4688/(2*П*F*C) Low pass: R = R5 = R7 C2 = 0.9076/(2*П*F*R) C4 = 0.6809/(2*П*F*R) If you want to tri-amp, the tweeter will get a high pass filter and the woofer will get a low pass, just like a bi-amped system. The midrange will require a band pass, however. Don't worry, band pass filters are easy. Just put a low pass and a high pass together and you'll get a band pass. All the formulas are the same and the topologies need not change. You can even save one buffer stage by running the output of the first crossover directly into the second filter section. Third and fourth order crossovers get more complicated and deserve more space. If there's sufficient interest and if Steven is willing, I'll cover those at another time. Oh, and the name? Xeno, in Greek, means foreign or strange, and this is an unusual crossover circuit. Pronounce the "X" as "Z" the same way you would xylophone.
{"url":"http://www.enjoythemusic.com/diy/1109/xenover_2.htm","timestamp":"2014-04-20T03:11:29Z","content_type":null,"content_length":"36461","record_id":"<urn:uuid:db87c40e-bade-47c6-9c30-d3943d4fee6f>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00525-ip-10-147-4-33.ec2.internal.warc.gz"}
Algebra Tutors Boston, MA 02116 Expert Tutoring + Test Prep (SAT, GRE, GMAT) with MIT/Harvard Grad ...Most high schools these days don't offer a course solely in trigonometry; rather, trig is typically integrated into a pre-calculus, 2, or geometry course. I studied literature as an undergraduate at MIT and Harvard, and I'm currently an Assistant Editor... Offering 10+ subjects including algebra 1 and algebra 2
{"url":"http://www.wyzant.com/Hull_MA_algebra_tutors.aspx","timestamp":"2014-04-19T12:39:34Z","content_type":null,"content_length":"58960","record_id":"<urn:uuid:9c3f939b-2668-48ed-acc0-4e25d07e26fa>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00102-ip-10-147-4-33.ec2.internal.warc.gz"}
Sudbury Algebra 1 Tutor Find a Sudbury Algebra 1 Tutor ...Unfortunately the company went out of business and I took some time off. I love to tutor and help others and recently helped my own husband with his recent microbiology class. I should also mention that while in graduate school I taught a few classes in Microbiology to third year medical students. 22 Subjects: including algebra 1, reading, English, grammar I have 9 years of experience teaching all levels of high school mathematics in the public schools. I also have more than 6 years of experience tutoring mathematics to students ranging from 7 years old through adult learners. I have taught and/or tutored mathematics from basic addition and subtraction through calculus. 14 Subjects: including algebra 1, calculus, geometry, algebra 2 ...My concentration in Grad school on Mathematics Education prepared me to understand how Math is learned. I have 13 years of teaching Math in the Middle School. My teaching style is clear and 9 Subjects: including algebra 1, GRE, GED, elementary (k-6th) ...I am available to start right now as I am all settled and moved back home from college! :)I have just graduated on December 20th from Fitchburg State University with a dual license in Special Education (Moderate Disabilities) grades PreK-8 and Elementary grades 1-6. During my last semester of sc... 22 Subjects: including algebra 1, English, reading, geometry I am a retired university math lecturer looking for students, who need experienced tutor. Relying on more than 30 years experience in teaching and tutoring, I strongly believe that my profile is a very good fit for tutoring and teaching positions. I have significant experience of teaching and ment... 14 Subjects: including algebra 1, calculus, statistics, geometry Nearby Cities With algebra 1 Tutor Acton, MA algebra 1 Tutors Concord, MA algebra 1 Tutors Hudson, MA algebra 1 Tutors Lincoln Center, MA algebra 1 Tutors Lincoln, MA algebra 1 Tutors Maynard, MA algebra 1 Tutors Needham Jct, MA algebra 1 Tutors Newton Center algebra 1 Tutors Newton Centre, MA algebra 1 Tutors Wayland, MA algebra 1 Tutors Wellesley algebra 1 Tutors Wellesley Hills algebra 1 Tutors Westboro, MA algebra 1 Tutors Westborough algebra 1 Tutors Weston, MA algebra 1 Tutors
{"url":"http://www.purplemath.com/sudbury_ma_algebra_1_tutors.php","timestamp":"2014-04-21T04:43:07Z","content_type":null,"content_length":"23841","record_id":"<urn:uuid:03fc7bf6-a38d-494c-b20f-b0ac241b624f>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00628-ip-10-147-4-33.ec2.internal.warc.gz"}
can someone poke a hole in this? March 26th 2009, 01:20 PM can someone poke a hole in this? I take AP Calculus AB at my high school and I think I have found a flaw in my textbook. In a section about the length of curves, there is this problem asking the reader to find a curve based on a given integral and the part b of the problem asks how many curves are possible. The back of the book says that there is only one curve possible, but I think I have found another. The problem is Find a curve through the point (1,1) whose length integral is $L= \int_{1}^{4} \sqrt{1 + \frac {1}{4x}}dx$ so following the formula for the length of a curve, (dy/dx)^2=(1/4x) so (dy/dx) = + or - $\frac {1}{2\sqrt{x}}$. based on those dy/dx's, y is equal to either the square root of x plus a constant or the negative square root of x plus a constant, depending on whether you use the positive dy/dx or the negative one. After solving for constants I find two solutions that work: $y= \sqrt{x}<br />$ and $y= -\sqrt{x} + 2$ but my text book only thinks the first one is an acceptable answer apparently. can someone help me out? March 26th 2009, 06:00 PM I take AP Calculus AB at my high school and I think I have found a flaw in my textbook. In a section about the length of curves, there is this problem asking the reader to find a curve based on a given integral and the part b of the problem asks how many curves are possible. The back of the book says that there is only one curve possible, but I think I have found another. The problem is Find a curve through the point (1,1) whose length integral is $L= \int_{1}^{4} \sqrt{1 + \frac {1}{4x}}dx$ so following the formula for the length of a curve, (dy/dx)^2=(1/4x) so (dy/dx) = + or - $\frac {1}{2\sqrt{x}}$. based on those dy/dx's, y is equal to either the square root of x plus a constant or the negative square root of x plus a constant, depending on whether you use the positive dy/dx or the negative one. After solving for constants I find two solutions that work: $y= \sqrt{x}<br />$ and $y= -\sqrt{x} + 2$ but my text book only thinks the first one is an acceptable answer apparently. can someone help me out? textbook answers have been wrong on many occasions. March 26th 2009, 07:19 PM yeah i realize that i guess i was just wondering if i was missing something
{"url":"http://mathhelpforum.com/calculus/80802-can-someone-poke-hole-print.html","timestamp":"2014-04-19T15:44:39Z","content_type":null,"content_length":"7654","record_id":"<urn:uuid:b798fc98-77d6-4023-8587-c86269bd4cc0>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00608-ip-10-147-4-33.ec2.internal.warc.gz"}
Spacetime Deformation-Induced Inertia Effects Advances in Mathematical Physics Volume 2012 (2012), Article ID 692030, 41 pages Research Article Spacetime Deformation-Induced Inertia Effects Division of Theoretical Astrophysics, Byurakan Astrophysical Observatory, 378433 Byurakan, Armenia Received 4 March 2012; Revised 18 April 2012; Accepted 22 April 2012 Academic Editor: Giorgio Kaniadakis Copyright © 2012 Gagik Ter-Kazarian. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. We construct a toy model of spacetime deformation-induced inertia effects, in which we prescribe to each and every particle individually a new fundamental constituent of hypothetical 2D, so-called master space (MS), subject to certain rules. The MS, embedded in the background 4D-spacetime, is an indispensable companion to the particle of interest, without relation to every other particle. The MS is not measurable directly, but we argue that a deformation (distortion of local internal properties) of MS is the origin of inertia effects that can be observed by us. With this perspective in sight, we construct the alternative relativistic theory of inertia. We go beyond the hypothesis of locality with special emphasis on distortion of MS, which allows to improve essentially the standard metric and other relevant geometrical structures referred to a noninertial frame in Minkowski spacetime for an arbitrary velocities and characteristic acceleration lengths. Despite the totally different and independent physical sources of gravitation and inertia, this approach furnishes justification for the introduction of the weak principle of equivalence (WPE), that is, the universality of free fall. Consequently, we relate the inertia effects to the more general post-Riemannian geometry. 1. Introduction Governing the motions of planets, the fundamental phenomena of gravitation and inertia reside at the very beginning of the physics. More than four centuries passed since the famous far-reaching discovery of Galileo (in 1602–1604) that all bodies fall at the same rate [1], which led to an early empirical version of the suggestion that gravitation and inertia may somehow result from a single mechanism. Besides describing these early gravitational experiments, Newton in Principia Mathematica [2] has proposed a comprehensive approach to studying the relation between the gravitational and inertial masses of a body. In Newtonian mechanics, masses are simply placed in absolute space and time, which remain external to them. That is, the internal state of a Newtonian point particle, characterized by its inertial mass, has no immediate connection with the particles’ external state in absolute space and time, characterized by its position and velocity. Ever since, there is an ongoing quest to understand the reason for the universality of the gravitation and inertia, attributing to the WPE, which establishes the independence of free-fall trajectories of the internal composition and structure of bodies. In other words, WPE states that all bodies at the same spacetime point in a given gravitational field will undergo the same acceleration. However, the nature of the relationship of gravity and inertia continues to elude us and, beyond the WPE, there has been little progress in discovering their true relation. Such interesting aspects, which deserve further investigations, unfortunately, have attracted little attention in subsequent developments. Only hypothesis, which in some extent relates inertia and matter, is the Mach principle, see for example, [3 –15], but in the same time it is a subject to many uncertainties. The Mach’s ideas on inertial induction were proposed as the theoretical mechanism for generating the inertial forces felt during acceleration of a reference frame. The ensuing problem of the physical origin of inertial forces led Mach to hypothesize that inertial forces were to be of gravitational origin, occurring only during acceleration relative to the fixed stars. In this model the ratio of inertial to gravitational mass will depend on the average distribution of mass in the universe, in effect making gravitational constant a function of the mass distribution in the universe. The general relativity (GR), which preserves the idea of relativity of all kinds of motion, is built on the so-called strong principle (SPE) that the only influence of gravity is through the metric and can thus (apart from tidal effects) be locally, approximately transformed away by going to an appropriately accelerated reference frame. Despite the advocated success of GR, it is now generally acknowledged, however, that what may loosely be termed Mach principle is not properly incorporated into GR. In particular, the origin of inertia remains essentially the same as in Newtonian physics. Brans thorough analysis [4–6] has shown that no extra inertia is induced in a body as a result of the presence of other bodies. Various attempts at the resolution of difficulties that are encountered in linking Machs principle with Einsteins theory of gravitation have led to many interesting investigations. For example, by [ 14] is shown that the GR can be locally embedded in a Ricci-flat 5D manifold such that every solution of GR in 4D can be locally embedded in a Ricci-flat 5D manifold and that the resulting inertial mass of a test particle varies in space time. Anyhow, the difficulty is brought into sharper focus by considering the laws of inertia, including their quantitative aspects. That is, Mach principle and its modifications do not provide a quantitative means for computing the inertial forces. At present, the variety of consequences of the precision experiments from astrophysical observations makes it possible to probe this fundamental issue more deeply by imposing the constraints of various analyses. Currently, the observations performed in the Earth-Moon-Sun system [16–35], or at galactic and cosmological scales [36–41], probe more deeply both WPE and SPE. The intensive efforts have been made, for example, to clear up whether the rotation state would affect the trajectory of test particle. Shortly after the development of the work by [22], in which is reported that, in weighing gyros, it would be a violation of WPE, the authors of [23–26] performed careful weighing experiments on gyros with improved precision but found only null results which are in disagreement with the report of [22]. The interferometric free-fall experiments by [27, 28] again found null results in disagreement with [22]. For rotating bodies, the ultraprecise Gravity Probe B experiment [29–34], which measured the frame-dragging effect and geodetic precession on four quartz gyros, has the best accuracy. GP-B serves as a starting point for the measurement of the gyrogravitational factor of particles, whereas the gravitomagnetic field, which is locally equivalent to a Coriolis field and generated by the absolute rotation of a body, has been measured too. This, with its superb accuracy, verifies WPE for unpolarized bodies to an ultimate precision—a four-order improvement on the noninfluence of rotation on the trajectory, and ultraprecision on the rotational equivalence [35]. Moreover, the theoretical models may indicate cosmic polarization rotations which are being looked for and tested in the CMB experiments [40]. To look into the future, measurement of the gyrogravitational ratio of particle would be a further step, see [41] and references therein, towards probing the microscopic origin of gravity. Also, the inertia effects in fact are of vital interest for the phenomenological aspects of the problem of neutrino oscillations; see, for example, [42–56]. All these have evoked the study of the inertial effects in an accelerated and rotated frame. In doing this, it is a long-established practice in physics to use the hypothesis of locality for extension of the Lorentz invariance to accelerated observers in Minkowski spacetime [57, 58]. This in effect replaces the accelerated observer by a continuous infinity of hypothetical momentarily comoving inertial observers along its wordline. This assumption, as well as its restricted version, so-called clock hypothesis, which is a hypothesis of locality only concerned about the measurement of time, is reasonable only if the curvature of the wordline could be ignored. As long as all relevant length scales in feasible experiments are very small in relation to the huge acceleration lengths of the tiny accelerations we usually experience, the curvature of the wordline could be ignored and that the differences between observations by accelerated and comoving inertial observers will also be very small. In this line, in 1990, Hehl and Ni proposed a framework to study the relativistic inertial effects of a Dirac particle [59], in agreement with [60–62]. Ever since this question has become a major preoccupation of physicists; see, for example, [63–84]. Even this works out, still, it seems quite clear that such an approach is a work in progress, which reminds us of a puzzling underlying reality of inertia and that it will have to be extended to describe physics for arbitrary accelerated observers. Beyond the WPE, there is nothing convincing in the basic postulates of physics for the origin and nature of inertia to decide on the issue. Despite our best efforts, all attempts to obtain a true knowledge of the geometry related to the noninertial reference frames of an arbitrary observer seem doomed, unless we find a physical principle the inertia might refer to, and that a working alternative relativistic theory of inertia is formulated. Otherwise one wanders in a darkness. The problem of inertia stood open for nearly four centuries, and the physics of inertia is still an unknown exciting problem to be challenged and allows various attempts. In particular, the inertial forces are not of gravitational origin within GR as it was proposed by Einstein in 1918 [85], because there are many controversies to question the validity of such a description [57, 58, 60–91]. The experiments by [87–90], for example, tested the key question of anisotropy of inertia stemming from the idea that the matter in our galaxy is not distributed isotropically with respect to the earth, and hence if the inertia is due to gravitational interactions, then the inertial mass of a body will depend on the direction of its acceleration with respect to the direction towards the center of our galaxy. However, these experiments do not found such anisotropy of mass. The most sensitive test is obtained in [88, 89] from a nuclear magnetic resonance experiment with an nucleus of spin . The magnetic field was of about 4700 gauss. The south direction in the horizontal plane points within 22 degrees towards the center of our galaxy, and 12 hours later this same direction along the earth’s horizontal plane points 104 degrees away from the galactic center. If the nuclear structure of is treated as a single proton in a central nuclear potential, the variation of mass with direction, if it exists, was found to satisfy . This is by now very strong evidence that there is no anisotropy of mass which is due to the effects of mass in our galaxy. Another experimental test [91] using nuclear-spin-polarized ions also gives null result on spatial anisotropy and thus supporting local Lorentz invariance. This null result represents a decrease in the limits set by [88–90] on a spatial anisotropy by a factor of about 300. Finally, another theoretical objection is that if the curvature of Riemannian space is associated with gravitational interaction, then it would indicate a universal feature equally suitable for action on all the matter fields at once. The source of the curvature as conjectured in GR is the energy-momentum tensor of matter, which is rather applicable for gravitational fields but not for inertia, since the inertia is dependent solely on the state of motion of individual test particle or coordinate frame of interest. In case of accelerated motion, unlike gravitation, the curvature of spacetime might arise entirely due to the inertial properties of the Lorentz-rotated frame of interest, that is, a “fictitious gravitation” which can be globally removed by appropriate coordinate transformations [57]. This refers to the particle of interest itself, without relation to other systems or matter fields. On the other hand, a general way to deform the spacetime metric with constant curvature has been explicitly posed by [92–94]. The problem was initially solved only for 3D spaces, but consequently it was solved also for spacetimes of any dimension. It was proved that any semi-Riemannian metric can be obtained as a deformation of constant curvature matric, this deformation being parameterized by a 2-form. A novel definition of spacetime metric deformations, parameterized in terms of scalar field matrices, is proposed by [95]. In a recent paper [96], we construct the two-step spacetime deformation (TSSD) theory which generalizes and, in particular cases, fully recovers the results of the conventional theory of spacetime deformation [92–95]. All the fundamental gravitational structures in fact—the metric as much as the coframes and connections—acquire the TSSD-induced theoretical interpretation. The TSSD theory manifests its virtue in illustrating how the curvature and torsion, which are properties of a connection of geometry under consideration, come into being. Conceptually and techniquewise this method is versatile and powerful. For example, through a nontrivial choice of explicit form of a world-deformation tensor, which we have at our disposal, in general, we have a way to deform that the spacetime displayed different connections, which may reveal different post-Riemannian spacetime structures as a corollary, whereas motivated by physical considerations, we address the essential features of the theory of teleparallel gravity-TSSD- and construct a consistent TSSD- Einstein-Cartan (EC) theory, with a dynamical torsion. Moreover, as a preliminary step, in the present paper we show that by imposing different appropriate physical constraints upon the spacetime deformations, in this framework we may reproduce the term in the well-known Lagrangian of pseudoscalar-photon interaction theory, or terms in the Lagrangians of pseudoscalar theories [41, 97–101], or in modification of electrodynamics with an additional external constant vector coupling [102, 103], as well as in case of intergrand for topological invariant [ 104] or in case of pseudoscalar-gluon coupling occurred in QCD in an effort to solve the strong CP problem [105–107]. Next, our purpose is to carry out some details of this program to probe the origin and nature of the phenomenon of inertia. We ascribe the inertia effects to the geometry itself but as having a nature other than gravitation. In doing this, we note that aforementioned examples pose a problem for us that physical space has intrinsic geometrical and inertial properties beyond 4D spacetime derived from the matter contained therein. Therefore, we should conceive of two different spaces: one would be 4D background space-time, and another one should be 2D so-called master space (MS), which, embedded in the 4D background space, is an indispensable individual companion to the particle, without relation to the other matter. That is, the key to our construction procedure is an assignment in which we prescribe to each and every particle individually a new fundamental constituent of hypothetical MS, subject to certain rules. In the contrary to Mach principle, the particle has to live with MS companion as an intrinsic property devoid of any external influence. The geometry of MS is a new physical entity, with degrees of freedom and a dynamics of its own. This together with the idea that the inertia effects arise as a deformation (distortion of local internal properties) of MS is the highlights of the alternative relativistic theory of inertia (RTI), whereas we build up the distortion complex (DC), yielding a distortion of MS, and show how DC restores the world-deformation tensor, which still has to be put in [96] by hand. Within this scheme, the MS was presumably allowed to govern the motion of a particle of interest in the background space. In simple case, for example, of motion of test particle in the free 4D Minkowski space, the suggested heuristic inertia scenario is reduced to the following: unless a particle was acted upon by an unbalanced force, the MS is being flat. This causes a free particle in 4D Minkowski space to tend to stay in motion of uniform speed in a straight line or a particle at rest to tend to stay at rest. As we will see, an alteration of uniform motion of a test particle under the unbalanced net force has as an inevitable consequence of a distortion of MS. This becomes an immediate cause of arising both the universal absolute acceleration of test particle and associated inertial force in space. This, we might expect, holds on the basis of an intuition founded on past experience limited to low velocities, and these two features were implicit in the ideas of Galileo and Newton as to the nature of inertia. Thereby the major premise is that the centrifugal endeavor of particles to recede from the axis of rotation is directly proportional to the quantity of the absolute circular acceleration, which, for example, is exemplified by the concave water surface in Newton’s famous rotating bucket experiments. In other words, it takes force to disturb an inertia state, that is, to make an absolute acceleration. In this framework, the relative acceleration (in Newton’s terminology) (both magnitude and direction), to the contrary, cannot be the cause of a distortion of MS and, thus, it does not produce the inertia effects. The real inertia effects, therefore, can be an empirical indicator of absolute acceleration. The treatment of deformation/distortion of MS is instructive because it contains the essential quantitative elements for computing the relativistic inertial force acting on an arbitrary observer. On the face of it, the hypothesis of locality might be somewhat worrisome, since it presents strict restrictions, replacing the distorted MS by the flat MS. Therefore, it appears natural to go beyond the hypothesis of locality with spacial emphasis on distortion of MS. This, we might expect, will essentially improve the standard metric, and so forth, referred to a noninertial system of an arbitrary observer in Minkowski spacetime. Consequently, we relate the inertia effects to the more general post-Riemannian geometry. The crucial point is to observe that, in spite of totally different and independent physical sources of gravitation and inertia, the RTI furnishes justification for the introduction of the WPE [108, 109]. However, this investigation is incomplete unless it has conceptual problems for further motivation and justification of introducing the fundamental concept of MS. The way we assigned such a property to the MS is completely ad hoc and there are some obscure aspects of this hypothesis. All these details will be further motivated and justified in subsequent paper. The outline of the rest of the present paper is as follows. In Section 2 we briefly revisit the theory of TSSD and show how it can be useful for the theory of electromagnetism and charged particles. In Section 3, we explain our view of what is the MS and lay a foundation of the RLI. A general deformation/distortion of MS is described in Section 4. In Section 5, we construct the RTI in the background 4D Minkowski space. In Section 6, we go beyond the hypothesis of locality, whereas we compute the improved metric and other relevant geometrical structures in noninertial system of arbitrary accelerating and rotating observer in Minkowski spacetime. The case of semi-Riemann background space is dealt with in Section 7, whereby we give justification for the introduction of the WPE on the theoretical basis. In Section 8, we relate the RTI to more general post-Riemannian geometry. The concluding remarks are presented in Section 9. We will be brief and often ruthlessly suppress the indices without notice. Unless otherwise stated we take natural units, . 2. TSSD Revisited: Preliminaries For the benefit of the reader, this section contains some of the necessary preliminaries on generic of the key ideas behind the TSSD [96], which needs one to know in order to understand the rest of the paper. We adopt then its all ideas and conventions. The interested reader is invited to consult the original paper for further details. It is well known that the notions of space and connections should be separated; see, for example, [110–113]. The curvature and torsion are in fact properties of a connection, and many different connections are allowed to exist in the same spacetime. Therefore, when considering several connections with different curvatures and torsions, one takes spacetime simply as a manifold and connections as additional structures. From this view point in a recent paper [96] we have tackled the problem of spacetime deformation. In order to relate local Lorentz symmetry to curved spacetime, there is, however, a need to introduce the soldering tools, which are the linear frames and forms in tangent fiber-bundles to the external curved space, whose components are so-called tetrad (vierbein) fields. To start with, let us consider the semi-Riemann space, , which has at each point a tangent space, , spanned by the anholonomic orthonormal frame field, , as a shorthand for the collection of the 4-tuplet , where . All magnitudes related to the space, , will be denoted by an over “”. These then define a dual vector, , of differential forms, as a shorthand for the collection of the , whose values at every point form the dual basis, such that , where denotes the interior product, namely, this is a -bilinear map where denotes the -modulo of differential -forms on . In components . On the manifold, , the tautological tensor field, , of type can be defined which assigns to each tangent space the identity linear transformation. Thus for any point , and any vector , one has . In terms of the frame field, the give the expression for as , in the sense that both sides yield when applied to any tangent vector in the domain of definition of the frame field. One can also consider general transformations of the linear group, , taking any base into any other set of four linearly independent fields. The notation will be used hereinafter for general linear frames. The holonomic metric can be defined in the semi-Riemann space, , as with components in the dual holonomic base . The anholonomic orthonormal frame field, , relates to the tangent space metric, , by , which has the converse because . For reasons that will become clear in the sequel, next we write the norm, , of the infinitesimal displacement, , on the general smooth differential 4D-manifold, , in terms of the spacetime structures of , as where is the world deformation tensor, is the frame field, and is the coframe field defined on , such that , or in components, ; also the procedure can be inverted, . Hence the deformation tensor, , yields local tetrad deformations: The components of the general spin connection then transform inhomogeneously under a local tetrad deformations (2.3): This is still a passive transformation, but with inverted factor ordering. The matrices are called first deformation matrices, and the matrices ,second deformation matrices. The matrices , in general, give rise to right cosets of the Lorentz group; that is, they are the elements of the quotient group , because the Lorentz matrices, , leave the Minkowski metric invariant. A right-multiplication of by a Lorentz matrix gives an other deformation matrix. If we deform the tetrad according to (2.3), in general, we have two choices to recast metric as follows: either writing the deformation of the metric in the space of tetrads or deforming the tetrad field: In the first case, the contribution of the Christoffel symbols, constructed by the metric , reads with representing the curls of the base members in the semi-Riemann space: where is the anfolonomity 2-form. The deformed metric can be split as follows [96]: where , and . In the second case, we may write the commutation table for the anholonomic frame, , and define the anholonomy objects: The usual Levi-Civita connection corresponding to the metric (2.8) is related to the original connection by the following relation: provided that where the controvariant deformed metric, , is defined as the inverse of , such that . Hence, the connection deformation acts like a force that deviates the test particles from the geodesic motion in the space (for more details see [96]). Next, we deal with the spacetime deformation , to be consisted of two ingredient deformations (, ). Provided, we require that the first deformation matrix, , satisfies the following peculiar condition: where is the spin connection defined in the semi-Riemann space. By virtue of (2.13), the general deformed spin connection vanishes, and a general linear connection, , is related to the corresponding spin connection, through the inverse which is the Weitzenböck connection revealing the Weitzenböck spacetime of the teleparallel gravity. Thus, can be referred to as the Weitzenböck deformation matrix. All magnitudes related to the teleparallel gravity will be denoted by an over “”. The components of the general spin connection then transform inhomogeneously under a local tetrad deformations: such that is referred to as the deformation-related frame connection, which represents the deformed properties of the frame only. Then, it follows that the affine connection transforms inhomogeneously through where we have , also the procedure can be inverted, , and is the spin connection. For our convenience, hereinafter the notation will be used for general linear frames: where , or in components, ; also the procedure can be inverted, , provided that Hence, the affine connection (2.17) can be rewritten in the abbreviated form: Since the first deformation matrices and are arbitrary functions, the inhomogeneously transformed general spin connections and , as well as the affine connection (2.21), are independent of tetrad fields and their derivatives. In what follows, therefore, we will separate the notions of space and connections—the metric-affine formulation of gravity. A metric-affine space is defined to have a metric and a linear connection that need not be dependent on each other. The lifting of the constraints of metric-compatibility and symmetry yields the new geometrical property of the spacetime, which are the nonmetricity 1-form and the affine torsion 2-form representing a translational misfit (for a comprehensive discussion see [114–118]. These, together with the curvature 2-form , symbolically can be presented as [119, 120] where for a tensor-valued -form density of representation type , the -covariant exterior derivative reads and is the general nonmetricity connection. This notation will be used instead of , such that . In what follows, however, we may still maintain the former notation to be referred to as corresponding connection constrained by the metricity condition. We may introduce the affine contortion 1-form given in terms of the torsion 2-form . In tensor components we have , where the torsion tensor given with respect to a holonomic frame, , is a third-rank tensor, antisymmetric in the first two indices, with 24 independent components. The TSSD- theory (see [96]) considers curvature and torsion as representing independent degrees of freedom. The RC manifold, , is a particular case of general metric-affine manifold , restricted by the metricity condition, when a nonsymmetric linear connection is said to be metric compatible. Taking the antisymmetrized derivative of the metric condition gives an identity between the curvature of the spin-connection and the curvature of the Christoffel connection: where Hence, the relations between the scalar curvatures for an manifold read This means that the Lorentz and diffeomorphism invariant scalar curvature, , becomes either a function of only or a function of only. Certainly, it can be seen by noting that the Lorentz gauge transformations can be used to fix the six antisymmetric components of to vanish. Then in both cases diffeomorphism invariance fixes four more components out of the six , with the four components being non dynamical, obviously, leaving only two dynamical degrees of freedom. This shows that the equivalence of the vierbein and metric formulations holds. According to (2.25), the relations between the Ricci scalars read To recover the TSSD- theory, one can choose the EC Lagrangian, , as where is the cosmological constant, is the curvature tensor, is the Lagrange multiplier, and . The basis is consisting in the Hodge dual of exterior products of tetrads by means of the Levi-Civita object: , which yields and , where we used the abbreviated notations for the wedge product monomials, , and denotes the Hodge dual. The variation of the total action given by the sum of the gravitational field action, , with the Lagrangian (2.27) and the macroscopic matter sources, , with respect to the , 1-form and , which is a -form representing a matter field (fundamentally a representation of the or of some of its subgroups), gives where is the Planck length,, and is the dual 3-form corresponding to the canonical spin tensor, which is identical with the dynamical spin tensor , namely, provided that, and that To obtain some feeling for the tensor language in a holonomic frame then we may recast the first two field equations in (2.29) in the tensorial form: where is Einstein’s tensor, and the modified torsion reads Thus, the equations of the standard EC theory can be recovered for . However, these equations can be equivalently replaced by the set of modified EC equations for : We may impose different physical constraints upon the spacetime deformation , which will be useful for the theory of electromagnetism and charged particles: with as a scalar or pseudoscalar function of relevant variables. Here . Then we obtain which recovers the term in the Lagrangian of pseudoscalar-photon interaction theory [41, 97–101], such that the nonmetric part of the Lagrangian can be put in the well-known form of the framework: where has the usual meaning for electromagnetism. This is equivalent, up to integration by parts in the action integral (modulo a divergence), to the Lagrangian According to (2.39), the gravitational constitutive tensor [40] of the gravitational fields (e.g., metric , (pseudo)scalar field etc.) reads The special case is considered by [102, 103], for modification of electrodynamics with an additional external constant vector coupling. Imposing other appropriate constraints upon the spacetime deformation , in the framework of TSSD- theory we may reproduce the various terms in the Lagrangians of pseudoscalar theories, for example, as intergrand for topological invariant [104], or pseudoscalar-gluon coupling occurred in QCD in an effort to solve the strong CP problem [105–107]. 3. The Hypothetical Flat MS companion: A Toy Model As a preliminary step we now conceive two different spaces: one would be 4D background Minkowski space, , and another one should be MS embedded in the , which is an indispensable individual companion to the particle, without relation to the other matter. This theory is mathematically somewhat similar to the more recent membrane theory. The flat MS in suggested model is assumed to be 2D Minkowski space, : The ingredient 1D-space is spanned by the coordinates , where we use the naked capital Latin letters to denote the world indices related to . The metric in is where is the infinitesimal displacement. The basis at the point of interest in consists of two real null vectors: The norm, , given in this basis reads , where is the tautological tensor field of type (1,1), is a shorthand for the collection of the 2-tuplet , and . We may equivalently use a temporal and a spatial variables , such that The norm, , now can be rewritten in terms of displacement, , as where and are, respectively, the temporal and spatial basis vectors: The MS companion () of this particle is assumed to be smoothly (injective and continuous) embedded in the . Suppose that the position of the particle in the background space is specified by the coordinates with respect to the axes of the inertial system . Then, a smooth map is defined to be an immersion—an embedding is a function that is a homeomorphism onto its image: In fact, we assume that the particle has to be moving simultaneously in the parallel individual space and the ordinary 4D background space (either Minkowskian or Riemannian). Let the nonaccelerated observer uses the inertial coordinate frame for the position of a free test particle in the flat . We may choose the system in such a way as the time axis lies along the time axis of a comoving inertial frame , such that the time coordinates in the two systems are taken the same, . For the case at hand, Hence, given the inertial frames , , , in the , in this manner we may define the corresponding inertial frames , , , in the . Continuing on our quest, we next define the concepts of absolute and relative states of the ingredient spaces . The measure for these states is the very magnitude of the velocity components of the Definition 3.1. The ingredient spaceof the individual MS companion of the particle is said to be in Therefore, the MS can be realized either in the semiabsolute state (rel, abs), or (abs, rel), or in the total relative state (rel, rel). It is remarkable that the total-absolute state, (abs, abs), which is equivalent to the unobservable Newtonian absolute two-dimensional spacetime, cannot be realized because of the relation . An existence of the absolute state of the is an immediate cause of the light traveling in empty space along the -axis with a maximal velocity (we reinstate the factor ()) in the -direction corresponding to the state (rel, abs), and in the -direction corresponding to the state (abs, rel). The absolute state of manifests its absolute character in the important for SR fact that the resulting velocity of light in the empty space is the same in all inertial frames , , ,; that is, in empty space light propagates independently of the state of motion of the source—if , then . Since the is the very key measure of a deviation from the absolute state, we might expect that this has a substantial effect in an alteration of the particle motion under the unbalanced force. This observation allows us to lay forth the foundation of the fundamental RLI as follows. Conjecture 1 (RIL conjecture). The nonzero local rate of instantaneously change of a constant velocity (both magnitude and direction) of a massive test particle under the unbalanced net force is the immediate cause of a deformation (distortion of the local internal properties) of MS: . We can conclude therefrom that, unless MS is flat, a free particle in 4D background space in motion of uniform speed in a straight line tends to stay in this motion and a particle at rest tends to stay at rest. In this way, the MS companion, therefore, abundantly serves to account for the state of motion of the particle in the 4D background space. The MS companion is not measurable directly, but in going into practical details, in Section 4 we will determine the function and show that a deformation (distortion of local internal properties) of MS is the origin of inertia effects that can be observed by us. Before tempting to build realistic model of accelerated motion and inertial effects, for the benefit of the reader, we briefly turn back to physical discussion of why the MS is two dimensional and not higher. We have first to recall the salient features of MS which admittedly possesses some rather unusual properties; namely, the basis at the point of interest in MS, embedded in the 4D spacetime, would be consisted of the real null vectors, which just allows only two-dimensional constructions (3.3). Next, note that the immediate cause of inertia effects is the nonlinear process of deformation (distortion of local internal properties) of MS, which yields the resulting linear relation (see (2.19)–(5.35)) with respect to the components of inertial force in terms of the relativistic force acting on a purely classical particle in . This ultimately requires that MS should only be two dimensional, because to resolve the afore-mentioned relationship of nonlinear and linear processes we may choose the system in only allowed way as the time axis lies along the time axis of a comoving inertial frame , in order that the time coordinates in the two systems are taken the same, and that another axis lies along the net 3-acceleration () (5.26). 4. The General Spacetime Deformation/Distortion Complex For the self-contained arguments, we now extend just necessary geometrical ideas of the spacetime deformation framework described in Section 2, without going into the subtleties, as applied to the 2D deformation . To start with, let be 2D semi-Riemann space, which has at each point a tangent space, , spanned by the anholonomic orthonormal frame field, , as a shorthand for the collection of the 2-tuplet , where , where the holonomic frame is given as . Here, we use the first half of Latin alphabet to denote the anholonomic indices to related the tangent space, and the capital Latin letters with an over , to denote the holonomic world indices related to either the space or . All magnitudes referred to the space, , will be denoted by an over . These then define a dual vector, , of differential forms, , as a shorthand for the collection of the , whose values at every point form the dual basis, such that . In components . On the manifold, , the tautological tensor field, , of type (1,1) can be defined which assigns to each tangent space the identity linear transformation. Thus for any point , and any vector , one has . In terms of the frame field, the give the expression for as , in the sense that both sides yield when applied to any tangent vector in the domain of definition of the frame field. We may consider general transformations of the linear group, , taking any base into any other set of four linearly independent fields. The notation will be used hereinafter for general linear frames. The holonomic metric can be defined in the semi-Riemann space, , as with components in the dual holonomic base . The anholonomic orthonormal frame field, , relates to the tangent space metric, , by , which has the converse because of the relation . With this provision, we build up a general distortion-complex, yielding a distortion of the flat space , and show how it recovers the world-deformation tensor , which still has to be put in [96] by hand. The DC members are the invertible distortion matrix , the tensor , and the flat-deformation tensor . Symbolically, The principle foundation of a distortion of local internal properties of MS comprises then two steps. The first is to assume that the linear frame , at given point (), undergoes the distortion transformations, conducted by and , respectively, relating to and , recast in the form Then, the norm of the infinitesimal displacement on the general smooth differential 2D-manifold can be written in terms of the spacetime structures of and : where is the frame field and is the coframe field defined on , such that . The deformation tensors and imply provided that such that Hence the anholonomic deformation tensor, , yields local tetrad deformations: The matrices are referred to as the first deformation matrices, and the matrices , second deformation matrices. The matrices , in general, give rise to right cosets of the Lorentz group; that is, they are the elements of the quotient group , because the Lorentz matrices, , leave the Minkowski metric invariant. A right multiplication of by a Lorentz matrix gives another deformation matrix. So, all the fundamental geometrical structures on deformed/distorted MS in fact—the metric as much as the coframes and connections—acquire a deformation/distortion-induced theoretical interpretation. If we deform the tetrad according to (4.8), in general, we have two choices to recast metric as follows: either writing the deformation of the metric in the space of tetrads or deforming the tetrad field: In the first case, the contribution of the Christoffel symbols, constructed by the metric , reads The deformed metric can be split as follows [96]: where , and In the second case, we may write the commutation table for the anholonomic frame, , and define the anholonomy objects: The usual Levi-Civita connection corresponding to the metric (4.11) is related to the original connection by the following relation: provided that where the controvariant deformed metric, , is defined as the inverse of , such that . That is, the connection deformation acts like a force that deviates the test particles from the geodesic motion in the space, . Taking into account (4.4), the metric (4.9) can be alternatively written in a general form of the spacetime or frame objects: A significantly more rigorous formulation of the spacetime deformation technique with different applications as we have presented it may be found in [96]. 5. Model Building in the 4D Background Minkowski Spacetime In this section we construct the RTI in particular case when the relativistic test particle accelerated in the Minkowski 4D background flat space, , under an unbalanced net force other than gravitational. Here and henceforth we simplify DC for our use by imposing the constraints and, therefore, The (4.5), by virtue of (4.4) and (5.1), gives where the deformation tensor, , yields the partial holonomic frame transformations: or, respectively, the yields the partial local tetrad deformations: Hence, (4.4) defines a diffeomorphism : where . The conditions of integrability, , and nondegeneracy, , immediately define a general form of the flat-deformation tensor , where is an arbitrary holonomic function. To make the remainder of our discussion a bit more concrete, it proves necessary to provide, further, a constitutive ansatz of simple, yet tentative, linear distortion transformations, which, according to RLI conjecture, can be written in terms of local rate of instantaneously change of the measure of massive test particle under the unbalanced net force : Clearly, these transformations imply a violation of the relation (3.3) for the null vectors . Now we can use (4.4) to observe that for dual vectors of differential forms and we may obtain We parameterize the tensor in terms of the parameters and as where and . Then, the relation (5.8) can be recast in an alternative form: Suppose that a second observer, who makes measurements using a frame of reference which is held stationary in deformed/distorted space , uses for the test particle the corresponding spacetime coordinates . The (4.4) can be rewritten in terms of spacetime variables as where and are, respectively, the temporal and spatial basis vectors: The transformation equation for the coordinates, according to (5.10), becomes which gives the general transformation equations for spatial and temporal coordinates as follows : Hence, the general metric (4.17) in reads provided that The difference of the vector, (3.5), and the vector, (5.11), can be interpreted by the second observer as being due to the deformation/distortion of flat space . However, this difference with equal justice can be interpreted by him as a definite criterion for the absolute character of his own state of acceleration in , rather than to any absolute quality of a deformation/distortion of . To prove this assertion, note that the transformation equations (5.14) give a reasonable change at low velocities , as thereby Then (5.17) becomes conventional transformation equations to accelerated axes if we assume that and , where is a magnitude of proper net acceleration. In high-velocity limit , ), we have and so (5.14) and (5.15), respectively, give To this end, the inertial effects become zero. Let be a local net 3-acceleration of an arbitrary observer with proper linear 3-acceleration and proper 3-angular velocity measured in the rest frame: where is the 4-velocity. A magnitude of can be computed as the simple invariant of the absolute value as measured in rest frame: Following [57, 58], let us define an orthonormal frame , carried by an accelerated observer, who moves with proper linear 3-acceleration and and proper 3-rotation . Particular frame components are denoted by hats, . Let the zeroth leg of the frame be 4-velocity of the observer that is tangent to the worldline at a given event and we parameterize the remaining spatial triad frame vectors , orthogonal to , also by . The spatial triad rotates with proper 3-rotation . The 4-velocity vector naturally undergoes Fermi-Walker transport along the curve , which guarantees that will always be tangent to determined by : where the antisymmetric rotation tensor splits into a Fermi-Walker transport part and a spatial rotation part : The 4-vector of rotation is orthogonal to 4-velocity , therefore, in the rest frame it becomes , and is the Levi-Civita tensor with . Then (5.17) immediately indicates that we may introduce the very concept of the local absolute acceleration (in Newton’s terminology) brought about via the Fermi-Walker transported frames as where we choose the system in such a way as the axis lies along the net 3-acceleration (). Hereinafter, we may simplify the flat-deformation tensor by setting , such that (5.9) becomes and the general metric (4.17) in reads . Hence (5.26) gives Combining (5.14) and (5.26), we obtain the key relation between a so-called inertial acceleration, arisen due to the curvature of MS, and a local absolute acceleration as follows: where are the Christoffel symbols constructed by the metric (5.16). Then (5.30) provides a quantitative means for the inertial force : In case of absence of rotation, we may write the local absolute acceleration ( 5.26) in terms of the relativistic force acting on a particle with coordinates : Here is the force defined in the rest frame of the test particle, and is the Lorentz transformation matrix : where . So and hence (5.31), (5.26), and (5.34) give At low velocities and tiny accelerations we usually experience, one has ; therefore (5.35) reduces to the conventional nonrelativistic law of inertia: At high velocities (), if , the inertial force (5.35) becomes and, in agreement with (5.21), it vanishes in the limit of the photon . Thus, it takes force to disturb an inertia state, that is, to make the absolute acceleration (). The absolute acceleration is due to the real deformation/distortion of the space . The relative () acceleration (in Newton’s terminology) (both magnitude and direction), to the contrary, has nothing to do with the deformation/distortion of the space and, thus, it cannot produce inertia effects. 6. Beyond the Hypothesis of Locality The standard geometrical structures, referred to a noninertial coordinate frame of accelerating and rotating observer in Minkowski spacetime, were computed on the base of the hypothesis of locality [ 59–66], which in effect replaces an accelerated observer at each instant with a momentarily comoving inertial observer along its wordline. This assumption represents strict restrictions, because, in other words, it approximately replaces a noninertial frame of reference , which is held stationary in the deformed/distorted space , with a continuous infinity set of the inertial frames given in the flat . In this situation the use of the hypothesis of locality is physically unjustifiable. Therefore, it is worthwhile to go beyond the hypothesis of locality with special emphasis on distortion of MS, which, we might expect, will essentially improve the standard results. The notation will be slightly different from the previous section. We denote the orthonormal frame (5.24), carried by an accelerated observer, with the over “breve” such that with , and . Here, following [58, 64], we introduced a geodesic coordinate system —coordinates relative to the accelerated observer (laboratory coordinates)—in the neighborhood of the accelerated path. The coframe members are the objects of dual counterpart: . We choose the zeroth leg of the frame, , as before, to be the unit vector that is tangent to the worldline at a given event , where is a proper time measured along the accelerated path by the standard (static inertial) observers in the underlying global inertial frame. The condition of orthonormality for the frame field reads . The antisymmetric acceleration tensor [64–68, 121–125] is given by provided that , where is the metric compatible, torsion-free Levi-Civita connection. According to (5.24) and (5.25), and in analogy with the Faraday tensor, one can identify , with as the translational acceleration and as the frequency of rotation of the local spatial frame with respect to a nonrotating (Fermi-Walker transported) frame . The invariants constructed out of establish the acceleration scales and lengths. The hypothesis of locality holds for huge proper acceleration lengths and , where the scalar invariants are given by and () [64–66, 121–125]. Suppose that the displacement vector represents the position of the accelerated observer. According to the hypothesis of locality, at any time along the accelerated worldline the hypersurface orthogonal to the worldline is Euclidean space and we usually describe some event on this hypersurface ( local coordinate system) at to be at , where and are connected via and Let be coordinates relative to the accelerated observer in the neighborhood of the accelerated path in MS, with spacetime components implying As long as a locality assumption holds, we may describe, with equal justice, the event at (6.3) to be at point , such that and , in full generality, are connected via and where the displacement vector from the origin reads , and the components can be written in terms of . Actually, from (6.3) and (6.5) we may obtain
{"url":"http://www.hindawi.com/journals/amp/2012/692030/","timestamp":"2014-04-20T15:30:36Z","content_type":null,"content_length":"1040979","record_id":"<urn:uuid:1df46539-9130-4d0e-92ee-c24f50dd82bf>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00647-ip-10-147-4-33.ec2.internal.warc.gz"}
PBL Birdside View I know I am supposed to share my first few years of PBL. I have it all in my head. I just got distracted with school. Things are starting to be "normal" for me so I am going to try and write weekly. Key word is try :). I want to share something that happened in class this week. It is not PBL but a key aspect of helping all kids learn math deeply which is the heart of PBL. In my Algebra II classes, we started the systems of equations unit on Monday. I always start a unit with a small pre-assessment. It is never more than four multiple-choice or written response questions and it is basically skills students need to be successful during the unit. See this link if you want to know the pre-assessment questions. Well, the pre-assessment let me know that most kids didn't have the necessary background and I would have to do some reteaching as we learn how to solve a system of equations. We started with the simplest way to solve--graphing. We reviewed how to convert an equation into slope-intercept form so that they could graph the equations. The example I used was 2x + 4y = 36 and 10y - 5x = 0. I knew from the pre-test that some students knew how to convert from standard form to slope-intercept form so I just asked for volunteers on how to convert the two equations. This gave me the opportunity to see how students approached converting the equation. For the first equation, some students wanted to divide by four for each term. Although this is a perfectly acceptable first step, I asked for another way just to make sure I didn't throw off students by putting fractions in early. In each class, various students helped me follow the typical process of subtracting/adding the term with the x variable, then dividing by the number with the y variable at the end. With each step, I reminded students of the desired result (isolating the variable of y) and why we are doing each step. We then completed the last step of placing the equations in the graphing calculator to determine the solution. I send them off to work time like normal and this is where the class becomes interesting. I completely expected to help students with using the graphing calculator. We have only had them a week. However, I was shocked by the number of students who were still lost with converting the equation. At the end of the day, I discovered the root of the problem was students lacked the fundamental understanding of equations. They had practiced such a specific algorithm they didn't know what to do if it came in a different form. For instance, the first problem in the book was 2y - 3x = 7 and 5x = 4y - 12. A few didn't know where to begin while others were doing the typical process. As I helped student after student, everyone wanted to subtract 5x as for he second equation. Just like other teachers, I thought about what to do the next day. The plan calls for me to move on to substitution. They have a test on Thursday that includes them solving by graphing and substitution. The students don't have a conceptual or even a solid procedural fluency of equations. What would you do? How do you help students who have misinformation while staying on track with learning new information? Yesterday, I was tweeting with Harry (@hblyleven) and Al (@alfredbie) about challenges with implementing PBL with math educators. This conversation led to me agreeing to blog about my own PBL journey. I think I will not only blog about my past but I will try to share my journey this year with implementing PBL and the new Common Core State Standards. My journey began in 2005 when I switched from corporate to education. I was hired at a local charter school. During our orientation, the school CEO said they are a project-based learning school. She showed a video of a project where a student was studying marine life. I was intrigued while watching the video. I thought, "schools really have changed. I like this new way of teaching." After the video ended, the CEO said a few more housekeeping measures and that was it. My only training at that point was a video. Since I was wanting to implement, I asked veteran teachers once school started. I soon discovered that many of the teachers did not teach through projects but in the traditional way. Those who did complete projects where more about cutting and pasting then deeper learning like I saw in the video. Despite this hit of reality, I was not swayed from my interest in how to do project-based learning. I put my journalism background to use and started researching. It was slightly difficult to find material in 2005 compared to today. However, I quickly found out that PBL had a few names. The most common name was project-based learning however there was also problem-based learning, inquiry-based learning, authentic learning, etc. After reading a few articles, I discovered they were basically the same thing just with different names and styles of delivery. So I moved on to trying to find books that could help me implement the framework in my classroom. The first two books I found were How to Use Problem-Based Learning in the Classroom and a book where students build a city of the future (sorry, I don't remember the name). I remember how excited I was to have these books. The Problem-Based Learning book was great but lacked a step by step process that I was needing. The second book (authored by a teacher who completed it with his students) was a complete step by step. I was so happy to have directions given this was my first year teaching. I was still scared to implement so I tried it with my honors Geometry only. Well, I wish I can say it was the most beautiful experience ever. Although it was not terrible, it did not have the outcome like the video. The students excitement about the project made me want to continue the journey. Here are my lessons learned from that first year: • A step-by-step process is more harmful than helpful. I realized that this is a learning experience and following directions is not a learning experience. If I did have steps, I needed to look at those steps as guidance only. What worked for that teacher and his students may not be exactly how I need to do it for my style and my students. • Work completed at home has varied results. I announced to the students that they were going to build a city of the future. I gave them a sheet that included the parameters and told them it would be due in a couple of weeks. For the next two weeks, class was just like normal with them working at home on the project. I didn't even teach anything that was related to the project. According to the book, this was a way to "check" to see if they got the information previously taught. When two weeks were up, I had an array of "cities". All of them were elaborate but definitely showed signs of only them working on it. I wish I could say this moment made me realize that I should have them work in class. It didn't. I just thought this is the result of projects and some will be good and some not so good. However, it did make me realize the next point. • Rubrics are a way of helping students know quality and expectations. I realized that I only had parameters not details of expectations. This made grading really hard. Out of guilt, I just gave everyone an "A" for effort. I realized that I probably should use a rubric in the future. • Limited resources does not mean limited teaching/PBL. My first year of teaching was really interesting. In addition to finding resources to learn PBL, I was trying to figure out how to teach with limited resources. I had one class that did not have textbooks, a 100 copy a month limit, class sets of copies had to be submitted for approval (3 day minimum wait), a whiteboard that didn't wipe off, a teacher desktop computer and an overhead projector (yes, overhead not projector). Although I had almost nothing, I realized that I could teach with little or no resources. I found creative ways to help students learn. This experience made me want to learn more about inquiry and assessment. I wanted to learn if there is a way to incorporate these strategies into PBL. My first year was a huge learning experience. It gave me a lot of goals to make for my second year. So, please share your first year lessons. AP Calculus AB is typically the highest course for many high school students. Coming up with a possible Calculus project has been challenging. Not because the course is hard. It has been challenging because my desire to keep the project authentic and connected to the major concepts of the subject. Every time I create a project, I want it to be real-world work. I also want to have students think deeply about the "power standards" of the subject. Identifying the power standards for Calculus was simple. I agree with Steve Strogatz's post in The NY Times, "Calculus is the mathematics of change. The subject is gargantuan - and so are its textbooks. But within that bulk you'll find two ideas shining through. Those two ideas are the 'derivative' and the 'integral'." Although knowing that a project on derivates or integrals was easy to identify, the real world application of it has been much harder. Every time I run into a person whose profession is mathematics based such as an engineer, I would ask them how do they use calculus in their work. They would often laugh and say they don't. Despite these roadblocks, I am determined to find an authentic use of calculus in the workplace. After searching, I think I have some ideas for a derivative project. The main criteria I am using in thinking of ideas is that there are multiple solutions. Please let me know what you think of them and please add other possible projects in the comments below. • Drug Flow: Healthcare is huge topic today. It is a constant discussion at dinner tables. One aspect of healthcare is drugs. Many people do not understand the affects of these on the body. The Center for Policy Analysis wants to educate the public on how various drugs affects the body so that people will be more empowered about their health care. They want to explain their website to include this information. It must include how to calculate the effects in layperson's terms (idea is adapted from extended applications in Calculs with Applications by Lial, Greenwell and • Classic Box: Kellogg is in the process of repackaging their cereal boxes to respond to market demands. Increase in gas over the last decade has caused the company to increase the cost of cereal. However, recent changes in tax code is lowering Americans ability to pay increases in cereal. The company wants to redesign the boxes to achieve maximum volume but cost less to produce. The company's director of product development is requesting a proposal to meet the company's needs. It must include calculations of the cost to produce and the volumes of the new products (idea is adapted from the classic box problem in most calculus textbooks. There is also an activity by TI-Instruments with their TI-Nspire calculator). • Start-up Company: Terri Miller has left her position as a manger of a local company to start her own business. She makes lady handbags. The demand has really taken off and she is in full production. She needs to get more capital to expand but that requires creating a proposal that details various financial aspects of the business. Investors are wanting a lot of information including her marginal cost, revenue and profit for the organization. She is small and don't have the staffing to help with the creation of this document. She is seeking assistance to help grow her business (idea is adapted from Calculs with Applications by Lial, Greenwell and Ritchey). I have taken a semester off from school to work on creating projects for all grade levels. I ask that you join me in helping shape this work. Your help is really simple, you can do any of the • Share likes • Give a suggestion of how to improve • Try it out Any and all comments are welcome. This is the biggest endeavor of my life. I hope you will join me in the adventure. The class began with a review of the format of my class, rules and expectations. The day I started the project was the second day of getting all new students. Originally, I was teaching Geometry and Algebra II. Now, I am only teaching Algebra II. I had to catch up the new students but also review some information regarding systems with the old group. The class ended with the introduction of their first project. To understand how I introduced the project, you must understand the set up of my classroom. Some of my readers know about my classroom. However, for those of you who are not familiar with my classroom, it is a fictional corporation called Logic Inc. Students act as consultants and managers that solve real world problems of individuals or businesses. The mission of the company is to help people see math in all areas. In addition to helping clients, the students promote the company through a "company website" where they provide tutorials and information on math concepts. Since my class is a company, the students get all their information from the company's intranet site (norfarlogic.pbworks.com). It is here the class gets information on the company's first "fictional" client. Students understand the clients are fictional right now so that they can gain experience solving problems. Students read the clients situation and their driving question. Students then created a concept map in mywebspiriation of what they know and didn't know about the clients problem. To see the actual results of day one, check out the information below: • This is the powerpoint presented in class- • This is a web page link of the client problem information-http://norfarlogic.pbworks.com/September+18-Smith+Project • This is a view of the concept map created and how it evolved through the project-http://mywebspiration.com/view/182956a2ba2d I am discovering quickly that once you start a project it is hard to sometimes stop and write about the flow of it. It is also difficult when all of your class dynamics change before you introduce the first project. One week ago today, the students were introduced to the Smith Project. Let me explain the situation regarding the Smith Project and the make-believe family in it. Kate Smith is a single parent who lives in the area. Kate lost her job at the GM plant. Since losing her job, Kate has had three other jobs. She is now in a stable job at a local manufacturing business. However, Kate is not making as much money. While working at GM, Kate used to take her and her three daughters to the state fair. Since losing her job, they have not been to the state fair. She really wants to take the girls this year. The girls are teenagers and Kate wants to do as much with them as possible before they graduate. She heard we may be able to help her go to the fair on her limited budget. She has $150 that she can spend. The driving question(focus of the project): How can Smith make the most of her $150 budget for the state fair? This is a very small almost seemingly easy answer at first. However, once students investigate the cost associated with the fair, it has some complexities to it. The goal of the project is to teach systems of equations and inequalities while also helping to get the students prepared to work in teams, think critically and talk mathematically. I have several things I did to complete the project. The project ends Friday. I will discuss what occurred each day over the next few days. I can't wait to talk to you. Stay tuned for details and links to student work. This is the second portion to my video posting. Below is a video of me practicing presenting a PBL to a group of educators. The educators are pretending to be like my high school students. Finally, I was able to find the time to make the videos from my filming at the IMSA institute. It is two different videos. Below is my information on getting funding for your professional development and classroom. There is absolutely to many things on my plate. I am a member of the alumni association of Northwest Classen High School. As a member, I agreed to produce the newsletter. I am five months behind producing the newsletter. I was supposed to have it out in April. I got completely caught up with understanding student council responsibilities and then a summer full of activities. As I sit trying to pull the articles together, my alarm sounds and I realize it is time to BLOG! The funny part about this blog is along with completing the newsletter, I am trying to formulate all the projects that my students will complete this semester. I hope to have at least some ideas jotted down that match the standards I have to cover. I can't wait to get started but I have to get this newsletter done before our meeting tomorrow at 4:30 p.m. Please make sure you check back into my blog to give me your advice on the projects I come up with tonight. Yesterday, I discovered why it is great to work in teams. As I said in the earlier post, I was confused on the order to complete items. All of my colleagues are going to follow the book, chapter by chapter. This will help me with following an order even though I will still operate on concepts rather than specific chapters. I also found out that matrices are not as necessary of a skill for calculus preparation so I will be able to concentrate only on the other three methods to solving a system-graphing, substitution and elimination. I would rather teach a few ways that are strong then have students confused on a concept. I also had a great lead for a project-circuits. I asked the team what is the emphasis for complex numbers. This lead to a possible project about electrical circuits. I have a colleague who is going to share with me what she does and I may be discussing soon how I will make it into a project. I am EXCITED!
{"url":"http://pbl-birdside.blogspot.com/","timestamp":"2014-04-21T02:06:26Z","content_type":null,"content_length":"103107","record_id":"<urn:uuid:9ff94c12-fbf9-4868-b65a-a404d307d76e>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00118-ip-10-147-4-33.ec2.internal.warc.gz"}
The competition number of a generalized line graph is at most two Boram Park, Yoshio Sano In 1982, Opsut showed that the competition number of a line graph is at most two and gave a necessary and sufficient condition for the competition number of a line graph being one. In this note, we generalize this result to the competition numbers of generalized line graphs, that is, we show that the competition number of a generalized line graph is at most two, and give necessary conditions and sufficient conditions for the competition number of a generalized line graph being one. Full Text: PDF PostScript
{"url":"http://www.dmtcs.org/dmtcs-ojs/index.php/dmtcs/article/viewArticle/2041","timestamp":"2014-04-21T16:40:58Z","content_type":null,"content_length":"11074","record_id":"<urn:uuid:2af8d72b-d438-49e1-afb9-af5214b63b51>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00488-ip-10-147-4-33.ec2.internal.warc.gz"}
Bull Direct N°31, November 2008 For two years, mathematicians throughout the world had been dreaming of discovering a 10 million digit prime number, a discovery which would allow them to claim the $100.000 award offered by the Electronic Frontier Foundation (EFF). The 44th Mersenne prime had been discovered in September 2006, and came tantalizingly close to qualifying for the award, with 9,808,358 digits. Since then, prime number enthusiasts had been holding their breath to see who would get the prize. Incredibly, the 45th and 46th Mersenne primes were discovered out of sequence and within two weeks of each On August 23rd, Edson Smith discovered the 45th known Mersenne prime using a UCLA Mathematics Department's desktop computer: 243,112,609-1, a gigantic 12,978,189 digit number! And on September 6th, the 46th known Mersenne prime, 237,156,667-1, an 11,185,272 digit number, was found by Hans-Michael Elvenich, a prime number enthusiast from Langenfeld near Cologne, Germany! Both primes were independently verified by Tony Reix of the Bull Research Center in Grenoble, France using 16 1.6 GHz Intel® Itanium®2 CPUs of a Bull NovaScale 6160 HPC system and the multithreaded Glucas program. What are Mersenne primes? A Mersenne number is a positive integer that is one less than a power of two: Mq = 2q-1 where the exponent q is a prime. Since Mersenne primes are fairly rare, they are known by a “nickname” that indicates their ranking by size. For example, M31 = 231-1 (= 2.147.483.647) is called M8 because it is the 8th Mersenne prime, discovered by Euler in 1750. In 1876, M12 (a huge 39 digit number for the time!) was proved to be a prime by Edouard Lucas, using a revolutionary method that is still used today – but was manual at the time! The search for Mersenne primes was revolutionized by the introduction of the computer. The first successful identification of a Mersenne prime by a computer was achieved in 1952 using the U.S. National Bureau of Standards Western Automatic Computer (SWAC) at UCLA, with a program written by Pr. Raphael Robinson. After a gap of 38 years during which no new Mersenne prime was discovered, the UCLA program allowed to identify five Mersenne primes within a few months. Then 17 new Mersenne primes were identified until 1996. Since 1996, the search for Mersenne primes has organized by GIMPS (Great Internet Mersenne Prime Search), a distributed computing project powered by volunteers and founded by George Woltman, who also wrote the prime testing software Prime95. 100.000 volunteers are contributing computing time – usually their personal computer’s idle time -- to the project, which distributes the exponents to be tested amongst the contributors. To this day, GIMPS has found a total of twelve Mersenne primes. Once a contributor has found an exponent that gives a Mersenne prime, it must be verified twice, with a different program, on different hardware and by another person. This is how Bull’s Tony Reix came to be the first to verify M41, M42, M43 and M44, and came second for the verification of M45 and M46. How long is the quest for Mersenne primes going to continue? Are there more Mersenne primes to be found? Well, the set of Mersenne primes is probably infinite, like the number of ordinary primes, but this has never been proved. What is certain is they are getting harder to discover, first because they become rarer as their size increases, and second because the length of time necessary to test an exponent increases as the size of the tested exponents But then the number of GIMPS contributors is expanding, and processing power is also progressing fast with the availability of multicore architectures. So this is not the end of the hunt for very large prime numbers. EFF has offered a $150,000 award for an immensely more difficult challenge: the discovery of the first 100-million-digit prime. When it will be discovered is anybody’s guess. Our expert Tony Reix’s forecast is that it will be found around 2020, and will be M50 or M51. Time – and available computing power – will tell…
{"url":"http://www.bull.com/bulldirect/N31/bref.html","timestamp":"2014-04-19T04:23:08Z","content_type":null,"content_length":"29760","record_id":"<urn:uuid:f8d2ba76-2c2e-4a47-aeca-53f3d4a82dcf>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00507-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts about Gio Gonzalez on Natstradamus When I projected that the 2013 Nats were going to win 94 games, I did so with a bit of trepidation. Not only did this mean that I was projecting a performance so good that it would have been literally unbelievable only a few years before, but because I have certain doubts about the construction of my model. As you might have gathered from the title of this post, I think my model has been systematically under-counting playing time for pitchers and hitters. In the spirit of Top of the Inning/Bottom of the Inning nature of the Natstradamus projections, I’ll deal with the pitching issues first, and then the batting problems in the bottom of the inning. EDIT: Astute readers noted that I should have reduced relief pitcher innings by as much as I increased starting pitching innings. I have amended the relevant analysis. This results in a 98-win Executive Summary for the TL;DR Crowd: Our earlier projection wasn’t as accurate as it should have been in counting playing time: A slight adjustment in innings pitched for starters–with a corresponding reduction in relief pitching innings– yielded a decrease in runs scored by 2—but a better/more nuanced look at plate appearances by the starting line-up yielded an astonishing increase in runs scored, from 692 to 725. This revises our win projection for the 2013 Nats to 98 wins. Innings, Limits, and Other Stuff to Tear Your Hair Out With First, pitching. If you look back at the projected innings pitched column in my pitching runs allowed projections, you will notice that I assume that pitchers in the starting rotation will pitch about 190 innings each, with Strasburg pitching only 180. How does that stack up with reality? • Gio Gonzalez (199.1 IP); • Jordan Zimmermann (195.2 IP) • Edwin Jackson (189.2 IP). Looking at things like this, it’s starting to look like our 180-inning starting rotation baseline is off by a little bit. Is it really, though? None of the top three for the Braves (Minor, Hudson, Hanson) pitched over 180 innings last year. The Phillies had Hamels (215.1) and Lee (211.0), then a sharp drop-off (injuries). The Mets had Dickey (232.2) and Niese (190.1), and then a precipitous dropoff to Santana (117.0). Things get a bit better when we look at the Reds, whose top five were remarkably consistent as far as innings, with Cueto (217), Latos (209.1), Bailey (208), Arroyo (202) , and Leake (179). Likewise, the Giants got a lot of innings out of their starters, with Cain (219.1), Bumgarner (208.1), Vogelsong (189.2), Lincecum (186), and Zito (184.1). In fact, it’s the rare National League team that gets more than 180 innings from all of its top five starters–only the Giants managed this in 2012, and we all know how that worked out for them, Anyway, returning to our projections: is there a better way we can match the innings expectations for Nationals starting pitchers? Maybe we can. During the height of the Strasburg Shutdown hysteria last year, I wrote that the organization has a general innings-limiting principle: The Nats have a policy–and a remarkably enlightened one, at that–of limiting starting-pitcher workloads to 120% of the innings a pitcher had pitched the previous year, wherever those innings happened (whether as an amateur, the minor leagues, or the majors). For pitchers returning from major injuries, the innings limit seems to be about 120% of the pitcher’s previous single-season career high total innings pitched. The conventional wisdom is that this limit may not apply to pitchers like Gio Gonzalez (age 27) and Dan Haren (age 32). Jordan Zimmermann (age 26) might have arguably “aged out” of this system, too, since he pitched 195.2 innings last year. Detwiler (age 26) might have aged out, as well, but last year’s 164.1 IP represented his professional maximum, so let’s assume we’re stretching him out more carefully and put him on the limit. Strasburg (age 24), it should go without saying, is probably under this silent limit as well. Applying those limits, and looking at last year’s performances, we get the following: • Stephen Strasburg. 120% of last year’s innings for Strasburg works out to 190.2 innings for Strasburg. Plugging that into our model, that works out to 54.23 runs allowed, an increase of 3.03 • Jordan Zimmermann. JZ pitched 195.2 innings. It would be foolish to assume he would pitch any more. Let’s assume he pitches 195 innings, then. That works out to 80.38 runs allowed, an increase of 2.06 runs. • Gio Gonzalez. 199 innings is a lot, but he pitched over 200 innings in the two preceding years, so I don’t think it’s too much of a stretch to give Gio 200 innings in 2013. Ten more innings of Gio than in our initial model yields 84.67 runs, an increase of 4.24 runs. • Ross Detwiler. Detwiler’s 151 innings in 2012 was a career high for him. Increasing that by 120% yields 181 innings. Fortunately, the old model pegged him at 180 innings to begin with. We’ll leave well enough alone, then. • Dan Haren. Haren’s a little harder to judge. He only pitched 176.2 innings in 2012, but before his back got balky, he pitched well in excess of 200 innings for seven consecutive seasons. Various projections have him pitching as many as 218 innings and as few as 170. Let’s say he recovers form and pitches 190 innings–which is what we had in the original model. Great. After adjusting for an increase in innings pitched, we see that the Nats give up a few more runs– 9.33 runs. That’s enough to cost them one full game in the Natstradamus projection–so that leaves them with 93 wins, instead. Not so fast. You will notice that we’ve increased Gio’s innings by 10, Strasburg’s innings by 10, and Zimmermann’s innings by 5. That means we need to reduce relief pitcher innings accordingly. If we reduce Craig Stammen’s 110 innings to 95 innings (-6.6 runs allowed) and Zach Duke’s innings from 90 to 80 (-4.8 runs allowed), we actually end up saving about 2 runs. That keeps us steady at 94 wins for now. But how about the hitting? Batters: Up. The crude assumption built into the model was that every one of the starting position players got 600 plate appearances each. This is, of course, false. The ever-astute David Huzzard reminded me that the number of plate appearances varies with position in the batting order. Fortunately, Baseball Reference lets us look at exactly how many plate appearances, on average, each batting order position got in the National League in 2012. As you can see, the lead-off batter gets, on average 750 plate appearances–125% more than our model assumed! What does it look like? Split Pa Batting 1st 750 Batting 2nd 732 Batting 3rd 716 Batting 4th 699 Batting 5th 684 Batting 6th 666 Batting 7th 647 Batting 8th 625 Batting 9th 606 In fact, we see that in the NL, the only batting average position that gets even close to 600 plate appearances is the number 9 batter–which is usually the pitcher’s spot! Safe to say, then, that the model is broken as far as runs scored. To fix it, we need to figure out what the batting order is going to be and award plate appearances in proportion to that player’s spot in the batting order. To keep things consistent with our defensive statistics, we’ll assume that each “every day” position player appears in 150 games. With that in mind, let’s assign some plate appearances to a hypothetical Player PA Denard Span 695 Jayson Werth 678 Bryce Harper 663 Adam LaRoche 647 Ryan Zimmerman 633 Ian Desmond 617 Danny Espinosa 599 Wilson Ramos/Kurt Suzuki 579 Pitchers 561 That leaves us with some 453 plate appearances to distribute among the other bench players. Let’s assume, crudely, that we distribute them evenly among Tracy, Moore, Lombardozzi, and Bernadina, giving them 113 plate appearances each. Let’s also further assume that the “Pitchers” spots are evenly distributed among all the starting pitchers, giving each of the starting five 112 plate appearances each. The results are shocking: Player Name 4-year total PA 4-year total wRC 4-yr moving avg wRC/PA Projected PA Projected wRC Team Total wRC Jayson Werth 2803 425 0.151623260792009 678 102.80 Ryan Zimmerman 2844 426 0.149789029535865 633 94.82 Tyler Moore 171 26 0.152046783625731 113 17.18 Bryce Harper 597 86 0.144053601340034 663 95.51 Adam LaRoche 2622 361 0.13768115942029 647 89.08 Denard Span 2671 334 0.125046798951703 695 86.91 Wilson Ramos 613 76 0.123980424143556 290 35.95 Ian Desmond 1849 214 0.115738236884803 617 71.41 Danny Espinosa 1428 164 0.11484593837535 599 68.79 Roger Bernadina 1150 121 0.105217391304348 113 11.89 Chad Tracy 845 85 0.100591715976331 113 11.37 Kurt Suzuki 2703 274 0.101368849426563 290 29.40 Steve Lombardozzi 448 42 0.09375 113 10.59 Stephen Strasburg 83 3 0.036144578313253 112 4.05 Drew Storen 2 0 0 0 0.00 Dan Haren 240 19 0.079166666666667 112 8.87 Craig Stammen 90 3 0.033333333333333 30 1.00 Jordan Zimmermann 166 4 0.024096385542169 112 2.70 Zach Duke 226 1 0.004424778761062 0.00 Tyler Clippard 14 0 0 0 0.00 Gio Gonzalez 84 -5 -0.05952380952381 112 -6.67 Ross Detwiler 97 -9 -0.092783505154639 112 -10.39 Ryan Mattheus 1 0 0 0 0.00 Rafael Soriano 0 0 0 0 0.00 Bill Bray 0 0 0 0 0.00 That’s a huge jump in runs scored, from 692 up to 725! Putting it Together Having adjusted our playing-time expectations somewhat, our revised projection has the 2013 Nats allowing 600 runs, while scoring 725 runs. Running that through the Pythagorean Win Expectation Formula gives us a revised win projection for the 2013 season of 98 wins, or four more than we had initially projected. The vast undercount of offensive plate appearances made a huge difference in terms of runs scored, and added two whole wins. The increase in starting pitching at the expense of middle relief yields two more wins. There are a few caveats, of course. Naturally, this all assumes that every player involved will stay healthy all year, and that they all perform according to their four-year trailing average performances. A realignment of the batting order will affect runs scored in very real ways: this is particularly true in the case of Bryce Harper. The current line up puts two left-handed power hitters, Harper and LaRoche, back-to-back, which may be suboptimal in matchup situations. But moving Harper down in the order will deprive him of plate appearances and run-creating chances. I have goosebumps just thinking about this. The Trial of Gio Gonzalez I didn’t want this to be the first post of the 2013 season, but I guess I don’t have much of a choice in the matter. I’m going to come right out and say it: given the evidence available right now, I don’t think there is enough evidence to find that Gio Gonzalez possessed or used any banned performance-enhancing By now, you all know the story. A report in the Miami New Times seems to have uncovered a doping ring operating out of Miami. One of the players mentioned in connection with the doping ring is Nats left-hander and all-around nice guy Gio Gonzalez: There’s also the curious case of Gio Gonzalez, the 27-year-old, Hialeah-native, left-handed hurler who won 21 games last year for the Washington Nationals. Gonzalez’s name appears five times in Bosch’s notebooks, including a specific note in the 2012 book reading, “Order 1.c.1 with Zinc/MIC/… and Aminorip. For Gio and charge $1,000.” (Aminorip is a muscle-building protein.) Gio has denied any connection with the lab in question. I've never used performance enhancing drugs of any kind and I never will ,I've never met or spoken with tony Bosch or used any substance— Gio Gonzalez (@GioGonzalez47) January 29, 2013 Provided by him.anything said to the contrary is a lie.— Gio Gonzalez (@GioGonzalez47) January 29, 2013 Today, the New Times released images of every time Gio’s name appears in connection with the lab in question. Also, we found out today that Anthony Bosch, who operated the lab, has also vehemently denied liability. Those are, as I see them, all the relevant facts. Before we dive deeper, let’s pause here and consider what, precisely, we’re talking about. The MLB drug testing and compliance regime grows out of a document called the Joint Drug Prevention and Treatment Program [PDF]. The Joint Drug Program defines everything we need to know about what sorts of substances players are not allowed to use or possess (Section 2); and the means by which the League will impose discipline on them (Section 7). By now, we’re all familiar with the punishment provision of the Plan: a first violation of the Plan’s “performance enhancing substance” provisions leads to an automatic 50-game suspension. (Section 7 But go back and re-read the first paragraph of Section 7(A): the punishments apply to a player who “tests positive for a Performance Enhancing Substance, or otherwise violates the Program through use or possession of a Performance Enhancing Substance.” (emphasis added). The or is important: it means that punishments will apply even without a positive test–in what the jargon-lovers call “non-analytic positives.” So, the League need not show that analytical chemistry revealed the presence of a banned substance in Gio’s body: they only need to show that Gio either used or possessed a banned substance. OK, still with me? Based on the New Times‘ published images of the Bosch notebooks, this is what we think we know: • Gio’s name (and what some figure to be his stat line 3 games into last season) appear on the same page as a recipe for something called “Pink Cream,” that, in turns, seems to contain “Test. [osterone?], 3%” among its ingredients. • Another entry, the fifth in the series, contains the cocktail described in the original report: “1.c.1 with Zinc/MIC/… and Aminorip.” MIC seems to refer to a supplement injection thought (without much support) to promote weight loss. [I am aware of the irony here: I am linking to LiveStrong in a story about doping.] “Aminorip” is a proprietary amino-acid supplement. Neither contain any ingredients that are on the list of prohibited performance enhancers–nor, to my knowledge, are any of them “anabolic androgenic steroids and agents (including hormones) with antiestrogenic activity that may not be lawfully obtained or used in the United States,” that would violate Section 2(B) of the Joint Drug Agreement. • The most troubling entry, to my mind, is actually the fourth entry, which seems to read thus: “MAX/GIO GNZLZ/CREAMS [illegible] AMINORIP & [illegible]“. That doesn’t look good. It links either Gio or his father, Max, with “creams,” and we already suspect that the lab has been dealing in testosterone cream (which, it bears repeating is a banned substance). But that’s all we know. Remember, to violate the Joint Drug Agreement (and thus be subject to the fifty-game suspension), the league must prove that Gio Gonzalez possessed or used a performance-enhancing substance. What we have here, so far, is evidence that the lab here prepared a “cream” (contents unknown!) for either Gio or his father, Max. There is no evidence–yet– that Gio had it on his person, in his house, in his car, in his bag, in a box with a fox, anywhere. In short, although this is worrying news, we don’t have any evidence yet that Gio has violated the Joint Drug Agreement at all. The evidence itself is subject to question. We only “know” what we know because we assume that the materials in the New Times report are genuine: that is, that they really are what the New Times’ sources say they are. Although for now these seem to be what they say they are, Bosch, who is alleged to be their author, has himself denied them. That’s going to be subject to proof. I’ll close with this: I can’t really pretend to be impartial. I’m a huge Nats fan, and a huge Gio Gonzalez fan. I love Gio. But this news has saddened me a bit, because it has brought him under suspicion. And, even though I don’t think there’s enough evidence for the League to punish Gio for doping, there will be a cloud over Gio’s name for as long as anyone can see his name next to an order for “CREAMS.” The slimy feeling from that association will take a very long time to disappear–if it ever does. The Nats Playoff Rotation: A Poem First we’ll use Gio, Next up, J-Zim Followed, we hope, by a Detwiler win. Next series? Gio Followed by Zim Then Det and E.J. (if he doesn’t give in.) This was obviously inspired by Gerald Hern’s “Spahn and Sain and Pray for Rain“ Milestones on K Street? A friend of mine remarked recently: So the Rays’ pitchers just set the record for most K’s in a season by an AL team with 1,246. The 2003 Cubs hold the MLB record with 1,404. The Nats currently have 1,237 K’s on the season. What are the odds that the Nats’ pitchers break the Cubs’ mark in the next 3 years? I say even money. This is one of those things that sneaks up on you. As much as I follow the Nats’ pitching staff, I had not really been keeping track of their cumulative strikeout figures. Currently, the Nats are third in the league, behind the Phillies (1290) and the Brewers (1299), although I have to believe the Brewers’ strikeout totals are somewhat inflated from having to face the Astros and the Pirates (who are, respectively first and second in strike-outs while batting) so often. Let’s get one obvious thing out of the way. The Nats pitching staff posts a collective 8.18 K/9. There are about 90 innings left in the year. Assuming nothing changes radically, we’d expect around 82 more strikeouts through the end of this year, bringing the total to something like 1,328 or so. So, no way the 2012 Nats come close to the 2003 Cubs’ unbelievable strikeout totals. Could the Nats equal such a mark? We can try to make an extremely crude projection. Let’s assume an unlimited, 200-inning Stephen Strasburg. Let’s further assume that Edwin Jackson re-signs with the organization, and that Ross Detwiler remains in the rotation. That gives us a five-man rotation of Strasburg, Gio, Jordan Zimmermann, Edwin Jackson, and Ross Detwiler. So let’s start by looking at how they’d do. Looking at totals since 2008, here’s what the K/9 rates look like: Strasburg: 11.21 Gonzalez: 8.79 Zimmermmann: 7.41 Jackson: 6.92 Detwiler: 5.48 Assuming all of them pitch 190 innings (I know, very very crude here), this is what it looks like: Strasburg: 236 strikeouts Gonzalez: 186 strikeouts Zimmermann: 156 strikeouts Jackson: 146 strikeouts Detwiler: 116 strikeouts. That gives us a starting pitching rotation total of 840 strikeouts. So far, in 2012, those same five have recorded 800 strikeouts. This seems plausible. So the 840 strikeouts from the starting rotation would need an additional 564 strikeouts from relievers to equal the 2003 Cubs. 2012 Nats relievers put up 433 strikeouts, all together. What if we don’t bother with all this tiresome averaging over the past several years, and assume the Nats pitch at the same level they’ve done in 2012? Well, assuming 190 innings for everybody: Strasburg: 11.13 K/9; 235 K’s Gonzalez: 9.36 K/9; 198 K Zimmermann: 6.95 K/9; 147 K Jackson: 8.03 K/9; 170 K Detwiler: 5.68 K/9; 120 K For a staff total of 870 strikeouts. But let’s look back at those 2003 Cubs K/9 rates: Kerry Wood: 11.35 K/9; 211 IP; 266 K Mark Prior: 10.43 K/9; 211.1 IP; 245 K Matt Clement: 7.63 K/9; 201.2 IP; 171 K Carlos Zambrano 7.07 K/9: 214 IP; 168 K Shawn Estes: 6.11 K/9; 151.2 IP; 103 K Wow. Strasburg today has nothing on Wood and Prior in 2003. They got more strikeouts, more often, over far more innings than we now think prudent. The forgotten man here was Shawn Estes, who racked up 103 strikeouts in 28 starts for the 2003 Cubs. If the Nats are going to challenge the 2003 Cubs for the most strikeouts by a pitching staff in a single season, they’re going to have to hope that several of the following happen in the same year: • Stephen Strasburg pitches over 200 innings • Jordan Zimmermann pitches over 200 innings • Gio Gonzalez pitches over 200 innings • Ross Detwiler discovers some way to get 2 more strikeouts per 9 innings • The bullpen gets more strikeouts A New Era Yesterday, the 2012 Washington Nationals defeated the St. Louis Cardinals by a score of 4 to 3. Stephen Strasburg pitched well, but got a no-decision–the vagaries of the rule-book having awarded the win to Ryan Mattheus. Tyler Clippard recorded a save. Later, the national media would focus on the news that Strasburg’s season would end on September 12. As the rest of the baseball world considered this, something more amazing happened: The 2012 Nationals won their 81st game. That exceeds the 2011 Nationals’ 80 wins. It ties the 2005 Nationals, whose improbable, roller-coaster, flip-a-coin debut in the Capital was a delirious first love affair for this generation of baseball fans. There are still 29 games remaining. That’s right, Nats town. If the Nationals win so much as one out of the next 29 games, they will have completed their best season ever since they arrived in DC. We knew this was coming, of course. As soon as Gio Gonzalez recorded his 16th win of the season on August 19, beating Livan Hernandez’s 15-win 2005, we knew. But somehow it hasn’t been real until now: the 2012 Nationals have begun to outrun the long shadow of futility. They cannot be compared to the Natinals of years past. They are tied with the Texas Rangers for the highest run differential in all baseball–a feat they achieve with only an average offense, because they allow the fewest runs per game (3.6) in the National League. And, although I complain constantly, the Nats have scored 269 runs in the months of July and August–second only to the Milwaukee Brewers in runs scored during that period. These Nats are pretty good, you guys. So don’t sweat the Strasburg Shutdown drama. Whatever happens, we Nats fans already have the team we dreamed about since the last out was recorded in 2005: a team that can beat any other team in the league on any given day. In the words of Ryan Zimmerman–a man who knows a thing or two about these things–the Nats have finally given DC baseball fans a team to cheer for. Today, as you get ready to watch those Nats face the Cubs, remember that. Today, you have a team to cheer for, one that is the equal of any other in baseball–and perhaps better than most. Today, savor how awesome that is. Today, root, root, root for the home team. The Ten Percent Problem Twelve and four! If you had told me in January that today, with ten percent of the baseball season behind them, the Nats would have lost only four games and won twelve–I would have laughed at you. But as I type these words, I’m watching the last-place Phillies founder against the Diamondbacks. I never thought I’d see the day. The Nats continue to outperform my pre-season projections. According to my calculations, the Nats should be about 9-7 (I actually had them projected .543). They should have scored 61 runs and allowed 59 runs. As I predicted last post, the offense has cooled somewhat. To date, the Nats have scored 58 runs, marginally fewer than my preseason predictions would have suggested. What should really amaze us, though is this: to date, the Nationals have allowed only 45 runs. Look again: that’s a whopping fourteen fewer runs than the preseason prediction. That means that the Nats success is largely attributable to dominant pitching–especially the K Street rotation. You know the statistics. As I write this, the Nats pitching staff leads all baseball in staff ERA (2.34), FIP (2.30), xFIP (3.16), and strikeouts (144). The Nats’ pitching staff, collectively, has the lowest opponents’ batting average (.199). Of the top fifteen pitchers in all baseball in xFIP, four are Nationals: Gio Gonzalez (no. 2), Ross Detwiler (no. 9), Edwin Jackson (no. 13), and Stephen Strasburg (no. 14). Add all of that up, and that’s worth three wins, I suppose. It all makes for thrilling baseball. But the Nats are scoring only 3.63 runs per game so far. Again, that’s less than the Natstradamus-predicted rate of 3.80 runs per game. The National League average so far is 3.90. This does not bode well for the long term. Then again, the Nats have the fewest runs allowed per game so far (2.80)–vastly outperforming the Natstradamus-projected 3.5 runs allowed per game. If the Nats are going to stay hot, they are going to need to find offense somewhere. With Michael Morse hurt, all eyes will turn to Tyler Moore, whose arrival in Nats Town seems imminent. Until then, the Nats are going to balance on the razor’s edge–and Nats town is going to watch their every move breathlessly. Looking at the Bullpen: Shutdowns and Meltdowns Not even in my most optimistic moments would have said that the Nats would win two in a row out of the gate! As I write this on Easter Sunday morning, the Nats are sitting pretty, sharing first place atop the National League’s Eastern Division with the Mets (the Mets!). And all this despite a lackluster debut for Gio “the Motown Kid” Gonzalez. The Nats won yesterday behind the unexpected heroics of former Hiroshima Carp Chad Tracy, and some absolutely phenomenal pitching from the “B” bullpen, with Craig “Matinee Idol” Stammen in long relief, followed by Ryan “Firework” Mattheus, Tyler Clippard, and some pitching from Hot Rod that was pretty frickin’ bueno. The Nats’ late-inning heroics aren’t great to my stomach lining, though. I’ve been wondering how I could better quantify the feeling I have when relievers come in. I attempted this earlier, of course, when I introduced my heartburn index–but I’m now convinced that the heartburn index doesn’t give a complete picture. Fortunately, FanGraphs has ridden to the rescue again, with a new, and, I think, extremely helpful, pair of statistics for measuring relief pitcher performance: Shutdowns and Meltdowns. As the proponent of the new stats explains them: Shutdowns (SD) and Meltdowns (MD) are two relatively new statistics, created as an alternative to Saves in an effort to better represent a relief pitcher’s value. While there are some odd, complicated rules surrounding when a pitcher gets a save, Shutdowns and Meltdowns strip away these complications and answer a simple question: did a relief pitcher help or hinder his team’s chances of winning a game? If they improved their team’s chances of winning, they get a Shutdown. If they instead made their team more likely to lose, they get a Meltdown. Intuitive, no? Using Win Probability Added (WPA), it’s easy to tell exactly how much a specific player contributed to their team’s odds of winning on a game-by-game basis. In short, if a player increased his team’s win probability by 6% (0.06 WPA), then they get a Shutdown. If a player made his team 6% more likely to lose (-0.06), they get a Meltdown. Shutdowns and meltdowns correlate very well with saves and blown saves; in other words, dominant relievers are going to rack up both saves and shutdowns, while bad relievers will accrue meltdowns and blown saves. But shutdowns and meltdowns improve upon SVs/BSVs by giving equal weight to middle relievers, showing how they can affect a game just as much as a closer can, and by capturing more negative reliever performances. Nats fans are by now intimately familiar with WPA, thanks to the hard work of Federal Baseball. The squiggly-lined graphs he pots after every game show the ebb & flow of the game as measured by WPA. A “Shutdown” happens when a reliever bends the line towards the Nats’ favor. A “Meltdown” happens when a reliever bends the line in favor of the opponent. The Shutdown/Meltdown stat pair thus give us a good indication of whether a reliever is helping or hurting his ballclub–which is kind of neat! So what does that mean for the Nats bullpen in 2012? Using my standard measuring interval (2008-2011 seasons), here’s how the pitching staff looks: Name Holds Saves Blown Saves Shutdowns Meltdowns Heartburn Brad Lidge 9 100 16 92 28 6.85 Tyler Clippard 64 1 18 77 35 5.22 Sean Burnett 54 8 9 63 42 5.62 Drew Storen 13 48 7 59 22 4.34 Henry Rodriguez 13 2 4 13 13 8.51 Tom Gorzelanny 7 1 2 12 5 6.01 Ryan Mattheus 8 0 0 7 6 5.63 Craig Stammen 2 0 0 5 2 4.09 A few things jump out at me at once: • Since 2008, Brad Lidge is unquestionably the Shutdown King of the current Nats bullpen. The 100 Shutdowns mean that he left his ballclub in a better position to win after his appearance than before one hundred times–and only made them worse 28 times. This makes me wonder whether Philadelphia unloaded him more because of his relatively high heartburn factor than any other measurable quality as a relief pitcher. On the other hand, Lidge’s ridiculous 2008 season may have gone a very very long way towards inflating his stats here. In any case, Lidge was pretty good on opening day this year. • We all know that Tyler Clippard is an awesome relief pitcher. He was an all-star in 2011. But now we have a clearer idea why. He’s second only to Lidge in shutdowns since 2008, and leads the staff in Holds. • Sean Burnett has collected 63 shutdowns since 2008–apparently, while I was averting my eyes in terror. The more I study him, the more I am forced to conclude that I have been terribly unfair to Burnett over the past few years. • We also now have a better idea why Drew “Batman” Storen is such a good reliever. He hasn’t been relieving nearly as long as Lidge, but he’s already accumulated 59 shutdowns. His 2.68 Shutdown/ Meltdown ratio is second only to Lidge’s. • Henry “Hot Rod” Rodriguez is, by this set of measures, not even nearly in the same class as Storen or Lidge. 13 Shutdowns and 13 Meltdowns, giving him an abysmal SD/MD ratio of 1.00–the lowest on the staff. I’m still hoping that he will improve during 2012 and pitch to his potential, though. • Tom Gorzelanny has a shutdown/meltdown ratio of 2.40. That’s fourth, behind Lidge, Storen and Stammen. I guess he really is better as a reliever than as a starter? Then again, he’s only recorded 12 shutdowns, total–so maybe we don’t know enough about him to judge. • I was expecting a tighter correlation between high shutdown numbers and low heartburn index numbers. That’s not what we see. Lidge, for instance, ought to give me more heartburn than his shutdown numbers suggest. Mattheus looks pretty bad next to his heartburn near-equivalent Burnett–but then, Mattheus hasn’t had all that many chances yet. If the Nats’ starting rotation can routinely get through 6 or 7 innings, there are enough high-shutdown arms in the bullpen to keep the game in hand. This is very encouraging news for the rest of
{"url":"http://natstradamus.wordpress.com/tag/gio-gonzalez/","timestamp":"2014-04-19T11:56:17Z","content_type":null,"content_length":"82793","record_id":"<urn:uuid:a787921d-4ba4-4488-9b3a-87aad91ebe8d>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00659-ip-10-147-4-33.ec2.internal.warc.gz"}
Basic Math for Adults/Decimals From Wikibooks, open books for an open world Decimals are basically fractions expressed without a denominator, rather replaced by a power of ten, and then the decimal point is inserted into the numerator at a position corresponding to the power of ten of the denominator. It is usual to add a leading zero to the left of the decimal point when the number is less than one. $\frac{2}{5} = \frac{2 \times 2}{5 \times 2} = \frac{4}{10^1} = 0.4$ Adding Decimal Numbers[edit] Add decimal numbers much the same way you would add integers. Line up decimal points, and then proceed to add each column and carry at the top. The decimal point in the answer should line up with all of the others. Here is an example: \begin{align} {12.3}\\ +\underline{24.2}\\ {36.5} \end{align} Subtracting Decimal Numbers[edit] Subtract as you would whole numbers, but remember to follow all the rules from addition of decimals. \begin{align} {312.9}\\ -\underline{111.4}\\ {201.5} \end{align} Converting Fractions to Decimal Numbers[edit] To convert a fraction to a decimal number, divide the numerator by the denominator. • $\frac{3}{4} = 0.75$ • $\frac{10}{3} = 3.333333333333333...$ Repeating and Terminating Decimals[edit] A repeating decimal is a decimal that is infinite. For instance, the 3.33... in the second example above. The threes just keep repeating. Instead of writing many 3's, you can draw a line above the number that is repeating. $\frac{10}{3} = {3.}\overline{3}$ When there are two numbers repeating, such as .232323, you have to draw a line above the 2 and the 3. A terminating decimal is a decimal that ends at one point and does not go on forever. ex. 1.25 Multiplying Decimal Numbers[edit] Multiplying decimal numbers can be tricky at times, but most of the time, it is similar to multiplying any integers. Although there are easier methods of multiplying, this is one of the methods. You can make both decimal numbers have same multiple of a power of ten. $0.6 \times 0.75 = (60 \times 10^{-2}) \times (75 \times 10^{-2})$ Then multiply the first terms together, and the second terms. $(60 \times 75) \times (10^{-2} \times 10^{-2}) = 4500 \times (10^{-4})$ Then insert the decimal point into a corresponding power of ten. $4500 \times (10^{-4}) = 450 \times (10^{-3}) = 45 \times (10^{-2}) = 4.5 \times (10^{-1}) = .45 \times (10^{0}) = .45 \times (1) = .45$ Dividing Decimal Numbers[edit] Dividing decimal numbers is similar to multiplying them. Make both decimal numbers have same multiple of a power of ten. $0.3 / 0.4 = (3 \times 10^{-1}) / (4 \times 10^{-1})$ Then divide the first terms together, and the second terms. $(3 \times 10^{-1}) / (4 \times 10^{-1}) = (3/4) / 10^{0}$ Then insert the decimal point into a corresponding power of ten. $(3/4) / 10^{0} = 0.75 / 1 = 0.75\,\!$ Alternatively, you can make the numbers integers (if the decimal is finite) and perform a simple division. $0.3 / 0.4 = (0.3 \times 10) / (0.4 \times 10) = 3/4 = 0.75$
{"url":"https://en.wikibooks.org/wiki/Basic_Math_for_Adults/Decimals","timestamp":"2014-04-19T13:07:41Z","content_type":null,"content_length":"31844","record_id":"<urn:uuid:e6201af1-2335-474a-be18-6f790a8c95ef>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00447-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: The "standard" way to get to 3NF Oracle FAQ Your Portal to the Oracle Knowledge Grid HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US Home -> Community -> Usenet -> comp.databases.theory -> Re: The "standard" way to get to 3NF Re: The "standard" way to get to 3NF From: Jan Hidders <jan.hidders_at_REMOVETHIS.pandora.be> Date: Wed, 14 Apr 2004 19:42:08 GMT Message-ID: <kugfc.71448$OG1.4811821@phobos.telenet-ops.be> Jan Hidders wrote: > Jan Hidders wrote: >> [...] The usual algorithm that gets you to 3NF in one step (the one >> using the minimal cover) splits as little as possible. See for example >> sheet 46 on: >> http://cs.ulb.ac.be/cours/info364/relnormnotes.pdf > > > Did anyone notice that this algorithm is actually not correct? Take the > following example of a relation R(A,B,C,D,E) with the set of FDs: > > { AB->C, AB->D, BC->D } > > It is clear that the relation ABCD is not in 3NF. Since the set of FDs > it is already a minimal cover the resulting decomposition is: > > { ABCD, BCD } > > But that gives us our old relation back (plus a projection) so this is > definitely not in 3NF. As was pointed out to me by Ramez Elmasri, the counterexample is not correct since the set of FDs is not a minimal cover. The reason for this is that AB->D can be derived from AB->C and BC->D. So a proper minimal cover would be { AB->C, BC->D } and that leads to the decomposition { ABC, BCD } which is indeed in 3NF. I now officially declare this thread closed an will stop replying to myself. :-) Received on Wed Apr 14 2004 - 14:42:08 CDT HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
{"url":"http://www.orafaq.com/usenet/comp.databases.theory/2004/04/14/0431.htm","timestamp":"2014-04-21T09:36:30Z","content_type":null,"content_length":"8510","record_id":"<urn:uuid:8320bc7a-901c-4806-b90a-547108e1c315>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00185-ip-10-147-4-33.ec2.internal.warc.gz"}
Understanding What It's Like to Program in Forth Newest Entry Complete Archives I'm a recovering programmer who has been designing video games since the 1980s, doing things that seem baroquely hardcore in retrospect, like writing Super Nintendo games entirely in assembly language. These days I use whatever tools are the most fun and give me the biggest advantage. james.hague @ gmail.com Where are the comments? Understanding What It's Like to Program in Forth I write Forth code every day. It is a joy to write a few simple words and solve a problem. As brain exercise it far surpasses cards, crosswords or Sudoku I've used and enjoyed Forth quite a bit over the years, though I rarely find myself programming in it these days. Among other projects, I've written several standalone tools in Forth, used it for exploratory programming, wrote a Forth-like language for handling data assets for a commercial project, and wrote two standalone 6502 cross assemblers using the same principles as Forth assemblers. It's easy to show how beautiful Forth can be. The classic example is: : square dup * ; There's also Leo Brodie's oft-cited washing machine program. But as pretty as these code snippets are, they're the easy, meaningless examples, much like the two-line quicksort in Haskell . They're trotted out to show the the strengths of a language, then reiterated by new converts. The primary reason I wrote the Purely Functional Retrogames series, is because of the disconnect between advocates saying everything is easy without destructive updates, and the utter lack of examples of how to approach many kinds of problems in a purely functional way. The same small set of pretty examples isn't enough to understand what it's like to program in a particular language or style. Chuck Moore's Sudoku quote above is one of the most accurate characterizations of Forth that I've seen. Once you truly understand it, you'll better see what's fun about the language, and also why it isn't as commonly used. What I'd like to do is to start with a trivially simple problem, one that's completely straightforward, even simpler than the infamous Write a Forth word to add together two integer vectors (a.k.a. arrays) of three elements each. The C version, without bothering to invent custom data types, requires no thought: void vadd(int *v1, int *v2, int *v3) { v3[0] = v1[0] + v2[0]; v3[1] = v1[1] + v2[1]; v3[2] = v1[2] + v2[2]; } In Erlang it's: vadd({A,B,C}, {D,E,F}) -> {A+D, B+E, C+F}. In APL and J the solution is a single character: First Forth Attempt So now, Forth. We start with a name and stack picture: : vadd ( v1 v2 v3 -- ) Getting the first value out of v1 is easy enough: rot dup @ " brings v1 to the top, then we grab the first element of the array (remember that we need to keep v1 around, hence the ). Hmmm...now we've got four items on the stack: v2 v3 v1 a "a" is what I'm calling the first element of v1, using the same letters as in the Erlang function. There's no way to get v2 to the top of the stack, save the deprecated word , so we're stuck. Second Forth Attempt Thinking about this a bit more, the problem is we have too many items being dealt with at once, too many items on the stack. v3 sitting there on top is getting in the way, so what if we moved it somewhere else for a while? The return stack is the standard location for a temporary value, so let's try it: >r over @ over @ + r> ! Now that works. We get v3 out of the way, fetch v1 and v2 (keeping them around for later use), then bring back v3 and store the result. Well, almost, because now v3 is gone and we can't use it for the second and third elements. Third Forth Attempt This isn't as bad as it sounds. We can just keep v3 over on the return stack for the whole function. Here's an attempt at the full version of : vadd ( v1 v2 v2 -- ) >r over @ over @ + r@ ! over cell+ @ over cell+ @ + r@ cell+ ! over 2 cells + @ over 2 cells + @ + r> 2 cells + ! drop drop ; is roughly the same as in C. " 2 cells + " is equivalent to " cell+ cell+ ". Notice how v3 stays on the return stack for most of the function, being fetched with . The " drop drop " at the end is to get rid of v1 and v2. Some nicer formatting helps show the symmetry of this word: : vadd ( v1 v2 v2 -- ) >r over @ over @ + r@ ! over cell+ @ over cell+ @ + r@ cell+ ! over 2 cells + @ over 2 cells + @ + r> 2 cells + ! drop drop ; This can be made more obvious by defining some vector access words: : 1st ; : 2nd cell+ ; : 3rd 2 cells + ; : vadd ( v1 v2 v2 -- ) >r over 1st @ over 1st @ + r@ 1st ! over 2nd @ over 2nd @ + r@ 2nd ! over 3rd @ over 3rd @ + r> 3rd ! drop drop ; A little bit of extra verbosity removes one quirk in the pattern: : vadd ( v1 v2 v2 -- ) >r over 1st @ over 1st @ + r@ 1st ! over 2nd @ over 2nd @ + r@ 2nd ! over 3rd @ over 3rd @ + r@ 3rd ! rdrop drop drop ; And that's it--three element vector addition in Forth. One solution at least; I can think of several completely different approaches, and I don't claim that this is the most concise of them. It has some interesting properties, not the least of which is that there aren't any named variables. On the other hand, all of this puzzling, all this revision...to solve a problem which takes no thought at all in most languages. And while the C version can be switched from integers to floating point values just by changing the parameter types, that change would require completely rewriting the Forth code , because there's a separate floating point stack. Still, it was enjoyable to work this out. Better than Sudoku? Yes.
{"url":"http://prog21.dadgum.com/33.html","timestamp":"2014-04-17T15:26:16Z","content_type":null,"content_length":"8408","record_id":"<urn:uuid:41494f7f-ae69-4e9f-b68d-c36d2109193b>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00441-ip-10-147-4-33.ec2.internal.warc.gz"}
Electrocomp Corporation: Determining the Corner Point 1. 5382 Electrocomp Corporation: Determining the Corner Point The Electrocomp Corporation manufactures two electrical products: air conditioner and large fans. The assembly process for each is similar in that both require a certain amount of wiring and drilling. Each air conditioner takes 3 hours of wiring and 2 hours of drilling. Each fan must go through 2 hours of wiring and 1 hour of drilling. During the next production, period 240 hours of wiring time are available and up to 140 hours of drilling time may be used. Each air conditioner sold yields a profit of $25. Each fan assembled may be sold for a profit of $15. Formulate and solve this LP production mix situation to find the best combination of air conditioners and fans that yields the highest profit. Use the graphic method. This solution is provided in an attached Word document. Step-by-step instructions are given for formulating and solving the LP problem and the graph is included.
{"url":"https://brainmass.com/math/linear-programming/5382","timestamp":"2014-04-19T15:17:20Z","content_type":null,"content_length":"29051","record_id":"<urn:uuid:9aa82ac4-4740-4194-8b01-fdc4ef15545e>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00145-ip-10-147-4-33.ec2.internal.warc.gz"}
Geometric Probability Probability is a tricky beast, and even more tricky when it comes to analyzing geometric data. The problems arise from how to define accurate measures over geometric spaces; the Buffon needle problem is a classic example of needing to do this kind of calculation carefully. A short (but dense) starts with the Buffon needle problem and delves deeper into notions of valuation (a variant of measure) and how to define them correctly, the key problem being the issue of invariance: how to define measures that are invariant under geometric transformations (read this review To illustrate some examples of the "weirdness" of geometric probability, consider the following three results: 1. If we pick n points at random from the unit square, the expected size of the convex hull of these points is log n. 2. If we pick n points at random from a k-gon, the expected size of the convex hull is k log n 3. If we pick n points at random from a unit disk, the expected size of the convex hull is n^1/3 ! All of these results are collected in a nice manuscript by Sariel Har-Peled. But a result that I found even more surprising is this one , by An arrangement of n lines in R^2 induces a vertex set (all intersection points) whose convex hull has expected complexity O(1) ! Their result uses a fact that is well known (Feller vol. 2), but is still remarkably elegant. Choose points x[1], x[2], ..., x[n] at random on the unit interval. Compute their sorted order x[(1)], x[(2)], ... x[(n)], the order statistic. Then the variable x(k) is distributed (roughly) as the sum of k i.i.d exponentially distributed random variables. Specifically this means that the minimum is distributed as an exponential distribution, and the rest are distributed as the appropriate gamma distributions Bill Steiger recently gave a talk at the NYU Geometry seminar . I could not attend, but the abstract of his talk indicates that the above result holds even in R I should clarify that the O(1) result mentioned above was first proved by Devroye and Toussaint ; their distribution on lines is different, and their proof is more complex. The above paper presents a simpler proof for an alternate distribution.
{"url":"http://geomblog.blogspot.com/2004/09/geometric-probability.html?showComment=1096052040000","timestamp":"2014-04-16T16:41:28Z","content_type":null,"content_length":"141931","record_id":"<urn:uuid:fbf6dd6d-63da-4c5e-ad12-16a50586f4a6>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00509-ip-10-147-4-33.ec2.internal.warc.gz"}
Statistically Justifiable Visible Residue Limits - Pharmaceutical Technology Irrespective of which method is used, two aspects of considerable importance in determining VRLs are the acceptance criterion for establishing VRLs and the number of observers participating in the VC verification studies. The acceptance criterion indicates the probability of detection (observed or predicted) at which a specific residue concentration could be regarded as the VRL. For the current method, the acceptance criterion is 1, which is based on the idea that if all the observers are able to detect a specific residue concentration during the VC verification phase, then the same residue concentration could also be detected by an observer in future situations. However, the same acceptance criterion could not be used for the logistic-regression model because the probabilities that it predicts can neither be less than 0 nor greater than 1. To estimate VRLs at different acceptance criteria, the model was inverted to estimate the concentrations that yield a certain response probability. The residue concentration at which the given acceptance criterion intersects with the logistic curve (see Figure 3) was determined. Table VI shows the residue concentrations thus obtained. The notation P [x] (e.g., P [50]) denotes the residue concentration that would give a response of x% according to the model (e.g., the probability that the residue would be detected by 50% of the observers). Confidence intervals for P [x] were then derived using Fieller's theorem. These point estimates provided a framework for evaluating the reliability of the logistic model. The logistic models and associated point estimates could be considered reliable if the observed probability of detection was found to be consistent with the predicted probability of detection. If one assumes 0.999 to be approximately equal to 1, then the VRL for the given residue should be 2.921 μg/cm^2, which is larger than the one obtained with the current method (see Table VI). Thus, based on the logistic-regression model, the residue concentration at 2.921 μg/cm^2 is predicted to be detected by all the observers with a 95% confidence interval of 2.266–4.761. Because the probability of detection increases significantly with an increase in spiked-residue concentration, setting higher acceptance criteria would give larger VRLs. Table VI shows that as the acceptance criterion approaches 1, the relationship requires a larger change in the explanatory variable to have the same effect as a smaller change in the explanatory variable at the middle of the curve. For example, a change in the predicted probability of detection from 0.9 to 0.99 requires a larger change in residue concentration (i.e., a change of 0.674 μg/cm^2) than does a change in the probability from 0.5 to 0.6 (i.e., a change of 0.114 μg/cm^2). Similarly, the confidence interval for these VRLs would tend to be wider as the acceptance criterion increases. Logistic regression may provide a much larger VRL than the current method. However, manufacturers may achieve lower VRLs by adjusting the acceptance Unlike continuous responses, binary responses require a large number of observations. The more trials are attempted, the more accurate the estimated probability is. For VC verification studies with a small number of observers, a large number of observations with some replicates at each spiking level is recommended. However, for an accurate estimation of sample size, one may use the formula proposed by Hsieh et al. (15). Logistic regression, as previously described, can be generalized to incorporate more than one explanatory variable, which may be continuous or categorical. However, care should be taken when interpreting and reporting results from multiple logistic-regression models. To correctly interpret the results from a multiple logistic-regression analysis and arrive at meaningful conclusions, appropriate steps must be taken to incorporate statistical interaction or curvilinear effects properly (e.g., including additional x [1] × x [2] or polynomial terms such as x [1] ^2 in the systemic component of the model) (13). If the logistic coefficient for the product or polynomial term is not statistically significant, then the interaction or curvilinear effect is not statistically significant. One problem that may arise while modeling multiple explanatory variables is that sometimes the value of one or more independent variables may raise the probability of the dependent variable close to 1, therefore the effects of other variables cannot have much influence. In that case, such variables should be excluded from the model or individual VRLs should be determined for the most appropriate viewing conditions.
{"url":"http://www.pharmtech.com/pharmtech/article/articleDetail.jsp?id=660526&sk=&date=&pageID=5","timestamp":"2014-04-17T04:00:45Z","content_type":null,"content_length":"149529","record_id":"<urn:uuid:63f6ebea-c961-4b07-8b1e-2476e2a117c2>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00260-ip-10-147-4-33.ec2.internal.warc.gz"}
188 helpers are online right now 75% of questions are answered within 5 minutes. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/users/s/medals","timestamp":"2014-04-21T10:29:12Z","content_type":null,"content_length":"77969","record_id":"<urn:uuid:7652a84a-9318-4a0b-a109-685e15e200c8>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00037-ip-10-147-4-33.ec2.internal.warc.gz"}
Figure 1: Dynamical phase coexistence in a micromaser. A beam of excited two-level atoms (green arrow) pumps a resonant cavity. The resulting random sequence of photon emissions defines a quantum trajectory. The gray line depicts the first-derivative of the large deviation function, which corresponds to the mean number of atoms having emitted a photon, as a function of the conjugate field s (which in this case classifies the trajectories according to their activity). At the onset of the micromaser bistability, the discontinuity or first-order transition occurs in the physical space (at s=0) and separates a high-activity phase (red) from a low-activity phase (blue). The micromaser then operates at the phase coexistence between these two dynamical phases.
{"url":"http://physics.aps.org/articles/large_image/f1/10.1103/Physics.3.34","timestamp":"2014-04-18T06:28:27Z","content_type":null,"content_length":"1668","record_id":"<urn:uuid:f4b4a065-00b4-4f08-94e5-2d90bf351b18>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00063-ip-10-147-4-33.ec2.internal.warc.gz"}
Average position is a really perverse metric. Let’s say that I have only 2 keywords in an account, each with one ad: On Day 1, ad #1 is in position 2 and gets 100 impressions per day, while ad #2 is in position 9 and gets 10 impressions per day. The account’s average position on Day 1 (100×2 + 10×9, divided by 110) is thus 2.64. Now let’s say that on Day 2 both ads move up one position. Ad #2 is now in position 8. An increase in position tends to result in more impressions, so let’s say that ad #2 now gets 40 more impressions, for a total of 50 impressions on Day 2. Ad #1 is now in position 1 and let’s say it also gets 40 more impressions, for a total of 140 impressions for that ad on Day 2. The account’s average position on Day 2 is thus 2.84. That is, the average position has dropped (from 2.64 to 2.84) even though both ads in the account moved up one position. What makes average position even more perverse is that this relationship is only true sometimes. For instance, in the example above, if ad #2 had been in position 6 on Day 1 and moved to position 5 on Day 2 (instead of from position 9 to 8), then the account’s average position on Day 1 would have been 2.36 and on Day 2 would have been 2.05. That is, the average position would not have dropped as both ads moved up one position. In case that hasn’t frustrated you enough, the average position of a group of ads/keywords can change even if all the ads stay in the same position. If ad #1 had been in position 2 on both days and ad #2 had been in position 9 on both days, but the number of impressions had still been 100 and 140, and 10 and 50, as described above, then the average position on Day 1 still would have been 2.64, but the average position on Day 2 would have been 3.84. That is, the average position would have dropped by more than 1 full point even though neither ad changed position at all! To make matters even worse, the average position of an individual ad/keyword isn’t necessarily the position at which all, or even most, of its impressions occurred. Let’s say that a search engine tells us that one of our ads got 4 impressions yesterday and had an average position of 2.0. Looking at the figure below, we see that there are only 5 possible ways to show 4 impressions such that their average position is 2.0. (If you’re not convinced these are the only solutions, try for yourself to find others.) The most obvious is solution 1. If all 4 impressions were shown in position 2, then their average position will be 2.0. Slightly less obvious is solution 2: if 2 of the impressions were in position 2 and one each in positions 1 and 3, then their average position will still be 2.0. Even less obvious is solution 3: if two impressions happened in both position 1 and position 3, then the average position will still be 2.0 even though no impressions actually occurred in position 2! There are two other possible configurations of impressions, solutions 4 and 5, which you can check for yourself have average positions of 2.0. That’s it. Those are the only 5 possible configurations of impressions with an average position of 2.0. Unfortunately, we have no way from the data the search engines provide to determine which of these 5 cases actually occurred. What’s strange is that 3 out of these 5 possibilities have more impressions in position 1 than in position 2! If that ad got 1 click yesterday, did that click come from an impression that was actually in position 2, or was from an impression in position 1 (or position 3? or position 4 or 5)? When it comes to determining which position performs best for this ad, I’d like to know! If the search engines told us not only the average position at which our impressions occurred, but also the standard deviation of that average position, then we could figure out which configuration of impressions actually occurred. For example, if they told the average position was 2.0 and the standard deviation was 0.0 (that is, no impressions happened outside position 2), then we’d know that solution 1 was the case that actually happened. If they told us the standard deviation was 1.0 (that is, the ad was shown on average 1 position away from position 2), then we’d know that only solutions 3 or 4 could have been the actual configuration of impressions. Part of the problem here, I think, is terminology. The metric we commonly call the ‘average position’ is really the ‘impression-weighted position’. And just as there’s an impression-weighted position, there’s also a click-weighted position. So, if the search engines told us that our ad got 4 impressions in average position 2.0 with a standard deviation of 1.0, and also 1 click in average position 3.0, then we’d be able to determine immediately that configuration 3 was the one that actually occurred. The reporting burden for the search engines would only be marginally increased, since they’d have to report 4 metrics for every ad instead of one (impression-weighted position, impression-weighted standard deviation, click-weighted position and click-weighted standard deviation, rather than just ‘average position’), but the benefit to search marketers would be enormous. (Perhaps that’s why they don’t do it…) In the meantime, we’ll just have to take our average position with measure of skepticism by remembering how perverse the average position metric can be.
{"url":"http://www.thesearchagents.com/2009/06/average-position-is-a-really-perverse-metric/print/","timestamp":"2014-04-16T22:54:17Z","content_type":null,"content_length":"10060","record_id":"<urn:uuid:6882b87d-1a3f-4786-913e-f3ae109cc1e5>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00405-ip-10-147-4-33.ec2.internal.warc.gz"}
Time Value of Money Time Value of Money - Six Functions of a Dollar Using Assessors’ Handbook Section 505 (Capitalization Formulas and Tables) Appraisal Training: Self-Paced Online Learning Session Lesson 1: Overview This lesson serves as an introduction to the topic and discusses the following: • The concept of the time value of money, • Timelines for cash flows, • Simple versus compound interest, and • Use of Assessors´ Handbook Section 505, Capitalization Formulas and Tables. Objectives and Format of Learning Session The objectives of this learning session are to: • Provide an understanding of the time value of money. • Explain the compound interest functions presented in Assessors' Handbook Section 505, Capitalization Formulas and Tables, and their relationship to appraisal. • Demonstrate how to use the factor tables and each compound interest function in Assessors' Handbook Section 505, to estimate the market value of property for property tax purposes. In this learning session, instruction is provided through structured reading and illustrated examples. To assist in the attainment of the learning objectives, examples are incorporated within the lessons to illustrate the concept being discussed. Problems available at the end of lessons are to be worked by participants to ensure comprehension of concepts and calculations discussed in the lesson. A final examination is also available at the conclusion of this session, in order that certified property tax appraisers employed by a California County Assessor’s Office or the Board of Equalization may demonstrate their overall comprehension of the subject matter, attest to their participation in the learning session and receive training credit for completion of the session. The Concept of the Time Value of Money It´s intuitive to most people that a dollar today is preferable to a dollar to be received in the future. (Think about $1,000,000 today compared to $1,000,000 to be received 5 years from today. Which would you rather have?) A dollar today can be invested to accumulate to more than a dollar in the future, which also makes a future dollar worth less than a dollar today. Hence, money has a time value. More generally, the time value of money is the relationship between the value of a payment at one point in time and its value at another point in time as determined by the mathematics of compound interest. Because of the time value of money, payments made at different points in time cannot be directly compared. The compound interest functions—the mathematics of the time value of money – allow us to bring the payments to the same point in time for comparison purposes. Time value of money calculations have wide application in finance, real estate, and personal financial decisions. An understanding of them is essential in the field of valuation. There are several situations in real estate where dollars at different points in time are compared: • An investor buys a property today for a certain amount of money in order to receive expected income from the property in the future (the purchase price today is compared to the expected future income stream). • A lender makes a loan today in exchange for a promise by the borrower to pay scheduled amounts in the future (the loan amount is compared to the promised payments). • A property owner wants to prepare for a future estimated expense by setting aside a given amount of money each month or year (the future expense is compared to the money set aside each year). • An investor expects that a property will be worth a certain amount in the future and wants to estimate the equivalent amount today (the expected future value is compared to a present value). The process by which payments (or cash flows) are moved forward in time is called compounding. The process by which payments (or cash flows) are moved backward in time is called discounting. All time value of money calculations involve either compounding or discounting — that is, moving amounts either forward or backward in time. Timelines for Cash Flows A series of cash flows can be graphically represented using a cash flow timeline. A timeline depicts the timing and amount of the cash flows. For example, the following timeline depicts cash inflows of $100 to be received at the end of each of the next 5 years: Cash flows in a timeline are often labeled positive or negative. By convention, positive cash flows correspond to cash inflows; negative cash flows correspond to cash outflows. Whether a cash flow is an inflow or an outflow depends on perspective (i.e., as a borrower or lender). The borrower´s inflow is the lender´s outflow, and vice versa. In using timelines, and in solving time value of money problems, one should adopt the perspective of either borrower or lender and stay with that perspective throughout the problem. Consider a simple time value of money problem. In making a purchase you are given two payment alternatives: • Pay $400 immediately. • Pay five installments of $100 each at the end of each of the next five years. As depicted on a cash flow timeline: In deciding which alternative is better, we can´t simply add up the five payments of $100 and compare this sum ($500) with alternative 1 ($400 today). To do so would ignore the time value of money because the two alternatives involve payments at different times. Instead, we must determine the value today (at time 0) of the five future payments of $100 of alternative 2 and compare this to $400, which is the value today of alternative 1. As we shall see, determining the value today (the present value) of the five payments under alternative 2 involves calculating the present value of those payments at a given rate of interest. Time value of money calculations allow us to solve problems such as the one above and many others. Simple versus Compound Interest When money is borrowed, the amount borrowed is called the principal. The consideration paid for the use of money is called interest. • The rate of interest can be thought of as a price per period for the use of money. • From the perspective of the lender, interest is earned; from the perspective of the borrower, interest is paid. Simple Interest Simple interest refers to the situation in which interest is calculated on the original principal amount only. With simple interest, the base on which interest is calculated does not change, and the amount of interest earned each period also does not change. Compound Interest Compound interest refers to the situation in which interest is calculated on the original principal and the accumulated interest. With compound interest, interest is calculated on a base that increases each period, and the amount of interest earned also increases with each period. Application of Simple Interest Suppose someone invests $100 for 50 years and receives 5% per year in simple interest. To calculate simple interest, multiply the beginning balance by the rate 0.05 × $100 = $5. The growth in the investment is depicted in the table below: With simple interest, each year´s interest is based on the original principal amount only. Application of Compound Interest Assume the same investment of $100 for 50 years, but at compound interest: With compound interest, interest is earned on both the original principal and accumulated interest. Interest is earned on interest. In the preceding example, with simple interest, the accumulated amount after 50 years is only $350. With compound interest, the accumulated amount is $1,147. As the term increases, the difference between the final amount with compound interest versus simple interest becomes more and more dramatic. "Miracle of Compound Interest" In a well-known transaction, Dutch colonists bought Manhattan Island in 1624 for the equivalent of $24. • This seems like a steal, but if the seller had deposited the $24 and earned an annual rate of 6%, the future compound amount would have been about $141 billion in the year 2010. • This is roughly equal to the total assessed value of all land and improvements in the City and County of San Francisco in the year 2010. • Over the same time period (386 years), the future value of $24 at simple interest of 6% would have been only $580. From a lender´s (or investor´s) perspective, compound interest is a good thing; the lender earns interest on interest from the borrower. Conversely, from a borrower´s perspective, compound interest is not so good. The borrower, in effect, pays interest on interest throughout the term of the loan. Compound Interest Functions Six compound interest functions are used to solve time value of money problems. Not surprisingly, all of the functions are based on compound, not simple, interest. Each compound interest function is defined by a formula, which is the basis for calculating the compound interest factors for that function. Each formula requires a periodic interest rate and the number of periods Most time value of money problems involve the use of only one compound interest function (or factor), but some require the use of two or more. Understanding the compound interest functions, and how the factors derived from them are used to solve time value of money problems, is the heart of this subject matter. Each compound interest formula, and the factors derived from it, involves three variables: 1. An interest rate, 2. A term (number of periods), and 3. A compounding interval (how frequently interest is compounded). In essence, using a compound interest factor does one of two things: 1. Adds compound interest to a present value to arrive at a future value. 2. Subtracts compound interest from a future value to arrive at a present value. Published tables of compound interest factors are used to solve time value of money problems. It´s easier to refer to a table of factors than to calculate the desired factor from one of the formulas each time you need it. Time value of money problems can also be solved using a financial calculator or spreadsheet software. Essentially, the software calculates the necessary factor and processes the calculation. We approach the subject by first showing how compound interest factors are derived from each of the formulas, then showing how the factors are used to solve various time value of money problems. This approach provides a fundamental understanding of the material and a good basis for later using financial calculators and spreadsheet applications to solve time value of money problems. The six compound interest functions are listed below; the following table briefly describes each function and gives an example of how it might be used. • Future Worth of $1 (FW$1) • Present Worth of $1 (PW$1) • Future Worth of $1 Per Period (FW$1/P) • Sinking Fund Factor (SFF) • Present Worth of $1 Per Period (PW$1/P) • Periodic Repayment (PR) Review the above table as an introduction to the compound interest functions. Note that the first two compound interest functions (FW$1 and PW$1) deal with a single amount, or payment, while the remaining four deal with a series of payments (an annuity). Although all of the compound interest functions have appraisal applications, two are of particular importance because of their central role in the income approach – the present worth of $1 (PW$1) and the present worth of $1 per period (PW$1/P). Using Assessors’ Handbook Section 505 Assessors´ Handbook Section 505 (AH 505), Capitalization Formulas and Tables, contains a set of compound interest factors for use by property tax appraisers. • AH 505 begins with introductory material (pages 1-11) that describes the compound interest functions, shows their formulas, and demonstrates some sample problems. We will cover this material in the presentation. • The remainder of AH 505 (pages 12-109) contains tables of compound interest factors over a range of interest rates for the six compound interest functions. • AH 505 contains compound interest factors at interest rates from 1 to 25%, at 1/2 % intervals, for time periods up to 50 years. If a problem requires a term longer than 50 years, the tables can be extended using certain calculations. • For each interest rate, there is a separate page for either annual or monthly compounding. The frequency of compounding (such as on a monthly basis versus annual) directly affects the present and future values in discounting – see separate lesson on Frequency of Compounding (Lesson 9). Briefly look through AH 505 (Use browser's back button to return to lessons). To use the AH 505 tables, follow these steps: 1. Locate the page at the desired interest rate, selecting the page for either annual or monthly compounding (a common error is using the annual page when the monthly page is required, or vice 2. Find the desired term (number of periods, months or years) by going down the far left side of the page. 3. Go across to the proper column to find the factor for the desired compound interest function. (For solving the examples, we have recreated portions of the AH505 tables. We also provide a link to the actual pages in AH 505. If you go to an actual page, use the zoom feature for the best view.) Example 1: Find the FW$1 factor at 10% for 10 years, given annual compounding. In AH 505: • Search for the page at 10%, annual compounding, (step 1 above); the correct page is page 49. • Go down 10 years and across to column 1 (FW$1) (steps 2 and 3); the correct factor is 2.593742. Example 2: Find the PW$1 factor at 5% for 5 years, given monthly compounding. In AH 505: • Search for the page at 5%, monthly compounding; the correct page is page 28. • Go down 5 years and across to column 4 (PW$1); the correct factor is 0.779205 Example 3: Find the PW$1/P factor at 12.5% for 8 years, given annual compounding. In AH 505: • Search for the page at 12.5%, annual compounding; the correct page is page 59. • Go down 8 years and across to column 5 (PW$1/P); the correct factor is 4.882045
{"url":"http://www.boe.ca.gov/info/tvm/lesson1.html","timestamp":"2014-04-18T08:11:24Z","content_type":null,"content_length":"58425","record_id":"<urn:uuid:766051bc-f7ae-4f2e-9d77-d8ef5313f035>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00650-ip-10-147-4-33.ec2.internal.warc.gz"}
Deriving Structural Hylomorphisms From Recursive Definitions Results 11 - 20 of 33 - Journal of Functional Programming , 2005 "... Monads are commonplace programming devices that are used to uniformly structure computations with effects such as state, exceptions, and I/O. This paper further develops the monadic programming paradigm by investigating the extent to which monadic computations can be optimised by using generalisatio ..." Cited by 15 (7 self) Add to MetaCart Monads are commonplace programming devices that are used to uniformly structure computations with effects such as state, exceptions, and I/O. This paper further develops the monadic programming paradigm by investigating the extent to which monadic computations can be optimised by using generalisations of short cut fusion to eliminate monadic structures whose sole purpose is to “glue together ” monadic program components. We make several contributions. First, we show that every inductive type has an associated build combinator and an associated short cut fusion rule. Second, we introduce the notion of an inductive monad to describe those monads that give rise to inductive types, and we give examples of such monads which are widely used in functional programming. Third, we generalise the standard augment combinators and cata/augment fusion rules for algebraic data types to types induced by inductive monads. This allows us to give the first cata/augment rules for some common data types, such as rose trees. Fourth, we demonstrate the practical applicability of our generalisations by providing Haskell implementations for all concepts and examples in the paper. Finally, we offer deep theoretical insights by showing that the augment combinators are monadic in nature, and thus that our cata/build and cata/augment rules are arguably the best generally applicable fusion rules obtainable. - Coalgebraic Methods in Computer Science, number 44.1 in Electronic Notes in Theoretical Computer Science , 2001 "... We give a necessary and sufficient condition for when a set-theoretic function can be written using the recursion operator fold, and a dual condition for the recursion operator unfold. The conditions are simple, practically useful, and generic in the underlying datatype. 1 ..." Cited by 9 (3 self) Add to MetaCart We give a necessary and sufficient condition for when a set-theoretic function can be written using the recursion operator fold, and a dual condition for the recursion operator unfold. The conditions are simple, practically useful, and generic in the underlying datatype. 1 - Fundamenta Informaticae , 2005 "... Abstract. The subject of this paper is functional program transformation in the so-called point-free style. By this we mean first translating programs to a form consisting only of categorically-inspired combinators, algebraic data types defined as fixed points of functors, and implicit recursion thr ..." Cited by 9 (5 self) Add to MetaCart Abstract. The subject of this paper is functional program transformation in the so-called point-free style. By this we mean first translating programs to a form consisting only of categorically-inspired combinators, algebraic data types defined as fixed points of functors, and implicit recursion through the use of type-parameterized recursion patterns. This form is appropriate for reasoning about programs equationally, but difficult to actually use in practice for programming. In this paper we present a collection of libraries and tools developed at Minho with the aim of supporting the automatic conversion of programs to point-free (embedded in Haskell), their manipulation and rule-driven simplification, and the (limited) automatic application of fusion for program transformation. 1 , 1996 "... It is well known that not all programs are susceptible to automatic program specialization. Traditionally, complicated analyses are performed before actual specialization, in order to uncover as much of the useful program properties as possible. This is particularly the case for automatic program tr ..." Cited by 8 (0 self) Add to MetaCart It is well known that not all programs are susceptible to automatic program specialization. Traditionally, complicated analyses are performed before actual specialization, in order to uncover as much of the useful program properties as possible. This is particularly the case for automatic program transformers that specialize function calls with arguments containing only constructors and variables. We describe a novel approach for achieving better program specialization by preprocessing a program before subjecting it to actual specialization. The preprocessing phase involves simple syntactic analyses and program transformation, which is based on the well-understood fold/unfold strategy with generalization on terms. We ensure the termination of the transformation used in this phase, and outline a proof of its total correctness. Our approach greatly simplifies the task of program specialization in the later stage. Compared to other existing semantics-based approaches, our syntax-based method is considerably simpler, yet still widely applicable. Our approach is formulated for nonstrict first-order programs. It can help obtain programs that are more susceptible to a variety of program specializers, including partial evaluation, deforestation, and the elimination of repeated pattern testing. , 1996 "... List homomorphisms are functions which can be efficiently computed in parallel since they ideally suit the divide-and-conquer paradigm. However, some interesting functions, e.g., the maximum segment sum problem, are not list homomorphisms. In this paper, we propose a systematic way of embedding them ..." Cited by 5 (4 self) Add to MetaCart List homomorphisms are functions which can be efficiently computed in parallel since they ideally suit the divide-and-conquer paradigm. However, some interesting functions, e.g., the maximum segment sum problem, are not list homomorphisms. In this paper, we propose a systematic way of embedding them into list homomorphisms so that parallel programs are derived. We show, with an example, how a simple, and "obviously" correct, but possibly inefficient solution to the problem can be successfully turned into a semantically equivalent almost homomorphism by means of two transformations: tupling and fusion. "... . Systematic parallelization of sequential programs remains a major challenge in parallel computing. Traditional approaches using program schemes tend to be narrower in scope, as the properties which enable parallelism are difficult to capture via ad-hoc schemes. In [CTH98], a systematic approac ..." Cited by 5 (3 self) Add to MetaCart . Systematic parallelization of sequential programs remains a major challenge in parallel computing. Traditional approaches using program schemes tend to be narrower in scope, as the properties which enable parallelism are difficult to capture via ad-hoc schemes. In [CTH98], a systematic approach to parallelization based on the notion of preserving the context of recursive sub-terms has been proposed. This approach can be used to derive a class of divide-and-conquer algorithms. In this paper, we enhance the methodology by using invariants to guide the parallelization process. The enhancement enables the parallelization of a class of recursive functions with conditional and tupled constructs, which were not possible previously. We further show how such invariants can be discovered and verified systematically, and demonstrate the power of our methodology by deriving a parallel code for maximum segment product. To the best of our knowledge, this is the first systematic parall... , 1998 "... Correctness-preserving program transformation has recently received a particular attention for compiler optimization in functional programming [Kelsey and Hudak 1989; Appel 1992; Peyton Jones 1996]. By implementing a compiler using many passes, each of which is a transformation for a particular opti ..." Cited by 4 (4 self) Add to MetaCart Correctness-preserving program transformation has recently received a particular attention for compiler optimization in functional programming [Kelsey and Hudak 1989; Appel 1992; Peyton Jones 1996]. By implementing a compiler using many passes, each of which is a transformation for a particular optimization, one can attain a modular compiler. It is no surprise that the modularity would increase if transformations are structured, i.e. constructed in a modular way. Indeed, the program transformation in calculational form (or program calculation) can help us to attain this goal. - In T Ida, A Ohori, and M Takeichi, eds, Proceedings 2nd Fuji Int Workshop on Functional and Logic Programming, Shonan Village , 1996 "... Program fusion (or deforestation) is a well-known transformation whereby compositions of several pieces of code are fused into a single one, resulting in an efficient functional program without intermediate data structures. Recent work has made it clear that fusion transformation is especially succe ..." Cited by 4 (2 self) Add to MetaCart Program fusion (or deforestation) is a well-known transformation whereby compositions of several pieces of code are fused into a single one, resulting in an efficient functional program without intermediate data structures. Recent work has made it clear that fusion transformation is especially successful if recursions are expressed in terms of hylomorphisms . The point of this success is that fusion transformation proceeds merely based on a simple but effective rule called Acid Rain Theorem [10]. However, there remains a problem. The Acid Rain Theorem can only handle hylomorphisms inducting over a single data structure. For hylomorphisms, like zip, which induct over multiple data structures, it will leave some of the data structures remained which should be removed. In this paper, we extend the Acid Rain Theorem so that it can deal with such hylomorphisms, enabling more intermediate data structures to be eliminated. 1. Introduction Functional programming constructs a complex program ... - Pages 47–71 of: Semantics, Applications, and Implementation of Program Generation , 2001 "... Abstract. Short cut fusion is a particular program transformation technique which uses a single, local transformation — called the foldr-build rule — to remove certain intermediate lists from modularly constructed functional programs. Arguments that short cut fusion is correct typically appeal eithe ..." Cited by 3 (1 self) Add to MetaCart Abstract. Short cut fusion is a particular program transformation technique which uses a single, local transformation — called the foldr-build rule — to remove certain intermediate lists from modularly constructed functional programs. Arguments that short cut fusion is correct typically appeal either to intuition or to “free theorems ” — even though the latter have not been known to hold for the languages supporting higher-order polymorphic functions and fixed point recursion in which short cut fusion is usually applied. In this paper we use Pitts ’ recent demonstration that contextual equivalence in such languages is relationally parametric to prove that programs in them which have undergone short cut fusion are contextually equivalent to their unfused counterparts. The same techniques in fact yield a much more general result. For each algebraic data type we define a generalization augment of build which constructs substitution instances of its associated data structures. Together with the well-known generalization cata of foldr to arbitrary algebraic data types, this allows us to formulate and prove correct for each a contextual equivalence-preserving cata-augment fusion rule. These rules optimize compositions of functions that uniformly consume algebraic data structures with functions that uniformly produce substitution instances of them. 1 , 1996 "... Program fusion techniques have long been proposed as an effective means of improving program performance and of eliminating unnecessary intermediate data structures. This paper proposes a new approach on program fusion that is based entirely on the type signatures of programs. First, for each functi ..." Cited by 3 (0 self) Add to MetaCart Program fusion techniques have long been proposed as an effective means of improving program performance and of eliminating unnecessary intermediate data structures. This paper proposes a new approach on program fusion that is based entirely on the type signatures of programs. First, for each function, a recursive skeleton is extracted that captures its pattern of recursion. Then, the parametricity theorem of this skeleton is derived, which provides a rule for fusing this function with any function. This method generalizes other approaches that use fixed parametricity theorems to fuse programs. 1 Introduction There is much work recently on using higher-order operators, such as fold [11] and build [8, 5], to automate program fusion [2] and deforestation [13]. Even though these methods do a good job on fusing programs, they are only effective if programs are expressed in terms of these operators. This limits their applicability to conventional functional languages. To ameliorate this pr...
{"url":"http://citeseerx.ist.psu.edu/showciting?doi=10.1.1.38.2678&sort=cite&start=10","timestamp":"2014-04-20T07:29:47Z","content_type":null,"content_length":"40980","record_id":"<urn:uuid:87a6e533-28e8-47e4-9c8d-034b5a9f626c>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00116-ip-10-147-4-33.ec2.internal.warc.gz"}
Parameter estimation for robust HMM analysis of ChIP-chip data • We are sorry, but NCBI web applications do not support your browser and may not function properly. More information BMC Bioinformatics. 2008; 9: 343. Parameter estimation for robust HMM analysis of ChIP-chip data Tiling arrays are an important tool for the study of transcriptional activity, protein-DNA interactions and chromatin structure on a genome-wide scale at high resolution. Although hidden Markov models have been used successfully to analyse tiling array data, parameter estimation for these models is typically ad hoc. Especially in the context of ChIP-chip experiments, no standard procedures exist to obtain parameter estimates from the data. Common methods for the calculation of maximum likelihood estimates such as the Baum-Welch algorithm or Viterbi training are rarely applied in the context of tiling array analysis. Here we develop a hidden Markov model for the analysis of chromatin structure ChIP-chip tiling array data, using t emission distributions to increase robustness towards outliers. Maximum likelihood estimates are used for all model parameters. Two different approaches to parameter estimation are investigated and combined into an efficient procedure. We illustrate an efficient parameter estimation procedure that can be used for HMM based methods in general and leads to a clear increase in performance when compared to the use of ad hoc estimates. The resulting hidden Markov model outperforms established methods like TileMap in the context of histone modification studies. 1 Background High density oligonucleotide tiling arrays allow the investigation of transcriptional activity, protein-DNA interactions and chromatin structure across a whole genome. Tiling arrays have been used in a wide range of studies, including investigation of transcription factor activity [1] and of histone modifications in animals [2] and plants [3], as well as DNA methylation [4]. Analyses of these data are usually based either on a sliding window [1,5], or on hidden Markov models (HMMs) [6-8]. Other approaches have been suggested, e.g., by Huber et al. [9] and Reiss et al. [10], but are less Parameter estimates for sliding window approaches as well as hidden Markov models are typically ad hoc. Although there are some notable exceptions in gene expression studies [8,11], no established procedures exist to obtain good parameter estimates from tiling array data, especially in the context of chromatin immunoprecipitation (ChIP-chip) experiments. Attempts have been made to obtain parameter estimates by integrating genome annotations into the analysis [12]. While this may provide good results when investigating transcriptional activity in well studied organisms, it is limited by the quality of available annotations. For ChIP-chip studies the required annotation data is unavailable. A method for the localisation of transcription factors from ChIP-chip experiments by Keleş [13] does obtain the required parameter estimates from the data and allows for variations in length of enriched regions. Methods designed for the analysis of ChIP-chip data focus almost exclusively on the study of transcription factors [[6,7,10], and [13]]. While this is an important class of experiments, ChIP-chip studies are not limited to transcription factors, and the analysis of other ChIP-chip experiments may require new methods. One other area of active research that utilises ChIP-chip experiments is the study of histone modifications and chromatin structure [3]. Although both types of experiment employ the same technology, there are several important differences between them. Most importantly, the 147 bp of DNA bound by a histone complex are considerably longer than the typical transcription factor binding site, and the histone modifications of interest are expected to affect several neighbouring histones. Consequently the ChIP fragments derived from a transcription factor binding site all originate from a small region containing the given binding site while regions affected by histone modifications can be much longer than the ChIP fragments used. As a result of this, the data from histone modification experiments usually contain long regions of interest encompassing several non-overlapping ChIP fragments, rather than the short and relatively isolated peaks produced by transcription factor studies. Here we consider the analysis of data from a histone modification study in Arabidopsis [3]. These data consist of four ChIP samples for histone H3 with lysine 27 trimethylation (H3K27me3) and four histone H3 ChIP samples that act as a control. The aim of this analysis is to identify and characterise regions throughout the genome that exhibit enrichment for H3K27me3. It is desirable to use a method which is specifically designed for the analysis of histone modifications or flexible enough to accomodate the varying length of enriched regions. Furthermore, the method should obtain all parameter estimates from the data without the use of genome annotations and be robust towards outliers. Amongst the methods discussed above TileMap [7] comes closest to these requirements. Although it was developed with transcription factor analysis in mind it is general enough that it should provide useful results for other ChIP-chip experiments. This is emphasised by its application to histone modification [3] and DNA methylation [4] data as well as transtription factor analysis [14,15]. TileMap obtains some, but not all, of the required parameter estimates from the data. To provide a method which meets the requirements oulined above we develop a two state HMM with t emission distributions. All parameter estimates for the model are obtained by maximum likelihood estimation using the Baum-Welch algorithm [16] and Viterbi training [17]. These methods have the advantage that no prior knowledge about parameter values is required and one need not rely on frequently unavailable genome annotations. To assess the performance of our model, we apply it to simulated and real data. Results are compared to those produced by TileMap. The remainder of this article is structured as follows. In Section 2 the hidden Markov model is developed and MLEs for all parameters are derived in Section 4. The performance of the resulting model is assessed in terms of sensitivity and specificity on simulated data in Sections 2.3.3–2.3.6. In Section 2.3.7 the model is used to analyse a public ChIP-chip data set [3] and results are compared to the original analysis of these data. 2 Results and discussion Tiling array data consists of a series of measurements taken along the genome. Typically, microarray probes are designed to interrogate the genome at regular intervals. Design constraints such as probe affinity and uniqueness cause differences in probe density along the genome and can lead to large gaps between probes. Here we assume that the probe density is homogeneous except for a number of large gaps where the distance between adjacent probes is larger than max_gap. In the following analyses we use max_gap = 200 bp. This is identical to the value used by Zhang et al. [3], allowing for a direct comparison of results. Consider a ChIP-chip tiling array experiment with two conditions, a ChIP sample X[1 ]targeting the protein of interest and a control sample X[2]. Each sample X[l ] has m[l ]replicates (l = 1, 2) providing measurements for K genomic locations. The measurements for each probe are summarised by the "shrinkage t" statistic [18]: where $vlk∗$ is a James-Stein shrinkage estimate of the probe variance obtained by calculating and $slk2$ are the usual unbiased empirical variances and $λˆl∗$ is the estimated optimal pooling parameter Other moderated t statistics have been suggested and could be used instead, most notably the empirical Bayes t statistic used by Ji and Wong [7] and the moderated t of Smyth [19]. All of these approaches are designed to increase performance compared to the ordinary t statistic by incorporating information from all probes on the microarray into individual probe statistics. Here we choose the "shrinkage t" because it does not require any knowledge about the underlying distribution of probe values while providing similar performance compared to more complex models [18]. 2.1 Hidden Markov Model To detect enriched regions we use a two state discrete time hidden Markov model with continuous emission distributions and homogeneous transition probabilities (Figure (Figure1),1), i.e., the transition probabilities depend only on the current state of the model. The use of homogeneous transition probabilities assumes equally-spaced probes within each observation sequence as well as a geometric distribution of the length of enriched regions. As discussed above there will be some variation in probe distances. Using a relatively small value for max_gap ensures that the assumption of homogeneity holds at least approximately. The two states of the model correspond to enrichment or no enrichment in the ChIP sample. The model is characterised by the set of states S = {S[1], S[2]}, the initial state distribution p, the matrix of transition probabilities A and the state specific emission density functions f[i], i = 1, 2. The emission distribution of state S[i ]is modelled as a t distribution with location parameter μ[i], scale parameter σ[i], and ν[i ]degrees of freedom. Hidden Markov model for the analysis of ChIP-chip tiling array data. The use of t distributions has the advantage that their sensitivity to outliers can be adjusted via the degrees of freedom parameter, making them more robust and versatile than normal distributions. This is particularly useful when ν is estimated from the data [20]. It should be noted that the y[k ]modelled here are from a t-like distribution (Equation (1)). While this in itself might suggest the use of t distributions for the f[i]s, they are primarily chosen for their robustness. In the following we will refer to this model by its parameter vector θ = (θ[1], θ[2]), where θ[1 ]is the ordered pair (p, A) and θ[2 ]the ordered triple (μ, σ, ν). Given a hidden Markov model θ and an observation sequence Y, it is possible to compute the sequence of states Q = q[1]q[2]...q[K ]that is most likely to produce Y. There are several approaches to obtaining Q [21]. Usually Q is computed either by maximising the posterior probabilities P(q[k ]= S[i]|Y; θ), k = 1, ..., K or by calculating the sequence that maximises P(Q|Y; θ). The latter provides the single most likely sequence of states and can be computed efficiently by the Viterbi algorithm [22]. For the particular model used here both approaches are equivalent. 2.2 Parameter Estimation In this section we will discuss two different approaches to estimate θ for the model described in Section 2.1. The methods under consideration are the EM algorithm, which is usually known as the Baum-Welch algorithm in the context of HMMs, and Viterbi training. While the Baum-Welch algorithm is guaranteed to converge to a local maximum of the likelihood function, it is computationally intensive. Viterbi training provides a faster alternative but may not converge to a local maximum. 2.2.1 Initial Estimates Both optimisation algorithms discussed here require initial parameter estimates. These are obtained from the data by first partitioning the vector of observations Y into two clusters using k-means clustering [23]. From these clusters the location and scale parameters of the corresponding states are obtained as the mean and variance of the observations in the cluster. In the following, ν[1 ]= ν [2 ]= 6 is used as initial estimate for the degrees of freedom parameters. 2.2.2 Baum-Welch Algorithm The Baum-Welch algorithm [16] is a well established iterative method for estimating parameters of HMMs. It represents the EM algorithm [24] for the specific case of HMMs. This algorithm can be used to optimise the transition parameters θ[1 ]as well as the emission parameters θ[2]. Each iteration of the algorithm consists of two phases. During the first phase, the current parameter estimates are used to determine for each probe statistic in the observation sequence how likely it is to be produced by the different states of the model. In the second phase, parameters for the emission distributions of each state are estimated using contributions from all observations, according to the probability that they were produced by the respective state of the model. The state transition parameters are updated in a similar fashion, accounting for the probability of transitions between states based on the observation sequence and the current model. After each iteration this procedure results in a model which explains the observed data better than the previous one, approaching a locally optimal solution. Using this method parameter estimates are updated until convergence is achieved. The details of the resulting algorithm are outlined in Section 4.1. This method of parameter estimation is computationally expensive and time-consuming for a typical tiling array data set. The computing time can be reduced by fixing the degrees of freedom for the emission distributions in advance, thus avoiding the root-finding required for the estimation of these parameters. While this does not provide the same flexibility as estimating the required degree of robustness from the data it reduces the complexity of the optimisation problem. It is noted by Liu and Rubin [25] that attempts to estimate the degrees of freedom are more likely to produce results which are of little practical interest. The impact on classification performance of this choice is investigated in Section 2.3. The formulation of the Baum-Welch algorithm used in this article is based on the description given by Rabiner [21] and on the EM algorithm derived by Peel and McLachlan [26] for fitting mixtures of t 2.2.3 Viterbi Training While the Baum-Welch algorithm described in Section 2.2.2 is expected to provide good parameter estimates, it is computationally expensive. A faster model-fitting procedure can be devised by replacing the first phase of the Baum-Welch algorithm with a maximisation step. This method was introduced in [17] as segmental k-means and is now commonly referred to as Viterbi training. Unlike the Baum-Welch algorithm which allows each probe statistic to contribute to the parameter estimates for all states, Viterbi training assigns each observation to the state that is most likely to produce the given probe statistic. Thus each observation contributes to exactly one state of the model. While each iteration of this method is faster than one iteration of the Baum-Welch algorithm some iterations may decrease the likelihood of the model, thus failing to advance it towards a useful solution. See Section 4.2 for further details on the implementation of Viterbi training used here. 2.3 Testing 2.3.1 Simulated Data To assess the ability to distinguish between enriched and non-enriched probes of the models obtained by the different parameter estimation methods discussed in Section 2.2, we simulate data with known enriched regions. To ensure that the simulation study is providing meaningful results, it is based as closely on real data as possible. To this end, two independent analyses of the H3K27me3 data published by Zhang et al. [3] are carried out, one using TileMap [7], the other based on our model. The result of each analysis is used to generate a new dataset with known enriched regions. See Section 4 for further details. In the following these data are referred to as datasets I and II respectively. Since the simulation procedure is likely to bias results towards the model that was used in the process, we concentrate on the analysis of dataset I, with some results for dataset II presented for comparison. The use of data based on both models allows us to consider their performance under advantageous and disadvantageous conditions. 2.3.2 Performance Measure The performance of different models on these data is determined in terms of false positive and false negative rates at probe level. While the relative importance of false positives and false negatives depends on the experiment under consideration, they are often equally problematic in the context of ChIP-chip experiments, especially when considering experiments which investigate differences between different cell lines or developmental stages, where all incorrect classifications are of equal concern. In this context, we define false positives as probes that are classified as non-enriched by the analysis of the real data but are called enriched in the subsequent analysis of simulated data, and vice versa for false negatives. The output of each model is the estimated posterior probability of enrichment for each probe. In practice, probe calls ("enriched" or "non-enriched") are generated from this posterior probability based on a 0.5 cut-off. For any given model, classification performance will change with the chosen threshold. Thus we assess model performance across a range of cut-offs, reporting the relative number of false positives and false negatives as well as the error rate. The latter is also used to determine the cut-off that minimises incorrect classification results, and model performance is judged on the numbers of incorrect classifications at this optimal cut-off and at the usual 0.5 cut-off, and on the distance between the optimal cut-off and 0.5. The trade-off between sensitivity and specificity provided by the different models is characterised with ROC curves and the associated AUC values. Another measure of interest is the ability to characterise the length distribution of enriched regions correctly. When studying chromatin structure the extent of structural changes is of interest; this is the case for the data studied in Section 2.3.7. This property of the different models is investigated in Section 2.3.6. 2.3.3 Estimating Degrees of Freedom We now consider the performance of both the Baum-Welch procedure and Viterbi training when all model parameters, including the degrees of freedom ν, are estimated from the data. Both parameter estimation methods are used to fit an HMM to datasets I and II, and the performance of resulting models is assessed in terms of the achieved error rate (Figure (Figure2),2), ROC curves (Figure (Figure3)3) and their associated AUC (Table (Table1)1) for both datasets. To assess how well these methods perform in comparison to an established algorithm, we also fit a TileMap model to the two simulated datasets. The three models are compared to each other, as well as an ad hoc model which simply uses, without optimisation, the initial parameter estimates used by the two parameter optimisation methods. When comparing the performance of these models on both simulated datasets, it is important to consider that the simulation procedure introduces a bias towards the underlying Error rate for different models on datasets I and II. Error rate resulting from the different models on dataset I (left) and II (right). When the total number of incorrect probe calls is considered, both parameter estimation procedures outperform TileMap ... ROC curves for different models on datasets I and II. TileMap and the models with Baum-Welch and Viterbi training parameter estimates show similar performance on dataset I (left) with a small advantage for the models with optimised parameters. Comparison ... AUC for different models on both simulated datasets. Estimating all parameters from the data with either the Baum-Welch algorithm or Viterbi training leads to models with high sensitivity, producing fewer false negatives than TileMap for any given cut-off [see Additional file 1]. At the same time they lead to an increased number of false positives [see Additional file 2] compared to TileMap, indicating a slight reduction of specificity. When considering the error rate it becomes apparent that both Baum-Welch and Viterbi training provide a favourable trade-off between sensitivity and specificity. These models reduce the number of incorrect classifications compared to TileMap both at the usual 0.5 cut-off and at the optimal cut-off. Moreover, while the Baum-Welch algorithm and Viterbi training both lead to models with an optimal cut-off close to 0.5 (0.51 and 0.42 respectively), TileMap provides an optimal cut-off of 0.19, indicating that it underestimates the posterior probability of enrichment. This becomes even more apparent when considering the result for dataset II where the optimal cut-off for TileMap is at 0.002 compared to 0.5 for Baum-Welch and 0.41 for Viterbi training. This result suggests that TileMap is more tuned towards avoiding false positives than false negatives. From the above results we estimate that the weight given to false positives by TileMap is approximately 3.2 and 26 times larger than the weight for false negatives on datasets I and II respectively. The ROC curves (Figure (Figure3)3) provide further evidence that the models with MLEs outperform TileMap. Although all three models perform well on dataset I, both parameter optimisation methods lead to better results than TileMap. The benefits of optimising parameter estimates are further highlighted by the performance of the model with ad hoc estimates that is used as starting point for the optimisation procedures. On both datasets, optimised parameters provide a notable increase in performance, with TileMap performing only slightly better than the ad hoc model on dataset II. 2.3.4 Fixed Degrees of Freedom Estimating ν, the degrees of freedom, for t distributions from the data is time-consuming and may not be very accurate, especially for relatively large values of ν. In this section we investigate the effect of fixing ν a priori for both states of the model. Only the case ν[1 ]= ν[2 ]is considered here. The remaining parameters are estimated from the training data using the Baum-Welch algorithm and Viterbi training with ν = 3, 4, ..., 50. For each value of ν, we report the error rate (Figure (Figure4)4) as well as the AUC (Figure (Figure5)5) on the simulated data. Model performance for different choices of ν. The Baum-Welch model (red) performs better for relatively small values of ν while Viterbi training (blue) favours larger ν. For the optimal choice of ν the Baum-Welch parameter ... AUC for different choices of ν and increasing numberof iterations. Change in AUC for different choices of ν (left). The Baum-Welch model performs better for relatively small values of ν while Viterbi training favours larger ν ... For the best combination of ν and cut-off, both parameter estimation methods result in models with a classification performance comparable to the case of variable degrees of freedom (Figure (Figure2).2). While the Baum-Welch algorithm tends to produce models with an optimal cut-off close to 0.5, Viterbi training only achieves this for large values of ν. Notably, the best classification performance of the Viterbi trained model is achieved with 14 degrees of freedom and a 0.37 cut-off compared to 7 degrees of freedom and a 0.49 cut-off from Baum-Welch. This results in a decreased performance of the Viterbi model relative to the Baum-Welch model at the 0.5 cut-off. 2.3.5 Convergence To reduce the time required for parameter estimation it is useful to limit the number of iterations. While each iteration of the Baum-Welch algorithm is guaranteed to improve the likelihood of the model, small changes to the parameter values do not necessarily lead to significant changes in the classification result. Furthermore, Viterbi training is not guaranteed to converge to a local maximum of the likelihood function and a likelihood based convergence criterion may not be appropriate for this method. Here we investigate the convergence of both algorithms based on the error rate and AUC to gauge the number of iterations required to achieve good classification results. Parameter estimation is performed with 60 iterations for both algorithms. Current estimates are used to classify the test data at every 5^th iteration and AUC (Figure (Figure5)5) and error rate (Figure (Figure6)6) are determined. Error rate at optimal and 0.5 cutoff for increasing number of iterations. Parameter estimates obtained by the Baum-Welch algorithm (filled symbols) and Viterbi training (open symbols) improve model performance with increasing nuber of iterations. Viterbi ... The most striking difference in the convergence behaviour of the two methods is that Viterbi training appears to obtain good parameter estimates within a small number of iterations. Further iterations of the algorithm do not improve results substantially, whereas the Baum-Welch procedure provides parameter estimates that are better than the ones obtained by Viterbi training, both in terms of likelihood and classification performance, but takes substantially longer to obtain these estimates. The Baum-Welch algorithm not only requires more iterations than Viterbi training, but the time required for each iteration is also longer. 2.3.6 Length Distribution of Enriched Regions When studying histone modifications one possible characteristic of interest is the length of enriched regions. To assess how accurately the different methods reflect the length distribution of enriched regions, we compare the length of regions predicted by TileMap and by the model (using Baum-Welch parameter estimates) to the length distribution of enriched regions in the simulated data (the "true length distribution"). Note that this length distribution may vary from the one found in real data. Nevertheless this comparison highlights some of the differences between the two models. Quantile-quantile plots of the respective length distributions show that TileMap systematically underestimates the length of enriched regions (Figure (Figure77 (bottom left) and Figure Figure88 (bottom left)). While this effect is relatively small on dataset I there is some indication that it increases with region length and long regions may not be characterised appropriately by TileMap (Figure (Figure77 (top left)). This observation is further supported by the length distribution of enriched regions produced by TileMap on dataset II (Figure (Figure88 (left)). Enriched regions in dataset II are generally longer than regions in dataset I. This difference is not captured by TileMap. Both TileMap and the Baum-Welch trained model produce several regions that are shorter than the shortest enriched region in the simulated data (Figure (Figure77 (bottom)). There are two possible explanations for these short regions. They may be caused by underestimating the length of enriched regions, possibly splitting one enriched region into several predicted regions, or they may represent spurious enriched results produced by the model. In each case there is the possibility that the occurrence of extremely short regions is caused either by an intrinsic shortcoming of the model or by artifacts introduced during the simulation process. Since the simulation relies on TileMap to identify enriched and non-enriched probes it is inevitable that some probes will be misclassified. Subsequently these probes may be included in the simulated data, causing short disruptions of enriched and non-enriched regions. A sufficiently sensitive model could detect these unintended changes between enriched and non-enriched states. Length distribution of enriched regions from dataset I. Quantile-quantile plots comparing length distributions of enriched regions found with TileMap (left) and with the model based on maximum likelihood estimates (right) to the true length distribution ... Length distribution of enriched regions from dataset II. Quantile-quantile plots comparing length distributions of enriched regions found with TileMap (left) and with the model based on maximum likelihood estimates (right) to the true length distribution ... To investigate further which of these is the case, we first examine the number of enriched probes contained in the short regions found by the Baum-Welch model and by TileMap respectively. The model with Baum-Welch parameter estimates found 126 regions with less than 10 probes. These regions contain a total of 866 probes of which 717 are in enriched regions. While this indicates that the majority of short regions is due to underestimating the length of enriched regions, several spurious probe calls remain. TileMap produced 249 regions with less than 10 probes, containing a total of 1781 probes, of which 1753 are in enriched regions. This is strong evidence that almost all of these short regions are caused by underestimating the length of enriched regions, and is consistent with the above observation that TileMap systematically underestimates the length of enriched regions. To investigate whether the spurious short regions produced by the Baum-Welch model are due to an intrinsic shortcoming of the model or are artifacts introduced by the simulation procedure, we turn to real data. Here we focus on enriched regions containing only a single probe, which are most likely to be false positives. On dataset I the Baum-Welch model produced six of these extremely short regions. One of these probes is a true positive from an enriched region containing ten probes, i.e., the length of this region is underestimated by the Baum-Welch model. Of the remaining five probes three are identical, leaving three unique probes to be investigated further. For each of these three probes, we determine its position in the real data and its distance from enriched regions identified by TileMap and by our model (Section 2.3.7). Two of the probes are found to be located close to enriched regions identified by TileMap (142 and 391 bp) and all three probes are contained within enriched regions identified by our model [see Additional file 3]. This suggests that these probes may have been misclassified by TileMap during the original analysis, leading to an overestimation of the number of false positives produced by the Baum-Welch model on dataset I. 2.3.7 Application to ChIP-Chip Data To investigate the performance of our model further, we apply it to the data of [3] and compare the result to the original analysis. Based on the results of the simulation study (Sections 2.3.3–2.3.6) we use the following procedure: 1. Quantile normalise and log transform data; 2. Calculate probe statistics (Equation (2)); 3. Obtain initial estimates (Section 2.2.1); 4. Use 5 iterations of Viterbi training to improve initial estimates; 5. Use 15 iterations of Baum-Welch algorithm to obtain maximum likelihood estimates; 6. Apply resulting model to data to identify enriched regions. This results in the detection of 5285 H3K27me3 regions covering 12.9 Mb of genomic sequence. Of these enriched regions, 3962 (~75%) are overlapping at least one annotated transcript. A total of 4982 or about 18.9% of all annotated genes are found to be enriched for H3K27me3. While most of the enriched regions cover a single gene, some regions are found to contain up to seven genes (Figure 9(b)). Enriched regions are predominantly longer than 1 kb with some extending over more than 20 kb (Figure 9(c)). Analysis of ChIP-chip data. (a) Gene density in areas surrounding genes that contain H3K27me3 enriched regions and genes that do not contain enriched regions. (b) Number of genes found in H3K27me3 regions. While most enriched regions cover a single gene, ... To assess whether there is a difference between regions of the genome that show H3K27me3 enrichment and the rest of the genome, we investigate the density of genes in the neighbourhood of genes that appear to be regulated by H3K27me3, and compare this to the gene density in other regions of the genome. For this purpose we obtain the gene density for the 50 kb upstream and downstream of each gene as (bp annotated as genes)/100 kb. The resulting gene densities for genes with and without enriched regions are summarised in Figure 9(a). There are visible differences between the two distributions which we test for significance with a two sided Kolmogorov-Smirnov test; this results in an approximate p-value of 2 × 10^-15. The significance of this result is further confirmed by a resampling experiment: the smallest p-value obtained from a series of 10000 resampled datasets is 1 × 10^-6. 3 Conclusion With the use of MLEs for all model parameters, our model clearly improves classification performance on simulated data compared to ad hoc estimates, and outperforms TileMap. While our model produced some short regions that appear to be false positives, they are readily explained as a result of the simulation process. Comparison of results on simulated and real data suggests that TileMap produced a large number of false negatives in the original analysis used as the basis for the simulation. Inevitably, these false negatives were selected as part of non-enriched regions during the simulation process. The fact that the model with Baum-Welch parameter estimates was able to identify these isolated enriched probes despite the non-enriched contexts where they appeared emphasises the high sensitivity of the model. TileMap's apparent tendency to penalise false positives more than false negatives clearly contributes to its relatively low performance in our comparisons which are based on the assumption that both types of error are equally problematic. While this is the case for the application considered here, one may argue that false positives are indeed of greater concern in some cases. When this is the case, TileMap's trade-off between sensitivity and specificity may lead to better results. However, it should be noted that the relative weights given to false positives and false negatives by TileMap can vary substantially between datasets. The parameter estimation procedure used for our model on the other hand provides consistent performance at the chosen cut-off. The model-fitting procedure derived from the results of the simulation study (Sections 2.3.3–2.3.6) provides a fast and reliable approach to parameter estimation. This method retains all the favourable properties of the Baum-Welch algorithm while utilising the reduced computing time provided by Viterbi training. The use of MLEs ensures that model parameters are appropriate for the data. Results from the simulation study show that estimating model parameters from the data improves the model's ability to recognise enriched regions of varying length and generally improves classification performance. 3.1 Future Work The analysis of the H3K27me3 data (Section 2.3.7) largely confirms the analysis of [3] although there are some notable differences. Most importantly, the H3K27me3 regions detected by our analysis are longer than the ones determined by TileMap (Figure (Figure10).10). While Zhang et al. [3] found few regions longer than 1 kb, our analysis indicates that over 70% of enriched regions have a length of at least 1 kb, with the longest region spanning over 20 kb. Accordingly we find more regions that extend over several genes (Figure 9(b)). This may have implications for conclusions about the spreading of H3K27me3 regions in Arabidopsis. Length distribution of enriched regions from real data. Length distribution of enriched regions as determined by TileMap (blue) and Baum-Welch (red). Region length is determined in terms of probes per region. Both distributions were truncated at 10 for ... At this stage, the biological significance of the observed difference in gene density in the neighbourhood of enriched and non-enriched genes is unclear. However, it indicates that the two groups of genes differ in a significant way. This suggests that the partition into enriched and non-enriched genes produced by our analysis is indeed meaningful. The hidden Markov model presented in this article uses homogeneous transition probabilities, assuming that all probes are spaced out equally along the genome. To satisfy this assumption at least approximately, we use a fixed cut-off of 200 bp to partition the sequence of probe statistics such that there are no large gaps between probes. This arbitrary cut-off could be avoided by using a continuous time hidden Markov model. 4 Methods 4.1 Baum-Welch Algorithm The Baum-Welch algorithm [16] used to estimate parameters for our model is outlined in Section 2.2.2; further details are given below. Computing the likelihood of the long observation sequences produced by tiling arrays involves products of many small contributions. This typically results in likelihoods below machine precision. To avoid this effect computations are carried out in log-space, using the identity ln(x + y) = ln(x) + ln (1 + e^ln(y)-ln(x)). In the following we use ^ln∑ to denote summations which should be computed via Equation (4). The sequence of probe statistics Y is split into D observation sequences Y ^(d) such that the distance between probes within each observation sequence is at most max_gap and the distance between the end points of different observation sequences is greater than max_gap. The emission distribution of state S[i ]is given as For a given parameter set θ we can obtain new parameter estimates for transition probabilities by calculating Here α[k ]and β[k ]are known as forward and backward variables. For observation sequence d, d = 1, ..., D, they are defined as where 1 ≤ i ≤ N, 1 ≤ j ≤ N, 1 ≤ k <K[d ]and where 1 ≤ i ≤ N, 1 ≤ i ≤ N, k = K[d ]- 1, ..., 1. Note that ln [P(Y ^(d); θ)] is given by $ln⁡∑i=1NαKdd$ We then calculate Combining the estimates from all observation sequences we obtain new parameter estimates for the transition probabilities: Calculations for the re-estimation of θ[2 ]may involve negative values and cannot be carried out in log-space. To obtain the required parameter estimates we first define $ln⁡[τkid]=γkid$ and then compute There is no closed form estimate for ν[i]. To obtain $νˆi$ one has to find a solution to the equation where ψ is the digamma function. Standard root-finding techniques are employed to find a solution to (20). 4.2 Viterbi Training Viterbi training provides a faster alternative to the Baum-Welch algorithm. See Section 2.2.3 for a high level description of the algorithm. Details of the parameter estimation procedure are given below. Instead of calculating the conditional expectation of the complete data log likelihood, this algorithm first computes the most likely state sequence Q given the observation sequence Y and the current model θ. The sequence Y is partitioned according to Q, assigning each observation to the state that it most likely originated from. New estimates for θ[1 ]are then obtained by calculating $aˆij=|{d=1,...,D:qkd=Si and qk+1d=Sj}|∑d=1D(Kd−1).$ Updates for μ and σ are obtained as in Section 2.2.1. The degrees of freedom ν can be either fixed in advance or estimated from the data using Equation (20) by setting $τkid=1$ if $(qkid,qk+1d)= (Si,Sj)$ and $τkid=0$ otherwise. 4.3 Simulated Data In a first step following the original analysis by [3], TileMap [7] is used with the HMM option to define enriched and non-enriched probes. Note that, although this classification of probes is not perfect, it can be assumed that most probes are assigned to the correct group. The length distribution of enriched and non-enriched regions detected by TileMap is used to determine the length distributions for the simulated data after removing all regions that contain less than 10 probes (Figure (Figure10).10). Data are generated by first determining the length of enriched and non-enriched regions from the empirical length distributions and then sampling data points from the respective TileMap generated clusters. Following this procedure, 600 sequences with one to ten enriched regions in each sequence are generated. A second dataset is generated by applying the model described in Section 2. Note that, although this procedure relies on the classifications produced by the respective models, the resampling procedure will place individual probe values in a new context of surrounding probes, which may lead to different probe calls in the analysis of the simulated data. Prior to analysis all data are quantile normalised. 5 Availability The parameter estimation methods used in this article are available as part of the R package tileHMM from the authors' webpage http://www.bioinformatics.csiro.au/TileHMM/ and from CRAN. The simulated data used in this study is available from the authors' web page. 6 Authors' contributions PH conducted the research and wrote the manuscript. DB critically revised the manuscript. GS conceived the project. DB and GS provided supervision to PH. All authors have read and approved the final Supplementary Material Additional file 1: False negative probe calls resulting from different models. For any given cut-off TileMap produces more false negatives than the Baum-Welch and Viterbi trained models. Additional file 2: False positive probe calls resulting from different models. For any given cut-off TileMap produces fewer false positives than the Baum-Welch and Viterbi trained models. Additional file 3: Origin of isolated enriched probes in dataset I. The isolated enriched probes identified in dataset I by the Baum-Welch model originate from enriched regions identified by the Baum-Welch model in the real data. Two out of three probes are located close to enriched regions identified by TileMap. PH is supported by an MQRES scholarship from Macquarie University and a top-up scholarship from CSIRO. The authors would like to thank Michael Buckley for his helpful suggestions. • Cawley S, Bekiranov S, Ng HH, Kapranov P, Sekinger EA, Kampa D, Piccolboni A, Sementchenko V, Cheng J, Williams AJ, Wheeler R, Wong B, Drenkow J, Yamanaka M, Patel S, Brubaker S, Tammana H, Helt G, Struhl K, Gingeras TR. Unbiased Mapping of Transcription Factor Binding Sites along Human Chromosomes 21 and 22 Points to Widespread Regulation of Noncoding RNAs. Cell. 2004;116:499–509. doi: 10.1016/S0092-8674(04)00127-8. [PubMed] [Cross Ref] • Bernstein BE, Kamal M, Lindblad-Toh K, Bekiranov S, Bailey DK, Huebert DJ, McMahon S, Karlsson EK, III EJK, Gingeras TR, Schreiber SL, Lander ES. Genomic Maps and Comparative Analysis of Histone Modifications in Human and Mouse. Cell. 2005;120:169–181. doi: 10.1016/j.cell.2005.01.001. [PubMed] [Cross Ref] • Zhang X, Clarenz O, Cokus S, Bernatavichute YV, Goodrich J, Jacobsen SE. Whole-Genome Analysis of Histone H3 Lysine 27 Trimethylation in Arabidopsis. PLoS Biol. 2007;5:e129. doi: 10.1371/ journal.pbio.0050129. [PMC free article] [PubMed] [Cross Ref] • Zhang X, Yazaki J, Sundaresan A, Cokus S, Chan SWL, Chen H, Henderson IR, Shinn P, Pellegrini M, Jacobsen SE, Ecker JR. Genome-wide High-Resolution Mapping and Functional Analysis of DNA Methylation in Arabidopsis. Cell. 2006;126:1189–1201. doi: 10.1016/j.cell.2006.08.003. [PubMed] [Cross Ref] • Bertone P, Stolc V, Royce TE, Rozowsky JS, Urban AE, Zhu X, Rinn JL, Tongprasit W, Samanta M, Weissman S, Gerstein M, Snyder M. Global Identification of Human Transcribed Sequences with Genome Tiling Arrays. Science. 2004;306:2242–2246. doi: 10.1126/science.1103388. [PubMed] [Cross Ref] • Li W, Meyer CA, Liu XS. A hidden Markov model for analyzing ChIP-chip experiments on genome tiling arrays and its application to p53 binding sequences. Bioinformatics. 2005;21:i274–i282. doi: 10.1093/bioinformatics/bti1046. [PubMed] [Cross Ref] • Ji H, Wong WH. TileMap: create chromosomal map of tiling array hybridisations. Bioinformatics. 2005;21:3629–3636. doi: 10.1093/bioinformatics/bti593. [PubMed] [Cross Ref] • Munch K, Gardner PP, Arctander P, Krogh A. A hidden Markov model approach for determining expression from genomic tiling micro arrays. BMC Bioinformatics. 2006;7:239. doi: 10.1186/ 1471-2105-7-239. [PMC free article] [PubMed] [Cross Ref] • Huber W, Toedling J, Steinmetz LM. Transcript mapping with high-density oligonucleotide tiling arrays. Bioinformatics. 2006;22:1963–1970. doi: 10.1093/bioinformatics/btl289. [PubMed] [Cross Ref] • Reiss DJ, Facciotti MT, Baliga NS. Model-based deconvolution of genome-wide DNA binding. Bioinformatics. 2008;24:396–403. doi: 10.1093/bioinformatics/btm592. [PubMed] [Cross Ref] • Toyoda T, Shinozaki K. Tiling array-driven elucidation of transcriptional structures based on maximum-likelihood and Markov models. The Plant Journal. 2005;43:611–621. doi: 10.1111/ j.1365-313X.2005.02470.x. [PubMed] [Cross Ref] • Du J, Rozowsky J, Korbel JO, Zhang ZD, Royce TE, Schultz MH, Snyder M, Gerstein M. A supervised hidden Markov model framework for efficiently segmenting tiling array data in transcriptional an ChIP-chip experiments: systematically incorporating validated biological knowledge. Bioinformatics. 2006;22:3016–3024. doi: 10.1093/bioinformatics/btl515. [PubMed] [Cross Ref] • Keleş S. Mixture Modeling for Genome-Wide Localization of Transcription Factors. Biometrics. 2007;63:10–21. doi: 10.1111/j.1541-0420.2005.00659.x. [PubMed] [Cross Ref] • Ji H, Vokes SA, Wong WH. A comparative analysis of genome-wide chromatin immunoprecipitation data for mammalian transcription factors. Nucl Acids Res. 2006;34:e146. doi: 10.1093/nar/gkl803. [PMC free article] [PubMed] [Cross Ref] • Sandmann T, Girardot C, Brehme M, Tongprasit W, Stolc V, Furlong EEM. A core transcriptional network for early mesoderm development in Drosophila melanogaster. Genes & Development. 2007;21 :436–449. doi: 10.1101/gad.1509007. [PMC free article] [PubMed] [Cross Ref] • Baum LE, Petrie T, Soules G, Weiss N. A Maximization Technique Occuring in the Statistical Analysis of Probabilistic Functions of Markov Chains. The Annals of Mathematical Statistics. 1970;41 :164–171. doi: 10.1214/aoms/1177697196. [Cross Ref] • Juang BH, Rabiner LR. A segmental k-means algorithm for estimating parameters of hidden Markov models. IEEE Transactions on Acoustics, Speech, and Signal Processing. 1990;38:1639–1641. doi: 10.1109/29.60082. [Cross Ref] • Opgen-Rhein R, Strimmer K. Accurate Ranking of differentially expressed genes by a distribution-free shrinkage approach. Statistical applications in Genetics and Molecular Biology. 2007;6:Article 9. doi: 10.2202/1544-6115.1252. [PubMed] [Cross Ref] • Smyth GK. Linear Models and Empirical Bayes Methods for Assessing Differential Expression in Microarray Experiments. Statistical Applications in Genetics and Molecular Biology. 2004;3:Article 3. doi: 10.2202/1544-6115.1027. [PubMed] [Cross Ref] • Lange KL, Little RJA, Taylor JMG. Robust Statistical Modeling Using the t Distribution. Journal of the American Statistical Association. 1989;84:881–896. doi: 10.2307/2290063. [Cross Ref] • Rabiner LR. A tutorial on Hidden Markov Models and Selected Applications in Speech Recognition. Proceedings of the IEEE. 1989;77:257–286. doi: 10.1109/5.18626. [Cross Ref] • Viterbi AJ. Error bounds for convolutional codes and an assymptotically optimal decoding algorithm. IEEE Transactions on Information Theory. 1967;13:260–269. doi: 10.1109/TIT.1967.1054010. [Cross • Hartigan JA, Wong MA. A K-means clustering algorithm. Applied Statistics. 1979;28:100–108. doi: 10.2307/2346830. [Cross Ref] • Dempster AP, Laird NM, Rubin DB. Maximum Likelihood for Incomplete Data via the EM Algorithm. Journal of the Royal Statistical Society, Series B. 1977;39 • Liu C, Rubin DB. ML estimation of the t distribution using EM and its extensions, ECM and ECME. Statistica Sinica. 1995;5:19–39. • Peel D, McLachlan GJ. Robust mixture modelling using the t distribution. Statistics and Computing. 2000;10:339–348. doi: 10.1023/A:1008981510081. [Cross Ref] Articles from BMC Bioinformatics are provided here courtesy of BioMed Central Your browsing activity is empty. Activity recording is turned off. See more...
{"url":"http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2536674/?tool=pubmed","timestamp":"2014-04-19T12:56:12Z","content_type":null,"content_length":"153557","record_id":"<urn:uuid:a272d029-076d-48f3-9933-6ada02888661>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00439-ip-10-147-4-33.ec2.internal.warc.gz"}
Challenge – First Common Ancestor 2 Mar 11 Challenge – First Common Ancestor Question: How would you find the first common ancestor of two nodes in a binary search tree? First as in the lowest in the tree. Another way to ask is to find the lowest common ancestor of two nodes. Challenge: Do you know the answer to this question? Post in the comments. Answers will be posted March 6th. Meanwhile, check out the challenges from previous weeks here. TreeNode findFirstCommonAncestor(TreeNode root, TreeNode p, TreeNode q) { if (root == null) { return null; if (root == p || root == q) { return root; TreeNode left = findFirstCommonAncestor(root.left, p, q); TreeNode right = findFirstCommonAncestor(root.right, p, q); if ((left == p && right == q) || (left == q && right == q)) { return root; return (left != null) ? left : right; TreeNode findFirstCommonAncestor(TreeNode root, int p, int q) { if (root == null) { return null; if (root.value == p || root.value == q) { return root; if (root.value > p && root.value > q ) { return findFirstCommonAncestor(root.left, p, q); else if (root.value < p && root.value < q ) { return findFirstCommonAncestor(root.right, p, q); else { return root; Thanks Dave and Sunil for pointing out the alternate solution. 19 Responses 1. I have not read the paper yet (thank you Cosmin for posting it), but the posts here are all incomplete. Here are the points to consider and test your proposed algorithm on: 1. Your routine should return null if one or both nodes you are looking for are not in the tree. All posted algorithms fail this test if one of the two values are present and the other is 2. Your routine should work if one of the sought nodes is the ancestor of the other. Again, both proposed solutions in the text fail this, since the moment you find one of the sought nodes the algorithm returns and the children of that node are never examined. 3. Your routine should work if the two sought values are the same. 2. Okay. How to do this problem in an iterative way? One possibility is that the BST is totally unbalanced. NODE *LCA(NODE *t, int sml, int lgr){ if(t == NULL) return NULL; if(sml > lgr) {printf(“sml=%d larger than lgr=%d\n”, sml, lgr); exit(1);} while(t && !(t->data > sml && t->data data > lgr) t = t->left; if(t->data right; return t; 3. Sorry! did not see that the solution was posted already:( 4. It is a binary search tree. So given two nodes say x and y. Assume x x and < y. The algorithm: Starting from root, if it is greater than both x and y, then we check the left child of the root. If the root is less than both x and y, then we check the right child of the root. This check will keep going until find a node z (x<z<y). 5. Why is the official answer an algorithm that runs in time linear in the size of the tree when there are algorithms that use the fact that the tree is a binary search tree to find the answer in time linear in the depth of the tree? 6. Just do in-order traversal of the binary tree, the FIRST node whose value lies between the values of two given nodes is the lowest common ancestor. 7. Recursive solution? Node p1 = leftNode; Node p2 = rightNode; Node ancestor(p1,p2){ if(p1==p2) return p1; // common ancestor if(p1.parent != root) return ancestor(p1.parent, p2); if(p2.parent != root) return ancestor(p1, p2.parent); 8. Following up on my previous post: it’s the smallest/largest (depending on the ordering) element found in /ps/ and /qs/. 9. If the type doesn’t provide any insight in how it’s represented, but provides method for doing common traversals (pre- and postorder in particular), you can: do a preorder traversal and store the nodes before any of the two sought nodes are found, call them /ps/. Then do a postorder traversal and skip all elements while not both sought nodes are seen, call the rest /qs/. Now the LCA is the element present both in /ps/ and in /qs/. Maybe this solution is discussed in the paper above, I’m not sure. It’d be nice if someone could confirm or show that the above algorithm works. 10. It swallowed half my code, so here’s a repost A O(1) space, O(log(n)) time algorithm (if the tree is balanced). data Tree a = Node a (Tree a) (Tree a) | Leaf visit n x1 x2 = if x1 < x2 then visit' n x2 x1 else visit' n x1 x2 visit' Leaf _ _ = None visit' (Node v l r) x1 x2 | v >= x1 && v <= x2 = Some (Node v l r) visit' (Node v l r) x1 x2 | v < x1 = visit' r x1 x2 visit' (Node v l r) x1 x2 | v > x2 = visit' l x1 x2 visit' _ _ _ = None 11. A O(1) space, O(log(n)) time algorithm (if the tree is balanced). data Tree a = Node a (Tree a) (Tree a) | Leaf visit n x1 x2 = if x1 = x1 && v <= x2 = Some (Node v l r) visit' (Node v l r) x1 x2 | v x2 = visit' l x1 x2 visit' _ _ _ = None 12. Do a binary search for node 1 and keep track of every node that you see along the way. Then do the same for node 2. Compare your results and take the smallest node that is found in both sets. 13. DFS from the first root and mark all the nodes as seen. DFS from the second root and take the node at the lowest depth that has a seen value O(n). 14. Given: root = tree root n1 = node1 n2 = node2 //make sure n1 is smaller than n2 if (n1>n2) swap(n1,n2); //find point where n1<root<n2 if (n2left; else if(n1>root) root = root->right; else return root; 15. Two pass algorithm with O(1) memory and O(log n) time. pass 1) start from node n1 and n2, for each go up to the root, to know how deep in tree we are, record height as h1, and h2. pass 2) restart from n1 and n2. for deeper node (say n2, with h2 depth), perform h2-h1 jumps to the parent. Then start synchronously travering up by one in n1 and n2, and stop when you land in some step you have same node on both sides (which will be eventually root). 16. Find the parents of both nodes ordered in their natural order of parenthood. Take the intersection of both sets and pick the smallest element in the intersection. 17. The paper “The LCA problem revisted” solves the your task in O(n) preprocessing time and O(1) time for each query. 18. Repost of code: search(node* n1, node* n2, node* root) { int minValue = min(n1->value, n2->value); int maxValue = max(n1->value, n2->value); while (true) { //check if min and max reside on different sides of root if (minValue <= value && maxValue >= root->value) return root; if (minValue < left) //both smaller, go left root = root->left; //both larger, go right root = root->right; 19. The first common ancestor is the same as the root of the smallest tree containing both nodes. Start at the root, search for the two nodes. Stop when the two nodes would reside in different subtrees. search(node* n1, node* n2, node* root) { int minValue = min(n1->value, n2->value); int maxValue = max(n1->value, n2->value); while (true) { //check if min and max reside on different sides of root if (minValue value && maxValue >= root->value) return root; if (minValue left; //both larger, go right root = root->right; Using Gravatars in the comments - get your own and be recognized! XHTML: These are some of the tags you can use: <a href=""> <b> <blockquote> <code> <em> <i> <strike> <strong> 19 Comments Filed in Challenge, Trees Tagged ancestor, binary, common, first, interview, tree, tricky
{"url":"http://www.mytechinterviews.com/first-common-ancestor","timestamp":"2014-04-17T15:44:27Z","content_type":null,"content_length":"45224","record_id":"<urn:uuid:cf77ef64-9c67-4683-9cd6-8ef3f6e17d73>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00520-ip-10-147-4-33.ec2.internal.warc.gz"}
2. Examples of Random Probability 2. Examples of Random Probability When it comes to audience familiarity, the best examples of random probability can be found in the world of gambling. Therefore, two excellent examples are the lottery, and the game of 5-card poker. Warning: ahoy, there be mathematics here! If you hated math in school, you might have trouble getting through this page. If you think you won't be able to handle it, feel free to run like a coward to Page 3, but be aware that by skipping this page, you are proving that you don't have what it takes to discuss probability at even the most basic level. Keep that in mind if you're a creationist and you intend to write me an E-mail telling me how wrong I am. The Lottery Lottery games are random (by law), so they are obviously a good example of random probability. Let's take an example lottery where you must pick 6 unique numbers in any order from 1 to 49. Remember that the first step is to count the number of possible combinations: there are 49 choices for the first number. You can't pick the same number twice, so there are only 48 choices for the second number, 47 choices for the third number, and so on. Therefore, the total number of possible combinations is 49*48*47*46*45*44 (or 49!/43! on your calculator), which equals approximately 10.07 billion . However, you can pick the 6 numbers in any order, and there are 720 possible ways to arrange 6 numbers (6*5*4*3*2*1, or 6! on your calculator; now you know what that x! button is for). Therefore, the overall probability is 10.07 billion divided by 720 orders, or 13.98 million (probability math short-hand for this whole process is "49 choose 6", or 49C6). Therefore, the odds of any given set of numbers coming up in this type of lottery are roughly 1 in 14 million. And if you play two sets of numbers in the same lottery draw, then your odds of winning are roughly 2 in 14 million, or 1 in 7 million. Note: an important recurring formula in probability is "x choose y", and it looks like this: x choose y = x!/(x-y)!/y! This formula, also expressed as xCy (eg- 49C6) will be used henceforth in lieu of bulky expressions such as (49*48*47*46*45)/(6*4*3*2*1). The exclamation mark stands for "factorial". Fun Fact: if you enter "49 choose 6" in Google Search, it will automatically compute the result for you. Try it! It must be noted that your odds of winning the lottery do not go up if you have been playing every week for the last 10 years. Did you notice that in the above calculation, no mention whatsoever was made of previous plays? That is because previous plays do not factor into the probability equation at all. If your intuition tells you that your 10 years of prior play should count for something, remember that if one is to learn about scientific and mathematical concepts, one must first learn to listen to the equations, not your intuition. Your intuition is the sum total of your life experience up to now, and it is a very unreliable guide when learning about new and unfamiliar things. So now you know how to calculate the odds of drawing a particular number in a lottery. Here's an exercise: if you understood the preceding, then calculate the odds of any given set of numbers coming up in a lottery where you pick 4 numbers from 1 to 39. But in this lottery, the order of the numbers does matter, and you can pick the same number more than once. Click here for the answer. In the game of poker, you have 52 cards: 4 suits (clubs, spades, hearts, and diamonds), with 13 cards (ranked from ace to king, although ace can be either low or high) in each suit (no jokers or other wildcards for now). Remember that the first step is to calculate the number of possible combinations. You can't draw the same card twice, and the order doesn't matter, so if you draw a 5-card hand from a deck of 52 cards, then the math is similar to the first lottery example: "52 choose 5" = 52C5 = 2598960. In order to determine the probability of drawing any particular hand out of those 2598960 possibilities, you must determine how many different examples of that hand exist. For example, a royal flush is the five highest cards in any given suit, from ten to ace, like this example: Click on the cards to see all 4 royal flushes: one for each suit. Since there are 4 royal flushes, the odds of a royal flush are 4 in 2598960, or 1 in 649740. Another way of calculating the odds is For a trickier example, the odds of getting a triple (three of the same number) can be computed by determining how many triples exist in the deck. A triple, also known as "three of a kind", is three cards of the same rank, like this: Click on any card to see all the combinations available. There are 13 possible ranks of triple, from ace to king, and for each rank, there are 4 ways to get a triple out of the four available suits. Therefore, there are 13*4=52 possible triples. But we still have to pick the last two cards, don't we? Remember that there are 52 cards, we've already used up 3, and the 4th card of the same rank is off-limits because we don't want four of a kind, so so there are 48 cards left to choose from for our 1st unknown. For our 2nd unknown, we don't want a card of the same rank as the 1st unknown because that would be a full house (a triple and a double), so that means there are only 44 cards left to choose from. Therefore, there are 48*44 ways to pick our two unknowns which can be arranged in two orders (1-2 and 2-1), so the total number of combinations is 48*44/2, or 1056. Therefore, there are 52*1056=54912 different possible hands containing triples, so your odds of a triple are 54912 out of 2598960, or approximately 1 in 47. So now you have an idea of how to calculate poker odds. It's trickier than lottery odds, because when you work with poker odds, the total number of possibilities is just the beginning. Here's an exercise: if you understood the preceding, calculate the odds of drawing a full house. That's a triple and a double, and if you paid attention when we solved the triple, the answer should be easy. Click here for the answer. If you're feeling confident, you can try two more exercises: first, determine the odds of a straight flush. That's five cards in sequential order, all from the same suit. The lowest card can be an ace (aces can be low or high) but it cannot be a 10, because that would make it a royal flush. Click here for the answer. For a more difficult challenge, try to determine the odds of getting a straight flush if you add two jokers to the deck, and make them both wildcards (a wildcard can be used to fill in for any other card, thus increasing the likelihood of getting a rare hand). Click here for the answer. So what did we learn? Hopefully, we learned how simple probability looks at first, but we also learned how tricky it can get. Think of how easily one could go astray when trying to calculate the odds of a triple. If we forgot that the last two cards could be in any order, we would have forgotten to divide by 2!, and our odds would be off by 100%. Continue to 3. A Series of Unlikely Events Jump to:
{"url":"http://www.creationtheory.org/Probability/Page02.xhtml","timestamp":"2014-04-19T15:27:48Z","content_type":null,"content_length":"24650","record_id":"<urn:uuid:4bbd275c-f811-4908-b593-c592ea30fac6>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00589-ip-10-147-4-33.ec2.internal.warc.gz"}