content
stringlengths
86
994k
meta
stringlengths
288
619
Show that something cannot be a ring February 24th 2009, 12:55 PM #1 Feb 2009 Ring question I have been asked to show that the set of n x n real matrices with determinant 1 is not a ring. I've been fine with this topic so far, but this question stumped me. I'm not sure how I can go about manipulating examples of n x n matrices with the only piece of information that I have being that det(M) = 1. The only thing I can think of is linking it in some way with invertible matrices. Last edited by FractalMath; February 24th 2009 at 11:57 PM. Well, in order for it to be a ring, it would have to be an abelian group under addition right? In particular, it would have to contain the additive identity, but what is the additive identity? What is its determinant? February 24th 2009, 02:23 PM #2 Jul 2008
{"url":"http://mathhelpforum.com/advanced-algebra/75558-show-something-cannot-ring.html","timestamp":"2014-04-18T10:05:06Z","content_type":null,"content_length":"31575","record_id":"<urn:uuid:694e943b-0793-4c87-b12e-52ae95461d16>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00450-ip-10-147-4-33.ec2.internal.warc.gz"}
Pembroke Pines SAT Math Tutor Find a Pembroke Pines SAT Math Tutor ...I have also taken the training with Literacy of Austin group. Teaching English to adults has been a rewarding and interesting experience and allowed me to expand my teaching methods and help the students to reach their potential in a second language. I have taught students from all Latin America and Russia. 16 Subjects: including SAT math, Spanish, chemistry, biology ...I am aware of strategies and skills that are required to build from basic operations to more complex applied math. I have passed the certification exam for Elementary Education through the American Board. I have Housse highly qualified status in Elementary education as well. 33 Subjects: including SAT math, reading, writing, geometry ...Last summer, I took the Dental Admissions Test (DAT) and scored in the 94th percentile in Academic Average (AA), 91st percentile in Total Science (TS), and 94th percentile in Perceptual Ability (PAT). On the Math/Quantitative Reasoning section of the DAT, I scored in the 93rd percentile. I scor... 8 Subjects: including SAT math, chemistry, biology, GED I have been tutoring for over 20 years at the high school level (mainly private tutoring and 6 years at the University level). I am extremely passionate about my students success and will go the extra mile to ensure their learning. My greatest reward in teaching is not the salary, but the success.... 11 Subjects: including SAT math, statistics, geometry, algebra 2 ...During my four years at UM, I spent a considerable amount of time tutoring, both one-on-one and in front of larger groups. I worked for the Academic Resource Center, which provides free tutoring for undergraduate students, for three years tutoring both chemistry and calculus. As a senior, I was nominated for the Excellence in Tutoring Award. 14 Subjects: including SAT math, chemistry, calculus, geometry
{"url":"http://www.purplemath.com/Pembroke_Pines_SAT_Math_tutors.php","timestamp":"2014-04-19T19:50:51Z","content_type":null,"content_length":"24435","record_id":"<urn:uuid:1944bb68-b516-443b-8d51-bf4aebab067b>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00067-ip-10-147-4-33.ec2.internal.warc.gz"}
Foundation Maths supports the UK School Curriculum Early Years Foundation Stage and Key Stage 1. Suitable for children aged 3 to 6 and voiced with a lovely soft English accent giving positive verbal encouragement throughout. Foundation Maths includes 500 maths questions across 4 skill levels in 8 topic areas - Sums can be read out loud with a tap of the speaker button - Help in each section is given with a tap of the help button Eight Topic Areas to Learn: - Learn the Numbers: learn the operator symbols and the numbers 1 to 30 with fruit to count - Learn to Count: Random fruit are shown and the child counts and picks the right number - Number Patterns: Children find the missing number in a sequence of 5 numbers - Write the Numbers: Learn to correctly form the numbers 1 - 9 by tracing dotted number forms - Addition: Learn to add two numbers together - Subtraction: Children learn to take one number from another - Multiplication: One number multiplied by another - Division: One number divided by another Four Levels to Progress Through: - Beginner: Simple questions with answers up to 10 and illustrations to help young children - Intermediate: Questions with answers up to 20 - Advanced: Questions with answers up to 30 - Genius: Algebra questions, the answer is given and the child finds the missing part of the question. Foundation Maths supports the following Early Years learning goals: - Counting and using numbers to at least 10 in familiar contexts - Recognising numerals 1 to 9 - Beginning to understand addition as 'combining' two groups of objects - Beginning to understand subtraction as 'taking away' - Using early mathematical ideas to solve practical problems. Foundation Maths also supports the following Key Stage 1 learning outcomes: - Use the correct language, symbols and vocabulary associated with number and data - Count reliably up to 20 objects at first and recognise, be familiar with the numbers 11 to 20; gradually extend counting. - Create and describe number patterns; explore and record patterns related to addition and subtraction, and then patterns of multiples of 2, 5 and 10 explaining the patterns and using them to make predictions; recognise sequences, including odd and even numbers to 30 then beyond; recognise the relationship between halving and doubling. - Use the symbol '=' to represent equality; solve simple missing number problems [for example, 2 + ? = 6 ] We hope you enjoy Foundation Maths and find it useful in teaching your children some basic maths skills, if you have any suggestions or queries we would love to hear from you! Email contact@readwritephonics.co.uk or tweet @readwritephonic if you have any questions or problems - remember we can't reply to comments! Level up your mathematics skills and become King of Math! King of Math is a fast-paced mathematics game with lots of fun and diverse problems in different areas. Starting as a male or female farmer, you level up your character by answering math questions and improving your total score. New character design and music for each of the ten levels. Collect stars, get achievements and compare your scores against your friends and players all over the world! Playing King of Math is a great way to improve or refresh you mathematical skills and you will have a lot of fun doing it! The mathematics level is about Middle School/Junior High School. The game includes: - Addition - Subtraction - Mixed 1 - Multiplication - Division - Arithmetic - Geometry - Fractions - Powers - Statistics - Equations - Mixed 2 Math BINGO The object of Math BINGO is to practice math facts while playing BINGO! -Choose from 5 games: Addition, Subtraction, Multiplication, Division and Mixed -Choose from 3 different levels of difficulty: Easy, Medium and Hard -Create up to 5 player profiles -Choose from 8 different fun cartoon avatars -Keep track of number of games played by player profile -The Scoreboard keeps track of scores for each game and level -Collect and play with BINGO Bugs when you earn a high score! -Fun bonus game: BINGO Bug Bungee How to Play and Win Math Bingo The object of Math Bingo is to get a pattern of five Bingo Bugs in a row by correctly answering math problems. Math problems are presented at the top of the game screen. Feedback is presented at the bottom of the game screen. Correct solutions to problems answered incorrectly will be displayed. The score in Math Bingo is determined by the time to complete a game plus a two second penalty for each problem answered incorrectly. Time to complete game 0:45 seconds Incorrect answers: 3 Score: 45+6=51 Player Profiles Math Bingo saves player information with custom player profiles. Each player profile consists of a unique name (up to 20 characters) and a fun cartoon avatar. The player profiles display total number of games played, total number of high scores achieved, and total number of Bingo Bugs collected. The guest profile does not store any information and therefore cannot achieve high scores or collect Bingo Bugs. The guest profile is useful for adults who want to play Math Bingo without their scores going into the scoreboard. Selecting a Game There are five different games in Math Bingo: Addition, Subtraction, Multiplication, Division and Mixed. Players will be prompted to select a game type before game play. Selecting a Level There are three levels of game play in Math Bingo: easy, medium, and hard. Players will be prompted to select a game level before game play. Score Board Math Bingo displays high scores in the Score Board. There are four different Score Boards: addition, subtraction, multiplication and division. Each Score Board displays the top three scores for each level: easy, medium, and hard. Bingo Bugs Bingo Bugs are rewards for achieving a high score. A player who earns a high score will collect a Bingo Bug! Players can collect a variety of Bingo Bugs and interact with them by tapping the Bingo Bug button in the player profile or by tapping the My Bingo Bugs button after playing a game. HINT: Tilt your device to make the Bingo Bugs move around, tap on a Bingo Bug to make them giggle and The settings menu can be accessed by clicking the information button on the lower right hand side the introduction screen. The settings menu will allow you to reset all high scores and to delete player profiles. Math BINGO: - Does not contain third-party ads. - Does not contain in-app purchases. - Does not contain integration with social networks. - Does not use analytics/data collection tools. - Does include links to www.abcya.com For more information on our privacy policy please visit: Browse over 1,400 formulas, figures, and examples to help you with math, physics, chemistry and more. Use an expanding list of helpful tools such as a unit converter, quadratic solver, and triangle solver to perform common calculations. * 2nd Place Best Young Adults App - 2011 Best App Ever Awards * * 3rd Place Best Education App - 2012 Best App Ever Awards * Current Main Categories: - Algebra - Geometry - Trigonometry - Linear Algebra - Series & Sequences - Derivatives - Integration - Table of Integrals - Vector Calculus - Differential Equations - Discrete - Probability and Statistics - Physics - Chemistry (includes Periodic Table) - Algorithms - Financial (includes Real Estate) - Prime Numbers - Greek Alphabet - Tools ranging from Algebra to Physics Anything you'd like to see, email us at: info@happymaau.com See our site www.happymaau.com for more information. *Network access only needed for sending feedback. "We were amazed by the large variety of exercises" - Top Kids Apps "A great help to children who are trying to sharpen their math skills" - The iPhone Mom "I like the variety of ways that problems are presented. . . . It gives the kids different concepts related to the arithmetic they’re doing." - GeekDad King of Math Junior is a mathematics game in a medieval environment where you climb the social ladder by answering math questions and solving puzzles. Collect stars, get medals and compete against friends and family. Master the game and become a King or Queen of Math! King of Math Junior is suitable for age 6 and up and introduces mathematics in an accessible and inspiring manner. Its educational strength lies in awakening curiosity and putting mathematics in a fun context. Players are encouraged to think for themselves and see mathematical concepts from different angles by solving problems in many areas. - Counting - Addition - Subtraction - Multiplication - Division - Geometry - Comparing - Measuring - Puzzles - Fractions Kids Maths is a great app perfect for preschoolers to learn about numbers and counting. The kid can have hands on following activities: * Learn Numbers- Choose from Play mode, previous and next buttons * Count and tell * Find missing number * Find bigger number * Find Smaller number * Learn number names. * Find a match- play the popular fun memory game to match numbers * Draw and Learn- Practice to draw the numbers within the stencil with color of your choice. Tags:, kids math, educational games, education game, mathematics, flashcards, number flash cards, puzzle, puzzles, kids game, kid games, draw and learn, fun games, play, colors, colour, colours, match picture, child android apps, preschool, pre school, pre-school, children's, children, application, child, play, learn, education, teach, toddler application, toddlers, learning apps, educational apps, educational games for kids KS1 (Key Stage 1) Maths presents maths exercises for kids aged 5-7 years. Thousands of maths exercise are available to practice which kids will like and will enjoys learning. There are various of maths exercises to develop various mathematical skills required at this age. There are different sets of exercises for primary school Year 1 and Year 2. Year 1 : Basic Counting Counting in Multiples Numbers up to 100 Comparing Numbers Numbers in Words Additions with pictures Subtraction with pictures Additions - Numbers upto 10 Subtraction - Numbers upto 10 Additions - Numbers upto 20 Subtraction - Numbers upto 20 Multiplication with pictures Geometry - 2D Shapes Geometry - 3D Shapes Money - Recongise the coins Money - count the money with 2 coins Money - count the money with 2+ coins Time - Read the Clock (hour and half past the hour... Time - Match the clock and time Year 2: Skip Counting with pictures Skip Counting - Forward/Backward Place Values (tens, units) Number Sequence/Line Comparing Numbers Numbers in words - upto 100 Addition with Objects Additions - A two- digit number and ones Additions - A two- digit number and tens Additions - Two two-digit numbers Additions - Adding three one-digit numbers Additions - Making Tens Additions - Word Problems Subtraction - With Objects Subtraction - A one-digit number from two-digit nu... Subtraction - A two-digit number from two-digit nu... Subtract tens Subtraction - Word Problems Multiplication sentences Multiplication tables (2, 5 and 10) Multiplication tables (up to 10) Division (by 2, 5, and 10) Division sentences Fractions - recognise fractions Fractions - name the fraction Fractions - match the fraction Measurement - Units Measurement - Units for Length Measurement - Units for Weight/Mass Geometry - Identify 2D Shapes Geometry - Identify 3D Shapes Geometry - Properties of 2D Shapes Geometry - Properties of 3D Shapes Geometry - Shapes of everyday objects Money - count the money Time - Read the Clock (hour,half past the hour, quarter past hour) Time - Match the clock and time The principal focus of mathematics teaching in key stage 1 is to ensure that pupils develop confidence and mental fluency with whole numbers, counting and place value. This should involve working with numerals, words and the four operations, including with practical resources [for example, concrete objects and measuring tools ]. At this stage, pupils should develop their ability to recognise, describe, draw, compare and sort different shapes and use the related vocabulary. Teaching should also involve using a range of measures to describe and compare different quantities such as length, mass, capacity/volume, time and money. By the end of year 2, pupils should know the number bonds to 20 and be precise in using and understanding place value. An emphasis on practice at this early stage will aid fluency. Pupils should read and spell mathematical vocabulary, at a level consistent with their increasing word reading and spelling knowledge at key stage 1 ★ Math for kids is either a big task or a milestone. Teaching your child math isn't something that happens overnight. ★ Remember KISS - Keep It Simple, Stupid! Small children aren't going to be able to handle too much complexity with math at this point. Be patient and never go faster than your child can take. ★ Begin teaching them with an interactive activity. Kids Learn Math teaches math with fun. You will be amazed how your kid will start loving math and he / she might be found screaming… Math is fun , Math is fun , Math is fun , Math is fun , Math is fun ! ★ Kids Learn Math has 7 Games: 1. Learn Kids Math ( Kids 1-20 ) 2. Kids Numbers – Learn Counting ( Kids 1-20 Kids Math ) 3. Kids Numbers – Find Largest Number ( Kids 123 Kids Math ) 4. Kids Numbers – Find Smallest Number ( Kids 123 Kids Math ) 5. Kids Numbers - Learn Kids Addition ( Kids 123 Kids Math ) 6. Kids Numbers - Learn Kids Subtraction ( Kids 123 Kids Math ) 7. Math for kids - Kids Memory Match ( Kids 123 Kids Math ) ★ Kids Learn Math ( Kids Math ) is an interactive app which makes learning maths easy, simple and fun which helps kids learn faster.( Kids 1-20 Kids Math ) ★ Kids Learn Math is an interactive educational game through which your child can develop hand-eye coordination and observation skills.( Kids 1-20 Kids Math ) ★ Observation Skills are developed by paying close attention to many details, focus, analysing, reasoning and memory. Kids Learn Math helps your child to do all these.( Kids 1-20 Kids Math ) ★ Features of the game: ✔ It enables your child to learn preschool Math & its basics. ✔ It will also help develop your child’s hand-eye coordination & observation skills. ✔ It will help in improvement of your child’s memory. ★ What does the game offer? ( Math Games for kids ) ✔ Learn through Kids Learn Math & Kids Learn Counting ( Kids Flashcards Kids Math ) - Study the Kids Math ( Kids Flashcards ) and get familiar with numbers and basics related to Math. ✔ Learn - Largest Number , Smallest Number , Kids Addition & Kids Subtraction ( Kids Flashcards Kids Math ) - Find the largest and smallest numbers & learn basics of Kids addition and Kids subtraction ( Kids Flashcards Kids Math ) ✔ Test your memory with Kids Math - Kids Memory Match ( Kids Flashcards Kids Math ) - Test your memory by matching the numbers. ★ How to play? 1. Learn Kids Math ( Math kids Kids Math) - Study the Kids Math dictionary. - Learn the numbers - how it sounds and learn how to spell it. 2. Kids Math – Learn Counting ( Math Games for kids ) - Learn Counting with fun. 3. Kids Puzzle – Largest Number ( Math Games for kids , Kids Puzzle ) - Tap on the largest number from the numbers shown. 4. Kids Puzzle – Smallest Number ( Math Games for kids , Kids Puzzle ) - Tap on the smallest number from the numbers shown. 5. Kids Math – Addition Kids Quiz ( Math Games for kids , Kids Quiz ) - Find the correct answer by adding two numbers. 6. Kids Math – Subtraction Kids Quiz ( Math for kids , Kids Quiz ) - Find the correct answer by adding two numbers. 7. Kids Math - Kids Memory Match ( Math for kids, Kids Quiz ) - Tap on a tile to turn it over and reveal a number. Then tap on the second tile, trying to find the matching number. If the images match, the tiles are removed from the board. ★ It is said that Humans are only fully Human when they Play; well, then kids will be only fully kids when they play. Even better if they learn while playing! Research also shows that kids, who enjoy learning, learn quicker and memorize better than those who are forced to learn and study. So why not make learning fun and play. ★ We at UPUP are trying to do the same – Make your child enjoy learning. ★ UPUP gives you all these- free games for kids, educational games for kids, fun educational games for kids, ESL for kids ★ Keywords: Educational, Children, Preschool , Cognitive Skills, Autistic , Kids Math , Kids Learn Math , Kids Quiz , Kids Puzzle , 123 , 1-20 Maths Formulae lists some of the useful maths formulas that helps you as a quick reference (cheat sheet).Using this App you can access the formulas anywhere and anytime you want.It covers formulas for College Grade/Higher Grade Students/School Students.It lists out all the important formulas/topics in Algebra, Geometry, Trigonometry and Statistics.Review these formulas/concepts regularly and improve your grades. The topics covered in the app include: 1.) ALGEBRA - Elementry Algebra - Basic identities - Boolean Algebra - Conics - Quadratic formula - Exponents 2.) GEOMETRY - Solid Geometry - Plane Geometry 3.) TRIGONOMETRY - Angles - Right angled triangle - Trignometric functions - Pythagorean identity - Double angle identities - Symmetry - Shifts and periodicity 4.) STATISTICS - Probablity - Combinatorics Keywords:formulae, formulas, formula, math, maths,equation, reference, science, equations, math games, math game, math workout, maths help, , maths workout, Mathsbrain, maths kids, math tricks, math tutor, math teacher, math test, maths for kids, math drills, math flash cards, math formulas, math facts, math homework, math magic, math maniac, math reference, math ref, math tricks, math skill, math wizard, brain teaser, math problem solving, math logic, SAT, PSAT, GRE, GMAT, ACT, MCAT,CET,CAT,XAT,UPSC,IAS,10th grade,12th grade,IPS,AIEEE,IIT Jee,MAT, JMET,ATMA,SNAP,NMAT,IIFT,FMS,HSC,SSC The application "math expert" is a collection of formulas out of mathematics and physics. The special feature is that the application can calculate the formulas. The calculation is based on the motto "Tell me what you know, and I will check which calculations are possible." We have a new ICON Solve Math problems and plot functions. Full featured scientific calculator . It can help you solve basic calculations to college calculus. With "Maths Solver" You can solve complex Math problems or plot multiple functions with accuracy and speed. No network access required! which means you don't need internet connected to use its features. You can plot functions and zoom-in , zoom-out (Press back button to make graph full screen by hiding keyboard ) It covers following areas of Mathematics. Basic Algebra Solve equation and system of linear equations Indefinite and Definite Integration Set operations i.e Union, Intersection, Mean, Median, Max, Min Matrix : Determinant, Eigenvalues, EnigenVectors, Transpose, Power Curl and Divergence 2D function plots For full list of features see Catalog and Examples in app. Keywords: Math Calculator, Maths, Equations, Scientific Calculator, Graph Plot, Functions Plot, Maths Calculator What's 51 x 51? Can your child do this in less than five seconds? Well, he really can do! This Mental Maths app helps improve your mental calculation skills through introduction to various methods & exercises. This app has the most exhaustive content around mental maths. With this app, our goal is to provide you easy to read material which you can read and follow at your own pace. Chapters are written in conversational language with lot of pictorial illustrations. Every chapter provides practice exercises. App also has tons of consolidated exercises covering all topics. This is great app for students so they can finish exams within the time limit and yet be assured of good grades. These simple techniques go a long way in building your self confidence in mathematics, which is generally a hard subject. These mental math techniques are not just magic. They have sound base in algebra and uses fundamental principles to simplify calculations. Please send in your feedback and comments to exploreinandroid2010@gmail.com Tags: math games, math workout, maths help, mathskill, maths workout, Mathsbrain, maths for kids, math tricks, math tutor, math teacher, test, brain teaser, maths for kids, math drills, math flash cards, math formulas, math facts, math homework solver, math magic, math maniac, math reference, math ref, math tricks, educational apps, Mathmagician, mathmasia, calculations, learning apps This is a algebra calculator that solves algebra for you, and gives you answers teachers will accept! Unlike other similar apps this app is built directly for students. The answers that appear will be in the form your teacher will want. No more weird decimal answers! Finally an app with answers teachers will accept! This does a variety of algebra calculations to help with your math class. Its like your own personal algebra tutor app. This is great for students in all algebra classes including algebra 2 it will act as a algebra 2 calculator and calculator for all algebra classes. It will act as a calculator to solve algebra problems like Factoring and complete the square. Also Including quadratic equation solver, system of linear equations with two or three equations. pythagorean theorem, with two versions, one for simple problems and one for more advanced problems. It will also solve slope, y-intercept and give the equation in slope intercept form. Simplifying square roots, and is a calculator that calculates square roots, cubed roots and any other root. We have added the foil method and exponents to the list of problems that can be solved. Reduce, add, subtract, divide and mixed number fractions. Finding LCM and GCD (least common multiple and greatest common divider). There are many more solvers included in this app, as well as a list of formulas! This is great for kids and adults its, it can be used to help with math workout and math games. Free updates are constantly being released to improve the app based on your responses! Download now for this special Math Helper Free solves math problems and shows step-by-step solution. Math Helper is an universal assistant app for solving mathematical problems for Algebra I, Algebra II, Calculus and Math for secondary and college students, which allows you not only to see the answer or result of a problem, but also a detailed solution (in full version). [✔] Linear Algebra - Operations with matrices [✔] Linear algebra - Solving systems of linear equations [✔] Vector algebra - Vectors [✔] Vector algebra - Shapes [✔] Calculus - Derivatives [✔] Calculus - Indefinite Integrals (integrals solver) - Only in Full version [✔] Calculus - Limits - Only in Full version [✔] The theory of probability [✔] The number and sequence [✔] Function plotter Derivatives, limits, geometric shapes, the task of statistics, matrices, systems of equations and vectors – this and more in Math Helper! ✧ 10 topics and 43+ sub-section. ✧ Localization for Russian, English, Italian, French, German and Portuguese ✧ Intel ® Learning Series Alliance quality mark ✧ EAS ® approved ✧ More than 10'000 customers all over the world supported development of Math Helper by doing purchase ✧ The application is equipped with a convenient multi-function calculator and extensive theoretical guide ✪ Thank you all helping us to reach 800'000+ downloads of Math Helper Lite ✪ You could also support us with good feedback at Google Play or by links below ✪ Our Facebook page: https://www.facebook.com/DDdev.MathHelper ✪ Or you could reach us directly by email We have plans to implement ● Numbers and polynomial division and multiplication ● Implement new design and add 50+ new problems ● New applications, like Formulae reference for college and university and symbolic calculator MathHelper is a universal assistant for anyone who has to deal with higher mathematics, calculus, algebra. You can be a student of a college or graduate, but if you suddenly need emergency assistance in mathematics – a tool is right under your fingertips! You could also use it to prepare for SAT, ACT or any other tests. Don't know how to do algebra? Stuck during calculus practice, need to solve algebra problems, just need integral solver or limits calculus calculator? Math Helper - not only math calculator, but step-by-step algebra tutor, will help to solve derivative and integral, maths algebra, contains algebra textbooks - derivative and antiderivative rules (differentiation and integration), basic algebra, algebra 1 and 2, etc. Good calculus app and algebra for college students! Better than any algebra calculator with x, algebra solving calculator or algebra graphing calculator - this is calculus solver, algebra 1 and 2 solver, with step-by-step help, calculator, integrated algebra textbooks and formulas for calculus. This is not just a math answers app, but math problem solver! This mathematics solver can solve any math problem - from basic math problems to integral and derivative, matrix, vectors, geometry and much more. For everyone into math learning. This is not just math ref app - this is ultimate math problem solver. Discover a new shape of mathematics with MathHelper! Real math treasure! Train your brain, learn math tricks and amaze others! With Math Tricks, it will be easy to solve math problems in just a few seconds! 11 * 86 = 946 YOU can calculate this in LESS than two seconds! You can learn how to convert temperature from Celsius to fahrenheit in just few seconds! You can learn to multiply any number with 125! Useful Features to become a math genius: - Multiplication Tricks - Division Tricks - Number Tricks - Square Tricks - Remainder Tricks - Date Tricks We constantly update the app with new tricks. Check out the screenshots and see for yourself! *** Issue with our app? - Please send us an email: support@jmtapps.com *** Please kindly RATE & SHARE this App as it is free :-) Like us on facebook : https://www.facebook.com/whitesofinfo Follow us on Twitter: https://www.twitter.com/whitesof Find our apps at : http://www.whitesof.com/apps Use Horizontal swipe on individual formulas if content not fully displayed Mathematical Formulas aggregated in most useful way. Please email us at " contactus@whitesof.com" to add any new formulas or suggestions or topics. The App covers topics like 9)Analytical Geometry 10)Boolean Algebra 11) Series The app is continuously updated with latest details and added with new topics frequently. Math BINGO is a fun way for children to practice math facts on your phone/tablet. Choose from addition, subtraction, multiplication or division BINGO, then select a level of difficulty. More from developer Teach your child to read and write the easy way! Read Write Phonics teaches the 44 phonic sounds of the English language, how to blend the phonic sounds together to form words, and how to write the letters of the alphabet by tracing dotted letter - Read: learn the correct 44 phonic sounds of the English language! - Write: learn to write the letters of the English alphabet, with star rewards! - Phonics: learn how to blend the phonic sounds to form words! Learn by doing: tap and swipe through the App listening to the correct sounds and blending the sounds into words. Fun encouragement and reward stars are given in the Write section. Clear and simple flash cards system with simple intuitive navigation. Clear Sassoon font developed especially for children’s first books and traditional learning aids. Clean design, there are no distracting images or sounds in the Write or Phonic sections. Focus on learning the sounds and learning how to form letters, without distraction. The example word cards in the Read section each use one simple bright word image. Move through the App using the arrow keys, the speaker button replays sounds and the redo button restarts animation. The home key takes you up a menu level and the envelope key will open an email to us at Read Write Phonics. The Read section contains a flash card for each of the 44 sounds, showing the letter or letter combination which represent them. Once the sound has played wait for the letter/s to animate into an example word, tapping the speaker button plays the sound or word again. The Write section shows an example of how letters are written, then allows the child to try tracing the dotted letter, with a prompt where to begin and a prompt if they forget to dot the i or cross the ts! Then wait for your mark, 0 to 3 stars! The App marks the child's effort based on the smoothness of their line and the distance from the ideal line. Encouragement is always given even if no stars are achieved. The Phonics section includes small, two and three letter words and big, four letter and above example words. Tapping the speaker button plays the full word, tapping each letter or letter combination pays the phonic sound. Swiping or stroking through the word plays the sounds in order blending or synthesising the word. (Note: the ck sound has been included three times with cards, c, ck, and k all making the same phonic sound. qu and x also included for completeness but are actually consonant blends (k + w) and (k + s). The letters and letter combinations chosen for read write phonics are the most common ones and the ones taught first for example “ee” in “bee” instead of “y” in “many”) Email contact@readwritephonics.co.uk or tweet @readwritephonic if you have any questions or problems - remember we can't reply to comments! This is a free *demonstration* version of Read Write Phonics and includes 10 sounds, 8 letters to try tracing and 4 words to blend. The full version of Read Write Phonics is £1.49 and teaches the 44 phonic sounds of the English language, how to blend the phonic sounds together to form words, and how to write the letters of the alphabet by tracing dotted letter forms! - Read: learn the correct 44 phonic sounds of the English language! - Write: learn to write the letters of the English alphabet, with star rewards! - Phonics: learn how to blend the phonic sounds to form words! Learn by doing: tap and swipe through the App listening to the correct sounds and blending the sounds into words. Fun encouragement and reward stars are given in the Write section. Clear and simple flash cards system with simple intuitive navigation. Clear Sassoon font developed especially for children’s first books and traditional learning aids. Clean design, there are no distracting images or sounds in the Write or Phonic sections. Focus on learning the sounds and learning how to form letters, without distraction. The example word cards in the Read section each use one simple bright word image. Move through the App using the arrow keys, the speaker button replays sounds and the redo button restarts animation. The home key takes you up a menu level and the envelope key will open an email to us at Read Write Phonics. The Read section contains a flash card for each of the 44 sounds (in the full version), showing the letter or letter combination which represent them. Once the sound has played wait for the letter/s to animate into an example word, tapping the speaker button plays the sound or word again. The Write section shows an example of how letters are written, then allows the child to try tracing the dotted letter, with a prompt where to begin and a prompt if they forget to dot the i or cross the ts! Then wait for your mark, 0 to 3 stars! The App marks the child's effort based on the smoothness of their line and the distance from the ideal line. Encouragement is always given even if no stars are achieved. The Phonics section includes small, two and three letter words and big, four letter and above example words. Tapping the speaker button plays the full word, tapping each letter or letter combination pays the phonic sound. Swiping or stroking through the word plays the sounds in order blending or synthesising the word. Email contact@readwritephonics.co.uk or tweet @readwritephonic if you have any questions or problems - remember we can't reply to comments! This is a free *demonstration* version of Foundation Maths and includes 60 questions and demonstrations of each section. The full version includes 500 questions. Foundation Maths supports the UK School Curriculum Early Years Foundation Stage and Key Stage 1. Suitable for children aged 3 to 6 and voiced with a lovely soft English accent giving positive verbal encouragement throughout. Foundation Maths includes 500 maths questions across 4 skill levels in 8 topic areas - Sums can be read out loud with a tap of the speaker button - Help in each section is given with a tap of the help button Eight Topic Areas to Learn: - Learn the Numbers: learn the operator symbols and the numbers 1 to 30 with fruit to count - Learn to Count: Random fruit are shown and the child counts and picks the right number - Number Patterns: Children find the missing number in a sequence of 5 numbers - Write the Numbers: Learn to correctly form the numbers 1 - 9 by tracing dotted number forms - Addition: Learn to add two numbers together - Subtraction: Children learn to take one number from another - Multiplication: One number multiplied by another - Division: One number divided by another Four Levels to Progress Through: - Beginner: Simple questions with answers up to 10 and illustrations to help young children - Intermediate: Questions with answers up to 20 - Advanced: Questions with answers up to 30 - Genius: Algebra questions, the answer is given and the child finds the missing part of the question. Foundation Maths supports the following Early Years learning goals: - Counting and using numbers to at least 10 in familiar contexts - Recognising numerals 1 to 9 - Beginning to understand addition as 'combining' two groups of objects - Beginning to understand subtraction as 'taking away' - Using early mathematical ideas to solve practical problems. Foundation Maths also supports the following Key Stage 1 learning outcomes: - Use the correct language, symbols and vocabulary associated with number and data - Count reliably up to 20 objects at first and recognise, be familiar with the numbers 11 to 20; gradually extend counting. - Create and describe number patterns; explore and record patterns related to addition and subtraction, and then patterns of multiples of 2, 5 and 10 explaining the patterns and using them to make predictions; recognise sequences, including odd and even numbers to 30 then beyond; recognise the relationship between halving and doubling. - Use the symbol '=' to represent equality; solve simple missing number problems [for example, 2 + ? = 6 ] We hope you enjoy the Foundation Maths DEMO and find it useful in teaching your children some basic maths skills, if you have any suggestions or queries we would love to hear from you! Email contact@readwritephonics.co.uk or tweet @readwritephonic if you have any questions or problems - remember we can't reply to comments! 300 High Frequency Words is a great interactive way to teach your child to recognise and read the first 300 high frequency words of the English language. These are the common words such as: the, and, said, he, I, be, they, go, etc etc. Some of these words are decodable using phonics but some have to be learnt as "Sight" or Red" words. Giving your child a way to quickly learn these will enable them to read more challenging books more quickly. Tapping the screen voices the word in a soft English accent. 300 High Frequency Words is split into 8 manageable sets of words starting at the most common words moving through to the least common. Within each set the words are shown in a different order each time the app is used, this ensures that your child cannot simply learn the order of the words. 300 High Frequency words can be used by a child alone tapping each word to hear it and then tapping the arrow to move through to the next one learning the words at their own pace. Parents can use the app in a similar way to traditional flash cards to support learning and test their child's knowledge, asking them to read the word, then tap the word to confirm or correct, then move to the next. Once a child grows in confidence you can see how quickly they can move through each set of words timing and offering rewards for shorter and shorter times. The quicker they can move through the set the sooner they will be able to use the knowledge in reading and enjoying books. The 300 High Frequency Words are the same words found in Appendix 7 of the Letters and Sounds: Principles and Practice of High Quality Phonics document produced by the UK Department for Education and commonly used in UK schools. 300 High Frequency Words DEMO is a *demonstration* version including 40 of the 300 words in the full App. This is a great interactive way to teach your child to recognise and read the first 300 high frequency words of the English language. These are the common words such as: the, and, said, he, I, be, they, go, etc etc. Some of these words are decodable using phonics but some have to be learnt as "Sight" or Red" words. Giving your child a way to quickly learn these will enable them to read more challenging books more quickly. Tapping the screen voices the word in a soft English accent. 300 High Frequency Words is split into 8 manageable sets of words starting at the most common words moving through to the least common. This DEMO version includes 40 words, 5 sample words in each set. Within each set the words are shown in a different order each time the app is used, this ensures that your child cannot simply learn the order of the words. 300 High Frequency words DEMO can be used by a child alone tapping each word to hear it and then tapping the arrow to move through to the next one learning the words at their own pace. Parents can use the app in a similar way to traditional flash cards to support learning and test their child's knowledge, asking them to read the word, then tap the word to confirm or correct, then move to the next. Once a child grows in confidence you can see how quickly they can move through each set of words timing and offering rewards for shorter and shorter times. The quicker they can move through the set the sooner they will be able to use the knowledge in reading and enjoying books. The 300 High Frequency Words are the same words found in Appendix 7 of the Letters and Sounds: Principles and Practice of High Quality Phonics document produced by the UK Department for Education and commonly used in UK schools.
{"url":"https://play.google.com/store/apps/details?id=uk.co.readwritephonics.maths123","timestamp":"2014-04-18T03:01:57Z","content_type":null,"content_length":"196977","record_id":"<urn:uuid:a171425d-ec08-4835-be79-5d483056c472>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00531-ip-10-147-4-33.ec2.internal.warc.gz"}
David G. Bonagura Jr. - Common Core’s Newer Math The following sentences from the New York Times could have been written today in homage to the Common Core Standards Initiative, the recently adopted national standards for the teaching of mathematics and English-language arts in grades K–12. “Instead of this old method, the educators would stress from the earliest grades the new concept of the unity of mathematics and an understanding of its structure, using techniques that have been developed since the turn of the century. . . . The new concepts must be taught in high school to prepare the students for the type of mathematics that they will find when they reach college.” But the century in question here is the 20th, not the 21st. This article, written in 1961, is not about today’s Common Core, but about New Math, the program that was supposed to transform mathematics education by emphasizing concepts and theories rather than traditional computation. Instead, after a few short years of propagating ignorance of all things mathematical, New Math became the butt of jokes nationwide (the Peanuts comic strip took aim more than once) before it was unceremoniously abandoned. Flash forward 50 years, and Common Core is today making the same : “The standards are designed to be robust and relevant to the real world, reflecting the knowledge and skills that our young people need for success in college and careers. With American students fully prepared for the future, our communities will be best positioned to compete successfully in the global economy.” But what makes us think Common Core will live up to its hype? And how is it substantially different from New Math, as well as subsequent math programs such as Sequential Math, Math A/B, and the National Council of Teachers of Mathematics Standards? These have all failed America’s children — even though each program promised to transform them into young Einsteins and Aristotles. The problem with Common Core is not that it provides standards, but that, despite its claims, there is a particular pedagogy that accompanies the standards. And this pedagogy is flawed, for, just as in New Math, from the youngest ages Common Core buries students in concepts at the expense of content. Take, for example, my first-grade son’s Common Core math lesson in basic subtraction. Six- and seven-year-olds do not yet possess the ability to think abstractly; their mathematics instruction, therefore, must employ concrete methodologies, explanations, and examples. But rather than, say, count on a number line or use objects, Common Core’s standards mandate teaching first-graders to “decompose” two-digit numbers in an effort to emphasize the concept of place value. Thus 13 – 4 is warped into 13 – 3 = 10 – 1 = 9. Decomposition is a useful skill for older children, but my first-grade son has no clue what it is about or how to do it. He can, however, memorize the answer to 13 – 4. But Common Core does not advocate that tried-and-true technique. Common Core’s elevation of concept over computation continues in its place-value method for multiplying two-digit numbers, which is taught in fourth grade. Rather than multiply each digit of the number from right to left, Common Core requires students to multiply each place value so that they have to add four numbers, rather than two, as the final step in finding the product. Common Core’s most distinctive feature is its insistence that “mathematically proficient students” express understanding of the underlying concepts behind math problems through verbal and written expression. No longer is it sufficient to solve a word problem or algebraic equation and “show your work”; now the work is to be explained by way of written sentences. I have seen this “writing imperative” first-hand in my sons’ first- and third-grade Common Core math classes. There is certainly space in their respective books for traditional computation, but the books devote enormous space to word problems that have to be answered verbally as well as numerically, some in sections called Write Math. The reason, we are told, is that the Common Core–driven state assessments will contain large numbers of word problems and spaces for students to explain their answers verbally. This prescription immediately dooms grammar-school students who have reading difficulties or are not fluent in English: The mathematical numbers that they could have grasped are now locked into sentences they cannot understand. The most egregious manifestation of the “writing imperative” is the Four Corners and a Diamond graphic organizer that my sons’ school has implemented to help prepare for the writing portion of the state assessments. The “fourth corner” requires students to explain the problem and solution in multiple sentences. How all this writing helps them with math is yet to be demonstrated. Hence Common Core looks terribly similar to the failed New Math program, which also emphasized “the why rather than the how, the fundamental concepts that unify the various specialties, from arithmetic to the calculus and beyond, rather than the mechanical manipulations and rule memorizations.” Common Core may not completely eschew the “how,” and it may not be obsessed with binary sets and matrices as New Math was, but it is likely to lose the “how” — the content — in its efforts to move the “why” — the concepts — into the foreground. The problem is not that students, including those in the primary grades, should not be presented the basic concepts of mathematics — they should be. But there is a difference between learning basic concepts and expressing the intricacies of true mathematical proofs that Common Core desires. Mathematical concepts require a high aptitude for abstract thinking — a skill not possessed by young children and never attained by many. What will happen to students who already struggle with math when they not only are forced to explain what they do not understand, but are presented new material in abstract conceptual formats? All students must learn to perform the basic mathematical operations of addition, subtraction, multiplication, and division in order to function well in society. Knowing why these operations work as they do is a great benefit, but it is not essential. And in mathematics, concepts are often grasped long after students have mastered content — not before. In trying to learn both the “why” and the “how” in order to prepare for the state assessments, students will not fully grasp either: They will not receive the instructional time needed to learn how to do the operations because teachers will be forced to devote their precious few classroom minutes to explaining concepts, as the assessments require. The “how” of the basic operations, which need to be memorized and practiced over and over, will be insufficiently learned, since Common Core orders teachers to serve two masters. The result is simple arithmetic: Instead of developing college- and career-ready students, we will have another generation of students who cannot even make change from a $5 bill, all courtesy of the latest set of bureaucrat-promoted standards that promise to save American education. By giving concept priority over content, Common Core has failed to learn the history lesson from New Math. Students instructed according to Common Core standards will ultimately know neither the “why” nor the “how,” and we will eventually consign these standards to the ever-expanding dustbin of failed educational initiatives, until the next messianic program is unveiled. And, of course, this doomed educational experiment, like its predecessors, has a high cost: our children’s ability to do math. — David G. Bonagura Jr. is a teacher and writer in New York. He has written about education for Crisis, The Catholic Thing, The University Bookman, and the Wall Street Journal.
{"url":"http://www.nationalreview.com/article/368595/common-cores-newer-math-david-g-bonagura-jr","timestamp":"2014-04-18T15:45:48Z","content_type":null,"content_length":"75785","record_id":"<urn:uuid:be8f2962-e6e2-4eb1-8303-5338fad117d9>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00335-ip-10-147-4-33.ec2.internal.warc.gz"}
Billerica Statistics Tutor Find a Billerica Statistics Tutor ...I also used excel to show that the area and parameter of a regular polygon approaches the area and circumference of a circle as n (number of sides) approaches infinity. I've built on these results to show that the volume of a pyramid approaches a cone as n approaches infinity. I've used tables to show the relation between numbers of sides to the sum of interior and exterior angles. 13 Subjects: including statistics, physics, ASVAB, algebra 1 ...I am a second year graduate student at MIT, and bilingual in French and English. I earned my high school diploma from a French high school, as well as a bachelor of science in Computer Science from West Point. My academic strengths are in mathematics and French. 16 Subjects: including statistics, French, elementary math, algebra 1 ...A full Precalculus contains a great deal of material and is often a daunting task for the student. I have taught at the University level as the professor for Precalculus. I take the topics in Algebra, Exponentials, Logarithms, Trigonometry, and make them easier to understand. 24 Subjects: including statistics, chemistry, calculus, physics ...Consulting me on study skill techniques will make your study time efficient and effective. I was a Special Education Teacher in ELA and Math for students with moderate special needs in grades 1- 5 for Springfield Public Schools 2007-2008. I was JV Guard and Captain for 3 years in High School. 30 Subjects: including statistics, English, reading, writing ...I graduated from MIT and am currently working on a start-up part time and at MIT as an instructor. I miss the one-on-one academic environment and am keen to share some of my knowledge. I would like to teach math (through BC calculus), science (physics, chemistry, biology, environmental), engine... 63 Subjects: including statistics, chemistry, reading, calculus
{"url":"http://www.purplemath.com/Billerica_statistics_tutors.php","timestamp":"2014-04-17T07:53:54Z","content_type":null,"content_length":"24089","record_id":"<urn:uuid:e2c9d7b2-3756-431f-8ccc-156883a9a163>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00439-ip-10-147-4-33.ec2.internal.warc.gz"}
Psychology 100a > Mcauliffe > Flashcards > Psych 100A Midterm #1 | StudyBlue There is a clear order, but the size of the differences between one value and the next is not the same. (A ranking) --> Not often used in research studies. i.e. Agree, Strongly Agree, Disagree, strongly disagree i.e. top 5 fave foods
{"url":"http://www.studyblue.com/notes/note/n/psych-100a-midterm-1/deck/2475762","timestamp":"2014-04-21T10:13:32Z","content_type":null,"content_length":"58400","record_id":"<urn:uuid:39a544dc-810d-4130-9ce6-714fd190bbc9>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00272-ip-10-147-4-33.ec2.internal.warc.gz"}
A000241 - OEIS Verified for n=11, 12 by Shengjun Pan and R. Bruce Richter, in "The Crossing Number of K_11 is 100", J. Graph Theory 56 (2) (2007) 128-134. The values for n >= 13 are probably only conjectural. Also the sum of the dimensions of the irreducible representations of su(3) that first occur in the [n-5]th tensor power of the tautological representation. - james dolan (jdolan(AT)math.ucr.edu), Jun It appears that a(n)=C(floor(n/2),2)*C(floor((n-1)/2),2). [From Paul Barry, Oct 02 2008] From Paul Barry, Oct 02 2008: (Start) We conjecture that this sequence is given by one half of the third coefficient of the denominator polynomial of the n-th convergent to the g.f. of n!, in which case the next numbers are 784,1008,1296,1620, 2025, 2475,... Essentially sum{k=0..n, (-1)^(n-k) floor(k/2)ceiling(k/2)floor((k-1)/2)ceiling((k-1)/2)/2}. (End) One of the most basic questions in knot theory remains unresolved: is crossing number additive under connected sum? In other words, does the equality c(K1#K2) = c(K1) + c(K2) always hold, where c(K) denotes the crossing number of a knot K and K1#K2 is the connected sum of two (oriented) knots K1 and K2? Theorem 1.1. Let K1, . . .,Kn be oriented knots in the 3-sphere. Then (c(K1) + . . . + c(Kn)) / 152 <= c(K1# . . . #Kn) <= c(K1) + . . . + c(Kn). [From Jonathan Vos Post, Aug 26 2009] Ábrego, Bernardo M.; Aichholzer, Oswin; Fernández-Merchant, Silvia; Ramos, Pedro; Salazar, Gelasio. The 2-Page Crossing Number of K_n. Discrete Comput. Geom. 49 (2013), no. 4, 747--777. MR3068573 Jean-Paul Delahaye, in Pour La Science, Feb. 2013, #424, Logique et Calcul. Le problème de la fabrique de briques. (The problem of the brick factory), in French. P. Erdos and R. K. Guy, Crossing number problems, Amer. Math. Monthly, 80 (1973), 52-58. R. K. Guy, The crossing number of the complete graph, Bull. Malayan Math. Soc., Vol. 7, pp. 68-72, 1960. Kainen, Paul C. On a problem of P. Erdos. J. Combinatorial Theory 51968 374--377. MR0231744 (38 #72) D. McQuillan and R. B. Richter, A parity theorem for drawings of complete ... graphs, Amer. Math. Monthly, 117 (2010), 267-273. A. Owens, On the biplanar crossing number, IEEE Trans. Circuit Theory, 18 (1971), 277-280. Shengjun Pan and R. Bruce Richter, "The Crossing Number of K_11 is 100", J. Graph Theory 56 (2) (2007) 128-134. Saaty, Thomas L., On polynomials and crossing numbers of complete graphs. J. Combinatorial Theory Ser. A 10 (1971), 183--184. MR0291013 (45 #107) T. L. Saaty, The number of intersections in complete graphs, Engrg. Cybernetics 9 (1971), no. 6, 1102-1104 (1972).; translated from Izv. Akad. Nauk SSSR Tehn. Kibernet. 1971, no. 6, 151-154 (Russian). Math. Rev. 58 #21749. N. J. A. Sloane, A Handbook of Integer Sequences, Academic Press, 1973 (includes this sequence). N. J. A. Sloane and Simon Plouffe, The Encyclopedia of Integer Sequences, Academic Press, 1995 (includes this sequence). C. Thomassen, Embeddings and minors, pp. 301-349 of R. L. Graham et al., eds., Handbook of Combinatorics, MIT Press.
{"url":"http://oeis.org/A000241","timestamp":"2014-04-21T03:21:40Z","content_type":null,"content_length":"24291","record_id":"<urn:uuid:e3238b0e-e959-47ab-981a-18257a0b17cf>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00616-ip-10-147-4-33.ec2.internal.warc.gz"}
West Boxford Statistics Tutors ...Many computer applications involve having to solve these types of problems, and most use iterative methods of estimation. We studied these methods and then compared them to the methods used by the Mathematica program (Wolfram Research) for accuracy and benchmark performance. We then set out to write our own programming functions to solve numerous classic Differential and Integral 46 Subjects: including statistics, calculus, geometry, algebra 1 ...I earned a 5 on the AP Chemistry exam in high school, and as an undergraduate at MIT I spent two years working in a renowned biomedical engineering laboratory designing iron oxide nanoparticles for cancer imaging and therapy. My work won a $1000 MIT Biomedical Engineering Society's Award for Res... 47 Subjects: including statistics, English, reading, chemistry ...I can show a person how to look online for macros and other solutions when SAS gets confusing. I have taken many classes in biostatistics - basics through survival analysis, logistic and other regression analysis, and some factor analysis. I have about five years of experience working with publ... 18 Subjects: including statistics, English, writing, GRE ...I strive to help students understand the core concepts and building blocks necessary to succeed not only in their current class but in the future as well. I am a second year graduate student at MIT, and bilingual in French and English. I earned my high school diploma from a French high school, as well as a bachelor of science in Computer Science from West Point. 16 Subjects: including statistics, French, elementary math, algebra 1 ...As an editor, I helped proofread the solution manual to Poole's linear algebra textbook. I received my B.S. in Chemical Engineering from Penn State. During my graduation year, I received Penn State's Omega Chi Epsilon award given to the most outstanding chemical engineering student. 23 Subjects: including statistics, chemistry, calculus, writing
{"url":"http://www.algebrahelp.com/West_Boxford_statistics_tutors.jsp","timestamp":"2014-04-18T11:54:32Z","content_type":null,"content_length":"25425","record_id":"<urn:uuid:56faa71d-c922-43b0-a0ab-46f1b0bd8d09>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00052-ip-10-147-4-33.ec2.internal.warc.gz"}
Skillman Math Tutor ...I also took two classes on partial differential equations in graduate school. Differential equations are directly related to calculus (integration vs. differentiation). Viewing it this way helps students understand what they are trying to accomplish when solving these problems. I took a class on linear algebra in college as part of the requirements for my BS in mathematics. 12 Subjects: including algebra 1, algebra 2, calculus, geometry ...Just hard work and sleepless nights, chocolate and coffee, and talking to yourself while brushing your teeth in the morning trying to remember the quadratics formula. ***************************************************** I like taking exams, and I enjoy helping others do the same. I used to wo... 34 Subjects: including precalculus, ACT Math, SAT math, English ...I graduated with honors. I received my Masters degree in Education from Grand Canyon University. I try several different methods to help students understand the concepts they are learning and I try to relate them to concepts that the students already know. 9 Subjects: including calculus, ACT Math, algebra 1, algebra 2 ...My major has required me to take a variety of classes, specifically in math and science. I'm an easy going guy and work really well with kids, considering I have a 10 year old brother that I have tutored for many years. I've also worked at independent tutoring centers in Bergen County that have taught me the intricacies of keeping a child's attention. 30 Subjects: including algebra 2, calculus, elementary math, geometry I have over 08 years of experience as science tutor. I started in NJ at Bergen community college tutoring center as a peer tutor. Since then, I had 3 years as a sciences teacher in a catholic High School: Honor Chemistry, AP Physics, Maths. 12 Subjects: including algebra 1, precalculus, algebra 2, SAT math
{"url":"http://www.purplemath.com/skillman_nj_math_tutors.php","timestamp":"2014-04-17T13:43:32Z","content_type":null,"content_length":"23813","record_id":"<urn:uuid:6de860d2-d386-4c4b-b67b-484854a9756b>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00433-ip-10-147-4-33.ec2.internal.warc.gz"}
Complex Coefficient Polynomial Roots October 5th 2010, 06:25 AM #1 Oct 2010 I am looking for an algorithm which can be used to approximate the complex roots of a complex coefficient polynomial. Such as: (3-4j)x^4 + (1+2j)x + (-4+1) The polynomial is not of any specific order, and could be quite large. Any help would be appreciated as i am completely stuck at the moment. I am looking for an algorithm which can be used to approximate the complex roots of a complex coefficient polynomial. Such as: (3-4j)x^4 + (1+2j)x + (-4+1) The polynomial is not of any specific order, and could be quite large. Any help would be appreciated as i am completely stuck at the moment. First you need to localise the roots, so that you know where to look. The Cauchy bound and its veriants will do this for you. Newton-Raphson will find complex roots if you start close enough. October 6th 2010, 12:14 AM #2 Grand Panjandrum Nov 2005
{"url":"http://mathhelpforum.com/calculus/158482-complex-coefficient-polynomial-roots.html","timestamp":"2014-04-18T00:35:14Z","content_type":null,"content_length":"33666","record_id":"<urn:uuid:cdc28f7d-4705-4499-8883-c8c50b3415b2>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00633-ip-10-147-4-33.ec2.internal.warc.gz"}
Quadratic equation November 25th 2008, 07:26 AM Quadratic equation 12/x+1 for length and 6/4-x for width The rectangle given has a permieter of 14 units. Find the values of x factoring a quadratic equation. So basically i really have no idea, i drew it, but from there Im not sure, help me out please November 25th 2008, 08:01 AM next time, use parentheses (as corrected above) next time to make your problem more clear to those that read it. you should know that P = 2(L+W) ... that means L+W = half the perimeter, correct? so, solve the equation ... $\frac{12}{x+1} + \frac{6}{4-x} = 7$ first step is to get a common denominator.
{"url":"http://mathhelpforum.com/algebra/61589-quadratic-equation-print.html","timestamp":"2014-04-18T10:10:14Z","content_type":null,"content_length":"4697","record_id":"<urn:uuid:07a2f7d3-f874-44d2-a2d0-a8c8175fd202>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00370-ip-10-147-4-33.ec2.internal.warc.gz"}
The Long Straddle and Gamma Scalping The Long Straddle The long straddle is a limited risk strategy used when the tactical option investor is forecasting a large move in the underlying or an increase in the implied volatility or both. I don’t believe that traders should have a strategy bias. Traders should study the market and apply the strategy that best applies to their market forecast. Traders with a strategy bias tend to limit their opportunities. However, I do think that it’s ok to have a few favorite strategies that you employ when you think that the time is right. One of my preferred strategies is the long straddle. For those of you unfamiliar with the strategy, the long straddle is the simultaneous purchase of at the money call and put. For example if XYZ stock is trading at $25, an investor would purchase the 25 strike call and put at the same time. The long straddle is a limited risk, theoretically unlimited profit potential strategy. Why purchase a straddle? A straddle should be purchased when the investor forecasts a large price move in the underlying or an increase in implied volatility, or both. It is easier to forecast a move in the implied volatility than to attempt to predict price movement. One of the best times to put on a long straddle is in the weeks preceding a quarterly earnings report. Implied volatilities can have a tendency to rise in anticipation of the earnings numbers and peak out just prior to the announcement. A straddle purchased before the volatility increase can be profitable. Generally they should be put on 3-4 weeks before the announcement, so they can be purchased when the implied volatility is low and appreciate in value as the implied volatility increases as the earnings announcement date is approached. What are the risks associated with the long straddle? Well the implied volatility may not increase and the stock price may remain very stable. In that case, the enemy of the option purchaser, time decay or theta will take its toll on the position. You may also want to consider the volatility of the broad market before purchasing a straddle. If the broad market has a very high volatility level due to some recent event, it may not be the best time for a straddle. If the VIX is at high levels you might want to consider another strategy. If the VIX is at normal levels or has declined and the investor is forecasting a rise in the VIX and a rise in the implied volatility of an individual equity due to an impending earnings announcement, a long straddle may be an appropriate strategy. Is there any way to offset the effects of the time decay as measured by the theta? One tool that can be employed by aggressive traders is known as gamma scalping. When you purchase a straddle you have the right to buy or sell the underlying at the strike price. So, if you are long or short the stock in the same amount of shares as your equivalent number of straddle contracts, you have protection against an adverse move in your stock position regardless of whether you are long or short. I like to do some scanning to find stocks with a history of implied volatility increase as the earnings date approaches. Then I use a 20-day window and try to locate issues that are trading near a strike price and at a 20 day moving average. I use soft numbers, so the entry can be plus or minus a few cents from either parameter. Then I calculate the daily standard deviation by dividing the annual standard deviation by sixteen. Why use the number 16? The square root of time is used to calculate standard deviations across multiple time frames. There are 256 trading days in a year. The square root of 256 is 15.87, so that is rounded to sixteen. So I have now entered my straddle when the price of the underlying is at a strike and near a moving average. My position is close to being delta neutral. The at the money calls and puts should have roughly the same delta. Again, I use soft numbers, so I consider a delta of -50 to +50 as being delta neutral. Because I have a long position it will be gamma positive, so that means that the delta can change rapidly with movement in the underlying and that I will profit from large price swings. Once the option position is on, if the stock moves up by one standard deviation, I’ll short enough shares to make my position delta neutral again. If the stock moves down by one standard deviation, I’ll take a long position in the stock. Using round lots, I’ll buy enough shares to become delta neutral once again. When the stock returns to its mean, I’ll close the position for a small gain. Remember if the stock moves one standard deviation from its mean, there is a 68% probability that it will return to the mean. If it makes a two standard deviation move, there is a 95% chance that it will return to the mean. By gamma scalping this way, the investor can attempt to earn day trading profits sufficient enough to offset the time decay of the position. When things go right, I have been able to earn enough day trading profits from this method to completely cover the cost of the straddle before it is liquidated around the time of the earnings announcement. I don’t have a hard rule for exiting the straddle. If profits are adequate from the implied volatility increase I may liquidate the entire position just before the announcement. On the other hand I may decide to sell part of the position and keep some through the announcement and try to profit from a large move in the stock. If the position has been paid for by gamma scalping on a daily basis along the way, the investor has a lot of flexibility with their money management strategy at the exit. About sellacalloption Portfolio manager and chief option strategist for Fusion Asset Management. Professional option trader with VTrader Pro, LLC.
{"url":"http://sellacalloption.com/2012/03/31/the-long-straddle-and-gamma-scalping/","timestamp":"2014-04-21T14:41:06Z","content_type":null,"content_length":"64957","record_id":"<urn:uuid:d6a01bc4-7a8f-464e-a6a1-bf0ee388ff12>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00549-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculate a Ratio in Excel In Excel, if you divide 2 by 8, the result is 0.25. if you format the cell as a fraction, the cell might show 1/4 as the result. What if you want to show the result as a ratio? How can you get the cell to show 1:4 instead of 1/4? There may be other ways, but here's the formula that I used. =B2/GCD(B2,C2) & ":" & C2/GCD(B2,C2) This formula requires that the Analysis ToolPak be installed, in Excel 2003 and earlier versions. It divides each cell by the greatest common divisor (GCD), and puts a colon between the two numbers. Would you use a different formula? __________________ Share and Enjoy Thank! I have an immediate use for this. Works well if numbers have a common denominator or divisor. Without a GCD the actuall numbers display for example 17 and 20 will display as 17:20 Any suggestions on how to resolve this? Thanks for the info Debra. IIt was good to be able to follow your example. Some of my numbers haven't divided easily either but it helped to have the ratio formula as a starter and benchmark. 17:20 (17/20) can't be reduced anymore. What needs to be resolved? Elizabeth – when the result is irreducible, just estimate – if you can get away with it : 17:20 = 17/20 = 0.85 = 1:1.85. Not good enough for precision engineering but adequate for other applications. If you require the value to be precisely as what excel calculates, just replace the GCD function with MIN stastical function. The only thing you need to take care is that the numbers should be in ascending order, if you want it in the format "1:XX". Otherwise, the result would be "XX:1? if the numbers are in descending order. And totbean, just for your information 1:1.85 is not the same as 0.85. "0.85 = 1:1.1764? I am not so good at maths @ Debra: can u pls give any other formula...the formula u gave here is not working Great timp Stya. How do you control the number of decimal places the MIN statistical function calculates to? Something in the cell format? I meant 'tip' :[ Sorry. John, Cell format is something that is used to control our view what excel does. so there is no way that you can control that using cell format. Even you change the format, the result does not vary. what you can do is to use the "Round" function with either "up" or "down" or as it is to control the result. One another way to keep control over excel calculation is to change it from excel options itself, the precision calculation. I am not sure about this, but did come across elsewhere. Hi Debra, I tried your formula and it did not work. It is "#NAME?" instead of a figure. And no, my quotes are not the curley ones, they are the straight ones just like yours. Please advise. Thank You:) Hi Debra, Thanks a million. It works now and this is great:) If you know the largest number will never have more than 7 digits in it (seems to be an Excel limit for this method), I think you can use the following formula (which does not require the Analysis ToolPak add-in) to calculate your ratio... A **quick** test seems to show that I don't need all those # signs in the numerator for the formula I posted earlier, this seems to work the same way... Rick- can you make it a 1:XXX with the above formula?
{"url":"http://blog.contextures.com/archives/2009/01/16/calculate-a-ratio-in-excel/","timestamp":"2014-04-19T06:52:11Z","content_type":null,"content_length":"103742","record_id":"<urn:uuid:38a52758-3ec0-48a2-b461-68287e9eb44b>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00248-ip-10-147-4-33.ec2.internal.warc.gz"}
Another random problem Post reply Another random problem a point is randomly selected with a rectangle whose vertices are (0,0), (2,0), (2,3) and (0,3). What is the probability that the x-coordinate of the point is less than the y-coordinate? Re: Another random problem I think it is 1/3 The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: Another random problem In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Another random problem Hi bobbym how did you get that? The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: Another random problem x ∈ [0,2], y ∈ [0,3] both are uniform distributions. There are 3 y's for every 2 x's. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Another random problem Hi bobbym i think i switched x and y coordinates. The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: Another random problem Or the answer is the area enclosed by the rectangle on top and the line y = x. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Another random problem Hi bobbym yes that's how i did it,i just flipped the rectangle around the y=x line. The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: Another random problem Hi anonimnystefy; It is late and time for you to rest. I will see you tomorrow. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Another random problem The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: Another random problem Is it not 2:20 AM? In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Another random problem Yes it is. The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: Another random problem Yawning, sleepy eyelids, a general buildup of toxins and a lowering of body temperature all signalling the need for sleep... In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Another random problem And yet math is awaiting. The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: Another random problem Let the problems of the day be sufficient for the day. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Another random problem Don't understand. The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: Another random problem Math will wait. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Full Member Re: Another random problem actually, bobbym The point (x,y) satisfies x<y if and only if it belongs to the shaded triangle bounded by the lines x=y,y=2 , and , x=0 the area of which is 2. The rectangle has area 6, so the probability in question is 1/3 I see you have graph paper. You must be plotting something Re: Another random problem Hi cooljackiec Bobbym's answer is correct. Take a closer look and try plotting a few values fir (x,y) to see which area the points will belong. And, welcome to the forum! The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: Another random problem Hi cooljackiec; The area above the triangle that the line y = x makes is twice as large as the area below the triangle. y is a 2 to 1 favorite. There is only one probability that is twice its complement. That is 2 / 3. The probability the P(y > x ) = 2 / 3 so P(x < y ) = 2 / 3. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Post reply
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=205346","timestamp":"2014-04-21T02:25:24Z","content_type":null,"content_length":"33288","record_id":"<urn:uuid:845e38dd-e8ea-433a-86fa-e371c78e7aad>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00139-ip-10-147-4-33.ec2.internal.warc.gz"}
Arbitrary Gridlines Don't you wish you had more control over Excel's chart gridlines? Wouldn't you like Excel to draw gridlines where you want them? There isn't a lot of built in support for this, but over the years people have learned a few tricks to generate charts like this: Here is how to get your own custom gridlines, with arbitrary spacing (and arbitrary labels). You can use this technique for many purposes: • Show target or cut off levels for the plotted values. • Show statistics, such as mean, control limits, etc. • Generate log or probability gridline scales with your own scale parameters. See "Flexible Log Axes" at http://www.tushar-mehta.com This technique is closely related to that in Arbitrary Axis Scale, which demonstrates how to put axis ticks and labels at arbitrary positions along an axis. I use a slightly different approach in Add a Horizontal Line to a Column Chart. Chart 1 is a scatter chart showing 40 randomly generated data points. The data I will use to generate my gridlines (and labels, while I'm at it) is shown below: Axis X Axis Y Label 0 0.7 alpha 0 1.2 beta 0 1.9 gamma 0 2.2 delta 0 3.1 epsilon 0 3.6 eta 0 4.2 theta To make gridlines parallel to the X axis, all my X values are equal to the Y axis minimum (zero in this case). The Axis Y values correspond to the places I want my axis ticks. (To generate gridlines parallel to the Y axis, put the values under X and zeros--or Y axis minimums--under Y.) Chart 2 shows the dummy gridline data plotted, as a series of magenta squares along the Y axis. In Chart 3, I have added positive X Error Bars to my dummy series, long enough to stretch across the plot area. I also manually set the X axis maximum; adding the error bars causes Excel to extend the axis scale. (The error bars are magenta for clarity). In Chart 4, I have added dummy labels to the gridlines, using the technique described in Arbitrary Axis Scale. The easiest way to get the grid labels is to apply data labels to each point in the dummy series, then change the label to display the desired text. To do this easily, you need Rob Bovey's XY Chart Labeler, a free and absolutely must have add-in available from http://www.appspro.com . It's compatible with every version of Excel since 97 (and I think there's also an earlier version, but even I use 97). Use the Labeler to add the 'Labels' to the dummy series added above, aligned to the left of the points. See the magenta labels in Chart 4. In Chart 5 I have cleaned things up for presentation. I removed the original (default) horizontal gridlines. I formatted my custom gridlines so they were hairline width (like the default ones) and I removed the crossbar (the "T") at their ends. In the patterns tab of the Format Y Axis dialog, I set the major ticks, minor ticks, and tick labels to "none". I adjusted the size of the plot and the X axis scales. I changed all the magenta stuff to black, and formatted the dummy series ("Grid Y") to have no marker. I removed the "Axis Y" entry from the legend. (Single click twice--don't double click--on the legend entry until only the entry is selected, then press delete.) Now wasn't that easy!? DummySeries.zip is a zipped Excel file with an easy demonstration of the steps to create arbitrary gridlines, or an arbitrary axis (see Arbitrary Axis Scale). My Simulated Probability Chart Example (below) is an application of the Arbitrary Axis Scale and Arbitrary Gridlines techniques. The Reciprocal Axis Chart is another implementation of these techniques. This zipped Excel file has yet another example of a chart using an arbitrary axis scale and gridlines. The data is sparse for early times and dense later, so neither a Line chart nor an XY Scatter chart will do. Arbitrary Gridlines and Axis Labels shows how to improve the appearance of an axis and its related gridlines using this technique. Peltier Technical Services, Inc., Copyright © 2013. All rights reserved. You may link to this article or portions of it on your site, but copying is prohibited without permission of Peltier Technical Services. Microsoft Most Valuable Professional My MVP Profile
{"url":"http://peltiertech.com/Excel/Charts/ArbitraryGridlines.html","timestamp":"2014-04-20T18:23:32Z","content_type":null,"content_length":"14710","record_id":"<urn:uuid:9bc31833-a1d9-4bef-af44-7df5dd1e915c>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00238-ip-10-147-4-33.ec2.internal.warc.gz"}
C++ Operator: How to deal with operator overloading? September 29th, 2006, 04:28 PM #1 Elite Member Power Poster Join Date Feb 2005 "The Capital" C++ Operator: How to deal with operator overloading? Q: How to deal with operator overloading? A: Operator overloading looks like a high hill for starters but the key point in their understanding is not considering operator overloading anything other/different from a simple function. This helps a lot especially for beginners! Consider them normal functions with pre-defined prototypes (for syntactical reasons/benefits while putting them to use so that they just look the way they are used for the fundamental types). The arithmetic operators work on two operands, for example +, -, /, *, etc. Since there are two things that they work on the function needs to know what those two things(operands) are. The operands are passed to them as arguments to the function. For member functions we show only one argument because of the implicit "this" argument. So, member overloads work on "this" and the second argument commonly named as "rhs" from right-hand-side (of course, you are free to name it differently). As for non-member overloads, there is no "this" involved, hence you give two arguments. The operators would work for your types as the same way they work with the pre-defined types (like say any integral type). They will work for the same way for your class' objects as well. That's it! (Note that there are also operators that work on just one operand ++, --. Please refer to this FAQ - C++ Operator: How to overload postfix increment and decrement operators?) That's all there is in operator overloading 101 - the basics. Operator overloading is not something more than mere functions. Q: How to better overload arithmetic operators? A: It is recommended to write operator 'op' in terms of their operator 'op='. That is, operator+ in terms of operator+=, operator- in terms of operator-= and similarly operator* and operator/. There are basically two advantages of doing it that way: 1. You would need to only maintain 'op=' version for your classes. 2. There could be a gain in performance owing to Return Value Optimization or RVO (see references). Take a look at the following sample: // just need to maintain this function... template<typename T> T& operator+=(const T& rhs) // do stuff.. return *this; // no maintenance required... once += is maintained well template<typename T> const T operator+(const T& lhs, const T& rhs) return T(lhs)+=rhs; //RVO // first argument by value template<typename T> const T operator+(T lhs, const T& rhs) // copy constructor called as well return lhs+=rhs; //no RVO as the return object is named There is no problem with having overloads independently but why ignore the above mentioned advantages? 'operator+=' and 'operator+' can be completely independently defined as well. But don't you think that is a kind of rework/duplicacy (writing same logic twice) when there is an alternative way that may give you an extra bit of performance and also keep your worries away while maintaining the code that you only need to modify 'operator+=' when you need to and leave operator+ untouched? Another point to note above are two versions of 'operator+'. You only need 1 of those. Even if 'operator+' is or is not defined in terms of 'operator+=' what happens is the object gets "named" and hence all possibilities of "un-named RVO" are lost. There could be gains on certain compilers due to "named RVO". In that case (named RVO), both would be equivalent. And the conclusion that we can derive is that however there (while comparing first argument passed as 'T' or 'const T&') may be no performance boost on such compilers (with both named and unnamed RVO) still you get the advantage related to code maintainance. Doing it this way also makes it unnecessary to make the overloads 'op' as friends. Q: Why do we need operator overloads as friends? A: There are basically 2 options - member overloads or non-member overloads. Non-static member functions have this implicit "this" pointer as an argument. Refer to this FAQ - C++ General: What is the 'this' pointer?. This makes functions that require the first argument to be something different that the same object's type to be written as members. For example, consider 'operator<<'. It is usually overloaded for 'ostream' operations. The prototype is similar to this: template <typename T> std::ostream& operator<<(std::ostream& ostr, const Vector<T>& vect); It could also be written the following way because there is no "special" restriction about keeping 'std::ostream&' as the first operand/argument: template<typename T> std::ostream& operator<<(const Vector<T>& vect, std::ostream& ostr); But now let us see how that affects us. For the first case, you can use the operator the following way: std::cout << Vector<int>(); //considering class template Vector<T> has a default constructor For the second definition of 'operator<<', you would not be able to use the above syntax. The primary objective of operator overloading is to keep the semantics as close to their counter-parts for fundamental types as possible. For the second version, we would need to call the operator as follows: operator<<(Vector<int>(), std::cout); which is not we want. Otherwise, what's the use of operator overloading when we can even write a function 'void DisplayVectorContents()' to do that? Hence, 'operator&lt< cannot be a member - it has to be a non-member for us to be able to specify the first argument as 'std::ostream'. Take a look at the following working sample for // forward declaration template <typename T> class Vector; template <typename T> std::ostream& operator<<(std::ostream&, const Vector<T>&); template <typename T> class Vector std::complex<T> mycomplex; Vector() {} Vector(T i_component, T j_component) : mycomplex(i_component, j_component) {} // member operators Vector& operator+=(const Vector& rhs) return *this; Vector& operator-=(const Vector& rhs) return *this; Vector& operator*=(const Vector& rhs) return *this; Vector& operator/=(const Vector& rhs) return *this; friend std::ostream& operator<< <T>(std::ostream&, const Vector<T>&); template <typename T> const Vector<T> operator+(const Vector<T>& lhs, const Vector<T>& rhs) return Vector<T>(lhs)+=rhs; template <typename T> const Vector<T> operator-(const Vector<T>& lhs, const Vector<T>& rhs) return Vector<T>(lhs)-=rhs; template <typename T> const Vector<T> operator*(const Vector<T>& lhs, const Vector<T>& rhs) return Vector<T>(lhs)*=rhs; template <typename T> const Vector<T> operator/(const Vector<T>& lhs, const Vector<T>& rhs) return Vector<T>(lhs)/=rhs; template <typename T> std::ostream& operator<<(std::ostream& ostr, const Vector<T>& vect) ostr << "i component : " << vect.mycomplex.real() << "\n"; ostr << "j component : " << vect.mycomplex.imag() << "\n"; return ostr; int main() Vector<double> first_vector(10, 100); Vector<double> second_vector(10, 100); std::cout << first_vector; std::cout << second_vector; std::cout << "Sum : \n" << (first_vector+second_vector); std::cout << "Diff : \n" << (first_vector-second_vector); std::cout << "Prod : \n" << (first_vector*second_vector); std::cout << "Div : \n" << (first_vector/second_vector); return 0; You can remove the templatization of the code if it bothers you and write 'class Vector' for a type where 'T' is double. 1. Return value optimization at comp.lang.c++.moderated 2. Named return value optimization at msdn.microsoft.com 3. Operator overloading Last edited by cilu; April 16th, 2007 at 02:48 AM.
{"url":"http://forums.codeguru.com/showthread.php?401691-C-Operator-How-to-deal-with-operator-overloading&mode=hybrid","timestamp":"2014-04-18T22:04:08Z","content_type":null,"content_length":"73310","record_id":"<urn:uuid:2ad2e0b8-892c-456a-980b-24ff30485352>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00532-ip-10-147-4-33.ec2.internal.warc.gz"}
Question about the graph of averages September 7th 2012, 07:18 AM #1 Sep 2012 Question about the graph of averages So I was studying the graph of averages for a scatterplot (when you plot the average of all y-values corresponding to a single x-value) and my book said that if the graph of averages falls on a straight line, then that line is the regression line. However, I'm not sure when the graph of averages would fall on a straight line. Would this only occur if the scatterplot itself had a correlation coefficient of 1 or -1? Follow Math Help Forum on Facebook and Google+
{"url":"http://mathhelpforum.com/statistics/203059-question-about-graph-averages.html","timestamp":"2014-04-19T03:06:21Z","content_type":null,"content_length":"28945","record_id":"<urn:uuid:837535b2-1d88-4161-aa5e-f99ce700fdba>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00652-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] Formalization Thesis T.Forster@dpmms.cam.ac.uk T.Forster at dpmms.cam.ac.uk Sat Dec 29 06:45:27 EST 2007 Tim, Thank you for your interesting post. It has unleashed a storm of discussion, as i knew it would - none of which have i had time to read - tho' i am looking forward to it. I am risking contributing this post without having read the rest of the correspondence beco's i suspect that the point i am about to make will be made by no-one else except possibly Randall Holmes, and he seems to be off-line. I am thinking about the attitude to the paradoxes taken by ZF and its congenors. The basic message it brings is that there is a simple uniform explanation of the paradoxes, and that is the error of thinking that the problematic objects are sets. (As you know, i am a student of a set theory that doesn't take this point of view, so of course i would be saying all this wouldn't i!) We should remember that altho' the non-sethood of some of these problematic collections is a theorem of pure logic (LPC), the non-sethood of - for example - the universe - is not. Perhaps this difference matters..? Chow's principle could be false if there are genuine mathematical facts about some of these large dodgy collections (V for example) that do not lend themselves to representation as facts about wellfounded sets. I do not have any compelling examples of such mathematical assertions, but i am alive to the possibility that there might be some. By setting its face against taking these entities seriously ZF & co are betting on there *not* being many such mathematics. There may, indeed, not be any such mathematics, but it's by no means an open-and-shut case. It would be if the nonexistence of all the naughty objects were provable in predicate calculus but it isn't. Happy new Year More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/2007-December/012394.html","timestamp":"2014-04-19T04:22:23Z","content_type":null,"content_length":"4081","record_id":"<urn:uuid:76b43b51-597e-4285-aa10-6f0cbaa887fc>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00616-ip-10-147-4-33.ec2.internal.warc.gz"}
Trigonometric functions Posted by Jon on Sunday, February 10, 2008 at 1:53pm. Find the value of Sec A. and I have a right triangle BC=30 and BA=34 I chose D • Trigonometric functions - Guido, Sunday, February 10, 2008 at 2:04pm There is missing data here. Is 30 the hypotenuse or one of the legs of the right triangle? The same for 34. There is no way for me to know if BC is one the legs or is BA is the hypotenuse or Once I know that, I can proceed to help you. • Trigonometric functions - Jon, Sunday, February 10, 2008 at 2:13pm BA is the hypotenuse BC is the opposite leg □ Trigonometric functions - Guido, Sunday, February 10, 2008 at 2:28pm We use the Pythagorean Theorem to find length AC as step 1. In other words, we MUST find the ADJACENT LEG. AC^2 + BC^2 = BA^2 AC^2 + (30)^2 = (34)^2 AC^2 900 = 1156 AC^2 = 1156 - 900 AC^2 = 256 We now take the square root of both sides of the equation to find AC. sqrt{AC^2} = sqrt{256} AC = 16 We now have all three sides. We need secant. What is secant? Secant = hypotenuse/adjacent Well, 34 is our hypotenuse and we just found AC (our adjacent). See it? Secant = 34/16 The improper fraction 34/16 = 2_1/8 as a mixed number when reduced to the lowest term. However, 2_1/8 written as an improper fraction is 17/8. Final Answer: Choice (A) Related Questions Geometry Please Help With One Multiple Choice - Write the ratios for sin A and ... Geometry One multiple choice question! - Write the ratios for sin A and cos A. {... math help please - 2. A teacher asks her class of 22 students, “What is your age... math - indicates required items An operation * is defined by the relation x*y = ... Trigonometry - Am I doing something wrong? I have to use the Law of Consines ... Another Math Statistical Question - Assume Coca Bottling is a company producing ... math - 2. -29+30=1 3. -32+9=-23 4. 10+37=47 5. 34+22=56 7. -4+(-50)= -54 9. 26... statistics - Construct a frequency distribution of the ages that 25 randomly ... Trig - P(-15/17,-8/17) is found on the unit circle. Find sin theta and cos theta... math ALG 2! - Find three consecutive integers such that the sum of the second ...
{"url":"http://www.jiskha.com/display.cgi?id=1202669589","timestamp":"2014-04-20T22:28:00Z","content_type":null,"content_length":"9790","record_id":"<urn:uuid:d57efd2d-19ff-4e83-beb3-ad634bc3f7dc>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00624-ip-10-147-4-33.ec2.internal.warc.gz"}
Covington, GA SAT Math Tutor Find a Covington, GA SAT Math Tutor ...I am working as a tutor now because I enjoy working with individuals and small groups instead of large classes. I am willing to work with students of any age. My students have described me as friendly, caring, fun, and a good teacher. 27 Subjects: including SAT math, Spanish, reading, English ...I am certified to teach in both PA and GA. I have teaching experience at both the Middle School and High School level in both private and public schools. I have chosen to leave the classroom to tutor from home so that I can be a stay at home mom. 10 Subjects: including SAT math, geometry, algebra 1, algebra 2 ...This past spring I taught a prep course for students in college so I have significant experience mastering the MCAT and helping others to do so. I took Genetics while in college and achieved a high A. I have since taken more advanced and related biology courses and have been a TA for a college level genetics class. 11 Subjects: including SAT math, chemistry, physics, biology ...I am certified to teach math, and I have seven years of classroom experience. I am qualified to tutor for the ASVAB. I took this test in high school and did very well. 28 Subjects: including SAT math, reading, geometry, biology ...I will be graduating in 2015. I have been doing private math tutoring since I was a sophomore in high school. I believe in guiding students to the answers through prompt questions. 9 Subjects: including SAT math, geometry, algebra 1, algebra 2 Related Covington, GA Tutors Covington, GA Accounting Tutors Covington, GA ACT Tutors Covington, GA Algebra Tutors Covington, GA Algebra 2 Tutors Covington, GA Calculus Tutors Covington, GA Geometry Tutors Covington, GA Math Tutors Covington, GA Prealgebra Tutors Covington, GA Precalculus Tutors Covington, GA SAT Tutors Covington, GA SAT Math Tutors Covington, GA Science Tutors Covington, GA Statistics Tutors Covington, GA Trigonometry Tutors
{"url":"http://www.purplemath.com/Covington_GA_SAT_math_tutors.php","timestamp":"2014-04-17T15:46:58Z","content_type":null,"content_length":"23806","record_id":"<urn:uuid:e9978f5a-213a-480a-bbc0-82b8942b6f0f>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00284-ip-10-147-4-33.ec2.internal.warc.gz"}
MathsNet: Postgraduate Projects You may contact a Proposer directly about a specific project or contact the Postgraduate Admissions Secretary with general enquiries. Title Network performance subject to agent-based dynamical processes Group(s) Industrial and Applied Mathematics, Statistics and Probability Proposer(s) Dr Keith Hopcraft, Dr Simon Preston Networks – systems of interconnected elements – form structures through which information or matter is conveyed from one part of an entity to another, and between autonomous units. The form, function and evolution of such systems are affected by interactions between their constituent parts, and perturbations from an external environment. The challenge in all application areas is to model effectively these interactions which occur on different spatial- and time-scales, and to discover how i) the micro-dynamics of the components influence the evolutionary structure of the network, and ii) the network is affected by the external environment(s) in which it is embedded. Description Activity in non-evolving networks is well characterized as having diffusive properties if the network is isolated from the outside world, or ballistic qualities if influenced by the external environment. However, the robustness of these characteristics in evolving networks is not as well understood. The projects will investigate the circumstances in which memory can affect the structural evolution of a network and its consequent ability to function. Agents in a network will be assigned an adaptive profile of goal- and cost-related criteria that govern their response to ambitions and stimuli. An agent then has a memory of its past behaviour and can thereby form a strategy for future actions and reactions. This presents an ability to generate ‘lumpiness’ or granularity in a network’s spatial structure and ‘burstiness’ in its time evolution, and these will affect its ability to react effectively to external shocks to the system. The ability of externally introduced activists to change a network’s structure and function - or agonists to test its resilience to attack - will be investigated using the models. The project will use data of real agent’s behaviour. Title Fluctuation Driven Network Evolution Group(s) Industrial and Applied Mathematics, Statistics and Probability Proposer(s) Dr Keith Hopcraft, Dr Simon Preston A network’s growth and reorganisation affects its functioning and is contingent upon the relative time-scales of the dynamics that occur on it. Dynamical time-scales that are short compared with those characterizing the network’s evolution enable collectives to form since each element remains connected with others in spite of external or internally generated Description ‘shocks’ or fluctuations. This can lead to manifestations such as synchronicity or epidemics. When the network topology and dynamics evolve on similar time-scales, a ‘plastic’ state can emerge where form and function become entwined. The interplay between fluctuation, form and function will be investigated with an aim to disentangle the effects of structural change from other dynamics and identify robust characteristics. Title Numerical methods for stochastic partial differential equations Group(s) Scientific Computation, Statistics and Probability Proposer(s) Prof Michael Tretyakov Numerics for stochastic partial differential equations (SPDEs) is one of the central topics in modern numerical analysis. It is motivated both by applications and theoretical study. Description SPDEs essentially originated from the filtering theory and now they are also widely used in modelling spatially distributed systems from physics, chemistry, biology and finance acting in the presence of fluctuations. The primary objectives of this project include construction, analysis and testing of new numerical methods for SPDEs. Other Web-page http://www.maths.nott.ac.uk/personal/pmzmt Title Uncertainty quantification for evolutionary PDEs Group(s) Scientific Computation, Statistics and Probability Proposer(s) Dr Kris van der Zee Uncertainty quantification for evolutionary PDEs (Or- Can we rely on our computational simulations, no matter how accurate they are?) The field of uncertainty quantification (UQ), as applied to partial differential equations (PDEs), provides a means for understanding the effect of uncertainty in the parameters on output quantities of interest. This is, for example, extremely important in evolutionary models that describe the degradation of nuclear waste containment systems: A small uncertainty in the parameters may have a large influence on the degradation, and thus may result in unexpected damaged systems that leak nuclear waste. Fortunately, observational data can help reduce uncertainty in models. The models can then become predictive and allow for an assessment of its true future behavior! Description Challenges for students: * How can one quantify the uncertainty in the complex evolutionary models? * Can one determine suitable observation scenarios using Bayesian experimental design principles? * How complex does a model need to be for a predictive simulation? * Can we employ UQ methodologies to reliably predict degradation of nuclear waste containment systems? Depending on the interest of the student, several of these issues (or others) can be addressed. Also, the student is encouraged to suggest a second supervisor, possibly from another group! Relevant • I. Babuska, F. Nobile, and R. Tempone, A systematic approach to model validation based on Bayesian updates and prediction related rejection criteria, Comput. Meth. Appl. Mech. Engrg. Publications 197 (2008), pp. 2517-2539 Other • This work connects to a large interdisciplinary multi-university EPSRC project. Title Index policies for stochastic optimal control Group(s) Statistics and Probability Proposer(s) Dr David Hodge Since the discovery of Gittins indices in the 1970s for solving multi-armed bandit processes the pursuit of optimal policies for this very wide class of stochastic decision processes has been seen in a new light. Particular interest exists in the study of multi-armed bandits as problems of optimal allocation of resources (e.g. trucks, manpower, money) to be shared between competing projects. Another area of interest would be the theoretical analysis of computational methods (for example, approximative dynamic programming) which are coming to the Description fore with ever advancing computer power. Potential project topics could include optimal decision making in the areas of queueing theory, inventory management, machine maintenance and communication networks. Other Keywords: multi-armed bandits, dynamic programming, Markov decision processes Title Uncertainty quantification in palaeo-climate reconstruction Group(s) Statistics and Probability Proposer(s) Dr Richard Wilkinson The climate evolves slowly. Even if we stopped emitting green-houses gases today, we wouldn't see the full effect of the damage already done for at least another 100 years. The instrumental record of past climate and weather goes back at most 300 years, and before then we have to rely on indirect (and inaccurate) data sources. Because of the slow evolution of the climate, this is like only having a very small number of acccurate observations, and so consquently we have very little information that can be used to assess the accuracy of climate simulators, which are the key tool used for predicting how the climate will behave in the future. An important source of information on what the climate was like in the past comes from proxy data sources such as pollen taken from lake deposits, or measurements of the air-content Description (specifically the ratio of oxygen-18 to oxygen-16) stored in glaciers hundreds of thousands of years ago. Reconstructing past climate from these data sources is a difficult task as the measurements are noisy, correlated, and don't have accurate dates attached to them, yet the task is important if we are to understand how the climate evolves and hence be able to predict the future. In this project, we will look at statistical methods for accurate palaeo-climate reconstruction, and aim to provide believeable uncertainty quantifications that accurately represent our degree of confidence/ignorance about what the climate was like in the past. The complex nature of the problem means that it is likely that state-of-the-art Monte Carlo methods will be needed, as well as potentially developing new methods in order to do the inference. Title Semi-Parametric Time Series Modelling Using Latent Branching Trees Group(s) Statistics and Probability Proposer(s) Dr Theodore Kypraios A class of semi-parametric discrete time series models of infinite order where we are be able to specify the marginal distribution of the observations in advance and then build their dependence structure around them can be constructed via an artificial process, termed as Latent Branching Tree (LBT). Such a class of models can be very useful in cases where data are Description collected over long period and it might be relatively easy to indicate their marginal distribution but much harder to infer about their correlation structure. The project is concerned with the development of such models in continuous-time as well as developing efficient methods for making Bayesian inference for the latent structure as well as the model parameters. Moreover, the application of such models to real data would be also of great interest. Title Bayesian methods for analysing computer experiments Group(s) Statistics and Probability Proposer(s) Dr Richard Wilkinson Computer experiments (ie simulators) are used in nearly all areas of science and engineering. The statistical analysis of computer experiments is an exciting and rapidly growing area of statistics which looks at the question of how best to learn from computer models. Examples of the types of challenges faced, and possible areas for a Ph.D, are given below. (i) Computer models are often process models where the likelihood function is intractable (as is common in genetics, ecology, epidemiology etc) and so to do inference we have to use likelihood-free techniques. Approximate Bayesian computation (ABC) methods are a new class of Monte Carlo methods that are becoming increasingly popular with practitioners, but which are largely unstudied by statisticians, and there remains many open questions about their performance. Application areas which use ABC methods are mostly biological (genetics and ecology in particular), but their use is growing across a wide range of fields. (ii) Expensive simulators which take a considerable amount of time to run (eg, climate models), present the challenge of how to learn about the model (its parameters, validity, or its Description predictions etc.) when we only have a limited number of model evaluations available for use. A statistical tool developed in the last decade is the idea of building statistical emulators of the simulator. Emulators are cheap statistical models of the simulator (models of the model) which can be used in place of the simulator to make inferences, and are now regularly used in complex modelling situations such as in climate science. However, there are still many questions to be answered about how best to build and then use emulators. Possible application areas for these methods include climate science, and engineering (such as ground-water flow problems for radio-active waste), as well as many others. (iii) "All models are wrong, but some are useful" - In order to move from making statements about a model to making statements about the system the model was designed to represent, we must carefully quantify the model error - interest lies in what will actually happen, rather than in what your model says will happen! Failure to account for model errors can mean that different models of the same system can give different predictions (see for example the controversy regarding the differing predictions of the large climate models - none of which account for model error!). Assessing and incorporating model error is a new and rapidly growing idea in statistics, and is done by a combination of subjective judgement and statistical learning from data. The range of potential application areas is very wide, but in particular meterology and mechanical engineering are areas where these methods are needed. • Wilkinson, Approximate Bayesian computation (ABC) gives exact results under the assumption of model error, in submission. Available as arXiv:0811.3355. • Wilkinson, Bayesian calibration of expensive multivariate computer experiments. In ‘Large-scale inverse problems and quantification of uncertainty’, 2010, John Wiley and Sons, Relevant Series in Computational Statistics. Publications • Quantifying simulator discrepancy in discrete-time dynamical simulators, Wilkinson, M. Vrettas, D. Cornford, J. E. Oakley. Journal of Agricultural, Biological, and Environmental Statistics: Special issue on Computer models and spatial statistics for environmental science, 16(4), 554-570, 2011 Other See http://www.maths.nottingham.ac.uk/personal/pmzrdw/ for more information. Title Ion channel modelling Group(s) Statistics and Probability Proposer(s) Prof Frank Ball The 1991 Nobel Prize for Medicine was awarded to Sakmann and Neher for developing a method of recording the current flowing across a single ion channel. Ion channels are protein molecules that span cell membranes. In certain conformations they form pores allowing current to pass across the membrane. They are a fundamental part of the nervous system. Mathematically, a single channel is usually modelled by a continuous time Markov chain. The complete process is unobservable but rather the state space is partitioned into two classes, Description corresponding to the receptor channel being open or closed, and it is only possible to observe which class of state the process is in. The aim of single channel analysis is to draw inferences about the underlying process from the observed aggregated process. Further complications include (a) the failure to detect brief events and (b) the presence of (possibly interacting) multiple channels. Possible projects include the development and implementation of Markov chain Monte Carlo methods for inferences for ion channel data, Laplace transform based inference for ion channel data and the development and analysis of models for interacting multiple channels. Title Optimal control in yield management Group(s) Statistics and Probability Proposer(s) Dr David Hodge Serious mathematics studying the maximization of revenue from the control of price and availability of products has been a lucrative area in the airline industry since the 1960s. It is particularly visible nowadays in the seemingly incomprehensible price fluctuations of airline tickets. Many multinational companies selling perishable assets to mass markets now have Description large Operations Research departments in-house for this very purpose. This project would be working studying possible innovations and existing practices in areas such as: customer acceptance control, dynamic pricing control and choice-based revenue management. Applications to social welfare maximization, away from pure monetary objectives, and the resulting game theoretic problems are also topical in home energy consumption and mass online interactions. Title Stochastic Processes on Manifolds Group(s) Statistics and Probability Proposer(s) Dr Huiling Le As well as having a wide range of direct applications to physics, economics, etc, diffusion theory is a valuable tool for the study of the existence and characterisation of solutions of partial differential equations and for some major theoretical results in differential geometry, such as the 'Index Theorem', previously proved by totally different means. The problems Description which arise in all these subjects require the study of processes not only on flat spaces but also on curved spaces or manifolds. This project will investigate the interaction between the geometric structure of manifolds and the behaviour of stochastic processes, such as diffusions and martingales, upon them. Title Analytic methods in probability theory Group(s) Statistics and Probability Proposer(s) Notice: Undefined index: pmzsu in /maths/www/html/postgraduate/projects/index.php on line 477 Notice: Undefined index: pmzsu in /maths/www/html/postgraduate/projects/index.php on line 477 My research focuses on interactions between probability theory, combinatorics and topology. Topics of particular interest: • dependence, limit theorems for dependent variables with applications to dynamical systems, examples and counterexamples to limit theorems; Description • modern analytic methods in probability theory such as stochastic orderings, Stein's type operator, contraction, Poincare - Hardy - Sobolev type inequalities and their • stochastic analysis of rare events such as Poisson approximation and large deviations; • Random groups, Hausdorff dimension and related topics Other information Title Statistical Theory of Shape Group(s) Statistics and Probability Proposer(s) Dr Huiling Le Devising a natural measure between any two fossil specimens of a particular genus, assessing the significance of observed 'collinearities' of standing stones and matching the observed systems of cosmic 'voids' with the cells of given tessellations of 3-spaces are all questions about shape. Description It is not appropriate however to think of 'shapes' as points on a line or even in a euclidean space. They lie in their own particular spaces, most of which have not arisen before in any context. PhD projects in this area will study these spaces and related probabilistic issues and develop for them a revised version of multidimensional statistics which takes into account their peculiar properties. This is a multi-disciplinary area of research which has only become very active recently. Nottingham is one of only a handful of departments at which it is Title Automated tracking and behaviour analysis Group(s) Statistics and Probability Proposer(s) Dr Christopher Brignell In collaboration with the Schools of Computer and Veterinary Science we are developing an automated visual surveillance system capable of identifying, tracking and recording the exact movements of multiple animals or people. The resulting data can be analysed and used as an early warning system in order to detect illness or abnormal behaviour. The three-dimensional Description targets are, however, viewed in a two dimensional image and statistical shape analysis techniques need to be adapted to improve the identification of an individual's location and orientation and to develop automatic tests for detecting specific events or individuals not following normal behaviour patterns. Title Asymptotic techniques in Statistics Group(s) Statistics and Probability Proposer(s) Prof Andrew Wood Asymptotic approximations are very widely used in statistical practice. For example, the large-sample likelihood ratio test is an asymptotic approximation based on the central limit theorem. In general, asymptotic techniques play two main roles in statistics: (i) to improve understanding of the practical performance of statistics procedures, and to provide insight into why some proceedures perform better than others; and (ii) to motive new and improved approximations. Some possible topics for a Ph.D. are Description • Saddlepoint and related approximations • Relative error analysis • Approximate conditional inference • Asymptotic methods in parametric and nonparametric Bayesian Inference Title Statistical Inference for Ordinary Differential Equations Group(s) Statistics and Probability Proposer(s) Dr Theodore Kypraios, Dr Simon Preston, Prof Andrew Wood Ordinary differential equations (ODE) models are widely used in a variety of scientific fields, such as physics, chemistry and biology. For ODE models, an important question is how best to estimate the model parameters given experimental data. The common (non-linear least squares) approach is to search parameter space for parameter values that minimise the sum of Description squared differences between the model solution and the experimental data. However, this requires repeated numerical solution of the ODEs and thus is computationally expensive; furthermore, the optimisation's objective function is often highly multi-modal making it difficult to find the global optimum. In this project we will develop computationally less demanding likelihood-based methods, specifically by using spline regression techniques that will reduce (or eliminate entirely) the need to solve numerically the ODEs. Title Bayesian approaches in palaeontology Group(s) Statistics and Probability Proposer(s) Dr Richard Wilkinson Palaeontology provides a challenging source of problems for statisticians, as fossil data are usually sparse and noisy. Methods from statistics can be used to help answer scientific questions such as when did species originate or become extinct, and how diverse was a particular taxonomic group. Some of these questions are of great scientific interest - for example - did primates coexist with dinosaurs in the Cretaceous? There is no hard evidence either way, but statistical methods can be used to assess the probability that they did coexist. This project will involve building a stochastic forwards model of an evolutionary scenario, and then fitting this model to fossil data. Quantifying different sources of uncertainty is likely to play a key part in the analysis. Relevant • Dating primate divergences through an integrated analysis of palaeontological and molecular data, Wilkinson, M. Steiper, C. Soligo, R.D. Martin, Z. Yang, and S. Tavare, Systematic Publications Biology, 60(1): 16-31, 2011. Other See http://www.maths.nottingham.ac.uk/personal/pmzrdw/ for more information Title Statistical shape analysis with applications in structural bioinformatics Group(s) Statistics and Probability Proposer(s) Dr Christopher Fallaize In statistical shape analysis, objects are often represented by a configuration of landmarks, and in order to compare the shapes of objects, their configurations must first be aligned as closely as possible. When the landmarks are unlabelled (that is, the correspondence between landmarks on different objects is unknown) the problem becomes much more challenging, since both the correspondence and alignment parameters need to be inferred simultaneously. An example of the unlabelled problem comes from the area of structural bioinformatics, when we wish to compare the 3-d shapes of protein molecules. This is important, since the shape of Description a protein is vital to its biological function. The landmarks could be, for example, the locations of particular atoms, and the correspondence between atoms on different proteins is unknown. This project will explore methods for unlabelled shape alignment, motivated by the problem of protein structure alignment. Possible topics include development of: i) efficient MCMC methods to explore complicated, high-dimensional distributions, which may be highly multimodal when considering large proteins; ii) fast methods for pairwise alignment, needed when a large database of structures is to be searched for matches to a query structure; iii) methods for the alignment of multiple structures simultaneously, which greatly exacerbates the difficult problems faced in pairwise alignment. Relevant • Green, P.J. and Mardia, K.V. (2006) Bayesian alignment using hierarchical models, with applications in protein bioinformatics. Biometrika, 93(2), 235-254. Publications • Mardia, K.V., Nyirongo, V.B., Fallaize, C.J., Barber, S. and Jackson, R.M. (2011). Hierarchical Bayesian modeling of pharmacophores in bioinformatics. Biometrics, 67(2), 611-619. Title High-dimensional molecular shape analysis Group(s) Statistics and Probability Proposer(s) Prof Ian Dryden In many application areas it is of interest to compare objects and to describe the variability in shape as an object evolves over time. For example in molecular shape analysis it is common to have several thousand atoms and a million time points. It is of great interest to reduce the dimension to a relatively small number of dimensions, and to describe the variability in shape and coverage properties over time. Techniques Description from manifold learning will be explored, to investigate if the variability can be effectively described by a low dimensional manifold. A recent method for shapes and planar shapes called principal nested spheres will be adapted for 3D shape and surfaces. Also, other non-linear dimension reduction techniques such as multidimensional scaling will be explored, which approximate the geometry of the higher dimensional manifold. The project will involve collaboration with Dr Charlie Laughton of the School of Pharmacy. Relevant Publications • Jung, S., Dryden, I.L. and Marron, J.S. (2012). Analysis of principal nested spheres. Biometrika, 99, 551–568. Other information Title Uncertainty quantifcation for models with bifurcations Group(s) Statistics and Probability, Industrial and Applied Mathematics Proposer(s) Prof Ian Dryden Notice: Undefined index: pmzkac in /maths/www/html/postgraduate/projects/index.php on line 479 Notice: Undefined index: pmzkac in /maths/www/html/postgraduate/projects/ index.php on line 479 , The project will consider Uncertainty Quantification (UQ) when there are bifurcations or discontinuities in the models. Gaussian Process Emulators (GPE) and Generalised Polynomial Chaos (gPC) techniques will be used to construct fast approximations to high-cost deterministic models. Also, an important component of Bayesian UQ is the difficult task of elicitation of the prior distributions of the parameters of interest, which will be investigated. We will exploit the flexibility in the choice of GPE covariance function to deal with cases where the Description dependence on the inputs is not smooth. Lack of smoothness can be handled by dividing the parameter space into elements and using of gPC or GPE on each element, but this is difficult to do automatically. We propose to compute the hypersurfaces at which discontinuities occur, using techniques from numerical bifurcation theory, as preparation for discretising with gPC or GPE methods. Bifrucations arise in carbon sequestration applications, and radioactive waste disposal is another area where elicitation and Bayesian emulation are useful. Title Statistical analysis of neuroimaging data Group(s) Statistics and Probability, Mathematical Medicine and Biology Proposer(s) Dr Christopher Brignell The activity of neurons within the brain can be detected by function magnetic resonance imaging (fMRI) and magnetoencephalography (MEG). The techniques record observations up to 1000 times a second on a 3D grid of points separated by 1-10 millimetres. The data is therefore high-dimensional and highly correlated in space and time. The challenge is to infer the Description location, direction and strength of significant underlying brain activity amongst confounding effects from movement and background noise levels. Further, we need to identify neural activity that are statistically significant across individuals which is problematic because the number of subjects tested in neuroimaging studies is typically quite small and the inter-subject variability in anatomical and functional brain structures is quite large. Title Identifying fibrosis in lung images Group(s) Statistics and Probability, Mathematical Medicine and Biology Proposer(s) Dr Christopher Brignell Many forms of lung disease are characterised by excess fibrous tissue developing in the lungs. Fibrosis is currently diagnosed by human inspection of CT scans of the affected lung regions. This project will develop statistical techniques for objectively assessing the presence and extent of lung fibrosis, with the aim of identifying key factors which determine Description long-term prognosis. The project will involve developing statistical models of lung shape, to perform object recognition, and lung texture, to classify healthy and abnormal tissue. Clinical support and data for this project will be provided by the School of Community Health Sciences. Title Modelling hospital superbugs Group(s) Statistics and Probability, Mathematical Medicine and Biology Proposer(s) Prof Philip O'Neill The spread of so-called superbugs such as MRSA within healthcare settings provides one of the major challenges to patient welfare within the UK. However, many basic questions regarding Description the transmission and control of such pathogens remain unanswered. This project involves stochastic modelling and data analysis using highly detailed data sets from studies carried out in hospital, addressing issues such as the effectiveness of patient isolation, the impact of different antibiotics, and the way in which different strains interact with each other. Title Modelling of Emerging Diseases Group(s) Statistics and Probability, Mathematical Medicine and Biology Proposer(s) Prof Frank Ball When new infections emerge in populations (e.g. SARS; new strains of influenza), no vaccine is available and other control measures must be adopted. This project is concerned with Description addressing questions of interest in this context, e.g. What are the most effective control measures? How can they be assessed? The project involves the development and analysis of new classes of stochastic models, including intervention models, appropriate for the early stages of an emerging disease. Title Structured-Population Epidemic Models Group(s) Statistics and Probability, Mathematical Medicine and Biology Proposer(s) Prof Frank Ball The structure of the underlying population usually has a considerable impact on the spread of the disease in question. In recent years the Nottingham group has given particular attention to this issue by developing, analysing and using various models appropriate for certain kinds of diseases. For example, considerable progress has been made in the understanding of epidemics that are propogated among populations made up of households, in which individuals are typcially more likely to pass on a disease to those in their household than those Description elsewhere. Other examples of structured populations include those with spatial features (e.g. farm animals placed in pens; school children in classrooms; trees planted in certain configurations), and those with random social structure (e.g. using random graphs to describe an individual's contacts). Projects in this area are concerned with novel advances in the area, including developing and analysing appropriate new models, and methods for statistical inference (e.g. using pseudo-likelihood and Markov chain Monte Carlo methods). Title Bayesian Inference for Complex Epidemic Models Group(s) Statistics and Probability, Mathematical Medicine and Biology Proposer(s) Prof Philip O'Neill Data-analysis for real-life epidemics offers many challenges; one of the key issues is that infectious disease data are usually only partially observed. For example, although numbers of Description cases of a disease may be available, the actual pattern of spread between individuals is rarely known. This project is concerned with the development and application of methods for dealing with these problems, and involves using Markov Chain Monte Carlo (MCMC) techniques. Title Bayesian model choice assessment for epidemic models Group(s) Statistics and Probability, Mathematical Medicine and Biology Proposer(s) Prof Philip O'Neill During the last decade there has been a significant progress in the area of parameter estimation for stochastic epidemic models. However, far less attention has been given to the issue Description of model adequacy and assessment, i.e. the question of how well a model fits the data. This project is concerned with the development of methods to assess the goodness-of-fit of epidemic models to data. Title Epidemics on random networks Group(s) Statistics and Probability, Mathematical Medicine and Biology Proposer(s) Prof Frank Ball There has been considerable interest recently in models for epidemics on networks describing social contacts. In these models one first constructs an undirected random graph, which gives the network of possible contacts, and then spreads a stochastic epidemic on that network. Topics of interest include: modelling clustering and degree correlation in the network and Description analysing their effect on disease dynamics; development and analysis of vaccination strategies, including contact tracing; and the effect of also allowing for casual contacts, i.e. between individuals unconnected in the network. Projects in this area will address some or all of these issues. • Ball F G and Neal P J (2008) Network epidemic models with two levels of mixing. Math Biosci 212, 69-87. Relevant • Ball F G, Sirl D and Trapman P (2009) Threshold behaviour and final outcome of an epidemic on a random network with household structure. Adv Appl Prob 41, 765-796. Publications • Ball F G, Sirl D and Trapman P (2010) Analysis of a stochastic SIR epidemic on a random network incorporating household structure. Math Biosci 224, 53-73. Title Robustness-performance optimisation for automated composites manufacture Group(s) Statistics and Probability, Scientific Computation Proposer(s) Prof Frank Ball Notice: Undefined index: pmzkac in /maths/www/html/postgraduate/projects/index.php on line 479 Notice: Undefined index: pmzkac in /maths/www/html/postgraduate/projects/ index.php on line 479 , , Prof Michael Tretyakov Multidisciplinary collaborations are a critical feature of material science research enabling integration of data collection with computational and/or mathematical modelling. This PhD study provides an exciting opportunity for an individual to participate in a project spanning research into composite manufacturing, stochastic modelling and statistical analysis, and scientific computing. The project is integrated into the EPSRC Centre for Innovative Manufacturing in Composites, which is led by the University of Nottingham and delivers a co-ordinated programme of research at four of the leading universities in composites manufacturing, the Universities of Nottingham, Bristol, Cranfield and Manchester. This project focuses on the development of a manufacturing route for composite materials capable of producing complex components in a single process chain based on advancements in the knowledge, measurement and prediction of uncertainty in processing. The necessary developments comprise major manufacturing challenges. These are accompanied by significant mathematical problems, such as numerical solution of coupled non-linear partial differential equations with randomness, the inverse estimation of composite properties and their probability Description distributions based on real-time measurements and the formulation and solution of a stochastic model of the variability in fibre arrangements. The outcome of this work will enable a step change in the capabilities of composite manufacturing technologies to be made, overcoming limitations related to part thickness, component robustness and manufacturability as part of a single process chain, whilst yielding significant developments in mathematics with generic application in the fields of stochastic modelling and inverse problems. The specific aims of this project are: (i) Stochastic simulation of multi-dimensional non-linear stochastic problems; (ii) Stochastic and statistical modelling of fibre variability in Automated Fibre Placement to permit the predictive simulation of range of potential outcomes conditional on monitoring observations made during the process; (iii) Solution of the anisotropic conductivity inverse problem under uncertainty to translate monitoring and simulation of observable parameters to uncertainty quantification of critical unobservable The PhD programme contains a training element, which includes research work as well as traditional taught material. The exact nature of the training will be mutually agreed by the student and their supervisors and will have a minimum of 30 credits (approximately ¼ of a Master course/taught component of an MSc course) of assessed training. The graduate programmes at the School of Mathematical Sciences and the EPSRC Centre for Innovative Manufacturing in Composites provide a variety of appropriate training courses. information We require an enthusiastic graduate with a 1^st class degree in Mathematics (in exceptional circumstances a 2(i) class degree can be considered), preferably of the MMath/MSc level, with good programming skills and willing to work as a part of an interdisciplinary team. A candidate with a solid background in statistics and stochastic processes will have an advantage. The studentship is available for a period of three and a half years and provides an annual stipend of £13,726 and full payment of Home/EU Tuition Fees. Students must meet the EPSRC eligibility criteria.
{"url":"https://www.maths.nottingham.ac.uk/postgraduate/projects/index.php?group=Statistics%20and%20Probability","timestamp":"2014-04-18T08:02:11Z","content_type":null,"content_length":"60844","record_id":"<urn:uuid:1c0610fd-57f0-4180-b5d9-01f5e5491850>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00409-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: Help with a school project, a statistical survey on education Replies: 15 Last Post: Jul 13, 2013 4:14 PM Messages: [ Previous | Next ] loom91 Re: Help with a school project, a statistical survey on education Posted: Jul 22, 2007 2:36 PM Posts: 17 Registered: 4/25/06 On Jul 21, 4:36 pm, The Qurqirish Dragon <qurqiri...@aol.com> wrote: > On Jul 20, 10:55 am, loom91 <loo...@gmail.com> wrote: > > I'm also looking for specific help on the following topics: > > i)What is a suitable measure of whether girls are stronger at some > > subjects while boys at other subjects? > Assuming you have the individual subject-test scores for each person, > then let mu_bi be the popolation mean for boys in subject i, and mu_gi > be the population mean for girls in subject i. You want to test the > hypotheses (1) mu_bi > mu_gi and (2) mu_gi > mu_bi. performing this > sort of hypothesis test is one of the first things that most > statistics texts cover when considering two related populations. > If exactly one test fails, then your evidence supports a gender bias > in the subject. If both tests fail, then the evidence indicates no > bias (or, more correctly stated, fails to show there is a bias. If > this is a beginning statistics class, then you should be certain to > tell them the difference between those two phrasings) Thanks for your comments. But we are not aiming to measure the absolute difference between boys and girls. I had tried to give a picture of what we are looking for above: "i)What is a suitable measure of whether girls are stronger at some subjects while boys at other subjects? I'm thinking of comparing the percentage of total marks obtained in one subject, standardised against the whole population. For example, consider the variable X = percentage of total marks earned in History+Geography. Next, we the standardised (wrt the entire population) variate corresponding to X, let it be Z. Now we compute the mean of Z over the girls schools ([itex]E_1(Z)[/itex]) and the mean over the boys schools If the first value is larger than the second value (it seems one will have to be positive and the other negative), then we may say that girls prefer humanities more over other subjects than boys. Next we can do the same analysis on the boys vs girls population in coed schools and see if the difference is less. By using the absolute instead of expressing it as percentage of total marks, we can also compare the relative performance (as opposed to preference) of boys and girls in humanities. The same can be done for languages and sciences. Is this a statistically sound measure (unlikely, since I just made it up)? What are the alternatives?" As nyou see, we are not comparing whether boys score more marks than girls in math. We are aiming to measure whether girls are weaker in math than boys *relative to the other subjects*. For example, if a boy scores very low marks in all subjects, but scores comparatively better in Biology, you would say he was strong in Biology, even though mny boys scored more in Biology than him. This is why I was expressing marks in the subject as parcentage of total marks obtained, to judge the relative contribution of the subject irrespective of whether the student is good or bad overall. Then I standardise wrt to the whole population to see whether the subject contributes more or less to a students score than the population average. By taking the mean over the boys population and seeing whether it is more than the mean over the girls population, we can judge whether the subject contributes more to the totals of one sex than the other. Does this make sense? Will it work well? Will something else work better? > > iii)Is there some easily available (preferably free) software that > > will let me do all this analysis (brownie points for fitting > > probability distributions and graphing)? It would be a nightmare to do > > this by hand since we usually work with less than 50 data points > > instead of several hundred. > Off hand, I don't know of free software, but it is likely that your > school has one or more of them already on the school's computers. For > that matter, at this level even Excel will have sufficient tools > (although you may need to install the statistical measurements pack.) Actually, our proposed stat computer lab is stalled because there are not enough plugpoints to put up two more computers :-) > > iv)As it stand right now, we will sample two boys schools, two girls > > schools and one coed school. Is this enough to be statistically > > significant? How many data points should we sample from each school? > > Should this be a constant or proportional to the total number of > > students? > To compare the individual schools, of course, this is fine (as long as > you have a decent sized sample from each). To compare TYPES of > schools, then no, it is insufficient, as you only have have a few data > points. As for the sample size (from each school), I would suggest a > minimum of the larger of 30 or 5% of the student population (these > numbers are the same at 600 students) This way you can likely use a > normal approximation to score distributions, even if the scores are > not normally distributed. Many statistical tests have simpler forms > for normally distributed data. This may allow you to have the class do > the analysis by hand. If you use a software package, then this need is > not important, obviously. In any event, large sample sizes will > improve your confidence levels in the hypothesis tests. We plan on taking 50 data points from each school, about 20-25% of the class size. So you say that we should take the same number of points even if one school has more students than the other? Also, do you mean that the sample size is insufficient to draw reliable conclusions about whether coed schools really lessen the gender differences? > > v)Finally, is the whole proposition so glaringly ridiculous that all > > serious statisticians will simply laugh at it? I hope not :redface: > Not at all. It is great if you can use an example like this (as > opposed to textbook work). This should be a very good problem for a > first-year statistics class. Of course, if your results show a > significant difference between the schools in your district, there may > be some bruised egos in the administration(s), but that is problem > outside the scope of statistics ;-)- Hide quoted text - > - Show quoted text - Just to clear up something, you don't think I'm the teacher, do you? I'm just a 11th grade student. Our syllabus covers very little estimation, mostly descriptive stat, so it'll be helpful if you gave me a few pointers about common problems encountered by statisticians when doing this type of study. Also, the question about which I've abolutely no idea at all is the ii) What is a good way of identifying whether the population in a school indeed consists of discreet stratas? This could be good students/bad students (there is indication from previous results that this may be the case) or in coed schools boys/girls (very likely the case). In case of coed schools, there may even be four stratas: good boys, good girls, bad boys, bad girls. It will be interesting to whether bad boys vs girls show more difference than good boys vs girls. All this sounds very pretty, but I don't know how to separate the population into stratas. Can you help me? Thanks. Date Subject Author 7/20/07 Help with a school project, a statistical survey on education loom91 7/20/07 Re: Help with a school project, a statistical survey on education richardstartz@comcast.net 7/22/07 Re: Help with a school project, a statistical survey on education loom91 7/21/07 Re: Help with a school project, a statistical survey on education The Qurqirish Dragon 7/22/07 Re: Help with a school project, a statistical survey on education loom91 7/22/07 Re: Help with a school project, a statistical survey on education Richard Ulrich 7/23/07 Re: Help with a school project, a statistical survey on education loom91 7/22/07 Re: Help with a school project, a statistical survey on education loom91 7/22/07 Re: Help with a school project, a statistical survey on education loom91 7/22/07 Re: Help with a school project, a statistical survey on education Bob 7/23/07 Re: Help with a school project, a statistical survey on education loom91 7/24/07 Re: Help with a school project, a statistical survey on education Nick 7/25/07 Re: Help with a school project, a statistical survey on education John Kane 7/25/07 Re: Help with a school project, a statistical survey on education loom91 7/13/13 statistical survey to be done on which topics? divya.nair421@gmail.com 7/26/07 Re: Help with a school project, a statistical survey on education loom91
{"url":"http://mathforum.org/kb/message.jspa?messageID=5821024","timestamp":"2014-04-19T13:09:48Z","content_type":null,"content_length":"42390","record_id":"<urn:uuid:1f00db7f-cebc-4ac6-8969-abfc8724c4bd>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00566-ip-10-147-4-33.ec2.internal.warc.gz"}
Old Mill Creek, IL ACT Tutor Find an Old Mill Creek, IL ACT Tutor ...I look forward to helping you achieve success in your subjects, so please contact me as soon as you are ready to learn! I also offer online lessons with a special discount--contact me for more details if interested.Before I knew I was going to teach French, I was originally going to become a mat... 16 Subjects: including ACT Math, English, chemistry, French ...I taught calculus and differential equations at an engineering school for seven years. This was a two year curriculum. I have a Masters degree in applied mathematics and most coursework for a 18 Subjects: including ACT Math, physics, GRE, calculus I have been tutoring math on a one-on-one basis for many years. I teach Elementary math, Pre-Algebra, Algebra 1, Algebra 2, Geometry, Trigonometry, PreCalculus, Calculus. If you are interested in taking home tutoring classes for your kids, and improving their grades, do not hesitate to contact me. 8 Subjects: including ACT Math, geometry, algebra 1, algebra 2 ...I truly enjoy helping my younger students master the fundamentals of math and literacy since I know these skills will accompany them throughout future academic endeavors and poise them to succeed. If you are considering applying for Chicago’s selective enrollment high schools or are already enga... 38 Subjects: including ACT Math, Spanish, reading, statistics ...When I teach a person, I usually go through many problems because I believe the only way to learn mathematics is through practice. I believe solving more problems allows students to learn how to approach different questions. My main goal in teaching is to strengthen basic knowledge of each concepts and strengthen it by adding more laws and formula to their knowledge. 6 Subjects: including ACT Math, geometry, algebra 2, precalculus Related Old Mill Creek, IL Tutors Old Mill Creek, IL Accounting Tutors Old Mill Creek, IL ACT Tutors Old Mill Creek, IL Algebra Tutors Old Mill Creek, IL Algebra 2 Tutors Old Mill Creek, IL Calculus Tutors Old Mill Creek, IL Geometry Tutors Old Mill Creek, IL Math Tutors Old Mill Creek, IL Prealgebra Tutors Old Mill Creek, IL Precalculus Tutors Old Mill Creek, IL SAT Tutors Old Mill Creek, IL SAT Math Tutors Old Mill Creek, IL Science Tutors Old Mill Creek, IL Statistics Tutors Old Mill Creek, IL Trigonometry Tutors Nearby Cities With ACT Tutor Antioch, IL ACT Tutors Beach Park, IL ACT Tutors Green Oaks, IL ACT Tutors Gurnee ACT Tutors Lake Villa ACT Tutors Lindenhurst, IL ACT Tutors North Chicago ACT Tutors Old Mill Crk, IL ACT Tutors Pleasant Prairie ACT Tutors Round Lake Beach, IL ACT Tutors Round Lake Park, IL ACT Tutors Round Lake, IL ACT Tutors Volo, IL ACT Tutors Wadsworth, IL ACT Tutors Waukegan ACT Tutors
{"url":"http://www.purplemath.com/Old_Mill_Creek_IL_ACT_tutors.php","timestamp":"2014-04-19T07:15:37Z","content_type":null,"content_length":"24175","record_id":"<urn:uuid:c186f570-ed9d-43d8-89ac-52608a658f75>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00266-ip-10-147-4-33.ec2.internal.warc.gz"}
box shape question [Archive] - Car Audio Forum - CarAudio.com 06-04-2004, 10:04 AM I am planning on building a sealed box for a 10" Alpine Type R (SWR 1021d) sub. I have looked and read a bit on the whole process, but I have seen some differing points regarding shape and wanted to see what you guys think. I am clear on the whole size (volume) issue - in my case, my sub suggests between a .5 and a .8 cubic volume box. I want to make a box with around .66 cubic feet internal volume. Here is where my confusion lies.........Does the shape matter? I have heard that a perfect cube is a bad idea. I would like to stay away from slanted sides (different bottom and top sizes) because I don't want to have to make angular/mitre cuts. Basically, something resembling a rectangle is easier to make. :) Many people say that as long as it isn't an exact cube you are good to go, but I have seen several places (for example) that talk about the "golden ratio". Is this ratio of side length that important? Will there be a noticable difference in sound? I calculated my side length with this ratio and I don't really like the shape. Also, it appears that the pre-made boxes (Q logic for example) do not follow this ratio. Anyone have any insights? Also, what gauge speaker wire do you suggest? I am pushing this sub with a RF 250a2. By the way, the shape I want to use (outside edge to outside edge) is 12 x 12 x 12 (with 3/4" MDF) (internally 10.5 x 10.5 x 10.5) which gives me an internal volume of .6699. What do you guys think of that?
{"url":"http://www.caraudio.com/forums/archive/index.php/t-60427.html","timestamp":"2014-04-16T08:01:03Z","content_type":null,"content_length":"7676","record_id":"<urn:uuid:60cd84cd-353e-4215-9d0b-23241d4b2ea3>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00406-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: if an aroma of vanillin fills a room, having the dimensions of 12ftx10.5ftx10.0ft. if the density is 2.4x10^-11g/L how many kilograms of vanillin are in the room? • one year ago • one year ago Best Response You've already chosen the best response. Density formula: \[\rho=\frac{m}{V}\] \[m=\rho V\] ------------------------------------------------------- Volume= 12x10.5x10.0=1260ft^3= 35679 liters \[m=\rho V\] \[m=( 2.4x10^{-11}g/L )(35679L) \] \[m=8.56\times 10^{-7}g=8.56 \times 10^{-10}kg\] Best Response You've already chosen the best response. hey thanks- can you explain how you get from cubic feet to liters? Best Response You've already chosen the best response. You either remembering it or from a table of conversions, 1 Cubic Feet = 28.32 Liters You can arrange it so that the cubic feet is at the bottom when you multiply, to cancel the cubic feet. You want liters here, so: \[1280~ ft^3 \times \frac{28.32~liters}{1~Cubic~feet}=35679~liters\] Best Response You've already chosen the best response. got it. thank you so much! Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/5060704ce4b0583d5cd24a22","timestamp":"2014-04-20T19:02:57Z","content_type":null,"content_length":"35207","record_id":"<urn:uuid:0ad3f9f0-7a67-40a9-bde4-d1fdcdb36efa>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00162-ip-10-147-4-33.ec2.internal.warc.gz"}
Fundamental theories are gauge theories It has long puzzled me that such unwanted degrees of freedom of the photon field (arising from the so-called "continuous spin" operators in the Poincare unirrep) should automatically hook up with local phase changes of the fermion field. The thing is however, that the extra degrees of freedom are not just unwanted, but unphysical. The photon is simply not an object suited to being described by fields, since it is a rep of the little group ##ISO(2)##, but fields can only naturally carry reps of the little group ##SO(3)##. Only constrained fields can described the photon. So, when one uses an unconstrained field like ##A_{\mu}##, then as you know, we get two extra nonphysical polarisations that cause unitarity to break down. This extra polarisations can then also interact with the electron, either through scattering, but more importantly, they form part of its Coloumb field. In the field formalism, these extra polarisations manifest as gauge degrees of freedom, so the electron also picks up a guage variance. Since the photons form the electrons Coloumb field or electric charge, which is transformed by global phase changes, these non-physical photons induce a gauged phase. But I doubt that nature "cares" whether we can/can't solve our full interacting theory by perturbation around a free theory. It's nothing to do with solving the theory. Theories with divergences do not exist. So perhaps a clearer way to say it is that there doesn't exist a field theory in four dimensions that doesn't contain a massless spin-1 particle and hence has gauge symmetry in the field formalism.
{"url":"http://www.physicsforums.com/showthread.php?s=7fedaff2cc494d5fd32760d7cbddc81b&p=4629639","timestamp":"2014-04-23T20:27:35Z","content_type":null,"content_length":"62444","record_id":"<urn:uuid:9fe1a3d6-4b1a-4d2b-a66d-932336079ea3>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00032-ip-10-147-4-33.ec2.internal.warc.gz"}
On best simultaneous approximation S.V.R. Naidu Department of Applied Mathematics, A.U.P.G. Centre Nuzvid 521201, India Abstract: For nonempty subsets $F$ and $K$ of a nonempty set $V$ and a real valued function $f$ on $X\times X$ the notion of $f$-best simultaneous approximation to $F$ from $K$ is introduced as an extension of the known notion of best simultaneous approximation in normed linear spaces. The concept of uniformly quasi-convex function on a vector space is also introduced. Sufficient conditions for the existence and uniqueness of $f$-best simultaneous approximation are obtained. Classification (MSC2000): 41A28, 41A50, 41A52; 54H99, 46A99 Full text of the article: Electronic fulltext finalized on: 2 Nov 2001. This page was last modified: 16 Nov 2001. © 2001 Mathematical Institute of the Serbian Academy of Science and Arts © 2001 ELibM for the EMIS Electronic Edition
{"url":"http://www.emis.de/journals/PIMB/066/13.html","timestamp":"2014-04-16T04:22:55Z","content_type":null,"content_length":"3447","record_id":"<urn:uuid:f1c784d3-7912-433c-8d18-c5898f21618f>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00576-ip-10-147-4-33.ec2.internal.warc.gz"}
How tall is 67 inches in feet? A system of measurement is a set of units which can be used to specify anything which can be measured and were historically important, regulated and defined because of trade and internal commerce. In modern systems of measurement, some quantities are designated as fundamental units, meaning all other needed units can be derived from them, whereas in the early and most historic eras, the units were given by fiat (see statutory law) by the ruling entities and were not necessarily well inter-related or self-consistent. Zeng Jinlian (simplified Chinese: 曾金莲; traditional Chinese: 曾金蓮; pinyin: Zēng Jīnlián; June 25, 1964 – February 13, 1982) was the tallest woman ever verified in medical history, taking over Jane Bunford's record. Zeng is also the only woman counted among the 14 people who have reached verified heights of eight feet or more. At the time of her death at the age of 17 in Hunan, China, Zeng, who would have been 8 feet, 1.75 inches tall (she could not stand straight because of a severely deformed spine), was the tallest person in the world. She had also broken Robert Wadlow's previous record of tallest 17 year-old ever. In the year between Don Koehler's death and her own, she surpassed fellow "eight-footers" Gabriel Estêvão Monjane and Suleiman Ali Nashnush. That Zeng's growth patterns mirrored those of Robert Wadlow is shown in the table below. Many different units of length have been used around the world. The main units in modern use are U.S. customary units in the United States and the Metric system elsewhere. British Imperial units are still used for some purposes in the United Kingdom and some other countries. The metric system is sub-divided into SI and non-SI units. The base unit in the International System of Units (SI) is the metre, defined as "the length of the path travelled by light in vacuum during a time interval of 1/299,792,458 of a second." It is approximately equal to 1.0936 yards. Other units are derived from the metre by adding prefixes from the table below: A rack unit, U or RU is a unit of measure that describes the height of equipment designed to mount in a 19-inch rack or a 23-inch rack. The 19-inch (482.6 mm) or 23-inch (584.2 mm) dimension refers to the width of the equipment mounting frame in the rack including the frame; the width of the equipment that can be mounted inside the rack is less). One rack unit is 1.75 inches (44.45 mm) high. The size of a piece of rack-mounted equipment is frequently described as a number in "U". For example, one rack unit is often referred to as "1U", 2 rack units as "2U" and so on. Law Crime The term crime does not, in modern times, have any simple and universally accepted definition, but one definition is that a crime, also called an offence or a criminal offence, is an act harmful not only to some individual, but also to the community or the state (a public wrong). Such acts are forbidden and punishable by law. Related Websites:
{"url":"http://answerparty.com/question/answer/how-tall-is-67-inches-in-feet","timestamp":"2014-04-20T16:21:39Z","content_type":null,"content_length":"27153","record_id":"<urn:uuid:35edadd0-031b-437f-b305-a9a04584894b>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00184-ip-10-147-4-33.ec2.internal.warc.gz"}
(ROC) L Receiver Operating Characteristic (ROC) Literature Research PI: Kelly H. Zou, Ph.D. Grant Acknowledgement: This research has been supported by NIH R01-LM007861 "Improved Tumor Resection in Image-Guided Neurosurgery." A. BACKGROUND (EARLY PAPERS AND TEXTBOOKS). • J.P. Egan. Signal Detection Theory and ROC Analysis, Academic Press,New York, 1975. • D.M. Green and J.A. Swets. Signal Detection Theory and Psychophysics. Robert E. Krieger Publishing Co., Huntington,New York, 1974. • D. Jalihal and L.W. Nolte. Signal detection theory and reconstruction algorithms-performance for images in noise. IEEE Transactions on Biomedical Engineering, 41:501-4, 1994. • L.B. Lusted. Signal detectability and medical decision-making. Science, 171:1217-1219, 1971. • N.A. Macmillan and C.D. Creelman. Detection Theory: A User's Guide, Cambridge University Press, Cambridge, 1991. • B.J. McNeil and S.J. Adelstein. Determining the value of diagnostic and screening tests. Journal of Nuclear Medicine, 17:439-448, 1976. • B.J. McNeil, E.Keeler, and S.J. Adelstein. Primer on certain elements of medical decision making. New England Journal of Medicine, 293:211-215, 1975. • W.W. Peterson, T.G. Birdsall, and W.C. Fox. The theory of signal detectibility. Transactions of the IRE Professional Group in Information Theory, PGIT, 2-4:171-212, 1954. • J.A. Swets and R.M. Pickett. Evaluation of Diagnostic Systems: Methods from Signal Detection Theory. Academic Press,New York, 1982. • J.A. Swets. Signal Detection Theory and ROC Analysis in Psychology and Diagnostics: Collected Papers. Lawrence Erlbaum Associates, Publishers, Mahwah, New Jersy, 1995. • M. Treisman and A. Faulkner. The effect of signal probability on the slope of the receiver operating characteristic given by the rating procedure British J. Math. Statist. Psych, 37:199,215,1984. B. OVERVIEWS / REVIEWS. • C.B. Begg. Statistical methods in medical diagnosis. CRC Critical Reviews in Medical Informatics, 1:1-22, 1986. • C.B. Begg. Advances in statistical methodology for diagnostic medicine in the 1980's. Statistics in Medicine, 10:1887-1895, 1991. • G. Champbell. Advances in statistical methodology for the evaluation of diagnostic and laboratory tests. Statistics in Medicine, 13 :499-508,1994. • R.M. Centor. Signal detectability: The use of ROC curves and their analyses. Medical Decision Making, 11:102-6, 1991. • L.S. Erdreich and E.T. Lee. Use of relative operating characteristic analysis in epidemiology: A method for dealing with subjective judgement. American Journal of Epidemiology, 114:649-62, 1981. • D.J. Goodenough, K.Rossmann, and L.B. Lusted. Radiographic applications of receiver operating characteristic (ROC) curves. Radiology, 110:89-95, 1974. • J.A. Hanley. Receiver operating characteristic (ROC) methodology: The state of the art. Critical Reviews in Diagnostic Imaging, 29:307-35, 1989. • A.R. Henderson. Assessing test accuracy and its clinical consequences: a primer for receiver operating characteristic curve analysis. [Review] Annals of Clinical Biochemistry,30:521-39, 1993. • J.K. Hsiao, J.J. Bartko, and W.Z. Potter. Diagnosing diagnoses: Receiver operating characteristic methods and psychiatry. Archives of General Psychiatry, 46:664-7, 1989. • C.E. Metz. Basic principles of ROC analysis. Seminars In Nuclear Medicine, 8:283-98, 1978. • C.E. Metz. ROC methodology in radiologic imaging. Investigative Radiology, 21:720-733, 1986. • C.E. Phelps. The methodologic foundations of studies of the appropriateness of medical care [see comments]. New England Journal of Medicine, 329(17):1241-5, 1993. • Y.T. van der Schouw, A.L. Verbeek, and S.H.Ruijs. Guidelines for the assessment of new diagnostic tests. Invest Radiol, 30:334-40,1995. • M. Schulzer. Diagnostic tests: a statistical review. [Review] Muscle and Nerve, 17(7):815-9, 1994. • J.A. Swets. ROC analysis applied to the evaluation of medical imaging techniques. Investigative Radiology, 14:109-21, 1979. • J.A. Swets. Measuring the accuracy of diagnostic systems. Science, 240:1285-93, 1988. • D.A. Turner. An intuitive approach to receiver operating characteristic curve analysis. Journal of Nuclear Medicine, 19:213-20, 1978. C. DESIGN OF ROC STUDIES / BIAS. • C.B. Begg and R.A. Greenes. Assessment of diagnostic tests when disease verification is subject to selection bias. Biometrics, 39:207-215, 1983. • C.B. Begg. Biases in the assessment of diagnostic tests. Statistics in Medicine, 6:411-423, 1987. • C.B. Begg. Experimental design of medical imaging trials: Issues and options. Investigative Radiology, 24:934-936, 1989. • C.B. Begg and B.J. McNeil. Assessment of radiologic tests: Control of bias and other design considerations. Radiology, 167:565-9, 1988. • K.S. Berbaum, D.D. Dorfman, and E.A. Franken, Jr. Measuring observer performance by ROC analysis: Indications and complications. Investigative Radiology, 24:228-33, 1989. • G.A. Diamond. ROC steady: A receiver operating characteristic curve that is invariant relative to selection bias. Medical Decision Making, 7:238-43, 1987. • G.A. Diamond. ROCky III. Medical Decision Making, 7:247-9, 1987. • G.A. Diamond. Can the discriminant accuracy of a test be determined in the face of selection bias? Medical Decision Making,11:48-56, 1991. • G.A. Diamond. Scotched on the ROCs. Medical Decision Making, 11:198-200, 1991. • G.A. Diamond. What is the effect of sampling error on ROC analysis in the face of verification bias? Medical Decision Making, 12:155-6, 1992. • C.A. Gatsonis and X.H. Zhou. Group sequential designs for comparative ROC studies in diagnostic medicine, Technical report, Department of Health Care Policy, Harvard Medical School, Boston, Massachusetts, 1993. • W.F. Good, D.Gur, W.H. Straub, and J.H. Feist. Comparing imaging systems by ROC studies: Detection versus interpretation. Investigative Radiology, 24:932-3, 1989. • R.Gray, C.B. Begg, and R.A. Greenes. Construction of receiver operating characteristic curves when disease verification is subject to selection bias. Medical Decision Making, 4:151-64, 1984. • R.A. Greenes and C.B. Begg. Assessment of diagnostic technologies: Methodology for unbiased estimation from samples of selectively-verified patients. Investigative Radiology, 20:751-756, 1985. • D.Gur, J.L. King, H.E. Rockette, C.A. Britton, F.L. Thaete, and R.J. Hoy. Practical issues of experimental ROC analysis: Selection of controls. Investigative Radiology, 25:583-6, 1990. • D.Gur, H.E. Rockette, W.F. Good, B.S. Slasky, L.A. Cooperstein, W.H. Straub, N.A. Obuchowski, and C.Metz. Effect of observer instruction on ROC study of chest images. Investigative Radiology, 25:230-4, 1990. • J.A. Hanley and C.B. Begg. Response to ROC steady. Medical Decision Making, 7:244-6, 1987. • J.A. Hanley. Verification bias and the one-parameter logistic ROC curve-some clarifications. [Comment on Diamond 1991]. Medical Decision Making, 11:203-7, 1991. • M.G. Hunink and C.B. Begg. Diamond's correction method-a real gem or just cubic zirconium? [Comment on Diamond 1991]. Medical Decision Making, 11:201-3, 1991. • J.L. King, C.A. Britton, G.D., H.E. Rockette, and P.L. Davis. On the validity of the continuous and discrete confidence rating scales in receiver operating characteristic studies. Investigative Radiology,28:962-963, 1993. • C.E. Metz. Some practical issues of experimental design and data analysis in radiological ROC studies. Investigative Radiology, 24:234-45, 1989. • C.E. Metz and J.H. Shen. Gains in accuracy from replicated readings of diagnostic images: Prediction and assessment in terms of ROC analysis. Medical Decision Making, 12:60-75, 1992. • N.A. Obuchowski. Computing sample size for receiver operating characteristic studies. Investigative Radiology, 29:238-243, 1994. • D.F. Ransohoff and A.R. Feinstein. Problems of spectrum and bias in evaluating the efficacy of diagnostic tests. New England Journal of Medicine, 299:926-930, 1978. • H.E. Rockette, D.Gur, L.A. Cooperstein, N.A. Obuchowski, J.L. King, C.R. Fuhrman, E.K. Tabor, and C.E. Metz. Effect of two rating formats in multi-disease ROC study of chest images. Investigative Radiology, 25:225-9, 1990. • H.E. Rockette, D.Gur, and C.E. Metz. The use of continuous and discrete confidence judgments in receiver operating characteristic studies of diagnostic imaging techniques. Investigative Radiology, 27:169-172, 1992. D. CURVE-FITTING. • I.G. Abrahamson and H.Levitt. Statistical analysis of data from experiments in human signal detection. [Maximum likelihood ROC curve - general location-scale family]. Journal of Mathematical Psychology, 6:391-417, 1969. • G. Campbell and M.V. Ratnaparkhi. An application of Lomax distributions in receiver operating characteristic (ROC) curve analysis. Comm.Statist. A--Theory Methods :1681-1697,1993. • D.D. Dorfman and E.Alf, Jr. Maximum likelihood estimation of parameters of signal detection theory-a direct solution.[Binormal ROC curve-dichotomous diagnostic test]. Psychometrika, 33:117-124, • D.D. Dorfman and K.S. Berbum. Degeneracy and discrete receiver operating characteristic rating data. Academic Radiology,2:907-915,1995. • D.D. Dorfman and E.Alf, Jr. Maximum likelihood estimation of parameters of signal detection theory and determination of confidence intervals-rating method data. [Binormal ROC curve-ordinal data]. Journal of Mathematical Psychology, 6:487-496, 1969. • D.R. Grey and B.J.T. Morgan. Some aspects of ROC curve-fitting: Normal and logistic models. [Corrections and improvements to Dorfman-Alf; minimum chi-square bilogistic]. Journal of Mathematical Psychology, 9:128-139, 1972. • J.A. Hanley. The robustness of the ``binormal'' assumptions used in fitting ROC curves. Medical Decision Making, 8:197-203, 1988. • J.A. Hanley. The use of the 'binormal' model for parametric roc analysis of quantitative diagnostic tests. Statistics in Medicine, 15 : 1575-1585, 1996. • H.L. Kundel, D.D. Dorfman and K.S. Berbaum. Degeneracy and discrete receiver operating characteristic rating data, 2 :907-15,1995. • C.E. Metz and X. Pan. "Proper" binormal ROC curves: theory and maximum-likelihood estimation, unpublished manuscript, University of Chicago. • D. Mossman. Resampling techniques in the analysis of non-binormal ROC data. Med Decis Making , 15:358-66,1995. • J.C. Ogilvie and C.D. Creelman. Maximum-likelihood estimation of receiver operating characteristic curve parameters. Journal of Mathematical Psychology, 5:377-391, 1968. • E. Somoza. Eccentric Diagnostic Tests. Medical Decision Making, 16:15-23, 1996. • J.A. Swets. Form of empirical ROCs in discrimination and diagnostic tasks: Implications for theory and measurement of performance. Psychological Bulletin, 99:181-198, 1986. • A.N. Tosteson and C.B. Begg. A general regression methodology for ROC curve estimation. Medical Decision Making, 8:204-15, 1988. • A.N. Tosteson, J. Wittenberg, and C.B. Begg. ROC curve regression analysis: the use of ordinal regression models for diagnostic test assessment. Environmental Health Perspectives, 8:73-8, 1994. • X.H. Zhou. Testing an underlying assumption on a ROC curve based on rating data. Med Decis Making, 15:276-82,1995. • R.M. Centor and J.S. Schwartz. An evaluation of methods for estimating the area under the receiver operating characteristic (ROC) curve. Medical Decision Making, 5:149-56, 1985. • C. Cox. Location-scale cumulative odds models for ordinal data: a generalized non-linear model approach. Statistics in Medicine,14:1191-203, 1995. • J.A. Hanley and B.J. McNeil. The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology, 143:29-36, 1982. • R.D. Hays. ROC: Estimation of the area under a receiver operating characteristic curve. Appl. Psych. Meas, 14 :208,1990. • J.Hilden. The area under the ROC curve and its competitors. Medical Decision Making, 11:95-101, 1991. • D.Katz and B.Foxman. How well do prediction equations predict? Using receiver operating characteristic curves and accuracy curves to compare validity and generalizability. Epidemiology,4 (4):319-26, 1993. • D.K. McClish. Analyzing a portion of the ROC curve. Medical Decision Making, 9:190-5, 1989. • T.O. Nelson. ROC curves and measures of discrimination accuracy: A reply to Swets. [Comment on Swets 1986a]. Psychological Bulletin, 100:128-132, 1986. • J.J. Riera-Diaz. The estimation of event related potentials affected by random shifts and scalings. International Journal of Bio-Medical Computing,38:109-20,1995. • A.J. Simpson and M.J. Fitter. What is the best index of detectability? [Recommends the area under the binormal curve]. Psychological Bulletin, 80:481-488, 1973. • J.A. Swets. Indices of discrimination or diagnostic accuracy: Their ROCs and implied models. Psychological Bulletin, 99:100-117, 1986. • A.Agresti. A survey of models for repeated ordered categorical response data. Statistics in Medicine, 8:1209-1224, 1989. • D.Bamber. The area above the ordinal dominance graph and the area below the receiver operating graph. Journal of Mathematical Psychology, 12:387-415, 1975. • C.A. Beam. Random-effects models in the receiver operating characteristic curve-based assessment of the effectiveness of diagnostic imaging technology: concepts, approaches, and issues. Academic Radiology, 2: supple: 4-13, 1995. • C.A. Beam and H.S. Wieand. A statistical method for the comparison of a discrete diagnostic test with several continuous diagnostic tests. Biometrics, 47:907-919, 1991. • D.A. Block. Comparing two diagnostic tests against the same "gold standard" in the same sample. Biometrics, 53: 73-85, 1997. • C. Cox. Location-scale cumulative odds models for ordinal data: a generalized non-linear model approach. Statistics in Medicine, 15 :1191-203, 1995. • E.R. DeLong, W.B. Vernon, and R.R. Bollinger. Sensitivity and specificity of a monitoring test. Biometrics, 41:947-958, 1985. • E.R. DeLong, D.M. DeLong, and D.L. Clarke-Pearson. Comparing the areas under two or more correlated receiver operating characteristic curves: A nonparametric approach. Biometrics, 44:837-45, • D.D. Dorfman, K.S. Berbaum, and C.E. Metz. Receiver operating characteristic rating analysis: Generalization to the population of readers and patients with the jackknife method. Investigative Radiology, 27:723-31, 1992. • B. Emir, S.Wieand, J.Q. Su, and S. Cha. Analysis of repeated markers used to predict progression of cancer. Statistics in Medicine, 17: 2563-78, 1998. • B. Emir, S. Wieand, S.-H. Jung, and Z. Ying. Comparison of diagnostic markers with repeated measurements: A non-parametric ROC curve approach. Staistics in Medicine, 19: 511-23, 2000. • C.A. Gatsonis. Random-effects models for diagnostic accuracy data. Academic Radiology, 2:suppl:14-21, 1995. • J.A. Hanley. Alternative approaches to receiver operating characteristic analyses. Radiology, 168:568-70, 1988. • J.A. Hanley and B.J. McNeil. A method of comparing the areas under receiver operating characteristic curves derived from the same cases. Radiology, 148:839-43, 1983. • R.A. Hilgers. Distribution-free confidence bounds for ROC curves. Methods of Information In Medicine, 30:96-101, 1991. • F. Hsieh and B.W. Turnbull. Nonparametric and semiparametric estimation of the receiver operating characteristic curve. Annals of Statistics, 24: 25-40, 1996. • K. Jensen, H.-H. Muller and H. Schafer. Regional confidence bands for ROC curves. Statistics in Medicine, 19: 493-509, 2000. • V. Kairisto and A. Poola. Software for illustrative presentation of basic clinical characteristics of laboratory tests--GraphROC for Windows. Scandinavian Journal of Clinical & Laboratory Investigation, 222(suppl):43-60, 1995. • K. Kim. A bivariate cumulative probit regression model for ordered categorical data. Statistics in Medicine, 14 :1341-1352, 1995. • G.Ma and W.J. Hall. Confidence bands for receiver operating characteristic curves. Medical Decision Making, 13:191-197, 1993. • D.K. McClish. Comparing the areas under more than two independent ROC curves. Medical Decision Making, 7:149-55, 1987. • D.K. McClish. Determining a range of false-positive rates for which ROC curves differ. Medical Decision Making, 10:283-7, 1990. • B.J. McNeil and J.A. Hanley. Statistical approaches to the analysis of receiver operating characteristic (ROC) curves. Medical Decision Making, 4:137-50, 1984. • A. Moise, B. Clement, M. Raissis and P. Manopoulos. A test for crossing receiver operating characteristic (ROC) curves. Comm. Statist. A -- Theory Methods, 17:1985-2004,1988. • C.E. Metz. Statistical analysis of ROC data in evaluating diagnostic performance. In D.E. Herbert and R.H. Myers, editors, Multiple regression analysis: Applications in the health sciences, pages 365-384,American Institute of Physics, New York,1986. • C.E. Metz. Quantification of failure to demonstrate statistical significance: The usefulness of confidence intervals. Investigative Radiology, 28:59-63, 1993. • C.E. Metz and H.B. Kronman. Statistical significance tests for binormal ROC curves. Journal of Mathematical Psychology, 22:218-243, 1980. • C.E. Metz, J.H. Shen, and B.A. Herman. New methods for estimating a binormal ROC curve from continuously-distributed test results. Unfinished manuscript; presented at the 1990 Joint Statistical Meetings in Anaheim, California. • C.E. Metz, P.-L. Wang, and H.B. Kronman. A new approach for testing the significance of differences between ROC curves measured from correlated data. In F.Deconinck, editor, Information Processing in Medical Imaging: Proceedings of the Eighth Conference, 432-445.Martinus Nijhoff Publishers, The Hague, 1984. • A.Moise, B.Clement, and M.Raissis. A test for crossing receiver operating characteristic (ROC) curves. Communications in Statistics-Theory and Methods, 17:1985-2003, 1988. • P.A. Murtaugh. ROC curves with multiple marker measurements. Biometrics, 51 :1514-22, 1995. • N.A. Obuchowski. Multireader, multimodality receiver operating characteristic curve studies: hypothesis testing and sample size estimation using an analysis of variance approach with dependent observations. Academic Radiology, 2: supple: 22-29,1995. • N.A. Obuchowski Nonparametric analysis of clustered ROC curve data. Unpublished manuscript, 1996 ENAR meeting. • H.E. Rockette, N.A. Obuchowski, and D.Gur. Nonparametric estimation of degenerate ROC data sets used for comparison of imaging systems. Investigative Radiology, 25:835-7, 1990. • H.E. Rockette, N.Obuchowski, C.E. Metz, and D.Gur. Statistical issues in ROC curve analysis. Proceedings of the SPIE, 1234:111-119, 1990. • D.E. Shapiro. The interpretation of diagnostic tests. Statistical Methods in Medical Research, 8: 113-134, 1999. • E.K. Shultz. Multivariate receiver-operating characteristic curve analysis: prostate cancer screening as an example.[Review] Clinical Chemistry, 41:1248-55, 1995. • P.J. Smith, T.J. Thompson, M.M. Engelgau and W.H. Herman. A generalized linear model for analysing receiver operating characteristic curves. Statistics in Medicine, 15 :323-33, 1996. • E. Svensson, and S. Holm. Separation of systematic and random differences in ordinal rating scales. Statistics in Medicine,13:2437-53, 1994. • M. Swaving, H. Van Houwelingen, F.P. Ottes, T. Steerneman. Statistical comparison of ROC curves from multiple readers. Medical Decision Making, 16 : 143-153, 1996. • M.L. Thompson and W.Zucchini. On the statistical analysis of ROC curves. Statistics In Medicine, 8:1277-90, 1989. • A.L. Toledano and C.A. Gatsonis. Regression analysis of correlated receiver operating characteristic data. Academic Radiology,2:supple:30--36, 1995. • A.Y. Toledano and C.A. Gatsonis. Ordinal regression methodology for roc curves derived from correlated data. Statistics in Medicine,15:1807-26,1996. • S. Wieand, M.H. Gail, B.R. James, and K.L. James. A family of nonparametric statistics for comparing diagnostic markers with paired or unpaired data. Biometrika , 76:585-592, 1989. • X.H. Zhou. Testing an underlying assumption on a ROC curve based on rating data. Medical Decision Making,15:276-282,1995. • X.H. Zhou Testing an underlying assumption on a ROC curve based on rating data. Medical Decision Making, 15 :276-82, 1995. • X.H. Zhou and C.A. Gatsonis. A smple method for comparing correlated roc curves using incomplete data. Statistics in Medicine, 15 :1687-1693, 1996. • K.H. Zou, C. M Tempany, J. R. Fielding, and S. G. Silverman. Original smooth receiver operating characteristic curve estimation from continous data: Statistical methods for analyzing the predictive value of spiral CT of ureteral stones. Academic Radiology, 5: 680-687, 1998. • K.H. Zou, W.J. Hall and D.E.Shapiro. Smooth nonparametric receiver operating characteristic curves for continuous diagnostic tests. Statistics in Medicine, 16: 2143-2156, 1997. • K.H. Zou and W. J. Hall. Two transformation models for estimating an ROC curve derived from continous data. Journal of Applied Statistics, 26: 621-631, 2000. • K.H. Zou. Comparison of correlated ROC curves derived from repeated diagnostic test data. Academic Radiology, 8: 225-233, 2001. • T.A. Alonzo and M.S. Pepe. Using a combination of reference tests to assess the accuracy of a new diagnostic test. Statistics in Medicine, 18: 2987-3003, 1999. • G.Baker, S. Evaluating a new test using a reference test with estimated sensitivity and specificity. Communication in Statistics, Part A-Theory and Methods, 20:2739-2752, 1991. • C.B. Begg and C.E. Metz. Consensus diagnoses and ``gold standards''. [Comment on Henkelman, Kay and Bronskill 1990]. Medical Decision Making, 10:29-30, 1990. • G. Campbell and J.M. Deleo. Fundamentals of fuzzy receiver operating characteristic (ROC) functions. Comput. Sci. Statist.: Proc. 21st Symp. Interface (Kenneth Berk and Linda Malone, eds.), Amer. Statist. Assoc. (Alexandria, VA) :543-548,1989. • P.Deneef. Evaluating rapid test for streptococcal pharyngitis: The apparent accuracy of a diagnostic test when there are errors in the standard of comparison. Medical Decision Making, 7:92-96, • S.V. Faraone and M.T. Tsuang. Measuring diagnostic accuracy in the absence of a "gold standard".[Review] American Journal of Psychiatry, 151(5):650-7, 1994. • R.A. Greenberg and J.F. Jekel. Some problems in the determination of the false positive and false negative rates of tuberculin tests. American Review of Respiratory Disease, 100:645-650, 1969. • J.J. Gart and A.A. Buck. Comparison of a screening test and a reference test in epidemiological studies: II. a probabilistic model for the comparison of diagnostic tests. American Journal of Epidemiology, 83:593-602, 1966. • D.A. Grayson. Statistical diagnosis and the influence of diagnostic error. Biometrics, 43:975-984, 1987. • W.J. Hall & D.E. Shapiro A receiver operating characteristic curve for subjective probability of disease assessments in the absence of a gold standard (abstract only). Medical Decision Making, 14:430, 1994. • R.M. Henkelman, I.Kay, and M.J. Bronskill. Receiver operator characteristic (ROC) analysis without truth. Medical Decision Making, 10:24-9, 1990. • S.L. Hui and S.D. Walter. Estimating the error rates of diagnostic tests. Biometrics, 36:167-171, 1980. • S.L. Hui and X.H. Zhou. Evaluation of diagnostic tests without gold standards. Statistical methods in medical research, 7: 354-370, 1998. • H.C. Kraemer. The robustness of common measures of 2X2 association to bias due to misclassifications. American Statistician, 39:286-290, 1985. • N.J.D. Nagelkerke, V.Fidler, and M.Buwalda. Instrumental variables in the evaluation of diagnostic test procedures when the true disease state is unknown. Statistics in Medicine, 7:739-744, 1988. • C.E. Phelps and A.Hutson. Estimating diagnostic test accuracy using a `fuzzy' gold standard. Medical Decision Making, 15:44-57, 1995. • R.M. Poses, R.D. Cebul, and R.M. Centor. Evaluating physicians' probabilistic judgements. Medical Decision Making, 8:233-240, 1988. • R.M. Poses, C.B. Bekes, R.L. Winkler, W.E. Scott, and F.J. Copare. Are two (inexperienced) heads better than one (experienced) head? Archives of Internal Medicine, 150:1874-1878, 1990. • M.Staquet, M.Rozencweig, Y.J. Lee, and F.M. Muggia. Methodology for the assessment of new dichotomous diagnostic tests. Journal of Chronic Disease, 34:599-610, 1981. • V. Torrance-Rynard and S. Walter. Effects of dependent errors in the assessment of diagnostic test performance. Statistics in Medicine, 16: 2157-2175, 1997. • J.S. Uebersax. Statistical modeling of expert ratings on medical treatment appropriateness. Journal of the American Statistical Association, 88:421-427, 1993. • J.S. Uebersax and W.M. Grove. Latent class analysis of diagnostic agreement. Statistics in Medicine, 9:559-572, 1990. • J.S. Uebersax and W.M. Grove. A latent trait finite mixture model for the analysis of rating agreement. Biometrics, 49:823-835, 1993. • P.M. Vacek. The effect of conditional dependence on the evaluation of diagnostic tests. Biometrics, 41:959-968, 1985. • P.N. Valenstein. Evaluating diagnostic tests with imperfect standards. American Journal of Clinical Pathology, 93:252-258, 1990. • S.D. Walter and L.M. Irwig. Estimation of test error rates, disease prevalence, and relative risk from misclassified data. Journal of Clinical Epidemiology, 41:923-38, 1988. • X.H. Zhou and R. Higgs. Assessing the relative accuracies of two screening tests in the presence of verification bias. Statistics in Medicine, 19: 1697-1705, 2000. • X.H. Zhou and R. Higgs. COMPROC and CHECKNORM: computer programs for comparing accuracies of diagnostic tests using ROC curves in the presence of verification bias. Computer Methods and Programs in Biomedicine, 57: 179-186,1998. • X.H. Zhou and C.A. Rodenberg. Estimating a ROC curve in the presence of verification bias. Communication in Statistics, Ther., 27: 635-657. • X.H. Zhou. Maximum likelihood estimators of sensitivity and specificity corrected for verification bias. Commun. Statist. Theor. Meth, 22: 3177-3198, 1993. • X.H. Zhou. Effect of verification bias on positive and negative predictive values. Statistics in Medicine, 13: 1737-1745, 1994. • X.H. Zhou. Nonparametric ML estimate of a ROC area corrected for verification bias. Biometrics, 52: 310-316, 1996. • X.H. Zhou. Comparing the correlated areas under the ROC curves of two diagnostic tests in the presence of verification bias. Biometrics, 54, 453-470, 1998. • X.H. Zhou. Comparing accuracies of two screening tests in the presence of verication bias. JRSS, Ser C, 47: 135-147, 1998. • X.H. Zhou. Correction for verificatin bias in studies of a diagnostic test's accuracy. Statistics Methods in Medical Research, 7: 337-353, 1998. H. META-ANALYSIS. • C.. Berkey, J.J. Anderson and D.C. Hoaglin. Multiple-outcome meta-analysis of clinical trials. Statistics in Medicine, 15 :537-557, 1996. • C.S. Berkey, D.C. Hoaglin, F. Mosteller and G.A. Colditz. A random-effects regression model for meta-analysis. Statistics in Medicine, 14 :195-411, 1995. • W.J. Hall. Efficiency of weighted averages, Technical Report 92/02, Department of Biostatistics, University of Rochester, Rochester, New York, Feb. 1992. • M.Helfand. Meta-analysis in deriving summary estimates of test performance. Medical Decision Making, 13:182-3, 1993. • L.Irwig, A.N.A. Tosteson, C.Gatsonis, J.Lau, G.Colditz, T.Chalmers, and F.Mosteller. Guidelines for meta-analyses evaluating diagnostic tests. Annals of Internal Medicine, 120:667-676, 1994. • J.W. P.F. Kardaun and O.J. W.F. Kardaun. Comparative diagnostic performance of three radiological procedures for the detection of lumbar disk herniation. Methods of Information in Medicine, 29:12-22, 1990. • J.W. P.F. Kardaun, J.Schipper, and R.Braakman. CT, myelography, and phlebography in the detection of lumbar disk herniation: An analysis of the literature. American Journal of Neuroradiology, 10:1111-1122, 1989. • D.L. Kent, D.R. Haynor, E.B. Larson, and R.A. Deyo. Diagnosis of lumbar spinal stenosis in adults: A meta-analysis of the accuracy of CT, MR, and myelography. American Journal of Roentgenology, 158:1135-44, 1992. • B.Littenberg, A.I. Mushlin, and the Diagnostic Technology AssessmentConsortium. Technetium bone scanning in the diagnosis of osteomyelitis: A meta-analysis of test performance. Journal of General Internal Medicine, 7:158-163, 1992. • B.Littenberg and L.E. Moses. Estimating diagnostic accuracy from multiple conflicting reports: A new meta-analytic method. Medical Decision Making, 13:313-321, 1993. • D.K. McClish. Combining and comparing area estimates across studies or strata. Medical Decision Making, 12:274-9, 1992. • A.S. Midgette, T.A. Stukel, and B.Littenberg. A meta-analytic method for summarizing diagnostic test performances. Medical Decision Making, 13:253-257, 1993. • L.E. Moses, D.E. Shapiro, and B.Littenberg. Combining independent studies of a diagnostic test into a summary ROC curve: Data-analytic approaches and some additional considerations. Statistics in Medicine, 12:1293-1316, 1993. • H.E. Rockett, W.L. Campbell, C.A. Britton, J.M. Holbert, J.L. King, D. Gur. Empiric assessment of parameters that affect the design of multireader receiver operating characteristic studies. Academic Radiology, 6: 723-729. • C.M. Rutter and C.Gatsonis. Regression methods for meta-analysis of diagnostic test data. Academic Radiology,2:supple:48-56,1995. • D.E. Shapiro. Meta-analysis in ROC space: Combining independent studies of a diagnostic test, Technical Report 421, [Ph.D. Dissertation]. Department of Statistics, Stanford University, March • D.E. Shapiro. Issues in combining independent estimates of sensitivity and specificity of a diagnostic test. Academic Radiology,2:Suppl:37-47,1995. • J.A. Swets. Sensitivities and specificities of diagnostic tests. Journal of the American Medical Association, 248:548-549, 1982. • X.H. Zhou Empirical Bayes combination of estimated areas under ROC curves using estimating equations. Medical Decision Making, 16 :24-28, 1996. • X.H. Zhou Methods for combining rates from several studies. Statistics in Medicine, 18: 557-566, 1999. • G.Campbell and J.M. Deleo. Fundamentals of fuzzy receiver operating characteristic (ROC) functions. Computer Science and Statistics: Proceedings of the 21st Symposium on the Inference, pages 543-548, 1989. • G.Campbell, D.Levy, A.Lausier, M.R. Horton, and J.J. Bailey. Nonparametric comparison of entire ROC curves for computerized ECG left ventricular hypertrophy algorithms using data from the Framingham Heart Study. Journal of Electrocardiology, 22 Suppl:152-7, 1989. • G.Campbell, D.Levy, and J.J. Bailey. Bootstrap comparison of fuzzy ROC curves for ECG-LVH algorithms using data from the Framingham Heart Study. Journal of Electrocardiology, 23 Suppl:132-7, • G.Campbell. Advances in statistical methodology for the evaluation of diagnostic and laboratory tests. Statistics in Medicine, 13:499-508, 1994. • M.J. Goddard and I.Hinberg. Receiver operator characteristic (ROC) curves and non-normal data: An empirical study. Statistics In Medicine, 9:325-37, 1990. • S.W. Greenhouse and N.Mantel. The evaluation of diagnostic tests. Biometrics, 6:399-412, 1950. • M.Jaraiedi and G.D. Herrin. A non-parametric method for fitting ROC curves for evaluation of industrial inspection performance. Applied Stochastic Models and Data Analysis, 4:79-88, 1988. • V. Kairisto. Optimal bin widths for frequency histograms and ROC curves [letter]. Clinical Chemistry, 41:766-7, 1995. • V. Kairisto. Software for illustrative presentation of basic clinical characteristics of laboratory tests--GraphROC for Windows. Scandinavian Journal of Clinical & Laboratory Investigation, 222 • K.Linnet. Comparison of quantitative diagnostic tests: Type I error, power, and sample size. Statistics in Medicine, 6:147-158, 1987. • K.Linnet. A review on the methodology for assessing diagnostic tests. Clinical Chemistry, 34:1379-1386, 1988. • K.Linnet. Assessing diagnostic tests by a strictly proper scoring rule. Statistics in Medicine, 8:609-618, 1989. • E.A. Robertson, M.H. Zweig, and A.C. VanSteirtghem. Evaluating the clinical efficacy of laboratory tests. American Journal of Clinical Pathology, 79:78-86, 1983. • S.Sukhatme and C.A. Beam. Stratification in nonparametric ROC studies. Biometrics, 50(1):149-63, 1994. • Y.T. van der Schouw, H.Straatman, and A.L. Verbeek. ROC curves and the areas under them for dichotomized tests: empirical findings for logistically and normally distributed diagnostic test results. Medical Decision Making, 14(4):374-81, 1994. • M.H. Zweig. Evaluation of the clinical accuracy of laboratory tests. Archives of Pathology Laboratory Medicine, 112:383-6, 1988. • M.H. Zweig and G.Campbell. Receiver operating characteristic (ROC) plots: A fundamental evaluation tool in clinical medicine. Clinical Chemistry, 39:561-577, 1993. • H. Zou, D.E. Shapiro and W.J. Hall. Smooth nonparametric receiver operating characteristic curves for continuous diagnostic tests (abstract only). Medical Decision Making, 14:431, 1994. • D.P. Chakraborty, E.S. Breatnach, M.V. Yester, B.Soto, and G.T. Barnes. Digital and conventional chest imaging: A modified ROC study of observer performance using simulated nodules. Radiology, 158:35-9, 1986. • D.P. Chakraborty. Maximum likelihood analysis of free-response receiver operating characteristic (FROC) data. Medical Physics, 16:561-8, 1989. • D.P. Chakraborty and L.H. Winter. Free-response methodology: Alternate analysis and a new observer-performance experiment. Radiology, 174:873-81, 1990. • P. DeNeef and D.L. Kent. Using treatment-tradeoff preferences to select diagnostic strategies: Linking the ROC curve to threshold analysis. Medical Decision Making, 13 :126-132,1993. • S. Dreiseitl, L. Ohno-Machado, M. Binder. Comparing three-class diagnostic tests by three-way ROC analysis. Medical Decision Making, 20: 323-331, 2000. • O.Gefeller and H.Brenner. How to correct for chance agreement in the estimation of sensitivity and specificity of diagnostic tests. Methods of Information in Medicine, 33:180-186, 1994. • C.Herrmann, E.Buhr, D.Hoeschen and S.Y. Fan SYnewblock. Comparison of ROC and AFC methods in a visual detection task. Medical Physics, 20(3):805-11, 1993. • K. Hnatkova, J.D. Poloniecki, A.J. Camm, and M.Malik Computation of multifactorial receiver operator and predictive accuracy characteristics. Computer Methods and Programs in Biomedicine, 42 (3):147-56, 1994. • W. Leisenring , M.S. Pepe G. Longton. A marginal regression modelling framework for evaluating medical diagnostic tests. 1996 ENAR meeting. Unpublished manuscript. • P.A. Murtaugh ROC curves with multiple marker measurements. Biometrics, 51 (4):1514-22,1995. • B. Reiser, D. Faraggi. Confidence intervals for the generalized ROC criterion. Biometrics, 53: 644-652, 2000. • H.E. Rockett, J.L. King, J.L. Medina, H.B. Eisen, M.L. Brown and D. Gur. Imaging systems evaluation: effect of subtle cases on the design and analysis of receiver operating characteristic studies. American Journal of Roentgenology, 165 :679-83,1995. • E.K. Shultz. Multivariate receiver-operating characteristic curve analysis: prostate cancer screening as an example. [Review] Clinical Chemistry,41:1248-55, 1995. • S.J. Starr, C.E. Metz, L.B. Lusted, and D.J. Goodenough. Visual detection and localization of radiographic images. Radiology, 116:533-8, 1975. • S.J. Starr, C.E. Metz, and L.B. Lusted. Comments on the generalization of receiver operating characteristic analysis to detection and localization tasks. Physics In Medicine and Biology, 22:376-81, 1977. • W.R. Steinbach and K.Richter. Multiple classification and receiver operating characteristic (ROC) analysis. Medical Decision Making, 7:234-7, 1987. • A. Taube. To judge the judges--kappa, ROC or what?. Arch Anat Cytol Pathol,43:227-33,1995. • L.L. Wheeless, R.D. Robinson, O.P. Lapets, C.Cox, A.Rubio, M.Weintraub, and L.J. Benjamin. Classification of red blood cells as normal, sickle, or other abnormal, using a single image analysis feature. Cytometry, 17(2):159-66, 1994. K. MEDICAL DECISION MAKING (OPTIMAL CUTOFF, UTILITY). • R.Bairagi and C.M. Suchindran. An estimator of the cutoff point maximizing sum of sensitivity and specificity. Sankhya, Series B, Indian Journal of Statistics, 51:263-269, 1989. • G.R. Bergus. When is a test positive? The use of decision analysis to optimize test interpretation. Family Medicine, 25(10):656-60, 1993. • D.Borowiak and J.F. Reed, III. Utility of combining two diagnostic tests. Computer Methods and Programs in Biomedicine, 35:171-175, 1991. • F.A. Connell and T.D. Koepsell. Measures of gain in certainty from a diagnostic test. American Journal of Epidemiology, 121:744-53, 1985. • P.DeNeef and D.L. Kent. Using treatment-tradeoff preferences to select diagnostic strategies: Linking the ROC curve to threshold analysis. Medical Decision Making, 13:126-32, 1993. • W.L. England. An exponential model used for optimal threshold selection on ROC curves. Medical Decision Making, 8:120-31, 1988. • A.Fajardo-Gutierrez, L.Yamamoto-Kimura, L.Yanez-Velasco, J.Garduno-Espinosa, and M.C. Martinez-Garcia. The utility of joint sensitivity and specificity curves applied to a diagnostic test. Salud Publica de Mexico, 36(3):311-7, 1994. • D.G. Fryback and J.R. Thornbury. The efficacy of diagnostic imaging. Medical Decision Making, 11:95-101, 1991. • P.F. Griner, R.J. Mayewski, A.I. Mushlin, and P.Greenland. Selection and interpretation of diagnostic tests and procedures: Principles and applications. Annals of Internal Medicine, 94:553-592, • W.J. Hall and C.E. Phelps. A clinically meaningful measure of the difference between diagnostic technologies. Medical Decision Making, 7:286, 1987. • H.C. Kraemer. Evaluating Medical Tests: Objective and Quantitative Guidelines. Sage Publications, Newbury Park, California, 1992. • C.E. Metz, D.J. Goodenough, and K.Rossmann. Evaluation of receiver operating characteristic curve data in terms of information theory, with applications in radiography. Radiology, 109:297-303, • D.Mossman and E.Somoza. Maximizing diagnostic information from the dexamethasone suppression test: An approach to criterion selection using receiver operating characteristic analysis. Archives of General Psychiatry, 46:653-60, 1989. • D.Mossman and E.Somoza. Assessing improvements in the dexamethasone suppression test using receiver operating characteristics analysis. Biological Psychiatry, 25:159-73, 1989. • D.A. Noe. Selecting a diagnostic study's cutoff value by using its receiver operating characteristic curve. Clinical Chemistry, 29:571-2, 1983. • J.C. Peirce and R.G. Cornell. Integrating stratum-specific likelihood ratios with the analysis of ROC curves. Medical Decision Making, 13:141-51, 1993. • C.E. Phelps and A.I. Mushlin. Focusing technology assessment using medical decision theory. Medical Decision Making, 8:279-289, 1988. • F.Sainfort. Evaluation of medical technologies: A generalized ROC analysis. Medical Decision Making, 11:208-20, 1991. • H.Schafer. Constructing a cut-off point for a quantitative diagnostic test. Statistics in Medicine, 8:1381-1391, 1989. • E.Somoza Classifying binormal diagnostic tests using separation-asymmetry diagrams with constant-performance curves. Medical Decision Making, 14:157-68, 1994. • E.Somoza Classification of diagnostic tests. International Journal of Bio-Medical Computing, 37 :41-55, 1994. • E.Somoza and D.Mossman. ``Biological markers'' and psychiatric diagnosis: Risk-benefit balancing using ROC analysis. Biological Psychiatry, 29:811-26, 1991. • E.Somoza and D.Mossman. Comparing and optimizing diagnostic tests: An information-theoretical approach. Medical Decision Making, 12:179-88, 1992. • E.Somoza and D.Mossman. Comparing diagnostic tests using information theory: The INFO-ROC technique. Journal of Neuropsychiatry and Clinical Neurosciences, 4:214-9, 1992. • E.Somoza, L.Soutullo-Esperon, and D.Mossman. Evaluation and optimization of diagnostic tests using receiver operating characteristic analysis and information theory. International Journal of Bio-Medical Computing, 24:153-89, 1989. • J.Vermont, J.L. Bosson, P.Franccois, C.Robert, A.Rueff, and J.Demongeot. Strategies for graphical threshold determination. Computer Methods and Programs In Biomedicine, 35:141-50, 1991. • C.D. Ward. The differential positive rate, a derivative of receiver operating characteristic curves useful in comparing tests and determining decision levels. Clinical Chemistry, 32:1428-9, 1986. • R.F. Raubertas, L.E. Rodewald, S.G. Humiston, and P.G. Szilagyi. ROC curves for classification trees. Medical Decision Making, 14:169-174, 1994. • J.K. Gohagan, E.L. Spitznagel, M.M. McCrate, and T.B. Frank. ROC analysis of mammography and palpation for breast screening. Investigative Radiology, 19:587-92, 1984. • J.E. Goin. ROC curve estimation and hypothesis testing: Applications to breast cancer detection. Pattern Recognition, 15:263-269, 1982. • X.M. Tu, J. Kowalski, G. Jia. Bayesian analysis of prevalence with covariates using simulation-based techniques: Applications to HIV screening. Statistics in Medicine, 18: 3059-3073, 1999. • G. Tusch. Evaluation of partial classification algorithms using ROC curves. Medinfo, 8 :904-8,1995. • N.A. Obuchowski and D.K. McClish. Sample size determination for diagnostic accuracy studies involving binormal ROC curve parameters. Statistics in Medicine, 16: 1529-1542, 1997. • N.A. Obuchowski. Sample size tables for receiver operating characteristic studies. AJR, 175: 603-108, 2000. • D. Faraggi. THe effect of random measurement error on receiver operating characteristic (ROC) curves. Statistics in Medicine, 19: 61-70, 2000. • M. Coffin, S. Sukhatme. Receiver opearating characteristic studies and measurement errors. Biometrics, 53: 823-837, 1997. • T.A. Alonzo, M.S. Pepe. Distribution-free ROC analysis using binary regression techniques. Biostatistics, 3(3):421-432, 2002. • W.S. Andrus and K.T. Bird. Editorial: Radiology and the receiver operating characteristic (ROC) curve. Chest, 67:378-9, 1975. • J.Ashton and M.L. Moeschberger. A SAS macro for estimating the error rates of two diagnostic tests, neither being a gold standard. Proceedings of the SAS Users Group International Conference, 995-996, 1988. • S.R. Aylward. Continuous mixture modeling via goodness-of-fit ridges. Pattern Recognition, 35(9):1821-1833, 2002. • S.V. Beiden, M.A. Maloof, R.F. Wagner. A general model for finite-sample effects in training and testing of competing classifiers. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(12):1561-1569, 2003. • S.V. Beiden, R.F. Wagner, G. Campbell. Components-of-variance models and multiple-bootstrap experiments: An alternative method for random-effects, receiver operating characteristic analysis. Academic Radiology, 7(5):341-349, 2000. • S.V. Beiden, R.F. Wagner, G. Campbell et al. Components-of-variance models for random-effects ROC analysis: The case of unequal variance structures across modalities. Academic Radiology, 8 (7):605-615, 2001. • S.V. Beiden, R.F. Wagner, G. Campbell et al. Analysis of uncertainties in estimates of components of variance in multivariate ROC analysis. Academic Radiology, 8(7):616-622, 2001. • B.M. Bennett. On tests for equality of predictive values for t diagnostic procedures. Statistics in Medicine, 4:535-539, 1985. • K. Bernstein, K. Matthews. Correlation of system performance parameters to ROC analysis of PET/CT images. Medical Physics, 31(6):1819-1819, 2004. • J.Brismar. Understanding receiver-operating-characteristic curves: A graphic approach. American Journal of Roentgenology, 157:1119-21, 1991. • W.S. Browner and T.B. Newman. Are all significant p values created equal? the analogy between diagnostic tests and clinical research. Journal of the American Medical Association, 257:2459-2463, • D.P. Chakraborty, K.S. Berbaum Observer studies involving detection and localization: Modeling, analysis, and validation. Medical Physics, 31(8):2313-2330, 2004. • H.P. Chan, B. Sahiner, R.F. Wagner, et al. Classifier design for computer-aided diagnosis: Effects of finite sample size on the mean performance of classical and neural network classifiers. Medical Physics, 26(12):2654-2668, 1999. • Y.Charpak, C.Blery, and C.Chastang. Designing a study for evaluating a protocol for the selective performance of preoperative tests. Statistics in Medicine, 6:813-822, 1987. • J. Chen J, K.M. Conigrave, P. Macaskill, et al. Combining carbohydrate-deficient transferrin and gamma-glutamyltransferase to increase diagnostic accuracy for problem drinking. Alcohol and Alcoholism, 38(6):574-582, 2003. • M.S. Chester. Human visual perception and ROC methodology in medical imaging. Physics In Medicine and Biology, 37:1433-76, 1992. • T.E. Cohn, D.G. Green, and W.P. Tanner, Jr. Receiver operating characteristic analysis: Application to the study of quantum fluctuation effects in optic nerve of rana pipiens. Journal of General Physiology, 66:583-616, 1975. • R. Dahyot, P. Charbonnier, F. Heits A Bayesian approach to object detection using probabilistic appearance-based models. Pattern Analysis and Applications, 7(3):317-332, 2004. • R.Detrano. Accuracy curves: An alternative graphical representation of probability data. Journal of Clinical Epidemiology, 42:983-6, 1989. • S. Dreiseitl, L. Ohno-Machado, M. Binder. Comparing three-class diagnostic tests by three-way ROC analysis. Medical Decision Making, 20(3):323-331, 2000. • D.C. Edwards, C.E. Metz, R.W. Nishikawa. The hypervolume under the ROC hypersurface of "near-guessing" and "near-perfect" observers in N-class classification tasks. IEEE Transactions of Medical Imaging, 24(3):293-299, 2005. • D.C. Edwards, C.E. Metz, M.A. Kupinski. Ideal observers and optimal ROC hypersurfaces in N-class classification. IEEE Transactions on Medical Imaging, 23(7):891-895, 2004. • D. Faraggi, B. Reiser, E.F. Schisterman. ROC curve analysis for biomarkers based on pooled assessments. Statistics in Medicine, 22(15):2515-2527, 2003. • T. Fawcett, P.A. Flach A response to Webb and Ting's On the application of ROC analysis to predict classification performance under varying class distributions. Machine Learning, 58(1):33-38, • A.R. Feinstein. On the sensitivity, specificity, and discrimination of diagnostic tests. Clinical Pharmacology and Therapeutics, 17:104-118, 1975. • C. Ferri, J. Hernandez-Orallo, M.A. Salido. Volume under the ROC surface for multi-class problems. Lecture Notes in Artificial Intelligence, 2837:108-120, 2003. • J.Fuentes and M.Martinez. An alternative to receiver-operating characteristic curves. Clinical Chemistry, 29:1445, 1983. • J. Furnkranz, P.A. Flach. ROC 'n' rule learning - Towards a better understanding of covering algorithms. Machine Learning, 58(1):39-77, 2005. • M.H. Gail and S.B. Green. A generalization of the one-sided two-sample Kolmogorov-Smirnov statistic for evaluating diagnostic tests. Biometrics, 32:561-570, 1976. • O. Gefeller and H. Brenner. How to correct for chance agreement in the estimation of sensitivity and specificiety of diagnostic tests. Meth. Inform. in Med, :180-6,1994. • A.J. Girling. Rank statistics expressible as integrals under P-P-plots and receiver operating characteristic curves. Journal of the Royal Statistical Society Series B-Statistical Methodology, 62:367-382, 2000. • P.Glasziou and J.Hilden. Decision tables and logic in decision analysis. Medical Decision Making, 6:154-60, 1986. • G. Gorog. An Excel program for calculating and plotting receiver-operator characteristic (ROC) curves, histograms and descriptive statistics. Computers in Biology and Medicine, 24(3):167-9, 1994. • M. Greiner, D. Pfeiffer, R.D. Smith. Principles and practical application of the receiver-operating characteristic analysis for diagnostic tests. Preventive Veterinary Medicine, 45(1-2):23-41, • D.H. Gustafson, F. Sainfort, M. Eichler, et al. Developing and testing a model to predict outcomes of organizational change. Health Services Research, 38(2):751-776, 2003. • K.O. Hajian-Tilaki, J.A. Hanley. Comparison of three methods for estimating the standard error of the area under the curve in ROC analysis of quantitative data. Academic Radiology, 9 (11):1278-1285, 2002. • A.P. Hallstrom and G.B. Trobaugh. Specificity, sensitivity, and prevalence in the design of randomized trials: A univariate analysis. Controlled Clinical Trials, 6:128-135, 1985. • T. Hansen, H. Neumann. A biologically motivated scheme for robust junction detection. Lecture Notes in Computer Science, 2525:16-26, 2002 • M.B. Harrington. Some methodological questions concerning receiver operating characteristic (ROC) analysis as a method for assessing image quality in radiology. Journal of Digital Imaging, 3:211-8, 1990. • R.D. Hays. ROC: Estimation of the area under a receiver operating characteristic curve. Applied Psychological Measurement Inc., 14:208-208, 1990. • J.W. Hoppin, M.A. Kupinski, G.A. Kastis, et al. Objective comparison of quantitative imaging modalities without the use of a gold standard. IEEE Transactions on Medical Imaging, 21(5):441-449, • J.M. Irvine. Assessing target search performance: the free-response operator characteristic model. Optical Engineering, 43(12):2926-2934,2004. • H. Ishwaran, C.A. Gatsonis. A general class of hierarchical ordinal regression models with applications to correlated ROC analysis. Canadian Journal of Statistics-Revue Canadienne de Statistique, 28(4):731-750, 2000. • R.H. Jones and M.W. McClatchey. Beyond sensitivity, specificity and statistical independence. Statistics in Medicine, 7:1289-1295, 1988. • V. Kairisto and A. Poola. Software for illustrative presentation of basic clinical characteristics of laboratory tests--GraphROC for Windows. Scandinavian Journal of Clinical & Laboratory Investigation, 222 (supple):43-60,1995. • L.Kuncheva and K.Andreeva. DREAM: a shell-like software system for medical data analysis and decision support. Computer Methods and Programs in Biomedicine, 40(2):73-81, 1993. • M.A. Kupinski,D.C. Edwards, M.L. Giger, et al. Ideal observer approximation using Bayesian classification neural networks. IEEE Transactions on Medical Imaging, 20(9):886-899, 2001. • P.A. Lachenbruch. Multiple reading procedures: The performance of diagnostic tests. Statistics in Medicine, 7:549-557, 1988. • A.R. Laird, B.P. Rogers, M.E. Meyerand. Comparison of Fourier and wavelet resampling methods. Magnetic Resonance in Medicine, 51(2):418-422, 2004. • R.A. Lew and P.S. Levy. Estimation of prevalence on the basis of screening tests. Statistics in Medicine, 8:1225-1230, 1989. • R.B. Lirio, I.C. Donderiz, and M.C. PerezAbalo. Maximum likelihood estimation of signal detection model parameters for the assessment of two-stage diagnostic strategies. International Journal of Bio-Medical Computing, 31:117-26, 1992. • B. Liu, C.E. Metz, Y.L. Jiang. An ROC comparison of four methods of combining information from multiple images of the same patient. Medical Physics, 31(9):2552-2563, 2004. • A.Y. Liu, E.F. Schisterman, E. Teoh. Sample size and power calculation in comparing diagnostic accuracy of biomarkers with pooled assessments. Journal of Applied Statistics 31(1):49-59, 2004. • A.Y. Liu, E.F. Schisterman, Y. Zhu. On linear combinations of biomarkers to improve diagnostic accuracy. Statistics in Medicine, 24(1):37-47, 2005. • Y. Lu, D.N. Heller, S.J. Zhao. Receiver operating characteristic (ROC) analysis for diagnostic examinations with uninterpretable cases. Statistics in Medicine, 21(13):1849-1865, 2002. • K. Lui. A discussion on the conventional estimator of sensitivity and specificity in multiple tests. Statistics in Medicine, 8:1231-1240, 1989. • P. Macaskill. Empirical Bayes estimates generated in a hierarchical summary ROC analysis agreed closely with those of a full Bayesian analysis. Journal of Clinical Epidemiology, 57(9):925-932, • F.A. Mann, C.F. Hildebolt, and A.J. Wilson. Statistical analysis with receiver operating characteristic curves. Radiology, 184:37-8, 1992. • R.J. Marshall. The predictive value of simple rules for combining two diagnostic tests. Biometrics, 45:1213-1222, 1989. • C. Metz. Models in medicine IV. ROC analysis: Methodology and practical applications. Medical Physics, 30(6):1442-1442, 2003. • D.K. McClish and D.Quade. Improving estimates of prevalence by repeated testing. Biometrics, 41:81-89, 1985. • M.J. Monsour, A.T. Evans, and L.L. Kupper. Confidence intervals for post-test probability. Statistics in Medicine, 10:443-456, 1991. • A.I. Mushlin, A.S. Detsky, C.E. Phelps, P.W. O'Connor D.K. Kido, W.Kucharczyk, D.W. Giang, C.Mooney, C.M. Tansey and W.J. Hall. The accuracy of magnetic resonance imaging in patients with suspected multiple sclerosis. The Rochester-Toronto Magnetic Resonance Imaging Study Group. JAMA, 269(24):3146-51, 1993. • D.B. Mumford. ROC analysis. British Journal of Psychiatry, 157:301, 1990. • C.T. Nakas, C.T. Yannoutsos. Ordered multiple-class ROC analysis with continuous measurements. Statistics in Medicine, 23(22):3437-3449, 2004. • N.A. Obuchowski. How many observers are needed in clinical studies of medical imaging? American Journal of Roentgenology, 182(4):867-869, 2004. • N.A. Obuchowski. Fundamentals of clinical research for radiologists - ROC analysis American Journal of Roentgenology, 184(2):364-372, 2005. • W.Palmas, T.A. Denton, A.P. Morise, and G.A. Diamond. Afterimages: integration of diagnostic information through Bayesian enhancement of scintigraphic images American Heart Journal, 128(2):281-7, 1994 Aug. • J. Paetz. Finding optimal decision scores by evolutionary strategies. Artificial Intelligence in Medicine, 32(2):85-95, 2004. • S.H. Park, J.M. Goo, C.H. Jo. Receiver operating characteristic (ROC) curve: Practical review for radiologists. Korean Journal of Radiology, 5(1):11-18, 2004. • G. Parker, D. Hadzi-Pavlovic, L. Both, et al. Measuring disordered personality functioning: to love and to work reprised. Acta Psychiatrica Scandinavica, 110(3):230-239, 2004. • J.L. Pater and A.R. Willan. Clinical trials as diagnostic tests. Controlled Clinical Trials, 5:107-113, 1984. • P.E. Politser. Explanations of statistical concepts: Can they penetrate the haze of Bayes? Methods of Information in Medicine, 23:99-108, 1984. • C. Perlich, F. Provost, J.S. Simonoff. Tree induction vs. logistic regression: A learning-curve analysis. Journal of Machine Learning Research, 4(2):211-255, 2004. • G. Regehr, J. Colliver. On the equivalence of classic ROC analysis and the loss-function model to set cut points in sequential testing. Academic Medicine, 78(4):361-364, 2003. • M.Rodriguez, I.Moussa, J.Froning, M.Kochumian, and V.F. Froelicher. Improved exercise test accuracy using discriminant function analysis and "recovery ST slope". Journal of Electrocardiology, 26 (3):207-18, 1993. • H.E. Rockett, J.L. KIng, J.L. Medina, H.B. Eisen, M.L. Brown, D. Gur. Imaging systems evaluation: effect of subtle cases on the design and analysis of receiver operating characteristic studies. American Journal of Roentgenology,165:679-83, 1995. • B. Sahiner, H.P. Chan, N. Petrick, et al. Feature selection and classifier performance in computer-aided diagnosis: The effect of finite sample size. Medical Physics, 27(7):1509-1522, 2000. • M.S. Scher, B.L. Jones, D.A. Steppe, et al. Functional brain maturation in neonates as measured by EEG-sleep analyses. Clinical Neurophysiology, 114(5):875-882, 2003. • E.F. Schisterman, D. Faraggi, B.Reiser. Adjusting the generalized ROC curve for covariates. Statistics in Medicine, 23(21):3319-3331, 2004. • P.M. Shankar, V.A. Dumane, T. George, et al. Classification of breast masses in ultrasonic B scans using Nakagami and K distributions. Physics in Medicine and Biology, 48(14):2229-2240, 2003. • P.M. Shankar, V.A. Dumane, C.W. Piccoli, et al. Computer-aided classification of breast masses in ultrasonic B-scans using a multiparameter approach. IEEE Transactions on Ultrasonics Ferroelectrics and Frequency Control 50(8):1002-1009, 2003. • H.C. Sox, Jr. Probability theory in the use of diagnostic tests. Annals of Internal Medicine, 104:60-66, 1986. • C. Stephan, S. Wesseling, T. Schink, et al. Comparison of eight computer programs for receiver-operating characteristic analysis. Clinical Chemistry, 49(3):433-439, 2003. • R.G. Swensson, J.L. King, D. Gur. A constrained formulation for the receiver operating characteristic (ROC) curve based on probability summation. Medical Physics, 28(8):1597-1609, 2001. • Y. Takahashi, K. Murase, H. Higashino, et al. Receiver operating characteristic (ROC) analysis of images reconstructed with iterative expectation maximization algorithms. Annals of Nuclear Medicine, 15(6):521-525, 2001. • A.Taube. Sensitivity, specificity and predictive values: A graphical approach. Statistics in Medicine, 5:585-591, 1986. • L.A. Thibodeau. Evaluating diagnostic tests. Biometrics, 37:801-804, 1981. • L.A. Thibodeau. Sensitivity and specificity. Encyclopedia of Statistical Sciences, 8:370-372, 1988. • M.L. Thompson. Assessing the diagnostic accuracy of a sequence of tests. Biostatistics, 4(3):341-351, 2003. • M.Treisman and A.Faulkner. The effect of signal probability on the slope of the receiver operating characteristic given by the rating procedure. British Journal of Mathematical and Statistical Psychology, 37199-215, 1984. • P.M. Vacek. The effect of correlation between two diagnostic tests on maximum likelihood estimates of test error rates and disease prevalences. In ASA Proceedings of the Statistical Computing Section, 271-276, 1984. • P.M. Vacek, M.Kresnow, and R.M. Potter. The sampling distributions of estimators for the error rates of diagnostic tests. ASA Proceedings of the Statistical Computing Section, 438-440, 1985. • R. Wagner. Multiple-reader, multiple-case (MRMC) ROC analysis in diagnostic imaging, computer-aided diagnosis, and statistical pattern recognition. Medical Physics, 29(6):1331-1331, 2002. • R.F. Wagner, S.V. Beiden, G. Campbell, et al. Assessment of medical imaging and computer-assist systems: Lessons from recent experience. Academic Radiology, 9(11):1264-1277, 2002. • R.F. Wagner, S.V. Beiden, C.E. Metz. Continuous versus categorical data for ROC analysis: Some quantitative considerations. Academic Radiology, 8(4):328-334, 2001. • J.J. Wang, Z. Wang, G.K. Aguirre, et al. To smooth or not to smooth? - ROC analysis of perfusion fMRI data. Magnetic Resonance Imaging, 23(1):75-81, 2005. • G.I. Webb, K.M. Ting. On the application of ROC analysis to predict classification performance under varying class distributions. Machine Learning, 58(1):25-32, 2005. • M. Weiner, R. Centor. Web-based, interactive receiver operating characteristics (ROC) analysis. Journal of the American Medical Informatics Association, Suppl. S:1058-1058, 2001. • T. Wigren, S. Remle, B. Wahlberg. Analysis of a low-complexity change detection scheme. International Journal of Adaptive Control and SIgnal Processing, 14(5):481-503, 2000. • A.A. White and J.R. Landis. A general categorical data methodology for evaluating medical diagnostic tests. Communication in Statistics, Part A-Theory and Methods, 117:567-605, 1982. • T.Yanagawa and B.C. Gladen. Estimating disease rates from a diagnostic test. American Journal of Epidemiology, 119:1015-1023, 1984. • T.Yanagawa and F.Kasagi. Estimating prevalence and incidence of disease from a diagnostic test. Statistical Theory and Data Analysis, 783-801, 1985. • K.H. Zou, S.K. Warfield, J.R. Fielding, et al. Statistical validation based on parametric receiver operating characteristic analysis of continuous classification data. Academic Radiology, 10 (12):1359-1368, 2003.
{"url":"http://www.spl.harvard.edu/archive/spl-pre2007/pages/ppl/zou/roc.html","timestamp":"2014-04-18T01:01:42Z","content_type":null,"content_length":"63381","record_id":"<urn:uuid:1822807c-7364-4ab2-a148-1530cb5ccfff>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00298-ip-10-147-4-33.ec2.internal.warc.gz"}
Each tutorial is in one or more of the following four formats: cdf: computable document format Interactive presentation that can be explored in your browser with the free CDF Player plugin. Can also be viewed and modified using Mathematica 8+. nb: Mathematica notebook These are interactive presentations which can be run in Mathematica 5+. Also viewable with the CDF Player plugin, but you won't be able to run them. Try the html versions instead. pdf: portable document format Download Acrobat Reader to view pdfs. Can be viewed right here in your browser. Periodically perturbed system html nb Two-and three-level systems Atomic physics calculation and visualization package web page Orbital angular momentum of light More tutorials: • Term calculator for equivalent electrons (nb) • Mini-Lab on Confocal Fabry-Perot Spectrum Analyzers and the He-Ne Laser (pdf) • Nanoscopy with nitrogen-vacancy centers in diamond: the movie (m4v)
{"url":"http://budker.berkeley.edu/Tutorials/index.html","timestamp":"2014-04-19T11:56:37Z","content_type":null,"content_length":"7079","record_id":"<urn:uuid:79202764-ab24-48d0-84cf-fe6e9c299f96>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00035-ip-10-147-4-33.ec2.internal.warc.gz"}
Ordinary Generating Function for Mobius up vote 8 down vote favorite Is there any information known for the Ordinary Generating Function for Mobius? $$ \sum_{n=1}^{\infty} {\mu(n)}x^n $$ I know that 1. It has radius of convergence 1. 2. Does not have limit as $x\rightarrow 1$. My question is 1. Does it have limit as $x\rightarrow \textrm{exp}(i\theta)\neq 1$? 2. Is there a similar result for $\textrm{exp}(i\theta)\neq 1$ with $$ \sum_{n\leq x}\mu(n)=o(x)$$ i.e. is there a result of this type? $$ \sum_{n\leq x}\mu(n)\textrm{exp}(in\theta) = o(x)$$ EDIT1: Aside from my questions, there are some known results for the OGF for Mobius. EDIT2: Found a reference for (III) Reference: JASON P. BELL, NILS BRUIN, AND MICHAEL COONS "TRANSCENDENCE OF GENERATING FUNCTIONS WHOSE COEFFICIENTS ARE MULTIPLICATIVE" available at arXiv:1003.2221v2 P. Borwein, T. Erd´elyi, and F. Littman, Polynomials with coefficients from a finite set available at http://www.cecm.sfu.ca/personal/pborwein/PAPERS/P195.pdf (I) This function has the unit circle as natural boundary. (II) This function is a transcendental function. (III) It is not bounded on any open sector($\{z:|z|<1, z=re^{i\theta}, \alpha<\theta<\beta\}$)of unit disc. This result (III) together with Baire Category answers my question 1 with a dense set of $\theta$'s, but as Prof Tao mentioned, there can be a possibility that it is bounded for some values of $\ nt.number-theory analytic-number-theory add comment 2 Answers active oldest votes Mobius randomness heuristics suggest that $\sum_n \mu(n) (r e^{i\theta})^n$ does not converge to a limit as $r \to 1^-$ for any (or at least almost any) $\theta$. If it did converge for some $ \theta$, then we would have $$\sum_n \mu(n) e^{in\theta} \psi_k(n) \to 0$$ as $k \to \infty$, where $$ \psi_k(n) := (1-2^{-k-1})^n - (1-2^{-k})^n.$$ The cutoff function $\psi_k$ is basically a smoothed out version of the indicator function $1_{[2^k,2^{k+1}]}$. The Mobius randomness heuristic (discussed for instance in this article of Sarnak) then suggests that the sum $\sum_n \mu(n) e^{in\theta} \psi_k(n)$ should have a typical size of $2^{k/2}$, and so should not decay to zero as $k \to \infty$. vote Note from Plancherel's theorem that the $L^2_\theta$ mean of $\sum_n \mu(n) e^{in\theta} \psi_k(n)$ is indeed comparable to $2^{k/2}$. This does not directly preclude the (very unlikely) 12 scenario that this exponential sum is very small for many $\theta$ and only large for a small portion of the $\theta$, but if one optimistically applies various versions of the Mobius down randomness heuristic (with square root gains in exponential sums) to guess higher moments of $\sum_n \mu(n) e^{in\theta} \psi_k(n)$ in $\theta$, one is led to conjecture a central limit vote theorem type behaviour for the distribution of this quantity (as $\theta$ ranges uniformly from $0$ to $2\pi$), i.e. it should behave like a complex gaussian with mean zero and variance $\sim 2^k$, and in particular it should only be $O(1)$ about $O(2^{-k})$ of the time, and Borel-Cantelli then suggests divergence for almost every $\theta$ at least. Unfortunately this is all very heuristic, and it seems difficult with current technology to unconditionally rule out a strange conspiracy that makes $\sum_n \mu_n (re^{i\theta})^n$ bounded as $r \to 1$ for some specific value of $\theta$ (say $\theta = \sqrt{2} \pi$), though this looks incredibly unlikely to me. (One can use bilinear sums methods, e.g. Vaughan identity, to get some nontrivial pointwise upper bounds on these exponential sums when $\theta$ is highly irrational, but I see no way to get pointwise lower bounds, since $L$-function methods will not be available in this setting, and there may well be some occasional values of $\theta$ and $k$ for which these sums are actually small. But it is likely at least that the $\theta=0$ theory can be extended to rational values of $\ add comment For (2), the answer is yes - see for example this paper of Baker and Harman. up vote 9 down Thank you for this. Actually their paper is conditional. Indeed, unconditional result by H. Davenport is qjmath.oxfordjournals.org/content/os-8/1/313.full.pdf. Anyway, that answers 2. – i707107 Apr 3 '13 at 21:01 1 Fair enough, although on the first page it cites unconditional results as well. – Greg Martin Apr 4 '13 at 5:57 Yeah. They cited Davenport's result. I enjoyed reading it. – i707107 Apr 4 '13 at 18:27 add comment Not the answer you're looking for? Browse other questions tagged nt.number-theory analytic-number-theory or ask your own question.
{"url":"http://mathoverflow.net/questions/126442/ordinary-generating-function-for-mobius?sort=oldest","timestamp":"2014-04-21T12:31:30Z","content_type":null,"content_length":"59645","record_id":"<urn:uuid:32d46e25-380a-464a-b70a-d028249c0f79>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00532-ip-10-147-4-33.ec2.internal.warc.gz"}
Kaggle Digit Recognizer: K-means optimisation attempt I recently wrote a blog post explaining how Jen and I used the K-means algorithm to classify digits in Kaggle’s Digit Recognizer problem and one of the things we’d read was that with this algorithm you often end up with situations where it’s difficult to classify a new item because if falls between two labels. We decided to have a look at the output of our classifier function to see whether or not that was the case. As a refresher, this was our original function to determine the label for a new test data row… (defn which-am-i [unranked-value] (let [all-the-gaps (map #(find-gap %1 unranked-value) all-the-averages)] [(ffirst (sort-by second all-the-gaps)) all-the-gaps])) …and that would return a result like this: user> (which-am-i (first test-data)) [0 ([0 1763.5688862988827] [1 2768.1143197890624] [2 2393.9091578180937] [3 2598.4629450761286] [4 2615.1233720558307] [5 2287.1791665580586] [6 2470.096959417967] [7 2406.0132574502527] [8 2489.3635108564304] [9 2558.0054056506265])] We first changed the function to return another entry in the vector representing the top two picks: (defn which-am-i [unranked-value] (let [all-the-gaps (map #(find-gap %1 unranked-value) all-the-averages) top-two (take 2 (sort-by second all-the-gaps))] [(ffirst (sort-by second all-the-gaps)) top-two all-the-gaps])) If we run that: user> (which-am-i (first test-data)) [2 ([2 1855.3605185002546] [0 2233.654619238101]) ([0 2233.654619238101] [1 2661.7148686603714] [2 1855.3605185002546] [3 2512.429018687221] [4 2357.637631775974] [5 2457.9850052966344] [6 2243.724487123002] [7 2554.1158473740174] [8 2317.567588716217] [9 2520.667565741239])] In this case the difference between the top 2 labels is about 400 but we next changed the function to output the actual difference so we could see how close the top 2 were over the whole test set: (defn which-am-i [unranked-value] (let [all-the-gaps (map #(find-gap %1 unranked-value) all-the-averages) top-two (take 2 (sort-by second all-the-gaps)) difference-between-top-two (Math/abs (apply - (map second top-two)))] [(ffirst (sort-by second all-the-gaps)) top-two difference-between-top-two all-the-gaps])) We then ran it like this, taking just the first 10 differences on this occasion: user> (take 10 (->> test-data (map which-am-i) (map #(nth % 2)))) (378.2941007378465 523.6102802591759 73.57510749262792 3.8557350283749656 5.806672422475231 183.90928740097775 713.1626629833258 335.38646365464047 538.6191727330108 161.68429111785372) From visually looking at this over a larger subset we noticed that there seemed to be a significant number of top twos within a distance of 50. We therefore changed the function to use shuffle to randomly pick between those two labels when the distance was that small. (defn which-am-i [unranked-value] (let [all-the-gaps (map #(find-gap %1 unranked-value) all-the-averages) top-two (take 2 (sort-by second all-the-gaps)) difference-between-top-two (Math/abs (apply - (map second top-two))) very-close (< difference-between-top-two 50) best-one (if very-close (ffirst (shuffle top-two)) (ffirst top-two))] [best-one top-two all-the-gaps])) Our assumption was that this might marginally improve the performance of the algorithm but it actually made it marginally worse, with a new accuracy of 79.3% At this point we weren’t really sure what we should be doing to try and improve the algorithm’s performance so we moved onto random forests but it’s interesting that you can get a reasonable accuracy even with such a simple approach!
{"url":"http://www.markhneedham.com/blog/2012/10/27/kaggle-digit-recognizer-k-means-optimisation-attempt/","timestamp":"2014-04-17T18:31:40Z","content_type":null,"content_length":"45784","record_id":"<urn:uuid:b6b79abe-70f0-43dd-9fb7-86b93bd6718e>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00585-ip-10-147-4-33.ec2.internal.warc.gz"}
Source: http://www.sing365.com Reebok Commercial Part 2 Beware, if E equals MC squared Than A.I. equals ankles in intensive care The answer keeps you glued to the ground suspended in air The answer to every prayer for every player everywhere Try to compare Try to block the shot if you dare You get the best seat in the house, it’s called the wheelchair The math ?umopis?, you should be a reebok owner Killer crossovers split opponent’s in half like moses All contenders are left defenseless They end up in hospital beds with swollen tendens, takin’ anit-depressants They can’t seem to deal with them pressure I came to give you this message I got the ammo for your weapons, and the answer for your question /
{"url":"http://www.sing365.com/music/lyric.nsf/PrintLyrics?openForm&ParentUnid=C6EC0B03567A176C482568A800144893","timestamp":"2014-04-19T00:38:04Z","content_type":null,"content_length":"3050","record_id":"<urn:uuid:91fdef00-11c2-4a6e-822a-54f3813c84ea>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00266-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Whats everyone's favorite color? Mines is RED and BLACK! • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/509bff8ae4b0a825f21ae4a7","timestamp":"2014-04-16T07:48:51Z","content_type":null,"content_length":"168373","record_id":"<urn:uuid:713e3f2e-df59-42f5-955c-938e0bf8070b>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00272-ip-10-147-4-33.ec2.internal.warc.gz"}
Geometric Quantization John Baez August 11, 2000 Geometric quantization is a marvelous tool for understanding the relation between classical physics and quantum physics. However, it's a bit like a power tool - you have to be an expert to operate it without running the risk of seriously injuring your brain. Here's a brief sketch of how it goes. This is pretty terse; for the details you'll have to read the series of articles on geometric quantization on the sci.physics.research archive. 1. We start with a CLASSICAL PHASE SPACE: mathematically, this is a manifold X with a symplectic structure ω. 2. Then we do PREQUANTIZATION: this gives us a Hermitian line bundle L over X, equipped with a U(1) connection D whose curvature equals iω. L is called the PREQUANTUM LINE BUNDLE. Warning: we can only do this step if ω satisfies the BOHR-SOMMERFELD CONDITION, which says that ω/2π defines an integral cohomology class. If this condition holds, L and D are determined to isomorphism, but not canonically. 3. The Hilbert space H[0] of square-integrable sections of L is called the PREQUANTUM HILBERT SPACE. This is not yet the Hilbert space of our quantized theory - it's too big. But it's a good step in the right direction. In particular, we can PREQUANTIZE CLASSICAL OBSERVABLES: there's a map sending any smooth function on X to an operator on H[0]. This map takes Poisson brackets to commutators, just as one would hope. The formula for this map involves the connection D. 4. To cut down the prequantum Hilbert space, we need to choose a POLARIZATION, say P. What's this? Well, for each point x in X, a polarization picks out a certain subspace P[x] of the complexified tangent space at x. We define the QUANTUM HILBERT SPACE, H, to be the space of all square-integrable sections of L that give zero when we take their covariant derivative at any point x in the direction of any vector in P[x]. The quantum Hilbert space is a subspace of the prequantum Hilbert space. Warning: for P to be a polarization, there are some crucial technical conditions we impose on the subspaces P[x]. First, they must be ISOTROPIC: the complexified symplectic form ω must vanish on them. Second, they must be LAGRANGIAN: they must be maximal isotropic subspaces. Third, they must vary smoothly with x. And fourth, they must be INTEGRABLE. 5. The easiest sort of polarization to understand is a REAL POLARIZATION. This is where the subspaces P[x] come from subspaces of the tangent space by complexification. It boils down to this: a real polarization is an integrable distribution P on the classical phase space where each space P[x] is Lagrangian subspace of the tangent space T[x]X. 6. To understand this rigamarole, one must study examples! First, it's good to understand how good old SCHRÖDINGER QUANTIZATION fits into this framework. Remember, in Schrödinger quantization we take our classical phase space X to be the cotangent bundle T^*M of a manifold M called the CLASSICAL CONFIGURATION SPACE. We then let our quantum Hilbert space be the space of all square-integrable functions on M. Modulo some technical trickery, we get this example when we run the above machinery and use a certain god-given real polarization on X = T^*M, namely the one given by the vertical vectors. 7. It's also good to study the BARGMANN-SEGAL REPRESENTATION, which we get by taking X = C^n with its god-given symplectic structure (the imaginary part of the inner product) and using the god-given KÄHLER POLARIZATION. When we do this, our quantum Hilbert space consists of analytic functions on C^n which are square-integrable with respect to a Gaussian measure centered at the origin. 8. The next step is to QUANTIZE CLASSICAL OBSERVABLES, turning them into linear operators on the quantum Hilbert space H. Unfortunately, we can't quantize all such observables while still sending Poisson brackets to commutators, as we did at the prequantum level. So at this point things get trickier and my brief outline will stop. Ultimately, the reason for this problem is that quantization is not a functor from the category of symplectic manifolds to the category of Hilbert spaces - but for that one needs to learn a bit about category theory. Basic Jargon Here are some definitions of important terms. Unfortunately they are defined using other terms that you might not understand. If you are really mystified, you need to read some books on differential geometry and the math of classical mechanics before proceeding. • complexification - We can tensor a real vector space with the complex numbers and get a complex vector space; this process is called complexification. For example, we can complexify the tangent space at some point of a manifold, which amounts to forming the space of complex linear combinations of tangent vectors at that point. • distribution - The word "distribution" means many different things in mathematics, but here's one: a "distribution" V on a manifold X is a choice of a subspace V[x] of each tangent space T[p](X), where the choice depends smoothly on x. • Hamiltonian vector field - Given a manifold X with a symplectic structure ω, any smooth function f: X -> R can be thought of as a "Hamiltonian", meaning physically that we think of it as the energy function and let it give rise to a flow on X describing the time evolution of states. Mathematically speaking, this flow is generated by a vector field v(f) called the "Hamiltonian vector field" associated to f. It is the unique vector field such that ω(.,v(f)) = df In other words, for any vector field u on X we have ω(u,v(f)) = df(u) = u f The vector field v(f) is guaranteed to exist by the fact that ω is nondegenerate. • integrable distribution - A distribution on a manifold X is "integrable" if at least locally, there is a foliation of X by submanifolds such that V[x] is the tangent space of the submanifold containing the point x. • integral cohomology class - Any closed p-form on a manifold M defines an element of the pth deRham cohomology of M. This is a finite-dimensional vector space, and it contains a lattice called the pth integral cohomology group of M. We say a cohomology class is integral if it lies in this lattice. Most notably, if you take any U(1) connection on any Hermitian line bundle over M, its curvature 2-form will define an integral cohomology class once you divide it by 2 π i. This cohomology class is called the first Chern class, and it serves to determine the line bundle up to • Poisson brackets - Given a symplectic structure on a manifold M and given two smooth functions on that manifold, say f and g, there's a trick for getting a new smooth function {f,g} on the manifold, called the Poisson bracket of f and g. This trick works as follows: given any smooth function f we can take its differential df, which is a 1-form. Then there is a unique vector field v(f), the Hamiltonian vector field associated to f, such that ω(-,v(f)) = df Using this we define {f,g} = ω(v(f),v(g)) It's easy to check that we also have {f,g} = dg(v(f)) = v(f) g so {f,g} says how much g changes as we differentiate it in the direction of the Hamiltonian vector field generated by f. In the familiar case where M is R^2n with momentum and position coordinates p[i], q[i], the Poisson brackets of f and g work out to be {f,g} = sum[i] df/dp[i] dg/dq[i] - df/dq[i] dg/dp[i] • square-integrable sections - We can define an inner product on the sections of a Hermitian line bundle over a manifold X with a symplectic structure. The symplectic structure defines a volume form which lets us do the necessary integral. A section whose inner product with itself is finite is said to be square-integrable. Such sections form a Hilbert space H[0] called the "prequantum Hilbert space". It is a kind of preliminary version of the Hilbert space we get when we quantize the classical system whose phase space is X. • symplectic structure - A symplectic structure on a manifold M is a closed 2-form ω which is nondegenerate in the sense that for any nonzero tangent vector u at any point of M, there is a tangent vector u at that point for which w(u,v) is nonzero. • U(1) connection - The group U(1) is the group of unit complex numbers. Given a complex line bundle L with an inner product on each fiber L[x], a U(1) connection on L is a connection such that parallel translation preserves the inner product. • vertical vectors - Given a bundle E over a manifold M, we say a tangent vector to some point of E is vertical if it projects to zero down on M. The only way to learn the rules of this Game of games is to take the usual prescribed course, which requires many years, and none of the initiates could ever possibly have any interest in making these rules easier to learn. - Hermann Hesse, The Glass Bead Game © 2000 John Baez
{"url":"http://math.ucr.edu/home/baez/quantization.html","timestamp":"2014-04-21T14:39:54Z","content_type":null,"content_length":"11361","record_id":"<urn:uuid:eb4a4b9a-c059-4ec7-8f6f-c0d04ce762ef>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00421-ip-10-147-4-33.ec2.internal.warc.gz"}
A fifth of alcohol is equal to how many pints of alcohol? United States customary units are a system of measurements commonly used in the United States. The U.S. customary system developed from English units which were in use in the British Empire before American independence. Consequently most U.S. units are virtually identical to the British imperial units. However, the British system was overhauled in 1824, changing the definitions of some units used there, so several differences exist between the two systems. The majority of U.S. customary units were redefined in terms of the meter and the kilogram with the Mendenhall Order of 1893, and in practice, for many years before. These definitions were refined by the international yard and pound agreement of 1959. The U.S. primarily uses customary units in its commercial activities, while science, medicine, government, and many sectors of industry use metric units. The SI metric system, or International System of Units is preferred for many uses by NIST An alcoholic beverage is a drink that contains ethanol. Alcoholic beverages are divided into three general classes for taxation and regulation of production: beers, wines, and spirits (distilled beverages). They are legally consumed in most countries around the world. More than 100 countries have laws regulating their production, sale, and consumption. Beer is the third most popular drink in the world, after water and tea. Alcoholic beverages have been consumed by humans since the Neolithic era; the earliest evidence of alcohol was discovered in Jiahu, dating from 7000–6600 BC. The production and consumption of alcohol occurs in most cultures of the world, from hunter-gatherer peoples to nation-states. Two Pints of Lager and a Packet of Crisps is a British comedy television series sitcom that ran from 26 February 2001 to 24 May 2011 and starring Sheridan Smith, Will Mellor, Ralf Little, Natalie Casey, Kathryn Drysdale and Luke Gell. Created and written by Susan Nickson, it is set in the town of Runcorn in Cheshire, England, and originally revolves around the lives of five twenty-somethings. In September 2007 Little confirmed in an interview on This Morning that he had left the series. The seventh series aired following a two year hiatus in 2008 with Little's character being killed off in the first episode. Smith and Drysdale did not return for the ninth and final series with their departures being mentioned briefly in the first episode, Freddie Hogan and Georgia Henshaw were brought in as their replacements making Mellor and Casey the only original characters left in the series. Writer & Creator, Susan Nickson also left before the ninth series. The core cast have been augmented by various recurring characters throughout the series, including Beverly Callard, Lee Oakes, Hayley Bishop, Alison Mac, Thomas Nelstrop and Jonathon Dutton. The show was first broadcast in 2001 on BBC Two. The title was inspired by the 1980 hit single "Two Pints of Lager and a Packet of Crisps Please" by Splodgenessabounds. In recipes, quantities of ingredients may be specified by mass (commonly called weight), by volume, or by count. For most of history, most cookbooks did not specify quantities precisely, instead talking of "a nice leg of spring lamb", a "cupful" of lentils, a piece of butter "the size of a walnut", and "sufficient" salt. Informal measurements such as a "pinch", a "drop", or a "hint" (soupçon) continue to be used from time to time. In the US, Fannie Farmer introduced the more exact specification of quantities by volume in her 1896 Boston Cooking-School Cook Book. The system of imperial units or the imperial system (also known as British Imperial) is the system of units first defined in the British Weights and Measures Act of 1824, which was later refined and reduced. The system came into official use across the British Empire. By the late 20th century, most nations of the former empire had officially adopted the metric system as their main system of measurement, but some Imperial units are still used in the United Kingdom and Canada. Scotland had a distinct system of measures and weights until at least the late 18th century, based on the ell as a unit of length, the stone as a unit of mass and the boll and the firlot as units of dry measure. This official system coexisted with local variants, especially for the measurement of land. The system is said to have been introduced by David I of Scotland (1124–53), although there are no surviving records until the 15th century when the system was already in normal use. Standard measures and weights were kept in each burgh, and these were periodically compared against one another at "assizes of measures", often during the early years of the reign of a new monarch. Nevertheless, there was considerable local variation in many of the units, and the units of dry measure steadily increased in size from 1400 to 1700. Health Medical Pharma Health Medical Pharma Related Websites:
{"url":"http://answerparty.com/question/answer/a-fifth-of-alcohol-is-equal-to-how-many-pints-of-alcohol","timestamp":"2014-04-21T01:07:03Z","content_type":null,"content_length":"31078","record_id":"<urn:uuid:1278f37d-ce8f-4b56-a198-37b91dba6dfc>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00502-ip-10-147-4-33.ec2.internal.warc.gz"}
A multibody factorization method for independently moving objects Results 1 - 10 of 130 , 1999 "... Scene flow is the three-dimensional motion field of points in the world, just as optical flow is the twodimensional motion field of points in an image. Any optical flow is simply the projection of the scene flow onto the image plane of a camera. In this paper, we present a framework for the computat ..." Cited by 127 (9 self) Add to MetaCart Scene flow is the three-dimensional motion field of points in the world, just as optical flow is the twodimensional motion field of points in an image. Any optical flow is simply the projection of the scene flow onto the image plane of a camera. In this paper, we present a framework for the computation of dense, non-rigid scene flow from optical flow. Our approach leads to straightforward linear algorithms and a classification of the task into three major scenarios: (1) complete instantaneous knowledge of the scene structure, (2) knowledge only of correspondence information, and (3) no knowledge of the scene structure. We also show that multiple estimates of the normal flow cannot be used to estimate dense scene flow directly without some form of smoothing or regularization. 1 - IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE , 2003 "... This paper presents an algebro-geometric solution to the problem of segmenting an unknown number of subspaces of unknown and varying dimensions from sample data points. We represent the subspaces with a set of homogeneous polynomials whose degree is the number of subspaces and whose derivatives at a ..." Cited by 117 (29 self) Add to MetaCart This paper presents an algebro-geometric solution to the problem of segmenting an unknown number of subspaces of unknown and varying dimensions from sample data points. We represent the subspaces with a set of homogeneous polynomials whose degree is the number of subspaces and whose derivatives at a data point give normal vectors to the subspace passing through the point. When the number of subspaces is known, we show that these polynomials can be estimated linearly from data; hence, subspace segmentation is reduced to classifying one point per subspace. We select these points optimally from the data set by minimizing certain distance function, thus dealing automatically with moderate noise in the data. A basis for the complement of each subspace is then recovered by applying standard PCA to the collection of derivatives (normal vectors). Extensions of GPCA that deal with data in a highdimensional space and with an unknown number of subspaces are also presented. Our experiments on low-dimensional data show that GPCA outperforms existing algebraic algorithms based on polynomial factorization and provides a good initialization to iterative techniques such as K-subspaces and Expectation Maximization. We also present applications of GPCA to computer vision problems such as face clustering, temporal video segmentation, and 3D motion segmentation from point correspondences in multiple affine views. - In European Conference on Computer Vision , 2004 "... Recovery of three diensWXzm (3D) sD) e and otion of non-sN;m[ s cenes fro a onocular videosdeomWW is i portant forapplications like robot navigation and hu an co puter interaction. If every point in thes cene rando ly oves it is i - posW=J= to recover the non-rigids-r es In practice, any non-rigid o ..." Cited by 72 (10 self) Add to MetaCart Recovery of three diensWXzm (3D) sD) e and otion of non-sN;m[ s cenes fro a onocular videosdeomWW is i portant forapplications like robot navigation and hu an co puter interaction. If every point in thes cene rando ly oves it is i - posW=J= to recover the non-rigids-r es In practice, any non-rigid objects e.g. the hu an face under various expres[XFX] defor with certains tructures Theirs hapes can be regarded as a weighted co bination of certains hapebasXJ Shape and otion recovery unders uchs ituations has attracted uch interesX Previous work onthis proble [6, 4, 13] utilized only orthonor ality consWJNm ts on the ca era rotations (ro- tation constraints).This paper proves that usJ] only the rotation cons]N]m ts res]N] in a biguous and invalid smWWX];m[ The a biguity arisX fro the fact that thesmX e bas+ are not unique becaus their linear transJW ation is a news et of eligiblebasib To eli inate the a biguity, we propos as et of novel consNXNm ts basis constraints, which uniquely deter ine thesmW e bas;F We prove that, under the weak-p ers ective projection odel, enforcing both the bas= and the rotation consW+;m ts leads to a closNm[JF slosNm to the proble of non-rigids hape and otion recovery. The accuracy and robus;Wm[ of ourclos=;m[J slos=; is evaluated quantitatively on sm thetic data and qualitatively on real videoseomWN;JN 1 - In CVPR , 2009 "... We propose a method based on sparse representation (SR) to cluster data drawn from multiple low-dimensional linear or affine subspaces embedded in a high-dimensional space. Our method is based on the fact that each point in a union of subspaces has a SR with respect to a dictionary formed by all oth ..." Cited by 71 (6 self) Add to MetaCart We propose a method based on sparse representation (SR) to cluster data drawn from multiple low-dimensional linear or affine subspaces embedded in a high-dimensional space. Our method is based on the fact that each point in a union of subspaces has a SR with respect to a dictionary formed by all other data points. In general, finding such a SR is NP hard. Our key contribution is to show that, under mild assumptions, the SR can be obtained ’exactly ’ by using ℓ1 optimization. The segmentation of the data is obtained by applying spectral clustering to a similarity matrix built from this SR. Our method can handle noise, outliers as well as missing data. We apply our subspace clustering algorithm to the problem of segmenting multiple motions in video. Experiments on 167 video sequences show that our approach significantly outperforms state-of-the-art methods. 1. - In ECCV , 2006 "... Abstract. We cast the problem of motion segmentation of feature trajectories as linear manifold finding problems and propose a general framework for motion segmentation under affine projections which utilizes two properties of trajectory data: geometric constraint and locality. The geometric constra ..." Cited by 69 (0 self) Add to MetaCart Abstract. We cast the problem of motion segmentation of feature trajectories as linear manifold finding problems and propose a general framework for motion segmentation under affine projections which utilizes two properties of trajectory data: geometric constraint and locality. The geometric constraint states that the trajectories of the same motion lie in a low dimensional linear manifold and different motions result in different linear manifolds; locality, by which we mean in a transformed space a data and its neighbors tend to lie in the same linear manifold, provides a cue for efficient estimation of these manifolds. Our algorithm estimates a number of linear manifolds, whose dimensions are unknown beforehand, and segment the trajectories accordingly. It first transforms and normalizes the trajectories; secondly, for each trajectory it estimates a local linear manifold through local sampling; then it derives the affinity matrix based on principal subspace angles between these estimated linear manifolds; at last, spectral clustering is applied to the matrix and gives the segmentation result. Our algorithm is general without restriction on the number of linear manifolds and without prior knowledge of the dimensions of the linear manifolds. We demonstrate in our experiments that it can segment a wide range of motions including independent, articulated, rigid, non-rigid, degenerate, non-degenerate or any combination of them. In some highly challenging cases where other state-of-the-art motion segmentation algorithms may fail, our algorithm gives expected results. 2 1 - In CVPR , 2007 "... Over the past few years, several methods for segmenting a scene containing multiple rigidly moving objects have been proposed. However, most existing methods have been tested on a handful of sequences only, and each method has been often tested on a different set of sequences. Therefore, the compari ..." Cited by 64 (5 self) Add to MetaCart Over the past few years, several methods for segmenting a scene containing multiple rigidly moving objects have been proposed. However, most existing methods have been tested on a handful of sequences only, and each method has been often tested on a different set of sequences. Therefore, the comparison of different methods has been fairly limited. In this paper, we compare four 3-D motion segmentation algorithms for affine cameras on a benchmark of 155 motion sequences of checkerboard, traffic, and articulated scenes. 1. , 2007 "... This paper describes methods for recovering time-varying shape and motion of non-rigid 3D objects from uncalibrated 2D point tracks. For example, given a video recording of a talking person, we would like to estimate the 3D shape of the face at each instant, and learn a model of facial deformation. ..." Cited by 50 (1 self) Add to MetaCart This paper describes methods for recovering time-varying shape and motion of non-rigid 3D objects from uncalibrated 2D point tracks. For example, given a video recording of a talking person, we would like to estimate the 3D shape of the face at each instant, and learn a model of facial deformation. Time-varying shape is modeled as a rigid transformation combined with a non-rigid deformation. Reconstruction is ill-posed if arbitrary deformations are allowed, and thus additional assumptions about deformations are required. We first suggest restricting shapes to lie within a lowdimensional subspace, and describe estimation algorithms. However, this restriction alone is insufficient to constrain reconstruction. To address these problems, we propose a reconstruction method using a Probabilistic Principal Components Analysis (PPCA) shape model, and an estimation algorithm that simultaneously estimates 3D shape and motion for each instant, learns the PPCA model parameters, and robustly fills-in missing data points. We then extend the model to model temporal dynamics in object shape, allowing the algorithm to robustly handle severe cases of missing data. - In European Conference on Computer Vision , 2000 "... . This paper extends the recovery of structure and motion to image sequences with several independently moving objects. The motion, structure, and camera calibration are all a-priori unknown. The fundamental constraint that we introduce is that multiple motions must share the same camera paramete ..." Cited by 47 (0 self) Add to MetaCart . This paper extends the recovery of structure and motion to image sequences with several independently moving objects. The motion, structure, and camera calibration are all a-priori unknown. The fundamental constraint that we introduce is that multiple motions must share the same camera parameters. Existing work on independent motions has not employed this constraint, and therefore has not gained over independent static-scene reconstructions. We show how this constraint leads to several new results in structure and motion recovery, where Euclidean reconstruction becomes possible in the multibody case, when it was underconstrained for a static scene. We show how to combine motions of high-relief, low-relief and planar objects. Additionally we show that structure and motion can be recovered from just 4 points in the uncalibrated, fixed camera, case. Experiments on real and synthetic imagery demonstrate the validity of the theory and the improvement in accuracy obtained usin... , 2006 "... We present an algebraic geometric approach to 3-D motion estimation and segmentation of multiple rigid-body motions from noise-free point correspondences in two perspective views. Our approach exploits the algebraic and geometric properties of the so-called multibody epipolar constraint and its asso ..." Cited by 42 (15 self) Add to MetaCart We present an algebraic geometric approach to 3-D motion estimation and segmentation of multiple rigid-body motions from noise-free point correspondences in two perspective views. Our approach exploits the algebraic and geometric properties of the so-called multibody epipolar constraint and its associated multibody fundamental matrix, which are natural generalizations of the epipolar constraint and of the fundamental matrix to multiple motions. We derive a rank constraint on a polynomial embedding of the correspondences, from which one can estimate the number of independent motions as well as linearly solve for the multibody fundamental matrix. We then show how to compute the epipolar lines from the first-order derivatives of the multibody epipolar constraint and the epipoles by solving a plane clustering problem using Generalized PCA (GPCA). Given the epipoles and epipolar lines, the estimation of individual fundamental matrices becomes a linear problem. The clustering of the feature points is then automatically obtained from either the epipoles and epipolar lines or from the individual fundamental matrices. Although our approach is mostly designed for noise-free correspondences, we also test its performance on synthetic and real data with moderate levels of noise. , 2006 "... In its full generality, motion analysis of crowded objects necessitates recognition and segmentation of each moving entity. The difficulty of these tasks increases considerably with occlusions and therefore with crowding. When the objects are constrained to be of the same kind, however, partitioning ..." Cited by 42 (1 self) Add to MetaCart In its full generality, motion analysis of crowded objects necessitates recognition and segmentation of each moving entity. The difficulty of these tasks increases considerably with occlusions and therefore with crowding. When the objects are constrained to be of the same kind, however, partitioning of densely crowded semi-rigid objects can be accomplished by means of clustering tracked feature points. We base our approach on a highly parallelized version of the KLT tracker in order to process the video into a set of feature trajectories. While such a set of trajectories provides a substrate for motion analysis, their unequal lengths and fragmented nature present difficulties for subsequent processing. To address this, we propose a simple means of spatially and temporally conditioning the trajectories. Given this representation, we integrate it with a learned object descriptor to achieve a segmentation of the constituent motions. We present experimental results for the problem of estimating the number of moving objects in a dense crowd as a function of time. 1
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=5851","timestamp":"2014-04-20T19:34:14Z","content_type":null,"content_length":"41724","record_id":"<urn:uuid:e93ca39f-bbb1-4366-8396-ec8d0660225c>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00478-ip-10-147-4-33.ec2.internal.warc.gz"}
Implicit Differentiation October 24th 2008, 11:29 AM #1 Oct 2008 Implicit Differentiation When air expands adiabatically (without gaining or losing heat), its pressure P and volume V are related by the equation PV^1.4=C where C is a constant. Suppose that at a certain instant the volume is 510 cubic centimeters and the pressure is 81kPa and is decreasing at a rate of 7 kPa/minute. At what rate in cubic centimeters per minute is the volume increasing at this instant? how do approach this problem? Differentiate your equation wrt time. $P\cdot V^{1.4}=C$ $P\cdot 1.4V^{0.4}\cdot \frac{dV}{dt}+V^{1.4}\cdot \frac{dP}{dt}=0$ Enter in all your knowns and solve for dV/dt October 24th 2008, 12:20 PM #2
{"url":"http://mathhelpforum.com/calculus/55490-implicit-differentiation.html","timestamp":"2014-04-19T07:15:07Z","content_type":null,"content_length":"32248","record_id":"<urn:uuid:26d1518c-eb7d-4243-8a23-a1353d2adbd6>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00058-ip-10-147-4-33.ec2.internal.warc.gz"}
What is the smallest diameter ring a non-convex polyhedron can pass through in 3-space? up vote 4 down vote favorite The question is mostly in the title. Imagine I have some non-convex polyhedron $P$, and I would like to find the smallest diameter ring that it can pass through in 3-space, undergoing any necessary rotations as it does so. Is there an efficient way to calculate $D_{ring}$? Pressing my luck, can I find the set of rotations for $P$ as it passes through the ring? geometry polyhedra computer-science add comment 2 Answers active oldest votes This is the "piano movers problem", also known as the motion planning problem, which has an enormous literature. Check out http://en.wikipedia.org/wiki/Motion_planning up vote 6 down vote accepted Very cool, thanks! – UltraBlue06 Nov 30 '11 at 21:50 add comment Just a side remark on convex polyhedra: Each of the regular polyhedra except the cube can pass through a circle of radius smaller than the smallest-radius cylinder circumscribing the polyhedron. This is proved in Tudor Zamfirescu's delightful paper, "Convex polytopes passing through circles" (PDF link). There is quite a nice (non-algorithmic) literature on this up vote 5 problem. down vote 1 Thanks, I enjoyed reading the paper. =) – UltraBlue06 Dec 2 '11 at 19:19 Zamfirescu has a light touch despite deep thoughts. :-) – Joseph O'Rourke Dec 2 '11 at 20:33 add comment Not the answer you're looking for? Browse other questions tagged geometry polyhedra computer-science or ask your own question.
{"url":"http://mathoverflow.net/questions/82312/what-is-the-smallest-diameter-ring-a-non-convex-polyhedron-can-pass-through-in-3/82343","timestamp":"2014-04-18T01:10:11Z","content_type":null,"content_length":"56901","record_id":"<urn:uuid:5e539060-7f97-4ab5-af07-7d5cf4c52069>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00149-ip-10-147-4-33.ec2.internal.warc.gz"}
Crum Lynne Math Tutor Find a Crum Lynne Math Tutor ...If you want to learn the techniques for working with scientific equipment, then it's important to use real equipment. But that should not be the goal in elementary science: few of the techniques of today will still be done the same way by the time they grow up. The most valuable thing is for th... 16 Subjects: including algebra 1, algebra 2, English, geometry ...I believe in helping students to understand and enjoy math as I do, I will not do the work for the student but will help them understand the process behind it. As a recent graduate of college I understand how students think and how to communicate to them so they will understand the material.I ha... 13 Subjects: including algebra 1, algebra 2, calculus, geometry ...I am currently attending Wilmington University in pursuit of a master's degree in K-6 elementary education. As a substitute teacher and a mother of two, I have plenty of experience tutoring, mentoring, and instructing students. My love for teaching is motivated by a love for learning. 11 Subjects: including prealgebra, reading, grammar, English ...I have also had a great deal experience in the Elementary schools during my practicums, and have taught a variety of lessons at each school for 1st, 2nd, and 4th grades. In addition, I have worked as an assistant in a kindergarten classroom. As an Elementary Ed. major, I have taken and excelled... 15 Subjects: including prealgebra, algebra 1, reading, English ...Science I am available to tutor chemistry, physics and any electrical related topics. I have taken AP physics and AP chemistry. I scored a 5 on AP chemistry and for physics I scored a 5 on mechanics and a 4 on electricity and magnetism. 15 Subjects: including algebra 2, chemistry, physics, trigonometry
{"url":"http://www.purplemath.com/Crum_Lynne_Math_tutors.php","timestamp":"2014-04-20T19:28:59Z","content_type":null,"content_length":"23720","record_id":"<urn:uuid:fc141090-dfe9-49bd-9d40-ca7653ffe98d>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00327-ip-10-147-4-33.ec2.internal.warc.gz"}
Conservation of mechanical energy I have a problem with my train of thought. I have narrowed it down to several premises, and I think that one of them must be wrong. Premise 1 Δ total energy (E) = Δ kinetic energy (K) + Δ potential energy (U) (assuming that there is no change in any of the other forms of energy in the system) Premise 2 Δ E = q + w Consider the system of a block. The block is initially traveling upwards (+y direction), away from the earth's surface. At t=0 it has a kinetic energy of 1/2mv^2 (v is non zero). It initially also has gravitational potential energy of 0. The velocity eventually decreases to 0 (at t[1]). The whole time (from t=0 to t[1]), the earth's gravitational attraction is exerting a force of mg on the block. No other force is acting on the block, since it is already in motion at t=0. It appears that the work done by the gravitational force on the block is -mgh. But, ΔE = 0, because the kinetic energy has simply been transformed into potential energy. So, how can work be non zero (-mgh)? It seems that (looking at the block as the system) ΔE = 0 = W= -mgh (no heat transfer, so q=0, no friction, no other energy transfer) Since this is impossible, I know that I did something wrong. Thanks for your help!
{"url":"http://www.physicsforums.com/showthread.php?t=373936","timestamp":"2014-04-19T19:40:29Z","content_type":null,"content_length":"38646","record_id":"<urn:uuid:9f8d55fd-3791-44f1-90c6-9283f8196e18>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00185-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: April 2010 [00565] [Date Index] [Thread Index] [Author Index] Re: problems with NMinimize • To: mathgroup at smc.vnet.net • Subject: [mg109363] Re: problems with NMinimize • From: Bill Rowe <readnews at sbcglobal.net> • Date: Fri, 23 Apr 2010 03:49:45 -0400 (EDT) On 4/22/10 at 6:43 AM, ngb at ecs.soton.ac.uk (Neil Broderick) wrote: >Hi, I am trying to use NMinimize to find the solutions to various >numerical equations and I keep getting error messages concerning >non-numerical values. For example consider the following: >In[2]:= NMinimize[Abs[NIntegrate[Sin[x], {x, -a, b}]]^2, {a, b}] <error messages snipped> Even if this did work as you expected with no error messages, this would be a very inefficient way to solve the problem. NMinimize works by repeatedly evaluating the expression to be minimized. That means the numerical integration problem is repeatedly solved. In this case, there is a symbolic result for the integral. So doing: In[4]:= int = Integrate[Sin[x], {x, -a, b}] Out[4]= cos(a)-cos(b) And this clearly is zero for Abs[a]==Abs[b]. So, any pair of numbers satisfying this last will be a minimum for the square of the absolute value for the integral. But if you didn't want to find the minimum by inspection, then doing In[5]:= NMinimize[Abs[int], {a, b}] Out[5]= {4.80727*10^-14,{a->0.0961182,b->0.0961182}} returns one of many possible solutions. Note, I didn't bother with squaring the absolute value since minimizing the absolute value is the same as minimizing the square of the absolute value.
{"url":"http://forums.wolfram.com/mathgroup/archive/2010/Apr/msg00565.html","timestamp":"2014-04-18T00:16:11Z","content_type":null,"content_length":"26183","record_id":"<urn:uuid:b39d92ae-c7c6-48c4-94dc-f4dc03ddec25>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00632-ip-10-147-4-33.ec2.internal.warc.gz"}
Translating Words into Math Date: 8 Mar 1995 22:51:36 -0500 From: Anonymous Subject: Algebra. Dear Dr. Math, I need help with my Algebra. We are doing story problems. I DON'T want you to do them for me. I just need help on understanding them so that I can solve them by myself. I am stuck on one particular problem. The number is decreased by 1/3 of the number. The result is 62. Find the number. Please help me. Brittany I Date: Thu, 09 Mar 1995 12:57:09 +0000 From: Dr. Math Subject: Re: Algebra. Hello there! I think the hardest thing about doing these problems is translating sentences into mathematical equations. Basically, you have to pick them apart, piece by piece, translating one part at a time until what you have left is an equation that you can solve using Algebra. So let's dig in. Actually, I'll make up a new problem that's similar to yours, and I'll lead you through the translation process on that one. Here it is: A number is increased by 3/4 of the number, and the result is 28. What is the number? One of the first things we want to do is to replace "the number" by a variable. You can call it anything you want, and in this case I'll call it x. So we go through the sentence and wherever we see the phrase "the number" or "itself" or something that obviously refers to the number we're talking about, we replace that by an x. So we get: x is increased by 3/4 of x, and the result is 28. What is x? Also, one of the first things you want to do in these sentences is find out where the equal sign goes. Typically, you'll look for the verb "is," or "the result is," or if they're really throwing it out at your face, "is equal to." In our problem, we find our equal sign in "and the result is": x increased by 3/4 of x = 28. (I changed the grammar just a teency bit) Now what does 3/4 of x mean? It means we multiply 3/4 times x. So now we have x increased by 3/4 * x = 28. Now there's only one thing left. We need to figure out what "increased by" means. What would I mean if we were baking cookies, and I said to you "increase the amount of chocolate chips by two cups"? I'd mean that you would add two extra cups of chocolate chips to the batter, i.e. ADD two cups, and we'd both have a stomach ache. So the phrase "increased by" is probably going to mean "plus." x + 3/4 * x = 28. And now we have a bona fide Algebraic sentence. Notice the period at the end. Usually people don't write those, but I wanted to show you that even when we write down something as weird as "x + 3/4 x = 28", it can always be read as a real English sentence, with punctuation at the end and everything. So see if you can apply the same techniques to solve your problem. If you still have trouble, write back! -Ken "Dr." Math
{"url":"http://mathforum.org/library/drmath/view/57309.html","timestamp":"2014-04-19T08:22:03Z","content_type":null,"content_length":"7765","record_id":"<urn:uuid:51b17439-aec2-4a45-ba15-066f866ecb8f>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00573-ip-10-147-4-33.ec2.internal.warc.gz"}
OpenMx - Advanced Structural Equation Modeling Tue, 03/16/2010 - 15:28 Your objective function needs Your objective function needs to be changed. The code you have: mxAlgebra(-2*sum(log(pclass1%x%Gompertz1.objective + pclass2%x%Gompertz1.objective)), name="mixtureObj") adds the Gompertz1 objective to itself. Change the second one to Gompertz2, and it should work. Tue, 03/16/2010 - 15:44 PEBKAC error. Everything PEBKAC error. Everything comes up roses when I try to mix two different classes instead of two of the same class. Pending my permission to post the data, I'll wrap this up into something for the models/passing directory and the documentation. Mon, 07/26/2010 - 13:10 Hi all, I am following Ryne's Hi all, I am following Ryne's code on fitting GMM, but ran into a few error messages. My model is simpler: I fit a Latent Basis 2-class GMM with 9 repeated assessments, constraining time loadings at times 1 & 4. I am starting with a model that has no within-class intercept & slope variances (so, my 'phi' matrix is constrained to 0). I get several error messages. First, when I am creating Model Expectations for Mixture (Parent) Model by specifying the sum of class proportions equal to 1, I get the following error: > classP <- mxMatrix ("Full", 2, 1, free = TRUE, lbound = 0, + labels = c ("pclass1", "pclass2"), name = "classProbs") > classS <- mxAlgebra (sum(classProbs), name = "classSum") > constant <- mxMatrix ("Iden", 1, name = "con") > classC <- mxConstraint ("classSum", "=", "con", name = "classCon") Error in mxConstraint("classSum", "=", "con", name = "classCon") : unused argument(s) ("=", "con") Of course, subsequently, I can't specify the MxModel, since I don't have the classC object, and I get an error when running the full mixture model. I am attaching the complete R syntax file and would greatly appreciate any leads on how to modify it. Attachment Size Grouped-based GMM.R 4.21 KB Mon, 07/26/2010 - 13:21 The mxConstraint() syntax has The mxConstraint() syntax has changed. Try using: mxConstraint(classSum == con, name = "classCon") Note the absence of quotes in the first argument. MxConstraints and now specified much like MxAlgebras are specified. Mon, 07/26/2010 - 15:06 Thank you! That solved the Thank you! That solved the issure of constrains. However, when I run the final model with > mixedResults <- mxRun (mixedModel) I am getting the following error message: Error: The job for model 'Group Based GMM 2cl' exited abnormally with the error message: Expected covariance matrix is not positive-definite in data row 30 at iteration 266. In addition: Warning message: In model 'Group Based GMM 2cl' NPSOL returned a non-zero status code 6. The model does not satisfy the first-order optimality conditions to the required accuracy, and no improved point for the merit function could be found during the final linesearch (Mx status RED) Mon, 07/26/2010 - 15:39 I would be tempted to keep I would be tempted to keep theta elements positive, thusly: manCov <- mxMatrix ("Diag", 9, 9, free = TRUE, values = rep (25, times = 9), labels = 'residual', name = 'theta') manCov <- mxMatrix ("Diag", 9, 9, free = TRUE, values = rep (25, times = 9), labels = 'residual', lbound = 0, name = 'theta') And I wonder if this has anything to do with a "mancave" :) Tue, 07/27/2010 - 13:43 Thank you, Thank you, Michael. specifying lower bounds on the theta matrix did help the estimation process. What I realized, however, is that factor loadings for the slope parameter were equal across latent classes, thus, representation for one of classes was 0% (makes sense, since trajectories were specified as being the same). When specifying class-specific loadings for the slope (so that trajectories would differ across classes), I run into a new problem of R complaining: Error: Unknown reference 'lambdaS1' detected in the entity 'lambda1' in model 'groupGMM1' Here is the modified syntax I'm using to which, I think, the error message is referring: ## factor loadings for slope cl1 time1 <- mxMatrix ("Full", 9, 1, free = c(FALSE, TRUE, TRUE, FALSE, TRUE, TRUE, TRUE, TRUE, TRUE), values = c(0, .5, .8, 1, 1, 1, 1, 1, 1), label = paste ('t', 1:9, 'cl1', sep = ""), name = 'lambdaS1') ## factor loadings for slope cl2 time2 <- mxMatrix ("Full", 9, 1, free = c(FALSE, TRUE, TRUE, FALSE, TRUE, TRUE, TRUE, TRUE, TRUE), values = c(0, .5, .8, 1, .8, .7, .6, .5, .4), label = paste ('t', 1:9, 'cl2', sep = ""), name = 'lambdaS2') ## factor loading matrix lambda loadings1 <- mxAlgebra (cbind (lambdaInt, lambdaS1), name = 'lambda1') loadings2 <- mxAlgebra (cbind (lambdaInt, lambdaS2), name = 'lambda2') ## model expectations: meanAlg1 <- mxAlgebra (alpha1 %*% t(lambda1), dimnames = list(NULL, names), name = 'mean1') covAlg1 <- mxAlgebra (lambda1 %*% phi %*% t(lambda1) + theta, dimnames = list(names, names), name = 'cov1') meanAlg2 <- mxAlgebra (alpha2 %*% t(lambda2), dimnames = list(NULL, names), name = 'mean2') covAlg2 <- mxAlgebra (lambda2 %*% phi %*% t(lambda2) + theta, dimnames = list(names, names), name = 'cov2') model1 <- mxModel ("groupGMM1", factorMeans1, factorCov, manCov, unit, time, loadings1, meanAlg1, covAlg1, mxFIMLObjective ('cov1', 'mean1')) Tue, 08/03/2010 - 17:10 time1 is not included in your time1 is not included in your model. That should do the trick.
{"url":"http://openmx.psyc.virginia.edu/thread/444","timestamp":"2014-04-20T05:54:24Z","content_type":null,"content_length":"47774","record_id":"<urn:uuid:ced2c5dc-9fa2-45ce-971f-7dc64439672f>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00624-ip-10-147-4-33.ec2.internal.warc.gz"}
The Secret History of Mathematicians The Secret History of Mathematicians Arthur Cayley: Mathematician Laureate of the Victorian Age. Tony Crilly. xxiv + 609 pp. Johns Hopkins University Press, 2006. $69.96. James Joseph Sylvester: Jewish Mathematician in a Victorian World. xvi + 461 pp. Johns Hopkins University Press, 2006. $69.96. Historian George Sarton often said that science advances in darkness, invisible to the majority of people, who are more interested in battles and other noisier activities. In his 1957 book The Study of the History of Mathematics, Sarton went on to say that if the history of science is secret, then the history of mathematics is doubly so, "for the growth of mathematics is unknown not only to the general public, but even to scientific workers." Sarton's words help us understand why few have ever heard of Arthur Cayley (1821-95) or James Joseph Sylvester (1814-97), two of the most profound and prolific mathematicians of the Victorian era. Cayley's seminal investigations of matrix algebra, which constituted only a tiny portion of his 967 papers, were crucial for the development of linear algebra. The terms matrix, determinant and Jacobian, familiar to most science students, were invented by Sylvester, an enthusiastic poet who called himself the "mathematical Adam." It is not clear when Cayley and Sylvester first met, but by 1847 they were corresponding to share thoughts about mathematics. They rallied and inspired each other for nearly four decades, drawn together by their common circumstances. Each had triumphed on the University of Cambridge's fearsome Tripos examinations: Sylvester placed second in 1837 and Cayley first in 1842. These accomplishments qualified them for mathematics professorships, but such positions were few and openings rare. Cayley was a Trinity College fellow at Cambridge for a few years and could have remained so, teaching routine courses and living a communal life. But had he married, he would have been compelled to resign his fellowship and then perhaps scratch out a clergyman's living at one of the parishes belonging to Trinity, far from the academic community. Sylvester's options were fewer. As a Jew, he could not even collect his Cambridge degree, since in order to do so he would have had to subscribe to the Thirty-Nine Articles of the Church of England. Cayley and Sylvester would travel other avenues. By 1846 both were studying law at London's Inns of Court, and each stayed on there for several years, Cayley drafting legal documents as a barrister-at-law and Sylvester working as an actuary and barrister. The first had escaped the stone, parental grip of the Church of England. The second was its unloved bastard. Turning away legal and actuarial work, hoarding every spare minute for mathematics, Cayley and Sylvester communicated with each other about their research almost daily, building a new theory of algebraic invariants. So intertwined were their lives as mathematicians, both during and after their careers at the Inns of Court, that when E. T. Bell wrote his famous (and flawed) history, Men of Mathematics, he christened them the "Invariant Twins." It is appropriate that Johns Hopkins University Press should publish their biographies simultaneously. Arthur Cayley: Mathematician Laureate of the Victorian Age, by well-known historian of mathematics Tony Crilly, is the first full-length account of Cayley's life. The book draws on many primary sources. Crilly begins every chapter with a brief abstract and divides the material into short sections, with subheads that help keep the reader oriented. He includes surprisingly few mathematical formulas and integrates his explanations of these smoothly with the biographical material. Cayley was born into a comfortable family. His father, Henry, was a "Russia merchant," and Arthur spent the first seven years of his life in St. Petersburg, where his family enjoyed gracious living and high culture among British expatriates, at the price of harsh winters and isolation. The family returned to England in 1828. Young Cayley attended King's College London, where his abilities were quickly recognized, and then entered Trinity College, Cambridge, in 1838—the year of Queen Victoria's coronation. When a biographical subject has led a calm, pleasant life, it is the biographer who must suffer, searching for engaging matter. Fortunately for Crilly, the Victorian society in which Cayley moved was close-knit and lively. George Boole, Charles Dickens, Francis Galton, James Clerk Maxwell and William Thomson (later Lord Kelvin) are but a few Cayley intimates found in these pages. (Crilly includes such details as that Cayley disliked Dickens's novels, preferring instead those of Jane Austen.) Anyone interested in the emerging role of the research mathematician in England will find Crilly's book particularly rewarding. Cayley finally attained a professorship at the age of 41, when he became the first person elected to the Sadlerian chair of mathematics at Cambridge. In accepting the position, he happily traded income for leisure. Scientific research had by that time gained acceptance at Cambridge. However, pure mathematics was regarded as mere mental gymnastics, or at best a scientific tool; "mixed mathematics" (what today we call applied mathematics) was valued more. Shortly after Cayley was appointed, William Thomson expressed that popular bias to Hermann von Helmholtz: "Oh! that the CAYLEYS would devote that skill that they have to such things [Kirchoff's work on electrical conducting plates] instead of to pieces of algebra which possibly interest four people in the world, certainly not more, and possibly also only the one person who works." Cayley was elected president of the British Association for the Advancement of Science in 1883, an honor that no pure mathematician had enjoyed in nearly 40 years. The association's annual meeting in Southport attracted scientists, politicians, preachers and the working class. Fear of mathematics being nothing new, an anxious rumor spread there that the new president would require a blackboard and a supply of chalk! Cayley's portrait was painted, and his address was reported in the Times. A look back at the meeting's proceedings confirms that there were very few presentations in pure mathematics. The reader is left with the sad impression that despite England's pride in its "mathematician laureate," Cayley might have been better appreciated in France or Germany. Cayley's lifelong friend Sylvester was loquacious and quick-tempered, Cayley's emotional antipode. Sylvester has been a focus of research for historian and mathematician Karen Parshall for more than two decades. In her 1998 book James Joseph Sylvester: Life and Work in Letters, she examined his mathematics with skill and insight. Sylvester sprinkled all of his writing with witty reflections, philosophical digressions and even lines of verse. Readers of his letters have developed a thirst to know more about him, and with the arrival of Parshall's masterful biography, James Joseph Sylvester: Jewish Mathematician in a Victorian World, they can imbibe at last. Parshall turns over the political and religious soils that alternately starved and nourished Sylvester's work. Mathematics fertilizes her account, but she distributes it sparingly. What grows is a rich story of a stubborn genius willing to fight the world so that he might bestow his gifts. We are introduced to Sylvester as a troubled child who is becoming increasingly aware of the anti-Semitism that will impede him. We later watch his hopes soar and then plummet as he struggled with the reality that in the 19th century few academic positions, particularly those at Oxford and Cambridge, could be achieved by an English Jew. After college, he had an unhappy stint for a few years as a professor of natural philosophy at secular University College London. In his mid-20s, he resigned the position to take another, across the Atlantic at the University of Virginia. By offering an invitation to one of England's rising scholars, the University of Virginia hoped to elevate its own prestige. By accepting, Sylvester hoped to fashion there the sort of research professorship that he had longed to find at home. The two daydreams combined into a nightmare. When he arrived in 1841, the students, who were sons of the Southern aristocracy, cheered him. They welcomed the great man with an "illumination" in his honor, burning candles between stolen barrels of tar and, according to a contemporary account, "raising an unholy racket." When the students' fantasy of an Oxbridge don was spoiled by "a little, bluff, beef-fed English cockney, perfectly insignificant in his appearance, and raw and awkward in his manners," their admiration soon gave way to insubordination and threats of violence. That Sylvester acquired a "sword cane" for protection, and that he was later attacked by a pair of brothers, are very likely, Parshall concludes. That Sylvester drew it and struck a nonfatal blow, as a student reported, remains an unconfirmed legend. Back in England, Sylvester anticipated a bleak future. He had no job and had accomplished little mathematically in the preceding two years. In December 1844 he accepted an offer of a position as an actuary at the newly created Equity and Law Life Assurance Society at Lincoln's Inn Fields in London. To do so must have been painful for someone who wanted so much to do research as a mathematician. A couple of years later he began studying for the bar. Sylvester's business efforts and his intense mathematical collaboration with Cayley during this phase of his life were successful but exhausting. When a professorship at the Royal Military Academy in Woolrich became available in 1855, one that did not require taking an oath "on the true faith of a Christian," he applied, and with the help of an influential friend, Lord Brougham, he got the position. The Academy was more concerned with training future officers than with producing pure mathematics. Nonetheless, for 15 years Sylvester persisted in struggling to mold his position to suit his aspirations, until in 1869 he was forced into mandatory retirement. Seeking solace, he turned to poetry, publishing Laws of Verse (1870), an unreadable study of versification that prompted one reviewer to advise the author to concentrate instead on mathematics. Over the next six years, Sylvester kept busy in London with a variety of activities that included music and public speaking. It seemed impossible for him to attain another academic position. In 1876, salvation came from America in the form of an offer from the new Johns Hopkins University to lead a graduate program in mathematics and launch the American Journal of Mathematics, a research journal that continues today. For the next seven years, Sylvester would be surrounded by enthusiastic students and appreciative colleagues. While he was there, the political and social scenery back in England was being transformed as new rights were won by working men, the emerging middle class and Jews. At age 70, Sylvester was elected to the Savilian professorship of geometry at Oxford. He wiped away tears as he left behind his grateful students and colleagues in Baltimore. It is difficult for us to appreciate fully the emotions that he felt as he was finally embraced by Oxford. "She is a good dear mother our University here," he wrote to Cayley, "and stretches out her arms with impartial fondness to take all her children to her bosom even those she has not reared at her breast." Cayley and Sylvester differed wildly in personality and background. But mathematically they were often of a single mind. Together they were pioneers, breaking free of the powerful attraction exerted by Newton, who was at the center of the solar system that was English mathematics. The abstract algebra that Cayley and Sylvester developed, so useless in the opinion of Thomson, would prove essential to Werner Heisenberg, Albert Einstein and other future scientists. Both of these biographies will be appreciated by anyone who is amused by this cosmic irony.
{"url":"http://www.americanscientist.org/bookshelf/id.3135,content.true,css.print/bookshelf.aspx","timestamp":"2014-04-16T21:54:27Z","content_type":null,"content_length":"93729","record_id":"<urn:uuid:aba77ff3-e1e9-4a7e-8931-fd6cdaaf8eef>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00117-ip-10-147-4-33.ec2.internal.warc.gz"}
Does probability come from quantum physics? (Phys.org)—Ever since Austrian scientist Erwin Schrodinger put his unfortunate cat in a box, his fellow physicists have been using something called quantum theory to explain and understand the nature of waves and particles. But a new paper by physics professor Andreas Albrecht and graduate student Dan Phillips at the University of California, Davis, makes the case that these quantum fluctuations actually are responsible for the probability of all actions, with far-reaching implications for theories of the universe. Quantum theory is a branch of theoretical physics that strives to understand and predict the properties and behavior of atoms and particles. Without it, we would not be able to build transistors and computers, for example. One aspect of the theory is that the precise properties of a particle are not determined until you observe them and "collapse the wave function" in physics parlance. Schrodinger's famous thought experiment extends this idea to our scale. A cat is trapped in a box with a vial of poison that is released when a radioactive atom randomly decays. You cannot tell if the cat is alive or dead without opening the box. Schrodinger argued that until you open the box and look inside, the cat is neither alive nor dead but in an indeterminate state. For many people, that is a tough concept to accept. But Albrecht says that, as a theoretical physicist, he concluded some years ago that this is how probability works at all scales, although until recently, he did not see it as something with a crucial impact on research. That changed with a 2009 paper by Don Page at the University of Alberta, Canada. "I realized that how we think about quantum fluctuations and probability affects how we think about our theories of the universe," said Albrecht, a theoretical cosmologist. One of the consequences of quantum fluctuations is that every collapsing wave function spits out different realities: one where the cat lives and one where it dies, for example. Reality as we experience it picks its way through this near-infinity of possible alternatives. Multiple universes could be embedded in a vast "multiverse" like so many pockets on a pool table. There are basically two ways theorists have tried to approach the problem of adapting quantum physics to the "real world," Albrecht said: You can accept it and the reality of many worlds or multiple universes, or you can assume that there is something wrong or missing from the theory. Albrecht falls firmly in the first camp. "Our theories of cosmology say that quantum physics works across the universe," he said. For example, quantum fluctuations in the early universe explain why galaxies form as they did—a prediction that can be confirmed with direct observations. The problem with multiple universes, Albrecht said, is that it if there are a huge number of different pocket universes, it becomes very hard to get simple answers to questions from quantum physics, such as the mass of a neutrino, an electrically neutral subatomic particle. "Don Page showed that the quantum rules of probability simply cannot answer key questions in a large multiverse where we are not sure in which pocket universe we actually reside," Albrecht said. One answer to this problem has been to add a new ingredient to the theory: a set of numbers that tells us the probability that we are in each pocket universe. This information can be combined with the quantum theory, and you can get your math (and your calculation of the mass of a neutrino) back on track. Not so fast, say Albrecht and Phillips. While the probabilities assigned to each pocket universe may seem like just more of the usual thing, they are in fact a radical departure from everyday uses of probabilities because, unlike any other application of probability, these have already been shown to have no basis in the quantum theory. "If all probability is really quantum theory, then it can't be done," Albrecht said. "Pocket universes are much, much more of a departure from current theory than people had assumed." The paper is currently posted on the ArXiv.org preprint server and submitted for publication and has already stimulated considerable discussion, Albrecht said. "It forces us to think about the different kinds of probability, which often get confused, and perhaps can help draw a line between them," he said. More information: albrecht.ucdavis.edu/ 1.9 / 5 (18) Feb 05, 2013 The whole point of the schroedinger cat experiment is the absurdity of extrapolating from quantum to macro or even universe scales. The cat knows if its alive, even if we only know a probability figure for its demise. An autopsy would clearly show when the cat died, not when it entered some superimposed state. 3.4 / 5 (5) Feb 05, 2013 The paper is pretty neat. An example of what is being talked about here is coin flipping. How can the probability of the outcome of a coin flip be traced back to quantum mechanical uncertainty? As it turns out, it all starts with water and polypeptides in your nervous system. Quantum uncertainty at that level introduces enough probability in molecular transport to evoke less-than-perfect physical prowess, which ultimately leads imprecise control of the energy required to repeatedly produce a perfect flip and snatch. 1.2 / 5 (18) Feb 05, 2013 Quantum computing is neither faster, neither more secure than the classical computing (but it's still good grant and salary generator for physicists, who manage to pretend the opposite): Why quantum computing is hard - and quantum cryptography is not provably secure, No quantum trick – no matter how complex or exotic – can improve the capacity of quantum optical communication 1.7 / 5 (12) Feb 05, 2013 BTW The article title "does probability come from quantum physics" is logically orthogonal to the preprint abstract "..we claim there is no physically verified fully classical theory of 4.6 / 5 (9) Feb 05, 2013 Just another non-falsifiable speculation. Whydening Gyre 1.4 / 5 (11) Feb 05, 2013 "God does not play with Dice." - Albert Einstein. He has, however, found a coin or two to flip... 2 / 5 (8) Feb 05, 2013 @EyeNStein The cat There are at least two cats. Your feeble attempt to square QM with classical physics employing a hidden English predication failed Just another non-falsifiable speculation. You're wrong. Perhaps SOME experiments are beyond present abilities, but others are doable by brighter minds than you 1.4 / 5 (9) Feb 05, 2013 two cats. The invention of a second non-verifiable cat doesn't add to our understanding of the universe it only adds confusion. If you had mentioned that micro scale vibrators can be simultaneously vibrating and not: I would have agreed. But concious cats and the cyanide release sensors constitute a measurement-improbability collapses-clasical reality appears. No second cat is required. 4.5 / 5 (15) Feb 05, 2013 You can accept it and the reality of many worlds or multiple universes, or you can assume that there is something wrong or missing from the theory. WOW! What a ridiculous statement! Many, many physicists accept the veracity of quantum theory but reject the idea of multiple universes. Multiple universe are NOT required by the theory, and are completely speculative! Whether or not other universes exist is a matter of philosophical OPINION. Moreover, the issue is unscientific in nature, since there's not even any way to test it! You'd expect to hear this sort of thing on TV and among the general public, but from a physicist! Unbelievable! 1.6 / 5 (14) Feb 05, 2013 Whether or not other universes exist is a matter of philosophical OPINION Well, not quite. For me it's mostly a question of physically and logically rigorous definition of multiverse. 1.9 / 5 (9) Feb 05, 2013 Whether or not other universes exist is a matter of philosophical OPINION. So is the question of existence itself. Existence is completely dependent upon what David Bohm described as "implicit order," which is only peripherally understood 1.3 / 5 (16) Feb 05, 2013 One of the consequences of quantum fluctuations is that every collapsing wave function….. It is interesting to note that up to now we still could not understand 'wave function of what, or what is wavy? Conventionally, it is just a mathematical abstraction; this is one real problem in quantum mechanics. Understanding the mechanism of quantum mechanics as below, could answer the problem. 3.9 / 5 (11) Feb 05, 2013 Quantum computing is neither faster, neither more secure than the classical computing . . . Unfortunately, this article is about probability and its origins, not quantum computing. 1 / 5 (2) Feb 05, 2013 Studies reported on this site have shown that "collapse" of a wave function is an "overshadowing" by another wave function and that the overshadowed function(s)still exist. In addition, the "observer" is not the research staff but a hidden, ordering "field." 1.6 / 5 (13) Feb 05, 2013 The whole point of the schroedinger cat experiment is the absurdity of extrapolating from quantum to macro or even universe scales. The cat knows if its alive, even if we only know a probability figure for its demise. An autopsy would clearly show when the cat died, not when it entered some superimposed state. Cat knowing, forensics, the ASPCA, and T gondii are some of the many parameters not included in the thought experiment. 3.4 / 5 (14) Feb 05, 2013 The human sensors are a magical waveform collapsing mechanism... um... no. The observer is as part of the system as anything else... there IS NO OBSERVER. Yes, think about it... the human eye looking at something to observe is no different than photons interacting with a rock. Interacting with matter... be it a rock, a magnetic field, a human eye... they are all the same, they all change the properties of the thing they are interacting with. Why the continual, somewhat arrogant, fixation that the human body is a magical device that collapses waveforms into a reality? Is this because you think there is a 'soul'... some part of the human body that is outside the bounds of physics? 2.3 / 5 (9) Feb 05, 2013 The cat knows if its alive, even if we only know a probability figure for its demise. An autopsy would clearly show when the cat died, not when it entered some superimposed state. Collapsing the wavefunction also decides when the cat died. Consider that at the moment of observation the cat exists in all possible states it can take while it is in the box, which includes all the possible histories, and all possible times of death. The laws of quantum mechanism apply equally to what and where and when. The isotope that relases the poison by decaying is not in superposition only for decaying or not decaying, but when it decays if it does. The cat knowing that it is alive is irrelevant, because nobody outside the box can read its mind telepathically. 1 / 5 (8) Feb 05, 2013 One crucial factor missed out is the human mind, not measurable and yet it could be the main factor responsible for materializing which probability or pocket of universes it desires. At one time, the earth was flat, but now the mind now "knows" that the earth is actually a ball in space. 1 / 5 (5) Feb 05, 2013 One other explanation for probability is that the lightcone of any event is not complete prior to the event, so there is no way to verify total input prior to an event. The laws may be deterministic, but input is not. As for multiworlds, it is a question of whether we view time as a vector from a determined past into a probabilistic future, or the changing configuration of what is, that turns future probability into actuality. Consider this image; http://en.wikiped...film.svg Do we view it as the present moving from left to right, or the frames moving right to left? Before a race, there are many possible winners, but after it, only one. 2.9 / 5 (7) Feb 05, 2013 Hello ValeriaT & Whydening Gyre it's that crazy 'language' guy again. I am more curious about what make US think in terms of probability? Probability is OUR problem, is it not, as we are unable to probe the quantum region with our own senses. We can't determine many things for certain so we have to use a 'best (mathematical) guess'. I don't see anything wrong in that, as long as we remember it's WE who are 'guessing' and NOT the system under examination. Take a macro situation for example. Police may use Rossmo's formula, 'Dragnet' computer program or other stats to predict where some criminal will strike again. They know the past events criminal so look for a pattern. Sometimes it works sometimes it doesn't. But the reality is that only the criminal knows (doesn't know) where to strike next. I've always had a 'realistic' problem with the poor old cat; the cat will try to escape before dying and evidence of that will be on the walls of the box. The cat certainly knows what's coming! 1.5 / 5 (8) Feb 06, 2013 seems it at least partially depends on if you consider that the cat is in BOTH states or in NEITHER state, if its in NEITHER then there is no requirement for multiverse obviously. But this whole 'collapsing the waveform' or 'quantum uncertainty' seems to be mistaking ignorance for epiphany to me. Ugh, if a phenomena requires an observer to be in a collapsed state, how did dinosaurs exist, or perhaps they didnt? Copenhagen taken to its extreme form seems to be sort of creationism to me, with the implicit assumption that before humans there was some sort of 'universal observer'. 3.2 / 5 (5) Feb 06, 2013 The state of the cat is entirely irrelevant to the issue of probability. The probability that the cat is dead or alive is identical - by the design of the experiment - as the probability that the radioactive source decays. The state of the cat is only a macroscopic indicator of that decay. The problem with the question of course, is not having a precise definition of probability that is devoid from connection with the real world. The words "choice" or "selection" are part of the definition of probability and therefore make the definition dependent upon physical existence. There can be no choice or selection without physical existence. On the other hand defining probability without any physical connection seems to leave the definition simply a matter of creating a weighted set, and comparing the weight of one element with the weight of the set. This certainly has no connection to quantum mechanics. 1 / 5 (5) Feb 06, 2013 The only hope to reach a convincing interpretations of quantum mechanics is by modelling the dynamics of measurements. This has been done in a rich enough model, which clarifies the cat issue: the cat dies because of its interaction with the apparatus. This approach leads to a minimal interpretation: the statistical interpretation, about which Einstein wrote even in his last year. Trying to solve our problems with QM in other universes, as the authors do, is a non-minimal approach; it can and should be neglected. 2.6 / 5 (5) Feb 06, 2013 StarGazer2011, yes I can your point but then you are making a distinction about who is valid observer? While the dinosaurs were around so were other creatures that had sensory organs so are you saying that they are not valid observers in QM context? The same goes for the sound of a falling tree. On a planet where life is prolific we can hardly claim the absence of an observer so it is only 'experiments' like the cat about which we can argue. Put it another way, has anyone actually seen a subatomic particle? No, of couse not, we don't have the physical apparatus with which to do so yet here we all are (me included)discussing 'prob. in QM'. Yet again there are those that have a 'biocentric' view. It seems that I can no longer claim Cogito ergo sum and if that assertion is wrong, I haven't written anything and you aren't reading it. One thing for sure; If nobody opens the box the cat will die anyway but then maybe hpothetical cats live forever! 2.3 / 5 (12) Feb 06, 2013 Copenhagen taken to its extreme form seems to be sort of creationism to me, with the implicit assumption that before humans there was some sort of 'universal observer'. The problem many people seem to be having with QM is that they interpret the word "observer" to mean a literal concious and cognitive observer. What the word in QM means is simply the interchange of information. The superposition can only be in all states if no other thing depends on it being in a certain state. The classical world arises from these dependencies where A depends on B depends on C depends on... where each limits the possible states of the other according to the rules of physics until you're left with no wiggle room for the wavefunctions. For that purpose, a rock can be the observer, and that is also the reason why Shrödingers cat has to be in a box so nothing else can interact with it until the box is opened. Quantum systems only behave in "weird" ways when you isolate them. 1.6 / 5 (7) Feb 06, 2013 Okay natello so you're an AWT and battling against hard to get you're point across...man, you're in for a rough ride. But I've been in (and still am) that situation so hang in there. Y'know what is really confusing is this:[actual converstation between me and an astrophysicist] LD;'...Black hole is what?...' AsPh; '...in laymans terms a superdense object causing distortion.' LD; 'Er, I'm sorry, since the Aether theory has been rejected... a distortion in what?' AsPh; '...in the fabric of space-time'. LD; 'Fabric...what is that then?' Instances like this tell me that terms like 'a sea of virtual particles slipping in and out of existence...' is no more valid than an Aether. But then I have a theory of my own that is different to most of the others which is partly expressed in the the public domain. The only trouble is that I don't fully understand it myself, but I am working on it. When I'm ready it will be in print. As I said, hang in there. vlaaing peerd 2 / 5 (1) Feb 06, 2013 I don't think flipping a coin on macro scale has anything to do with probability or in the least so little it has no influence on the outcome. When you are able to count in all influencing factors one should be able to exactly predict the outcome of the flipping coin. Probably (no pun intended) the bigger the scale the less likely quantum scale probability would have influence on it. 1 / 5 (7) Feb 06, 2013 Had the professor and his student done their homework, they would have found it to be the other way around. Quantum theory began with Max Planck who first proposed the existence of fixed energy states. Planck's famous radiation equation was based on it and the laws of probability that were first applied to thermodynamics. Planck noteded the similarity between gases and electron movements and called random electron currents as a "gas". His energy equation was definitely based highly on probability equations, resulting in his energy state equation, which led to his radiation equation. See "Planck's Columbia Lectures". 5 / 5 (3) Feb 06, 2013 The whole point of the schroedinger cat experiment is the absurdity of extrapolating from quantum to macro or even universe scales. I didn't see anyone really give you a full response. The whole point of the schroedinger cat is not to point out the absurdity of scaling up quantum effects to macro-scale objects. It is to provide a reference that is based in reality that surrounds us. For a LOT of people, it is far easier to understand "the cat is in an undetermined state" than it is to understand any explanation using waves and particles. The cat "experiment" was never meant to be taken seriously. When I explain it, I always end it with, "But in this case we know the cat is DEAD. Nobody poked air holes in the box..." You'd be surprised how often someone says, "Yeah if they poked air holes in we could see the cat..." And ta-da, a light goes off. :) 1.5 / 5 (8) Feb 06, 2013 Axe and Tigger, well said. But I like LarryD's line: "as long as we remember it's WE who are 'guessing' and NOT the system under examination." Assume humans don't exist. Everything else in the universe is guided by physical laws that govern energy/magnetism, period. We make choices and attempts to measure, therefore we are the only reason for "probability". The universe outside the sphere of influence of any being capable of choice has absolutely no random occurences....just physics. Probability as in the precise point position of a quantum of energy such an electron or photon is again, only because we can't measure it with enough precision to make a definitive statement...the universe knows precisely where every quantum of energy present within it's boundaries is. We have to remember that we can only perceive/alter one local reality and that, at the end of the day we are still governed by the same physical laws that the universe is. 3.4 / 5 (5) Feb 06, 2013 To quote Paul Steinhardt: "An infinite number of universes with an infinite variety of physical properties implies that everything that can happen does happen in some universe. A theory that predicts everything predicts 1.7 / 5 (13) Feb 06, 2013 I do not believe in QT. First of all "random" does not exist. Period. We use the word random the same way we use the words "dark matter". We know there is some forces determining the result of an event. We just don't know what they all are, and we don't understand how they work entirely. If we did have complete understanding of any given system, we would be able to 100% predict any behavior. So to answer the articles title question. NO probability does not come from quantum physics. In fact quantum physics comes from our inability to understand all the possible factors. Probability exists only as a factor of our uncertainty. AKA lack of understanding. The word "random" is an insult to science. If we were able to copy all of the information contained in every atom of the "radioactive element" into a computer simulation. And we knew at the sub atomic lvl what causes one atom to decay, which is not "random". We could determine the exact moment of the cats death. Unwitnessed. 3 / 5 (6) Feb 06, 2013 For an intersting thought experiment on how the conscious observer is (or isn't) a crucial part of QM one may take a look at the "Wiegner's friend" extension of the Schrödinger's cat experiment. A rather more dramatic form can be achieved by placing a conscious observer (again Wiegner's friend) inside the box. 1.4 / 5 (9) Feb 06, 2013 It's kind of funny and ironic to me if you really think the whole thing through, the thing we try to predict most often is most likely the most "random"/unpredictable thing in the universe. A conscious decision. It is otherwise obvious to me that a multiverse does not exist. There is no subatomic "fly-in-the-ointment" where everthing else in the universe happened exactly the same way as in another univers until in one universe this "fly" being akin to a plinko chip went left instead of right, thus branching off and making another universe. This is just stupid. It is fun to imagine but so obviously not the case. Only the most simplistic of minds could believe that. This is why Einstein said "God does not play dice with the universe." Now having said that, is it possible that there could be a universe where I made the conscious choice to stop and have coffee this morning on the way to work. Thus changing the causality of every other event branching out from my tree of 1.4 / 5 (9) Feb 06, 2013 Well yes I believe so, but it would not mean an infinite number of universes. There would simply be one universe for every conscious decision ever made by every being that ever existed. Far less than 5 / 5 (2) Feb 06, 2013 I do not believe in QT. First of all "random" does not exist. Period. We know there is some forces determining the result of an event. We just don't know what they all are, and we don't understand how they work entirely. If we did have complete understanding of any given system, we would be able to 100% predict any behavior. I don't know how you can say with certainty that randomness does not exist when you openly admit we do not know everything there is to know about physics. Simply put, what if the laws of physics include randomness? We have plenty of evidence of random events occuring. It seems more likely that random behaviour DOES exist, and we simply don't know why it exists. That question would then fall into a huge category of very interesting unknowns we can confirm, but do not fully understand yet. 1.5 / 5 (10) Feb 06, 2013 I don't know how you can say with certainty that randomness does not exist when you openly admit we do not know everything there is to know about physics. Simply put, what if the laws of physics include randomness? We have plenty of evidence of random events occuring. It seems more likely that random behaviour DOES exist, and we simply don't know why it exists. That question would then fall into a huge category of very interesting unknowns we can confirm, but do not fully understand yet. I know because you cannot show me a single event in nature that is random. In fact many things we can predict with current technology would be considered whichcraft 100 years ago. The weather per se. The Schrodingers cat thought experiment the article sites refers specifcally to the "random" decay of radioactive isotope, yet this event is so "random" we know exactly how long it takes and in fact use it in our most acurate clocks. Making it one of the least "random" things we know of. 3.9 / 5 (11) Feb 06, 2013 I know because you cannot show me a single event in nature that is random. Radioactive decay would be one (of many) examples. While you can fit curves to random processes (distributions) you cannot predict when an individual event will occur. Randomness is constrained - but it can be shown that it cannot be the result of smaller, fully deterministic processes. (such smaller processes would constitute 'hidden variables'). And by the Bell tests we have a strong argument that hidden variables do not exist. You could only save that with non-local hidden variables as in the deBrogli-Bohm interpretation of QM. But that opens up an entirely different box of problems. And there is a strong indication that complete determinism isn't in the cards. (e.g. if you do double slit experiments with single electrons in the apparatus you still get interference patterns) 1.3 / 5 (12) Feb 06, 2013 "Radioactive decay would be one (of many) examples." I already dealth with this specific arguement in my previous post. If radioactive decay is random, then how is it the most precise event we can use to determine the passage of time with. If we can predict exactly how long it will take to happen, than how can it possibly be a random event? There is a very specific set of rules governing radioactive decay. We simply do not understand. I tell you again. Your quantum theory is "witchcraft" and very very unscientific. 3.3 / 5 (7) Feb 06, 2013 "Radioactive decay would be one (of many) examples." I already dealth with this specific arguement in my previous post. If radioactive decay is random, then how is it the most precise event we can use to determine the passage of time with. If we can predict exactly how long it will take to happen, than how can it possibly be a random event? There is a very specific set of rules governing radioactive decay. We simply do not understand. I tell you again. Your quantum theory is "witchcraft" and very very unscientific. You don't know how an atomic clock works. It has nothing to do with radioactive decay, which is random. 1.5 / 5 (10) Feb 06, 2013 I wasn't talking about atomic clocks. I was talking about half-lifes which is a probabalistic event. (You know, related to what we're discussing). Yet is predictable enough that we can use carbon-14 dating to determine the "probabalist" yet very accurate age of many things. In fact can you name a method used more often to determine the age of something? Again the mere fact that we can come up with a formula to accurately predict the nature of something "random" implies that it is in fact not random at all. 3.8 / 5 (10) Feb 06, 2013 If radioactive decay is random, then how is it the most precise event we can use to determine the passage of time with. Over many decay events you can average out - that's how we can use decay for measuring time (if there were few events then the dating via radiooactivity wouldn't work nearly as well). It's like flipping a coin - you can't predict any one specific outcome, but over a million coin tosses you can map all the possible paths and see that the paths that lead to a near 50/50 split are VASTLY in the majority over those that are all heads - so betting on teh average outcome is better than betting on the extreme one. I think you need to look up the difference between a probabilistic event and a probability distribution. 1.6 / 5 (13) Feb 06, 2013 First of all "random" does not exist. Period. I don't think so. Sometimes the information is simply lost. For example, the surface of water droplet cannot reflect the movements of all water molecules which are bouncing and colliding inside of it. Some of these movements are simply lost for the observer from outside for ever. How we can be sure, that the randomness of quantum mechanics doesn't work in the similar way? 1 / 5 (1) Feb 06, 2013 The paper is pretty neat. An example of what is being talked about here is coin flipping. How can the probability of the outcome of a coin flip be traced back to quantum mechanical uncertainty? As it turns out, it all starts with water and polypeptides in your nervous system. Quantum uncertainty at that level introduces enough probability in molecular transport to evoke less-than-perfect physical prowess, which ultimately leads imprecise control of the energy required to repeatedly produce a perfect flip and snatch. I just have to say Tektrix, that this almost a goddamn PERFECT comment. I've reused it (with a reference) on Facebook, and about to add it to Google . 1.7 / 5 (12) Feb 06, 2013 Nothing comes from nothing. 1.4 / 5 (10) Feb 06, 2013 Guys, "random" is a human construct. Every immediate reality (what we term as "the present") has a set of variables that came together to create it, this happens everywhere in the universe. If 2 supposed identical samples containing C14 display differing rates of decay, there is a reason...a mechanism causing the differential, whether we know what it is or not. If you knew EVERY variable, there is no probability. This is why we need it. We don't know and likely never will. QT is a wonderful thought exercise, but as long as we experience time as linear it will always be just that. "It's like flipping a coin - you can't predict any one specific outcome.." - AP See Tetrix's 1st post. Control every variable down to the quantum level and you can make the coin land the same way every time. 1.4 / 5 (10) Feb 06, 2013 Antialias. You are making a moot point. The simple fact that we can detect that different elements have different decay rates at all tells you it isn't random. The fact that these rates are measurable and can be generalized with a formula tells you that they are not random. It't obvious if you think things through to their logical conclusion. We simply cannot observe all of the data and information of the system to understand it completely. At the risk of being obtuse through repitition I say again. I like to use a computer simulation analogy. It's also akin to a star trek transporter. If we could map all of the information contained in an atom, (or group of atoms) and we knew completely how all the subatomic actions worked, we would be able to simulate with 100% certainty how the system would evolve by running the simulation faster than real time. Or teleport said object by remapping the data onto remotely located atoms. 1.5 / 5 (8) Feb 06, 2013 Not only randomness is an evident essential aspect of QM, claiming determinism as a universal fact to predict everything is not serious today: already deterministic gravitationally bound systems exceeding 3 bodies have been shown for more than a century to totally lose determinable properties beyond a time dependant predictability horizon. Beyond that horizon, only probalilistic approaches allow to do statisical predictions for those chaotic systems ruled by resonance. Determinism, although very useful, is only a practical simplification, just as linearity and so on, applicable in some limited artificial cases, which are idealising reality, which is evidently independant of us humans. It is our perception and interpretation to understanding, which is totally subjective. BTW, the theory of scale relativity still gives the most elegant interpretation, but this is another sensitive argument. 3.9 / 5 (7) Feb 06, 2013 Control every variable down to the quantum level and you can make the coin land the same way every time. Since you can't control it to the quantum level that's a nonsensical approach. At the quantum level values aren't discrete but a probability distribution. 1 / 5 (10) Feb 06, 2013 The problem with multiple universes, Albrecht said, is that it if there are a huge number of different pocket universes This article got into exactly the opposite conclusion. In this article The Multiverse Created Probability To Explain The Multiverse (synopsis) Whydening Gyre 1 / 5 (11) Feb 06, 2013 Does probability come from quantum physics? I would say - Whydening Gyre 1.4 / 5 (11) Feb 06, 2013 One crucial factor missed out is the human mind, not measurable and yet it could be the main factor responsible for materializing which probability or pocket of universes it desires. At one time, the earth was flat, but now the mind now "knows" that the earth is actually a ball in space. Humans knew that LONG before they thought it was flat... 3.7 / 5 (3) Feb 06, 2013 Your faith is not supported by experiment. "We could determine the exact moment of the cats death. Unwitnessed." - Jalmy 4.2 / 5 (5) Feb 06, 2013 You are confusing random with unpredictable. A coin toss can produce completely random results, and yet the statistical character of those tosses are absolutely predictable. "The fact that these rates are measurable and can be generalized with a formula tells you that they are not random." - Jalmy If Statistics wasn't predictable, then it would have no value. Since it does, it is. Since it is, you are wrong. 1.4 / 5 (10) Feb 07, 2013 That is my whole point fool. Something being predictable at any lvl means it is not random. If there is a patern to it then it isn't "random" that's the whole point. Way to entirely miss the big picture though. Try to read the entirety of my post, then make a stupid comment after please. TY. 1.6 / 5 (13) Feb 07, 2013 Your faith is not supported by experiment. Really? So the literally uncountable amount of information we are processing in computer simulations is all wrong? We might as well stop doing it. Since we cannot support the results with experiment. Why do we spend billions of dollars devolping these huge supercomputers to run science simulations? Because the enormous amount of math and the complexity and vastness of the data sets make it impossible to test with experiment. Your uncertainty principle is wrong. Your so called "experiments" that you think prove it are just misinterpreted. Einstein thought it was BS and so do I. If you actually spent some time thinking about things maybe you would too. 1.4 / 5 (9) Feb 07, 2013 "The cat knowing its alive" is not irrelevant. The cat would be very aware of being locked in a confined space. Its only human vanity that is prepared to create whole universes rather than admit the obvious classical nature of a cat collapsing its own wave function even if it is not fully sentient. 1 / 5 (5) Feb 07, 2013 Control every variable down to the quantum level and you can make the coin land the same way every time. Since you can't control it to the quantum level that's a nonsensical approach. At the quantum level values aren't discrete but a probability distribution. No, they are discrete. Just not by OUR measuring capabilities, which is why I conceded we need "probability" in the same post you drew the above quote from. IF you had the ability to measure the orbit of the electron around a the nucleus of Hydrogen atom you could apply a vector coordinate to every position it occupies. These would be discrete coordinates, not probabilistic ones. 3.1 / 5 (14) Feb 07, 2013 At the quantum level values aren't discrete but a probability distribution. In the scope of CI QM, yes. But what about out of it? What if QM does not = reality? Is it still all "random"? Because you said so? Because the Copenhagenists said so? Or just because the sky is blue? What makes you so certain (that nothing is certain)? Isn't that allready a flawed logic construct to begin with? The Bell experiments? There were no loophole-free Bell tests done yet. Double-slit? Without a truckload of fundamental assumptions, essentially inconclusive.. What else? Nuclear decay? As random as a humans lifespan. While the average might converge to a certain value, it doesn't strictly dictate the path of an individual (which cannot be predicted without fully understanding the underlying principles and having sufficient data). At this point, the only reasonable and honest conclusion that can be drawn is, that we can NOT be certain of either.. Ergo, WE SIMPLY DO NOT KNOW (yet). 3.7 / 5 (6) Feb 07, 2013 Something being predictable at any lvl means it is not random. If there is a patern to it then it isn't "random" that's the whole point. Your whole point is wrong, then. You can measure the probability of something happening with great accuracy. That does not mean that an individual event of that class is not random. Since you say half-life is not random, consider carefully what half-life means. For a given sample of element X, 50% of the atoms in that sample will decay within the half-life. So if element X has a half-life of 10 minutes, some atoms will decay in 1 second. Some in 2 minutes. Some in 9 minutes and four seconds. Some in 15 minutes. Some in an hour... You can say with great certainty that the half-life is 10 minutes. You can not say with any certainty when exactly a single atom (a single event) will decay. This is not because there are hidden variables we do not know about. It is because the decay is random. 2.8 / 5 (11) Feb 07, 2013 IF you had the ability to measure the orbit of the electron around a the nucleus of Hydrogen atom you could apply a vector coordinate to every position it occupies. These would be discrete coordinates, not probabilistic ones. I'm affraid it might be a bit more complicated.. There is no such thing as an absolute discrete coordinate either, as that would also require an absolute discrete point in time. And in the light of last several years of data, Planck time doesn't seem to fit that bill. Everything is in constant motion, so every "discrete" coordinate is in essence just an average over a chosen time interval. No matter how sensitive/precise the measurements are, it will allways be like that. Sort of like the HUP in praxis. In a certain sense, one could almost say that there is no such thing as a tangible NOW - just past, and future.. This actually makes all the arguments about discrete or probabilistic reality quite moot, as the ultimate answer seems to be neither, nor. 1.7 / 5 (12) Feb 07, 2013 Sorry rkolter you missed my point again. Perhaps you did not thorougly read. The important part of it is that the probability amounts of each element is different and repeatable to a certain degree of uncertainty. If it was truely random. All elements would have the same probability of decay. But they dont. They have different decay rates. And repeatable. Even though you have to go to a great many years to see the patern. There IS A PATERN. Maybe your problem is you do not understand the word "random". Here is the definition as given by Webster's. 1 a: lacking a definite plan, purpose, or pattern See the word pattern is key. Another corroborating piece of evidence is the fairly recent observation that the sun is influencing these decay rates. In times of solar flares for example decay rates increase. Again this is showing there are things influencing your completely not random "random" decay rates. I guess Erwin Schrödinger needs to come up with a new trigger device. Whydening Gyre 1.4 / 5 (11) Feb 07, 2013 You are confusing random with unpredictable. A coin toss can produce completely random results, and yet the statistical character of those tosses are absolutely predictable. Aren't statistics and probability flip sides of the same coin? "The fact that these rates are measurable and can be generalized with a formula tells you that they are not random." - Jalmy If Statistics wasn't predictable, then it would have no value. Since it does, it is. Since it is, you are wrong. Aren't statistics and probability flip sides of the same coin? 1 / 5 (6) Feb 07, 2013 "I'm affraid it might be a bit more complicated.." Daywalk - I'll add the words "in a specific inertial reference frame" to what I said regarding the vector coodinates. Relative to the universe, everything is in motion. Magnetically trapped in a vaccuum chamber here on earth, we simply lack the ability to measure the coordinates the electron occupies in our inertial reference frame. 4.2 / 5 (5) Feb 07, 2013 ...The important part of it is that the probability amounts of each element is different and repeatable... If it was truely random. All elements would have the same probability of decay... 1 a: lacking a definite plan, purpose, or pattern See the word pattern is key. I removed your personal attacks and snide comments to try to pull the core of your argument out of your rant. At the core of your arguement is the assumption that "no pattern" means necessarily "no solution is more likely than any other". This is not the case. It is perfectly reasonable to have a process that is random, with some options more or less likely. Also, you excluded the second definition from Webster: (2) Relating to, having, or being elements or events with definite probability of occurrence This is what we've been talking about. 1.4 / 5 (9) Feb 07, 2013 Exactly rkolter, your version of random is the same "random" that we use when talking about computers generating random numbers. Which again is not at all actually random. It is simply beyond our conscious ability to descern, or for the sake of most uses it is good enough. But for the specifics of this topic and our argument that type of random is not good enough, and it is most definitly not what I have been talking about. Yes "no solution is more likely than any other" is true randomness. Which my whole arguement that you keep helping me prove. True randomness DOES NOT EXIST. The word is a tool to explore with, just as is the random generator in a computer. It is a tool, nothing more nothing less. Just as quantum mechanics is a tool. I think we got a bit side tracked from the original article though. Just to clairfy what my stance is. I think the asumption that true randomness exists is the flaw that leads to the creation of multiverse which the existance of is obsurd. 2.1 / 5 (7) Feb 07, 2013 I don't think flipping a coin on macro scale has anything to do with probability or in the least so little it has no influence on the outcome. Regardless of what you think, the authors of the article use this very example to illustrate the premise. And AP'org is right- the intrinsic nature of QM makes the levels of control you are suggesting, impossible. The idea that perfect knowledge lends perfect control is centuries old and has been firmly refuted by the huge body of work in QM. 1 / 5 (3) Feb 07, 2013 LarryD...your comment about a "biocentric" view is well received. I tend to believe that anything, both matter and wave can be considered an observer...anything that can interact Consciousness is not required. The cat is either dead or alive, not both. A silly take on this would be, at what point in the classical sense of this mind experiment is the cat actually dead? Or is it alive? Is it when the first photon enters the observer's eye and an image of the cat forms? Is it when the observer examines the cat at rest to see if it is alive? Is it when the vet declares the poor beast deceased or in good health? I know it's silly, but my point is, existence is observation to some extent, perhaps entirely. The cat is observed by virtue of the fact that it exists. Obviously, such a claim has far reaching implications and really isn't done any justice in my flip response. 3.2 / 5 (13) Feb 07, 2013 Aren't statistics and probability flip sides of the same coin? Yes, as long as you keep in mind that "statistics" only deals with things that have happened,,,, useful assessing causality. Probability is the assessment of what might happen, once IT happens, it's no longer a probably event, it is an actual event with 100% probability (it happened.) It is an important distinction. 3.5 / 5 (8) Feb 07, 2013 when talking about computers generating random numbers. That's why no one talks about computer generating random numbers but only about computers generating pseudo-random numbers. They are numbers that are generated via an algorithm (and hence absolutely predictable since the next number depends on the previous one). The difference with truly random events in real life is that they aren't dependent on each other (and again there are tests that can be used to distinguish whether events are indpenedent or not) 1.5 / 5 (8) Feb 07, 2013 Yes. Independent events. That search continues... ...and language evolves to give meaning to that. 1.8 / 5 (8) Feb 07, 2013 Probability comes from the language used. Lite never comments. No probability there. lol Whydening Gyre 1.4 / 5 (11) Feb 07, 2013 Probability comes from the language used.{/q] SOrry, Tau. gotta disagree with ya there - probability was here first...:) Lite never comments. No probability there. lol And I'm waiting for the day he/she/it does not 1 every comment I make. I'll have to go out and by a lottery ticket if THAT ever happens. 3.2 / 5 (14) Feb 08, 2013 Probability comes from the language used. Lite never comments. No probability there. lol Lite is a bot. It has a blacklist of usernames, and continuously scours ALL articles in a predefined time interval. If it finds a comment by a blacklisted user, it automatically rates it with an assigned value (mostly 1). As the bot is running on a server/computer, which runs off the electricity grid, and is connected to the internet through a multitude of devices (etc.), a fair bit of probability to its function still applies.. The admins who (should) manage this site seemingly don't give a damn about the comments sections, which in turn often gets pestered by spam, etc. So the next best thing in case the admins don't give a f***, is to give a s***. Yea, I know, very on topic.. NOT! But who cares, eh? :-) Whydening Gyre 1.4 / 5 (11) Feb 08, 2013 That might be right except for a couple small quirks. I've noticed lite does not rate certain users or if it does, it rates them a 5. Interesting coincidence, that... If I didn't know better, I'd say it's my very competitive wife. However, she has to ask me how to use Word, so... 3 / 5 (12) Feb 08, 2013 Daywalk - I'll add the words "in a specific inertial reference frame" to what I said regarding the vector coodinates. Doesn't matter. There is no such thing as 'perfect vacuum', neither is it possible to perfectly isolate any volume from the 'outside world' - no matter what you do, it will allways remain part of the Even passing an event horizon can not be strictly considered a causal disconnect (as often believed). What falls inside a BH does not dissapear, it becomes a contributing part of a bigger whole, which still affects the 'outside', be it either by gravity, resulting magnetic field(s), etc.. The only way how to achieve such 'disconnect' (on paper) is to define a threshold bellow which everything is considered negligible, or essentially non-existent. Well hello, I present to thee the magnifficient theory of QM.. When it comes to motion (or 'change') in a fundamental context, it can neither be considered absolutely discrete, nor truely random. 1 / 5 (6) Feb 08, 2013 ...uncountable amount of information... is another label for what is called the Continuum. That introduces randomness. Yes, all computer simulations are wrong. Spending money is fun. You are taught that. 1 / 5 (5) Feb 08, 2013 DW- Well said. Especially your last line. It is essentially the point I was trying to make. If no sentient life populated the universe, everything would occur as "the universe's" laws of physics dictate they should...nothing can be truly random. 1 / 5 (1) Feb 08, 2013 @ValeriaT - Quantum computers would not be faster regular ones when calculating something like 3x5 or 2 4. However when calculating probabilities it clearly should be much much faster regular computers (when it would start working at all). And grants would be one of the ways to speed up that development. 1 / 5 (6) Feb 08, 2013 The answer is the theory is wrong or God DOES throw dice. 1.8 / 5 (10) Feb 08, 2013 No 'theory' is wrong. Just obsolete or superseded. See Heilocentrism vs. Geocentrism. Luckily humans weary of complexity. The less math (or assumption) involved, the greater chance(!) of appeal among humans. (Deists and/or Theists love this added advantage of simplicity until belief and/or faith becomes obsolete or Self referential-ism is frowned upon in science. You know this. 1.4 / 5 (9) Feb 09, 2013 I'm not even convinced that the single radioactive atom which is imperiling Schroedingers cat is in a superposition state. Its more similar to a point on the screen of the two slits experiment. That point is either illuminated or not by impact of a single photon. The only superposition in that experiment exists while the photon is in flight through both of the slits simultaneously. The scattering of dots on the screen is a classical probability distribution not a superimposed fuzzy pattern. A geiger counter never registers half a count, even when sealed in a box with a cat. A 'classical' autopsy still reveals when the cat died once the box is observed. 2.6 / 5 (5) Feb 09, 2013 The scattering of dots on the screen is a classical probability distribution No it is not. If it were you'd see as a result the combined results of the same experiment done twice, but with one slit occluded each time. But instead you see an interference pattern. 1.4 / 5 (9) Feb 10, 2013 antialias_physorg: You misunderstand me. I'm just saying that the photon dots are either there or not: Rather than the photons which travel through both slits and are superimposed in flight. The dots distribution follows a classical calculatable wave interference, but like decaying atoms you never know where the next one will be till it has happened. 1 / 5 (7) Feb 10, 2013 The existence of photons dots on the target screen is clear tangible evidence, that these photons do behave like the localized particles. The wave-interference-like distribution of photons indicates, these photons are surrounded with wave, which affects their path. Because photons itself do behave like the particles (and they're usually much smaller, than the slit itself), it's evident, that the wave behavior of photons doesn't come from photons itself, but from vacuum which surrounds them. It's a simple clean logics, which is inaccessible for people, who are using the complex math. It's simply another hidden layer of reality for them. But the physicists and high-school teachers don't want you to use such a clean transparent logics, because they would lose their informational monopoly and social influence, if not job. Most of people want to understand the things as cleanly as possible and they resort to complex formal description of reality only when they don't see any other option. 1.4 / 5 (10) Feb 10, 2013 I admit, for fooled sheep who are downvoting me obstinately is difficult to admit, they're fooled sheep - but this is simply, how the reality currently is. This is the power of new ideas, the power of new vision of reality. The problem with its understanding isn't in complexity of math involved - it is in your head and your prejudices only - as it always was. The situation well known from medieval times just repeats again. You didn't expect, you may appear in the position of hypocritical reactionary and ignorant opponents of Galielo or Pasteur? Well, it just happens now. The world is changing - but the thinking of average people remains. 1 / 5 (2) Feb 10, 2013 'Oh wise oracle my head is full of pain. Free me of this confusion and make the answer plain.' 'What troubles you, my son? Your question does not seem a simple one' 'Oh wise and wonderful Oracle I'm on nerve, what say you about the Normal D Curve.' 'Probability is the game, I roll the dice after careful aim' 'But Oracle, if on probability you depend surely means you know not the end!' 'I know all without a doubt, of QM and what it's about!' 'Oracle, why play if you can see what every outcome is to be?' 'How typical and not very bright, I toss the die to see if I'm right.' 'Oh come now Oracle with me you jest, surely numbers on a die have equal probability at best.' 'For a human that's pretty neat, insinuating that I'm a cheat' 'Wave or particle, where is it, Oracle?' 'Neither and both, not here yet everywhere you see, that makes 100% probability' Don't forget that Bohr said to Einstein ' Don't tell God what to do'. Hmmm. what kind of probability density function describes God? 2 / 5 (4) Feb 10, 2013 Not a theory, just a "what if" idea I have been contemplating... When we look out at all the galaxies, we are actually looking out past the boundry of our universe, like inside a marble looking out and space time bends in on itself, and so we are looking out at every possible Universe, every possible shape and fluctuation the big bang could have taken since the begining of time? Like waves of photons interacting, these island universes can also interact within the greater multiverse. This might explain a lot.. Thoughts? not rated yet Feb 12, 2013 Well, here's a clue. The "observer" is only a simplification for what is really happening. In reality, the waveform is colapsed by decoherence from interaction with the environment. So, if my pet yellow dog is trained to bark at a live cat, and he doesn't bark when the box is opened, he becomes the observer and by hearing his bark or lack of it, I transfer that information to the environment and colapse the waveform. If however, I am out to pick up the paper and not there to hear the bark or lack of it, the cat is still in an undetermined state, with equal probability of being both alive and dead. Yes, you can perform a necropsy (on an animal, not an autopsy) and determine if the cat died earlier. This is simular to quantum erasure. It doesn't change the outcome of the thought experiment. Of course, that trick is easily demonstrated with electrons and the right setup of slits and detectors. This stuff really isn't that hard. Really, it isn't. not rated yet Feb 12, 2013 Some may have missed my point. The outcome of the thought experiment doesn't depend on a human observer, or a dog or an insect. It only depends on the colapse of the probability wave by decoherence through interaction with the surrounding environment. Of course, if the outcome of the experiment has absolutely no interaction with the surrounding environment, none whatsoever, then the wave has not colapsed and the cat lives forever in an indeterminate state. More likely though, the cat would start to smell, the smell would attract flies, and so on until the interaction with the environment was great enough to colapse the waveform. Just accept it. Quantum mechanics is here to stay. As the Borg would say, resistance is futile. 1 / 5 (1) Feb 13, 2013 GPhillip, yes you are probably right QM is here to stay but then so are Newtons Laws, Classical music and Hot Dogs. Every generation thinks it has the final answer and history has proved that to be wrong (Hot Dogs excluded). Each generation provides another step up a very long ladder and QM is such. A couple of hundred years hence they will be claiming their version of QM is 'it' with maybe a new 'state' other than wave or particle (a 'wavicle'???). But that's progress;accept it and move on. The theories we have now answer some questions but a good theory not only makes predictions it asks the all important and vital next question. Forget the cat and try something that really needs an answer. For example,an experiement on whether a ghost is a collapsed wave from the person claiming to see one. You may or not believe in such entities but there many millions of people (maybe billions) both layman and scientists who do. Is QM up to the task? 3 / 5 (2) Feb 13, 2013 Rather than the photons which travel through both slits and are superimposed in flight. The dots distribution follows a classical calculatable wave interference, but like decaying atoms you never know where the next one will be till it has happened. If that were the case they would fly through one slit or the other - because you could as easily put the screen at the slit (in which case you DON'T get an interference pattern) But we DO see an interference pattern in the 2-slit experiment and hence they don't fly through either one slit or the other. So the 'dot' is not present until measured. The photon is superimposed with itself because you get an interference pattern even if you space the photons so far apart that there is on average only one in the apparatus at any one time. (You can also do it with single electrons and get the same result) not rated yet Feb 13, 2013 Thinking that Quantum Mechanics is just a temporary best solution and that something will come along later that makes more intuitive sense is just another way of trying to reject the science that has been proven to be absolutely accurate in 10's of thousands of experiments. Quantum Mechanics is no more a temporary best solution than the math of 2 and 2 = 4. It's correct and will always be correct and will not be proven incorrect at any point in the future. Of course we may find simpler expressions of the same concept as it is studied further. Einstein's original presentation of GR took grad level math abilities to understand, but after studying it for decades, simpler explanations of the same concept, without the complex math were possible. As far as trying to explain paranormal phenomena with quantum effects, I'm afraid that is beyond the capabilities of the science. The scientific method requires experiments to be performed that can disprove the validity of a hypothesis. not rated yet Feb 13, 2013 The very reason that some phenomena is classified as paranormal is that is cannot be subjected to falsification by experiment. So, some things like ghosts, UFO's, big foot, etc., are likely to always remain outside the capabilities of scientific investigation. Simply, science can not and does not provide an answer to everything. It has it's limitations. 1 / 5 (5) Feb 13, 2013 The very reason that some phenomena is classified as paranormal is that is cannot be subjected to falsification by experiment Unfortunately in contemporary physics many phenomena are ignored if not denied just because they depend on higher number of parameters, than the contemporary physicists can recognize. Typically the cold fusion and various psychic phenomena fall into this category: despite they're manifesting with quite tangible effects often, the general lack of reproducibility refuses the physicists who are motivated with safety and reliability of research, not with desire to reveal something new. Illustratively speaking, for modern physicists even the lightning is not real, until it cannot be reliably predicted and reproduced. 1 / 5 (5) Feb 13, 2013 "refuses the physicists" should be "repels the physicists" 1 / 5 (4) Feb 13, 2013 GPhillip, nonsense! In no way would or could I reject QM. Just because I can point to a certain type of 'pseudo failure' doesn't mean that I reject QM. I think your remark indicated that you didn't get my point but ValeriaT did! I really do think that we have reached the point, or will be soon, where we can tackle such questions as the 'paranormal'; and therein lies the problem. As long as we give that label to some event scientists won't touch it. Some such events have been shown to be electromagnetic but as far as I know no one has yet tried to reproduce alleged events not because it is outside our understanding but because of location. Setting up a fully equipped lab in someone's house is, I admit, hardly feasible but come on fellas, surely there's some bright individual(s) who can overcome this. It seems that I have more faith in our science. 5 / 5 (2) Feb 15, 2013 I probably didn't say this well at all. Richard Hamming, a founder of modern computer science, said it much better: Mathematics addresses only a part of human experience. Much of human experience does not fall under science or mathematics but under the philosophy of value, including ethics, aesthetics, and political philosophy. To assert that the world can be explained via mathematics amounts to an act of faith. 1 / 5 (3) Feb 16, 2013 GPhillip have you ever read the complete works of Plato? There are numerous remarks there that would also apply...but then those guys had a lot of time to sit and think etc. So I do agree with you that if we think maths will provide all the answers then we are misguided. But then again, the topic here is about probability and maths can only supply us with possibilities. The 'numbers game' with regard to criminals has many times failed (The Yorkshire Ripper being a famous one)and where human intuition, philosophy etc become the way forward. But having said that, maths and science are 'good inventions' so we ought to use them.
{"url":"http://phys.org/news/2013-02-probability-quantum-physics.html","timestamp":"2014-04-21T03:08:13Z","content_type":null,"content_length":"209783","record_id":"<urn:uuid:46672b4d-f44a-455d-a6d2-296b2ac2f6e2>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00496-ip-10-147-4-33.ec2.internal.warc.gz"}
Teaching to the Math Standards with Volume 3, Issue C ::: September 1999 Teaching to the Math Standards with Adult Learners by Esther D. Leonelli For the last 10 years, I have been an advocate for standards-based teaching of mathematics and numeracy to adult basic education (ABE), General Educational Development (GED), and adult English for Speakers of Other Languages (ESOL) students. It has been quite a journey, a learning experience, and the most fulfilling part of my adult education career since I returned to teaching adults in 1985. By "standards-based," I mean a set of values and important ideas used to judge methods of instruction and assessment. With respect to math instruction, I mean both content and methodology based upon the National Council of Teachers of Mathematics (NCTM) Curriculum and Evaluation Standards for School Mathematics (1989), which was adapted for ABE instruction by Massachusetts teachers. My Conversion I was trained as a secondary math education major in the late 1960s. When I first taught adults, from 1971 to 1973, I tried to incorporate the methods I learned in college. These methods were based upon the "new math movement" and Piaget's work with children, used manipulatives, and included a deductive, but very directed, approach. These did not translate well into my adult education work at that time. I found that my students, who were in an individualized math lab in a Boston program that prepared them for medical training and professions, did not want to be led through these "discovery" lessons. They wanted to be shown the "rule" so they could apply it to the problems in the book and to the math tasks needed for the particular job they were planning to pursue. I could do this very well. And so I taught math skills one on one, in a linear way, using pencil and paper and remedial arithmetic skills textbooks. I taught basic computation by rote, using decontexualized situations. Once the students mastered computation skill using only numbers, then I showed them how the skills were applied to word problems, which were chosen for the particular skill to be practiced and mastered. I continued to teach this way when I returned to the adult education classroom in 1985, teaching ABE and GED level students. I was reluctant to use the manipulatives the Cuisenaire rods, the base-10 blocks that were a part of my math education training. I gave up trying to have my students discover the math concepts they were trying to master, although I wished that my students could rely on their own reasoning powers to reconstruct the theory or rule if forgotten. I reverted to teaching math the way I was taught. My methods worked okay. I relied on textbooks with many practice exercises and the answers in the back of the back. I could get my students to pass the competency-based math tests my center used for the alternative adult diploma credential we granted, one test at a time. That made my students feel confident and good. But I felt something was lacking when they couldn't remember how to divide fractions once we moved on to another math topic. I was disconcerted when they had to return to my class to prepare for the Licensed Practical Nurse (LPN) test or the college entry test. Something wasn't sticking. The math was "learned" for the test and then forgotten. Students depended on me and on the textbook for answers and rules. And, my methods did not work well for the students who were non-native English speakers. Needless to say, I felt that the approach I was taking needed changing. By 1989, the GED test had changed to include more emphasis on problem-solving and higher-order thinking skills. It allowed more use of estimation skills and required that fewer complicated calculations be done without the use of calculator. A team of GED teachers in Massachusetts took a good look at the test and came to the conclusion that how we were teaching math should change. I joined that team. Around the same time, I attended a multisession workshop on teaching basic mathematics at the Adult Literacy Resource Institute in Boston. This was a mathematical re-awakening for me and an invitation to reconsider my own practice. The workshop introduced me to new national developments in the area of curriculum, methodology, evaluation, and teacher training in school mathematics that moved math teaching beyond the "back to basics" movement of the last two decades. These ideas, along with research and practice in classrooms and constructivist theory, were incorporated into NCTM Standards, the seminal document for the math-standards movement. I read the Standards, became a "believer," and in the process, also became a lifelong math learner. Through attendance at NCTM conferences I got to see first-hand the exciting changes in pedagogy and assessment advocated by proponents of the NCTM standards. I saw less direct instruction and more modeling of mathematical behavior by teachers. I saw less "drill and kill" practice and more interesting problems for investigation by students. I saw less individual seatwork and more cooperative lessons and conversation in the classroom around math, and fewer answers from the teacher and more sharing by and among students of individual strategies for solving problems. The workshops were intended to engage me in learning more math, which they did. And they showed me how the activities could be engaging for my learners. In one workshop I was introduced to international developments in math education, the "realistic maths" curriculum from the Freudenthal Institute, the Netherlands. Their approach asks students to make mathematical sense from graphical images of the real world. One of the "geometry" problems that I like to pose to my students came from that workshop (see below). It requires not only visualization, but also the physical handling of concrete materials and group discussion to come up with an optimal solution. Here are two views of a building: Front View Side View What is the least number of blocks you can use to build the building? What is the maximum number of blocks you can use to build the building? This activity let me view students at work alone and together, solving a concrete problem. When they build their structures I see what they saw. What I learned is that many of my students have never had the opportunity to build and play and visualize. I also realized that I was making a lot of assumptions when I "lectured" or "demonstrated." I had assumed that my students could "read" a picture and could learn to interpret word problems by my teaching of "key" words and formulas. Standards and Frameworks The NCTM Standards were based upon the assumption that, in the late twentieth century, American society has four new social goals for school education: (1) mathematically literate workers, (2) lifelong learning, (3) opportunity for all, and (4) an informed electorate. To meet these (1989) societal goals, the Standards state further, that: Educational goals for students must reflect the importance of mathematical literacy. Toward this end, the K-12 standards articulate five general goals for all students: (1) that they learn to value mathematics, (2) that they become confident in their ability to do mathematics, (3) that they become mathematical problem solvers, (4) that they learn to communicate mathematically, and (5) that they learn to reason mathematically. Points 2 through 3 appear in the first three "process standards" of the document: Math as communications, math as problem solving, and math as reasoning. The fourth process standard mathematical connections relates to the inter-relatedness of math topics and the connection of math to other disciplines. Several instructional themes permeate the NCTM Standards: Concrete and problem-centered approaches to teaching math concepts; emphasis on estimation and visualization in realistic contexts; and using cooperative learning techniques. The Massachusetts ABE Math team found that these practices coupled with the four process standards are completely in harmony with notions of good adult education practice and so they included these in Massachusetts ABE Math Standards. The "how" of teaching math is followed by the "what to teach." The NCTM Standards content strands are described for three groups of K-12 learners: K-5; middle grades 6-8; high school level 9-12. The ABE math team found that the content of much of ABE and GED mathematics fell within the middle-grade math range and so focussed their content standards on those standards. My own view is that today's GED test covers school mathematics content up to 8th grade. The Massachusetts Numeracy Framework roughly parallels these content stands with seven Numeracy strands. I find that a standards-based approach to teaching adult basic math fits well with good adult education practice. The approach is learner-centered, involves a solid theory of learning for understanding, and addresses the wide diversity of cultural background, learning styles, and abilities of the learners whom I teach. And, it addresses math content and skills that are relevant for the new millenium. What I take personally from the NCTM Standards is this: - Learning (and doing) mathematics empowers adult learners; - Math (and number sense) comes from real life; - Mathematics is more than arithmetic competency and a set of rules to be memorized; - Mathematics is investigation, communication, and a way of thinking about the world. In terms of content, how and what math I teach, the math must be meaningful and connected to adults but also must stretch them beyond where they are. It must be more than teaching computation. And, since algebra is a "gatekeeper" to entry into and success in further education, my commitment to civil rights and equal opportunity compels me to ensure that adult math instruction includes some algebra (Moses, 1997). In My Classroom The learning of math as well as the doing of math includes moving along a continuum from and among the concrete to the representational to the abstract. "Digits" were born to represent fingers and toes; to "calculate" originally meant to use stones to count. In actual practice, mathematicians, scientists, technicians, draftsmen, engineers people who use math everyday often use graphic and concrete models to do math work. So I try to incorporate the use of a "hands-on" curriculum. It starts, as in real life, with concrete models, incorporates graphics and representational activities, includes, as well, games, writing, and the use of mathematical language and symbolism, and finally, integrates technology. For example, I use a range of visual models to help learners conceptualize fractions, decimals, and percents. They construct number lines using folded paper, to demonstrate halves, quarters, eighths. Pie graphs of a day's activities are drawn with colored pencils or developed on computer from spread sheets. I try to teach decimals and percentages at the same time, so that the students can relate these two concepts. Thus, students use the folded paper which represent fractions to analyze a candy sale's bar graphs, which are calibrated in percentages. They describe the pie graphs in fractions as well as in percentages. Building on students' experiences with percents in everyday life, we construct the meaning of percents in more complex situations. But I try to do more with manipulatives than just use them to develop and demonstrate concepts. The blocks or tiles or other concrete things are often themselves part of the problem. I recently conducted a bean-bag race in the hallway of the learning center where I work. Two students walked along a track, dropping bean-bags every two seconds, while a third student kept time. The rest of the class, who hadn't viewed it directly, had to look at the bean-bag drops and tell which student walked faster and how they knew. One student spent a number of minutes animatedly explaining to me how the walks differed and how he had analyzed the situation. In explaining his reasoning, he pointed out the differences in the "proportional" distances between the two sets of bean-bags and how that translated into different speeds. As he talked, he got very excited with his own understanding and explanation, and exclaimed at the end of his analysis "and that's math!" From Real Life Students from other countries use different procedures than found in many adult education texts, particularly for several of the common computation operations such as subtraction and long division (Schmitt, 1991). Despite this, and although the operations did not make sense as taught to many American-born students the first time around, adult education texts teach only the US algorithm (a rule or recipe for a mathematical procedure or operation). Standards-based math teaching respects students' thinking, background knowledge, and development of their own algorithms for computation. I try to teach my students by listening to their explanations of their own thinking and ways of doing math. One of my GED students, Leo, was a "street smart" learner. He could apply his own experience in playing the numbers to solving the combinatorial problems I posed in class. (For example, "how many different outfits can you make with three shirts and four pairs of pants?") But he couldn't do a two-digit division problem the "long way," and he felt that would hamper his passing the GED. We spend about 15 minutes after class one day, talking through a long-division problem. In drawing out his thinking, I found he understood the concept of division as repeated subtraction and urged him to use that strategy. In the process, he came up with a method of division that made sense to him and which he could articulate and repeat successfully. Although he claims that he now used a method I "showed" him, it was his own algorithm that he was able to apply confidently in his work, not one that you could find in any GED book. With my more basic students, those still working on addition and subtraction, I use an investigation of the concepts of carrying and borrowing. Several useful card games, such as Close to 100 and Close to 0, build on the learner's sense and experience with numbers using 100 and 1000 as benchmarks (see box). "Closed to 100" • Using only single-digit cards, deal players hands of six cards. • Players choose, from their hands, four cards, forming two two-digit numbers that add up to a number as close to 100 as possible. • Players keep their own score. Points are the difference between the sum of the two two-digit numbers as 100. • Deal seven hands. The player with the lowest total score wins. (Russell, et al., 1998) The games gives learners a chance to use numbers in the context of a real-life social situation: a card game. Results can be discussed, strategies shared, and which simulates mental math activities that adults need for daily life, such as calculating change from a dollar, adding or subtracting percents, making purchases. Besides being fun, it is learning in a social context. One of my formerly homeless students graciously shared with me many of his strategies for estimation. He often practiced his multiplication skills by estimating the bricks in a building wall, then counting one-by-one or multiplying row by height to check his number reasoning. Today I give "Elliot's Walk" a true story as a problem to my students to assess their proportional reasoning and communication skills in using math. Here it is: Elliot took a walk from his apartment to Harvard Square one day and counted his paces as he walked. He figures that his pace is approximately 2 feet long. He counted as high as 3,000 paces and then stopped counting just as he got to the Square. Approximately how far did he walk? Investigation, Communication I try to teach for understanding, using a problem-posing, questioning approach that connects the areas in which learners have strengths. For example, instead of directly teaching my learners to do these problems: 1. 3x5 + 6 2. 102 + 10x5 3. 25/5 + 35/7 using the PEMDAS or "Please Excuse My Dear Aunt Sally," (parenthesis, exponents, multiplication, division, addition, subtraction ) rule for order of operations, I ask them to generate their own expressions in my Number of the Day activity. This also gives me a daily assessment of the depth and breadth of my students' grasp of computation, order of operations, and use of symbolic notation. I write "Number of the Day" on the board, and a box with a number next to it. I ask the students to write as many numerical expressions as they can that equal the number of the day. Student responses the day the number was 350 included: 300 + 50; 200 + 100 + 25 + 25; 70 x 5; 50 x 6 + 50; 182 + 26; 150x2 + 50. My adult learners have access in the classroom to appropriate technology: calculators and computers. Rather than being a crutch, calculators are an invaluable aid in teaching basic mathematics because of their speed, accuracy, decimal display, and memory. Many of my learners already rely on calculators in every day situations as workers and consumers. I use them as instructional tools to assist in the development of concepts, to help reinforce skills, to promote higher level thinking, and to enhance problem-solving instruction. By freeing them from the routine, long, or complex calculations, more time can be spent on conjecturing and reasoning. In Conclusion It is challenging to change one's practice to the values and practices of the NCTM and ABE Math Standards, particularly when the ideal is not readily shared by other teachers and is not in the experience of our learners. I find many learners look to me for answers when I'm trying to have them develop that capacity themselves. One of my students complained to his counselor that "at the end of the day, we're tired from working, and she expects us to think." It is easy to fall back into old methods of direct instruction and worksheets and workbook pages. Also, teaching using the math standards means covering less material while taking time for discovery. That's hard when students need their GEDs by June. So sometimes this way of teaching brings its own discomforts, to me and to my students. The only evidence I have for the success of my approach that "less is more" is that I rarely cover all the material in the GED textbooks in the 14 to 15 weeks of my course. Yet most of my students seem to have the confidence to take and pass the test. My experience with teaching a range of learners, from literacy students to GED, has convinced me that teaching for learning in the vision of the math standards is good adult education practice. Curry, D., Schmitt, M.J., & Waldron, S. (1996). A Framework for Adult Numeracy Standards: The Mathematical Skills and Abilities Adults Need to be Equipped for the Future. Boston, MA: World Education. Leonelli, E.& Schwendeman, R. (eds.) (1994). The ABE Math Standards Project, Vol 1: The Massachusetts Adult Basic Education Math Standards. Holyoke, MA: Holyoke Community College/SABES Regional Massachusetts Department of Education (1996) Massachusetts Curriculum Frameworks for K-12 and Adults. Malden, MA: MA DOE. Moses, R. (1997) "Mathematics, Literacy, Citizenship, and Freedom," presentation at NCTM 75th Annual Meeting, April 19, Minneapolis, MN. National Council of Teachers of Mathematics (1989). Curriculum and Evaluation Standards for School Mathematics. Reston VA: NCTM. Russell, S.,J., Tierney, C., Mokros, J., Economopoulos, K., et al. (1998). Investigations in Number, Data, and Space (TM). White Plains, NY: Cuisenaire/Dale Seymour Publications. Schmitt, M.J., (1991). The Answer is Still the Same It Doesn't Matter How You Got It! A Booklet for Math Teachers and Math Students Who Come from Multicultural Backgrounds. Boston, MA: Author. About the Author Esther D. Leonelli is technology coordinator and a math instructor at the Community Learning Center, Cambridge, MA. She is a co-founder and past president of the Adult Numeracy Network, an affiliate of the National Council of Teachers of Mathematics. Esther moderates the numeracy list, an electronic discussion list for adult education practitioners (numeracy @worldstd.com). During 1998-99 she was an NIFL Leadership Fellow. Updated 7/27/07 :: Copyright © 2005 NCSALL
{"url":"http://www.ncsall.net/index.html@id=348.html","timestamp":"2014-04-16T07:13:12Z","content_type":null,"content_length":"37384","record_id":"<urn:uuid:4ee6b798-6dad-4c4a-8dd3-202695f0bf04>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00456-ip-10-147-4-33.ec2.internal.warc.gz"}
Hotel Occupancy Hello! I am trying to build a hotel occupancy program Write a program that calculates the occupancy rate for each floor of a hotel. The program should start by asking for the number of floors that the hotel has. A loop should then iterate once for each floor. During each iteration, the loop should ask the user for the number of rooms on the floor and the number of them that are occupied. After all the iterations, the program should display the number of rooms the hotel has, the number of them that are occupied, the number that are vacant, and the occupancy rate for the hotel. Input Validation: Do not accept a number less than one for the number of floors. Do not accept a number less than 10 for the number of rooms on a floor. The equation is: Occupancy rate = number of rooms occupied / total number of rooms. As far as the code is concerned, I think it is good except I don't know why my occupied rooms is not display /calculating incorrectly. Here's my code: import javax.swing.JOptionPane; //needed for GUI import java.text.DecimalFormat; //needed to format the Output public class DTHotelOccupancy {//Begin class public static void main(String[] args) {//Begin main method String input; //To hold the user's input final int MIN_FLOORS = 1; //Minimum amount of floors final int MIN_ROOMS = 10; //Minimum amount of rooms per floor int floors, //Number of floors in hotel rooms = 0, //Number of available rooms in each floor occrooms = 0, //Number of rooms occupied totalrooms = 0, //Number of total rooms totalroomsocc = 0, //Total Number of rooms occupied vacant = 0; //Number of vacant rooms double occrate = 0; //Occupancy rate DecimalFormat formatter = new DecimalFormat("#,##0.0"); //format the scores //Get the number of floors in the hotel input =JOptionPane.showInputDialog("How many Floors are in the Hotel?\n"+ "\tThe number of floors must be at least " + MIN_FLOORS); //Convert floors into integer floors = Integer.parseInt(input); //Validate the number of floors entered. while (floors < MIN_FLOORS) input =JOptionPane.showInputDialog("You have Entered a value less than the 1"+ "How many Floors are in the Hotel? "+ "The number of floors must be at least " + MIN_FLOORS); //Convert floors into integer floors = Integer.parseInt(input); //For Loop for (int i = 1; i <= floors;i++) input = JOptionPane.showInputDialog("\tHow many rooms are on floor " + i + "? "); //Convert rooms into integer rooms = Integer.parseInt(input); //Validate the number of rooms per floor while (rooms < 10) input = JOptionPane.showInputDialog("You have entered the number of rooms less than 10 per floor!"+ "\t\nHow many rooms are on floor " + i + "? "); //Convert rooms into integer rooms = Integer.parseInt(input); input = JOptionPane.showInputDialog("How many rooms on floor " + i + " are occupied? "); //Convert occupied rooms into integer occrooms = Integer.parseInt(input); while (occrooms > rooms) input = JOptionPane.showInputDialog("You have entered the number of occupied rooms greater than the nummber of rooms per floor!"+ "\t\nHow many rooms are on floor " + i + "? "); //Convert occupiedrooms into integer occrooms = Integer.parseInt(input); totalrooms = (floors * rooms); //Calculate the Number of rooms totalroomsocc = (totalrooms - occrooms); //Calculate the total number of rooms occupied vacant = (totalrooms - totalroomsocc); //Calculate the Number of rooms Vacant occrate = ((double)occrooms / totalrooms); //Calculate OccupancyRate //Display the results JOptionPane.showMessageDialog(null, "\tTotal Rooms:" + totalrooms + "\t\nRooms occupied: " +totalroomsocc+ "\t\nRooms vacant: " +vacant + "\t\nOccupancy Rate: " + formatter.format(occrate), //End the program. }//End main method }//End class Please help me out with the calculating for the total rooms occupied Thank you
{"url":"http://www.dreamincode.net/forums/topic/268890-hotel-occupancy-rate-calculation/","timestamp":"2014-04-21T00:19:49Z","content_type":null,"content_length":"83296","record_id":"<urn:uuid:fca5f2b2-1bd1-4ee9-bded-65cc0fc1dab4>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00010-ip-10-147-4-33.ec2.internal.warc.gz"}
Staten Island Algebra 2 Tutor ...I have a comprehensive knowledge of tonal music theory and thrive during discussions of the topic, allowing me to easily boost my student's comprehension of the material. Throughout my student career, Mathematics has always been my top grade, achieving near perfect scores term after term. In my... 33 Subjects: including algebra 2, physics, calculus, GRE ...I work with clients ranging in all socio-economic and cultural backgrounds. I am very culturally sensitive, and work with vastly diverse populations throughout the week. My own personal struggle with ADHD has fueled my interest in working with students/persons with learning disabilities. 20 Subjects: including algebra 2, English, reading, writing ...A lot depends on how a student is emotionally prepared to handle the stress of taking a test. In Physics and Chemistry (to some extent in Astronomy) I use conceptual approach, analogies, and all kinds of silly situations, which help reveal student's misconceptions and help understand the correct... 18 Subjects: including algebra 2, chemistry, calculus, physics ...Work can be produced upon request. Through the years, I have had numerous public speaking classes. All of which I aced and used to wow peers and teachers in future classes. 13 Subjects: including algebra 2, reading, chemistry, geometry ...I look forward to meeting and helping you reach new heights in mathematics.I grew up and attended school in Brooklyn and therefore had to take all the NYS math regents. I finished high school math in 10th grade since I was part of the accelerated track in math and science in elementary school th... 10 Subjects: including algebra 2, calculus, geometry, algebra 1
{"url":"http://www.purplemath.com/staten_island_ny_algebra_2_tutors.php","timestamp":"2014-04-19T17:15:41Z","content_type":null,"content_length":"24140","record_id":"<urn:uuid:371bfbbc-ab96-4aca-8622-898bf5851b5a>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00183-ip-10-147-4-33.ec2.internal.warc.gz"}
% tutorial.tex - a short descriptive example of a LaTeX document % For additional information see Tim Love's ``Text Processing using LaTeX'' % http://www-h.eng.cam.ac.uk/help/tpl/textprocessing/ % You may also post questions to the newsgroup comp.text.tex % draft, flee, opening, 12pt % insert PostScript figures % controllabel line spacing %\usepackage[nofiglist,notablist]{endfloat} %\ usepackage{endfloat} %\usepackage{natbib} %\usepackage{epstopdf} %\renewcommand{\efloatseparator}{\mbox{}} % the following produces 1 inch margins all around with no header or footer % beyond 25.mm % beyond 25.mm % beyond 25.mm %\bibpunct{(}{)}{;}{a}{,}{,} % SOME USEFUL OPTIONS: % indent paragraph by this much % space between paragraphs % \mathindent 20.mm % indent math equations by this much % post-script figures here or in /. % Helps LaTeX put figures where YOU want % 90% of page top can be a float % 90% of page bottom can be a float % only 10% of page must to be text % make title footnotes alpha-numeric %An Empirical Model of Spatial Competition with an Application to %Cement % --------------------- end of the preamble --------------------------- \documentclass[12pt]{article} \usepackage{chicago} \usepackage{graphicx} \usepackage{setspace} \usepackage{amsmath,amsthm,amssymb,amstext} \usepackage{rotating} \usepackage{soul} \usepackage{epsfig} \usepackage{lscape} \ setcounter{MaxMatrixCols}{10} %TCIDATA{OutputFilter=LATEX.DLL} %TCIDATA{Version=5.50.0.2953} %TCIDATA{} %TCIDATA{BibliographyScheme=BibTeX} %TCIDATA{LastRevised=Thursday, April 08, 2010 13:15:05} %TCIDATA{} \topmargin =15.mm \oddsidemargin =0.mm \evensidemargin =0.mm \headheight =0.mm \headsep =0.mm \textheight =220.mm \textwidth =165.mm \parindent 10.mm \parskip 0.mm \newcommand{\MyTabs}{ \ hspace*{25.mm} \= \hspace*{25.mm} \= \hspace*{25.mm} \= \hspace*{25.mm} \= \hspace*{25.mm} \= \hspace*{25.mm} \kill } \graphicspath{{../Figures/}{../data/:}} \renewcommand{\topfraction}{0.9} \ renewcommand{\bottomfraction}{0.9} \renewcommand{\textfraction}{0.1} \alph{footnote} \input{tcilatex} \begin{document} \begin{abstract} % beginning of the abstract The theoretical literature of industrial organization shows that the distances between consumers and firms have first-order implications for competitive outcomes whenever transportation costs are large. To assess these effects empirically, we develop a structural model of competition among spatially differentiated firms and introduce a GMM estimator that recovers the structural parameters with only regional-level data. We apply the model and estimator to the portland cement industry. The estimation fits, both in-sample and out-of-sample, demonstrate that the framework explains well the salient features of competition. We estimate transportation costs to be \$0.30 per tonne-mile, given diesel prices at the 2000 level, and show that these costs constrain shipping distances and provide firms with localized market power. To demonstrate policy-relevance, we conduct counter-factual simulations that quantify competitive harm from a hypothetical merger. We are able to map the distribution of harm over geographic space and identify the divestiture that best mitigates harm. \end{abstract} % REQUIRED \thispagestyle{empty} \begin{center} ECONOMIC ANALYSIS GROUP DISCUSSION PAPER \vspace{2.8cm} Competition among Spatially Differentiated Firms: An Empirical Model with an Application to Cement \vspace{0.25cm} By \vspace{0.25cm} Nathan H. Miller* and Matthew Osborne** \\[0pt] EAG 10-2 $\quad$ March 2010 \end {center} \vspace{0.45cm} \noindent EAG Discussion Papers are the primary vehicle used to disseminate research from economists in the Economic Analysis Group (EAG) of the Antitrust Division. These papers are intended to inform interested individuals and institutions of EAG's research program and to stimulate comment and criticism on economic issues related to antitrust policy and regulation. The Antitrust Division encourages independent research by its economists. The views expressed herein are entirely those of the authors and are not purported to reflect those of the United States Department of Justice or the Bureau of Economic Analysis. \vspace{0.25cm} \noindent Information on the EAG research program and discussion paper series may be obtained from Russell Pittman, Director of Economic Research, Economic Analysis Group, Antitrust Division, U.S. Department of Justice, LSB 9446, Washington DC 20530, or by e-mail at russell.pittman@usdoj.gov. Comments on specific papers may be addressed directly to the authors at the same mailing address or at their email addresses. \vspace{0.25cm} \noindent Recent EAG Discussion Paper and EAG Competition Advocacy Paper titles are listed at the end of this paper. To obtain a complete list of titles or to request single copies of individual papers, please write to Janet Ficco at the above mailing address or at janet.ficco@usdoj.gov or call (202) 307-3779. In addition, recent papers are now available on the Department of Justice website at http://www.usdoj.gov/atr/public/eag/discussion\_papers.htm. Beginning with papers issued in 1999, copies of individual papers are also available from the Social Science Research Network at www.ssrn.com. \vspace{0.25cm} \noindent\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\ _\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_% \_\_\_\_\_\_\_\_\_ \noindent * \ Economic Analysis Group, Antitrust Division, U.S. Department of Justice. Email: nathan.miller@usdoj.gov. \noindent ** \ Bureau of Economic Analysis. Email: matthew.osborne@bea.gov. We thank Robert Johnson, Ashley Langer, Russell Pittman, Chuck Romeo, Jim Schmitz, Charles Taragin, Raphael Thomadsen, and seminar participants at the Bureau of Economic Analysis, Georgetown University, and the U.S. Department of Justice for valuable comments. Sarolta Lee, Parker Sheppard, and Vera Zavin provided research assistance. \newpage \ thispagestyle{empty} % end of the abstract \newpage \thispagestyle{empty} \setcounter{page}{1} \onehalfspacing \section{Introduction} Geography is understudied in the empirical literature of industrial organization. Although the theoretical literature has established that the physical distances between firms and consumers have first-order implications for competitive outcomes whenever transportation costs are large (e.g., % \citeN{hotelling29}, \citeN{agt79}, \citeN{salop79}, \citeN{thissevives88}, % \citeN{econ89}, \citeN{vogel08}), the complexities associated with modeling spatial differentiation have made it difficult to translate theoretical insights into workable empirical models.\footnote{% Surely it would be too strong to claim that, in empirical industrial organization, space is the final frontier.} Standard empirical methodologies simply sidestep spatial differentiation through the delineation of distinct geographic markets. This simplifies estimation but requires the dual assumptions that (1) transportation costs are sufficiently large to preclude substantive competition across market boundaries, and (2) transportation costs are sufficiently small that spatial differentiation is negligible within markets. It can be difficult to meet both conditions.\footnote{\citeN{syverson04} discusses how this tension can compel researchers to seek compromise between markets that are ``too small'' and markets that are ``too large''. It is sometimes argued that markets that are too large may overstate the intensity of competition while markets that are too small may understate competition.} In practice, markets are often based on political borders of questionable economic significance (e.g., state or county lines). Nonetheless, market delineation is employed routinely in studies of industries characterized by high transportation costs, including ready-mix concrete (e.g., \citeN{syverson04}, % \citeN{syversonhortacsu07}, \citeN {cw09}), portland cement (e.g., % \citeN{salvo08}, \citeN{ryan06}), and paper (e.g., \citeN{pes03}).\footnote{% These valuable contributions focus on wide range of topics, including the competitive impacts of horizontal and vertical mergers, heterogeneity in plant productivity and its implications for competition, the inference of market power, and dynamic investment decisions.} Our purpose is to introduce an alternative empirical framework. To that end, we develop and estimate a structural model of competition among spatially differentiated firms that accounts for transportation costs in a realistic and tractable manner. We focus on production and consumption within a two dimensional Euclidean space, which we refer to informally as a geographic space. Competition involves a discrete number of plants, each endowed with a physical location, and a continuum of consumers that spans the space. Each plant sets a distinct price to each consumer, taking into consideration its proximity to the consumer and the proximities of its competitors. Thus, plants discriminate between elastic and inelastic consumers based on the pre-determined plant and consumer locations, and the model resembles the theoretical work of \citeN{thissevives88}. Competitive outcomes depend on the magnitude of transportation costs and the degree of spatial differentiation within the geographic space. We discretize the geographic space into small ``consumer areas'' to operationalize the model. Since each plant may ship to each consumer area, the model diverges starkly from more conventional approaches in which plants do not compete across market boundaries. We employ standard differentiated-product methods to model competition within consumer areas. On the supply side, domestic plants compete in prices given capacity constraints and the existence of a competitive fringe of foreign importers. On the demand side, consumers select the plant that maximizes utility, taking into consideration their proximity to the plants, the plant prices, and a nested logit error term. To be clear, the plants are differentiated primarily by price and location -- we assume that the product itself is homogeneous. We derive an equation that characterizes the equilibrium price and market share for each plant-area pair as a function of data and parameters.\footnote{% We refer to the fraction of potential demand in a consumer area that is captured by a given plant as the ``market share'' of the plant. We select the term purely for expositional convenience. We do not argue that consumer areas reflect antitrust markets in any sense; indeed, a defining characteristic of the model is that it avoids market delineation entirely.} The central challenge for estimation is that prices and market shares are unobserved in the data, at least at the plant-area level. We develop a generalized method of moments (GMM) estimator that exploits variation in data that are more often observed: regional level consumption, production, and prices. The key insight is that each candidate parameter vector corresponds to an equilibrium set of plant-area prices and market shares. We compute numerical equilibrium for each candidate parameter vector using a large-scale nonlinear equation solver developed in \citeN{dfsane06}. We then aggregate the predictions of the model to the regional level and evaluate the distance between the data and the predictions. The estimator can be interpreted as having inner and outer loops: the outer loop minimizes an objective function over the parameter space while the inner loop computes numerical equilibrium for each candidate parameter vector. We show that the estimator consistently recovers the structural parameters of the data generating process in an artificial data experiment. %\footnote{We conduct an artificial data %experiment to evaluate the accuracy of the estimator. %The results suggest that the estimator consistently recovers the structural parameters %of the data generating process.} We apply the model and the estimator to the portland cement industry in the U.S. Southwest over the period 1983-2003. The choice of industry conveys at least three substantive advantages to the analysis. First, transportation costs contribute substantially to overall consumer costs because portland cement is inexpensive relative to its weight. Second, it may be reasonable to treat portland cement as a homogenous product because strict industry standards govern the production process.\footnote{% Many plants produce a number of different types of portland cement, each with slightly different specifications and characteristics (e.g., superior early strength or higher sulfate resistance). These products are close substitutes for most construction projects.} This conformity matches the simplicity of the demand system, in which spatial considerations (e.g., plant and consumer locations) are the main source of plant heterogeneity. Third, high quality data on the industry are publicly available. We obtain information on regional consumption, production, and average prices, as well as limited information on cross-region shipments, from annual publications of the United States Geological Survey. We pair these regional-level metrics with information on the location and characteristics of portland cement plants from publications of the Portland Cement Association. We exploit variation in these data to estimate the model. The results of estimation suggest that consumers pay roughly \$0.30 per tonne-mile, given diesel prices at the 2000 level.\footnote{% Strictly speaking, the model identifies consumer willingness-to-pay for proximity to the plant, which incorporates transportation costs as well as any other distance-related costs (e.g., reduced reliability). We refer to this willingness-to-pay as the transportation cost, although the concepts may not be precisely equivalent.} Given the shipping distances that arise in numerical equilibrium, this translates into an average transportation cost of \$24.61 per metric tonne over the sample period -- sufficient to account for 22 percent of total consumer expenditure. Costs of this magnitude have real effects on competition in the industry. We focus on two such effects: First, transportation costs constrain the distance that cement can be shipped economically. The results indicate that cement is shipped only 92 miles on average between the plant and the consumer; by contrast, a counter-factual simulation suggests that the average shipment would be 276 miles absent transportation costs. Second, transportation costs insulate firms from competition and provide localized market power. For instance, the prices that characterize numerical equilibrium decrease systematically in the distance between the plant and the consumer, as do the corresponding market shares.\footnote{% These patterns are precisely what economic theory would predict given that consumers pay the costs of transportation in the portland cement industry.} The estimation procedure produces impressive in-sample and out-of-sample fits despite parsimonious demand and marginal cost specifications. The model predictions explain 93 percent of the variation in regional consumption, 94 percent of the variation in regional production, and 82 percent of the variation in regional prices. The model predictions also explain 98 percent of the variation in cross-region shipments, even though we withhold the bulk of these data from estimation. As we detail in an appendix, the quality of these fits is underscored by the rich time-series variation in these data. Together, the regression fits suggest that a small quantity of exogenous data, properly utilized, may be sufficient to explain some of the most salient features of competition in the portland cement industry. We interpret this as substantial support for the power of the analytical framework. We suspect that our method may prove useful for future research and policy endeavors relating to international trade, environmental economics, and industrial organization. One such application is merger simulation, an important tool for competition policy. We use counter-factual simulations to evaluate the effects of a hypothetical merger between two multi-plant cement firms in 1986. We find that the merger reduces consumer surplus by \$1.4 million if no divestitures are made, and we are able to map the distribution of this harm over the U.S. Southwest. The overall magnitude of the effect is modest relative to the amount of commerce -- by way of comparison, we calculate total pre-merger consumer surplus to be more than \$239 million. We then consider the six possible single-plant divestitures, and find that the most powerful reduces consumer harm by 56 percent.\footnote{% We refer to the divesture plan that offsets the greatest amount of consumer harm among the set of single-plant divestitures as the most powerful. We do not attempt to characterize, in any way, the appropriate course of action for an antitrust authority.} At least two caveats are important. First, the estimation procedure rests on the uniqueness of equilibrium at each candidate parameter vector, but there is no theoretical reason to expect this condition to hold generally. To assess the issue, we conduct a Monte Carlo experiment that computes numerical equilibrium using several different starting points for each of 6,300 randomly drawn candidate parameter vectors. The results are strongly supportive of uniqueness, at least in our application (see the appendices for details). Second, the promise of spatial differentiation creates incentives for firms to locate optimally in order to secure a base of profitable customers, provide separation from an efficient competitor, and/or deter nearby entry. We abstract from these considerations entirely and instead assume that firm location is pre-determined and exogenous. Nonetheless, our framework could help define stage-game payoffs in more dynamic models that endogenize firm location choices (e.g., as in % \citeN{seim06} and \citeN{aguirregabiria06}). Our work builds on recent contributions to the industrial organization literature that model competition among spatially differentiated retail firms (e.g., \citeN{thomadsen051}, \citeN{davis06}, \citeN{mcmanus09}). These papers employ a framework in which each firm sets a single price to all consumers, consistent with practice in most retail settings, and recover the structural parameters with firm-level data and more standard estimation techniques (e.g., \citeN{blp95}).\footnote{% Thomadsen (2005) may be the closest antecedent to our work. Thomadsen shows that a supply-side equilibrium condition can be substituted for firm-level market shares in estimation. We develop the potency of equilibrium conditions more fully: given an equilibrium condition and aggregate data, estimation is feasible with neither firm-level prices nor firm-level market shares. \citeN {aguirregabiria06} develop a sophisticated model of spatial differentiation but do not take the model to data.} We make two distinct contributions to this literature. First, our model extends the existing framework to incorporate firms that price discriminate among consumers. In such a setting, firm-level data are no longer sufficient to support standard estimation techniques. Thus, our second contribution is the demonstration that estimation is feasible with relatively aggregated data. Of course, the regional-level data we use in our application would also support estimation in simpler retail settings. More generally, our results indicate that firm-level heterogeneity can affect competitive outcomes even when the product is relatively homogenous. Refinements to the standard toolbox available for structural research in these settings have been outstripped in empirical industrial organization by models of observed and (especially) unobserved product heterogeneity. We suspect that models of competition for homogenous-product industries are currently of heightened value, not only because of the imbalance in the literature, but also because these industries provide fertile testing grounds for methodological innovations regarding industry dynamics (e.g., as in \citeN{ryan06}). We hope that the framework we introduce helps, incrementally, to redirect the attention of researchers to these settings. The paper proceeds as follows. In Section \ref{sec:ind}, we sketch the relevant institutional details of the portland cement industry, focusing on transportation costs, production technology, and trends in production and consumption. We develop the empirical model of Bertrand-Nash competition in Section \ref{sec:model} and discuss our data sources in Section \ref% {sec:data}. Then, in Section \ref{sec:est}, we develop the estimator and provide identification arguments. We present the estimation results in Section \ref{sec:res}, discuss the merger simulations in Section \ref% {sec:sim}, and then conclude. \section{The Portland Cement Industry \label{sec:ind}} Portland cement is a finely ground dust that forms concrete when mixed with water and coarse aggregates such as sand and stone. Concrete is an essential input to many construction and transportation projects, either as pourable fill material or as pre-formed concrete blocks, because its local availability and lower maintenance costs make it more economical than substitutes such as steel, asphalt, and lumber (e.g., \citeN{vanoss02}).% \footnote{% We draw heavily from the publicly available documents and publications of the United States Geological Survey and the Portland Cement Association to support the analysis in this section. We defer detailed discussion of these sources for expositional convenience.} Most portland cement is shipped by truck to ready-mix concrete plants or construction sites, in accordance with contracts negotiated between individual purchasers and plants.\footnote{% A smaller portion is shipped by train or barge to terminals, and only then distributed to consumers by truck. Shipment via terminals reduces transportation costs for more distant consumers. Roughly 23 percent of portland cement produced in the United States was shipped through terminals in 2003.} Transportation costs contribute substantially to overall consumer expenditures because portland cement is inexpensive relative to its weight, a fact that is well understood in the academic literature. For example, Scherer et al (1975) estimates that transportation would have accounted for roughly one-third of total consumer expenditures on a hypothetical 350-mile route between Chicago and Cleveland, and a 1977 Census Bureau study determines that most portland cement is consumed locally -- for example, more than 80 percent is transported within 200 miles.\footnote{% Scherer et al (1975) examined more than 100 commodities and determined that the transportation costs of portland cement were second only to those of industrial gases. Other commodities identified as having particularly high transportation costs include concrete, petroleum refining, alkalies/chlorine, and gypsum.} More recently, \citeN{salvo08b} presents evidence consistent with the importance of transportation costs in the Brazilian portland cement industry. A recent report prepared for the Environmental Protection Agency identifies five main variable input costs of production: raw materials, coal, electricity, labor, and kiln maintenance (\citeN{epa09}). In the production process itself, a feed mix composed of limestone and supplementary materials is fed into large rotary kilns that reach temperatures of 1400-1450$^\circ$ Celsius. The combustion of coal is the most efficient way to generate this extreme heat. Kilns generally operate at peak capacity with the exception of an annual shutdown period for maintenance. It is possible to adjust output by extending or shortening the maintenance period -- for example, plants may simply forego maintenance at the risk of kiln damage and/or breakdowns. The feed mix exits the kiln as semi-fused clinker modules. Once cooled, the clinker is mixed with a small amount of gypsum, placed into a grinding mill, and ground into tiny particles averaging ten micrometers in diameter. This product -- portland cement -- is shipped to purchasers either in bulk or packaged in smaller bags. We focus on the production and consumption of portland cement in the U.S. Southwest -- by which we mean California, Arizona and Nevada -- over the period 1983-2003. This region accounts for roughly 15 percent of domestic portland cement production and consumption during the sample period. Figure % \ref{fig:map1} provides a map of the region based on the plant locations in the final year of the sample. As shown, most plants are located along an interstate highway and nearby one or more population centers. Although some firms own more than one plant, production capacity in the area is not particularly concentrated -- the capacity-based Herfindahl-Hirschman Index (HHI) of 1260 is well below the threshold level that defines highly concentrated markets in the 1992 Merger Guidelines. The plants also face competition from foreign importers that ship portland cement through the customs offices of San Francisco, Los Angeles, San Diego, and Nogales. Still, transportation costs may insulate some plants from both foreign and domestic competition.\footnote{% We observe little entry and exit over the sample period. The sole entrant (Royal Cement) began operations in 1994 and the only two exits occurred in 1988. This stability is consistent with substantial sunk costs of plant construction, as documented in \citeN{ryan06}. Plant ownership is somewhat more fluid; we observe fourteen changes in plant ownership, spread among nine of the sixteen plants.} \begin{figure}[t] \ centering \includegraphics[height=4in]{map-col-hwy-2.eps} \caption{{\protect\footnotesize {Portland Cement Production Capacity in the U.S. Southwest circa 2003. }}} \label{fig:map1} \end{figure} % The Figure is a map of California, Arizona, and Nevada. Cement plants are marked with circles, with % higher capacity plants getting bigger circles. The four import points -- San Francisco, Los Angeles, % San Diego, and Nogales -- are marked with black squares. See text for details. In Figure \ref{fig:imports}, we plot total consumption and production in the U.S. Southwest over the sample period, together with two measures of foreign imports. Several patterns are apparent. Both consumption and production increase over 1983-2003, and both metrics are highly cyclical. However, consumption is more cyclical than production, so that the gap between consumption and production increases in overall activity; foreign importers provide additional supply whenever domestic demand outstrips domestic capacity. Strikingly, observed foreign imports are nearly identical to ``apparent imports,'' which we define as consumption minus production, consistent with negligible net trade between the U.S. Southwest and other domestic areas. Finally, we note that the average free-on-board price charged by domestic plants in the region (not shown) falls over the sample period from \$107 per metric tonne to \$74 per metric tonne, primarily due to lower coal and electricity costs.\footnote{% Prices are in real 2000 dollars.} \begin{figure}[t] \centering \includegraphics[height=3in] {imports.eps} \caption{{\protect\footnotesize {Consumption, Production, and Imports of Portland Cement. Apparent imports are defined as consumption minus production. Observed imports are total foreign imports shipped into San Francisco, Los Angeles, San Diego, and Nogales.}}} \label{fig:imports} \end{figure} %The figure plots consumption, production, and imports over the sample period. It also plots the apparent imports, %defined as consumption minus production. The text describes the relevant data patterns in detail. \section{Empirical Model \label{sec:model}} \subsection{Overview} We develop a structural model of competition that accounts for transportation costs in a realistic and tractable manner. We focus on production and consumption within a single geographic space. Competition involves a discrete number of plants, each endowed with a physical location, and a continuum of consumers that spans the space. Each plant sets a distinct price to each consumer, taking into consideration its proximity to the consumer and the proximities of its competitors. To operationalize the model, we discretize the geographic space into small consumer areas. We employ standard techniques to model competition within consumer areas: plants charge area-specific prices subject to exogenous capacity constraints, and consumer demand is nested logit. We derive an equation that characterizes the equilibrium price and market share for each plant-area pair. \subsection{Supply} We take as given that there are $F$ firms, each of which operates some subset $\Im_f$ of the $J$ plants. Each plant is endowed with a set of attributes, including a physical location. Consumers exist in $N$ different areas that span the geographic space. Every plant can ship into every area. Firms set prices at the plant-area level to maximize variable profits: $$\pi_{f} = \underbrace{\sum_{j \in \Im_{f}} \sum_{n} P_{jn}Q_{jn}(\boldsymbol{% \theta_d})}_{\text{variable revenues}} - \ underbrace{\sum_{j \in \Im_{f}} \int_0^{Q_j (\boldsymbol{\theta_d})}MC_j(Q; \boldsymbol{\theta_c}) dQ}_{% \text{variable costs}}, \label{eq:vprofs}$$ where $Q_{jn}$ and $P_{jn}$ are the quantity and price, respectively, of plant $j$ in area $n$, $MC_j$ is the marginal cost of production, and $Q_j$ is total plant production (i.e., $Q_j=\sum_n Q_{jn}$). The vectors $% \boldsymbol{\theta_c}$ and $\ boldsymbol{\theta_d}$ include the supply and demand parameters, respectively, and together form the joint parameter vector $\boldsymbol{\theta} = (\boldsymbol{\theta_c}, \boldsymbol{\theta_d})$% . The marginal cost function can accommodate most forms of scale economies or diseconomies. We choose a functional form that captures the plant-level capacity constraints that are important for portland cement production. In particular, we let marginal costs increase flexibly in production whenever production exceeds some threshold level. The marginal cost function is: $$MC(\boldsymbol{w}_ {j},Q_{j}(\boldsymbol{\theta_d}); \boldsymbol{\theta_c}) = \boldsymbol{w}_{j}^\prime \boldsymbol{\alpha} + \gamma \; 1 \left\{ \frac{% Q_j(\boldsymbol{\theta_d})}{CAP_j}>u \right\} \left(\frac{Q_{j} (% \boldsymbol{\theta_d})}{CAP_{j}} - u \right) ^\phi , \label{eq:mc}$$ where $\boldsymbol{w_j}$ is a vector that includes the relevant marginal cost shifters and $CAP_j$ is the maximum quantity that plant $j$ can produce. The parameter $\nu$ is the utilization threshold above which marginal costs increase in production, the parameter $\phi$ determines the curvature of the marginal cost function when utilization exceeds $\nu$, and the combination $\gamma(1-\nu)^\phi$ represents the marginal cost penalty associated with production at capacity. The marginal cost function is continuously differentiable in production for $\phi>1$. The vector of cost parameters is $\boldsymbol{\theta_c}= (\boldsymbol{\alpha}, \gamma, \nu, \phi)$. We let the domestic firms compete against a competitive fringe of foreign importers, which we denote as plant $J+1$. We assume that the fringe is a non-strategic actor and that import prices are exogenously set based on some marginal cost common to all importers. The fringe is endowed with one or more geographic locations and ships into every consumer area. It sets a single price across all consumer areas (i.e., the fringe does not price discriminate), consistent with perfect competition among importers. \subsection{Demand} We specify a nested logit demand system that captures the two most important characteristics that differentiate portland cement plants: price and location. To that end, we assume that each area features many potential consumers. Each consumer either purchases cement from one of the domestic plants or the importer, or foregoes a purchase of portland cement altogether. We refer to the domestic plants and the importer as the inside goods, and refer to the option to forego a purchase as the outside good. We place the inside goods in a separate nest from the outside good. We express the indirect utility that consumer $i$ receives from plant $j$ as a function of the relevant plant and location observables: $$u_{ij} = \beta_0 + \boldsymbol{x}_{jn}^\prime \boldsymbol{\beta} + \zeta_i + \lambda \epsilon_{ij}, \label{eq:util}$$ where the vector $\boldsymbol{x}_{jn}$ includes the price of cement and the distance between the plant and the area, as well as other plant-specific demand shifters. The idiosyncratic portion of the indirect utility function is composed of consumer-specific shocks to the desirability of the inside good ($\zeta_i$) and the desirability of each plant ($\epsilon_{it}$). We assume that the combination $\zeta_i + \lambda \epsilon_{ij}$ has an extreme value distribution in which the parameter $\lambda$ characterizes the extent to which valuations of the inside good are correlated across consumers.% \footnote{\citeN{cardell97} derives the conditions under which the specified taste shocks produce the extreme value distribution. Tastes are perfectly correlated if $\lambda=0$ and tastes are uncorrelated if $\lambda=1$. In the latter case the model collapses to a standard logit.} We normalize the mean utility of the outside good to zero, so that the indirect utility associated with foregoing cement purchase is $u_{i0}=\epsilon_{i0}$, with $\ epsilon_{i0} $ also having the extreme value distribution. The vector of demand parameters is $\boldsymbol{\theta_d}=(\beta_0, \boldsymbol{\beta}, \lambda)$.% \footnote{% We exclude product characteristics from the specification because portland cement is a homogenous good, at least to a first approximation. It may be desirable to control for observed product characteristics in other applications. (Though the presence of unobserved characteristics would pose a challenge to the estimation procedure.) Also, we assume that consumers pay the cost of transportation, consistent with practice in the cement industry. One could alternatively incorporate distance into the marginal cost function.% } The nested logit structure yields an analytical expression for the market shares captured by each plant. The market shares are specific to each plant-area pair because the relative desirability of each plant varies across areas: $$S_{jn}(\boldsymbol{P}_{n}; \boldsymbol{\theta_d}) = \frac{\exp(\beta_0+ \lambda I_{n})}{1+\exp(\beta_0+\lambda I_{n})} * \frac{\exp(\boldsymbol{x}% _{jn}^\prime \boldsymbol{\beta})}{\sum_{k=1}^{J+1} \exp(\boldsymbol{x}% _{kn}^\prime \boldsymbol{\ beta})}, \label{eq:shares}$$ where $\boldsymbol{P}_n$ is a vector of area-specific prices and $I_{n}= \ln \left( \sum_{k=1}^{J+1} \exp(\boldsymbol{x}_{kn}^\prime \boldsymbol{\beta}) \right)$ is the inclusive value of the inside goods. The first factor in this expression is the marginal probability that a consumer in area $n$ selects an inside good, and the second factor is the conditional probability that the consumer purchases from plant $j$ given selection of an inside good. Both take familiar logit forms due to the distributional assumptions on idiosyncratic consumer tastes. The demand system maps cleanly into supply: the quantity sold by plant $j$ to area $n$ is $Q_{jn} = S_{jn}M_n$, where $M_n$ is the potential demand of area $n$.\footnote{% The substitution patterns between cement plants are characterized by the independence of irrelevant alternatives (IIA) within the inside good nest. We argue that IIA is a reasonable approximation for our application. Portland cement is purchased nearly exclusively by ready-mix concrete plants and other construction companies. These firms employ similar production technologies and compete under comparable demand conditions. We are therefore skeptical that meaningful heterogeneity exists in consumer preferences for plant observables (e.g., price and distance). Without such heterogeneity, the IIA property arises quite naturally -- for example, the random coefficient logit demand model collapses to standard logit when the distribution of consumer preferences is degenerate.} \subsection{Equilibrium} The profit function yields the first-order conditions that characterize equilibrium prices for each plant-area pair: $$Q_{jn} + \sum_{n} \sum_{k \in \Im_{f}} (P_{kn}-MC_{kn})\frac{\partial Q_{kn}% }{\ partial P_{jn}}=0.$$ Since each plant competes in every consumer area and may price differently across areas, there are $J\times N$ first-order conditions. For notational convenience, we define the block-diagonal matrix $\boldsymbol{\Omega(P)}$ as the combination of $n=1,\dots,N$ sub-matrices, each of dimension $J \times J$% . The elements of the sub-matrices are defined as follows: $$\Omega^n_ {jk}(\boldsymbol{P}) = \left\{ \begin{array}{ll} \frac{\partial Q_{jn}}{\partial P_{kn}} & \text{if j and k have the same owner} \\ 0 & \text{otherwise},% \end{array} \right. \label{eq:omega}$$ where the demand derivatives take the nested logit forms. Thus, the elements of each sub-matrix $\Omega^n_{jk}(\boldsymbol{P})$ characterize substitution patterns within area $n$, and the matrix $\ boldsymbol{\Omega(P)}$ has a block diagonal structure because prices in one area do not affect demand in other areas. We can now stack the first-order conditions: $$\boldsymbol{P} = \boldsymbol{MC (P)} - \boldsymbol{\Omega(P)}^{-1}\boldsymbol{% Q(P)}, \label{eq:equil}$$ where $\boldsymbol{P}$, $\boldsymbol{MC(P)}$, and $\boldsymbol{Q(P)}$ are vectors of prices, marginal costs and quantities, respectively. Provided that marginal cost parameter $\phi$ exceeds one, the mappings $\boldsymbol{% MC(P)}$, $\boldsymbol{\Omega(P)}$, and $\boldsymbol{Q(P)}$ are each continuously differentiable. Further, the price vector belongs to a compact set in which each price is (1) greater or equal to the corresponding marginal cost, and (2) smaller than or equal to the price a monopolist would charge to the relevant area. Therefore, by Brower's fixed-point theorem, Bertrand-Nash equilibrium exists and is characterized by the price vector $% \boldsymbol{P^*}$ that solves Equation \ref{eq:equil}.\ footnote{% The existence proof follows \citeN{aguirregabiria06}. Multiple equilibria may exist.} Spatial price discrimination is at the core of the firm's pricing problem: firms maximize profits by charging higher prices to nearby consumers and consumers without a close alternative. Aside from price discrimination, the firm's pricing problem follows standard intuition. For example, a firm that contemplates a higher price for cement from plant $j$ to area $n$ must evaluate a number of effects: (1) the tradeoff between lost sales to marginal consumers and greater revenue from inframarginal consumers; (2) whether the firm would recapture lost sales with its other plants; and (3) whether the lost sales would ease capacity constraints and make the plant more competitive in other consumer areas. \section{Data Sources and Summary Statistics \label{sec:data}} \subsection{Data sources} We cull the bulk of our data from the Minerals Yearbook, an annual publication of the U.S. Geological Survey (USGS). The Minerals Yearbook is based on an annual census of portland cement plant and contains regional-level information on portland cement consumption, production, and free-on-board prices.\footnote{% The census response rate is typically well over 90 percent (e.g., 95 percent in 2003), and USGS staff imputes missing values for the few non-respondents based on historical and cross-sectional information. Other academic studies that feature these data include \citeN{mcbride83}, \citeN{rosenbaumreading88}% , \citeN{rosenbaum94}, \citeN{jansrosenbaum96}, \citeN{ryan06}, and % \citeN{syversonhortacsu07}. The Minerals Yearbook provides the average free-on-board price charged the plants located in each region, rather than the price paid by the consumers in each region.} The four relevant regions include Northern California, Southern California, Arizona, and Nevada. We observe annual consumption in each region over the period 1983-2003. The USGS combines the Arizona and Nevada regions when reporting production and prices over 1983-1991, and no usable production or price information is available for Nevada over 1992-2003.\footnote{% The USGS combines Nevada with Idaho, Montana and Utah starting in 1992. We adjust the Arizona data to remove the influence of a single plant located in New Mexico whose production is aggregated into the region. We detail this adjustment in an appendix. Also, it is worth noting that the USGS does not intend for its regions to approximate geographic markets. Rather, the regions are delineated such that plant-level information cannot be backward-engineered from the Minerals Yearbook.} The Minerals Yearbook also includes information on the price and quantity of portland cement that is imported into the U.S. Southwest via the customs offices in San Francisco, Los Angeles, San Diego, and Nogales. We also make use of more limited data on cross-region shipments from the California Letter, a second annual publication of the USGS. The California Letter provides information on the quantity of portland cement that is shipped from plants in California to consumers in Northern California, Southern California, Arizona, and Nevada. However, the level of aggregation varies over the sample period, some data are redacted to protect sensitive information, and no information is available before 1990. In total, we observe 96 data points: \begin{tabular}{p{2.7in}p{2.7in}} & \end{tabular} % CA to N. CA over 1990-2003 S. CA to N. CA over 1990-1999 % CA to S. CA over 2000-2003 S. CA to S. CA over 1990-1999 % CA to AZ over 1990-2003 S. CA to AZ over 1990-1999 % CA to NV over 2000-2003 S. CA to NV over 1990-1999 % N. CA to N. CA over 1990-1999 N. CA to AZ over 1990-1999 \noindent We withhold the bulk of these data from the estimation procedure and instead use the data to conduct out-of-sample checks on the model predictions.\footnote{% The California Letter is based on a monthly survey rather than on the annual USGS census. As a result, the data are not always consistent with the Minerals Yearbook. We normalize the data prior to estimation so that total shipments equal total production in each year.} We supplement the USGS data with basic plant-level information from the Plant Information Survey (PIS), an annual publication of the Portland Cement Association. The PIS provides the location of each portland cement plant in the United States, together with its owner, the annual kiln capacity, and various other kiln characteristics.\footnote{% We multiply kiln capacity by 1.05 to approximate cement capacity, consistent with the industry practice of mixing clinker with a small amount of gypsum (typically 3 to 7 percent) in the grinding mills.} We approximate consumer areas using counties, which meet the criterion of being small relative to the overall geographic space -- there are 90 counties in the U.S. Southwest. We collect county-level data from the Census Bureau on construction employment and residential construction permits to account for county-level heterogeneity in potential demand. Finally, we collect data on diesel, coal, and electricity prices from the Energy Information Agency, data on average wages of durable good manufacturing employees from the BEA, and data on crushed stone prices from the USGS; we exploit state-level variation for all but the diesel data. \subsection{Summary statistics} We provide summary statistics on consumption, production, and prices for each of the regions in Table \ref {tab:swData}. Some patterns stand out: First, substantial variation in each metric is available, both inter-temporally and across regions, to support estimation. Second, Southern California is larger than the other regions, whether measured by consumption or production. Third, consumption exceeds production in Northern California, Arizona, and Nevada; these shortfalls must be countered by cross-region shipments and/or imports. The observation that plants in these regions charge higher prices is consistent with transportation costs providing some degree of local market power.\footnote {% The data on cross-region shipments are also suggestive of large transportation costs. For example, more than 90 percent of portland cement produced in Northern California was shipped to consumers in Northern California over 1990-1999.} Finally, imports are less expensive than domestically produced portland cement. This discrepancy may exist in part because the reported prices exclude duties; more speculatively, domestic producers may be more reliable or may maintain relationships with consumers that support higher prices. \begin{table}[tbp] \caption{Consumption, Production, and Prices} \ begin{center} \begin{tabular}[h]{lcccc} \hline\hline \rule[0mm]{0mm}{5.5mm}Description & Mean & Std & Min & Max \\ \hline \rule[0mm]{0mm}{5.5mm}\textit{Consumption} & & & & \\ $\;$ Northern California & 3,513 & 718 & 2,366 & 4,706 \\ $\;$ Southern California & 6,464 & 1,324 & 4,016 & 8,574 \\ $\;$ Arizona & 2,353 & 650 & 1,492 & 3,608 \\ $\;$ Nevada & 1,289 & 563 & 416 & 2,206 \\ \rule [0mm]{0mm}{5.5mm}\textit{Production} & & & & \\ $\;$ Northern California & 2,548 & 230 & 1,927 & 2,894 \\ $\;$ Southern California & 6,316 & 860 & 4,886 & 8,437 \\ $\;$ Arizona-Nevada & 1,669 & 287 & 1050 & 2,337 \\ \rule[0mm]{0mm}{5.5mm}\textit{Domestic Prices} & & & & \\ $\;$ Northern California & 85.81 & 11.71 & 67.43 & 108.68 \\ $\;$ Southern California & 82.81 & 16.39 & 62.21 & 114.64 \\ $\; $ Arizona-Nevada & 92.92 & 14.24 & 75.06 & 124.60 \\ \rule[0mm]{0mm}{5.5mm}\textit{Import Prices} & & & & \\ $\;$ U.S. Southwest & 50.78 & 9.30 & 39.39 & 79.32 \\ \hline \multicolumn{5}{p{3.8in}}{{\ footnotesize {\rule[0mm]{0mm}{0mm}Statistics are based on observations at the region-year level over the period 1983-2003. Production and consumption are in thousands of metric tonnes. Prices are per metric tonne, in real 2000 dollars. Import prices exclude duties. The region labeled ``Arizona-Nevada" incorporates information from Nevada plants only over 1983-1991.}}}% \end{tabular}% \end{center} \end{table} In Table \ref{tab:swDistData}, we explore the spatial characteristics of the regions in more depth, based on the plant locations of 2003. First, the average county in Northern California is 65 miles from the nearest domestic cement plant. Since the comparable statistics for Southern California, Arizona, and Nevada are 72 miles, 92 miles, and 100 miles, respectively, one may infer that average transportation costs may differ substantively across the four regions. Second, the average \textit{additional} distance to the second closest domestic plant is 44 miles in Northern California, 11 miles in Southern California, 82 miles in Arizona, and 77 miles in Nevada, suggestive that plants in Arizona and Nevada may hold more local market power than plants elsewhere. Finally, the average distance to the nearest customs office is 123 miles in Northern California, 110 miles in Southern California, 181 miles in Arizona, and 281 miles in Nevada, suggestive that imports may constrain domestic prices less severely in Arizona and Nevada. The latter two empirical patterns are consistent with the higher cement prices observed in Arizona and Nevada. \begin{table}[tbp] \ caption{Distances between Counties and Plants} \begin{center} \begin{tabular}[h]{lcccc} \hline\hline \rule[0mm]{0mm}{5.5mm}Description & Mean & Std & Min & Max \\ \hline \multicolumn{5}{l}{\rule[0mm] {0mm}{5.5mm}\textit{Miles to the closest plant}} \\ $\;$ Northern California & 64.65 & 30.00 & 7.36 & 115.39 \\ $\;$ Southern California & 72.28 & 39.74 & 18.58 & 127.46 \\ $\;$ Arizona & 91.62 & 40.01 & 29.13 & 163.99 \\ $\;$ Nevada & 100.04 & 61.97 & 17.38 & 232.03 \\ \multicolumn{5}{l}{\rule[0mm]{0mm}{5.5mm}\textit{Additional miles to the second closest plant}} \\ $\;$ Northern California & 43.95 & 49.84 & 0.49 & 176.07 \\ $\;$ Southern California & 11.22 & 8.83 & 0.49 & 31.08 \\ $\;$ Arizona & 81.96 & 55.65 & 0.69 & 172.38 \\ $\;$ Nevada & 77.38 & 43.28 & 12.82 & 177.20 \\ \ multicolumn{5}{l}{\rule[0mm]{0mm}{5.5mm}\textit{Miles to the closest import point}} \\ $\;$ Northern California & 122.65 & 67.42 & 4.33 & 283.00 \\ $\;$ Southern California & 110.17 & 67.43 & 30.29 & 221.28 \\ $\;$ Arizona & 180.80 & 90.26 & 14.07 & 314.91 \\ $\;$ Nevada & 281.45 & 81.93 & 170.09 & 442.02 \\ \hline \multicolumn{5}{p{3.8in}}{{\footnotesize {\rule[0mm]{0mm}{0mm}Distances are calculated based on plant locations in 2003. There are 46 counties in Northern California, 12 counties in Southern California, 15 counties in Arizona, and 17 counties in Nevada.}}}% \end{tabular}% \ end{center} \end{table} \section{Estimation \label{sec:est}} \subsection{Overview} The challenge for estimation is that prices and market shares are unobserved in the data, at least at the plant-area level. We develop a GMM estimator that exploits variation in data that are more often observed: regional level consumption, production, and prices. The key insight is that each candidate parameter vector corresponds to an equilibrium set of plant-area prices and market shares. We compute equilibrium numerically for each candidate parameter vector, aggregate predictions of the model to the regional-level, and then evaluate the ``distance'' between the data and the aggregate predictions. The estimation routine is an iterative procedure defined by the following steps: \begin{enumerate} \ item Select a candidate parameter vector $\boldsymbol{\widetilde{\theta}}$. \item Compute the equilibrium price and market share vectors. \item Calculate regional-level metrics based on the equilibrium vectors. \item Evaluate the regional-level metrics against the data. \item Update $\boldsymbol{\widetilde{\theta}}$ and repeat steps 1-5 to convergence. \end{enumerate} The estimation procedure can be interpreted as having both an inner loop and an outer loop: the inner loop computes equilibrium for each candidate parameter vector and the outer loop minimizes an objective function over the parameter space. We discuss the inner loop and the outer loop in turn, and then address some additional details. \subsection{Computation of numerical equilibrium} We use a large-scale nonlinear equation solver developed in \citeN{dfsane06} to compute equilibrium. The equation solver employs a quasi-Newton method and exploits simple derivative-free approximations to the Jacobian matrix; it converges more quickly than other algorithms and does not sacrifice precision. We define a numerical Bertrand-Nash equilibrium as a price vector for which $\parallel \boldsymbol{\iota(P)} \parallel / \; \text{dim}(% \boldsymbol{\iota(P)}) < \delta$, where $\parallel \cdot \parallel$ denotes the Euclidean norm operator and $$\boldsymbol{\iota(P)} = \boldsymbol{\Omega(P)}(\boldsymbol{P} -\boldsymbol{% MC(P)}) - \boldsymbol{Q(P)}.$$ We denote the equilibrium prices that correspond to the candidate parameter vector $\boldsymbol{\widetilde{\theta}}$ as $\widetilde{P}_{jnt}(\boldsymbol {% \widetilde{\theta}},\boldsymbol{\chi})$, where $\boldsymbol{\chi}$ includes the exogenous data. We denote the corresponding equilibrium market shares as $\widetilde{S}_{jnt}(\boldsymbol{\widetilde {\theta}},\boldsymbol{\chi})$. From a computational standpoint, our construction of $\boldsymbol{\iota(P)}$ avoids the burden of inverting $\boldsymbol{\Omega(P)}$ that would be required by the straight application of Equation \ref{eq:equil}. Further, the structure of the problem permits us to compute equilibrium separately for each period. The price vector that characterizes the equilibrium of a given period has length $J_t \times N$ so that, for example, the equilibrium price vector for 2003 has $14 \times 90 =1,260$ elements.\footnote{% We set $\delta=$1e-13. Numerical error can propagate into the outer loop when the inner loop tolerance is substantially looser (e.g., 1e-7), which slows overall estimation time and can produce poor estimates. The inner loop tolerance is not unit free and must be evaluated relative to the price level. We also note that, in settings characterized by constant marginal costs, one could ease the computational burden of the inner loop by solving for equilibrium prices in each consumer area separately.} The empirical analogs to the computed plant-area prices and market shares are not observed in the data, so we aggregate the prices and market shares to construct more useful regional metrics. For notational precision, we define the sets $\aleph_r$ and $\jmath_r$ as the counties and plants, respectively, located in region $r$. The aggregated region-period metrics take the form: \begin{eqnarray} \widetilde{C}_{rt}(\boldsymbol{\widetilde{\theta}},\boldsymbol{\chi}% )&=&\sum_{n\in\aleph_r}\left(1-\widetilde{S}_ {0nt}(\boldsymbol{\widetilde{% \theta}},\boldsymbol{\chi})\right)M_{nt} \notag \\ \widetilde{Q}_{rt}(\boldsymbol{\widetilde{\theta}},\boldsymbol{\chi}% )&=&\sum_{j\in\jmath_r}\widetilde{S}_{jnt}(\ boldsymbol{\widetilde{\theta}},% \boldsymbol{\chi})M_{nt} \\ \widetilde{P}_{rt}(\boldsymbol{\widetilde{\theta}},\boldsymbol{\chi}% )&=&\sum_{j\in\jmath_r}\sum_n\frac{\widetilde{S}_{jnt}(\boldsymbol{% \widetilde{\theta}},\boldsymbol{\chi})M_{nt}}{\sum_{j\in\jmath_r}\sum_n% \widetilde{S}_{jnt}(\boldsymbol{\widetilde{\theta}},\boldsymbol{\chi})M_{nt}}% \widetilde{P}_{jnt}(\boldsymbol{\widetilde{\ theta}},\boldsymbol{\chi}), \notag \end{eqnarray} where $\widetilde{C}_{rt}(\boldsymbol{\widetilde{\theta}},\boldsymbol{\chi})$% , $\widetilde{Q}_{rt}(\boldsymbol{\widetilde{\theta}},\boldsymbol{\ chi})$, and $\widetilde{P}_{rt}(\boldsymbol{\widetilde{\theta}},\boldsymbol{\chi})$ are total consumption, total production, and weighted-average price, respectively. We calculate regional consumption for each of the four regions in the data -- Northern California, Southern California, Arizona, and Nevada. We calculate production and prices for Northern California, Southern California, and a combined Arizona-Nevada region. We denote the empirical analogs $C_{rt}$, $Q_{rt}$, and $P_{rt}$. We find that, in practice, we can better identify some of the model parameters by exploiting information on aggregated cross-region shipments. We denote the quantity of shipments from region $r$ to region $s$ as $% \widetilde{Q}^s_{rt}(\boldsymbol{\widetilde{\theta}},\boldsymbol{\chi})$. The shipments take the form: \begin{eqnarray} \widetilde{Q}^s_{rt}(\boldsymbol{\widetilde{\theta}},\boldsymbol{\chi}% )&=&\sum_{j\in\jmath_r}\sum_{n\in\aleph_s}\widetilde{S}_{jnt}(\boldsymbol{% \ widetilde{\theta}},\boldsymbol{\chi})M_{nt}, \end{eqnarray} We calculate the quantity of portland cement produced by plants in California (both Northern and Southern) that is shipped to consumers in Northern California. The empirical analog, which we denote $Q^s_{rt}$, is available over the period 1990-2003. We withhold data on other cross-region shipments from estimation because the number of parameters that we estimate exceeds the lengths of the available time-series. The withheld data, however, provide natural out-of-sample tests on the model predictions. \subsection{Objective function} We estimate the parameters using the standard GMM framework for systems of nonlinear regression equations (e.g., \citeN{greene03}, page 369). The equations we use compare the metrics computed in the inner loop to their empirical analogs: \begin{eqnarray} \boldsymbol{C}_r&=&\boldsymbol{\widetilde{C}}_r(\boldsymbol{\theta},% \boldsymbol{\chi})+\boldsymbol{e_r^1} \notag \\ \boldsymbol{Q}_r&=&\ boldsymbol{\widetilde{Q}}_r(\boldsymbol{\theta},% \boldsymbol{\chi})+\boldsymbol{e_r^2} \\ \boldsymbol{P}_r&=&\boldsymbol{\widetilde{P}}_r(\boldsymbol{\theta},% \boldsymbol{\chi})+\boldsymbol{e_r^3} \notag \\ \boldsymbol{Q}^s_{r}&=&\boldsymbol{\widetilde{Q}}^s_r(\boldsymbol{\theta},% \boldsymbol{\chi})+\boldsymbol{e_{rs}^4}. \notag \label{eq:moments} \end{eqnarray} We write the equations in vector form with one element per period. There are four consumption moments, three production moments, three price moments, and one cross-region shipments moment. We have 21 observations on each of the consumption, production and price moments, and 14 observations on the cross-region shipments moment. We interpret the disturbances as measurement error, and assume the disturbances to have expectation zero and contemporaneous covariance matrix $\boldsymbol{\Sigma}$. The GMM estimator is: $$\boldsymbol{\widehat{\theta}} = \arg \min_{\boldsymbol{\Theta}} \boldsymbol{e% }(\boldsymbol{\ widetilde{\theta}},\boldsymbol{\chi})^\prime \boldsymbol{A}% ^{-1} \boldsymbol{e}(\boldsymbol{\widetilde{\theta}},\boldsymbol{\chi}),$$ where $\boldsymbol{e}(\boldsymbol{\widetilde{\theta}},\ boldsymbol{\chi})$ is a vector of empirical disturbances obtained by stacking the nonlinear regression equations and $\boldsymbol{A}$ is a positive definite weighting matrix. We employ the usual two-step procedure to obtain consistent and efficient estimates (\citeN{hansen82}). We first minimize the objective function using $\boldsymbol{A}=\boldsymbol{I}$. We then estimate the contemporaneous covariance matrix and minimize the objective function a second time using the weighting matrix $\boldsymbol{A} = \boldsymbol{% \widehat{\Sigma}} \otimes\boldsymbol{I}.$ We compute standard errors that are robust to both heteroscedasticity and arbitrary correlations among the error terms of each period, using the methods of \citeN{hansen82} and % \citeN{neweymcfadden94}.\ footnote{% Measurement error that is zero in expectation is sufficient for consistency. Estimation of the contemporaneous covariance matrix $\boldsymbol{\Sigma}$ is complicated by the fact that we observe consumption, production, and prices over 1983-2003 but cross-region shipments over 1990-2003. We use methods developed in \citeN{sz73} and \citeN{hhs90} to account for the unequal numbers of observations.} \subsection{Potential demand} We normalize the potential demand of each county using two exogenous demand predictors that we observe at the county level: the number of construction employees and the number of new residential building permits. We regress regional portland cement consumption on the demand predictors (aggregated to the regional level), impute predicted consumption at the county level based on the estimated relationships, and then scale predicted consumption by a constant of proportionality to obtain potential demand.\footnote{% The regression of regional portland cement consumption on the demand predictors yields an $R^2$ of 0.9786, which foreshadows an inelastic estimate of aggregate demand. Additional predictors, such as land area, population, and percent change in gross domestic product, contribute little additional explanatory power. We use a constant of proportionality of 1.4, which is sufficient to ensure that potential demand exceeds observed consumption in each region-year observation.} The results indicate that potential demand is concentrated in a small number of counties. In 2003, the largest 20 counties account for 90 percent of potential demand, the largest 10 counties account for 65 percent of potential demand, and the largest two counties -- Maricopa County and Los Angeles County -- together account for nearly 25 percent of potential demand.\footnote{% The largest five counties are Maricopa County (3,259 thousand metric tonnes), Los Angeles County (3,128 thousand metric tonnes), Clark County (1,962 thousand metric tonnes), Riverside County (1,803 thousand metric tonnes) and San Diego County (1,733 thousand metric tonnes).} In the time-series, potential demand more than doubles over 1983-2003, due to greater activity in the construction sector and the onset of the housing bubble. \subsection{The geographic space \label{sec:ussw}} Our restricted geographic focus eases the computational burden of the estimation routine. For instance, a national sample would require the computation of more than 300 thousand equilibrium plant-county prices in each time period, for every outer loop iteration. The geographical restriction is valid provided that gross domestic inflows/outflows are insubstantial. The data provide some support. Most directly, the California Letter indicates that more than 98 percent of cement produced in Southern California was shipped within the U.S. Southwest over the period 1990-1999, and more than 99 percent of cement produced in California (both Northern and Southern) was shipped within the U.S. Southwest over the period 2000-2003.% \footnote{% Analogous statistics for Northern California over 1990-1999 are unavailable due to data redaction.} We consider outflows from Arizona unlikely because the Minerals Yearbook indicates that consumption routinely exceeds production in that state, and we consider outflows from Nevada unlikely because production capacity is low relative to potential demand. Since \textit{net} domestic inflows/outflows are insubstantial (see Figure \ref% {fig:imports}), these data patterns suggest that gross inflows are also insubstantial. \subsection{Identification \label{sec:ID}} We use an artificial data experiment to test identification. We draw 40 data sets, each with 21 time periods, using our model and a vector of ``true'' parameters as the data generating process. We then seek to recover the parameter values with the GMM estimation procedure. The exogenous data includes the plant capacities, the potential demand of counties, the diesel price, the import price, and two cost shifters. We randomly draw capacity and potential demand from the data (with replacement), and we draw the remaining data from normal distributions.\footnote{% Specifically, we use the following distributions: diesel price $\sim N(1,0.28)$, import price $\sim N(50,9)$, cost shifter 1 $\sim N (60,15)$, and cost shifter 2 $\sim N(9,2).$ We redraw data that are below zero. We also redraw data that lead the estimator to nonsensical areas of parameter space.} We hold plant and county locations fixed to maintain tractability, and rely on the random draws on capacity, potential demand, and diesel prices to create variation in the distances between production capacity and consumers. Table \ref{tab:fakedata} shows the results of the experiment. Interpretation is complicated somewhat because we use non-linear transformations to constrain the price and distance coefficients below zero, constrain the coefficients on the cost shifters and the over-utilization cost above zero, and constrain the inclusive value and utilization threshold coefficients between zero and one. We defer details on these transformations to Appendix % \ref{app:estdet}. As shown, the means of the estimated coefficients are close to (transformed) true parameters. The means of the price and distance coefficients, which are of particular interest, are within 6 percent and 11 percent of the truth, respectively. The root mean-squared errors tend to be between 0.45 and 0.66 -- the two exceptions that generate higher mean-squared errors are the import dummy and the over-utilization cost, which appear to be less cleanly identified. \begin{table}[tbp] \caption{Artificial Data Test for Identification} \begin{center} \begin{tabular}[h]{lccccc} \hline\hline \rule[0mm]{0mm}{6.5mm}Variable & Parameter & Truth ($\theta$) & Transformed (% $\widetilde{\theta}$) & Mean Est & RMSE \\ \hline \multicolumn{3}{l}{\rule[0mm]{0mm}{4.5mm}\textit{Demand}} & & & \\ \; Cement Price & $\beta_1$ & -0.07 & -2.66 & -2.51 & 0.66 \\ \; Miles$\times$Diesel Price & $\beta_2$ & -25.00 & 3.22 & 2.86 & 0.59 \\ \; Import Dummy & $\beta_3$ & -4.00 & -4.00 & -6.07 & 1.23 \\ \; Intercept & $\beta_0$ & 2.00 & 2.00 & 1.11 & 0.51 \\ \; Inclusive Value & $\lambda$ & 0.09 & -2.31 & -1.73 & 0.54 \\ \multicolumn {3}{l}{\rule[0mm]{0mm}{4.5mm}\textit{Marginal Costs}} & & & \\ \; Cost Shifter 1 & $\alpha_1$ & 0.70 & -0.36 & -0.88 & 0.51 \\ \; Cost Shifter 2 & $\alpha_2$ & 3.00 & 1.10 & 0.54 & 0.45 \\ \; Utilization Threshold & $\nu$ & 0.90 & 2.19 & 1.71 & 0.59 \\ \; Over-Utilization Cost & $\gamma$ & 300.00 & 5.70 & 6.14 & 1.05 \\ \hline \multicolumn{6}{p{6.3in}}{{\footnotesize {\rule[0mm]{0mm}{0mm} Results of GMM estimation on 40 data sets that are randomly drawn based on the ``true'' parameters listed. The parameters are transformed prior to estimation to place constraints on the parameter signs/magnitudes (see Appendix \ref% {app:estdet}). Mean Est and RMSE are the mean of the estimated (transformed) parameters and the root mean-squared error, respectively. }}}% \end{tabular}% \end {center} \end{table} To further build intuition, we explore some of the empirical relationships (graphed in Figure \ref{fig:ID}) that drive parameter estimates in our application. On the demand side, the price coefficient is primarily identified by the relationship between the consumption and price moments. In panel A, we plot cement prices and the ratio of consumption to potential demand (``market coverage'') over the sample period. The two metrics have a weak negative correlation, consistent with downward-sloping but inelastic aggregate demand. The distance coefficient is primarily identified by (1) the cross-region shipments moment, and (2) the relationship between the consumption and production moments. To explore the second source of identification, we plot the gap between production and consumption (``excess production'') for each region over the sample period. In many years, excess production is positive in Southern California and negative elsewhere, consistent with inter-regional trade flows. The magnitude of these implied trade flows helps drive the distance coefficient. Interestingly, the implied trade flows are higher later in the sample, when the diesel fuel is less expensive. \begin{figure}[t] \centering \includegraphics[height=4in]{stats_SWsample.eps} \caption{{\protect\footnotesize {Empirical Relationships in the U.S. Southwest. Panel A plots average cement prices and market coverage. Prices are in dollars per metric tonne and market coverage is defined as the ratio of consumption to potential demand (times 100). Panel B plots excess production in each region, which we define as the gap between between production and consumption. Excess production is in millions of metric tonnes. Panel C plots average coal prices, electricity prices, durable-goods manufacturing wages, and crushed stone prices in California. For comparability, each time-series is converted to an index that equals one in 2000. Panel D plots the average cement price and industry-wide utilization (times 100). } }} \label{fig:ID} \end{figure} On the supply side, the parameters on the marginal cost shifters are primarily identified by the price moments. In panel C, we plot the coal price, the electricity price, the durable-goods manufacturing wage, and the crushed stone price for California. Coal and electricity prices are highly correlated with the cement price (e.g., see panel A), consistent with a strong influence on marginal costs; inter-regional variation in input prices helps disentangle the two effects. It is less clear that wages and crushed stone prices are positively correlated with cement prices. The utilization parameters are primarily identified by the relationship between the production moments (which determine utilization) and the price moments. In panel D, we plot cement prices and industry-wide utilization over the sample period. The two metrics are negatively correlated over 1983-1987 and positively correlated over 1988-2003. \section{Estimation Results \label{sec:res}} \subsection{Specification and fits} We estimate the model with parsimonious specifications of the utility and marginal cost functions. Specifically, the utility specification includes the plant-county price, the ``distance'' between the plant and county, a dummy for the import option, and an intercept. We proxy distance using a diesel price index interacted with the miles between the plant and the center of the county (in thousands). The marginal cost specification incorporates the five variable inputs identified by \citeN{epa09}. The constant portion of marginal costs includes shifters for the price of coal, the price of electricity, the average wages of durable-goods manufacturing employees, and the price of crushed stone. We let marginal costs increase in production once utilization exceeds some (estimated) threshold value, as written in Equation \ref{eq:mc}, and we normalize $\phi=1.5$ to ensure the theoretical existence of equilibrium. Before turning to the parameter estimates, we note that this specification produces impressive in-sample and out-of-sample fits. In Figure \ref% {fig:fitsregX}, we plot observed consumption against predicted consumption (panel A), observed production against predicted production (panel B), and observed prices against predicted prices (panel C). The model explains 93 percent of the variation in regional consumption, 94 percent of the variation in regional production, and 82 percent of the variation in regional prices. The model also generates accurate out-of-sample predictions. In panel D, we plot observations on cross-region shipments against the corresponding model predictions. We use only 14 of these observations in the estimation routine -- the remaining 82 data points are withheld from the estimation procedure and do not directly influence the estimated parameters. Even so, the model explains 98 percent of the variation in these data.\ footnote{% We provide additional information on the estimation fits in Appendix \ref% {app:fits}.} \begin{figure}[t] \centering \includegraphics[height=4in]{fits-regX.eps} \caption{{\protect\ footnotesize {GMM Estimation Fits for Regional Metrics. Consumption, production, and cross-region shipments are in millions of metric tonnes. Prices are constructed as a weighted-average of plants in the region, and are reported as dollars per metric tonne. The lines of best fit and the reported $R^2$ values are based on univariate OLS regressions.}}} \label{fig:fitsregX} \end{figure} %The figure shows four scatter-plots. The first plots predicted consumption against observed consumption (panel A). The second %plots predicted production against observed consumption (panel B). The third plots predicted prices against observed prices %(panel C). The fourth plots predicted cross-region shipments against observed shipments. Each plot also shows the line of best fit. %and the resulting r-squared value. The text discusses the fits in detail. \subsection{Demand estimates and transportation costs \label{sec:demres}} Table \ref{tab:GMM} presents the parameter estimates of the GMM procedure. The price and distance coefficients are the two primary objects of interest on the demand side; both are negative and precisely estimated.\footnote{% The other demand parameters take reasonable values and are precisely identified. The negative coefficient on the import dummy may be due to consumer preferences for domestic plants or to the fact that observed import prices do not reflect the full price of imported cement (e.g., the data exclude duties). The inclusive value coefficient suggests that consumer tastes for the different cement providers are highly correlated, inconsistent with the standard (non-nested) logit model.} The ratio of these coefficients identifies consumers' willingness-to-pay for proximity, incorporating transportation costs and any other distance-related costs (e.g., reduced reliability). In the following discussion, we refer to the willingness-to-pay as the transportation cost, although the two concepts may not be strictly equivalent. \begin{table}[tbp] \caption{Estimation Results } \begin{center} \begin{tabular}[h]{lccc} \hline\hline \rule[0mm]{0mm}{6.5mm}Variable & Parameter & Estimate & St. Error \\ \hline \ multicolumn{3}{l}{\rule[0mm]{0mm}{4.5mm}\textit{Demand}} & \\ \; Cement Price & $\beta_1$ & -0.087 & 0.002 \\ \; Miles$\times$Diesel Price & $\beta_2$ & -26.42 & 1.78 \\ \; Import Dummy & $\beta_3$ & -3.80 & 0.06 \\ \; Intercept & $\beta_0$ & 1.88 & 0.08 \\ \; Inclusive Value & $\lambda$ & 0.10 & 0.004 \\ \multicolumn{3}{l}{\rule[0mm]{0mm}{4.5mm}\textit{Marginal Costs}} & \\ \; Coal Price & $\ alpha_1$ & 0.64 & 0.05 \\ \; Electricity Price & $\alpha_2$ & 2.28 & 0.47 \\ \; Hourly Wages & $\alpha_3$ & 0.01 & 0.04 \\ \; Crushed Stone Price & $\alpha_4$ & 0.29 & 0.31 \\ \; Utilization Threshold & $\nu$ & 0.86 & 0.01 \\ \; Over-Utilization Cost & $\gamma$ & 233.91 & 38.16 \\ \hline \multicolumn{4}{p{4.3in}}{{\footnotesize {\rule[0mm]{0mm}{0mm}GMM estimation results. Estimation exploits variation in regional consumption, production, and average prices over the period 1983-2003, as well as variation in shipments from California to Northern California over the period 1990-2003. The prices of cement, coal, and crushed stone are in dollars per metric tonne. Miles are in thousands. The diesel price is an index that equals one in 2000. The price of electricity is in cents per kilowatt-hour, and hourly wages are in dollars per hour. The marginal cost parameter $\phi$ is normalized to 1.5, which ensures the theoretical existence of equilibrium. Standard errors are robust to heteroscedasticity and contemporaneous correlations between moments.}}}% \end{tabular}% \end{center} \end{table} First, however, we briefly summarize the implied price elasticities. We estimate that the aggregate elasticity is -0.16 in the median year. This inelasticity is precisely what one should expect based on economic theory and the fact that portland cement composes only a small fraction of total construction expenses. Indeed, \citeN{syverson04} makes a similar argument for ready-mix concrete, which accounts for only two percent of total construction expenses according to the 1987 Benchmark Input-Output Tables. The cost share of portland cement (an input to ready-mix concrete) is surely even lower. By contrast, we estimate that the median firm-level elasticity is -5.70, consistent with substantive competition between firms. Finally, we estimate that the domestic elasticity -- which captures the responsiveness of domestic demand to domestic prices, holding import prices constant -- is -1.11 in the median year. The discrepancy between the aggregate and domestic elasticities is suggestive that imports have a disciplining effect on domestic prices. Returning to transportation costs, we estimate that consumers pay roughly \$0.30 per tonne mile, given diesel prices at the 2000 level.\footnote{% The calculation is simply $\frac{26.42}{0.087}\frac {index}{1000}=0.3037$, where $index=1$ in 2000.} Given the shipping distances that arise in numerical equilibrium, this translates into an average transportation cost of \$24.61 per metric tonne over the sample period -- sufficient to account for 22 percent of total consumer expenditure. Transportation costs of this magnitude have real effects on the industry. We develop two such effects here: (1) transportation costs constrain the distance that cement can be shipped economically; and (2) transportation costs insulate firms from competition and provide some degree of localized market power. In Figure \ref{fig:miles4}, we plot the estimated distribution of the shipping distances over 1983-2003. We calculate that portland cement is shipped an average of 92 miles, that 75 percent of portland cement is shipped under 110 miles, and that 90 percent is shipped under 175 miles.% \footnote{% The average shipping distance fluctuates between a minimum of 72 miles in 1983 and a maximum of 114 miles in 1998, and is highly correlated with the diesel price index.} To better place these numbers in context, we ask the question: ``How far would portland cement be shipped if transportation costs were negligible?'' We perform a counter-factual simulation in which we normalize the distance coefficient to zero, keeping the other coefficients at their estimated values. The results suggest that, in an average year, portland cement would have been shipped on average 276 miles absent transportation costs. Intriguingly, the ratio of actual miles shipped to this simulated measure provides a unit-free measure that could enable cross-industry comparisons. The ratio in our application is 0.33. \begin{figure}[t] \centering \includegraphics[height=3in]{miles4.eps} \caption {{\protect\footnotesize {The Estimated Distribution of Miles Shipped over 1983-2003.}}} \label{fig:miles4} \end{figure} % The figure is a non-parametric bar chart that characterizes the distances that cement is shipped. % See text for summary statistics and more details. We now develop the empirical evidence regarding localized market power. We start with an illustrative example. Figure \ref {fig:priceshare} shows the prices (Map A) and market shares (Map B) that characterize numerical equilibrium for the Clarksdale plant in 2003, evaluated the optimized coefficient vector. We mark the location of the Clarksdale plant with a star, and mark other plants with circles. The Clarksdale plant captures more than 40 percent of the market in the central and northeastern counties of Arizona. It charges consumers in these counties its highest prices, typically \$80 per metric tonne or more. Both market shares and prices are lower in more distant counties, and in many counties the plant captures less than one percent of demand despite substantial discounts. The locations of competitors may also influence market share and prices, though these effects are difficult to discern on the map. \begin{figure}[t] \centering \includegraphics[height=4in]{priceshare-erika.eps} \caption{{\protect\footnotesize {Equilibrium Prices and Market Shares for the Clarksdale Plant in 2003. The Clarksdale plant is marked with a star, and other plants are marked with circles.}}} \label{fig:priceshare} \end{figure} %The figure is a map of California, Arizona, and Nevada. In Map A, counties for which the Clarksdale plant has higher prices %are shaded darker. In Map B, counties for which the plant has higher market shares are shaded darker. The location of the %Clarksdale plant is marked with a star, and the locations of competitors are marked with circles. The text provides a description % of the spatial price and share patterns that emerge. We explore these relationships more rigorously with regression analysis, based on the prices and market shares that characterize equilibrium at the optimized coefficient vector. We regress price and market share on three independent variables: the distance between the plant and the county, the distance between the county and the nearest other domestic plant, and the estimated marginal cost of the plant. We define distance as miles times the diesel index, and use a log-log specification to ease interpretation. We focus on three data samples, composed of the plant-county pairs with distances of 0-100, 100-200, and 200-300, respectively. Our objective is purely descriptive and the regression coefficients should not be interpreted as consistent estimates of any underlying structural parameters. Table \ref{tab:disteffect} presents the results, which are consistent with the illustrative example and demonstrate that (1) plants have higher prices and market shares in counties that are closer; and (2) plants have lower prices and market shares in counties that have nearby alternatives. For the closest plants and counties, a 10 percent reduction in distance is associated with prices and market shares that are 0.9 percent and 14 percent higher, respectively. For the same sample, a 10 percent reduction in the distance separating the county from its the closest alternative is associated with prices and market shares that are 0.7 percent and 11 percent lower, respectively. Interestingly, these price effects attenuate for plants and counties that are somewhat more distant, whereas the market share effects amplify. \begin{table}[tbp] \caption{Plant Prices and Market Shares} \begin{center} \begin{tabular}[h]{lcccccc} \hline\hline \rule[0mm]{0mm}{5.5mm}{\footnotesize {Dependent Variable:}} & {\ footnotesize {ln(Price)}} & {\footnotesize {ln(Price)}} & {\footnotesize {ln(Price)}} & {\footnotesize {ln(Share)}} & {\footnotesize {ln(Share)}} & {\footnotesize {% ln(Share)}} \\ \rule[0mm]{0mm} {5.5mm}{\footnotesize {Distance from Plant:}} & {\footnotesize {0-100}} & {\footnotesize {100-200}} & {\footnotesize {200-300% }} & {\footnotesize {0-100}} & {\footnotesize {100-200}} & {\ footnotesize {% 200-300}} \\ \hline \rule[0mm]{0mm}{6.5mm}{\footnotesize {ln(Distance from Plant)}} & {\footnotesize {-0.098*}} & {\footnotesize {-0.038*}} & {\footnotesize {% -0.003}} & {\ footnotesize {-1.369*}} & {\footnotesize {-2.655*}} & {\footnotesize {-5.902*}} \\ & {\footnotesize {(0.026)}} & {\footnotesize {(0.009)}} & {\footnotesize {% (0.009)}} & {\footnotesize {(0.102)}} & {\footnotesize {(0.186)}} & {\footnotesize {(0.195)}} \\ \rule[0mm]{0mm}{6.5mm}{\footnotesize {ln(Distance to Nearest}} & {\footnotesize {0.071*}} & {\footnotesize {0.018*}} & {\footnotesize {0.007*} % } & {\footnotesize {1.073*}} & {\footnotesize {0.813*}} & {\footnotesize {% 1.279*}} \\ $\; \;$ {\footnotesize {Alternative)}} & {\footnotesize {(0.019)}} & {\footnotesize {(0.004)}} & {\ footnotesize {(0.002)}} & {\footnotesize {% (0.117)}} & {\footnotesize {(0.081)}} & {\footnotesize {(0.098)}} \\ \rule[0mm]{0mm}{6.5mm}{\footnotesize {ln(Marginal Cost)}} & {\footnotesize {% 0.723*}} & {\footnotesize {0.835*}} & {\footnotesize {0.841*}} & {\footnotesize {-1.126*}} & {\footnotesize {-2.057*}} & {\footnotesize {% -3.212*}} \\ & {\footnotesize {(0.054)}} & {\footnotesize {(0.014)}} & {\footnotesize {% (0.013)}} & {\footnotesize {(0.304)}} & {\footnotesize {(0.645)}} & {\footnotesize {(0.874)}} \\ & & & & & & \\ {\footnotesize {R}$^2$} & {\footnotesize {0.7186}} & {\footnotesize {0.9209}} & {\footnotesize {0.9499}} & {\footnotesize {0.5278}} & {\footnotesize {% 0.4735}} & {\footnotesize {0.5407}} \\ {\footnotesize {\# of Obs.}} & {\footnotesize {2,840}} & {\footnotesize {% 5,460}} & {\footnotesize {6,088}} & {\footnotesize {2,840}} & {\footnotesize {5,460}} & {\footnotesize {6,088}} \\ \hline \multicolumn{7}{p{5.8in}}{{\footnotesize {\rule[0mm]{0mm}{0mm}Results of OLS regression. The units of observation are at the plant-county-year level. The dependent variables are the natural logs of the plant-county specific prices and market shares that characterize numerical equilibrium at the GMM estimates. Distance from Plant is the miles between the plant and county, times a diesel price index that equals one in 2000. Distance to Nearest Alternative is the miles between the county and the nearest other domestic plant, times the diesel price index. Marginal Cost is the marginal cost of the plant implied by the GMM estimates. All regressions include an intercept. Standard errors are robust to heteroscedasticity and correlations among observations from the same plant. Statistical significance at the one percent level is denoted by *.}}}% \end {tabular}% \end{center} \end{table} \subsection{Marginal cost estimates} The marginal costs estimates shown in Table \ref{tab:GMM} correspond to a marginal cost of \$69.40 in the mean plant-year (weighted by production). Of these marginal costs, \$60.50 is attributable to costs related to coal, electricity, labor and raw materials, and the remaining \$8.90 is attributable to high utilization rates. In Figure \ref{fig:costs}, we plot the three metrics over 1983-2003, together with the average prices that arise in numerical equilibrium at the optimized parameter vector. The constant portion of marginal costs declines through the sample period, due primarily to cheaper coal and electricity, whereas the utilization portion increases. We estimate that utilization-related expenses account for roughly 25 percent of overall marginal costs over 1997-2003, during the onset of the housing bubble. Finally, we note that the average markup (i.e., price minus marginal cost) is quite stable through the sample period around its mean of \$17.20. \begin{figure}[t] \centering \includegraphics[height=3in]{costs.eps} \caption{{\protect\footnotesize {Estimated Marginal Costs and Average Prices.% }}} \label{fig:costs} \end{figure} % The figure plots estimated marginal costs, estimated constant marginal costs, estimated marginal costs due to high % utilization, and average prices, over the sample period. The text discusses the relationships between these % metrics in detail. We calculate that the average plant-year observation has variable costs of \$51 million by integrating the marginal cost function over the production levels that arise in numerical equilibrium. Virtually all of these variable costs -- 98.5 percent -- are due to coal, electricity, labor and raw materials, rather than due to high utilization. Thus, although capacity constraints may have substantial affects on marginal costs, the results suggest that their cumulative contribution to plant costs can be minimal. Taking the accounting statistics further, we calculate that the average plant-year has variable revenues of \$73 million and that the average gross margin (variable profits over variable revenues) is 0.32. As argued in % \citeN{ryan06}, margins of this magnitude may be needed to rationalize entry given the sunk costs associated with plant construction.\footnote{% These gross margins are consistent with publicly-available accounting data. For instance, Lafarge North America -- one of the largest domestic producers -- reports an average gross margin of 0.33 over 2002-2004.}$^,$\footnote{% Fixed costs are well understood to be important for production, as well. The trade journal \textit{Rock Products} reports that high capacity portland cement plants incurred averaged \$6.96 in maintenance costs per production tonne in 1993 (\citeN{rock94}). Evaluated at the production levels that correspond to numerical equilibrium in 1993, this number implies that the average plant would have incurred \$5.7 million in maintenance costs relative to variable profits of \$17.7 million. The GMM estimation results suggest that the bulk of these maintenance costs are best considered fixed rather than due to high utilization rates. Of course, the static nature of the model precludes more direct inferences about fixed costs.} Finally, we discuss the individual parameter estimates shown in Table \ref% {tab:GMM}, each of which deviates somewhat from production data available from the Minerals Yearbooks and \citeN{epa09}. To start, the coal parameter implies that plants burn 0.64 tonnes of coal to produce one tonne of cement, whereas in fact plants burn roughly 0.09 tonnes of coal to produce each tonne of cement. The electricity parameter implies that plants use 228 kilowatt-hours per tonne of cement, whereas the true number is closer to 150. Each tonne of cement requires approximately 0.34 employee-hours yet the parameter on wages is essentially zero. Lastly, the crushed stone coefficient of 0.29 is too small, given that roughly 1.67 tonnes of raw materials (mostly limestone) are used per tonne of cement. We suspect that these discrepancies are due to measurement error in the data.\footnote{% In particular, the coal prices in the data are free-on-board and do not reflect any transportation costs paid by cement plants; cement plants may negotiate individual contracts with electrical utilities that are not reflected in the data; the wages of cement workers need not track the average wages of durable-goods manufacturing employees; and cement plants typically use limestone from a quarry adjacent to the plant, so the crushed stone price may not proxy the cost of limestone acquisition (i.e., the quarry production costs).} \subsection {A comparison to standard methods} The standard method of structural analysis for homogenous product industries assumes independent markets and Cournot competition. In this section, we contrast some of our results to those generated by the standard method in % \citeN{ryan06}, a recent paper that estimates a structural model of the portland cement industry based on data from the Minerals Yearbook and the Plant Information Summary. We focus on two economic concepts -- the aggregate elasticity of demand and the consequences of high utilization -- for which our model generates distinctly different estimates than the standard method. These discrepancies do not diminish the substantial contribution of \citeN{ryan06}, which embeds the standard method within an innovative dynamic discrete choice game and focuses primarily on the dynamic parameters. Rather, the discrepancies suggest two reasons that our model may sometimes provide more reasonable results than conventional approaches. First, we estimate the aggregate elasticity of demand to be -0.16 in the median sample year whereas Ryan works with an aggregate elasticity of -2.96, obtained from a constant elasticity demand system. The difference is due to specific specification choices -- the constant elasticity demand system produces an aggregate elasticity of -0.15 once housing permits are included as a control.\footnote{% See Table 3 in \citeN{ryan06}. We consider the inelastic estimate more plausible because portland cement is a minor cost for most construction projects (see Section \ref {sec:demres}).} Ryan cannot use the inelastic estimate because, within the context of Cournot competition, it implies firm elasticities that are small and inconsistent with profit maximization. This occurs because the Cournot model restricts each firm elasticity to be linearly related to the aggregate elasticity according the relationship $e_j = e/s_j$, where $e_j$, $e$, and $s_j$ denote the firm elasticity, the aggregate elasticity, and the firm market shares, respectively. This critique is fundamental: the standard method can be inappropriate for intermediate goods, such as portland cement, that account for only a fraction for the total production costs of the final good. By contrast, the nested logit demand system divorces the firm elasticities from the aggregate elasticity and, in our case, produces inelastic aggregate demand and elastic firm demand. Second, the two methods produce vastly different estimates of the marginal cost curve once utilization reaches the threshold level (which both methods place just above 0.85). We estimate that marginal costs increase gradually so that full utilization increases marginal costs by a total of \$12.25 relative to utilization below the threshold. By contrast, Ryan estimates that the slope of the marginal cost curve past the threshold is nearly infinite.\footnote{% Our coefficient $\gamma$ is roughly analogous to Ryan's $\delta_2$ coefficient. We estimate $\gamma$ to be 233.91, while Ryan estimates $% \delta_2$ to be $1157\times 10^7$. See Table 4 in \citeN{ryan06}.} We suspect that the difference is data driven. The standard method requires data on firm-level utilization. However, firm production is not available from the publicly-available data, and Ryan imputes utilization as annual capacity divided by annualized daily capacity. In Figure \ref{fig:ryan}, we plot total production, total annual capacity, and total annualized daily capacity in the U.S. Southwest over 1983-2003, together with total consumption. Both production and consumption are pro-cyclical, and actual utilization (i.e., production over annual capacity) varies substantively and predictably with demand conditions. By contrast, annual capacity simply tracks annualized daily capacity so that Ryan's utilization proxy is uncorrelated with demand conditions. The strength of the relationship between utilization and demand is precisely what identifies the magnitude of utilization costs. Thus, we suspect that the lower data requirements of our model -- estimation is feasible when some variables of interest (e.g., firm-level production) are unobserved -- may improve economic estimates. \begin{figure}[t] \centering \includegraphics[height=3in]{ryan.eps} \caption{{\protect\footnotesize {Consumption, Production, and Two Capacity Measures.}}} \label{fig:ryan} \end{figure} % The figure plots production, consumption, annual utilization, and annualized daily utilization, over the sample % period. The text discusses the relationships between these metrics in detail. \section{An application to competition policy \label{sec:sim}} The model and estimator may prove useful for a variety of policy endeavors. One potential application is merger simulation, an important tool for competition policy. In this subsection, we use counter-factual simulations to evaluate a hypothetical merger between Calmat and Gifford-Hill in 1986. During that year, Calmat and Gifford-Hill operated six plants and accounted for 43 percent of industry capacity in the U.S. Southwest. We calculate the loss of consumer surplus due to the unilateral effects of the merger, map the distribution of harm over the U.S. Southwest, and evaluate six alternative divestiture plans.\footnote{% We follow standard practice to perform the counterfactuals. For each merger simulation, we define a matrix $\boldsymbol{\Omega^{post}(P)}$ using Equation \ref{eq:omega} and the post-merger structure of the industry. We compute the equilibrium post-merger price vector as the solution to Equation % \ref{eq:equil}, substituting $\boldsymbol{\Omega^{post}(P)}$ for $% \boldsymbol{\Omega(P)}.$ Following \citeN{mcfadden81} and % \ citeN{smallrosen81}, the change in consumer surplus due to the merger is: $$\Delta CS =\sum_{n=1}^N \frac{\ln (1+\exp(\beta_0+ \lambda I_{nt}^{pre}))-\ln (1+\exp(\beta_0+ \lambda I_{nt}^{post}))}{\ beta_1} M_n, otag$$ where $I_{n}^{pre}$ is the inclusive value of the inside goods calculated using equilibrium pre-merger prices, $I_n^{post}$ is the inclusive value calculated using equilibrium post-merger prices, and $\beta_1$ is the price coefficient.} Table \ref{tab:mergsim} shows that the merger reduces consumer surplus by \$1.40 million in 1986, absent a divestiture. The magnitude of the effect is modest relative to the amount of commerce; by way of comparison, we calculate total pre-merger consumer surplus to be more than \$239 million. We refer to the six plants available for divestiture as Calmat1, Calmat2, Calmat3, Gifford-Hill1, Gifford-Hill2, and Gifford-Hill3, respectively. The single-plant divestitures mitigate between 31 percent and 56 percent of the harm. The ``optimal'' divestiture -- that of Gifford-Hill2 -- results in consumer harm of only \$614 thousand. \begin{table}[tbp] \caption{Divestitures and Consumer Surplus} \begin{center} \begin{tabular}[h] {lccccccc} \hline\hline & \multicolumn{7}{c}{\rule[0mm]{0mm}{5.5mm}\underline{Required Divestiture}} \\ \rule[0mm]{0mm}{4.5mm} & {\footnotesize {None}} & {\footnotesize {Calmat1}} & {\footnotesize {Calmat2}} & {\footnotesize {Calmat3}} & {\footnotesize {% Gifford-Hill1}} & {\footnotesize {Gifford-Hill2}} & {\footnotesize {% Gifford-Hill3}} \\ \hline \rule[0mm]{0mm}{5.5mm} {\footnotesize {$\ Delta$ Surplus}} & -1,397 & -964 & -618 & -797 & -827 & -614 & -891 \\ \rule[0mm]{0mm}{7.5mm} {\footnotesize {\% Mitigated}} & $\cdot$ & 31\% & 56\% & 43\% & 41\% & 56\% & 36\% \\ \hline \multicolumn {8}{p{6.1in}}{{\footnotesize {\rule[0mm]{0mm}{0mm}Results of counterfactual simulations. $\Delta$ Surplus is the change in consumer surplus due to a hypothetical merger between Calmat and Gifford-Hill in 1986, and is reported in thousands of 2000 dollars. \% Mitigated is calculated relative to the change in consumer surplus that occurs when no plant is divested. We consider six single-plant divestiture plans, and refer to the different plants as Calmat1, Calmat2, Calmat3, Gifford-Hill1, Gifford-Hill2, and Gifford-Hill3. }}}% \end{tabular}% \end{center} \end{table} We map the distribution of consumer harm over the U.S. Southwest in Figure % \ref{fig:mapdivest}, both for the merger without divestiture (panel A) and under the optimal divestiture plan (panel B). As shown in panel A, the unilateral effects of the merger are concentrated in Southern California and Arizona. Together, Maricopa County and Los Angeles County account for 60 percent of consumer harm, and more than 90 percent of consumer harm occurs only 10 counties. The best single-plant divestiture mitigates consumer harm in Southern California but does little to reduce harm in Maricopa County (see panel B). The results of an additional counterfactual exercises, in which we also divest one of the Arizona plants, suggest that a two-plant divestiture can mitigate this harm as well (results not shown). \begin{figure}[t] \centering \includegraphics[height=4in]{Consumer-Harm2.eps} \caption{{\protect\footnotesize {Loss of Consumer Surplus Due to a Hypothetical Merger between Calmat and Gifford-Hill}}} \label{fig:mapdivest} \end{figure} \section{Conclusion \label{sec:conc}} We develop a structural model of competition among spatially differentiated firms. The model accounts for transportation costs in a realistic and tractable manner. We estimate the model with relatively disaggregated data and recover the underlying structural parameters. We argue that the model and estimator together provide an appealing framework with which to evaluate competition in industries characterized by transportation costs and relatively homogenous products. We apply the model and estimator to the portland cement industry and demonstrate that (1) the framework explains the salient features of competition, (2) the framework provides novel insights regarding transportation costs and spatial differentiation, and (3) the framework could inform merger analysis and other competition policy endeavors. Although the model is static, it could be utilized to define payoffs in more dynamic settings. Such extensions could examine a number of research topics -- such as entry deterrence and product differentiation -- that have been emphasized in the theoretical literature of industrial organization since at least \citeN{hotelling29}. \newpage \bibliographystyle{chicago} \bibliography{cement} \clearpage \section{References} Aguirregabiria, V. and G. Vicentini (2006). Dynamic spatial competition between multistore firms. Mimeo. Berry, S., J. Levinsohn, and A. Pakes (1995, July). Automobile prices in market equilibrium. Econometrica, 841–890. Cardell, S. N. (1997). Variance components structures for the extreme value and logistic distributions with applications to models of heterogeneity. Journal of Economic Theory 13, 185–213. Collard-Wexler, A. (2009). Productivity dispersion and plant selection in the ready-mix concrete industry. Mimeo. d’Aspremont, C., J. Gabszewicz, and J.-F. Thisse (1979). On Hotelling’s “Stability in Competion”. Econometrica 47, 1145–1150. Davis, P. (2006). Spatial competition in retail markets: Movie theaters. The RAND Journal of Economics 37, 964–982. Dunne, T., S. Klimek, and J. A. Schmitz (2009). Does foreign competition spur productivity? Evidence from post-WWII U.S. cement manufacturing. Federal Reserve Bank of Minneapolis Staff Report. Economides, N. (1989). Stability in competition. Journal of Economic Theory 47, 178–194. EPA (2009). Regulatory Impact Analysis: National Emission Standards for Hazardous Air Pollutants from the Portland Cement Manufacturing Industry. Prepared by RTI International. Greene, W. H. (2003). Econometric Analysis 5th Ed. Prentice-Hall Upper Saddle River NJ. Hansen, L. (1982). Large sample properties of generalized method of moments estimators. Econometrica 50, 1029–1054. Hotelling, H. (1929). Stability in competition. Economic Journal 39, 41–57. Hwang, H.-s. (1990). Estimation of a linear SUR model with unequal numbers of observations. Review of Economics and Statistics 72. Jans, I. and D. Rosenbaum (1997, May). Multimarket contact and pricing: Evidence from the U.S. cement industry. International Journal of Industrial Organization 15, 391–412. La Cruz, W., J. Mart´ınez, and M. Raydan (2006). Spectral residual method without gradient information for solving large-scale nonlinear systems of equations. Mathematics of Computation 75, 1429–1448. Levenberg, K. (1944). A method for the solution of certain non-linear problems in least squares. Quarterly Journal of Mathematics 2, 164–168. Marquardt, D. (1963). An algorithm for least-squares estimation of nonlinear parameters. SIAM Journal of Applied Mathematics 11, 431–331. McBride, M. (1983, December). Spatial competition and vertical integration: Cement - concrete revisited. American Economic Review 73, 1011–1022. McFadden, D. (1981). Econometric Models of Probabilistic Choice. MIT Press. Structural Analysis of Discrete Data, C.F. Manski and D. McFadden (eds.). McManus, B. (Forthcoming 2009). Nonlinear pricing in an oligopoly market: the case of specialty coffee. RAND Journal of Economics. Newey, W. K. and D. McFadden (1994). Large sample estimation and hypothesis testing. Handbook of Econometrics 4. Pesendorfer, M. (2003). Horizontal mergers in the paper industry. RAND Journal of Economics 34, 495–515. Rock-Products (1994). Cement plant operating cost study. Overland Park, KS: Intertic Publishing. Rosenbaum, D. and S. Reading (1988). Market structure and import share: A regional market analysis. Southern Economic Journal 54 (3), 694–700. Rosenbaum, D. I. (1994). Efficiency v. collusion: evidence cast in cement. Review of Industrial Organization 9, 379–392. Ryan, S. (2009). The costs of environmental regulation in a concentrated industry. Mimeo. Salop, S. (1979). Monopolistic competition with outside goods. Bell Journal of Economics 10, 141–156. Salvo, A. (2008). Inferring market power under the threat of entry: The case of the Brazilian cement industry. Mimeo. Salvo, A. (2010). Trade flows in a spatial oligolopy: Gravity fits well, but what does it explain? Canadian Journal of Economics 43, 63–96. Seim, K. (2006). An empirical model of firm entry with endogenous product-type choices. The RAND Journal of Economics 37, 619–640. Small, K. and H. Rosen (1981). Applied welfare economics with discrete choice methods. Econometrica 49, 105–130. Srivastava, J. and M. K. Zaatar (1973). A Monte Carlo comparison of four estimators of the dispersion matrix of a bivariate normal population, using incomplete data. Journal of the American Statistical Association 68, 180–183. Syverson, C. (2004, December). Market structure and productivity: A concrete example. Journal of Political Economy 112, 1181–1222. Syverson, C. and A. Horta¸csu (2007). Cementing relationships: Vertical integration, foreclosure, productivity, and prices. Journal of Political Economy 115, 250–301. Thisse, J.-F. and X. Vives (1988). On the strategic choice of spatial price policy. American Economic Review 78, 122–137. Thomadsen, R. (2005). The effect of ownership structure on prices in geographically differentiated industries. RAND Journal of Economics 36, 908–929. Van Oss, H. G. and A. Padovani (2002). Cement manufacture and the environment, Part I: Chemistry and technology. Journal of Industrial Ecology 6, 89–105. Vogel, J. (2008, June). Spatial competition with heterogeneous firms. Journal of Political Economy 116 (3), 423–466. \appendix %\singlespace \section{Regression fits \label{app:fits}} \ renewcommand{\theequation}{A-\arabic{equation}} \setcounter{equation}{0} %\renewcommand{\thefigure}{A-\arabic{figure}} %\setcounter{figure}{0} In this appendix, we develop further the quality of the estimation fits. We focus on the ability of the model to predict the inter-temporal variation that exist in the data. Figure \ref{fig:fits1} aggregates the data and the model predictions across regions, and plots the resulting time-series. Panel A shows consumption, panel B shows production, panel C shows imports, and panel D shows average prices (imports are defined as production minus consumption). In each case, the model predictions mimic the inter-temporal patterns observed in the data. Univariate regressions of the data on the prediction explain 96 percent of the variation total consumption, 75 percent of the variation in total production, 76 percent of the variation in imports, and 91 percent of the variation in average prices.\footnote{% The model does not fully capture the fall in average prices over the 1980s and early 1990s. One possible explanation is that the model, as specified, does not incorporate potential changes to total factor productivity. % \ citeN{schmitz09} review the evidence regarding productivity and argue that the gradual elimination of onerous clauses from labor contracts improved productivity in the 1980s.} \begin{figure}[t] \ centering \includegraphics[height=4in]{fits-insample.eps} \caption{{\protect\footnotesize {GMM Estimation Fits for Aggregate Metrics. The solid lines plot data and the dashed lines plot predictions. Consumption, production, and imports are in millions of metric tonnes. Imports are defined as production minus consumption. Prices are constructed as a weighted-average of the plant-county prices and are reported in dollars per metric tonne. The $R^2$ values are calculated from univariate regressions of the observed metric on the predicted metric. }}} \label{fig:fits1} \end{figure} %The figure shows four graphs that characterize the estimation fits. Panel A is for total consumption, panel B is for %total production, panel C is for apparent imports, and panel D is for average prices. Each graph plots the predicted % and observed values over the sample period. The graphs also show the R-squared from a regression of the observed value % on the predicted value. See text for details. Figure \ref{fig:outfits2} provides analogous time-series fits for eight of the time-series for cross-region shipments. The data in panel A pertain to shipments from California (both Northern and Southern) to Northern California, and are used in estimation. The data in the remaining panels are excluded from the estimation procedure, so the corresponding fits are out-of-sample. As shown, the model predictions are close to the data in each panel and tend to track the variation well (when variation exists). \begin{figure}[t] \centering \includegraphics[height=4in]{fits-outsample.eps} \ caption{{\protect\footnotesize {GMM Estimation Fits for Cross-Region Shipments. The solid lines plot data and the dashed lines plot predictions. Shipments are expressed as a percentage of production in California (panels A-D) or Southern California (panels E-H).}}} \label{fig:outfits2} \end{figure} %The figure shows eight graphs that characterize the regression fits for the cross-region shipments. %Each graph plots both predicted and observed shipments for one particular region-to-region %combination, over the sample period. The region-to-region combination are, in order: %CA to N.CA, CA to S.CA, CA to AZ, CA to NV, S.CA to N.CA, S.CA to S.CA, S.CA to AZ, and S.CA to NV. %See text for details. \section{The uniqueness of equilibrium} The estimation procedure rests on uniqueness of equilibrium at each candidate parameter vector. The results of a Monte Carlo experiment suggest the assumption holds, at least in our application. We consider 300 parameter vectors for each of the 21 years in the sample, for a total of 6,300 candidate parameter vectors. For each $\theta_i \in \boldsymbol{\theta}$, we draw from the distribution $N(\widehat{\mu}_i,\widehat{\sigma}_i^ 2)$, where $% \widehat{\mu}_i$ and $\widehat{\sigma}_i$ are the coefficient and standard error, respectively, reported in Table \ref{tab:GMM}. We then compute the numerical equilibrium for each parameter vector, using eleven different starting vectors. We define the elements of the starting vectors to be $% P_{jnt}=\phi \overline{P_t},$ where $\overline{P_t}$ is the average price of portland cement and $\phi = 0.5, 0.6, \dots, 1.4, 1.5$. Thus, we start the equation solver at initial prices that sometimes understate and sometimes overstate the average prices in the data. The experiment produces eleven equilibrium price vectors for each parameter vector. We calculate the standard deviation of each price element across the eleven observations. Thus, we would calculate 1,260 standard deviations for a typical equilibrium price vector with 1,260 plant-county elements. The experiment provides support for the uniqueness condition if these standard deviations are small.% \footnote{% The equal-solver computes numerical equilibria for 90.3 percent of the candidate vectors. See Appendix \ref{app:estdet} for a discussion of non-convergence in the inner-loop.} This proves to be the case. In fact, the maximum maximum standard deviation is \textit{zero}, considering all prices and draws, so the experiment finds no evidence of multiple equilibria. \section {Estimation details \label{app:estdet}} We minimize the objective function using the Levenberg-Marquardt algorithm (% \citeN{levenberg44}, \citeN{marquardt63}), which interpolates between the Gauss-Newton algorithm and the method of gradient descent. We find that the Levenberg-Marquardt algorithm outperforms simplex methods such as simulated annealing and the Nelder-Mead algorithm, as well as quasi-Newton methods such as BFGS. We implement the minimization procedure using the nls.lm function in R, which is downloadable as part of the minpack.lm package. We compute numerical equilibrium using Fortran code that builds on the source code of the dfsane function in R. The dfsane function implements the nonlinear equation solver developed in \citeN{dfsane06} and is downloadable as part of the BB package. We find that Fortran reduces the computational time of the inner loop by a factor of 30 or more, relative to the dfsane function in R. The computation of equilibrium for each time period can be parallelized, which further speeds the inner loop calculations. The numerical computation of equilibrium takes between 2 and 12 seconds for most candidate parameter vectors when run on a 2.40GHz dual core processor with 4.00GB of RAM. We use observed prices to form the basis of the initial vector in the inner loop computations, which limits the distance that the nonlinear equation solver must ``walk'' to compute numerical equilibrium. In practice, the equation solver occasionally fails to compute a numerical equilibrium at the specified tolerance level (1e-13) within the specified maximum number of iterations (600). The candidate parameter vectors that generate non-convergence in the inner loop tend to be less economically reasonable, and may be consistent with equilibria that are simply too distant from observed prices. When this occurs, we construct regional-level metrics based on the price vector that comes closest to satisfying our definition of numerical equilibrium. We constrain the signs and/or magnitudes of some parameters, based on our understanding of economic theory and the economics of the portland cement industry, because some parameter vectors hinder the computation of numerical equilibrium in the inner loop. For instance, a positive price coefficient would preclude the existence of Bertrand-Nash equilibrium. We use the following constraints: the price and distance coefficients ($\beta_1$ and $% \beta_2$) must be negative; the coefficients on the marginal cost shifters ($% \ boldsymbol{\alpha}$) and the over-utilization cost ($\gamma$) must be positive; and the coefficients on the inclusive value ($\lambda$) and the utilization threshold ($\nu$) must be between zero and one. We use nonlinear transformations to implement the constraints. As examples, we estimate the price coefficient using $\widetilde{\beta_1}=\log(-\beta_1)$ in the GMM procedure, and we estimate the inclusive value coefficient using $\widetilde{% \lambda} = \log\left(\frac{\lambda}{1-\lambda}\right)$. We calculate standard errors with the delta method. \section{Data details} We make various adjustments to the data in order to improve consistency over time and across different sources. We discuss some of these adjustments here, in an attempt to build transparency and aid replication. The Minerals Yearbook reports the total production and average price of plants in the ``Nevada-Arizona-New Mexico'' region over 1983-1991, and in the ``Arizona-New Mexico'' region over 1992-2003. We scale the USGS production data downward, proportional to plant capacity, to remove for the influence of the single New Mexico plant. Since the two plants in Arizona account for 89 percent of kiln capacity in Arizona and New Mexico in 2003, we scale production by 0.89. The portland cement plant in Riverside closed its kiln permanently in 1988 but continued operating its grinding mill with purchased clinker. We include the plant in the analysis over 1983-1987, and we adjust the USGS production data to remove the influence of the plant over 1988-2003 by scaling the data downward, proportional to plant grinding capacities. Since the Riverside plant accounts for 7 percent of grinding capacity in Southern California in 1988, so we scale the production data for that region by 0.93. We exclude one plant in Riverside that produces white portland cement. White cement takes the color of dyes and is used for decorative structures. Production requires kiln temperatures that are roughly 50$^\circ$C hotter than would be needed for the production of grey cement. The resulting cost differential makes white cement a poor substitute for grey cement. The PCA reports that the California Cement Company idled one of two kilns at its Colton plant over 1992-1993 and three of four kilns at its Rillito plant over 1992-1995, and that the Calaveras Cement Company idled all kilns at the San Andreas plant following the plant's acquisition from Genstar Cement in 1986. We adjust plant capacity accordingly. The data on coal and electricity prices from the Energy Information Agency are available at the state level starting in 1990. Only national-level data are available in earlier years. We impute state-level data over 1983-1989 by (1) calculating the average discrepancy between each state's price and the national price over 1990-2000, and (2) adjusting the national-level data upward or downward, in line with the relevant average discrepancy. %%% Appendix Figures %%% %\renewcommand{\thefigure}{A-\arabic{figure}} %\setcounter{figure}{0} \end{document}
{"url":"http://www.justice.gov/atr/public/eag/257581.tex","timestamp":"2014-04-17T15:55:33Z","content_type":null,"content_length":"118116","record_id":"<urn:uuid:6f3d309a-420a-413c-bbc7-a247bdd5a1a1>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00653-ip-10-147-4-33.ec2.internal.warc.gz"}
Vermont, CA ACT Tutor Find a Vermont, CA ACT Tutor ...Communicating clearly, succinctly and with correct and proper grammar can be taught. From these basic skills, a student's own creativity and imagination can take flight and from there, the possibilities are endless. I love to help a student realize the potential in writing. 72 Subjects: including ACT Math, English, writing, reading ...I have studied microbiology extensively as part of my two years at medical school. I am familiar with gram negative vs gram postive bacteria (ex. strep vs. e coli) and I am also familiar with categorizing bacteria according to shape (ex. cocci - strep, trepenoma pallidum - spirochete). I am also... 43 Subjects: including ACT Math, English, calculus, reading ...I taught reading to adults for 13 years. Most of my students had English as a second language so phonics was important as was vocabulary development. I also worked with students so they could visualize the meaning of what they read to have a full understanding. 22 Subjects: including ACT Math, reading, English, biology ...Don't think " Oh well, it well get better next year." It won't! Fuzzy understanding at this level will be a disaster in future courses. Get help now. 24 Subjects: including ACT Math, chemistry, English, calculus ...It is important to lay a solid foundation in Algebra 1, and I know all the common student mistakes and how to correct them. A little self-confidence in this class grows in Algebra 2 and Precalculus. Algebra 2 can seem overwhelming if students didn't get something down correctly in Algebra 1. 12 Subjects: including ACT Math, calculus, geometry, algebra 2 Related Vermont, CA Tutors Vermont, CA Accounting Tutors Vermont, CA ACT Tutors Vermont, CA Algebra Tutors Vermont, CA Algebra 2 Tutors Vermont, CA Calculus Tutors Vermont, CA Geometry Tutors Vermont, CA Math Tutors Vermont, CA Prealgebra Tutors Vermont, CA Precalculus Tutors Vermont, CA SAT Tutors Vermont, CA SAT Math Tutors Vermont, CA Science Tutors Vermont, CA Statistics Tutors Vermont, CA Trigonometry Tutors Nearby Cities With ACT Tutor Bicentennial, CA ACT Tutors Cimarron, CA ACT Tutors Dockweiler, CA ACT Tutors Farmer Market, CA ACT Tutors Foy, CA ACT Tutors Griffith, CA ACT Tutors Lafayette Square, LA ACT Tutors Miracle Mile, CA ACT Tutors Oakwood, CA ACT Tutors Pico Heights, CA ACT Tutors Rimpau, CA ACT Tutors Sanford, CA ACT Tutors Santa Western, CA ACT Tutors Wilcox, CA ACT Tutors Wilshire Park, LA ACT Tutors
{"url":"http://www.purplemath.com/vermont_ca_act_tutors.php","timestamp":"2014-04-17T04:35:15Z","content_type":null,"content_length":"23635","record_id":"<urn:uuid:1e3e0784-174e-4974-8b83-52c5f4dd46e8>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00305-ip-10-147-4-33.ec2.internal.warc.gz"}
Week 4 Ilab by Dvv402 Statistics – Lab Week 4 Statistical Concepts: * Probability * Binomial Probability Distribution Calculating Binomial Probabilities * Open a new MINITAB worksheet. * We are interested in a binomial experiment with 10 trials. First, we will make the probability of a success ¼. Use MINITAB to calculate the probabilities for this distribution. In column C1 enter the word ‘success’ as the variable name (in the shaded cell above row 1. Now in that same column, enter the numbers zero through ten to represent all possibilities for the number of successes. These numbers will end up in rows 1 through 11 in that first column. In column C2 enter the words ‘one fourth’ as the variable name. Pull up Calc > Probability Distributions > Binomial and select the radio button that corresponds to Probability. Enter 10 for the Number of trials: and enter 0.25 for the Event probability:. For the Input column: select ‘success’ and for the Optional storage: select ‘one fourth’. Click the button OK and the probabilities will be displayed in the Worksheet. * Now we will change the probability of a success to ½. In column C3 enter the words ‘one half’ as the variable name. Use similar steps to that given above in order to calculate the probabilities for this column. The only difference is in Event probability: use 0.5. * Finally, we will change the probability of a success to ¾. In column C4 enter the words ‘three fourths’ as the variable name. Again, use similar steps to that given above in order to calculate the probabilities for this column. The only difference is in Event probability: use 0.75. Plotting the Binomial Probabilities 1. Create plots for the three binomial distributions above. Select Graph > Scatter Plot and Simple then for graph 1 set Y equal to ‘one fourth’ and X to ‘success’ by clicking on the variable name and using the “select” button below the list of variables. Do this two more times and for...
{"url":"http://www.studymode.com/essays/Week-4-Ilab-921751.html","timestamp":"2014-04-20T05:52:52Z","content_type":null,"content_length":"32020","record_id":"<urn:uuid:93fab782-d06d-4bf9-9adb-ed083a26344b>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00430-ip-10-147-4-33.ec2.internal.warc.gz"}
Stochastic Theory of a Data-Handling System with Multiple Sources Results 1 - 10 of 262 - IEEE/ACM Transactions on Networking , 1993 "... Absfruct- The emerging high-speed networks, notably the ATM-based Broadband ISDN, are expected to integrate through statistical multiplexing large numbers of traffic sources having a broad range of burstiness characteristics. A prime instrument for controlling congestion in the network is admission ..." Cited by 267 (5 self) Add to MetaCart Absfruct- The emerging high-speed networks, notably the ATM-based Broadband ISDN, are expected to integrate through statistical multiplexing large numbers of traffic sources having a broad range of burstiness characteristics. A prime instrument for controlling congestion in the network is admission control, which limits calls and guarantees a grade of service determined by delay and loss probability in the multiplexer. We show, for general Markovian traffic sources, that it is possible to assign a notional effective bandwidth to each source which is an explicitly identi-fied, simply computed quantity with provably correct properties in the natural asymptotic regime of small loss probabilities. It is the maximal real eigenvalue of a matrix which is directly obtained from the source characteristics and the admission criterion, and for several sources it is simply additive. We consider both fluid and point process models and obtain parallel results. Numerical results show that the acceptance set for heterogeneous classes of sources is closely approximated and conservatively bounded by the set obtained from the effective bandwidth approximation. Also, the bandwidth-reducing properties of the Leaky Bucket regulator are exhibited numerically. For a source model of video teleconferencing due to Heyman et al. with a large number of states, the effective bandwidth is easily computed. The equivalent bandwidth is bounded by the peak and mean source rates, and is monotonic and concave with respect to a parameter of the admission criterion. Coupling of state transitions of two related asynchronous sources always increases their effective bandwidth. 1. , 2002 "... A fundamental problem in text data mining is to extract meaningful structure from document streams that arrive continuously over time. E-mail and news articles are two natural examples of such streams, each characterized by topics that appear, grow in intensity for a period of time, and then fade aw ..." Cited by 260 (2 self) Add to MetaCart A fundamental problem in text data mining is to extract meaningful structure from document streams that arrive continuously over time. E-mail and news articles are two natural examples of such streams, each characterized by topics that appear, grow in intensity for a period of time, and then fade away. The published literature in a particular research field can be seen to exhibit similar phenomena over a much longer time scale. Underlying much of the text mining work in this area is the following intuitive premise --- that the appearance of a topic in a document stream is signaled by a "burst of activity," with certain features rising sharply in frequency as the topic emerges. , 1993 "... We show the existence of effective bandwidths for multiclass Markov fluids and other types of sources that are used to model ATM traffic. More precisely,we show that when such sources share a buffer with deterministic service rate, a constraint on the tail of the buffer occupancy distribution is a l ..." Cited by 187 (14 self) Add to MetaCart We show the existence of effective bandwidths for multiclass Markov fluids and other types of sources that are used to model ATM traffic. More precisely,we show that when such sources share a buffer with deterministic service rate, a constraint on the tail of the buffer occupancy distribution is a linear constraint on the number of sources. That is, for a small loss probability one can assume that each source transmits at a fixed rate called its effective bandwidth. When traffic parameters are known, effective bandwidths can be calculated and may be used to obtain a circuit-switched style call acceptance and routing algorithm for ATM networks. The important feature of the effective bandwidth of a source is that it is a characteristic of that source and the acceptable loss probability only.Thus, the effective bandwidth of a source does not depend on the number of sources sharing the buffer nor on the model parameters of other types of sources sharing the buffer. - IEEE Transactions on Automatic Control , 1994 "... Motivated by recent development in high speed networks, in this paper we study two types of stability problems: (i) conditions for queueing networks that render bounded queue lengths and bounded delay for customers, and (ii) conditions for queueing networks in which the queue length distribution of ..." Cited by 171 (20 self) Add to MetaCart Motivated by recent development in high speed networks, in this paper we study two types of stability problems: (i) conditions for queueing networks that render bounded queue lengths and bounded delay for customers, and (ii) conditions for queueing networks in which the queue length distribution of a queue has an exponential tail with rate `. To answer these two types of stability problems, we introduce two new notions of traffic characterization: minimum envelope rate (MER) and minimum envelope rate with respect to `. Based on these two new notions of traffic characterization, we develop a set of rules for network operations such as superposition, input-output relation of a single queue, and routing. Specifically, we show that (i) the MER of a superposition process is less than or equal to the sum of the MER of each process, (ii) a queue is stable in the sense of bounded queue length if the MER of the input traffic is smaller than the capacity, (iii) the MER of a departure process from a stable queue is less than or equal to that of the input process (iv) the MER of a routed process from a departure process is less than or equal to the MER of the departure process multiplied by the MER of the routing process. Similar results hold for MER with respect to ` under a further assumption of independence. These rules provide a natural way to analyze feedforward networks with multiple classes of customers. For single class networks with nonfeedforward routing, we provide a new method to show that similar stability results hold for such networks under the FCFS policy. Moreover, when restricting to the family of two-state Markov modulated arrival processes, the notion of MER with respect to ` is shown to be - IEEE Journal on Selected Areas in Communications , 1995 "... Abstract-A new approach to determining the admissibility of variable bit rate (VBR) traffic in buffered digital networks is developed. In this approach all traffic presented to the network is assumed to have been subjected to leaky-bucket regulation, and extremal, periodic, on-off regulated traffic ..." Cited by 150 (9 self) Add to MetaCart Abstract-A new approach to determining the admissibility of variable bit rate (VBR) traffic in buffered digital networks is developed. In this approach all traffic presented to the network is assumed to have been subjected to leaky-bucket regulation, and extremal, periodic, on-off regulated traffic is considered; the analysis is based on fluid models. Each regulated traffic stream is allocated bandwidth and buffer resources which are independent of other traffic. Bandwidth and buffer allocations are traded off in a manner optimal for an adversarial situation involving minimal knowledge of other traffic. This leads to a single-resource statistical-multiplexing problem which is solved using techniques previously used for unbuffered traffic. VBR traffic is found to be divisible into two classes, one for which statistical multiplexing is effective and one for which statistical multiplexing is ineffective in the sense that accepting small losses provides no advantage over requiring lossless performance. The boundary of the set of admissible traffic sources is examined, and is found to be sufficiently linear that an effective bandwidth can be meaningfully assigned to each VBR source, so long as only statistically-multiplexable sources are considered, or only nonstatistically-multiplexable sources are considered. If these two types of sources are intermixed, then nonlinear interactions occur and fewer sources can be admitted than a linear theory would predict. A qualitative characterization of the nonlinearities is presented. The complete analysis involves conservative approx-imations; however, admission decisions based on this work are expected to be less overly conservative than decisions based on alternative approaches. I. , 1995 "... The algorithms described in this thesis are designed to schedule cells in a very high-speed, parallel, input-queued crossbar switch. We present several novel scheduling algorithms that we have devised, each aims to match the set of inputs of an input-queued switch to the set of outputs more effici ..." Cited by 138 (4 self) Add to MetaCart The algorithms described in this thesis are designed to schedule cells in a very high-speed, parallel, input-queued crossbar switch. We present several novel scheduling algorithms that we have devised, each aims to match the set of inputs of an input-queued switch to the set of outputs more efficiently, fairly and quickly than existing techniques. In Chapter 2 we present the simplest and fastest of these algorithms: SLIP --- a parallel algorithm that uses rotating priority ("round-robin") arbitration. SLIP is simple: it is readily implemented in hardware and can operate at high speed. SLIP has high performance: for uniform i.i.d. Bernoulli arrivals, SLIP is stable for any admissible load, because the arbiters tend to desynchronize. We present analytical results to model this behavior. However, SLIP is not always stable and is not always monotonic: adding more traffic can actually make the algorithm operate more efficiently. We present an approximate analytical model of this behavior. SLIP prevents starvation: all contending inputs are eventually served. We present simulation results, indicating SLIP's performance. We argue that SLIP can be readily implemented for a 32x32 switch on a single chip. In Chapter 3 we present i-SLIP, an iterative algorithm that improves upon SLIP by converging on a maximal size match. The performance of i-SLIP improves with up to log 2 N iterations. We show that although it has a longer running time than SLIP, an i-SLIP scheduler is little more complex to implement. In Chapter 4 we describe maximum or maximal weight matching algorithms based on the occupancy of queues, or waiting times of cells. These algorithms are stabl... , 1996 "... This paper presents a personal view of work to date on effective bandwidths, emphasising the unifying role of the concept: as a summary of the statistical characteristics of sources over different time and space scales; in bounds, limits and approximations for various models of multiplexing unde ..." Cited by 132 (4 self) Add to MetaCart This paper presents a personal view of work to date on effective bandwidths, emphasising the unifying role of the concept: as a summary of the statistical characteristics of sources over different time and space scales; in bounds, limits and approximations for various models of multiplexing under quality of service constraints; and as the basis for simple and robust tariffing and connection acceptance control mechanisms for poorly characterized traffic. The framework assumes only stationarity of sources, and illustrative examples include periodic streams, fractional Brownian input, policed and shaped sources, and deterministic multiplexing. - IEEE/ACM Transactions on Networking , 1993 "... We propose a new way for evaluating the performance of packet switching communication networks under a fixed (session based) routing strategy. Our approach is based on properly bounding the probability distribution functions of the system input processes. The bounds we suggest, which are decaying ex ..." Cited by 122 (3 self) Add to MetaCart We propose a new way for evaluating the performance of packet switching communication networks under a fixed (session based) routing strategy. Our approach is based on properly bounding the probability distribution functions of the system input processes. The bounds we suggest, which are decaying exponentials, possess three convenient properties: When the inputs to an isolated network element are all bounded, they result in bounded outputs, and assure that the delays and queues in this element have exponentially decaying distributions; In some network settings bounded inputs result in bounded outputs; Natural traffic processes can be shown to satisfy such bounds. Consequently, our method enables the analysis of various previously intractable setups. We provide sufficient conditions for the stability of such networks, and derive upper bounds for the interesting parameters of network performance. 1 Introduction In this paper we consider data communication networks, and the problem of ev... , 1995 "... We analyse the queue Q L at a multiplexer with L inputs. We obtain a large deviation result, namely that under very general conditions lim L!1 L \Gamma1 log P[Q L ? Lb] = \GammaI (b) provided the offered load is held constant, where the shape function I is expressed in terms of the cumulant ..." Cited by 114 (11 self) Add to MetaCart We analyse the queue Q L at a multiplexer with L inputs. We obtain a large deviation result, namely that under very general conditions lim L!1 L \Gamma1 log P[Q L ? Lb] = \GammaI (b) provided the offered load is held constant, where the shape function I is expressed in terms of the cumulant generating functions of the input traffic. This provides an improvement on the usual effective bandwidth approximation P[Q L ? b] e \Gammaffib , replacing it with P[Q L ? b] e \GammaLI(b=L) . The difference I(b) \Gamma ffi b determines the economies of scale which are to be obtained in large multiplexers. If the limit = \Gamma lim t!1 t t (ffi) exists (here t is the finite time cumulant of the workload process) then lim b!1 (I(b) \Gamma ffi b) = . We apply this idea to a number of examples of arrivals processes: heterogeneous superpositions, Gaussian processes, Markovian additive processes and Poisson processes. We obtain expressions for in these cases. is zero for independent arrivals, but positive for arrivals with positive correlations. Thus economies of scale are obtainable for highly bursty traffic expected in ATM multiplexing. - IEEE/ACM Trans. Networking , 1993 "... This paper, together with [1] and [2], opens a new window for the study of queueing performance in a richer, heterogeneous input environment. It offers a unique way to understand the effect of second- and higher-order input statistics on queues, and develops new concepts of traffic measurement, netw ..." Cited by 114 (28 self) Add to MetaCart This paper, together with [1] and [2], opens a new window for the study of queueing performance in a richer, heterogeneous input environment. It offers a unique way to understand the effect of second- and higher-order input statistics on queues, and develops new concepts of traffic measurement, network control and resource allocation for high speed networks in the frequency domain. The technique developed in this paper applies to the analysis of queue response to the individual effects of input power spectrum, bispectrum, trispectrum, and input rate steady state distribution. Our study provides clear evidence that of the four input statistics, the input power spectrum is most essential to queueing analysis. Furthermore, input power in the lowfrequency band has a dominant impact on queueing performance, whereas high-frequency power to a large extent can be neglected. The research reported here was supported by NSF under grant NCR-9015757 and by Texas Advanced Research Program under gr...
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1622","timestamp":"2014-04-20T10:29:59Z","content_type":null,"content_length":"42818","record_id":"<urn:uuid:4fc6d041-9ae1-4900-b3e8-bd4c25ef1a50>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00148-ip-10-147-4-33.ec2.internal.warc.gz"}
Nonlinear Polarization <<< | >>> | Feedback In the RP Photonics Buyer's Guide you can find plenty of suppliers for hundreds of photonics products. Ask RP Photonics how calculations and simulations of nonlinear processes may benefit your product development. Definition: the part of the light-induced electric polarization which depends nonlinearly on the electric field of the light German: nichtlineare Polarisation When light propagates in a transparent medium, its electric field causes some amount of electric polarization in the medium, i.e. some density of electric dipole moment. (This must not be confused with the polarization of the light field, which is the direction of its electric field.) That polarization propagates together with the electromagnetic field in the form of a polarization wave. Whereas at low light intensities the electric polarization is proportional to the electric field strength, nonlinear contributions become important at high optical intensities, as they can e.g. be produced with lasers. Second-order Nonlinear Polarization The second (lowest) order of nonlinear polarization can arise from a χ^(2) nonlinearity which can occur only in crystal materials with a non-centrosymmetric crystal structure. (Nonlinear effects at crystal surfaces are an exception.) The nonlinear polarization then has a component which depends quadratically on the electric field of an incident light wave. More precisely, the tensor nature of the nonlinear susceptibility needs to be considered: where P[i] is the i-th Cartesian coordinate of the polarization, χ^(2) is the nonlinear susceptibility, and E(t) is the optical electric field. More commonly, this is written as with the nonlinear tensor d. Many tensor components can actually be zero for symmetry reasons, depending on the crystal class. The nonlinear polarization contains frequency components which are not present in the exciting beam(s). Light with such frequencies can then be generated in the medium (→ nonlinear frequency conversion). For example, if the input field is monochromatic, the nonlinear polarization also has a component with twice the input frequency (→ frequency doubling). As the polarization has the form of a nonlinear polarization wave, the frequency-doubled light is also radiated in the direction of the input beam. Other examples are sum and difference frequency generation, optical rectification, parametric amplification and oscillation. Third-order Nonlinear Polarization The next higher order of nonlinear polarization can arise from a χ^(3) nonlinearity, as it occurs in basically all media. This can give rise to various phenomena: Phase Matching In many cases, the nonlinear mixing products can be efficiently accumulated over a greater length of crystal only if phase matching is achieved. Otherwise, the field amplitudes at the exit face, generated at different locations in the crystal, essentially cancel each other, and the apparent nonlinearity is weak. Some nonlinear effects, however, are either automatically phase-matched (e.g. self-phase modulation) or do not need phase matching (e.g. Raman scattering). [1] D. A. Kleinman, “Nonlinear dielectric polarization in optical media”, Phys. Rev. 126 (6), 1977 (1962) See also: nonlinearities, nonlinear index, polarization waves, nonlinear crystal materials, nonlinear frequency conversion, phase matching Categories: nonlinear optics, physical foundations How do you rate this article? Your general impression: don't know poor satisfactory good excellent Technical quality: don't know poor satisfactory good excellent Usefulness: don't know poor satisfactory good excellent Readability: don't know poor satisfactory good excellent Comments: Found any errors? Suggestions for improvements? Do you know a better web page on this topic? Spam protection: (enter the value of 5 + 8 in this field!) If you want a response, you may leave your e-mail address in the comments field, or directly send an e-mail. If you like our website, you may also want to get our newsletters!
{"url":"http://www.rp-photonics.com/nonlinear_polarization.html","timestamp":"2014-04-19T19:35:15Z","content_type":null,"content_length":"25526","record_id":"<urn:uuid:fd0c1f31-8970-460c-9577-aad89cb3d76a>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00144-ip-10-147-4-33.ec2.internal.warc.gz"}
Understanding Nonlinear Vector Analysis October 10, 2011 Today's R&D engineers and scientists tasked with researching and designing high-performance, active RF devices face many challenges. One key challenge involves characterizing the devices' nonlinear behavior so that it can be reduced to provide linear high-power solutions. This task is especially vital in the telecommunications industry where nonlinear device behavior contributes to information interference and the reduction in effective bandwidth. In this industry, amplifiers are considered an indispensable component and yet due to their nonlinear behavior, are often the cause of wasted frequency spectrum. Designing a power amplifier to operate only within its region of linear operation also results in an inefficient use of the available power. Amplifiers are often driven into the nonlinear region of operation and then linearized around that point. Consequently, it has become increasingly important to understand the nonlinear behaviors of active RF components such as power amplifiers and frequency doublers which, in turn, makes accurate measurement of a device's nonlinear behavior all the more crucial. All active devices exhibit nonlinear behaviors to varying degrees and therefore, the list of possible devices that could benefit from this measurement insight can be quite large. Examples of this are LNAs, T/R modules and converters, as well as passive devices such as inductors with magnetic cores or filters driven at high powers. Unfortunately, making nonlinear measurements is not an easy task, especially considering that currently available tools and models for accomplishing this goal are generally difficult to use and do not provide the information required. What's needed is a solution specifically designed to accurately measure the nonlinear effects of active RF devices. With this information, designers will then be better equipped to control and minimize nonlinear device behavior and more accurately, easily and quickly create linear high-power solutions. Agilent has introduced two primary methods for measuring the nonlinear effects of a device under test (DUT): nonlinear component characterization and X-parameters. Nonlinear component characterization provides calibrated, vector-corrected waveforms of the incident, transmitted and reflected waves from the DUT. Vector calibration, power calibration and the use of a poly-harmonic phase reference and calibration removes the systematic error terms.With this measurement, all receivers must be measured simultaneously for each frequency point. The information derived from nonlinear component characterization measurements enables the engineer to better understand and more deterministically control the nonlinear behavior of the DUT. X-parameters are the logical, mathematically-correct extension of S-parameters into a nonlinear, large-signal operating environment (see the sidebar, "A Closer Look at X-Parameters"). X-parameter measurements require an additional source which is used to drive the DUT with both a large and small signal tone at the appropriate frequencies and phases, at the same time. Careful control of the phase and amplitude of these signals is therefore critical. Measuring the amplitudes and phases of the scattered waves under these conditions allows for the identification of X-parameters. These parameters provide the engineer with information on such things as device gain and match, while the device is operating in either a linear or nonlinear state. The X-parameters can be extracted into Agilent's Advanced Design System (ADS) or displayed like S-parameters. A Closer Look at X-Parameters X-parameters are to nonlinear measurements what S-parameters are to linear measurements. Consider, for example, that S-parameters were developed as a method to analyze and model the linear behavior of RF components and play a key role in analyzing, modeling and designing complex systems which cascade multiple individual components. They are related to familiar measurements such as S11 input match, S22 output match, S21 gain/loss, and S12 isolation, and can be easily imported into electronic simulation tools. While extremely useful and powerful, S-parameters do have limitations and are defined only for small-signal linear systems. S-parameters are measured using traditional network analyzers. To a smaller extent, network analyzers can also provide insight into the nonlinear behavior of devices via approximation techniques such as using un-ratioed receiver measurements and offsetting the measurement receivers' frequency from the source stimulus frequency. By employing these techniques, simple gain compression, harmonic amplitude only, frequency converter match, conversion loss/gain, and group delay can be measured. In contrast to S-parameters, X-parameters were developed to represent and analyze the nonlinear and linear behavior of RF components in a much more robust and complete manner. As an extension of S-parameters under large-signal operating conditions, the devices are driven into saturation (the real-word operating environment for many components) and then the X-parameters are measured under these conditions. When making this measurement, no knowledge is used or required concerning the internal circuitry of the DUT. Rather, the measurement is a stimulus response model of the voltage waves. In other words, the absolute amplitude and cross frequency relative phase of the fundamental, and all related harmonics, are accurately measured and represented by X-parameters. Because the X-parameters relate cross-frequency dependencies, there are usually many more X-parameters than S-parameters, such as in the case of the gain of the output fundamental frequency to the input third harmonic. Here, there are eight X-parameters for this simple case with only one harmonic and no power dependency. In contrast, there can never be more than four S-parameters.X-parameters also depend explicitly on the large signal state of the device, making input power a variable. In contrast, S-parameters are assumed to be power independent. The accurate and robust nature of X-parameters makes them extremely useful for engineers and scientists trying to better understand the nonlinear behavior of their active components. As an example, consider the design of a power amplifier in which the designer drives the amplifier into the nonlinear region to get the maximum output power and to extract the maximum efficiency. A feedback circuit is then used to compensate for the nonlinear effects, causing the output to behave like a high-power linear device. In this case, the typical approach to suppressing the power amplifier's harmonic outputs is through the use of filters and other components. But, if the filtering component's input match does not match the output match of the specific harmonic of interest (generated by the amplifier), then the harmonic's attenuation level could be significantly different from what the designer anticipates. This situation may cause the designer to brut force a solution by 'trial and error' -- a very tedious and time consuming experience at best. One way to avoid this dilemma is by obtaining accurate phase and amplitude information from the X-parameters and then employing appropriate simulation tools. With this approach, designers can design the most robust systems possible in the shortest amount of time and with the highest degree of accuracy. Utilizing a solution that employs both nonlinear component characterization and X-parameters provides critical insight, essential to accurately measuring a device's nonlinear behavior. Agilent Technologies' Nonlinear Vector Network Analyzer (NVNA) supports both methods in a highly integrated, powerful and simple to use instrument. With a minimum amount of external hardware, this solution effectively converts a 4-port PNA-X microwave network analyzer into a high-performance nonlinear analyzer from 10 MHz to 67 GHz (Figure 1). Because the NVNA is based on a standard PNA-X microwave network analyzer, it provides all the power, flexibility and measurement capability of the PNA-X for linear measurements. It can then easily switch into the NVNA mode for nonlinear measurement. Figure 1: Agilent’s NVNA software, for use with the PNA-X microwave network analyzer, establishes a new industry standard in RF nonlinear network analysis from 10 MHz to 67 GHz. With the NVNA's nonlinear component characterization, all of the DUT's input and output spectra are measured. Both the amplitude and phase of the full spectra -- fundamental, harmonics, mixing terms and cross-frequency products -- are then displayed in the PNA-X network analyzer. The relative phase and absolute amplitude of any of the frequencies of interest can also be displayed. With this information, the designer can design a matching circuit to remove the nonlinear effects of, for example, the third harmonic as the amplitude and phase of that spectral component is explicitly known. This data can be displayed in frequency, time or power domains, as well as in terms of user-defined ratios such as I/V to display dynamic loadlines. With its ability to display data in various domains, the NVNA provides the designer with greater insight into nonlinear component behavior. For example, if the DUT's output is distorted in the time domain, the designer can change to the frequency domain display and observe the individual frequency components' amplitude and phase. Next, the power can be varied to observe the sensitivity and level of significance which that spectral component has for given power levels, relative to the fundamental frequency of interest. The designer might also want to measure the group delay through a frequency doubler. This is a relatively easy task with the NVNA as it can measure the input and output stimulus phase -- relative to the calibrated phase reference -- as well as the signals amplitude. As an additional benefit, all measurement-based data collected with the NVNA can be exported to design models of the designer's choice. The NVNA's other method of measuring nonlinear component behavior is through the use of nonlinear scattering parameters or X-parameters. Such functionality provides an accurate portrayal of both nonlinear device and cascaded nonlinear device behavior using measurement-based data. Additionally, the X-parameters can be accurately cascaded from individual devices using Agilent's ADS to simulate and design more complex modules and systems. These parameters separate out a critical term from the measurement, XT, which enables accurate nonlinear design and simulation by taking cross-frequency mismatch properly into account for nonlinear components. When a single-tone stimulus is applied to the DUT, the X-parameters capture the device behavior at the fundamental frequencies and harmonics. Using a two-tone stimulus captures the fundamentals, harmonics and mixing products behavior. This is particularly useful in analyzing the bandwidth dependent characteristics and gaining additional insight into memory effects. X-parameters can also be extracted from 3-port mixers and converters. One large tone is applied to the RF and one large tone is applied to the LO. All three ports will be fully characterized in a similar fashion to the previously stated two port amplifier case. Combinations of amplifiers and mixers/converters can then be cascaded to predict and optimize module and system design and performance. Accurately measuring and reducing a device's nonlinear behavior is crucial to creating linear high-power solutions for use in a range of applications in aerospace and defense and telecommunications, just to name a few. While conventional solutions to this challenge fail to provide the information and accuracy required, Agilent's NVNA software uses component characterization and X-parameters to quickly, easily and with the highest degree of accuracy measure nonlinear behavior in a DUT. With its full match correction and accurate amplitude, as well as cross-frequency relative phase information, the NVNA is today providing a new standard in accuracy and insight into the behaviors of nonlinear components. About Agilent Technologies Agilent Technologies Inc. (NYSE: A) is the world's premier measurement company and a technology leader in communications, electronics, life sciences and chemical analysis. The company's 19,000 employees serve customers in more than 110 countries. Agilent had net revenues of $5.4 billion in fiscal 2007. Information about Agilent is available on the Web at www.agilent.com. Press Release: Agilent Technologies Introduces World’s Highest Performing 67 GHz PNA-X Vector Network Analyzer Agilent Technologies Expands Industry’s Most Flexible PNA-X Network Analyzer for Active Device Test with 13.5, 43.5, 50 GHz Models Agilent Technologies Introduces Breakthrough Technology to Analyze Nonlinear Behaviors of Active Components Backgrounder: Understanding Component Characterization Using the PNA-X Series Microwave Network Analyzers Agilent Documents: For more information, go to www.agilent.com/find/nvna Contacts: Janet Smith, Agilent +1 970 679 5397 > More Press Releases > More Backgrounders
{"url":"http://www.agilent.com/about/newsroom/tmnews/background/NVNA/index.html","timestamp":"2014-04-19T00:03:00Z","content_type":null,"content_length":"39239","record_id":"<urn:uuid:4db2e161-3968-4029-a4be-9ac460da46c4>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00128-ip-10-147-4-33.ec2.internal.warc.gz"}
Kepler's 3rd Law and the Doppler Effect I'm assuming that "c" is the speed of light. What is the M? Mass of the planet? Sorry, I forgot to define my variables: M is the planet's mass, v is the velocity relative to your line of sight, v is the circular velocity of the orbit, [itex]\Delta \lambda[/itex] is the shift in wavelength relative to its rest frame value, [itex]\lambda[/itex] is the rest frame value of the wavelength, c is the speed of light, a is the semimajor axis, and G is the gravitational constant.
{"url":"http://www.physicsforums.com/showthread.php?t=93232","timestamp":"2014-04-20T14:15:38Z","content_type":null,"content_length":"39430","record_id":"<urn:uuid:86386212-edc1-4cf8-bad7-e41921754f8e>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00307-ip-10-147-4-33.ec2.internal.warc.gz"}
CSR PS-308 Split Packages Split Bundles of Flat-Size and Irregular Parcel Mailpieces Updated September 2005 PS-308 (705.8.9.4) This CSR discusses balancing (or leveling) of bundles for those limited situations in which the pieces that would ordinarily constitute the “last bundle” for a presort destination are less than the minimum amount required for the destination. In addition, this CSR clarifies rate eligibility for the preparation of bundles of flat-size and irregular parcel mailpieces prepared under the following circumstances: 1. When the pieces that would ordinarily be in the “last bundle” or “balanced bundle” for a presort destination are fewer than the minimum number required for the destination. 2. When the weight of the last bundle or balanced bundle is less than the required minimum weight. Mailers use software that balances (or levels) the number of pieces or the weight of their bundles. This may result in multiple bundles with fewer than the required minimum number of pieces or less than the required minimum weight for a presort destination. The number of bundles created because of balancing may not exceed the number of bundles that would have resulted if only the last bundle did not meet the minimum requirements. We allow mailers to prepare a last bundle or a balanced bundle for a presort destination when the bundle is less than the minimum weight or it contains fewer than the minimum number of pieces without loss of rate eligibility providing the “logical bundle” to a presort destination meets the minimum quantity for the rate claimed. For example, consider two mailings of bundles of flat-size Standard Mail pieces that mailers might bundle and place on pallets: ● Mailing A has 17 pieces to a presort destination; 12 pieces in one bundle reach the maximum height for one bundle to maintain its integrity, leaving just 5 pieces for the “last bundle." The mailer could make one bundle of 12 pieces and one “last bundle” of 5 pieces (a total of two bundles) as allowed under DMM 705.8.9.4*. Alternatively, the mailer could balance the bundles by placing 9 pieces in one bundle and 8 pieces in the other bundle (still just two bundles). ● Mailing B has 18 pounds of pieces to a presort destination; 10 pounds in one bundle reach the maximum height for one bundle to maintain its integrity. This mailer could make one bundle of 10 pounds and one “last bundle” of 8 pounds (a total of two bundles); or the mailer could balance the bundles by making two bundles of 9 pounds each (still just 2 In both cases, the mailers are creating the same number of bundles, but are balancing them to create more stable bundles. Bundle balancing may result in bundles to one presort destination having less than the minimum number of pieces or being of less than the minimum weight. However, in all cases, this must NOT create additional bundles for the Postal Service to handle. Similar balancing (or leveling) also is acceptable for bundles of flat-size Periodicals and Package Services mailpieces placed in sacks or on pallets, bundles of Standard Mail flat-size pieces placed in sacks, bundles of Standard Mail, Periodicals, and Package Services irregular parcels placed in sacks or on pallets, and for bundles of First-Class Mail flat-size mailpieces placed in flat trays. *See also DMM 705.8.9.3, 705.8.9.5, 335.2.7, 345.2.10, 365.2.8, 375.2.7, 385.2.7, 445.2.8, 465.2.6, 475.2.6, and 485.2.6. Sherry Suggs Mailing Standards United States Postal Service Washington DC 20260-3436
{"url":"http://pe.usps.gov/text/CSR/PS-308.htm","timestamp":"2014-04-17T21:25:05Z","content_type":null,"content_length":"48645","record_id":"<urn:uuid:b6603353-ed23-4895-8eb1-159fd40df715>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00285-ip-10-147-4-33.ec2.internal.warc.gz"}
Great Neck Estates, NY Find a Great Neck Estates, NY Calculus Tutor ...I do research on machine learning in music & audio processing applications. In my spare time, I enjoy hiking, traveling, learning languages, producing/recording music, and cooking. I speak English and Russian fluently, and have basic/intermediate Spanish. 10 Subjects: including calculus, physics, geometry, statistics ...I primarily tutor high school math and science (physics and chemistry), but I have also tutored college level math (calculus, statistics), mechanical engineering (thermodynamics, fluid mechanics, and heat transfer) and computer science. The thing that makes a little different from most tutors is... 26 Subjects: including calculus, chemistry, physics, statistics ...It dealt with applying statistical methods in biology and medicine. Topics covered were Statistical analysis for cross-sectional studies and case-control studies. Applications of categorical data analysis such as Chi-Square tests, Poisson Approximation, binomial tests, and Logistic Regressions ... 18 Subjects: including calculus, statistics, algebra 1, algebra 2 ...Over the last 15 years, I have worked in Management Consulting and Investment Banking, but tutoring and education are now my primary focus.I have been tutoring various standardized tests (SAT, GMAT, GRE, SSAT, SHSAT) for 5 years and have over 25 years of experience tutoring in general. My GMAT s... 11 Subjects: including calculus, geometry, algebra 1, algebra 2 ...Coming from an engineering background, I seek to engage students in using critical thinking in solving practical problems. Moreover, I am able to communicate fluently in French and Spanish and I intend on using this asset to accommodate students who are seeking immersion in these foreign languag... 23 Subjects: including calculus, Spanish, physics, geometry
{"url":"http://www.purplemath.com/Great_Neck_Estates_NY_Calculus_tutors.php","timestamp":"2014-04-19T02:32:05Z","content_type":null,"content_length":"24714","record_id":"<urn:uuid:b0e5db9b-060e-4efa-b9f2-3c3072cc4404>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00582-ip-10-147-4-33.ec2.internal.warc.gz"}
Local ring December 5th 2010, 11:17 AM Local ring Could You help me with this task? "Proof that a commutative ring a is a local ring iff for any $a, b \in A$, from this that $a+b=1$ we have $a$ is an invertible element or $b$ is an invertible element." Proof. $(\Rightarrow)$$a,b\in A$ - any element which satysfied equality $a+b=1$ and suppose that $a,botin U(A)$, where $U(A)$ is a set of invertible elements of $A$. We have then that $a,b\in \ mathfrak{m}$. Since $\mathfrak{m}$ is an ideal, then also $1=a+b\in \mathfrak{m}$, and so $\mathfrak{m}=A$. This is contradiction, so $a \in U(A)$ or $b \in U(A)$. $(\Leftarrow)$ - I'm not sure. Could You help me with this implication? December 5th 2010, 12:20 PM Could You help me with this task? "Proof that a commutative ring a is a local ring iff for any $a, b \in A$, from this that $a+b=1$ we have $a$ is an invertible element or $b$ is an invertible element." Proof. $(\Rightarrow)$$a,b\in A$ - any element which satysfied equality $a+b=1$ and suppose that $a,botin U(A)$, where $U(A)$ is a set of invertible elements of $A$. We have then that $a,b\in \ mathfrak{m}$. Since $\mathfrak{m}$ is an ideal, then also $1=a+b\in \mathfrak{m}$, and so $\mathfrak{m}=A$. This is contradiction, so $a \in U(A)$ or $b \in U(A)$. $(\Leftarrow)$ - I'm not sure. Could You help me with this implication? Let $\displaystyle{S:=A-U(A)$ , and take $x,y\in S$ . If $x+yotin S$ then $x+y\in U(A)\Longrightarrow \exists c\in A\,\,s.t.\,\,1=c(x+y)=cx+cy\Longrightarrow cx\in U(A)\,\,or\,\,cy\in U(A)$ , but in both cases we get a straightforward contradiction (can you see why?) , so it must be that $x+y\in S$ and from here $S$ is an ideal, and we're thus done. December 5th 2010, 12:53 PM I found longer solution to this implication. Your is shorter and easier ;)
{"url":"http://mathhelpforum.com/advanced-algebra/165360-local-ring-print.html","timestamp":"2014-04-24T10:14:25Z","content_type":null,"content_length":"12650","record_id":"<urn:uuid:017d44c5-72cc-49ab-981c-011bb459c5c3>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00236-ip-10-147-4-33.ec2.internal.warc.gz"}
WebSCHEDULE - Course Details MATH 695 - Independent Study at Cañada College for Spring 2013 (CRN : 43851) Most Mathematics courses have prerequisites that are listed as part of the course description in the Schedule of Classes. Before registering for a Mathematics course, be sure you have completed the stated prerequisite. Note to all Algebra students: The Math Department uses a single textbook for the Elementary and Intermediate Algebra sequence. This allows students to complete the Algebra sequence in three different ways: a four semester sequence of MATH 111, 112, 122, and 123 each covering one fourth of the book; a two semester sequence of MATH 110 and MATH 120 each covering half of the book; or a combination of the above. Please see your counselor to be sure you take the correct course. MATH 695 INDEPENDENT STUDY Units (Grade Option) 0.5-6 (No more than 3 units per semester); Class Hours: By Arrangement; Recommended: Eligibility for READ 836, and ENGL 836 or ESL 400; Prerequisite(s): None. Description: Self-paced, individualized instruction is provided in selected areas to be arranged with an instructor and student and approved by the dean. Varying modes of instruction can be used -- lecture, laboratory, research, skill development, etc. May be repeated for credit up to 6 units. Transfer: CSU. Cañada College MAP Department: Number of Units: Textbook: View textbook in Bookstore Section Information as of Sunday, April 20th 2014 - 09:41:58 pm │Instructor│Meeting Date│Meeting Time│Days│Building│Room│Section│ Session │ │Enriquez,A│01/14-05/17 │TBA │ │16 │0108│AA │Other Independent Study │ ┃ Critical Dates for this Course ┃ ┃Last day to add class: │28-JAN-2013 ┃ ┃Last day to drop with a refund: │28-JAN-2013 ┃ ┃Last day to drop without a "W": │03-FEB-2013 ┃ ┃Last day to drop with a "W": │25-APR-2013 ┃ ┃Transferable: │CSU ┃ View all sections for MATH 695 - Independent Study
{"url":"https://webschedule.smccd.edu/course_detail_desc.php?Term_Desc=Spring%202013&Subj_Code=MATH&Crse_Numb=695&College_Desc=Canada%20College&CRN=43851","timestamp":"2014-04-21T04:43:09Z","content_type":null,"content_length":"8580","record_id":"<urn:uuid:5f5bd044-cbef-4c03-b7de-c01994f491b9>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00432-ip-10-147-4-33.ec2.internal.warc.gz"}
iterative deepening iterative deepening <algorithm> A graph search algorithm that will find the shortest path with some given property, even when the graph contains cycles. When searching for a path through a graph, starting at a given initial node, where the path (or its end node) has some desired property, a depth-first search may never find a solution if it enters a cycle in the graph. Rather than avoiding cycles (i.e. never extend a path with a node it already contains), iterative deepening explores all paths up to length (or "depth") N, starting from N=0 and increasing N until a solution is found. Last updated: 2004-01-26 Try this search on Wikipedia, OneLook, Google Nearby terms: ITAR « Iterated Function System « iteration « iterative deepening » iterator » Iternet » IT governance Copyright Denis Howe 1985
{"url":"http://foldoc.org/iterative+deepening","timestamp":"2014-04-20T21:25:42Z","content_type":null,"content_length":"5263","record_id":"<urn:uuid:cf20996f-07c7-4658-b0d9-0b10fbb57e25>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00596-ip-10-147-4-33.ec2.internal.warc.gz"}
markov process and classification of states April 18th 2011, 06:56 PM #1 Mar 2010 markov process and classification of states a)classify each of the states for the following markov chain (ie transient, recurent absorbing) and state how many classes there are 0.3 0.7 0 0.3 0 0.7 i have that each state is recurent and there is a single recurent class. b)show that this markov chain is aperiodic without computing P^n (ie use the fact that if we can transition to a state in integer values>1 then the chain has no specific period. i am still trying to make sense of the periodicity concept. Does it mean that we can get back to a certain state, having started there after transitioning to all other possible states in only certain integer valus or multiples of this. Or does it mean that we can get back to a certain state having started there in any number of steps, but it is not actually necassary to transition through all possible states. Follow Math Help Forum on Facebook and Google+
{"url":"http://mathhelpforum.com/advanced-statistics/178026-markov-process-classification-states.html","timestamp":"2014-04-17T17:08:39Z","content_type":null,"content_length":"30268","record_id":"<urn:uuid:4eafc5bb-cccc-4634-8023-b1225d13be41>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00018-ip-10-147-4-33.ec2.internal.warc.gz"}
Posted by Jasmine20 on Wednesday, January 24, 2007 at 10:42pm. Okay now i need explanation how to do these types of problems. Directions: Divide Realize that you have 7x^3y^3 on the bottom. Now in order to get rid of that, you want to have the same thing in the top, multiplied by something else. That way, the 7x^3y^3 on the top and bottom will cross out, because any number over itself is 1. So, take out 7x^3y^3 from the numerator: Now once you cross out the (7x^3y^3) from the fraction, you are left with x^2y^2-3xy+2. So, to factor this, you need the beginnings of the parentheses to be xy, so when you multiply xy times xy you will get x^2y^2. (xy )(xy ) Now, you know that the ends of the parentheses must multiply to 2, so they will be 2 and 1, but we don't know if they are positive and negative yet. (xy 2)(xy 1) Now, you want the outer terms multiplied plus the inner terms multiplied to equal negative 3. So, if the 2 and 1 were positive, we'd end up with +3 and we want -3, so we know the signs inside must be where did you get: the -3xy+2 I took out 7x^3y^3 from everything in the numerator. So, from the 7x^5y^5, I took out 7x^3y^3 and separated it from x^2y^2, because if you multiply those 2, you will end up with the original 7x^5y^5. I did the same for -21x^4y^4. If you remove from that 7x^3y^3 (basically, what will you multiply by 7x^3y^3 to get back to -21x^4y^4?), you will get -3xy. For the last part, 7x^3y^3 times 2 is the original, 14x^3y^3. Wow, that sounds way more confusing than it is. If you ask your teacher to show you it is much easier written and pointed out than me trying to explain it on here. • math,algebra,help - Tina, Monday, April 25, 2011 at 6:45pm 3y divided by 7 Related Questions algebra - I need help with this problem: Dividing 7x^5y^5-21x^4y^4+14x^3y^3/7x^... math help! - Directions are: Divide The problem is: (7x^5y^5-21^x^4y^4+14x^3y^3... hard math - please someone help me! i've tried so hard! factor m^6 + n^2 28x^2y^... Math - I cannot figure this out I keep getting the order pair(1.24,-3.44), but ... math,factoring,help - Can someone please show me step by step how to do these. I... math - Solve the systems of equations by elimination. 1. -2x+y=7 2. 2x+y=-9 6x+... intermediate algebra - Simplify the following expressions, and rewrite it in an ... algebra - find the product of (x^2-3x+5)with the quotient of (21x^6- 14x^5-7x^3)... math - 2x+3y=6 (solve for x) 2x+3y=6 Get unknown on one side and everything else... Derive - Find the derivative of sin(2x + 3y)= 3xy + 5y-2. I have this so far 1) ...
{"url":"http://www.jiskha.com/display.cgi?id=1169696523","timestamp":"2014-04-20T14:38:54Z","content_type":null,"content_length":"9987","record_id":"<urn:uuid:ddd6d085-d19f-47af-9924-b824e9be9816>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00094-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: An Invariant-Sum Characterization of Benford's Law Pieter C. Allaart Vrije Universiteit Amsterdam1 The accountant Nigrini remarked that in tables of data distributed according to Benford's Law, the sum of all elements with rst digit d (d = 1 2 :: 9) is approxi- mately constant. In this note, a mathematical formulation of Nigrini's observation is given and it is shown that Benford's Law is the unique probability distribution such that the expected sum of all elements with rst digits d1 :: dk is constant for every xed k. Keywords and phrases: First signi cant digit, Benford's law, mantissa function, AMS 1990 subject classi cation: 60A10. 1 Introduction The main goal of this article is to give a mathematical proof of an empirical observation of the accountant M. Nigrini. In his Ph.D. thesis (1992), Nigrini observed that tables of unmanipulated accounting data closely follow Benford's Law (see x2 below), and that in su ciently long lists of data for which Benford's Law holds, the sum of all entries with leading digit d is constant for various d. (cf. Nigrini, 1992, pp. 70/71).
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/548/2721597.html","timestamp":"2014-04-17T22:18:29Z","content_type":null,"content_length":"8199","record_id":"<urn:uuid:2fd5af83-beab-4e6a-a7f3-7e1750b9d2b3>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00423-ip-10-147-4-33.ec2.internal.warc.gz"}
Process Standards Posted by: degarcia | November 24, 2008 Process Standards The five fundamental processes that characterize “doing” mathematics are problem solving, communication, reasoning and proof, representation, and connections. (Elementary Mathematics is Anything but Elementary p. 5) Problem Solving Problem solving in math means “becoming involved in a task for which the solution method is not known in advance. To find a solution, students must use previously acquired knowledge and, through this process, gain new mathematical understandings” (Bahr & Garcia, 2010). Problem solving should be an integral part of daily math. Problems can be drawn from real life experiences and applications. The problems selected should be carefully analyzed so the teacher can predict the mathematical ideas that will underscore students’ problem solving efforts. To promote problem solving, students must feel safe in their learning environment. They need to know they are free to explore, take risks, share, and argue strategies with one another. This type of environment will also allow them to confidently work through problems and challenges, and view themselves as capable mathematical thinkers. Specific problem solving strategies, or heuristics, is also a focus of the teacher. This includes things like simplifying the complexity of a problem, drawing a diagram or picture of a problem, seeing patterns, guessing and checking, and working backwards. Problem solving also has a metacognitive component. Teachers can promote this level of thoughtfulness by asking questions: • What do we know from the information in this problem? • What information does it ask for? • Is there any missing information that might make solving the problem easier? • What should we do first, second, etc.? • Are we getting closer to the solution? • Should we try something else? • Why is that true? These types of questions places the responsibility for success on the students. Mathematics communication is both a means of transmission and a component of what it means to “do” mathematics. Teacher have to provide an environment in which students can risk expressing their beginning efforts to communicate their thinking. Teachers must be patient while students begin to do this, because communicating in math doesn’t come naturally to students. The NCTM standards provide a complete list of standards in communication and the benefits of communicating in math. To view them, click here. When students share, they should genuinely listen to one another, compare it to their own ideas, evaluate it, then share their own opinions. Teachers can use probing and prompting questions during discussions as scaffolding. In older grades, students should be encouraged to elaborate more. Writing in math is also beneficial to deepening mathematical understanding. In the primary grades, they rely more on pictures and as they get older will be able to form more complete sentences and thoughts. Writing in math also allows them to practice using mathematical vocabulary and symbols. Their writing skills are consequently enhanced as they practice justifying and writing in this expository form. Just like writing in any other content area, the teacher will have to model how this should be done effectively. Reasoning and Proof Reasoning is a habit and should provide a context for developing important mathematical ideas. Questioning is the key! Ask WHY? Mathematics involves discovery, so invite students to make conjectures and create, refine, and evaluate them. Also, allow students to explore and explain their own reasoning. It’s often best to start students off with what they know, and then build from there. This is where students can take advantage of manipulatives and using technology to solve problems and explore their conjectures. Several virtual manipulative websites exist to get the same practice with manipulatives while utilizing technology, especially if the manipulatives in the classroom are limited. One such website is found here. Using manipulatives will help reinforce concepts to all students, especially students with learning disabilities and English language learners. Sometimes younger children need a discrepant or contradictory event to verify their reasoning. They tend to overgeneralize an idea, which means they may apply reasoning from one context to a context where the same reasoning does not really apply. Through enough exploration and discovery the students will be able to accommodate and assimilate the new mathematical reasoning into the correct Also encourage students to look for patterns. These patterns can be spatial, temporal, logical, and sequential. This is asking students to show a mathematical idea in more than one way. There are five ways to represent thinking: 1) manipulative models 2) static pictures 3) written symbols 4) spoken/written language 5) real-world situations or contexts Real- life situations are very valuable to the students because it gives them something more concrete to work with, and they begin to see the real purpose and meaning behind using the mathematical concept. Because adults think in symbols and children do not, children support their thinking with examples they have seen in the real world. Representations are used by children first to display the problem, then to find a solution, and finally use tools to solve similar problems. This will especially be useful to special needs children and English language learners to use situations they are familiar with. The following are examples of solving problems using models, pictures, numbers and words. “Connecting is the experience of mentally relating one object to another” (Bahr & Garcia, 2010). Elementary Mathematics is Anything but Elementary (2010) identifies six types of connections distinguished by what types of thoughts are being connected: 1) representations 2) problem solving strategies or conjectures 3) prior and current math learning 4) mathematical topics 5) mathematics and other subjects 6) mathematics and real-life situations If you encourage these connections, it will increase your students’ mathematical reasoning abilities. One of the roles of teachers is to compare strategies students share and help students see the connections between those strategies. It is essential to make these connections with prior math learning, so the learning is logical and builds from what students know. Just when you build a structure, you need to have a firm foundation before you can start building the structure. The same is true in math. When students make connections between math concepts it’s like they have formed a neighborhood of strong buildings and can see how the neighborhood works and functions together, instead of each building functioning or existing on its own. Children learn about the world in connected ways, so balanced math instruction will help children do this. Integrating other subjects with math has to be meaningful. Therefore, if you simply read a book with a math concept in it, that is not successful integration. Here is also a website that has successful ways to integrate math with literature, history, geography, health, art, and music. “When mathematics is consistently used to solve problems in other subject area contexts, connections, to real life occur consistently” (Bahr & Garcia, 2010). Bahr, Damon L. & Lisa Ann de Garcia. (2010). Elementary mathematics is anything but elementary. USA: Wadsworth, Cengage Learning. Lesh, R. A., T. R. Post, and M. J. Behr. 1987. Representations and translations among representations in mathematics learning and problem solving. In Problems of representation in the teaching and learning of mathematics, ed. C. Janvier, 33-40. Hillsdale, NJ: Lawrence Erlbaum. Posted in Pedagogy | Tags: Process Standards in Math
{"url":"http://mathteachingstrategies.wordpress.com/2008/11/24/process-standards/","timestamp":"2014-04-20T15:51:08Z","content_type":null,"content_length":"45603","record_id":"<urn:uuid:59566c75-d781-4563-8baf-93c1e1146ff0>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00132-ip-10-147-4-33.ec2.internal.warc.gz"}
by Category: Related Title(s): view larger image Lawrence Leff, M.S. - All books by this author Barron’s E-Z Series - All books in this series Known for many years as Barron’s Easy Way Series, the new editions of these popular self-teaching titles are now Barron’s E-Z Series. Brand-new cover designs reflect all new page layouts, which feature extensive two-color treatment, a fresh, modern typeface, and more graphic material than ever— charts, graphs, diagrams, instructive line illustrations, and where appropriate, amusing cartoons. Meanwhile, the quality of the books’ contents remains at least as high as ever. Barron’s E-Z books are self-help manuals focused to improve students’ grades in a wide variety of academic and practical subjects. For most subjects, the level of difficulty ranges between high school and college-101 standards. Although primarily designed as self-teaching manuals, these books are also preferred by many teachers as classroom supplements—and for some courses, as main textbooks. E-Z books review their subjects in detail, and feature both short quizzes and longer tests with answers to help students gauge their learning progress. Subject heads and key phrases are set in a second color as an easy reference aid. An experienced math teacher breaks down Barron’s E-Z Precalculus into easy-to-follow lessons for self-teaching and rapid learning. The book features a generous number of step-by-step demonstration examples as well as tables, graphs, and graphing-calculator-based approach. Major topics include: algebraic methods; functions and their graphs; complex numbers; polynomial and rational functions; exponential and logarithmic functions; trigonometry and polar coordinates; counting and probability; binomial theorem; calculus preview; and much more. Exercises at the end of each chapter. About The Author: Lawrence Leff is Assistant Principal and Chairman of the Mathematics Department at Franklin D. Roosevelt High School, New York City. Table of Contents: STUDY UNIT I: ALGEBRA AND GRAPHING METHODS ● Basic Algebraic Methods ● Rational and Irrational Expressions ● Graphing and Systems of Equations ● Functions and Quadratic Equations ● Complex Numbers and the Quadratic Formula STUDY UNIT II: FUNCTIONS AND THEIR GRAPHS ● Special Functions and Equations ● Polynomial and Rational Functions ● Exponential and Logarithmic Functions STUDY UNIT III: TRIGONOMETRIC ANALYSIS ● Trigonometry ● Graphing Trigonometric Functions ● Trigonometric Identities and Equations ● Solving Triangles STUDY UNIT IV: POLAR COORDINATES AND CONIC SECTIONS ● Polar Coordinates and Parametric Equations ● Conic Sections and Their Equations STUDY UNIT V: NUMBER PATTERNS AND CALCULUS PREVIEW ● Sequences, Series, and Counting ● Calculus Preview Answers to Checkup Exercises Paperback / 480 Pages / 7 13/16 x 10 / 2010
{"url":"http://barronseduc.stores.yahoo.net/0764144650.html","timestamp":"2014-04-17T18:43:45Z","content_type":null,"content_length":"40272","record_id":"<urn:uuid:0a9230aa-004a-4d42-9f28-e2273e5270d6>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00197-ip-10-147-4-33.ec2.internal.warc.gz"}
Kids.Net.Au - Encyclopedia > Coulomb's Law In physics, Coulomb's Law is an inverse-square law indicating the magnitude and direction of electrical force that one stationary, electrically charged substance of small volume exerts on another. When one is interested only in the magnitude of the force (and not in its direction), it may be easiest to consider a simplified, scalar version of the Law F = \frac{\left|q_1 q_2\right|}{4 \pi \epsilon_0 r^2} </math> where q[1] is the charge on one substance, q[2] is the charge on the other, r is the distance between them, and ε[0] is a universal constant, the permittivity of vacuum. (See physical constants for more information. Note that 1/(μ[0]ε[0]) = c^2 × 10^-7, where μ[0] is the permeability of vacuum and c is the speed of light.) Among other things, this formula says that the magnitude of the force is directly proportional to the magnitude of the charges of each substance and inversely proportional to the square of the distance between them. The force F acts on the line connecting the two charged objects. For calculating the direction and magnitude of the force simultaneously, one will wish to consult the full-blown vector version of the Law \mathbf{F} = \frac{q_1 q_2 \mathbf{r}}{4 \pi \epsilon_0 \left|\mathbf{r}\right|^3} </math> where the vector r connects the two substances, and the other symbols are as before. (The r vector in the numerator indicates that the force should be along the vector connecting the two substances. |r| has been raised to the third power instead of the second in the denominator in order to normalize the length of the r in the numerator to 1.) In either formulation, Coulomb's Law is fully accurate only when the substances are static (i.e. stationary), and remain approximately correct only for slow movement. When movement takes places, magnetic fields are produced that alter the force on the two substances. Especially when rapid movement takes place, the electric field will also undergo a transformation described by Einstein's theory of relativity. All Wikipedia text is available under the terms of the GNU Free Documentation License
{"url":"http://encyclopedia.kids.net.au/page/co/Coulomb's_Law","timestamp":"2014-04-21T04:35:26Z","content_type":null,"content_length":"15463","record_id":"<urn:uuid:345c11b0-a7f2-475d-9c4e-c602547d1f04>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00263-ip-10-147-4-33.ec2.internal.warc.gz"}
How to Divide Double Digits Edit Article Edited by N'cha!, Trigtchr, FlaminPunkMonkey, Flickety and 23 others Dividing double digits is very similar to long division with a single-digit divisor, but it does require some extra multiplication and thinking. 1. 1 Set your problem up in long division format: ☆ Example: 236 & divide; 28 becomes 2. 2 Guess how many times the divisor (the 28, in our example) can go into the dividend (the 236, in our example), then write down that number. 3. 3 Multiply your guess and the divisor (which in our example is 28) and write the result under the original dividend. 4. 4 Subtract the dividend and the multiplication result from step 3. 5. 5 Continue the guess-multiply-subtract process until you reach zero OR a subtotal which is smaller than the divisor. 6. 6 Decide how to deal with the remainder. When the result of the subtraction is smaller than the divisor, you have a remainder. You can write the remainder as a fraction, using the divisor as the □ In our example, the answer would be 8 12/28, which would reduce in lowest terms to 8 3/7. 7. 7 If you want to produce a decimal rather than a fraction, you need to add a ".0" to the end of your original dividend. (In our example, the 236 becomes 236.0) 8. 8 Bring down the zero and stick it on the end of your latest subtraction result. 9. 9 Estimate how many times your divisor can go into this new subtotal and write that down. 10. 10 Multiply again... 11. 11 ...then subtract again. 12. 12 Keep repeating the "stick on a zero/estimate/multiply/subtract" process until you have enough decimal places OR until it subtracts to zero, whichever comes first. • In this example, we were working with 28. Keep in mind that 10 x 28 = 280, which means that 5 x 28 is half of that, or 140. Since 236 is between 280 and 140, your first guess should be between 5 and 10. That's one reason why 8 is a good number. • If, at any point, your subtraction results in a negative number, your guess was too high. Erase that entire step and try a smaller guess. • If, at any point, your subtraction results in a number larger than your divisor, your guess wasn't high enough. Erase that entire step and try a larger guess. Things You'll Need • Pencil • Paper • Calculator (the quickest way) Article Info Thanks to all authors for creating a page that has been read 83,371 times. Was this article accurate?
{"url":"http://www.wikihow.com/Divide-Double-Digits","timestamp":"2014-04-19T08:28:54Z","content_type":null,"content_length":"71162","record_id":"<urn:uuid:8dca851b-8e88-4270-8845-49ba1ef9e3e9>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00282-ip-10-147-4-33.ec2.internal.warc.gz"}
What Constraints Entail: Part 2 Thu 3 Nov 2011 Last time we derived an entailment relation for constraints, now let's get some use out of it. Reflecting Classes and Instances Most of the implications we use on a day to day basis come from our class and instance declarations, but last time we only really dealt with constraint products. For example given: #if 0 class Eq a => Ord a instance Eq a => Eq [a] we could provide the following witnesses ordEq :: Ord a :- Eq a ordEq = Sub Dict eqList :: Eq a :- Eq [a] eqList = Sub Dict But this would require a lot of names and become remarkably tedious. So lets define classes to reflect the entailment provided by class definitions and instance declarations and then use them to reflect themselves. class Class b h | h -> b where cls :: h :- b infixr 9 :=> class b :=> h | h -> b where ins :: b :- h instance Class () (Class b a) where cls = Sub Dict instance Class () (b :=> a) where cls = Sub Dict Now we can reflect classes and instances as instances of Class and (:=>) respectively with: -- class Eq a => Ord a where ... instance Class (Eq a) (Ord a) where cls = Sub Dict -- instance Eq a => Eq [a] where ... instance Eq a :=> Eq [a] where ins = Sub Dict That said, instances of Class and Instance should never require a context themselves, because the modules that the class and instance declarations live in can't taken one, so we can define the following instances which bootstrap the instances of (:=>) for Class and (:=>) once and for all. #ifdef UNDECIDABLE instance Class b a => () :=> Class b a where ins = Sub Dict instance (b :=> a) => () :=> b :=> a where ins = Sub Dict These two instances are both decidable, and following a recent bug fix, the current version of GHC HEAD supports them, but my local version isn't that recent, hence the #ifdef. We can also give admissable-if-not-ever-stated instances of Class and (:=>) for () as well. instance Class () () where cls = Sub Dict instance () :=> () where ins = Sub Dict Reflecting the Prelude So now that we've written a handful of instances, lets take the plunge and just reflect the entire Prelude, and (most of) the instances for the other modules we've loaded. instance Class () (Eq a) where cls = Sub Dict instance () :=> Eq () where ins = Sub Dict instance () :=> Eq Int where ins = Sub Dict instance () :=> Eq Bool where ins = Sub Dict instance () :=> Eq Integer where ins = Sub Dict instance () :=> Eq Float where ins = Sub Dict instance () :=> Eq Double where ins = Sub Dict instance Eq a :=> Eq [a] where ins = Sub Dict instance Eq a :=> Eq (Maybe a) where ins = Sub Dict instance Eq a :=> Eq (Complex a) where ins = Sub Dict instance Eq a :=> Eq (Ratio a) where ins = Sub Dict instance (Eq a, Eq b) :=> Eq (a, b) where ins = Sub Dict instance (Eq a, Eq b) :=> Eq (Either a b) where ins = Sub Dict instance () :=> Eq (Dict a) where ins = Sub Dict instance () :=> Eq (a :- b) where ins = Sub Dict instance Class (Eq a) (Ord a) where cls = Sub Dict instance () :=> Ord () where ins = Sub Dict instance () :=> Ord Bool where ins = Sub Dict instance () :=> Ord Int where ins = Sub Dict instance ():=> Ord Integer where ins = Sub Dict instance () :=> Ord Float where ins = Sub Dict instance ():=> Ord Double where ins = Sub Dict instance () :=> Ord Char where ins = Sub Dict instance Ord a :=> Ord (Maybe a) where ins = Sub Dict instance Ord a :=> Ord [a] where ins = Sub Dict instance (Ord a, Ord b) :=> Ord (a, b) where ins = Sub Dict instance (Ord a, Ord b) :=> Ord (Either a b) where ins = Sub Dict instance Integral a :=> Ord (Ratio a) where ins = Sub Dict instance () :=> Ord (Dict a) where ins = Sub Dict instance () :=> Ord (a :- b) where ins = Sub Dict instance Class () (Show a) where cls = Sub Dict instance () :=> Show () where ins = Sub Dict instance () :=> Show Bool where ins = Sub Dict instance () :=> Show Ordering where ins = Sub Dict instance () :=> Show Char where ins = Sub Dict instance Show a :=> Show (Complex a) where ins = Sub Dict instance Show a :=> Show [a] where ins = Sub Dict instance Show a :=> Show (Maybe a) where ins = Sub Dict instance (Show a, Show b) :=> Show (a, b) where ins = Sub Dict instance (Show a, Show b) :=> Show (Either a b) where ins = Sub Dict instance (Integral a, Show a) :=> Show (Ratio a) where ins = Sub Dict instance () :=> Show (Dict a) where ins = Sub Dict instance () :=> Show (a :- b) where ins = Sub Dict instance Class () (Read a) where cls = Sub Dict instance () :=> Read () where ins = Sub Dict instance () :=> Read Bool where ins = Sub Dict instance () :=> Read Ordering where ins = Sub Dict instance () :=> Read Char where ins = Sub Dict instance Read a :=> Read (Complex a) where ins = Sub Dict instance Read a :=> Read [a] where ins = Sub Dict instance Read a :=> Read (Maybe a) where ins = Sub Dict instance (Read a, Read b) :=> Read (a, b) where ins = Sub Dict instance (Read a, Read b) :=> Read (Either a b) where ins = Sub Dict instance (Integral a, Read a) :=> Read (Ratio a) where ins = Sub Dict instance Class () (Enum a) where cls = Sub Dict instance () :=> Enum () where ins = Sub Dict instance () :=> Enum Bool where ins = Sub Dict instance () :=> Enum Ordering where ins = Sub Dict instance () :=> Enum Char where ins = Sub Dict instance () :=> Enum Int where ins = Sub Dict instance () :=> Enum Integer where ins = Sub Dict instance () :=> Enum Float where ins = Sub Dict instance () :=> Enum Double where ins = Sub Dict instance Integral a :=> Enum (Ratio a) where ins = Sub Dict instance Class () (Bounded a) where cls = Sub Dict instance () :=> Bounded () where ins = Sub Dict instance () :=> Bounded Ordering where ins = Sub Dict instance () :=> Bounded Bool where ins = Sub Dict instance () :=> Bounded Int where ins = Sub Dict instance () :=> Bounded Char where ins = Sub Dict instance (Bounded a, Bounded b) :=> Bounded (a,b) where ins = Sub Dict instance Class () (Num a) where cls = Sub Dict instance () :=> Num Int where ins = Sub Dict instance () :=> Num Integer where ins = Sub Dict instance () :=> Num Float where ins = Sub Dict instance () :=> Num Double where ins = Sub Dict instance RealFloat a :=> Num (Complex a) where ins = Sub Dict instance Integral a :=> Num (Ratio a) where ins = Sub Dict instance Class (Num a, Ord a) (Real a) where cls = Sub Dict instance () :=> Real Int where ins = Sub Dict instance () :=> Real Integer where ins = Sub Dict instance () :=> Real Float where ins = Sub Dict instance () :=> Real Double where ins = Sub Dict instance Integral a :=> Real (Ratio a) where ins = Sub Dict instance Class (Real a, Enum a) (Integral a) where cls = Sub Dict instance () :=> Integral Int where ins = Sub Dict instance () :=> Integral Integer where ins = Sub Dict instance Class (Num a) (Fractional a) where cls = Sub Dict instance () :=> Fractional Float where ins = Sub Dict instance () :=> Fractional Double where ins = Sub Dict instance RealFloat a :=> Fractional (Complex a) where ins = Sub Dict instance Integral a :=> Fractional (Ratio a) where ins = Sub Dict instance Class (Fractional a) (Floating a) where cls = Sub Dict instance () :=> Floating Float where ins = Sub Dict instance () :=> Floating Double where ins = Sub Dict instance RealFloat a :=> Floating (Complex a) where ins = Sub Dict instance Class (Real a, Fractional a) (RealFrac a) where cls = Sub Dict instance () :=> RealFrac Float where ins = Sub Dict instance () :=> RealFrac Double where ins = Sub Dict instance Integral a :=> RealFrac (Ratio a) where ins = Sub Dict instance Class (RealFrac a, Floating a) (RealFloat a) where cls = Sub Dict instance () :=> RealFloat Float where ins = Sub Dict instance () :=> RealFloat Double where ins = Sub Dict instance Class () (Monoid a) where cls = Sub Dict instance () :=> Monoid () where ins = Sub Dict instance () :=> Monoid Ordering where ins = Sub Dict instance () :=> Monoid [a] where ins = Sub Dict instance Monoid a :=> Monoid (Maybe a) where ins = Sub Dict instance (Monoid a, Monoid b) :=> Monoid (a, b) where ins = Sub Dict instance Class () (Functor f) where cls = Sub Dict instance () :=> Functor [] where ins = Sub Dict instance () :=> Functor Maybe where ins = Sub Dict instance () :=> Functor (Either a) where ins = Sub Dict instance () :=> Functor ((->) a) where ins = Sub Dict instance () :=> Functor ((,) a) where ins = Sub Dict instance () :=> Functor IO where ins = Sub Dict instance Class (Functor f) (Applicative f) where cls = Sub Dict instance () :=> Applicative [] where ins = Sub Dict instance () :=> Applicative Maybe where ins = Sub Dict instance () :=> Applicative (Either a) where ins = Sub Dict instance () :=> Applicative ((->)a) where ins = Sub Dict instance () :=> Applicative IO where ins = Sub Dict instance Monoid a :=> Applicative ((,)a) where ins = Sub Dict instance Class (Applicative f) (Alternative f) where cls = Sub Dict instance () :=> Alternative [] where ins = Sub Dict instance () :=> Alternative Maybe where ins = Sub Dict instance Class () (Monad f) where cls = Sub Dict instance () :=> Monad [] where ins = Sub Dict instance () :=> Monad ((->) a) where ins = Sub Dict instance () :=> Monad (Either a) where ins = Sub Dict instance () :=> Monad IO where ins = Sub Dict instance Class (Monad f) (MonadPlus f) where cls = Sub Dict instance () :=> MonadPlus [] where ins = Sub Dict instance () :=> MonadPlus Maybe where ins = Sub Dict Of course, the structure of these definitions is extremely formulaic, so when template-haskell builds against HEAD again, they should be able to be generated automatically using splicing and reify, which would reduce this from a wall of text to a handful of lines with better coverage! An alternative using Default Signatures and Type Families Many of the above definitions could have been streamlined by using default definitions. However, MPTCs do not currently support default signatures. We can however, define Class and (:=>) using type families rather than functional dependencies. This enables us to use defaulting, whenever the superclass or context was (). #if 0 class Class h where type Sup h :: Constraint type Sup h = () cls :: h :- Sup h default cls :: h :- () cls = Sub Dict class Instance h where type Ctx h :: Constraint type Ctx h = () ins :: Ctx h :- h default ins :: h => Ctx h :- h ins = Sub Dict instance Class (Class a) instance Class (Instance a) #ifdef UNDECIDABLE instance Class a => Instance (Class a) instance Instance a => Instance (Instance a) instance Class () instance Instance () This seems at first to be a promising approach. Many instances are quite small: #if 0 instance Class (Eq a) instance Instance (Eq ()) instance Instance (Eq Int) instance Instance (Eq Bool) instance Instance (Eq Integer) instance Instance (Eq Float) instance Instance (Eq Double) But those that aren't are considerably more verbose and are much harder to read off than the definitions using the MPTC based Class and (:=>). #if 0 instance Instance (Eq [a]) where type Ctx (Eq [a]) = Eq a ins = Sub Dict instance Instance (Eq (Maybe a)) where type Ctx (Eq (Maybe a)) = Eq a ins = Sub Dict instance Instance (Eq (Complex a)) where type Ctx (Eq (Complex a)) = Eq a ins = Sub Dict instance Instance (Eq (Ratio a)) where type Ctx (Eq (Ratio a)) = Eq a ins = Sub Dict instance Instance (Eq (a, b)) where type Ctx (Eq (a,b)) = (Eq a, Eq b) ins = Sub Dict instance Instance (Eq (Either a b)) where type Ctx (Eq (Either a b)) = (Eq a, Eq b) ins = Sub Dict Having tested both approaches, the type family approach led to a ~10% larger file size, and was harder to read, so I remained with MPTCs even though it meant repeating "where ins = Sub Dict" over and In a perfect world, we'd gain the ability to use default signatures with multiparameter type classes, and the result would be considerably shorter and easier to read! Fake Superclasses Now, that we have all this machinery, it'd be nice to get something useful out of it. Even if we could derive it by other means, it'd let us know we weren't completely wasting our time. Let's define a rather horrid helper, which we'll only use where a and b are the same constraint being applied to a newtype wrapper of the same type, so we can rely on the fact that the dictionaries have the same representation. evil :: a :- b evil = unsafeCoerce refl We often bemoan the fact that we can't use Applicative sugar given just a Monad, since Applicative wasn't made a superclass of Monad due to the inability of the Haskell 98 report to foresee the future invention of Applicative. There are rather verbose options to get Applicative sugar for your Monad, or to pass it to something that expects an Applicative. For instance you can use WrappedMonad from Applicative. We reflect the relevant instance here. instance Monad m :=> Applicative (WrappedMonad m) where ins = Sub Dict Using that instance and the combinators defined previously, we can obtain the following applicative :: forall m a. Monad m => (Applicative m => m a) -> m a applicative m = m \\ trans (evil :: Applicative (WrappedMonad m) :- Applicative m) ins Here ins is instantiated to the instance of (:=>) above, so we use trans to compose ins :: Monad m :- Applicative (WrappedMonad m) with evil :: Applicative (WrappedMonad m) :- Applicative m to obtain an entailment of type Monad m :- Applicative m in local scope, and then apply that transformation to discharge the Applicative obligation on m. Now, we can use this to write definitions. [Note: Frustratingly, my blog software inserts spaces after <'s in code] (< &>) :: Monad m => m a -> m b -> m (a, b) m < &> n = applicative $ (,) < $> m < *> n Which compares rather favorably to the more correct (< &>) :: Monad m => m a -> m b -> m (a, b) m < &> n = unwrapMonad $ (,) < $> WrapMonad m < *> WrapMonad n especially considering you still have access to any other instances on m you might want to bring into scope without having to use deriving to lift them onto the newtype! Similarly you can borrow < |> and empty locally for use by your MonadPlus with: instance MonadPlus m :=> Alternative (WrappedMonad m) where ins = Sub Dict alternative :: forall m a. MonadPlus m => (Alternative m => m a) -> m a alternative m = m \\ trans (evil :: Alternative (WrappedMonad m) :- Alternative m) ins The correctness of this of course relies upon the convention that any Applicative and Alternative your Monad may have should agree with its Monad instance, so even if you use Alternative or Applicative in a context where the actual Applicative or Alternative instance for your particular type m is in scope, it shouldn't matter beyond a little bit of efficiency which instance the compiler picks to discharge the Applicative or Alternative obligation. Note: It isn't that the Constraint kind is invalid, but rather that using unsafeCoerce judiciously we can bring into scope instances that don't exist for a given type by substituting those from a different type which have the right representation. 13 Responses to “What Constraints Entail: Part 2” 1. Doug McClean Says: November 16th, 2011 at 3:17 pm Working on the problem of expressing the pattern behind the relationship between sort/sortBy, maximum/maximumBy, etc, I have a few observations. First, you’ve shown the existence of a type Dict :: Constraint -> * (which I suggest might more naturally be named Evidence in keeping with most of the work I have seen on qualified types, but that’s not important). A little bit of magic (the TH approach that you mention would do it) should suffice to make this a surjective type function. There’s a plausible argument that it’s injective as well (since we take a nominal typing approach to constraints and ignore ordering of constraint conjunctions), but I don’t think this accomplishes much so let’s ignore it. Second, I postulate the existence of another (surjective, but surely not injective) type function Qualifier :: * -> Constraint which extracts the context of its argument and returns the empty constraint if its argument is unqualified. Third, twisting the same idea, I postulate the existence of another surjective type function (for which I don’t have a good name…) Unqualified :: * -> * which discards the context if its argument is qualified and is otherwise the identity function. Taking 1 and assuming 2 and 3, then shouldn’t we be able to have a function: by :: t -> Evidence (Qualifier t) -> Unqualified t? Working through the translation for an example, and ignoring the term level for a bit, we’d get: sort :: (Ord a) => [a] -> [a] Qualifier ((Ord a) => [a] -> [a]) ~ Ord a Evidence (Ord a) ~ OrdDict a Unqualified ((Ord a) => [a] -> [a]) ~ [a] -> [a] (sort `by`) :: OrdDict a -> [a] -> [a] … which is what we were looking for. Sadly/oddly this is applicable at types which are not (possibly-qualified) arrow types, but it seems fairly harmless: (True `by`) :: () -> Bool (assuming () is the type of evidence for the empty constraint?) Did I skip any steps? Is there a reason in principle why we couldn’t write either Qualifier or Unqualified (I could see how we might need a language extension…). Stepping back to the term level, I’m a bit stuck. I think I am stuck on understanding your concrete syntax more than on any fundamental reason why this is impossible, but I’m not sure. 2. Doug McClean Says: November 16th, 2011 at 3:30 pm One more note: The existence of GHC is a proof for the possibility of doing this at the term level, right, because it erases qualified types during the translation to the core language by doing this same evidence passing scheme? 3. Doug McClean Says: November 16th, 2011 at 3:32 pm Oops, one more: If you really were concerned about expressions like (True `by`) you could strengthen the type of by from by :: t -> Evidence (Qualifier t) -> Unqualified t by :: (Unqualified t ~ a -> b) => t -> Evidence (Qualifier t) -> Unqualified t 4. Edward Kmett Says: November 16th, 2011 at 5:28 pm The problem with Qualifier and Unqualified is that unification of types involving => isn’t done by simple unification, but instead by bi-implication for constraints, so while you can write some types with them, you typically can’t build inhabitants. 5. Doug McClean Says: November 16th, 2011 at 5:51 pm That explains why you can’t define them in Haskell, but I don’t think it bars you from adding them as primitives, does it? And the type checker already has the implementation that would back the I don’t have access to a machine with bleeding edge GHC on it at the moment, but is (c => t) -> Dict c -> t a syntactically valid type? If not, why not? 6. Edward Kmett Says: November 16th, 2011 at 7:20 pm discharge :: (c => t) -> Dict c -> t discharge t Dict = t works fine. as for whether the type checker has the machinery, that is somewhat harder to answer, becuase it would depend on the details of the largely underspecified constraint simplifier. 7. Edward Kmett Says: November 16th, 2011 at 7:23 pm Dan Peebles has some nice examples for things like a more polymorphic ‘on’ that could benefit, but in the meantime, one can use explicit proxies to work around the inability to name the constraint implicitly. 8. Doug McClean Says: November 17th, 2011 at 11:39 am Ah, ok, I’m on the right page now. This can’t work because context reduction/constraint simplification is such a mess. Probably a good thing, too, because for something like this you wouldn’t want any context reduction in the first place, you’d want the naively inferred context that is based solely on what class functions are actually mentioned in a definition. This is because you might want to occasionally use an ordering on, say, lists other than the lexical one in the prelude (say, by sum or whatever). If the context gets reduced from Ord ([Salary]) to Ord Salary then you’ve already lost your chance. Proxies it is. 9. Edward Kmett Says: November 17th, 2011 at 2:13 pm As an aside, you can _write_ type family Constraints t :: Constraints type instance Constraints (p => q) = p you just can’t use it anywhere. =/ The compiler complains when you go to use it that you need LiberalTypeSynonyms even if you have LiberalTypeSynonyms turned on. ;) 10. Doug McClean Says: November 17th, 2011 at 2:43 pm Yeah, I was thinking of that definition and the corresponding type family Unqualified t :: * type instance Unqualified (p => q) = q Alas there is a logical reason why it doesn’t make sense. {-# LANGUAGE CommunistTypeSynonyms #-}? 11. Edward Kmett Says: November 17th, 2011 at 2:57 pm I raised the issue to Max. Not sure there is a good way for it to be defined in user land, because () => q and q are interchangeable, so it would require compiler support. The most compelling use case is to make code like on :: (p a, p b) => (c -> c -> d) -> (forall x. p x => x -> c) -> a -> b -> d inhabitable for ((++) `on` show) but it impacts all sorts of unrelated areas in the compiler. 12. Philip J-F Says: March 1st, 2012 at 2:04 am Inspired by this discussion I wrote a post showing how to get multiple, passable, effectively first-class, instances. I borrowed your applicative example: http://joyoftypes.blogspot.com/2012/02/ 13. zzo38 Says: June 26th, 2012 at 2:08 pm I like this! Nevertheless they are not perfect, not like real superclass, it is why I wanted to define my own programming language to correct these things, but at least Haskell has been improved by these thing so that much works OK.
{"url":"http://comonad.com/reader/2011/what-constraints-entail-part-2/","timestamp":"2014-04-21T04:32:27Z","content_type":null,"content_length":"136596","record_id":"<urn:uuid:c222e0ea-0fce-42d5-b66b-d057f3f4a3f7>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00348-ip-10-147-4-33.ec2.internal.warc.gz"}
Apache Junction Algebra 1 Tutor ...Whatever the student's needs are, it is my job as a teacher to meet them. I have tutored and taught subjects from remedial reading using basic phonics to algebra 1. I have a deep-rooted belief that the best learning takes place when the student is engaged and having fun. 16 Subjects: including algebra 1, reading, writing, geometry ...I will be conducting individual and group classes starting May 2014 at my home located in Mesa, AZ in E. Guadalupe and S. Hawes. 13 Subjects: including algebra 1, reading, English, writing ...Any age, any level, children are always a joy to tutor. I have participated in structured programs like America Reads as well as being an Instructional Assistant for Mesa Public Schools. It is no secret that teaching these children is a lifelong passion of mine. 33 Subjects: including algebra 1, English, reading, physics ...I have taught and tutored everything from basic mathematics up through Calculus, Differential Equations and Mathematical Structures. Just a little about my work and research. While my PhD is in Mathematics, my research area has been Mathematics Education. 9 Subjects: including algebra 1, calculus, geometry, GED ...I am also able to adapt different ways of presenting material and making it engaging to the you, as well as adapt to each unique student. I have an unparalleled amount of patience and also make it fun to learn math... even for those who claim to "hate" math! The process of learning and figuring it out for yourself (with my guidance) is more rewarding than having it explained to you. 17 Subjects: including algebra 1, reading, calculus, algebra 2
{"url":"http://www.purplemath.com/Apache_Junction_algebra_1_tutors.php","timestamp":"2014-04-20T10:52:48Z","content_type":null,"content_length":"24016","record_id":"<urn:uuid:c957d7a3-c845-4b96-8147-940782127a53>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00374-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Practices Video Series: Mathematical Practice #5 We are continuing with Mathematical Practice #5 in our video series on the Standards for Mathematical Practice. If you’ve missed any of this video series, you can catch up with our previous posts: • Mathematical Practice #1: Make sense of problems and persevere in solving them. • Mathematical Practice #2: Reason abstractly and quantitatively. • Mathematical Practice #3: Construct viable arguments and critique the reasoning of others. • Mathematical Practice #4: Model with mathematics. There are multiple parts to each practice. The parts help students develop the habit of mind that is the main practice. Remember that the practices are defined as ways to help students become mathematically proficient. As we look at each practice, think of ways we can help students to take ownership of these practices. In the fifth video, students are learning how to determine the sum of integers. The Essential Question asks: “Are the sum of two integers positive, negative or zero and how can you tell?” Observe how the teacher immediately makes a real life connection for the students giving meaning to the mathematics. What questions does she ask? Students are using multiple representations to display the mathematics. When they begin to work on problems in their groups, they will be able to use these strategies, thereby building their proficiency. Mathematical Practice #5. Use appropriate tools strategically. • Mathematically proficient students consider the available tools when solving a mathematical problem. These tools might include pencil and paper, concrete models, a ruler, a protractor, a calculator, a spreadsheet, a computer algebra system, a statistical package, or dynamic geometry software. • Proficient students are sufficiently familiar with tools appropriate for their grade or course to make sound decisions about when each of these tools might be helpful, recognizing both the insight to be gained and their limitations. • Mathematically proficient students at various grade levels are able to identify relevant external mathematical resources, such as digital content located on a website, and use them to pose or solve problems. They are able to use technological tools to explore and deepen their understanding of concepts. As you look at your classroom, you probably see students with varying degrees of expertise in this practice. Our job, as educators, is to help students develop a habit of mind that helps them naturally think before they begin, make sense of what they are doing and persevere in their work. Ask yourself: Do your students use manipulatives to assist them in making sense of a problem? Are students aware that there may be more than one way to find a solution? Are students given time to discover the rules of integers rather than just being told? As students take ownership of their learning and develop expertise using the mathematical practices, the content standards (knowledge, skills and understandings, procedural skills and fluency, and application and problem solving) will make sense, allowing students to achieve success in mathematics. Comments are closed for this entry.
{"url":"http://www.bigideaslearning.com/blog/professional-development/math-practices-video-series-mathematical-practice-5","timestamp":"2014-04-19T14:29:07Z","content_type":null,"content_length":"36402","record_id":"<urn:uuid:90f5b9e4-cbf3-4e4b-aade-a91b0abc360f>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00631-ip-10-147-4-33.ec2.internal.warc.gz"}
Recent Advances in Applied Mathematics Articles Recently published articles from Advances in Applied Mathematics. May 2014 Fan Chung We generalize the notion of quasirandomness which concerns a class of equivalent properties that random graphs satisfy. We show that the convergence of a graph sequence under the spectral distance May 2014 William C. Abram | Jeffrey C. Lagarias Path sets are spaces of one-sided infinite symbol sequences associated to pointed graphs (G,v0), which are edge-labeled directed graphs with a distinguished vertex v0. Such sets arise naturally as May 2014 Jean-Christophe Aval | Adrien Boussicault | Mathilde Bouvel | Matteo Silimbani This article investigates combinatorial properties of non-ambiguous trees. These objects we define may be seen either as binary trees drawn on a grid with some constraints, or as a subset of the May 2014 John Steenbergen | Caroline Klivans | Sayan Mukherjee In this paper, we consider a variation on Cheeger numbers related to the coboundary expanders recently defined by Dotterrer and Kahle. A Cheeger-type inequality is proved, which is similar to a May 2014 Fredrik Johansson | Brian Nakamura We consider the problem of enumerating permutations with exactly r occurrences of the pattern 1324 and derive functional equations for this general case as well as for the pattern avoidance (r=0) May 2014 Douglas Bowman | Alon Regev This paper proves explicit formulas for the number of dissections of a convex regular polygon modulo the action of the cyclic and dihedral groups. The formulas are obtained by making use of the May 2014 Ilse Fischer | Martina Kubitzke Given two sequences a=(an) and b=(bn) of complex numbers such that their generating series can be written as rational functions where the denominator is a power of 1−t, we consider their Segre April 2014 Young Jin Suh In this paper we give a characterization of real hypersurfaces in the noncompact complex two-plane Grassmannian SU(2,m)/S(U(2)⋅U(m)), m⩾2, with Reeb vector field ξ belonging to the maximal April 2014 Sam Miner | Igor Pak We initiate the study of limit shapes for random permutations avoiding a given pattern. Specifically, for patterns of length 3, we obtain delicate results on the asymptotics of distributions of April 2014 Nicholas A. Loehr | Elizabeth Niese The shuffle conjecture (due to Haglund, Haiman, Loehr, Remmel, and Ulyanov) provides a combinatorial formula for the Frobenius series of the diagonal harmonics module DHn, which is the symmetric April 2014 Julia Hörrmann | Daniel Hug | Michael Andreas Klatt | Klaus Mecke A stationary Boolean model is the union set of random compact particles which are attached to the points of a stationary Poisson point process. For a stationary Boolean model with convex grains we April 2014 Michael DiPasquale We study the module Cr(P) of piecewise polynomial functions of smoothness r on a pure n-dimensional polytopal complex P⊂Rn, via an analysis of certain subcomplexes PW obtained from the March 2014 Vanessa Chatelain | Jorge Luis Ramírez Alfonsín This is a continuation of an early paper Chatelain et al. (2011) [3] about matroid base polytope decomposition. We will present sufficient conditions on a matroid M so its base polytope P(M) has a March 2014 Valérie Berthé | Hitoshi Nakada | Rie Natsui | Brigitte Vallée This paper studies digit-cost functions for the Euclid algorithm on polynomials with coefficients in a finite field, in terms of the number of operations performed on the finite field Fq. The March 2014 Mourad E.H. Ismail | Zeinab S.I. Mansour We give new derivations of properties of the functions of the second kind of the Jacobi, little and big q-Jacobi polynomials, and the symmetric Al-Salam–Chihara polynomials for q>1. We also study March 2014 Shuya Chiba | Shinya Fujita | Ken-ichi Kawarabayashi | Tadashi Sakuma We prove a variant of a theorem of Corrádi and Hajnal (1963) [4] which says that if a graph G has at least 3k vertices and its minimum degree is at least 2k, then G contains k vertex-disjoint March 2014 Robert J. Marsh | Sibylle Schroll We study a circular order on labelled, m-edge-coloured trees with k vertices, and show that the set of such trees with a fixed circular order is in bijection with the set of RNA m-diagrams of March 2014 Axel Hultman Given a permutation statistic s:Sn→R, define the mean statistic s¯ as the class function giving the mean of s over conjugacy classes. We describe a way to calculate the expected value of s on a February 2014 Miguel A. Méndez | Jean Carlos Liendo A symmetric set operad is a monoid in the category of combinatorial species with respect to the operation of substitution. From a symmetric set operad, we give here a simple construction of a February 2014 Susanna Dann The lower dimensional Busemann–Petty problem asks whether origin-symmetric convex bodies in Rn with smaller volume of all k-dimensional sections necessarily have smaller volume. The answer is February 2014 Sean McGuinness In 1971, Nash-Williams proved that if G is a simple 2-connected graph on n vertices having minimum degree at least 13(n+2), then any longest cycle C in G is also edge-dominating; that is, each February 2014 Alexey Ovchinnikov This paper is devoted to integrability conditions for systems of linear difference and differential equations with difference parameters. It is shown that such a system is difference isomonodromic February 2014 Zhicong Lin | Jiang Zeng Generalizing recent results of Egge and Mongelli, we show that each diagonal sequence of the Jacobi–Stirling numbers Jc(n,k;z) and JS(n,k;z) is a Pólya frequency sequence if and only if z∈[−1,1] February 2014 Fumihiko Nakano | Taizo Sadahiro We study a generalization of Holteʼs amazing matrix, the transition probability matrix of the Markov chains of the ‘carries’ in a non-standard numeration system. The stationary distributions are February 2014 Miklós Bóna We show that for every k, the probability that a randomly selected vertex of a random binary search tree on n nodes is at distance k−1 from the closest leaf converges to a rational constant ck as January 2014 William Y.C. Chen | Jeremy J.F. Guo | Larry X.W. Wang We introduce the notion of infinitely log-monotonic sequences. By establishing a connection between completely monotonic functions and infinitely log-monotonic sequences, we show that the January 2014 Hassane Kone In A. Tsang (2010) [21], a representation theorem was established for continuous valuations on Lp-spaces whose underlying measure is non-atomic. In this paper, we generalize the results of A. January 2014 Carolyn Chun | Dillon Mayhew | James Oxley Let M be an internally 4-connected binary matroid and N be an internally 4-connected proper minor of M. In our search for a splitter theorem for internally 4-connected binary matroids, we proved January 2014 Carolyn Chun | Dillon Mayhew | James Oxley In our quest to find a splitter theorem for internally 4-connected binary matroids, we proved in the preceding paper in this series that, except when M or its dual is a cubic Möbius or planar October 2013 Masao Ishikawa | Masahiko Ito | Soichi Okada A compound determinant identity for minors of rectangular matrices is established. Given an (s+n−1)×sn matrix A with s blocks of n columns, we consider minors of A by picking up in each block the October 2013 Antonio Giambruno | Mikhail Zaicev If L is a special Lie algebra over a field of characteristic zero, its sequence of codimensions is exponentially bounded. The PI-exponent measures the exponential rate of growth of such sequence October 2013 Bridget Eileen Tenner Mesh patterns are a generalization of classical permutation patterns that encompass classical, bivincular, Bruhat-restricted patterns, and some barred patterns. In this paper, we describe all mesh October 2013 Thorsten Holm | Peter Jørgensen | Martin Rubey We give a complete classification of torsion pairs in the cluster category of Dynkin type Dn, via a bijection to new combinatorial objects called Ptolemy diagrams of type D. For the latter we give October 2013 William Y.C. Chen | Oliver X.Q. Gao | Peter L. Guo A signed labeled forest is defined as a (plane) forest labeled by 1,2,…,n along with minus signs associated with some vertices. Signed labeled forests can be viewed as an extension of signed October 2013 Marc Chamberland | Armin Straub Convergent infinite products, indexed by all natural numbers, in which each factor is a rational function of the index, can always be evaluated in terms of finite products of gamma functions. This October 2013 Matthias Lenz We show that the f-vector of the matroid complex of a representable matroid is log-concave. This proves the representable case of a conjecture made by Mason in 1972.... September 2013 Patrick Devlin | Edinah K. Gnang We investigate using Sage [5] the special class of formulas made up of arbitrary but finite combinations of addition, multiplication, and exponentiation gates. The inputs to these formulas are September 2013 Zhi-Wei Sun The Franel numbers given by fn=∑k=0n(nk)3 (n=0,1,2,…) play important roles in both combinatorics and number theory. In this paper we initiate the systematic investigation of fundamental September 2013 Su-Ping Cui | Nancy S.S. Gu For a given prime p, by studying p-dissection identities for Ramanujanʼs theta functions ψ(q) and f(−q), we derive infinite families of congruences modulo 2 for some ℓ-regular partition functions, September 2013 Jayant Shah Analyzing shape manifolds as Riemannian manifolds has been shown to be an effective technique for understanding their geometry. Riemannian metrics of the types H0 and H1 on the space of planar
{"url":"http://www.journals.elsevier.com/advances-in-applied-mathematics/recent-articles/","timestamp":"2014-04-19T04:28:57Z","content_type":null,"content_length":"95985","record_id":"<urn:uuid:7916f7d4-5772-4547-b6f8-8100ba90dbbd>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00038-ip-10-147-4-33.ec2.internal.warc.gz"}
Quantum physicists simulate Dirac equation, one of the cornerstones in physics The latest news from academia, regulators research labs and other things of interest Posted: January 6, 2010 Quantum physicists simulate Dirac equation, one of the cornerstones in physics (Nanowerk News) Researchers of the Institute for Quantum Optics and Quantum Information (IQOQI) in Innsbruck, Austria, used a calcium ion to simulate a relativistic quantum particle, demonstrating a phenomenon that has not been directly observable so far: the Zitterbewegung. They have published their findings in the current issue of the journal Nature. In the 1920s quantum mechanics was already established and in 1928 the British physicist Paul Dirac showed that this theory can be merged with special relativity postulated by Albert Einstein. Dirac's work made quantum physics applicable to relativistic particles, which move at a speed that is comparable to the speed of light. The Dirac equation forms the basis for groundbreaking new insights, e.g. it provides a natural description of the electron spin and predicts that each particle also has its antiparticle (anti matter). In 1930, as a result of the analysis of the Dirac equation, the Austrian Nobel laureate Erwin Schrödinger first postulated the existence of a so called Zitterbewegung (quivering motion), a kind of fluctuation of the motion of a relativistic Rainer Blatt, Gerhard Kirchmair, Rene Gerritsma, Florian Zähringer and Christian Roos used a calcium ion to simulate a relativistic quantum particle, demonstrating a phenomenon that has not been directly observable so far: the Zitterbewegung. "According to the Dirac equation such a particle does not move in a linear fashion in a vacuum but 'jitters' in all three dimensions," Christian Roos from the Institute for Quantum Optics and Quantum Information (IQOQI) of the Austrian Academy of Sciences (ÖAW) explains. "It is not clear whether this Zitterbewegung can be observed in real systems in nature." Quantum simulation of a particle Physical phenomena are often described by equations, which may be too complicated to solve. In this case, researchers use computer simulations to answer open questions. However, even for small quantum systems, classical computers have not enough power to manage the processing of the data; thus, scientists, such as Richard Feynman, proposed to simulate these phenomena in other quantum systems experimentally. The preconditions for doing this – detailed knowledge about the physics of these systems and an excellent control over the technology and set-up - have been set by the research group headed by Rainer Blatt by conducting experiments with quantum computers over the last few years; they are now able to carry out quantum simulations experimentally. "The challenges with these experiments are to recreate the equations in the quantum system well, to have a high level of control over the various parameters and to measure the results," Christian Roos says. The experimental physicists of the IQOQI trapped and cooled a calcium ion and in this well-defined state, a laser coupled the state of the particle and the state of the relativistic particle to be simulated. "Our quantum system was now set to behave like a free relativistic quantum particle that follows the laws of the Dirac equation," Rene Gerritsma explains, a Dutch Postdoc working at the IQOQI and first author of the work published in Nature. Measurements revealed the features of the simulated particle. "Thereby, we were able to demonstrate Zitterbewegung in the experimental simulation and we were also able to determine the probability of the distribution of a particle," Gerritsma says. In this very small quantum system the physicist simulated the Dirac equation only in one spatial dimension. "This simulation was a proof-of-principle experiment," Roos says, "which, in principle, can also be applied to three-dimensional dynamics if the technological set-up is adjusted accordingly." Simulation of antiparticles Due to the extremely high level of control over the physical regime of the simulated particle, the scientists were able to modify the mass of the object and to simulate antiparticles. "In the end, our approach was very simple but you have to come up with the idea first," says Christian Roos, whose team of scientists was inspired by a theoretical proposal of a Spanish group of researchers. The work was supported by the Austrian Science Funds (FWF) and the European Commission. Subscribe to a free copy of one of our daily Nanowerk Newsletter Email Digests with a compilation of all of the day's news.
{"url":"http://www.nanowerk.com/news/newsid=14241.php","timestamp":"2014-04-21T12:15:22Z","content_type":null,"content_length":"38359","record_id":"<urn:uuid:1a93650b-a4c1-455c-bcdd-bf7479511ad8>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00362-ip-10-147-4-33.ec2.internal.warc.gz"}
Marcus Hook Math Tutor Find a Marcus Hook Math Tutor ...Thanks for visiting my page, and best of luck!Scored 780/800 on SAT Math in high school and 800/800 on January 26, 2013 test. Routinely score 800/800 on practice tests. Taught high school math and have extensive experience tutoring in SAT Math. 19 Subjects: including algebra 1, algebra 2, calculus, geometry ...I taught Prealgebra with a national tutoring chain for five years. I have taught Prealgebra as a private tutor since 2001. I completed math classes at the university level through advanced 12 Subjects: including geometry, logic, algebra 1, algebra 2 ...However, I am preparing to become a math teacher in the near future, so I'm also proficient in math and sciences. Currently, I am working as a substitute teacher throughout New Castle County. I've had the pleasure of working with students from pre-school through high school. 23 Subjects: including SAT math, ACT Math, probability, writing I have a Ph.D. in particle physics from Duke University, but what I have always loved to do most is teach. I taught introductory and intermediate physics classes at New College, Duke University and RPI. Some years ago I started to tutor one-on-one and have found that, more than classroom instruction, it allows me to tailor my teaching to students' individual needs. 21 Subjects: including algebra 1, algebra 2, calculus, SAT math ...I have a lot of experience working with children who learn differently and have unique needs when it comes to academics. I completed my MEd in 2010 in Special Education. I am certified in PA to teach Special Education grades PK-12. 20 Subjects: including SAT math, dyslexia, geometry, algebra 1 Nearby Cities With Math Tutor Aston Math Tutors Brookhaven, PA Math Tutors Chester Township, PA Math Tutors Chester, PA Math Tutors Claymont Math Tutors Collingdale, PA Math Tutors Eddystone, PA Math Tutors Garnet Valley, PA Math Tutors Linwood, PA Math Tutors Logan Township, NJ Math Tutors Lower Chichester, PA Math Tutors Parkside, PA Math Tutors Trainer, PA Math Tutors Upland, PA Math Tutors Yeadon, PA Math Tutors
{"url":"http://www.purplemath.com/marcus_hook_pa_math_tutors.php","timestamp":"2014-04-16T13:34:28Z","content_type":null,"content_length":"23753","record_id":"<urn:uuid:7da50d59-7db8-4851-9eb3-9d6799245fe5>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00264-ip-10-147-4-33.ec2.internal.warc.gz"}
Undergraduate Course Descriptions Unless otherwise noted, all courses are 3 credits. MATH 1060 Basic Mathematics with Algebra This course covers the arithmetic of whole numbers, signed numbers, fractions, decimals and percents, its primary coverage is polynomial arithmetic, algebraic expressions, factoring, solving equations (linear and quadratic) with applications and graphing. Note: Credits for this Basic Skills course are not applicable toward degree requirements. Prerequisite: None Successful completion of Math Basic Skills requirement is necessary for all the following courses. Successful completion of Math Basic Skills Requirements means obtaining a score of 20 on Basic Skills Math test or grade P in Math 1060. MATH 1090 Mathematical Concepts (This is a UCC - Area 3E course.) This course is intended to provide a wide ranging exposure to mathematical ideas expected of a liberal arts undergraduate. Topics include: Voting, Fair division, Apportionment, Graphs and Networks, Consumer Finance, Statistics and Probability. The course is designed for students not majoring in business, the sciences or math. MATH 1100 Contemporary Mathematics (This is a UCC - Area 3E course.) This course is intended to provide a wide ranging exposure to mathematical ideas expected of a liberal arts undergraduate. Topics include Sets, Logic, Statistics, Probability, Number Systems and Problem Solving. The course is designed for students not majoring in business, the sciences or math. MATH 1110 Algebra and Geometry with Applications This is a course with emphasis on studying practical problems with mathematical models. Topics include: Problem solving, number theory, introduction to functions and modeling, systems of equations and matrices, exponential and logarithmic functions, linear inequalities in two variables and geometry. Prerequisite: Math 1100 MATH 1150 College Algebra A comprehensive study of algebraic functions and their properties. Topics include the real numbers system, exponents and radicals, solving equations and inequalities, functions and their graphs, polynomial functions and rational functions. MATH 1160 Precalculus (This is a UCC - Area 3E course.) A comprehensive study of exponential, logarithmic and trigonometric functions. Topics include function properties, exponential, logarithmic and trigonometric functions (their properties and graphs), solving exponential and logarithmic equations, trigonometric functions (their properties and graphs), trigonometric identities and solving trigonometric equations. Prerequisite: MATH 1150 or by placement with permission from the Department Chairperson. MATH 1170 Business Math (This is a UCC - Area 3E course.) A study of algebraic and transcendental functions, including their properties and graphs with a focus on applications to business. Topics include algebraic fundamentals, equations and inequalities, polynomial functions and graphs, exponential and logarithmic functions and mathematics of finance. MATH 1300 Elementary Statistics (This is a UCC - Area 3E course.) This course studies the development of statistical concepts with applications to various disciplines. Topics include descriptive and inferential statistical techniques. The latter are explained in terms of concepts from probability theory such as normal distribution, t-distribution, sampling theory, estimation, confidence intervals, hypothesis testing, t-test, Chi square test, analysis of variance and regression and correlation. The software package SPSS is used to perform statistical analysis. Not open to science or mathematics majors. MATH 1350 Algebra, Trigonometry, and Functions (4 credits) (This is a UCC - Area 3E course.) A comprehensive study of algebraic and elementary transcendental functions. Topics include: the real number system, solving equations and inequalities, function properties, algebraic functions and their graphs, exponential and logarithmic functions (their properties and graphs), solving exponential and logarithmic equations, trigonometric functions (their properties and graphs) trigonometric identities and solving trigonometric equations. Students may be admitted into the course based on the results of a placement test. MATH 1400 Quantitative Mathematics I An introduction to functions, equations, matrix algebra, linear programming, and mathematics of finance. Topics include Equations and Inequalities, Functions and Graphs, Matrix Algebra, Linear Programming: Graphical Analysis as well as the Simplex Method, and Mathematics of Finance, Markov Chains (optional) MATH 1450 Quantitative Mathematics II This course covers essential ideas of the calculus: functions, limits, continuity, differentiation and applications, antiderivatives and definite integrals. Business applications are stressed. Trigonometry is not required. Prerequisite: MATH 1400 MATH 1600 Calculus I (4 credits) (This is a UCC - Area 3E course.) Limit and continuity of functions, the Intermediate Value Theorem, derivatives, differentiation rules, Rolle's theorem and the Mean Value Theorem, applications of differentiation, antiderivatives, definite integrals and the Fundamental Theorem of Calculus. Prerequisite: Math 1160 or Math 1350 or by placement. MATH 1610 Calculus II (4 credits) Indefinite and definite integrals and their estimation, techniques of integration., improper integrals, L'Hospital's Rule, applications of integration, infinite series, power series and introduction to taylor polynomials and approximations. Prerequisite: MATH 1600 MATH 2000 Logic and Methods of Higher Mathematics An introduction to rigorous reasoning through logical and intuitive thinking. The course will provide logical and rigorous mathematical background for study of advanced math courses. Students will be introduced to investigating, developing, conjecturing, proving and disproving mathematical results. Topics include formal logic, set theory, proofs, mathematical induction, functions, partial ordering, relations and the integers. Prerequisite: MATH 1600 MATH 2010 Calculus III (4 credits) Study of vectors and the Geometry of Space; vector valued functions, differentiation and integration of vector-valued functions; calculus of functions of several variables including partial differentiation and multiple integrals; higher order derivatives and their applications; Vector Fields; Line and Surface integrals. Prerequisite: MATH 1610 MATH 2020 Linear Algebra An introductory course in the theory of linear transformations and vector spaces. Topics include: systems of equations, matrices, determinants, general vector spaces, inner product spaces, eigenvalues and eigenvectors. Prerequisite: MATH 1610 MATH 2120 Survey of Mathematics This course surveys number theory, graph theory and combinatorics, and the history of mathematics. Prerequisite: MATH 1610 MATH 2300 Statistics (4 credits) A rigorous course for math and science majors covering: measures of central tendency, measures of variation, graphical techniques for univariate and bivariate data, correlation and regression, probability, binomial and normal distributions, estimation, confidence interval, testing of hypothesis, contingency tables, analysis of variance, nonparametric methods; use of packages such as SAS, Minitab , etc. is strongly emphasized. Prerequisite: None MATH 3010 Modern Algebra An introduction to groups, isomorphisms, rings, integral domains, fields and polynomial rings. Emphasis is placed on the development of theorems and techniques of proofs using definitions and Prerequisite: MATH 2000 or CS 2600 MATH 3110 Number Theory This is an introductory course in Number Theory for students interested in mathematics and the teaching of mathematics. The course covers basic notions of integers and sequences, divisibility, and mathematical induction. It also covers standard topics such as Prime Numbers; the Fundamental Theorem of Arithmetic; Euclidean Algorithm; the Diophantine Equations; Congruence Equations and their Applications (e.g. Fermat’s Little Theorem); Multiplicative Functions (e.g. Euler’s Phi Function); Application to Encryption and Decryption of Text; The Law of Quadratic Reciprocity. Prerequisite: MATH 2000 MATH 3220 Differential Equations A study of the methods of solution and applications of ordinary differential equations. Topics include: first and second order equations, existence and uniqueness of solutions, separation of variables, exact equations, integrating factors, linear equations, undetermined coefficients, variation of parameters, transform methods, series solutions, systems of equations and elementary numerical methods. Prerequisite: MATH 1610 MATH 3230 Foundations of Geometry Foundations of Geometry presents the different axiomatic approaches to the study of geometry with specific applications to finite, Euclidean, and non-Euclidean geometries with extensive use of constructions to explore ideas, properties, and relationships. Technology will be used throughout the course to encourage these open-ended explorations. The role of different types of proofs will be developed throughout the course. Prerequisites: MATH 1610 and (MATH 2000 or CS 2600) MATH 3240 Probability and Statistics (4 credits) A mathematical treatment of probability as well as statistics. Topics include probability axioms, discrete and continuous sample spaces, random variables, mathematical expectation, probability functions; basic discrete and continuous distribution functions; multivariate random variables. Also covered is Central Limit Theorem, confidence intervals, hypotheses testing and Linear regression. Software such as SAS or Minitab may be used for hypotheses testing and regression problems. Prerequisite: MATH 1610 MATH 3260 Mathematical Models in Finance and Interest Theory A course on the formulation, analysis, and interpretation of advanced mathematical models in finance and interest theory. Computers and technology will be used to give students a hands-on experience in developing and solving their own models. Applications to “real-world” problems in interest theory, including the development of complex annuity models, will be emphasized. The course will cover the fundamentals needed for the second actuarial exam. The primary focus will be on the financial models. Prerequisite: MATH 1610 MATH 3320 Statistical Computing Students solve statistical problems on the computer with the help of statistical packages, such as SAS, BMD, Mystat, etc., and learn to interpret the outputs and draw inferences. Topics include analysis of variance with and without interactions, correlation and regression analysis, general linear models, multiple comparisons and analysis of contingency tables. Prerequisite: MATH 3240 MATH 3340 Applied Regression Analysis This is a comprehensive treatment of regression analysis course, statistical topics including: simple linear regression, least square estimates, ANOVA table, F-test, R-square, multiple regression, using dummy variables, selections of the “best subset” of predictor variables, checking model assumptions and Logistic regression. The computer package, SAS, will be used through out the course and applications to real life data will be an integral part of the course. Prerequisite: MATH 3240 MATH 3350 Introduction to Numerical Analysis Treatment of numerical methods including numerical integration, numerical solution of equations and systems of equations, approximation of functions, numerical solution of differential equations, applications and computer implementation of numerical methods. Prerequisite: MATH 2020 MATH 3800 Linear and Nonlinear Optimization Iterative Algorithms, Optimization process and Linear Programming (LP), including the Graphical method and Simplex method. Duality and Sensitivity analysis, LP applications in business and health. Nonlinear Unconstrained problems and various Descent methods. Nonlinear Constrained optimization, including Primal, Penalty, and Barrier methods. Prerequisite: MATH 2020 MATH 3990 Selected Topics (1- 3 credits) A topic not covered by an existing course is offered as recommended by the department and approved by the dean. The number of credits for MATH 3990 may vary from 1 to 3 for a selected topic. MATH 3990 can not be credited more than twice, each on a different topic, towards degree requirements. Prerequisite: Department Chairperson's permission MATH 4010 Applied Algebra A course covering applications of Modern Algebra. Topics include Boolean algebras and applications to switching theory; symmetry groups in three dimensions, monoids and machines, applications of rings, fields, and Galois theory including error-correcting codes. Prerequisite: MATH 3010 MATH 4110 Advanced Discrete Mathematics This is an advanced course in discrete mathematics primarily dealing with discrete dynamical systems, algorithms, combinatorics and Graph Theory. Emphasis is placed on complexity of algorithms, on existence and optimization problems in Graph Theory and on associated algorithms. Prerequisite: MATH 2020 or CS 2600 MATH 4120 Time Series Analysis This is an applied statistical methods course in time series modeling of empirical data observed over time. Prerequisite: MATH 3340 or (MATH 3240 and instructor's permission) MATH 4130 Experimental Design for Statistics For processes of any kind that have measurable inputs and outputs, Design of Experiments (DOE) methods guide you in the optimum selection of inputs for experiments, and in the analysis of results. Full factorial as well as fractional factorial designs are covered – see the course outline for additional details. Software such as SAS or S-Plus will be used for testing and regression problems. Prerequisite: MATH 3240 with at least a C- or the approval by the chairperson of Math Dept MATH 4150 Topics from Applied Mathematics The objective of this course is to give the student an understanding and appreciation of applied mathematics. This will be accomplished by working through a variety of problems from the physical sciences. Emphasis will be on modeling scientific phenomena rather than developing mathematical methods. Basic ideas and concepts will first be illustrated on simple problems, and they will eventually be extended to more complicated systems. Prerequisite: MATH 3220 MATH 4210 Mathematical Statistics A theoretical treatment of statistical topics including: distribution theory, sampling, point and interval estimation, methods of estimation such as maximum likelihood estimation, properties of estimators, Neyman-Pearson Lemma, hypothesis testing, power of a test, and linear models. Prerequisite: MATH 3240 MATH 4220 Complex Analysis Elements of complex analysis. Topics include: complex numbers, analytic functions, Cauchy integral theorem, Cauchy integral formula, power series and conformal mapping. Prerequisite: MATH 2010 MATH 4230 Real Analysis A rigorous approach to the theory of functions of real variables. Topics include: metric spaces, sets, limits, sequences, continuity, uniform continuity, differentiation, Riemann integration, sequences and series of functions, and Riemann-Stieltjes integral. Prerequisite: MATH 2010 MATH 4250 Introduction to Topology This course uses the concepts of set theory, analysis and group theory in a rigorous manner to explore the fundamental mathematical definitions of continuity, homeomorphism, metrics, product topology, compactness, connectedness, the fundamental group and homotopy theory. Prerequisite: MATH 3010 MATH 4900 Mathematics Seminar (2 credits) (This is a Writing Intensive-WI course.) This is a required course for all mathematics majors and should be taken, if possible, in the junior year. The course will be led by a faculty member and conducted in an inquiry based fashion, with coverage of topics determined by the interests of the student and faculty. Each student will complete a project of study in an area of mathematics or it applications, culminating in a presentation to the faculty and students, and a final paper submitted to the faculty advisor. Prerequisite: Must be registered in a 4000-level mathematics course, or must have successfully completed a 4000-level course. Math 4900 registration form may be downloaded from here. MATH 4990 Independent Study (1- 3 credits) An individual research project under the direction of a faculty member and with the approval of the chairperson. The number of credits for each independent study may vary from 1 to 3 per semester, up to a limit of 6 credits.
{"url":"http://www.wpunj.edu/cosh/departments/math/courses/coursedescriptionsF2011.dot","timestamp":"2014-04-20T23:29:38Z","content_type":null,"content_length":"74315","record_id":"<urn:uuid:d2df0e1c-dbde-4c7a-87d2-def3dcef514e>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00156-ip-10-147-4-33.ec2.internal.warc.gz"}
January 2012 A surprising amount of what we do in computer security relies upon the use of random numbers and yet not many of us actually take the time to think about how these numbers are being generated. We blithely assume that our computer, when required, can generate a truly random number. But wait! How can a machine that works in a deterministic manner generate something that is truly random. The simple answer is that it can't. The best we can do is to generate a number that is The fact that we rely upon pseudorandom numbers is a potential problem for IT security. After all, if a machine is using a known algorithm to generate a number that your system then treats as random, what is to stop an attacker from calculating that same number if he knows your algorithm. It is a fundamental truth of any strong computer security that you must assume that an attacker knows the algorithms that you have used. So, perhaps one should spend a little more time understanding how pseudorandom numbers are generated in order that they are not relied upon inappropriately. The measure of random in a number is known as . Not the entropy as physicists use it, but entropy as cryptographers use it. Entropy is, in essence, how uncertain you are about a number. This means that the entropy is not necessarily related to the number of bits in a number but instead to the number of possible values that number could have taken. Imagine being able to discard the number of bits in a number about which you were certain. The number of bits that remain are the number of bits of entropy . Suppose, for example, a 100 bit number could take on two possible values, then you only need one bit to differentiate those two values. Hence, such a number would be described as having 1bit of entropy. Obviously this is an extreme case, and I have not complicated the issue by factoring in the different types of probability distribution. Most mathematicians tend to define the entropy of a number X as: H(X) ≔ − ∑P(X=x) log₂P(X=x) where the summation is from 0 to a number x, and P(X=x) is the probability of X taking the value x. So, how do you generate a number that has to greatest possible entropy. There has been much research on using activities such as mouse movements or keystrokes to generate a number that can be But so far it appears that the activities picked to generate the numbers have proved to be not quite as random as first thought: some typists are remarkably accurate and consistent! Other work has focused on incorporating some form of physical device into a computer that can generate truly random numbers based upon provably behaviour. This has two drawbacks: generating large random numbers can take an unacceptably long time, and the device might fail rendering the numbers predictable, without you knowing. Which brings us back to the generation of pseudorandom numbers. Most accept that this is the best method of generating numbers with sufficient entropy, whilst making the mechanism efficient and accessible enough for widespread use in computing. There are many algorithms in the literature but I would recommend an approach such as described in Fortuna which was an improvement by Ferguson, Schneier and Koho on a previous method called Yarrow. Fortuna, like most approaches, relies upon a which is a relatively small but truly random number. But, it allows for the use of modern encryption techniques such as AES which mean that although the algorithm is known it is highly unlikely that the resulting pseudorandom number sequence can be reproduced. What Fortuna shows is that rather than rely upon standard libraries on their own, you construct a routine that, yes, uses standard libraries, but recognises the potential attacks against those and seeks to minimise the threat by incorporating elements such as AES and SHA thereby reserving some secret information to the local machine, decreasing the chances that a third party could reproduce your number(s). So, in short, the best way to use random numbers is by generating pseudrandom numbers but in such a way that "unreliable" sources of randomness are replaced by effectively creating your own store of entropy upon which you can draw and keeping the means by which that store is generated secret by virtue of a secret key. For a superb, free book that goes into great depth about random number generation, see Luc Devroye 's book (originally published with Springer-Verlag, New York, 1986). One of the perennial problems with symmetric-key encryption has been establishing a secure channel over which to pass the secret key. It has always been much easier to compromise the transmission of the keys than to try to find some weakness in the encryption algorithm. Military and government organisations have put in place elaborate methods of passing secret keys: they pass secrets more generally so using similar channels to pass an encryption key is not a great leap. However, as everyone has become more connected, and especially with the commercialisation of the Internet, encryption has become a requirement for the vast majority of networked users. Thus, the traditional methods of passing secret keys is impractical, if only because you might not actually know who you want to communicate with in an encrypted fashion. Diffie-Helman Key Exchange It was realised in the 1970’s that encryption needed to be more accessible, so a great deal of work was done on algorithms that could ensure that a key had been passed over relatively insecure channels was not compromised. Leaders in the field were Whifield Diffie and Martin Helman. In doing this work, it was realised that there was another way of tackling the problem. In the late 1970’s Diffie and Helman published their seminal paper entitled “New Directions in Cryptography” and public-key encryption was born. When the implementation of the early public-key encryption methods was compared to symmetric-key encryption it was found that the public-key encryption was significantly slower and hence communication using public-key encryption alone was, at the very least, going to require far more processing power than would otherwise have been the case. But wait! The original problem being studied was secure key transmission so why not use public-key encryption to securely transmit the symmetric-key, and then use the faster, more efficient symmetric-key encryption algorithms. Iin essence, that is how most so called public-key encryption works today. The majority of public-key encryption algorithms rely upon a mathematical subtlety. One of the earliest was based upon prime numbers. If one has a number (N) that is derived from multiplying two prime numbers, and N is very large, it is practically impossible to calculate what the two constituent prime numbers were (known as factorisation). It’s not that you can’t calculate the two prime numbers. There are many algorithms for doing so. It is that it takes such a long time to successfully compute the algorithm (sometimes thousands of years) that by the time you finish, the information you can recover is worthless. Even with the huge budgets available to governments, it is possible to reduce these timescales only marginally. Public-key algorithms therefore are able to use one of the prime number constituents of N as part of the public-key element of the encryption process without fearing that the other might be discovered. It’s a fact of life that even with the massive increases in computing power over recent years, traditional algorithms for factorising these large numbers means public-key encryption is still quite secure. Of course, “secure” is a relative word and there are many ways of recovering the private element that was used to derive N. These do not involve some fiendishly clever mathematical method (although a small army of mathematicians are seeking to do this) but rather simple methods such as accessing the PC that holds the private element! As ever, the weakest link determines the strength of the chain. Having said that, there is now an emerging threat the public-key encryption which does not rely upon such trivial methods: Quantum Computing, a field first introduced by Richard Feynman in 1982. To understand why Quantum Computing poses a threat one first needs to understand a little about how it works. In the computers that we are familiar with, processing is done using bits. A bit has two possible values: 0 and 1. A quantum computer uses qubits. A qubit can also have the values 0 and 1, but also any combination of the two simultaneously. In quantum physics this is called superposition. So, if you have 2 qubits you can have 4 possible states, 3 qubits gives 8 possible states; and all simultaneously. In a bit-based computer you have the same number of possible states but only one exists at any one time. It is the fact that these states can exist simultaneously in a quantum computer which is both counter-intuitive and extraordinarily powerful. These qubits are manipulated using quantum logic gates in the same way that conventional computation is done by manipulating bits using logic gates. The detailed maths is not important here. Suffice to say that you can operate on multiple simultaneous states, thereby increasing the amount of computation you can undertake over what you could otherwise do in a conventional computer. So, if you had a 2 qubit quantum computer you could theoretically compute at 4 times the speed of a 2 bit conventional computer. There are a few problems though, in translating theory into practice. First, the algorithms developed over many years for conventional computers have been optimised for their architecture, and different algorithms are needed to run a quantum computer. Hence, trying to compare speeds of conventional and quantum computers is like comparing apples with bananas. However, one of the earliest algorithms developed for quantum computing was by Peter Shor. Shor’s 1994 algorithm was specifically designed to factorise numbers into their prime number components. Shor’s algorithm runs on a quantum computer in polynomial time, which means that for a number N it takes only logN to successfully complete the algorithm. Even with the largest numbers in use today in public-key encryption, that means that it is perfectly feasible to factorise the number in meaningful Since Peter Shor developed his quantum computing algorithm many others have been developed, and it is worth noting that a significant number of these algorithms are aimed at breaking the underlying mathematics that supports more recent public-key encryption. The fact that methods other than the use of prime numbers have been developed is being very quickly followed by their quantum computing algorithm counterpart. A Young Heisenberg Second, there is the much quoted, but less well understood, Heisenberg’s Uncertainty Principle. The bottom line is that this principle tells us that if you observe something then you affect it. Hence, measuring a qubit causes it to adopt one state but that state might not have been the state resulting from the computation but could have been altered by you observing it. So, the moment you try to measure the answer calculated by your superfast quantum computer it loses its ability to give you the correct answer. That would seem to render quantum computing rather pointless. But, there is another quantum effect that can be employed: quantum entanglement. This is where, if two objects have previously interacted, they stay “connected” such that by studying the state of one object you can determine the state of the other. Hence, you can determine the state of the qubit with your answer by studying another object with which it is entangled. Again, this is counter-intuitive, but has been proven, so all one really needs to know is that there is a way of getting your answer out of your quantum computer. Quantum Computer On An Optical Bench Lastly, there is a small matter of implementation. Until recently this was a show stopper. Typically quantum computers are based on light, as photons are relatively easy to work with and measure. However, anyone who has seen an optical bench will know that they are enormous. They require great weight to prevent even the slightest vibration and they are prone to all manner of environmental factors. Hardly the small “chips” we are used to! Also, typically these implementations are aimed at running one algorithm: the algorithm is built into the design. Having said that it wasn’t that long ago that conventional computers with far less processing power than the average PC required air conditioned luxury and small army of attendants to keep them functioning. It did not take very long for the size to shrink dramatically and for the environmental requirements to be quite relaxed. Not surprising then that there is a company in Canada (D Wave Systems Inc) who already offer access to quantum based computing. It’s expensive and the size of a room, but we all know that won’t be the case for long. D-Wave's Bold Statement On Their Website 2011 bought about some major developments which might well make 2012 the year quantum computing comes of age. Most significant of these was by a team at Bristol University, who developed a small chip which housed a quantum computer. It had only 2 qubits but it was of a size that could be used for the mass market, and, crucially, it was programmable. We are now entering a new era where we have programmable, relatively inexpensive, relatively small, quantum computers visible on the horizon, and we know that such computers have the potential to undermine the mathematics upon which current public-key encryption depends. Given all of that, maybe it’s time to reconsider the role of public-key encryption and where it might be more sensible to rely wholly on symmetric-key encryption. Here is a brief video we produced at the University of Surrey to introduce cyber security, and the work we are doing there: If, like me, you use Facebook, Twitter, LinkedIn (and a whole host of other social media sites) then, again like me, you might have wondered how people actually read your pages. I've long wondered how much of the page visitors read. Especially so after I read the statistics about how little time visitors spend on each page on the web. In a matter of a few seconds, what information is the reader really scanning? I've seen a lot of very impressive research being done on the subject and many erudite papers being published. All of this is backed by a great deal of experimentation which involves complex equipment and analysis. Hence, when I saw the product being offered by Eyetrack I thought that it represented a fun, accessible way that everyone can answer the question about their own sites. Using a webcam, plus a bit of calibartion, the software will track the eyes of a reader to a specific page. The data collected can then be turned into a report. Not some great long winded piece of statistics but an easy to understand visual map of how the visitor uses your page. You might not be surprised to learn that the item most Facebook visitors look are the faces. In addition to how much time the user is spending on each part of the page and the corollary, what they don't read, you can also track how a visitor visually moves around the page. And, it's not the same for each type of social networking site. Take, for example, LinkedIn (below) and compare it with Facebook (above): It would appear that visitors do actually read the text on LinkedIn, but very much at the "headline" level. All those extra bits of infromation added by LinkedIn to the right hand pane about who esle was viewed, etc seem to go to waste. And then, of course, there's Twitter: Are viewers reading the first one or two words of a Tweet only? That's all the time you have to capture their attention! But compare Twitter with Facebook: it does appear to be text not photos being scanned in Twitter. One very interesting area that is worth studying is the deviation in viewing patterns caused by the client that is being used to view the page. With so many apps now available for viewing each of the social networking sites, it might be interesting to see if the viewing pattern is in any way governed by the way it is presented. Twitter in particular has a large variety of clients, so how does that affect the viewing: So, thanks to Eyetrack for enabling us all to do a bit of research that is both fun and produces useful results.
{"url":"http://www.profwoodward.org/2012_01_01_archive.html","timestamp":"2014-04-16T10:32:57Z","content_type":null,"content_length":"104162","record_id":"<urn:uuid:8e45f85b-0397-4850-bd48-e90547507782>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00024-ip-10-147-4-33.ec2.internal.warc.gz"}
Methods for designing submerged breakwaters are still being developed, particularly in respect of the 3D nature of wave-breakwater interaction. Many of the available design tools are inefficient as they are not able to provide any information on the spatial distribution of the wave field around breakwaters, and cannot therefore guarantee reliability and accuracy for the engineer. There is thus a need for an engineering design tool with the ability to model spatial variation of wave height. This paper proposes a method based on machine learning algorithms for predicting the nearshore wave field behind a submerged breakwater that includes both 2D and 3D effects. The proposed numerical model has been validated by various scales of laboratory data. Comparisons reveal the ability of the proposed model to predict the wave field around submerged breakwater. Submerged Breakwaters; Wave field; 3D; Wave transmission; Artificial Neural Networks; Numerical modeling Adams, C.B. and C.J. Sonu, 1986. Wave Transmission Across Submerged Near-Surface Breakwaters. In ASCE, pp. 1729–1738. Bellotti, G., 2004. A simplified model of rip currents systems around discontinuous submerged barriers. Coastal Engineering, 51 (4), pp.323–335.http://dx.doi.org/10.1016/j.coastaleng.2004.04.001 Briganti R, J.W. Van der Meer, M. Buccino, M. Calabrese, 2003. Wave transmission behind low crested structures. Proc 3rd Coastal Structures Conference. Buccino M., D.I. Vicinanza, C.M. Caceres, 2009. 3D wave field behind impermeable low crested structures. Journal of Coastal Research, 56, pp.477–481. Buccino, M and M. Calabrese, 2007. Conceptual approach for prediction of wave transmission at lowcrested breakwaters. Journal of Waterway, Port, Coastal and Ocean Engineering, 133(3), pp.213– 224. Caceres I., M.J.F. Stive, A. Sanchez-Arcilla, L.H. Trung, 2008., Quantifcation of changes in current intensities induced by wave overtopping around low crested structures. Coastal Engineering, 55, d'Angremond K., J.W. Van der Meer, R.J. De Jong, 1996. Wave transmission at low-crested structures. Proceedings of the 22th International Conference on Coastal Engineering Orlando, FL, USA, Goda Y., and J.P. Ahrens, 2008. New formulation of wave transmission over and through low-crested structures. Proceedings of the 31st International Conference of Coastal Engineering, 4. Goda, Y. and Y. Suzuki, 1976. Estimation of incident and reflected waves in random wave experiments. In Proc. 15th Int. Conf. on Coastal Engineering, ASCE. pp. 828–845. Hagan, M.T., H.B. Demuth, M.H. Beale, 1996. Neural network design, PWS Pub. Hanson, H. and N.C. Kraus, 1991. Numerical Simulation of Shoreline Change at Lorain, Ohio. Journal of Waterway, Port, Coastal, and Ocean Engineering, 117(1), pp.1–18.http://dx.doi.org/10.1061/(ASCE) Haykin, S., 1998. Neural Networks: A Comprehensive Foundation by Simon Haykin (1998, Hardcover, Subsequent Edition): A Comprehensive Foundation, Prentice Hall. Hur D.S., W.D. Lee. W.C. Cho, 2012. Three-dimensional flow characteristics around permeable submerged breakwaters with open inlet. Ocean Engineering, 44, pp.100-116.http://dx.doi.org/10.1016/ Johnson, H.K., T.V. Karambas, I. Avgeris, B. Zanuttigh, D. Gonzalez Marco, and I. Caceres, 2005. Modelling of waves and currents around submerged breakwaters. Journal of Coastal Engineering, 52, Johnson, J.W., R.A. Fuchs, and J.R. Morison, 1951. The damping action of submerged breakwaters. Transactions, American Geophysical Union, Vol. 32, No. 5, 704-718.http://dx.doi.org/10.1029/ Kambekar A.R., and Deo M.C. 2003, Estimation of pile group scour using neural networks, Applied Ocean Research, Elsevier, Oxford, UK, 25(4) 225-234. Kramer, M., B. Zanuttigh, J.W van der Meer, C. Vidal, F.X. Gironella, 2005. Laboratory experiments on low-crested breakwaters. Coastal Engineering. 52, 867-885.http://dx.doi.org/10.1016/ Losada, I.J., R. Silva, M.A. Losada, 1996. 3-D non-breaking regular wave interaction with submerged breakwaters. Coastal Engineering, 28, pp.229–248.http://dx.doi.org/10.1016/0378-3839(96)00019-1 Mase, H., M. Sakamoto, T. Sakai, 1995. Neural Network for Stability Analysis of Rubble-Mound Breakwaters. Journal of Waterway, Port, Coastal, and Ocean Engineering, 121(6), pp.294–299.http:// Medina, J.R., J.A., GonzÃlez-EscrivÃ, J.M.Garrido, J. De Rouck, 2002. Overtopping analysis using neural networks. Proceedings of the 28th International Conference on Coastal Engineering, ASCE, pp. Medina, J.R., 1999. Neural network modelling of runup and overtopping. ASCE Proc Coastal Structures Santander, 1, p.421-429. Panizzo, A, R. Briganti, 2007. Analysis of wave transmission behind low-crested breakwaters using neural networks. Coastal Engineering, 54(9), pp.643–656.http://dx.doi.org/10.1016/ Panizzo, A., R. Briganti, J.W. Van der Meer, L. Franco, 2003. Analysis of wave transmission behind low crested structures using neural networks, Proc. 4th Int. Coastal Structures Conference, Portland, Oregon. Rumelhart, D.E., G.E. Hinton, R.J. Williams, 1986. Learning representations by back-propagating errors., 323(6088), pp.533–536. Schlurmann, T., M. Bleck, and H. Oumeraci, 2002. Wave transformation at artificial reefs described by the Hilbert-Huang transformation. Proceedings of the 28th International Conference on Coastal Engineering, 2, pp.1791–1803. Seabrook, S.R., K.R. Hall, 1998. Wave transmission at submerged rubblemound breakwaters. Proceedings of the Coastal Engineering Conference, 2, pp.2000–2013. van der Meer, J.W., R. Briganti, B. Zanuttigh, and B. Wang, 2005. Wave transmission at low-crested structures, including oblique wave attack. Journal of Coastal Engineering, 52, 915-929.http:// van Gent, M.R.A., H.F.P. van den Boogaard, 1998. Neural network modelling of forces on vertical structures. Proceedings of the International Conference on Coastal Engineering, 1(26). pp.2096– 2109. Verhaeghe, H., J.D. Rouck and J. van der Meer, 2008, Combined classifier-quantifier model: A 2- phases neural model for prediction of wave overtopping at coastal structures, Coastal Engineering, Elsevier, 55 (2008) 357-374.http://dx.doi.org/10.1016/j.coastaleng.2007.12.002 Verhaeghe, H., 2005. Neural network prediction of wave overtopping at coastal structures. Ph.D. Thesis, Universiteit Gent, Gent, Belgium, ISBN 90-8578-018-7. Vicinanza D., I. Caceres, M. Buccino, X. Gironella. and M. Calabrese, 2009. Wave disturbance behind low-crested structures: Diffraction and overtopping effects. Coastal Engineering, 56, pp.1173– Wamsley, T.V., J.P. Ahrens, 2003. Computation of Wave Transmission Coefficients at Detached Breakwaters for Shoreline Response Modeling. ASCE Conference Proceedings, 147(40733), p.49. Werbos, P.J., 1988. Backpropagation: past and future. In Neural Networks, 1988., IEEE International Conference on. vol.1 pp. 343 –353. Zanuttigh, B., L. Martinelli, 2008. Transmission of wave energy at permeable low crested structures.Coastal Engineering, 55(12), pp.1135–1147.http://dx.doi.org/10.1016/j.coastaleng.2008.05.005 Full Text: This work is licensed under a Creative Commons Attribution 3.0 License
{"url":"http://journals.tdl.org/icce/index.php/icce/article/view/6706","timestamp":"2014-04-16T21:59:37Z","content_type":null,"content_length":"24521","record_id":"<urn:uuid:1abbe77f-2982-431f-8abe-bbcd897a8f07>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00237-ip-10-147-4-33.ec2.internal.warc.gz"}
question in pilot wave theory 1. Velocity v is continuous. 2. v is the velocity of the particle. (Note, however, that group velocity is a velocity of a wave.) 3. Particle. 4. Wave guides the particle in space. 5. Pilot wave theory is the original de Broglie matter wave theory. I would also suggest you to read some basics, such as
{"url":"http://www.physicsforums.com/showthread.php?p=4257762","timestamp":"2014-04-21T04:50:14Z","content_type":null,"content_length":"22956","record_id":"<urn:uuid:a3261b2f-1613-4099-9401-1e6e27953a73>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00027-ip-10-147-4-33.ec2.internal.warc.gz"}
Is the set of primes "translation-finite"? up vote 29 down vote favorite The definition in the title probably needs explaining. I should say that the question itself was an idea I had for someone else's undergraduate research project, but we decided early on it would be better for him to try adjacent and less technical questions. So it's not of importance for my own work per se, but I'd be interested to know if it easily reduces to a known conjecture/fact/ counterexample in number theory. Apologies if the question is too technical/localized/unappealing/bereft of schemes. Given a subset $X$ of the natural numbers $N$, and given $n \in N$, we write $X-n$ for the backward translate of $X$, i.e. the set {$\{x-n : x\in X\}$}. We say that $X$ is translation-finite if it has the following property: for every strictly increasing sequence n[1] < n[2] < in $N$, there exists k (possibly depending on the sequence) such that $(X-n_1) \cap (X-n_2) \cap \dots\cap (X-n_k)$ is finite or empty. Thus every finite set is trivially translation-finite: and if the elements of $X$ form a sequence in which the difference between successive terms tends to infinity, then $X$ is translation-finite and we can always take k=2. Moreover: • if $X$ contains an infinite arithmetic progression, or if it has positive (upper) Banach density, then it is NOT translation finite; • there exist translation-finite sets which, when enumerated as strictly increasing sequences, grow more slowly than any faster-than-liner function. • there exist translation-finite sets containing arbitrarily long arithmetic progressions. These resultlets suggest the question in the title, but I don't know enough about number theory to know if it's a reasonable question. Note that if, in the definition, we were to fix k first (i.e. there exists k such that for any sequence (n[j])...) then we would get something related to Hardy-Littlewood conjectures; but I was hoping that this might not be necessary to resolve the present EDIT (2nd Nov) It's been pointed out below that the question reduces in some sense to a pair of known, hard, open problems. More precisely: if the answer to the question is yes, then we disprove the Hardy-Littlewood k-tuples conjecture; if the answer is no, then there are infinitely many prime gaps bounded by some absolute constant, and this is thought to be beyond current techniques unless one assumes the Eliott-Halberstam conjecture. Added in 2013: Stefan Kohl points out that the latter is Yitang Zhang's famous recent result. However, as Will Sawin points out in comments, a negative answer to the main question would imply there are 3-tuple configurations occurring infinitely often in the primes, and (see thelink in Will's comment) this is thought to be out of reach even if we assume the EH conjecture holds. 1 What do you mean by "the elements of X form a sequence tending to infinity"? Does that imply something other than X being an infinite set? – Darsh Ranjan Oct 30 '09 at 0:07 aargh, typo/brain spasm - I missed out some important words. I meant that the gap size has to tend to infinity. I've edited the original post to correct this. – Yemon Choi Oct 30 '09 at 2:00 5 Yay for non-scheme questions! – Theo Johnson-Freyd Oct 30 '09 at 2:12 1 @Scott: makes sense to me. @Yemon: thanks for nicely summarizing th results in the question body! – Ilya Nikokoshev Nov 27 '09 at 23:52 1 Because the existence of any $3$-tuples which occurs infinitely often in primes is still out of reach, this is still open. See: mathoverflow.net/questions/132731/… – Will Sawin Jul 4 '13 at 16:26 show 1 more comment 1 Answer active oldest votes As you mention, this is related to the Hardy-Littlewood k-tuple conjecture. In particular, if their conjecture is true, then the primes are not translation-finite. Indeed, it is possible to find an increasing sequence n[1] < n[2] < n[3] < ⋯ so that for every k, the first k n[i]s form an admissible k-tuple. (For example, I think n[i] = (i+1)! works.) Then, by the k-tuple conjecture, infinitely many such prime constellations exist and so for all k, (X-n[1]) ∩ (X-n[2]) ∩ ⋯ ∩ (X-n[k]) is infinite. (Here and below, X is the set of primes.) However, maybe we can prove that the primes are not translation finite by some other means. Unfortunately, the technology is not quite good enough to do that. Proving that the primes up vote 19 are not translation finite would, in particular, prove that there exist n[1] < n[2] such that (X-n[1]) ∩ (X-n[2]) is infinite. In particular, this implies that the gap n[2]-n[1] occurs down vote infinitely often in primes, and so p[n+1]-p[n] is constant infinitely often. (The standard notation p[n] indicates the n^th prime.) The best known upper bound for the size of small gaps in primes is that liminf[n→∞] (p[n+1]-p[n])/log p[n] = 0. This was established by Goldston and Yildirim around 2003 and the proof was later simplified. To the best of my knowledge, the best conditional result is by the same authors; they show that given the Elliott-Halberstam conjecture, the prime gap is infinitely often at most 20 or so. Thanks! That's very clear. So, if I understand you correctly: a proof of HL's k-tuple conjecture would imply the primes are not translation finite; while a proof that the primes are not translation finite would imply a bound on prime gaps that is beyond current knowledge. So my question essentially reduces to known open problems I guess. – Yemon Choi Nov 1 '09 at 23:52 Which probably means it deserves a CW status + open-problems tag. – Ilya Nikokoshev Nov 2 '09 at 0:04 As well as rewriting the question so the two open problems become clear. – Ilya Nikokoshev Nov 2 '09 at 0:04 1 @Yemon Choi: That's correct. Proving the primes are translation finite would disprove the k-tuples conjecture, while proving the primes aren't translation finite would prove bounds on small prime gaps. – aorq Nov 2 '09 at 3:27 1 Why does the infinitude of $(X-n_1)\cap(X-n_2)$ imply that the gap $n_2-n_1$ occurs infinitely often? Couldn't some smaller gaps occur infinitely often instead? – François G. Dorais♦ Dec 27 '12 at 4:59 add comment Not the answer you're looking for? Browse other questions tagged nt.number-theory co.combinatorics analytic-number-theory fourier-analysis open-problem or ask your own question.
{"url":"http://mathoverflow.net/questions/3347/is-the-set-of-primes-translation-finite","timestamp":"2014-04-17T18:23:11Z","content_type":null,"content_length":"69300","record_id":"<urn:uuid:63e652c1-f093-4395-86f3-5178fea2457e>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00203-ip-10-147-4-33.ec2.internal.warc.gz"}
Reinforcement Learning A Survey Sponsored High Speed Downloads A. Gosavi Reinforcement Learning: A Tutorial Survey and Recent Advances Abhijit Gosavi Department of Engineering Management and Systems Engineering successful supervised learning. In fact, in sharp con-trast with supervised learning problems where only a single data-set needs to be collected, repeated inter- A Survey of Reinforcement Learning Literature Kaelbling, Littman, and Moore Sutton and Barto Russell and Norvig Presenter Prashant J. Doshi CS594: Optimal Decision Making Behavior Interview and Reinforcement Survey (Cont’d) Favorite Recreation and Leisure Reinforcers Read the following list of reinforcers to students, and check all that apply. Ask the student, Journal of Arti cial In telligence Researc h 4 (1996) 237-285 Submitted 9/95; published 5/96 Reinforcemen t Learning: A Surv ey Leslie P ac k Kaelbling 1 A Comprehensive Survey of Multi-Agent Reinforcement Learning Lucian Bus‚oniu, Robert Babu ska, Bart De Schutter AbstractŠMulti-agent systems are rapidly nding applications Survey Reinforcement Learning Victor Dolk September 6, 2010 Eindhoven University of ecThnology Department of Mechanical Engineering Den Dolech 2, 5600MB Eindhoven, Netherlands TRANSFER LEARNING FOR REINFORCEMENT LEARNING DOMAINS: A SURVEY A fourth measure of transfer efficacy is that of the ratio of areas define d by two learning curves. Subgoal Identifications in Reinforcement Learning: A Survey 185 states are the terminal conditions, and subtask policies aim for reaching the subgoals. A Survey of Reinforcement Learning and Agent-Based Approaches to Combinatorial Optimization Victor Miagkikh May 7, 2012 Abstract This paper is a literature review of evolutionary computations, reinforcement learn- a comprehensive survey of multiagent reinforcement learning by: busoniu, l., r. babuska, and b. de schutter leen-kiat soh, january 14, 2013 A Survey of RL in Relational Domains (Van Otterlo 2005) In this survey we take a reinforcement learning perspective, which means that we iden-tify various forms of relational MDPs and corresponding abstraction formalisms. 154 The main characteristics of the data stream model imply the following constraints: It is impossible to store all the data from the data stream. Multiagent Reinforcement Learning for Multi-Robot Systems: A Survey Erfu Yang and Dongbing Gu Department of Computer Science, University of Essex Delft University of Technology Delft Center for Systems and Control Technical report 07-019 A comprehensive survey of multi-agent reinforcement learning∗ REINFORCEMENT INVENTORIES FOR CHILDREN AND ADULTS _____ I N S T R U C T I O N S ... Learning a New Language 2. Taking Piano Lessons 3. Reading 4. Being Read to 5. Looking at Books Section 3 Data Sheets Page 37 of 49. Reinforcement Inventory for Children Preference-Based Reinforcement Learning: A preliminary survey ChristianWirthandJohannesFürnkranz KnowledgeEngineering,TechnischeUniversitätDarmstadt,Germany {cwirth,fuernkranz}@ke.tu-darmstadt.de Abstract. Preference-based reinforcement learning has gained signifi- Multi-Agent Reinforcement Learning: a critical survey YoavShoham RobPowers TrondGrenager ComputerScienceDepartment StanfordUniversity Stanford,CA94305 Reinforcement learning embraces the full complexity of these problems by requiring both interactive, sequential prediction as in imitation learning as well as complex reward Learning Reinforcement (2 Pages) 1 User Name (The email address you used when creating your Company Profile) 2 Do you have a formal program for reinforcing the learnings in your A Survey on Multiagent Reinforcement Learning Towards Multi-Robot Systems Erfu Yang University of Essex Wivenhoe Park, Colchester CO4 3SQ, Essex, United Kingdom A SURVAY OF REINFORCEMENT LEARNING METHODS IN THE WINDY AND CLIFF-AW LKING GRIDWORLDS Ryan J. Meuth Department of Electrical and Computer Engineering Approximate Reinforcement Learning: An Overview Lucian Bus¸oniu ∗, Damien Ernst†, Bart De Schutter , Robert Babuˇska ∗ ∗Delft Center for Systems & Control, Delft Univ. of Technology, Netherlands; GRONDMAN et al. : A SURVEY OF ACTOR-CRITIC REINFORCEMENT LEARNING: STANDARD AND N ATURAL POLICY GRADIENTS 3 with u drawn from the probability distribution function (x; ) 3 Reinforcement Learning zForm of unsupervised learning – Machine is never told what the correction action is. – Negative / positive rewards given to the machine based A Brief Survey of Operant Behavior ... process is not trial-and-error learning. ... Operant reinforcement not only shapes the topography of behavior, it maintains it in strength long after an operant has been formed. Schedules of reinforcement are Reinforcement Learning Assume the world is a Markov Decision Process - transition and rewards unknown; states and actions known. Two objectives: Reinforcement Learning in Online Stock Trading Systems Abstract Applications of Machine Learning (ML) to stock market analysis include Portfolio ... with a survey of earlier approaches outlines in [13]. However, a plethora of alternatives exist, some A (Revised) Survey of Approximate Methods for Solving Partially Observable Markov Decision Processes Douglas Aberdeen National ICT Australia, Canberra, Australia. Multi-Instance Learning: A Survey Zhi-Hua Zhou National Laboratory for Novel Software Technology, Nanjing University, ... reinforcement learning where the labels of the training instances are delayed, in multi-instance learning there is no any delay. Reinforcement learning: A survey. Journal of Artificial Intelligence Research, 4:237–285, 1996. [136] S. Kapetanakis and D. Kudenko. Improvingon the reinforcementlearning of coordinationin cooperativemulti-agent systems. Reinforcement Learning is considerably more difficult for continuous-time systems than for discrete-time systems, and its development has lagged. ... Invited survey paper. [9] A. G. Barto, R. S. Sutton, and C. Anderson, “Neuron-like adaptive ele- Reinforcement Learning and Automated Planning: A Survey . Ioannis Partalas, Dimitris Vrakas and Ioannis Vlahavas . Department of Informatics . Aristotle University of Thessaloniki Reinforcement Learning Sampler CS 536: Machine Learning Littman (Wu, TA) ... “Reinforcement Learning: A survey” in Journal of Artificial Intelligence Research. Bertsekas & Tsitsiklis (1996). Neuro-Dynamic Programming. Tesauro (1992). “Practical Issues in Temporal Difference Recommended Reading Sutton & Barto (1998). Reinforcement Learning: An Introduction. Kaelbling, Littman and Moore (1996). “Reinforcement Learning: A survey” in Journal of Artificial Intelligence In this paper we briefly survey reinforcement learning, a machine learning paradigm that is especially well-suited to learning control policies for mobile robots. We discuss some of its shortcomings, and introduce a framework for effectively A Brief Survey of Parametric Value Function Approximation ... Reinforcement learning aims at estimating the optimal policy without knowing the model and from interactions with the system. Value functions can no longer be computed, Reinforcement Learning: A Survey. Journal of Artificial Intelligence Research. Volume 4, 1996. • G. Tesauro. TD-Gammon, a self-teaching backgammon program, achieves master-level play. Neural Computation 6(2), 1995. • http://ai.stanford.edu/~ang/ Reinforcement Learning II: Q-learning Hal Daumé III Computer Science University of Maryland [email protected] CS 421: Introduction to Artificial Intelligence 28 Feb 2012 Many slides courtesy of Dan Klein, Stuart Russell, ... Midcourse survey, qualitative Transfer in Reinforcement Learning: a Framework and a Survey Alessandro Lazaric Abstract Transfer in reinforcement learning is a novel research area that focuses Multi-Agent Reinforcement Learning: A Survey Lucian Bus¸oniu Robert Babuˇska Bart De Schutter Delft Center for Systems and Control Delft University of Technology survey of how reinforcement learning methods react in general to canine training techniques. The SARSA Algorithm: The SARSA algorithm is an on-policy temporal difference reinforcement learning method. The algorithm builds a value table using the SARSA update rule, which updates which is based on reinforcement learning from spectrum survey data introduced in [3]. The rein-forcement learning algorithm returns the vector of the scored channels based on possible interference in particular channel. A survey of machine learning First edition by Carl Burch for the Pennsylvania Governor’s School for the Sciences c 2001, Carl Burch. ... Reinforcement learning is much more challenging than supervised learning, and researchers still don’t Reinforcement learning: A survey. Journal of Artificial Intelligence Research, 4:237–285, 1996. [136]S. Kapetanakis and D. Kudenko. Improving on the reinforcement learning of coordination in cooperative multi-agent systems. In Proceedings Reinforcement learning in robotics, an application using Lego Mindstorms ... Reinforcement Learning: A Survey. Journal of Artificial Intelligence Research, 4:237-285, 1996. Title: Microsoft PowerPoint - Robotics 1 - Cioffi.ppt Author: Administrator Reinforcement learning: a survey. Journal of Artificial Intelligence Research , 4:237–285, 1996. Andrew Y. Ng, Adam Coates, Mark Diel, Varun Ganapathi, Jamie Schulte, Ben Tse, Eric Berger, and Eric Liang. Autonomous inverted helicopter flight via reinforcement lea rning. ing developed in machine learning and data mining areas. There has been a large amount of work on transfer learning for reinforcement learning inthe machine learning literature A survey of both the statistical and ethical problems may be found in [Ros96]. ... Robin Pemantle/Random processes with reinforcement 35 4.4. Learning A problem of longstanding interest to psychologists is how behavior is learned. BUS¸ONIU et al.: A COMPREHENSIVE SURVEY OF MULTIAGENT REINFORCEMENT LEARNING 157 A. Contribution and Related Work This paper provides a detailed discussion of the MARL
{"url":"http://ebookily.org/pdf/reinforcement-learning-a-survey","timestamp":"2014-04-23T10:07:53Z","content_type":null,"content_length":"43312","record_id":"<urn:uuid:f94652af-aac1-451d-b563-f53c5e180933>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00060-ip-10-147-4-33.ec2.internal.warc.gz"}
General Structure Functions Montero de Juan, Francisco Javier and Tejada Cazorla, Juan Antonio and Yáñez Gestoso, Francisco Javier (1994) General Structure Functions. Kybernetes, 23 (3). pp. 10-19. ISSN 0368-492X Restricted to Repository staff only until 2020. Official URL: http://www.emeraldinsight.com/journals.htm?articleid=875592&show=abstract A general concept of a structure function is proposed by considering a general order topology, where possible degrees of performance for the system and its components are going to be represented. Finite multistate structure functions and continuum structures can therefore be viewed as particular cases. Gives general definitions of minimal path and minimal cut, allowing general reliability bounds based on them. These are applied to some multivalued structures. Item Type: Article Uncontrolled Continuum structures Subjects: Sciences > Mathematics > Operations research ID Code: 15961 References: Barlow, R.E. and Proschan, B., Statistical Theory of Reliability and Life Testing, To Begin With, Silver Spring, MD, 1975. Baxter, L.A., “Continuum Structures”, International Journal of Applied Probability, Vol. 21 No. 4, 1984, pp. 802-15. Baxter, L.A., Continuum Structures II, Mathematical Proceedings of the Cambridge Philosophical Society, Vol. 99 No. 2, 1986, pp. 331-8. Block, H.W. and Savits, T.W., “Continuous Multistate Structure Functions”, Operations Research, Vol. 32, 1984, pp. 703-14. Montero, J., “Coherent Fuzzy Systems”, Kybernetes, Vol. 17 No. 4, 1988, pp. 28-33. Montero, J. and Tejada, J., “Some Results on Fuzzy Systems”, in Rose, J. (Ed.), Cybernetics and Systems: The Way Ahead, Thales, Lytham St Annes, 1987, pp. 591-4. Goguen, L., “L-fuzzy Sets”, Journal of Mathematical Analysis and Applications, Vol. 18 No. 1, 1967, pp. 145-74. French, S., Decision Theory, Ellis Horwood, Chichester, 1986. Vansnick, J.C., “Intensity of Preferences”, in Sawaragi, Y., Inoue, K. and Nakayama, H. (Eds), Toward Interactive and Intelligent Decision Support Systems II, Springer-Verlag, Berlin, 1987, pp. 220-9. Willard, S., General Topology, Addison-Wesley, Reading, MA, 1970. Montero, J., “Observable Structure Functions”, Kybernetes, Vol. 22 No. 2, 1993, pp. 31-9. Montero, J., Tejada, J. and Yañez, J., ”Structural Properties of Continuum Systems”, European Journal of Operational Research, Vol. 45 Nos 2-3, 1990, pp. 231-40. Deposited On: 16 Jul 2012 11:05 Last Modified: 06 Feb 2014 10:35 Repository Staff Only: item control page
{"url":"http://eprints.ucm.es/15961/","timestamp":"2014-04-20T06:17:19Z","content_type":null,"content_length":"28909","record_id":"<urn:uuid:3480060d-b232-4970-8815-d44837a96a12>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00437-ip-10-147-4-33.ec2.internal.warc.gz"}
(idea) by Woundweavr Sat Nov 13 1999 at 14:22:36 Level of extremeness. As something increases in size or the amount, magnitude increases. (idea) by Adalgeirr Thu Jul 20 2000 at 18:08:40 The magnitude of a can be found by the following Assume V= ai+bj+ck Then: ||V|| = (a^2+b^2+c^2)^1/2 (thing) by Grzcyrgba Fri Mar 23 2001 at 13:38:18 In astronomy, magnitude is a logarithmic measurement of observed brightness. The logarithmic nature of magnitudes comes from the fact that the response of the human eye to light is logarithmic. The magnitude units devised by ancient Greek astronomers correspond to equal flux ratios, or m(a) - m(b) = const * log ------ (1) implying a logarithmic scale. Five magnitudes corresponds to a factor of 100 difference in flux, which makes the constant in the equation above equal to -2.5. Magnitudes are typically measured with reference to an astronomical standard, usually the star Vega which has a magnitude of 0.0. Increasing magnitudes denote fainter objects, thus a star with a magnitude of -1.0 is brighter than a star with a magnitude of +1.0. The unaided human eye can detect objects with magnitudes as faint as approximately 6.0. The deepest telescopic image -- the Hubble Deep Field -- detected objects as faint as 28.0 to 29.0, more than a thousand million times fainter. The apparent magnitude of an object, usually denoted by a lower case m, is the perceived brightness of an object. Thus The Sun has an apparent magnitude of -26.7, but only because it is nearby, cosmically speaking. The absolute magnitude of an object, denoted by an upper case M is the apparent magnitude an object would have if we were observing it from a distance of ten parsecs. The absolute magnitude of the Sun is around +4.75, while the apparent magnitude of an average galaxy would be about -19.5. The apparent and absolute magnitudes of an object (if both are known) can be used to determine the distance to an object, using the formula m - M = ( 5 * log(D) ) - 5 + A (2) where D is the distance in parsecs, and A is a fudge factor which accounts for extinction by dust and gas between us and the object. m is usually straightforward to measure, but M requires that you know something about the object in advance. Magnitudes are often measured in specific wavelengths or filters, and many filter sets have been defined for specific uses. Often, objects are described in terms of their color, which is the difference in observed magnitudes in different filters. For example, (B - V) = m(B) - m(V) (3) are filters in the filter set, measuring light, respectively. By equation (1), this corresponds to a ratio of fluxes observed in the different filters. The color can, for example, tell you the of an object, or tell you the amount of interstellar reddening . You may also see bolometric magnitude , which is the sum of all emitted radiation from radio waves gamma rays , used to measure the total of an object. (definition) by Webster 1913 Wed Dec 22 1999 at 1:00:39 Mag"ni*tude (?), n. [L. magnitudo, from magnus great. See Master, and cf. Maxim.] Extent of dimensions; size; -- applied to things that have length, breath, and thickness. Conceive those particles of bodies to be so disposed amongst themselves, that the intervals of empty spaces between them may be equal in magnitude to them all. Sir I. Newton. 2. Geom. That which has one or more of the three dimensions, length, breadth, and thickness. Anything of which greater or less can be predicated, as time, weight, force, and the like. Greatness; grandeur. "With plain, heroic of mind." Greatness, in reference to influence or effect; importance; as, an affair of magnitude. The magnitude of his designs. Bp. Horsley. Apparent magnitude Opt., the angular breadth of an object viewed as measured by the angle which it subtends at the eye of the observer; -- called also apparent diameter. -- Magnitude of a star Astron., the rank of a star with respect to brightness. About twenty very bright stars are said to be of first magnitude, the stars of the sixth magnitude being just visible to the naked eye. Telescopic stars are classified down to the twelfth magnitude or lower. The scale of the magnitudes is quite arbitrary, but by means of photometers, the classification has been made to tenths of a magnitude. <-- the difference in actual brightness between magnitudes is now specified as a factor of 2.512, i.e. the difference in brightness is 100 for stars differing by five magnitudes. --> © Webster 1913. The 25 brightest stars of the night sky atom, molecule, nucleus, proton, neutron, electron scientific notation Exodus 8:2 Proxima Centauri color Vanity of vanities Stellar Description Ladyfest Brazilian flag Adaptive Differential Pulse Code Modulation dwarf nova Greek Architecture fixed stars Response.Write October 4, 2001 light curve Apalachicola River sign bit how to locate an earthquake's epicenter vector component Mirfak resurgent dome
{"url":"http://everything2.com/title/magnitude","timestamp":"2014-04-18T21:53:21Z","content_type":null,"content_length":"35557","record_id":"<urn:uuid:b736d80f-76f9-4976-9701-2a994b32fb73>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00403-ip-10-147-4-33.ec2.internal.warc.gz"}
Steam Tables Pdf Sponsored High Speed Downloads 12.4. STEAM TABLES 1011 12.4 SteamTables SATURATEDSTEAM-TEMPERATURETABLE Spec.vol. m3=kg Int.Ener. kJ/kg Enthalpy kJ/kg Entropy kJ=(kgoK) T oC P bar Sat. liq. 2 Steam Tables… What They Are…How to Use Them The heat quantities and temperature/ pressure relationships referred to in this Handbook are taken from the Properties Saturated Steam: TEMPERATURE Table STEAM TABLES ( from M. D. Koretsky, "Engineering and Chemical Thermodynamics", John Wiley & Sons, 2004) DOE-HDBK-1012/1-92 JUNE 1992 DOE FUNDAMENTALS HANDBOOK THERMODYNAMICS, HEAT TRANSFER, AND FLUID FLOW Volume 1 of 3 U.S. Department of Energy FSC-6910 Property Tables •If you have 2 properties, you can find the others using the thermodynamic property tables. •E.g. If you have pressure and temperature for saturated steam temperatures pressure table steam temperature ... 19 Table 3. Compressed Water and Superheated Steam (continued) 0.04 MPa (t s = 75.857 °C) 0.05 MPa (t s = 81.317 °C) 0.06 MPa (t s = 85.926 °C) Charotar Publishing House Pvt. Ltd. Opposite Amul Dairy, Civil Court Road, Post Box No. 65, ANAND 388 001 India Telephone: (02692) 256237, Fax: (02692) 240089, e-mail: [email protected], Website: Steam Tables Definitions Saturated Steam is a pure steam at the temperature that corresponds to the boiling temperature of water at the existing pressure. FIGURE 8.1 Critical Point Mollier diagram for steam. (Source: Com- bustion Engineering Steam Tables. Repro- duced with permission of Combustion PROPERTIES OF SATURATED STEAM Temperature Heat in BTU’s per LB. Gauge Pressure PSIG °F °C Heat of the Liquid Latent Heat Total Heat Specific DOE-HDBK-1019/1-93 NUCLEAR PHYSICS AND REACTOR THEORY. ABSTRACT. The . Nuclear Physics and Reactor Theory. Handbook was developed to assist nuclear facility operating contractors in providing operators, maintenance personnel, and the technical APPENDIX B Thermodynamic Tables — Metric Units Per earlier agreement, please include from Cengel, Y and Boles, M (2002) Thermodynamics: an Engineering Approach. 4th ed. Properties of Saturated Steam Specfic Heat Content Latent Vacuum Saturated VolumeBtu per Ib. Heat of Inches of Temp Cu. ft. Saturated Saturated Vaporization APPENDIX C Properties of Steam (SI units) Table C.1 Saturated Steam (Temperature) From NBS/NRC Steam Tables by L. Haar, et.al., New York: Hemisphere Publishing, TI-GCM-01 CM Issue 6 Dry Saturated Steam Tables Specific enthalpy Specific Gauge pressure Temperature volume Water (hf) Evaporation (hfg) Steam (hg) steam STEAM TABLE EQUIPMENT For Models: STE, STE-SO, ESTE, HMB, SP, DSMS, HDSE, SMS, FERR-ST & EURO-ST ... There is no installation clearances required for the electric steam tables. 2 GENERAL INSTALLATION 1. If crated, carefully remove unit from the crate. International Steam Tables – IAPWS‐IF97 1. IAPWS‐IF97 Range of Validity This application is based on the Industrial Formulation IAPWS‐IF97 [1], supplementary backward equations SATURATED STEAM * Heat content is the number of BTU/lb needed to reach the condition indicated starting with water at 32°F. Thermodynamic Properties – Saturated Steam (Values to Nearest Even Digits) Steam Tables This is the table for the properties of Saturated Steam and Saturated Water. It shows us the correlation of data for various pressures and temperatures. STEAM 1 1. INTRODUCTION The thermophysical properties of water are of interest in many industrial and research applications. Official international formulations for water properties are developed and maintained by the International Association Microsoft Word - Steam Tables.doc Author: arossi Created Date: 2/13/2002 7:32:50 PM ... ST-01 Steam Tables B curved Author: Kelvin Situ Created Date: 1/1/2008 6:02:57 AM ... Shown with optional Sneeze guard and close front Model No. 3029 Steam Table Burner System Mushroom Burner Burner Shield Gas Valve Model Description Total STEAM TABLES 2 Steam Tables (bar, psi, °C, °F) These tables are prepared based on comparison of external sources - we take no guarantee for its accuracy. Steam Tables: Thermodynamic and Transport Properties and Computer Programs for Vapor and Liquid States of Water in SI Units, Hemisphere Publishing Corporation, Washington, DC (1984). [7] W. Wagner and A. Pruss, New International Formulation for the steam tables if the phase is set to liquid at the triple point. 2. Critical Point When the temperature rises above the critical point, it is no longer possible to clearly distinguish between the liquid and the gaseous phase. Such a condition is referred to as Steam Tables, Imperial, Saturated Water, Liquid-Vapor, Pressure STANDARDS www.baeresources.com Basic & Advanced Engineering Resources SteamTable.doc Page 1 of 6 SATURATED WATER TABLES (Temperature)† Temp Press Specific Volume Enthalpy Internal Energy Entropy °F psia ν Abstracted from ASME Steam Tables (1967) with permission of the publisher. The American Society of Mechanical Engineers, 345 east 47th Street, New York, NY 10017 GENERAL ENGINEERING DATA Properties Of Superheated Steam. Created Date: 1 Where Flow Measurement Meets Innovation… tel: 732.952.5324 fax: 732.727.8911 [email protected] www.niceinstrumentation.com Saturated Steam Tables Wolfgang Wagner · Hans-Joachim Kretzschmar International Steam Tables Properties of Water and Steam Based on the Industrial Formulation IAPWS-IF97 Steam Tables, SI, Superheated Water Vapor STANDARDS www.baeresources.com Basic & Advanced Engineering Resources Pressure = 6 kPa Specific Internal Models: ST-120/2 - 2 Bay ST-120/3 - 3 Bay ST-240/4 - 4 Bay Admiral Craft Equipment Corp. Admiral Craft Equipment Corp. Adcraft’s new open well steam tables are constructed of Steam Tables DLL User’s Manual 1.0 | Necdet Kurul, 2011 2 Pressure from triple point up to, o 100 MPa if temperature is less than 773.15 °K, Properties of Superheated Steam = specific volume, cubic feet per pound h g = total heat of steam, BTU per pound Pressure Lbs. per Sq. In. Total Temperature --Degrees Fahrenheit (t) Properties of Saturated Steam - Pressure in Bar Note! Because the specific volume also decreases with increasing pressure, the amount of heat energy Recitation 10.213 03/11/02 Spring 2002 YT Steam Tables Appendix F of the textbook are thermodynamics data for steam. Table F.1. is for saturated steam. SUPERHEATED STEAM Superheated steam, as already stated, is steam the temperature of which exceeds that of saturated steam at the same pressure. ... These values have been calculated from Marks and Davis Steam Tables STEAM TABLES We Cater to Your Needs with our Array of Aluminum Containers Trinidad’s disposable aluminum containers are convenient, allowing food to be STEAM TABLES C-1 SPEC 300 S. 84th Avenue Wausau, WI 54401 Phone: 800-544-3057 Fax: 715-842-3125 www.piperonline.net Design Basics Hot Food Units Model# # of Wells A Wattage Amperage NEMA Cap Number Ship Wt. 120V 208V 240V 120V 208V 240V(lbs) Steam emTebSteam emTe mltsmbbC bmltsmbbC esm bs lmeb esm bs lme e aa Tsbm ta m sbe aa Tsbm ta m s ST a t eb ST a t e Steam Tables C In This Section Steam Tables. Premium quality tables at a cost that won’t break your bank. Fed up with having to choose between owning quality food service equipment and making a profit? The combination of www.vemcoinc.com Thermodynamic Properties of Steam An understanding of the basic thermodynamics of steam allows us to properly size equipment and to design piping systems ASPEN steam table tutorial.docx 2 This problem will be more difficult for ASPEN since it solves each unit operation (block) in a sequential manner. Introducing Excel Based Steam Table Calculations into Thermodynamics Curriculum Abstract To perform and document engineering analyses, a tool with consistent utilization and ready APPENDIX 111: STEAM TABLES - THERMODYNAMIC DATA FOR WATER AT SATURATED VAPOR PRESSURES AND TEMPERATURES Derivation: Equation of State of Keenan et al. (1969); Title: Tables and diagrams of the thermal properties of saturated and superheated steam Author: Lionel Simeon Marks, Harvey Nathaniel Davis IMPORTANT The Prescan performs the Steam Tables Add-On Instruction’s logic routine as all ... steam and you need to know the corresponding pressure of saturated steam. This instruction also provides the enthalpy, entropy, and specific volume of liquid Appendix 2 PROPERTY TABLES AND CHARTS (ENGLISH UNITS) | 933 Table A–1E Molar mass, gas constant, and critical-point properties Table A–2E Ideal-gas specific heats of various
{"url":"http://ebookilys.org/pdf/steam-tables-pdf","timestamp":"2014-04-17T05:12:55Z","content_type":null,"content_length":"39617","record_id":"<urn:uuid:7626bc2b-7ce3-40a6-8152-9514508fb1e3>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00515-ip-10-147-4-33.ec2.internal.warc.gz"}
First Quantization is a mystery... but de-quantizing perhaps not up vote 5 down vote favorite There is an well-known infamous DICTUM: -Second Quantization is a functor, First Quantization is a mystery-. Indeed, second quantization is the "Fock functor", which builds the Fock space in a canonical way out of the Hilbert space of a single particle. But, what about first quantization? There is probably no hope to canonically associate an Hilbert space to the manifold of states of a classical particle (mathematically, there seem to be an inherent element of choice as far as turning functions into operators). However, there is (I suspect) some functorial description for going the other way around, FROM the quantum scenario INTO the classical one (corresponding to the limit $h\rightarrow 0$). If this is true, there maybe a "fiber" of candidate quantum descriptions, all collapsing into the same classical one. Any place where this has been worked out clearly? quantum-mechanics ct.category-theory higher-category-theory Possibly relevant question: mathoverflow.net/questions/10678 – José Figueroa-O'Farrill Jun 10 '11 at 18:49 add comment 1 Answer active oldest votes You may be interested in the answers to a question on physics.SE, "Quantum Mechanics on Manifold". The gist of it is that there is a plethora of different quantization schemes (canonical quantization, geometric quantization, ...). As you note, they all have the same classical theory as a limit. Overview: Quantization Methods: A Guide for Physicists and Analysts up vote 4 That said, speaking as a physicist (caveat emptor), I think the paper down vote pretty much covers everything that is of practical relevance. Unsurprisingly, the Schrödinger equation on an embedded manifold $M \subseteq \mathbb{R}^3$ depends not only on intrinsic data, like the metric or curvature, but also on extrinsic data about the embedding of the manifold inside 3D space. That's to be expected because an electron can tunnel through the full 3D space and hence take "extrinsic short-cuts". I think that's also the reason why there is so much ambiguity in quantization schemes: it's simply not something that you can do completely add comment Not the answer you're looking for? Browse other questions tagged quantum-mechanics ct.category-theory higher-category-theory or ask your own question.
{"url":"http://mathoverflow.net/questions/67461/first-quantization-is-a-mystery-but-de-quantizing-perhaps-not?sort=votes","timestamp":"2014-04-20T08:49:00Z","content_type":null,"content_length":"52941","record_id":"<urn:uuid:92d9e77d-4d97-4d2d-b4fd-4a66aa517089>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00455-ip-10-147-4-33.ec2.internal.warc.gz"}
Marcus Hook Math Tutor Find a Marcus Hook Math Tutor ...Thanks for visiting my page, and best of luck!Scored 780/800 on SAT Math in high school and 800/800 on January 26, 2013 test. Routinely score 800/800 on practice tests. Taught high school math and have extensive experience tutoring in SAT Math. 19 Subjects: including algebra 1, algebra 2, calculus, geometry ...I taught Prealgebra with a national tutoring chain for five years. I have taught Prealgebra as a private tutor since 2001. I completed math classes at the university level through advanced 12 Subjects: including geometry, logic, algebra 1, algebra 2 ...However, I am preparing to become a math teacher in the near future, so I'm also proficient in math and sciences. Currently, I am working as a substitute teacher throughout New Castle County. I've had the pleasure of working with students from pre-school through high school. 23 Subjects: including SAT math, ACT Math, probability, writing I have a Ph.D. in particle physics from Duke University, but what I have always loved to do most is teach. I taught introductory and intermediate physics classes at New College, Duke University and RPI. Some years ago I started to tutor one-on-one and have found that, more than classroom instruction, it allows me to tailor my teaching to students' individual needs. 21 Subjects: including algebra 1, algebra 2, calculus, SAT math ...I have a lot of experience working with children who learn differently and have unique needs when it comes to academics. I completed my MEd in 2010 in Special Education. I am certified in PA to teach Special Education grades PK-12. 20 Subjects: including SAT math, dyslexia, geometry, algebra 1 Nearby Cities With Math Tutor Aston Math Tutors Brookhaven, PA Math Tutors Chester Township, PA Math Tutors Chester, PA Math Tutors Claymont Math Tutors Collingdale, PA Math Tutors Eddystone, PA Math Tutors Garnet Valley, PA Math Tutors Linwood, PA Math Tutors Logan Township, NJ Math Tutors Lower Chichester, PA Math Tutors Parkside, PA Math Tutors Trainer, PA Math Tutors Upland, PA Math Tutors Yeadon, PA Math Tutors
{"url":"http://www.purplemath.com/marcus_hook_pa_math_tutors.php","timestamp":"2014-04-16T13:34:28Z","content_type":null,"content_length":"23753","record_id":"<urn:uuid:7da50d59-7db8-4851-9eb3-9d6799245fe5>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00264-ip-10-147-4-33.ec2.internal.warc.gz"}
More Meter Madness Okay, after having read WB8ICN’s postings on QRPedia, I decided to try to measure the impedence of both of my meters. I dug a 1.5M ohm resistor out of the pile, and hooked some jumper leads to it. I then did the following: 1. Measure the resistance of the resistor, using the DVM as an ohm meter (R) 2. Measure the voltage of a 9 volt battery, using the DVM as a volt meter. (V) 3. Hook the resistor in series with the battery, and measure the resulting voltage. (V[m]) What we are looking for is the resistance of the meter, which we will call R[m]. A little math should show that R[m] = R * V[m] / (V – V[m]). Okay, that heavy lifting aside, let’s see what we discovered about my two meters: For my better meter, a nice Radio Shack PC interfaceable multimeter, I got the following readings: 1. R = 1.496M ohms 2. V = 9.24 volts 3. V[m] = 7.94 volts Plugging these values into the formula, we get a value for R[m] of 9.18M ohms, considerably off from the 10M ohms that we used as the “nominal” value. Later, I’ll go back and work up the correction factor for the previous nights example, but for now, let’s move onto my second meter, a cheapy $20 one that I’ve used mostly to check my car battery and continuity. I recorded the following values: 1. R = 1.480M ohms 2. V = 9.14 volts 3. V[m] = 3.68 volts Plugging these values in, I get about .997M ohms, or about 1Mohm! No wonder my previous readings were so crazy, with the 4.4M ohm resistance of my series resistors, this thing was dividing the peak value down considerably. Okay, let’s go back to my dummy load experiment. First, on meter number one, we find that R[m] of 9.18M ohms. If we go through my various calculations and correct for the measured values, we find that the output power should be around 4.71 watts. Doing the same for measurement number two, we’d get an output power of around 4.67 watts, both in excellent agreement! (Note, I didn’t measure the voltage drop of the diode, I’m assuming 0.7 v. The data sheet says the maximum drop would be 1v). Write a comment Recent Comments • m0n5t3r on Visual Cryptography • Barrie Gilbert on Crystal Sets to Sideband, by Frank W. Harris
{"url":"http://brainwagon.org/2009/01/12/more-meter-madness/","timestamp":"2014-04-18T16:15:20Z","content_type":null,"content_length":"41905","record_id":"<urn:uuid:91f99575-b88e-4934-9be2-26f97ec56a58>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00638-ip-10-147-4-33.ec2.internal.warc.gz"}
Centripetal Force vs Centrifugal Force Neither are forces in the sense that neither have the tendency to cause masses to accelerate. Careful here. Yes the centripetal is real and measurable with real forcemeters. Yes the centrifugal force is a convenient fiction to make the maths easy. How come? Velocity is a vector quantity and change of velocity may be effected by change of magnitude or change of direction. Either constitute an acceleration. Now for centripetal , this is always provided by the normal suppliers or agent of So for instance consider a weight whirling round on a string. The centripetal is provided by the tension in the string - they are one and the same. However the weight is also in constant acceleration. This acceleration is directed towards the centre of of rotation (inwards). This is in accordance with Newtons second law = mass times acceleration. So the weight is not in equilibrium. This is so fundamental and important I will repeat it So the weight is not in equilibrium. Shazam ! * # ! Enter the centrifugal force This ficticious is exactly equal in magnitude to but opposite in direction to the centripetal Let us pretend that such a is acting on the weight. Now the weight is in equilibrium Yay! The weight is said to be in equilibrium under the action of the tension in the string and the opposing centrifugal force This trick or transformation was first proposed by D'Alembert. It enables us to reduce a problem in dynamics ( using Newtons Laws of motion) to a simpler one in statics (using the laws of equilibrium). Note that you never see both the centripetal and the centrifugal force appear in the same analysis you see either one or the other. When the centrifugal force is used the acceleration is zero, since the system is in equilibrium. When the centripetal is invoked the system is not in equilibrium but accelerates. This acceleration is provided by the centripetal So as regards gravity and acceleration, the acceleration due to gravity = the centripetal acceleration or the of gravity = the centripetal . Here '= ' means the same as. Gravity and the centripetal (acceleration) are not additional, they are one and the same. Does this help? Oh and welcome to Physics Forums
{"url":"http://www.physicsforums.com/showthread.php?t=645048&highlight=centrifugal+force","timestamp":"2014-04-18T18:23:37Z","content_type":null,"content_length":"58648","record_id":"<urn:uuid:32fdc2f1-d649-48dc-be3c-fc7114311c57>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00590-ip-10-147-4-33.ec2.internal.warc.gz"}
Creative Mathematical Genius Born: Erlangen, Germany, March 23, 1882 Died: Bryn Mawr, Pennsylvania, April 14, 1935 Creative Mathematical Genius It might be that Emmy Noether was designed for mathematical greatness. Her father Max was a math professor at the University of Erlangen. Scholarship was in her family; two of her three brothers became scientists as well. Emmy would surpass them all. Ultimately Max would become best known as Emmy Noether's father. Amalie Emmy Noether spent an average childhood learning the arts that were expected of upper middle class girls. Girls were not allowed to attend the college preparatory schools. Instead, she went to a general "finishing school," and in 1900 was certified to teach English and French. But rather than teaching, she pursued a university education in mathematics She audited classes at Erlangen as one of two women among thousands of men, then took the entrance exam. She entered the University of Göttingen in 1903, again as an auditor, and transferred back to Erlangen in 1904 when the university finally let women enroll. She received her mathematics Ph.D. in 1907. Noether worked at the Mathematical Institute of Erlangen, without pay or title, from 1908 to 1915. It was during this time that she collaborated with the algebraist Ernst Otto Fischer and started work on the more general, theoretical algebra for which she would later be recognized. She also worked with the prominent mathematicians Hermann Minkowski, Felix Klein, and David Hilbert, whom she had met at Göttingen. In 1915 she joined the Mathematical Institute in Göttingen and started working with Klein and Hilbert on Einstein's general relativity theory. In 1918 she proved two theorems that were basic for both general relativity and elementary particle physics. One is still known as "Noether's Theorem." But she still could not join the faculty at Göttingen University because of her gender. Noether was only allowed to lecture under Hilbert's name, as his assistant. Hilbert and Albert Einstein interceded for her, and in 1919 she obtained her permission to lecture, although still without a salary. In 1922 she became an "associate professor without tenure" and began to receive a small salary. Her status did not change while she remained at Göttingen, owing not only to prejudices against women, but also because she was a Jew, a Social Democrat, and a pacifist.* During the 1920s Noether did foundational work on abstract algebra, working in group theory, ring theory, group representations, and number theory. Her mathematics would be very useful for physicists and crystallographers, but it was controversial then. There was debate whether mathematics should be conceptual and abstract (intuitionist) or more physically based and applied (constructionist). Noether's conceptual approach to algebra led to a body of principles unifying algebra, geometry, linear algebra, topology, and logic. In 1928-29 she was a visiting professor at the University of Moscow. In 1930, she taught at Frankfurt. The International Mathematical Congress in Zurich asked her to give a plenary lecture in 1932, and in the same year she was awarded the prestigious Ackermann-Teubner Memorial Prize in mathematics. Nevertheless, in April 1933 she was denied permission to teach by the Nazi government. It was too dangerous for her to stay in Germany, and in September she accepted a guest professorship at Bryn Mawr College. She also lectured at the Institute for Advanced Study in Princeton. The guest position was extended, but in April 1935 she had surgery to remove a uterine tumor and died from a postoperative infection. * Gottfried E. Noether, "Emmy Noether (1882-1935)," in Louise S. Grinstein and Paul J. Campbell: Women of Mathematics: A Bibliographic Sourcebook (New York, Greenwood Press), 1987, pp. 165-170.
{"url":"http://www.sdsc.edu/ScienceWomen/noether.html","timestamp":"2014-04-16T16:00:48Z","content_type":null,"content_length":"5819","record_id":"<urn:uuid:000e4fe4-016b-47d4-aa36-fcff1adc1638>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00195-ip-10-147-4-33.ec2.internal.warc.gz"}
Apache Junction Algebra 1 Tutor ...Whatever the student's needs are, it is my job as a teacher to meet them. I have tutored and taught subjects from remedial reading using basic phonics to algebra 1. I have a deep-rooted belief that the best learning takes place when the student is engaged and having fun. 16 Subjects: including algebra 1, reading, writing, geometry ...I will be conducting individual and group classes starting May 2014 at my home located in Mesa, AZ in E. Guadalupe and S. Hawes. 13 Subjects: including algebra 1, reading, English, writing ...Any age, any level, children are always a joy to tutor. I have participated in structured programs like America Reads as well as being an Instructional Assistant for Mesa Public Schools. It is no secret that teaching these children is a lifelong passion of mine. 33 Subjects: including algebra 1, English, reading, physics ...I have taught and tutored everything from basic mathematics up through Calculus, Differential Equations and Mathematical Structures. Just a little about my work and research. While my PhD is in Mathematics, my research area has been Mathematics Education. 9 Subjects: including algebra 1, calculus, geometry, GED ...I am also able to adapt different ways of presenting material and making it engaging to the you, as well as adapt to each unique student. I have an unparalleled amount of patience and also make it fun to learn math... even for those who claim to "hate" math! The process of learning and figuring it out for yourself (with my guidance) is more rewarding than having it explained to you. 17 Subjects: including algebra 1, reading, calculus, algebra 2
{"url":"http://www.purplemath.com/Apache_Junction_algebra_1_tutors.php","timestamp":"2014-04-20T10:52:48Z","content_type":null,"content_length":"24016","record_id":"<urn:uuid:c957d7a3-c845-4b96-8147-940782127a53>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00374-ip-10-147-4-33.ec2.internal.warc.gz"}
If you can dream it, you can do it. I'm sensing a pattern here.. Stefan wants to be with Katherine. Damon wants to be with Katherine. Stefan wants to be with Elena. Damon wants to be with Elena. Stefan leaves town. Damon leaves town. Stefan breaks up with Elena. Damon breaks up with Elena. Stefan becomes a ripper. Damon becomes a ripper. Seriously Damon..
{"url":"http://princessniyo.tumblr.com/archive","timestamp":"2014-04-18T19:04:18Z","content_type":null,"content_length":"86970","record_id":"<urn:uuid:092cda12-0081-4f97-b77f-51b99e5fdd27>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00509-ip-10-147-4-33.ec2.internal.warc.gz"}
Answer to Random Drug Testing Question 12/24/11 Moderators: mvs_staff, forum_admin, Marilyn In today's Parade there was a question about random drug testing. Every 3 months one quarter of a fixed population is randomly selected for testing. The question was what is the probability that a person would be selected over the course of one year. Marilyn's answer was 25%, but the actual probability is 68%. The probability of not being selected is 0.75x0.75x0.75x0.75 = 0.316, so the probability of being selected at least once is 1 - 0.316 = 0.684. The binomial probability function defines the probabilities of being selected n times out of N possible times with p being the probability of being selected each time, and q = 1-p: P(n/N) = [N!/n!(N-n)!] p^n q^(N-n) The probability of being selected exactly once over the course of the year is: P(1/4) = [4!/3!1!] [0.25^1][0.75^3] = 0.422 P(2/4) = [4!/2!2!] [0.25^2][0.75^2] = 0.211 Three times: P(3/4) = [4!/1!3!] [0.25^3][0.75^1] = 0.047 Four times: P(4/4) = [4!/4!0!] [0.25^4][0.75^0] = 0.004 The most likely outcome is being selected once (42.2%). A probability of 25% is nowhere to be found in any of the outcomes. Posts: 3 Joined: Sat Sep 06, 2008 4:16 pm On 12/25/11, Marilyn wrote: I manage a drug-testing program for an organization with 400 employees. Every three months, a random-number generator selects 100 names for testing. Afterward, these names go back into the selection pool. Obviously, the probability of an employee being chosen in one quarter is 25 percent. But what is the likelihood of being chosen over the course of a year? -- Jerry Haskins, Vicksburg, Miss. The probability remains 25 percent, despite the repeated testing. One might think that as the number of tests grows, the likelihood of being chosen increases, but as long as the size of the pool remains the same, so does the probability. Goes against your intuition, doesn't it? Juan Largo wrote:In today's Parade there was a question about random drug testing. Every 3 months one quarter of a fixed population is randomly selected for testing. The question was what is the probability that a person would be selected over the course of one year. Marilyn's answer was 25%, but the actual probability is 68%. The probability of not being selected is 0.75x0.75x0.75x0.75 = 0.316, so the probability of being selected at least once is 1 - 0.316 = 0.684. The binomial probability function defines the probabilities of being selected n times out of N possible times with p being the probability of being selected each time, and q = 1-p: P(n/N) = [N!/n!(N-n)!] p^n q^(N-n) The probability of being selected exactly once over the course of the year is: P(1/4) = [4!/3!1!] [0.25^1][0.75^3] = 0.422 P(2/4) = [4!/2!2!] [0.25^2][0.75^2] = 0.211 Three times: P(3/4) = [4!/1!3!] [0.25^3][0.75^1] = 0.047 Four times: P(4/4) = [4!/4!0!] [0.25^4][0.75^0] = 0.004 The most likely outcome is being selected once (42.2%). A probability of 25% is nowhere to be found in any of the outcomes. Marilyn frequently answers a diferent question then what was seems, to others, to be what was actually asked. This isn't an unpredictable situation, since the questions are short and English can be ambiguous. But it would be helpeful if she could try to indicate what it is she is answering. It could be that she was answering "Over the course of a year, does the probability that an employee will be chosen in any given quarter change?" Many people intuitively feel that if you aren't chosen in the spring test, the chances you will be selected in the summer will go up. And if you aren't chosen at all in the first three tests of a year, it is almost certain you will be chosen for the fourth since each test picks 1/4 of the employees. She could have easily indicated that by saying "the likelihood of being chosen for any given test We can't know for certain what she was thinking when she gave this blatantly incorrect answer. But I don't think she will understand your math; she has ignored examples in the past where similar math has proven her incorrect. Her application of probability is usually based more on intuition than math, as evidenced by here repeated incorrect (or at least, correct only for a question that isn't what most others see in the text) replies about sequences of dice rolls. She does have a better intuition than most, but she places too much confidence in it. Posts: 1814 Joined: Tue Mar 10, 2009 11:01 am I'd be willing to go along with what you said, but Marilyn did not explain her answer at all. She just threw out a number without offering any reasoning or rationale behind it, which is very unlike her. Maybe she was having a bad day. Posts: 3 Joined: Sat Sep 06, 2008 4:16 pm Juan Largo wrote:I'd be willing to go along with what you said, but Marilyn did not explain her answer at all. She just threw out a number without offering any reasoning or rationale behind it, which is very unlike her. Maybe she was having a bad day. On the contrary, it is very unusual for her to offer any support for her answers. She usually just states it and expects everybody to see how obvious it is. The best recent example is the question about dice sequences. She answered it three times teh exact same way, by stating that it should seem obvious which sequence was more likely. Posts: 1814 Joined: Tue Mar 10, 2009 11:01 am Once again Marilyn presents her question in an ambiguous manner. The phrase "being chosen over the course of a year" has several different meanings. It could mean at any time during the year a test occurs, what's the probability of being selected. In that case, Marilyn is correct. But it could also mean the probability of being selected sometime during the year; i.e., there exists a date among the 365/6 in the year such that a drug test is held on that date and you were selected. That is 68%, as our first commenter just computed. (In fact, it is close to 1-1/e). Posts: 6 Joined: Sat Nov 29, 2008 8:43 pm jimvb wrote:Once again Marilyn presents her question in an ambiguous manner. The phrase "being chosen over the course of a year" has several different meanings. It could mean at any time during the year a test occurs, what's the probability of being selected. Only by misinterpretation. It's an easy mistake to make, but clearly in error. A direct comparison was made between "being chosen in one quarter" and "being chosen over the course of a year." That is, comparing one period to another. Your interpretation above requires the comparison to be between "being chosen in one quarter" and "being chosen in another quarter," in which case the word "year" is superfluous. Posts: 1814 Joined: Tue Mar 10, 2009 11:01 am Well done, Juan! In 17 words and a one line of math you have succinctly explained the answer to the problem posed in this week's Parade column. This is a simple problem from freshman statistics, so it's odd that Marilyn got it wrong. One can only hope that readers will be motivated to ponder this problem and seek to understand for themselves how to solve it, rather than accepting the incorrect answer given in the column. Juan Largo wrote:In today's Parade there was a question about random drug testing. Every 3 months one quarter of a fixed population is randomly selected for testing. The question was what is the probability that a person would be selected over the course of one year. Marilyn's answer was 25%, but the actual probability is 68%. The probability of not being selected is 0.75x0.75x0.75x0.75 = 0.316, so the probability of being selected at least once is 1 - 0.316 = 0.684. The binomial probability function defines the probabilities of being selected n times out of N possible times with p being the probability of being selected each time, and q = 1-p: P(n/N) = [N!/n!(N-n)!] p^n q^(N-n) The probability of being selected exactly once over the course of the year is: P(1/4) = [4!/3!1!] [0.25^1][0.75^3] = 0.422 P(2/4) = [4!/2!2!] [0.25^2][0.75^2] = 0.211 Three times: P(3/4) = [4!/1!3!] [0.25^3][0.75^1] = 0.047 Four times: P(4/4) = [4!/4!0!] [0.25^4][0.75^0] = 0.004 The most likely outcome is being selected once (42.2%). A probability of 25% is nowhere to be found in any of the outcomes. Posts: 1 Joined: Sun Oct 23, 2011 12:34 pm Parade 12/25/11 wrote: I manage a drug-testing program for an organization with 400 employees. Every three months, a random-number generator selects 100 names for testing. Afterward, these names go back into the selection pool. Obviously, the probability of an employee being chosen in one quarter is 25 percent. But what is the likelihood of being chosen over the course of a year? -- Jerry Haskins, Vicksburg, Miss. Marilyn responds: The probability remains 25 percent, despite the repeated testing. One might think that as the number of tests grows, the likelihood of being chosen increases, but as long as the size of the pool remains the same, so does the probability. Goes against your intuition, doesn't it? I think Marilyn did not answer the question actually asked which is "what is the likelihood of being chosen over the course of a year?" This would be one minus the product of not being chosen in any quarter for four consecutive quarters: Code: Select all cumulative quarters, cumulative probability of being chosen 1, 1-0.75 =0.25 2, 1-(0.75)^2=0.4375 3, 1-(0.75)^3=0.5781 4, 1-(0.75)^4=0.6836 So, over the course of four consecutive quarters any employee has a 0.6836 probability of being chosen for testing. Posts: 2076 Joined: Mon Jun 18, 2007 9:21 am The probability remains 25 percent, despite the repeated testing. One might think that as the number of tests grows, the likelihood of being chosen increases, but as long as the size of the pool remains the same, so does the probability. Goes against your intuition, doesn't it? After thinking about this some more, what Marilyn undoubtedly meant was that the likelihood of being selected over the course of a year is unchanged by being selected or not being selected in any previous quarters. It's a common fallacy that if you weren't selected beforehand you are "due" to be selected next time. Maybe Marilyn inferred that the person was committing this fallacy from the way the question was stated and she was just trying to dispel it. If that's the case, my apologies to Marilyn. Posts: 3 Joined: Sat Sep 06, 2008 4:16 pm I don't think you need to apologize to Marilyn. Marilyn didn't answer the question that was asked. Someone could have asked a question about what your odds are of being picked in the 4th drawing of the year if you weren't picked in the first 3 (or if you were picked in the first 3). Either way, the answer would be 25%. But that wasn't the question asked. According to the earlier post, here is what the question said, in part: "Obviously, the probability of an employee being chosen in one quarter is 25 percent. But what is the likelihood of being chosen over the course of a year?" I just don't see any reasonable way to interpret the question the way that Marilyn did. The question writer is asking what are the chances of being chosen sometime in the year (presumably in at least 1 of the 4 drawings), not the chances of being chose in any one quarter. It is right there in the question "Obviously, the probability of an employee being chosen in one quarter is 25%." Then Marilyn goes on to explain that the chance of being chosen in any one quarter is 25%. Marilyn definitely blew this one. I hope she writes a correction. Posts: 2 Joined: Sat Jan 07, 2012 9:47 am svalancius wrote:Marilyn definitely blew this one. I hope she writes a correction. She has already admitted the answer was incorrect, but not a mistake. Instead, she made a tongue-in-cheek comment about being influenced by eggnog, and said she will address it in two weeks (probably the delay caused by the publishing process). Posts: 1814 Joined: Tue Mar 10, 2009 11:01 am I'm new to the forum - where did she say she would address this in two weeks? What Marilyn wrote was true; it just didn't answer the question. I'm glad to hear she is going to address it in a future column. Posts: 2 Joined: Sat Jan 07, 2012 9:47 am svalancius wrote:I'm new to the forum - where did she say she would address this in two weeks? She answers daily questions on the parade website. Follow the links from the "home" page of this forum, or use this link What Marilyn wrote was true; it just didn't answer the question. I'm glad to hear she is going to address it in a future column. It's also true that frogs have short stumpy legs, but that has no bearing on the question she was asked. An answer can still be wrong, even if true, if it doesn't address the question. But Marilyn has a very poor record of admitting when she is wrong - as in, almost never, and only when she can find an acceptable excuse. There were at least two other answers in the past couple of months that were outright wrong, and she ignored them. Probably because she can't argue that her answers were right for a different question. There was even one a couple of years ago where she broke the answer down into three separate cases, and each one was wrong for different reasons. I mention that because the proof that one was wrong follows the same reasoning as the drug-test answer, that the chance of a thing happening in at least one of two attempts is less than the twice the chance of it happening in one. Posts: 1814 Joined: Tue Mar 10, 2009 11:01 am I enjoy Marilyn's column, but I was so taken aback at her answer to the question on the morning of Dec. 25 that I registered to participate in this site so that I could comment, only to run into the (reasonable) delays in establishing eligibility to participate. In the meantime, the points I wished to make have been ably and thoroughly made by other commenters. I will make one observation that simply rephrases the point others have made. I focus on intuition; that is, beyond the conclusive mathematical demonstration presented earlier in the comments, intuition shows the error of Marilyn's answer to the question that was -- unambiguously -- asked. Marilyn's answer was: "One might think that as the number of tests grows, the likelihood of being chosen increases, but as long as the size of the pool remains the same, so does the probability. Goes against your intuition, doesn't it?" Sorry, but as the number of tests grows with a constant pool, the probability of the event occurring at least once in the course of the tests increases. To wit: The odds of rolling a "6" when a six-sided die is tossed once is 1/6. So I throw the die 100 times (an increased number of tests) as the size of the pool remains the same (the die has six sides in each test); do the odds of getting at least one "6" across the 100 throws remain 1 in 6? Of course not. That's why Russian Roulette played 100 times in sequence -- if our intuition means anything --is nearly certain to be fatal. Looking forward to Marilyn's correction. Posts: 1 Joined: Sun Dec 25, 2011 9:21 am Location: Gainesville FL jerrygf wrote:Sorry, but as the number of tests grows with a constant pool, the probability of the event occurring at least once in the course of the tests increases. We can only speculate on how and/or why Marilyn made this mistake, but the following sequence of inferences does seem plausible: • Many people do feel - intuitively - that random chance has a memory. That if heads has come up on three consecutive coin flips, somehow tails is more likely on the next one so as to "even things out." A person on this list, who feels he understands probabilty but does not, has incorrectly called this "regression towards the mean" and claimed it was true in a more complex situation. • I call it "correction toward the mean." • Marilyn knows of this tendency, and has probably answered many questions involving "correction toward the mean." Her first mistake was, without reading the question carefully, she assumed this was another. • The only way it made sense to be a question about "correction toward the mean," was if the probability asked about was the probability in one quarter. • So Marilyn's second mistake was that she answered the wrong question, and ... • Her third was when she admonished the reader about thinking "correction toward the mean" was true. Posts: 1814 Joined: Tue Mar 10, 2009 11:01 am Who is online Users browsing this forum: No registered users and 0 guests
{"url":"http://www.marilynvossavant.com/forum/viewtopic.php?t=1740?f=1","timestamp":"2014-04-16T16:00:32Z","content_type":null,"content_length":"48914","record_id":"<urn:uuid:e001fe9c-59ed-4a87-809a-730b9aaa0d4a>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00048-ip-10-147-4-33.ec2.internal.warc.gz"}
: Greek Pupils Excel at the 53rd International Mathematical Olympiad Greek Pupils Excel at the 53rd International Mathematical Olympiad By Areti Kotseli on July 20, 2012 in News Greek pupils shined during the 53rd International Mathematical Olympiad receiving a Gold, a Silver and three Bronze medals as well as an honorable mention in Argentina on Monday July 16, 2012. During the contest, Greek pupils proved to be the brightest minds of the Eurozone, leaving Germans, Italians, Dutchmen, Spaniards and others behind. Every single pupil representing Greece received a distinction, bringing a Gold Medal to our country for the second consecutive year. Our talented youth continued the long-year tradition of distinctions in International Mathematical Olympiads proving right the volunteer work of the Hellenic Mathematical Society. Greek pupils were escorted by Professor at the National Technical University of Athens Anargyros Fellouris and Mathematician Evaggelos Zotos. A total of 548 participants from 100 countries competed July 4-16 in the annual math competition designed for pre-collegiate students. South Korea ranked first overall in the 2012 competition, with China as runner-up. Next years event will take place in Santa Marta, Colombia. The Greek students who received distinctions are: 1.Lolas Panagiotis, Trikala, Gold Medal 2.Dimakis Panagiotis, Athina, Silver Medal 3.Mousatov Alexandros, Athens, Bronze Medal 4.Skiadopoulos Athinagoras, Rhodes, Bronze Medal 5.Tsinas Konstantinos, Trikala, Bronze Medal 6.Tsampasidis Zacharias, Katerini Honorable Mention Power through Knowledge "I hope for nothing. I fear nothing. I am free." Nikos Kazantzakis
{"url":"http://www.network54.com/Forum/248068/thread/1342859386/last-1342864909/Greek+Pupils+Excel+at+the+53rd+International+Mathematical+Olympiad","timestamp":"2014-04-19T14:53:51Z","content_type":null,"content_length":"30193","record_id":"<urn:uuid:2b5e06e7-bbb6-4f4b-b954-56ed2cb9fd0d>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00418-ip-10-147-4-33.ec2.internal.warc.gz"}
How to replace failure by a list of successes Results 11 - 20 of 61 - International Conference on Logic Programming , 1999 "... At present, the field of declarative programming is split into two main areas based on different formalisms; namely, functional programming, which is based on lambda calculus, and logic programming, which is based on firstorder logic. There are currently several language proposals for integrating th ..." Cited by 20 (3 self) Add to MetaCart At present, the field of declarative programming is split into two main areas based on different formalisms; namely, functional programming, which is based on lambda calculus, and logic programming, which is based on firstorder logic. There are currently several language proposals for integrating the expressiveness of these two models of computation. In this thesis we work towards an integration of the methodology from the two research areas. To this end, we propose an algebraic approach to reasoning about logic programs, corresponding to the approach taken in functional programming. In the first half of the thesis we develop and discuss a framework which forms the basis for our algebraic analysis and transformation methods. The framework is based on an embedding of definite logic programs into lazy functional programs in Haskell, such that both the declarative and the operational semantics of the logic programs are preserved. In spite of its conciseness and apparent simplicity, the embedding proves to have many interesting properties and it gives rise to an algebraic semantics of logic programming. It also allows us to reason about logic programs in a simple calculational style, using rewriting and the algebraic laws of combinators. In the embedding, the meaning of a logic program arises compositionally from the meaning of its constituent subprograms and the combinators that connect them. In the second half of the thesis we explore applications of the embedding to the algebraic transformation of logic programs. A series of examples covers simple program derivations, where our techniques simplify some of the current techniques. Another set of examples explores applications of the more advanced program development techniques from the Algebra of Programming by Bird and de Moor [18], where we expand the techniques currently available for logic program derivation and optimisation. To my parents, Sandor and Erzsebet. And the end of all our exploring Will be to arrive where we started And know the place for the first time. - In 2nd International Workshop on Practial Aspects of Declarative Languages, volume 1753 of LNCS , 2000 "... . Pattern matching is a great convenience in programming. However, pattern matching has its problems: it conicts with data abstraction; it is complex (at least in Haskell, which has pattern guards, irrefutable patterns, n+k patterns, as patterns, etc.); it is a source of runtime errors; and lastly, ..." Cited by 17 (1 self) Add to MetaCart . Pattern matching is a great convenience in programming. However, pattern matching has its problems: it conicts with data abstraction; it is complex (at least in Haskell, which has pattern guards, irrefutable patterns, n+k patterns, as patterns, etc.); it is a source of runtime errors; and lastly, one cannot abstract over patterns as they are not a rst class language construct. This paper proposes a simplication of pattern matching that makes patterns rst class. The key idea is to treat patterns as functions of type a!Maybe bi.e., a!(Nothing|Just b); thus, patterns and pattern combinators can be written as functions in the language. 1 Introduction A hotly debated issue in the language Haskell [HJW92] has been patterns. What are their semantics? Do we want n+1 patterns? Do we need @-patterns? When do we match lazily and when do we match strictly? Do we need to extend patterns with pattern guards? And etc. In this paper I will propose, not another extension, but a - Department of Computer Science, University of Utrecht , 1999 "... The distinctive merit of the declarative reading of logic programs is the validity ofallthelaws of reasoning supplied by the predicate calculus with equality. Surprisingly many of these laws are still valid for the procedural reading � they can therefore be used safely for algebraic manipulation, pr ..." Cited by 16 (4 self) Add to MetaCart The distinctive merit of the declarative reading of logic programs is the validity ofallthelaws of reasoning supplied by the predicate calculus with equality. Surprisingly many of these laws are still valid for the procedural reading � they can therefore be used safely for algebraic manipulation, program transformation and optimisation of executable logic programs. This paper lists a number of common laws, and proves their validity for the standard (depth- rst search) procedural reading of Prolog. They also hold for alternative search strategies, e.g. breadth- rst search. Our proofs of the laws are based on the standard algebra of functional programming, after the strategies have been given a rather simple implementation in Haskell. 1 - In Proc. 2011 ACM Conference on Object-Oriented Programming Systems, Languages, and Applications , 2011 "... In many projects, lexical preprocessors are used to manage different variants of the project (using conditional compilation) and to define compile-time code transformations (using macros). Unfortunately, while being a simple way to implement variability, conditional compilation and lexical macros hi ..." Cited by 12 (4 self) Add to MetaCart In many projects, lexical preprocessors are used to manage different variants of the project (using conditional compilation) and to define compile-time code transformations (using macros). Unfortunately, while being a simple way to implement variability, conditional compilation and lexical macros hinder automatic analysis, even though such analysis is urgently needed to combat variability-induced complexity. To analyze code with its variability, we need to parse it without preprocessing it. However, current parsing solutions use unsound heuristics, support only a subset of the language, or suffer from exponential explosion. As part of the TypeChef project, we contribute a novel variability-aware parser that can parse almost all unpreprocessed code without heuristics in practicable time. Beyond the obvious task of detecting syntax errors, our parser paves the road for further analysis, such as variability-aware type checking. We implement variability-aware parsers for Java and GNU C and demonstrate practicability by parsing the product line MobileMedia and the entire X86 architecture of the Linux kernel with 6065 variable features. - The Journal of Logic Programming , 1994 "... This paper presents a new program analysis framework to approximate call patterns and their results in functional logic computations. We consider programs containing non-strict, nondeterministic operations in order to make the analysis applicable to modern functional logic languages like Curry or TO ..." Cited by 10 (0 self) Add to MetaCart This paper presents a new program analysis framework to approximate call patterns and their results in functional logic computations. We consider programs containing non-strict, nondeterministic operations in order to make the analysis applicable to modern functional logic languages like Curry or TOY. For this purpose, we present a new fixpoint characterization of functional logic computations w.r.t. a set of initial calls. We show how programs can be analyzed by approximating this fixpoint. The results of such an approximation have various applications, e.g., program optimization as well as verifying safety properties of programs. 1 - In ????, pages , 2007 "... Abstract. Parser combinators are higher-order functions used to build parsers as executable specifications of grammars. Some existing implementations are only able to handle limited ambiguity, some have exponential time and/or space complexity for ambiguous input, most cannot accommodate left-recurs ..." Cited by 10 (0 self) Add to MetaCart Abstract. Parser combinators are higher-order functions used to build parsers as executable specifications of grammars. Some existing implementations are only able to handle limited ambiguity, some have exponential time and/or space complexity for ambiguous input, most cannot accommodate left-recursive grammars. This paper describes combinators, implemented in Haskell, which overcome all of these limitations. - In Proc. Haskell workshop "... In this paper, we show how to manipulate syntax with binding using a mixed representation of names for free variables (with respect to the task in hand) and de Bruijn indices [5] for bound variables. By doing so, we retain the advantages of both representations: naming supports easy, arithmetic-free ..." Cited by 9 (2 self) Add to MetaCart In this paper, we show how to manipulate syntax with binding using a mixed representation of names for free variables (with respect to the task in hand) and de Bruijn indices [5] for bound variables. By doing so, we retain the advantages of both representations: naming supports easy, arithmetic-free manipulation of terms; de Bruijn indices eliminate the need for α-conversion. Further, we have ensured that not only the user but also the implementation need never deal with de Bruijn indices, except within key basic operations. Moreover, we give a hierarchical representation for names which naturally reflects the structure of the operations we implement. Name choice is safe and straightforward. Our technology combines easily with an approach to syntax manipulation inspired by Huet’s ‘zippers’[10]. Without the ideas in this paper, we would have struggled to implement EPIGRAM [19]. Our example—constructing inductive elimination operators for datatype families—is but one of many where it proves invaluable. - IN PROC. OF THE 20TH INTERNATIONAL WORKSHOP ON FUNCTIONAL AND (CONSTRAINT) LOGIC PROGRAMMING (WFLP 2011 , 2011 "... In this paper we present our first steps towards a new system to compile functional logic programs of the source language Curry into purely functional Haskell programs. Our implementation is based on the idea to represent non-deterministic results as values of the data types corresponding to the res ..." Cited by 9 (4 self) Add to MetaCart In this paper we present our first steps towards a new system to compile functional logic programs of the source language Curry into purely functional Haskell programs. Our implementation is based on the idea to represent non-deterministic results as values of the data types corresponding to the results. This enables the application of various search strategies to extract values from the search space. We show by several benchmarks that our implementation can compete with or outperform other existing implementations of Curry.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=377301&sort=cite&start=10","timestamp":"2014-04-18T01:00:36Z","content_type":null,"content_length":"37754","record_id":"<urn:uuid:181686f1-033b-4963-9ddb-2c63320df465>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00632-ip-10-147-4-33.ec2.internal.warc.gz"}
Utilizing the Concept of the Golden Proportion Written by Dino S. Javaheri, DMD, and Sara Shahnavaz, DDS Friday, 31 May 2002 19:00 Adhering to smile design principles is essential when initiating the restoration of teeth in the anterior region of the mouth. When the dentist does not use smile design guidelines, there is no blueprint for tooth preparation and restoration. In the absence of a plan, teeth may be underprepared or overprepared, leading to a compromise of the aesthetic and functional results. In addition, the laboratory technician must be provided with an adequate amount of information to allow fabrication of ideal restorations. Providing the technician with only a stone model communicates minimal aesthetic information. In order to achieve optimal aesthetic results, the dentist must provide the technician with a plan specifically designed for each patient. This must include the proper position of the incisal edges, incisal plane, midline position, the length and width of the teeth, tissue height, axial inclination, embrasure form, line angles, and surface texture.^1-3 A mathematical concept called the Golden Proportion has provided guidelines for tooth preparation and fabrication of the restorations. Application of this principle will, therefore, help to improve design, functional results, laboratory communication, and patient satisfaction. This article discusses the Golden Proportion and its role in dental aesthetics. The Golden Proportion is based on a very specific mathematical proportion that is acquired from mathematics and nature.^4-6 This proportion, also referred to as the Divine Proportion or Magic ratio, describes a number often encountered when taking the ratio of distance in simple geometric figures such as the pentagram, decagon, and dodecagon.^7 To understand the Golden Proportion from a mathematical standpoint, consider a line segment with a length of one. Divide this segment into two segments, a shorter segment, x, and a longer segment, 1-x. The Golden Proportion is one where the ratio of the shorter segment to the longer segment (x:1-x) is equal to the ratio of the longer segment to the whole segment(1-x:1).^8 The mathematical equation that needs to be solved for x is x/ (1-x)=(1-x)/1. Solving this equation results in a shorter segment length of 0.382 and the longer segment length at 0.618.^7,9 This is the only ratio that solves the equation. This proportion can be found throughout the universe; from the spirals of galaxies to the spiral of a Nautilus seashell; from the harmony of music to the beauty in art.8,9 A botanist will find it in the growth patterns of flowers and plants, while a zoologist sees it in the breeding of rabbits.^10 The ancient Greeks have often been credited with defining the Golden Proportion.^11 Pythagorus and Phidius are believed to have identified it.^11 The Golden Proportion was closely studied by the Greek sculptor Phidias, hence the designated mathematical symbol for the Golden Proportion ratio is phi.^9 The Greeks studied phi through their mathematics and used it in their architecture. The Parthenon at Athens is a classic example of the use of the Golden Proportion. The initial discovery of the Golden Proportion by the Greeks has been challenged by architects and archeologists.8 Badawy ^12 claims that the use of proportions was initiated by the ancient Egyptians, based on his study of Egyptian buildings, sculptures, and paintings. The ancient Egyptians used it in the construction of the great pyramids and in the design of hieroglyphs found on tomb walls.^13 Separately, the ancients of Mexico applied the Golden Proportion while building the Sun pyramid at Teotihuacan.^8 Although debate remains as to the origin of this proportion, it is clearly evident in Egyptian, Greek, Roman, Romanesque, Gothic, and Renaissance culture.^9,14 The Golden Proportion is also evident in the arts.^15 Renowned artists such as Michelangelo, Raphael, and Leonardo da Vinci made use of this concept.^16 Leonardo da Vinci drew the "ideal man" using the Golden Proportion, and the head of Mona Lisa was drawn using this relationship. Evidence suggests that classical music composed by Mozart, Beethoven, and Bach incorporated the Golden Proportion.^ 17 Whether the use of the Golden Proportion by these artists and musicians was by design, intuition, or accident is not known. In the 19th century, the French mathematician and scientist, Edouard Lucas, documented the Golden Proportion as it occurs in nature. The French architect Le Corbusier applied his concepts to modern architecture. Today, windows, furniture, automobiles, and smiles are designed using the concept of the Golden Proportion. In dentistry, the Golden Proportion has been suggested as one possible mathematical approach to development of ideal size and shape relationships for maxillary teeth. These principles can be used to determine the width of the teeth as they relate to each other. There are many important design considerations that accompany application of the Golden Proportion. Only after the incisal edge position, incisal plane, gingival plane, and central incisor length have been determined, can the Golden Proportion be applied. As applied to the maxillary teeth, the Golden Proportion requires a 62% reduction in the viewing width of each tooth, beginning with the central incisor, and proceeding posteriorly. It is important to distinguish between the viewing width and the clinical width of a tooth. Viewing width is measured on a straight-on frontal photograph of the patient's smile. Anatomical measurements taken in a patient's mouth or on a model will not provide this relationship. To apply the concept of the Golden Proportion, a lateral incisor can be arbitrarily assigned a viewing width of 1, in which case the central incisor width should be 1.6, the cuspid 0.62, the first premolar 0.38, and the second premolar 0.23.^4,18 This allows for a smile dominated by the central incisors, with the other teeth becoming progressively smaller. A variation of the Golden Proportion was suggested by Snow.^19 He proposed that the Golden Proportion could be used to develop symmetry, dominance, and proportion for aesthetically pleasing smiles. To simplify these proportions, the assigned values for the six anterior teeth were added (assigning 1.6 to the central, 1 to the lateral, and 0.62 to the cuspid). Then the assigned width of each tooth was divided by the total to determine the relative percentage. The resulting percentages were: central incisor 25%, lateral incisor 16%, and cuspid 9%. These percentages illustrate the dominance of the central incisors, which are 50% of the cuspid-to-cuspid width. Following is a clinical case in which the concept of the Golden Proportion is applied. Case presentation Figure 1. Preoperative view of patient presenting with aesthetic concerns. The patient, a 42-year-old female, was unhappy with the appearance of her maxillary anterior teeth (Figure 1). She had four veneers that had been placed approximately 15 years ago. One had debonded and the other three had marginal leakage. She was unhappy with her smile because of the color and shape/size discrepancies of the maxillary anterior teeth. The patient did not display the mandibular teeth at full smile or during conversation, and therefore was not concerned about the appearance of these teeth. Clinical Exam The clinical examination revealed leaking porcelain veneers on the central incisors and the left lateral incisor. The veneer on the maxillary right cuspid had debonded. The tissue heights were slightly uneven, resulting in disproportional sizes of the teeth. The general periodontal health, however, was good. The patient's occlusion exhibited loss of anterior guidance, which created group function. The patient did not report any tooth pain, and there was no history of TMJ symptoms. The patient sought a smile of uniform symmetry, appropriate tooth shape, and lighter color. Tissue recontouring was discussed to allow for development of proper proportions of the new restorations. To evaluate the position of the incisal edge of the teeth, a direct mock-up was prepared. Although this can be accomplished with a wax-up, a direct mock-up not only allows the patient to preview and approve the length of the veneers, but serves as a preparation guide for the clinician. Determining the location of the incisal edge of the central incisors is one of the most important factors for a successful reconstruction. The three determinants to be considered are occlusion, phonetics, and aesthetics. The occlusion should be characterized by anterior guidance, cuspid disclusion, lack of slide between centric relation and centric occlusion, absence of balancing or working side interferences, and equal and simultaneous force on each tooth when the teeth are in contact, with no deflection when additional force is exerted. Phonetics is also used as a guide to determine the position of the incisal edge. During the production of "F" or "V" sounds, the maxillary anterior teeth should lightly touch the vermilion border of the lower lip. Finally, both the clinician and the patient should evaluate the aesthetics of the proposed incisal edge position. Figure 2. Tracing from preoperative photograph to apply smile design principles. Figure 3. Proposed incisal edge position and gingival heights are marked. Figure 4. New central incisor length is measured using a ruler. Figure 5. A 75% width-to-height ratio is determined and marked. Figure 6. The shapes of the central incisors are drawn using the length and width measurements. After determining the incisal edge position, the length and tissue heights of the central incisors are established. From this point a trace drawing can be used to apply design concepts. It is important to remember that the Golden Proportion is applied to the viewing width of teeth, using a frontal photograph. Using tracing paper, the patient's teeth are outlined (Figure 2). The new incisal edge position and the gingival position are placed on the working outline (Figure 3). To give the central incisors a distinct rectangular shape, the width-to-height ratio should be approximately 75%. The length of the central incisors is measured on the photograph and multiplied by 0.75 to give the correct photographic width for these teeth (Figures 4 and 5). After marking the width measurement, the proposed shape of the new central incisors can be drawn (Figure 6). At this time, the width of the other anterior teeth can be established using the Golden Proportion. Figure 7. The width of the central incisors is reduced by 62% to give the width of the lateral incisors. Figure 8. The shapes of the lateral incisors are drawn. Figure 9. A 62% reduction in width of the lateral incisors gives the profile width of the cuspids. The width of the central incisors is multiplied by 0.62, thereby establishing the width of the lateral incisors. This new width of the lateral incisors is marked, and the shapes are drawn (Figures 7 and 8). The same technique is utilized for the profile view of the cuspids and premolars (Figure 9). It is important to note that the Golden Proportion only approximates the width of each tooth, not the length. This tracing helps to define the treatment plan. The tracing shows where tissue recontouring is required and helps in preparation design. The tracing demonstrates whether the widths of the teeth must be altered in order to design a proportional and symmetrical smile. To do this may require aggressive preparation of the teeth to open the contact areas, so the laboratory technician can establish the new proportions. Prior to prescribing the final veneers, the provisional restorations are fabricated to the guidelines set forth in the smile design process. This gives the practitioner and patient an opportunity to evaluate the proposed aesthetics and occlusion. If necessary, changes can be made in the provisional restorations and relayed to the laboratory technician. Figure 10. The final smile designed using the concept of the Golden Proportion. After patient approval, the final pressed ceramic restorations were placed using a resin cement. The patient's occlusion and aesthetics were re-evaluated 1 week after cementation. The patient reported no sensitivity or complications. Occlusion and gingival health were excellent. The patient was given a mandibular bleaching tray to achieve a shade match between the mandibular and maxillary teeth. The completed case is shown in Figure 10. It is not necessary to achieve the width of the anterior teeth exactly as described by the Golden Proportion when designing an anterior reconstruction. In fact, these exact proportions rarely occur in the natural dentition. The Golden Proportion is just one of many factors involved in smile design. The value of the Golden Proportion is as a diagnostic tool in evaluating a smile, and as a guide to veneer preparation and fabrication. In fact, several studies have indicated that wide variation exists for patients and dentists regarding ideal anterior tooth proportions.^20-22 In a study by Rosenstiel et al,^21 549 dentists evaluated computer images of the same six maxillary anterior teeth. Dentists preferred 80% proportions when viewing short or very short teeth, and the Golden Proportion (62%) for very tall teeth. There was no identifiable preference for teeth of normal length or tall teeth, and choices could not be predicted based on gender, specialist training, experience, or patient volume. The results of a similar study by Kokich et al^22 demonstrated that orthodontists, general dentists, and lay people detect specific aesthetic discrepancies at different levels of change. In the case of the Golden Proportion, lay people did not discern a lateral incisor narrowing until the deviation reached 4 mm. The patient who presents for cosmetic rehabilitation will probably not be comfortable with some of the wide deviations identified in the previous studies. Nevertheless, these studies demonstrate that no single rule or formula can be used to generalize across a population. The design elements presented here do not represent a complete discussion of the available principles and techniques of smile design. In addition, patients and dentists will vary in their preferences. The principles described here offer one set of guidelines for aesthetic dentistry. In the case of the Golden Proportion, the exact proportions are not as important as are the concepts of symmetry and the use of a logical approach to aesthetic restoration of the maxillary anterior teeth.F The authors would like to thank Frontier Dental Laboratory for fabricating the restorations presented in the article. 1. Moskowitz ME, Nayyar A. Determinants of dental esthetics: a rational for smile analysis and treatment. Compend Contin Educ Dent. 1995;16:1164-1166. 2. Eubank J. Macroesthetic elements of smile design. J Am Dent Assoc. 2001;132:39-45. 3. Morely J. Smile design: specific considerations. J Calif Dent Assoc. 1997;25:633-637. 4. Preston JD. The Golden Proportion revisited. J Esthet Dent. 1993;5:247-251. 5. Gillen RJ, Schwartz RS, Hilton TJ, et al. An analysis of selected normative tooth proportions. Int J Prosthodont. 1994;7:410-417. 6. Herz-Fischler R. A Mathematical History of the Golden Number. New York, NY: Dover Publications Inc; 1998. 7. Ghyka MC. The Geometry of Art and Life. New York, NY: Dover Publications Inc; 1977. 8. Williams R. The Golden Proportion. The Geometrical Foundation of Natural Structure: Source Book of Design. New York, NY: Dover Publications Inc; 1979. 9. Huntley HE. The Divine Proportion: A Study In Mathematical Beauty. New York, NY: Dover Publications Inc; 1970. 10. Fischler R. How to find the golden number without really trying. Fibonacci Quart. 1981:19:406-410. 11. Boyer CB. History of Mathematics. New York, NY: Wiley & Sons; 1968. 12. Badawy A. A History of Egyptian Architecture. 3 volumes. New York, NY: University of California Press; 1966-68. 13. Robins G, Shute C. Mathematical bases of ancient Egyptian architecture and graphic art. Historia Math. 1985;12:107-122. 14. Fields M. Practical mathematics of roman times. Mathematics Teacher. 1933;26:77-84. 15. Kappraff J. Connections: The Geometric Bridge Between Art and Science. New York, NY: McGraw-Hill, Inc; 1991. 16. Kemp, Martin. Spirals of life: D'Arcy Thompson and Theodore Cook, with Leonardo and Durer in retrospect. Physis Riv Internaz Storia Sci. 1995;32:37-54. 17. Evans B. Number and Form and Content: A Composer's Path Of Inquiry. The Visual Mind. Cambridge, Mass: Leonardo Book Series, MIT Press; 1993. 18. Gillen RJ, Schwartz RS, Hilton TJ, et al. An analysis of selected normative tooth proportions. Int J Prosthodont. 1994;7:410-417. 19. Snow SR. Esthetic smile analysis of maxillary anterior tooth width: the golden percentage. J Esthet Dent. 1999;11:177-184. 20. Wagner IV, Carlsson GE, Ekstrand K, et al. A comparative study of assessment of dental appearance by dentists, dental technicians, and laymen using computer-aided image manipulation. J Esthet Dent. 1996;8:199-205. 21. Rosenstiel SF, Ward DH, Rashid RG. Dentists' preferences of anterior tooth proportion: a web-based study. J Prosthodont. 2000;9:123-136. 22. Kokich VO Jr, Kiyak HA, Shapiro PA. Comparing the perception of dentists and lay people to altered dental esthetics. J Esthet Dent. 1999;11:311-324. Dr. Javaheri is an assistant professor in the Advanced Education in General Dentistry residency program at University of the Pacific (UOP). He is the course director for the UOP continuing education program "Setting New Standards in Cosmetic Dentistry." Dr. Javaheri has published numerous articles in various publications. He maintains an aesthetic/restorative practice in Alamo, Calif. Dr. Javaheri can be reached by e-mail at This e-mail address is being protected from spambots. You need JavaScript enabled to view it . Dr. Shahnavaz maintains a practice in Alamo, Calif.
{"url":"http://www.dentistrytoday.com/restorative/1889","timestamp":"2014-04-19T06:10:49Z","content_type":null,"content_length":"70435","record_id":"<urn:uuid:5b2c52aa-9aba-48e6-bb53-cbb4b2d4e1aa>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00494-ip-10-147-4-33.ec2.internal.warc.gz"}
POP Seminar, 02-07-03 Patricia Johann Rutgers University Parametric Polymorphism, Observational Equivalence, and Monadic Computation In this talk we propose a calculus, called PolyComp, supporting polymorphism, type- and term-level recursion, and computation (i.e., monadic) types. We give PolyComp an operational semantics, which we use to construct parametric models for it. We show that equivalence in these models coincides with corresponding notions of observational equivalence and that these, in turn, depend on the types at which observations can be made. Finally, we use our models to derive operational properties of PolyComp which are independent of the choice of monad, and also discuss how monads which model specific computational effects can be incorporated into our framework. Successfully incorporating new language features into parametric models of observational equivalence is nontrivial because operational techniques are often not modular. In particular, term-level recursion typically does not interact well with other features. To our knowledge, ours is the first parametric model of observational equivalence for a calculus supporting all the features of PolyComp. This is joint work with Neil Ghani. Principles of Programming Seminars
{"url":"http://www.cs.cmu.edu/afs/cs/Web/Groups/pop/seminar/051214.html","timestamp":"2014-04-19T15:12:50Z","content_type":null,"content_length":"4088","record_id":"<urn:uuid:e849bfb9-d2e8-4197-9048-6b244f64b63e>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00512-ip-10-147-4-33.ec2.internal.warc.gz"}
C# .NET It is bit difficult to give exact algorithm for this problem but Just for reference you can refer following solution- float[] used = new float[n + 1]; //used[j] is the amount of space in bin j already used up. int i, j; //Initialize all used entries to 0.0 //Sort S into descending(nonincreasing)order, giving the sequence S1 >= S2 >= ... >= Sn. for(i = 1; i <= n; i++) //Look for a bin in which s[i] fits. for(j = 1; j <= n; j++) if (used[j] + S[i] <= 1.0) { bin[i] = j; used[j] += S[i]; break; //exit for(j) except this refer these links for information-
{"url":"http://www.nullskull.com/q/10447494/3-d-bin-packing-problem-in-c-sharp.aspx","timestamp":"2014-04-16T04:26:04Z","content_type":null,"content_length":"14270","record_id":"<urn:uuid:d6a3dea2-cbc0-45e4-b3d5-c73ce3ae2507>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00145-ip-10-147-4-33.ec2.internal.warc.gz"}
Example of multi dimensional array in turbo c? i dont know how to manipulate array in turbo c Example of multi dimensional array in turbo c? An array is a variable that has space to hold 1 or more instances of only ONE data type. unsigned int fuzz[256]; This allocates enough memory for 256 unsigned integers, referenced by the variable name fuzz. You can store up to 256 unsigned integer numbers in that array. Example: 4,5,7,10,etc. If you want to write a number to the array, you would write something like: fuzz[2]=4; In this case, the unsigned integer 4 is written to the 3rd location of array fuzz (array numbers start at 0, not 1!). A multi-dimensional array is like a matrice. Using my previous example, you can think of a single array as 1 row of numbers. A multi-dimensional array can be thought as having multiple rows of data. unsigned int foo[256][256]; This is a two dimensional array. Think of it as 1 row of numbers, with another row of numbers beneath it. To write to the first location in the SECOND row, you could do this: foo[1,0]=4. The first number in the bracket represents the row number (0-1 in this case) and the second number represents the location on that particular row. The number 4 is written to the second row, 1st column on that row. No comments:
{"url":"http://c-array2.blogspot.com/2009/07/example-of-multi-dimensional-array-in.html","timestamp":"2014-04-17T15:26:42Z","content_type":null,"content_length":"51080","record_id":"<urn:uuid:67fa8390-5f1b-4016-83fd-3b8459d2fdc4>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00475-ip-10-147-4-33.ec2.internal.warc.gz"}
Gödel and the end of physics Stephen Hawking "Gödel and the end of physics" audio: start (ogg vorbis) start (realaudio) download (ogg vorbis) download (realaudio) This lecture is the intellectual property of Professor S.W.Hawking. You may not reproduce, edit, translate, distribute, publish or host this document in any way with out the permission of Professor In this talk, I want to ask how far can we go, in our search for understanding and knowledge.wIll we ever find a complete form of the laws of nature. bY a complete form, I mean a set of rules, that in principle at least, enable us to predict the future to an arbitrary accuracy, knowing the state of the universe at one time. A qualitative understanding of the laws, has been the aim of philosophers and scientists, from Ahristottal onwards. But it was Newton's Principia Mathematica in 1687, containing his theory of universal gravitation, that made the laws quantitative and precise. This led to the idea of scientific determinism, which seems first to have been expressed by Le-plass. iF at one time, one knew the positions and velocities of all the particles in the universe, the laws of science should enable us to calculate their positions and velocities, at any other time, past or future. The laws may or may not have been ordained by God, but scientific determinism asserts that he does not intervene, to break them. At first, it seemed that these hopes for a complete determinism would be dashed, by the discovery early in the 20th century, that events like the decay of radio active atoms, seemed to take place at random. It was as if God was playing dice, in Einstein's phrase. But science snatched victory from the jaws of defeat, by moving the goal posts, and redefining what is meant by a complete knowledge of the universe. It was a stroke of brilliance, whose philosophical implications have still not been fully appreciated.mUch of the credit belongs to Paul Dirac, my pree-decessor but one in the Lucasian chair, though it wasn't motorized in his time. Dirac showed how the work of Erwin Schroedinger and Werner Heisenberg, could be combined in new picture of reality, called quantum theory. In quantum theory, a particle is not characterized by two quantities, its position and its velocity, as in classical Newtonian theory.iNstead it is described by a single quantity, the wave function. tHe size of the wave function at a point, gives the probability that the particle will be found at that point, and the rate at which the wave function changes from point to point, gives the probability of different velocities. One can have a wave function that is sharply peaked at a point.tHis corresponds to a state in which there is little uncertainty in the position of the particle.hOwever, the wave function varies rapidly, so there is a lot of uncertainty in the velocity. Similarly, a long chain of waves has a large uncertainty in position, but a small uncertainty in velocity.oNe can have a well defined position, or a well defined velocity, but not both. This would seem to make complete determinism impossible.iF one can't accurately define both the positions, and the velocities, of particles at one time, how can one predict what they will be in the future. It is like weather forecasting.tHe forecasters don't have an accurate knowledge of the atmosphere at one time.jUst a few measurements at ground level, and what can be learnt from satellite photographs.tHats why weather forecasts are so unreliable. However, in quantum theory, it turns out one doesn't need to know both the positions, and the velocities. If one knew the laws of physics, and the wave function at one time, then something called the Schroedinger equation, would tell one how fast the wave function was changing with time. This would allow one to calculate the wave function at any other time. One can therefore claim that there is still determinism, but it is a determinism on a reduced level.iNstead of being able accurately to predict two quantities, position and velocity, one can predict only a single quantity, the wave function. We have re-defined determinism, to be just half of what Le-plass thought it was. sOme people have tried to connect the unpredictability of the other half, with con'shusness, or the intervention of supernatural beings.bUt it is difficult to make either case, for something that is completely random. In order to calculate how the wave function develops in time, one needs the quantum laws that govern the universe.sO how well do we know these laws. As Dirac remarked, Maxwell's equations of light, and the relativistic wave equation, which he was too modest to call the Dirac equation, govern most of physics, and all of chemistry and biology. So in principle, we ought to be able to predict human behavior, though I can't say I have had much success myself. The trouble is that the human brain contains far too many particles, for us to be able to solve the equations.bUt it is comforting to think we might be able to predict the nematode worm, even if we can't quite figure out humans. Quantum theory, and the Maxwell and Dirac equations, indeed govern much of our life, but there are two important areas beyond their scope.oNe is the nuclear forces.tHe other is gravity. The nuclear forces are responsible for the Sun shining, and the formation of the elements, including the carbon and oxygen of which we are made. And gravity caused the formation of stars and planets, and indeed, of the universe itself.sO it is important to bring them into the scheme. The so called weak nuclear forces, have been unified with the Maxwell equations, by Abdus Salahm and Stephen Weinberg, in what is known as, the Electro weak theory. The predictions of this theory have been confirmed by experiment, and the authors rewarded with Nobel prizes. The remaining nuclear forces, the so called strong forces, have not yet been successfully unified with the electro weak forces, in an observationally tested scheme. Instead, they seem to be described by a similar but separate theory, called QCD. It is not clear who, if anyone, should get a Nobel prize for QCD, but David Gross and Gerard teh Hooft, share credit for showing the theory gets simpler at high energies. I had quite a job to get my speech synthesizer to pronounce Gerrard's surname.iT wasn't familiar with ap-osstrophee t. The electro weak theory, and QCD, together constitute the so called Standard Model of particle physics, which aims to describe everything except gravity. The standard model seems to be adequate for all practical purposes, at least for the next hundred years.bUt practical or economic reasons, have never been the driving force in our search for a complete theory of the universe. No one working on the basic theory, from Galeelaeo onward, has carried out their research to make money, though Dirac would have made a fortune if he had patented the Dirac equation. He would have had a royalty on every television, walkman, video game and computer. The real reason we are seeking a complete theory, is that we want to understand the universe, and feel we are not just the victims of dark and mysterious forces.iF we understand the universe, then we control it, in a sense. The standard model is clearly unsatisfactory in this respect. fIrst of all, it is ugly and ad hoc.tHe particles are grouped in an apparently arbitrary way, and the standard model depends on 24 numbers, whose values can not be deduced from first principles, but which have to be chosen to fit the observations. What understanding is there in that?cAn it be Nature's last word. The second failing of the standard model, is that it does not include gravity.iNstead, gravity has to be described by Einstein's General Theory of Relativity. General relativity, is not a quantum theory, unlike the laws that govern everything else in the universe. aLthough it is not consistent to use the non quantum general relativity, with the quantum standard model, this has no practical significance at the present stage of the universe, because gravitational fields are so weak. However, in the very early universe, gravitational fields would have been much stronger, and quantum gravity would have been significant. Indeed, we have evidence that quantum uncertainty in the early universe, made some regions slightly more or less dense, than the otherwise uniform background. We can see this in small differences in the background of microwave radiation from different directions. The hotter, denser regions will condense out of the expansion as galaxies, stars and planets. All the structures in the universe, including ourselves, can be traced back to quantum effects in the very early stages.iT is therefore essential to have a fully consistent quantum theory of gravity, if we are to understand the universe. Constructing a quantum theory of gravity, has been the outstanding problem in theoretical physics, for the last 30 years. It is much, much more difficult than the quantum theories of the strong and electro weak forces.tHese propagate in a fixed background of space and time.oNe can define the wave function, and use the Schroedinger equation to evolve it in time. But according to general relativity, gravity is space and time.sO how can the wave function for gravity, evolve in time.aNd anyway, what does one mean by the wave function for gravity. It turns out that, in a formal sense, one can define a wave function, and a Schroedinger like equation for gravity, but that they are of little use in actual calculations. Instead, the usual approach is to regard the quantum spacetime, as a small perturbation of some background spacetime, generally flat space. The perturbations can then be treated as quantum fields, like the electro weak and QCD fields, propagating through the background spacetime. In calculations of perturbations, there is generally some quantity, called the effective coupling, which measures how much of an extra perturbation, a given perturbation generates. If the coupling is small, a small perturbation, creates a smaller correction, which gives an even smaller second correction, and so on.pErturbation theory works, and can be used to calculate to any degree of accuracy. An example is your bank account.tHe interest on the account, is a small perturbation.a very small perturbation if you are with one of the big banks). The interest is compound.tHat is, there is interest on the interest, and interest on the interest on the interest.hOwever, the amounts are tiny.tO a good approximation, the money in your account, is what you put there. On the other hand, if the coupling is high, a perturbation generates a larger perturbation, which then generates an even larger perturbation. An example would be borrowing money from loan sharks.tHe interest can be more than you borrowed, and then you pay interest on that.iT is disastrous. With gravity, the effective coupling is the energy or mass of the perturbation, because this determines how much it warps spacetime, and so creates a further perturbation. However, in quantum theory, quantities like the electric field, or the geometry of spacetime, don't have definite values, but have what are called, quantum fluctuations. These fluctuations have energy.iN fact, they have an infinite amount of energy, because there are fluctuations on all length scales, no matter how small. Thus treating quantum gravity as a perturbation of flat space, doesn't work well, because the perturbations are strongly coupled. Supergravity was invented in 1976 to solve, or at least improve, the energy problem.It is a combination of general relativity with other fields, such that that each species of particle, has a super partner species. tHe energy of the quantum fluctuations of one partner is positive, and the other negative, so they tend to cancel.It was hoped the infinite positive and negative energies would cancel completely, leaving only a finite remainder. In this case, a perturbation treatment would work, because the effective coupling would be weak. However in 1985, people suddenly lost confidence that the infinities would cancel. tHis was not because anyone had shown that they definitely didn't cancel.iT was reckonned it would take a good graduate student, 300 years to do the calculation, and how would one know they hadn't made a mistake on page two. Rather it was because Ed Witten declared that string theory, was the true quantum theory of gravity, and supergravity was just an approximation, valid when particle energies are low, which in practice, they always are. In string theory, gravity is not thought of as the warping of spacetime.iNstead, it is given by string diagrams, networks of pipes that represent little loops of string, propagating through flat spacetime. The effective coupling, that gives the strength of the junctions where three pipes meet, is not the energy, as it is in supergravity. Instead it is given by what is called, the dilaton, a field that has ~not been observed.if the dilaton had a low value, the effective coupling would be weak, and string theory, would be a good quantum theory.bUt it is no earthly use for practical purposes. In the years since 1985, we have realized that both supergravity and string theory, belong to a larger structure, known as M theory.why it should be called M Theory, is completely obscure. M theory, is not a theory in the usual sense.Rather it is a collection of theories, that look very different, but which describe the same physical situation. These theories are related by mappings, or correspondences, called dualities, which imply that they are all reflections of the same underlying theory. Each theory in the collection, works well in the limit, like low energy, or low dilaton, in which its effective coupling is small, but breaks down when the coupling is large. This means that none of the theories, can predict the future of the universe, to arbitrary accuracy.for that, one would need a single formulation of M-theory, that would work in all situations. Up to now, most people have implicitly assumed that there is an ultimate theory, that we will eventually discover.Indeed, I myself have suggested we might find it quite soon. However, M-theory has made me wonder if this is true.Maybe it is not possible to formulate the theory of the universe in a finite number of statements. This is very reminiscent of Goedel's theorem.This says that any finite system of axyoms, is not sufficient to prove every result in mathematics. Goedel's theorem is proved using statements that refer to themselves.sUch statements can lead to paradoxes.aN example is, this statement is false. If the statement is true, it is false.and if the statement is false, it is true. Another example is, the barber of Corfoo shaves every man who does not shave himself. Who shaves the barber?if he shaves himself, then he doesn't, and if he doesn't, then he does. Goedel went to great lengths to avoid such paradoxes, by carefully distinguishing between mathematics, like 2+2 =4,and meta mathematics, or statements about mathematics, such as mathematics is cool, or mathematics is consistent. that is why his paper is so difficult to read.but the idea is quite simple. First Goedel showed that each mathematical formula, like 2+2=4, can be given a unique number, the Goedel number.the Goedel number of 2+2=4, is *. Second, the meta mathematical statement, the sequence of formulas A, is a proof of the formula B, can be expressed as an arithmetical relation between the Goedel numbers for A- and B. Thus meta mathematics can be mapped into arithmetic, though I'm not sure how you translate the meta mathematical statement, 'mathematics is cool'. Third and last, consider the self referring Goedel statement, G.tHis is, the statement G can not be demonstrated from the axyoms of mathematics. Suppose that G could be demonstrated.tHen the axyoms must be inconsistent, because one could both demonstrate G, and show that it can not be demonstrated. On the other hand, if G can't be demonstrated, then G is true.bY the mapping into numbers, it corresponds to a true relation between numbers, but one which can not be deduced from the axyoms. Thus mathematics is either inconsistent, or imcomplete.tHe smart money, is on imcomplete. What is the relation between Goedels theorem, and whether we can formulate the theory of the universe, in terms of a finite number of principles. One connection is obvious.aCcording to the positivist philosophy of science, a physical theory, is a mathematical model.sO if there are mathematical results that can not be proved, there are physical problems that can not be predicted. One example might be the Golbach conjecture.gIven an even number of wood blocks, can you always divide them into two piles, each of which can not be arranged in a rectangle.tHat is, it contains a prime number of Although this is incompleteness of sort, it is not the kind of unpredictability I mean.gIven a specific number of blocks, one can determine with a finite number of trials, whether they can be divided into two primes. But I think that quantum theory and gravity together, introduces a new element into the discussion, that wasn't present with classical Newtonian theory. In the standard positivist approach to the philosophy of science, physical theories live rent free in a Platonic heaven of ideal mathematical models. That is, a model can be arbitrarily detailed, and can contain an arbitrary amount of information, without affecting the universes they describe. But we are not angels, who view the universe from the outside.iNstead, we and our models, are both part of the universe we are describing. Thus a physical theory, is self referencing, like in Goedels theorem.oNe might therefore expect it to be either inconsistent, or imcomplete.tHe theories we have so far, are ~both inconsistent, and imcomplete. Quantum gravity is essential to the argument..THe information in the model, can be represented by an arrangement of particles.aCcording to quantum theory, a particle in a region of a given size, has a certain minimum amount of energy. Thus, as I said earlier, models don't live rent free.tHey cost energy.By Einsteins famous equation, E = mc squared, energy is equivalent to mass.aNd mass causes systems to collapse under gravity. It is like getting too many books together in a library.tHe floor would give way, and create a black hole that would swallow the information. Remarkably enough, Jacob Bekenstein and I, found that the amount of information in a black hole, is proportional to the area of the boundary of the hole, rather than the volume of the hole, as one might have expected. The black hole limit on the concentration of information, is fundamental, but it has not been properly incorporated into any of the formulations of M theory that we have so far. They all assume that one can define the wave function at each point of space.bUt that would be an infinite density of information, which is not allowed. On the other hand, if one can't define the wave function point wise, one can't predict the future to arbitrary accuracy, even in the reduced determinism of quantum theory. What we need, is a formulation of M theory, that takes account of the black hole information limit.bUt then our experience with supergravity and string theory, and the analogy of Goedels theorem, suggest that even this formulation, will be imcomplete. Some people will be very disappointed if there is not an ultimate theory, that can be formulated as a finite number of principles.I used to belong to that camp, but I have changed my mind. I'm now glad that our search for understanding will never come to an end, and that we will always have the challenge of new discovery.wIthout it, we would stagnate. Goedels theorem ensured there would always be a job for mathematicians.I think M theory will do the same for physicists. I'm sure Dirac would have approved. Thank you for listening.
{"url":"http://www.damtp.cam.ac.uk/events/strings02/dirac/hawking/","timestamp":"2014-04-17T15:25:29Z","content_type":null,"content_length":"21115","record_id":"<urn:uuid:085f2b44-5be0-4ba1-b7f0-97e764b624a8>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00253-ip-10-147-4-33.ec2.internal.warc.gz"}
search results Expand all Collapse all Results 1 - 4 of 4 1. CJM 2010 (vol 62 pp. 961) Multiplicative Isometries and Isometric Zero-Divisors For some Banach spaces of analytic functions in the unit disk (weighted Bergman spaces, Bloch space, Dirichlet-type spaces), the isometric pointwise multipliers are found to be unimodular constants. As a consequence, it is shown that none of those spaces have isometric zero-divisors. Isometric coefficient multipliers are also investigated. Keywords:Banach spaces of analytic functions, Hardy spaces, Bergman spaces, Bloch space, Dirichlet space, Dirichlet-type spaces, pointwise multipliers, coefficient multipliers, isometries, isometric zero-divisors Categories:30H05, 46E15 2. CJM 2009 (vol 61 pp. 124) Characterizing Complete Erd\H os Space The space now known as {\em complete Erd\H os space\/} $\cerdos$ was introduced by Paul Erd\H os in 1940 as the closed subspace of the Hilbert space $\ell^2$ consisting of all vectors such that every coordinate is in the convergent sequence $\{0\}\cup\{1/n:n\in\N\}$. In a solution to a problem posed by Lex G. Oversteegen we present simple and useful topological characterizations of $\ cerdos$. As an application we determine the class of factors of $\cerdos$. In another application we determine precisely which of the spaces that can be constructed in the Banach spaces $\ell^p$ according to the `Erd\H os method' are homeomorphic to $\cerdos$. A novel application states that if $I$ is a Polishable $F_\sigma$-ideal on $\omega$, then $I$ with the Polish topology is homeomorphic to either $\Z$, the Cantor set $2^\omega$, $\Z\times2^\omega$, or $\cerdos$. This last result answers a question that was asked by Stevo Todor{\v{c}}evi{\'c}. Keywords:Complete Erd\H os space, Lelek fan, almost zero-dimensional, nowhere zero-dimensional, Polishable ideals, submeasures on $\omega$, $\R$-trees, line-free groups in Banach spaces Categories:28C10, 46B20, 54F65 3. CJM 2004 (vol 56 pp. 225) Complex Uniform Convexity and Riesz Measure The norm on a Banach space gives rise to a subharmonic function on the complex plane for which the distributional Laplacian gives a Riesz measure. This measure is calculated explicitly here for Lebesgue $L^p$ spaces and the von~Neumann-Schatten trace ideals. Banach spaces that are $q$-uniformly $\PL$-convex in the sense of Davis, Garling and Tomczak-Jaegermann are characterized in terms of the mass distribution of this measure. This gives a new proof that the trace ideals $c^p$ are $2$-uniformly $\PL$-convex for $1\leq p\leq 2$. Keywords:subharmonic functions, Banach spaces, Schatten trace ideals Categories:46B20, 46L52 4. CJM 1999 (vol 51 pp. 26) Separable Reduction and Supporting Properties of Fréchet-Like Normals in Banach Spaces We develop a method of separable reduction for Fr\'{e}chet-like normals and $\epsilon$-normals to arbitrary sets in general Banach spaces. This method allows us to reduce certain problems involving such normals in nonseparable spaces to the separable case. It is particularly helpful in Asplund spaces where every separable subspace admits a Fr\'{e}chet smooth renorm. As an applicaton of the separable reduction method in Asplund spaces, we provide a new direct proof of a nonconvex extension of the celebrated Bishop-Phelps density theorem. Moreover, in this way we establish new characterizations of Asplund spaces in terms of $\epsilon$-normals. Keywords:nonsmooth analysis, Banach spaces, separable reduction, Fréchet-like normals and subdifferentials, supporting properties, Asplund spaces Categories:49J52, 58C20, 46B20
{"url":"http://cms.math.ca/cjm/kw/Banach%20spaces","timestamp":"2014-04-17T18:40:45Z","content_type":null,"content_length":"31905","record_id":"<urn:uuid:7432fe82-6a5c-48e5-ac0d-a248b73fa01c>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00184-ip-10-147-4-33.ec2.internal.warc.gz"}
When does it work? I have been sharing my experience teaching a group of fifth-graders how to problem-solve around fraction computation and using it as an opportunity to demonstrate the Teaching-Learning Cycle in action. Previously, I wrote about how I planned for a problem-solving lesson and then described my instruction during the following lesson. In this post, I want to discuss how I used assessment and evaluation to monitor the learners' progress and inform future planning and instruction. We were using the clock model as a context for adding fractions and I wanted to gather data about whether or not the learners could determine when this model was an effective approach. I used an existing set of textbook items and asked the fifth-graders to: "Look at the expressions shown below - circle the ones that you think you could use the clock model to solve and place an 'X' through those you could not." from Scott Foresman – Addison Wesley Math [5^th Grade] Once the kids had completed this task, I asked them to solve one of the problems they had circled. As they worked, I gathered data on whether or not they were able to determine when the clock model could work. In analyzing my observations, I noticed what the fifth-graders could do and what they were trying to do. First, they all recognized that fractions involving ninths and sevenths were poor candidates for the clock model. Those who chose to solve #2, #9, and #10 were also fluent in applying prior experiences with the time context. Some learners thought eighths could work (circling #4) and others struggled to see that fifths could work ('X'ing out #3, #5, and #8). These last two areas of approximation gave me some ideas about what to focus on next. The last assessment I gave was intended to gather data about how the fifth-graders might apply the idea of context to a problem that could not be easily solved using the clock model. As a ticket out the door, I asked, "Now what could you do to solve a problem you put an 'X' through?" Based on my evaluation of these assessments, I was prepared to plan for future lessons. What would you do next? No comments:
{"url":"http://deltascape.blogspot.com/2011/06/when-does-it-work.html","timestamp":"2014-04-17T16:08:58Z","content_type":null,"content_length":"119361","record_id":"<urn:uuid:4ea05c85-96ea-4b37-8a5f-658babeebb30>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00512-ip-10-147-4-33.ec2.internal.warc.gz"}